Dataset columns: id (string, 3-9 chars), source (1 class), version (1 class), text (string, 1.54k-298k chars), added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25), created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00), metadata (dict).
id: 249255322 | source: pes2o/s2orc | version: v3-fos-license
MULTI-TEMPORAL DATA AUGMENTATION FOR HIGH FREQUENCY SATELLITE IMAGERY: A CASE STUDY IN SENTINEL-1 AND SENTINEL-2 BUILDING AND ROAD SEGMENTATION: Semantic segmentation of remote sensing images has many practical applications such as urban planning or disaster assessment. Deep learning-based approaches have shown their usefulness in automatically segmenting large remote sensing images, helping to automate these tasks. However, deep learning models require large amounts of labeled data to generalize well to unseen scenarios. The generation of global-scale remote sensing datasets with high intraclass variability presents a major challenge. For this reason, data augmentation techniques have been widely applied to artificially increase the size of the datasets. Among them, photometric data augmentation techniques such as random brightness, contrast, saturation, and hue have traditionally been applied aiming at improving the generalization against color spectrum variations, but they can have a negative effect on the model due to their synthetic nature. To solve this issue, sensors with high revisit times such as Sentinel-1 and Sentinel-2 can be exploited to realistically augment the dataset. Accordingly, this paper sets out a novel realistic multi-temporal color data augmentation technique. The proposed methodology has been evaluated in the building and road semantic segmentation tasks, considering a dataset composed of 38 Spanish cities. As a result, the experimental study shows the usefulness of the proposed multi-temporal data augmentation technique, which can be further improved with traditional photometric transformations. INTRODUCTION In the last decade, the remote sensing community has grown rapidly, mainly due to the great number of potential applications that have emerged. In fact, insights derived from the photo-interpretation of earth observation products can be used for urban planning (Guo et al., 2021) or disaster assessment (Ghaffarian and Emtehani, 2021), among other use cases. Traditionally, the photo-interpretation of large remote sensing images has been manually performed by experts, demanding a great deal of human effort and thus entailing high costs. However, recent advances in deep learning, especially with Convolutional Neural Networks (CNNs), have made it possible to process vast amounts of remote sensing data, reducing costs and saving time (Zhu et al., 2017). Deep learning models are data-hungry since they require large amounts of labeled data to generalize to unseen scenarios. Furthermore, this problem may be even more evident in remote sensing imagery than in problems involving natural images, since earth observation images are subject to color spectrum variations caused by the sun's position, adverse atmospheric conditions, etc. (Guo et al., 2020). Therefore, it is very costly and time-consuming to develop deep learning models that generalize well even to cases where spatial and temporal shifts occur. In deep learning, Data Augmentation (DA) is commonly used to face the lack of labeled data by artificially introducing small changes to the inputs without altering the outputs, giving the models more variety without increasing the size of the dataset. DA techniques may thus be seen as a powerful tool to face these challenges. Although photometric DA techniques have proved beneficial in a wide range of remote sensing tasks, the resulting images may contain synthetic artifacts such as saturated pixels or null values, losing valuable spectral information.
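To illustrate the kind of synthetic artifacts referred to above, the following minimal sketch (not from the paper; the band count, value range, and jitter strengths are illustrative assumptions) applies random brightness/contrast jitter to a reflectance tile; the final clipping back into the valid range is precisely what produces saturated pixels and lost spectral information:

import numpy as np

def photometric_jitter(tile, rng, brightness=0.2, contrast=0.2):
    """Randomly perturb brightness/contrast of a (bands, H, W) reflectance tile."""
    b = rng.uniform(1.0 - brightness, 1.0 + brightness)  # brightness factor
    c = rng.uniform(1.0 - contrast, 1.0 + contrast)      # contrast factor
    mean = tile.mean(axis=(1, 2), keepdims=True)          # per-band mean
    jittered = (tile - mean) * c + mean * b                # jitter around the mean
    return np.clip(jittered, 0.0, 1.0)                     # clipping: saturated pixels appear here

# Example on a random 5-band (R, G, B, NIR, NDVI) 128 x 128 tile
rng = np.random.default_rng(0)
tile = rng.uniform(0.0, 1.0, size=(5, 128, 128)).astype(np.float32)
augmented = photometric_jitter(tile, rng)
print("saturated pixels:", int((augmented == 1.0).sum()))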
Furthermore, the parameters of photometric DA techniques are difficult to tune, since they depend on each specific problem. Moreover, there are events such as shadows casted by near buildings, seasonal rhythms, crop cycles, etc., that can not be simulated through photometric DA techniques. To address these problems, this paper proposes a simple methodology that takes advantage of the high revisit times provided by sensors such as Sentinel-1 (S1) and Sentinel-2 (S2) to perform a realistic multi-temporal color data augmentation (multitemporal DA). The idea is to consider multiple observations for the same area of interest to have a variety of color spectrums coming from real images. In this regard, for a given area of interest, multiple observations are considered, varying the color spectrum without including synthetic artifacts. To assess the usefulness of the proposed approach both building and road semantic segmentation problems have been considered following the experimental framework in (Ayala et al., 2021). It must be noted that buildings and roads have different degrees of variations in their shapes and colors, which makes them ideal for this study. For evaluating the proposed approach, a dataset composed of 38 Spanish cities has been considered. Moreover, for each city, four observations have been chosen corresponding to the four seasons of a year. The experiments, which have been evaluated using the Intersection over Union (IoU) and Fscore metrics, showed that the proposed methodology improves the results from traditional DA techniques in the two scenarios. Furthermore, when the proposed multi-temporal DA technique is combined with the traditional photometric DA transformations, the results are further enhanced. RELATED WORKS DA techniques such as geometric and photometric prevent overfitting artificially increasing the variety of the dataset. Generally, geometric transformations lead to larger improvements in the model's performance than photometric transformations (Taylor and Nitschke, 2018). Furthermore, the former is easier to implement and computationally more efficient compared to the latter. In remote sensing, distortions of rigid-shape objects are commonly avoided. Hence, geometric transformations such as the dihedral DA technique which combines 90-degree rotations along with vertical and horizontal flips are used, which do not alter the image content (Iglovikov et al., 2017). When photometric transformations are applied it is easy to lose spectral information, resulting in unrealistic images. In spite of this, in remote sensing, the models need to learn how to deal with color spectrum variations such as seasonal rhythms, shadows, etc., which are typical in every use case related to earth observation. There are more complex DA techniques that use domainspecific synthesis to expand the dataset. These techniques generate richer data compared to the generic geometric and photometric augmentations (Peng et al., 2014). For example, Yan et al., proposed a novel data augmentation method that simulates remote sensing images combining background images and 3D ship models for tackling the insufficient number of training samples in the ship detection task (Yan et al., 2019a). Thereafter, they extrapolate the methodology to aircraft detection tasks, employing 3D aircraft models to form simulated images (Yan et al., 2019b). Illarionova et al. 
proposed an object-based augmentation technique that exploits segmentation masks to generate new training samples by copy-pasting objects onto label-free backgrounds (Illarionova et al., 2021), outperforming standard geometric and photometric DA techniques. Generative Adversarial Networks (GANs) have also been used to generate plausible synthetic data along with the corresponding segmentation masks (Howe et al., 2019). However, the development of complex DA approaches requires domain-specific knowledge, making them difficult to apply to different problems. In this paper, we focus on exploiting multi-temporal data for data augmentation. Multi-temporal data is useful for a wide range of applications. Multiple observations of the same area can be used to learn transferable representations leveraging temporal information (Mañas et al., 2021). Furthermore, multi-scale spatio-temporal features can be extracted by making use of complex deep learning architectures that combine CNNs with Recurrent Neural Networks (RNNs) (Garnot and Landrieu, 2021). However, to the best of our knowledge, no previous work takes advantage of the high revisit times provided by sensors such as S1 and S2 to realistically augment the dataset, making models robust against color spectrum variations. PROPOSAL Photometric DA techniques such as random transformations of the brightness, contrast, saturation, and hue may produce undesired synthetic artifacts, having a negative effect on the model performance. Moreover, the application of these techniques may result in unrealistic images, since the spectral information is arbitrarily altered. Furthermore, setting the proper hyper-parameters for these DAs is not straightforward, since they need to be adapted to each problem. For this reason, this paper proposes a novel easy-to-implement color DA technique, taking advantage of the high revisit times provided by S1 and S2 sensors. Rather than applying standard photometric DA techniques that alter the original image, multiple observations can be considered for the same area, preserving their original color information and hence avoiding the creation of undesired synthetic artifacts. Our hypothesis is that this approach can be more effective than traditional photometric DA, since there are events such as seasonal rhythms, sun position, or shadows cast by buildings that cannot be easily simulated. Figures 1 and 2 help in understanding the differences between photometric DA and the usage of multiple observations. Figure 1 shows the differences between three observations (O1, O2, and O3) and their corresponding augmented versions obtained by applying brightness, saturation, and contrast photometric DA transformations to the RGB channels. This figure shows that events such as harvesting cannot be easily simulated with standard photometric DA transformations (e.g., O1 cannot be obtained from O2 or O3). Moreover, Figure 2 shows the differences between three (O1, O2, and O3) S1 observations. As can be seen in the figure, the nature of radar data makes the application of photometric DA transformations complex and meaningless. In (Ayala et al., 2021) multiple observations were used to augment the dataset; however, the experimental setup did not assess the contribution of using multiple observations. Therefore, this paper aims to deeply study the effect that the proposed multi-temporal DA technique has on the robustness of semantic segmentation remote sensing deep learning models. (Figure 2. Visual comparison of multiple observations, O1-O3, for S1's VV and VH backscatter Red-Green composition.)
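Before moving on to the experimental setup, the core idea can be sketched as it might look inside a training pipeline (a hypothetical PyTorch-style dataset; the paper does not provide code): instead of color-jittering a fixed image, a random real observation of the same tile is drawn every time the sample is accessed, so the label stays fixed while the spectral content varies realistically.

import random
from torch.utils.data import Dataset

class MultiTemporalTileDataset(Dataset):
    """Each item holds K co-registered observations of a tile and one label mask.

    `tiles` is a list of dicts: {"observations": [obs_0, ..., obs_K-1], "mask": mask},
    where each obs_k is a (bands, H, W) array and mask is the (H, W) ground truth.
    """

    def __init__(self, tiles, train=True):
        self.tiles = tiles
        self.train = train

    def __len__(self):
        return len(self.tiles)

    def __getitem__(self, idx):
        tile = self.tiles[idx]
        if self.train:
            image = random.choice(tile["observations"])  # multi-temporal DA: random real observation
        else:
            image = tile["observations"][-1]             # testing: fixed held-out observation
        return image, tile["mask"]                        # the label is identical for every observation

Standard photometric jitter can still be applied on top of the sampled observation, which is how the combined configuration evaluated below can be read.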
For this purpose, four trimesters have been considered following the dataset described in (Ayala et al., 2021), leaving the last one for assessing the performance of the models. EXPERIMENTAL STUDY In this section, the experimental study carried out to assess the usefulness of the proposed multi-temporal DA technique is presented. First, the dataset generation pipeline is described in Section 4.1. Then, details regarding the experimental framework are given in Section 4.2. Thereafter, the experiments carried out are outlined in Section 4.3. Finally, Section 4.4 summarizes the results and the conclusions extracted from the experiments. Dataset In this work, we have made use of the dataset described in (Ayala et al., 2021). The dataset has been generated by combining high-resolution S1 and S2 satellite imagery along with OpenStreetMap (OSM) building and road annotations. Moreover, given the high revisit times provided by S1 and S2 sensors, multiple observations have been considered. Specifically, we have considered four observations per city, corresponding to the four seasons of a year. Figure 3 depicts the overall pipeline for a generic region of interest. First, S2 products are downloaded from the Sentinels Scientific Data Hub (SciHub). The 10 m GSD bands from S2 are selected (Red, Green, Blue, and Near Infrared). Furthermore, the Normalized Difference Vegetation Index (NDVI) is also calculated and combined with the other bands. In the case of S1, we used the Level-1 GRD product in the Interferometric Wide (IW) swath mode. This product has a swath width of 250 kilometers, a resolution of 20 × 22 m (depending on the beam id), and can be provided in four polarization modes (VV, VH, HH, HV). However, because dual horizontal polarization (HH, HV) is limited to polar regions, only dual vertical polarization (VV, VH) has been considered. The SciHub has been used to download S1 raw products, queried within a time interval of ±7 days around the mean ingestion time of the S2 products considered in the preceding stage. After that, raw S1 products were pre-processed using the Sentinel Application Platform (SNAP). Firstly, in the radiometric calibration stage, backscatter intensities were estimated using the GRD metadata. Then, in the terrain correction step, the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM) has been used to address the side-looking effects. Finally, backscatter intensities were log-scaled and converted to decibels. On the other hand, OSM has proved useful for a great number of remote sensing tasks (Kaiser et al., 2017). However, OSM should be reclassified beforehand due to the large number of layers it contains. In this regard, different types of roads have been aggregated to construct the road label, whereas the building polygon outlines constitute the building label. The selected OSM codes are presented in Table 1. It must be noted that, since OSM only contains roads' centerlines, line-strings were buffered to match S2's spatial resolution (10 m GSD). Moreover, due to the limited spatial resolution of S1 and S2 sensors, buildings with an area smaller than 50 m² have been filtered out. Finally, building and road vector features have been rasterized to 2.5 m GSD. It must be noted that, as suggested in (Ayala et al., 2021), sensor- and label-specific validation masks have been taken into account to handle sensing noise and labeling errors, respectively. Accordingly, validation masks have been used at both training and testing times to filter out low-quality samples.
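As a rough illustration of the label-generation step just described (buffering OSM road centerlines and burning them into a 2.5 m grid), a sketch assuming geopandas and rasterio; the buffer width, CRS, and grid geometry are illustrative, not taken from the paper:

import geopandas as gpd
import numpy as np
from rasterio import features
from rasterio.transform import from_origin

def rasterize_road_label(roads_path, out_shape=(4000, 4000), pixel_size=2.5,
                         buffer_m=5.0, west=0.0, north=10000.0):
    """Buffer OSM road centerlines and burn them into a binary raster.

    buffer_m=5.0 widens centerlines to roughly 10 m (matching S2's GSD),
    while pixel_size=2.5 matches the 2.5 m GSD label grid used in the paper.
    """
    roads = gpd.read_file(roads_path)            # reclassified OSM road layer
    roads = roads.to_crs(epsg=25830)             # a metric CRS for Spain (illustrative)
    buffered = roads.geometry.buffer(buffer_m)   # line-strings -> polygons
    transform = from_origin(west, north, pixel_size, pixel_size)
    return features.rasterize(
        ((geom, 1) for geom in buffered if not geom.is_empty),
        out_shape=out_shape, transform=transform, fill=0, dtype=np.uint8)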
The final dataset comprises 38 Spanish cities, which have been separated into two sub-sets following machine learning standards. That is, in order to prevent data leakage, each city is assigned to either the training set or the test set, as shown in Table 2. It must be noted that this dataset is the same as the one used in (Ayala et al., 2021), discarding cities with missing observations. Experimental framework The experimental framework also follows the specifications described in (Ayala et al., 2021). Regarding the deep learning network itself, a U-Net architecture (Ronneberger et al., 2015) has been considered. As can be seen in Figure 4, the vanilla U-Net architecture has been modified by including a bicubic upscaling layer prior to the feature extractor and by replacing the base encoder with a ResNet-34 (He et al., 2016). As a result, semantic segmentation masks that quadruple the input spatial resolution are generated, making it possible to detect elements with sub-pixel width. Considering the large number of experiments we plan to run in order to contrast the usage of photometric DA with the proposed multi-temporal DA technique, we have opted for reducing the number of epochs from 1,000 to 200 in comparison with (Ayala et al., 2021). That is, all the models have been trained for 200 epochs consisting of 1,000 gradient updates. It must be noted that this modification does not alter the conclusions derived, since there is little margin for improvement after this epoch, as verified in our preliminary experiments. The batch size has been set to 32 samples of 128 × 128 pixels. Furthermore, samples have been randomly taken, considering only those with at least 10% of pixels corresponding to the positive class (either road or building). Finally, since no validation set has been used, the model from the last epoch is taken. Regarding the loss function, a combination of the Binary Cross-entropy and the Dice Loss has been chosen to better control the trade-off between false positives and false negatives: L_DICE(y, ŷ) = 1 - (2yŷ + 1)/(y + ŷ + 1) (1), where ŷ denotes the predicted segmentation mask, y the corresponding ground-truth mask, and the α parameter weights the contribution of the L_DICE loss (0.5 in these experiments). The loss function has been minimized using the Adam optimizer with a fixed learning rate of 1e-3. The Intersection over Union (IoU) and F-score metrics have been chosen to evaluate the performance of the models: IoU = TP/(TP + FP + FN) and F-score = 2TP/(2TP + FP + FN), where TP, FP, and FN denote true positive, false positive, and false negative pixels, respectively. Additionally, both metrics are also calculated following a precision relaxation strategy (Mnih and Hinton, 2010; Zhang et al., 2018) aiming at reducing the impact of the low spatial resolution on the metrics. That is, doubtful pixels located on the edges of the roads and buildings are disregarded. The experiments have been run on a computing node with 2 × Intel Xeon E5-2609 v4 @ 1.70 GHz processors with 128 GB of RAM and 4 × NVIDIA RTX2080Ti GPUs (11 GB of RAM). Experiments Several experiments have been run to compare the proposed multi-temporal DA technique with the traditional photometric DA transformations. To make the evaluation fair, out of the 4 observations available in this dataset, the last one has been left out for testing purposes, whereas the remaining ones have been used to train the models. First, the effect that including more observations has on performance has been studied.
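For concreteness, a minimal PyTorch-style sketch of the loss described above; the paper only gives the Dice term, so the exact way α combines the two terms is an assumption here:

import torch
import torch.nn.functional as F

def bce_dice_loss(logits, target, alpha=0.5, smooth=1.0):
    """Binary cross-entropy plus a soft Dice term, weighted by alpha.

    logits: raw outputs of shape (N, 1, H, W); target: binary mask of the same shape.
    """
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * intersection + smooth) / (union + smooth)  # soft Dice loss, eq. (1)
    return (1.0 - alpha) * bce + alpha * dice.mean()

# usage sketch
logits = torch.randn(4, 1, 128, 128, requires_grad=True)
target = (torch.rand(4, 1, 128, 128) > 0.9).float()
bce_dice_loss(logits, target, alpha=0.5).backward()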
In this regard, models have been trained considering 1, 2, and 3 observations, and tested using the 4th one. It must be noted that for 1 and 2 observations all their possible combinations have been run and averaged whereas, in the case of using 3 observations, the results of three executions have been averaged. Thereafter, the proposed multi-temporal DA technique has been compared with the traditional photometric DA transformations, not only to determine which technique performs better but also to assess if they further improve the generalization capability when used together. Despite the aforementioned color DA techniques, geometric DA techniques that have been widely used as a de facto augmentation in remote sensing are also applied. In this regard, the dihedral transformation, which consists of combinations of horizontal and vertical flips along with 90-degree rotations have been considered as a base for all experiments. It must be noted that the same experiments have been run for building footprint detection and road network extraction tasks. Considering these two tasks, the usefulness of the proposed multi-temporal DA can be better assessed. Results and discussion Tables 3 and 4 summarize the quantitative results in terms of IoU and F-score obtained for the building footprint detection and road network extraction tasks, respectively. Additionally, a relaxed version of both metrics (Rlx. IoU and Rlx. F-score, respectively) is also calculated. Finally, the best results achieved in each task are presented in boldface. Overall, increasing the number of observations with multi-temporal DA improves the generalization capability of the models. In fact, it is more beneficial to increase the number of observations than to apply photometric DA. Nevertheless, applying both DAs together provides the best performance. In the following, we analyze these findings in detail. When working with mono-temporal imagery (a single observation) one can benefit from standard photometric DA techniques making models more robust to color spectrum variations (0.5051 ± 0.0107 vs. 0.4845 ± 0.0351 for buildings, and 0.5049 ± 0.0035 vs. 0.5008 ± 0.0072 for roads, in terms of IoU). Nevertheless, if two observations are available, one can apply the proposed multi-temporal DA technique outperforming the standard photometric DA transformations applied over a single observation (0.5091 ± 0.0397 vs. 0.5051 ± 0.0107 for buildings, and 0.5125 ± 0.0044 vs. 0.5049 ± 0.0035 for roads, in terms of IoU). Furthermore, there is a great increase in performance when considering 3 observations instead of only 2 (0.5635 ± 0.0101 vs. 0.5091 ± 0.0397 for buildings, and 0.5286 ± 0.0067 vs. 0.5125 ± 0.0044 for roads, in terms of IoU). In fact, the more the number of observations is, the better the generalization capability of the models becomes. Finally, for all number of observations tested (1, 2, and 3), the standard photometric transformations help making models more robust. Furthermore, when photometric transformations are combined with the proposed multi-temporal DA technique with 3 observations the best results are achieved (0.5741 ± 0.0166 and, 0.5295 ± 0.0025, in terms of IoU for the building and road extraction tasks, respectively). It must be noted that both color DA techniques, in general, have a greater impact on building metrics than road ones, which is due to the higher variance of buildings shapes and colors compared to roads. 
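Referring back to the dihedral transformation used as the geometric baseline in all experiments, a minimal numpy sketch of the eight flip/rotation combinations, applied identically to image and mask (an illustration, not the authors' code):

import numpy as np

def dihedral_transform(image, mask, k):
    """Apply one of the 8 dihedral-group transforms (k in 0..7).

    image is (bands, H, W) and mask is (H, W); only 90-degree rotations and flips
    are used, so no pixel value is ever altered.
    """
    assert 0 <= k < 8
    if k >= 4:                                   # horizontal flip for k = 4..7
        image, mask = image[:, :, ::-1], mask[:, ::-1]
    rot = k % 4                                  # number of 90-degree rotations
    image = np.rot90(image, rot, axes=(1, 2))
    mask = np.rot90(mask, rot, axes=(0, 1))
    return np.ascontiguousarray(image), np.ascontiguousarray(mask)

# random dihedral augmentation of one sample
rng = np.random.default_rng(42)
img = rng.random((5, 128, 128))
msk = (rng.random((128, 128)) > 0.9).astype(np.uint8)
aug_img, aug_msk = dihedral_transform(img, msk, k=int(rng.integers(8)))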
To complement the quantitative analysis, Figures 5 and 6 visually compare the performance of the proposed approaches in terms of visual IoU. That is, True Positives (TP) are presented in green, False Positives (FP) in blue, False Negatives (FN) in red, and True Negatives (TN) in white. According to these figures, one draws the same conclusions as when looking at Tables 3 and 4, respectively, with some extra information. Augmenting the dataset by including multiple observations makes the model more robust against color spectrum variations. In this regard, the proposed multi-temporal DA technique is able to reduce the number of FP and FN. However, there are still some FP caused by labeling errors inherent to OSM. CONCLUSIONS AND FUTURE WORK In this paper, a novel color DA technique has been proposed, taking advantage of the high revisit times provided by sensors such as S1 and S2. Accordingly, multiple observations for the same area of interest are considered to obtain a variety of color spectrums coming from real images rather than augmenting the dataset synthetically using photometric DA transformations. The usefulness of the proposed method has been shown in two semantic segmentation tasks with different degrees of variation in their targets' shapes and colors, outperforming standard photometric DA techniques. The multi-temporal DA technique requires no hyper-parameter tuning, which makes it easier to apply than traditional photometric DA transformations. Additionally, it can be directly applied to any sensor, including radar imagery such as S1, which is a limitation of photometric DA techniques. Finally, when the multi-temporal DA technique is combined with standard photometric DA techniques, the best results are achieved. Nonetheless, there are still several research lines on this subject that should be pursued in the future. Regarding the dataset, more observations should be considered to further assess the effect that increasing the number of observations has on the generalization capability of the model. Moreover, it would be interesting to extrapolate the analysis to sensors other than S1 and S2 (e.g., hyperspectral, thermal, or microwave). Finally, other tasks such as land use and land cover semantic segmentation or classification of remote sensing images may be considered to gain valuable insights regarding not only the usefulness but also the limitations of the proposed approach. ACKNOWLEDGMENTS Christian Ayala was partially supported by the Government of Navarra under the industrial Ph.D. program 2020, reference 0011-1408-2020-000008. Table 4. Results obtained in the test set for the road extraction task.
added: 2022-06-02T15:09:26.369Z | created: 2022-05-30T00:00:00.000
{ "year": 2022, "sha1": "798b2c6e82ed745f28c216fef6d104c12e89300d", "oa_license": "CCBY", "oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B3-2022/25/2022/isprs-archives-XLIII-B3-2022-25-2022.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "79114dd7797fc0d3b497bb1d9c59aa71a749b604", "s2fieldsofstudy": [ "Computer Science", "Environmental Science" ], "extfieldsofstudy": [] }
id: 239509956 | source: pes2o/s2orc | version: v3-fos-license
Primary Budd-Chiari Syndrome With Right Atrial Extension: A Rare Presentation of Intrahepatic Cholangiocarcinoma Budd-Chiari syndrome (BCS) is defined as hepatic venous outflow obstruction and can be classified as primary when the obstruction is due to a predominantly venous process, caused by multiple risk factors that lead to a prothrombotic state. We report a case of a primary BCS, with an exuberant thrombus extending from the supra-hepatic vein, via the inferior vena cava, to the right atrium, a rare form of presentation of intrahepatic cholangiocarcinoma (ICC). Introduction Budd-Chiari syndrome (BCS) is a rare but fatal disease caused by an obstruction in the hepatic venous outflow tract, and it can be classified as primary, when the obstruction is due to a predominantly venous process, or secondary, when the compression or invasion of the veins is caused by an extrinsic process [1]. Most patients with BCS have an underlying condition that should be promptly investigated and, if possible, treated. Multiple risk factors have been identified and are often combined in the same patient [2]. The presentation and clinical manifestations are extremely varied, so clinicians must have a high level of suspicion, and consider BCS in any patient with acute or chronic liver disease [3]. Case Presentation We present a case of a 71-year-old male patient, admitted due to abdominal distension, pain in the right hypochondrium, and fatigue, developing over two months. He denied nausea, vomiting, anorexia, weight loss, jaundice, or other symptoms. On admission to the ED, he was conscious and oriented, with blood pressure: 120/73 mmHg, heart rate: 65 beats per minute, afebrile and eupneic with oxygen saturation: 98% in room air. Cardiopulmonary auscultation revealed rhythmic sounds, with a systolic right-sided heart murmur, and crackling rattles in both lungs. The abdomen was distended, painful on palpation, and with ascites. Blood tests revealed: aspartate aminotransferase (AST): 232 U/L, alanine aminotransferase (ALT): 121 U/L, total bilirubin: 1.06mg/dL, alkaline phosphatase 131 U/L and gamma-glutamyl transpeptidase (GGT): 182 U/L. Alpha-fetoprotein was 20 ng/mL. An abdominal Doppler ultrasound revealed moderate ascites and thrombosis of the portal vein and suprahepatic veins, along with a nodular hepatic lesion. Abdominal CT scan ( Figure 1) and MRI ( Figure 2) confirmed thrombosis of the portal vein, suprahepatic vein, and inferior vena cava extending to the right atrium, associated with a hepatic tumor. An ECG was performed to better characterize the intra-auricular thrombus ( Figure 3). The diagnosis of intrahepatic cholangiocarcinoma (ICC) was confirmed by liver biopsy. Medical treatment was started immediately with anticoagulation and chemotherapy (gemcitabine and oxaliplatin), unfortunately, the patient had a poor therapy response, dying a few weeks later. Discussion The clinical presentation of BCS may vary from a completely asymptomatic condition to fulminant liver failure. It depends on the extent of hepatic vein occlusion and whether venous collateral circulation has developed [1,4]. Patients with fulminant courses develop acute liver failure, jaundice, and hepatic encephalopathy. Subacute form of BCS is the most common, and usually progress in months with abdominal pain, hepatomegaly, and ascites. The chronic form is manifested with complications of cirrhosis [4]. ICC is a primary tumor, originating from the bile duct lining epithelium, frequently with an indolent course. 
Patients often have a history of dull right upper quadrant pain and weight loss. Some patients are asymptomatic, with the lesions being detected incidentally as part of the workup of abnormal liver blood tests [5]. This case configured a subacute form of BCS, as the clinical symptoms developed progressively over two months. As for classification, we are facing a primary BCS, since the obstructive event of the hepatic veins was a venous thrombotic process, triggered by ICC. There are several cases in the literature of ICC with secondary BCS, related to tumor invasion of the hepatic veins, however primary BCS is rarely described [6]. BCS requires prompt diagnosis and treatment. As the presentation is highly variable, clinicians should consider it if the patient presents with acute liver failure or chronic liver disease [7]. Diagnosis can be made non-invasively using Doppler USG (sensitivity and specificity of 85%) [4], which is the technique of choice for initial investigation, however, contrast-enhanced CT and MRI can better demonstrate the necrotic areas of the liver [1,7]. In this case, workup with abdominal Doppler USG showed a nodular hepatic lesion along with thrombosis of the portal vein and suprahepatic veins. ICC usually presents as a malignant-appearing mass lesion in a noncirrhotic liver, and the main differential diagnosis should always include a primary hepatocellular carcinoma (HCC) or metastatic adenocarcinoma [8]. Also, in ICC, the liver blood tests usually show elevated levels of alkaline phosphatase and GGT, whereas serum bilirubin levels and alpha-fetoprotein are normal [9]. Our patient presented with moderate elevation of transaminases, alkaline phosphatase, and GGT, which can be explained by both BCS and ICC. Normal levels of alpha-fetoprotein supported the diagnosis of ICC over HCC. The diagnosis was confirmed by liver biopsy. Regarding BCS, it was possible with MRI to confirm the diagnosis and detect the extension of the thrombus from the vena cava to the right atrium, a rare form of presentation. Treatment of BCS is based on a stepwise management strategy that includes anticoagulation, correcting underlying disorders that predispose the development of a prothrombotic state, and complications of portal hypertension [10]. Patients without progressive liver necrosis (few symptoms, relatively normal liver function tests) and with ascites, require medical therapy alone. Patients with coagulopathy, encephalopathy, or hepatorenal syndrome (signs of poor prognosis), require immediate relief of the hepatic venous outflow tract obstruction, through thrombolytic therapy or angioplasty. In extreme cases, where prior therapy has failed, liver transplantation should be considered [3,4,7]. The prognosis of patients with BCS has improved in the past decades, due to a combination of faster diagnosis, new treatment modalities, and the routine use of anticoagulation [4,11]. Conclusions BCS has an extremely varied clinical course; therefore, clinicians must have a high level of suspicion and consider this diagnosis in any patient with acute or chronic liver disease. This case describes a rare form of presentation of an ICC through a primary BCS, with an exuberant thrombus from the suprahepatic veins to the right atrium, reminding us of the importance of always considering the presence of an underlying disorder in BCS, and consequently perform a full investigation to identify and treat the cause. 
Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
added: 2021-10-23T15:22:43.142Z | created: 2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "f10be7ab3782c98fbc49f3c6a0c5fb323d070f5b", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/64570-primary-budd-chiari-syndrome-with-right-atrial-extension-a-rare-presentation-of-intrahepatic-cholangiocarcinoma.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2109ac8ff65845bc1378afe98e38e38309066928", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
id: 254440991 | source: pes2o/s2orc | version: v3-fos-license
Peculiarities of the Relationships between Anxiety, Psychological Skills, and Injuries in Cuban Athletes : Background: In the present research, the general and specific relationships of anxiety and psychological skills with the history of injuries in a heterogeneous population of Cuban high-performance athletes are analyzed. Methods: Through a correlational and cross-sectional study, the Villa Clara basketball, baseball, soccer, and softball preselection’s were studied between 2019 and 2022. To obtain data on injuries, a specific questionnaire was applied. The state of the psychological variables was determined by means of the Competition State Anxiety Inventory and the Sports Execution Psychological Inventory. Data were analyzed using empirical frequency distribution, descriptive statistics, the Kolmogorov-Smirnov test, and Kendall's Tau_b nonparametric correlation coefficient. Results and Discussion: INTRODUCTION The study of the relationship between psychological variables and injuries has shown the importance of psychological preparation for the mental and physical health of the athlete, offering a holistic understanding of the psychological intervention and care for injuries in sport from a biopsychosocial approach. Several investigations have obtained evidence of the dependency of injuries on negative emotional states such as anxiety and low coping resources in the face of the stresses of sports activity [1 -6]. Although the Stress and Injury Model [7,8] guides the studies by offering a general explanation of the relationship between psychological variables and injuries, this field of research requires a greater degree of systematization of the results to determine which are the most relevant variables that they constitute risk factors in a general and specific way according to the type of sport and the competitive level of the athletes. It is essential to determine which are the most consistent relationships over time and what are the possible factors that condition these relationships. All this has not been possible due to the theoretical-methodological dispersion that has characterized the study of the subject, being expressed by several authors [9 -11]. However, psychological injury prevention programs have been developed based on general results of studies in large populations of athletes of different modalities, experience, and competitive levels [12,13]. Although these interventions have shown moderate effects on injury prevention, they have not been generalized, which means thinking about the development of compressive programs based on specific data in particular sports to maximize the results of psychological interventions given the principle of individualization of the sports training. Before achieving the development and implementation of these specific programs, it is necessary to obtain criteria that support such a proposal, which is feasible and laudable by obtaining empirical findings that show the general trends and particularities of the relationships between psychological variables and injuries in different sports. For this, the common and differentiating characteristics that suppose psychological demands of the specific sports activity must be considered [14]. In this sense, the study of anxiety and the psychological variables of sports performance in their relationships with injuries has gone from having few studies [1,3,15,16] to having a growing body of evidence in recent years, especially in Cuban athletes of high performance [6]. 
Although the results are still insufficient to establish trends and generalizations, they do allow a critical approximation of the characteristics of the relationships between these variables, which is why the present investigation was designed with the purpose of characterizing the relationships between anxiety and the psychological skills with the history of injuries in high-performance athletes of team sports. METHODS A correlational and cross-sectional study was carried out in the collective sports of the province of Villa Clara that have national championships in Cuba. The study was carried out between November 2019 and January 2022, always coinciding with the beginning of the preparation stage for the competitions of each year (2019,2020,2022). Data could not be obtained in 2021 because the COVID-19 pandemic did not allow championships to be held in that year and the teams were not integrated. Instrumentation To obtain data and information, three instruments were applied in printed format. The start of three morning training sessions was taken to apply each instrument separately in each sport. In coordination with the head coaches of each team, optimal conditions were guaranteed for the proper application of the ad hoc Questionnaire on Sports Aspects and Injuries, the Competition State Anxiety Inventory to assess anxiety in competition and the Psychological Inventory of Sport Execution for the mental skills. Ad hoc Questionnaire on Sports Aspects and Injuries To record the injuries and the sociodemographic and sports characteristics, a specific self-report questionnaire used in other investigations was used, which allows retrospective information to be obtained on whether the athlete has been previously injured, the number of injuries he has suffered, the severity and the context of occurrence [17]. Competition State Anxiety Inventory To assess competitive state anxiety, the Competitive Sport Anxiety Inventory was used in its Spanish version [18,19]. The instrument has 27 items distributed in three subscales that measure cognitive, somatic and self-confidence anxiety with four Likert-type response options (1= Not at all; 2= A little; 3= Moderately; 4= A lot). Only the total scores of the cognitive and somatic anxiety scales were considered to classify anxiety as high (70-59 points), medium (58-50) and low (less than 50). It was obtained with a Cronbach's Alpha of .85. Psychological Inventory of Sports Execution For the evaluation of psychological skills related to sports performance, the Psychological Inventory of Sports Execution was used. This instrument constitutes the adaptation and assessment carried out by Hernández [20] of the Psychological Performance Inventory [21]. It is made up of 42 items on seven Likert-type response scales (from 1 = Almost Never to 5 = Almost Always). The variables were classified by obtaining quartiles. Self-confidence and motivational level high (30-28 points), medium (27-25) and low (less than 25). High attention control (26-24 points), medium and low (less than 21). High Negative Coping Control (29-25 points), medium (24-20) and low (less than 20). Positive Coping Control and Visual Imaginative Control high (29-26 points), medium and low (less than 23). High Attitude Control (30-28 points), medium (27-25) and low (less than 25). 
A Cronbach's Alpha coefficient of .75 was obtained for the Self-confidence factor, .71 for Negative Coping Control, .74 for Attention Control, .65 for Visual Imaginative Control, .68 for Motivational Level, .69 Positive Coping Control and .75 for Attitudinal Control. Data Analysis Empirical distribution of frequencies and descriptive statistics of central tendency and dispersion such as the mean, standard deviation, asymmetry, and kurtosis were used. The Kolmogorov-Smirnov test was applied to determine the distribution of the data and the non-parametric correlation coefficient Kendall`s Tau_b to determine the relationship between the psychological variables and the variables that make up the injury history of the athletes. It is understood that the greatest strength of the correlation is expressed in the values closest to -1 and 1, while the negative and positive signs indicate the direction of the relationship between variables. A level of statistical significance where p ≤ 0.05 was considered. The IBM SPSS Software Package (Version 25.0 for Windows) was used. Ethical Considerations Informed consent was obtained from the participating athletes. The research was presented, approved, and endorsed by the Scientific Council and the Medical Ethics Committee of the investigation of the Provincial Center of Sports Medicine of Villa Clara. The investigative procedure and the treatment of the data strictly follow the ethical precepts contained in the Declaration of Helsinki. RESULTS Tables 1 and 2 describe the distribution of the variables under study. Most athletes experience high anxiety in competition, although high-level mental skills predominate. It is observed that negative and visuo-imaginative coping control are the psychological skills with the lowest distribution at high levels. There is a high presence of athletes with a history of injuries. The injuries suffered have been mostly moderate, occurring more frequently in competitions. The variables under study do not follow a normal distribution. Note. **p < 0.01 (two-tailed); *p < 0.05; HSI= History of sports injury; SIN= Sports injury number; SIS= Sports Injury severity; SIC= Sports injury context. Table 3 shows the analysis of the relationship between psychological variables and the injury history of athletes without specifying the sport they practice. It is appreciated that the motivational level, attention control, negative and positive coping establish an inverse relationship with the occurrence of the injury. The number of injuries suffered showed no relationship with psychological variables, while the severity of injuries established a positive relationship with anxiety in competition and a negative relationship with attention control and negative coping. On the other hand, the context of occurrence was negatively related to attitude control. Table 4 and Fig. (1) show that in basketball athletes only self-confidence establishes an inverse relationship with the number of injuries suffered. On the other hand, in baseball athletes, attention control, negative coping and positive coping are related to the occurrence of the injury. In these same athletes, the severity of the injuries suffered is positively related to competition anxiety and negatively related to attention control and negative coping. In soccer players, being injured is negatively related to motivational level, attention control and negative coping, while the severity of injuries is inversely related to attention control. 
In softball athletes, the occurrence of the injury is inversely related to self-confidence, the number of injuries is directly related to competition anxiety and inversely to self-confidence, negative and positive coping, while the severity of the injury it was inversely related to the control of the attitude, and the context of occurrence in a direct way with the level of anxiety in the competition. DISCUSSION The high presence of athletes who have suffered injuries is consistent with the findings of epidemiological studies that have confirmed the high prevalence of this relevant medical problem in sports [22 -24]. The propensity to experience high anxiety in competition and the high development of psychological skills to compete is an expected finding in highperformance athletes who have achieved sports mastery, coinciding with several investigations [25 -28]. In this heterogeneous population of high-performance ball game athletes, presenting low levels of motivation, attention control, negative and positive coping are risk factors for the occurrence of the injury. In addition, high competition anxiety and insufficient skills to control this negative emotion and stay focused on competitive activity are related to more serious injuries, while being injured during competition is related to poor attitude control. Previous studies that have analyzed these same psychological variables in heterogeneous populations of athletes do not coincide with the results obtained in the present investigation, although the findings of previous studies are fundamentally like each other in terms of the number of injuries suffered. determined in a study carried out on 34 male athletes in the technification process of Olympic Wrestling and Taekwondo, that the number of injuries suffered is related to the presence of less self-confidence, negative and positive coping control, while in a later study carried out on 84 athletes also in the process of technification of four individual disciplines (athletics, cycling, canoeing and taekwondo) found that the number of injuries was related to low control of negative coping and high anxiety in competition [16]. In another study also carried out in a heterogeneous population of 115 female and male athletics, cycling, canoeing and taekwondo athletes, also in the technification process, the authors determined that the number of injuries suffered is related to lower self-confidence and greater anxiety in competition, becoming psychological predictors of the number of times an athlete is injured [1]. On the other hand, an investigation carried out later 50 amateur triathletes of both sexes determined that the higher incidence of injuries is related to less positive coping control and attitude and that competition anxiety and low negative coping control explained 33% of the causes of injuries even when controlling the effect of other variables [3]. The previous studies [1,3,15,16] have analyzed the relationships between anxiety and psychological skills in individual sports and athletes of both sexes in the technification process or amateurs. The most consistent findings are that high anxiety in competition and fewer skills to control negative emotions are risk factors for a greater number of injuries and, to a lesser extent, agree that low self-confidence and positive coping control are relevant risk factors. 
A recent investigation with 63 high-performance male athletes (softball, soccer and baseball) also obtained divergent results, although the findings coincide in terms of the severity of the injuries suffered. Severity was related to low attention control and negative coping. These divergences, even in ( populations of similar athletes, may be due to the influence of other uncontrolled factors, such as the relationship established between the psychological variables analyzed [6]. Regarding the relationships between anxiety in competition and psychological skills with injuries depending on the type of sport, specific and differentiating correlation matrices were obtained in the present study, even though the findings differ markedly between each sport and the correlation matrix overall obtained. These results denote the intrinsic complexity of the relationships between both groups of variables. Even notable divergences were obtained when comparing the findings in the same sport with different subjects. In this regard, it was obtained that the results coincide to a greater extent in softball athletes [29], since in both analyses, it was obtained that the occurrence of the injury is related to less self-confidence, the number of injuries is also related to less self-confidence and less emotional control, while greater severity is related to less control of attitude. The results obtained in baseball athletes partially coincide with the findings of a previous investigation on baseball pitchers of different sports levels. The occurrence of the injury is related to lower self-confidence, negative and positive coping control. In addition, more serious injuries are related to greater anxiety in competition [30]. On the other hand, the results in basketball athletes differ almost completely from the findings in a similar investigation, only agreeing that the greater number of injuries suffered is related to lower selfconfidence [31]. CONCLUSION The divergences with the results of other investigations in heterogeneous and specific populations not only denote the complexity of the relationships between psychological variables and sports injury but also pose a problem for the generalization of psychological injury prevention programs following the Stress and Injuries model. Although the psychological preparation of the athlete must contain actions aimed at preventing injuries and their effect on the subjectivity of the athlete, the results obtained allow us to infer that preventive psychological intervention must be carried out in specific sports according to the relationships obtained between psychological variables and injuries. These findings go beyond the conception of stress as an antecedent of the injury, establishing the need to reorient research in this field of study. Future research should have as its purpose the explanation of the causes and consequences of the relationships between the psychological variables that configure the risk of injury and vice versa. In this sense, a future line of research could be the analysis of how the type of sport mediates the relationships between anxiety, mental skills and injuries depending on their differentiating characteristics. In addition, it is necessary to determine how the intrinsic relationships between psychological variables can explain the complex and specific nature of the relationship with past and future injuries. 
Despite the value of the results obtained in this research to glimpse the complexity of a phenomenon of notable relevance in sport, it is considered that the type of cross-sectional study and its descriptive-correlational scope constitute the main limitations of the findings, as well the low representation of the athletes analyzed over the Cuban sports population, which does not allow the results to be generalized at the national level. Therefore, it is considered that these limitations must be overcome to arrive at generalizable and more conclusive results. ETHICS APPROVAL AND CONSENT TO PARTICIPATE This study is part of the research project: "Psychological Preparation and Sports Injuries in Team Sports", approved and endorsed by the Scientific Council and the Medical Ethics Committee of the Provincial Center of Sports Medicine of Villa Clara, Cuba. HUMAN AND ANIMAL RIGHTS No animals were used for studies that are the basis of this research. All the humans were used in accordance with the ethical standards of the committee responsible for human experimentation (institutional and national), and with the Helsinki Declaration of 1975, as revised in 2013 (http://ethics.iit.edu/ecodes/node/3931). CONSENT FOR PUBLICATION The athletes participated voluntarily, giving their informed consent. STANDARDS OF REPORTING STROBE guidelines have been followed. AVAILABILITY OF DATA AND MATERIALS The data used in this study are available upon request from the corresponding author [J.R.G]. FUNDING None.
added: 2022-12-09T16:08:45.928Z | created: 2022-12-07T00:00:00.000
{ "year": 2022, "sha1": "bc1211dfd6032521be50a1a6fc963677529d631b", "oa_license": null, "oa_url": "https://doi.org/10.2174/18743501-v15-e221207-2022-60", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e1bad813aad331b72acd4efed6275d901772733b", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
id: 11041386 | source: pes2o/s2orc | version: v3-fos-license
Neutrino production in UHECR proton interactions in the infrared background We discuss the contribution of proton photoproduction interactions on the isotropic infrared/optical background to the cosmic neutrino fluxes. This contribution has a strong dependence on the proton injection energy spectrum, and is essential at high redshifts. It is thus closely correlated with the cosmological evolution of the ultra high energy proton sources and of the infrared background itself. These interactions may also contribute to the source fluxes of neutrinos if the proton sources are located in regions of high infrared emission and magnetic fields. The assumption that the Ultra High Energy Cosmic Rays (UHECR) are nuclei (presumably protons) accelerated in luminous extragalactic sources provides a natural connection between these particles and ultra high energy neutrinos. This was first realized by Berezinsky & Zatsepin [1] soon after the introduction of the GZK effect [2]. The first realistic calculation of the generated neutrino flux was made by Stecker [3]. The problem has been revisited many times after the paper of Hill & Schramm [4], who used the non-detection of such neutrinos to limit the cosmological evolution of the sources of UHECR. These so-called cosmological neutrinos are produced in photoproduction interactions of the UHECR with the ambient photon fields, mostly with the microwave background radiation (MBR). The GZK effect is the limit on the highest energy a cosmic ray proton can retain in propagation through the MBR. It sets a cutoff in the cosmic ray energy spectrum in case the UHECR sources are isotropically and homogeneously distributed in the Universe. The physics of these photoproduction interactions is very well known. Although the energy of the interacting protons is very high, the center of mass energy is low, mostly at the photoproduction threshold. The interaction cross section is studied at accelerators and is very well known. Most of the interactions happen at the Δ+ resonance, where the cross section reaches 500 µb. The mean free path reaches a minimum of 3.4 megaparsecs (Mpc) at a proton energy of 6×10^20 eV. The average energy loss of 10^20 eV protons is about 20% per interaction and slowly increases with the proton (and center of mass) energy. The fluxes of cosmological neutrinos are, however, very uncertain because of the lack of certainty in the astrophysical input. The main parameters that define the magnitude and the spectral shape of the cosmological neutrino fluxes are: the total UHECR source luminosity L_CR, the shape of the UHECR injection spectrum α_CR in the case of a power law spectrum, the maximum UHECR energy at acceleration E_max, and the cosmological evolution of the UHECR sources. These are the same parameters that Waxman & Bahcall [5] used to set a limit on the neutrino fluxes generated in optically thin sources of UHECR. The microwave background is not the only universal photon field that has to be taken into consideration. Especially interesting is the isotropic infrared and optical background (IRB). The number density of the IRB is smaller than that of the MBR by more than two orders of magnitude. On the other hand, protons of lower energy can interact on the IRB, and the smaller number density has to be weighted with the larger flux of interacting protons. The present Universe is optically thin to 10^19 eV and lower energy protons, but even at small redshift the proton interaction rate quickly increases.
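To make the last point quantitative, a short back-of-the-envelope sketch (standard threshold kinematics, not a calculation from the paper; the chosen photon energies are only representative) of the single-pion photoproduction threshold for a proton colliding head-on with a background photon:

# Threshold for p + gamma -> p + pi0 in a head-on collision:
#   E_p,thr = [(m_p + m_pi)^2 - m_p^2] / (4 * eps),  eps = photon energy.
# All energies in eV.
m_p = 938.272e6     # proton mass
m_pi = 134.977e6    # neutral pion mass

def threshold_energy(eps):
    return ((m_p + m_pi)**2 - m_p**2) / (4.0 * eps)

for label, eps in [("MBR photon, ~6e-4 eV", 6e-4),
                   ("far-infrared photon, ~1e-2 eV", 1e-2),
                   ("near-infrared/optical photon, ~1 eV", 1.0)]:
    print(label, "-> E_p,thr ~ %.1e eV" % threshold_energy(eps))

# Roughly: ~1e20 eV on the MBR, ~7e18 eV on the far infrared and ~7e16 eV on
# near-infrared/optical photons, which is why lower energy protons can
# interact on the IRB, as stated above.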
This is different from the interactions on MBR, where the interacting protons quickly lose their energy even at z=0. The cosmological evolution of UHECR injection is thus of major importance for the contribution of such interactions to the flux of cosmological neutrinos. We use the IRB model of Franceschini et al [6] shown in Fig. 1 together with the MBR in terms of energy density. The model consists of two components: 'star', near infrared, which covers the higher photon energies, and 'dust', far infrared that continues down to MBR. The total IRB number density is significantly smaller than that of MBR. The model yields 1.6 photons/cm 3 , a factor of 250 less than the MBR. The IRB is measured directly after subtraction of point sources and is also estimated from the absorption of TeV photons coming from extragalactic sources [7]. These estimates affect mostly the near infrared part of the spectrum. Photons of wavelength above 40 µm affect only the γ-ray fluxes above 10 TeV [8] where the statistics is usually low and the flux decrease could also be due to absorption in the γ-ray sources. [6]. The data points are from analyses of the DIRBE measurements [9,10]. In addition to the lower total photon density the IRB covers much wider wavelength range than the microwave background, and its photon density per unit energy is even smaller. The interactions of UHECR on IRB photons are indeed very rare in the present universe. Fig. 2 shows the fraction of the proton energy that is converted to neutrinos as a function of the proton energy in propagation on a distance of 200 Mpc. In the derivation of the neutrino limit Wax-man&Bahcall use cosmic ray source luminosity L CR = 4.5 ± 1.5 × 10 44 erg/Mpc 3 /yr between 10 19 and 10 21 eV for power law cosmic ray energy spectrum with α = 2. The assumption is that no cosmic rays are accelerated above 10 21 eV. The cosmological evolution of the source luminosity is assumed to be (1 + z) 3 to z = 1.9 then flat to z=2.7 with an exponential decay at larger redshifts. We will first use the parameters of this limit to find the contribution of the proton interactions on IRB. The resulting ν µ +ν µ spectrum for a cosmological model with Ω Λ = 0.7, Ω M = 0.3 and H 0 = 75 km/s/Mpc is shown with a dotted line in Fig. 3. The flux peaks at 10 16.3 eV at 2.5×10 −18 cm −2 s −1 ster −1 . The peak is at energy lower than the peak of the MBR interactions (shown with a dash-dot line) by a factor of 20, and its magnitude is also lower by a factor of 10. Next we show in the same figure with a dashed line the contribution of IRB for a scenario in which the injection spectral index is changed to α = 2.5 and all other parameters are the same. There is a noticeable shift of the peak position to still lower energy. The peak is now located at 10 15.7 eV and is higher by a factor of about 7. The contribution of IRB is now smaller than that of MBR (α = 2) only by about 30%. The highest curve in Fig. 3 shows the IRB contribution for α = 2.5 and cosmological evolution with n = 4 and then constant to z=10 followed by an exponential decrease. The location of the peak does not change but its magnitude increases by almost a factor of three. It is now 50% higher than the 'standard' MBR generated cosmological neutrinos. It is obviously not correct to compare fluxes obtained with different assumptions for the cosmological evolution and we do it only to have a feeling for the magnitude of the neutrino fluxes. The α = 2.5 spectra decrease the flux of cosmological neutrinos of energy above 10 19 eV. 
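As a small numerical illustration of why the injection slope matters (using the luminosity and energy range quoted above; the 6×10^19 eV boundary is only a rough marker of where MBR losses become dominant, and the whole sketch is ours, not the authors'), one can normalize the power-law injection spectrum dN/dE = A E^-α to L_CR between 10^19 and 10^21 eV and ask what fraction of that power is carried by protons below that boundary:

import numpy as np

L_CR = 4.5e44            # erg / Mpc^3 / yr, UHECR luminosity quoted above
E1, E2 = 1e19, 1e21      # eV, injection energy range
E_split = 6e19           # eV, rough boundary below which MBR losses are small at z = 0
EV_PER_ERG = 6.242e11

def relative_power(alpha, Elo, Ehi):
    """Integral of E * E^-alpha between Elo and Ehi (up to the normalization A)."""
    if np.isclose(alpha, 2.0):
        return np.log(Ehi / Elo)
    return (Ehi**(2.0 - alpha) - Elo**(2.0 - alpha)) / (2.0 - alpha)

for alpha in (2.0, 2.5):
    total = relative_power(alpha, E1, E2)
    A = L_CR * EV_PER_ERG / total          # normalization constant (units depend on alpha)
    frac_low = relative_power(alpha, E1, E_split) / total
    print("alpha = %.1f: A = %.2e, power below 6e19 eV = %.0f%%" % (alpha, A, 100 * frac_low))

For α = 2 roughly 40% of the injected power sits below ~6×10^19 eV, while for α = 2.5 it is closer to two thirds, which is the qualitative reason the steeper spectrum enhances the IRB contribution and reduces the flux of the highest energy cosmological neutrinos.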
Both the spectral shape and the cosmological evolution of the UHECR sources affect the contribution of the IRB to the cosmological neutrino flux. The most important factor, however, is the shape of the injection spectrum. It is worth noting that the maximum proton energy at acceleration does not affect the IRB-generated fluxes, since they are due mostly to protons of energy below 10^20 eV, as can be observed in Fig. 2. At an energy of about 3×10^18 eV the cosmological fluxes of νµ + ν̄µ are very close to the limit for source neutrinos. The reason is simple: in propagation from large distances, protons lose almost all of their energy in interactions on the MBR. An interesting feature is the flux of ν̄e (not shown), which peaks at an energy of about 3×10^15 eV. The origin of this flux is neutron decay, and a small ν̄e flux is generated in neutron interactions on the MBR. The cosmological evolution of the sources (n = 3) increases the fluxes by about a factor of five compared to a no-evolution scenario. The increase, however, is not energy independent [12]. The highest energy neutrinos are generated at small redshifts. The low energy neutrinos come from high redshifts for two reasons: the threshold energy of protons for photoproduction interactions decreases, and the generated neutrinos are further redshifted to the current epoch.

[Figure 3 caption: Fluxes of cosmological neutrinos (νµ + ν̄µ) generated only by interactions on the IRB. All three calculations use the UHECR luminosity derived by Waxman [11]. The power-law spectral indices and the cosmological evolution n of the UHECR sources are given by each curve. The dash-dotted line shows the 'standard' n = 3 cosmological neutrino flux from interactions on the MBR.]

The standard flux (α = 2.0, n = 3) would generate about 0.4 neutrino-induced showers per km^3 per year in the IceCube [14] neutrino detector and 0.9 events with energy above 10^19 eV in the Auger [15] observatory (for a target mass of 30 km^3 of water), assuming that at arrival at Earth the flavor ratio νe : νµ : ντ is 1:1:1 because of neutrino oscillations. It is difficult to estimate the rate in EUSO [16] because of its yet unknown energy threshold. These events come from the NC interactions of all neutrinos, the CC interactions of νe, the hadronic (y) part of the CC interactions of muon and tau neutrinos, and from τ decay. Although very prominent, the Glashow resonance does not produce a high rate of events because of its narrow width. IceCube should also detect very energetic muons with a comparable rate, which is difficult to predict without detector Monte Carlo simulations.

[Figure 4 caption (x-axis: Log10 Eν, eV): Comparison of the cosmological neutrino fluxes with the Waxman & Bahcall limit, which is given as a shaded area for the 'standard' power-law injection spectrum and cosmological evolution. The thick white line shows the limit derived in Ref. [13]. The dashed line shows the flux of cosmological neutrinos generated in interactions on the MBR for the 'standard' parameters, and the solid one for α = 2.5 and n = 4. The squares show the fluxes generated on the total photon background shown in Fig. 1: the open squares are for the 'standard' parameters and the full ones for α = 2.5 and n = 4.]

Changing the proton injection spectrum to a power law with α = 2.5 moves the maximum of the cosmological neutrino flux to lower energy and increases the contribution of the interactions on the IRB. At the same time the flux of higher energy cosmological neutrinos decreases. The shower event rates in IceCube and Auger become 0.44 and 0.31, respectively.
Assuming a stronger source evolution, (1+z)^4, makes a big difference in the expected fluxes. With a power-law source spectrum with α = 2.5, it generates 1.2 events in IceCube and 0.66 events in Auger. The cosmological neutrino spectrum for νµ + ν̄µ is shown with full squares in Fig. 4; the contribution of interactions on the MBR is shown with a solid line.

The biggest uncertainty in these results, which is not listed above, is the cosmological evolution of the infrared/optical background. The estimates above assume that it is the same as that of the MBR, i.e. that the IRB was fully developed at z = 8, which is the limit of the redshift integration. This does not seem to be a realistic assumption, although models of the IRB emission [17] predict very strong evolution of the far-infrared emission, especially between redshifts of 10 to 100. The maximum proton energy at acceleration, E_max, is unknown, but having in mind the highest energy Fly's Eye shower of 3×10^20 eV, one should expect that astrophysical sources accelerate protons at least to 10^21 eV. The injection spectrum is also not very well determined, since the result of proton propagation depends on the UHECR source distribution. Attempts to derive the injection spectrum in the case of an isotropic homogeneous source distribution end up with injection spectra not flatter than an E^-2.4 power law [18,19]. The extreme case is developed by Berezinsky et al. [20], who derive an α = 2.7 injection spectrum. The luminosity required for the explanation of the observed events above 10^19 eV grows with the spectral index, and in the case of Berezinsky et al. becomes 4.5×10^47 erg Mpc^-3 yr^-1. Such a steep spectrum would generate only a small event rate for neutrinos above 10^19 eV and would enhance the IRB contribution. Expressed in terms of (1+z)^n, the cosmological evolution of different astrophysical objects is observed to be between n = 3 and 4. A strong evolution with n = 4, as used above, may be too optimistic, but it is not entirely out of range. As seen from Fig. 4, strong cosmological evolution not only increases the total flux but also moves the peak of the cosmological neutrino spectrum to somewhat lower energy. Finally, the cosmic ray source luminosity, which was normalized to the flux of UHECR at 10^19 eV by Waxman [11], could easily be higher or lower by half an order of magnitude. One can then assume a pessimistic IceCube shower event rate of 0.1 event per km^3 yr and an optimistic rate of 4-5 events.

It is obvious that a detailed calculation of the flux of cosmological neutrinos should include the interactions on the infrared background. We plan to do that with a better model of the IRB cosmological evolution and to describe the calculation in more detail in a forthcoming paper. One should also keep in mind that if the UHECR sources are located in regions of high infrared and optical photon density, the fluxes of source neutrinos could increase. The effect may be much stronger if protons of 10^19 eV and lower energy are contained in the region by high magnetic fields.
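The factor-of-a-few enhancement from source evolution quoted above can also be illustrated with a crude weighting exercise. The sketch below is my own simplification, not the paper's calculation: it only weights the source activity by (1+z)^n and by the cosmic time per unit redshift, uses the Ω_M = 0.3, Ω_Λ = 0.7 cosmology quoted earlier, integrates to z = 2.7, and ignores all spectral and propagation effects.

```python
import numpy as np

# Crude weighting of source activity: emissivity ~ (1+z)^n, multiplied by
# dt/dz = 1/[(1+z) H(z)] for flat LambdaCDM with Omega_M = 0.3, Omega_L = 0.7.
# The ratio to the no-evolution case gives a rough feel for how much extra
# neutrino production stronger cosmological evolution buys.

OM, OL = 0.3, 0.7
z = np.linspace(0.0, 2.7, 5001)
dz = z[1] - z[0]
E_z = np.sqrt(OM * (1.0 + z) ** 3 + OL)   # H(z)/H0
dt_dz = 1.0 / ((1.0 + z) * E_z)           # in units of 1/H0

def weighted_activity(n):
    return np.sum((1.0 + z) ** n * dt_dz) * dz

base = weighted_activity(0)
for n in (3, 4):
    print(f"n = {n}: roughly {weighted_activity(n) / base:.1f}x the no-evolution source activity")
```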
2014-10-01T00:00:00.000Z
2004-04-27T00:00:00.000
{ "year": 2004, "sha1": "08be791d5b155f08316744276131f1f725a77fb6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2004.05.075", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "08be791d5b155f08316744276131f1f725a77fb6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
2866022
pes2o/s2orc
v3-fos-license
GhWRKY68 Reduces Resistance to Salt and Drought in Transgenic Nicotiana benthamiana The WRKY transcription factors modulate numerous physiological processes, including plant growth, development and responses to various environmental stresses. Currently, our understanding of the functions of the majority of the WRKY family members and their possible roles in signalling crosstalk is limited. In particular, very few WRKYs have been identified and characterised from an economically important crop, cotton. In this study, we characterised a novel group IIc WRKY gene, GhWRKY68, which is induced by different abiotic stresses and multiple defence-related signalling molecules. The β-glucuronidase activity driven by the GhWRKY68 promoter was enhanced after exposure to drought, salt, abscisic acid (ABA) and H2O2. The overexpression of GhWRKY68 in Nicotiana benthamiana reduced resistance to drought and salt and affected several physiological indices. GhWRKY68 may mediate salt and drought responses by modulating ABA content and enhancing the transcript levels of ABA-responsive genes. GhWRKY68-overexpressing plants exhibited reduced tolerance to oxidative stress after drought and salt stress treatments, which correlated with the accumulation of reactive oxygen species (ROS), reduced enzyme activities, elevated malondialdehyde (MDA) content and altered ROS-related gene expression. These results indicate that GhWRKY68 is a transcription factor that responds to drought and salt stresses by regulating ABA signalling and modulating cellular ROS. Introduction In the natural environment, plants often simultaneously confront a great variety of abiotic and biotic stresses. To cope with these stresses, they have evolved sophisticated defence mechanisms. Transcriptional modulation is vital for the complex genetic and biochemical networks to respond to stress. A number of transcription factors (TFs) have been shown to participate in regulating defence responses [1,2]. WRKY TFs are the most important TFs in plants, and they contain one or two highly conserved WRKYGQK sequences at the N-terminus and a zinc finger motif at the C-terminus [3]. Based on the number of WRKY domains and the primary structure of the zinc-finger motif, WRKY members can be subdivided into three major groups (I-III), and group II can be further split into five subgroups (IIa-e) [4]. It is generally assumed that WRKY TFs act as major regulatory proteins by specifically binding to the W-box [TTGAC(C/T)] to regulate gene expression [2,3,5]. There is an increasing amount of evidence that WRKY TFs are key regulators in the complex signalling and transcriptional networks of plant defences [6][7][8][9]. Previous studies have examined the roles of plant WRKY proteins in response to biotic stress. For example, BnWRKY33 plays an important role in B. napus defence against S. sclerotiorum, which is associated with the activation of the salicylic acid (SA)-and jasmonic acid (JA)-mediated defence responses [10]. CaWRKY40 is regulated by SA, JA and ethylene (ET) signalling, and it plays an important role in the regulation of tolerance to heat stress and resistance to Ralstonia solanacearum infection [11]. Over the last several years, an increased amount of evidence has shown that WRKY proteins are also involved in modulating abiotic stress tolerance [12][13][14][15]. In Arabidopsis, WRKY25, WRKY26 and WRKY33 play important roles in the response to heat stress [16]. Constitutive expression of WRKY57 can improve drought tolerance [14]. 
Transgenic Arabidopsis overexpressing TaWRKY2 or TaWRKY19 displayed improved salt and drought tolerance [17]. Three soybean WRKY-type transcription factor genes conferred differential tolerance to abiotic stresses in transgenic Arabidopsis plants. For example, GmWRKY21 transgenic plants were tolerant to cold stress, whereas GmWRKY54 conferred salt and drought tolerance. Transgenic plants overexpressing GmWRKY13 exhibited sensitivity to salt and mannitol stress [18]. ABO3/WRKY63 mediates responses to abscisic acid (ABA) and drought tolerance in Arabidopsis [19]. The alleles OsWRKY45-1 and OsWRKY45-2 play different roles in ABA signalling and cold, salt and drought stresses adaptation in rice [20]. Recently, Yan reported that GhWRKY17 responds to drought and salt stress through ABA signalling and the control of cellular ROS production in cotton [21]. Although evidence that WRKY proteins are involved in abiotic stress is increasing, the understanding of the roles of these proteins in the responses to abiotic stress is progressing relatively slowly [17], and the challenge to elucidate the molecular mechanisms of defence remains. Moreover, there is little information about WRKY genes in non-model plants. Abscisic acid (ABA), an important phytohormone, is a key signal for regulating a range of plant physiological processes in response to various biotic and abiotic stresses. Osmotic stresses, including drought and high salinity, can trigger the ABA-dependent signalling pathway and ABA accumulation [22,23]. An increased level of ABA activates downstream transcription factors and modulates the expression of various ABA-responsive genes [24,25]. ABA also promotes cellular reactive oxygen species (ROS) production in Arabidopsis guard cells [26], and Zhang [27] reported that ROS positively regulates the ABA inhibition of stomatal opening. Extensive research has demonstrated that ROS are important signal transduction molecules, modulating plant development, growth, hormonal signalling, and defence [28,29]. High ROS concentrations contribute to ROS-associated injury [30], and the regulation of ROS levels is crucial for abiotic stress tolerance in plants [31]. It is important to examine the roles of WRKY TFs in cross-network signalling between ROS and ABA in the response to drought and salt stress. Cotton is an important source of natural fibre used in the textile industry, and it is particularly susceptible to waterlogging stress [32]. Gossypium hirsutum L. represents more than 95% of the cotton cultivated worldwide [33]. In cotton, only a small number of WRKY TFs have been isolated and characterised. Therefore, understanding the underlying roles of WRKY proteins in the tolerance of osmotic stress is an important global problem for breeding programs. In the present study, a group IIc WRKY gene, GhWRKY68, was isolated from cotton (G. hirsutum L.). The gene can be induced by abiotic stresses and multiple defence-related signalling molecules. Many abiotic stress-responsive elements were observed in the promoter region of this gene, and β-glucuronidase activity driven by the promoter was enhanced by treatments with drought, salt, ABA and H 2 O 2 . The overexpression of GhWRKY68 in Nicotiana benthamiana significantly reduced resistance to drought and salt modulated by ABA signalling and the regulation of ROS. Hence, this study was conducted with the aim of understanding the mechanism by which the WRKY TFs regulate plant responses to drought and salt stress. 
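Both sequence features invoked above, the W-box core element [TTGAC(C/T)] in target promoters and the conserved WRKYGQK heptapeptide plus zinc-finger motif in the protein, are simple patterns that can be located programmatically. The sketch below is only an illustration: the regular expressions, the zinc-finger spacing, and the toy sequences are my own assumptions, not data from this study.

```python
import re

# Toy illustration: (i) locate W-box cores [TTGAC(C/T)] in a promoter on both
# strands, and (ii) check a protein for the WRKYGQK core plus a
# C-X(4-5)-C-X(22-23)-H-X-H zinc-finger spacing.  Both sequences are invented.

W_BOX = re.compile(r"TTGAC[CT]")
WRKY_CORE = re.compile(r"WRKYGQK")
ZINC_FINGER = re.compile(r"C.{4,5}C.{22,23}H.H")

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def wbox_positions(promoter):
    """1-based forward-strand start positions of W-box hits on either strand."""
    fwd = [(m.start() + 1, "+") for m in W_BOX.finditer(promoter)]
    rev = [(len(promoter) - m.end() + 1, "-") for m in W_BOX.finditer(revcomp(promoter))]
    return sorted(fwd + rev)

def looks_like_wrky(protein):
    """True if the protein carries the WRKYGQK core and the zinc-finger spacing."""
    return bool(WRKY_CORE.search(protein)) and bool(ZINC_FINGER.search(protein))

toy_promoter = "ATGCTTGACCAGGTTACGGTCAAGGCTAATTGACTCCGTTAAGTCAAATCCGGATTGACT"
toy_protein = ("M" + "A" * 40 + "WRKYGQK" + "S" * 12
               + "C" + "PDGAA" + "C" + "K" * 23 + "H" + "N" + "H" + "RGE")

print("W-box hits (position, strand):", wbox_positions(toy_promoter))
print("WRKY-like protein:", looks_like_wrky(toy_protein))
```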
Identification and sequence analysis of GhWRKY68

Based on the conserved region of plant stress-related WRKY genes, a pair of degenerate primers, WP1 and WP2, was designed, and a putative WRKY fragment was isolated. Next, rapid amplification of cDNA ends by PCR (RACE-PCR) was used to amplify the 5′ untranslated region (UTR) and the 3′ UTR. Finally, the deduced full-length cDNA sequence consisting of 1118 bp was retrieved. Because AtWRKY68 is the Arabidopsis WRKY gene most closely related to this putative cotton WRKY gene, we designated this new WRKY gene as GhWRKY68 (GenBank, KJ551845). The gene encoded a protein with a predicted relative molecular mass of 33.186 kDa and a theoretical isoelectric point of 8.71. Multi-alignment analysis revealed that the deduced WRKY protein was closely related to other plant WRKY proteins, sharing 57.14, 57.01, 51.90 and 57.32% homology with PtWRKY48 (XP_002301524), PtWRKY23 (ABK41486.1), VvWRKY48 (XP_002279385), and PtWRKY13 (ACV92015), respectively. Similar to the other WRKY TFs, GhWRKY68 has one WRKY domain that contains the highly conserved amino acid sequence WRKYGQK and a single putative zinc finger motif (C-X4-5-C-X22-23-H-X1-H) (Fig. 1A). Phylogenetic analysis further revealed the evolutionary relationship to other WRKYs from various plant species, suggesting that GhWRKY68 belongs to Group IIc of the WRKY family (Fig. 1B). In addition, with a pair of specific primers, WQC1 and WQC2, the GhWRKY68 genomic sequence (KJ551846) was amplified. The sequence consisted of 2543 bp interrupted by two introns of 228 and 138 bp. Currently, there is no other study on GhWRKY68 from cotton; therefore, we characterised this gene.

Characterisation of GhWRKY68 as a transcription factor

Using the Nuc-PLoc program and the CELLO version 2 program, we predicted that GhWRKY68 was localised in the nucleus. To test this prediction, 35S::GhWRKY68-GFP and 35S::GFP plasmids were constructed, and the latter was used as the control (Fig. 2A). Then, the 35S::GhWRKY68-GFP and 35S::GFP plasmids were introduced into onion epidermal cells. The fluorescence was observed by confocal microscopy, and the nuclei were stained with DAPI. As shown in Fig. 2B, onion epidermal cells carrying the 35S::GhWRKY68-GFP plasmid emitted fluorescence only in the nuclei, whereas the 35S::GFP control exhibited GFP signals in both the cytoplasm and the nuclei. These results demonstrated that the GhWRKY68 protein was localised in the nucleus. Numerous studies have demonstrated that WRKY TFs modulate protein expression by binding to the W-box [TTGAC(C/T)], which is present in the promoters of defence-associated genes as well as in many WRKY genes [3]. To test whether this binding also applies to GhWRKY68, a yeast one-hybrid experiment was performed. Three tandem repeats of the W-box (TTGACC) or the mW-box (TAGACG) (Fig. 2C) were inserted into the pAbAi vector and integrated into the genome of yeast strain Y1H Gold with an Aureobasidin A resistance (AbAr) reporter gene (AUR-1C). A yeast effector vector, pGADT7-WRKY68, and an empty vector, pGADT7, were transformed into the yeast strain Y1H Gold carrying a pAbAi-W-box or a pAbAi-mW-box plasmid. All of the transformed yeast cells grew on leucine (Leu)- and uracil (Ura)-deficient synthetic dextrose (SD) medium (SD/-Leu/-Ura), confirming the success of the transformation (Fig. 2D). Only the yeast clones harbouring the pAbAi-W-box and pGAD-GhWRKY68 grew on SD/-Leu containing 500 ng/ml AbA (Fig. 2D).
These results demonstrated that GhWRKY68 bound to the W-box element and functioned as a transcriptional activator in this yeast system. To test whether GhWRKY68 activates gene expression by interacting with the W-box in plant cells, we performed transient co-expression experiments. The reporter vector W-box-35S mini-GUS (Fig. 2E) was transformed either alone or with the effector plasmid 35S::GhWRKY68 (Fig. 2E) into N. benthamiana leaves using Agrobacterium-mediated transient expression, followed by a GUS histochemical staining assay. The tobacco leaves co-transformed with W-box-35S mini-GUS and 35S::GhWRKY68 were stained dark blue. In contrast, leaves transformed with only the effector never stained blue, and leaves transformed with only the reporter vector showed a slight blue background (Fig. 2F). Thus, overexpression of GhWRKY68 can activate the expression of GUS in N. benthamiana leaves in a W-box-dependent manner.

[Figure 1 caption (fragment): the phylogenetic tree was constructed using the neighbour-joining method in MEGA 4.1. GhWRKY68 is highlighted in the box. The gene name is followed by the protein ID. The species of origin of the WRKYs are indicated by the abbreviations before the gene names: At, Arabidopsis thaliana; Gh, Gossypium hirsutum; Pt, Populus tomentosa; Vv, Vitis vinifera; Zm, Zea mays; Gm, Glycine max; and Nc, Noccaea caerulescens.]

[Figure 2 caption (fragment): ...GhWRKY68 with the yeast one-hybrid assay using the 3×W-box or mW-box as bait. Yeast cells carrying pGAD-GhWRKY68 or pGAD7 were grown on SD/-Leu/-Ura or SD/-Leu containing 500 ng/ml AbA: 1, pAbAi-W-box/pGAD-GhWRKY68; 2, pAbAi-W-box/pGAD7; 3, pAbAi-mW-box/pGAD-GhWRKY68; and 4, pAbAi-mW-box/pGAD7. (E) Schematic diagram of the reporter and effector constructs used for co-transfection. (F) Histochemical analysis of co-transfected N. benthamiana leaves. Fully expanded leaves from 8-week-old N. benthamiana were agro-infiltrated with the indicated reporter and effector at an OD600 of 0.6. GUS staining was performed 3 days after the transformation.]

GhWRKY68 promoter analysis

To clarify the mechanism underlying the GhWRKY68 expression patterns in response to multiple stresses, the 1118 bp promoter region of GhWRKY68 (KJ551846) was isolated using inverse PCR (I-PCR) and nested PCR [34]. Many response elements for abiotic and biotic stress, tissue-specific expression, development and light were predicted by the database search programs PLACE and PlantCARE (Table 1). Among these elements was MBS, a MYB binding site involved in drought tolerance in Arabidopsis [35]. An AREB cis-acting element was predominant in ABA-dependent gene expression [36]. These results suggest that GhWRKY68 may play a role in the response to environmental stresses and in developmental pathways. To test the activity of the GhWRKY68 promoter, four independent transgenic Arabidopsis T3 lines harbouring the ProGhWRKY68::GUS construct were used for GUS histochemical staining assays. As shown in Fig. 3A, GUS staining was mainly detected at the germination stage, and weak GUS staining was observed at the reproductive stage. The tissue-specific regulation of the GhWRKY68 promoter, assessed by GUS expression, was confined to the root, leaf and shoot apical meristem (SAM) of 2-week-old transgenic seedlings (Fig. 3B, a-c) and to the flower and pod at the reproductive stage (Fig. 3B, d-e). These results indicate that GhWRKY68 might be involved in developmental regulation. In addition, GUS expression was induced by various treatments. Slight GUS staining was observed in the absence of stress (Fig. 3C, a).
However, GUS expression was strongly induced in the SAM, root and leaf after NaCl, PEG, ABA or H2O2 treatments (Fig. 3C, b-e). Taken together, these results suggest that GhWRKY68 is a stress-inducible gene, and its expression is regulated spatially and temporally.

The transcriptional levels of GhWRKY68 are influenced by various stresses

Transcriptional modulation is a vital aspect of the complex signal transduction pathway that enables plants to respond to biotic and abiotic stresses [1,3]. To study the expression patterns of GhWRKY68 under diverse environmental stresses, transcript levels of this gene were measured after the cotton seedlings had been exposed to drought (PEG6000) and salt (NaCl). The expression profile of GhWRKY68 in plants that were not exposed to treatment was used as the control, and no changes in GhWRKY68 transcript levels were noted during the 0- to 8-h series (Fig. 4A). As shown in Fig. 4B, treatment with NaCl strongly induced the transcription of GhWRKY68: a 3.8-fold induction was observed at 4 hours post-treatment (hpt). The PEG6000 treatment also induced the expression of GhWRKY68, but the induced levels were lower than after NaCl treatment (Fig. 4C). ABA and H2O2 are important signalling molecules that play crucial roles in mediating the expression of downstream genes in plant defence reactions against biotic and abiotic stresses [37]. To characterise the function of GhWRKY68 in plant defences, we examined the expression pattern of GhWRKY68 in cotton treated with various phytohormones using qPCR. In response to ABA, the GhWRKY68 transcript levels increased from 1 hpt to 4 hpt, reaching maximal levels at 4 hpt (4.1-fold relative to the mock treatment, Fig. 4D). The GhWRKY68 transcript level was also enhanced by H2O2: a 3.7-fold induction was observed at 2 hpt (Fig. 3E). These results indicate that GhWRKY68 expression was induced under various stress conditions.

Overexpression of GhWRKY68 enhances the drought sensitivity of transgenic plants

The promoter analysis and differential expression pattern analyses suggested that GhWRKY68 may play a role in multiple stress defence responses, especially in the osmotic stress response. Further functional analyses of GhWRKY68 were performed through ectopic expression in N. benthamiana, because the transformation of cotton plants is difficult and time-consuming. Six independent transgenic N. benthamiana lines overexpressing (OE) GhWRKY68 were obtained by kanamycin resistance selection and confirmed by PCR (S1 Fig. A), and the efficiency of tobacco transformation was 54.5%. RT-PCR and qPCR analyses were performed to detect the expression levels of the transgene in the different lines (S1 Fig.). To determine the influence of drought on the transgenic lines, the germination capacity of WT and OE plants was evaluated on 1/2 Murashige & Skoog (MS) medium supplemented with exogenous mannitol (0, 100 and 200 mM) to mimic drought conditions. As shown in Fig. 5A-B, there were no significant differences in the growth or germination rates of the WT and OE plants during germination under normal conditions. However, following treatment with mannitol, the germination of both the WT and OE lines was inhibited as a function of increasing mannitol concentration. In addition, transgenic seeds were more strongly suppressed than wild-type seeds, resulting in the germination of the OE plants being approximately 10-15% of that of WT plants in the presence of 200 mM mannitol 3 days after sowing (Fig. 5A-B).
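The fold inductions reported above (for example, the 3.8-fold NaCl response at 4 hpt) come from qPCR data analysed with the 2^-ΔΔCt comparative CT method described in the Methods. A minimal sketch of that calculation is given below; the Ct values are invented for illustration and are not data from this study.

```python
import statistics

# 2^-ddCt relative expression: normalise the target gene to a reference gene
# in both treated and control samples, then compare the two conditions.

def fold_change(target_treated, ref_treated, target_control, ref_control):
    dct_treated = statistics.mean(target_treated) - statistics.mean(ref_treated)
    dct_control = statistics.mean(target_control) - statistics.mean(ref_control)
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

fold = fold_change(
    target_treated=[22.1, 22.3, 22.0], ref_treated=[18.0, 18.1, 17.9],
    target_control=[24.2, 24.0, 24.1], ref_control=[18.1, 18.0, 18.2],
)
print(f"relative expression (treated vs. control): {fold:.1f}-fold")
```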
To further assess the effect of GhWRKY68 overexpression on drought tolerance at the vegetative growth stage, 8-week-old WT and OE plants were grown in the same pot without water for 10 days. After 10 days of drought treatment, OE plants showed more leaf wilting than WT plants (Fig. 5C). When they were re-watered, the survival of the transgenic plants was approximately 30-43% lower than that of the WT plants (Fig. 5D). Additionally, the rate of water loss from the detached leaves of the OE plants was lower than that of WT plants under dehydration conditions (Fig. 5E-F). Stomatal closure is a major plant mechanism for reducing water loss during drought [38]. Thus, the stomatal state was observed by microscopy under drought conditions. Under normal conditions, there was no significant difference between the stomatal length:width ratios of the opened stomata of the WT and OE plants. However, the stomatal apertures in the OE lines were more open than those of the WT plants after drought stress. After a 2-day watering recovery, the stomata reopened, and the OE lines showed a higher length:width ratio than the WT plants (Fig. 5G-H). All our data indicate that the overexpression of GhWRKY68 can enhance drought sensitivity in transgenic tobacco plants at both the seedling and the vegetative growth stages.

[Table 1 (flattened header): Putative cis-acting elements of the promoter of GhWRKY68; columns: Cis-element, Position, Sequence.]

(Fig. 7D). Furthermore, the total chlorophyll content of the OE plants was significantly less than that of the WT plants. Taken together, these results indicated that the overexpression of GhWRKY68 might confer reduced tolerance to salt stress in transgenic plants during seed germination and in the vegetative stage.

GhWRKY68 overexpression negatively regulates ABA signalling in transgenic plants

ABA is an important phytohormone regulating plant development and various stress responses, including osmotic stress responses [39]. The expression of GhWRKY68 increased significantly in response to ABA, indicating that GhWRKY68 is involved in ABA signalling. Thus, the sensitivity of WT and OE plants exposed to ABA was explored. As shown in Fig. 7A-B, OE seeds showed lower germination rates than WT seeds in medium supplemented with various ABA concentrations. The responses to ABA during the post-germination growth stage were also assessed. The seeds of WT and OE plants were germinated on 1/2 MS medium for 2 days and were then transferred to medium supplemented with different ABA concentrations (0, 2 or 5 μM). In the absence of exogenously applied ABA, there was no significant difference in the root growth of WT and OE plants. However, in the OE plants, root growth was significantly inhibited, and the taproot length in the OE plants was less than that in the WT plants upon treatment with different concentrations of ABA (Fig. 7C-D). ABA-mediated stomatal closure plays a key role in osmotic regulation. Thus, the stomata were analysed to investigate whether the overexpression of GhWRKY68 affected the sensitivity of guard cells to ABA treatment. Without ABA treatment, no difference in the stomatal length:width ratio was observed between WT and OE plants. However, the OE plants showed a lower ratio than the WT plants after 20 μM ABA treatment (Fig. 7E-F). Drought and salt stresses can trigger ABA-dependent signalling pathways [40]. The OE plants contained lower levels of ABA than the WT plants before and after drought and salt treatment (Fig. 7G).
To elucidate the possible mechanisms of GhWRKY68-mediated drought and salt sensitivity involving the ABA signalling pathway, we examined the expression of some ABA-responsive genes in the transgenic plants during drought and salt treatments. These genes included NbAREB (ABA-responsive element binding), NbDREB (dehydration-responsive element binding), Nbosmotin, NbNCED (9-cis-epoxycarotenoid dioxygenase), NbERD (early responsive to dehydration), NbLEA (late-embryogenesis-abundant protein), and NbSnRK2.3 (SNF1-related protein kinase 2.3), which are stress-inducible marker genes that function in ABA-dependent and ABA-independent pathways [37,41-44]. Under drought and salt stress, the expression levels of NbDREB, Nbosmotin, NbNCED, NbERD, NbSnRK2.3 and NbLEA in the transgenic plants were reduced compared with their levels in WT plants (Fig. 8B-G); however, NbAREB transcript levels were increased in OE plants compared with WT plants (Fig. 8A).

Methyl viologen (MV) is an herbicide that causes chlorophyll degradation and cell membrane leakage through ROS production [47]. This compound was used to examine the potential role of GhWRKY68 in oxidative stress. As shown in Fig. 9B and 9C, in plants grown on medium containing 2 μM MV, more severe damage and significantly lower cotyledon greening rates occurred in OE plants than in WT plants. These results suggest that the overexpression of GhWRKY68 increases MV-induced oxidative damage during the germination phase.

The molecular mechanism by which GhWRKY68 overexpression decreases oxidative stress tolerance in transgenic plants

To explore the possible mechanisms underlying the decreased tolerance to oxidative stress, several physiological indexes, including antioxidant enzyme activity and the expression of oxidation-related genes, were examined. The basal levels of H2O2, proline and malondialdehyde (MDA) in WT and OE plants were not different under normal conditions (Fig. 10A-C). Under drought and salt stress, the H2O2 and MDA contents were dramatically increased in the OE lines compared to the WT plants. In addition, drought and salt stress markedly increased the proline content in the leaves of the WT and OE plants, but the proline level was higher in WT plants than in the OE plants. Antioxidative systems play crucial roles in regulating the intracellular ROS balance [48]. The potential role of GhWRKY68 in oxidative stress was further evaluated by measuring antioxidant enzymatic activities. Three important antioxidant enzymes, superoxide dismutase (SOD), peroxidase (POD) and catalase (CAT), were monitored before and after treatment (Fig. 10D-F). Under normal growth conditions, the activities of the three antioxidant enzymes were not different in the WT and OE plants. After the drought and salt treatments, the activities of SOD and POD were greatly increased in the WT and OE plants, and the WT plants showed significantly higher SOD and POD activity than the OE plants (Fig. 10D-E). The WT plants displayed a slight increase, and the OE plants a significant decrease, in CAT activity after the drought and salt treatment (Fig. 10F). Furthermore, the transcript levels of several important ROS-related genes were measured by qPCR in WT and OE plants after drought and salt treatment (Fig. 10G-L). The genes chosen included SOD, APX, CAT and GST, which encode ROS-scavenging enzymes, and the respiratory burst oxidase homolog genes (RbohA and RbohB), which encode ROS producers.
After drought and salt treatment, the expression patterns of SOD, APX and CAT were only slightly altered in the OE lines, and the transcript accumulation was lower than in WT plants (Fig. 10G-I). However, the expression of GST was increased in the OE plants, although it remained lower than in WT plants (Fig. 10J). In addition, the expression levels of RbohA and RbohB were markedly elevated in the OE plants and were significantly higher than in the WT plants (Fig. 10K-L).

Discussion

Although many WRKY genes have been studied over the last decade, our understanding of the regulation and action of the gene family is relatively limited [3]. To date, few WRKY genes have been functionally characterised in cotton (Gossypium hirsutum). In the present study, we report, for the first time, a novel group IIc WRKY gene, GhWRKY68, which encodes a nuclear-localised protein that specifically binds to the W-box [TTGAC(C/T)] and transactivates the expression of downstream GUS reporter genes in plant leaves. Our results suggest that overexpression of GhWRKY68 in Nicotiana benthamiana remarkably reduces the plants' tolerance to drought and salt stresses through ABA signalling and the regulation of cellular levels of ROS. Numerous studies have suggested that the regulation of TFs is highly complex, involving transcript and protein levels, DNA binding, subcellular localisation, and other properties controlled through posttranslational mechanisms [3]. The subcellular localisation analysis demonstrated that the GhWRKY68 protein localised to the nucleus (Fig. 2B). Furthermore, the yeast one-hybrid (Fig. 2D) and transient co-expression (Fig. 2F) experiments demonstrated that GhWRKY68 specifically binds to the W-box [TTGAC(C/T)] and functions as a transcriptional activator. These observations suggest that GhWRKY68 may activate the expression of target genes in the nucleus and may participate in various plant processes, forming a network with other genes by binding to the W-box [TTGAC(C/T)] in the promoters of defence-associated genes as well as many WRKY genes [2,3]. Growing evidence has shown that WRKY proteins are involved in plant responses to various abiotic stresses [13-15]. For example, WRKY70 and WRKY54 are negative regulators that modulate osmotic stress tolerance in Arabidopsis [12]. Transcript levels of DgWRKY1 were increased by drought and salt stress in Chrysanthemum [49]. In addition, Wang et al. [50] reported that 15 VvWRKYs are involved in low temperature-related signalling pathways in grapes. We found that GhWRKY68 transcripts can be induced by abiotic stresses (PEG and NaCl) and by multiple defence-related signalling molecules (ABA and H2O2) (Fig. 4). Consistent with these observations, the GUS staining analyses (Fig. 3C) and the responses of drought- and salt-stressed GhWRKY68-overexpressing plants (Figs. 5-6) revealed that GhWRKY68 plays a role in drought, salt, ABA and ROS stress tolerance, and we speculate that the role of GhWRKY68 in the plants' response to drought and salt stress depends on modulating ABA signalling and regulating cellular ROS. ABA is an important phytohormone that inhibits seed germination and seedling growth [51] and mediates plant development and responses to various stresses [39]. Drought and salt stresses can induce ABA accumulation by triggering ABA-dependent signalling pathways [40], and an increased ABA content is beneficial for plants under stress conditions [39].
In our study, seed germination and root growth of GhWRKY68-overexpressing plants were significantly inhibited by exogenous ABA compared with WT plants (Fig. 7A-D). The GhWRKY68 transgenic plants were less sensitive to ABA-induced stomatal closure (Fig. 7E-F). Moreover, the OE lines accumulated less ABA during drought and salt stress compared with the WT plants (Fig. 7G). All these results indicated that GhWRKY68 might confer reduced drought and salt tolerance by negatively regulating the ABA pathways. Consistent with these results, DgWRKY1 was involved in the ABA-dependent signalling pathway under salt stress conditions [49]. Under stress conditions, TFs play important roles by regulating the expression of target genes to enhance plant stress tolerance [3]. For example, ThWRKY4 activates many genes, including ARR15, ATCTH, EPR1 and ARR6, which were previously reported to be involved in stress tolerance [52]. In this study, GhWRKY68 responded to drought and salt stresses by modulating ABA signalling. Like some other WRKYs, GhWRKY68 can also regulate the expression of the ABA-responsive genes AREB, DREB, osmotin, ERD, SnRK2.3, LEA and NCED, which have been reported to function in ABA-dependent or ABA-independent pathways [11,41-44]. In the ABA-dependent pathway, AREB serves as a major ABA-responsive element binding factor that binds to and activates the expression of its target genes. DREB is known to regulate the expression of many stress-inducible genes in the ABA-independent pathways. Osmotin is responsive to ABA and is involved in the adaptation to low water potential [53]. The ERD, SnRK2 and LEA genes are target genes of the AREB or DREB genes [37,41,43,44], and they may contain W-box elements in their promoters and be recognized by interacting WRKY proteins through the formation of a DNA loop to regulate many genetic processes, including transcriptional regulation [3]. The NCED gene encodes 9-cis-epoxycarotenoid dioxygenase, a key enzyme of ABA biosynthesis and a known participant in ABA-mediated responses [54]. As shown in Fig. 8A-G, under drought and salt stresses, NbAREB was up-regulated, but NbDREB, Nbosmotin, NbNCED, NbERD, NbSnRK2.3 and NbLEA were down-regulated in transgenic plants compared with WT plants. In summary, it is likely that GhWRKY68 is involved in drought and salt stress responses through ABA-dependent and ABA-independent signalling pathways. ROS mainly consist of O2− and H2O2 and can be induced in plants by drought and salt stresses [30,46]. The ROS level is critical for abiotic stress tolerance in plants [31]. Overproduction of H2O2 can kill leaf cells and cause leaf necrosis in plants [55]. Proline contributes to osmotic adjustment and protects macromolecules during dehydration, acting as both an osmotic agent and a radical scavenger. The accumulation of proline may participate in scavenging ROS in response to stress [56,57]. MDA is the final decomposition product of lipid peroxidation, and the level of MDA reflects the degree of plant damage [58]. In the present study, under drought and salt stresses, the overexpression of GhWRKY68 enhanced ROS accumulation, reduced the proline content (Fig. 10B) and elevated the MDA content (Fig. 10C). In plants, the most common mechanism for oxidative tolerance is the regulation of ROS-scavenging enzymes [59]. A subsequent analysis revealed that the activities of SOD and POD in the GhWRKY68-overexpressing plants were lower than those in the WT plants during drought and salt stress (Fig. 10D-E).
Furthermore, after drought and salt treatments, the transcript levels of the ROS-related genes SOD, APX, CAT and GST were lower, and the activities of SOD, POD and CAT decreased (Fig. 10G-L). In addition, the expression levels of RbohA and RbohB were significantly higher than in WT plants (Fig. 10K-L).

Over the past several years, a substantial number of WRKYs have been shown to participate in protein-protein interactions, and complex functional interactions have been observed between WRKY proteins and other regulatory proteins (such as MAPKs, VQ proteins, chromatin remodeling proteins, histone deacetylases, 14-3-3 proteins and calmodulin) involved in the modulation of important biological processes [3,4]. For example, Arabidopsis MEKK1 directly interacts with the senescence-related WRKY53 transcription factor at the protein level [60]. In addition, approximately 50% of the VQ proteins interact with the group IIc AtWRKY51 [61]. AtWRKY7 and 10 additional Arabidopsis Group IId WRKY proteins can bind to calmodulin (CaM), which is a Ca2+-binding signalling protein [62]. Moreover, HDAC and histone proteins were recently identified as WRKY-interacting proteins [3]. Mapping the dynamic and complex protein-protein interactions in WRKY-mediated transcription of important target genes is critical to developing a comprehensive understanding of the WRKY signalling and transcriptional regulatory network [3].

In conclusion, GhWRKY68 functions as a transcription factor that responds to drought and salt stress by modulating ABA signalling and the regulation of cellular ROS. The modulation of ABA-responsive genes and the activation of ROS-related antioxidant genes and enzymes were partially correlated. It has been reported that cellular ROS levels are regulated through the ABA-triggered regulation of ROS-producing and ROS-scavenging genes [46], but the mechanisms that control ROS signalling through ABA during drought and salt stress remain unclear. Meanwhile, in combination with stress signals, GhWRKY68 may regulate downstream W-box-containing genes by binding to W-box motifs in the promoters of genes involved in ABA signalling, forming a network with other defence-associated genes. Thus, our findings not only extend knowledge regarding the biological function of the group IIc WRKY proteins but also provide new insights for the further manipulation of crop plants to improve stress tolerance.

Plant materials and treatments

Cotton (Gossypium hirsutum L. cv. lumian 22) seeds were germinated and grown in greenhouse conditions at 25°C with a 16 h light/8 h dark cycle (light intensity of 200 μmol m^-2 s^-1; relative humidity of 60-75%). Seven-day-old cotton seedlings were subjected to different treatments. For the signalling molecule treatments, seedling leaves were sprayed with 100 μM ABA or 10 mM H2O2 as described previously [63]. For the salt and drought treatments, the seedlings were cultured in solutions containing 200 mM NaCl or 15% (w/v) PEG6000. Seedlings without any treatment were used as controls. All the samples were frozen in liquid nitrogen at the appropriate time and stored at -80°C for RNA extraction. Each treatment was repeated at least twice. Arabidopsis thaliana Columbia ecotype (Col-0) and transgenic Arabidopsis seeds were sown on 1/2 MS agar medium in a growth chamber at 22 ± 1°C with a 16/8 h light/dark cycle and a relative humidity of 80%. For the GUS assays, two-week-old transgenic Arabidopsis T3 seedlings were exposed to 100 μM ABA, 10 mM H2O2, 15% (w/v) PEG6000 or 200 mM NaCl.
Additionally, Nicotiana benthamiana seeds were surface-sterilised and germinated on 1/2 MS agar medium under greenhouse conditions. Then, two- or three-leaf-stage seedlings were transplanted into soil and maintained under greenhouse conditions.

RNA extraction, cDNA synthesis and DNA preparation

An improved CTAB-ammonium acetate method was used for total RNA isolation from cotton according to the method described by Zhao et al. [64]. Total RNA was digested with RNase-free DNase I (Promega, USA) according to the manufacturer's recommendations to remove the genomic DNA. Then, the RNA was used for first-strand cDNA synthesis with reverse transcriptase (TransGen Biotech, China) following the manufacturer's protocol. Genomic DNA was isolated from seedling leaves using the CTAB method described by Porebski et al. [65].

Gene isolation, vector construction and genetic transformation

The GhWRKY68 cDNA and genomic sequences were isolated as described previously [34]. All of the primers used in this study are listed in S1 Table. Analyses of the amino acid sequence and the promoter sequence of GhWRKY68 were performed using DNAman version 5.2.2 (Lynnon Biosoft, Quebec, Canada) and PlantCARE. The coding region of the gene was inserted into the plant expression vector pBI121 under the control of the CaMV 35S promoter. The genetic transformations with the recombinant plasmids and the production of the transgenic N. benthamiana plants were accomplished using the procedures of Zhang et al. [66]. The GhWRKY68 promoter fragment was fused to the GUS reporter gene in the pBI121 binary vector to construct the recombinant plasmid ProGhWRKY68::GUS. The transgenic Arabidopsis plants were obtained as described by Shi et al. [67]. The transgenic T3 lines were used for GUS histochemical staining assays to analyse the promoter activity, as described by Baumann et al. [68].

Quantitative real-time PCR

Quantitative real-time PCR (qPCR) was performed using SYBR Premix Ex Taq (TaKaRa, Dalian, China) and the CFX96 Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA). The PCR mix was composed of 10 μl SYBR Premix Ex Taq, 1.6 μl of 1:10 diluted cDNA, 0.4 μl of each primer (10 mM), and 7.6 μl PCR-grade water in a final volume of 20 μl. The reactions were incubated under the following conditions: 1 cycle of 95°C for 30 sec; 40 cycles of 95°C for 5 sec, 55°C for 15 sec, and 72°C for 15 sec; and then a single melt cycle from 65 to 95°C. Each sample was analysed in triplicate, and the expression levels were calculated using the 2^-ΔΔCt comparative CT method [69]. Three independent experiments were performed. The primers used in qPCR are listed in S1 Table.

Subcellular localisation analysis of GhWRKY68

To construct the 35S-GhWRKY68::GFP expression plasmid, the GhWRKY68 coding region without the termination codon was inserted into the binary vector pBI121-GFP, which carries a green fluorescent protein (GFP) gene driven by the Cauliflower mosaic virus (CaMV) 35S promoter. For transient expression, the recombinant plasmid and the positive control 35S-GFP plasmid were transferred into living onion epidermal cells via the biolistic bombardment transformation method as described by Shi et al. [67], using the Biolistic PDS-1000/He system (Bio-Rad, USA) with gold particles (1.0 μl) and a helium pressure of 1,350 psi.
The fluorescence was observed using a confocal laser scanning microscope (LSM 510 META, ZEISS, Germany) after the tissues were stained with 100 μg/ml 4',6-diamidino-2-phenylindole (DAPI) (Solarbio, Beijing, China) in phosphate-buffered saline for 10 min, as described previously [34].

Binding assays using the yeast one-hybrid system

A yeast one-hybrid assay was performed using the Matchmaker Gold Yeast One-Hybrid Library Screening System (Clontech, Palo Alto, CA). According to the manufacturer's protocol, the reporter vector pAbAi containing triple tandem copies of the W-box (TTGACC) was introduced into the yeast strain Y1HGold, forming a W-box-specific reporter strain used as bait. The pGAD-GhWRKY68 yeast expression vector was generated by fusing the ORF of GhWRKY68 to the one-hybrid vector pGADT7 carrying the GAL4 activation domain. Then, pGADT7 and pGAD-GhWRKY68 were transformed into the W-box-specific reporter strain. The cells were plated on SD/-Leu/-Ura medium containing 500 ng/ml AbA to observe yeast growth. The 500 ng/ml AbA completely suppressed the basal expression of the pAbAi-W-box reporter strain in the absence of prey. A mutant W-box (mW-box) (TAGACG) was used as a negative control.

Co-transfection experiments

The effector plasmid (35S::GhWRKY68) was constructed by inserting the GhWRKY68 ORF into the binary vector pBI121, replacing the GUS gene downstream of the CaMV 35S promoter. For the reporter vector (the W-box-35S mini-GUS plasmid), three tandem W-box sequences were fused to the CaMV 35S minimal promoter (W-box-35S mini), which substituted for the CaMV 35S promoter in pBI121GUS (Clontech). The effector and reporter plasmids were introduced into Agrobacterium tumefaciens strain GV3101. The Agrobacterium-mediated transient transformation assay was performed according to the method described by Yang et al. [70].

Analysis of transgenic plants under salt and drought conditions

For the drought treatment, seeds of three independent T3-generation GhWRKY68-OE lines (OE1, OE2 and OE3) and wild-type seeds were surface sterilised and plated on 1/2 MS medium with different concentrations of mannitol (0, 100, or 200 mM), and the germination percentage was measured daily. Additionally, water was completely withheld for 10 days from 8-week-old OE and WT plants sown in soil, and the survival rates (the number of surviving plants relative to the total number of treated plants) were recorded after re-watering for 1 week. For the transpiration water loss assay, fully expanded leaves of OE and WT plants were detached and weighed immediately (fresh weight) with an electronic balance at room temperature, and the changes in fresh weight were recorded at designated times thereafter. The rate of water loss was calculated relative to the initial fresh weight. After the drought treatment, stomatal changes were observed by microscopy, and the ratio of stomatal length to width was recorded. To examine salt tolerance, the seed germination percentage on 1/2 MS medium with different concentrations of NaCl (0, 100, or 200 mM) was measured by the method above. In addition, 8-week-old OE and WT plants were irrigated with 200 mM NaCl solution every day for 1 month and maintained under the same growth conditions as described above to record survival rates. Subsequently, the chlorophyll content was measured as described by Lichtenthaler and Wellburn [71]. The drought and salt stress analyses were repeated at least three times.
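As a small aside, the water-loss measure described above (change in fresh weight relative to the initial fresh weight of the detached leaf) is a one-line calculation. The weights below are fabricated purely to show the arithmetic.

```python
# Water loss (%) at each time point relative to the initial fresh weight.
initial_fw = 1.20                                             # g at t = 0
fresh_weight = {0: 1.20, 1: 1.08, 2: 0.99, 4: 0.87, 6: 0.78}  # g per hour

for hour, fw in fresh_weight.items():
    loss_pct = 100.0 * (initial_fw - fw) / initial_fw
    print(f"{hour} h: {loss_pct:.1f}% water loss")
```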
ABA sensitivity analysis

To examine the response to ABA, the seeds were sown on 1/2 MS medium with different concentrations of ABA (0, 2, or 5 μM). The seed germination percentages and the root lengths were measured. In addition, a stomatal aperture assay was performed essentially as previously described [72,73]. The stomatal apertures from the leaves of OE and WT plants treated with 5 μM ABA for 3 h were observed using a fluorescence microscope (BX51, Olympus). The ratio of stomatal length to width indicated the degree of stomatal closure. For each treatment, at least 50 stomatal apertures were measured. Endogenous ABA was extracted as described previously [74], and the ABA content was measured using an ELISA kit (Fangcheng, Beijing, China) according to the manufacturer's instructions.

Oxidative stress analyses

For the oxidative damage analyses, the seeds were germinated on 1/2 MS medium supplemented with 5 μM methyl viologen (MV), and the cotyledon greening rates were calculated. To detect the accumulation of H2O2 and O2−, a histochemical staining procedure was performed using 3,3'-diaminobenzidine (DAB) according to the method described by Zhang et al. [66,70]. In addition, the H2O2 concentration and MDA content were determined using a hydrogen peroxide test kit and a maleic dialdehyde assay kit (Nanjing Jiancheng Bioengineering Institute), respectively, according to the manufacturer's instructions. The free proline content was monitored as described by Shan et al. [75]. These experiments were repeated at least three times. Enzymes were extracted in phosphate buffer (pH 7.8) and quantified with the BCA Protein Assay Kit (Nanjing Jiancheng Bioengineering Institute). The antioxidant enzyme activities of superoxide dismutase (SOD), peroxidase (POD) and catalase (CAT) were measured with kits produced by the Nanjing Jiancheng Institute.

Statistical analysis

The results are expressed as the mean ± standard deviation (SD) of triplicate experiments (n = 3). Statistical significance was determined by Duncan's multiple range test with an analysis of variance (ANOVA) using the Statistical Analysis System (SAS) version 9.1 (Version 8e, SAS Institute, Cary, NC, USA). Significance was set at P < 0.05.

Supporting Information

S1 Table. Details of the primers used in this study.
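For readers without SAS, the statistical comparison described above can be approximated in open-source tools. The sketch below runs only the one-way ANOVA step with SciPy on invented triplicate values; Duncan's multiple range test is not available in SciPy, so the post-hoc comparison used in the study is not reproduced here.

```python
import numpy as np
from scipy import stats

# Invented triplicates (e.g., a relative enzyme activity) for WT and two OE lines.
wt  = np.array([0.82, 0.79, 0.85])
oe1 = np.array([0.61, 0.58, 0.64])
oe2 = np.array([0.55, 0.59, 0.57])

f_stat, p_value = stats.f_oneway(wt, oe1, oe2)
print(f"WT mean ± SD: {wt.mean():.2f} ± {wt.std(ddof=1):.2f}")
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f} (significant at P < 0.05: {p_value < 0.05})")
```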
2018-04-03T02:13:49.651Z
2015-03-20T00:00:00.000
{ "year": 2015, "sha1": "2c529754dd0c917c4b6e0ae958a809498e8a4be9", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0120646&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2c529754dd0c917c4b6e0ae958a809498e8a4be9", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
234998866
pes2o/s2orc
v3-fos-license
A Simplified Approach for Ocular Rehabilitation - A Case Report

Loss of an eye or any body part has an intimidating and crippling effect on the psychosocial well-being of the patient. Although an artificial prosthesis cannot restore function, it can greatly improve the patient's esthetics and help them regain their psychological confidence. The literature has advocated various rehabilitation modalities, including the empirical use of a stock shell, modification of a stock eye, custom-made ocular prostheses, ocular implants, etc. Among all these techniques, the custom-made ocular prosthesis shows improved adaptation to the tissue bed, distributes pressure uniformly, provides a more esthetic and precise result and is relatively cost-effective. This case report explores a relatively comprehensive method of custom ocular prosthesis fabrication for an ocular defect, with a satisfactory outcome.

Introduction

The eye is considered one of the vital components of the face, which not only helps in vision but also in communication and facial expression. The loss of an eye may be due to irreparable trauma, tumour or congenital defects. 1 Surgical interventions for such conditions include evisceration, enucleation or exenteration. Evisceration is a minimal surgical procedure in which the contents of the globe are removed, leaving the sclera intact. Enucleation is a more invasive procedure in which the entire eyeball is severed from the muscles and optic nerve. Exenteration, the most radical of all, involves en bloc removal of the orbital contents. 2 Loss of an eye leads to disfigurement of the face, which causes physical disability and significant psychological disturbance to the patient. 3 Therefore, the timely provision of an artificial prosthesis improves the patient's esthetics and helps them regain their psychological confidence. Ocular prostheses recommended for the rehabilitation of defects caused by evisceration or enucleation can be a readymade stock shell, a tailored stock eye or a custom-made prosthesis. 1,4 A custom-made ocular prosthesis shows improved adaptation to the tissue bed, distributes pressure uniformly and provides a more esthetic and precise result. 5 Hence, this case report explores a relatively comprehensive method of custom ocular prosthesis fabrication for an ocular defect. It can be considered a feasible and better alternative for contriving an eye prosthesis when reconstruction by plastic surgery or ocular implants is not possible or desired.

Case Presentation

A 56-year-old female reported to the Department of Prosthodontics and Maxillofacial Prosthetics, Peoples' Dental College and Hospital, with the complaint of a missing left eye. The patient gave a history of trauma to the left eye around 50 years back, for which she did not undergo any kind of treatment; consequently, the eye had shrunken, leaving an ocular defect in her left eye (Figure 1). On clinical examination, the intraocular tissue bed was healthy, with adequate depth beneath the upper and lower fornices (Figure 2). Fabrication of a custom-made ocular prosthesis was planned to replace her missing left eye. The entire procedure was explained to the patient and written consent was obtained.

Clinical and Laboratory Procedure

A primary impression of the ocular defect was made with alginate (Zelgan, DentsplyInt) using a 5 ml disposable syringe, and a custom ocular tray was fabricated with self-cure acrylic resin (DPI RR Cold Cure) (Figures 3 and 4).
The tray was connected to a 5 ml disposable syringe to provide a channel for the flow of impression material, and the final impression was made by injecting light-body-consistency polyvinyl siloxane elastomer (Reprosil, DentsplyInt) into the eye socket (Figure 5). The patient was seated erect and asked to move her adjacent eye in all directions to allow the material to flow into the areas of the socket and record the anatomical details precisely. The patient was requested to stare at a distant spot and instructed to hold her gaze in a forward position with eyes open while the impression was being made. After the material had set, the final impression was retrieved from the socket and evaluated for any defects. The impression was invested in alginate, and the alginate mold was partially split after setting to retrieve the impression of the socket (Figure 6). Molten baseplate wax (Modelling wax, DPI) was poured into the mold to fabricate a scleral wax pattern. The wax pattern was then polished, checked for proper fit and adjusted to obtain satisfactory contours of the eyelids (Figure 7). The adjacent working eye was taken as a reference to mark the iris position on the wax pattern. An iris disk approximately 0.5 mm smaller than the actual measurement was selected to compensate for the magnification of the iris by the clear acrylic, which provides a three-dimensional effect. The iris disk was placed in the marked area after scooping out the wax. The wax pattern was then polished, and a trial was done to evaluate its position and gaze (Figure 8). Shade selection against the sclera of the natural eye was done. After dewaxing, the pattern was processed in a two-piece flask using heat-polymerizing acrylic resin (DPI-Heat cure, DPI). The scleral blank thus obtained was tried in, and the supraorbital folds, eyelid margins and iris plane were compared with the contralateral eye. The scleral blank was then painted with acrylic colors (W&N Artists' Acrylic) so as to match the color of the natural eye. The iris portion was colored in a layering fashion to mimic the colored striations of the patient's iris. A dark black spot was painted at the center of the iris to represent the pupil. Further characterization was done by adding red rayon fibers in the scleral region to simulate vasculature (Figure 9). Lastly, the characterized scleral blank was replaced into the flask, and clear heat-cure acrylic resin was packed into the mold space. After acrylization, the final prosthesis was finished and polished to give a high shine and a natural appearance (Figure 10). The outcome of the prosthesis was ascertained from the satisfied look on the patient's face and from follow-up after one day, a week later and every six months. The patient was given proper instructions on the insertion, removal and hygiene of the prosthesis.

Discussion

Rehabilitation of an ocular defect is a challenging task and requires individualized tailoring of the technique for each patient. Studies have suggested various techniques for the fabrication of ocular prostheses, including empirical fitting of a stock eye, adjusting a stock eye and the custom-made eye technique. 1,6 The method of choice is largely governed by the type of defect, operator skill and the availability of material and equipment.
A stock eye prosthesis was largely advocated by Laney and Gardner,7 but it has limitations such as poor fit, constant tissue irritation, accumulation of fluid at the tissue-prosthesis interface, bacterial growth, and a compromised esthetic outcome. Relining a stock eye shell8 can improve the adaptation of the prosthesis to the tissue bed, but the scleral contour and iris position would still be questionable. A customized ocular prosthesis, on the other hand, eliminates the above-mentioned demerits and provides good fit, enhanced esthetics, better eye movement, proper eyelid fullness, accurate scleral contour, and matched iris color and position.9-11 The technique described in this article is a comprehensive and undemanding method tailored for the rehabilitation of an ocular defect. The outcome of the prosthesis could be ascertained by the improved esthetics and ultimate patient satisfaction.

Conclusion
Fabricating a custom ocular prosthesis that closely mimics the adjacent natural eye is a challenging task. Although the prosthesis cannot restore vision, it improves the patient's appearance, mitigates psychological trauma, restores lost confidence, and helps the patient lead a better life.
2021-05-22T00:03:34.689Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "06291e64a1f81c15d3f1eedd68fe101e04b2fb29", "oa_license": "CCBY", "oa_url": "https://www.nepjol.info/index.php/jnprossoc/article/download/36389/28404", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e43788ae020479ce3090280701308fb07c24414f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
242940501
pes2o/s2orc
v3-fos-license
Putting the individual into reliability: Bayesian testing of homogeneous within-person variance in hierarchical models

Measurement reliability is a fundamental concept in psychology. It is traditionally considered a stable property of a questionnaire, measurement device, or experimental task. Although intraclass correlation coefficients (ICCs) are often used to assess reliability in repeated measures designs, their descriptive value depends upon the assumption of a common within-person variance. This work examines the presumption that each individual is adequately described by the average within-person variance in hierarchical models, and thus whether reliability generalizes to the individual level, which leads directly to the notion of individually varying ICCs. In particular, we introduce a novel approach, using the Bayes factor, wherein a researcher can directly test for homogeneous within-person variance in hierarchical models. Additionally, we introduce a membership model that allows for classifying which (and how many) individuals belong to the common variance model. The utility of our methodology is demonstrated on cognitive inhibition tasks. We find that heterogeneous within-person variance is a defining feature of these tasks; in one case, the ratio between the largest and smallest within-person variance exceeded 20, which translates into a tenfold difference in person-specific reliability. We also find that few individuals belong to the common variance model, and thus traditional reliability indices potentially mask important individual variation. We discuss the implications of our findings and possible future directions. The methods are implemented in the R package vICC.

Author note: Research reported in this publication was supported by funding from the National Science Foundation Graduate Research Fellowship to DRW and the National Institute on Aging of the National Institutes of Health under Award Number R01AG050720 to PR. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies. We thank Craig Hedge and Claudia von Bastian for making their data available for reuse. Donald R. Williams, drwwilliams@ucdavis.edu, University of California, Davis, CA, USA.

Introduction
Measurement reliability is an important aspect of repeated measurement designs, which are used extensively in the social-behavioral sciences. Their use spans from longitudinal studies that track individuals over their life span to laboratory settings that can include hundreds of experimental trials for each person. Given that data are repeatedly obtained from the same individuals, they tend to result in non-independent structures, as measurements from the same individual are assumed to be more similar to one another than measurements from different individuals. This is commonly referred to as clustered data, in that units of observation are typically related to one another. These hierarchically structured data naturally lend themselves to assessing reliability by examining the degree of cluster cohesion. Intraclass correlation coefficients (ICCs) are commonly used to assess the level of agreement, or internal consistency, of observations organized into the same cluster (Bartko, 1966; McGraw & Wong, 1996). In repeated measurement designs, individuals are considered to be the cluster, and the repeated measurements are nested within that cluster or person.
In clustered data, the ICC serves as a reliability index, as it quantifies the similarity of the data points within clusters relative to the differences between clusters (Bliese, 2000). As such, an ICC can characterize test-retest and inter-rater reliability (Shrout & Fleiss, 1979; Weir, 2005). It also corresponds to the proportion of total variance accounted for by the clustering (Musca et al., 2011). Another classical example comes from educational settings, where hierarchical data are often gathered from students nested within different schools (Morris, 2008; Theobald, 2018). In this case, the ICC would index the degree of similarity among students that attend the same school. This logic also extends to experimental designs, such as classic laboratory settings (Li, Zeng, Lin, Cazzell, & Liu, 2015; Pleil, Wallace, Stiegel, & Funk, 2018). A reliable experimental manipulation should induce similar responses from the same individual (Rouder, Kumar, & Haaf, 2019). In order to compute the ICC, the different sources of variability need to be decomposed into within- and between-cluster variability (Hedges, Hedberg, & Kuyper, 2012). This can be accomplished either within an ANOVA framework (Shieh, 2012) or, relatedly, within an unconditional hierarchical mixed-effects model with only random intercepts (i.e., "multilevel" models; Snijders & Bosker, 1993). In this work we focus on the latter, because we will extend the classic mixed-effects model to allow the error terms to vary across and within clusters. This marks a drastic departure from the classical ICC literature, which considers reliability to be fixed and non-varying. We present novel Bayesian methodology that allows for testing varying intraclass correlation coefficients at the individual level. The foundation for this methodology is the central idea of capturing individual differences with mixed-effects models. Consider the case of a random-intercepts-only model. There are two sources of variation, between-person and within-person, and reliability is defined as the proportion of total variance due to between-person differences:

ICC(1) = σ²₀ / (σ²₀ + σ²). (1)

This is commonly referred to as ICC(1), and it can also be viewed as a reliability index for single scores that ranges from 0 to 1 (Shieh, 2016). Note that there are several ICC indices (Bartko, 1976), and each allows for asking specific questions about reliability. In this case, because the focus is on individual variation, we only consider ICC(1). We describe straightforward extensions in the discussion section (e.g., average score reliability). In Eq. 1, σ²₀ is the between-person variance and σ² is the within-person variance, respectively. The latter is often referred to as measurement error. In cognitive inhibition tasks, for example, it captures trial-to-trial "noise" in reaction times. Thus, assuming that σ²₀ is held constant, increasing σ² will necessarily decrease reliability (Hedge, Powell, & Sumner, 2018). This definition of the ICC does not allow for the possibility of individual differences in reliability. However, if σ² is allowed to vary between individuals, Eq. 1 immediately becomes the average reliability. Said another way, σ² can be viewed as the average within-person variance, which suggests that it might not generalize to each person. In the tradition of individual differences research, it seems reasonable that the reliability of, say, an educational test or experimental manipulation would not be the same for all people or all situations.
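For readers who want to connect Eq. 1 to software output, the following is a minimal R sketch of our own (not code from the vICC package): it fits a random-intercepts-only model with lme4 and computes ICC(1) from the estimated variance components. The data frame dat and the columns rt and id are hypothetical placeholders.

```r
# Minimal sketch: ICC(1) from a random-intercepts-only ("unconditional") model.
# 'dat', 'rt', and 'id' are hypothetical; lme4 is used purely for illustration.
library(lme4)

fit <- lmer(rt ~ 1 + (1 | id), data = dat)

vc       <- as.data.frame(VarCorr(fit))
sigma2_0 <- vc$vcov[vc$grp == "id"]        # between-person variance
sigma2   <- vc$vcov[vc$grp == "Residual"]  # average within-person variance

icc1 <- sigma2_0 / (sigma2_0 + sigma2)     # Eq. 1: assumes a common within-person variance
icc1
```

Because this model has a single residual variance, the resulting ICC(1) is exactly the common-variance estimate that the remainder of the paper scrutinizes.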
This notion of varying reliability is not new and can be traced back nearly 50 years to a (working) paper entitled, "A Note on Testing for Constant Reliability in Repeated Measurement Studies": This paper discusses the potential usefulness of applying tests for the equality of variances (and covariances) to data from repeated measurement studies prior to estimating reliability components and coefficients ... Prior to actually applying some method of reliability estimation to a body of data from a repeated measurement study, consideration needs to be given to what assumptions are tenable concerning the stability of true and error variances (p.1; Silk, 1978). To the best of our knowledge, this perspective has largely gone unnoticed in the literature. For example, an excellent paper by Koo and Li (2016) provides guidelines for selecting and reporting ICCs but it did not mention the implication of "Mean Squared Within" in an ANOVA framework, which is equivalent to σ 2 1 in Eq. 1. Of course, the ICC is often used descriptively (e.g., Noonan, Fairclough, Knowles, & Boddy, 2017) and assumptions are more important for significance tests (Bartlett & Frost, 2008). However, if there are notable deviations from the average, we argue that the estimate of reliability should account for this variation. This notion has serious implications for social-behavioral scientists: it provides the opportunity for researchers to fully characterize their measures with a fine-tooth comb. For example, a researcher could use the presented methodology to extract certain people or simply quantify how many individuals the traditional ICC is representative of. Additionally, this could show that sometimes heterogeneity in within-person variance is so large, that a researcher may want to explore why that is the case. This work provides a tool-and the insight that in common situations there could be large individual differences in reliability. And now research psychologists can test this possibility. To illustrate the importance of accounting for individual differences in ICCs, we will focus on cognitive inhibition tasks, where they are routinely computed to characterize reliability (Soveri et al., 2018;Strauss, Allen, Jorgensen, & Cramer, 2005;Wöstmann et al., 2013) and to justify subsequent statistical analysis steps (Hedge et al., 2018;Rouder et al., 2019). This literature serves as an excellent testing ground, although the presented methodology can be used for all hierarchically structured or clustered data. A recent debate surrounding the study of individual differences (Gärtner & Strobel, 2019;Hedge et al., 2018;Rouder et al., 2019), and in particular its relation to reliability formed the impetus for this current work. The emerging consensus is that reliability is too low (i.e., "noisy" measures) to adequately study individual variation in executive functioning. However, the discussion has revolved almost exclusively around the mean structure and avoided the within-person variance structure altogether (i.e., σ 2 ). While the former reflects average reaction times, the latter refers to reaction time (in)stability-that is, consistency of executive functions. Indeed, Williams, Rouder, and Rast (2019) recently demonstrated that there were large individual differences in consistently inhibiting irrelevant information (Figure 3 in Williams, Rouder, & Rast, 2019). Although reliability was not considered in that work, those findings imply that there could be individual differences in reliability. This would present a quagmire. 
On the one hand, low reliability is thought to hinder our ability to study individual differences. But on the other hand, individual differences in reliability at the level of within-person variance may be a target for an explanatory model itself. There is an interesting and storied literature on modeling within-person variance in hierarchical models (see references in : Cleveland, Denby, & Liu, 2003). The central idea goes back almost a century-that is, "[The quotidian variation] index may be of significance...since under the same test conditions individuals differ greatly in the degree of instability of behavior..." (p. 246;Woodrow, 1932). In other words, there are likely individual differences in within-person variability-which implies there is individual variation in reliability. These ideas are prominent in research areas that gather intensive longitudinal data (Hamaker, Asparouhov, Brose, Schmiedek, & Muthén, 2018;Hedeker, Mermelstein, & Demirtas, 2012;Rast & Ferrer, 2018;Watts, Walters, Hoffman, & Templin, 2016;Williams, Liu, Martin, & Rast, 2019). Indeed, to our knowledge, the notion of varying ICCs was first described in the context of ecological momentary assessment. In particular, Hedeker, Mermelstein, and Demirtas (2008) briefly described how the variances (e.g., σ 2 0 and σ 2 1 ) could be a function of covariates. This provided the foundation for Brunton-Smith, Sturgis, and Leckie (2017). That work in particular estimated group specific ICCs for interviewers using a hierarchical model (see Figures 2 and 3 in: Brunton-Smith et al., 2017). There are several novel aspects of the present work. We propose a novel testing strategy that is based upon Bayesian model selection. This extends the approach of Brunton-Smith et al. (2017), where it was not possible to gain evidence for the null hypothesis. In our formulation, the null hypothesis can be understood as the common ICC model given in Eq. 1, but tested at the level of the within-person variance. In practical applications, this would allow a researcher to determine whether their estimate of reliability generalizes to each person. Further, another major contribution of this work is providing methodology to classify individuals into a common variance model. The importance of this cannot be understated. That is, we not only introduce methods for characterizing individual differences in reliability and rigorously testing for invariant reliability, but we also provide a model comparison strategy for assessing which (and how many) individuals belong to the ICC in Eq. 1. These are novel contributions. These methods also have serious implications for how we view past estimates of reliability. Namely, if a small proportion of individuals belong to the common ICC model, this would suggest that we have been masking important individual differences in reliability. We have also implemented the methods in the R package vICC. 1 This work is organized as follows. In the first section we provide a motivating example. Our intention here is to demonstrate the need for varying ICCs, in addition to describing key aspects of the proposed model. This serves as the foundation for the remainder of the paper. We then introduce two models. The first tests for invariant withinperson variance, whereas the second tests which (and how many) individuals belong to the common variance model. We then employ the proposed methodology in a series of illustrative examples. We conclude by discussing future directions for psychological applications. 
Motivating example
The presented methodology is based upon a straightforward extension to the traditional mixed-effects approach, which allows for partitioning the unexplained, within-person variance and allowing for the possibility of individual variation. The technique to do so is termed the mixed-effects location scale model (MELSM, pronounced mel·zem; Hedeker et al., 2008, 2012). The location refers to the mean structure (e.g., response time) and the scale refers to the (within-person) variance. The MELSM simultaneously estimates sub-models for both structures (Rast & Ferrer, 2018; Williams & Rast, 2018). In this work, we build upon this foundation and introduce a spike and slab approach for both the random-effects variance and the individual random effects of the within-person variance. To our knowledge, the spike and slab formulation has never been used for the variance structure. As we show below, this opens the door for answering novel research questions about the interplay between reliability and within-person variability in psychology. (The package name vICC stands for varying intraclass correlation coefficients.) First we present a relatively simple example with the goal of clarifying the central idea behind this work. We start with the customary ICC(1) model for single scores (Eq. 1), and then proceed to extend the formulation to accommodate individual differences in within-person variability.

Illustrative data
For the following we use data from a classical inhibition task that investigates the so-called "Stroop effect". These data were first reported in von Bastian, Souza, and Gade (2016). They consist of 121 participants, each of whom completed approximately 90 trials in total. About half of the trials were in the congruent condition, wherein the number of characters matched the displayed numbers (e.g., 22). The remaining trials were in the incongruent condition (e.g., 222). The outcome is reaction time for correctly identifying the number of characters.

Mixed-effects model
For the ith person and jth trial, the one-way random effects model is defined as

yᵢⱼ = β₀ + u₀ᵢ + εᵢⱼ, (2)

where β₀ is the fixed effect and u₀ᵢ the individual deviation. More specifically, β₀ is the average of the individual means and, for, say, the first subject (i = 1), the respective mean response time is β₀ + u₀₁. The variance components are then assumed to follow

u₀ᵢ ~ N(0, σ²₀), εᵢⱼ ~ N(0, σ²). (3)

Here the between-person variance σ²₀ captures the variability in the random effects, var(u₀ᵢ), and the individual deviations from the grand mean are assumed to be normally distributed with a mean of zero. Further, the residuals are also assumed to be normally distributed with a mean of zero and variance σ². This readily allows for computing the ICC defined in Eq. 1 as σ²₀/(σ²₀ + σ²).

Mixed-effects location scale model
An implicit assumption of the standard mixed-effects model (e.g., Eq. 2) is that the residual variance is equal for each individual or group. Conceptually, this can be thought of as fitting i separate intercept-only models, where each provides the respective reaction time mean, but constraining the residual variance to be the same for each model. The MELSM relaxes this assumption, in that each person is permitted to have their own mean and variance, that is,

yᵢⱼ = β₀ + u₀ᵢ + εᵢⱼ, εᵢⱼ ~ N(0, σ²ᵢⱼ), log(σ²ᵢⱼ) = η₀ + u₁ᵢ. (4)

As indicated by the subscripts i and j, the error variance σ²ᵢⱼ is now allowed to vary across the i individuals and j trials via a log-linear model. The parameters in the scale model (the model for the error variance) are analogous to those in Eq. 2.
Here η₀ represents the intercept and defines the average of the individual variances (i.e., σ² in Eq. 3), and u₁ᵢ represents the random effect, that is, the individual departure from the fixed group effect. Again for the first subject (i = 1), η₀ + u₁₁ determines the variability of their respective response time distribution. Note that the exponent is used to ensure that the variance is restricted to positive values, and thus the within-person variance is log-normally distributed (Hedeker et al., 2008). It is also customary to assume that the random effects are drawn from the same multivariate normal distribution, such that

(u₀ᵢ, u₁ᵢ)' ~ N(0, Σ), with Σ = (σ²₀, ρσ₀σ₁; ρσ₀σ₁, σ²₁). (5)

Here σ²₀ is the random effects variance of the location intercepts and σ²₁ is the random effects variance of the scale intercepts. Further, location and scale random effects are allowed to correlate (i.e., ρσ₀σ₁), thereby providing the mean-variance relation (Rouder, Tuerlinckx, Speckman, Lu, & Gomez, 2008; Wagenmakers & Brown, 2007; Williams, Rouder, & Rast, 2019).

Individually varying reliability
Modeling the variance structure leads to individually varying ICCs. This is accomplished with a straightforward extension to Eq. 1, that is,

ICC(1)ᵢ = σ²₀ / (σ²₀ + σ²ᵢ), (6)

where the subscript i denotes the ith individual and σ²ᵢ is that individual's within-person variance. For example, with i = 1, this formulation would provide the person-specific estimate of reliability for the first subject. Further, in Eq. 6, the covariance between two observations from the same individual remains unchanged from the customary definition of ICC(1). In other words, the only modification is that the correlation is now expressed as a function of the individual within-person variance estimates. Of course, if there is not much individual variability in the variance structure (i.e., σ²₁ is small), Eqs. 1 and 6 will produce similar estimates. This is because a mixed-effects model is a special case of the MELSM, with an implicit fixed-intercept-only model fitted to the variance structure. Additionally, due to the hierarchical formulation, these reliability estimates will not be equivalent to solving Eq. 6 with the empirical variances. Indeed, in this model the parameters share information (i.e., partial pooling of information), which can lead to improved parameter estimates due to shrinkage towards the fixed effect average (Efron & Morris, 1977; Stein, 1956). This is a defining feature of hierarchical estimation, and it also applies to location-scale models.

Application
We fitted the MELSM and estimated varying ICCs with the R package vICC. The parameter estimates are displayed in Fig. 1. Panel A includes the individual means. The between-person variance (σ²₀) captures the variability in these estimates. Note that the slowest mean reaction time was 977 ms and the fastest was 519 ms. As a point of reference, this is a 1.88-fold increase from the fastest to the slowest individual. These estimates can also be obtained from a standard mixed-effects model (Eq. 2). Panel B includes the estimates of within-person variability, expressed on the standard deviation (SD) scale. In this case, the least consistent person had an SD of 321 ms, whereas the most consistent had an SD of 94 ms. This is a 3.41-fold increase from the most to the least consistent individual. Expressed as variances, this is an 11-fold difference, which may be problematic when the average (the dotted line) is used to compute reliability (Eq. 1). Panel C includes the varying intraclass correlation coefficients (defined in Eq. 6).
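To make Eqs. 4-6 concrete, here is a small, self-contained R simulation of our own; every numeric value is invented for illustration and none of them are the estimates shown in Fig. 1. Each simulated person receives their own mean through u₀ᵢ and their own within-person variance through u₁ᵢ, and the person-specific ICCs then follow directly from Eq. 6.

```r
# Sketch: simulate from a MELSM-style process and compute varying ICCs (Eq. 6).
# All parameter values are illustrative, not estimates from the paper.
set.seed(1)

n_person <- 121; n_trial <- 90
beta0 <- 0.70                    # average reaction time in seconds (location fixed effect)
eta0  <- log(0.15^2)             # average within-person variance, log scale
tau0  <- 0.10                    # SD of location random effects
tau1  <- 0.40                    # SD of scale random effects

u0 <- rnorm(n_person, 0, tau0)   # person-specific mean deviations
u1 <- rnorm(n_person, 0, tau1)   # person-specific log-variance deviations

sigma2_i <- exp(eta0 + u1)       # person-specific within-person variances (Eq. 4)

dat <- data.frame(
  id = rep(seq_len(n_person), each = n_trial),
  rt = rnorm(n_person * n_trial,
             mean = rep(beta0 + u0, each = n_trial),
             sd   = rep(sqrt(sigma2_i), each = n_trial))
)

# Varying ICCs (Eq. 6) versus the single ICC implied by the average variance (Eq. 1)
icc_i   <- tau0^2 / (tau0^2 + sigma2_i)
icc_avg <- tau0^2 / (tau0^2 + mean(sigma2_i))
round(range(icc_i), 2); round(icc_avg, 2)
```

Comparing range(icc_i) with icc_avg shows how a single average-variance ICC can sit far from many person-specific values, which is precisely the pattern displayed in panel C.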
Before describing these results, it is important to note that ICC(1) provides the lowest score among the different ICC definitions. We refer to Shieh (2016), where it was described how an ICC(1) = 0.20 could exceed 0.80 for average score reliability. The dotted line corresponds to the customary reliability estimate computed with the average within-person variance (ICC = 0.21, 90% CrI = [0.17, 0.25]). However, there were substantial individual differences in reliability. The smallest ICC was 0.08 and the largest was 0.51. In other words, for the classical Stroop task, there was a 6.10-fold increase from the least to the most reliable individual. This corresponds to over a 500% difference in reliability!

Summary
This motivating example provides the foundation for the methodology that follows. The central idea behind modeling individually varying variances was described, and in particular, how this relates to computing reliability from a one-way random effects model. The results demonstrated that there were substantial individual differences in the within-person variance structure (panel B), which necessarily results in individual differences in intraclass correlation coefficients, or reliability. The degree of variation was not small, in that the 90% credible intervals excluded the average ICC for over half of the individuals (≈ 52%) in the sample. We argue this sufficiently motivates the need for investigating varying ICCs in psychological applications. Importantly, the extent of this illustrative example parallels the work of Brunton-Smith et al. (2017). In particular, varying ICCs were computed for interviewers and then visualized in a similar manner as Fig. 1 (panel C). The rest of the paper includes our major and novel contributions. That is, we first describe methodology that tests for invariant within-person variance. This was not possible in Brunton-Smith et al. (2017), where the deviance information criterion (DIC) was used for model comparison (Spiegelhalter, Best, & Carlin, 2014). Our method allows for gaining (relative) evidence for the null hypothesis of invariant within-person variance with the Bayes factor. Further, for the goal of determining which (and how many) individuals belong to the common ICC model, we again focus on the within-person variance, which directly targets the implicit assumption in Eq. 1. This is also based upon Bayesian hypothesis testing with the Bayes factor. At this point, it is important to note that the decision on whether we have a common ICC model, as described in Eq. 1, or a varying ICC model, as described in Eq. 6, is obtained via the random effect u₁ᵢ in the within-subject variance model of Eq. 4. Another, seemingly intuitive, approach would be to use credible intervals computed from the person-specific ICCs (Fig. 1, panel C). However, this approach could only be used for detecting differences from the average ICC with an implicit null hypothesis significance test. Further, given that the varying ICC is a ratio of between-person and total variance, the posterior distribution also includes the uncertainty in the between-person variance. This can result in wider credible intervals. However, in our formulation, because the between-person variance is held constant, it follows that a difference in within-person variance results in a difference in reliability. The question at hand is therefore settled at the level of within-person variance, before reliability is computed.

Fig. 1 This plot motivates the need for individually varying ICCs (axis labels: ascending index; intraclass correlation coefficient(s)). Panels A and B highlight individual variation in the reaction time means and standard deviations. The estimates are random intercepts for the location (mean) and scale (variance) sub-models, respectively. While the former are provided by a customary mixed-effects model (A), the variance structure is there assumed to be fixed and non-varying, that is, each person has the same reaction time standard deviation, which corresponds to the dotted line in B. However, there are substantial individual differences in the scale model (B). This necessarily results in individual differences in reliability, which can be seen in panel C. The dotted line denotes the traditional ICC that assumes a common variance for each person. This masks important individual differences; in fact, there is a sixfold difference from the largest to the smallest ICC! The bars represent 90% CrIs for the hierarchical estimates. Those in either blue or green excluded the average.

Bayesian hypothesis testing
Bayesian hypothesis testing is synonymous with model comparison. In contrast to classical testing (i.e., using p-values), the Bayesian approach provides a measure of relative evidence for which model is most supported by the data at hand. Thus, there must be at least two models under consideration, say M_a and M_b, which can be thought of as competing predictions. The Bayes factor is the ratio of their marginal likelihoods,

BF_ab = p(Y | M_a) / p(Y | M_b). (7)

Note that the prediction task is not for unseen data, as in commonly used information criteria (Vehtari, Gelman, & Gabry, 2017), but instead for the observed data Y (Kass & Raftery, 1995). The Bayes factor is commonly referred to as an updating factor (Rouder, Haaf, & Vandekerckhove, 2018), because it is multiplied by our prior beliefs about the models (i.e., the ratio of prior model probabilities) to obtain the posterior odds. It is common practice to assume equal prior odds, Pr(M_a)/Pr(M_b) = 1, which results in the Bayes factor and the posterior odds being equal to one another. Although this intuitive framework appears to provide a simple approach for comparing models, it turns out that computing the Bayes factor can be quite challenging. It requires computing the marginal likelihood, or normalizing constant. Numerous methods have been proposed to compute this integral, for example Laplace's approximation (Ruli et al., 2016), bridge sampling (Gronau et al., 2017), and Chib's MCMC approximation (Chib, 1995). Further, it is common to use conjugate prior distributions that provide an analytic expression for Eq. 7. This approach is limited to particular classes of models (Rouder & Morey, 2012), which limits its usefulness for location-scale models.

Spike and slab prior distribution
We employ the spike and slab approach for model comparison (George & McCulloch, 1993; Mitchell & Beauchamp, 1988; O'Hara & Sillanpää, 2009). This approach formulates model comparison in terms of a two-component mixture: (1) a "spike" that is concentrated narrowly around zero and (2) a diffuse "slab" component surrounding zero. The former can be understood as the null model, M_0, whereas the latter is the unrestricted model, M_u. Note that we prefer thinking of an unrestricted model and not necessarily a hypothesis (e.g., H_1). Thus, in our formulation, the unconstrained model can be thought of as "not M_0".
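The bookkeeping described above can be written in a few lines of R; this is our own sketch, with simulated 0/1 draws standing in for real MCMC output of the mixture indicator. The posterior model probability is approximated by the proportion of samples spent in each component, and the Bayes factor is the posterior odds divided by the prior odds (Eq. 7).

```r
# Sketch: from spike-and-slab indicator draws to posterior model probabilities
# and a Bayes factor. 'delta' stands in for saved MCMC draws of the indicator.
set.seed(1)
delta <- rbinom(15000, size = 1, prob = 0.25)   # fake draws: 1 = slab, 0 = spike

prior_m0 <- 0.5                                  # equal prior odds
post_m0  <- mean(delta == 0)                     # Pr(M_0 | Y), approximated

posterior_odds <- post_m0 / (1 - post_m0)
prior_odds     <- prior_m0 / (1 - prior_m0)

bf_0u <- posterior_odds / prior_odds             # evidence for the common-variance model
bf_0u                                            # here roughly 3 ("positive" evidence)
```

With equal prior odds the division by prior_odds is a no-op, which is why the Bayes factor and the posterior odds coincide in that case.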
A central aspect of this approach is the addition of a binary indicator, which in essence allows for switching between the two mixture components (i.e., transdimensional MCMC; Heck, Overstall, Gronau, & Wagenmakers, 2018). The proportion of MCMC samples spent in each component can then be used to approximate the respective posterior model probabilities. We refer interested readers to Rouder et al. (2018), that includes an excellent introduction to the spike and slab methodology. Further, O'Hara and Sillanpää (2009) presents an in-depth overview of the various specifications. Our specific application is clarified below. Model formulation These model formulations were inspired by Haaf and Rouder (2018) and, in particular, Wagner and Duller (2012). The former used a spike and slab approach to investigate cognitive inhibition in, for example, the "Stroop effect". In this case, they asked "...the posterior probability that all individuals are in the spike relative to the prior probability that all individuals are in the spike". This was specifically for the priming effect, and they did not consider the variance structure (the focus of this work). On the other hand, Wagner and Duller (2012) considered a spike and slab approach for logistic regression models with a random intercept. This work also focused on the mean structure, and we extend their formulation to model within-person variability. Testing the common variance model The common variance model refers to the implicit assumption of Eq. 1. Namely, that each person has the same (or similar) within-person variance. However, if there are individual differences in within-person variability, then the estimate of reliability should accommodate individual variation (6). The adequacy of a common ICC model can be inferred by testing the random effects variance in Eq. 5 (i.e., σ 2 1 ). That is, if there is evidence for zero variance in the scale intercepts (the spike component), this implies that Eq. 1 adequately describes each individual. The presented applications use reaction time data that includes several repeated measures for each person. Thus, for the ith person and j th trial, the likelihood for each data set is defined as This includes a location β 0i and scale η 0i intercept for each person. We employ the non-centered parameterization for hierarchical models-i.e., Here we are not modeling the intercepts directly, but instead inferring them from a latent variable z μ i . In Eq. 9, β 0 is the fixed effect or average reaction time across individuals and τ μ is the random effects standard deviation. They are each assigned a weakly informative prior distribution, with St + denoting a half Student-t distribution. We then model the scale random effects similarly, but with the addition of τ σ * and the Cholesky decomposition in order to include the correlation among the location and scale random effects, Here η 0 is the fixed effect or average within-person variability. This value is used to compute fixed and nonvarying reliability (1). ρ captures correlation between the random effects, which is the mean-variance relation. We then place a standard normal prior distribution on ρ. This is accomplished by taking the inverse of the Fisher Z transformation (i.e., F −1 ). The key difference from Eq. 9 is the introduction of τ σ * , which is the random effects standard deviation of the scale intercepts. 
This is where the spike and slab prior distribution is introduced-i.e., In this case, τ σ ∼ St + (ν = 10, 0, 1) is the slab component that can be understood as the unrestricted model (M u ). This formulation defines a Dirac spike at zero (i.e., a point mass). It was first introduced in Kuo and Mallick (1998). The key insight is that, for each MCMC iteration, a 0 or 1 is drawn from the Bernoulli distribution with the prior probability of sampling a 1 denoted π. To keep the prior odds at 1, π can be set to 0.5. Hence, this effectively allows for switching between a fixed effect τ σ = 0 (M 0 , i.e., common variance) and the random-effects model τ σ > 0 (M u )-i.e., The posterior model probabilities can then be computed as where S = {1, ..., s} denotes the posterior samples. Consequently, this formulation provides the necessary information for computing the Bayes factor defined in Eq. 7. For example, in the case of equal prior odds, results in the Bayes factor in favor of the spike component or the null hypothesis. We emphasize that this provides relative evidence compared the chosen unrestricted model (the slab), and it will also be influenced by the prior inclusion probability. Importantly, this is essentially variance selection for the within-person variance. As discussed before, zero variance (τ 2(σ ) = 0) is implied by the customary ICC given in Eq. 1. Thus, if there is evidence for M u , then varying ICCs should be computed with Eq. 6. The membership model The above approach focuses exclusively on the random effects variance and asks whether there is evidence for a common within-person variance. This question necessarily implies, "is there evidence for a common ICC or reliability?" that can be computed with the traditional ICC formulation (1). If there is evidence for varying ICCs, an additional question we can ask, relates to classification problems, such as, "which (or how many) individuals belong to the common variance model?" We term this the membership model. The spike and slab approach has been used for computing posterior probabilities of individual random effects. In particular, Frühwirth-Schnatter, Wagner, and Brown (see Table 7; 2012) employed the technique for random intercepts in logistic regression. This work exclusively focused on the mean structure. We extend the general idea and model specification to the variance structure. This is a novel contribution. The model formulation is almost identical to that described above ("Testing the common variance model"). The one change is that the indicator is removed from τ σ and applied to the random effects-i.e., That slab component, or M u , is now comprised of various aspects of this model. For example, the prior distributions for ρ, τ σ , and the latent variable z σ i . Importantly, in reference to Eq. 10, the key difference is that the random effects standard deviation, τ σ , is always included in the model and the target for selection is the random scale effects (i.e., η * 0i in Eq. 15). In this way, the inclusion probability for each individual can be computed, in that, when not included in the model, their estimate is equal to the grand mean. To understand the implied prior distribution, and thus the unrestricted model, we sampled from the prior distributions. This is visualized in Fig. 2, where it was revealed that the slab component resembles a mixture between a normal and Student-t distribution. 
This results in a heavy-tailedness, which is often recommend for the slab component (e.g., Frühwirth-Schnatter et al., 2012;Wagner & Duller, 2012). The key aspects to focus on are the subscript to the indicator (δ i ), which assigns each person a prior inclusion probability, and also the second line of Eq. 15. Recall that δ i will either be 0 or 1. Thus, when a 0 is sampled, the portion after the fixed effect, or the average within-person variance (η 0 ), drops out of the equation. In other words, for that particular MCMC sample, their estimate will then be equivalent to the average (η 0i = η 0 )-i.e., Importantly, since the average within-person variance is used to compute traditional ICCs, it follows that individual i is a member of the common ICC model (1) when δ i = 0. Thus, for each iteration, this specification allows each individual to have their own person-specific estimate or the fixed effect average. Hence, each individual has a posterior probability of membership for belonging to the common variance model. Assuming equal prior odds, for example, this can then be used to compute the corresponding Bayes factor-i.e., We again emphasize that η 0 corresponds to σ 2 1 in Eq. 1-i.e., σ 2 0 /(σ 2 0 +σ 2 1 ). Consequently, as we have argued, this implies membership to the common ICC model. Hypothetical example This section clarifies our spike and slab implementation. First, it is important to note that there are a variety of possible specifications (O'Hara & Sillanpää, 2009). To our knowledge, only a point mass at zero has been used in psychological applications Lu, Chow, & Loken, 2016. However, it is possible to consider a mixture of continuous distributions (Carlin & Chib, 1995;Dellaportas et al., 2000), or described more recently, a hyperparameter formulation for the variances (Ishwaran & Rao, 2005. A simulation study comparing the alternative approaches can be found in Malsiner-Walli and Wagner (2011). For our purposes, we chose the Dirac spike approach for theoretical reasons (exactly zero) and also in reference to the summary provided in O'Hara & Sillanpää (see Table 1: 2009). Namely, the Dirac spike was comparable in terms of computational feasibility and performance, while also providing estimates of exactly zero. For illustrative purposes, we plotted competing models in Fig. 2. Panel A includes M 0 and M u that were described above ("Testing the common variance model"). In particular, these competing models test whether there is a common within-person variance. This is implied when computing ICC(1) (i.e., Eq. 1). The black line represents the spike component (M 0 ), whereas the blue distribution is the slab component (M u ). Panel B includes a hypothetical posterior distribution. In this case, after conditioning on the observed data Y, there would be evidence for the spike P r(M 0 |Y) = 0.75. Assuming equal prior odds, this corresponds to evidence in favor of the null hypothesis of a common within-person variance (BF 0u = 3), which implies that there is (relative) evidence for a common ICC that is captured by the average within-person variance. This inference follows the customary guidelines provided in Kass and Raftery (1995) and Jeffreys (1961). On the other hand, panel C includes an example posterior that would provide evidence for vary within-person variance. Namely, the posterior model probability for the slab component is P r(M u |Y) = 0.90, which corresponds to BF u0 = 9.0 (assuming equal prior odds). 
(Here BF_u0 = Pr(M_u | Y) / Pr(M_0 | Y) = Pr(M_u | Y) / (1 − Pr(M_u | Y)).) Thus, in this hypothetical example, there is evidence for individual differences in within-person variance and, as a result, there is also evidence in favor of computing varying ICCs. This notion also applies to the individual random effects, or the membership model, but in this case the spike component corresponds to the fixed effect average. This is plotted in Fig. 2 (panels C, D). To avoid redundancy, it is further summarized in the caption.

Illustrative examples
We now apply the proposed methodology to two classical inhibition tasks. The data are different from those above ("Motivating example"). In particular, there are fewer people (n = 47) but (substantially) more repeated measurements from the same individual. They were originally collected and used in Hedge et al. (2018), and they were also analyzed in Rouder et al. (2019). Both of these papers raised concerns about the study of individual differences in relation to measurement reliability. They also focused on the mean structure. We use the same data to characterize individual variability in the within-person variance structure, and thus measurement reliability.

Data set 1: flanker task
Rather than reword the study description, we directly quote the original study authors. The task protocol was succinctly described in Hedge et al. (2018): "Participants responded to the direction of a centrally presented arrow (left or right) using the \ and / keys. On each trial, the central arrow (1 cm × 1 cm) was flanked above and below by two other symbols separated by 0.75 cm...Flanking stimuli were arrows pointing in the same direction as the central arrow (congruent condition), straight lines (neutral condition), or arrows pointing in the opposite direction to the central arrow (incongruent condition). Stimuli were presented until a response was given" (p. 1196). We computed the reliability of correct responses for the congruent, incongruent, and neutral conditions in separate models. We followed the protocol described in Haaf and Rouder (2017): reaction times less than 0.2 and greater than 2 s were removed from the data.

Data set 2: Stroop task
Hedge et al. (2018) included several cognitive tasks that are thought to measure the same thing. We chose this task in particular because it most closely paralleled the flanker task; thus we could fit models to the same types of responses. We again directly quote the experimental protocol from Hedge et al. (2018): "Participants responded to the color of a centrally presented word (Arial, font size 70), which could be red (z key), blue (x key), green (n key), or yellow (m key). The word could be the same as the font color (congruent condition), one of four non-color words (lot, ship, cross, advice) taken from Friedman and Miyake (2004) matched for length and frequency (neutral condition), or a color word corresponding to one of the other response options (incongruent). Stimuli were presented until a response was given. Participants completed 240 trials in each condition (720 in total)" (p. 1196). This task included the same number of trials for each condition as the flanker task (i.e., 240). We again analyzed only the correct responses for the congruent, incongruent, and neutral conditions. These data were also cleaned following Haaf and Rouder (2017).
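As a minor practical note, the trial-level cleaning rule quoted above amounts to a one-line filter; this sketch is ours, and the object dat and the columns rt, correct, and condition are hypothetical names rather than the original data's variables.

```r
# Sketch: trial-level cleaning as described (correct trials, 0.2 s < RT < 2 s).
# 'dat', 'rt', 'correct', and 'condition' are hypothetical column names.
dat_clean <- subset(dat, correct == 1 & rt > 0.2 & rt < 2)

# Separate data sets for congruent, neutral, and incongruent trials,
# since reliability is estimated in separate models per condition.
by_condition <- split(dat_clean, dat_clean$condition)
```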
Note that an advantage of JAGS is the ability to fit spike and slab models in particular (see the appendices in: Ntzoufras, 2002;O'Hara & Sillanpää, 2009). For each model, we obtained 20,000 samples from the posterior distribution, from which we discarded the initial burn-in period of 5000 samples. This number of samples provided a good quality of the parameter estimates and stable posterior model probabilities. We restrict our focus to the scale model and also the varying ICCs. The common variance model Before describing these results, first recall that the central focus of this work is the within-person structure. The idea is that, because reliability in repeated measurement studies is computed with the average within-person variance (e.g., mean squared within), it is a natural target for "putting the individual into reliability". That is, if there are large deviations from the average "error", then person-specific, varying ICCs, can be employed to gain further insights into measurement reliability. Figure 3 includes the individual, random effects, for the variance structure. Note that the estimates are reported as reaction time standard deviations, which eases interpretation. Importantly, the dotted line corresponds to the fixed-effect, or the average within-person variability. This estimate would traditionally be used to compute the ICC given in Eq. 1. This implicitly assumes that each person (or group) can be adequately described by the average. However, as revealed in Fig. 3, there are considerable individual differences in within-person variance. As an example, panel A includes the individual estimates for the congruent responses in the flanker task, where there is a fivefold difference from the least (0.05) to most variable individuals (0.25). There are recommendations pertaining to when unequal variances become problematic; for example, a common "rule of thumb" is when the ratio between the largest to smallest variance exceeds 3 or 4. In this case, when expressed on the variance scale, the maximum-minimum ratio exceeded 20! Moreover, the individual, within-person variability estimates, revealed a similar pattern between all three outcomes and both tasks. Namely, there were notable individual differences in the variance structure. This suggests that the inherent variation is not a peculiarity of one data source, task, or response type. This insight was made possible with the presented methodology. The histograms correspond to the random effects standard deviation for the scale intercepts (τ σ ). This captures the spread of the within-person variances, that are assumed to be sampled from the same normal distribution. Further, τ σ was subject to spike and slab model comparison. Here the spike component, or M 0 , corresponds to a fixed effect model (τ σ = 0). This corresponds to the assumption of homogeneous within-person variance. On the other hand, the slab component, M u , corresponds to the unrestricted model that permits heterogeneity in the variance structure. Our intention was originally to compute the Bayes factor, given in Eq. 7, for the competing models. However, for each outcome and task, the probability of the slab component was 1.0. Thus the Bayes factors were all infinite! This can be seen in Fig. 3. The posterior distributions are well separated from zero, which indicates overwhelming (relative) evidence for heterogeneous within-person variances. The membership model The membership model builds upon the common variance model. 
Namely, it allows for determining which (and how many) individuals are adequately described by the average within-person variance, or the mean squared within in an ANOVA framework. This is the implicit assumption of computing Eq. 1, in that this measure of reliability utilizes a common variance. Figure 4 includes these results. We focus on row 1. The varying ICCs can be seen on the x-axis, where the average ICC is denoted with a triangle. This shows the spread of measurement reliability in these data. For example, panel A includes congruent responses for the flanker task. Here the lowest ICC was 0.05 and the highest was 0.55. This corresponds to over a tenfold increase from the least to the most reliable measurements for this outcome and task. Note that the other panels had less variability, but the maximum-minimum ratio always exceeded 3. The y-axis includes the posterior probabilities in favor of belonging to the common variance model, that is, the evidence in the data for each person being accurately described by the average within-person variance. The shaded grey region corresponds to a Bayes factor of 3, which is a point of reference that indicates "positive" evidence for M_0 (Kass & Raftery, 1995). It was revealed that very few people across all outcomes and both tasks belong to the common variance model, whereas roughly half were determined to belong to the slab component. Indeed, for many individuals, the posterior probability of the spike was zero. Said another way, the probability of belonging to the slab component was 1 (an infinite Bayes factor). Figure 4 was conceptualized with a secondary goal of illustrating the central idea behind this model (again row 1). This can be seen by noting both axes in relation to the average ICCs that are denoted with triangles. For example, the highest posterior probabilities are centered directly above the average reliability. This is expected, in that, as we have highlighted throughout this work, the ICC is computed from the average within-person variance. Thus, for those that belong to the common variance model, their respective reliability will be very similar to the fixed and non-varying ICC given in Eq. 1. Further, the posterior probabilities in favor of M_0 gradually became smaller for larger deviations from the average ICC. Said another way, for increasingly larger differences from the average ICC, the posterior probabilities became larger for the slab component, or the unrestricted model M_u (Fig. 2, panel D).

Fig. 3 The points correspond to person-specific within-person variability, expressed on the standard deviation scale. The dotted lines denote the average within-person SD and the bars are 90% CrIs. This reveals substantial individual differences in the scale model for both tasks and all three outcomes, which necessarily results in individual differences in reliability. Importantly, the traditional ICC assumes a common variance for each person, corresponding to the dotted lines; this masks important individual differences. The histograms are the posterior distributions of τ_σ, the random effects SD for the scale model. It captures the spread in individual variability, in that τ_σ = 0 would imply invariant within-person variance. For both tasks and all three outcomes, the posterior probability for the common variance model was zero, which results in an infinite Bayes factor in favor of varying within-person variance. This can be inferred from the histograms: the posterior distributions are well separated from zero.

Fig. 4 (caption, in part) ... corresponds to Pr(M_u | Y) = 1. In row 2, the points are person-specific ICCs, the dotted lines denote the average ICC, and the bars are 90% CrIs. The blue bars and points are individuals that belong to the common variance model; for demonstrative purposes, this was determined with a Bayes factor greater than three. This reveals that few people belong to the common variance model, which is used to compute Eq. 1, and that there are individual differences in reliability, which (perhaps) calls into question traditional reliability indices.

Robustness check
Thus far, we have not discussed a decision rule for the spike and slab approach to model comparison. This is intentional, in that Bayesian inference is focused on the weight of evidence and is thus decoupled from making decisions (Morey, Romeijn, & Rouder, 2016). Further, the most common decision rule does not entail computing a Bayes factor; instead the median probability model is perhaps the most popular choice (Lu et al., 2016; Mohammadi & Wit, 2015). Here, variables are selected with Pr(M_a | Y) > 0.50, although this was originally proposed for the goal of future prediction and it assumed an orthogonal design matrix (Barbieri & Berger, 2004). We refer to Piironen and Vehtari (2017), where violations of this assumption were investigated and compared to the most probable model (among other methods). Regardless of the evidentiary threshold or decision rule, however, it will be influenced by the prior distribution to some degree. This is not a limitation; instead, in our view, it can strengthen claims with counterfactual reasoning. In what follows, we adopt the perspective of trying to persuade a skeptic of the central implication of the results, namely that relatively few people belong to the common variance model. To convince the skeptic, we performed a sensitivity analysis to check the robustness of the results. She was primarily concerned with two sources that could influence the resulting inference: the first is the unconstrained model, M_u, or the slab component, and the second is the prior inclusion probability π. To address these concerns, we varied the assumed prior distributions for the flanker task congruent responses. Recall that the prior distribution for the individual random effects is a scale mixture (Fig. 2, panel D). We thus increased the scale for the prior on τ_σ, ν ∈ {1, 2, 3}, which results in increasingly diffuse priors. This could hinder "jumps" to the slab component (O'Hara & Sillanpää, 2009), and, when assuming a ground truth, this is known to favor the null hypothesis of a common variance (Gu, Hoijtink, & Mulder, 2016). Furthermore, the skeptic had a strong belief in the adequacy of the common variance model. This was expressed as Pr(M_0) = 0.80, although we assumed a range of prior model probabilities. We used a decision rule based on the posterior odds exceeding 3. Figure 5 includes the results. Note that the random effects standard deviation, τ_σ, was robust to all prior specifications we considered, with each resulting in a posterior probability of 1 in favor of varying intercepts, that is, individual differences in the variance structure, for the scale model. Consequently, we restrict our focus to the membership model. Further, because there was essentially no difference between the various scale parameters, we only discuss ν = 1, which was used in the primary analysis.
Panel A shows the proportion of individuals that belong to each mixture component, as a function of the prior probability for the common variance model. This reveals that the classification results were consistent: for example, even with Pr(M_0) = 0.80, the proportion of individuals belonging to M_0 did not exceed 25%, and the majority of individuals belonged to M_u, the slab component, regardless of the prior odds. Panel B shows the posterior probabilities as a function of the prior probabilities. The shaded area corresponds to the critical region. In this case, the probabilities in favor of M_0 gradually decreased until eventually there were zero individuals belonging to the spike component. Further note that with Pr(M_0) = 0.80, which corresponds to a strong prior belief, only one person changed from undecided to the common variance model. Together, this points towards robustness of the results, which ultimately satisfied the skeptic. It also highlights that our membership model works nicely for the goal at hand, in that the various models produced the expected results. For example, in panel A, the largest proportion of individuals belonging to M_0 was observed with the highest prior probability, and the proportion gradually diminished with decreasing prior probabilities. A similar pattern was revealed in panel B. In practical applications, we recommend that, in lieu of strong prior beliefs or a prior distribution that adequately reflects a hypothesis, similar robustness checks be performed. These are implemented in the R package vICC.

Fig. 5 (caption, in part) Importantly, only one person switched from being undecided to the common variance model M_0.

Discussion
In this work, we proposed a novel testing strategy for homogeneous within-person variance in hierarchical models. The primary motivation for developing this methodology was for applications in measurement reliability. We argued that reliability in repeated measurements is often computed without considering the implicit assumption of a common within-person variance, which is typically assumed in ANOVA and hierarchical models, and thus also assumed in traditional formulations for computing intraclass correlation coefficients. Our method for characterizing individual differences specifically targeted reliability at the level of the within-person variance structure. This was accomplished by extending the traditional mixed-effects approach to include a sub-model that permits individual differences in within-person variance. Moreover, Bayesian hypothesis testing, and in particular the spike and slab approach, was used for comparing competing models. On the one hand, our model comparison formulation posited a common (within-person) variance, represented by a spike component. On the other hand, the unrestricted, or varying within-person variance, model was represented by a slab component. This approach allows researchers to assess (relative) evidence for the null hypothesis of a common variance, which is assumed to be representative of each individual when computing traditional measures of reliability. Further, we also introduced the membership model. Here the goal was to explicitly determine which (and how many) individuals belong to the common variance model. The importance of these contributions cannot be overstated. First, a researcher can determine the generalizability of measurement reliability in their repeated measurement studies.
Second, individual differences in within-person variance provide a natural target for improving reliability, for example, by developing methodology to refine the final sample, either by excluding individuals determined to be unreliable or by considering subgroups that share a common variance.

What is sufficient evidence?
The presented approach did not employ a hard-and-fast threshold for determining whether the null hypothesis should be "rejected". For example, although we used the Bayes factor threshold of three as a reference point in Fig. 4, the overall message was that reliability varies, which could be surmised from the posterior probabilities in relation to the individual-level ICCs. In practice, however, it may be desirable to directly make a decision regarding which individuals share a common variance. To this end, there are two strategies. The first is to follow the guidelines provided in Kass and Raftery (p. 777, 1995), which are commonly used in psychology. Here a Bayes factor of three is considered "positive evidence" and will typically be more conservative than a significance level of 0.05. The second approach is to use a posterior probability greater than 0.50 (or a Bayes factor of one), which results in the median probability model (Barbieri & Berger, 2004). This approach can be used if a decision is necessary, given that a Bayes factor threshold of three can result in ambiguous evidence (neither hypothesis supported). Furthermore, in the membership model, there is the issue of multiple comparisons, given that potentially hundreds of tests are being conducted. In a Bayesian framework, this can be remedied by adjusting the prior probabilities, which is straightforward in the spike and slab formulation (e.g., by making π in Eq. 11 smaller). We refer interested readers to Scott and Berger (2010), which provides a full treatment of multiplicity control, and note that our package vICC allows for seamlessly changing the prior inclusion probabilities.

A note on sample size
Because we provide two models, it is worth discussing how the sample size would affect the posterior of each. For the test of a common variance, the random-effects variance is the target, and thus it is ideal to have many individuals (or units). Intuitively, this is because the variance is being estimated from the random effects, which will be less accurate with few subjects. As a result, it will be harder to gather evidence for varying within-person variance, even when the null hypothesis is false. On the other hand, for the membership model, it is advantageous to have many observations from each person. This is because the target is the individual effects, such that more data from each subject will reduce uncertainty, which then translates into more decisive evidence. Together, the target of selection should be considered when deciding how to gather observations. Our illustrative examples indicated that as few as 50 subjects can provide a clear picture of varying reliability, so long as there are many repeated measurements. Going forward, it would be informative to determine how few repeated measurements can be used to fit the proposed models. In our experience, data common to cognitive tasks in particular will be more than sufficient.
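To make this trade-off concrete, the following small sketch (with purely illustrative numbers, not from our data) shows how the number of repeated measurements T drives the precision of a single person's within-person variance estimate, which is what feeds the membership model's evidence:

```python
# Sketch: precision of a person's within-person variance estimate as a function of the
# number of repeated measurements T. For normal data, Var(s^2) = 2*sigma^4 / (T - 1),
# so more trials per person yield sharper person-level estimates. Numbers are hypothetical.
sigma_sq = 0.04  # hypothetical true within-person variance for one individual
for T in (20, 100, 500):
    var_of_s2 = 2 * sigma_sq**2 / (T - 1)
    print(f"T = {T:>3}: SD of the within-person variance estimate ~ {var_of_s2**0.5:.4f}")
```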
Implications
The utility of our method was demonstrated on cognitive inhibition tasks. As we mentioned in the Introduction, this literature is an excellent testing ground for assessing individual differences in within-person variance. Namely, in Rouder et al. (2018) and Hedge et al. (2018), it was argued that reliability was not high enough to adequately study individual differences. However, reliability was considered a fixed and non-varying property of these same tasks. This work demonstrated that there are substantial individual differences in the variance structure, and that reliability can be the target of an explanatory model. Further, we argue that our findings present a challenge to the notion that individual differences studies in these tasks are necessarily "bound to fail". First, there are large individual differences in the variance structure. This has not been considered in this debate, which is unfortunate, because within-person variance could be a key aspect of executive functions such as inhibition. In certain tasks, the "stability of instability" has been shown to have adequate, and in some cases excellent, retest reliability (Fleming, Steiborn, Langner, Scholz, & Westhoff, 2007; Saville et al., 2011). This points towards a possible disconnect between methodological and substantive inquiries, in that, for the latter, intraindividual variation (IIV) is often studied in these same tasks (Duchek, Balota, Tse, Holtzman, Fagan, & Goate, 2009; Fehr, Wiechert, & Erhard, 2014; Kane et al., 2016). Second, and more generally, if a researcher is interested in individual differences, they have to at least approach the individual level. This is not easily accomplished with a traditional mixed-effects model (p. 17 in: Hamaker, 2012). This has been an ongoing debate in longitudinal modeling in particular, but to our knowledge, it has not been considered in these recent debates in cognitive psychology. We refer interested readers to Molenaar (2004) and Hamaker (2012). Third, from our perspective, a satisfactory answer to the question of individual differences in, say, the "Stroop effect," would require addressing the extreme heterogeneity in within-person variance (and thus reliability) that is apparently a defining feature of these tasks. This work not only raised this question, but the presented methodology and the conceptual framework of varying reliability can serve as a guiding light for answering this important question.

An alternative perspective
It would be remiss of us not to offer an alternative perspective. It is customary to view the residuals as mere "noise" and perhaps measurement "error": for example, trial-to-trial fluctuations are treated as a nuisance to understanding the latent process. On the other hand, there is a large literature that views these same fluctuations as a key aspect of the construct. A good example is personality traits, which were customarily considered fixed, but an active area of research now revolves around within-person variability of these traits (i.e., the fluctuations; Fleeson, 2001; Hutteman, Back, Geukes, Küfner, & Nestler, 2016; Williams, Liu, Martin, & Rast, 2019). So rather than there being individual differences in reliability, the alternative perspective is to view these as individual differences in stability. That is, individuals with larger residual variance are relatively more volatile or inconsistent, which, in and of itself, is inferential. In fact, reaction time variability is often studied in substantive applications; for example, it is thought to be a core feature of the ADHD cognitive profile (Borella, De Ribaupierre, Cornoldi, & Chicherio, 2013; Tamm et al., 2012). This is diametrically opposed to classical test theory (CTT), and thus the reliability literature, where measurements are construed as a "true" score plus error.
And note that "individual differences in IIV inherently violate core assumptions of CTT" (p. 3; Estabrook, Grimm, & Bowles, 2012). We think this offers a plausible alternative worth considering: It is quite possible that we insist on unduly expensive measurement accuracy in some situations where we do not need it, because of limitations imposed by the intra-individual variation. At the same time, we may be blissfully unaware of the need for more refined measurement in certain other situations. (p. 159, Henry, 1959a) Limitations The idea behind this work was to put the "individual into reliability". This addresses recent calls in the social-behavioral sciences to place more emphasis at the individual level (Molenaar, 2004). In doing so, we assumed the same functional form for each person. However, completely separating group and individual dynamics is not easily achieved. In our experiences, we have found that the MELSM provides an adequate compromise between aggregation approaches and person-specific models. Further, our approach does not separate within-person variability from measurement error. This is not only an "issue" of this work, but it also applies to computing intraclass correlation coefficients more generally-i.e., "...variations between and within individuals characterize behavior, which may or may not be reliable regardless of measurement error" (Henry, 1959b). This hints at the notion of random vs. systematic error, which are not easily teased apart in mixed-effects models. One thought, assuming that a necessary ingredient of the latter is reproducibility (at minimum), is to compute a naive correlation between response types. We investigated this possibility in the flanker task, and found large correlations between not only the within-person variance but also the person-specific reliabilities. At the individual level, this suggest that there is some degree of systematicity. Future directions The proposed methodology provides a foundation for further quantitative advances. First, it is important to note that we did not directly target reliability, but instead an aspect of reliability. This is by design. There is some literature on testing for differences in ICCs. One strategy is to simply compare Fisher z-transformed correlations (Konishi & Gupta, 1989). These approaches are typically for comparing groups such as countries (Mulder & Fox, 2019) or schools located in different areas (e.g., rural vs. urban; Hedges & Hedberg, 2007). On the other hand, we view our methodology as more foundational. Rather than take reliability as a fixed property, that is, our approach allows for an uncanny attention to detail by explicitly modeling the variance components. The MELSM allows for predicting both the between and within-person variance structures. Thus the present framework allows for probing reliability at the level of both the numerator and denominator of Eq. 1-i.e., σ 2 0 /(σ 2 0 +σ 2 1 ). Second, the testing strategy for within-person variance can seamlessly be extended to all forms of intraclass correlation coefficients. Thus our work provides the necessary ingredients for considering individual differences in reliability more generally. These ideas point towards our future work. Conclusions Measurement reliability has traditionally been considered a stable property of a measurement device or task. This framework does not allow for the possibility of individual variation, because it assumes the residual variance is fixed and non-varying. 
Conclusions
Measurement reliability has traditionally been considered a stable property of a measurement device or task. This framework does not allow for the possibility of individual variation, because it assumes the residual variance is fixed and non-varying. We demonstrated that there can be large individual differences in within-person variance, which necessarily implies the same for reliability. Before computing reliability in hierarchical models, we recommend that researchers first assess whether a common variance is tenable. If it is not, varying intraclass correlation coefficients should be computed to fully capture individual-level variation in reliability.

Open Practices Statement
All data and materials are publicly available. The methods are implemented in the R package vICC.

Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Molecular Characterization of Giardia duodenalis and Cryptosporidium parvum in Fecal Samples of Individuals in Mongolia

The Giardia and Cryptosporidium species are widespread and frequent diarrhea-related parasites affecting humans and other mammalian species. The prevalence of these parasites in Mongolia is currently unknown. Therefore, we performed molecular analyses of G. duodenalis and C. parvum in stool samples from 138 patients hospitalized with diarrhea in Mongolia using nested polymerase chain reaction (PCR). A total of 5 (3.62%) and 7 (5.07%) fecal samples were positive for G. duodenalis and C. parvum, respectively. Giardia duodenalis and C. parvum infections were prevalent in children < 9 years of age. The assemblage-specific fragment patterns for the β-giardin gene of G. duodenalis revealed that all five positive samples belonged to Assemblage A by the PCR-restriction fragment length polymorphism method. Sequencing and phylogenetic analysis of the 18S rDNA and HSP70 genes of all seven positive patients further identified the isolates as the C. parvum bovine genotype. This study is the first to report the prevalence of G. duodenalis and C. parvum and their molecular characterization in fecal samples from individuals with diarrhea in Mongolia.

INTRODUCTION
Giardia and Cryptosporidium, genera of common protozoan parasites that infect domestic and wild animals and humans, generally cause diarrhea. [1][2][3] The Giardia genus is composed of intestinal flagellates that infect a wide range of vertebrate hosts. The Giardia genus currently comprises six species that are distinguished on the basis of the morphology and ultrastructure of their trophozoites. 4,5 Giardia duodenalis, Giardia intestinalis, and Giardia lamblia should be considered as a species complex, with little variation in morphology among them. Recently, genetic analyses using polymerase chain reaction (PCR) have characterized isolates of Giardia directly from feces, allowing the identification of a comprehensive range of genotypes from humans and animals. [6][7][8] The species G. duodenalis has been assigned to eight assemblages, A to H. Assemblages A and B have been identified to infect humans and other mammalian hosts. 9,10 Assemblage C infects only dogs, Assemblage F infects only cats, and Assemblage D infects both dogs and cats. 11 Assemblage E infects cattle, sheep, and goats, and Assemblage G infects rats. Recently, Assemblage H, infecting marine vertebrates, has been reported. 12 Regarding the Cryptosporidium species, 22 valid species have been identified on the basis of differences in oocyst morphology, the site of infection, vertebrate class specificity, and genetic differences. 1 Among the Cryptosporidium species, Cryptosporidium parvum and Cryptosporidium hominis are known to infect cattle, humans, and other mammals. Giardia and Cryptosporidium are shed in feces as oocysts and cysts and can be directly transmitted by the fecal-oral route through contaminated water or food, especially raw vegetables. 13 Clinical giardiasis and cryptosporidiosis accompanied by diarrhea are major public health concerns in developing nations. 14,15 Approximately 200 million people currently have symptomatic giardiasis in Asia, Africa, and Latin America, and approximately 500,000 new cases are reported each year 16 ; in addition, 300,000 persons in the United States are expected to be infected with Cryptosporidium species annually.
17 In addition, the occurrence of Giardia and Cryptosporidium species has been reported in Russia and China. 18,19 In Mongolia, which is located in central Asia and borders Russia to the north and China to the south, many people work in the livestock industry, such as pasturage of cattle, sheep, goats, and horses on the steppes, and in the agriculture industry. Therefore, individuals in Mongolia may be considered to have a naturally high risk of contact with zoonotic parasites. However, no studies to date have specifically examined G. duodenalis and C. parvum infections among individuals with diarrhea in Mongolia. The aim of this study was to perform molecular detection and phylogenetic characterization of G. duodenalis and C. parvum from diarrheal fecal samples of individuals in Mongolia.

MATERIALS AND METHODS
Fecal sample collection and DNA isolation. A total of 138 stool samples from 138 patients with diarrhea admitted to the intestinal ward of the National Center for Communicable Diseases in Mongolia were collected and transported to the Laboratory of Parasitology for diagnosis of parasitic diseases. Each fresh stool sample (5 g) was suspended in 15 mL of phosphate-buffered saline and filtered through four layers of gauze to remove coarse material. The filtrate was then centrifuged at 3,000 rpm for 10 min. The supernatant was eliminated, and the sediment was mixed with 5 mL of phosphate-buffered saline. The pellet underwent repeated boiling (100 °C) and deep freezing (−70 °C) 10 times to break the thick walls of the Cryptosporidium and Giardia cysts. Total genomic DNA was isolated from the pellet using DNAzol (MRC, Cincinnati, OH) and stored at −20 °C until use.

PCR and characterization of G. duodenalis by PCR-restriction fragment length polymorphism (RFLP) assay. The amplification of the β-giardin gene was performed using a nested PCR protocol. In the primary PCR reaction, a 753 base-pair (bp) fragment was amplified using Accure PCR Master Mix (Bioneer, Daejeon, Korea) containing 1 μM of the forward primer Gia7 (5′-AAGCCCGACGACCTCACCCGCAGTGC-3′) and the reverse primer Gia759 (5′-GAGGCCGCCCTGGATCTTCGAGACGAC-3′), as previously described. 20 In the nested PCR reaction, a 511 bp fragment was amplified using the forward primer (5′-GAACGAACGAGATCGAGGTCCG-3′) and the reverse primer (5′-CTCGACGAGCTTCGTGTT-3′). Thermal cycle reactions were set to an initial denaturing step (95 °C for 5 min), 35 cycles of a denaturing step (95 °C for 30 s), an annealing step (55 °C for 30 s), and an extension step (72 °C for 60 s), and a final extension step (72 °C for 7 min). Amplification products were electrophoresed on an automated electrophoresis system (QIAxcel, Hilden, Germany), as previously described. 21 The PCR products were purified using an agarose gel extraction kit (Qiagen, Hilden, Germany) and digested with 10 U/μL of HaeIII (Enzynomics, Daejeon, Korea) in a final volume of 20 μL for 4 h at 37 °C for assemblage analysis, according to previous reports. 22
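For convenience, the β-giardin nested PCR primers and the primary-reaction thermal profile described above can be collected into simple data structures; the following sketch is ours and purely illustrative (the helper and variable names are not from the original protocol), reporting primer length and GC content:

```python
# Sketch: the beta-giardin nested PCR primers and the primary thermal profile from the
# Methods stored as data structures, with a small helper for primer length and GC content.
# The sequences and temperatures are copied from the text; the helper is illustrative only.
PRIMERS = {
    "Gia7_F":   "AAGCCCGACGACCTCACCCGCAGTGC",
    "Gia759_R": "GAGGCCGCCCTGGATCTTCGAGACGAC",
    "nested_F": "GAACGAACGAGATCGAGGTCCG",
    "nested_R": "CTCGACGAGCTTCGTGTT",
}

PRIMARY_PROFILE = {
    "initial_denature": ("95C", "5 min"),
    "cycles": 35,
    "per_cycle": [("denature", "95C", "30 s"), ("anneal", "55C", "30 s"), ("extend", "72C", "60 s")],
    "final_extend": ("72C", "7 min"),
}

def gc_content(seq):
    """Fraction of G/C bases in a primer sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

for name, seq in PRIMERS.items():
    print(f"{name}: {len(seq)} nt, GC = {gc_content(seq):.0%}")
```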
Amplification of the 18S rDNA and heat-shock protein (HSP70) genes of C. parvum. The primers used to amplify a 695 bp fragment of the 18S rDNA gene were the forward primer 18SSF (5′-AGTCATAGTCTTGTCTCAAAGATT-3′) and the reverse primer 18SR3B (5′-TTAACAAATCTAAGAATTTCACC-3′). 23 Thermal cycle reactions were set to an initial denaturing step (96 °C for 2 min), 35 cycles of a denaturing step (94 °C for 30 s), an annealing step (55 °C for 30 s), and an extension step (72 °C for 45 s), and a final extension step (72 °C for 10 min). A nested PCR protocol was used to amplify the HSP70 gene from genomic DNA of selected Cryptosporidium isolates for nucleotide sequencing. 24 For the primary PCR reaction, a 448 bp fragment was amplified using the forward primer HSPF4 (5′-GGTGGTGGTACTTTTGATGTATC-3′) and the reverse primer HSPR4 (5′-GCCTGAACCTTTGGAATACG-3′). Thermal cycle reactions were set to an initial denaturing step (94 °C for 5 min), 40 cycles of a denaturing step (94 °C for 30 s), an annealing step (56 °C for 30 s), and an extension step (72 °C for 30 s), and a final extension step (72 °C for 10 min). For the secondary PCR, a 325 bp fragment was amplified from the primary PCR product using the HSPF3 (5′-GCTGSTGATACTCACTTGGGTGG-3′) and HSPR3 (5′-CTCTTGTCCATACCAGCATCC-3′) primers. The conditions for the secondary PCR were identical to those of the primary PCR. Secondary PCR products were sequenced directly in both directions.

Phylogenetic analysis of the 18S rDNA and HSP70 genes of C. parvum. The PCR products were analyzed by electrophoresis, purified using an agarose gel DNA purification kit (Qiagen), and sequenced with an ABI PRISM 3730xl Analyzer (Applied Biosystems, Foster City, CA). A search for highly similar 18S rDNA gene fragment sequences was performed using nucleotide BLAST (National Center for Biotechnology Information, Bethesda, MD) to confirm the genotype. Cryptosporidium 18S rDNA sequences were obtained from GenBank. Sequence alignment was performed using CLUSTAL W (multiple sequence alignment computer program, Histon, Cambridgeshire, UK). Phylogenetic trees were constructed using the neighbor-joining method 25 with maximum composite likelihood distance correction in the Molecular Evolutionary Genetics Analysis (MEGA) program, 26 with the robustness of groupings assessed using 1,000 bootstrap replicates of the data. 27

RESULTS
Prevalence of G. duodenalis and C. parvum in human fecal samples in Mongolia. The 138 patients comprised 85 children 1-15 years of age (mean age, 3.6 years) and 53 adults 16-74 years of age (mean age, 32.5 years). Of the 138 patients included, 5 (3.62%) and 7 (5.07%) tested positive for G. duodenalis and C. parvum, respectively. Four of the 5 patients with a G. duodenalis infection were ≤ 4 years of age; of the 7 patients with a C. parvum infection, 3 were ≤ 4 years of age and 3 were 5-9 years of age, the remaining patient being older. Our results showed that the positive rates of G. duodenalis and C. parvum in children were higher than those in adults (Table 1).

Identification of G. duodenalis assemblages by PCR-RFLP. The five G. duodenalis-positive samples were confirmed by β-giardin gene amplification by nested PCR (Figure 1A). After digestion with HaeIII, assemblage-specific patterns were obtained, showing fragments of 201, 150, 110, and 50 bp (Figure 1B). All five G. duodenalis-positive samples belonged to Assemblage A.

Identification and phylogenetic analysis of C. parvum. A total of seven fecal samples (sample numbers Mongol-H05, H07, H08, H16, H28, H32, and H39) tested positive for the 18S rDNA and HSP70 genes of C. parvum in the nested PCR. A sequence analysis of these seven samples suggested the presence of C. parvum in all patients, with homologies from 97% to 99% (Table 2).
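Two quick arithmetic checks on the reported values are shown below (a sketch of ours, not part of the original analysis): the HaeIII fragment pattern for Assemblage A sums to the 511-bp nested amplicon, and the positive counts reproduce the reported prevalence percentages.

```python
# Consistency checks on the reported numbers: the HaeIII RFLP fragments for Assemblage A
# should sum to the 511-bp nested beta-giardin amplicon, and 5/138 and 7/138 should
# reproduce the reported prevalence percentages.
haeIII_fragments_bp = [201, 150, 110, 50]
nested_amplicon_bp = 511
assert sum(haeIII_fragments_bp) == nested_amplicon_bp

n_patients = 138
for parasite, positives in [("G. duodenalis", 5), ("C. parvum", 7)]:
    print(f"{parasite}: {positives}/{n_patients} = {100 * positives / n_patients:.2f}%")
```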
Phylogenetic analysis showed that the 18S rDNA gene fragments were of the C. parvum bovine genotype in all patients except Mongol-H32 (Figure 2A). An analysis of the HSP70 gene showed similar results (all patients except Mongol-H39) (Figure 2B).

Table 2. Genotyping of the 18S rDNA and HSP70 genes for each human fecal sample from patients in Mongolia testing positive for Cryptosporidium parvum using nested polymerase chain reaction
Specimen ID   Genotype (18S rDNA)   Genotype (HSP70)
Mongol-H05    C. parvum             C. parvum
Mongol-H07    C. parvum             C. parvum
Mongol-H08    C. parvum             C. parvum
Mongol-H16    C. parvum             C. parvum
Mongol-H28    C. parvum             C. parvum
Mongol-H32    ND                    C. parvum
Mongol-H39    C. parvum             ND
ND = no detection.

Figure 2. The phylogenetic relationships among Cryptosporidium species and genotypes according to the neighbor-joining analysis and the maximum composite likelihood distance correction (implemented using Molecular Evolutionary Genetics Analysis [MEGA]) of (A) a fragment of the partial 18S rDNA sequence and (B) the HSP70 sequence. Sequences of other Cryptosporidium species and genotypes were obtained from GenBank.

DISCUSSION
Giardia and Cryptosporidium are significant worldwide causes of diarrhea and nutritional disorders in humans. In Asia, among patients with diarrhea in a study from the Philippines, the prevalence rates for Giardia and Cryptosporidium species were 2.0% and 1.9%, respectively 28 ; furthermore, the prevalence rates from a study in Malaysia were 0.7% for Giardia species and 0.3% for Cryptosporidium species. 29 In our study, the percentages of patients with diarrhea infected with G. duodenalis and C. parvum in Mongolia were higher than the above rates from the Philippines and Malaysia. Most outbreaks of human giardiasis in developing countries have mainly been detected in children ≤ 2 years of age. 30 Furthermore, it has been reported that cryptosporidiosis generally affects children ≤ 4 years of age. 31 In our data, children were more frequently infected than adults, a finding similar to those from studies in other countries. In previous reports, the high prevalence of giardiasis and cryptosporidiosis in young children has been attributed to a lack of immunity and to the ease with which children are exposed to contaminated water while playing in it. 31 In addition, Faubert reported that numerous factors contribute to infection with Giardia species, including the number of cysts ingested, the age of the host, the virulence of the Giardia strain, and the state of the immune system at the time of infection. 32 Interestingly, in the current study, there was only one case of an adult infected with G. duodenalis, whereas the rest were all children. The reason for the high prevalence in children is unclear because we did not acquire any information on the patients other than that they had diarrhea. Almost all of the infected children were living in gers or houses located on the steppe, equipped with indoor latrines and without tap water. The poor hygiene conditions of the steppe, such as the low quality of water, poor cleanliness of containers for transporting water, and poor hand-washing facilities, should be considered as contributing factors to infection with various pathogens, and these may be critical causes of infections. Further surveys for the detection of pathogens and of transmission through contamination of water in poor environmental conditions should be performed. Additionally, Shigella flexneri (N = 2) and Salmonella enteritidis (N = 1) were also detected in 3 of the 7 cases of C. parvum, and S.
flexneri was also detected in 2 of the 5 cases of G. duodenalis (data not shown). These findings indicate the existence of mixed bacterial and parasitic infections in patients with diarrhea in Mongolia. To understand the epidemiologic characteristics of these infections and to implement control measures, it is important to determine whether G. duodenalis and C. parvum can infect humans through a zoonotic route. Therefore, further epidemiologic studies examining the risk factors for infection with these protozoa in individuals with diarrhea should be carried out in the near future to improve public health. Recently, molecular epidemiologic studies using Giardia DNA extracted directly from feces have been performed, and several PCR assays have been developed for this purpose. 20,33 In this study, we successfully performed a molecular analysis of the β-giardin gene and a pattern analysis using a PCR-RFLP assay with HaeIII on Giardia DNA from fecal samples. An investigation of human isolates from stool samples in diverse geographic areas established that only G. duodenalis Assemblages A and B are related to almost all human infections. 34 For example, the occurrence of Assemblages A and B of G. duodenalis has been reported in Thailand, China, and the Philippines. 19,35,36 In this study population, only Assemblage A was identified, and this result is similar to those of previous studies from Korea, Japan, Egypt, and Brazil. [37][38][39][40] Cryptosporidium species are classified on the basis of differences in oocyst morphology, sites of infection, vertebrate class, and genetic differences; such classifications of Cryptosporidium species include C. parvum (a parasite of humans, cattle, and other mammals), C. hominis (a parasite of humans), and Cryptosporidium felis (a parasite of cats). 15,41 In particular, Morgan and others 42 reported that the C. parvum bovine genotype and C. hominis are responsible for the majority of human infections. Our results showed that the Cryptosporidium DNA isolated from diarrheal fecal samples belonged to the bovine genotype according to phylogenetic analysis. Our results are meaningful because they reveal that zoonotic parasitic infection cycles from cattle to humans may be possible in Mongolia. In this study, G. duodenalis and C. parvum genes from human diarrheal fecal samples of patients from Mongolia were identified by molecular analysis. In particular, G. duodenalis was classified as a zoonotic pathogen belonging to Assemblage A, and the C. parvum bovine genotype was identified through phylogenetic analysis. From our results, we suggest that C. parvum could emerge as an important human pathogen through contact between humans and animals in Mongolia. Further epidemiological studies of humans and animals in different areas and/or larger populations in Mongolia are needed to better characterize the transmission of giardiasis and cryptosporidiosis in humans. To our knowledge, this is the first study to report the prevalence and genetic identification of G. duodenalis and C. parvum in Mongolia, and it may contribute to the understanding of the epidemiologic characteristics and improve the preventive control of both parasites in Mongolia.

Financial support: This work was performed as collaborative research between the Korea CDC and the Mongolia NCCD and was supported by funding (4847-302-210-13, 2012) from the Korea National Institute of Health, Korea Centers for Disease Control and Prevention.

Address correspondence to Sang-Eun Lee, Division of Malaria and Parasite Diseases, Korea National Institute of Health, Korea Centers for Disease Control and Prevention, 187 Osongsaengmyeong2-ro, Osong-up, Cheongwon-gun, Chungbuk 363-951, Korea (e-mail: ondalgl@korea.kr).
Scalable Auction Algorithms for Bipartite Maximum Matching Problems

In this paper, we give new auction algorithms for maximum weighted bipartite matching (MWM) and maximum cardinality bipartite $b$-matching (MCbM). Our algorithms run in $O\left(\log n/\varepsilon^8\right)$ and $O\left(\log n/\varepsilon^2\right)$ rounds, respectively, in the blackboard distributed setting. We show that our MWM and MCbM algorithms can be implemented in the distributed, interactive setting using $O(\log^2 n)$ and $O(\log n)$ bit messages, respectively, directly answering the open question posed by Demange, Gale and Sotomayor [DNO14]. Furthermore, we implement our algorithms in a variety of other models including the semi-streaming model, the shared-memory work-depth model, and the massively parallel computation model. Our semi-streaming MWM algorithm uses $O(1/\varepsilon^8)$ passes in $O(n \log n \cdot \log(1/\varepsilon))$ space and our MCbM algorithm runs in $O(1/\varepsilon^2)$ passes using $O\left(\left(\sum_{i \in L} b_i + |R|\right)\log(1/\varepsilon)\right)$ space (where parameters $b_i$ represent the degree constraints on the $b$-matching and $L$ and $R$ represent the left and right side of the bipartite graph, respectively). Both of these algorithms improve exponentially the dependence on $\varepsilon$ in the space complexity in the semi-streaming model against the best-known algorithms for these problems, in addition to improvements in round complexity for MCbM. Finally, our algorithms eliminate the large polylogarithmic dependence on $n$ in depth and number of rounds in the work-depth and massively parallel computation models, respectively, improving on previous results which have large polylogarithmic dependence on $n$ (and exponential dependence on $\varepsilon$ in the MPC model).

Introduction
One of the most basic problems in combinatorial optimization is that of bipartite matching. This central problem has been studied extensively in many fields including operations research, economics, and computer science, and it is the cornerstone of many algorithm design courses and books. There is an abundance of existing classical and recent theoretical work on this topic [Kö16, Edm65a, Edm65b, Har06, HK71, LS20, Mad13, MV80, ALT21, DNO14, MS04]. Bipartite maximum matching and its variants are commonly taught in undergraduate algorithms courses and are so prominent as to be featured regularly in competitive programming contests. In both of these settings, the main algorithmic solutions for maximum cardinality matching (MCM) and its closely related problem of maximum weight matching (MWM) are the Hungarian method using augmenting paths and reductions to maximum flow. Although foundational, such approaches are sometimes

Finally, our algorithm can be implemented in the MPC model using O(log log n/ε^7) rounds, O(n · log_{1/ε}(n)) space per machine, and O((n + m) log(1/ε) log n/ε) total space. The best-known algorithms in the semi-streaming model for the maximum weighted bipartite matching problem are the (1/ε)^{O(1/ε^2)}-pass, O(n · poly(log n) · poly(1/ε))-space algorithm of Gamlath et al. [GKMS19] and the O(log(1/ε)/ε^2)-pass, O(n log n/ε^2)-space algorithm of Ahn and Guha [AG11]. To the best of our knowledge, our result is the first to achieve sub-polynomial dependence on 1/ε in the space for the MWM problem in the semi-streaming model. Thus, we improve the space bound exponentially compared to the previously best-known algorithms in the streaming model.
The best-known algorithms in the distributed and work-depth models required poly(log n) in the number of rounds and depth, respectively [HS22] (for a large constant c > 20 in the exponent); in the MPC setting, the best previously known algorithms have exponential dependence on ε [GKMS19]. We eliminate such dependencies in our paper and our algorithm is also simpler. A summary of previous results and our results can be found in Table 1.

space algorithm of Ahn and Guha [AG11]. In the general, non-bipartite setting (a harder setting than what we consider), a very recent (1 − ε)-approximation algorithm of Ghaffari, Grunau, and Mitrović [GGM22] runs in exp(2^{O(1/ε)}) passes and O(Σ_{i∈L∪R} b_i + poly(1/ε)) space. Here, we also improve the space exponentially in 1/ε and, in addition, improve the number of passes by an O(log n) factor. More details comparing our results to other related works are given in Section 1.1 and Table 1.

Concurrent, Independent Work
In concurrent, independent work, Zheng and Henzinger [ZH23] study the maximum weighted matching problem in the sequential and dynamic settings using auction-based algorithms. Their simple and elegant algorithm makes use of a sorted list of items (by utility) for each bidder and then matches the bidders one by one individually (in round-robin order) to their highest utility item. They also extend their algorithm to give dynamic results. Due to the sequential nature of their matching procedure, they do not provide any results in scalable models such as the streaming, MPC, parallel, or distributed models.

Other Related Works
There has been no shortage of work done on bipartite matching. In addition to the works we discussed in the introduction, there have been a number of other relevant works in this general area of research. Here we discuss the additional works not discussed in Section 1. These include a plethora of results for (1 − ε)-approximate maximum cardinality matching as well as some additional results for MWM and b-matching. Most of these works use various methods to find augmenting paths, with only a few works focusing on auction-based techniques. We hope that our paper further demonstrates the utility of auction-based approaches as a type of "universal" solution across scalable models and will lead to additional works in this area in the future. Although our work focuses on the bipartite matching problem, we also provide the best-known bounds for the matching problem on general graphs here, although this is a harder problem than our setting. We separate these results into the bipartite matching results, the general matching results, and lower bounds.

MPC, MWM: O_ε(log log n) rounds, O_ε(n · poly(log n)) space p.m.
Table 1: We assume the ratio between the largest weight edge and smallest weight edge in the graph is poly(n). Results for general graphs are labeled with (general); results that are specifically for bipartite graphs do not have a label. Upper bounds are given in terms of O(·) and lower bounds are given in terms of Ω(·). "Space p.m." stands for space per machine. The complexity measure for the "blackboard distributed" setting is the total communication (over all rounds and players) in bits. poly(log n, ε) for the results indicated by * hides large constant factors in the exponents, specifically constants c > 20. Our results often exhibit a tradeoff of one complexity measure with another in our various models.

a semi-streaming algorithm in optimal O(n) space and O(log n · log(1/ε)/ε) passes.
They also provide a MWM algorithm that also runs in O(n) space but requires Ω(n/ε) passes. Please refer to these papers and references therein for older results in this area. Ahn and Guha [AG18] also considered the general weighted non-bipartite maximum matching problem in the semi-streaming model and utilized linear programming approaches for computing a (2/3 − ε)-approximation and a (1 − ε)-approximation that uses O(log(1/ε)/ε^2) passes, O(n · log(1/ε)

Bipartite Matching
Ahn and Guha [AG18] also extended their results to the bipartite MWM and b-matching settings with small changes. Specifically, in the MWM setting, they give an O(log(1/ε)/ε^2)-pass, O(n · ((log(1/ε))/ε^2 + (log n/ε)/ε))-space algorithm.

Lower Bounds
Specifically, they show that any non-trivial graph problem on n vertices requires Ω(n) bits [KRZ21] in communication complexity. In a similar model called the demand query model, Nisan [Nis21] showed that any deterministic algorithm that runs in n^{o(1)} rounds, where in each round at most n^{1.99} demand queries are made, cannot find a MCM within an n^{o(1)} factor of the optimum. This is in contrast to randomized algorithms, which can make such an approximation using only O(log n) rounds. For streaming matching algorithms, Assadi [Ass22] provided a conditional lower bound ruling out the possibility of small constant-factor approximations for two-pass streaming algorithms that solve the MCM problem. Such a lower bound also necessarily extends to MWM and MCbM. Goel et al. [GKK] provided an n^{1+Ω(1/log log n)} lower bound for the one-round message complexity of bipartite (2/3 + ε)-approximate MCM (this also naturally extends to a space lower bound). For older papers on these lower bounds, please refer to the references cited within each of the aforementioned papers. Finally, Assadi et al. [AKSY20] showed that any streaming algorithm that approximates MCM requires either n^{Ω(1)} space or Ω(log(1/ε)) passes.

Unweighted to Weighted Matching Transformations
Current transformations from unweighted to weighted matchings all either:
• lose a factor of 2 in the approximation factor [GP13, SW17], or
• increase the running time of the algorithm by an exponential factor in terms of 1/ε, specifically, a factor of ε^{−O(1/ε)} [BDL21].
Thus, we cannot use such default transformations from unweighted matchings to weighted matchings in our setting, since all of the complexity measures in this paper have only polynomial dependence on ε and all guarantee (1 − ε)-approximate matchings. However, we do make use of weighted to weighted matching transformations, provided our original weighted matching algorithms have only polylogarithmic dependence on the maximum ratio between edge weights in the graph. Such transformations from weighted to weighted matchings do not increase the approximation factor and also allow us to eliminate the polylogarithmic dependence on the maximum ratio of edge weights.

Preliminaries
This paper presents algorithms for bipartite matching under various settings. The input consists of a bipartite graph G = (L ∪ R, E). We denote the set of neighbors of any i ∈ L, j ∈ R by N(i), N(j), respectively. We present (1 − ε)-approximation algorithms where ε ∈ (0, 1) is our approximation parameter. All notation used in our algorithms in this paper is given in Table 2. The specified weight of an edge (i, j) will become the valuation of the bidder i for item j.
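As a minimal illustration of this setup (our own representation, not code from the paper), a bipartite instance can be stored as valuations v_i(j) on edges, from which the neighbor sets N(i) and N(j) follow:

```python
# Sketch of the Preliminaries notation: a bipartite graph G = (L ∪ R, E) with bidders L,
# items R, and edge weights serving as valuations v_i(j). Neighbor sets N(·) are derived
# from the edges. This representation is ours, used only to illustrate the notation.
from collections import defaultdict

L = ["i1", "i2"]
R = ["j1", "j2", "j3"]
v = {("i1", "j1"): 3.0, ("i1", "j2"): 1.0, ("i2", "j2"): 2.0, ("i2", "j3"): 5.0}  # v_i(j)

N = defaultdict(set)
for (i, j) in v:
    N[i].add(j)   # N(i): items adjacent to bidder i
    N[j].add(i)   # N(j): bidders adjacent to item j

print("bidders:", L, "items:", R)
print(dict(N))
```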
Scalable Model Definitions
In addition, we consider a number of scalable models in our paper, including the blackboard distributed model, the semi-streaming model, the massively parallel computation (MPC) model, and the parallel shared-memory work-depth model.

Blackboard distributed model. We use the blackboard distributed model as defined in [DNO14]. There are n players, one for each vertex of the left side of our bipartite graph (we assume wlog that the left side of the graph contains fewer vertices). The players engage in a fixed communication protocol using messages sent to a central coordinator. In other words, players write on a common "blackboard." Players communicate using rounds of communication, where in each round each player sends a message (of some number of bits) to the central coordinator. Then, each player can receive a (not necessarily identical) message in each round from the coordinator. In every round, players choose to send messages depending solely on the contents of the blackboard and their private information. Termination of the algorithm and the final matching are determined by the central coordinator and the contents of the blackboard. The measures of complexity are the number of rounds of the algorithm and the size of the message sent by each player in each round. One can also measure the total number of bits sent by all messages by multiplying these two quantities.

Semi-streaming model. In this paper, we use the semi-streaming model [FKM + 05] with arbitrarily ordered edge insertions. Edges are arbitrarily (potentially adversarially) ordered in the stream. For this paper, we only consider insertion-only streams. The space usage for semi-streaming algorithms is bounded by O(n).

Table 2: Notation used in our algorithms.
ε: approximation parameter
L, R: bidders, items, resp.; wlog |L| ≤ |R|
i, j, i′, j′: i ∈ L, j ∈ R, i′ ∈ L′, j′ ∈ R′; i′ (resp. j′) indicates a copy of i (resp. j)
p_j: current price of item j
D_i: demand set of bidder i
(i, a_i): bidder i ∈ L and currently matched item a_i
o_i: the item matched to bidder i in OPT
u_i: the utility of bidder i, which is calculated by 1 − p_{a_i}
v_i(j): the valuation of bidder i for item j, i.e., the weight of edge (i, j)
C_i, C_j: copies of bidder i ∈ L, copies of item j ∈ R, resp.
W: ratio of the maximum weighted edge over the minimum weighted edge
M_max: matching with largest cardinality produced

Shared-memory work-depth model. The work-depth model is a parallel model where different processors can process instructions in parallel and read and write from the same shared memory. The relevant complexity measures for an algorithm in this model are the work, which is the total amount of computation performed by the algorithm, and the depth, which is the longest chain of sequential dependencies in the algorithm.
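A minimal sketch of the round structure in the blackboard distributed model described above (the helper names and the toy task are ours; this is not the paper's protocol): each player's message may depend only on the public blackboard and its private input, and the coordinator decides termination from the blackboard.

```python
# Sketch of the blackboard round structure: in each round every player produces a message
# from the public blackboard and its private input, the messages become public, and the
# coordinator decides whether to stop. All names and the toy task are illustrative.
def run_blackboard_protocol(private_inputs, player_fn, coordinator_fn, max_rounds):
    blackboard = []                                   # shared, publicly visible state
    for rnd in range(max_rounds):
        messages = [player_fn(rnd, blackboard, x) for x in private_inputs]
        blackboard.append(messages)                   # all messages become public
        done, output = coordinator_fn(rnd, blackboard)
        if done:
            return output
    return coordinator_fn(max_rounds, blackboard)[1]

# Toy usage: players announce their private number; the coordinator stops after one round
# and reports the maximum (this only exercises the round structure, not a matching protocol).
out = run_blackboard_protocol([3, 7, 5],
                              player_fn=lambda r, bb, x: x,
                              coordinator_fn=lambda r, bb: (True, max(bb[-1])),
                              max_rounds=10)
print(out)  # 7
```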
An Auction Algorithm for (1 − ε)-Approximate Maximum Weighted Bipartite Matching
We present the following auction algorithm for maximum weighted bipartite matching (MWM), which is a generalization of the simple and elegant algorithm of Assadi et al. [ALT21] (Appendix A) to the weighted setting. Our generalization requires several novel proof techniques and recovers the round guarantee of Assadi et al. [ALT21] in the maximum cardinality matching setting when the weights of all edges are 1. Furthermore, we answer an open question posed by Dobzinski et al. [DNO14] by developing a (1 − ε)-approximation auction algorithm for maximum weighted bipartite matching, for which no prior algorithms are known. Throughout this section, we denote the maximum ratio between two edge weights in the graph by W.

Our algorithm can also be easily extended into algorithms in various scalable models:
• a semi-streaming algorithm which uses O(n · log n · log(1/ε)) space (n is the number of vertices in the bipartite graph) and which requires O(1/ε^8) passes,
• a shared-memory parallel algorithm using O(m · log(n)/ε^7) work and O(log^3(n)/ε^7) depth, and
• an MPC algorithm using O(log log n/ε^7) rounds, O(n · log_{1/ε}(n)) space per machine, and O((n + m) log(1/ε) log n/ε) total space.
In contrast, the best-known semi-streaming MWM algorithm of Ahn and Guha [AG11] requires O(log(1/ε)/ε^2) passes and O(n log n/ε^2) space. Our paper shows an O(1/ε^8)-pass algorithm that instead uses O(n · log n · log(1/ε)) space. Since ε = Ω(1/n) (or otherwise we obtain an exact maximum weight matching), our algorithm works in the semi-streaming model for all possible values of ε, whereas the algorithm of Ahn and Guha [AG11] no longer works in semi-streaming when ε is small enough. Our algorithm follows the general framework given in Appendix A. However, both our algorithm and our analysis require additional techniques. The main hurdle we must overcome is the fact that the weights may be much larger than the number of bidders and items. In that case, if we use the MCM algorithm trivially in this setting, where we increase the prices until they reach the maximum weight, the number of rounds can be very large, proportional to w_max/ε^2 where w_max is the maximum weight of any edge. We avoid this problem in our algorithm, instead obtaining only poly(log(n)) and ε dependence in the number of rounds. Our main result in this section is the following (recall from Section 1).

Notation
The input bipartite graph is represented by G = (L ∪ R, E), where L is the set of bidders and R is the set of items. Let N(v) denote the neighbors of node v ∈ L ∪ R. We use the notation i ∈ L to denote bidders and j ∈ R to denote items. For a bidder i ∈ L, the valuation of i for items in R is defined as the function v_i : R → Z_{≥0}, which outputs a non-negative integer. If v_i(j) > 0 for any j ∈ R, then j ∈ N(i). Each bidder can match to at most one item. We denote the bidder item pair by (i, a_i), where a_i is the matched item and a_i = ⊥ if i is not matched to any item. For any agent i where a_i ≠ ⊥, the utility of bidder i given its matched item a_i is u_i = v_i(a_i) − p_{a_i}, where p_{a_i} is the current price of item a_i. For an agent i where a_i = ⊥, the utility of agent i is 0. We denote an optimum matching by OPT. We use the notation i ∈ OPT to denote a bidder who is matched in OPT and o_i to denote the item matched to bidder i in OPT.

Input Specifications
In this section, we assume all weights are poly(n) where n = |L| + |R|. We additionally assume the following characteristics about our inputs, because we can perform a simple preprocessing of our graph to satisfy these specifications. Provided an input graph G = (L ∪ R, E) with weights v_i(j) for every edge (i, j) ∈ E, we find the maximum weight among all the weights of the edges, w_max = max_{(i,j)∈E} v_i(j). We rescale the weights of all the edges by 1/w_max and remove all edges with rescaled weight < ε^{⌈log_{1/ε}(min(m, W))⌉+1}. This bound of ε^{⌈log_{1/ε}(min(m, W))⌉+1} is crucial in our analysis. In other words, we create a new graph G′ = (L ∪ R, E′) with the same set of bidders L and items R. We associate the new weight functions v′_i with each bidder i ∈ L, where (i, j) ∈ E′ if v_i(j) ≥ w_max · ε^{⌈log_{1/ε}(min(m, W))⌉+1} and v′_i(j) = v_i(j)/w_max for each (i, j) ∈ E′. Provided that finding the maximum weight edge can be done in O(1) rounds in the blackboard distributed and MPC models, O(1) passes in the streaming model, and O(n + m) work and O(log n) depth in the parallel model, we assume the input to our algorithms is G′ (instead of the original graph G). The computation of v′_i can be done on-the-fly as we run through our auction algorithm, since every node knows w_max. In other words, we assume all inputs G = (V, E) to our algorithm have scaled edge weights, and v_i(j) for i ∈ L, j ∈ R are functions that return the scaled edge weights in the rest of this section.
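The rescaling step can be sketched as follows (our own helper, with an edge representation chosen only for illustration); it divides every weight by w_max and drops edges whose rescaled weight falls below ε^(⌈log_{1/ε}(min(m, W))⌉+1):

```python
# Sketch of the input rescaling described above: divide all weights by w_max and drop
# edges whose rescaled weight falls below eps^(ceil(log_{1/eps}(min(m, W))) + 1).
# The dict-of-edges representation and helper name are ours, not the paper's.
import math

def preprocess(edges, eps):
    """edges: dict mapping (bidder, item) -> positive weight. Returns rescaled, filtered edges."""
    w_max = max(edges.values())
    w_min = min(edges.values())
    m, W = len(edges), w_max / w_min
    threshold = eps ** (math.ceil(math.log(min(m, W), 1.0 / eps)) + 1)
    return {e: w / w_max for e, w in edges.items() if w / w_max >= threshold}

edges = {("i1", "j1"): 100.0, ("i1", "j2"): 60.0, ("i2", "j2"): 1.0, ("i2", "j3"): 0.001}
print(preprocess(edges, eps=0.1))  # the very light edge ("i2", "j3") is dropped
```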
Detailed Algorithm
We now present our auction algorithm for maximum weighted bipartite matching in Algorithm 1. The algorithm works as follows. Recall that we assume the input to our algorithm is the scaled graph. This means that the maximum weight of the scaled edges is 1 and there exists at least one edge with weight 1; hence, the maximum weight matching has value at least 1. We also initialize the tuples that keep track of matched items. Initially, no items are assigned to bidders (Line 1) and the prices of all items are set to 0 (Line 2). We perform ⌈log_2(W)⌉ phases of bidding (Line 3). In each phase, we form the demand set D_i of each unmatched bidder i. The demand set is defined to be the set of items with non-zero utility which have approximately the maximum utility value for bidder i (Lines 4 to 6). This procedure is different from both MCM and MCbM (where no slack is needed in creating the demand set), but we see in the analysis that we require this slack in the maximum utility value to ensure that enough progress is made in each round. Then, we create the induced subgraph consisting of all unmatched bidders and their demand sets (Line 7). We find an arbitrary maximal matching in this created subgraph (Line 8) by finding the maximal matching in order of decreasing buckets (from the highest bucket, with the largest weights, to the lowest). We partition the edges into buckets by their weight; the "highest" bucket contains the largest weight edges and lower buckets contain smaller weight edges. This means that we call our maximal matching algorithm O(log(W)) times: first on the induced subgraph consisting of the highest bucket, removing the matches, then on the induced subgraph of the remaining edges plus the next highest bucket, and so on. We use the folklore distributed maximal matching algorithm where, in each round, a bidder uniformly at random picks a neighbor to match; this algorithm is also used in [DNO14] for the maximal matching step. This simple algorithm terminates in O(log n) rounds with high probability using O(log n) communication complexity. Such randomization is necessary to obtain O(log n) rounds using O(log n) communication complexity. We rematch items according to the new matching (Lines 9 and 10). We then increase the price of each rematched item. The price increase depends on the weight of the matched edge to the item; higher weight matched edges have larger increases in price than smaller weight edges. Specifically, the price is increased by ε · v_i(a_i), where v_i(a_i) is the weight of the newly matched edge between i and a_i (Line 11).
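A sequential sketch of one bidding phase as just described (ours, for intuition only): the demand set here uses a (1 − ε) multiplicative slack around the best utility, which is an assumption on our part since the exact slack is given in Algorithm 1 (Lines 4 to 6), and the bucketed distributed maximal matching is replaced by a greedy pass over edges from heaviest to lightest.

```python
# Sketch of one bidding phase, run sequentially: unmatched bidders form demand sets of
# items whose utility is positive and within a (1 - eps) slack of their best utility
# (assumed slack), a maximal matching over demanded items is built greedily from heavier
# to lighter edges (a stand-in for the bucketed distributed step), previous owners of
# taken items are evicted, and each newly matched item's price rises by eps * v_i(j).
def bidding_phase(values, prices, match, eps):
    """values: dict (i, j) -> weight; prices: dict j -> price; match: dict i -> item or None."""
    demands = {}
    for i in match:
        if match[i] is not None:
            continue
        utils = {j: v - prices[j] for (b, j), v in values.items() if b == i and v - prices[j] > 0}
        if utils:
            best = max(utils.values())
            demands[i] = {j for j, u in utils.items() if u >= (1 - eps) * best}
    candidates = sorted(((values[(i, j)], i, j) for i, ds in demands.items() for j in ds), reverse=True)
    taken_items, taken_bidders = set(), set()
    for w, i, j in candidates:
        if i in taken_bidders or j in taken_items:
            continue
        taken_bidders.add(i)
        taken_items.add(j)
        for i2, a in list(match.items()):           # evict the previous owner of j, if any
            if a == j:
                match[i2] = None
        match[i] = j
        prices[j] += eps * values[(i, j)]
    return match, prices

# Toy usage on a tiny instance (weights and eps are arbitrary).
values = {("i1", "j1"): 1.0, ("i2", "j1"): 0.9, ("i2", "j2"): 0.5}
prices = {"j1": 0.0, "j2": 0.0}
match = {"i1": None, "i2": None}
print(bidding_phase(values, prices, match, eps=0.1))
```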
The intuition behind this price increase is that we want to increase the price proportionally to the weight gained from the matching, since the price increase takes away from the overall utility of our matching. If not much weight is gained from the matching, then the price should not increase by much; otherwise, if a large amount of weight is gained from the matching, then we can afford to increase the price by a larger amount. We see later in our analysis that this allows us to bucket the items according to their matched edge weight into O(⌈log_{1/ε}(min(m, W))⌉) buckets. Such bucketing is useful in ensuring that we have sufficiently many happy bidders with a sufficiently large total matched weight. Finally, we return all matched items and bidders as our approximate matching and the sum of the weights of the matched items as the approximate weight. Obtaining the maximum weight of the matching in the original, unscaled graph is easy: we multiply the edge weights by w_max, and the sum of these weights is the total weight of our approximate matching (Line 13).

Analysis
In this section, we prove the approximation factor and round complexity of our algorithm. We use nearly the same definition of happy as defined in [ALT21].

Definition 3.2 (Unhappy). A bidder i is unhappy at the end of round d if it is unmatched and its demand set is non-empty.

Algorithm 1 (as recovered):
4: for each unmatched bidder i ∈ L do
7:   Create the subgraph G_d consisting of ∪_{i∈L} D_i ∪ L and all edges.
8:   Find any arbitrary maximal matching M_d of G_d in order of highest bucket to lowest.
9:   Match j to i by setting a_i = j and a_{i′} = ⊥ for the previous owner i′ of j.
11:  Increase the price of j to p_j ← p_j + ε · v_i(j).
12: Let M′ be the matched edges in this current iteration.
13: Return the matching M = argmax_{M′} w_max · Σ_{i∈L} v_i(a_i) as the approximate maximum weight matching and (i, a_i) ∈ M as the matched edges.

Note that a happy bidder is never unhappy, and vice versa. For this definition, we assume that the demand set of a bidder can be computed at any point in time (not only when the algorithm computes it).
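Definition 3.2 can indeed be checked at any point in time; the following small sketch (our own helper names) tests it directly. Note that a bidder is unhappy exactly when it is unmatched and at least one item still gives it positive utility, since the slack used in forming the demand set does not affect whether the set is empty.

```python
# Sketch of Definition 3.2: a bidder is unhappy at the end of a round if it is unmatched
# and its demand set is non-empty, i.e., some item still has positive utility for it.
def is_unhappy(bidder, match, values, prices):
    if match.get(bidder) is not None:
        return False
    return any(v - prices[j] > 0 for (i, j), v in values.items() if i == bidder)

values = {("i1", "j1"): 1.0, ("i2", "j1"): 0.8}
prices = {"j1": 0.9}
match = {"i1": "j1", "i2": None}
print(is_unhappy("i1", match, values, prices))  # False: i1 is matched
print(is_unhappy("i2", match, values, prices))  # False: 0.8 - 0.9 < 0, so the demand set is empty
```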
Approach
The main challenge we face in our MWM analysis is that it is no longer sufficient to just show that at least a (1 − ε)-fraction of bidders in OPT are happy in order to obtain the desired approximation. Consider this simple example. Suppose a given instance has an optimum solution OPT with six matched bidders, where one bidder is matched to an item via a weight-1 edge. It also has five additional bidders matched to items via weight-(1/√n) edges. Suppose we set ε = 1/6 to be a constant. Then, requiring a 5/6-fraction of the bidders in OPT to be happy is not sufficient to get a 5/6-factor approximation. Suppose the five bidders matched with edges of weight 1/√n are the happy bidders. This is sufficient to satisfy the condition that a 5/6-fraction of the bidders in OPT are happy. However, the total combined weight of the matching in this case is 5/√n, while the weight of the optimum matching is 1 + 5/√n. The returned matching then has weight smaller than a (5/√n)-fraction of the optimum, and for large n, this is much less than the desired 5/6-factor approximation. Instead, we require a specific fraction of the total weight of the optimum solution, W_OPT, to be matched in our returned matching. We ensure this new requirement by considering two types of unhappy bidders. Type 1 unhappy bidders are bidders who are unhappy in round k − 1 and remain unmatched in round k. Type 2 unhappy bidders are bidders who are unhappy in round k − 1 and become matched in round k. We show that there exists a round where the following two conditions are satisfied:
1. We bucket the bidders in OPT according to the weight of their matched edge in OPT, such that bidders matched with similar weight edges are in the same bucket; there exists a round where at most an ε^2-fraction of the bidders in each bucket are Type 1 unhappy.
2. We charge the weight a Type 2 unhappy bidder i obtains in round k to i in round k − 1; there exists a round k − 1 where a total of at most ε · W_OPT weight is charged to Type 2 unhappy bidders.
Simultaneously satisfying both of the above conditions is enough to obtain our desired approximation. The rest of this section is devoted to showing our precise analysis using the above approach.
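A quick numerical check of the example at the start of this subsection (a sketch of ours, computed for a few values of n): making only the five light bidders happy satisfies the 5/6-of-bidders condition while capturing only about a (5/√n)-fraction of OPT's weight.

```python
# Numeric check of the counting example above: with one weight-1 edge and five
# weight-(1/sqrt(n)) edges in OPT, matching only the five light bidders captures a
# vanishing fraction of OPT's weight even though 5/6 of OPT's bidders are happy.
import math

for n in (100, 10_000, 1_000_000):
    light = 5 / math.sqrt(n)
    opt_weight = 1 + light
    achieved_fraction = light / opt_weight
    print(f"n = {n:>9}: matched weight fraction = {achieved_fraction:.4f} (target 5/6 ~ 0.8333)")
```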
Thus, in our analysis, we determine, in round k, the amount of weight lost to unhappy bidders at the end of round k − 1. The way that we determine the weight lost in round k − 1 is by retroactively categorizing an unhappy bidder in round k − 1 as a Type 1 or Type 2 unhappy bidder depending on what happens in round k. Thus, for our analysis, we categorize the bidders into categories of unhappy bidders for the previous round. A Type 1 unhappy bidder in round k − 1 is a bidder i that remains unmatched at the end of round k. In other words, a Type 1 unhappy bidder was unhappy in round k − 1 and either remains unhappy in round k or becomes happy because it does not have any demand items anymore (and remains unmatched). A Type 2 unhappy bidder i in round k − 1 is a bidder who was unhappy in round k − 1 but is matched to an item in round k. Thus, a Type 2 unhappy bidder i in round k − 1 becomes happy in round k because a new item is matched to i. Both types of bidders are crucial to our analysis given in the proof of Lemma 3.5 since they contribute differently to the potential amount of value that could be matched by our algorithm. Furthermore, the proof of Lemma 3.7 necessitates bounding the two quantities separately. In the following lemma, let OPT be the optimum matching in graph G and W OPT = i∈OPT v i (o i ). Let B b be the set of bidders i ∈ OPT in bidder weight bucket b. If a Type 2 unhappy bidder i gets matched to a i in round k, we say the weight v i (a i ) is charged to bidder i in round k − 1. We denote this charged weight as c i (a i ) when performing calculations for round k − 1. Lemma 3.5. Provided G = (L ∪ R, E) and an optimum weighted matching OPT with weight W OPT = i∈OPT v i (o i ), if in some round d of Line 3 of Algorithm 1 both of the following are satisfied, 1. at most ε 2 · |B b | of the bidders in each bucket b are Type 1 unhappy and 2. at most ε · W OPT weight is charged to Type 2 unhappy bidders, then the matching in G has weight at least (1 − 6ε) · W OPT . Proof. In such an iteration r, let Happy denote the set of all happy bidders. For any bidder i ∈ Happy ∩ OPT, by Definition 3.1 and Observation 3. where o i is the item matched to i in OPT and a i is the item matched to i from our matching. Before we go to the core of our analysis, we first make the observation below that we can, in general, disregard prices of the items in our analysis. Let M be our matching. The sum of the utility of every matched bidder in our matching can be upper and lower bounded by the following expression: As in the maximum cardinality matching case, all items with non-zero price are matched to a bidder. We can then simplify the above expression to give Eq. (1) follows from the fact that all non-zero priced items are matched. Eq. (2) follows from separating OPT∩Happy from the left hand side and moving the summation of the 2ε ·v i (a i ) values over OPT∩Happy from the right hand side to the left hand side. Finally, Eq. Let Unhappy 1 denote the set of Type 1 unhappy bidders and Unhappy 2 denote the set of Type 2 unhappy bidders. We let c i (a i ) be the weight charged to bidder i in Unhappy 2 in the next round. Recall that each bidder in Unhappy 2 is matched in the next round. For each bucket, b, we can show the following using our assumption that at most ε 2 · |B b | of the bidders in bucket b are Type 1 unhappy, Eq. 
(4) shows that one can lower bound the sum of the optimum values of all happy bidders in bucket b by the sum of the optimum values of all bidders who are not Type-2 unhappy minus some factor. First, is the sum of the optimum values of all bidders in bucket b except for the Type-2 unhappy bidders. Now, we need to subtract the maximum sum of values given to the Type-1 unhappy bidders. We know that bucket b has at most ε 2 · |B b | Type-1 unhappy bidders. Each of these bidders could be assigned an optimum item with value at most ε b−2 (by Observation 3.4). Thus, the maximum value lost to Type-1 unhappy bidders is ε 2 · ε b−2 · |B b |, leading to Eq. (4). Thus, the maximum value of weight lost to all Type-1 unhappy bidders in bucket Summing Eq. (6) over all buckets b we obtain We now substitute our expression obtained in Eq. (7) into Eq. (3), The last thing that we need to show is a bound on the weight lost due to bidders in OPT ∩ Unhappy 2 . We now consider our second assumption which states that at most ε · W OPT weight is charged to Type 2 unhappy bidders. Since all bidders i ∈ Unhappy 2 become happy in the next round, we can bound the weights charged to the Type-2 unhappy bidders using Observation 3.3 by Note first that j ∈{oi|i∈OPT∩Happy} p j ≥ i∈OPT∩Unhappy2 p oi since OPT \ (OPT ∩ Happy) includes OPT ∩ Unhappy 2 so we can remove the prices from these bounds in Eq. (10). We add Eq. (9) to Eq. (8) and use our assumptions to obtain Eq. (10) follows from Eq. (11) follows from moving i∈OPT∩Unhappy2 c i (a i ) to the right hand side. Eq. (12) follows from substituting our assumption that i∈OPT∩Unhappy2 c i (a i ) ≤ ε · W OPT . Eq. (13) follows from simple manipulations and since for all ε > 0 and gives the desired approximation given in the lemma statement. We show that the conditions of Lemma 3.5 are satisfied for at least one round if the algorithm is run for at least ⌈ log 2 (W ) ε 4 ⌉ rounds. We prove this using potential functions similar to the potential functions used for MCM. We first bound the maximum value of these potential functions. Lemma 3.6. Define the potential function Φ items j∈R p j . Then the upper bound for this potential is Proof. We show that the potential function Φ items is always upper bounded by W OPT via a simple proof by contradiction. Suppose that Φ items > W OPT , then, we show that the matching obtained by our algorithm has weight greater than W OPT , a contradiction. For a bidder/item pair, (i, a i ), the weight of edge (i, a i ) is at least p ai − 2ε · v i (a i ). Let p ′ ai be the price of a i before the last reassignment of a i to i. Furthermore, since i picked a i , it must mean that v i (a i ) > p ′ ai since a i would not be included in D i otherwise. This means that the sum of the weights of all the matched edges is at least (i,ai) v i (a i ) > (i,ai) p ′ ai ≥ Φ items > W OPT by our assumption that Φ items > W OPT . Thus, we obtain that we get a matching with greater weight than the optimum weight matching, a contradiction. where W OPT is the optimum weight attainable by the matching. Recall that we assign each bidder to a weight bucket using the weight assigned to the bidder in OPT. Proof. We use similar potential functions to the proof of Lemma 2.2 in [ALT21] for each bucket b but our argument is more intricate. First, the potential functions do not both start at 0. 
Specifically, we have a separate potential function for each bucket b, Φ bidders,b as well as a potential function on all the prices of the items, Φ items : The first one bounds the sum of the maximum utility of the bidders in OPT and in bucket b and the second one bounds the sum of the prices of all items in R. We have 0 ≤ Φ bidders,b ≤ |B b | for all valid b. The maximum possible utility obtained from each item is at most 1 because the weight of any edge is at most 1. There are at most |B b | items in bucket b so the maximum possible utility is |B b |. Now, we argue that the minimum value of Φ bidders,b is 0. The minimum value of the expression max j∈N (i) (v i (j) − p j , 0) is 0. Thus, the sum of the expressions for all bidders in B b is at least 0. We also have 0 ≤ Φ items ≤ W OPT as we proved in Lemma 3.6. We consider slots in increasing/decreasing our potential functions. We consider the slots to be the maximum number of times a particular price for an item j can increase before it becomes ≥ 1. By this definition, there are a total of log (1/ε) (W ) + 2 · 1 ε slots for each item j ∈ R. This is due to the fact that there are at most ⌈log (1/ε) (W )⌉ + 2 buckets provided that we removed all edges with weight less than ε ⌈log (1/ε) (min(m,W ))⌉+1 . For each bucket, the price can increase at most 1/ε times before it becomes too large and can no longer be increased by any edge with weight in that bucket. This results in the maximum number of slots per item being upper bounded by log (1/ε) (W ) + 2 · 1 ε . We say that a bidder increasing the price of an item as taking one slot from Φ bidders,b or Φ items . Since Φ bidders,b is monotonically non-increasing and Φ items is monotonically non-decreasing, once a slot is filled, it cannot become free again. We first show that Type-1 unhappy bidders in bucket b take at least one slot each from Φ bidders,b for each round they are unhappy. That is, we show that the increase in price is at least equal to ε b where b is the smallest bucket j ∈ D i is in for each Type-1 unhappy bidder i. This is the case since we match edges from largest to smallest weight; hence, if a bidder i is unmatched, then all of the items in D i are matched to bidders with edge weights in the same or higher buckets. The smallest bucket that j ∈ D i is in is given by U i /(1 + ε) since in order for j to be included in D i , it must be the case that Provided the number of buckets is upper bounded by log (1/ε) (W ) + 2, each unhappy Type 1 bidder uses at most ε 2 slots before their demand set becomes empty. Then, at most rounds exist where ≥ ε 2 · |B b | bidders in bucket b are Type-1 unhappy and Φ bidders,b > 0. We now consider Type-2 unhappy bidders. Let the item j ′ matched to i in round k be the charged item to Type-2 unhappy bidder i and c i (j ′ ) be i's charged weight in round k − 1. Suppose that round k − 1 has ≥ ε · W OPT charged weight where W OPT is the optimum weight. Noticeably, we charge the item that is matched to i in round k to i in round k − 1. Thus, in round k, the total increase in Φ items is at least ε · ε · W OPT = ε 2 · W OPT assuming the charged weight is at least ε · W OPT . Thus, in WOPT ε 2 WOPT ≤ 1 ε 2 rounds, there exists at least one round where < ε · W OPT weight (in charged weight) is lost by Type-2 unhappy bidders. The final observation that remains is that an unhappy bidder i in round k − 1 must either be Type-1 or Type-2 unhappy. This is true since i must be either matched or unmatched in round k. 
Thus, the unhappy bidder contributes to at least one of the potential functions. By our argument above, a total of rounds can exist where ≥ ε · |B b | bidders are Type 1 unhappy in bucket b. Furthermore, also by what we showed above, there exists at most 1 ε 2 rounds where Type 2 unhappy bidders contribute ≥ ε · W OPT weight to Φ items . There are O(log(W )) buckets where for each bucket at most total rounds can exist where ≥ ε · |B b | bidders are Type-1 unhappy. Thus, by the pigeonhole principle, in 2(log 2 (1/ε) (W )+2) ε 4 phases, both conditions will be satisfied. Using the above lemmas, we can prove our main theorem that our algorithm gives a (1 − 7ε)-approximate maximum weight bipartite matching in O log 3 (W )·log(n) Reducing the Round Complexity We can use the following transformation from Gupta-Peng [GP13] to reduce the round complexity at an increase in the communication complexity. For completeness, we give the theorem for the transformation in Appendix B. Theorem 3.9. There exists a (1−ε)-approximate distributed algorithm for maximum weight bipartite matching that runs in either: Proof. This follows from applying Theorem B.6 to Algorithm 1 with bounds given by Theorem 3.8. We define f (ε) = ε −O(ε −1 ) as in [GP13] (reconstructed in Appendix B). Semi-Streaming Implementation The implementation of this algorithm in the semi-streaming model is very similar to the implementation of the MCM algorithm of Assadi et al. [ALT21]. passes and O (n · log(1/ε)) space that computes a (1 − ε)-approximate maximum weight bipartite matching for any ε > 0. Proof. We implement Algorithm 1 in the semi-streaming model as follows. We use one pass to determine w max , the set of bidders, and the set of items. We initialize all variables to their respective initial values. Then for each round, we make two passes. In the first pass, we compute U i for each bidder. Then, in the second pass, we greedily find a maximal matching. We do not store D i in memory. Instead, we compute D i as we see the edges and for each edge that connects a bidder i to an item j in D i and i is not newly matched this round, we match i and j. This only requires O(n) space to perform this matching. We store M d , computed in this manner, in memory in O(n) space. Then, using our stored M d , we increase the price of each newly matched item. We can store all prices of items in O(n log(1/ε)) memory assuming the weights were originally at most poly(n). Finally, we store the matching after the current round and the maximum weight matching from previous rounds. Returning the stored matching does not require additional space. Altogether, we use O(n log(1/ε)) space. Reducing the Number of Passes We use the transformation of [GP13] as stated in Appendix B to eliminate our dependence on n within our number of rounds. The transformation is as follows. For each instance of (1 + ε)-MWM, we maintain the prices in our algorithm for each of the nodes involved in each of the copies of our algorithm. When an edge arrives in the stream, we first partition it into the relevant level of the appropriate copy of the structure. Shared-Memory Parallel Implementation The implementation of this algorithm in the shared-memory work-depth model follows almost directly from our auction algorithm. We show the following lemma when directly implementing our auction algorithm. depth that computes a (1 − ε)-approximate maximum weight bipartite matching for any ε > 0. Proof. 
To implement our auction algorithm in the shared-memory parallel model, the only additional procedure we require is a maximal matching algorithm in the shared-memory parallel model. The currently best-known maximal matching algorithm uses O (m) work and O log 2 n depth [BFS12, FN18, BOS + 13]. Combined with our auction algorithm, we obtain the work and depth as desired in the statement of the lemma. Using the transformations, we can reduce the depth of our shared-memory parallel algorithms. Proof. We apply Theorem B.7 to Lemma 3.12 with f (ε) = ε −O(1/ε) . MPC Implementation We implement our auction algorithm in the MPC model below. As before, we can improve the complexity of our MPC algorithm using the transformations in Appendix B. Proof. We apply Theorem B.8 to Lemma 3.14 with f (ε) = ε −O(1/ε) . A (1 − ε)-approximation Auction Algorithm for b-Matching We show in this section that we also obtain an auction-based algorithm for MCbM by extending the auctionbased algorithm of [ALT21]. This algorithm also leads to better streaming algorithms for this problem. We use the techniques introduced in the auction-based MCM algorithm of Assadi, Liu, and Tarjan [ALT21] (discussed in Appendix A) as well as new techniques developed in this section to obtain a (1 − ε)-approximation algorithm for bipartite maximum cardinality b-matching. The maximum cardinality b-matching problem is defined in Definition 4.1. The key difference between our algorithm for b-matching and the MCM algorithm of [ALT21] is that we have to account for when more than one item is assigned to each bidder in L; in fact, up to b i items in R can be assigned to any bidder i ∈ L. This one to many relationship calls for a different algorithm and analysis. The crux of our algorithm in this section is to create b i copies of each bidder i and b j copies of each item j. Then, copies of items maintain their own prices and copies of bidders can each choose at most one item. We define some notation to describe these copies. Let C i be the set of copies of bidder i and C j be the set of copies of item j. Then, we denote each copy of i by i (k) ∈ C i for k ∈ [b i ] and each copy of j by j (k) ∈ C j for k ∈ [b j ]. As before, we denote a bidder and their currently matched item by i (k) , a i (k) . In MCbM, we require that the set of all items chosen by different copies of the same bidder to include at most one copy of each item. In other words, we require if j (k) ∈ i ′ ∈Ci a i ′ , then no other j (l) ∈ i ′ ∈Ci a i ′ for any j (k) , j (l) ∈ C j and k = l. This almost reduces to the problem of finding a maximum cardinality matching in a i∈L b i + j∈R b j sized bipartite graph but not quite. Specifically, the main challenge we must handle is when multiple copies of the same bidder want to be matched to copies of the same item. In this case, we cannot match any of these bidder copies to copies of the same item and thus must somehow handle the case when there exist items of lower price but we cannot match them. In addition to handling the above hard case, as before, the crux of our proof relies on a variant of the ε-happy definition and the definitions of appropriate potential functions. Recall from the MCM algorithm of [ALT21] that an ε-happy bidder has utility that is at least the utility gained from matching to any other item (up to an additive ε). Such a definition is insufficient in our setting since it may be the case that matching to a copy of an item that is already matched to a different copy of the same bidder results in lower cost. 
However, such a match is not helpful since any number of matches between copies of the same bidder and copies of the same item contributes a value of one to the cardinality of the eventual matching. Our algorithm solves all of the above challenges and provides a (1 − ε)-approximate MCbM in asymptotically the same number of rounds as the MCM algorithm of [ALT21]. We describe our auction based algorithm for MCbM next and the precise pseudocode is given in Algorithm 2. Our algorithm uses the parameters defined in Table 2. We show the following results using our algorithm. We discuss semi-streaming implementations of our algorithm in Section 4.3. Let L be the half with fewer numbers of nodes. Algorithm Description The algorithm works as follows. We assign to each bidder, i, b i unmatched slots and the goal is to fill all slots (or as many as possible). For each bidder i ∈ L and each item j ∈ R, we create b i and b j copies, respectively, and assign these copies to new sets L ′ and R ′ , respectively (Line 1). This step of the algorithm 8: Find any arbitrary non-duplicate maximal matching M d of G d . 9: for (i ′ , j ′ ) ∈ M d do 10: Set a i ′ = j ′ and a iprev = ⊥ for the previous owner i prev of j ′ . 11: Increase p j ′ ← p j ′ + ε. 12: 13: 14: changes slightly in our streaming implementation. For each bidder and item with an edge between them (i, j) ∈ E, we create a biclique between C i and C j ; the edges of all created bicliques is the set of edges E ′ . The graph G ′ = (L ′ ∪ R ′ , E ′ ) is created as the graph consisting of nodes in L ′ ∪ R ′ and edges in E ′ . As before, we initialize each bidder's assigned item to ⊥ (Line 2). Then, we set the price for each copy in R ′ to 0 (Line 3). In our MCbM algorithm, we additionally set a price cutoff for each bidder c i ′ initialized to 0 (Line 2). Such a cutoff helps us to prevent bidding on lower price items previously not bid on because they were matched to another copy of the same bidder. More details on how the cutoff prevents bidders from bidding against themselves can be found in the proof of Lemma 4.5. We maintain the maximum cardinality matching we have seen in M max (Line 4). We perform ⌈ 2 ε 2 ⌉ rounds of assigning items to bidders (Line 5). For each round, we first find the demand set for each unmatched bidder i ′ ∈ L ′ using Algorithm 3 (Line 6). The demand set is defined with respect to the cutoff price c i ′ and the set of items assigned to other copies of bidder i. The demand set considers all items j ′ ∈ R ′ that are neighbors of i ′ where no copy of j, j (k) ∈ C j , is assigned to any copies of i and p j ′ ≥ c i ′ (Algorithm 3, Line 1). From this set of neighbors, the returned demand set is the set of item copies with the minimum price in N ′ (i ′ ) (Line 2). Using the induced subgraph of i ′ ∈L ′ D i ′ ∪ L ′ (Line 7), we greedily find a maximal matching while avoiding assigning copies of the same item to copies of the same bidder (Line 8). We call such a maximal matching that does not assign more than one copy of the same item to copies of the same bidder to be a non-duplicate maximal matching. This greedy matching prioritizes the unmatched items by first matching the unmatched items and then matching the matched items. We can perform a greedy matching by matching an edge if the item is unmatched and no copies of the bidder it will match to is matched to another copy of the item. For each newly matched item (Line 9), we rematch the item to the newly matched bidder (Line 10). 
We increase the price of the newly matched item (Line 11). For each remaining unmatched bidder, we increase the cutoff price by ε (Line 12). We compute the corresponding matching in the original graph using M ′ d (Line 13) by including one edge (i, j) in the matching if and only if there exists at least one bidder copy i ′ ∈ C i matched to at least one copy of the item j ′ ∈ C j . Finally, we return the maximum cardinality M max matching from all iterations as our (1 − ε)-approximate maximum cardinality b-matching (Line 15). Analysis In this section, we analyze the approximation error of our algorithm and prove that it provides a (1 − ε)approximate maximum cardinality b-matching. Approach We first provide an intuitive explanation of the approach we take to perform our analysis and then we give our precise analysis. Here, we describe both the challenges in performing the analysis and explain our choice of certain methods in the algorithm to facilitate our analysis. We especially highlight the parts of our algorithm and analysis that differ from the original MCM algorithm of [ALT21]. First, in order to show the approximation factor of our algorithm, we require that the utility obtained by a large number of matched bidders from our algorithm is greater than the corresponding utility from switching to the optimum items in the optimum matching. For b-matching, any combination of matched items and bidder copies satisfy this criteria. Furthermore, matching multiple item copies of the same item to bidder copies of the same bidder does not increase the utility of the bidder. Thus, we look at matchings where at most one copy of each bidder is matched to at most one copy of each item. Recall our definition of ε-happy given in Definition 3.1 and we let Happy be the set of bidders satisfying that definition. For b-matching, each bidder i is matched to a set of at most b i items. Let (i, O i ) ∈ OPT denote the set of items O i ⊆ R matched to bidder i in OPT. Recall from Appendix A that the proof requires u i ≥ 1 − p oi − ε for every bidder i ∈ Happy ∩ OPT to show that i∈L u i ≥ i∈Happy∩OPT 1 − p oi − ε. Using our bidder copies, C i , the crux of our analysis proof is to show that for every (i, O i ) ∈ OPT, we can assign the items in O i to the set of happy bidder copies in C i such that each happy bidder copy receives a unique item, denoted by r i ′ , and c i ′ ≤ p min,r i ′ where p min,r i ′ is the price of the minimum priced copy of r i ′ . Using this assignment, we are able to show once again that This requires a precise definition of Happy ∩ OPT. Let S i ⊆ C i be the set of all happy bidders in C i . Recall that the optimum solution gives a matching between a bidder i ∈ L and potentially multiple items in R; we turn this matching into an optimum matching in G ′ . If |S i | ≤ |O i |, then all happy copies in S i are in OPT; otherwise, we pick an arbitrary set of |O i | happy bidder copies in S i to be in OPT. Then, the summation is determined based on this set of happy bidder copies in Happy ∩ OPT. Once we have shown this, the only other remaining part of the proof is to show that in the ⌈ 2 ε 2 ⌉ rounds that we run the algorithm the potential increases by ε for every unhappy bidder in OPT for each round that the bidder is unhappy. As in the case for MCM, the price of an item increases whenever it becomes re-matched. Hence, Π items increases by ε each time a bidder who was happy becomes unhappy. 
To ensure that Π bidders increases by ε for each bidder who was unhappy and remains unhappy, we set a cutoff price that increases by ε for each round where a bidder remains unhappy. Thus, this cutoff guarantees that Π bidders increases by ε each time. Detailed Analysis Now we show our detailed analysis that formalizes our approach described above. We first show that our algorithm maintains both Invariant 2 and Invariant 3. We also show our algorithm obeys the following invariant. Invariant 1. The set of matched items of all copies of any bidder i ∈ L contains at most one copy of each item. In other words, i ′ ∈Ci a i ′ ∩ C j ≤ 1 for all j ∈ R. We restate two invariants used in [ALT21] below. We prove that our Algorithm 2 also maintains these two invariants. Proof. An item increases in price only when it is matched to a bidder by Line 11. A matched item never becomes unmatched in our algorithm. Thus, Invariant 4 is maintained. By definition of utility, the utility obtained from the matching produced by Algorithm 2 is Hence, Invariant 5 is also satisfied by our algorithm. Suppose for contradiction that Invariant 1 is violated at some point in our algorithm. Then, suppose i (k) , i (l) ∈ C i are two copies of bidder i that are matched to two copies of the same item. Either they matched to two copies of the same item in the same round or they matched to the items in different rounds. In the first case, Line 8 ensures no two copies of the same bidder are matched to copies of the same item in the same round. In the second case, suppose without loss of generality that i (l) was matched after i (k) . Then, this means that D i (l) contains a copy of of the same item that is matched to i (k) . This contradictions how D i (l) was constructed in Line 1. Thus, Invariant 1 follows. We follow the style of analysis outlined in Appendix A by defining appropriate definitions of ε-happy and appropriate potential functions Π items and Π bidders . In the case of b-matching, we modify the definition of ε-happy in this setting to be the following. is as defined in Line 1 of Algorithm 3 (i.e. contains all neighboring items j ′ where p j ′ ≥ c i ′ and no copy of the neighbor is matched to another copy of i ′ ). At the end of each round, it is easy to show that all matched i ′ and i ′ whose demand sets D i ′ are empty are (ε, c i ′ )-happy. Lemma 4.4. At the end of any round, if bidder i ′ is matched or if their demand set is empty, Proof. First, consider the case when the demand set D i ′ is empty. Let c i ′ be the cutoff price at the end of the round. This means that i ′ remains unmatched at the end of the round and c i ′ does not increase from the beginning of the round since D i ′ is empty. In this case, it means that all neighboring items with price ≥ c i ′ − ε and which were not matched to another copy of i at the beginning of the round had price 1. Then, the utility that can be gained from any of these items is 0 and our bidder i ′ , who has utility u i ′ = 0, is (ε, c i ′ )-happy. Suppose that instead i ′ is matched. Then, i ′ must have matched to an item from its demand set. Recall that the demand set consists of the lower priced items from the set of i ′ s neighbors with price at least c i ′ and which were not matched to any copy of i. This is precisely the set of neighbors we are comparing against. Since we matched against one of the lowest priced items in this set and the price of the item increases by ε after being matched, the utility is lower bounded by 1 − p j ′ − ε for all j ′ ∈ N ′ (i ′ ). 
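As a concrete illustration of how the cutoff price c_{i'} restricts a bidder copy's demand set (the computation described earlier for Algorithm 3), here is a small hedged Python sketch. It assumes unit item values (as in MCbM), represents bidder and item copies as (original id, copy index) pairs, and uses invented names and data structures; it is a schematic reading of the description, not the paper's implementation.

```python
# Illustrative demand-set computation with a cutoff price, following Algorithm 3's
# description: N'(i') keeps neighboring item copies j' with p_{j'} >= c_{i'} such that
# no copy of the same item is matched to another copy of the same bidder; D_{i'} is the
# set of minimum-priced copies within N'(i'). All names are assumptions for the example.

def demand_set(bidder_copy, neighbors, prices, cutoff, owner, item_copies):
    i_orig = bidder_copy[0]
    n_prime = []
    for item_copy in neighbors[bidder_copy]:
        j_orig = item_copy[0]
        if prices[item_copy] < cutoff[bidder_copy]:
            continue                       # below the cutoff price c_{i'}
        held = any(owner.get(jc, (None,))[0] == i_orig for jc in item_copies[j_orig])
        if held:
            continue                       # a copy of j is already matched to a copy of i
        n_prime.append(item_copy)
    if not n_prime:
        return []                          # empty demand set
    p_min = min(prices[jc] for jc in n_prime)
    return [jc for jc in n_prime if prices[jc] == p_min]
```

In this reading, raising c_{i'} by ε in a round where i' stays unmatched shrinks the set of items the copy may bid on, which is exactly what drives the Π_bidders potential argument above.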
In addition to the new definition of happy, we require another crucial observation before we prove our approximation guarantee. Specifically, we show that for any set of bidder copies C i and any set of |C i | items I ⊆ R, Lemma 4.4 is sufficient to imply there exists at least one assignment of items in I to happy bidders in S i such that each item is assigned to at most one bidder and each happy bidder is assigned at least one item where the minimum price of the item is at least the cutoff price of the bidder. Lemma 4.5. For a set of bidder copies C i and any set I ⊆ R of |C i | items where (i, j) ∈ E for all items j ∈ I, there exists at least one assignment of items in I to bidders in C i , where we denote the item assigned to copy i ′ by r i ′ , that satisfy the following conditions: 1. The assignment is a one-to-one mapping between bidders in C i and items in I. Any item j matched to In this proof, we prove a stronger statement which is sufficient to prove our original lemma statement. Namely, we prove that for each bidder i ′ ∈ L ′ , during any round d ≤ ⌈ 2 ε 2 ⌉, of the items in N (i), at most |C i | − 1 of them can have minimum price < c i ′ and each of these items can be assigned to a unique copy of C i that is not i ′ . This means that any subset of |C i | items in N (i) containing the items with minimum price < c i ′ can be assigned to these unique copies and rest of the items can be arbitrarily assigned to any of the remaining copies of C i . We now prove the above. Let i ′ ∈ L ′ be any bidder in L ′ . We say an item's minimum priced copy falls below c i ′ when c i ′ increases above the minimum priced copy of an item. An item's price falls below c i ′ only when another copy of the item is matched to another copy of i. Now we first argue there cannot be more than |C i | − 1 of these items. To show this, we first show that each copy i ′′ ∈ C i where i ′ = i ′′ can cause at most one item in R to have a copy with minimum price less than c i ′ . We say a bidder copy i ′′ caused j to have minimum price less than c i ′ if i ′′ was matched to a copy of j in the earliest round when the minimum priced copy of j drops below c i ′ and does not have price ≥ c i ′ in any later rounds up to the current round. In other words, suppose the current round is d and the minimum priced copy of j dropped below c i ′ in round d ′ < d because it was matched to item i ′′ . Then suppose the minimum priced copy of j does not exceed c i ′ again after round d ′ . We say that i ′′ caused j to have minimum price less than c i ′ . Suppose for contradiction that i ′′ can cause more than one item to have minimum price less than c i ′ . Then, suppose i ′′ caused both j 1 , j 2 ∈ R where j 1 = j 2 to have a copy with minimum price smaller than c i ′ . Without loss of generality, assume bidder i ′′ was initially matched to a copy of j 1 and then to a copy of j 2 . There are again several cases to consider. Bidder i ′′ may have switched to a copy of j 2 from a copy of j 1 during some round when the minimum priced copies of both items were the same. If they have price equal to c i ′ , then, they can have minimum price < c i ′ in the subsequent round if and only if both are matched to copies of i ′ . In that case, i ′′ cannot cause j 2 to have minimum price less than c i ′ . Suppose both item's minimum prices are less than c i ′ . Then, at some point j 2 must have been matched to some copy of i ′ to drop below c i ′ in price. 
Without loss of generality, suppose this is the first time that i ′′ switched its matching to j 2 since the minimum priced copy of j 2 dropped below c i ′ . Then, i ′′ cannot have caused the minimum price of j 2 to drop below c i ′ since j 2 already has minimum price below c i ′ when i ′′ switched to it. Since each item which falls below c i ′ requires a unique copy in C i (which is not i ′ ) there can be at most |C i | − 1 such items. Now, we conclude the proof by showing each such item with minimum price less than c i ′ can be assigned to a unique copy of C i . We proved above that a unique copy of i caused each item to drop below c i ′ in price. Furthermore, we also proved above that a bidder can switch to another item if and only if the items have the same minimum price. A bidder i 1 can be assigned to the item j 1 they originally caused to drop below c i ′ in price unless j 1 's price drops below c i1 . Suppose without loss of generality that this is the first such bidder whose original item fell below its cutoff price. Then, there must exist another bidder i 2 ∈ C i who matched to j 1 and was assigned item j 2 that has the same minimum price as j 1 . We switch the assignments of j 2 to i 1 and j 1 to i 2 in this case. We perform this switch sequentially for every such bidder whose original item fell below its cutoff price. Thus, we showed that each bidder in C i can either be assigned to the item they originally caused to drop below c i ′ or we can switch the assignment of two such bidders. We now perform the approximation analysis. Suppose as in the case of MCM, we have at least (1 − ε)|OPT| happy bidders in OPT (i.e. |Happy ∩ OPT| ≥ (1 − ε)|OPT|), then we show that we can obtain a (1 − ε)-approximate MCbM. Let OPT be an optimum MCbM matching and |OPT| be the cardinality of this matching. Proof. Let OPT be an optimum MCbM and (i, J) ∈ OPT be the bidder and item set pairs in OPT. Let |OPT| be the cardinality of the optimum matching. Using Lemma 4.5, for each pair (i, O i ), we assign the items in O i to C i . Now, we upper and lower bound the utility of all matched bidders as before using this assignment. The upper bound is the same as the case for MCM. since all items with non-zero price is assigned to a bidder and the maximum cardinality cannot exceed the cardinality of the obtained matching M . Then, to lower bound the sum of the utilities we obtain for each pair of bidder copy and assigned item This means that summing over all happy bidders results in Combining the lower and upper bounds we obtain our desired approximation ratio The potential argument proof is almost identical to that for MCM provided our use of c i ′ . Specifically, as in the case for MCM, we use the same potential functions and using these potential functions, we show that our algorithm terminates in O 1 ε 2 rounds. The key difference between our proof and the proof of MCM explained in Appendix A is our definition of Π bidders which is precisely defined in the proof of Lemma 4.7 below. Lemma 4.7. In ⌈ 2 ε 2 ⌉ rounds, there exists at least one round where |OPT ∩ Happy| ≥ (1 − ε)|OPT|. Proof. We use similar potential functions as used in [ALT21] (Appendix A) with the difference being the definition of Π bidders . We define Π bidders by picking an arbitrary set of |O i | bidder copies for each i ∈ L ′ to be contained in the set OPT. We let this set of copies be denoted as OPT. 
Then, we define the potential functions as follows: First, both Π items and Π bidders are upper bounded by |OPT| since the price of any item is at most 1 and the number of non-zero priced items is precisely the number of matched items by Invariant 4. We show that having at least ε · |OPT| bidders in OPT that are not happy increases the potential on one or both of the potential functions by at least ε 2 · |OPT|. Using the above lemmas, we can prove the round complexity of Theorem 1.2 to be O 1 ε 2 by Lemma 4.6 and Lemma 4.7. Semi-Streaming Implementation We now show an implementation of our algorithm to the semi-streaming setting and show the following lemma which proves the semi-streaming portion of our result in Theorem 1.2. We are guaranteed ε ≥ 1 2n 2 ; otherwise, an exact matching is found. In order to show the space bounds, we use an additional lemma below that upper and lower bounds the prices of any copies of the same item in R ′ . Lemma 4.9. For any j ∈ R, let j min be the minimum priced copy in C j and j max be the maximum priced copy in C j . Then, p jmax − p jmin ≤ ε. Proof. We prove this lemma via contradiction. Suppose for contradiction that p jmax − p jmin > ε for some j ∈ R. This means that during some round d, a bidder i ′ ∈ L ′ matched to an item copy j ′ where p j ′ > p jmin . By Algorithm 3, this can only happen if D i ′ contains j ′ but not j min . If j ′ ∈ D i ′ , then by definition of N ′ (i ′ ), it holds that j min ≥ c i ′ and no copy of j is matched to another copy of i. Then, j min ∈ N ′ (i ′ ) and j min ∈ arg min j ′ ∈N ′ (i ′ ) (p j ′ ), a contradiction to j ′ ∈ D i ′ since p j ′ > p jmin . Using the above, we prove our desired bounds on the number of passes and the space used. Proof. We implement the steps in Algorithm 2 in the semi-streaming model and show that they can be implemented within the bounds of this lemma. We maintain in memory the following: 1. The tuples (i ′ , a i ′ ) for each i ′ ∈ L ′ , and 2. The minimum and maximum prices for each item j ∈ R and a count of the number of item copies at the minimum price and the maximum price for each item. For each round (Line 5), we spend one pass finding the minimum price of items in the N ′ (i ′ ) of each bidder i ′ ∈ L ′ . Then we spend another pass greedily finding a non-duplicate maximal matching among the items that have this minimum price. To find a non-duplicate maximal matching that prioritizes unmatched items, we perform two passes in our streaming algorithm. During the first pass, for each edge we receive in the stream, we first check that the minimum price of the item equals the demand set price. If this condition is satisfied and the following are also true, 1. at least one copy of the bidder adjacent to the edge is unmatched and has sufficiently low cutoff price, 2. none of the copies of the bidder matched to any copies of the item, 3. and at least one minimum priced copy of the item is unmatched, then we match an unmatched copy of the item with an unmatched copy of the bidder (with sufficiently low cutoff price). We can do this greedily in the streaming setting since we maintain all copies of bidders in memory as well as the minimum and maximum prices of all items. This means that we can check all copies of all bidders to find an unmatched copy. Furthermore, we maintain pointers from items to their matched bidder copies so we can check the pointers as well as the minimum prices of items and their counters to greedily find the appropriate matchings. 
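The first-pass check just described can be summarized as a small predicate over the per-item summaries kept in memory. The following Python sketch is only a schematic reading of the stated conditions under assumed data structures (a per-item minimum price, a counter of unmatched minimum-priced copies, and per-bidder-copy state); the names and record layouts are invented for illustration and do not correspond to the authors' implementation.

```python
# Schematic first-pass test for a streamed edge (i, j) in the b-matching algorithm.
# Assumed in-memory state (illustrative representations only):
#   min_price[j], demand_price[i]  : item j's minimum copy price, the demand-set price
#                                    computed for (a copy of) bidder i in the earlier pass
#   unmatched_min_copies[j]        : count of unmatched minimum-priced copies of item j
#   bidder_copies[i]               : copy records with fields .matched and .cutoff
#   holds_copy_of[i]               : items of which some copy is matched to a copy of i

def try_match_first_pass(i, j, min_price, demand_price,
                         unmatched_min_copies, bidder_copies, holds_copy_of):
    if min_price[j] != demand_price[i]:
        return False  # item is not in the demand set at its current minimum price
    free_copies = [c for c in bidder_copies[i]
                   if not c.matched and c.cutoff <= min_price[j]]
    if not free_copies:
        return False  # no unmatched bidder copy with a sufficiently low cutoff
    if j in holds_copy_of[i]:
        return False  # some copy of the bidder already holds a copy of this item
    if unmatched_min_copies[j] == 0:
        return False  # every minimum-priced copy of the item is already matched
    # Greedily match one unmatched minimum-priced copy of j to one free copy of i.
    free_copies[0].matched = True
    unmatched_min_copies[j] -= 1
    holds_copy_of[i].add(j)
    return True
```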
In the second pass, we match the matched items in the same manner as before in the first pass, except we consider all items in each node's demand set (not just unmatched ones). Reallocating the items and increasing the prices of rematched items can be done from the matching above in O i∈L b i + |R| log(1/ε) space without needing additional passes from the stream. Finally, computing M d can also be done using M ′ d in the same amount of memory without additional passes of the stream. We note that the space bound is necessary in order to report the solution. (There exists a given input where reporting the solution requires O i∈L b i + |R| log(1/ε) space.) Thus, our algorithm is tight with respect to this notion. Shared-Memory Parallel Implementation We now show an implementation of our algorithm to the shared-memory parallel setting. The main challenge for this setting is obtaining an algorithm for obtaining non-duplicate maximal matchings. To obtain nonduplicate maximal matchings, we just need to modify the maximal matching algorithm of [BFS12] to obtain a maximal matching with the non-duplicate characteristic. Namely, the modification we make is to consider all copies of a node to be neighbors of each other. Since there can be at most n copies of a node, this increases the degree of each node by at most n. Hence, the same analysis as the original algorithm still holds in this new setting. Theorem 4.11. There exists a shared-memory parallel algorithm for maximum cardinality bipartite bmatching that uses O log 3 n ε 2 depth and O m log n ε 2 total work where L is the side with the smaller number of nodes in the input graph. Proof. Finding the demand sets can be done using a parallel scan and sort in O(m log n) work and O(log n) depth. Then, finding the induced subgraph can be done using a parallel scan in O(m) work and O(log n) depth. Finally, we use a modified version of the maximal matching algorithm of [BFS12] to compute the maximal matching in each phase. Our modified version of the algorithm of [BFS12] considers all copies of the same node to be neighbors of each other; all other parts of the algorithm remains the same. This means that the degree of each node increases by at most n (resulting in a maximum degree of at most 2n) which means that the asymptotic work and depth remains the same as before with O(m) work and O(log 2 n) depth. Combined, we obtain the work and depth as stated in the lemma. In the above equations, Happy is the set of happy bidders in L and o i is the item matched to bidder i in OPT. Eq. (15) follows from Invariant 5 and the definition of happy (Definition A.1). Eq. (16) simplifies i∈OPT∩Happy 1 ≥ (1 − ε)|OPT| by the assumption. Eq. (17) follows since i∈OPT∩Happy ε ≤ ε|OPT|. Finally, they obtain Eq. (18) using Invariant 4 which implies that j∈R p j ≥ i∈OPT∩Happy p oi . Now, the only thing that remains to be shown is that in ⌈ 2 ε 2 ⌉ total rounds, there exists at least one round where ≥ (1 − ε)|OPT| of the bidders in OPT are ε-happy. They argue this through a clean and simple potential function argument. They define two potential functions (below) that ensure that for each unhappy bidder i that is also in OPT, the potential of one of these potential functions increases by ε for each round the bidder is unhappy: Both of the potential functions above are upper bounded by |OPT|. Otherwise, a higher potential implies a solution with larger cardinality than OPT, a contradiction to the optimality of OPT. 
Thus, since each unhappy bidder increases the potential of at least one of these potential functions by ε, the total increase in potential when at least ε|OPT| of the bidders in OPT are unhappy is at least ε · ε|OPT|. Then, the total number of rounds necessary before they obtain at least one round where at least (1 − ε)|OPT| of bidders in OPT are happy is upper bounded by ⌈ 2|OPT| ε·ε|OPT| ⌉ = ⌈ 2 ε 2 ⌉. B Gupta-Peng [GP13] Transformation We state modified versions of the Gupta-Peng [GP13] transformation in this section that can be applied to the distributed, parallel, and streaming settings. Our transformations are almost identical to the analysis given by [GP13] and we encourage interested readers to refer to the original work for the original analyses and to [BDL21] for adaptations to some of the different settings. For completeness and to make our paper self-contained, we include all relevant proofs in this paper. The purpose of the transformation is to take an algorithm which obtains an (1−ε)-approximate maximum weighted matching with a complexity measure that has a polynomial dependency on the maximum weight in the input graph and convert it into an algorithm with some greater dependency on the approximation parameter ε > 0 and polylogarithmic dependency on the maximum weight in the graph. The transformation works by maintaining several versions of a blackbox (1 − ε)-approximate maximum weighted matching algorithm on smaller instances of the problem to obtain a (1 − ε)-approximate maximum weighted matching algorithm with the desired new complexity bounds. For the remainder of this section, to be consistent with the notation used in [GP13], we refer to the approximations as "(1 + ε)-approximations". Such approximations can be easily converted to (1 − ε)approximations used as our notation for the rest of this paper. The transformation proceeds as follows. We first define some notation used to describe the algorithm. Let an edge e = (u, v) be in level ℓ if its weight is in a certain range to be determined later. Then, letM ℓ be a matching found for level ℓ by a (1 + ε)-approximate maximum weighted matching algorithm. Then, the approximate matching for the entire graph is produced by iterating from the largest ℓ to the smallest ℓ and greedily choose edges inM ℓ to add to the matchingM as long as the chosen edge is not adjacent to any endpoint of an edge inM . Let R(e) for an edge e = (u, v) be defined as R(e) = {e} ∪ {(x, y) | (x, y) ∈M ℓ ′ where ℓ ′ < ℓ, and {x, y} ∩ {u, v} = ∅} or, in other words, R(e) is the set of edges that contain e and all edges from lower levels that are part of the matchings in the levels but are removed due to e being added toM . The weight of edge e is given by w(e). As in [GP13], we overload notation and denote the sum of the weights of all edges in a set S to be w(S). We keep several copies of a data structure that partitions the edges into levels while omitting different sets of edges in each copy. For each copy, we maintain buckets consisting of edges and each level consists of a set of buckets. An edge e is in bucket b if w(e) ∈ [ε −b , ε −(b+1) ). Then, each level consists of C − 1 continuous buckets where C = ⌈ε −1 ⌉. We maintain C copies of our graph. In the c-th copy where c ∈ [C], we remove the edges in all buckets i where i mod C = c. Then, each level ℓ in copy c contains buckets in the range b ∈ [ℓ · C + c + 1, . . . 
, (ℓ + 1) · C + c − 1] which means that the ratio the maximum weight edge and the minimum weight edge is any level is bounded by ε −((ℓ+1)·C+c) ε −(ℓ·C+c+1) = ε −(C−1) = ε −O(ε −1 ) . LetM c be the approximate matching computed for copy c. Then, we denote copy c's structures forM ℓ , M ℓ , and R(e) bŷ M c ℓ and M c ℓ , and R c (e), respectively. We first prove the following lemma about the total weight of all edges in R c (e) compared to the weight of e. Proof. Let ℓ be the level that e is on. Then, each level ℓ ′ < ℓ contains at most two edges that are incident to an endpoint of e. The maximum weight of any edge in level ℓ ′ is ε −((ℓ ′ +1)·C+c) . Furthermore, edge e has at least ε −(ℓ·C+c+1) weight. Thus, we can upper bound w(R c (e)) by ≤ w(e) + 2ε · w(e) 1 − ε C ≤ w(e)(1 + 3ε). Now, we show the relation betweenM c and M c ; in particular, we show thatM c is close to M c in size up to a small multiplicative factor. Lemma B.2 (Lemma 4.8 of [GP13]). LetM c be the approximation produced by our transformation and M c be a maximum weighted matching in copy c, then (1 + 7ε)w(M c ) ≥ w(M c ). Proof. By our algorithm, eachM c ℓ is a (1 + ε)-approximate weighted matching of M c ℓ . Then, we have: Consider an edge e = (u, v) ∈M c ℓ , then either: e ∈M c and e ∈ R c (e) or e ∈ M c and e ∈ R c (e ′ ) and/or e ∈ R c (e ′′ ) where u ∈ e ′ and v ∈ e ′′ and e ′ , e ′′ ∈M c . This means that each e is mapped to at least one R c (e ′ ) for at least one edge e ′ ∈M c . Then, it holds that w(R c (M c )) ≥ ℓ w(M c ℓ ) (1 + ε) · w(R c (M c )) ≥ (1 + ε) · ℓ w(M c ℓ ) (1 + ε) · w(R c (M c )) ≥ w(M c ). We now show that there is at least one copy c where w(M c ) ≥ (1 − 1/C) · w(M). Proof. LetM c denote the set of edges in M that are not present in the c-th copy. By our algorithm, each bucket is removed in exactly one copy. Then, it holds that This means that the average of w(M c ) is at least (1 − 1/C) · w(M) and so there must exist at least one copy c where w(M c ) ≥ (1 − 1/C) · w(M). Combining the above, we obtain our final theorem. Proof. There are at most C = O(ε −1 ) copies of the graph and in each copy that are at most O(log (1/ε) (W )) buckets. We showed that in each level the weight ratio is upper bounded by ε −O(ε −1 ) . Hence, we run O log (1/ε) (W ) ε copies of our baseline approximation algorithm on graphs with weight ratios at most ε −O(ε −1 ) . Combining Lemmas B.2 and B.3, we get (1 + 16ε) · w(M c ) ≥ w(M). B.1 Extensions of Gupta-Peng Transformation to Other Models For the distributed and streaming settings, we use the transformations of Bernstein et al. [BDL21] and restate the key theorems in their paper. For the shared-memory parallel and massively parallel computation settings, we give short proofs of how to adapt their transformation for our settings. Let W again be the maximum ratio between the largest weight edge and the smallest weight edge in the input graph. For the below theorems, whenever we write log (1/ε) (W ), we assume the base of the logarithm is 1/ε. The following two (modified) transformations are inspired by Bernstein et al. [BDL21]. For completeness, we present the proofs of these transformations using our description of the Gupta-Peng transformation above. Proof. To obtain A ′ , we maintain each of the C = O 1 ε subgraphs {G 1 , . . . , G C } of the Gupta-Peng transformation in parallel incurring a factor of O log (1/ε) (W ) ε additional total work. The depth is now O D(n, m, f (ε), ε) · log (1/ε) (W ) since we now need to compute the matching per level sequentially. 
The computation for each subgraph and within the levels in each subgraph can be done in parallel, and the depth is a function of the maximum ratio of weights in the graph in each level, which is f(ε). Proof. To obtain A', we maintain each of the C = O(1/ε) subgraphs {G_1, ..., G_C} of the Gupta-Peng transformation in parallel, with each level partitioned across machines in the same way as in the original algorithm. The number of rounds is equal to the number of rounds for any particular instance, so it is O(R(n, m, f(ε), ε)) since each instance has maximum weight ratio f(ε). Since each instance can be handled in parallel by the algorithm, the space per instance is O(S(n, m, f(ε), ε)). Once the matching per level is computed, all of the levels for the same copy are combined into one matching. Because each level contributes a matching and there are O(log_{1/ε}(W)) levels, the total space per machine used is O(n · log_{1/ε}(W)). The total space is now O(T(n, m, f(ε), ε) · log_{1/ε}(W) / ε), since the computation for each subgraph and within the levels in each subgraph can be done in parallel and each requires T(n, m, f(ε), ε) total space.
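To make the bucket/level/copy bookkeeping used throughout this appendix concrete, the following minimal Python sketch shows one way to compute an edge's bucket, its membership in a copy, and its level, and to greedily merge the per-level matchings from the highest level down, as described above. The function names and boundary-case conventions are assumptions for illustration only (for instance, edge weights are assumed to be at least 1 so that bucket indices are non-negative); this is not the authors' code.

```python
import math

# Illustrative sketch of the Gupta-Peng partitioning described above.

def bucket_of(weight, eps):
    # Edge e lies in bucket b if w(e) is in [eps^-b, eps^-(b+1)); weights assumed >= 1.
    return int(math.floor(math.log(weight, 1.0 / eps)))

def kept_in_copy(bucket, copy, C):
    # In the c-th copy, every bucket i with i mod C == c is removed.
    return bucket % C != copy

def level_of(bucket, copy, C):
    # Level l of copy c contains buckets in [l*C + c + 1, ..., (l+1)*C + c - 1];
    # valid only for buckets kept in this copy (and with bucket > copy).
    return (bucket - copy - 1) // C

def merge_levels(level_matchings):
    """Greedily merge per-level matchings from the highest level down,
    skipping edges that touch an already matched endpoint."""
    merged, used = [], set()
    for level in sorted(level_matchings, reverse=True):
        for (u, v, w) in level_matchings[level]:
            if u in used or v in used:
                continue
            merged.append((u, v, w))
            used.update((u, v))
    return merged
```

Within each level the ratio between the largest and smallest edge weight is at most ε^{-(C-1)} = f(ε), which is what allows the blackbox algorithm to be run on each level independently.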
Genome-wide analysis of alternative splicing of pre-mRNA under salt stress in Arabidopsis Background Alternative splicing (AS) of precursor mRNA (pre-mRNA) is an important gene regulation process that potentially regulates many physiological processes in plants, including the response to abiotic stresses such as salt stress. Results To analyze global changes in AS under salt stress, we obtained high-coverage (~200 times) RNA sequencing data from Arabidopsis thaliana seedlings that were treated with different concentrations of NaCl. We detected that ~49% of all intron-containing genes were alternatively spliced under salt stress, 10% of which experienced significant differential alternative splicing (DAS). Furthermore, AS increased significantly under salt stress compared with under unstressed conditions. We demonstrated that most DAS genes were not differentially regulated by salt stress, suggesting that AS may represent an independent layer of gene regulation in response to stress. Our analysis of functional categories suggested that DAS genes were associated with specific functional pathways, such as the pathways for the responses to stresses and RNA splicing. We revealed that serine/arginine-rich (SR) splicing factors were frequently and specifically regulated in AS under salt stresses, suggesting a complex loop in AS regulation for stress adaptation. We also showed that alternative splicing site selection (SS) occurred most frequently at 4 nucleotides upstream or downstream of the dominant sites and that exon skipping tended to link with alternative SS. Conclusions Our study provided a comprehensive view of AS under salt stress and revealed novel insights into the potential roles of AS in plant response to salt stress. Electronic supplementary material The online version of this article (doi:10.1186/1471-2164-15-431) contains supplementary material, which is available to authorized users. Background High salinity in soil is a major environmental condition that adversely affects crop production worldwide. Today, roughly 20% of the world's cultivated land and nearly half of all irrigated lands are affected by salinity [1]. High concentrations of salt in soil lead to ion imbalances and hyperosmotic stress in plants. Understanding the mechanisms of plant responses to salt stress is fundamentally important to the study of plant biology and also vital to continued development of rational breeding and genetic engineering strategies to improve salt tolerance in crop plants. Plant's cellular and molecular responses to salt stress have been studied intensively [2,3]. Among these responses is the dramatic change in the expression of a large number of plant genes, which are regulated at the transcriptional as well as the post-transcriptional levels. Alternative pre-mRNA splicing (AS) is an important mechanism for regulating gene expression and for increasing transcriptome plasticity and proteome diversity in eukaryotes [4]. AS is involved in many physiological processes in plants, including the response to biotic and abiotic stresses [5][6][7]. Although AS of some stress-responsive genes has been reported, large-scale or genome-wide studies of AS dynamics under salt stress conditions are still relatively scarce. 
Based on the data from Sanger sequencing of full-length cDNA libraries from Arabidopsis plants exposed to different stresses such as cold, heat, and salt stress, it was found that the number of AS events under stress conditions (particularly low temperature) was significantly higher than the number under normal conditions [8]. Another study using a whole-genome tiling array in Arabidopsis with various stress treatments identified a group of AS events that were associated with stress-responsive genes and some essential regulatory genes [9]. These and other studies revealed the involvement of AS in response to abiotic stress [7]. The methods used in these studies, Sanger sequencing and tiling arrays, however, suffer from relatively low resolution when compared with the recently developed high-throughput RNA sequencing (RNA-seq) methods. As a result, some AS events, particularly those with lower abundance, may escape detection. A recent study using high-throughput RNA sequencing was conducted with Arabidopsis plants that were exposed to various stresses or were at different developmental stages and time points in the diurnal cycle [10]. That study mainly focused on the complexity of AS rather than on a detailed description of the global changes in AS under salt stress conditions [10]. To investigate the global dynamics of AS under salt stress, in this study we used the Illumina HiSeq platform to perform paired-end RNA sequencing with Arabidopsis plants that were exposed to different concentrations of salt, generating ~110 million paired-end reads (101 bp in length). In what follows, we first describe the features of AS under salt stress based on comparative AS analysis. We then report on how the genes with differential AS are associated with specific functional categories, such as the responses to stresses and RNA splicing. We also suggest that AS could represent a regulatory mechanism independent of the regulation of gene transcriptional activation. Finally, we discuss the change in pre-mRNA splicing patterns of serine/arginine-rich (SR) splicing factors under salt stress.

Quality analysis of RNA-seq data

We used the mRNA-sequencing (RNA-seq) method to acquire whole transcriptomes from both NaCl-treated and untreated two-week-old Arabidopsis (ecotype C24) seedlings at single-nucleotide resolution. To detect salt-induced AS events precisely, we subjected the seedlings to treatments with different concentrations of NaCl (0, 50, 150, or 300 mM). We obtained 110 million sequenced reads (101 bp in length) using the Illumina HiSeq sequencing system. On average, nearly 89% of these reads could be unambiguously aligned to the TAIR10 reference genome sequence (Additional file 1). To evaluate the quality of the RNA-seq data, we investigated the proportion of read alignments in the genome, the continuity of reads (3'/5' bias) along transcriptional units (TUs), and sequencing saturation. Firstly, comparing the mapped reads to the gene annotation revealed that about 98% of the reads were from exonic regions, whereas only 2% were mapped to intergenic and intronic regions (Figure 1A). This was consistent with the quality of the Arabidopsis genome assemblies and annotation. Secondly, plotting the coverage of reads along each transcript exhibited a uniform distribution with no obvious 3'/5' bias, reflecting the high quality of the cDNA libraries (Figure 1B).
Lastly, we assessed the sequencing saturation and found that as more reads were obtained, the number of newly discovered genes plateaued (Figure 1C), suggesting that extensive coverage was achieved. This was also supported by plotting the read coverage along each chromosome, which showed extensive transcriptional activity across the entire genome (Figure 1D and Additional file 2). To confirm that the comparison of AS was performed at the same level, we randomly sampled 18 million properly paired mapped reads from each RNA-seq library for further analysis.

Identification of AS events

To identify AS events, we first predicted splice junctions using the software TopHat, which was designed to identify exon-exon splice junctions. We initially obtained 433,475 junctions from the four RNA-seq libraries (Additional file 3). After filtering the splice junctions by two criteria (for details, see Materials and Methods), namely an overhang size of more than 20 bp and at least two reads supporting the splice junction, we obtained a junction data set of 397,321 confident junctions that we believe to be true splice junctions (sketched below). Comparison of the junctions in this data set to the gene annotation (TAIR10) revealed that 363,383 (91.5%) junctions were previously annotated, and that the remaining 33,938 (8.5%) were novel junctions that had not been annotated in the TAIR10 database (Additional file 3). After comparing all the confident junctions to the annotated genes, we identified all the AS events (including 2275 cassette exons, 6624 alternative 5' splice sites (SSs), 9654 alternative 3'SSs, 6 mutually exclusive exons, 253 coordinate cassette exons, 18 alternative first exons and 10 alternative last exons) under salt stress (Figure 2A). We also identified 35,565 intron retention events that had at least five intron-reads (i.e., reads mapped within introns) and more than 80% of the intron region covered by intron-reads. Among all these AS events, 45.3% had already been annotated in Arabidopsis genes (TAIR10), and the remaining 54.7% were identified as novel AS events. Based on all identified events, we found that about 49.4% of intron-containing genes were alternatively spliced under salt stress. Intron retention was the most prevalent AS event under salt stress, although most intron retentions had relatively low read coverage compared to the read coverage of exons (Figure 2B). This is consistent with the intron-retention background in Arabidopsis that was recently reported [11]. Following intron retention, the alternative 5' and 3' splice sites were relatively prevalent compared with the other types of AS events. Sequence analysis of alternative 5' splice sites (5'SSs) and alternative 3' splice sites (3'SSs) revealed that these activated splice sites were still associated with GU and AG dinucleotides (Figure 2C). Moreover, we found that the occurrence of these alternative 5'SSs and 3'SSs was enriched in the downstream and upstream 4 bp regions of the dominant 5'SSs and 3'SSs (Figure 2D), respectively. These features of alternative 5'SSs and 3'SSs are consistent with those found in the human genome [12]. It is noteworthy that when correlating exon skipping events to alternative 5'SSs and 3'SSs, we found that about ~17% of the skipped exons simultaneously had alternative 5'SSs or 3'SSs. This percentage was significantly higher than that expected for random sampling of all annotated exons (the probability of random occurrence is 0.02%, Fisher Exact Test, p < 0.001). This enrichment suggests a coordinated occurrence of exon skipping and alternative splice-site selection.
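The junction and intron-retention filters described in this section lend themselves to a compact illustration. The following Python sketch applies the two stated junction criteria (overhang of more than 20 bp and at least two supporting reads) and the intron-retention criteria (at least five intron reads and more than 80% of the intron covered); the record layouts and field names are assumptions made for the example and do not correspond to the actual pipeline or to TopHat's output format.

```python
# Illustrative filtering of candidate splice junctions and intron-retention events,
# following the criteria stated in the text. Record layouts are invented for the example.

def filter_junctions(junctions, min_overhang_bp=21, min_reads=2):
    """Keep junctions with an overhang of more than 20 bp and >= 2 supporting reads."""
    return [j for j in junctions
            if j["overhang"] >= min_overhang_bp and j["supporting_reads"] >= min_reads]

def filter_intron_retention(introns, min_intron_reads=5, min_covered_fraction=0.8):
    """Keep introns with at least five intron-mapped reads and more than 80% of
    the intron region covered by intron reads."""
    kept = []
    for intron in introns:
        length = intron["end"] - intron["start"] + 1
        covered = intron["covered_bases"] / length
        if intron["intron_reads"] >= min_intron_reads and covered > min_covered_fraction:
            kept.append(intron)
    return kept

# Example usage with toy records:
junctions = [{"overhang": 25, "supporting_reads": 3},
             {"overhang": 10, "supporting_reads": 7}]
introns = [{"start": 101, "end": 200, "intron_reads": 6, "covered_bases": 95}]
confident_junctions = filter_junctions(junctions)
retained_introns = filter_intron_retention(introns)
```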
This result suggests a coordinated occurrence of exon skipping and alternative splice site selection.

Salt stress enhances AS

We next compared the difference in AS between the control and NaCl treatments. We found that the number of AS events in salt-treated plants was obviously higher than that in the control plants (Figure 3), consistent with a previous report [8]. We ran a Fisher's Exact Test on the junction-read counts/intron-read counts (the latter only for intron retention) and the corresponding exon-read counts between the control and the treatments and identified 2065 AS events (including 279 alternative 5'SSs, 486 alternative 3'SSs, 102 exon-skipping, and 1198 intron retention events) from 1088 genes that were significantly over-represented in NaCl-treated plants (Additional files 4, 5, 6, 7 and 8). In contrast, we identified only 1320 AS events (including 184 alternative 5'SSs, 247 alternative 3'SSs, 53 exon-skipping, and 836 intron retention events) from 643 genes that were absent from these NaCl-treated plants (Additional files 4, 9, 10, 11 and 12). These data indicated an overall promotion of AS by salt stress.

Changes in splicing patterns associated with stress response

To investigate the potential influence of salt-stress-induced AS on cellular processes, we analyzed functional categories and pathways of the genes with differential AS under salt stress. We identified 1636 differential alternative splicing (DAS) genes in seedlings treated with 50, 150 or 300 mM NaCl, of which 28.3% were found in the seedlings from at least two of these treatments (Figure 4A). An analysis of functional categories using the software DAVID [13,14] revealed that these differentially spliced genes were involved in several biological processes, including responses to abiotic stimulus and RNA processing, suggesting that salt stress may impact biological processes through changing pre-mRNA splicing (Additional file 13). In particular, the response-to-abiotic-stimulus functional category was markedly enriched among the DAS genes, and was observed in the seedlings in all the salt stress treatments (Figure 4B and Additional file 14). The results suggested that AS under salt stress was not a random process. Rather, it was associated with the stress response. Indeed, further analysis using Mapman [15] suggested that genes with aberrant splicing in NaCl-treated seedlings were involved in various stress response pathways, including hormone-signaling pathways, MAPK-signaling pathways, and transcriptional regulation of stress responses (Figure 4C and Additional file 15). Notably, some important genes (such as ERD10, RD22, ATGSTF10, ATCPK32, CIPK3 and ERD14) involved in stress responses were differentially alternatively spliced in the NaCl-treated plants (Figure 4D). Among them, ATCPK32 is an ABA signaling component that regulates ABA-responsive gene expression via ABF4 [16], and ERD10 is induced by low temperature and dehydration [17]. Both genes showed decreased retention of their first introns under salt stress. In contrast, the other three genes (ERD14, RD22, and ATGSTF10) involved in abiotic stress responses [18][19][20] showed increased intron retention under salt stress. These intron retention events were validated by RT-PCR using intron-flanking primers. The amount of the corresponding PCR products was either increased or decreased under salt stress, consistent with the RNA-seq data.
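The 2 × 2 Fisher's Exact Test used for these comparisons can be sketched as below. This is an assumed layout of the contingency table (junction or intron reads versus exon reads, control versus treatment); the authors performed the tests in R.

```python
# Minimal sketch of the differential-AS test described above (an assumption of
# how the 2x2 contingency table could be set up; the authors ran the test in R).
# Counts are junction reads (or intron reads, for retention) vs. exon reads.
from scipy.stats import fisher_exact

def differential_as(junction_ctrl, exon_ctrl, junction_salt, exon_salt, alpha=0.05):
    """Return (odds_ratio, p_value, significant) for one AS event."""
    table = [[junction_ctrl, exon_ctrl],
             [junction_salt, exon_salt]]
    odds, p = fisher_exact(table)
    return odds, p, p < alpha

# Toy example: junction support rises under salt relative to exon coverage.
print(differential_as(junction_ctrl=3, exon_ctrl=200,
                      junction_salt=40, exon_salt=210))
```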
Sequence analysis of these intron-retained transcripts suggested that all of these intron retentions could generate premature stop codons. Therefore, a decrease or increase in intron retention was predicted to increase or decrease, respectively, the abundance of the functional transcripts. Since these genes and many other genes with significant intron retention (Additional files 8 and 12) have been suggested to play roles in stress responses and are induced by abiotic stresses, an increase in their functional transcript levels (e.g., ATCPK32 and ERD10) is likely to have positive effects on salt tolerance, whereas a decrease in the functional transcript levels of other genes (e.g., ERD14, RD22 and ATGSTF10) could have negative effects on salt tolerance in plants. These results suggested that alteration of AS in stress-responsive genes might impact a plant's tolerance to salt stress. We further compared the functional categories of DAS genes with those of genes without DAS. This comparison clearly revealed that different functional categories are over-represented in the two populations (Additional files 13 and 16). Generally, among genes that produce alternative transcripts, several Gene Ontology (GO) categories related to stress, such as 'response to metal ion', 'response to abiotic stimulus', 'response to cadmium ion', 'RNA splicing' and 'RNA processing', are over-represented. On the other hand, among genes that are not alternatively spliced, several functions related to housekeeping, such as 'protein transport', 'DNA repair' and 'cell wall organization', are over-represented. The proportion of genes that undergo AS in the stress-related and RNA processing categories is much higher than the proportion of genes not alternatively spliced in these categories, further supporting the notion that stress-related genes are more predisposed to pre-mRNA processing than are genes involved in basic cellular functions (Figure 4E).

AS and gene expression are separately regulated in response to salt stress

From the RNA-seq data, 1,368, 1,901 and 2,729 genes were defined as differentially expressed (DE) in the 50, 150 or 300 mM NaCl treatments, respectively, relative to the control (p < 0.01, or fold change > 2 and p < 0.05) (Additional file 17). The differentially expressed genes identified in our study overlapped with those identified by other groups based on microarray analyses of salt-stressed Arabidopsis seedlings (data from the Genevestigator database), indicating that the salt-stress-induced gene regulation found here was comparable to that of other studies (Additional file 18). Interestingly, when compared with the DAS genes, only 207 DE genes also exhibited significantly changed intron retention, alternative 5'SSs, alternative 3'SSs or exon skipping under NaCl treatments (Figure 5A). Functional categorization of these 207 [...]

[Figure 3 caption: The counts of each type of AS event in the control and in the 50, 150 or 300 mM NaCl treatments. The numbers of alternative 5'SSs, 3'SSs and exon-skipping events are higher in the NaCl treatments than in the control treatment. The green/blue bars represent forward and reverse sequencing reads. 5'SS, alternative 5' splice site; 3'SS, alternative 3' splice site.]

Nonetheless, these co-regulated genes account for only a relatively small portion of all DAS or DE genes in Arabidopsis. This finding suggested that AS and gene activation could be separately regulated in response to salt stress.
Indeed, analysis of the DAS and DE genes confirmed that the over-represented functional categories differed largely between the two groups, revealing separate regulation of gene expression and AS in response to salt stress (Additional files 19 and 20). For example, some functional categories, e.g., 'RNA splicing' and 'RNA processing', were over-represented only among DAS genes, while other categories, such as 'transcription' and 'response to hormone stimulus', were found among the DE genes (Figure 5C).

Frequent alteration of AS patterns of SR splicing factors under salt stress

Whereas genes involved in RNA splicing are mostly not regulated by salt stress at the expression level, these genes are frequently alternatively spliced under salt stress. AS of these splicing-related genes could therefore represent an independent means of regulating genes in response to salt stress. Strikingly, we identified 15 splicing factors with changes in AS under salt stress (Additional file 21).

[Figure 4 caption (fragment): Representative AS events in six stress-responsive genes validated by RT-PCR and visualized in the IGV browser. In the RT-PCR validation, the red asterisk (*) on the right denotes the alternative splice form. The red arrow at the top indicates an increase (pointing upward) or decrease (pointing downward) in AS events at that salt concentration. In the IGV visualization, the exon-intron structure of each gene is given at the bottom of each panel. The grey peaks above the exon-intron structure indicate the RNA-seq read density across the gene. The red arrows represent alternative splice sites. The blue arcs in CIPK3 indicate splice junction reads that support the junctions. (E) Enrichment of biological processes in DAS genes and genes without DAS. The top 10 functional categories in DAS genes are shown. In the stress-related and RNA processing categories, the proportion of genes that undergo AS is much higher than the proportion of genes that are not alternatively spliced.]

Ten of these splicing factors encode SR (serine/arginine-rich) proteins. SR splicing factors play key roles in the execution and regulation of pre-mRNA splicing in plants. In Arabidopsis, there are a total of 18 SR proteins [21,22]. Previous studies suggested that pre-mRNAs of SR protein genes were frequently alternatively spliced under environmental stress, which is thought to alter the splicing of their targets and result in adaptive transcriptome changes in response to environmental conditions [23,24]. We validated six of these splicing factors by RT-PCR and visualized them using the IGV junction browser (Figure 6). Among the visualized genes, four SR genes (AT-RSP40, AT-RSP41, AT-RS2Z33 and AT-SCL33) exhibited a decrease in AS events under salt stress, and two SR genes (AT-RSP31 and AT-SCL30A) exhibited an increase in AS events under salt stress (Figure 6). The intron retentions of AT-RS2Z33 and AT-RSP40 were detected in the second intron in plants under control conditions, but were weakly present in the plants treated with NaCl. This was also verified by RT-PCR using a forward primer in the third exon and a reverse primer in the second exon. The intron retention in both genes occurred in the 5'UTR region, which would lead to abnormal transcripts with long 5'UTRs that could interrupt translation and lead to reduced synthesis of the protein.
The decreased intron retention in these two genes under salt stress should therefore lead to a decrease in the level of the corresponding long-5'UTR transcripts, which could consequently increase the abundance of their functional transcripts and proteins. The intron retentions of AT-RSP41 and AT-SCL33, which occurred in the third and fourth introns respectively, were clearly detected in the control but were barely present in samples treated with 300 mM NaCl. This was validated by RT-PCR using intron-flanking primers. Sequence analysis revealed that both intron retention events introduced premature stop codons (PTCs) and generated truncated proteins. Therefore, the decreased intron retention in both genes under salt stress could lead to a decrease in abnormal transcripts and an increase in functional transcripts. An alternative 3'SS and 5'SS were found in the third intron of AT-SCL30A (AT3G13570) under the 150 mM NaCl treatment, while they were weakly present in the control. This observation was validated by RT-PCR using a forward primer in the third/fourth exon and a reverse one covering the splice junction (Figure 6). Further detailed analysis revealed that the alternative 3'SS and 5'SS actually introduced a novel exon (not annotated in the TAIR10 Arabidopsis genome) that was inserted into the region between the third and fourth exons and generated a novel isoform. Sequence analysis of this isoform suggested that this exon insertion would generate PTCs and thus could encode a truncated protein composed of 120 amino acids. Therefore, this exon insertion under salt stress could lead to a decrease in the functional transcripts. Nonetheless, it is unclear whether this novel isoform has any function. Finally, we identified an alternative 3'SS in the second intron of AT-RSP31, with an increased level under the 300 mM NaCl treatment. This observation was validated by RT-PCR using a forward primer covering the splice junction and a reverse one in the second exon (Figure 6). This alternative 3'SS (not annotated in the TAIR10 Arabidopsis genome) extends the third exon into the next intron and thus generates a larger exon. Sequence analysis of this isoform suggested that this alternative 3'SS would introduce PTCs and thus could encode a truncated protein. It is also unclear whether this truncated protein has any function.

Discussion

Through comprehensive transcriptome analysis of high-throughput RNA-seq data, in this study we disclosed features of genome-wide AS in Arabidopsis under salt stress. Our analysis suggests that 49% of the intron-containing genes in the Arabidopsis genome are alternatively spliced under salt-stress conditions. Moreover, we found that AS is increased by salt stress and that 10% of the intron-containing genes showed significantly differential AS under salt-stress conditions. The analysis of functional categories demonstrated that genes with differential AS are associated with responses to stress and RNA splicing. Finally, we observed that genes encoding splicing factors, i.e., SR proteins, are subject to frequent and specific AS under salt stress.

An overview of AS in Arabidopsis under salt stress

Recent studies using massively parallel RNA sequencing revealed that a large percentage of genes in Arabidopsis undergo AS [10,11], which could significantly increase the plasticity of the transcriptome and proteome diversity. In this study, we conducted a systematic analysis of the transcriptome of salt-treated Arabidopsis plants.
Our data revealed that, under salt stress, 49% of the intron-containing genes are alternatively spliced. This number is higher than that reported by Filichkin et al. (42%) [10], but very close to that reported by Li et al. (48%) [25], and lower than a recent report that 61% of multi-exonic genes were alternatively spliced, as determined with a normalized cDNA library that facilitated the detection of AS events in low-abundance transcripts [11]. This marked AS under salt stress could provide molecular plasticity for the plants to adapt to stress conditions. In this study, we found that intron retention and alternative 5'SSs/3'SSs are much more prominent than exon skipping and other types of AS. These observations are consistent with the general view of AS in Arabidopsis reported previously [10,11]. Importantly, we uncovered two novel features of AS in Arabidopsis. First, alternative 5'SSs/3'SSs tend to occur around the downstream or upstream 4 bp region of the dominant (canonical) 5'SS and 3'SS (Figure 2D). A similar AS pattern was also reported in the human genome [12], suggesting the conservation of this AS pattern in eukaryotes. Second, we found a coordinated occurrence of exon skipping and alternative splice site selection. We thus proposed a model in which exon skipping and alternative splice site selection are coupled. We suggest that all the splice sites surrounding the dominant ones have the potential to be used as alternative splice sites. These include the splice sites located in the next or preceding exon, which would thus cause exon skipping. Previously, exon skipping and alternative splice site selection were usually considered two independent AS events, and few links between them had been reported. The discovery of the linkage between these two AS events provides a novel perspective on AS and its regulation.

[Figure 6 caption: Eight AS events in six SR genes validated by RT-PCR and visualized by the IGV browser. In the RT-PCR validation, the grey asterisk (*) on the right denotes the alternative splice form. The red arrow on top indicates that an increase (pointing upward) or decrease (pointing downward) in AS events is exhibited at that salt concentration. In the schematic exon-intron structure below each gel picture, the blue bars represent exons and the red bars represent splice junctions. The green arrows indicate primers designed for RT-PCR validation. In the IGV visualization, the exon-intron structure of each gene is given at the bottom of each panel. The grey peaks above the exon-intron structure indicate the RNA-seq read density across the gene. The red arrows represent alternative splice sites. The blue arcs indicate splice junction reads that support the junctions. 5'SS, alternative 5' splice site; 3'SS, alternative 3' splice site.]

Are stress-induced changes in splicing patterns stress-associated acclimation or damage?

We found that AS events were obviously increased in Arabidopsis under salt stress. This finding is consistent with some previous studies on AS under environmental stresses [5]. For example, cDNA sequencing results indicated that the number of AS events was significantly higher in Arabidopsis plants exposed to different stresses, particularly low temperature, than in control plants [8]. This increased AS under stress conditions raises an important question: is the increase an acclimation response, or merely a consequence of splicing errors caused by stress damage? We tend to believe that the increase comes from splicing errors, based on the following reasons.
First, in another study on the effects of depletion and overexpression of one core component of the splicing machinery (SAD1, Sm-like protein 5) on pre-mRNA splicing and stress tolerance [26], we found that the increase or decrease of AS in many stress-related genes can be dynamically controlled by the dosage of SAD1; moreover, the increase and decrease in AS are closely linked to the sensitivity and tolerance of the plants to stress, respectively. Therefore, we considered that increased AS could be a result of inaccurate splicing, which could weaken the function of the corresponding genes by decreasing the functional transcripts. In contrast, decreased AS could be an acclimation contributing to stress resistance. Secondly, we did observe a stress-induced deregulation of the splicing machinery. In our study, we noticed the down-regulation of U6 snRNAs under salt stress in quantitative RT-PCR assays (Additional file 22). The U6 snRNA is a core component of the spliceosome and is required for its assembly and catalytic activity during pre-mRNA splicing [27,28]. A decrease in the level of this snRNA would likely compromise the assembly of the spliceosome and its catalytic activity [29]. Thirdly, most stress-induced splicing variants may not be translated into functional proteins. Similarly, some important genes (such as ERD14, RD22 and ATGSTF10) that are involved in abscisic acid (ABA) or salt stress responses show increased intron retention under salt stress conditions. These transcripts were predicted to generate a premature stop codon that would lead to non-functional mRNAs or proteins, although we currently cannot rule out the possibility that some of these truncated proteins may still have certain functions in plant salt tolerance. Thus, we suggest that the stress-induced increase in AS could be ascribed to splicing errors or inaccuracies caused by stress. Nevertheless, if the increase in AS were merely a nonspecific consequence of stress damage, a random distribution among genes would be expected. However, our data, along with previous reports, demonstrated that genes associated with the stress response tend to be alternatively spliced under stress conditions (Figure 4B). It is known that salt stress or other abiotic stresses can activate the expression of a large number of plant stress-responsive genes that are not expressed or are expressed at lower levels under normal non-stressful conditions [30,31]. With the simultaneous production of a large amount of these stress-inducible pre-mRNAs, cells would need to immediately recruit a significant amount of splicing factors and other factors for their co-transcriptional or post-transcriptional processing. This imposes a huge burden on the splicing machinery and, as a result, a significant portion of these transcripts fails to be processed adequately when the splicing machinery is compromised. The discussion so far covers only the global changes in AS under salt stress conditions. It should be noted that there are indeed specific cases in which AS plays a functional role in regulating the response and tolerance of plants to stress. Such cases have been described in the last few years (reviewed in [5]). This functional role can also be seen in the splicing of several SR proteins, as discussed below.

Pre-mRNA splicing of SR genes under stress conditions

The AS pattern of several SR proteins has been shown to change markedly under various abiotic stress conditions, including temperature stress, high salinity and high light irradiation [21,32,33].
In this study, we identified one-third of the SR genes (six SR genes from four SR families) that showed clear changes in AS under salt stress. This number is higher than that reported before and is probably attributable to the increased sensitivity of the sequencing technology used in the current study. Interestingly, we clearly identified four SR genes (AT-RS2Z33, AT-RSP40, AT-SCL33 and AT-RSP41) that showed decreased intron retention under salt stress (Figure 6). Sequence analysis revealed that all the splice variants with reduced abundance under salt stress were aberrant transcripts with premature stop codons that may not produce functional proteins. A decrease in these aberrant transcripts and a simultaneous increase in the functional transcripts of these SR genes could be an acclimation response to stress that may subsequently help to sustain a positive feedback loop to increase the splicing efficiency and the production of functional proteins to combat the stress. Consistently, our recent study demonstrated that mutations of AT-RSP40 or AT-RSP41 led to sensitivity to salt stress, which implied a positive role of AT-RSP40 and AT-RSP41 in salt stress tolerance, probably via regulating the pre-mRNA splicing of certain stress tolerance genes [34]. We predict that regulating the expression of some of these SR genes or other splicing factors may increase plant tolerance to salt stress by enhancing the correct splicing of salt tolerance genes. Our recent study [26] and a few other studies showed that over-expression of certain splicing factors could indeed increase plant tolerance to salt and other stresses [21,32,33].

Conclusions

Through analyzing global changes in AS under salt stress, we first identified ~49% of all intron-containing genes as alternatively spliced under salt stress, 10% of which experienced significant differential alternative splicing (DAS). We found that most DAS genes were not differentially regulated by salt stress at the expression level, suggesting that AS may represent an independent layer of gene regulation in response to stress; DAS genes were associated with specific functional pathways, such as the pathways for the responses to stresses and RNA splicing. Finally, we revealed that serine/arginine-rich (SR) splicing factors were frequently and specifically regulated by AS under salt stress, suggesting a complex loop in AS regulation for stress adaptation. Therefore, our study provides a comprehensive view of AS under salt stress and reveals novel insights into the potential roles of AS in plant responses to salt stress.

Plant materials and growth conditions

Seeds of Arabidopsis (ecotype C24) plants were surface-sterilized with 50% bleach in 0.01% Triton X-100 and planted on ½ Murashige and Skoog (MS) medium agar plates supplemented with 3% sucrose. After a 4-day stratification at 4°C, the plates were placed in a chamber (Model CU36-L5, Percival Scientific, Perry, IA, USA) under 16 h white light (~75 μmol m−2 s−1) and 8 h dark conditions at 21 ± 1°C for germination and seedling growth. Twelve days after being incubated at 21 ± 1°C in the chamber, twenty whole seedlings were transferred from the agar plate onto filter paper (Catalog No. 05-714-4, Fisher Scientific, Pittsburgh, PA, USA) saturated with 20 ml of 0, 50, 150, or 300 mM NaCl solution in a 150 × 15 mm petri dish and incubated in the same chamber for 3 h before being harvested and frozen in liquid nitrogen for total RNA extraction [35].
RNA extraction, library construction and sequencing

Total RNAs were extracted from seedlings with or without salt stress treatment using the TRIzol Reagent (Catalog No. 15596-026, Invitrogen). Polyadenylated RNAs were isolated using the Oligotex mRNA Midi Kit (Catalog No. 70042, Qiagen). The RNA-seq libraries were constructed using the Illumina Whole Transcriptome Analysis Kit following the standard protocol (Illumina, HiSeq system) and sequenced by the Bioscience Core Facility at KAUST on the HiSeq platform to generate high-quality paired-end reads.

Read alignment and junction prediction

TopHat [36] was used to align the reads against the Arabidopsis genome sequences and annotated gene models downloaded from TAIR10 (http://www.arabidopsis.org/) with default parameters. TopHat was also used to predict the splice junctions. Based on the gene annotation information, the splice junctions were classified into known and novel splice junctions. In addition, the expressed genes or transcripts were identified with the Cufflinks software [37].

Determination of the criteria for filtering positive junctions

In the initial prediction, there were a great number of novel junctions that had short overhangs (i.e., fewer than 20 bp) with the corresponding exons, while most of the annotated junctions had larger overhangs, with an enrichment at ~90 bp (Additional file 23A). Moreover, the novel junctions had relatively low coverage compared to the annotated junctions (Additional file 23B). In general, junctions with a short overhang size and low coverage are considered false positives, which are often caused by non-specific or erroneous alignment. Therefore, to distinguish between true splice junctions and false positives, we assessed the criteria using simulated data based on a set of randomly constituted junctions. To do this, we first generated a set of 80,000 splice junctions in which annotated exons from different chromosomes were randomly selected and spliced together in silico, along with 119,618 annotated junctions from the gene annotation. Since the length of our sequencing reads was 101 bp, the splice junction sequences were set to be 180 bp long (90 nt on either side of the splice junction) to ensure at least an 11 bp overhang of a read mapping from one side of the junction onto the other. Alignments to the random splice junctions were considered false positives, because such junctions are thought to rarely exist compared to annotated junctions. The alignments of the raw RNA-seq reads to the random junctions revealed that 99.9% of the false positive junctions had overhang sizes of fewer than 20 bp. In sharp contrast, the alignments to the annotated junctions indicated that most (98.6%) of the annotated junctions had larger overhang sizes (Additional file 24A). In addition, we estimated that 56.9% of the false positive junctions had only one read spanning the junction, while the annotated junctions had higher read coverage (Additional file 24B). To minimize the false positive rate, we required the overhang size to be greater than 20 bp with at least two reads spanning the junction. Using these criteria, we filtered out almost all false positive junctions (Additional file 24C).
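The decoy-junction calibration described above could be implemented along the following lines. This is an assumed sketch of the in silico junction construction only (random exon pairing across chromosomes, 90 nt flanks), not the authors' actual scripts.

```python
# Minimal sketch of building decoy splice junctions for threshold calibration.
# Assumes `exons_by_chrom` maps a chromosome name to a list of exon sequences.
import random

def make_decoy_junctions(exons_by_chrom, n=80000, flank=90, seed=7):
    """Pair random exons from different chromosomes into 2*flank bp decoy junctions."""
    rng = random.Random(seed)
    chroms = list(exons_by_chrom)
    decoys = []
    for _ in range(n):
        c1, c2 = rng.sample(chroms, 2)              # exons from different chromosomes
        left = rng.choice(exons_by_chrom[c1])[-flank:]
        right = rng.choice(exons_by_chrom[c2])[:flank]
        decoys.append(left + right)                 # 180 bp decoy junction sequence
    return decoys

# Reads would then be aligned to these decoys; their overhang and coverage
# distributions characterize false-positive junctions and justify the filters.
```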
Annotation of AS events

JuncBASE [38] was used to annotate all AS events, including cassette exons, alternative 5'SSs, alternative 3'SSs, mutually exclusive exons, coordinate cassette exons, alternative first exons, alternative last exons and intron retention. JuncBASE takes as input the genome coordinates of all annotated exons and all confidently identified splice junctions.

Global comparison of AS

The global comparison of AS among the control (0), 50, 150 or 300 mM NaCl treatments started by equally and randomly re-sampling uniquely mapped reads so that the comparison was performed at the same sequencing depth. The comparison covers two facets: the absolute number of each type of AS event and the number of junction reads assigned to each type of AS event, because both can be used to measure the global changes in AS. Meanwhile, Fisher's Exact Tests in R (http://www.r-project.org/) were used to identify differential representation of each type of AS event, performed on the number of junction reads assigned to each type of AS event.

Identification of differential AS events

Fisher's Exact Tests were also used to identify the differential representation of each individual AS event. For alternative 5'SSs, 3'SSs and exon skipping events, Fisher's Exact Tests were performed on the junction-read counts and the corresponding exon-read counts, comparing the control with the 50, 150 or 300 mM NaCl treatments. Events with p values less than 0.05 were identified as significantly differential events. In addition, AS events that were uniquely identified in the control or in the 50, 150 or 300 mM NaCl treatments were considered significant if there were at least five supporting junction reads; the p value of these events was set to zero. Similarly, for intron retention, Fisher's Exact Tests were performed on the intron-read counts and the corresponding exon-read counts between the control and the 50, 150 or 300 mM NaCl treatments. Events with p values less than 0.001 were identified as significantly differential events. In addition, intron retention events uniquely identified in the control or in the 50, 150 or 300 mM NaCl treatments were considered significant if there was at least 5× sequence coverage and more than 80% of the intron region was covered by intron-reads; the p value of these events was set to zero.

RT-PCR validation

The selected AS and intron retention events were validated by RT-PCR using a set of primers (Additional file 25) designed for each AS event. Total RNAs from the control and the 50, 150 or 300 mM NaCl-treated seedlings were extracted as described above, treated with DNase I, and reverse-transcribed to cDNA (random priming) using a standard protocol (SuperScript II reverse transcriptase, Invitrogen).

[Legend fragment: ... classification of genes was done with the DAVID software. The top 20 functional annotations, ordered by enrichment score, were selected for the 2-D view, which indicates that genes with abnormal splicing were strikingly enriched in the response-to-abiotic-stress category.]
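The depth-matched re-sampling mentioned above (18 million properly paired reads per library, as noted earlier) can be pictured with the small sketch below; it is an assumed implementation rather than the authors' script.

```python
# Minimal sketch of depth-matched resampling for the global AS comparison
# (assumed implementation; the 18-million-pair target follows the description above).
import random

def subsample_read_pairs(read_pair_ids, n_target=18_000_000, seed=42):
    """Randomly draw the same number of properly paired reads from each library."""
    if len(read_pair_ids) < n_target:
        raise ValueError("library has fewer properly paired reads than the target")
    rng = random.Random(seed)
    return rng.sample(read_pair_ids, n_target)

# Each library would be subsampled with the same target before counting AS events,
# so that libraries are compared at an identical sequencing depth.
```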
A Full-Scale Field Study on Bearing Characteristics of Cast-in-Place Piles with Different Hole-Forming Methods in Loess Area

This paper presents the results from a full-scale field study of 3 different types of cast-in-place piles: rotary drilling piles (RDPs), manual digging piles (MDPs), and impact drilling piles (IDPs), for a bridge construction project of the Wuqi–Dingbian Expressway in Shaanxi. The results indicate that, under similar conditions, MDP exhibits the largest bearing capacity (11000 kN) in the loess area, followed by RDP (9000 kN) and IDP (8000 kN). All tested values exceed the estimated value (7797.9 kN), indicating that the calculation formula for bearing capacity recommended by the Chinese standard is safe and conservative. During the load transfer process, the axial force attenuation rate of the pile body increases with pile side resistance. The average attenuation rate of MDP is the largest (24.2%), followed by RDP (19.72%) and IDP (16.69%). The bearing characteristics of these test piles rely mainly on pile side resistance; the manual digging method created the least disturbance to the soil around the pile, and its rough hole wall enhances the pile-soil interactions. Hole-forming methods mainly affect the exertion of pile side resistance rather than pile end resistance. In view of pile side resistance and pile end resistance not taking effect at the same time, the degree of exertion of these 2 resistances should be considered when designing cast-in-place piles in loess areas, and different partial coefficients should be used.

Introduction

Pile foundations have been used for thousands of years, as evidenced by early pile foundation sites discovered in the Republic of Chile. Up to now, pile foundations remain the most widely used building foundations or supporting structures [1][2][3][4]. When surface soils are too loose (soft) to support a shallow foundation safely and economically, such geotechnical structures can be used to better distribute the loads through the soil (friction piles) or transmit loads to a stronger soil layer at depth (end-bearing piles) [5][6][7]. As one of the most representative pile forms, cast-in-place piles are widely used in bridge and other engineering fields because of their great advantages (moderate cost, convenient construction, low construction noise, etc.) [8][9][10][11][12]. Loess is widely distributed in Asia (74°N–32°N, based on data from Baidu Encyclopedia), especially in the central and western regions of China. With the continuous development of China's economy and infrastructure under the Belt and Road program, a large number of transportation networks are being built in the loess area, and cast-in-place piles will be widely used in this process. According to incomplete statistics, in China alone, at least 1 million cast-in-place piles are used annually [13][14][15][16][17]. Loess has strong structural characteristics. The construction of cast-in-place piles in loess strata will inevitably destroy the structure of the loess, which will affect the mechanical properties of the loess and the pile-soil relationship [18][19][20][21].
There are many research studies on hole-forming methods and loess in cast-in-place pile foundation engineering, but the influences of hole-forming methods on the bearing capacity of pile foundations are not universal [22][23][24][25][26][27]. Meanwhile, the influence of hole-forming methods on the bearing capacity of the pile foundation is also related to geological conditions, foundation forms, etc. Therefore, the influences of hole-forming methods on the bearing characteristics of cast-in-place piles in loess are uncertain and need further research [28][29][30]. Presently, the cast-in-place piles for bridge engineering in the loess area mainly include rotary drilling piles (RDPs), manual digging piles (MDPs), and impact drilling piles (IDPs) [31][32][33][34]. There are obvious differences between the piles with different hole-forming methods (as shown in Table 1) [35][36][37][38]. Many researchers [39][40][41] have analyzed the characteristics of these piles for different purposes; however, very few studies in the literature offer a side-by-side comparison. On this basis, combined with the actual situation of the Wuqi-Dingbian Expressway test area, static load tests of RDP, MDP, and IDP have been carried out. The influence of the hole-forming methods on the bearing characteristics of the cast-in-place piles is analyzed from the following aspects: (1) pile body settlement; (2) transfer law of axial force along the pile body; (3) distribution of the pile side resistance; and (4) degree of exertion of pile end resistance.

Structural Characteristics of Loess

Structural characteristics are an intrinsic property of soil. The structure of soil essentially includes cementation and composition (as shown in Figure 1). The former reflects the characteristics of the soil skeleton connection, while the latter reflects the geometrical and spatial characteristics of the soil skeleton. Many research studies show that loess has strong structural characteristics [43][44][45]. Loess deformation is mainly elastic before its original microstructure is destroyed under a certain disturbance, and its pore pressure is also elastic pore pressure, which has no effect on the effective stress of the soil skeleton, so the strength of the soil will not change greatly [46,47]. Once the structure of the soil has been destroyed, a series of adverse effects will follow (such as the generation of plastic pore pressure, the decrease of effective stress, and the deterioration of mechanical properties). The construction of piles in loess will inevitably cause great disturbance and destroy the structure of the loess, which will affect the mechanical properties of the loess and the pile-soil relationship.

Project Site and Subsoil Profile

The Wuqi-Dingbian Expressway (Figure 2) is located in Yan'an city and Yulin city, Shaanxi Province, China. The starting point of the expressway is located in Zoumatai, east of Wuqi County, and it ends in Shijingzi, southeast of Dingbian County. The expressway has a total length of approximately 92.22 km and is an important part of the 3rd vertical Dingbian-to-Hanzhong expressway in the "2637" network planned by Shaanxi Province. The abutments on both sides of the test area are located in the loess Liang-Mao region (the loess hilly area can be divided into 2 types according to its shape: the long strip is called "Liang" and the oval or round shape is called "Mao"; © Baidu Encyclopedia), and the topographic relief of the abutment area is small. The ground elevation is 1629.60-1644.59 m, with a relative height difference of about 14.99 m.
The groundwater in the test area is deeply buried, and there is no groundwater within the depth of the drill hole (the drilling depth is 60 m). According to the drilling results, the soils in the test area are all in a slightly wet and hard plastic state. Table 2 gives the detailed characteristics of the soil from the drill hole sampling.

Static Load Test

4.1. Bearing Capacity Estimation. To ensure the safety and reliability of the test, the bearing capacity of the pile (as shown in Table 3) was estimated before the test using formula (1) recommended in the Chinese standard [48]:

Quk = Qsk + Qpk = u Σ (qsik · li) + α · psk · Ap, (1)

where Qsk is the total pile side resistance; Qpk is the total pile end resistance; u is the perimeter of the pile body; qsik is the pile side resistance of the i-th layer of soil around the pile (the friction resistance is the pile side resistance in friction piles); psk is the specific penetration resistance near the pile end, and its value can be directly determined from the standard [48] according to parameters such as the soil properties, the diameter of the pile, and the length of the pile; li is the length of the pile passing through each layer of soil; α is the correction coefficient of pile end resistance (when the pile length is between 10 and 30 m, α is interpolated between 0.75 and 0.90 according to the standard); and Ap is the pile end area.

Test Design and Procedure. From Figure 3, it is noted that the roughness of each pile wall is different due to the different hole-forming methods. Compared with the impact drilling method, the other 2 methods cause less disturbance to the soil around the pile, and the spade and the rotary drilling rig continuously cut the soil in the hole, resulting in a rough hole wall. Compared with IDP, the hole wall of MDP is rougher. Referring to the Chinese standard [49], the on-site static load test (Figure 4) was carried out. The results of the test for the 3 different types of cast-in-place piles (MDP, RDP, and IDP) have been compared and analyzed. The pile diameter and length of the 3 test piles were 1.5 m and 25.0 m, respectively. The diameter and length of the anchor pile (AP) were 1.5 m and 30.0 m, respectively. The test piles were constructed with C30 concrete, and C40 concrete was used to reinforce the part within 1.5 m of the top of the piles. The anchor piles (APs) were all RDPs, and all of them were cast with C30 concrete. The rebar used for the test piles was configured according to the standard [49]. Meanwhile, 3 rows of rebar strain gauges were installed on the sides of the piles, and 7 rebar strain gauges were arranged in sequence along the pile at 0.5 m, 3.5 m, 6.5 m, 11 m, 15.5 m, 20.0 m, and 24.5 m (Figure 5). The distance between 2 adjacent anchor piles was 7.0 m, and the test piles were arranged in the center of the adjacent 4 anchor piles. The rebar used in the anchor piles was also configured according to the standard [49]. To fully study the bearing characteristics of test piles with different hole-forming methods, all 3 test piles were loaded to failure in the field tests.

Ultimate Bearing Capacity of Single Pile. The Q-S curves of the 3 test piles, as shown in Figure 6, are overall steeply descending. According to the standard [49], for such a steeply descending Q-S curve, the inflection point gives the ultimate bearing capacity. Therefore, the ultimate bearing capacities of MDP, RDP, and IDP are 11000 kN, 9000 kN, and 8000 kN, respectively. The corresponding settlements are 10.89 mm, 7.22 mm, and 3.35 mm.
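As a quick numerical illustration of formula (1), the sketch below evaluates the estimate for a 1.5 m × 25 m pile. The soil-layer values passed in the example call are made up for illustration and are not the values from Table 3.

```python
# Minimal sketch of the bearing-capacity estimate in formula (1); layer values
# in the example call are illustrative assumptions, not the paper's data.
import math

def ultimate_bearing_capacity(diameter, layers, p_sk, alpha):
    """Q_uk = u * sum(q_sik * l_i) + alpha * p_sk * A_p.

    diameter : pile diameter (m)
    layers   : list of (q_sik in kPa, l_i in m) for each soil layer along the pile
    p_sk     : specific penetration resistance near the pile end (kPa)
    alpha    : correction coefficient of pile end resistance
    Returns the estimated capacity in kN.
    """
    u = math.pi * diameter                      # pile perimeter
    a_p = math.pi * diameter ** 2 / 4.0         # pile end area
    q_sk = u * sum(q * l for q, l in layers)    # total pile side resistance
    q_pk = alpha * p_sk * a_p                   # corrected pile end resistance
    return q_sk + q_pk

# Illustrative call for a 1.5 m diameter, 25 m long pile:
print(ultimate_bearing_capacity(1.5, [(55, 10.0), (65, 15.0)], p_sk=2500, alpha=0.83))
```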
The bearing capacities of the 3 test piles obtained from the test are all larger than the estimated value (7797.9 kN), indicating that the bearing capacity estimated by the recommended formula (formula (1)) of the Chinese standard is safe and conservative. However, the bearing capacities of these piles are quite different, which further illustrates that different hole-forming methods have different effects on the bearing capacity of cast-in-place piles. The maximum loads applied to the MDP, RDP, and IDP are 12000 kN, 12000 kN, and 14000 kN, respectively. Meanwhile, the corresponding maximum settlements are 77.49 mm, 72.86 mm, and 63.9 mm. With the removal of the upper load, the final settlement values of MDP, RDP, and IDP are 63.58 mm, 63.72 mm, and 55.51 mm; the pile body rebound displacement of MDP is the largest (13.91 mm), followed by RDP (9.15 mm) and IDP (8.14 mm). In the process of unloading at the pile top, the Q-S curves show a gentle rebound (Figure 6), indicating that the rebound value is closely related to the load applied at the pile top. The rebound of settlement is mainly composed of the elastic compression of the pile, followed by the back friction caused by the restoration of soil structure during the dissipation of frictional forces between piles and soils [50][51][52][53].

Transfer and Attenuation of Axial Force. From the analysis of the field test data, the distributions of axial forces for MDP, RDP, and IDP are shown in Figures 7-9, respectively. Along the pile body, the distribution patterns for these test piles are basically the same, gradually decreasing from top to bottom. Under similar load, the pile end resistance of IDP is the largest, followed by RDP and MDP. The results suggest that IDP mobilizes pile end resistance most effectively, while MDP mobilizes pile side resistance most effectively. Under the action of axial load, the pile top generates axial displacement (settlement), which is the sum of the elastic compression of the pile body and the soil compression at the pile end. When the pile moves downward relative to the soil, an upward pile side frictional resistance acting on the pile is generated. As the load is transferred downward along the pile body, this friction must be overcome continuously, which leads to a decrease of the axial force of the pile body with increasing depth. When the axial force is transferred to the pile end, it is balanced by the support of the pile end soil. The axial force attenuation rate represents the effect of pile side resistance, and the effect of pile side resistance increases with the axial force attenuation rate. The attenuation behaviors of the axial forces of these piles are approximately the same. The axial force attenuation rate of the test pile, ν, is taken as

ν = (pi − pi+1) / pi × 100%, i = 1, 2, ..., n − 1, (2)

where pi is the axial force at the i-th section of the pile body; pi+1 is the axial force at the (i + 1)-th section of the pile body; and n is the number of rebar strain gauge sections. Formula (2) can be used to calculate the axial force attenuation rates of these test piles, as shown in Figure 10.
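The attenuation rates plotted in Figure 10 can be computed from the rebar strain gauge readings as sketched below, under the assumption that formula (2) evaluates the relative drop in axial force between adjacent gauge sections. The example force profile is invented for illustration.

```python
# Minimal sketch of the attenuation-rate calculation in formula (2), assuming
# the rate is evaluated between adjacent strain-gauge sections along the pile.
def attenuation_rates(axial_forces_kn):
    """Return per-segment attenuation rates (%) from top to bottom of the pile.

    axial_forces_kn: axial forces p_1..p_n measured at the n rebar-gauge sections.
    """
    rates = []
    for p_i, p_next in zip(axial_forces_kn, axial_forces_kn[1:]):
        rates.append((p_i - p_next) / p_i * 100.0)
    return rates

# Illustrative force profile (kN) for one load step; the values are made up.
profile = [9000, 7400, 6100, 4700, 3500, 2400, 1600]
rates = attenuation_rates(profile)
print([round(r, 1) for r in rates],
      "max %.1f%%, avg %.1f%%" % (max(rates), sum(rates) / len(rates)))
```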
The variations of the axial force attenuation rates for these test piles are basically the same under all levels of load: first decreasing, then increasing, and finally decreasing. The maximum, minimum, and average attenuation rates of axial force for MDP, RDP, and IDP are 26.13%, 21.22%, and 18.4%; 20.5%, 17.22%, and 14.53%; and 24.2%, 19.72%, and 16.69%, respectively.

[Table note: f a0 is the bearing capacity of the soil strata and q ik is the friction resistance.]

However, along the pile bodies, the variations of pile side resistance are roughly the same: first increasing and then decreasing. The pile side resistances of all test piles reach a peak at the section 11 m from the top of the pile, and the curve of IDP reaches a second peak at the bottom of the pile body. The peaks of these curves are related to the characteristics of the soil around the pile and the pile-soil interactions [54][55][56][57]. As shown in Figure 14, when the load applied to the pile top is the same, the pile side resistance of MDP is the largest, followed by RDP and IDP. The result suggests that the concrete protection wall of MDP is conducive to the development of pile side resistance and the enhancement of pile-soil interactions. Meanwhile, IDP forms a 2-3 mm thick mud cake around the pile during the construction process, which reduces the pile-soil interactions and is not conducive to the exertion of pile side resistance.

The Relationship between Pile Side Resistance and Pile-Soil Relative Displacement. As shown in Figure 15, when the pile-soil relative displacements are less than 5 mm, the pile side resistances of these test piles increase sharply with the displacements; however, as the relative displacements continue to increase, the pile side resistances increase slowly. When the relative displacements are the same, MDP has the largest pile side resistance, followed by RDP and IDP. As shown in Figure 16, the pile end resistances of these test piles increase with the upper loads. Under similar load, IDP's pile end resistance is the largest, followed by RDP and MDP. This indicates that, during the process of applying load, the ratio of pile end resistance to total resistance for IDP is larger than for the other 2 piles, and pile end resistance has the greatest impact on IDP.

Pile End Resistance under Ultimate Bearing Capacity. From the static load test, all the pile side resistances and pile end resistances of these test piles in the ultimate bearing capacity state were obtained. As illustrated in Table 4, when each test pile reaches its ultimate bearing capacity, the ratios of pile side resistance to pile end resistance are not the same. MDP has the largest ratio of pile side resistance in the total resistance, followed by RDP and IDP, while the order for the ratio of pile end resistance is the opposite. Although the bearing characteristics of these 3 test piles depend mainly on the pile side resistance, the degree of exertion of pile side resistance and pile end resistance in the 3 test piles is not the same.
The Change Law of Pile End Resistance with Pile Settlement. Due to the very small compression deformation of the test piles, the settlement of the pile top is used as the settlement of the pile in this paper. From Figure 17, it is noted that when the settlements of the 3 test piles are the same, the pile end resistance of IDP is the largest, followed by RDP and MDP. With the increase of the pile settlements, the pile end resistances of the 3 test piles increase continuously. Among them, the pile end resistance of IDP increases the most, but the pile end resistances of the 3 test piles do not tend to a definite value. This shows that pile end resistance does not reach its limit value during the process of applying the load at the pile top. When the settlements of the piles are less than 5 mm, the differences in the pile end resistances of the 3 test piles are small. However, when the pile settlements are larger than 10 mm, and especially larger than 15 mm, the pile end resistance of IDP is significantly larger than that of RDP and MDP at the same settlement.

The Relationship between Pile End Resistance and Pile Side Resistance. For the 3 test piles under different loads, the pile side resistances and pile end resistances are shown in Figure 18, and the ratios of each resistance to the total resistance are shown in Figure 19. From Figure 18, it is noted that when loads are applied at the top of each test pile, the line graphs of pile side resistance and pile end resistance both show an ascending trend. The continuous increase of the pile side resistance does not tend to a definite value, which suggests that pile side resistance does not reach its limit value after the first loading. In Section 5.1, the ultimate bearing capacity of MDP is 11000 kN, followed by RDP (9000 kN) and IDP (8000 kN), indicating that the pile side resistances of all test piles are not at their maximum when the ultimate bearing capacity is reached. From Figure 19: (1) the pile side resistance ratio of MDP is the largest, while its ratio of pile end resistance is the smallest; (2) the pile side resistance ratio of IDP is the smallest, while its ratio of pile end resistance is the largest. But the variations of the 3 test piles are basically the same. At the later loading stages, the pile side resistance ratios of these piles all show a decreasing trend and the ratios of pile end resistance show an increasing trend, but the trend is not obvious. The results suggest that the 3 test piles in this field test are all friction piles, and that the pile side resistances and pile end resistances are not in the limit state at the last loading. During the early loading at the pile top, especially during the first and second loading, the ratios of pile side resistance of the 3 test piles decrease, while the ratios of pile end resistance increase. The result indicates the complexity of the mechanical properties of the soil and that the development of pile side resistance requires a process. Therefore, when designing cast-in-place piles in loess areas, it is necessary to consider the degree of development of pile side resistance and pile end resistance and then use different partial coefficients for pile side resistance and pile end resistance [58][59][60][61][62][63].
Conclusions

Through the full-scale field study, the bearing characteristics of the 3 types of cast-in-place piles (MDP, RDP, and IDP) under different loads in a loess area are studied in this paper. Based on the analysis of the test results, the following conclusions can be drawn:

(1) Under similar conditions, the ultimate bearing capacity of MDP is the largest (11000 kN), followed by RDP (9000 kN) and IDP (8000 kN). All actual bearing capacities are larger than the estimated value (7797.9 kN), which indicates that the pile bearing capacity estimated by the standard is conservative. The final settlement values of MDP, RDP, and IDP are 63.58 mm, 63.72 mm, and 55.51 mm; after unloading, the pile body rebound displacement of MDP is the largest (13.91 mm), followed by RDP (9.15 mm) and IDP (8.14 mm).

(2) The attenuation rates of axial force for the piles (MDP, RDP, and IDP) are basically the same under all levels of loading, all decreasing first and increasing finally. Among them, the rate of MDP is the largest (average 24.2%), followed by RDP (average 19.72%) and IDP (average 16.69%).

(3) The bearing capacity of all 3 test piles depends mainly on pile side resistance in the loess area. Pile side resistance is not evenly distributed along the pile body, which is related to the properties of the soil around the piles and the pile-soil interactions. Under the same loads, the pile side resistance of MDP is the largest, followed by RDP and IDP, while the order of the pile end resistances of the 3 test piles is the opposite.

(4) Hole-forming methods mainly influence the roughness of the hole wall in the loess area, which determines the value of pile side resistance. Compared with the impact drilling method, the other 2 methods (manual digging and rotary drilling), especially the manual digging method, cause less disturbance to the soil around the pile, resulting in a rough hole wall and enhanced pile-soil interactions.

(5) Different partial coefficients should be adopted rationally for pile side resistance and pile end resistance when designing cast-in-place piles in the loess area, because pile side resistance and pile end resistance do not take effect at the same time.

[Figure and table captions: Figure 1: SEM images of loess (the pore area ratios are shown at the top right corners) [42]: (a) loess; (b) loess SEM image; (c) loess SEM binary image. Figure 14: Pile side resistance of test piles under various loads. Figure 17: The change law of pile end resistance with pile settlement. Table 1: Comparison of all 3 types of cast-in-place piles. Table 2: Properties of the soil in the test area. Table 3: Estimation of bearing capacity for the test piles. Table 4: Pile side and pile end resistance of test piles in the ultimate bearing state.]
Managing Macadamia Decline: A Review and Proposed Biological Control Strategies

Macadamia decline poses a serious economic threat to the macadamia industry. It manifests either as a slow decline, due to infection by Kretzschmaria clavus or Ganoderma lucidum, or as a quick decline, caused by pathogens such as Phytophthora spp., Lasiodiplodia spp., Neofusicoccum spp., Nectria rugulosa, Xylaria arbuscula, Phellinus gilvus, Acremonium recifei [...]

Introduction

Macadamia (Macadamia integrifolia Maiden and Betche) is a valuable nut tree species native to the coastal areas of northern New South Wales and southern Queensland in Australia, with a wide climatic adaptability [1]. Macadamia nuts are famous for their high nutritional value and health benefits, as they contain abundant unsaturated fatty acids (e.g., oleic acid, arachidonic acid, and palmitoleic acid), protein, amino acids, and various vitamins (e.g., vitamins B1 and B2 and nicotinic acid) [2]. As the global demand for macadamia nuts increases, the cultivation of macadamia has expanded into other subtropical and tropical regions, including Australia, China, New Zealand, South Africa, the United States, Brazil, Guatemala, Kenya, Malawi, and Vietnam [3,4], with China owning the largest plantation area, more than 260,000 hectares in 2021, according to the Ministry of Agriculture and Rural Affairs. However, the escalating cultivation of macadamia is accompanied by the risk of various diseases, particularly macadamia decline. Macadamia decline was first observed in the east of Hawaii Island [5], but has become a major challenge to macadamia production in major regions such as Queensland, China, South Africa, and Kenya [6]. This disease includes slow decline and quick decline, with the latter being more prevalent. Macadamia decline is caused by multiple pathogens [7,8], leading to several symptoms, including root rot, leaf blackening, branch wilt, and seedling damping-off [8], so it represents a significant threat to macadamia production and profitability. For example, the macadamia decline caused by Phytophthora cinnamomi resulted in a yield reduction of approximately 60% in Kenya [9] and an annual economic loss exceeding AUD 20 million in Australia [10]. Despite many years of effort in suppressing pathogens through the use of agrochemicals, resistant cultivars, and agronomic measures, the prevention and control of macadamia decline remain a large challenge. For instance, while Fosphite® application can alleviate symptoms of macadamia decline, it only extends the productive life of macadamia trees by approximately 700 days [11]. The cultivation of disease-resistant cultivars is both expensive and time-consuming [12], and agronomic measures cannot entirely prevent disease occurrence [13]. Consequently, macadamia decline remains a critical global issue that necessitates the adoption of an environmentally friendly control strategy.
The microbiome plays a key role in plant health and disease [14].The utilization of beneficial biological control microbes presents a promising alternative to combat soilborne diseases [15,16].Numerous biological control agents, including Bacillus, Pseudomonas, Trichoderma, Streptomyces, Flavobacteria, Enterobacter, Actinomycetes, Serratia, Alcaligenes, and Klebsiella strains, function as disease antagonists, rhizosphere colonizers, and plant growth promoters [17,18].Many of these have been commercially exploited for the control of plant diseases [19].Synthetic microbial communities (SynComs) have demonstrated greater efficacy than single strains in long-term colonization and functionality within the rhizosphere soil [20,21].These SynComs can provide antibiotics, secondary metabolites, enzymes, and other compounds with pathogen inhibitory effects [22].Nevertheless, the role of biological control agents, particularly SynComs, in preventing macadamia decline remains poorly understood [23].This review aims to (i) synthesize current knowledge on macadamia decline and its control strategies, and (ii) explore the potential application of biological control in decline disease prevention. Symptoms and Pathogens Macadamia trees face two distinct forms of decline diseases, i.e., slow decline and quick decline.The slow decline, caused by Kretzschmaria clavus [5] or Ganoderma lucidum [24] is characterized by a progressive onset of symptoms such as leaf discoloration, leaf drop, and branch dieback.These two pathogens induce slightly different symptoms, with K. clavus producing small, mushroom-shaped lesions on the roots and basal trunks of the infected trees, marked by obvious black lines [5], but G. lucidum producing large brown basidiocarps on the lower trunk or above decaying roots [8]. Infection Sources Investigations conducted in the forests adjacent to macadamia orchards in Hawaii first revealed the presence of fruiting bodies of K. clavus on both dead and diseased trunks of melochia and trumpet trees (Figure 2a) [37].Isolates of K. clavus, sourced from these diseased trees within these forests, had the ability to infect macadamia trees [37], suggesting that they could be a significant source of infection for macadamia.Other recognized sources of infection include sporangia and zoospores of P. tropicalis, basidiospores of P. gilvus, stromata of K. clavus, ascospores, marconidia and microconidia of N. rugulosa and X. arbuscula, as well as conidia of A. recifei [5,38,39].This suggests that the fruiting bodies from diseased macadamia may be the primary pathosystem for the decline disease [40].When macadamia orchards become infected with these pathogens, diseased tissues and fruiting bodies generate propagules on exposed roots [8].Moreover, certain pathogens are more likely to attack tree trunks from the base upwards, subsequently spreading to the upper trunk and branches [25,32].Consequently, the macadamia decline pathogens can be extracted from the roots, rhizosphere, raceme lesions, leaf, and stem [10].The presence of fruiting bodies on living trees indicates that these fungi may reside inside the trunk for an extended period [8].When X. arbuscula and P. 
gilvus fruiting bodies were excised from diseased trees, over 90% of the cross-sectional surface was decayed [25,32]. Fruiting bodies of diseased tissues can be washed away by rainwater and spread over long distances [41]. The pathogens can persist in soils for more than 10 years and have the potential to infect the roots of neighboring healthy trees when macadamia is planted [42].

Internal Damage

Macadamia trees have substantial resilience and can sustain growth to some degree in the absence of conspicuous aboveground symptoms. This resilience is primarily due to several factors. First, the high crystallinity of cellulose in the plant can provide a certain physical barrier to prevent the rapid invasion of pathogens. Second, the high C/N ratios of tree biomass may inhibit the proliferation of pathogens in the heartwood of trees [40]. Despite these natural defenses, the pathogens can still infect macadamia trees, since their roots are short and most proteoid roots are close to the soil surface [43]. This kind of root system is susceptible to infection by pathogens such as X. arbuscula, resulting in approximately 10% of roots becoming rotted after a period of five years [5]. The root system is pivotal in coordinating the tree's response to various stresses, including preventing pathogen attacks [44]. Therefore, damage to the root system by pathogens would facilitate pathogen proliferation within the tree and potentially impair the functions of xylem and cambium tissues. Xylem, an organ that is responsible for nutrient and water transportation, would be adversely affected when macadamia decline occurs [45]. Consequently, the damaged roots would result in diminished water and mineral nutrient transportation from the soil to the leaves [29], facilitating pathogen spread to the tree trunk and leaves (Figure 2b). When pathogens inflict severe damage on the vascular system, trees may die in a relatively short period due to a limited water supply for their growth and functionality [8]. During the terminal stages of decline, pathogens would infect 90-100% of the bark and 58-97% of the wood [32,35].

External Factors

Environmental factors can significantly influence the emergence and spread of macadamia decline. Temperature is a crucial factor, as it affects the growth and sporulation of pathogens. High temperatures, particularly those exceeding 30 °C, can cause a rapid surge in the number of sporangia of pathogens like P.
cinnamomi [46,47] (Figure 2c).The isolates of Phytophthora have an optimal growth temperature of 34 °C, which is higher than the mean annual temperature in tropical regions [48].Rainfall plays a significant role in the spread of decline disease (Figure 2c).Fruiting bodies, often transported by rainwater from infected trees to the soils around other trees, remain in the rhizosphere until conditions are favorable for their subsequent outbreak [49].Certain insects can also contribute to the propagation of this disease by carrying and transmitting pathogens [50].Additionally, long-term, continuous monocropping negatively impacts both the soil environment and the health of macadamia trees [51], thereby increasing their susceptibility to pathogen infection [52].For the purpose of maintaining or improving soil quality and disease resistance, intercropping systems should be recommended. Hotspots and Frontiers of the Research of Macadamia Decline A bibliometric analysis was performed using the literature from January 1977 to May 2023 in the Web of Science Core Collection database (SCI-EXPANDED).The analysis focused on the topic "macadamia decline", using the query "macadamia" AND "decline OR die OR died OR death OR dieback".A total of 59 papers were recorded, comprising 49 original articles and 5 review articles (Figure 3a).The USA and South Africa contributed 28.13% and 10.94% to the total publications, respectively (Figure 3b).Five of the top ten contributing institutions were from the USA, while the others were located in Australia.The authors with the highest number of contributions from these institutions were Olufemi A. Akinsanmi, Wen-Hsiung Ko, and Olumide S. Jeff-Ego (Figure 3c).The articles were primarily published in Plant Pathology, Plant Disease, Australasian Plant Pathology, and Phytopathology.To enable subsequent statistical analysis, we grouped similar keywords together.For example, Macadamia integrifolia was treated as macadamia, oomycete as oomycetes, and quick decline and dieback as decline.Consequently, macadamia was the most frequent keyword in the keyword analysis, followed by decline and oomycetes (Figure 3d).The disease-related terms included "Kretzscmaria-clavus", "Acremonium recifei", "Nectria rugulosa", "Xylaria arbuscula", "disease", "phytoplasma", and "soil-borne pathogen".A word cloud was generated from the keywords.It should be noted that in these articles, "Oomycetes" mainly referred to Phytophthora species known to cause diseases in more than 5000 different plant species being infected [53].The keyword analysis highlighted that macadamia decline is affected by various fungal pathogens, including Phytophthora species, K. clavus, A. recifei, N. rugulosa, and X. arbuscula. Control Strategies of Macadamia Decline The control strategies of macadamia decline can be categorized into chemical intervention, the use of resistant cultivars, agronomic measures, and biological control measures (Figure 4).Nevertheless, integrated control strategies that utilize two or more of these measures are becoming more and more popular. 
Chemical Strategies Fungicides are extensively utilized to control the pathogens of macadamia decline [54], with a particular focus on Phytophthora species (Figure 4a).Different fungicides have been registered and used worldwide, such as carboxylic acid amide fungicides (dimethomorph, flumorph, pyrimorph, and mandipropamid) and benzamide fungicides (fluopicolide and propamocarb) [55].However, concerns have been raised regarding fungicide resistance; i.e., certain strains of Phytophthora have been found to be resistant to commonly used fungicides, e.g., metalaxyl [56].To avoid fungicide resistance, the simultaneous application of a combination of fungicides has been proven to be effective.For example, the mixture of Melody Duo (iprovalicarb 55 g kg −1 , propineb 613 g kg −1 ), Nordox 75 WP (copper oxide 86% w/w, 75% metallic copper 14% w/w), and Victory 72 WP (metalaxyl 80 g kg −1 , Mancozeb 640 g kg −1 ) is effective in controlling Phytophthora diseases [57].The fungicide phosphite (comprising 53% monopotassium phosphate and dipotassium phosphate) can efficiently suppress quick decline by decreasing lesion size by 70%, extending the lifespan of infected trees by 700 days [11].However, relying solely on chemical strategies may only mitigate, rather than eradicate, macadamia decline [58].Furthermore, an excessive or prolonged application of fungicides can be detrimental to plants [9], resulting in phytotoxicity and other adverse effects [59], or leading to drug resistance in pathogens, soil degradation, and environmental pollution. Resistant Cultivars Plant breeding strategies, aiming to enhance belowground traits that positively influence the rhizosphere microbiome, present a promising avenue for sustainable crop production [60].The severity of macadamia decline may be partly attributed to genetic factors [27].Although identifying and utilizing disease-resistant cultivars are challenging, it is rewarding [61].In addition to breeding selection focusing on yield enhancement, breeding more tolerant macadamia cultivars could reduce the incidence and severity of decline disease (Figure 4b) [11].Wild germplasms such as Macadamia integrifolia and M. tetraphylla are commonly used as resistance sources in macadamia breeding programs, demonstrating resistance to both P. cinnamomi and P. multivora in an in vivo assay [62].Research has also indicated that commercial cultivars of M. integrifolia exhibit resistance to P. cinnamomi [6].Among five macadamia cultivars (namely "HAES 816", "A16", "HAES 246", "HAES 344", and "HAES 741"), "HAES 344" was found to have the highest resistance to P. cinnamomi [62][63][64].Despite the advantages, few resistant cultivars have been applied, since the process of breeding new cultivars typically requires 8-10 years using conventional breeding approaches [65].Disease-resistant rootstocks have also been extensively employed in commercial orchards as a disease management strategy (Figure 4b).The preferred rootstocks include M. integrifolia cultivar H2 (Hinde), M. integrifolia, and M. tetraphylla hybrid cultivar D4 (Renown) [1].However, the selection of rootstocks often prioritizes rapid germination, a high grafting success rate, and a robust seedling vigor over stress resistance [66,67]. 
Agronomic Measures Good orchard hygiene helps reduce the spread of decline disease by minimizing pathogen infection (Figure 4c).Several key practices are recommended for pathogen suppression, including the removal of dead or dying limbs from the crown, the modification of canopy coverage in accordance with the severity of macadamia decline symptoms, and the installation of shade nets [68].Furthermore, temperature and humidity managements within the orchard are vital as they significantly affect the propagation of pathogens.P. cinnamomi infection is more likely in soils with poor drainage, areas with lower elevations, or during periods of heavy rainfall [28].Hence, selecting suitable planting locations or adjusting the microenvironment can mitigate macadamia decline.Organic fertilization emerges as a promising control strategy for controlling soil-borne diseases [69].The application of composts or animal manure can enhance soil fertility and inhibit the proliferation of pathogens such as P. cinnamomi [70].Compared to chemical fertilizers, the long-term application of organic fertilizers like cow manure and green manure can elevate microbial abundance and enhance soil enzyme activity [71,72].The role of soil microflora is pivotal in maintaining soil health and suppressing disease [73].The rhizosphere microbiome, in conjunction with its interaction with plant roots, exerts a significant impact on overall health [74]. Biological Control Measures Biological control strategies, utilizing microbial antagonists (bacteria and fungi) [17] or beneficial insects [75], has received tremendous attention as a safe and potentially efficacious approach against soil-borne pathogens (Figure 4d).Certain beneficial microbes, e.g., Trichoderma spp., have been shown to enhance the resistance of macadamia to decline diseases.For example, T. hamatum was employed as a biocontrol agent to shield macadamia from infection by Lasiodiplodia theobromae, a pathogen responsible for kernel rot, branch dieback, and macadamia decline [7].The application of a T. hamatum conidial suspension significantly reduced the size of lesions caused by L. theobromae on macadamia leaves [76].In another study, Trichoderma spp.isolated from the macadamia orchard effectively mitigated infection by Rosellinia spp., a pathogen associated with macadamia decline [33].Native isolates of Trichoderma spp.exhibited potential as biological control agents against Rosellinia spp.[33].Xyleborus beetles, particularly X. ferrugineus, X. affinis, and X. perforans, may exacerbate macadamia decline by being attracted by ethanol produced by stressed trees [50,77].Phymastichus LaSalle species, including P. xylebori LaSalle, P. coffea LaSalle, and P. holohol, were identified as biological control agents against Xyleborus beetles [75,78]. Research on the biological control of macadamia decline is currently very limited.Strains of Trichoderma, Pseudomonas, Bacillus, and Actinomycetes have been identified as effective biological control agents against diseases caused by Phytophthora [79], which is the primary pathogen contributing to macadamia decline.Among them, Trichoderma strains have been extensively studied.For instance, T. harzianum strains effectively reduced pear collar rot by 97% [80].Serratia plymuthica and its siderophore molecule (serratiochelin C) showed inhibitory effects on P. cinnamomi [81].Furthermore, Bacillus amyloliquefaciens, Burkholderia metallica, Burkholderia cepacian, and Pseudomonas aeruginosa were found to inhibit P. 
capsici sporangium formation and zoosporogenesis, thereby enhancing seed germination and plant growth [82,83].Since few studies have been reported regarding the applications of antagonistic microbes in combating the pathogens of macadamia decline, research is urgently needed. Control with Multiple Measures Besides the implementation of individual control measures, there is an escalating focus on the simultaneous use of multiple strategies for mitigating macadamia decline.In China, a combination of Trichoderma harzianum, humic acid, and urea was found to be effective in preventing and controlling macadamia decline [68].The management procedure includes the removal of dry branches and leaves based on the severity of decline symptoms, the installation of shade nets, and the irrigation of roots with a recovery solution.For long-term control, late topdressing was implemented by applying 5 kg of organic fertilizer per plant in ring channels 60-80 cm around the stem.Meanwhile, foliar fertilization was conducted to foster the recovery of macadamia trees and accelerate the growth of new leaves.This integrated approach resulted in the recovery of over 96% of the infected macadamia trees [68].This highlights the potential effectiveness of integrated management strategies in combating macadamia decline and ensuring the sustainable health of macadamia orchards. Identification of Antagonistic Microbes and Construction of SynComs The primary objective is to identify the beneficial microbes with antagonistic effects on decline pathogens (Figure 5).The disease-suppressive soils present an optimal resource for screening and isolating potential biological control agents [80].Core microbiomes, composed of antagonistic microbes, are instrumental in inhibiting disease incidence and fostering plant growth [84].Nevertheless, the inoculation with individual beneficial microbes for controlling soil-borne pathogens suffers from ineffective colonization in the rhizosphere and inconsistent field efficiency [85].Given that decline diseases in a macadamia orchard are usually attributed to multiple pathogens, the application of a combination of beneficial microbes, i.e., SynComs, should be recommended, by mixing several strains with different functions, including growth promotion, disease suppression, and high temperature and humidity tolerance (Figure 5).High-throughput and genomic sequencing techniques have been widely used to identify the isolated strains and ascertain their taxonomic status, functional characteristics, and abundance. Figure 5. Biocontrol application in mitigating macadamia decline: The initial step involves the isolation, screening, and identification of decline pathogens and antagonists.The subsequent step entails the construction of SynComs and their integration with various strategies such as seed coating, root dipping, seedling substrates, soil drenching, foliar spraying, and bio-organic fertilizer application to augment the colonization of SynComs in the rhizosphere soil.Ultimately, SynComs can serve as an alternative to plant protection by inducing systemic resistance and facilitating the release of root secretions including organic acids, defense enzymes, volatile organic compounds, and secondary metabolites.SynComs: Synthetic microbial communities. 
Macadamia Decline Prevention Strategies Using SynComs Several studies have demonstrated that the effectiveness of biological control can be influenced by different inoculation methods [86,87].To improve the survival rate of SynComs and enhance the management of macadamia decline, a combination of SynComs with various strategies is recommended, including seed coating, root dipping, seedling substrates, soil drenching, foliar spraying, and bio-organic fertilizer application (Figure 5). Seed Coating and Root Dipping with SynComs The seed coating delivery of biocontrol inocula is recognized as a cost-effective and efficient technology for safeguarding crops against both seed-borne and soil-borne phytopathogens [88,89].Recent attempts have integrated biocontrol agents such as Pseudomonas fluorescens [90], Bacillus subtilis [91,92], Yersinia spp.[93], Serratia entomophila [94], Paenibacillus alvei, nonpathogenic Fusarium oxysporum [95], and Rhizobium radiobacter [96] into seed coating.These studies have demonstrated that seed coating is efficient in protecting plants against soil-borne fungal pathogens, thereby fostering plant growth.It is imperative to sterilize and coat macadamia seeds with SynComs to mitigate biological disturbances or invasion, given that the transmission of pathogens by seeds is the first step of disease occurrence [97].Additionally, the practice of dipping seedling roots in a suspension with beneficial microbial strains is an effective strategy for improving plant resistance to pathogens [86].For example, immersing angelica seedling roots in a suspension of Enterobacter cloacae and Serratia ficaria effectively inhibited the growth of Phytophthora cactorum, significantly suppressing the incidence of Phytophthora root rot in a pot experiment or when the treated seedlings were planted in naturally infested soil [98].Therefore, it is worth investigating the effects of seed coating and root dipping with SynComs on macadamia decline prevention or control. Seedling Substrate and Soil Drenching The quality of germination substrates can significantly influence the growth and survival of plant seedlings [99].A substrate mixed with beneficial microbes benefits from the successful transplantation of seedlings into the field, and helps establish a robust biological barrier during their initial stages [100].For example, substrates inoculated with lysine, sucrose, and anaerobic digestion slurry, improved the ability of tomato to resist bacterial wilt [101].Similarly, incorporating suitable SynComs into the substrate offers a novel approach to promote the growth and disease resistance of macadamia seedlings.Moreover, soil drench methods, which involve the application of suspensions around the root zone of plants, have been proven effective in improving plant growth [100,102].A recent study revealed that the addition of T. atroviride conidial suspensions suppressed Fusarium wilt disease of tomato seedlings [103].However, the colonization rate of single strains using this method is relatively low [86].Hence, SynComs are essential to enhance the ability of colonization for nutrient competition, niche occupation, and the induction of systemic resistance. 
Foliar Spraying and Bio-Organic Fertilizer Application Biological control agents are capable of suppressing pathogen infection during seedling germination and early growth stages prior to transplantation into the field.However, the pathogens can persist in the rhizosphere soil for long periods.To improve plant resistance for growth, it is imperative to implement biological control strategies at different growth stages [100].Foliar spraying with an antagonistic microbial suspension is a widely used alternative method in the field to inhibit plant diseases and promote growth [104].The application of bio-organic fertilizer may be a crucial strategy for SynComs utilization.Compared to seed coating, root dipping, and soil drenching, foliar spraying and bio-organic fertilizers may be superior in disease control [100]. Mechanisms Underlying the Inhibitory Effects of SynComs on Macadamia Decline The assembly of plant-associated rhizosphere microbiomes is highly complex due to their inherent heterogeneity [105].Advanced multi-omics technologies, including metagenomics, transcriptomics, proteomics, and metabolomics, have been utilized to elucidate the function of the microbiome in the rhizosphere [106][107][108], and to explore plant-microbe and microbe-microbe interactions under SynComs inoculation.A comprehensive understanding of the synthetic microbiome's genome characteristics through metagenomics can pinpoint the antagonistic genes against specific pathogens [109].Transcriptomics serves as the most effective method for unveiling alterations in gene expression when plants interact with SynComs [110], thereby identifying the genes of plants responding to SynComs and inferring the metabolic pathways and biological processes involved.A proteomics approach allows for the identification of proteins associated with the biocontrol processes and differential expression [110].Metabolomic analyses have the potential to reveal perturbations in signaling or output pathways that significantly influence the outcome of a plant-microbe interaction [111].The combination of multiple omics analysis methods is helpful to elucidate information pertaining to the pathogenicity of plant pathogens, enhancing the efficacy of plant disease diagnosis and management through the inoculation of SynComs.The integration of metabolomics and transcriptomics has unveiled that the assembly of rhizosphere microbiomes is responsible for the systemic induction of root exudation of metabolites at both molecular and chemical levels [112].Root exudates are pivotal in plant-soil-microbe interactions [113], offering significant insights into the mechanism by which SynComs regulate rhizosphere microbiota to control soil-borne diseases.Therefore, multi-omics technologies may provide new insights into the mechanisms underlying the inhibitory effects of SynComs on macadamia decline. Conclusions This review provides a comprehensive overview of macadamia decline and its associated control measures.Given that limited information is available regarding the biological control of macadamia decline, we largely explored the potential of biological control in managing macadamia decline.Our proposed approach involves the use of SynComs, a promising biological control method, in conjunction with various measures such as seed coating, root dipping, seedling substrate, soil drenching, foliar spraying, and bio-organic fertilizer application to effectively manage macadamia decline. Figure 1 . 
Figure 1. Symptoms and pathogens of macadamia decline disease: (a) Diagram of slow decline and quick decline. (b) Typical brown leaves and roots of quick decline. (c) Infected sites and their characteristics of quick decline. (d) Timeline of the major decline pathogen studies over the past half-century.

Figure 2. Bottom-up infection caused by macadamia decline pathogens. (a) Infection by the original pathogens: pathogens are derived from nearby forest plants or soil-borne pathogen carriers, including diseased branches, leaves, and roots. Some pathogens infiltrate into plants via the root system, leading to an outbreak of pathogens in the rhizosphere soil. (b) Internal damage to macadamia by pathogens: pathological tissues carrying some pathogens infect neighboring macadamia trees, causing root damage. Additionally, stems and leaves become infested, resulting in plant disease or death. (c) Environmental factors accelerate decline progression: high temperatures can accelerate the spread of pathogenic spores. Rainwater can carry plant remnants carrying pathogenic spores to new orchards and cause a new decline outbreak. Another way by which the decline disease can be spread is through insects such as beetles.

Figure 3. Bibliometric analyses of the macadamia decline research based on data from the Web of Science Core Collection database, spanning the period from January 1977 to May 2023: (a) Bibliometric quantity of articles. (b) The top ten countries with the most publications related to
2024-02-06T18:10:19.258Z
2024-01-30T00:00:00.000
{ "year": 2024, "sha1": "49f73f00f517b016c7523923bb2250a1690799af", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4395/14/2/308/pdf?version=1706631744", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d9596682935a19dad84c6be6a1c2edc8fba7020c", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
267299116
pes2o/s2orc
v3-fos-license
Association between periodontal health status and quality of life: a cross-sectional study Introduction Attachment loss due to periodontal diseases is associated with functional limitations as well as physical pain and psychological discomfort, which may lead to a reduced quality of life. The purpose of this study is to determine if the oral health status, specifically the periodontal status, influences oral health–related quality of life. Materials and methods Survey data were collected in a US dental school clinical setting in a cross-sectional study. Quality of life related to oral health was assessed with the Oral Health Impact Profile-49 (OHIP-49). In addition, DMFT index, periodontal status, and health literacy scores (dental and medical health literacy) were recorded, and the data of n = 97 subjects were statistically analyzed. Results The DMFT index of the study population was 14.98 ± 6.21 (D: 4.72 ± 4.77; M: 3.19 ± 3.46; F: 7.12 ± 4.62). Of the subjects, 44% were identified as periodontitis cases. These periodontitis cases demonstrated significantly higher OHIP-49 scores (66.93 ± 30.72) than subjects without signs of periodontal diseases (NP) (32.40 ± 19.27, p < 0.05). There was also a significant difference between NP patients and patients with gingivitis (66.24 ± 46.12, p < 0.05). It was found that there was a statistically significant difference between Stage 3 (severe) periodontitis and periodontal health (p = 0.003). Pearson correlations were completed, and positive relationships were found with OHIP-49 and DMFT (0.206, p < 0.05), and periodontal risk self-assessment (0.237, p < 0.05). Age [odds ratio (OR) 4.46], smoking (OR 2.67), and the presence of mobile teeth (OR 2.96) are associated with periodontitis. Conclusions Periodontal diseases may negatively impact the oral health–related quality of life. Patients suffering from periodontitis also showed more missing teeth, which might influence function. Age and smoking are associated with a higher prevalence of periodontitis. A good general health literacy was no guarantee for having an adequate oral literacy. Introduction When patients are asked to evaluate their overall quality of life, it is not uncommon for them to provide an answer based primarily on how they feel from a strictly physical and psychological perspective (1).It is also not uncommon for patients to completely overlook their oral health condition, regardless of its condition, and not attribute their oral health status to their overall quality of life (2).Several studies have found that oral health and overall quality of life tend to go together and that poor oral health conditions have a negative effect on the overall quality of life (3,4).In addition, it was suggested that oral health problems impair the physical functioning, the social standing, and the wellbeing of individuals, which underlines the association of oral health and general health in terms of impacts on the quality of life (2,5).The oral health-related quality of life can be evaluated using the Oral Health Impact Profile-49 (OHIP-49) questionnaire (6).The OHIP-49 assesses seven domains, including functional limitation, physical pain, psychological discomfort, physical, psychological, and social disability as well as handicap.The higher the score, the lower the oral health-related quality of life (2). 
In the United States alone, much of the population deals with gingivitis and nearly half of the population have periodontitis (7).As we know with periodontitis, the loss of attachment and tooth support leads to discomfort of mobile teeth, further progression of disease, and many times tooth loss.Although replacement of missing teeth is no longer a difficulty with many different available treatment options, it is the destruction of the sites where those teeth once resided and their adjacent conditions that makes the replacement difficult (8).Because of this difficulty, patients are typically put in a position that requires them to manage their predisposing oral health condition, namely, periodontal disease.While gingivitis is reversible and limited to an inflammation of the gingiva, periodontitis is a chronic inflammatory process in which attachment and bone loss occur (9,10).When bone loss is severe enough, it leads to significant loss of support of the teeth (11).For many patients, this disease process does not happen suddenly and is a result of a lack of awareness and a lack of routine care with their dentist.It has been documented that the strongest risk factor for poor oral health-related quality of life, which was obtained from NHANES (National Health and Nutrition Examination Survey) data, was the perceived need to relieve dental pain (12).While many non-compliant patients would present to their dentist when they experience tooth pain, periodontitis often progresses silently and therefore results in severe damage, which is often too late to address appropriately. In the unfortunate cases where patients have lost teeth due to periodontal disease or caries, they are subjected to adapting to a new reality.They encounter reduced function, less esthetics, and sometimes comorbidities (13).It is not unknown that there have been associations made between periodontal disease and cardiovascular and mental health (14).Although the connection is very complex and requires more research, correlations are present to provide better answers.For example, diabetes has been shown to have a two-way relationship with periodontal disease.In patients with uncontrolled diabetes, the body's inflammatory process leads to faster and more significant destruction of the periodontium in the presence of bacterial plaque.It is also known that due to the same pathophysiological problems (i.e., RAGE-AGE) with diabetes, wound healing is significantly hindered.The ongoing discomfort and lack of proper healing requires patients to have more frequent visits to their dental provider and longer healing time before addressing other areas of concern (15).With diabetes as an example, it is no surprise that there are several factors that influence quality of life.In addition, numerous chronic systemic diseases are associated with periodontitis, and the prevalence of most chronic diseases increases with age (16).It is suggested that upregulated inflammatory mediators, cytokines, and other pathological reactions are the principal mechanisms linking oral infections to a number of systemic diseases, such as pneumonia, osteoarthritis, rheumatic diseases, inflammatory bowel diseases, kidney diseases, liver diseases, metabolic syndrome and diabetes, cancer, and Alzheimer's disease (17). 
The aim of this study was to assess the prevalence of periodontitis in a US-based dental school sample, to identify correlations with quality of life and health literacy scores, and to determine the oral health literacy of the investigated population. The null hypothesis is that there is no correlation between the presence of periodontitis and quality of life.

Study design

This cross-sectional study was approved by the Institutional Review Board of Marquette University (HR#: 3148). All participants in this study were newly accepted patients at the Marquette University School of Dentistry (Milwaukee, WI) who were scheduled for comprehensive dental examinations. These patients were admitted to the school through initial screening to ensure they qualified as patients and were approached during their radiology appointment prior to their comprehensive examination. The participant would be brought back for clinical examination to confirm the periodontal diagnosis. A total of 115 participants were interviewed between 2017 and 2018. Of the 115 analyzed, 97 participants showed comprehensive data for complete analysis.

Inclusion and exclusion criteria

Participants were at least 18 years of age and required to be literate in English. To be included, the participants were required to finish the questionnaire and have a comprehensive evaluation. Participants were excluded if they could not read or write in English. Participants who were evaluated but did not return for clinical examination were also excluded. Further exclusion criteria were mental, vision, or hearing impairments.

Questionnaires

The session involved surveys and questions from the Rapid Estimate of Adult Literacy in Dentistry-30 (REALD-30), the Short Assessment of Health Literacy (SAHL), the Periodontal Self-Risk Assessment, the OHIP-49, the Modified Dental Anxiety Scale, and general demographic information. Three calibrated periodontists (KA, JG, and AG) performed the interviews in a standardized manner.

OHIP-49

The OHIP-49 is a questionnaire that was developed to assess oral health-related quality of life from a patient perspective (6). The questionnaire comprises 49 questions with answers on a scale of 0-4. The answers are then tallied for a grand total (maximum of 196). The greater the number, the lower the assessed oral health-related quality of life.

Health literacy

REALD-30

Oral health literacy was tested using a validated dental word recognition instrument (18). The interviewer provided the participant a sheet of the 30 words. Participants were asked to read each one out loud, and the investigator marked whether they were able to correctly read the word. Participants were asked not to guess and to say "pass" in the event they needed to guess or did not know the pronunciation of the word. To ensure no non-verbal cues were given, the investigator stood behind the patient for the duration of the questionnaire.

SAHL

The SAHL test assessed medical literacy via word association (19). Participants were given a list of 18 words. Participants were asked to read the words out loud and then waited for the investigator to say two words. One of the words was directly related, while the other word was relevant but not as closely associated. The participant was advised to select the word that was most directly related. In the event the participant did not know which option to choose, they were asked to simply state that they did not know. The threshold score was 14, and anything below this score was considered low risk.
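To make the scoring of these instruments concrete, the following is a minimal Python sketch of how the questionnaire totals might be tallied. It is not the study's own analysis code; the only cut-off taken from the text above is the SAHL threshold of 14, and treating scores at or above that threshold as adequate is our interpretation. No adequacy cut-off for REALD-30 is stated above, so none is assumed here, and the example responses are invented.

```python
def ohip49_total(responses):
    """Sum of 49 items, each scored 0-4; maximum possible total is 196.
    Higher totals indicate lower oral health-related quality of life."""
    assert len(responses) == 49
    assert all(0 <= r <= 4 for r in responses)
    return sum(responses)

def sahl_adequate(score, threshold=14):
    """SAHL: 18 word-association items. The text above reports a threshold
    score of 14; treating scores at or above it as adequate is an
    interpretation, not a statement from the study."""
    return score >= threshold

def reald30_score(correct_flags):
    """REALD-30: number of the 30 dental words read correctly (0-30).
    Any adequacy cut-off applied to this raw score is study-specific and
    is not specified in the text above."""
    assert len(correct_flags) == 30
    return sum(bool(flag) for flag in correct_flags)

# Invented example responses, for illustration only:
print(ohip49_total([1] * 49))                      # 49
print(sahl_adequate(15))                           # True
print(reald30_score([True] * 20 + [False] * 10))   # 20
```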
Periodontal risk self-assessment

The periodontal risk self-assessment (PRSA) is a 13-item questionnaire covering age, gender, family history of periodontal disease, oral hygiene habits, clinical symptoms of periodontal diseases, smoking habits, and dental history. The answer options are weighted, and scores from 1 to 3 are given. High total scores correlate with the presence of periodontitis (20).

Oral health

All participants were diagnosed based on comprehensive exam information, including full mouth radiographs as well as periodontal charting. This exercise was completed by two investigators (AG and KA) independently to ensure consistency. Periodontal status was diagnosed based on the World Workshop of Periodontal Classification (21-23) as periodontal health, gingivitis, or periodontitis. Periodontal health was characterized by <10% bleeding on probing (BoP), the absence of bone loss, and normal gingival sulcus depths (24). A gingivitis case was diagnosed when BoP scores were ≥10% in the absence of bone loss (25). A periodontitis case was defined as interdental clinical attachment loss (CAL) detectable at ≥2 non-adjacent teeth, or buccal or oral CAL ≥3 mm with pocketing >3 mm detectable at ≥2 teeth (23). Decayed, missing, and filled teeth were recorded as DMFT (26).

Statistical analysis

All data were transferred to a spreadsheet for data organization and analysis. All the variables were described using appropriate statistics; categorical variables were described as frequency and percentage, whereas continuous variables were described as means and standard deviations. A two-sample t-test was used for the comparison of other variables (OHIP, DMFT, PRSA, age, SAHL), and the two-sample t-test was also used for the comparison of the OHIP score between two groups (disease and no disease). One-way ANOVA was used for the comparison of the OHIP score across different periodontal stages. Relative risk (RR) and odds ratio (OR) were calculated between healthy and diseased individuals across different covariates (a minimal computational sketch of these measures is given below). The chi-square test was used to compare SAHL and REALD-30. For all statistical tests, alpha was set at 0.05, and all statistical analyses were done using statistical software (R version 4.2.2). Using G*Power 3.1 with a sample size of 97, an effect size of 0.40, and alpha of 0.05, the computed achieved power for a fixed-effects ANOVA was 89% (27).

Results

The demographics and oral health status of the participants are presented in Table 1. Of the included n = 97 subjects, n = 22 were diagnosed with periodontal health, n = 32 with gingivitis, n = 21 with stage 1 or 2 periodontitis, and n = 22 with stage 3 or 4 periodontitis. The DMFT index of the study population was 14.98 ± 6.21 (D: 4.72 ± 4.77; M: 3.19 ± 3.46; F: 7.12 ± 4.62). The average age of the investigated population was 49 years, with a range from 18 to 84; 56.7% identified as female. When asked about race, 62.4% of the participants indicated they were "White", with 19.4% indicating they were "Black."

Periodontal disease and quality of life

Comparing the OHIP-49 scores of patients with periodontal disease and periodontal health, the mean score for patients with disease (63.6) was significantly higher than the score for patients without disease (35.6; p < 0.001).
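Relating to the relative risk and odds ratio analysis described in the Statistical analysis subsection above (and reported in Table 2), the following is a minimal sketch of the standard 2×2-table formulas. It is not the study's own R code, and the example counts are invented.

```python
def risk_measures(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Relative risk and odds ratio from a 2x2 table.

    RR = [a / (a + b)] / [c / (c + d)]
    OR = (a * d) / (b * c)
    where a, b are cases/non-cases among the exposed and
          c, d are cases/non-cases among the unexposed.
    """
    a, b = exposed_cases, exposed_noncases
    c, d = unexposed_cases, unexposed_noncases
    rr = (a / (a + b)) / (c / (c + d))
    odds_ratio = (a * d) / (b * c)
    return rr, odds_ratio

# Invented counts for illustration only (not data from this study):
rr, odds_ratio = risk_measures(exposed_cases=20, exposed_noncases=10,
                               unexposed_cases=15, unexposed_noncases=30)
print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}")  # RR = 2.00, OR = 4.00
```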
Periodontally healthy patients had an OHIP-49 score of 34.2 ± 20.5, which was significantly lower than for patients with gingivitis with 66.7 ± 47.4 (p = 0.015) and patients with stage 3 and 4 periodontitis with 72.9 ± 31.9 (p = 0.006).Patients with stage 1 and stage 2 periodontitis had an OHIP-49 score of 60.7 ± 29.0 that was tentatively higher than for patients with periodontal health (p = 0.12; Figure 1). Health literacy The subjects of this cohort had a significant discrepancy between health (SAHL) and oral health (REALD-30) literacy (p < 0.001).While 92% showed adequate health literacy, only 57% of the participants were found to be having adequate oral health literacy (Figure 2).Patients with inadequate oral health literacy had a higher risk for severe periodontitis (OR: 1.6). Periodontal self-risk assessment Patients with periodontal health reported lower PRSA scores (16.2 ± 1.6) than those suffering from periodontal disease (17.6 ± 2.4; p < 0.05).However, this was driven by patients with stage 3 and stage 4 periodontitis (19.1 ± 2.3), who had a significantly higher PRSA score than patients with periodontal health (p < 0.001).The scores for patients with gingivitis (17.1 ± 2.4) or periodontitis stages 1 and 2 (17.4 ± 2.1) were not significant from those of the participants with periodontal health. Age and smoking were determined as being risk factors for periodontal diseases.While the relative risk for periodontitis was for age 1.28, it was 1.12 for smoking.Relative risks and odds ratios are presented in Table 2. Discussion The patient pool in a dental school typically presents with a specific background in terms of socioeconomic status, education level, and overall health condition.Because of these characteristics, patients may not experience the most ideal oral health status nor the most ideal or controlled overall health quality (28).The study was completed to determine if there was an association between periodontal health and quality of life patients experienced and attempted to identify factors that predispose or highlight possible risk associations.Our findings indicate that, in fact, there are significant associations with periodontal diseases and reduced overall quality of life.Patients with gingivitis and periodontitis had worse overall quality of life scores than those with periodontal health.This is consistent with the existing literature (29). 
Especially patients with gingivitis and severe/advanced periodontitis reported a lower oral health-related quality of life. This may indicate that a patient with gingivitis is more aware of the periodontal changes occurring, whereas stages 1 and 2 are typically more silent for a patient who has progressed past gingivitis. However, when a patient reaches stage 3, awareness increases due to possible discomfort of the gingival tissues and teeth as well as mobility of teeth (30). Patients with an advanced stage of periodontitis are also more likely to have general health issues (e.g., diabetes, cardiovascular disease) that may be poorly controlled and contributing to the patient's poor oral health and/or poor overall quality of life (31). The periodontitis risk self-assessment scores were higher in patients with severe/advanced periodontitis, which suggests that patients are typically aware of their condition and may be able to perceive what may be affecting their overall condition as well (32). Combined with the higher OHIP scores in this patient group, this may also indicate that it is at this stage of periodontitis that a patient becomes aware of their oral health condition and thus is more likely to be self-aware of their overall quality of life. Nisanci Yilmaz et al. reported that the highest OHIP scores and, with that, lower quality of life were found in patients with stage 4 grade C periodontitis (33). They also found that OHIP scores were significantly related to symptoms of periodontal disease such as bleeding gums, bad odor, and loose or drifting teeth. In a Norwegian population of 65-year-olds, a researcher found that reduced oral health-related quality of life came with increased severity of periodontitis (34), which confirmed earlier findings that the severity and progression rate of periodontitis are associated with poor quality of life (35). The good news for those patients is that when they undergo periodontal treatment and participate in a well-structured periodontal maintenance program, quality of life can be improved and retained (36). This is especially important since the loss of natural teeth due to periodontitis impacts the chewing function, which is associated with diminished nutritional intake (37).

FIGURE 1: Oral Health Impact Profile-49 scores for patients with different periodontal disease status. The higher the OHIP-49 score, the lower the oral health-related quality of life.

FIGURE 2: Discrepancy between oral health literacy (REALD-30) and health literacy (SAHL). Significantly more study participants had adequate health literacy than were literate in oral health aspects.

The patients included in this study showed better general health literacy and were less literate in oral health matters. Wehmeyer et al.
reported that despite a high level of education of the participants in their cross-sectional study, lower oral health literacy was associated with more severe periodontitis among new and referred patients to their periodontics clinic (38).The present findings suggest that patients with an inadequate REALD-30 score had 1.6 times more severe periodontitis than patients with adequate oral health literacy.Impaired oral health impacts general health and negatively impacts quality of life, and low oral health literacy is associated with reduced quality of life (39).Increasing oral health literacy in educating our patients is critical in addressing poor oral health to prevent oral diseases (40).Nouri and Rudd recommend to use plain language and teach-back by providers as well as the incorporation of oral and aural literacy into community programs and healthcare provider (e.g., dentist, dental assistance, dental hygienists) education and training (41).In the medical field, it is known that the patient awareness of general health concerns is critical in self-care and helping patients seek care when they suspect ailments (42).The same could be said for the dental world, and it may be critical for the dental community to be more of an advocate for the patient to help them self-screen even in cases where symptoms are not as apparent.A possible manner to enhance this is to include our colleagues in medicine to also advocate the oral-systemic connection and educate the patients accordingly.A recent study showed for instance that oral hygiene measures such as brushing teeth are related to the outcome of cardiovascular disease (43).It was also suggested that tooth loss due to periodontal disease or caries caused by oral bacteria impairs the chewing function and health (44), and the disruption of intestinal bacteria can also impair health (45). Nevertheless, some limitations of this study must be acknowledged.Not all subjects who consented into the study and completed the questionnaires returned for the comprehensive oral exam and were not included in the full analysis.Financial stress, dental anxiety, occupational stress, and perceptions of needs among others might have presented as barriers for patients accessing dental care (46).The selection of questionnaires might also represent a limitation.There are numerous versions of the OHIP questionnaire.John et al. 
found that the 5-, 14-, 19-, and 49-item versions correlated highly, indicating that these versions measure oral health-related quality of life equally well, with the best being the OHIP-49 (47).However, they also suggest that the OHIP5 is a practical tool for general dentists to assess the oral health-related quality of life, which was also confirmed by others (48).The used oral health literacy tool measures word recognition (18).But there are more ways to assess health literacy (49), including test reading comprehension (50), testing the understanding of medical information (51), or testing numeracy and locate-the-information skills (52).However, several studies found a correlation between the REALD-30 scores and the status of periodontal health (38,49,53).The PRSA questionnaire was able to distinguish between periodontal health and periodontitis but failed to discriminate between gingivitis and periodontitis.Other self-reporting tools that are more sophisticated and include questions about systemic health, dietary intake, or psychological stress were shown to be able to assess the individual risk, the need for periodontal treatment, and can differentiate between gingivitis and periodontitis (54).This tool and others such as the periodontal screening score (32) can be helpful screening tools on a population level.In terms of risk or prognosis determination, tooth-level prognostic systems provide better information.Saleh et al. reported recently that the periodontal risk score (PRS), which includes parameters such as age, smoking, diabetes, tooth type, mobility, probing depth, and furcation involvement, was able to predict long-term tooth loss (55). Conclusion Periodontal diseases may negatively impact the oral healthrelated quality of life.Patients suffering from periodontitis also showed more missing, filled, and decayed teeth, which may have an effect on function and comfort.Age and smoking are associated with a higher prevalence of periodontitis.Good general health literacy was no guarantee for having an adequate oral literacy. TABLE 2 Relative risks and odds ratios (95% confidence interval) based on PRSA. TABLE 1 Characteristics of the included subjects based on the presence of periodontitis.
2024-01-28T16:50:46.027Z
2024-01-25T00:00:00.000
{ "year": 2024, "sha1": "dd06593aedef31eafc472f3c86ff1603e187e409", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/froh.2024.1346814/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9468eeb135592a72740f736b41f1c5c9f81e897c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221634888
pes2o/s2orc
v3-fos-license
Open source 3D phenotyping of chickpea plant architecture across plant development Background Being able to accurately assess the 3D architecture of plant canopies can allow us to better estimate plant productivity and improve our understanding of underlying plant processes. This is especially true if we can monitor these traits across plant development. Photogrammetry techniques, such as structure from motion, have been shown to provide accurate 3D reconstructions of monocot crop species such as wheat and rice, yet there has been little success reconstructing crop species with smaller leaves and more complex branching architectures, such as chickpea. Results In this work, we developed a low-cost 3D scanner and used an open-source data processing pipeline to assess the 3D structure of individual chickpea plants. The imaging system we developed consists of a user programmable turntable and three cameras that automatically captures 120 images of each plant and offloads these to a computer for processing. The capture process takes 5–10 min for each plant and the majority of the reconstruction process on a Windows PC is automated. Plant height and total plant surface area were validated against “ground truth” measurements, producing R2 > 0.99 and a mean absolute percentage error < 10%. We demonstrate the ability to assess several important architectural traits, including canopy volume and projected area, and estimate relative growth rate in commercial chickpea cultivars and lines from local and international breeding collections. Detailed analysis of individual reconstructions also allowed us to investigate partitioning of plant surface area, and by proxy plant biomass. Conclusions Our results show that it is possible to use low-cost photogrammetry techniques to accurately reconstruct individual chickpea plants, a crop with a complex architecture consisting of many small leaves and a highly branching structure. We hope that our use of open-source software and low-cost hardware will encourage others to use this promising technique for more architecturally complex species. Supplementary Information The online version contains supplementary material available at 10.1186/s13007-021-00795-6. reconstruct the 3D structure and assess canopy architectural traits of these "complex" plants across their development using photogrammetry. One of the main problems with conventional, direct measurements of plant structural properties is that they are laborious and often destructive. This is particularly evident when working with larger plants and plant species with many small leaves. Imaging intact plants can bypass the need to destructively harvest plants, allowing for the measurement of structural traits across plant development. Two-dimensional imaging techniques have long been used for the quantitative measurement of plant structural traits, including plant surface area, number of leaves, leaf shape and leaf colour (a database of such approaches is presented in [1], and is continually updated). Using new software tools, such as PlantCV [2], quantitative traits can even be extracted automatically from images, reducing user error and analysis time. However, most 2D imaging techniques were developed to only work for small plants with a simple structure, such as the two-dimensional rosettes of the model plant Arabidopsis thaliana [3], or to only extract relatively basic information, such as plant height [4]. 
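As a concrete illustration of the kind of 2D measurement described above, the following minimal sketch estimates projected plant area from a single top-down photograph by thresholding green pixels in HSV space with OpenCV. It is not part of the pipeline developed in this work; the file name, HSV bounds, and pixel-to-area calibration factor are placeholder assumptions.

```python
import cv2
import numpy as np

# Placeholder inputs: image path, HSV bounds, and scale are assumptions,
# not values used in this study.
image = cv2.imread("top_down_plant.jpg")   # returns None if the file is missing
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Rough green range in HSV (assumed); tune for the camera and lighting used.
lower_green = np.array([35, 40, 40])
upper_green = np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)

plant_pixels = cv2.countNonZero(mask)
cm2_per_pixel = 0.0004                     # assumed calibration factor
projected_area_cm2 = plant_pixels * cm2_per_pixel
print(f"Projected plant area: {projected_area_cm2:.1f} cm^2")
```

Such a measurement is exactly where occlusion becomes a problem for branching species: overlapping leaves collapse onto the same pixels, which motivates the 3D approach discussed next.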
For more complex or larger plants, 2D imaging techniques can result in inaccuracies due to overlapping features in captured images (i.e. occlusion of stems by leaves, leaves by leaves, etc.). 3D imaging addresses this issue, allowing us to capture the full detail of a plant's structure without self-occlusion of any plant tissues. There are several methods available to phenotype the 3D structure of plants (for detailed reviews see [5,6]). Laser scanning (LiDAR) can provide very detailed reconstructions of plants but there is often a trade-off between the cost of instrumentation and the complexity of 3D models. Commercial instruments can cost upwards of US $10,000 but can generate detailed models of plants with > 2 million points. Newly developed DIY instruments can cost as little as US $400 but only generate models with approx. 40,000 points [7]. LiDAR can also be inflexible, both in terms of sample size (i.e. one system may provide good resolution for small plants but not for large plants, and vice versa) and downstream data analyses (i.e. may be limited to certain commercial data analysis programs). Photogrammetry on the other hand can be highly cost effective and versatile. Photographs of the plant are taken from multiple angles using a standard camera and subsequent computer analyses are used to reconstruct a scaled 3D model. This 3D reconstruction can then be used for trait measurements, such as plant dimensions, plant surface area and leaf area index, and modelling simulations, such as ray tracing of the canopy light environment [8]. Data quality can be comparable to more expensive LiDAR systems and it can be used for subjects of wide-ranging sizes. Generally speaking, the more photos of the subject, the better the reconstruction will be with regards to precision and accuracy [9], albeit with longer capture and processing times. Many photogrammetry software packages are open-source (including Colmap, [10]; Meshroom, [11], and VisualSFM, [12]), meaning that they are freely available and can be modified at the code level to give users a highly customised and powerful experience. Photogrammetry has been used effectively for the 3D reconstruction of a number of monocot crop species, including wheat [13] and rice [14], and for species with larger leaves, such as sunflower [15] and soybean [16]. However, few studies have assessed whether it could be used to accurately reconstruct 3D models of plant species with many small leaves, such as chickpea. Here we demonstrate that several important changes to existing photogrammetric reconstruction methods could allow for reconstruction of species with small leaves and highly branching architecture. These changes will ensure that smaller elements are captured accurately during imaging and during the reconstruction process. Increasing the number of capture angles around the plant will reduce the opportunity of small leaves/branches being occluded from view. Capturing higher quality images at larger resolutions will further assist in the inclusion of small plant features during reconstruction. Refinements to the photogrammetry workflow that increase the density of 3D point clouds, such as preventing downsizing of images during feature matching, increasing the number of pixel colours used to compute the photometric consistency score and reducing the photometric consistency threshold, will also improve the detail and accuracy of resultant 3D reconstructions [17]. Chickpea (Cicer arietinum L.) 
has long been an important annual crop for resource poor farmers across the globe but there is growing demand elsewhere due to changing diets and a push for protein rich alternatives to meat [18]. Chickpea is often considered more sustainable than non-legume grain crops, such as wheat or rice, due to its ability to form symbiotic relationships with nitrogen fixing bacteria, reducing reliance on nitrogen fertiliser [19]. It can also be used effectively in rotation with cereal crops to break the life cycle of diseases and improve soil health [20]. Chickpea can therefore be a lucrative option for many growers, particularly considering there are also economic benefits, with returns to Australian growers of roughly AU $300 t −1 compared to around AU $100 t −1 for wheat between 2012 and 2014 [21]. Yet, whilst chickpea has an estimated yield potential of 6 t ha −1 under optimal growing conditions, annual productivity of chickpea worldwide currently sits at less than 1 t ha −1 [18]. This yield gap is the result of a lack of genetic diversity in breeding programs that has left cultivars susceptible to biotic and abiotic stresses. Phenotyping for natural variation in traits of interest across diverse germplasm could be used to minimise this yield gap and to improve grain yield potential. Chickpea is an indeterminate crop in which vegetative growth continues after flowering begins. This poses management challenges for growers [22] and can result in yield losses. Genes for determinacy have been found in other species [23][24][25] and could be explored in chickpea by phenotyping diverse populations across their development. Chickpea also has a highly branching structure, requiring more resources to be allocated to structural tissue, which may reduce remobilisation of nutrients to pods during reproductive growth [26]. Modification of plant architecture through targeted plant breeding has led to huge successes in other crop species, most notable was the introduction of dwarfing genes into elite varieties of wheat, which led to increased seed yields, reduced yield losses due to lodging and was integral to the green revolution of the 1960s and 1970s [27]. By assessing canopy architecture traits across chickpea genotypes, we will improve our understanding of the underlying genetics controlling these traits, how these traits influence plant productivity and can then use this information to make informed breeding decisions. The main aim of this work was to develop and validate a low-cost and open-source photogrammetric method for detailed 3D reconstruction of chickpea plants. The imaging setup consisted of three DSLR cameras, LED lighting and a motorised turntable, controlled by a user-programmable Arduino microcontroller (Fig. 1). 3D reconstruction and analyses of 3D models were performed using open-source software on a Windows PC (Fig. 2). The system was tested with a variety of chickpea genotypes (three commercial and three pre-breeding lines) and measurements were validated against conventional, destructive measurement techniques. We also assessed whether differences in plant architecture or growth rates could be observed across chickpea genotypes. Reconstruction validation The 3D reconstructions provided very reliable estimates of plant height and total surface area (Fig. 3), both with an R 2 > 0.99 and Spearman rank correlation coefficient (ρ) > 0.99 when compared to validation measurements. 
Height was slightly underestimated, with measurements from 3D reconstructions approximately 4% lower than validation measurements, yet there was little variation in this relationship (R 2 = 0.999, RMSE = 5.45 mm, MAPE = 4.4%, ρ = 0.992, p < 0.001) and it was consistent across all studied genotypes (p > 0.05; Additional file 17: Table S1). Plant surface area measurements were estimated within 0.5% (R 2 = 0.990, RMSE = 26.85 cm 2 , MAPE = 9.1%, ρ = 0.992, p < 0.001), although there was more overall variation in estimates and the validation relationship varied slightly across genotypes (p < 0.05; Additional file 17: Table S1). Specifically, the surface area of the breeding lines grown outdoors was slightly overestimated when compared to ground truthing measurements. This was likely caused by smaller, more curled-up leaves that were not correctly assessed by ground truthing measurements, which assume all leaves are laid on a two-dimensional plane (for an example see Additional file 11: Figure S1). The MAPE in surface area estimates for commercial cultivars (excluding breeding lines) was 7.2%, whilst for the breeding lines the MAPE was 12.3%. Representative growth data The 3D scanner allowed us to accurately assess a variety of canopy traits as the plants grew (Fig. 4). Whilst there was some variation across individual plants and chickpea genotypes, general trends in growth were clear and easily recovered from 3D reconstructions. Height increased rapidly to a median of 101 mm in the first week after germination and then increased more gradually to 191 mm 5 weeks post-germination (Fig. 4a). Projected plant area, total surface area and canopy volume all showed characteristic exponential growth curves (Fig. 4b-d). Projected plant area increased from a median of 17.0 cm 2 1 week after germination to a median of 220.9 cm 2 5 weeks post-germination, total surface area rose from 37.8 to 415.9 cm 2 in the same period, and canopy volume from 233 to 14,575 cm 3 . Plant area index was not found to vary greatly during the growth of the plants, with a median of 1.91 m 2 m −2 1 week after germination and a median of 1.87 m 2 m −2 5 weeks after germination (Fig. 4e). Week-to-week RGRs were greatest between weeks 1 and 2, with leaf area increasing on average 84.1% ± 4.4% during this period, dropping to 56.9% ± 4.4%, 51.4% ± 4.0% and 64.1% ± 7.0% between weeks 2 and 3, weeks 3 and 4, and weeks 4 and 5, respectively (Fig. 4f) (for reference, corresponding daily RGRs were 8.1%, 7.3% and 9.2%, respectively). Whilst there was some variation in these growth-related traits across individual plants, we found no statistically significant differences across genotypes (p > 0.05). Overall variation increased as the plants grew, with some apparent divergence across genotypes in the latter weeks of the experimental period. For example, standard error represented only 10.6% of the mean total surface area in week 1 whilst it represented 15.5% in week 5, with similar trends for the other traits. 
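The week-to-week figures above follow directly from the weekly total surface area estimates and the ln-based relative growth rate formulation used in the Methods (Eq. 1). Below is a minimal R sketch of that calculation; the plant identifiers and area values are invented for illustration and are not data from this study.

```r
# Week-to-week relative growth rate from weekly total surface area (cm2).
# RGR = (ln A2 - ln A1) / (t2 - t1), here with t in weeks.
library(dplyr)

growth <- data.frame(
  plant = rep(c("p1", "p2"), each = 5),
  week  = rep(1:5, times = 2),
  area  = c(38, 70, 110, 165, 270, 37, 68, 105, 160, 250)  # illustrative values only
)

rgr <- growth %>%
  arrange(plant, week) %>%
  group_by(plant) %>%
  mutate(
    rgr_weekly = log(area) - log(lag(area)),      # ln-based RGR per week
    pct_weekly = 100 * (exp(rgr_weekly) - 1),     # equivalent % increase per week
    pct_daily  = 100 * (exp(rgr_weekly / 7) - 1)  # equivalent % increase per day
  ) %>%
  ungroup()

print(rgr)
```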
Vertical distribution of plant surface area Further analyses in R enabled us to retrieve detailed data about the distribution of plant surface area as a function of plant height in an automated and repeatable fashion. The visual summaries presented in each panel of Fig. 5 are directly outputted from R. These visual summaries provide a fast and semi-quantitative method of assessing how individual plants are partitioning surface area (and by proxy, their biomass). For example, in the representative data shown in Fig. 5, the Genesis Kalkee, PBA Hattrick, ICC5878 and PUSA76 plants (Fig. 5a, b, d, f, respectively) assign most of their plant area to the lower canopy; the PBA Slasher plant (Fig. 5c) has a relatively sparse canopy and the SonSla plant (Fig. 5e), albeit much smaller than the others, appears to have two discrete canopy layers. To make statistical comparisons of relative area distribution data across genotypes, individual plant data was normalised by plant height and total surface area (Fig. 6). Genotypes differed significantly in their relative vertical distribution of leaf area (p < 0.001), with particularly clear differences found between the breeding lines and commercial cultivars. The commercial cultivars were much denser in the lower half of the canopy, whilst the breeding lines, and in particular line SonSla, were denser in the mid- to upper-canopy. Discussion We have successfully built and validated a low-cost, open-source 3D scanner and data processing pipeline to assess the architecture and growth trends of individual chickpea plants. Chickpea has leaves that are considerably smaller than most other species that have been studied previously using photogrammetry. In our initial attempts to use the 3D reconstruction workflow developed for wheat by Burgess et al. [13], we found that there was not enough detail in the 3D reconstructions for accurate measurement of structural traits (Additional file 12: Figure S2). However, by modifying key parameters in the reconstruction workflow, we were able to produce reconstructions that provided consistent high-quality data. Validations of height and area measurements from reconstructions against ground truthing measurements highlight the reliability of the system (height, R 2 > 0.99, MAPE = 4.4%; area, R 2 = 0.99, MAPE = 9.1%). The accuracy of leaf area estimates is comparable to other photogrammetric estimates reported in the literature for larger-leaved plant species (Brassica napus, R 2 = 0.98, MAPE = 3.7%, [28]; maize, sunflower and sugar beet, R 2 = 0.99, MAPE = 3.9%, [29]; selected houseplant species, R 2 = 0.99, MAPE = 4.1%, Itakura and Hosoi [37]; tomato, R 2 = 0.99, MAPE = 2.3%, [30]). We noted a difference in validation accuracy for plant surface area across genotypes; however, we attributed this to 2D ground truthing measurements underestimating the area of curled-up leaves of the outdoor-grown breeding lines, rather than an overestimation of surface area from 3D reconstructions. This underestimation would also explain the greater overall MAPE for area estimates in our study versus other previously studied crops. A similar discrepancy was reported by Bernotas et al. [31] for Arabidopsis thaliana, where top-down 2D images consistently underestimated rosette area relative to 3D models that accounted for leaf curvature. 
In this sense, our 3D reconstructions provide a better estimate of plant surface area than conventional, labour-intensive and destructive measurement techniques for chickpea plants. The results we present here show that photogrammetry could be used as an effective tool to assess diversity in plant architecture and growth-related traits across chickpea lines and help to identify novel plant breeding targets. Although we did not find statistically significant differences in architecture traits or growth trends across the three commercial genotypes included in our study, we feel that screening more diverse chickpea lines and continuing to monitor growth for a longer period of time would help to elucidate trends across genotypes. The narrow genetic base of chickpea has hindered improvements in breeding programs in recent years [18]. Together with next-generation sequencing technologies, the development of new breeding lines selected specifically for the investigation of traits of interest could help to address this [32]. Even more diversity might be found if we were to investigate traits in wild relatives of cultivated chickpea [33]. As the main aim of this study was to evaluate whether photogrammetry could be used to accurately reconstruct chickpea plants, we only monitored the growth of the plants for 5 weeks post-germination. We did notice there was more variation in architectural traits, both across and within genotypes, as the plants grew larger, and future work should seek to assess these traits to plant maturity. Comprehensive phenotyping of larger numbers of plants would also benefit from faster image capture and processing, for example as part of higher-throughput phenotyping platforms. Speeding up image capture will rely upon reducing the amount of time the plant remains stationary between rotational imaging steps. This could possibly be achieved using a smoother motor with high-intensity lighting or synchronized flash photography, allowing for continuous capture of the plant without the need for stopping. With respect to the reconstruction process, automation and faster processing times may be achieved through use of high-performance computing infrastructure or cloud computing resources, both of which are increasingly available to the research community. Unlike monocot grain crops such as wheat and barley, chickpea does not have discrete canopy layers, with fruits developing across the whole plant. As such, the optimum light environment for productivity of chickpea canopies will be quite different to that of wheat. The indeterminate nature of chickpea likely shifts this optimum further still, as leaves lower in the canopy will remain photosynthetically active for longer. Modelling could allow us to determine the theoretical optimum light environment and then, by running ray tracing simulations with our 3D reconstructions, we could determine how close current chickpea architecture is to this optimum. A number of recent studies have used such approaches to simulate the canopy light environment of other crop species, often coupling this to a photosynthetic model to estimate potential plant productivity (intercropped millet and groundnut, [14], sugarcane, [34], and wheat, [35]). We hope that our validated method and open dataset will enable future studies to model the light environment of chickpea. The method we present here provides very reliable estimates of overall plant surface area and other plant traits from whole chickpea plants. We were able to dissect each reconstruction into its component mesh triangles and investigate how plant surface area is distributed relative to plant height. 
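As an illustration of this kind of analysis, the short R sketch below bins per-triangle surface area by relative height to give a vertical area profile. It assumes a table of triangle centre heights and areas of the kind produced by the per-triangle .CSV output described in the Methods; the dummy values and column names here are hypothetical.

```r
# Vertical distribution of plant surface area from per-triangle data.
library(dplyr)
library(ggplot2)

set.seed(1)
tri <- data.frame(
  centre_z = runif(500, 0, 300),      # height of each triangle centre (mm); in practice from the mesh
  area     = runif(500, 0.02, 0.40)   # area of each triangle (cm2); in practice from the mesh
)

profile <- tri %>%
  mutate(
    rel_height = centre_z / max(centre_z),                                   # normalise by plant height
    height_bin = cut(rel_height, breaks = seq(0, 1, 0.1), include.lowest = TRUE)
  ) %>%
  group_by(height_bin) %>%
  summarise(rel_area = sum(area) / sum(tri$area), .groups = "drop")          # fraction of total area per bin

ggplot(profile, aes(x = height_bin, y = rel_area)) +
  geom_col() +
  coord_flip() +
  labs(x = "Relative height (binned)", y = "Fraction of total surface area")
```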
However, what we have so far been unable to do is systematically distinguish between leaf, stem or other plant tissue types in the reconstructions. Segmentation of the models in this way would allow us to retrieve more detailed phenotypic information, including the ability to assess partitioning of biomass across plant tissues, accurately assess other phenotypic traits (such as leaf angles and leaf numbers) and even aid in yield prediction [36]. Automatic segmentation of 3D models has been achieved in other plant species with larger leaves using several approaches. Itakura and Hosoi [37] were able to segment individual leaves of a number of broad-leaved houseplant species using a combined attribute-expanding and simple projection segmentation technique. While they retrieved very accurate estimates of leaf area (R 2 = 0.99, MAPE = 4.1%) using this method, we feel that it would be highly unlikely to work with comparatively tiny chickpea leaves. Another approach would be to use a machine learning algorithm to segment different plant tissues based on pre-trained models. Ziamtsov and Navlakha [38] recently developed an open-source software package called P3D for this explicit purpose. In their work, they showed P3D to segment leaves and stems in point clouds of tomato and tobacco with > 97% accuracy. We attempted to use P3D to segment our chickpea models with limited success (data not shown), although this was likely due to the use of the default P3D training datasets developed with larger leaved species. We hope that in the future, with more relevant annotated training datasets, this segmentation technique could also work for chickpea. We provide the full complement of our processed point clouds and meshed models to aid in the development of these training datasets. The data processing pipeline we have presented here, whilst all open-source, does rely on a relatively powerful computer. Specifically, reliable reconstruction of a dense point cloud using PMVS takes a very long time if computer resources (CPU processing power and memory) are limiting. The smaller leaves of chickpea necessitated higher resolution photogrammetry than was needed for the reconstructions of wheat by Burgess et al. [13]. For our reconstructions on a desktop computer with a 16 core/32 thread 3.5 GHz CPU (Ryzen Threadripper 2950X; AMD Inc., Santa Clara, CA, USA) with 128 Gb 3200 MHz RAM (HyperX Fury; Kingston Technology Corp., Fountain Valley, CA, USA), the generation of a dense point cloud took roughly two hours per plant. We also found that running the reconstruction process from image data stored on a solid-state drive was considerably faster than running from images stored on a traditional hard disc drive. In the past, such computing resources would have been prohibitively expensive for most researchers however this is no longer the case, largely thanks to advances driven by computer gaming technology. Multicore computing is now the norm, even in portable laptop computers, and high capacity memory and fast solid-state storage are now reasonably priced. On the topic of cost, our imaging set up cost roughly AU $1300, considerably less than commercially available alternatives that offer similar data quality. Panjvani et al. [7] recently developed a comparably priced (US $400) DIY LIDAR system for 3D scanning of individual plants, however the quality of leaf area estimates was considerably less than ours (R 2 < 0.6 against ground truthing data, MAPE = 31.5%). By far the most expensive part of our set up was the cameras. 
In our method presented here, we used three DSLR cameras however we must highlight that the method can also be adapted to work with just one camera, substantially reducing cost. In our early testing, we used just one camera and rotated the plant three times, with the camera manually repositioned from one mounting point of the camera bracket to the next between each rotation. Whilst this took us longer to capture the image sets, we did not notice any reduction in data quality. It may also be possible to use cheaper cameras. Martinez-Guanter et al. [29] used a regular point and shoot camera for the 3D reconstruction of maize, sunflower and sugar beet plants, with an R 2 > 0.99 for both height and leaf area estimates compared against ground truthing measurements. Paturkar et al. [39] show that even a mobile phone can be used for image capture, with 3D reconstructions of chilli plants giving an R 2 > 0.98 for estimates of both height and leaf area. These technological advances and reductions in cost mean that photogrammetric techniques are more accessible than ever before to the plant phenotyping community. The increased availability of these technologies will allow for the adoption of data driven approaches plant science research where this was not possible before. Conclusions Our work has shown that it is possible to use low-cost photogrammetry techniques to accurately phenotype architectural traits and growth habits of individual chickpea plants. We hope that our use of open-source software and hardware will allow others to easily reproduce our method and to develop it further. In particular, there is a need to test whether photogrammetric reconstructions of chickpea could be used for simulations of the canopy light environment and whether they could be automatically segmented into different plant organs using deep learning algorithms. There is a need for higher yielding, environmentally friendly and stress tolerant chickpea varieties with increasing demand for high quality pulse protein worldwide. The use of novel measurement techniques and associated data analytics should assist us in identifying traits of interest and allow us to explore diversity in these traits so that breeders can make informed breeding decisions. Plant material Three commercial Australian chickpea (Cicer arietinum L.) cultivars (PBA Slasher, PBA Hattrick and Genesis Kalkee) were grown from seed in a controlled glasshouse in August 2019. These genotypes were selected as their architecture is known to differ in the field (Additional file 17: Table S2) and are referred to collectively herein as "commercial cultivars". Seeds were planted in potting mix containing slow release fertiliser (Osmocote Premium; Evergreen Garden Care Australia, Bella Vista, NSW, Australia) in 7 L square pots and watered to field capacity once daily. The daytime temperature in the glasshouse was controlled to 25 °C and the nighttime temperature controlled to 18 °C. The relative humidity was set to 60%. Supplemental lighting was provided by LED growth lights if ambient light fell below a photosynthetic photon flux density (PPFD) of 400 µmol m −2 s −1 , this effectively maintained a PPFD of > 400 µmol m −2 s −1 at the plant level at all times during the day. Fifteen plants (five for each genotype) were transferred from the glasshouse to the laboratory for imaging once per week and were returned to the glasshouse after measurement. 
Additionally, each week 15 plants (five of each genotype) were imaged and then destructively harvested for validation of 3D scanner measurements. Three chickpea genotypes (ICC5878, SonSla and PUSA76) were selected from local and international sources based on contrasting canopy architecture and growth-related traits (Additional file 17: Table S1) and are referred to collectively herein as "breeding lines". ICC 5878 is from the ICRISAT Chickpea Reference Set (http://www.icrisat.org/what-we-do/crops/ChickPea/Chickpea_Reference1.htm). SonSla is a fixed line (F7-derived) resulting from a cross between Australian cultivars Sonali and PBA Slasher. PUSA 76 is an older accession released by IARI, India and imported via the Australian Grains Genebank. These were grown outside in February-April 2020. Seeds were planted in potting mix containing slow-release fertiliser (Complete Vegetable and Seedling Mix; Australian Native Landscapes Pty Ltd, North Ryde, NSW, Australia) in 7 L square pots and watered every 3 days to field capacity. Twelve plants of each genotype were imaged at 5 weeks post-germination and destructively harvested for validation of 3D scanner measurements. Semi-automated 3D imaging platform Plants were imaged using a turntable and camera photogrammetry setup (schematic in Fig. 7). The turntable is constructed from acrylic (Suntuf 1010493; Palram Australia, Derrimut, Victoria). It consists of a circular top plate on which the potted plant is placed and a base which houses a stepper motor (42BYG; Makeblock Co., Ltd, Shenzhen, China). A lazy susan bearing plate (Adoored 0080820; Bunnings Warehouse, Hawthorn East, Victoria, Australia) is used to connect the plate to the base to provide smoother movement and reduce strain on the motor during imaging. The turntable is connected to and controlled by a user-programmable Arduino microcontroller (Uno R3; Arduino LLC, Somerville, MA, USA) and a number of Arduino breakout boards. The stepper motor is driven via a stepper driver board (DRV8825; Pololu, Las Vegas, NV, USA), which provides precise control of turntable rotation, allowing for individual rotational microsteps as small as 0.06°. A copper heatsink (FIT0367; DFRobot, Shanghai, China) and 5 V fan (ADA3368, Adafruit Industries LLC, New York, NY, USA) are installed on the stepper driver to prevent overheating. The microcontroller triggers the cameras via a relay breakout board (Grove; Seeed Studio, Shenzhen, China) and a custom-made remote shutter cable. An LCD screen with integrated keypad (DFR0009; DFRobot) is used to operate the turntable and provide basic information during the capture process. A 5 V buzzer (AB3462; Jaycar, Sydney, NSW, Australia) audibly alerts the user when a full rotation is complete. Power is provided via a mains-12 V DC 5 A power supply (MP3243; Jaycar). The motor is powered directly with 12 V DC whilst a step-down voltage regulator (XC4514; Jaycar) is used to provide 5 V DC to the microcontroller and associated boards. A wiring diagram is provided in Fig. 7b (5 V wires are shown as solid lines and 12 V wires as dashed lines). The turntable was set on a white table against a white backdrop (Fig. 1). The microcontroller is programmed using the open-source Arduino IDE software (Version 1.8.10; Arduino LLC). The automated capture program was designed such that it will turn the plant a set number of degrees 
(determined by the user), pause briefly for the plant to stop moving (with a delay programmed by the user) and then trigger the camera(s) to capture an image. This process is repeated until a full rotation of the plant has been captured. The microcontroller also offers the user some control of the turntable via the buttons on the LCD shield (to increase/decrease the number of images captured per rotation, to manually turn the plant clockwise/anticlockwise and to start/pause/stop the automated capture sequence). Further control of the capture sequence can be achieved through modification of the code. The Arduino program is provided in Additional file 1. Lighting is provided by two large LED floodlights (generic LED floodlights bought on eBay) held in a vertical orientation with custom stands made from aluminium extrusion (Fig. 1a). A sheet of white acrylic (Suntuf 1010493; Palram Australia) is placed over the front of each light as a diffuser. Large cooling fans (MEC0381V3; Sunon, Kaohsiung City, Taiwan) were installed on the rear of the lights. In our imaging setup, the lights were set 80 cm away from the plant on either side of the tripod and angled to face the plant directly (Additional file 13: Figure S3). A tripod (190XPRO; Manfrotto, Cassola, Italy) was used as a base for a custom-made camera mounting bracket (schematic in Additional file 14: Figure S4). The top of the tripod was set level with the table on which the turntable sat. The mounting bracket was constructed from a 110 cm length of aluminium square hollow extrusion with three quick-release mounting points (323 Quick Change Plate Adapter; Manfrotto) for a camera positioned at 10 cm, 55 cm and 100 cm vertically from the base and angled towards the plant. A steel angle bracket (SAZ15; Carinya, Melbourne, Australia) was bolted to the bottom of the aluminium extrusion for secure attachment to the tripod. Camera setups Three digital SLRs (D3300; Nikon Corporation, Tokyo, Japan) were used for imaging, each with a 50 mm prime lens (YN50; Yongnuo, Shenzhen, China). The cameras were affixed to the custom mounting bracket such that images were captured in a horizontal orientation. Exposure was set to 1/100 s, aperture set to F8 and ISO set to 400. Each camera was manually focussed on the first plant imaged each day and remained fixed for the remaining plants. Images were captured in JPEG format at 24.2-megapixel resolution and saturation boosted in-camera. Each camera was powered by an AC adaptor (EP-5A; Nikon). The cameras were connected via USB cables to a Windows computer running the open-source digiCamControl software (Version 2.1.2; Istvan, 2014) for live offload of captured images into the structured folders required for downstream data processing. Images were also backed up onto SD cards installed in each camera. A total of 120 images were captured of each plant (40 with each camera); this number was chosen after initial testing (data not shown) revealed it to provide the best balance between reconstruction quality and reconstruction processing time. Semi-automated image processing and 3D reconstruction Image processing and 3D model reconstruction were conducted using open-source software on a Windows PC (as summarised in Fig. 2). A dense 3D point cloud was first generated from captured images using VisualSFM (Version 0.5.26 CUDA; [12]) and CMVS + PMVS2 [17] using a modified method of Burgess et al. [13]. 
Processing parameters were adjusted (in the nv.ini configuration file of the VisualSFM working folder; provided in Additional file 2) from the default settings to optimise the reconstruction of chickpea plants. The settings we used were modified from those used successfully for wheat plants by Burgess et al. [13], as these were unsuitable for reconstruction of the finer details of chickpea plants and underestimated plant surface area (as shown in Additional file 15: Figure S5). Briefly, compared to the settings used for wheat, the CMVS max_images parameter was increased from 40 to 120, allowing the whole image dataset to be analysed concurrently during reconstruction, rather than separated into batches. This was possible due to the large memory capacity of the computer that we used for processing (128 GB) and reduced the likelihood of multiple point clouds being produced for each plant. The PMVS2 min_images parameter was increased from 3 to 4, meaning that each 3D point in the reconstruction must be visible in at least four images. Functionally, this reduces noise and improves the accuracy of the point cloud. The PMVS2 csize parameter was reduced from 2 to 1 to create a denser point cloud. The PMVS2 wsize parameter was increased from 7 to 12 to provide more stable reconstructions by including more colour information when computing the photometric consistency score. Finally, the PMVS2 threshold parameter was reduced from 0.7 to 0.45. The threshold refers to the photometric consistency measure above which a patch reconstruction is deemed a success and kept in the point cloud. Reducing the threshold allowed us to retain more of the less consistent points in the point cloud. Note that more detailed descriptions of these parameters can be found in the CMVS + PMVS2 documentation. Point cloud generation was automated using a Windows batch file (provided in Additional file 3). Dense point clouds were scaled (using the width of the pot as a reference), denoised based on colour (removing all but the green/brown points), reoriented (such that the ground was parallel to the X-Y plane) and any remaining non-plant points removed manually in Meshlab (Version 2020.06; [40]). Statistically outlying points were then removed using the statistical outlier removal (SOR) feature of CloudCompare (Version 2.11.0; GPL software). The remaining points were sub-sampled using Poisson disk sampling (Explicit Radius = 0.5, Montecarlo oversampling = 20; [41]). A meshed model was created from the sub-sampled point cloud using a ball pivoting algorithm (default settings; [42]) and any large holes in the meshed model filled with the close holes feature (max size to be closed = 50). All but the scaling, manual removal of non-plant points and outlier removal were run in a consistent and automated fashion using Meshlab scripts and a Windows batch file (provided in Additional files 4, 5, 6, 7, 8 and 9). Meshed models consisted of n triangles, with the 3D coordinates of the ith triangle given by a vector (xi1, yi1, zi1, xi2, yi2, zi2, xi3, yi3, zi3), where x and y correspond to coordinates parallel to the ground and z corresponds to height above the ground. Analyses of geometric features (height, max width, etc.) and plant surface area were performed using the base functions in Meshlab. The surface area from Meshlab was divided by 2 to provide a "one-sided" area, which is referred to herein as total surface area. Canopy volume was measured in Meshlab after fitting a convex hull to the meshed model. A top-down orthographic projection of the model was exported as an image file and processed in ImageJ (Fiji 1.52p; [43]) to estimate projected plant area. Plant area index (PAI) was calculated as total surface area/projected plant area. Week-to-week relative growth rates (RGR) for total surface area were derived for each plant as per Pérez-Harguindeguy [44], using Eq. 1, RGR = (ln A2 − ln A1)/t, where t is the time between measurement of leaf areas A1 and A2. An R script was written to calculate the area of each individual triangle making up the surface of the meshed model and then to calculate plant surface area as a function of height. The script uses the png (version 0.1.7; [45]), rgl (version 0.100.54; [46]), Rvcg (version 0.19.1; [47]) and tidyverse (version 1.3.0; [48]) R packages. Briefly, the length of the ith triangle's edges (Ai, Bi and Ci) is first calculated from the XYZ coordinates of its three vertices (xi1, yi1, zi1; xi2, yi2, zi2; and xi3, yi3, zi3) using Eqs. 2-4, i.e. as the Euclidean distance between each pair of vertices. The area of the ith triangle (Si) is then calculated from the lengths Ai, Bi and Ci using Eq. 5 (Heron's formula). The script outputs a visual summary of plant surface area as a function of height, as well as a comprehensive .CSV file that contains extracted parameters (XYZ coordinates for the vertices of each triangle, XYZ coordinates of the centre of each triangle, the area of each triangle, etc.) from each reconstruction. This R script is provided in Additional file 10. For comparisons across genotypes, for each plant, height data was normalised based on the overall plant height and area data was normalised based on total surface area. 
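A minimal R sketch of the edge-length and area calculations just described (Eqs. 2-5): edge lengths as Euclidean distances between vertices, then the triangle area via Heron's formula. The vertex matrix below is a made-up example; the script in Additional file 10 is the authoritative implementation.

```r
# Per-triangle surface area from vertex coordinates (one row per triangle:
# x1, y1, z1, x2, y2, z2, x3, y3, z3).
edge_length <- function(p, q) sqrt(rowSums((p - q)^2))

triangle_areas <- function(v) {
  p1 <- v[, 1:3]; p2 <- v[, 4:6]; p3 <- v[, 7:9]
  A <- edge_length(p1, p2)                 # Eq. 2
  B <- edge_length(p2, p3)                 # Eq. 3
  C <- edge_length(p3, p1)                 # Eq. 4
  s <- (A + B + C) / 2
  sqrt(s * (s - A) * (s - B) * (s - C))    # Eq. 5 (Heron's formula)
}

# Two dummy triangles for illustration
verts <- rbind(c(0, 0, 0, 1, 0, 0, 0, 1, 0),
               c(0, 0, 5, 2, 0, 5, 0, 2, 5))
areas <- triangle_areas(verts)
total_surface_area <- sum(areas)           # summed over the whole mesh in practice
```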
Validation measurements The height of each plant was measured using a ruler, from the base of the stem to the highest point of the canopy. Plants were then destructively harvested, the harvested plant material laid flat on a large sheet of white paper and an image taken from above using a DSLR camera (Canon EOS R; Canon Inc., Tokyo, Japan) mounted to a tripod for validation of total surface area (representative images used for ground truthing are presented in Additional file 15: Figure S5). A ruler was included in the image for scaling. Lens corrections were first performed on the captured images in Adobe Photoshop (Adobe Inc., San Jose, CA, USA) to remove distortion and then images were analysed using ImageJ (Fiji 1.52p; [43]) to obtain measurements of total plant surface area. To test the assumption that 2D image analysis techniques would not be accurate in assessing area-related traits due to overlapping plant elements, we analysed the side projected green area of two images of each chickpea plant from the week 5 image set used to reconstruct the 3D models. The two images chosen for each pair were separated by 90 rotational degrees but were both taken from the same height. Using a modified method of Atieno et al. [4], each image was scaled and a HSV colour thresholding mask used to compute the area of green plant material in ImageJ [43]. The mean variation in side projected area between the two images was found to be 8.4%, whilst the maximum variation for an image pair was 25.1% (Additional file 17: Table S3), highlighting the need for 3D phenotyping techniques. We were concerned that movement of chickpea leaves during the measurement period (09:00-15:00) could influence estimates of surface area from the 3D scanner. 
Chickpea leaves move considerably during the day, and we thought this diurnal rhythm may affect the results. To alleviate this concern, we scanned the same plant several times across this measurement time window and found minimal variation (< 2.3% variance from the mean) in area estimates over time (Additional file 16: Figure S6). Statistical analyses Statistical analyses were performed in R [49]. For validation data, linear regression models were plotted to visually compare conventional and 3D scanner measurements. Root mean squared error (RMSE) and mean absolute percentage error (MAPE) were calculated using base R and the MLmetrics package (version 1.1.1; [50]), respectively. The Spearman rank correlation coefficient (ρ) was used to statistically analyse the regressions. Analysis of variance (ANOVA) was used to determine whether regression models differed statistically across genotypes. For representative growth data, statistical comparisons across genotypes were analysed using a repeated measures ANOVA with post-hoc Tukey's HSD test using the emmeans package in R (version 1.4.7; [51]). Normalised area distribution data were analysed statistically using a non-parametric ANCOVA using the sm package (version 2.2-5.6; [52]). All regressions and representative data were visualised using ggplot2 in R (version 3.3.2; [53]).
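A minimal R sketch of the validation statistics described here (R2 from a linear model, RMSE in base R, MAPE via MLmetrics, and Spearman's ρ). The paired height values are invented for illustration; they are not measurements from this study.

```r
# Validation of 3D-scanner estimates against conventional ("ground truth") measurements.
library(MLmetrics)

ruler   <- c(120, 150, 185, 210, 260)   # conventional measurements (mm)
scanner <- c(116, 144, 178, 201, 249)   # estimates from the 3D reconstructions (mm)

fit  <- lm(scanner ~ ruler)                               # regression for the validation plot
r2   <- summary(fit)$r.squared
rmse <- sqrt(mean((scanner - ruler)^2))                   # base R
mape <- MAPE(y_pred = scanner, y_true = ruler) * 100      # MLmetrics, as a percentage
rho  <- cor(scanner, ruler, method = "spearman")

round(c(R2 = r2, RMSE = rmse, MAPE = mape, rho = rho), 3)
```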
2020-09-13T13:12:15.707Z
2020-09-09T00:00:00.000
{ "year": 2021, "sha1": "61c7c7f9bdfffa384702aad1edc3a15bf0cdbe96", "oa_license": "CCBY", "oa_url": "https://plantmethods.biomedcentral.com/track/pdf/10.1186/s13007-021-00795-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ba2993b555317c64bdfd84a968b94f9889de59c7", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Computer Science" ] }
256297548
pes2o/s2orc
v3-fos-license
Awake Prone Positioning for Non-Intubated COVID-19 Patients with Acute Respiratory Failure: A Meta-Analysis of Randomised Controlled Trials Introduction: Awake prone positioning (APP) has been widely applied in non-intubated patients with COVID-19-related acute hypoxemic respiratory failure. However, the results from randomised controlled trials (RCTs) are inconsistent. We performed a meta-analysis to assess the efficacy and safety of APP and to identify the subpopulations that may benefit the most from it. Methods: We searched five electronic databases from inception to August 2022 (PROSPERO registration: CRD42022342426). We included only RCTs comparing APP with supine positioning or standard of care with no prone positioning. Our primary outcomes were the risk of intubation and all-cause mortality. Secondary outcomes included the need for escalating respiratory support, length of ICU and hospital stay, ventilation-free days, and adverse events. Results: We included 11 RCTs and showed that APP reduced the risk of requiring intubation in the overall population (RR 0.84, 95% CI: 0.74–0.95; moderate certainty). Following the subgroup analyses, a greater benefit was observed in two patient cohorts: those receiving a higher level of respiratory support (compared with those receiving conventional oxygen therapy) and those in intensive care unit (ICU) settings (compared to patients in non-ICU settings). APP did not decrease the risk of mortality (RR 0.93, 95% CI: 0.77–1.11; moderate certainty) and did not increase the risk of adverse events. Conclusions: In patients with COVID-19-related acute hypoxemic respiratory failure, APP likely reduced the risk of requiring intubation, but failed to demonstrate a reduction in overall mortality risk. The benefits of APP are most noticeable in those requiring a higher level of respiratory support in an ICU environment. Introduction Patients with acute respiratory failure secondary to moderate to severe Coronavirus Disease 2019 (COVID-19) often require non-invasive respiratory support [1]. Unfortunately, Methods This meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines (Supplementary Table S1) [15]. This study did not require ethical approval. Our protocol was registered with PROSPERO, The International Prospective Register of Systematic Reviews (CRD42022342426). Data Sources and Search Strategy An electronic search was performed on the Cochrane Central Register of Controlled Trials (CENTRAL via the Cochrane Library), MEDLINE (via PubMed), Embase (via Ovid), clinicaltrials.gov, and ProQuest Dissertations and Theses Global (PQDT) from inception up to August 2022. MeSH terms and relevant keywords for (prone position*) AND (awake or non-intubated) AND (COVID-19 OR SARS-CoV-2) were used. The detailed search strategy is available in Supplementary Table S2. Hand-searching of pertinent review articles and bibliographies of the included original articles were also undertaken. We performed forward citation searching using the Web of Science to identify further eligible articles. Study Selection and Eligibility Criteria All of the articles were imported into Mendeley Desktop 1.19.8 (Mendeley Ltd., Amsterdam, Netherlands) and duplicates were removed. Two authors (A.S. and A.A.) independently screened the titles and abstracts of all of the retrieved articles and removed those not fulfilling the inclusion criteria. 
Full texts of the remaining articles were reviewed against the eligibility criteria. Conflicts or disagreements were discussed and resolved with a third author (H.A.C.). We included RCTs comparing APP with supine positioning or standard of care with no prone positioning for non-intubated adult (>18 years old) patients with COVID-19-related acute hypoxemic respiratory failure. We did not apply any language restriction. We excluded articles that evaluated patients intubated before or at enrolment, and those including paediatric patients (<18 years of age). We also excluded case reports, observational studies, reviews and editorials, and articles that did not report any of our pre-specified outcomes. Outcomes The co-primary outcomes of our meta-analysis were the risk of intubation and the reported all-cause mortality in patients with COVID-19-related acute respiratory failure, while secondary outcomes included (1) the need for escalating respiratory support, (2) length of ICU stay, (3) length of hospital stay, (4) ventilation free-days, and (5) adverse events. Data Extraction For baseline characteristics, data extraction was done by two independent groups of authors, which included author names, year of publication, country, RCTs enrolment centres or location, patient enrolment, details including outcome measures for intervention and control groups, age, gender, BMI, corticosteroids use, and follow-up days. Data were then checked by a third independent group of authors. For categorical outcomes, the number of events for that outcome and the total number of patients were extracted, while for continuous outcomes, the sample size, mean, or median were extracted as provided in the studies and the medians were converted to means for data analysis [16]. Risk of Bias and Certainty of Evidence Assessment Two independent authors (S.O. and R.H.) assessed the risk of bias in the included studies using the Cochrane "Risk of bias" tool for randomized trials (RoB 2.0) [17]. RoB 2.0 addressed five specific domains: (1) bias arising from the randomization process, (2) bias due to deviations from intended intervention, (3) bias due to missing outcome data, (4) bias in measurement of the outcome, and (5) bias in the selection of the reported results. We applied this tool to each included study and the source of bias was graded as high, low, or unclear, which determined the risk of bias as high, low, or some concerns. Disagreements were discussed and resolved with a third author (M.S.). The quality of evidence was graded as very low, low, moderate, or high using the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) assessment tool on the basis of risk of bias, publication bias, imprecision, inconsistency, and indirectness [18]. Data Synthesis Risk ratios (RR) and mean differences (MDs) for dichotomous and continuous outcomes, respectively, with 95% confidence intervals (CIs), were pooled using the DerSimonian and Laird random-effects model. The pooled results were represented graphically as forest plots. The Chi 2 test and the I 2 statistic were used to assess heterogeneity across studies. I 2 values were interpreted according to the Cochrane Handbook for Systematic Reviews of Interventions, section 10.10 [19]. p < 0.10 was considered statistically significant for the Chi 2 test [19]. We conducted the statistical analysis using Review Manager (RevMan, Version 5.4; The Cochrane Collaboration, Copenhagen, Denmark). 
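The pooling described above was performed in RevMan, with the Jamovi MAJOR module (which wraps the metafor R package) used for publication bias testing. As a minimal sketch, the same DerSimonian-Laird random-effects pooling of risk ratios can be reproduced in metafor as below; the study names and event counts are invented for illustration and are not the data of the included trials.

```r
# Random-effects (DerSimonian-Laird) pooling of risk ratios for a dichotomous outcome.
library(metafor)

dat <- data.frame(
  study = c("Trial A", "Trial B", "Trial C", "Trial D"),
  ai = c(30, 12, 45, 20), n1i = c(200, 80, 400, 150),   # events / total, intervention (APP) group
  ci = c(38, 15, 52, 24), n2i = c(195, 78, 390, 148)    # events / total, control group
)

es  <- escalc(measure = "RR", ai = ai, n1i = n1i, ci = ci, n2i = n2i, data = dat)
res <- rma(yi, vi, data = es, method = "DL")   # DerSimonian-Laird random-effects model

summary(res)                 # pooled log risk ratio, Q test and I^2
predict(res, transf = exp)   # pooled RR with 95% CI on the ratio scale
forest(res, atransf = exp)   # forest plot of the kind shown in the figures
```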
We used funnel plots and Egger's test to assess publication bias when at least 10 studies were included in a meta-analysis, using the Jamovi (version 1.8; Jamovi, Sydney, Australia) MAJOR module, which is based on the metafor package for R [20]. We conducted subgroup analyses on our primary outcomes based on the respiratory support level and the patient location at enrolment. With regards to the respiratory support level, conventional oxygen therapy was defined as oxygen therapy without positive pressure such as a nasal cannula or a mask, and a higher level of respiratory support was defined as the use of positive airway pressure via high-flow nasal cannula or non-invasive ventilation. The location at enrolment was ICU versus non-ICU. Intermediate ICU or the emergency department was classified as ICU, whereas non-ICU indicated general hospital wards. A p-value < 0.10 was considered statistically significant for the test for subgroup differences [21]. We also performed a post hoc exploratory meta-regression analysis, using the OpenMetaAnalyst software, under the random-effects model for our primary outcomes with the duration of APP in the intervention group as the covariate. Search Results The literature search retrieved 1712 studies. After the exclusion of duplicates, reviews, and ineligible articles, we included a total of 11 RCTs with a cumulative sample size of 2385 patients (1218 in the APP group and 1167 in the control group) in our review [14,22-31]. The literature screening process is summarised in Figure 1. Characteristics of Included Studies All of the studies were published between 2020 and 2022. There were five multicentre studies, four single-centre studies, and one multinational RCT. The APP procedures were of a variable duration in the included studies, ranging from 1 h to 16 h, or up to the tolerance of the patient. The follow-up duration ranged from 28 to 30 days for most trials; one trial had a follow-up of 1 day only [24], one of 14 days [27] and only one RCT had a follow-up of greater than 30 days (60 days) [14]. All of the included studies used different types of initial respiratory support. In five studies, patients were given a lower level of respiratory support (i.e., conventional oxygen therapy) [14,24,28,29,31]. 
Five studies used a higher level of respiratory support (i.e., high-flow nasal cannula or NIV) [22,23,26,27,30]. Patients were exclusively in an ICU setting in one study [26], exclusively in a general ward setting in six RCTs [23-25,27,29,31], and in both settings in one RCT [22]. In the study by Ehrmann et al., patients were in a mixed setting of ICU, intermediate care unit, emergency department, and general ward [30]. In the study by Gad et al., patients were in the critical care isolation unit [28]. In the study by Alhazzani et al., a monitored acute care unit was used [14]. The detailed characteristics of included studies are shown in Table 1 (data are reported as mean ± SD or median (IQR); APP, awake prone positioning; BMI, body mass index; ICU, intensive care unit; P/F, ratio of partial pressure of arterial oxygen to fraction of inhaled oxygen; S/F, ratio of pulse oxygen saturation to fraction of inhaled oxygen; NR, not reported). Quality Assessment of Included Studies Assessment of risk of bias using RoB 2.0 found a high risk of bias in three studies and some concerns of bias in six studies (Supplementary Figure S1). The most common issue was in the domains of the randomization process and deviations from intended interventions. The remaining two RCTs were judged to be at a low risk of bias [14,30]. Primary Outcomes Risk of Intubation All 11 studies reported the need for intubation as an outcome. Patients in the APP group were at a significantly lower risk of needing intubation compared with the control group (RR 0.84, 95% CI: 0.74-0.95; p = 0.98, I 2 = 0%; Figure 2). Egger's test for funnel plot asymmetry did not demonstrate any suspicion of publication bias (p = 0.553; Supplementary Figure S2). The quality of evidence was judged to be moderate, with concerns about the risk of bias in the included studies (Supplementary Table S3). 
In the subgroup analysis for the type of respiratory support at enrolment, a significant reduction in the need for intubation was reported in the APP group versus the control group for a higher level of respiratory support (RR 0.82, 95% CI 0.71-0.93), but not for conventional oxygen therapy (RR 1.07, 95% CI 0.66-1.73). However, there was no significant difference between the two subgroups (P interaction = 0.29; Supplementary Figure S3). In the subgroup analysis for enrolment location, a significant reduction in the need for intubation was reported in the APP group versus the control group for the ICU setting (RR 0.83, 95% CI 0.73-0.95), but not for the non-ICU setting (RR 0.88, 95% CI 0.44-1.76). However, the test for subgroup differences was not significant (P interaction = 0.87; Supplementary Figure S4). The results of meta-regression showed a non-significant negative association of APP duration with the risk of intubation, with longer durations of APP demonstrating a possible trend towards a greater benefit (coefficient = −0.033; p = 0.160; Supplementary Figure S5). Mortality All 11 studies reported the incidence of all-cause mortality. Risk for all-cause mortality was comparable for the APP versus the control group (RR 0.93, 95% CI 0.77-1.11; p = 0.87; I 2 = 0%; Figure 3). Egger's test did not show any evidence of publication bias (p = 0.204; Supplementary Figure S6). The quality of evidence was judged to be moderate with concerns of imprecision and risk of bias (Supplementary Table S3). 
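For completeness, the subgroup comparison, meta-regression and Egger's test reported above can be sketched with metafor as follows. This is a self-contained illustration: the event counts, settings and APP durations are invented, and in the review itself Egger's test was only applied when at least 10 studies contributed to an outcome.

```r
# Subgroup differences, meta-regression on APP duration, and Egger's regression test.
library(metafor)

dat <- data.frame(
  study     = paste("Trial", LETTERS[1:6]),
  ai = c(30, 12, 45, 20, 25, 18), n1i = c(200, 80, 400, 150, 160, 120),  # intubations / total, APP
  ci = c(38, 15, 52, 24, 33, 21), n2i = c(195, 78, 390, 148, 158, 118),  # intubations / total, control
  setting   = c("ICU", "ward", "ICU", "ward", "ICU", "ward"),            # assumed enrolment location
  app_hours = c(8, 2, 6, 3, 10, 4)                                       # assumed daily APP duration (h)
)
es <- escalc(measure = "RR", ai = ai, n1i = n1i, ci = ci, n2i = n2i, data = dat)

# Test for subgroup differences: QM (moderator) test from a mixed-effects model
sub <- rma(yi, vi, mods = ~ setting, data = es, method = "DL")
summary(sub)

# Meta-regression with APP duration as the covariate
reg <- rma(yi, vi, mods = ~ app_hours, data = es, method = "DL")
summary(reg)

# Funnel plot and Egger's regression test on the overall model
overall <- rma(yi, vi, data = es, method = "DL")
funnel(overall)
regtest(overall, model = "lm")
```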
In the subgroup analysis for the type of respiratory support, we found no difference for APP versus the control group in patients on a higher level of respiratory support (RR 0.92, 95% CI 0.76-1.10) as well as patients on conventional oxygen therapy (RR 1.14, 95% CI 0.47-2.75; P interaction = 0.64; Supplementary Figure S7). In the subgroup analysis for enrolment location, there was no difference for APP versus the control group in patients admitted to the ICU (RR 0.91, 95% CI 0.75-1.10) and patients not admitted to the ICU (RR 0.81, 95% CI 0.41-1.59; P interaction = 0.75; Supplementary Figure S8). In the meta-regression, there was no significant association between the duration of APP and the risk of all-cause mortality (coefficient = 0.006; p = 0.870; Supplementary Figure S9). Secondary Outcomes Length of ICU Stay The length of ICU stay was reported by five studies [22,23,26,28,30]. No significant difference was observed for APP versus the control group (MD 0.08, 95% CI −0.96 to 1.12; p = 0.88, I 2 = 8%; Supplementary Figure S10). The quality of evidence was judged to be moderate with concerns of imprecision only (Supplementary Table S3). Ventilator-Free Days Three studies reported ventilator-free days as an outcome [14,22,25]. Our analysis reported no significant difference between APP and the control group (MD 3.36, 95% CI −7.20 to 13.92; p = 0.53, I 2 = 95%; Supplementary Figure S13). The quality of evidence was downgraded to very low with concerns of inconsistency, imprecision, and risk of bias (Supplementary Table S3). For safety outcomes, the APP group and the control group were comparable in terms of risk of adverse events and serious adverse events (RR 1.29, 95% CI 0.52-3.21; p = 0.59, I 2 = 76% and RR 1.60, 95% CI 0.94-2.73; p = 0.08, I 2 = 0%, respectively; Supplementary Figures S14 and S15). No publication bias was detected in the outcome of the incidence of adverse events (p for Egger's = 0.173; Supplementary Figure S16). 
The quality of evidence for the risk of adverse events was downgraded to very low, with concerns of imprecision, inconsistency, and risk of bias, while for serious adverse events it was judged to be moderate, with concerns of imprecision only (Supplementary Table S3).

Discussion

The results of our meta-analysis demonstrate a likely reduction in the risk of intubation with APP, with no increase in the incidence of total adverse or serious adverse events. In other important patient-centred outcomes, the APP and control groups were comparable in terms of all-cause mortality, ICU length of stay, ventilator-free days, and the need for escalating respiratory support. The setting where APP was delivered appeared to influence its effectiveness. Patients admitted to the ICU and those receiving a higher level of respiratory support, including high-flow nasal oxygen, showed a reduction in the risk of intubation with APP, while patients receiving conventional respiratory support and those not admitted to the ICU showed no benefit with APP. Prone positioning improves oxygen delivery and may reduce mortality in patients on mechanical ventilation suffering from severe ARDS [7,32,33]. The specific pathophysiological effects of APP in viral pneumonia are not yet fully understood; the observed improvements in oxygenation suggest that mechanisms similar to those seen in mechanically ventilated patients are at play. In prone positioning, the gravitational shift in the thoracic cavity enhances the reopening of poorly ventilated atelectatic areas and promotes the recruitment of dorsal-dependent lung regions [34]. APP in spontaneously breathing patients may promote a more homogeneous diffusion and dispersal of pleural pressure, reducing the strain on the lungs in acute hypoxaemia [8,30,35,36]. The improvement in ventilation/perfusion (V/Q) matching can also be attributed to the redistribution of blood flow by gravitational forces to the better-ventilated areas and, during the APP session, to the relaxation of hypoxic pulmonary vasoconstriction, leading to better right ventricular performance [37][38][39]. Additionally, APP is associated with a decrease in fluid collection in the alveoli, which may improve the hypoxemic state and the supply of adequate oxygen throughout the lungs [30,40,41]. These beneficial changes in V/Q matching could help improve oxygenation, ameliorating the high respiratory drive and reducing the risk of self-inflicted lung injury, leading to a reduced risk of intubation in COVID-19 patients. Our results, which are significantly influenced by the meta-trial by Ehrmann et al. [30], are consistent with a significant reduction in the risk of intubation, similar to a previous meta-analysis by Li et al. [11]. This reduced risk of intubation, however, failed to translate into a reduction in mortality with APP. Notably, our results do not agree with the previously published meta-analyses that report a reduction in mortality in the APP group [12,13,42]. We only included RCTs in our meta-analysis, thus removing the confounding and selection biases that these previous meta-analyses suffered from. Both our results and those of Li et al. highlight a trend towards a greater benefit of APP for patients in the ICU and patients receiving a higher level of respiratory support.
Several factors, such as an increased staff-to-patient ratio and close respiratory monitoring promoting greater adherence, may account for the different efficacy of APP in ICU versus non-ICU settings, and with higher-level versus conventional respiratory support. Nevertheless, these findings should be interpreted with caution, as the difference between the subgroups was not significant in our study. This may be attributed to low power due to the limited sample size in each subgroup. Interestingly, the RCT by Alhazzani et al. [14], which had the second highest weight in our meta-analysis, showed that prone positioning may not be advantageous for patients with more severe disease, which is in contrast with the results of the meta-trial by Ehrmann et al. [30]. A recent study highlighted that the response to prone positioning is significantly different when atelectatic areas change to dense consolidation [43]. As these changes develop over a variable time period, the observed difference between the trials may be attributed to the different inclusion criteria and APP protocols. The duration of prone positioning varied considerably in the included studies, ranging from 1-2 h/day to 8-10 h/day; however, most studies had an APP duration of less than 8 h/day. In our meta-regression analysis, an increased duration of APP was associated with a lower risk of intubation; however, as this trend did not achieve statistical significance, it should be interpreted with due caution. Nevertheless, a recent study showed that an increased APP duration of 8 h/day was associated with better clinical outcomes [44]. Future RCTs should attempt to investigate this association further to determine the optimal duration of APP. There is a concern about complications such as device displacement, pressure ulcers, and hemodynamic instability during APP [45]. In our meta-analysis, we found that the rate of adverse events was similar between the two groups, albeit with substantial imprecision, which does not rule out an increased risk of serious complications. There are several barriers to APP in complex hospital settings, as noted by multiple authors of the studies included in this meta-analysis. Patients' hesitation, discomfort, lack of knowledge, and pregnancy itself can be potential barriers, and the current literature shows poor adherence towards APP [46]. It has been shown that musculoskeletal pain and other discomforts are common in severe COVID-19 disease, and this increases the risk of non-adherence even with relatively simple medical procedures [47]. Another possible barrier can be a lack of knowledge, perception, and attitude towards APP, as has been the case with prone positioning of mechanically ventilated patients [45]. Additionally, APP can be a labour-intensive procedure and requires careful respiratory monitoring to recognise therapy failure and to avoid harm from self-inflicted lung injury and late intubation. Moreover, leadership and team dynamics play an essential role, and thus team discomfort and lack of experience with APP, along with prior negative experiences and observed adverse effects of the intervention, could be significant barriers to APP [48].

Limitations

Our study has several limitations that need to be considered while interpreting our results. The findings of our meta-analysis might not be generalizable to non-COVID-related acute hypoxemic respiratory failure, as it includes patients with COVID-19 only.
Furthermore, some patients in the control group followed prone positioning for a few hours, while some patients in the prone positioning group reverted to the supine position to attain a comfortable position [25][26][27]. Additionally, the values quoted by studies for actual awake prone positioning time were unreliable, as positioning was observed and recorded inconsistently and with unknown accuracy by bedside clinicians. Patient adherence to the APP protocol must be objectively measured in any future studies conducted. To achieve this, the start and end times for each position, including the prone, lateral, and supine positions, must be accurately recorded.

Conclusions

Our meta-analysis showed that APP likely lowers the risk of intubation in COVID-19 patients with acute hypoxemic respiratory failure, with no increase in the incidence of adverse events. The risk of mortality, length of stay, and length of mechanical ventilation are not affected by APP; however, the studies are underpowered and heterogeneous for these outcomes. Future large-scale RCTs are required to confirm these findings and to identify the subpopulations, the degree of disease severity, and the optimal duration that would benefit the most from APP.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm12030926/s1, Figure S1: Risk of bias assessment of included studies; Figure S2: Funnel plot for risk of intubation; Figure S3: Subgroup analysis of risk of intubation for type of respiratory support before randomization; Figure S4: Subgroup analysis of risk of intubation for enrolment location; Figure S5: Meta-regression for risk of intubation by APP duration; Figure S6: Funnel plot for all-cause mortality; Figure S7: Subgroup analysis of all-cause mortality for type of respiratory support before randomization; Figure S8: Subgroup analysis of all-cause mortality for enrolment location; Figure S9: Meta-regression for risk of all-cause mortality by APP duration; Figure S10: Effect of APP on length of ICU stay; Figure S11: Effect of APP on length of hospital stay; Figure S12: Effect of APP on need for escalating respiratory support; Figure S13: Effect of APP on ventilator-free days; Figure S14: Effect of APP on the incidence of adverse events; Figure S15: Effect of APP on the incidence of serious adverse events; Figure S16: Funnel plot for the incidence of adverse events; Table S1: PRISMA 2020 checklist; Table S2: Search strategy for PubMed which was adapted for other databases; Table S3: Grading of recommendations assessment, development, and evaluation (GRADE) summary of findings.

Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
2023-01-27T16:09:32.453Z
2023-01-25T00:00:00.000
{ "year": 2023, "sha1": "e8c23a1bf6be5c1dd23274b64121e9712b88900e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/12/3/926/pdf?version=1674636837", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a9c2efd14d1f8933a8b448a8f81dbd9a55f533f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
244824387
pes2o/s2orc
v3-fos-license
Soil Microcosms for Bioaugmentation With Fungal Isolates to Remediate Petroleum Hydrocarbon-contaminated Soil

The aim of this work was to isolate indigenous PAH-degrading fungi from petroleum-contaminated soil and exogenous ligninolytic strains from decaying wood, with the ability to secrete diverse enzyme activities. A total of ten ligninolytic fungal isolates and two native strains have been successfully isolated, screened, and identified. The phylogenetic analysis revealed that the indigenous fungi (KBR1 and KB8) belong to the species Aspergillus niger and Aspergillus tubingensis, while the ligninolytic exogenous PAH-degrading strains, namely KBR1-1, KB4, KB2, and LB3, were affiliated to different genera: Syncephalastrum sp., Paecilomyces formosus, Fusarium chlamydosporum, and Coniochaeta sp., respectively. Based on the taxonomic analysis, the enzymatic activities, and the hydrocarbon removal rates, single fungal cultures employing the strains LB3, KB4, and KBR1 and the mixed culture (LB3+KB4) were selected to be used in soil microcosm treatments. Total petroleum hydrocarbons (TPH), fungal growth rates, BOD5/COD ratios, and GC-MS profiles were determined in all soil microcosm treatments (SMT) and compared with those of the control (SMU). After 60 days of culture incubation, the highest rate of TPH degradation, approximately 92±2.35%, was recorded in SMT[KB4], followed by SMT[KBR1] and SMT[LB3+KB4] with 86.66±1.83% and 85.14±2.21%, respectively.

Introduction

Over the last two decades, accelerated industrialization and the massive use of aromatic compounds in explosives, dyestuffs, pesticides, and pharmaceuticals have resulted in serious environmental contamination of soil, water, and air. Oil spillage is a serious threat to all compartments of the ecosystem 1 . During extraction, transportation, storage, and distribution operations, crude oil and its refined products are frequently exposed to accidental spillage causing soil pollution 2 . Soils contaminated with petroleum-associated persistent organic pollutants (POPs) such as PAHs pose a high potential health risk because of their ability to enter the food chain and their affinity for accumulation in living organisms 3 . Soil matrix properties and functions are closely related to the different activities occurring on land and to xenobiotic structures such as petroleum-associated PAHs. Owing to the chemical stability of PAHs, their hydrophobicity, and their recalcitrance to microbial degradation, spilled oil may damage the biological and physico-chemical properties of the petroleum-polluted soil. Petroleum hydrocarbons cause alterations of soil biological properties, affecting the microbial diversity and the enzymatic activities as well as the physico-chemical characteristics 4,5 . Certain essential soil functions may be lost due to the high toxicity of such persistent aromatic hydrocarbon structures 6 . Indeed, spilled oil may develop anaerobic conditions and asphyxia in soil pores, with consequent impacts on the microbial activities 7 . In this regard, Klamerus-Iwan et al. 8 demonstrated a significant decline of the
Therefore, bioaugmentation approaches are necessary to enhance the performance of indigenous microbial populations severalfold through the introduction of microbes with specific metabolic activities for an effective in-situ remediation of polluted areas 19 . Typically, fungi are suited for bioremediation of crude oil in polluted sites owing to their diverse metabolic activities.
They are able to secrete a broad range of ligninolytic and non-ligninolytic enzymes, allowing them to use petroleum hydrocarbons as a carbon and energy source and assimilate them into fungal biomass 20 . Moreover, the efficiency of fungal cultures in removing or degrading PAHs from petroleum-contaminated soil is related to various factors; among them, pollutant bioavailability, survival of the microorganisms, and their metabolic diversity are essential for bioaugmentation 21 . It was previously well demonstrated that soil microcosms (SM) serve as test systems that may be adapted to various environmental conditions. Indeed, outcomes of microcosm studies are often used to develop remedial pilot process specifications 22 . The current study highlights the application of newly obtained fungal isolates (indigenous from the investigated soil and exogenous from decaying wood) in PAH-contaminated soil remediation processes. Individual and mixed fungal cultures were selected based on their taxonomy and metabolic diversity to enhance the bioremediation performance in soil microcosm systems.

Soil sampling, physico-chemical characteristics, and microbial population

PAH-contaminated soil samples were collected from spots around the oil well (7) located in Dammam city (Saudi Arabia). All soil samples were taken at a depth of 5-10 cm from the upper surface of the topsoil. Samples were transferred to the laboratory in tightly closed, sterilized nylon sacks marked with relevant information (number, location-specific characteristics, and date). Before utilization in the treatment study, all soil samples were mixed and sieved to remove particles greater than 1.25 cm and stored at 4 °C. The physicochemical analysis was performed by the Arabia Life Sciences Division-Environmental Saudi Arabia (ALS), which is a diversified testing services organization. The pH, moisture content, biological oxygen demand (BOD), total petroleum hydrocarbons (TPH), and semivolatile and volatile organic compounds-BTEX (benzene, toluene, ethylbenzene, and xylene) of the petroleum-contaminated soil before and after the fungal treatments were determined (Table 3). For the estimation of the semi-volatile total petroleum hydrocarbons (TPH), the USEPA 8015B gas chromatography/flame ionization detection (GC/FID) method was used. Sample extracts were analyzed by capillary GC/FID and quantified against alkane standards over the range C10-C40. Volatile TPH/BTEX were determined using the EPA 8260 purge-and-trap gas chromatography-mass spectrometry (GC/MS) method; extracts were analyzed by purge-and-trap capillary GC/MS. Methanol extraction of soils for purge and trap was performed according to method USEPA SW 846-5030A. The soil pH was determined with a digital pH meter in a soil-water suspension (1:2.5) as described by Jackson 48 . The moisture contents of the soil were determined gravimetrically, based on weight loss over a 12-hour drying period at 103-105 °C; this method is compliant with NEPM (2013). Enumeration of bacteria from the contaminated soil was performed using the serial dilution and plating technique, and the microbial populations were counted in terms of colony-forming units (CFUs) 49 (a short calculation sketch is given below).

Isolation of decay-wood decomposing fungi and indigenous soil fungi

Decay-wood samples were collected in sterilized and labeled plastic bags from different biotopes of the Barzah and Rahat regions of Khulais-Jeddah city during March 2020. Malt Extract Agar (MEA) (30.0 g/L, pH 5.5) supplemented with antibiotics (0.01% of ampicillin and streptomycin) was used as the selective medium for the isolation of fungal strains.
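The enumeration of soil bacteria by serial dilution and plating mentioned above reduces to a simple back-calculation from the colony count, the dilution factor, and the plated volume. A minimal sketch, using hypothetical counts rather than data from this study:

```python
def cfu_per_gram(colonies, dilution, plated_volume_ml, soil_mass_g, diluent_volume_ml):
    # CFU per mL of the undiluted suspension, then scaled back to CFU per gram of soil
    cfu_per_ml = colonies / (dilution * plated_volume_ml)
    return cfu_per_ml * diluent_volume_ml / soil_mass_g

# Hypothetical plate count: 87 colonies on the 10^-4 dilution plate, 0.1 mL plated,
# from 10 g of soil suspended in 90 mL of diluent
print(f"{cfu_per_gram(87, 1e-4, 0.1, 10.0, 90.0):.2e} CFU per gram of soil")
```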
The isolation of decay-wood decomposing fungi was carried out by the direct plate method as suggested by Daâssi et al. 50 . The purity of each fungal strain was confirmed by microscopic observation. Both soil-plate and soil-suspension methods were used for the isolation of soil fungi 23 . Plates were prepared by transferring 0.05 to 0.015 g of the contaminated soil to be examined into a sterilized Petri dish. Cooled MEA medium was added and the soil particles were dispersed throughout the agar by gentle shaking of the plates before the agar solidified. A soil-suspension solution was prepared from 10 g of the dried contaminated soil suspended in 100 mL of sterile physiological water (NaCl 9 g/L) and maintained for 20 min under agitation on a reciprocating shaker at 120 rpm. After shaking, serial dilutions and plating were performed according to Agrawal et al. 51 . All plates were incubated at 30 °C for 5 to 7 days. Fungal colonies were sub-cultured on fresh MEA supplemented with 0.01% of ampicillin and streptomycin until pure strains were obtained. Preliminary identification of the fungal isolates was performed through macroscopic and microscopic observation.

Selection of hydrocarbon-degrading fungal isolates

Preliminary screening of oil-degrading fungal isolates (both ligninolytic and native fungal isolates) was performed by the agar well diffusion method. As the sole source of carbon and energy, 5% diesel fuel (Saudi Aramco, defined according to Daâssi et al. 52 ) was spread on the surface of the MEA plates with a glass rod. Fungal strain suspensions (in sterile water) were prepared and loaded into MEA plate wells. The culture plates were incubated at 30 °C, and the appearance of substantial growth was monitored daily for 5 days. The culture plates with and without the addition of diesel fuel were also examined for growth. The fungal strains selected by the agar well diffusion method were then checked for their ability to grow on and mineralize the petroleum hydrocarbons in the contaminated soil. Cultures were carried out on mineral medium (MM), whose composition included (NH4)6Mo7O24·4H2O, 0.01 g. Before autoclaving, the pH of the solution was adjusted. Five grams of the soil sample were mixed with 20 mL of MM and inoculated with 1% of the spore suspension of each isolate. The 250-mL Erlenmeyer flasks were then incubated under stationary conditions for 14 days at 30 °C.

Plate-agar tests for the investigation of the enzymatic activities

Plates containing selective media (supplemented with the suitable enzyme substrate) were used as a qualitative test to detect the enzymatic activities in the fungal collection. Under aseptic conditions, a mycelial fragment was taken from each pure strain and placed on the surface of the selective agar media. After incubation at 30 ºC, fungal strains were recorded as positive or negative based on the appearance of a degradation halo surrounding the mycelial growth.

Laccase activity

To detect laccase-producing fungi, strains were grown on selective solid MEA medium supplemented with 150 µM copper sulfate (as laccase inducer) and 5 mM 2,6-DMP or 0.2 mM ABTS (laccase substrates), then incubated at 30 ºC. Fungal isolates which showed red-brown halos (with 2,6-DMP) and green halos (with ABTS) were selected as laccase-positive (Lac+) strains and transferred into 250 mL flasks of MEB supplemented with 150 µM CuSO4 as inducer for further characterization.
Proteolytic activity

Sterile milk (250 mL L -1 (v/v)) was incorporated as the fungal protease substrate in the nutrient agar medium (pH 5.5) containing 5 g/L of peptone and 3 g/L of yeast extract, after sterilization and semi-cooling of the media. The presence of a halo of degraded milk is evidence of proteolytic activity 53 .

CMCase activity

Modified Mandels and Reese medium was used as the screening medium to select CMCase-positive (CMCase+) species 54 . Fungi were grown on the agar medium and incubated for 7 days at 30 °C. After fungal growth, the CMC agar plates were stained with 1% (w/v) Congo Red solution for 15 min and destained with NaCl (1 M) for 15 min. CMCase activity was detected by the presence of a halo around the isolate 55 .

Lipase activity

For the detection of lipase activity, a selective medium of MEA amended with 1% olive oil and 0.001% rhodamine B was used. The fungal isolates were incubated at 30 °C for 7 days and then revealed using 365 nm light. Positive strains showed fluorescence under UV light 50 .

Amylase activity

For the detection of α-amylase activity, the starch agar plate method was performed. The fungal isolates were inoculated onto a starch plate and incubated at 30 °C until growth was seen. The Petri dish was then flooded with an iodine solution to visualize the degradation halo. Amylase-positive (Amyl+) species present a clearing halo around the mycelial growth 56 .

Optical microscopy

Optical microscopy images of the suspended mycelium were taken from seven-day-old MEA-plate fungal cultures using a VHX-5000 digital microscope (Keyence).

Identification and phylogenetic tree of fungal isolates

The selected hydrocarbon-degrading fungal strains were cultivated in 150 mL flasks containing 50 mL of liquid Malt Extract Broth medium (MEB) for 5 days. The mycelium was then harvested by filtration and successive washings with sterile Milli-Q water. The genomic DNA was extracted from the fungal cells using a DNeasy Plant Mini Kit (QIAGEN). The purity and the quantity of the DNA samples were estimated by the optical density ratio A260/A280. The molecular identification was carried out with the protocol suggested by Daâssi et al. 57 . The primers used for the amplification were ITS1 (5'-TCCGTAGGTGAACCTGCGG-3') and ITS4 (3'-TCCTCCGCTTATTGATATGC-5') 58 . BLASTn analysis was used for the resulting sequences (www.ncbi.nlm.nih.gov/BlastN). The organisms were identified based on the subject sequences in the databases showing the highest identity. Multiple sequence alignment between the selected subject sequences and the query ITS sequences of the isolated strains was achieved using ClustalW 59 . The phylogenetic tree was inferred using the neighbor-joining (NJ) method 60 in the MEGA11 program, with bootstrap values based on 1000 replicates 61 . Sequences have been deposited in GenBank.

Fungal inoculum preparation

Mycelial suspensions from the selected hydrocarbon-degrading fungi, exogenous ligninolytic (LB3; KB4) and indigenous (KBR1), were prepared as described by Potin et al. 62 . Seven-day-old MEA-plate fungal cultures of the three selected isolates were washed with 5 mL of sterile physiological water to obtain the fungal suspension, which was further filtered through sterile glass wool to separate mycelia from spores. The collected spore suspensions were quantified with a Thoma cell counting chamber. A 25 mL MEB amendment was added to each microcosm in order to induce spore germination in the microcosms.
Spores were added to the medium in calculated volumes to give a final total spore concentration of 10 4 spores g soil -1 .

Soil microcosm assays for bioaugmentation studies

Soil microcosms were used in this study for the mycoremediation of the PHC-contaminated soil. Each microcosm contained 50 g of 6 mm sieved soil mixed with 125 mL of MEB inside 400 mL glass bottles sealed with rubber caps and aluminum seals, and was incubated at room temperature for 60 days. Control soil microcosms (zero day/untreated/without fungal inoculation) and test (treated) soil microcosms for each fungal culture (mono- or co-culture) were set up over the 60 days of treatment in order to evaluate the biotic versus abiotic degradation of the PHC-contaminated soil. All the microcosm experiments were conducted in triplicate. The soil microcosm treatments (SM) were designed as follows: (SMU-sterile) control microcosms of air-dried, untreated contaminated soil (sterile soil) to assess abiotic losses of hydrocarbons; (SMU-not sterile) control microcosms of untreated, non-sterile soil to assess biotic degradation. All the microcosm treatments were inoculated with an initial concentration of 10 4 spores per gram of soil. Soil samples from each microcosm were collected manually with clean, sterilized (70% ethanol) stainless steel spatulas on days 15, 30, and 60 for the analytical determinations used to assess hydrocarbon degradation in the soil microcosms.

Assessment of petroleum-contaminated soil degradation by fungal isolates

Biomass estimation

The variation in the growth rate/biomass of the fungal cultures (mono- or co-cultures) was determined gravimetrically. The weight of the flask was taken before and after the incubation period.

Biological Oxygen Demand (BOD) and Chemical Oxygen Demand (COD)

The rate of degradation and the efficiency of the fungal isolates in the treatment of the PAH-contaminated soil were evaluated by BOD and COD analyses performed by standard methods 63 (APHA, 2001 and IS-3025).

Total Petroleum Hydrocarbon (% TPH) and Gas Chromatography-Mass Spectrometry (GC-MS) analysis

The extraction of the residual petroleum hydrocarbons from the contaminated soil was carried out by mechanical shaking as described by Siddique et al. 64 with some modifications. For each culture incubation period (15, 30, and 60 days), the remaining petroleum hydrocarbons (PHCs) were extracted from each soil microcosm (untreated/treated) using 30 mL of dichloromethane (DCM). Ten grams of soil sample were put in a glass bottle to which anhydrous sodium sulphate (Na2SO4) was added to remove moisture. The mixture was acidified with concentrated HCl (12 N) to avoid further degradation and shaken on a reciprocating shaker at 120 rpm for 3 hours. The reaction mixture was then separated by centrifugation (10 min, 8000, at 4 °C). The supernatant was transferred to a separating funnel (250 mL) to remove the aqueous layer and sequentially extracted twice with 2 volumes of dichloromethane. Finally, using a rotary evaporator, the dichloromethane was evaporated at 55 °C and the extracts were concentrated to near dryness, then re-dissolved in 1 mL of dichloromethane. The samples were kept at -20 °C until analysis. The residual PHC concentrations of the untreated soil and of the soil treated with the mono- or co-fungal cultures were determined. The percentage of total petroleum hydrocarbons (TPH) was determined gravimetrically and the results were expressed as percentages of the respective controls 51 .
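Two small calculations underlie the inoculation and the gravimetric TPH assessment described above: the volume of spore stock needed to reach the target dose of 10 4 spores per gram of soil, and the percentage of TPH removed relative to the control. A minimal sketch with hypothetical values (the stock concentration and residual masses below are illustrative, not measured in this study):

```python
def inoculum_volume_ml(target_spores_per_g, soil_mass_g, stock_spores_per_ml):
    # Volume of spore stock required to deliver the target dose to one microcosm
    return target_spores_per_g * soil_mass_g / stock_spores_per_ml

def tph_removal_percent(residual_treated_g, residual_control_g):
    # Gravimetric TPH removal expressed against the untreated control
    return 100.0 * (residual_control_g - residual_treated_g) / residual_control_g

# Hypothetical values: a 1e6 spores/mL stock dosing 50 g of soil at 1e4 spores/g,
# and 0.62 g of residual TPH in a treated microcosm versus 1.05 g in the control
print(inoculum_volume_ml(1e4, 50.0, 1e6), "mL of spore stock per microcosm")
print(f"{tph_removal_percent(0.62, 1.05):.1f} % TPH removal")
```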
The treated/extracted petroleum hydrocarbons were analyzed by gravimetric analysis and gas chromatography (GC) with an Agilent GC-MSD (6890N-5973), with the oven temperature kept at 80 °C for 4 min, then increased at a rate of 5 °C·min -1 to 250 °C and maintained at 250 °C for 20 min.

Results and Discussion

Isolation of decay-wood decomposing fungi and native soil fungi

Pieces of decaying wood used in this study were collected from different biotopes of western Saudi Arabia (Khulais-Jeddah). Barzah and Rahat are two natural habitats of Khulais selected for the decaying-wood sampling during March 2020. Primary identification of the ligninolytic fungal isolates was based on plate morphology, while the purity of the isolates was confirmed using microscopic parameters (Fig. 1a-f). From the decaying-wood samples, a total of ten pure wood-decomposing fungal isolates were obtained and confirmed by microscopic observation. In total, twelve morphologically different fungal strains (two abundant strains from the polluted soil and ten from the decaying wood) were isolated successfully and maintained as pure cultures on MEA. Both soil-plate and soil-suspension methods were used for the isolation of native soil fungi 23 . The two fungal strains that presented the highest abundance and growth ability in the soil sample were obtained by the soil-plate and soil-suspension isolation methods. These native fungi, designated KBR8 and KBR1, were chosen and identified by morphological characters and taxonomic keys. Based on the morphological aspects and microscopic observation, the strains KBR1 and KBR8 demonstrated the general characteristics of the genus Aspergillus.

Selection of hydrocarbon-degrading fungal isolates

The indigenous fungal isolates and the ligninolytic fungi were tested for their ability to degrade petroleum hydrocarbons. Primary screening was conducted on culture plates based on the agar well diffusion method (Fig. 2). Out of the twelve isolated strains investigated in the culture plate experiment, six were recorded as petroleum hydrocarbon-degrading, as indicated by a degradation halo surrounding the mycelial growth. The diameter of the halo demonstrates the ability of the fungus to utilize petroleum hydrocarbons. In all Petri dishes, the highest growth diameters were around the mycelia of KBR1 and KBR8 (native soil isolates) and KB4 and LB3 (ligninolytic isolates), indicating a good ability to degrade diesel hydrocarbons compared with the other isolated strains. However, KBR1-1 and KB2 showed only a narrow halo during fungal growth. In addition, the mixed culture consisting of LB3+KB4 showed the highest growth diameter. Lotfinasabasl et al. 24 , in a related study on fungal strains isolated from hydrocarbon-polluted soil samples, indicated that isolated fungi can be used in hydrocarbon bioremediation processes; however, their efficiency varied with the species and the metabolic diversity of the fungi. Later, a confirmatory assay of the hydrocarbon degradation potential of the isolated fungi was conducted in Erlenmeyer flasks with the investigated soil. The gravimetric determination of the residual hydrocarbons after biodegradation was performed by weighing the quantity of the petroleum hydrocarbons against the control. The estimated crude oil degradation efficiency after 14 days demonstrated that the ligninolytic isolate KB4 showed the maximum ability to utilize crude oil, giving the highest percent degradation of 29.32%, followed by the native isolates KBR1 and KBR8 with 22.68 and 20.34% degradation, respectively (Fig. 3).
These results are in line with those recorded by Lotfinasabasl et al. 24 , who reported that the highest remediation rate belonged to Aspergillus niger (20.55%) and the lowest rate to Penicillium sp. (16.453%). Additionally, the mycelial growth and biomass gain profiles indicated that all the selected strains previously tested on the culture plates were able to grow and use the petroleum hydrocarbons as a carbon source. Figure 3 shows an increase in the rates of fungal growth in the media containing petroleum-contaminated soil compared with media (MM) inoculated with non-polluted soil. This result demonstrates the ability of the fungal strains to assimilate petroleum hydrocarbon molecules for their growth using diverse extracellular enzymes. In this regard, Oboh et al. 25 reported the ability of Aspergillus, Penicillium, Rhizopus, and Rhodotorula species to grow on crude petroleum as the sole source of carbon and energy. Fungal culture on petroleum-polluted soil confirmed the potential of the selected fungi for the degradation of TPH from the contaminated soil, and these strains were thus selected for further study. The degradation ability of different genera from different habitats makes their catabolic potential even more versatile for transforming persistent organic compounds into inert and non-toxic molecules 26 . Among the twelve isolated fungi, the most interesting fungal strains in terms of crude oil degradation (LB3, KBR1, KBR8, KB2, KBR1-1) were cultivated and subjected to molecular identification based on the analysis of the amplified nucleotide sequences of the nuclear ribosomal ITS1-5.8S-ITS4 region.

Molecular identification of the selected petroleum hydrocarbon-degrading fungi

The molecular identification of the strains was performed with the BLAST alignment tool of the National Center for Biotechnology Information (NCBI) database. Closely related sequences were obtained from the GenBank database with similarity greater than 95% (Table 1). The ITS regions of the fungal strains were sequenced at Macrogen (Republic of Korea) and submitted to the GenBank database with accession numbers MZ817958, MW699896, MW699897, MZ817959, MW699895, and MW699893. Based on the percentage of similarity (Table 1) and on the multiple alignment of the ITS sequences provided by the Clustal Omega program, the phylogenetic analysis was run to find the evolutionary relationships of the newly isolated fungi to previously characterized species (Fig. 4). A phylogenetic tree was created by the neighbor-joining (NJ) method based on the alignment of the ITS sequences of the exogenous fungal strains (KBR1-1, KB4, KB2, and LB3) and the indigenous strains (KBR1 and KB8) with their homologous sequences obtained from the NCBI database. Analysis of 18S rRNA genes of the genus Coniochaeta revealed that the taxon appears as a monophyletic group related to teleomorphs of the genus Lecythophora. According to Lopez et al. 28 , Lecythophora (Coniochaeta) is a filamentous ascomycete which belongs to the family Coniochaetaceae and the order Sordariales. The strain KB4 (accession no. MW699897) showed 98.39% ITS identity with Paecilomyces formosus, Thermoascaceae sp., and Penicillium sp., and was close to Byssochlamys spectabilis (98.12%). The morphological traits of the fungus were determined to affiliate the isolate to Paecilomyces formosus (yellow septate hyphae, unicellular conidia). The genus Paecilomyces was first described by Bainier 29 as a genus closely related to Penicillium and comprising only one species, P. variotii Bainier. Accordingly, a previous study by Moreno-Gavíra et al.
30 reported that the genus Paecilomyces has yellowish septate hyphae, with irregularly branched conidiophores and smooth walls. The conidia are unicellular, in chains, and the youngest conidium is at the basal end. The indigenous isolates (KBR8 and KBR1) were affiliated to the genus Aspergillus based on BLAST analysis of the ITS sequences. In addition, the phylogenetic analysis showed that the two indigenous isolates clustered in a clade comprising exclusively Aspergillus species, with high bootstrap values for each branch. The ITS sequences of these fungal isolates were deposited at GenBank under accession numbers MW699895 and MW699896 for KBR8 and KBR1, respectively. It can be inferred from the phylogenetic tree that the strain closest to isolate KBR1-1 is the species Syncephalastrum racemosum. The related sequence, corresponding to strain KBR1-1, was deposited under accession no. MZ817958. In the present work, Aspergillus niger (KBR1), Lecythophora (Coniochaeta) (LB3), Paecilomyces formosus (KB4), Syncephalastrum racemosum (KBR1-1), Aspergillus tubingensis (KBR8), and Fusarium chlamydosporum (KB2) were the fungal isolates that demonstrated efficiency in biodegrading petroleum hydrocarbons. Our results agree with those of Gesinde et al. 31 , who reported that Aspergillus niger has a very active capability for degrading Nigerian and Arabian crude oils. Furthermore, in the same study, Aspergillus, Penicillium, and Fusarium species were demonstrated to be the most efficient metabolizers of hydrocarbons in comparison with the other isolates.

Screening of the enzyme activities of the new isolates

The capacity of the fungal isolates to produce several enzymatic activities such as lipases, proteases, amylases, cellulases, and laccases was investigated on selective solid media. Out of the twelve tested isolated strains, seven strains were recorded to secrete laccases, 10 strains were cellulase positive, 4 strains were amylase positive, 21 strains were found to produce proteases, and 4 strains were able to produce lipases (Table 2). According to Lopez et al. 28 , the ascomycete Coniochaeta ligniaria NRRL was able to produce lignocellulose-degrading enzymes including cellulase, xylanase, and two ligninolytic peroxidases (manganese peroxidase, MnP, and lignin peroxidase, LiP), but no laccase activity was recorded. Due to the complexity of lignocellulosic materials, ligninolytic fungi involved in nutrient cycling deploy versatile metabolic activities (hydrolases, oxidoreductases, and esterases) for the degradation of complex organic molecules into simpler ones 32,33 . Recent research has reported the role of lipase activity in petroleum hydrocarbon degradation. Similarly, Ramdass and Rampersad 34 demonstrated the presence of lipase activity in five new isolates from crude-oil-polluted soil. The results of our enzyme activity screening demonstrate a high metabolic diversity of the isolated fungal strains, which supports their catabolic potential in PAH remediation processes. Based on the taxonomic analysis and the metabolic diversity, the isolates KBR1, LB3, and KB4 and the mixed culture LB3+KB4 were selected to be used for PAH-contaminated soil remediation in microcosm systems.

Microcosms for petroleum-contaminated soil remediation

The petroleum-contaminated soil samples were collected from the oil well (7) located in Dammam city (Saudi Arabia) and maintained in plastic containers.
The soil samples were collected from the surface layer (5-10 cm) at different spots around a petroleum pipeline spillage. The soil samples were mixed together to form a composite sample, which was used to represent the contaminated areas in this study. The composite sample was sieved with a 6 mm mesh before the soil microcosm treatments and with a 2 mm mesh for soil characterization. The primary physical properties of the soil were its dark color, a pH of 6.8, and an average moisture content of 1.6 ± 0.2% (Table 3). A similarly high TPH content was reported in the study of Torres et al. 35 . Batch soil microcosms were conducted in several treatment systems (Table 4) to assess the potential of single or mixed fungal strains in the remediation of the petroleum-polluted soil. The investigated soil was not sterilized, in order to preserve its indigenous microbial flora as well as its physico-chemical properties 37 . Indeed, the indigenous soil flora constitutes an important heterogeneous microbial population for the enhancement of the biodegradation process. Soil microcosms were treated with selected fungal strains shown to have the ability to degrade petroleum hydrocarbons and to possess broad enzymatic capacities.

Profile of the TPH in soil microcosms

To study the petroleum hydrocarbon removal ability of the fungal strains, the TPH in each SM system was followed at different periods during the treatment (Fig. 5). The kinetics of TPH during the soil microcosm treatments (Fig. 5) demonstrated a rapid decrease of the hydrocarbons in SMT[LB3+KB4] compared with the other SMs inoculated with a single strain. This result highlights the essential role of the co-occurrence of different microbial species in enhancing the biodegradation yields 40,41 . In contrast, previous findings obtained by Okerentugba and Ezeronye 42 demonstrated that single fungal cultures were found to be better than mixed cultures. Overall, the results highlight the improvement of the biodegradation yields achieved by bioaugmenting soil microcosms with indigenous or exogenous fungi.

BOD5 and COD in soil microcosms

For the evaluation and monitoring of the degradation process, BOD and COD were estimated at different periods of the soil microcosm treatments. The results of organic matter removal are shown in Fig. 7. The BOD5 and COD of the control microcosm (SMU) were 57 and 145 mg/L, respectively, as seen in Table 3. The average BOD5 and COD percent removal efficiencies in this study were approximately 86.5% and 57.8%, respectively. The BOD5/COD ratio can give an indication of the biodegradability of the petroleum-contaminated soil: polluted samples may be considered biodegradable when the BOD5/COD ratio is between 0.4 and 0.8 43 . The BOD5/COD ratio within the same time interval was found to be 0.28. Given this BOD5/COD ratio (Figure 7), the samples can still not be considered highly biodegradable (BOD5/COD > 0.4 being required).

GC-MS analysis for soil microcosms

The remaining petroleum hydrocarbons (PHCs) were extracted and characterized by GC-MS for each soil microcosm (SM) treatment system after 60 days of culture incubation. Fig. 8 illustrates the superposed GC-MS profiles of the PHCs in the degrading soil microcosms. The PHCs remaining in the soil microcosms showed a decrease in the area of the major peaks compared to the control SMU, suggesting degradation of the main compounds, while the appearance of new peaks in these samples indicated breakdown products or presumed metabolites. As seen in Fig. 9, the chromatograms revealed a significant reduction in the intensity of the PHC peaks after SM treatment by fungal bioaugmentation (Fig.
9b-d) compared with the control (Fig. 9a). GC-MS analysis performed after biodegradation showed that the biodegradation patterns of the petroleum hydrocarbon fractions in the SMs treated with a single strain and the SM treated with mixed species were markedly different over time compared to the control microcosm. The GC-MS profiles demonstrated the efficiency of the newly isolated fungal strains in remediating petroleum-contaminated soil in the microcosm system. Further quantitative and qualitative identification of the main compounds in the extracted PHCs was conducted for the different SM treatment systems (Table 5). The control system (SMU, non-sterile soil), which represented the biotic effect of the native microbial communities in the contaminated soil, was incubated under the same experimental conditions as the treated SMs.

Conclusion

The present study revealed that the ligninolytic fungal isolates and the indigenous fungi collected from petroleum-contaminated soil samples hold promise for effective PAH bioremediation. Further work on biostimulation, the properties of the biosurfactants and enzymes involved, and the mechanism of degradation is necessary to enhance the biodegradation efficiency.

Figure captions: Phylogenetic tree for the hydrocarbon-degrading isolated fungi (exogenous and indigenous isolates) and related sequences based on the BLAST alignment of ITS sequences; the ClustalW program was used to generate the phylogenetic trees using the NJ method with bootstrap replicates. Degradation efficiency (%) in soil microcosms for each treatment against the control microcosm.

Supplementary Files
2021-12-03T16:06:07.162Z
2021-11-30T00:00:00.000
{ "year": 2021, "sha1": "1fe60569701774a309c11282f4bff947badbe358", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-1086969/latest.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "de832bd225cc8a8db3a96ec0edb847999cecbe89", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
55773095
pes2o/s2orc
v3-fos-license
CASTOR BEANS QUALITY SUBJECTED TO DIFFERENT STORAGE TEMPERATURES AND PERIODS

Controlling the temperature of the storage air can improve the preservation of some types of agricultural products. However, the most effective storage temperature, as well as the storage duration, varies between products. Therefore, special attention should be paid to the storage temperature and its effects on the integrity and longevity of the produce. The objective of the present study was to evaluate the effects of storage temperature and storage period on the quality of castor beans. Castor seeds with a water content of approximately 6.1% (w.b.) were stored for 180 days at temperatures of 15, 25, and 35 °C. The quality of the seeds was evaluated every 45 days throughout the study period by measuring dry matter loss, electrical conductivity, color, and the free fatty acid and peroxide content of the crude oil extracted. Our results indicated that: a) higher storage temperatures negatively affect the quality of the seeds and the extracted oil; b) the negative effect of temperature increases with longer storage periods; c) a storage temperature of 15 °C least affects the quality of the castor beans and the extracted oil.

INTRODUCTION

Castor bean (Ricinus communis L.) has become a highly versatile and promising crop in recent decades in terms of its highly profitable agroenergy supply chain and its capacity to generate employment opportunities directly and indirectly (Poletine et al., 2012; Goneli et al., 2016). These benefits are due to some unique characteristics of castor bean, including its adaptability to a wide range of environmental conditions across temperate and semi-arid regions and its high oil content (approximately 45%), which is rich in ricinoleic acid (90%). These beneficial characteristics prompted the development of the Brazilian National Biodiesel Program and a broad range of industrial applications, because the raw material can be used for the production of several products, including lubricants, paints, varnishes, foam products, plastic materials, food products, and pharmaceutical products (Campos & Santos, 2015; Mohamed & Mursy, 2015). Therefore, considering the range of applications of castor bean, research interest has intensified to identify the limitations of castor bean production and to increase the value of the harvested crop. In this context, research advancements have improved the production and maintenance of castor bean, which in turn determine the market value and commercial viability of this crop (Negedu et al., 2013; Silva et al., 2013; David et al., 2014; Santoso et al., 2015). With the storage of the product, in addition to its qualitative maintenance, a better time for commercialization can be achieved due to the capacity to store the material and wait for its most attractive price (Hartmann Filho et al., 2016). However, in Brazil, given the difficult interaction between the field and the commercial sector, the static storage capacity generally becomes insufficient to meet production, and a series of problems appears, compromising both market reach and commercialization value (Nascimento et al., 2016).
Storage characteristics of castor bean are of great importance because they directly affect the quality and market value of the beans. With respect to storage practices, storage temperature and storage period are the most important variables: storage temperature can affect product quality during the storage period, while the storage period can accentuate product deterioration (Zonta et al., 2014; Polat, 2015; Dias et al., 2016). In this context, product deterioration involves dry matter losses, changes in color, and poor quality of by-products such as crude oil, due mainly to lipid deterioration, which can elevate undesirable characteristics of the oil, such as its acidity and peroxide content (Del Campo et al., 2014; Paraginski et al., 2015; Hartmann Filho et al., 2016). Therefore, the present study aimed to evaluate the effects of storage temperature and storage period on the quality of castor bean.

MATERIAL AND METHODS

The study included crop processing, storage, and evaluation of the quality of the castor beans and their crude oil, and was conducted at the Laboratory of Physical Properties and Quality of Agricultural Produce at the Viçosa Federal University in the city of Viçosa, Minas Gerais, Brazil. The Guarani castor bean variety was used in this study. The fruits were harvested manually from the middle part of the first plant bundle with approximately 11% (w.b.) water content (the average water content at which the product is collected in the cultivation region, Várzea da Palma, MG). Following this, the fruits were mechanically broken, and the seeds were separated from the shell. For the processing, a machine for corn on the cob was adapted to the castor bean crop. Because the processing was carried out in a machine adapted to the crop under study, a manual selection of grains without any type of defect was performed so as not to generate results inconsistent with reality. After processing, the water content of the beans was approximately 6.1% (w.b.). The beans were transferred to jute sacks and subjected to storage temperatures of 15, 25, and 35 °C (to cover a wide product temperature range) for 180 days. The analyses of the variables were performed at the start of the experiment and subsequently at intervals of 45 days. The products were stored in B.O.D. incubators at the three temperatures under the same storage conditions. The mean values of the relative humidity of the ambient air were recorded during the storage period using two thermo-hygrometers, six times a day. The temperature of the grain mass remained constant, or in thermal equilibrium with the storage temperatures, throughout the experiment. Castor bean quality was evaluated by analyzing dry matter loss, electrical conductivity, and color. The quality of the crude oil was assessed by determining the free fatty acid and peroxide content. The water content of each treatment was determined via gravimetric analysis in an oven at 105 ± 3 °C, for 24 h, with two repetitions (Brasil, 2009).
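The gravimetric determination of water content mentioned above follows directly from the mass loss on oven drying. A minimal sketch, with hypothetical sample masses chosen only to illustrate a value near the reported 6.1% (w.b.):

```python
def water_content_wb_percent(mass_before_g, mass_after_g):
    # Wet-basis water content from oven drying (105 +/- 3 degrees C for 24 h)
    return 100.0 * (mass_before_g - mass_after_g) / mass_before_g

# Hypothetical sample: 25.00 g before drying, 23.47 g after drying
print(f"{water_content_wb_percent(25.00, 23.47):.1f} % (w.b.)")
```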
The dry matter of the castor beans during storage was evaluated by performing five test repetitions for each treatment. Each repetition was composed of approximately 100 g of product, which was placed in perforated, waterproof packages that were sealed and transferred to the jute sacks containing the remaining beans. The mass of each repetition and the water content of the material were measured at the beginning and then at 45-day intervals. The dry matter loss was calculated using [eq. (1)]. For the evaluation of electrical conductivity, four subsamples of 50 seeds were used in each treatment. Seeds were weighed and then hydrated using 75 mL of distilled deionized water for 24 h at 25 °C in 200 mL plastic cups. After this period, the electrical conductivity of the solution was determined using a Digimed conductivity meter, model DM3. Each reading (measured in S cm -1 ) was divided by its respective mass, and the final result of the test was expressed in S cm -1 g -1 (Vieira & Krzyzanowski, 1999). The color was assessed by directly reading the reflectance of the beans using a tristimulus colorimeter (illuminant 10°/D65) and the Hunter color scale. Values were obtained for the coordinates "L" (luminosity), "a" (green and red hues), and "b" (blue and yellow hues). For each measurement, an average of three readings was obtained. Using the values of the perceived variations in the coordinates "L", "a", and "b" (Equations 2, 3, and 4, respectively), we calculated the total color difference (Equation 5) and the chroma index, which defines color intensity and purity (Equation 6), where E: color difference; Cr: chroma index; t: storage period, days; and t0: beginning of the storage period. The crude oil was extracted according to the norms of the AOCS (1993), method Ac 3-44. A Soxhlet extractor was used with hexane solvent. Extraction was performed for six hours. The free fatty acid content was measured in accordance with the AOCS (2012), method Ca 5a-40, by dissolving three 5 g oil samples in ethyl alcohol, heating the solution to 60-65 °C, and titrating with 0.1 N sodium hydroxide. The free fatty acid content was calculated using [eq. (7)]. The predominant fatty acid in castor beans is ricinoleic acid, with a molar mass of 298 g mol -1 . The peroxide content was determined in accordance with the AOCS (2011), method Cd 8b-90, by dissolving three 5 g oil samples in 50 mL of a solution of acetic acid-isooctane (3:2, v/v), with the addition of 0.5 mL of a saturated solution of potassium iodide, followed by titration with 0.01 N sodium thiosulfate (Na2S2O3) solution. The volume used in the titration after the addition of 0.5 mL of the starch indicator solution indicated the peroxide concentration in meq of peroxide kg -1 , using [eq. (8)], where PI: peroxide index in meq kg -1 of the lipid fraction; A: volume of the sodium thiosulfate (Na2S2O3) solution used during titration of the sample, in mL; B: volume of the sodium thiosulfate (Na2S2O3) solution used during titration of the reagents without the sample, in mL; N: normality of the sodium thiosulfate (Na2S2O3) solution; and m: sample mass, in g.
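Since the bodies of equations (1)-(8) are not reproduced here, the sketch below shows how these quality indices might be evaluated, assuming the standard dry-matter-loss and Hunter colour-difference forms for eqs. (1)-(6) and the usual AOCS expressions consistent with the variable definitions given for eqs. (7) and (8); all numeric inputs are hypothetical:

```python
import math

def dry_matter_loss_percent(m_i, u_i, m_t, u_t):
    # Assumed form of eq. (1): water contents are decimal dry-basis (d.b.),
    # so the dry matter in a sample of mass m is m / (1 + U)
    dm_i = m_i / (1.0 + u_i)
    dm_t = m_t / (1.0 + u_t)
    return 100.0 * (dm_i - dm_t) / dm_i

def total_color_difference(L0, a0, b0, L, a, b):
    # Assumed form of eqs. (2)-(5): Euclidean distance between Hunter
    # coordinates at time t and at the start of storage (t0)
    return math.sqrt((L - L0) ** 2 + (a - a0) ** 2 + (b - b0) ** 2)

def chroma_index(a, b):
    # Assumed form of eq. (6): colour intensity/purity
    return math.sqrt(a ** 2 + b ** 2)

def free_fatty_acid_percent(v_naoh_ml, n_naoh, molar_mass, sample_mass_g):
    # Eq. (7) as assumed from its variable definitions: % FFA expressed as the
    # predominant acid (ricinoleic acid, 298 g/mol)
    return v_naoh_ml * n_naoh * molar_mass / (10.0 * sample_mass_g)

def peroxide_index(a_ml, b_ml, n_thio, sample_mass_g):
    # Eq. (8) as assumed from its variable definitions: meq of peroxide per kg
    # of the lipid fraction
    return (a_ml - b_ml) * n_thio * 1000.0 / sample_mass_g

# Hypothetical readings for one sample of beans and its extracted oil
print(round(dry_matter_loss_percent(100.0, 0.065, 99.5, 0.063), 2), "% DM loss")
print(round(total_color_difference(48.0, 6.0, 12.0, 45.5, 6.4, 11.2), 2), "colour difference")
print(round(chroma_index(6.4, 11.2), 2), "chroma")
print(round(free_fatty_acid_percent(0.45, 0.1, 298.0, 5.0), 2), "% FFA")
print(round(peroxide_index(1.6, 0.5, 0.01, 5.0), 2), "meq peroxide/kg")
```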
The experiment was conducted in a split-plot scheme (3 x 5), with the three storage temperatures in the plots and the five distinct storage periods in the subplots, in a completely randomized design with four replicates. The effect over time was evaluated by subjecting the data to polynomial regression analysis. The models were selected on the basis of the size of the coefficient of determination (R 2 ), the significance of the regression (using the F test), and the biological phenomenon under study (a minimal illustration of such a fit is sketched further below).

RESULTS AND DISCUSSION

The water content of the castor beans varied throughout the storage period. This may be a result of the hygroscopic characteristics of the beans and of the relative humidity variation during storage; consequently, phenomena such as sorption and desorption likely occurred, as described by Tiecker Junior et al. (2014) and Bessa et al. (2015) (Table 1). Moreover, throughout the 180-day storage period, the increase in the storage temperature decreased the equilibrium water content of the beans; this occurred because the relative humidity decreased as the storage temperature increased, resulting in a lower equilibrium water content (Table 1). The average relative humidity at the storage temperatures of 15, 25, and 35 °C was 75%, 62%, and 40%, respectively. It is worth mentioning that, although all treatments presented the same initial water content (at t0), the treatments at the lowest and highest temperatures (15 and 35 °C) presented a greater variation between the initial and final values. This indicates that, because the temperature directly affected the ambient relative humidity, the water content of the beans became susceptible to alterations by either sorption (in the case of beans stored at 15 °C) or desorption (in the case of beans stored at 35 °C) (Table 1). The evaluation of the dry matter loss during storage revealed that, in general, as the storage temperature increased, the dry matter loss increased. This effect was accentuated as the storage period increased (Figure 1). After 45 days of storage (the first evaluation phase), the dry matter loss observed for storage temperatures of 15, 25, and 35 °C was 0.17%, 0.22%, and 0.29%, respectively (Figure 1). Because the storage time potentiated the deleterious effects, as evidenced by the multiplicative parameters of the adjusted equations and by the linear manner in which the losses progressed as the storage advanced, losses of 0.7%, 0.9%, and 1.2%, respectively, were observed at 180 days of storage. Therefore, independent of the storage temperature used, the dry matter loss exceeded 0.5%. However, this loss occurred at different periods at the three temperatures evaluated, corresponding to 135, 100, and 77 days of storage at 15, 25, and 35 °C, respectively (Figure 1). Del Campo et al. (2014) and Paraginski et al. (2015) found that storage temperature directly affected dry matter loss, due to the capacity of this factor to influence the respiratory processes, increasing or not the metabolic rate of the product and therefore the consumption of its reserves. Therefore, while the process of deterioration is inevitable, it can be attenuated by using lower storage temperatures to extend product storage (Coradi et al., 2015a).
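The adjusted equations referred to in these results were obtained by polynomial regression, with the model chosen on the basis of R 2 and the significance of the regression, as described in the statistical analysis above. A minimal illustration of such a fit (the data points below are hypothetical, not the study's measurements):

```python
import numpy as np

# Hypothetical storage periods (days) and response values, for illustration only
t = np.array([0.0, 45.0, 90.0, 135.0, 180.0])
y = np.array([0.00, 0.29, 0.55, 0.88, 1.20])   # e.g., dry matter loss, %

for degree in (1, 2):
    coeffs = np.polyfit(t, y, degree)          # least-squares polynomial fit
    y_hat = np.polyval(coeffs, t)
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
    print(f"degree {degree}: coefficients {np.round(coeffs, 5)}, R^2 = {r2:.4f}")
```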
The analysis of electrical conductivity indicated that the increase in the storage temperature, as well as the storage time, was a determining factor in the decrease of bean quality. This change in electrical conductivity was generally observed from 45 days onward, with the measured values increasing with the elevation of temperature and progressing linearly over time (Figure 2). Furthermore, the multiplicative coefficients of the adjusted equations increased with the storage temperature, which confirmed the negative effect of this factor over time. Our study design also provided information on the integrity of the seeds, in particular on their cellular membranes. An increase in storage temperature and storage period accentuated the deterioration processes, causing physical and structural damage in the beans. This damage might have contributed to the lixiviation of solubles, which are essential for the preservation of the quality of the produce and, consequently, its storability. With the deterioration of cellular integrity, lixiviation of solubles occurs because the cellular membranes no longer function efficiently as a selective barrier at the start of absorption, which thus resulted in higher electrical conductivity values in the solution (Dias et al., 2016; Hartmann Filho et al., 2016). Deterioration often occurs when the product is stored under extreme conditions, such as storage for long periods at high temperatures, because membrane integrity is gradually deteriorated (Bezerra et al., 2015). Similar results were observed by Hartmann Filho et al. (2016) when evaluating the color of coffee beans after harvest. The authors observed the darkening of the produce, which was expressed as a decrease in the values of "L," "a," and "b," and this darkening correlated with higher storage temperatures and longer storage periods. It is likely that higher storage temperatures increased the metabolic rate of the produce, as observed in Figure 3 for dry matter loss, and directly affected the product's appearance. Thus, for this variable it was demonstrated that the increase in storage temperature affected bean quality, and the effect was accentuated by the increase in the storage period. This result corroborates the findings of Coradi et al. (2015b) for soybeans and of Elias et al. (2016) for kidney beans. In addition, the change in coloration (darkening) could be confirmed by determining the chroma index. This index reveals the extent of the reduction in the saturation of the typical colors of the material, which acquires a more grayish hue, as observed by McGuire (1992) (Figure 5). Silochi et al. (2016) reported that the chroma index can be used to characterize product quality, which directly impacts market acceptance and commercialization. The adjusted chroma models were:
Cr 15 °C = 13.1687 - 1.49 x 10 -4 ST 2 (R 2 = 0.9274)
Cr 25 °C = 13.0517 - 1.59 x 10 -4 ST 2 (R 2 = 0.9478)
Cr 35 °C = 12.9158 - 1.67 x 10 -4 ST 2 (R 2 = 0.9653)
FIGURE 5. Chroma index of castor beans as a function of the storage temperature and storage period.
The analysis of the extracted oil indicated that higher storage temperatures were associated with increased free fatty acid and peroxide contents, beginning at the first evaluation at 45 days of storage. The differences were larger between the storage temperatures of 15 and 35 °C and smaller between 15 and 25 °C (Figure 6). The free fatty acid content of the crude oil extracted from beans stored at 15, 25, and 35 °C was 0.27%, 0.29%, and 0.36% at 45 days of storage (Figure 6a), whereas the peroxide content under the same conditions was 2.21, 2.44, and 4.56 meq kg⁻¹ of the lipid fraction (Figure 6b). As the product deteriorated during storage, which was evident from the increase in the multiplicative coefficients of the adjusted equations, the harmful effect of the higher temperatures became more accentuated. This was corroborated by the results at 90 days of storage, when the free fatty acid content at 15, 25, and 35 °C was 0.33%, 0.39%, and 0.57%, respectively, and the peroxide content at these temperatures was 3.07, 3.68, and 6.35 meq kg⁻¹ of the lipid fraction (Figure 6). Bezerra et al. (2015) observed that the quality of castor bean crude oil was influenced by ambient conditions and by the storage period of the raw material. The degree of preservation of the raw material can determine the properties of its by-products. In addition, the quality of the crude oil can be used as a parameter for evaluating the degree of preservation of the raw material, because characteristics such as the free fatty acid content indicate the stage of deterioration of the raw material (Hartmann Filho et al., 2016).

Despite the depreciation of the castor bean crude oil with increasing storage temperature and period, we found that even after 180 days of storage the oil quality remained within the limits established by ANVISA (Brasil, 2005). Our results indicated that the acidity index of the crude oil from beans stored at 15, 25, and 35 °C for 180 days was 0.90, 1.17, and 1.96 mg KOH g⁻¹, respectively, and the peroxide content for the same treatments was 4.78, 6.15, and 9.92 meq kg⁻¹ of the lipid fraction (Figure 6). Commercialization of castor bean crude oil in Brazil is restricted only when the acidity index exceeds 4.0 mg KOH g⁻¹ or the peroxide content exceeds 15.0 meq kg⁻¹ of the lipid fraction, so none of the treatments compromised commercialization.

The acidity of castor bean oil can also be expressed as the acidity index, defined as the amount of KOH (in mg) necessary to neutralize the free fatty acids in 1 g of sample. The free fatty acid content can be converted to the acidity index by multiplying its value by 1.99. Crude castor bean oil can be subclassified using this index. According to Santos et al. (2007), this oil can be commercially classified as medicinal (no acidity), industrial no. 1 (maximum acidity of 1%), or industrial no. 3 (maximum acidity of 3%). Therefore, the castor bean oil can be classified as industrial oil no. 1 when the beans were stored at 15 °C for 180 days and as industrial no. 3 when stored at 25 or 35 °C for 180 days.

CONCLUSIONS

Increased storage temperature negatively affects the quality of castor beans and their by-products. This effect becomes more pronounced as the storage temperature increases from 15 to 35 °C and the storage duration increases from 45 to 180 days.

The highest commercial grade of oil was achieved only when the beans were stored at 15 °C for up to 180 days.
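To make the conversion and classification rules described above concrete, the sketch below converts a free fatty acid content to an acidity index using the factor 1.99 and assigns a commercial class. Note that it applies the class limits of Santos et al. (2007) to the acidity value directly, which is how the text appears to use them; the helper functions are illustrative and not part of the original work.

```python
def acidity_index(ffa_percent: float) -> float:
    """Convert free fatty acid content (%) to acidity index (mg KOH/g),
    using the factor 1.99 quoted in the text."""
    return 1.99 * ffa_percent


def commercial_class(acidity: float) -> str:
    """Classify crude castor oil following Santos et al. (2007).
    The class limits are applied to the acidity value directly, which is
    how the text appears to use them."""
    if acidity <= 0.0:
        return "medicinal"
    if acidity <= 1.0:
        return "industrial no. 1"
    if acidity <= 3.0:
        return "industrial no. 3"
    return "out of class"


# Acidity indexes reported at 180 days for storage at 15, 25 and 35 degrees C
for temp, idx in [(15, 0.90), (25, 1.17), (35, 1.96)]:
    print(f"{temp} C: acidity index {idx} mg KOH/g -> {commercial_class(idx)}")
```

Running this reproduces the classification stated above: industrial no. 1 for the 15 °C treatment and industrial no. 3 for the 25 and 35 °C treatments.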
Nomenclature for the dry matter loss and free fatty acid equations: initial mass of the bean samples, g; m(θ): mass of the samples at time θ, g; Ui*: initial water content of the bean samples, decimal d.b.; Uθ*: water content of the samples at time θ, decimal d.b.; free fatty acids, %; C: molar mass of the predominant fatty acid; Vg: volume of the standardized NaOH solution, mL; N: normality of the NaOH solution; m: sample mass, g.

FIGURE 1. Dry matter loss in castor beans as a function of storage temperature and storage period.
FIGURE 2. Electrical conductivity of the absorption solution of castor beans as a function of storage temperature and storage period.
FIGURE 3. Differences in bean color as a function of storage temperature and storage period.
FIGURE 6. Free fatty acid content (a) and peroxide content (b) of crude oil extracted from castor beans as a function of storage temperature and storage period.
TABLE 1. Average water content (% w.b.) of beans during storage at different temperatures.
Development of a dual-flow tissue perfusion device for modeling the gastrointestinal tract–brain axis Despite the large number of microfluidic devices that have been described over the past decade for the study of tissues and organs, few have become widely adopted. There are many reasons for this lack of adoption, primarily that devices are constructed for a single purpose or because they are highly complex and require relatively expensive investment in facilities and training. Here, we describe a microphysiological system (MPS) that is simple to use and provides fluid channels above and below cells, or tissue biopsies, maintained on a disposable, poly(methyl methacrylate), carrier held between polycarbonate outer plates. All other fittings are standard Luer sizes for ease of adoption. The carrier can be coated with cells on both sides to generate membrane barriers, and the devices can be established in series to allow medium to flow from one cell layer to another. Furthermore, the carrier containing cells can be easily removed after treatment on the device and the cells can be visualized or recovered for additional off-chip analysis. A 0.4 μm membrane with cell monolayers proved most effective in maintaining separate fluid flows, allowing apical and basal surfaces to be perfused independently. A panel of different cell lines (Caco-2, HT29-MTX-E12, SH-SY5Y, and HUVEC) were successfully maintained in the MPS for up to 7 days, either alone or on devices connected in series. The presence of tight junctions and mucin was expressed as expected by Caco-2 and HT-29-MTX-E12, with Concanavalin A showing uniform staining. Addition of Annexin V and PI showed viability of these cells to be >80% at 7 days. Bacterial extracellular vesicles (BEVs) produced by Bacteroides thetaiotaomicron and labeled with 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindocarbo-cyanine perchlorate (DiD) were used as a model component of the human colonic microbiota and were visualized translocating from an apical surface containing Caco-2 cells to differentiated SH-SY5Y neuronal cells cultured on the basal surface of connected devices. The newly described MPS can be easily adapted, by changing the carrier to maintain spheroids, pieces, or slices of biopsy tissue and joined in series to study a variety of cell and tissue processes. The cell layers can be made more complex through the addition of multiple cell types and/or different patterning of extracellular matrix and the ability to culture cells adjacent to one another to allow study of cell:cell transfer, e.g., passive or active drug transfer, virus or bacterial entry or BEV uptake and transfer. I. INTRODUCTION Microfluidics provides an alternative to the traditional static cell line model due to the presence of flow, providing an important physiologically relevant component to the model.A number of devices incorporating multiple cell types and various support scaffolds have been described (Nolan et al., 2023); however, these tend to be highly specialized for a specific tissue and/or function.Here, we describe a generic platform that can be modified to study monolayers and tissues, alone or joined in series.One of the most widely used microfluidic devices is the Colon Intestine-Chip™ produced by Emulate Inc. in which maintained epithelial cells display well-defined tight junctions and low permeability, as originally described by Kim et al. 
(2012), and then adapted to include co-culture with bacteria (Kim and Ingber, 2013).Although applicable to many established cell lines, studies with this chip have principally used Caco-2 which was originally derived from a colon adenocarcinoma and consistently forms cell monolayers on a semipermeable membrane.Kim and colleagues also demonstrated that these chips reproduce the in vivo 3D structural and biomechanical features such as peristalsis (Kim et al., 2012).More recently developed devices contain cell lines from different organs, e.g., colon (Caco-2) and liver (HepG2), separated by a porous membrane that allows metabolite transfer, with the aim of assessing drug metabolism (Choe et al., 2017).Others have hypothesized that the in vivo gastrointestinal tract (GIT) environment is better reflected by co-culturing Caco-2 and HT-29 cell lines, with the latter producing mucin which is an important component of mucosa (Santbergen et al., 2020).Alternatively, groups have focused on the importance of the extracellular matrix (ECM), to replicate the in vivo environment, with flow rates, gas concentration, and ECM components combining to affect cell behavior in 2D and 3D cultures in microfluidic devices; reviewed by Goy et al. (2019).The reality is that a true model of the complete GIT does not exist and there is, therefore, a need to ensure that any devices have their uses and limitations explicitly stated.Replicating brain pathophysiology on a microfluidic device has the added complication that it is essential to incorporate the blood brain barrier (BBB) as this is the interface between the peripheral blood system and the central nervous system.The BBB must be efficiently circumvented if drugs are to reach the neural tissue and treat diseases as diverse as Alzheimer's, Parkinson's or malignancies.As for the GIT chips, many different approaches have been followed to create devices, recently reviewed by Kawakita et al. (2022).A focus of many BBB devices has been the integration of a transepithelial electrical resistance (TEER) system to assess the integrity of the endothelial barrier of human brain microvascular endothelial cells (hBMECs), co-cultured with various other cells including astrocytes and pericytes.Palma-Florez et al. (2023) have recently developed a model, using human cell lines that measured the efficiency of peptidetargeting, polyethylene glycol functionalized gold nanoparticles.Alternatively, Vatine et al. (2019) used induced pluripotent stem cell (iPSC)-derived brain microvascular endothelial-like cells, astrocytes, and neurons to create a human BBB unit.Physiologically relevant TEER values were generated and the device showed protection of neural cells from plasma-induced toxicity.Despite the many benefits of the new devices both the GIT and BBB chips have limitations including complexity of manufacture, steep learning curves, limited device usability and flexibility and cost, i.e., can the device, or part of the platform, be re-used or are they entirely disposable? It is clear that the GIT is central in health and many diseases, with its function and impact extending beyond a role in nutrient absorption.It is the largest immune organ in the body and accommodates 10 13 -10 14 microorganisms that collectively make up the GIT microbiome which is integral to maintaining health (Seton and Carding, 2019).The gut-on-chip technology and interactions between tissues has been comprehensively reviewed by Ashammakhi et al. (2020) and Guo et al. 
(2023), highlighting the crosstalk between the gut-axis and other organs, in particular, the gut-liver and gut-brain interactions.In addition, the gut has been included in several body-on-a-chip approaches, in which multiple "organs" are established and interconnected, e.g., Maoz et al. (2018).Recently, there have been attempts to alter the GIT microbiome to improve gut health and/or ameliorate illness using a variety of microfluidic devices.For example, investigating the effects of oxygen concentration (Dickson, 2019), different diets (Garcia-Gutierrez and Cotter, 2021), and direct introduction of new microbial populations (Jalili-Firoozinezhad et al., 2019).Although these models have demonstrated proof of concept, and the ability to modulate and measure effect, they have not yet been widely adopted, nor have they facilitated a reduction in the reliance on animal models.Obstacles to adoption include the complex nature of some of the devices, restricted access to cells or tissues, and/or assay problems including lack of robustness or sensitivity of detection (Sathish and Shen, 2021). The current study describes a flexible and robust tissue perfusion platform that can be used to address many of the shortcomings of current devices.It comprises a monolayer of cultured adherent epithelial cells, juxtaposed to an endothelial cell line, mimicking the blood-tissue barrier.The purpose of using Bacteroides thetaiotaomicron (Bt) produced bacterial extracellular vesicles (BEVs) in this system is to determine if the microfluidics based organ-on-a-chip system we have developed can reproduce the biodistribution of BEVs in vivo that we have previously reported on (Jones et al., 2020).BEVs are nano-size vesicles naturally produced by Gram-negative bacteria that mediate many microbe-microbe and microbe-host interactions (Juodeikis and Carding, 2022).We have previously shown that BEVs produced by Bt mediate interactions with host cells of the GIT and are trafficked via an intracellular (uptake) or paracellular (transmigration) route to cross the intestinal epithelium in vivo and reach distant tissues such as liver and brain (Jones et al., 2020 andModasia et al., 2023).Here, Bt BEVs were used to visualize translocation between the semi-permeable organ compartments of the connected, dual-flow, perfusion devices. A. Design and fabrication of a dual-flow perfusion device The device consists of two poly(methyl methacrylate), PMMA, (Kingston Plastics, Hull, UK) outer plates and a PMMA insert with a semi-permeable membrane designed to allow for the culture of cells.The two PMMA plates, both 60 × 70 mm 2 , were milled horizontally to produce 0.5 mm holes to which inlet and outlet silicone tubing could be attached via Luer elbow connectors.A central recess (24 × 10 mm 2 ) was milled to house a removable insert, allowing direct flow across the chamber, and four holes with hexagonal recesses were made for the insertion of M6A2 stainless steel bolts to secure the unit [Fig.1(a)].A removable PMMA insert (24 × 10 mm 2 ) containing a polyethylene terephthalate (PET) membrane, in a similar style to transwell inserts, was used for the culture of cells to enable ease of seeding [ESI, Fig. 
1(a)].The inserts were fabricated using a laser cutter (60W LS6840 laser, HPC Laser, Halifax, UK) to cut and engrave 1 mm thick PMMA sheets and to cut PET membranes (22 × 8 mm 2 , 8 μm thickness, either 0.4 or 8 μm pore size, Sabeau, Germany) to fit the carriers.Solvent bonding was used to adhere the PET membrane to the etched region of the PMMA insert and left to dry for 24 h.Prepared inserts were sterilized in 70% EtOH for at least 1 h before use.Although both 0.4 and 8 μm pore size were investigated, the work presented below uses 0.4 μm exclusively. For single cell line static or on-chip cultures, Caco2, HT29-MTX-E12, and HUVEC cells were seeded, 1 × 10 5 cells in 100 μl, onto the apical side of the semi-permeable PET membrane contained in the PMMA insert, and allowed to adhere for 72 h in a six-well plate.SH-SY5Y neuronal cells were differentiated according to a published method based on the sequential removal of serum from the medium (Shipley et al., 2016).Cells were seeded on MaxGel Extracellular Matrix (Sigma) coated PMMA inserts at day 10 of the differentiation protocol and cultured for a further 8 days in differentiation media.The PMMA support was designed in house and printed using a Creator 2 3D printer using PLA filament (Farnell Element 14, Leeds, UK).The inserts with cells were subsequently placed into the perfusion device and secured with bolts.For the connected gut-brain model, Caco2, HT29-MTX-E12, or differentiated SH-SY5Y cells were cultured on the apical side of the semi-permeable membrane as above and HUVEC seeded onto the basal side of the membrane in the PLA support 72 h prior to placing into the connected perfusion devices.The inlets of the devices were connected to 20 ml sterile syringes with Tygon tubing.Continuous perfusion was carried out using a Harvard PhD 2000 syringe pump (Harvard Apparatus) at a flow of 2.94 μl min −1 (Baldwin, 2020).The devices were maintained in a Covatutto 24 Eco incubator at 37 °C.Effluent was collected in 1.5 ml polypropylene tubes (Sarstedt) and stored at 4 °C until analysis. C. Assessment of membrane permeability Membrane integrity was assessed using phenol red and phenol red free medium, flowed in the apical and basal channels, respectively, at 2.94 μl min −1 .Effluent was collected over the course of the experiment and analyzed for the presence of phenol red using a plate reader (Bio-TEK, Synergy HT) to measure absorbance at 558 nm. Barrier permeability was assessed using Fluorescein isothiocyanate (FITC) dextran (Dawson et al., 2016).In either a static or flow model, FITC dextran (0.5 mg ml −1 , 10 kDa Dextran, Sigma) was added to, or flowed over the apical side of the membrane (2.94μl min −1 ).Samples were taken from the basal side of both models at set time points and the concentration of FITC-dextran determined using a fluorescent plate reader (480 nm excitation, 520 nm emission).A blank membrane was tested as a reference and to distinguish the effect of the cell monolayer compared with simple membrane permeability. D. 
Immunofluorescent staining of cells Cells were stained for the presence of the tight junction protein ZO-1 (O' Rourke et al., 2016).Cells were fixed with 4% (w/v) formaldehyde then quenched and permeabilized using NH 4 Cl/Triton-X100 solution (50 mM/0.2%v/v).Cells were washed twice with phosphate buffered saline pH 7.4 (PBS), blocked with 1% (w/v) bovine serum albumin (BSA) in PBS for 30 min, and incubated with primary antibody (ZO-1, Rabbit mAb, Biolabs, UK), diluted 1:1000 in a blocking buffer at room temperature for 1h.After incubation, cells were washed three times with the wash buffer (PBS with 0.05% Triton-X100) for 5 min each time on a rocking plate, and incubated at room temperature for 1h with the secondary antibody conjugated with AlexaFluor 488 (anti-rabbit IgG, Biolabs) diluted 1:500.Concanavalin A (ConA) conjugated with rhodamine (1:500 Vector labs, UK) was used for visualization of membranes, Alexa-488 conjugated Phalloidin (1:1000) was used for visualization of actin filaments (Abcam) and nuclei were visualized using Hoechst 33342 (ThermoFisher).Counterstains were incubated on the cells for 30 min before washing in PBS and the addition of the mounting medium (Vectorshield, Vector Labs, UK).Slides were imaged using a Zeiss LSM710 or LSM880 confocal microscope equipped with 63×/1.4 oil DIC objective and Zen black software (Zeiss). E. Periodic Acid-Schiff (PAS) staining of cells Cells on membrane inserts were fixed in 4% (w/v) formaldehyde before transfer to 70% EtOH.Staining was carried out according to the manufacturer's instructions [Periodic Acid-Schiff (PAS) staining system, Sigma].Cells on membranes were first immersed in the periodic acid Solution for 5 min before rinsing in three changes of distilled water.The membranes were then placed in Schiff's reagent for 15 min before rinsing in running tap water for 5 min.The membranes were counterstained in Harris Haematoxylin solution (Sigma) for 60 s.Counterstained membranes were rinsed in running tap water then dried by blotting on tissue and mounted using Hydromount™ mounting solution (National Diagnostics).Slides were dried overnight before imaging with an Olympus IX71 inverted fluorescence microscope equipped with a 40× objective and cellSens software (Olympus). F. Bt BEV isolation and labelling BEVs were isolated and characterized as previously described (Fonseca et al., 2022).Briefly, Bt (strain VPI 5482) was grown under anaerobic conditions at 37 °C in bacteroides defined medium (BDM), centrifuged at 6000 × g for 50 min at 4 °C and the supernatants filtered through polyethersulfone (PES) membranes (0.22 μm pore-size, Sartorius) to remove debris and cells.Supernatants were concentrated by cross-flow ultrafiltration (100 kDa molecular weight cut-off, Vivaspin 50R, Sartorius), the retentate was rinsed once with 500 ml of PBS and concentrated to 1 ml.Further purification of BEVs was performed by the fractionation of the suspension by size-exclusion chromatography using qEV original 35 nm columns (Izon) according to the manufacturer's instructions.Fractions containing BEVs were finally combined and filtersterilized through a 0.22 μm PES membrane (Sartorius); suspensions were stored at 4 °C.Absence of viable microorganisms was confirmed by plate count and absence of lipopolysaccharide was confirmed by the Limulus Amebocyte Lysate (LAL) test (Sigma). 
The size and concentration of the isolated Bt BEV suspension was determined by nanoparticle tracking analysis using the ZetaView PMX-220 TWIN instrument according to manufacturer's instructions (Particle Metrix).Aliquots of BEV suspensions were diluted 1000-20 000-fold in particle-free water for analysis.Size distribution video data were acquired using the following settings: temperature: 25 °C; frames: 60; duration: 2 s; cycles: 2; positions: 11; camera sensitivity: 80 and shutter value: 100.The ZetaView NTA software (version 8.05.12) was used with the following post-acquisition settings: minimum brightness: 20; max area: 2000; min area: 5 and trace length: 30. G. BEV translocation assay For the connected Caco-2 and differentiated SH-SY5Y (gutbrain) perfusion model, DiD-labeled Bt BEVs were added to the input medium (1 × 10 10 ml −1 ) and flowed over the apical side of the membrane.Following removal from the device, cells on inserts were labelled with Alexa-488 conjugated Phalloidin for visualization of actin filaments (Abcam) and nuclei were visualized using Hoechst 33342 (ThermoFisher).Counterstains were incubated on the cells for 30 min before washing in PBS.Following excision from the inserts, cells on membranes were mounted on slides using Fluoromount-G mounting medium (ThermoFisher). H. Statistical analysis Data are presented as means ± standard deviation (SD). A. Evaluation of flow in single and connected perfusion systems Flow was assessed in both a single device and two devices connected through the basal channel by perfusing with two media streams, one containing phenol red and one phenol red free, both at a flow rate of 2.94 μl min −1 for six days (chip conditions shown in ESI Table S1 in the supplementary material).Figure 2(a) shows that isolated flow can be maintained between the apical and basal channels of the device as shown by the difference in absorption between the phenol red and phenol red free channels, illustrated with the use of colored dyes [Figs.2(c) and 2(d)] to visualize the flow.In Fig. 2(a), the phenol-red containing medium passed through the apical channel, in Fig. 2(b), the phenol-red containing medium went through the apical channel of chip 1 which is subsequently connected to the apical channel of chip 2. The experiment with two connected devices [Fig.2(b)] showed that the flow could be maintained in individual channels when the devices were connected in series.Using the devices in series allowed modeling of multiple tissue barriers in a connected system.To further assess the device flow, sheer stress was calculated using the Navier-Stokes equation (Seymour et al., 2020).At a flow rate of 2.94 μl min −1 , the cells were subjected to 4.0 × 10 −3 dyn cm −2 shear stress. Permeability was further investigated using FITC-labeled fluorescent dextran of 10 kDa (ESI Fig. S3 in the supplementary material) to assess both the flow and permeability of the membranes.While limited permeation of FITC-dextran was seen diffusing through the membrane, it was found that the pore size of the membrane (0.4 vs 8 μm) was a more important factor in inhibiting transport across the membrane than cell confluency.However, 0.4 μm was subsequently chosen over the 8 μm membrane as it allowed for the support of cells without transfer of labeled dextran to the other side. B. 
Evaluation of colonic epithelial cell lines in the MPS The single device was initially used to assess its ability to maintain apical cultures of two commonly used colonic cell lines.Caco-2 cell viability was maintained on-chip for up to 7 days under flow conditions [Figs. 3(a) and 3(c)] using an FDA/PI assay.The HT29-MTX-E12 cell line also retained viability for at least 7 days on the chips [Figs. 3(b) and 3(d)].Cells reached 100% confluency by day 3 and remained confluent throughout the 7 day experiment.LDH assays carried out on effluent showed low levels of LDH were released which further confirmed cell viability (data not shown). The integrity of Caco-2 and HT-29-MTX-E12 cells grown under either static or flow conditions was examined by immunostaining with ConA to stain membrane glycoproteins highlighting cell walls, and an antibody specific for the cell membrane tight junction (TJ) protein ZO-1.By 72 h, Caco-2 cells formed a confluent monolayer under both conditions [Figs. 4(a), 4(b), 4(d), and 4(e)] with >90% of cells displaying ZO-1 staining consistent with a confluent monolayer (Anderson et al., 1989).HT29-MTX-E12 cells had a similar appearance under flow conditions, generating a confluent monolayer of cells with uniform ConA and ZO-1 staining by 72 h on-chip [Figs. 4(c) and 4 C. Evaluation of cell lines in the connected gut-brain MPS Maintenance of Caco-2, differentiated SH-SY5Y, and HUVEC cell viability was first confirmed when cultured on the apical side of the semi-permeable membrane in PMMA inserts under static conditions ].The cell lines were then cultured in connected gut-brain devices (ESI, Fig. 2).Caco2 or differentiated SH-SY5Y were cultured on the apical side of the semi-permeable membrane in the PMMA inserts and HUVEC endothelial cells were cultured on the corresponding basal side of the membrane in D. Connected devices as a gut-brain model: Trafficking and translocation of BEVs Bt BEVs were used as a model component of the luminal gut microbiota to visualize translocation between the semi-permeable organ compartments of the connected gut-brain devices.At 24 h post inoculation of the apical medium, an intracellular punctate pattern of BEV uptake was observed in both the Caco-2 cells (gut-chip) and differentiated SH-SY5Y neuronal cells (brain-chip) cultured in the connected devices [Figs. 6(c) and 6(d)]. IV. DISCUSSION We describe here a new microphysiological system (MPS) platform consisting of connected perfusion devices that have been designed and developed to maintain a variety of cell lines for at least 7 days under flow conditions.Additionally, the devices were used to assess the acquisition and uptake of BEVs with established colorectal and differentiated neuronal cell monolayers which make up the majority of neuronal cells in the adult brain.Furthermore, the transport of BEVs across the gut-endothelial cell monolayers and uptake by brain neuronal cells has shown the suitability of the connected devices for modeling the translocation of substances between organ compartments. 
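The wall shear stress quoted earlier (about 4.0 × 10⁻³ dyn cm⁻² at 2.94 μl min⁻¹), and discussed further below in relation to in vivo values, can be estimated with a simple parallel-plate approximation. In the sketch below the channel width and height, and the medium viscosity, are assumptions (they are not specified in this excerpt), so the result is only expected to land near the reported order of magnitude.

```python
# Parallel-plate estimate of the wall shear stress on the cell monolayer,
# tau = 6 * mu * Q / (w * h**2), evaluated in CGS units so the result is in
# dyn cm^-2. Channel width and height are assumptions (not given in this
# excerpt), and the medium viscosity is taken to be about 0.78 mPa.s.
mu_poise = 0.78e-3 * 10        # 0.78 mPa.s expressed in poise (dyn.s.cm^-2)
q_cm3_s = 2.94e-3 / 60.0       # flow rate: 2.94 ul/min converted to cm^3/s
width_cm = 0.8                 # assumed channel width (~ membrane width, 8 mm)
height_cm = 0.03               # assumed channel height of 300 um

tau = 6.0 * mu_poise * q_cm3_s / (width_cm * height_cm ** 2)
print(f"estimated wall shear stress: {tau:.1e} dyn/cm^2")  # ~3e-3, same order as reported
```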
Several microfluidic devices have been described previously that utilize flow in multiple channels connected by semi-permeable 2021) being used to create co-cultures of gut and liver cells as a model of the gut-liver axis.However, multi-component devices such as these often require complex, multi-staged, assembly and are limited to specific cell lines as they need to have similar doubling times and media compatibility.The use of disposable PMMA inserts in the devices described in the current work means that it is simple to seed multiple cell lines in static conditions before the introduction of flow, and include more complex models such as differentiated SH-SY5Y neuronal cells described here, then assemble connected devices when required.The integrity of the cell monolayers was demonstrated with cell viability being maintained at >85% at 7 days, when the experiments were stopped.Mucin secretion was detected in both static and flowing conditions, and although not quantified here previously this has been shown to be affected by shear stress (Navabi et al., 2013).Current work is ongoing to determine how the cell monolayers cultured on the insert membranes behave over longer timepoints, and the applicability to a larger array of cell types and tissue or organ models. In vivo shear stresses vary between 0.002 and 0.8 dyn cm −2 (Langerak et al., 2020].It is well known that changes in shear stress affect gene expression and cell function.A study by Delon et al. (2019) showed that increasing shear stress from 0.002 to 0.03 dyn cm −2 improved villi formation, increased F-actin production and tight junction formation in a cell monolayer.The calculated shear stress within the devices described here was 0.004 dyn cm −2 , at the flow rate of 2.94 μl min −1 which is relatively low in comparison with the PDMS device described by Kim et al. (2012), that reported a sheer stress of 0.02 dyn cm −2 , although both fall within the physiological range.Bein et al. (2018) demonstrated the mechanical effects of different flows on the physiological response of a gut model in vitro.Shear stress can also be modified by the use of hydrogels to provide a type of ECM that supports the cell line and has been used to provide a framework for more complex 3D structures, as in the device described by Kim et al. (2021), which we did not investigate here, although it could be explored later to further improve the physiological relevance of the MPS device. An important facet of most microfluidic devices is the ability to sample effluent repeatedly for analyte detection.The new devices described here allow this to be done as required, offering flexibility, as well as the easy removal of the cell monolayer for staining or subsequent analysis.A noted limitation of organ-on-a-chip devices is the low levels of biomarkers obtained from the effluent which minimizes the analysis able to be carried out.This has also been reported by groups such as Jeon et al. (2020) who have developed ways of increasing sensitivity through recycling of the culture.This has been done using reservoirs to contain small amounts of medium that are perfused repeatedly through the microfluidic device using a rocking mechanism.We have not found sensitivity to be an issue with cells and tissue maintained in other devices we have developed; preliminary investigations confirmed that IL-6 was reliably detected (data not shown).Similar devices used by Riley et al. 
(2021) also successfully identified a panel of factors in effluent using a Proteome Profiler (Biotechne, Abingdon, Oxford).If biomarker release was determined to be too low it is possible to concentrate the effluent, however there are limitations to this as additives to cell medium such as serum can interfere with any subsequent assays once concentrated. A further additional advantage of the cell-line based MPS developed here is the ease and flexibility of setup.Specifically, inserts can carry different cell layers and cell culture can be developed off-chip and then introduced to on-chip flow conditions, overcoming problems with initial growth and establishment.Additionally multiple devices can be linked with each possessing a unique cell type, with effluents being sampled as appropriate.The ability to use well-characterized cell lines means that the devices can easily be adopted by other research groups, without requiring the often difficult and costly processes concerned with involving surgical collaborators providing fresh human tissue material.Finally, the devices have been designed to a scale that makes them suitable for use in Biosafety Level 3 and 4 (BSL3 and 4) environments where a worker's manual dexterity is compromised.We have previously shown that Bt BEVs can be transported across the intestinal epithelium in vivo to reach distant tissues (Jones et al., 2020).Due to their nano-size, BEVs can directly translocate across multiple barriers including the intestinal epithelial cell layer, the blood endothelial layer and ultimately the BBB to reach the brain (Modasia et al., 2023).This has been studied within microfluidic devices by other groups, such as a PDMS device described by Kim et al. (2021) where they culture multiple cell lines within a single device.This work described the transport of exosomes between the gut and BBB and it was noted that the addition of flow could improve uptake of exosomes to the BBB, however this was only seen within the BBB part of the device (Kim et al., 2021).The gut-brain MPS we have incorporates flow and showed that BEVs could be transported not only across the intestinal epithelium of the GIT-chip but can also be transported to a secondary brain-chip device, where they interact with, and are endocytosed by the neuronal cells. V. CONCLUSION We describe a new and flexible tissue perfusion platform that can be joined in series to study cell and tissue processes, specializing in transport across epithelial and endothelial membranes.The device is fabricated from PMMA, which can be re-used multiple times, with an inexpensive carrier that allows simple loading and removal of cell cultures maintained under flow conditions.Here, we have exemplified the platform's attributes by successfully demonstrating the transport of Bt BEVs across epithelial and endothelial layers in dual gut-brain devices.We are currently using the platform to study the transmission of SARS-CoV-2 between the dual connected devices to develop a model of long-covid, exploiting the scale of the device that make it highly amenable for use in BSL3 and 4 facilities. SUPPLEMENTARY MATERIAL See the supplementary material for further details of the device fabrication, flow conditions, and shear stress. FIG. 1 . FIG. 
1.The dual-flow perfusion device.(a) Expanded schematic of the device, with acrylic top and bottom chambers, and a PMMA insert held in place between two O-rings (securing bolts excluded for clarity).(b) Schematic of device setup, with syringes connected to the apical and basal channels of the device.Medium was pumped through the device, connected with Tygon tubing and collected in polypropylene tubes.Detailed images of the device sections are shown in ESI (Figs.S1 and S2 in the supplementary material). FIG. 2 . FIG. 2. Assessment of flow within single and connected devices.(a) Absorbance of effluent collected from a single perfusion device over six days, measured using a plate reader.(b) Absorbance of effluent collected from two connected devices over six days; apical channel of the gut chip connected to the apical channel of the blood brain barrier (BBB) chip.Images of single (c) and connected (d) devices showing isolated flow are maintained in both channels throughout the experimental period.Data representative of five independent repeat experiments, error bars show standard deviation (SD). both devices.The gut-brain connected devices were maintained under flow conditions and morphology visually monitored, with no significant loss of viability observed in either device after 24 h [Figs.6(a) and 6(b)]. FIG. 3 . FIG. 3. Viability of colonic epithelial cell lines assessed through FDA/PI staining.(a) Caco-2 cells or (b) HT29-MTX-E12 cells at 3 days on-chip.(c) Caco-2 cells or (d) HT29-MTX-E12 cells at 7 days on-chip.The bar charts depict the quantification of viability of Caco-2 cells (e) and HT29-MTX-E12 cells (f ) maintained on-chip for 3 and 7 days using FDA/PI staining.Viable cells shown in green and dead cells in red.N = 3 devices, error bars show SD. FIG. 4 . FIG. 4. Immunofluorescent images of Caco-2 cells stained for ZO-1 under static (a) and on-chip conditions (b) and HT29-MTX-E12 cells in on-chip conditions (c) rhodamine-ConA labeled Caco-2 cells under static (d) and on-chip conditions (e) and HT29-MTX-E12 cell under flow conditions (f ), n = 3. PAS stain was used to identify the presence of mucins (g)-(i).Images are representative of cells grown on two separate devices, n = 2. Scale bars 10 μm.
Generation of phase-squeezed optical pulses with large coherent amplitudes by post-selection of single photon and weak cross-Kerr non-linearity Phase-squeezed light can enhance the precision of optical phase estimation. The larger the photon numbers are and the stronger the squeezing is, the better the precision will be. We propose an experimental scheme for generating phase-squeezed light pulses with large coherent amplitudes. In our scheme, one arm of a single-photon Mach-Zehnder interferometer interacts with coherent light via a non-linear optical Kerr medium to generate a coherent superposition state. Post-selecting the single photon by properly tuning a variable beam splitter in the interferometer yields a phase-squeezed output. Optical phase estimation has many practical applications such as metrology [1], optical communications [2][3][4], quantum communications [5,6], and quantum computation [7][8][9][10]. Generally, the precision of the phase estimation is limited by the standard quantum limit (SQL) [1,11]. In order to enhance the precision of the optical phase estimation, two quantum-optics methods have been proposed: entangling of photons into a N00N state [1,[12][13][14][15][16][17] and phase-squeezed light [2,18,19]. In a N00N state, the entangled photon state is written as (|N |0 − |0 |N )/ √ 2, where |0 is the vacuum state and |N is the N -photon number state; this can improve the statistical scaling of the error from that of the SQL, N −1/2 , to that of the Heisenberg limit, N −1 , for quantum metrology [1,14,15]. However, since post-selection is required to create the N00N state [14,15], the measurement is probabilistic. Moreover, it is technically difficult to implement the N00N state with a large photon number [12,[20][21][22]. The N00N state with photon number N = 5 has previously been experimentally achieved by mixing of squeezed vacuum and coherent light [12]. However, generation of N00N states with larger photon numbers is limited by photon losses. The other approach, phase squeezing of light, suppresses phase fluctuations of the coherent light to sub-SQL levels rather than enhancing the amplitude fluctuations [2,18,19]. It has been noted that quantum metrology without post-selection can be implemented by using phase-squeezed light [23]. Moreover, phasesqueezed light can be directly applied to coherent optical communications [2], in applications such as in quantum repeaters using entangled atoms (prepared by entanglement generation by communication [24]), to improve both the fidelity between an ideal atomic Bell state and the generated one and the success probability of the entanglement generation, and in quantum computation using atomic entanglement generation by communication [25], to suppress the error probability in the entanglement generation. To improve the precision with which the applications mentioned above can be implemented, it is important to generate strongly phase-squeezed light pulses with large coherent amplitudes. There are several theoretical proposals for generating phase-squeezed light [18,19,26,27]. As an example, the phase-squeezed state can be mathematically treated as a "displaced" squeezed vacuum state [18,19,28]. When such a displacement operation can be performed on a strongly squeezed vacuum state, strongly phase-squeezed light with large coherent amplitudes may be achieved. The displacement operation is implemented by mixing of two modes of light using a beam splitter [29] and noiseless amplification of squeezed vacuum [26]. 
While a squeezed vacuum with a squeezing of 12.7 dB, generated with a 1064-nm pump laser, has been observed [30], a large displacement operation has not been reported. This may be because photon loss degrades the squeezing effect. As another example, phase squeezing using subharmonic generation has been proposed [27], though this scheme has not yet been implemented experimentally. Experimental examples of using second-harmonic generation include the observation of 3.2-dB phase-squeezed light (continuous wave) at 860 nm using PPKTP [23] and at 1064 nm using KTiOPO 4 [31] with mean photon numbersn = 10 6 s −1 (0.24 pW) andn = 8.6 × 10 14 s −1 (0.16 mW), respectively. However, the generation of phase-squeezed light pulses with large coherent amplitudes has not yet been reported experimentally. In this Letter, we propose a scheme to perform phase squeezing on coherent light pulses with large coherent amplitudes. In our proposed scheme, quantum interference on coherent light induced by the post-selection of single photons coupled with coherent light pulses via a weak cross-Kerr nonlinearity is essential to create the desired state. In order to verify the phase-squeezing effect, we calculated the maximum fidelity between the generated state and an arbitrary squeezed coherent state. We found that a 2.08-dB phase-squeezed state can be generated with a fidelity F = 0.99 and a success rate of 21.89 Hz under an experimentally feasible setup for inducing cross-phase modulation (XPM), previously described in Ref. [32]. Our proposed setup, shown in Fig. 1, has a singlephoton Mach-Zehnder interferometer with arms a and b and uses a nonlinear optical Kerr medium to induce XPM on coherent light in arm c. In the single-photon Mach-Zehnder interferometer, the input single photon passes through a half beam splitter (HBS) and is divided into two arms, a and b; the resulting vacuum-one-photon qubit state is written as where |1 a and |1 b are single-photon states and |0 a and |0 b are vacuum states, in arms a and b, respectively. Then, the single photon on two arms passes through a variable beam splitter (VBS) with a transmissivity t and a reflectivity r where t 2 + r 2 = 1. The quantum states at the output ports f and f ′ are given by respectively. Furthermore, the effect of the nonlinear optical Kerr medium that is placed between arms b and c is represented by a unitary operatorÛ = exp(iφ 0nbnc ), where φ 0 ≪ 1 is the phase shift angle caused by the XPM, andn b andn c are the photon number operators in arms b and c, respectively. It is assumed that the input state of arm c is a coherent photon state |α c with coherent amplitude α. Using the nonlinear optical Kerr medium, the initial total state |i |α c is transformed as follows: When the single photon is detected at a photon detector D1 at port f , the total state |Ψ is post-selected to |f . The post-selected state |ψ can be written as where the success probability P suc of the post-selection is . We note that this proposed setup is the same as that described in Ref. [33]. In the following, we show that the post-selected state |ψ for 1/ √ 2 < t < 1 can be regarded as a quasiphase-squeezed state that has a high fidelity to a pure phase-squeezed state. 
Therefore, we evaluate the maximum fidelity F = ξe i2θ , γe iθ |ψ between the postselected state |ψ and the ideal squeezed coherent state |ξe i2θ , γe iθ for 1/ √ 2 < t < 1 and evaluate two parameters (the squeezing parameter ξ = xe iϕ and the phase angle θ for the maximized fidelity F ), under the following assumptions. We assume that the mean photon numbersn of the coherent state |α and the squeezed coherent state |ξe i2θ , γe iθ are unchanged since the XPM does not affect the photon number [34]. Further, we assume α = 10 5/2 and φ 0 = 2π × 10 −5 , which are the same values used in Ref. [33]. We numerically evaluated the maximized fidelity F for various values of the transmissivity t; the results are plotted in Fig. 2(a). The estimated parameters ξ est = x est e iϕest and θ est depend on the transmissivity t, and since the fidelity monotonically increases with transmissivity, these parameters can also be written as functions of the fidelity F . These parameters are plotted in Figs. 2(b) and (c), respectively. The estimated parameter ϕ est should always be π to obtain the maximum fidelity F . This means that all estimated squeezed states are phase-squeezed states. Therefore, the post-selected state |ψ can be regarded as the phase-squeezed state for the high-fidelity cases. Moreover, since our scheme requires post-selection, the success probability of the postselection P suc for the fidelity F is shown in Fig. 2(d). The estimated parameters of the representative cases (1)-(4) in Fig. 2(a) are summarized in Table I. In case (1), since the post-selected state is equivalent to the odd coherent state for t = r = 1/ √ 2, the fidelity is worse than that of the phase-squeezed state (F = 0.69). By contrast, when the post-selection succeeds for a transmissivity of t = 1, the single photon is transmitted in only arm b of the Mach-Zehnder interferometer. Therefore, the post-selected state is just the phase-shifted coherent state |αe iφ0 c . These cases cannot be regarded as the phase-squeezed state. On the other hand, an effective squeezing is obtained in both cases (2) F = 0.99 (for t = 0.717) and (3) F = 0.999 (for t = 0.724) with high fidelity. Therefore, these post-selected states can be regarded as quasi-phase-squeezed states. Throughout this Letter, we refer to the post-selected state as the phasesqueezed state when the fidelity F ≥ 0.99 and x est ≥ 0.01 To verify the squeezing effect, we calculate probability density distributions | p|ψ | 2 [35], which represent the outcomes of the quadrature measurement of the postselected state projected onto the p-axis as shown in Fig. 3(a)-(d), which correspond to cases (1)-(4) in Fig. 2(a), respectively. For t = 1/ √ 2 (i.e., the odd coherent state, as mentioned above) as shown in Fig. 3(a), two "squeezing-like" distributions are formed by the quantum interference in the overlap between |α c and |αe iφ0 c due to the post-selection. For t = 1, the quantum interference does not occur, as shown in Fig. 3(d); the post-selected state is just a phase-shifted coherent state. However, when the probability amplitudes of Eq. (2) are post-selected, the quantum interference leads to almost the complete elimination of one peak of the probability distribution and formation of a squeezed probability distribution compared to the Gaussian case, as shown in Figs. 3(b) and (c). This is why our proposed scheme can be regarded as a phase squeezer. 
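A small numerical sketch of this fidelity test is given below using QuTiP. Because a Fock-basis simulation at the paper's mean photon number (|α|² ≈ 10⁵) is intractable, the amplitude, phase shift, and transmissivity are illustrative assumptions; the post-selected state is modelled as proportional to r|α⟩ − t e^{iχ}|α e^{iφ₀}⟩, with χ = −|α|²φ₀ standing in for the compensating phase shifter discussed later in the text, and the exact coefficients and phases follow from the paper's post-selection equation. This is therefore a sketch of the procedure rather than a reproduction of the reported numbers.

```python
import numpy as np
from qutip import coherent, destroy, displace, squeeze, basis, fidelity, expect

# Illustrative reconstruction of the fidelity test at a small coherent
# amplitude (a Fock-basis simulation at |alpha|^2 ~ 1e5 is not tractable).
# The post-selected field state is modelled as proportional to
#     r|alpha> - t exp(i chi) |alpha exp(i phi0)>,
# with chi = -|alpha|^2 phi0 standing in for the compensating phase shifter
# discussed in the text; the parameter values below are assumptions chosen
# for tractability, not the paper's working point.
N = 140                          # Fock-space truncation
alpha, phi0, t = 6.0, 0.05, 0.72
r = np.sqrt(1.0 - t ** 2)
chi = -(abs(alpha) ** 2) * phi0

psi = (r * coherent(N, alpha)
       - t * np.exp(1j * chi) * coherent(N, alpha * np.exp(1j * phi0))).unit()

# Scan squeezed coherent states D(beta) S(x exp(i phase)) |0>, displacing by
# the state's own mean field amplitude for simplicity, and keep the best fit.
a = destroy(N)
beta = expect(a, psi)            # complex mean field amplitude
D = displace(N, beta)
vac = basis(N, 0)
best = (0.0, 0.0, 0.0)
for x in np.linspace(0.0, 0.4, 21):
    for phase in np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False):
        trial = D * squeeze(N, x * np.exp(1j * phase)) * vac
        F = fidelity(psi, trial)
        if F > best[0]:
            best = (F, x, phase)

# For a nearly real displacement, a best-fit squeezing phase near pi would
# indicate squeezing of the phase quadrature.
print(f"best fit: F = {best[0]:.3f}, x = {best[1]:.2f}, squeezing phase = {best[2]:.2f} rad")
```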
Further, when the overlap α|αe iφ0 c is very small, i.e., the distance between the states |α c and |αe iφ0 c is very large, the effect of the quantum interference in the case of two well-separated peaks is miniscule. Therefore the squeezing effect does not occur, as shown in Fig. 4. We numerically confirm that using φ 0 0.01 with α = 10 5/2 does not achieve effective squeezing for any transmissivity value. We note that the back-action of the coherent light on the single-photon interferometer should be compensated for. The interaction between the coherent state and the single-photon interferometer induces a relative phase shift between the arms of the interferometer, which may be represented as follows: The squeezing effect can be obtained only when |α| 2 φ 0 is near a multiple of 2π since the post-selected state is also affected by the relative phase shift e i|α| 2 φ0 in the interferometer. For example, for |α| 2 φ 0 = π using α = 10 5/2 and φ 0 = π × 10 −5 , the post-selected state with t = 1/ √ 2 is approximately the coherent state. Therefore, compensation by using a phase shifter on the single-photon interferometer is needed when |α| 2 φ 0 is not near a multiple of 2π in order to obtain the phase-squeezing effect. Summing up, we have described how to achieve the phase-squeezing effect for a coherent light pulse with large coherent amplitude by proper post-selection of single photons coupled with coherent light pulses via the weak cross-Kerr nonlinearity. Let us discuss the experimental feasibility of our scheme by considering already established methods for creating XPM with a single-photon-level nonlinearity. Three methods to implement single-photon-induced XPM have been experimentally reported [32,36,37]. In Refs. [36,37], atoms and quantum dots in the cavity were used, respectively. Although a large phase shift (> 0.05π) can be achieved in both cases, our proposal is incapable of using such a large phase shift as mentioned above. Therefore, these methods cannot be employed in our proposal. On the other hand, in Ref. [32], a cross-Kerr-induced phase shift of 10 −7 rad was measured in a photonic crystal fiber for coherent light at single-photon-level intensities by averaging over 3 × 10 9 pulses at a repetition frequency of 1 GHz at room temperature. Since a pulsed laser with a wavelength of 802 nm is used, the photon loss of the homodyne measurement can be made small using a Si photodetector [38] that operates at around 800 nm. We note that the total photon loss of the homodyne measurement is ∼ 0.07 at 860 nm [39]. Here, as mentioned above, we assume that the effect of photon losses on the generated light is negligible. In addition, the photon loss in the Mach-Zehnder interferometer is also negligible because of the event selection. The mean photon number of the coherent light is set to |α| 2 = 3.0 × 10 6 since self-Kerr and second-order nonlinear effects are observed for larger mean photon numbers. For φ 0 = 10 −7 and F = 0.99, the amplitude of the squeezing of the post-selected state is obtained to be x est = 0.24 (2.08 dB) with a phase shift angle θ est = 2.88 × 10 −4 and a probability P suc = 2.19 × 10 −8 . In this condition, the transmissivity should be tuned to t = 0.70719. Similarly, for φ 0 = 10 −7 and F = 0.999, x est = 0.13 (1.13 dB) is obtained with θ est = 2.06 × 10 −4 and P suc = 5.06 × 10 −8 . In this condition, the transmissivity should be tuned to t = 0.70725. 
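The success probabilities just quoted translate into laboratory success rates once the 1 GHz pulse repetition rate of Ref. [32] is taken into account; a brief check of the arithmetic:

```python
# Post-selection probabilities quoted above combined with the 1 GHz pulse
# repetition rate from Ref. [32] give the expected rate of successful events.
rep_rate_hz = 1.0e9
for target_fidelity, p_suc in [(0.99, 2.19e-8), (0.999, 5.06e-8)]:
    print(f"F = {target_fidelity}: about {p_suc * rep_rate_hz:.1f} post-selections per second")
```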
These estimated values for the amplitude of the squeezing parameter are the same for both fidelities in the case of |α| 2 = 10 5 and φ 0 = 2π × 10 −5 . This is because the effect of the quantum interference between |α c and |αe iφ0 c due to the post-selection of the single photon is the same as in the above case since the estimated phases θ est are smaller. The success rates of the postselection are about 21.89 Hz and 50.55 Hz for F = 0.99 and F = 0.999, respectively. For these success rates, the post-selection is experimentally achievable by employing a beam shutter that works at kHz rates. In conclusion, we proposed an experimental scheme for generating phase-squeezed light pulses by the postselection of single photons coupled with coherent light pulses via a weak cross-Kerr nonlinearity. To implement the post-selection of single photons, a Mach-Zehnder interferometer with a VBS as an output is used. When one arm of the Mach-Zehnder interferometer interacts with the coherent light via the weak cross-Kerr nonlinearity, a superposition of the non-phase-shifted coherent state |α c and the phase-shifted coherent state |αe iφ0 c is generated. The post-selection of the single photon achieves quantum interference between |α c and |αe iφ0 c . When the transmissivity and the reflectivity of the VBS are properly set, an effective squeezing can be obtained such that the output has high fidelity to the ideal phasesqueezed state. Finally, there remain few further considerations with regard to our proposed scheme. First, our proposal may be related to the context of the weak-value amplification for the phase shift of coherent light by post-selection of single photons, which has already been discussed in Ref. [33]. Our estimated phases are also amplified, as shown in Table I, and thus the squeezing effect in our proposal also may be associated with weak values. Furthermore, the relation between quantum interference and weak values has already been discussed [40,41]. Since the argument of the weak value is taken as the geometric phase, the relative phase may be characterized by the geometric phase of the single-photon interferometer. Additionally, our scheme may be generalized to an arbitrary quadrature squeezer, since quantum interference can be controlled by changing the relative phase. Second, to obtain a larger squeezing effect, we investigate the repeated use of our proposed scheme. The same degree of squeezing can be obtained for an output with the same fidelity to the phase-squeezed state by straightforward extension. However, the squeezing effect under photon losses using a beam splitter model is considered [42]. In particular, in our proposed scheme, photon losses should be considered in the single-photon interferometer and the post-selected state. In the first case, the photon loss may change the success rate of the post-selection. In the latter case, the photon loss may collapse the post-selected state to a coherent state, and the measurement loss may be dominant since the measurement of the generated light is carried out after the post-selection.
Poincar\'{e} Sobolev equations in the Hyperbolic space We study the a priori estimates,existence/nonexistence of radial sign changing solution, and the Palais-Smale characterisation of the problem $-\De_{\Bn}u - \la u = |u|^{p-1}u, u\in H^1(\Bn)$ in the hyperbolic space $\Bn$ where $1<p\leq\frac{N+2}{N-2}$. We will also prove the existence of sign changing solution to the Hardy-Sobolev-Mazya equation and the critical Grushin problem. Positive solutions of (1.1) has been extensively studied in [9]. In fact it was shown in [9] that (1.1) has a positive solution iff either 1 < p < N +2 N −2 and λ < (N −1) 2 and N ≥ 4. The solutions are also shown to be unique up to isometries (except in N = 2 where there is a restriction on p). In this article we focus on sign changing solutions of (1.1). The subcritical case is quite different from the critical case where the lack of compactness of the problem comes in to picture. We will present this compactness analysis in Section 3, Theorem 3.1 and Theorem 3.3. In section 4, we will some appriori estimates on the solution. In the fifth section we will prove our main existence results Theorem 5.1, Theorem 5.2, Theorem 5.3 and Theorem 5. 5. Some preliminaries about Hyperbolic space are discussed in the appendix. Priliminaries Let B N := {x ∈ R N : |x| < 1} denotes the unit disc in R N . The space B N endowed with the Riemannian metric g given by g ij = ( 2 1−|x| 2 ) 2 δ ij is called the ball model of the Hyperbolic space. We will denote the associated hyperbolic volume by dV B N and is given by The hyperbolic gradient ∇ B N and the hyperbolic Laplacian ∆ B N are given by denotes the Sobolev space on B N with the above metric g, then we have H 1 (B N ) ֒→ L p (B N ) for 2 ≤ p ≤ 2N N −2 when N ≥ 3 and p ≥ 2 when N = 2. In fact we have the following Poincaré-Sobolev inequality (See [9]) : For every N ≥ 3 and every p ∈ (2, 2N N −2 ] there is an optimal constant S N,p,λ > 0 such that for every u ∈ H 1 (B N ). Existence of exremals for (2.5) and their uniqueness has been studied in [9]. If N = 2 any p > 2 is allowed (See [2], [10] for a more precise embedding in this case). Thanks to (2.5) solutions of (1.1) can be characterised as the critical points of the energy functional I λ given by Conformal change of metric. Let f : M → N be a conformal diffeomorphism between two Riemannian manifolds (M, g) and (N, where ∆ g , S g and ∆ h , S h are the Laplace Beltrami operators and scalar curvatures on M and N respectively. Then if v is a solution of (2.8), then one of the integral is finite. As an easy consequence, if τ ∈ I(B N ) the isometry group of B N and u any solution of (1.1) then v = u • τ is again a solution of (1.1) and I λ (u) = I λ (v). See the appendix for details about the isometry group I(B N ). As another consequence, noting that the hyperbolic metric g = φ and the scalar curvature u solves the Euclidean equation ). Let us denote the energy functional corresponding to (2.9) by Then for any u ∈ H 1 (B N ) ifũ is defined asũ = ,ṽ whereṽ is defined in the same way. Compactness and non-compactness In this section we will study the compactness properties of (1.1). Let u ∈ H 1 (B N ) and b n ∈ B N such that b n → ∞ and τ n be the Hyperbolic translations(see Appendix) such that τ n (0) = b n . Define u n = u • τ n , then ||u n || = ||u|| but u n ⇀ 0 in H 1 (B N ). This shows that the embedding Hence the problem (1.1) is non compact even in the subcritical case. 
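Several of the display formulas introducing the metric, volume element, Laplacian, and inequality (2.5) above were lost or garbled in extraction. The LaTeX block below restates them in the standard form for the ball model; the restriction λ ≤ (N−1)²/4, the bottom of the L² spectrum of −Δ_{B^N}, is assumed here to be the intended bound where the exponent was garbled in the text.

```latex
% Ball-model metric, volume element and Laplace--Beltrami operator
g_{ij} = \left(\frac{2}{1-|x|^2}\right)^{2}\delta_{ij}, \qquad
dV_{\mathbb{B}^N} = \left(\frac{2}{1-|x|^2}\right)^{N} dx, \qquad
\Delta_{\mathbb{B}^N} = \frac{(1-|x|^2)^2}{4}\,\Delta
   + (N-2)\,\frac{1-|x|^2}{2}\; x\cdot\nabla .

% Poincare-Sobolev inequality (2.5): for N >= 3, p in (2, 2N/(N-2)] and
% lambda <= (N-1)^2/4, there is an optimal constant S_{N,p,lambda} > 0 with
S_{N,p,\lambda}\left(\int_{\mathbb{B}^N} |u|^{p}\, dV_{\mathbb{B}^N}\right)^{2/p}
\le \int_{\mathbb{B}^N}\left(|\nabla_{\mathbb{B}^N} u|^{2} - \lambda u^{2}\right) dV_{\mathbb{B}^N},
\qquad u \in H^{1}(\mathbb{B}^N).
```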
Below we will show that we can overcome this problem in the subcritical case by restricting to the radial situation. The critical case is more involved; we will show that the noncompactness can occur through two profiles.
The radial case. Let H^1_r(B^N) denote the subspace of radial functions in H^1(B^N). Since the hyperbolic sphere with centre 0 ∈ B^N is also a Euclidean sphere with centre 0 ∈ B^N (see the appendix), H^1_r(B^N) can also be seen as the subspace consisting of hyperbolic radial functions. We have the following compactness result (Theorem 3.1): the embedding H^1_r(B^N) ↪ L^p(B^N) is compact for 2 < p < 2N/(N−2).
Proof. Let u ∈ H^1_r(B^N); then u(x) = u(|x|), denoting the radial function by u itself. Writing the norm in polar coordinates, where ω_{N−1} is the surface area of S^{N−1}, one obtains a pointwise decay estimate for u ∈ H^1_r(B^N). Now let u_m be a bounded sequence in H^1_r(B^N). Then up to a subsequence we may assume u_m ⇀ u in H^1_r(B^N) and pointwise. To complete the proof we now need to show u_m → u in L^p(B^N). The convergence of the first integral follows from Rellich's compactness theorem. The convergence of the second integral follows from the dominated convergence theorem, since on {|x| > 1/2} we have the estimate |u_m(x)|^p ≤ C((1−|x|^2)/2)^{(N−1)p/2}, and this majorant is integrable with respect to dV_{B^N}. This completes the proof. But the above theorem fails for p = 2 and p = 2^*.
Palais–Smale Characterisation
In this section we study the Palais–Smale sequences of the problem (3.11). To be precise, define the associated energy functional I_λ as before; we say a sequence u_n ∈ H^1(B^N) is a Palais–Smale sequence (PS sequence) at level d if I_λ(u_n) → d and I_λ′(u_n) → 0. One can easily see that PS sequences are bounded. Therefore, if we restrict I_λ to H^1_r(B^N) and p < (N+2)/(N−2), then it follows from Theorem 3.1 that every PS sequence has a convergent subsequence. This is not the case if we relax either one of the above conditions, as we will see below. In this section we will analyse this lack of compactness of PS sequences. First observe that the equation (3.11) is invariant under isometries, i.e., if u is a solution of (3.11) and τ ∈ I(B^N), then v = u ∘ τ is also a solution of (3.11). Thus for a solution U of (3.11), if we define u_n = U ∘ τ_n, where τ_n ∈ I(B^N) with τ_n(0) → ∞ (this is (3.13)), then u_n is a PS sequence converging weakly to zero. We will see that in the subcritical case noncompact PS sequences are made of finitely many sequences of type (3.13). However, in the critical case p = 2^* − 1 we can exhibit another PS sequence coming from the concentration phenomenon. Let V be a solution of the limiting equation (3.14); the associated energy J(V) is given by (3.15). If v_n is built from V as in (3.16), with ε_n > 0 and ε_n → 0, then a direct calculation shows that v_n is also a PS sequence. Moreover we have Lemma 3.2: let u_n be a PS sequence of (3.11) and τ_n ∈ I(B^N); then v_n := u_n ∘ τ_n is also a PS sequence of (3.11). Thus if τ_n ∈ I(B^N) and v_n is as in (3.16), then u_n = v_n ∘ τ_n is also a PS sequence. We show that any PS sequence is essentially a superposition of the above types of PS sequences (Theorem 3.3): there exist n_1, n_2 ∈ N and functions u^j_n, v^k_n into which u_n decomposes, where U_j, V_k are the solutions of (3.11) and (3.14) corresponding to u^j_n and v^k_n. Classification of PS sequences has been done for various problems in bounded domains in R^N and on compact Riemannian manifolds, where the lack of compactness is due to the concentration phenomenon (see [15], [13], [5], and the references therein). However, the present case should be compared with the infinite volume case, say the critical equations in R^N. In that case lack of compactness can occur through vanishing of the mass (in the sense of the concentration compactness of Lions). However, in the Euclidean case, by dilating a given sequence we can assume that all the functions involved have a fixed positive mass in a given ball and hence we can overcome the vanishing of the mass.
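The pointwise decay estimate invoked in the proof above is only partially legible; the following is a hedged reconstruction of the standard radial estimate the argument appears to use (the constant is an assumption):
\[
|u(x)|\;\le\;C_N\,\|u\|_{H^{1}(\mathbb{B}^N)}\Big(\frac{1-|x|^{2}}{2}\Big)^{\frac{N-1}{2}},\qquad u\in H^{1}_{r}(\mathbb{B}^N),\ |x|\ge\tfrac12 .
\]
Raising this to the power p and multiplying by the volume weight \(dV_{\mathbb{B}^N}=\big(\tfrac{2}{1-|x|^{2}}\big)^{N}dx\) gives a majorant of order \(\big(\tfrac{1-|x|^{2}}{2}\big)^{\frac{p(N-1)}{2}-N}\) near \(|x|=1\), which is integrable exactly when \(\tfrac{p(N-1)}{2}-N>-1\), i.e. when \(p>2\). This is consistent with the failure of compactness at \(p=2\) (no decay gain), while at \(p=2^{*}\) compactness fails for a different reason, namely concentration.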
However in the case of B N this is not possible as the conformal group of B N is the same as the isometry group. We will overcome this difficulty by doing a concentration function type argyment near infinity. For this purpose let us define is the open ball in the Euclidean space with center a and radius r > 0. Note that for the above choice of a and r, ∂B(a, r) is orthogonal to S N −1 . We also have, Proof. Let M : B N → H N be the standard isometry between the ball model and maps A(a 1 , r 1 ) to A(a 2 , r 2 ). Proof of Theorem 3.3. From standard arguments it follows that any PS sequence is bounded in and hence boundedness follows. Thus up to a subsequence we may assume Step 1. In this step we will prove the theorem when u = 0. Proof. Since su n is a PS sequence we have Since the square root of LHS is an equivalent norm in H 1 (B N ) and u n does not converge strongly to zero we get . Let us define the concentration function Q n : (0, ∞) → R as follows. Now lim r→0 Q n (r) = 0,and lim r→∞ Q n (r) > δ as for large r, A(x, r) approximates the intersection of B N with a half space {y ∈ R N : y · x > 0}. Therefore we can choose a sequence R n > 0 and x n ∈ S Rn s.t Since T n is an isometry one can easily see that {v n } is a PS sequence of I λ at the same level as u n and Moreover v solves the equation (3.11). Let us consider the two cases: is the Euclidean ball with center a and radius √ 3. Now Now using (3.18), Cauchy-Schwartz and the Poincaré-Sobolev inequality (2.5) we get which is a contradiction. This implies Since a ∈ S √ 3 is arbitrary, the claim follws. Define v n = θv n , then the above claim shows that v n is again a PS sequence . Let us consider a conformal change of the metric, from the hyperbolic to the Eucledean metric. Definẽ where a is a smooth bounded function in B(0, R) given by a( s are solutions of (3.14) and φ ∈ C ∞ c (B N ) such that 0 ≤ φ ≤ 1, φ(x) = 1 for |x| < r and φ(x) = 0 for |x| > R. Moreover the associated energy J λ (ṽ n ) is given by where J λ and J are as in (2.10) and (3.15). Thus Hence the theorem follows in this case. Since v solves (3.11), w n is a PS sequence. Moreover since u n ⇀ 0 in H 1 (B N ), we have T −1 n (0) → ∞ and hence w n is a PS sequence of the form (3.13). We claim that Claim : u n − w n is a PS sequence of I λ at level d − I λ (v). Since . Combining these facts with the invariance of I λ under the action of I(B N ), we get because the linear part follows easily. Using the Hölder inequaliy the L.H.S of (3.21) can be estimated by standard arguments using Vitali's convergence theorem shows that the term inside bracket is of o(1). this proves the claim. In view of the above claim if u n − w n does not converge to zero in H 1 (B N ) we can repeat the above procedure for the PS sequence u n − w n to land in case 1 or case 2. In the first case we are through and in the second case either we will end up with a converging PS sequence or else we will repeat the process. But this process has to stop in finitely many stages as any PS sequence has nonnegative energy and each stage we are reducing the energy by a fixed positive constant. This proves Step 1. Step 2: Let u n be a PS sequence. Then we know that u n is bounded and hence going to a subsequence if necessary we may assume that u n ⇀ u in H 1 (B N ), pointwise and in L p+1 loc (B N ). Thus as before we can show that u n − u is a PS sequence converging weakly to zero, at level d − I λ (u). Now the theorem follows from Step 1. 
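The displays in the statement of Theorem 3.3 are missing above. The following is a hedged, schematic form of the decomposition the theorem appears to assert; the precise construction of the concentrating profiles is an assumption modelled on the description of (3.13) and (3.16):
\[
u_n \;=\; u \;+\; \sum_{j=1}^{n_1} U^{j}\!\circ\tau^{j}_{n} \;+\; \sum_{k=1}^{n_2} W^{k}_{n} \;+\; o(1)\quad\text{in } H^{1}(\mathbb{B}^N),
\]
\[
I_{\lambda}(u_n)\;\longrightarrow\; I_{\lambda}(u)\;+\;\sum_{j=1}^{n_1} I_{\lambda}(U^{j})\;+\;\sum_{k=1}^{n_2} J(V^{k}),
\]
where u solves (3.11), the \(U^{j}\) solve (3.11) and are moved off to infinity by isometries \(\tau^{j}_{n}\in I(\mathbb{B}^N)\) with \(\tau^{j}_{n}(0)\to\infty\) (profiles of type (3.13)), and the \(W^{k}_{n}\) are concentrating profiles of type (3.16), built from solutions \(V^{k}\) of the limiting equation (3.14) with concentration parameters \(\varepsilon^{k}_{n}\to 0\); here J is the limiting energy (3.15). In the subcritical case the second sum is empty.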
A priori estimates From the standard elliptic theory we know that the solutions of (1.1) are in C 2 (B N ). But we do not have any information on the nature of solution as x → ∞ (equivalently as |x| → 1). If u is a positive solution of (1.1), then u is radial with respect to a point and the exact behaviour of u(x) as x → ∞ has been obtained in [9] by analysing the corresponding ode. In the general case we prove Proof. We will prove the theorem in a few steps. First we will show that u is bounded. Step 2: Let R be as in Step 1, then there exists C > 0 such that sup B(0,R) |u•τ | ≤ C, for all τ ∈ I(B N ) and hence u is bounded. Proof: As in the previous step we will prove sup B(0,R) |u| ≤ C and the constant remains unchanged if u is replaced by u • τ . Thanks to the Step 1 we have λ + |u| p−1 ∈ L q B(0,2R) for some q > N 2 . Define, λ + |u| p−1 = g, hence |g| L q B(0,2R) ≤ C(R, ||u|| H 1 (B N ) ). From the expression (4.24) of step 1 we can see that . Now let 1 q ′ = θ + 1−θ r . then using interpolation inequality we get where θ depends on N, t, q ′ . Note that 2r = 2 * . Therefore, where C depends on ||u|| H 1 (B N ) , N. Now using Poincaré-Sobolev inequality in the above expression we get Now letting m → ∞ we get is finite. C is a positive constant independent of γ. Now we will complete the proof by iterating the above relation. Let us take γ = 2, 2χ, 2χ 2 ... i.e. γ i = 2χ i for i = 0, 1, 2,... . Now by iteration we obtain Hence u + is bounded in B(0, R). Applying the same argument to −u instead of u we get u − is also bounded by the same. Since we can take τ = τ b , the hyperbolic translation for any b ∈ B N we get sup Step 3: u(x) → 0 and Proof: Let b n ∈ B N such that b n → ∞. Let τ n ∈ I(B N ) be the hyperbolic isometry such that τ n (0) = b n . Define v n = u • τ n , then we know that v n ⇀ 0 in H 1 (B N ). Since v n 's are uniformly bounded and v n satisfies (1.1), we get ∆ B N v n is uniformly bounded and hence v n 's are uniformly bounded in W 2,p loc (B N ), ∀p, 1 < p < ∞. Combining with Sobolev embedding theorem we get v n → 0 in C 1 (B(0, 1 2 )). In prticular |v n (0)| → 0 and |∇v n (0)| → 0. Writing in terms of u, we get |u(b n )| → 0 and (1 − |b n | 2 )|∇u(b n )| → 0. Now the theorem follows as Next we prove an improvement of the above result under some restrictions on λ. Proof. First consider the case p = 2 * − 1. In this case using the conformal change of metric we know that if u solves (1. From standard elliptic theorey we know that v ∈ C 2 (B N ). Now to prove the bound near infinity, we do a Moser iteration. Fix a point x 0 ∈ R N such that 2 and |∇ϕ| ≤ C r i −r i+1 . Then 0 ≤ w ∈ H 1 0 (B N ) and hence from (2.9) we can write Again as in Step 1 of Theorem 4.1 we get Sinceλ ≤ 0 we can ignore the term which contains singularity at the origin to obtain Now we can do the standard Moser iteration techniques as in Step 1 and Step 2 of Theorem 4.1 to conclude v ∈ L ∞ (B(x 0 , R) ∩ B N ). Since x 0 is arbitrary and we can cover B N ∩ {x : |x| ≥ R 2 } by finitely many sets of the form B(x 0 , R) ∩ B N the claim follows. When p < 2 * − 1, the conformal change will give us an equation of the form where t = N − N −2 2 (p + 1). Again one can proceed as before to do a Moser iteration to get the result. 
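The inequality that is iterated in Steps 1 and 2 above is garbled in the source. A hedged, schematic version of the Moser iteration being used (the constant structure and the value χ = N/(N−2) are assumptions, consistent with the text's iteration γ_i = 2χ^i) is:
\[
\|u\|_{L^{\gamma\chi}(B(0,r_{i+1}))}\;\le\;(C\gamma)^{C/\gamma}\,\|u\|_{L^{\gamma}(B(0,r_{i}))},\qquad \chi=\frac{N}{N-2},
\]
obtained by testing the equation against powers of u, applying the Poincaré–Sobolev inequality, and absorbing the term λ + |u|^{p−1} ∈ L^{q}, q > N/2. Iterating with γ_i = 2χ^{i} and a decreasing sequence of radii r_i yields
\[
\sup_{B(0,R)}|u|\;\le\;C\big(N,R,\|u\|_{H^{1}(\mathbb{B}^N)}\big),
\]
and, since the constant is unchanged when u is replaced by u ∘ τ with τ ∈ I(B^N), the bound is uniform over all isometric translates, which is what gives the global boundedness in Step 2.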
Of course, while estimating the terms on the RHS one has to use the Hardy inequality.
Existence and Non-Existence of sign changing radial solutions
In this section we will study the existence and non-existence of sign changing solutions of the problem (5.29), where λ < ((N−1)/2)^2 and 1 < p ≤ (N+2)/(N−2). We will see below that there is a significant difference between the cases 1 < p < (N+2)/(N−2) and p = (N+2)/(N−2). In the subcritical case we have Theorem 5.1: let 1 < p < (N+2)/(N−2); then there exists a sequence of solutions u_k of (5.29) such that ||u_k|| → ∞ as k → ∞. Remark 1. The above result holds when λ = ((N−1)/2)^2, with u_k ∈ H and the corresponding norm going to infinity as k → ∞. As an immediate corollary we obtain the existence of sign changing solutions for the Hardy–Sobolev–Mazya equation and the critical Grushin equation: (1.2) admits a sequence v_k of sign changing solutions such that ||∇v_k||_2 → ∞ as k → ∞. Proof. As mentioned in the introduction, cylindrically symmetric solutions of (1.2) are in one-to-one correspondence with the solutions of (1.1) with N = n − k + 1 and p = p_t. One can easily see that p = p_t < (N+2)/(N−2), thus Theorem 5.1 applies. Let v_k be the solution of (1.2) corresponding to u_k; then since ||u_k|| → ∞ we get ||∇v_k||_2 → ∞ (see [9], Section 6, for details). A related result for the problem in a geodesic ball was established in [9]. Since the existence of a nontrivial sign changing radial solution to (5.29) gives a solution to the above problem in a geodesic ball, we conclude a corresponding existence result; similarly, we have an analogous statement for the critical Grushin problem.
Solutions of (5.29) are in one-to-one correspondence with the critical points of the associated functional J. From the Poincaré–Sobolev inequality we know that J is well defined and C^1 in H^1(B^N). The main difficulty in finding critical points of J is due to the lack of compactness, which we have already analysed in the last section.
Proof of Theorem 5.1. Thanks to Theorem 3.1, J : H^1_r(B^N) → R satisfies the Palais–Smale condition, and hence using standard genus arguments as in Ambrosetti–Rabinowitz ([1], Theorems 3.13, 3.14) we get a sequence u_k, k = 1, 2, ..., of critical points for J|_{H^1_r(B^N)} with ||u_k|| → ∞. Also we know that the critical points of J|_{H^1_r(B^N)} are critical points of J in H^1(B^N) as well (see [11]). This proves the theorem.
Proof of Theorem 5.5. We know from [9] that (5.29) has a unique positive radial solution, say u_0. In order to prove the existence of a sign changing solution we proceed as in [6] (see [16] for the same kind of result on compact Riemannian manifolds). First recall that the unique positive radial solution u_0 achieves the infimum of I_λ on the Nehari manifold N. Next observe that if u is a sign changing solution then u^± ∈ N. Thus, to look for a sign changing radial solution, we only need to look at the H^1_r(B^N) functions whose positive and negative parts are in N. More precisely, for u ∈ H^1(B^N) let us define an auxiliary functional f_λ with f_λ(0) = 0, and let N_1 and U be defined accordingly. We can easily check that U ≠ ∅, and the Poincaré–Sobolev inequality tells us that there exists α > 0 such that u ∈ U ⇒ ||u^±|| > α. Claim: let β = inf_{u ∈ U} I_λ(u); then there is a PS sequence of I_λ at level β, and β lies below the level at which the noncompact profiles of Theorem 3.3 can appear. Assuming the claim, let us observe from Theorem 3.3 that the PS sequence obtained in the above claim must be of the form u_n = u + o(1), where u is a nontrivial solution of (5.29). Since I_λ(u) = β we immediately see that u changes sign, and hence the theorem follows. Now it remains to prove the claim. Proof of claim: Existence of the PS sequence at level β follows exactly as in [6]. We will just outline the arguments and refer to [6] and the references therein for details.
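Several displays in this outline (the Nehari manifold, the definition of the level β, and the estimate on β that the claim requires) are missing above. The following hedged reconstruction, modelled on the standard Brezis–Nirenberg-type scheme of [6], records what they are presumably meant to be; the exact definitions of N_1, U and f_λ are assumptions:
\[
\mathcal N=\Big\{u\in H^{1}(\mathbb{B}^N)\setminus\{0\}:\int_{\mathbb{B}^N}\big(|\nabla_{\mathbb{B}^N}u|^{2}-\lambda u^{2}\big)dV_{\mathbb{B}^N}=\int_{\mathbb{B}^N}|u|^{p+1}dV_{\mathbb{B}^N}\Big\},
\]
\[
\beta=\inf\big\{I_{\lambda}(u): u\in H^{1}_{r}(\mathbb{B}^N),\ u^{\pm}\in\mathcal N\big\},\qquad
\beta\;<\;I_{\lambda}(u_{0})+\tfrac{1}{N}S^{N/2},
\]
where S is the best constant in the Euclidean Sobolev inequality and \(\tfrac1N S^{N/2}=J(V)\) is the energy of a single Euclidean bubble. The test functions \(v_{\varepsilon}\) appearing in the estimate of \(\sup_{a,b}J_{\lambda}(av_{0}+bv_{\varepsilon})\) are presumably cutoffs of the Aubin–Talenti instantons
\[
V_{\varepsilon}(x)=\frac{\big(N(N-2)\varepsilon^{2}\big)^{\frac{N-2}{4}}}{\big(\varepsilon^{2}+|x|^{2}\big)^{\frac{N-2}{2}}},\qquad
\int_{\mathbb{R}^N}|\nabla V_{\varepsilon}|^{2}\,dx=\int_{\mathbb{R}^N}V_{\varepsilon}^{2^{*}}dx=S^{N/2},
\]
whose standard expansions enter the verification of the strict inequality on β for ε small.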
Let us define P to be the cone of non negative functions in H 1 r (B N ) and Σ be the collection of maps σ ∈ C(Q, H 1 r (B N )) where Q = [0, 1] × [0, 1], satisfying σ(s, 0) = 0, σ(0, t) ∈ P, σ(1, t) ∈ −P, (I λ • σ)(s, 1) ≤ 0, f λ • σ(s, 1) ≥ 2 for all s, t ∈ [0, 1]. The very same arguments used in [6] tells us that If β is not a critical level then we can use a variant of the standard deformation lemma to conclude that the above min max level can be further lowered leading to a contradiction (See [6] for details). Thus the crucial step to prove is the estimate on β. Let φ ∈ C ∞ c (B N ) such that 0 ≤ φ ≤ 1 and φ = 1 on |x| < r where 0 < r < 1. Let u 0 be the unique positive solution of (5.29) then for suitable a, b ∈ Thus the estimate on β follows once we show that sup a,b∈R Making a conformal change enough to show that sup a,b∈R where J λ is as in (2.10) and v 0 (x) = ( 2 1−|x| 2 ) N−2 2 u 0 (x). Before proceeding to prove this we need to calculate few estimates. w µ Now let us recall some results from [4] Now using (i), (ii), (iii) and the fact that v ε has support in B(0, R) where R < 1 we can compute the following estimates For proof of (4) and (5) see appendix. Taking b = 0, a = 1 we can see that sup a,b∈R J λ (av 0 + bv ε ) > 0 and J λ t(av 0 + bv ε ) < 0 while |t| → ∞ and a, b to be fixed. Therefore it is enough to consider sup |a|,|b|<K where K is chosen sufficiently large. Therefore using the above estimates and I λ (u 0 ) = J λ (v 0 ) we get Now taking ε to be small enough we can conclude J λ (av 0 +bv ε ) < 1 N (S Appendix In this appendix we will recall a few facts about the Hyerbolic space, especially the disc model. For proofs of theorems and a detailed discussion we refer to [12]. Disc and Upper half space model. We have already introduced the Disc model. The half space model is given by (IH N , g) where , is an Isometry between IH N and B N with M −1 = M. The Hyperbolic distance in B N . The Hyperbolic distance between x, y ∈ B N is given by We define the hyperbolic sphere of B N with center b and radius r > 0, as the set It easily follows that a subset S of B N is a hyperbolic sphere of B N iff S is a Euclidean sphere of R N and contained in B N probably with a different center and different radius. Isometry group of B N . Let a be the unit vector in R N and t be a real number. Let P (a, t) be the hyperplane P (a, t) = {x ∈ R N : x.a = t}.The reflection ρ of R N in the hyperplane P (a, t) is defined by the formula ρ(x) = x + 2(t − x.a)a. Now let b ∈ R N and r is positive real number, then the reflection σ of R N in a sphere S(b, r) = {x ∈ R N : |x − b| = r} is defined by the formula σ(x) = b + ( r |x−b| ) 2 (x − b). Let us denote the extended Euclidean space by,R N := R N ∪ ∞. Definition 6.1. A sphere Σ ofR N is defined to be either a Euclidean sphere S(a, r) or an extended planeP (a, t) = P (a, t) ∪ {∞}. Lemma 6.1. Two spheres ofR N are orthogonal under the following conditions: • The spheresP (a, r) andP (b, s) are orthogonal iff a and b are orthogonal. • The spheres S(a, r) andP (b, s) are orthogonal iff a is inP (b, s). With these definitions we have the following characterisation of the isometry group of B N . Again we refer to [12] for a proof. Hyperbolic Translation. Let S(a, r) be a sphere in R N with r 2 = |a| 2 − 1, therefore S(a, r) is orthogonal to S N −1 . Let σ a be the reflection of R N in S(a, r) and ρ a is the reflection of R N in the hyperplane a.x = 0. Using Lemma 6.1 and Theorem6.2 we get σ a • ρ a is an isometry of B N . Define a * = a |a| 2 . 
Then we get σ a • ρ a (x) = (|a| 2 − 1)x + (|x| 2 + 2x · a * + 1)a |x + a| 2 In particular, σ a • ρ a (0) = a * . Let b be any nonzero point in B N , and b * = a(say). Then |a| > 1 and a * = b. Now if we take r = (|a| 2 − 1) 1 2 , then S(a, r) is orthogonal to S N −1 by Lemma 6.1. Therefore we can define a Möbius transformation of B N by the formula Therefore in terms of b we can write τ b (x) = (1 − |b| 2 )x + (|x| 2 + 2x · b + 1)b |b| 2 |x| 2 + 2x · b + 1 As τ b is the composition of two reflections in hyperplanes orthogonal to the line (− b |b| , b |b| ), the transformation τ b acts as a translation along this line. We define τ 0 to be the identity. Then τ b (0) = b for all b ∈ B N . The map τ b is called the hyperbolic translation of B N by b.
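Two quick checks make the formula for the hyperbolic translation more concrete; the hyperbolic distance formula recorded below is the standard one for the ball model and is stated here as an assumption, since its display is not visible in the appendix above:
\[
\tau_{b}(0)=\frac{(1-|b|^{2})\cdot 0+(0+0+1)\,b}{|b|^{2}\cdot 0+0+1}=b,
\qquad
\tau_{b}(-b)=\frac{-(1-|b|^{2})\,b+(1-|b|^{2})\,b}{(1-|b|^{2})^{2}}=0,
\]
so τ_b indeed carries −b to 0 and 0 to b along the diameter through b. The hyperbolic distance on B^N is
\[
d_{\mathbb{B}^N}(x,y)=\cosh^{-1}\Big(1+\frac{2|x-y|^{2}}{(1-|x|^{2})(1-|y|^{2})}\Big),
\]
so the condition b_n → ∞ used in Section 3 means d_{\mathbb{B}^N}(0,b_n) → ∞, i.e. |b_n| → 1.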
2011-03-24T14:47:57.000Z
2011-03-24T00:00:00.000
{ "year": 2011, "sha1": "0dab88b93499fce3893efac402e444b0c662ffc7", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1103.4779", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0dab88b93499fce3893efac402e444b0c662ffc7", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
252185993
pes2o/s2orc
v3-fos-license
Clinical outcomes of long-term corneas preserved frozen in Eusol-C used in emergency tectonic grafts To report the clinical results on the use of corneas frozen in Eusol-C as tectonic corneal grafts.Retrospective review of medical records of patients who received frozen corneas as emergency tectonic grafts from 2013 to 2020. Corneas had been stored in Eusol-C preservation media at − 78 °C for a mean time of 6.9 months. Diagnosis, transplant characteristics, microbial culture results, anatomic integrity, epithelial healing, neovascularization, transparency, infection and need for additional surgeries were registered. Fifty corneas were used in 40 patients (mean age 60.5 years, 20 males) with a median follow-up of 27.3 months after surgery. Need for tectonic graft was due to: perforation secondary to immune diseases (6, 12%), neurotrophic ulcer (11, 22%), trauma (3, 6%), corneal infection (11, 22%), chronic disorders of the ocular surface (9, 18%) and previous corneal graft failure (10, 22%). Mean size of grafts was 5.6 mm and 36 cases (72%) also received an amniotic membrane graft. Thirty-eight corneas achieved epithelization (76%), 25 (50%) were clear and 19 (38%) developed neovascularization. None of the corneas were rejected. Seventeen corneas (34%) failed: 7 (14%) due to reactivation of baseline disease and 10 (20%) due to primary graft failure. Four corneas (8%) had positive microbial cultures suggestive of contamination and 2 (4%) developed a cornea abscess non-related to a positive microbial culture. Long-term preservation of donor corneas in Eusol-C at − 78 °C is a viable technique to meet the needs of emergency grafts with minimal equipment. Introduction Around 40 million people worldwide are blind, corneal-related blindness, such as trachoma and corneal opacities, representing the third cause of blindness, behind cataracts and glaucoma. (GBD 2019 Blindness and Vision Impairment Collaborators & Vision Loss Expert Group of the Global Burden of Disease Study 2021; Pascolini and Mariotti 2012) The great majority of patients with visual impairment due to corneal disease can be treated with corneal transplantation. Most common causes for transplant are Fuchs dystrophy, keratoconus and sequalae of infectious keratitis (Gain et al. 2016). Corneal transplant is considered the world's most frequent type of transplantation and more than 185,000 corneal transplants are performed every year (Gain et al. 2016). Nevertheless, there are millions of patients worldwide who are waitlisted and around 50% of the world's population have no access to corneal transplantation (Gain et al. 2016;GBD 2019 Blindness and Vision Impairment Collaborators & Vision Loss Expert Group of the Global Burden of Disease Study 2021) Corneal tissue shortage has thus been an ongoing issue, which has worsened due to the Covid-19 pandemic (Aiello et al. 2021;Servin-Rojas et al. 2021). Efforts have been made to optimize the use of available corneal tissue, with individual corneas being implanted in two and even three different recipients (Heindl et al. 2011;Vajpayee et al. 2007). Furthermore, the use of long term stored corneas has also helped reduce this tissue shortage. Cryopreservation is the only current long-term storage method that can virtually preserve tissue structure (Armitage 2009;Chaurasia et al. 2020). Also, ophthalmic emergencies such as corneal perforation or impending corneal perforation require immediate detection and prompt intervention. 
In this context, the use of frozen corneas is promising because it allows simple, fast and low cost amassing of sizable amounts of material with long storage time (Kim et al. 2016;Robert et al. 2012). For the past years, unused corneal lenticules have been stored in Eusol-C (Corneal Chamber, Alchimia, Ponte San Nicolo, Italy) at − 78 °C in our hospital in order to have corneas stored for urgent transplants in cases of corneal perforations. However, this technique has not been widely described. Given the low cost and easiness of this conservation method, it could be useful to reflect our good results for Eye Banks in developing countries or those with fewer resources. Therefore, we hereby report the clinical results regarding efficacy and safety on the use of corneas frozen in Eusol-C as tectonic corneal grafts in a tertiary centre in Spain. Methods For this retrospective study, the medical records of patients who received frozen corneas as emergency tectonic grafts from 2013 to 2020 at the Hospital Clinico San Carlos in Madrid, Spain were reviewed. The protocol was approved by the hospital's Ethics Committee and no informed consent was necessary. All corneas that had been stored in Eusol-C preservation media (Corneal Chamber, Alchimia, Ponte San Nicolo, Italy) at − 78 °C and later used in corneal transplant surgeries were included. The only exclusion criterion was insufficient follow-up data in the medical history. All donors are routinely tested for human immunodeficiency virus, hepatitis B virus, and hepatitis C virus, syphilis, toxoplasma gondii, brucella, Epstein-Barr virus, herpes simplex types I and II, varicellazoster, cytomegalovirus and human T cell lymphotropic virus (HTLV) I. Age, sex and cause of death of the donor were noted. Diagnosis, transplant characteristics (size of tectonic graft, adjuvant use of amniotic membrane graft), microbial culture results, anatomic integrity, epithelial healing, neovascularization, transparency, infection, rejection, need for additional surgeries and follow-up time were registered, as well as storage time of the cornea. The primary efficacy endpoint was whether reepithelization was achieved, while secondary endpoints were transparency, neovascularization, rejection and need for additional surgeries. Safety endpoint was positivity of microbial cultures. All removed failed grafts were tested for microbiology. Statistical analysis was performed using SPSS v25.0 (SPSS, Chicago, IL). Quantitative variables are represented by their mean, along with their standard deviation (SD), while qualitative variables are shown as proportions. Results Fifty corneas that had been stored in Eusol-C preservation media and stored at − 78 °C for a mean time of 6.9 months (range 15 days to 12 months) were included. The fifty corneas were used in forty patients (mean age 60.5 years, 20 males) with a median follow-up of 27.3 months after surgery. Mean size of grafts was 5.6 mm (range 3 to 11) and 36 cases (72%) also received an amniotic membrane graft. The surgeries were successful in all cases with restitution of the globe integrity. Thirty-eight corneas achieved epithelization (76%), 25 (50%) were clear and 19 (38%) developed neovascularization. None of the corneas were rejected. 17 corneas (34%) failed, thus requiring a new surgery: 7 (14%) due to reactivation of baseline disease, 8 (16%) due to primary failure of the graft and 2 (4%) due to perforation of a corneal abscess. 
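As a simple check on the arithmetic of the Results section, the reported proportions can be recomputed from the stated counts. This is only an illustrative sketch using the numbers given in the text (n = 50 grafts); it is not part of the original analysis, which was carried out in SPSS.

n = 50  # corneas included in the series
counts = {
    "epithelization": 38,             # reported as 76%
    "clear graft": 25,                # reported as 50%
    "neovascularization": 19,         # reported as 38%
    "graft failure": 17,              # reported as 34%
    "positive microbial culture": 4,  # reported as 8%
}
for outcome, k in counts.items():
    # proportion of the 50 grafts with this outcome, as a percentage
    print(f"{outcome}: {k}/{n} = {100 * k / n:.0f}%")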
In 10 of these 17 failures, a new frozen cornea was used due to a lack of fresh tissue at that moment. Four corneas (8%) had positive microbial cultures suggestive of contamination and 2 (4%) developed a corneal abscess non-related to a positive microbial culture, both corneas. Discussion Shortage of corneal tissue for transplant is an ongoing issue, which has worsened in the past years, when many potential donors have been discarded due to the pandemic and the difficulty in booking ocular surgeries. We present a viable method of conservation for emergencies that requires little material and is thus suitable for eye banks with few resources. In addition, the use of frozen corneas in Eusol-C may serve as an efficient temporary measure for tectonic restoration of perforated corneas (Fig. 1). During the last decades, with the development of new corneal storage media, an important improvement in the corneal tissue conservation has been achieved. Optisol-GS (Bausch & Lomb, Inc., Rochester, NY, USA) is the most widely used storage medium in the United States, while in Europe it is Eusol-C. Eusol-C is an intermediate-term storage media which has dextran as osmotic agent, sodium pyruvate and glucose as energetic sources, amino acids, mineral salts and vitamins as nutrients, along with gentamicin as antibiotic, Hepes and bicarbonate as buffers, and phenol red as pH indicator (Yüksel et al. 2016). Long-term corneal storage techniques are much more elaborate than short-term ones and include glycerol preservation, lyophilization, gamma irradiation, and cryopreservation. These techniques are generally limited by lack of viable cells in the tissue, the longer time to epithelialization and the graft opacification (Fig. 2). However, advantages of long-term corneal preservation include the limited expression of MHC antigens, reducing the risk of rejection and that keratocytes and endothelial cells of the donor can repopulate the acellular cornea (Lynch and Ahearne 2013). In some cases, frozen and fresh corneal donors have even proven to be equally efficient and safe, with similar recuperation of visual acuity and no untoward complications, such as melting, leaks, or endophthalmitis (Fig. 3) (Robert et al. 2012). Fig. 1 A 50-year-old patient, with a DSAEK 5 years earlier due to bullous keratopathy, presented with central corneal perforation (A). A 4 mm tectonic graft was sutured along with an amniotic membrane graft. Reepithelization was achieved, with no corneal neovascularization and good transparency. There was no graft rejection or infection. Images illustrate graft appearance on the first day (B), 2 weeks (C), 6 weeks (D), 6 months (E) and 12 months (F) after surgery The present case series proves that storage of cornea in Eusol-C at − 78 °C is a simple technique, requiring minimal equipment, which avoids wasting viable tissue more suited for elective surgeries and can effectively be used in tectonic grafts. The reason to include Eusol-C as medium, instead of a more suitable medium for cryopreservation or with glycerol, was because of its wide availability in most hospitals not necessarily performing corneal transplantation. All of the surgeries were successful in terms of restitution of the globe integrity. Overall, 76% of Fig. 2 A 59-year-old patient with ocular pemphigoid, which had required multiple previous amniotic membrane grafts in the past, presented with corneal perforation (A) due to reactivation of the disease. A 4 mm corneal graft combined with amniotic membrane was performed. 
Reepithelization was achieved, although there was corneal neovascularization and loss of transparency. Images of the first day (B), first month (C) and third month (D) are shown. Despite good results, further corneal grafts were needed at 6 and 8 months due to relapse of the disease Fig. 3 An 83-year-old patient presented in the Emergency Room with corneal perforation due to pellucid marginal degeneration (A). A horseshoe shaped graft with an external diameter of 10 mm was cut and an amniotic membrane graft was also sutured. Reepithelization was achieved, with no neovessels and excellent transparency. No graft rejection or failure was noted during 3 years of follow-up. Images of the first day (B), 1 week (C), 1 month (D), 8 months (E) and 2 years (F) after surgery are shown the corneas achieved epithelization (38 patients), 25 (50%) were clear and 19 (38%) developed neovascularization. None of the corneas were rejected. However, 17 corneas (34%) failed: 7 (14%) due to reactivation of baseline disease and 10 (20%) due to primary failure of the graft. In long-term techniques the graft tends to remain opaque and cosmetically unsatisfactory. However spontaneous graft clearing is also possible in cases where the surrounding host endothelium is healthy and migrates into the graft (Sharma et al. 2001). In a series of 195 patients who underwent therapeutic penetrating keratoplasty with cryopreserved corneas without endothelium, 18 patients (9.23%) achieved a clear graft postoperatively (Ying et al. 2022). Despite graft sizes ranging from 7.0 to 9.5 mm, a mean cell density of 991 cells/mm 2 (range, 782-1531 cells/ mm 2 ) was reached. The greater graft size is probably responsible for a lower proportion of endothelial regeneration and thus less patients with clear grafts. In smaller grafts like those here presented, cell proliferation and cell migration are probably more easily achieved and explain the better outcomes. One of the main risk factors for corneal transplants is donor cornea contamination. The use of antibiotics in storage media remains as one of the most important safety measures in order to minimize the contamination risk in corneal preservation. In the present study, four corneas (8%) had positive microbial cultures suggestive of contamination and 2 (4%) developed a cornea abscess non-related to a positive microbial culture. These results are quite similar to other series (Gruenert et al. 2017, Li et al. 2019Ling et al. 2019). Li et al. (2019) reported 8.2% of positive cultures in 111 donor corneas, most common bacteria and fungi being Acinetobacter baumannii complex (19.8%) and Candida spp. (9.0%), respectively. Only two patients (1.8%) who received contaminated corneal buttons developed postoperative infections. Death due to cardiac disease and longer preservation time was associated with increasing contamination rates. In another series including 3306 donor corneas, (Gruenert et al. 2017) the overall contamination rate was 7.8% and, in this case, younger donor age, a death-toexplantation interval of more than 24 h, hospitalization prior to death and death caused by sepsis were associated with a higher risk of contamination. Most common microbes were Enterococci (19%), Staphylococci (10.8%) and Candida (37.4%). On the other hand, Vignola et al. (2019) tested 100 donor corneas for microbial contamination after cold storage, corneal culture and corneal deswelling at the Eye Bank of Rome. 
Tissue contamination was unexpectedly high given that 67% of the Eusol-C samples were contaminated, mainly by Staphylococcus spp. Some limitations of the method hereby described should be acknowledged. A relatively high number of patients suffered graft failure (34%), although many of them (14%) were due to worsening of baseline disease. Therefore, it might not be the perfect technique for elective surgeries, but can be a useful option in emergency tectonic grafts. Also, despite graft opacifications in some cases, this technique may circumvent the problem of corneal shortage and serve as an intermediary procedure with a definitive optical transplant with fresh tissue at a later date, avoiding having to discard unused corneas. In conclusion, long-term preservation of donor corneas in Eusol-C at − 78 °C is a viable technique to meet the needs of emergency grafts with minimal equipment and in corneal shortage scenarios. Funding The authors did not receive support from any organization for the submitted work. The authors have no relevant financial or non-financial interests to disclose. Conflict of interest The authors have no competing interests to declare that are relevant to the content of this article. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. There are no conflicts of interest to disclose.
2022-09-12T06:19:19.379Z
2022-09-10T00:00:00.000
{ "year": 2022, "sha1": "a40fcd2cf5b118484b88e12a4e988d110d55dcc5", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s10561-022-10037-1.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "075a55037bdf40e8c487755fe14ade30bf710fd8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
144063254
pes2o/s2orc
v3-fos-license
Culture, Cognition and Language in the Constitution of Reading and Writing Practices in an Adult Literacy Classroom In this article we analyze a discursive interaction between a researcher and a Youth and Adult Education student, aiming to show the meanings and uses of reading and writing constructed by him. We take as our basis for discussion the theoretical-methodological contributions of Historical-Cultural Psychology and Paulo Freire's theories, which are combined with Bakhtin's concept of dialogue. This procedure allowed us, on the one hand, to get into the other's perspective and, on the other hand, to establish relations between cognition, language and culture in order to understand the adult students' metacognitive strategies in the process of appropriating the literacy practices of school culture. Thus, we could discuss the intimate relationship between doing and knowing and the importance of school in the transition from concrete to abstract thinking and vice versa. The analysis presented in this text was produced from a survey in which, concerned with the issue of educational exclusion of many Brazilian citizens, whether children or adults, we sought, among other objectives, to understand some of the meanings and senses (Vygotsky, 1934/1993) of reading and writing constructed by students of Adult Education (AE) at the interface between school learning and the use they make of reading and writing in their everyday lives. In this analysis, we take from Vygotsky (1989, 1934/1993) the hypothesis that the intersubjective learning of reading and writing is reworked internally by the student and that the product of this internalization is qualitatively different from that produced on the interpersonal level. However, for awareness of this internal reworking, it is necessary to establish dialogue in the classroom, allowing students to speak about their own learning processes, so that, assuming a metacognitive approach, they can become aware of what and how they learned and, thus, evidence for themselves and for others the meanings they have built for the acts of reading and writing, inside and outside of school. The perspective from which we approach metacognition is similar to that assumed by Fonseca (2001), in giving prominence to the rhetorical and argumentative base of the metacognitive processes. As in that study, we return to the understanding that oriented the collected works of Middleton and Edwards (1990a), moving us closer to reflection on the plausibility of a dialogical basis of human thought (Vygotsky, 1989, 1934/1993; Wertsch, 1988), when considering metacognition as "the development of a culturally shared discourse that serves to make statements about mental processes, to argue, justify and make the others realize what we want to know" (Middleton & Edwards, 1990b, p. 45), "and that which we confess we do not know" (Fonseca, 2001, p. 349).
Hence, we take Bakhtin's conception of dialogue (1992), which we combine with Freire's (1970Freire's ( /2005)), to give support to our exercise in understanding another's perspective, and our willingness to know their thoughts, intentions and feelings about learning through reading and writing.Thus, we work with the concepts of meaning (social) and sense (personal) developed by Vygotsky (1934Vygotsky ( /1979)), based on studies by Paulhan, the author with whom he converses in the text Thought and Language, by assuming that: the meaning of a word is the sum of all the psychological events that the word awakens in our consciousness.It is a complex, fluid, dynamic which has several zones of unequal stability.The meaning is no more than one of the zones of sense, which is the most stable and accurate.A word derives its meaning from the context in which it arises; when the context changes, the meaning changes as well.The meaning remains stable through all changes of sense (p.191).To capture the relationship between culture, cognition and language and the meanings produced by adult students in literacy process, we adopt ethnography as a methodology, developing a survey, with a longitudinal character, in that we accompany an Education of Youth and Adults class, from the initial enrollment and during three years.Of the empirical material produced in this study, we have selected a discursive interaction between a researcher and an adult student, Mr. Sebastião (the use of the student's real name, and that of the others, was permitted by the students), whose analysis, based on theoretical and methodological principles of Interactional Ethnography, the theories of Paulo Freire and the Historical-Cultural Psychology, is what we intend to present this article. Theoretical and Methodological Assumptions The methodology used in this research that afforded us the production of the material analyzed here is based on Interactional Ethnography (Santa Barbara Classroom Discourse Group, 1992).We seek to develop an investigative logic based on the understanding of classroom dynamics as a socially constructed practice by members of a group.From this point of view, we understand that the participants (teachers and students) will set the standards for action and use of language, that is, they will produce the classroom culture, which is taken as a reference for engaging in activities that the classes will develop.In agreement with Collins and Green (1992), and Green and Harker (1982), we consider that the classroom functions as a culture, whose members reconstruct ways to interact with one another and with the objects in the cultural practices in which they participate.These forms of interaction among group members, in turn, lead not only to the establishment of particular forms of doing and knowing, but also to the construction of common knowledge and framework that guide the interpretation and participation in the group. From this perspective, according to Green and Wallat (1981) and Gumperz (1986), the class is constituted through instructional conversations that are part of the life of the classroom.In order to understand the meanings that Mr. 
Sebastião has built about reading and writing, we utilize a microgenetic analysis to contemplate an event that the student protagonizes and the social processes that engender such an event.Góes (2000) clarifies that such an analysis is considered as micro "to be oriented towards the minutiae indexed -resulting in the necessity of excerpts of a period that tends to be restricted" (p.15).It is, however, from the "genetic [analysis] in the sense of being historical, by focusing on the movement during processes and conditions relating past and present, trying to explore what, at present, is pregnant with future projection" (Góes, 2000, p. 15).The analysis we undertake is therefore sociogenetic "in seeking to relate the singular events with other cultural plans, social practices, circular discourse, and institutional spheres" (Góes, 2000, p. 15).This is, principally, to construct, obtain, and ascribe meaning to what is learned through use and functions of written language that are relevant and meaningful to learners.For Vygotsky, writing is a symbolic activity, like other symbolic activities (gesturing, drawing, games, etc.), involving the representation of one thing for another, the use of auxiliary signals to represent meaning (Fontana & Cruz, 1997).So, what it means to learn to read and write, in the context of the classroom, can only be examined if they are considered discursive interactions, the actions of participants and their social and singular stories (Castanheira, 2004;Gomes, Dias, & Silva, 2008;Gomes & Mortimer, 2008).This implies teaching/learning the written language and not just writing letters, as stated by Vygotsky (1934Vygotsky ( /1993)).It spotlights the need for social interaction between students and teachers and among students themselves, in order for them to construct the activity, the use, the practice, and the knowing how to read and to write as discursive processes (Smolka, 1999). Therefore, assuming this conception of learning to read and write and the logic that we set forth to give to our research, the understanding of the relationship between culture and cognition, already widely studied by many scholars like Bruner (1990Bruner ( /1997Bruner ( , 1996Bruner ( /2001)), Cole (2006), Cole and Scribner (1974), Oliveira (2009), Oliveira and Oliveira (1999), demands, from our point of view, the establishment of a link between such relationships and a theoretical perspective of the speech and language.This is because we consider language as a labor (Orlandi, 1987), a sociocultural practice of a group, its activities, and its social environment.It is therefore more than the writing of speech and the reading of transcribed data: it involves a particular perspective of discourse and social action of a particular group (Gee & Green, 1998).Spradley (1980) affirms that it is through discourse analysis that we can identify what the actors of a social group produce as cultural models, understanding, as cultural models, the way of life, the way of being, experiencing the world of the participants.Thus, the study of cultural models is done through the ethnographic perspective, guided by the analysis of discourse that, in the words of Fairclough: is not simply an analysis of the form as opposed to the analysis of the content or meaning . . . it is a dynamic and dialectical intertextual analysis such as that conceived by Bakhtin (1992), and may mediate the connection between language and social context.(1993, p. 
184).In this study we articulate, therefore, the ethnographic approach to discourse analysis and the historical-cultural theory of Vygotsky (1934Vygotsky ( /1993)), considering that the subject is social and that, through social interactions, is also constructed as a singular subject.The uniqueness or particularity is, therefore, constructed socially and by the mediation of language, culture, and others.Thus, as we place one of the research subjects (Mr. Sebastião) in the center of the analysis, we recognize that his individual construction of written language was forged in the social interactions of the classroom, that is, through the mediation of the teacher and his peers, which was provided by daily contact over two years and which constituted -and also derived -the constructions that Mr. Sebastião made of non-educational opportunities in which he deals with written records.From this perspective, cognition itself is revealed as a social process, mediated by others and by language.The cognitive activity is taken as intersubjective and discursive (Fontana, 1996).Thus, internalization of everyday and educational concepts by the adults move from the interpersonal level to the intrapersonal level, the latter, according to Vygotsky (1934Vygotsky ( /1993)), being an internal redesign of the interpersonal process.Thus, the word is the mediator of this internalization process of concepts.In the historical-cultural perspective, the concepts "are not analyzed as intrinsic categories of the mind, nor as a reflection of individual experience, but as historical products and significant mental activity mobilized in the service of communication, knowledge and problemsolving" (Fontana, 1996, p. 13). In the reflection we propose herein, we will focus on the tension between the concepts of reading and writing forged in the daily of life a roofer and that (those) propagated by the school.In this investigative exercise, we will take ownership of the discussion developed by Vygotsky (1989) on the relationship between everyday concepts and scientific concepts, to examine the clash between the concepts of reading and writing that Mr. Sebastião brings to the dialogue with the researcher and the educational concepts that circulate in the classroom.We will consider both those concepts emphasized by the pedagogical actions of the teacher (based on decoding), and those who support the research then in progress (who consider awareness of the uses and functions of writing to be critical for literacy).Thus, we work with the idea that concepts are tools for the decontextualization of the immediate reality, of change from a situational mode to an abstract mode of thinking, as well as enabling the development of metacognitive processes that promote a new departure from the world of experience (Oliveira, 1999). 
And, once again, in agreement with Vygotsky, we assume that the development of concepts involves linguistic, cognitive, affective, and sociocultural identity aspects, implying, therefore, the consideration of relationships between the concepts built into the educational environment (which in this analysis assume of role that Vygotsky attributes to the scientific concept), and the concepts built in other areas of social life (that herein behave like what he calls everyday concepts), as constituting the process of concept development.For Vygotsky (1934Vygotsky ( /1993)), the strength of everyday concepts is the weakness of scientific/educational concepts and vice versa.The everyday concepts are embedded in the personal experience of the subjects, and the scientific concepts are presented by the teachers and do not enter in everyday life, yet they require the concept of everyday experiences and of the experiences to develop.Learners, in order to assimilate the scientific concepts, remake and interpret them in their own way. In the episode analyzed herein, Mr. Sebastião, prompted by the researcher's question on learning of reading and writing in the first two years at the school, displays of his private record of a quantity of pieces of wood and measurements of the pieces necessary for the construction of a hypothetical roof.It is this attitude that instills the dialectical relationship in which educational concepts and daily life meet.The scientific/school conception of reading and writing is questioned by the everyday uses of reading and writing for Mr. Sebastião.On the other hand, it is this educational conveyed by the researcher's question that allows the development of conceptions of reading and writing in everyday life of the subjects with little schooling, bringing them to the "systematization, consciousness and the deliberate use" (Fontana, 1996, p. 22) of the acts of reading and writing.In this movement lies the possibilities of cultural and mental development of adult education learners and educators at AE, in a process of appropriation of reading and writing, as "the reflexive consciousness of the culture, the critical reconstruction of the human world, the opening of new pathways, the historical project of a common world, [and] the courage to say his word" (Freire, 1970(Freire, /2005, p. 21), p. 21). Analysis Procedures For Empirical Material The methodological procedures that we used throughout the research included: participant observation in the classroom and field notes, while we made video recordings of the classes; individual and collective interviews with the teacher and students; and the collection of artifacts produced by students and teacher. 
From these procedures, we produced the empirical material, which was transcribed, initially in the Event Map form, with the objective of discerning how time was used, by whom, for what purpose, when, where, under what conditions, with what results, and in what manner group members signaled a change in classroom activities (Castanheira, 2004).For detailed analysis and discursive research material, we used transcription of the Discursive Sequence in Message Units (Green & Wallat, 1981) that represent the smallest unit coded in the message system generated by social interactions.The message unit is the smallest unit of conversational meaning produced by speakers.Each message unit is defined in terms of its origin and form, its purpose and comprehension level and the links between them.The boundary of a message unit is linguistically marked by cues of contextualization.According to Gumperz (1986), these clues may be: verbal (intonation, pauses, and cuts of speech), nonverbal (gestures, facial expressions, miming), and co-verbal (prosodic), which can define a message or an event that want to analyze. The discursive interactions we focus on in this article took place in the class on November 13, 2007, and integrate the first event that we identify which puts the class under analysis: an interview with the students (activity coordinated by the researchers).After the interview, three other events were identified: word dictation; syllabic separation, and individual writing of phrases using dictated words (activities coordinated by the teacher). The event Interview with students was our choice to support this reflection on the meanings that students construct for learning to read and write, because the interlocutive game that it established helps us to better understand the links between culture, cognition and language.This interview lasted for 01h:00m:58s, and the transcript of discursive sequences totaled 3649 lines.However, we will examine in this article, the interactions presented between the lines 1833 and 1889, when dialogue takes place between the researcher and Mr. Sebastião. This student was very active in class, interested in learning to read and write because he wanted a driver's license.He had not attended school as a child or adolescent, nor participated in any other AE initiative.When the episode analyzed herein occurred, he had attended this school for two years.He was 45 years old, married, with one son.He came from the Minas Gerais countryside, where he worked in a farm.In the state capital city of Belo Horizonte, he practiced the trade of roofing, which provided him with the learning that led him to build concepts, woven in the experiences of designing and assembling the roof timbers.Such concepts involve notions of length and area, measurement and estimative, angle and symmetry, strength and balance, and have developed in the practical and personal experience in roof construction, but also by social transmission, through the mediation of language, learning with guidance or reported experience of teachers or fellow craftsmen. 
The researcher's conversation with the students, which took place prior to the interactions examined herein, was based on three central questions: "How did you learn to read and write here in school?What reading and writing activities facilitated learning for you?What do you read and write outside of school?"Questions that have become urgent by the end of the second year of research, because we wanted to know how each of them learned to read and write, and what this learning meant to them, in addition to the evidence of appropriation on the intersubjective plane, with which our empirical data was filled. Many students expressed themselves during the interview, revealing the relationships that were built between their schooling and working worlds.When we thought that the interview had ended, a new round of conversation began, opened by Mr. Sebastião, who asks the researcher for permission to show her a record that he used to make in his work environment and would like her to read.The conversation that takes place between the researcher, the first author of this text, and Mr. Sebastião is what we analyze here, and that discursive interaction was chosen to lead this discussion because it exposes the relevant points we set out to analyze.Relevant points or Rich Points is a concept coined by Agar (2002) in order to cover facts that become visible where there is differentiation in frame of reference.Relevant points in ethnography, then, are those in which the differences of understanding, action, interpretation and participation become marked.At these points, the practices and cultural sources that members outline becomes visible in their efforts to maintain the participation as members of a classroom group (Green, Dixon, & Zaharlic, 2001). "You See the Kind of Mind Of the People!" In the title of this subsection, which reproduces a statement delivered by Mr. Sebastião, reveals the core discussion of this article.He shows the researcher a record that he has made in the class: "5 X P X 14 6 7", to answer the question "What do you read and write outside of school?"Adopting a discursive strategy different from that of his colleagues, he uses one of his own writings to illustrate his response. In this way, in the discursive sequences in the table below, we see the sense of Mr. Sebastião's record constitutes, not in the record itself, but in the space of interaction between the student and researcher, reiterating what Orlandi affirms (1987): "the meaning of the text is not in any of the interlocutors, specifically; it is in the discursive space between the interlocutors" (p.184).Indeed, in the discursive sequences in this table, researcher and student share words and numbers that, supposedly, mean the same thing for both.However, the functions of letters and numbers in mental activity of each seem to be very different.Such activity is marked by the cultural practices of the subjects, forged in the workplace, in educational trajectories and other instances of social life. What happens, at the beginning of these interactions, exposes the researcher's total lack of knowledge regarding the content and form of the student's text, which he, in turn, anticipates as the researcher's inability, challenging her proficiency as a reader ("Take a look to check if you can/read it for me").Mr. 
Sebastião bet on the limitations that would be brought to the researcher by her not knowing about roof construction and not making an immediate connection with the record under the scope of this construction.In this context of dialogue, on one hand, the researcher uses knowledge of letters and numbers just as a beginning student of reading and writing would, who knows the letters and numbers but doesn't produce the expected meanings from them.On the other hand, it reveals the student's reflective movement in establishing discursive positions and constituting literacy practices, to bring the discussion on this record, to position themselves in relation to the uses of what they learned in school about reading and writing -thematic of the proposed interview by the researchers.In a way, the pragmatic function that the work experience confers on the written entry questions the educational view with which we conceive the form and intentions of writing: 1844 -Sometimes I'm going to make a roof / 1845of a hundred meters / 1846 -of two hundred meters/ 1847 -I write down it here by myself / 1848 -I get there and I say / 1849 -I want five pieces / 1850 -Do you see the letter P / 1851 -Fourteen by six / 1852 -By seven meters.In these interactions, the student takes ownership of a text/annotation of the measurements of the wood pieces he'll need to make a roof ("I write down it here, by myself").Making use of writing ("if I haven't write it down I was in trouble"), he reveals the inherent capacity of humans to make use of mediatory tools and signs to resolve a problem of everyday life (Vygotsky, 1934(Vygotsky, /1993)).The record here does not have a communication function with another that reads it, but is a support for the memory of someone who writes it, to later retrieve information that will be communicated orally ("I get there andI say"), to someone who will make another record ("write down to me/like that like that") with another function: to make a budget. If, in that format, Mr. Sebastião can make record for himself, to establish themes, within the school, the nature and form of the notation that he produce, he knows he will have to explain the significance of what he wrote.When available, therefore, to present and discuss this What is it He gives a paper to the researcher where is written: 5 X P X 14 6 7 The researcher tries to decode the text written by Mr. Sebastião -she reads numbers and letters separately. Mr. Sebastião decodes to the researcher what he wrote -at this point everybody acknowledges the content of the written text and the meaning it has for the student. record, the student accepts the challenge of the researcher to reflect on the uses that he makes of reading and writing, and thus assumes a metacognitive approach. The record and the opportunity that it tends to produce it mobilize different concepts related to roofing, such as: measurements of length and area, measurement units, angle, shape, relocating the material objects and produce including the possibility of creating a hypothetical situation ("sometimes I'm going to make a roof"), and generalizing ("of a hundred meters/of two hundred meters").This shift allows the mediation of writing and speaking and assigning meaning to a record in negotiating with the functions of reading and writing that circulate inside and outside of school. Still, Mr. 
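Read this way, the note fixes both the number of pieces and their dimensions, and the order it encodes can be made explicit with a small illustrative computation (ours, not the student's): 5 pieces, each 7 m long with a 14 cm × 6 cm cross-section, amount to 5 × 7 × 0.14 × 0.06 = 0.294 m³ of timber, the quantity that the saleswoman's budget, produced from the order spoken as "five pieces, fourteen by six, by seven meters", would have to price.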
Sebastião does not transcribe a text; he produces one, defining the quantity of wood pieces that the roof demands. It seems that the five pieces to which he refers would be the beams, the wood pieces that support the roof, and that the "fourteen by six" defining their thickness is not established by calculation, but through knowledge of the list of cut pieces of wood available on the market and of which ones are suitable to be used as beams. Based on that record, on the example proposed by the roofer and on the discourse produced by the student, we could produce another text: 5 pieces of 14 cm by 6 cm, 7 meters long. This record would still be insufficient to be understood by a reader outside the situation in which the discourse is produced, even though it makes explicit, in writing, the number of pieces and their measurements: one cannot know from it what these pieces are, what they are for, who needs them, how to obtain them, or who is the speaker of the words. Only when Mr. Sebastião says "sometimes I'm going to make a roof" (line 1844) is it revealed to the researcher that his note refers to roof construction material and that he relies on his speaking partner's prior knowledge that these are the wood pieces required for such a construction.

In the sequence, Mr. Sebastião takes care to establish parameters for the dimensions of the roof ("of a hundred meters"; "of two hundred meters", lines 1845 and 1846), justifying the amount, thickness and length of the pieces. This shows an exercise in checking consistency between the textual content and the situation in which the text is produced. From that point forward, we have discursive content that allows us to think about rewriting the entry made by Mr. Sebastião, moving towards the construction of a written text with a beginning, middle and end, with cohesion and coherence, especially when he continues his speech: "I want five pieces / do you see the letter P / fourteen by six / by seven meters" (lines 1849 to 1852).

It is interesting to note, based on these interactions, the constitutive role of culture in producing shared symbolic systems, ways of living and working together, and shared modes of discourse for negotiating differences of meaning and interpretation (Bruner, 1990/1997). These negotiations and this sharing, at school, can lead students to break free "from the immediate perceptual context, through the process of abstraction and generalization made possible by language" (Oliveira, 1999, p. 55). And thus, through the mediation of speech and writing, they communicate and reflect on their actions and thoughts, reformulating them. Every word (spoken or written) carries a generalization and is an act of verbal thought (Vygotsky, 1934/1993); it is meaning produced by praxis, whose discursivity, flowing from historicity, transforms the world, producing the kind of mind of the people, producing the kind of people, of sociocultural subjects, that we are.

This dynamic movement of the sociocultural constitution of subjects, in and through language, requires us to make connections between cognition, language and culture in order to better understand the metacognitive strategies of the AE students in the process of appropriating the literacy practices of the educational culture. These strategies can hardly happen without school intervention (Oliveira, 1999). In the continuation of the explanation that Mr. Sebastião provides the researcher about the situation in which that entry was produced, he carries on his metacognitive exercise. Mr.
Sebastião said that if he had not written it down he would have been in trouble (line 1868), revealing that he recognizes in writing a support for memory and for the organization of his acts and thoughts. Thinking in this way, he produces meanings for what he wrote. Through the mediation of writing, he anticipates and systematizes the demand (for wood) for himself. Through the mediation of speech, he makes the necessary lumber purchase, but does not show the saleswoman his notes. Mr. Sebastião legitimizes the seller's writing, now in the context of producing a budget, elaborated from the demand he has presented. For the saleswoman, however, it is only a matter of transcribing into a text typed on the computer the text produced orally by Mr. Sebastião: "I go to the shop and see the lady / I just tell to her, write down to me". The reworking she gives to Mr. Sebastião's request does not include the calculation of how much wood will be needed to build the roof, a definition that belongs to the roofer's scope (the attributes of a roofer include building the wooden frame of the roof, laying the tiles and sealing the roof). The student roofer displays his expertise, which is grounded not in a conceptual approach to building roofs, but in an elaboration based on the experience of many roofs built, or of constructions described by his teachers (master roofers) and by colleagues in this craft. That is what enables him to make roofs "the way it goes, round, square, any way". The appropriation of these work practices takes us back to the work of Scribner and Cole (1981) and to Bruner's (1996/2001) observation that "the mind is an extension of the hands and tools" and that "culture provides a rebus (in the classical sense) for cognitive activity" (p. 145). In the case of Mr. Sebastião, it was in the sharing of knowledge and practices, including practices of writing, that the roofer's identity and cognitive activity were constituted in and through language. And now, through the mediation of the school, this identity could be presented and reflected upon, both by Mr. Sebastião and by the researcher and the other participants in the dialogue in that classroom, producing shared meanings for what he does, what he writes, and what he does not read in the daily life of a roofer.

Mr. Sebastião said that he does all the calculations "in [his] head" and then makes the markings required to define the amount and type of wood that he needs to buy to build a roof, be it "round, square, any way". Like all who write, Mr. Sebastião drafts, in his mind, a text with meaning for himself. By externalizing his thoughts through writing, he adopts a coding system different from the standard writing system taught by the schools, but one that allows him to produce a text that tends toward the formation of connections, the establishment of relations between different concrete impressions, the union and generalization of distinct objects, and the ordering and systematization of his experience as a "roofer": "5 X P X 14 6 7". That is, he produces a kind of writing that reflects his cultural experience as a roofer and a producer of texts, and that still does not match the record that would be adopted in a writing class: "5 pieces of 7 × 0.14 × 0.06 m³". In determining the quantity and measurements of the wood pieces for the hypothetical roof construction, Mr.
Sebastião relies on tradition (based on experience) to make the roof beams (rectangular) with pieces of wood called "15 by 8" (or "14 by 6"), five in number. The length of the piece turns out to be the definition most likely to change, even though it is parameterized by the maximum span that a beam can "beat" and by the maximum length of the wood available on the market. The student demonstrates that knowledge is not stored in drawers but preserved in memory, using a coding system that creates a system of ideas. It is this that allows memory to be revived and developed by individuals in social interactions (Luria, 2008).

Concluding Remarks

In the dialogue between Mr. Sebastião and the researcher, different subjects participate and reveal their cultural affiliations, their knowledge and the relationships they are building with reading and writing. The situation experienced by Mr. Sebastião, as a being in and of the world, in making use of his graphic code, unlocks the problem to be solved: to record his purchase order for work materials, "if I haven't write it down I was in trouble". Challenged by the situation of interaction with the researcher, he reflects on the appropriation of a writing system, in a knowing act that provokes a new understanding of the challenge. And this cognitive exercise makes it possible to reflect critically on his reality and on its effect on hers. According to Freire (1970/2005), "this reflection on the situationality is a thought on the very condition of existence. A critical thought through which men find themselves in a situation" (p. 118).

In this discovery, the subjects transform themselves and transform the process of learning to read and write into a process of discovering new words and new meanings. These senses are initially constructed in personal experience (principally in the family and work environments). The school experience allows them to extend those senses, or to make them stronger or more flexible, as it brings new meaning to personal experiences. The mediation of language is crucial in this process. According to Luria (2008), "language is a fundamental part of every process of perception, memory, thinking, behavior and cognition through which we analyze our perceptions, we distinguish what is essential and what is not, and establish categories of different impressions" (p. 50), as well as organizing our inner life. Mr. Sebastião tells us part of his history as a reflection of a philosophical nature: "You see / the kind of mind of the people!" The "kind of mind of the people" is variable and, therefore, surprising, because it is constituted by the diversity of sociocultural practices in which we live.

This brings into our discussion the cultural nature of the appropriation of knowledge, as situated, distributed, interpretative and constructive, which "proceed[s] as much from the outside in as the inside out, as much from culture of mind to mind of culture" (Bruner, 1990/1997, p. 95). Thus, we inquire about the use our students, especially AE students, make of the symbolic systems of culture, of its language and modes of discourse, of forms of logical explanation and narrative, and of the patterns of mutual dependence of ordinary life (Bruner, 1990/1997). We are concerned that the school should allow them the appropriation of scientific/school concepts, but also promote their participation in a process of self-construction that mobilizes capacities for reflection and for projecting alternatives for themselves.
In the process, these AE subjects can conceive of other modes of being, acting and engaging in cultural practices (Bruner, 1990/1997), given that their own practices constitute the "content" of teaching and learning, requiring an exercise of metacognition that promotes improvements in their mental and cultural development. Development associated with literacy and education promotes the withdrawal from experience in concrete reality and the immersion of students such as Mr. Sebastião in scientific/school and cultural activities, leading them to "greater self control, self-regulation and transcendence from the world of immediate experience" (Oliveira, 1999, p. 57).

We agree with Freire (1982/1996) that men are unfinished beings, capable of taking themselves and their own activity as objects of consciousness and reflection. They are therefore aware of themselves and of the world and, to the extent that they objectively distance themselves from their activity, they are also capable of overcoming "limiting situations" through concrete action upon reality. Through their action in the world, men create the field of culture and history, and only they are beings of praxis, of reflection, of the creation of material goods, of their social institutions, their ideas and their conceptions: "You see / the kind of mind of the people!" The kind of mind which is an extension of the hands and the tools used by Mr. Sebastião, and by the researcher who conversed with him, because the work practices of both established an intimate relationship between culture and cognition, mediated by language, within and outside the school, making each of them very different historical and cultural subjects, with different "kinds of minds" and, therefore, with different opportunities for learning and development. In the formation of these differences, an intimate relationship is revealed between doing and knowing, as is the contribution of education in the transition from concrete (everyday) thought to abstract (scientific/school) thought and vice versa, between the mental plane and objective reality (Oliveira, 1999).

Table 1. The dialogue between student and researcher begins.
Table 2. The dialogue between student and researcher continues.
Ranking Dynamics of Economic Burden of Infectious Diseases as a Criterion of Effectiveness of Epidemiologic Control

Purpose: rank-based assessment of the economic impact of infectious diseases in the Russian Federation for the further analysis of the effectiveness of their prevention and for the prioritization of preventive measures.

Materials and Methods. The annual economic burden was estimated by using inflation-adjusted standard economic costs of one case of infectious disease in the Russian Federation. The data on the number of cases were obtained from the official statistical reports (Forms 1, 2) for 2009–2019. The annual burden of a specific disease was estimated by multiplying the standard cost of 1 case by the number of cases registered within a given year. The economic costs were then assessed and ranked.

Results and discussion. In 2019, the greatest economic burden was exerted by acute respiratory infections, tuberculosis, acute gastrointestinal infections, chickenpox, and HIV infection (newly diagnosed cases and deaths in 2019). The economic burden of rotavirus infection was assessed and ranked for the first time. The ranking analysis of the economic costs in 2009–2019 showed the largest decrease in the economic burden of influenza, rubella, and acute and chronic hepatitis B. At the same time, the economic burden of measles, pertussis, hemorrhagic fevers and tick-borne borreliosis demonstrated an upward trend. The possibility of using the ranking dynamics of economic burden as a performance indicator of epidemiological control has been demonstrated. In response to limited public funding of healthcare, the offered method can be used in setting priorities in decision making in the field of epidemic control.

Introduction

Amidst the healthcare reforms unfolding in Russia, economic analysis comes to the fore, being instrumental in making managerial decisions aimed at achieving maximum effect in disease prevention with limited labor and money resources. The methods of economic analysis can be applied to any healthcare interventions, including prevention techniques, to assess their economic feasibility. The epidemic control measures, which must be taken in full and on time to prevent the emergence and spread of infectious diseases, include sanitary measures within the Russian Federation, industrial control, restrictive measures (quarantine), isolation of patients with infectious diseases, disinfection measures, preventive vaccination, regular health exams, hygiene education and training, etc. Data resulting from the socio-economic impact assessment of a nosological entity are traditionally used for selecting targets in prevention programs at different levels. The method based on "standard" economic costs of 1 case was offered and adapted to Russia by I. L. Shakhanina [1–4] for the assessment of the economic impact of infectious illnesses. Weighted averages of the economic burden inflicted by one infectious disease case are quite informative and can serve the purposes of healthcare management [4]. Economic impact is estimated in accordance with GOST R 57525-2017, where "the cost of illness includes all the costs related to treatment of patients with a particular disease, both during a particular stage (period of time) and during all stages of medical care, as well as to disability and premature death". The economic impact of diseases is estimated as the burden inflicted on the economy and is measured in rubles.
In the meantime, numerous objective and subjective factors that affect the economic costs of each disease, including inflation, make it impossible to provide the accurate estimates required for a comparative assessment of the economic costs of different diseases over time. The difficulties associated with assessing the economic impact of diseases impede the choice of the most relevant and efficient preventive programs to which the available limited resources should be channeled. The purpose of this study is to perform a rank-based assessment of the economic impact of infectious diseases in the Russian Federation for the further analysis of the effectiveness of their prevention and for the prioritization of preventive measures.

Materials and Methods

Standard economic costs of one infectious disease case in Russia were used as inputs for the estimation of the annual economic burden. Most of the standard economic costs per 1 weighted average case of infectious disease are given in the publications of I. L. Shakhanina [2,4]. The economic cost of one disease case was calculated as the sum of direct and indirect costs. The direct costs included the cost of pharmaceuticals and of inpatient and outpatient care. The estimation took into account clinical forms broken down by severity. The indirect economic burden was assessed as the gross domestic product left unproduced because of labor time (days and years) lost due to an employee's illness or due to the illness of an employee's child (with the employee acting as a parent or guardian). The economic costs of a tuberculosis [5] and an HIV infection [6] case were obtained from available publications; the costs of rotavirus infection [7], pertussis [8], and chickenpox and shingles [9] were calculated in our own studies. All standard economic costs were adjusted for inflation by using data published by the Russian Federal Statistics Service for the studied time period. The data on the number of cases of infectious diseases were obtained from the publicly available statistical reports (Forms 1 and 2 of the Federal Statistical Monitoring of Infectious Morbidity in the Russian Federation) for 2009–2019. The annual cost of a single infectious disease was calculated by multiplying the standard economic cost of 1 case of the given disease by the number of cases registered in a particular year. The economic costs of infectious diseases were then ranked in descending order and assessed. This method was used for the first time by the authors of this article for the State Report on Sanitary and Epidemiologic Well-Being in 2014 and was later used regularly for the state reports of the Federal Service for Surveillance on Consumer Rights Protection and Human Well-Being in 2015–2018. This article analyzes the dynamics of the economic cost rankings for specific diseases in 2009–2019.

Results

In 2019, Russia demonstrated a 2.4% decrease in the total number of infectious and parasitic diseases as compared to 2018: the number of registered cases was 34,338,157 against 35,166,730 in 2018. The growth trend in the incidence of infectious diseases was not pronounced, while the incidence of parasitic diseases declined significantly (see Figure). The last 3 years were characterized by a steady downward trend in the incidence of infectious and parasitic diseases. The performed calculations show that the economic burden resulting from as few as 36 infectious diseases exceeded RUB 646 billion (Table 1).
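The procedure described in Materials and Methods reduces to a multiplication followed by a sort. A minimal sketch of it is shown below; the disease names, costs and case counts are illustrative placeholders, not values taken from the study.

```python
# Illustrative sketch of the burden-ranking procedure; all figures are made up.
standard_cost_rub = {            # inflation-adjusted standard cost of 1 case, RUB
    "influenza": 11_000,
    "chickenpox": 13_000,
    "acute hepatitis A": 75_000,
}
registered_cases = {             # cases registered in the reporting year (Forms 1, 2)
    "influenza": 50_000,
    "chickenpox": 800_000,
    "acute hepatitis A": 4_000,
}

# Annual burden of each disease = standard cost of 1 case x number of registered cases.
burden = {d: standard_cost_rub[d] * registered_cases[d] for d in standard_cost_rub}

# Rank diseases in descending order of economic burden (rank 1 = largest burden).
ranking = {d: rank for rank, (d, _) in
           enumerate(sorted(burden.items(), key=lambda kv: kv[1], reverse=True), start=1)}

for disease, rank in sorted(ranking.items(), key=lambda kv: kv[1]):
    print(f"{rank:>2}  {disease:<18} RUB {burden[disease]:,}")
```

It is the change of a disease's rank between two years, rather than its absolute cost, that the article uses as the indicator of control effectiveness.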
The economic burden prevented due to the decreased incidence of some infectious diseases amounted to around RUB 3.56 billion as compared to 2018. In the meantime, due to the increased number of cases of some nosological entities, the economic burden increased by more than RUB 7 billion. The absolute economic costs of infectious diseases increased by 1.4% in 2019 as compared to the previous year. When adjusted for inflation, which, as reported by the Russian Federal Statistics Service, reached 3% in 2019, the total cost of infectious diseases went down by 1.6%. The comparison of the rankings for 2009 and 2019 (Table 2) demonstrated the largest decrease in the economic burden resulting from influenza (the ranking changed from the 2nd to the 11th position), rubella (from the 25th to the 30th position), acute hepatitis B (from the 17th to the 21st position) and the HBV carrier state (in fact, chronic hepatitis B) (from the 12th to the 17th position), as well as acute hepatitis A (from the 11th to the 15th position) and shigelloses (from the 13th to the 16th position). At the same time, the economic burden resulting from measles (up from the 29th to the 19th position) and pertussis (from the 22nd to the 13th position) showed an upward trend. An upward trend was also observed in the economic impact of hemorrhagic fevers (from the 14th to the 9th position) and Lyme disease (from the 16th to the 12th position). As compared to 2018, the ranking results for 2019 showed a decrease in the economic burden of acute and chronic hepatitis C (by 1 and 2 points, respectively) and of scarlet fever, Lyme disease, diphtheria and tularemia (by 1 point). The economic burden of the following diseases moved up the ranks: hemorrhagic fevers (by 3 points), measles (by 2 points), pertussis (by 1 point).

Discussion

The offered method of ranking the costs associated with economic burden made it possible not only to compare the economic losses caused by different diseases, but also to cross-reference the burden imposed by each nosology over 10 years. The analysis of changes in the rankings of infectious diseases made it possible to assess the effectiveness of ten years of control measures taken to fight a particular disease. As expected, the largest reduction of economic burden was achieved for vaccine-controllable infectious diseases: influenza (from the 2nd to the 11th position), rubella (down by 5 points), and hepatitis B and A (down the ranks by 4 points). This fact proves once again that vaccination is the most economically efficient method of epidemic control, in general, and for rubella [10] and hepatitis A [11] and B [12], in particular. As for influenza, the reduction can also be explained by changes in the approaches to case registration: only laboratory-confirmed influenza cases were taken into account during certain time periods [13]. If a disease moves up the ranks in the economic burden ranking list, this may be indicative of problems encountered by the control measures targeted at that particular infection. For example, the increased economic burden of measles (up from the 29th to the 19th position) results from the recurrence of endemic circulation of measles virus and the increased number of unvaccinated people, who contribute to the growing number of infection sites.
The higher ranking positions for the economic burden of pertussis (up from the 22nd to the 13th position) can be explained by the improved accuracy of infection diagnosis through the use of more sensitive laboratory techniques and by the increased participation of preschool and school-aged children in the spreading of pertussis, which requires that a booster vaccination against this infection be included in the National Immunization Schedule [14,15]. The increased economic impact of hemorrhagic fevers (up from the 14th to the 9th position) and Lyme disease (up from the 16th to the 12th position) suggests not only the improved accuracy of the laboratory diagnostic techniques used for these diseases, but also signifies the need to strengthen the measures aimed at the prevention of transmissible infectious diseases amid changing climate conditions, expanding business activities within natural focal areas and a decreasing scope of disinfestation measures [16]. The "standard" weighted average economic costs of 1 disease case can later be revised and corrected, taking into account regional specifics, among other things. While previously a number of parameters were estimated for a group of diseases, for example for acute gastrointestinal infections of known etiology, further estimations will give more accurate profiles for individual infections within the given group. For example, we estimated the burden of 1 case of rotavirus infection [7]. Thus, the burden resulting from this disease was singled out of the combined economic losses caused by the group of acute gastrointestinal infections with an identified pathogen. Although the burden estimation based on "standard" weighted average economic costs of 1 disease case is clearly approximate and rather rough, it is highly important for planning and prioritizing preventive and anti-epidemic measures targeted at diseases ranking high in economic burden. The invariably high ranking of the burden caused by chickenpox (2nd–3rd among the 33 nosologies in Table 2) emphasizes the urgent need to optimize the control of this infection and to use the potential of scheduled and emergency preventive vaccination for efficient epidemic control. In response to limited public funding of healthcare, the offered method can be used in setting priorities for preventive and epidemic control measures.
F-Actin and Myosin II Accelerate Catecholamine Release from Chromaffin Granules The roles of nonmuscle myosin II and cortical actin filaments in chromaffin granule exocytosis were studied by confocal fluorescence microscopy, amperometry, and cell-attached capacitance measurements. Fluorescence imaging indicated decreased mobility of granules near the plasma membrane following inhibition of myosin II function with blebbistatin. Slower fusion pore expansion rates and longer fusion pore lifetimes were observed after inhibition of actin polymerization using cytochalasin D. Amperometric recordings revealed increased amperometric spike half-widths without change in quantal size after either myosin II inhibition or actin disruption. These results suggest that actin and myosin II facilitate release from individual chromaffin granules by accelerating dissociation of catecholamines from the intragranular matrix possibly through generation of mechanical forces. Introduction Chromaffin cells of the adrenal gland are a widely used model system to study exocytosis (Jahn et al., 2003). The kinetics of catecholamine release from single chromaffin granules has been characterized in great detail using various approaches, such as amperometry (Wightman et al., 1991), capacitance measurements (Debus and Lindau, 2000), and patch amperometry (Albillos et al., 1997). The small foot signal preceding amperometric spikes (Chow et al., 1992) is an indication of catecholamine release through the fusion pore formed between the vesicular lumen and the extracellular space, upon fusion of the secretory vesicle with the cell plasma membrane (Albillos et al., 1997). Experimental evidence has been accumulated suggesting a role for the actin cytoskeleton in regulating neuroendocrine cell exocytosis (Malacombe et al., 2006). According to the current view, a meshwork of filamentous actin (F-actin) underneath the plasma membrane acts as a physical barrier to exocytosis (Aunis and Bader, 1988) that must be disassembled for vesicles from a reserve pool to enter the release-ready pool (Vitale et al., 1995). However, this model has been challenged by recent findings that suggest the participation of molecular motors, such as myosin Va, nonmuscle myosin II, and other actin-binding proteins (Malacombe et al., 2006) in dynamic interactions with actin, supporting a more specific role for actin in the process of exocytosis. Biochemical studies have demonstrated association of myosin Va with chromaffin granules and reduction in secretion with antimyosin V antibodies in permeabilized chromaffin cells has been reported (Rosé et al., 2003). More recently, it was shown that pharmacological inhibition of myosin II and overexpression of an unphosphorylatable mutant of the regulatory light chain (RLC) of myosin II slowed down chromaffin granule movement as well as catecholamine release from single chromaffin vesicles (Neco et al., 2004(Neco et al., , 2008. However, the interaction between the actin cytoskeleton and the myosin molecular motors and how their interplay regulates secretion is unclear, specifically because myosin V but not myosin II has been found to interact with chromaffin granules (Rosé et al., 2003). If the modulation of release kinetics by myosin II is mediated by interactions with actin filaments, then inhibiting actin polymerization would be expected to also affect individual secretory events. 
To investigate the roles of actin and myosin II in chromaffin granule mobility, fusion pore properties, and catecholamine release from single vesicles, we performed confocal fluorescence microscopy, amperometry, and cell-attached capacitance recordings on single chromaffin cells following inhibition of either actin polymerization or the ATPase activity of myosin II.

Materials and Methods

Cell preparation, reagents, and solutions. Bovine chromaffin cells were prepared as previously described (Parsons et al., 1995). The buffered solution used for all the amperometric, capacitance, and fluorescence measurements contained (in mM) 140 NaCl, 5 KCl, 5 CaCl2, 1 MgCl2, 10 HEPES/NaOH, 20 glucose, pH 7.3. The pipette solution used for the capacitance recordings contained (in mM) 50 NaCl, 100 TEA-Cl, 5 KCl, 5 CaCl2, 1 MgCl2, 10 HEPES/NaOH, pH 7.3. Ionomycin was purchased from Sigma and a stock solution was prepared in ethanol. (−)-Blebbistatin, cytochalasin D, 1-(5-iodonaphthalene-1-sulfonyl)-1H-hexahydro-1,4-diazepine hydrochloride (ML-7), and latrunculin A were all purchased from Sigma, and stock solutions were prepared in dimethylsulfoxide. Immediately before the beginning of an experimental session, stock solutions were diluted in the bath solution to a final concentration of 10 μM for ionomycin, 4 μM for cytochalasin D, 10 μM for blebbistatin, 3 μM for ML-7, and 2 μM for latrunculin A. Chromaffin cells were incubated with the different inhibitors for 15 min at 37°C and 10% CO2 immediately before the recordings. A similar incubation was performed for control cells to take into account possible temperature effects on exocytotic activity.

Quantification of cortical actin. Chromaffin cells treated with the different inhibitors were fixed with 3.7% formaldehyde for 10 min after 30 min of incubation with the inhibitor at 37°C. Cells were then permeabilized with 0.1% Triton X-100 for 5 min and actin filaments were labeled with Alexa 568 phalloidin. Confocal microscopy was performed with a Leica TCS SP2 system with an acousto-optic tunable filter and a 63×, 0.9 NA water-immersion objective. The density of cortical actin was quantified at the equatorial plane by integrating the total fluorescence intensity in an annular region containing the cell plasma membrane and dividing by the annulus area. The annular width was kept constant at 1.5 μm.

Vesicle tracking. Chromaffin granules were labeled with 3 μM LysoTracker Green (Invitrogen) for 5 min before imaging. Confocal microscopy was performed using the system described above with an optical slice thickness of ~0.9 μm at the interface between the glass surface and the cell plasma membrane. Images were acquired at a frame rate of 1.67 s⁻¹ and the coordinates of individual vesicles were obtained by using the public domain program ImageJ. The vesicle tracking plug-in used was an implementation of an algorithm previously described (Sbalzarini and Koumoutsakos, 2005). Vesicles were automatically detected by the program after setting criteria for vesicle image size (circular spot of ≤500 nm diameter) and cutoff intensity (50% of the brightest particles detected). Vesicles were followed for several frames as long as they remained detected as a particle by the program. All tracks were overlaid on the original time series and visually inspected for accuracy. Only tracks longer than 10 frames were used for the analysis.
After setting the selection criteria for vesicle size, cutoff intensity, and trajectory length, 9–17 vesicles per cell were left for analysis, from which the 9–10 brightest were chosen per cell to ensure that tracking was performed on a similar number of vesicles per cell for all treatment groups. Mean squared displacements (MSD) were calculated as described (Qian et al., 1991) using a custom MATLAB (MathWorks) routine and the following equation:

MSD(nδt) = 1/(N − n) × Σ_{j=1}^{N−n} {[x(jδt + nδt) − x(jδt)]² + [y(jδt + nδt) − y(jδt)]²},

where N is the number of frames in the trajectory, n and j are positive integers with n = 1, 2, ..., (N − 1), and (x(jδt), y(jδt)) and (x(jδt + nδt), y(jδt + nδt)) are the granule's coordinates at times jδt and jδt + nδt, respectively (Manneville et al., 2003). The data were fitted to a simple diffusion model, MSD(nδt) = 4Dnδt + c, where D is the diffusion coefficient and c is a constant that accounts for the limited accuracy of the experimental set-up (Manneville et al., 2003). All experiments were performed in 35 mm Petri dishes with coverglass bottoms (0.16–0.19 mm; MatTek).

Amperometry. Amperometry was performed using custom made carbon fiber electrodes (CFEs) and a patch-clamp amplifier (EPC-8, HEKA-Elektronik). The current was low-pass filtered at 500 Hz using the built-in analog low-pass filter of the EPC-8 amplifier. The CFE was in touch with the cell surface, as verified visually by a slight deformation of the cell membrane. The CFE voltage was kept at +700 mV versus a chlorinated silver reference electrode (Ag/AgCl). A glass pipette with a ~2.5 μm tip diameter containing 10 μM ionomycin solution was positioned ~40 μm away from the cell and a 3 s, 3.5 × 10⁴ Pa puff was applied to the pipette using a pressure application system (PicoSpritzer II, Parker-Hannifin/General Valve) to stimulate exocytosis. Amperometric recordings were performed for 10 min after stimulation and the data were digitized at a 2 kHz rate by a 16-bit resolution NIDAQ board (BNC-2090, National Instruments). A digital notch filter at 60 Hz (Igor Pro, WaveMetrics) was used to remove line frequency noise. Recordings were analyzed as previously described (Mosharov and Sulzer, 2005). Spikes with amplitude <10 pA or half-width >300 ms, as well as overlapping spikes, were excluded from the analysis. The 10 pA threshold was high enough for amperometric signals to be discerned from noise and low enough for the majority of amperometric spikes in all treatment groups to be included in the data analysis. The thresholds used for identifying foot signals were 1 pA amplitude and 5 ms duration.

Cell-attached capacitance measurements. High resolution capacitance measurements were performed in the cell-attached configuration as previously described (Debus and Lindau, 2000) using a HEKA EPC-7 amplifier and patch pipettes with a nominal resistance between 1 and 2 MΩ. A dual lock-in amplifier (SR 830, Stanford Research Instruments) was used to obtain the complex admittance using a 50 mV rms amplitude, 20 kHz frequency sine wave applied to the patch pipette. The lock-in amplifier outputs were digitized at a 1 kHz rate by two 16-bit resolution channels of the NIDAQ board. Custom written software (Dernick et al., 2003) in Igor Pro converted the two orthogonal traces (real and imaginary part) into measurements of fusion pore conductance G_P (units of nS) and vesicle capacitance C_V (units of fF) as described (Debus and Lindau, 2000). From these recordings, vesicle size C_V, fusion pore lifetime, fusion pore conductance, and fusion pore expansion rate were derived as described (Dernick et al., 2003).
For this analysis, only exocytotic events with a lifetime ≥15 ms were used (Dernick et al., 2003), since shorter events were heavily affected by the lock-in low-pass filters (τ = 1 ms, 24 dB, which corresponds to a 10–90% rise time of 5 ms) and their conductance properties are not reliably determined. The fusion pore initial expansion rate was calculated as the slope of a linear fit to the initial 15 ms segment of the conductance trace. The fusion pore lifetime was the time from fusion pore opening until the fusion pore conductance value exceeded 2 nS (Dernick et al., 2003).

Statistical analysis. All reported signal parameters, amperometric (quantal size, half-width, spike amplitude, mean foot signal amplitude, and foot duration) and patch-capacitance (vesicle size, fusion pore initial and average conductance, fusion pore initial expansion rate, and fusion pore lifetime), were statistically analyzed by taking the median values of the events from individual cells and subsequently averaging these values per treatment group. Therefore, data are represented as mean ± SEM, where n is the number of cells in each treatment group. Differences were considered statistically significant for p < 0.05 as assessed by Student's unpaired t test for both the amperometric and patch-capacitance data. All experiments were performed at room temperature on day 1 after cell isolation. The data came from two and four different cell preparations for amperometry and capacitance, respectively.

Results

To investigate the roles of actin and nonmuscle myosin II in exocytosis of chromaffin granules we used cytochalasin D and latrunculin A, which inhibit actin polymerization, blebbistatin, a specific inhibitor of nonmuscle myosin II (Straight et al., 2003), and ML-7, an inhibitor of myosin light chain kinase (MLCK).

Blebbistatin treatment decreases vesicular motion

Myosin II and the actin cytoskeleton have been implicated in vesicular motion (Neco et al., 2004). We characterized vesicular movement in unstimulated cells using confocal microscopy focused on the actin-rich cortical region of the cell (Fig. 1A). For this purpose, we tracked the motion of 94 vesicles from 9 untreated cells and 92 vesicles from 10 cells treated with blebbistatin. The x- and y-coordinates of each vesicle were tracked in a series of images (Fig. 1B) and converted into the mean squared displacement (MSD) for that particular vesicle. These were then averaged for all the cells per treatment group and plotted versus time (Fig. 1C). A linear fit to the data (Fig. 1C) revealed the apparent diffusion coefficient for the vesicles in each treatment group. The resulting apparent diffusion coefficients were 2.07 ± 0.06 × 10⁻³ μm²/s for control cells and 6.8 ± 0.8 × 10⁻⁴ μm²/s for blebbistatin-treated cells, thus approximately threefold lower in cells where the ATPase activity of myosin II was specifically inhibited compared with control cells. Treatment with cytochalasin D or blebbistatin did not affect intracellular calcium concentrations or protein kinase C distribution (supplemental Figs. 1, 2, available at www.jneurosci.org as supplemental material), indicating that the changes in vesicle mobility in blebbistatin-treated cells were specifically due to inhibition of myosin II and not a consequence of changes in intracellular calcium or protein kinase C activation, which may also affect vesicular motion, cortical actin distribution, and exocytosis (Cuchillo-Ibáñez et al., 2004).
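The MSD analysis and the linear fit used to extract the apparent diffusion coefficient can be sketched in a few lines. The snippet below is an illustration only: it generates a synthetic random walk instead of measured granule coordinates, and the frame interval and diffusion coefficient are arbitrary values, not those of the recordings (the published analysis used a custom MATLAB routine).

```python
import numpy as np

def msd(track, dt):
    """MSD(n*dt) of a 2D trajectory given as an (N, 2) array of x, y coordinates."""
    N = len(track)
    lags = np.arange(1, N)
    out = np.empty(N - 1)
    for i, n in enumerate(lags):
        disp = track[n:] - track[:-n]                # displacements over a lag of n frames
        out[i] = np.mean(np.sum(disp ** 2, axis=1))  # average over all start frames j
    return lags * dt, out

# Synthetic trajectory standing in for a tracked granule (illustrative values).
rng = np.random.default_rng(0)
dt, D_true = 0.6, 2.0e-3                             # frame interval (s), D (um^2/s)
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(200, 2))
track = np.cumsum(steps, axis=0)

t, m = msd(track, dt)
# Fit the simple diffusion model MSD = 4*D*t + c over the first lag times.
slope, c = np.polyfit(t[:10], m[:10], 1)
print(f"apparent diffusion coefficient ~ {slope / 4:.2e} um^2/s")
```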
Cytochalasin D but not blebbistatin affects cortical actin distribution To test whether the decreased mobility following inhibition of myosin II is a consequence of cortical actin destabilization, fluorescence microscopy was used to determine how cytochalasin D and blebbistatin treatment affected the distribution of cortical actin fluorescence. As expected, cytochalasin D-treated cells, showed disruption of cortical actin in contrast to blebbistatin-treated cells, which showed a similar distribution as control cells (Fig. 2 A). Quantitative analysis (Fig. 2 B) showed a 44% decrease in cortical actin fluorescence intensity ( p Ͻ 0.001) in cytochalasin D-treated cells, while blebbistatin-treated cells showed no significant difference ( p Ͼ 0.35) when compared with control cells (Fig. 2C). These results indicate that the observed changes in vesicle mobility as well as the observed changes in release event properties (see below) are a direct consequence of myosin II inhibition in the absence of cortical actin disintegration. Calcium influx stimulated with ionomycin also produced a decrease in cortical actin as expected (Cuchillo-Ibáñez et al., 2004), which was similar to that produced by cytochalasin D. Combined application of cytochalasin D and ionomycin did not produce a further decrease indicating that the loss of cortical actin reaches a limiting threshold (supplemental Fig. 3, available at www.jneurosci. org as supplemental material). Blebbistatin treatment did not affect the distribution of myosin II (supplemental Fig. 4, available at www.jneurosci.org as supplemental material), suggesting that the observed effects were not due to changes in the intracellular localization of myosin II. Interestingly, the peripheral localization was also retained in cytochalasin D-treated cells indicating that the peripheral myosin II localization is not immediately lost upon disintegration of cortical actin. Inhibition of myosin II slows individual release events The kinetics of catecholamine release from single vesicles was determined by carbon fiber amperometry. Figure 3A shows a typical recording from a chromaffin cell under control conditions. To characterize the average release kinetics an average amperometric spike shape was constructed (Fig. 3B). All amperometric signals detected from a single cell with amplitude Ͼ10 pA and half-width Ͻ300 ms were normalized to their peak amplitude, aligned in time at the point of their maximum slope (occurring shortly before the spike maximum) and averaged, providing the average spike shape for this cell. Subsequently, the average spikes from each cell in a treatment group were again averaged in the same way to obtain the average spike shapes for the different groups. Finally, the aver- aged spikes for the three groups were normalized such that they all had the same quantal size, consistent with the statistical analysis of integrated amperometric charge (see below). Both blebbistatin-and cytochalasin D-treated cells showed reduced spike amplitude with increased half-width. To determine the statistical significance of the changes in amperometric spike properties five parameters were determined for each spike: quantal size, amperometric spike half-width, peak amplitude, mean foot current amplitude, and foot signal duration (Fig. 4 A) (Mosharov and Sulzer, 2005). 
When the average spike half-width was determined for each cell and the mean for all cells in a treatment group was calculated, values of 12.7 Ϯ 1.0 ms (control), 18.6 Ϯ 1.2 ms (blebbistatin), and 24.7 Ϯ 2.2 ms (cytochalasin D) were obtained, in excellent agreement with the values from the averaged spikes (Fig. 3B). A more robust method avoiding spurious artifacts due to outliers is to determine the median value for each spike parameter for each cell and subsequently calculate the mean of these median values for each treatment group (Fig. 4 B-G) (Mosharov and Sulzer, 2005). The half-widths determined with this method after normalizing to the control values were 100 Ϯ 7.9% (control, n ϭ 19 cells), 153.5 Ϯ 8.9% (blebbistatin, n ϭ 18 cells), and 224.8 Ϯ 22.8% (cytochalasin D, n ϭ 18 cells), confirming that the observed increase in spike half-width due to inhibition of myosin II function or due to inhibition of actin polymerization are highly significant (Fig. 4 B). Consistent with these results, the inhibition of MLCK with the inhibitor ML-7 also increased the amperometric half-widths to a similar value as blebbistatin (145.3 Ϯ 12.2%, p Ͻ 0.01) (Fig. 4 B). The increases in amperometric spike halfwidth by these treatments were accompanied by decreases in amperometric spike amplitude (Fig. 4C) with no significant changes in quantal size (Fig. 4 D). To test whether the effects of cytochalasin D were specifically due to inhibition of actin polymerization, amperometric recordings were also performed in cells treated with latrunculin A, which also hinders actin polymerization. Latrunculin A treatment produced an increase of amperometric spike half-width and decrease of spike peak amplitude without affecting quantal size (Fig. 4 B-D), indistinguishable from the effects of cytochalasin D, indicating that these changes are specific consequences of actin depolymerization. The number of exocytotic events recorded within 10 min after stimulation was unchanged when myosin II was inhibited (Fig. 4 E). In contrast, in cells treated with ML-7 the number of events was significantly reduced to 33% of control (Fig. 4 E) ( p Ͻ 0.01). This suggests that inhibition of MLCK by ML-7 may affect other molecules independent of myosin II, consistent with recent evidence (Xu et al., 2008), leading to the observed reduction in exocytotic events. However, in contrast to blebbistatin and ML-7, cytochalasin D or latrunculin A treatment increased the number of spikes by ϳ66% compared with control cells (Fig. 4 E), in good agreement with the proposed role of actin as a barrier to exocytosis (Aunis and Bader, 1988;Rosé et al., 2003). Inhibition of actin polymerization but not myosin II affects the early fusion pore The foot signal preceding single amperometric spikes (Chow et al., 1992) has attracted significant attention as it is directly related to the early fusion pore formed during chromaffin granule exocytosis (Albillos et al., 1997;Dernick et al., 2003;Gong et al., 2007). Neither inhibition of myosin II by blebbistatin or ML-7 nor inhibition of actin polymerization by cytochalasin D and latrunculin A had an effect on the mean foot current amplitude ( Fig. 4 F), suggesting that neither myosin II nor actin affect the structure of the early fusion pore. In contrast, the average foot signal duration was significantly increased (Fig. 4G) by ϳ65% in cytochalasin D ( p Ͻ 0.0001)-and latrunculin A ( p Ͻ 0.05)treated cells, but was unchanged by inhibition of myosin II with blebbistatin or ML-7 (Fig. 4G). 
Foot signal duration could be reliably determined only for foot signals with a duration ≥5 ms and an amplitude ≥1 pA. The percentage of amperometric spikes that had a foot signal in this range was similar for control and blebbistatin-treated cells (33% and 35%, respectively), but was increased to 45% for cytochalasin D-treated cells, consistent with the overall increase in foot duration.

Alteration of early fusion pore properties by cytochalasin D

Time-resolved cell-attached patch-clamp capacitance measurements provide a more direct assessment of individual fusion pore properties. The data analysis (Fig. 5A) reveals the capacitance C_V of the fused vesicle, the initial and average fusion pore conductance, the fusion pore lifetime and the fusion pore expansion rate (Lindau, 1991; Debus and Lindau, 2000). Vesicle capacitance (Fig. 5B), as well as initial and average fusion pore conductance (Fig. 5C,D), were unchanged in cells treated with blebbistatin or cytochalasin D. Thus, inhibiting myosin II function or actin polymerization has no effect on vesicle size, vesicular catecholamine concentration or early fusion pore structure. However, inhibiting actin polymerization by cytochalasin D significantly prolonged the fusion pore lifetime (Fig. 5E) and reduced the fusion pore expansion rate (Fig. 5F), explaining the observed increase in amperometric foot duration with unchanged foot current amplitude in amperometric recordings from cytochalasin D-treated cells (Fig. 4F,G). These data include only detected fusion pores with a lifetime ≥15 ms. The percentage of fusion pores with a lifetime ≥15 ms was similar for control and blebbistatin-treated cells (21% and 30%, respectively), but was increased to 51% for cytochalasin D-treated cells.

[Figure 3. A, A typical recording from an untreated chromaffin cell stimulated with ionomycin. B, Spikes from each cell were normalized to their peak amplitude, aligned in time at the point of their maximum slope (occurring shortly before the spike maximum) and averaged, providing the average spike shape for this cell. The average spikes from each cell in a treatment group were again averaged in the same way to obtain the average spike shapes for control (CT, n = 19 cells, 786 spikes), blebbistatin-treated (BL, n = 18 cells, 633 spikes), and cytochalasin D-treated (CD, n = 18 cells, 1229 spikes) cells. Last, the averaged spikes were normalized to the same quantal size. The half-widths of these averaged spikes were 11.9 ms for control, 18.7 ms for blebbistatin, and 26.0 ms for cytochalasin D-treated cells.]

Distribution of foot signal durations and fusion pore lifetimes

To better characterize the fusion pore kinetics, we constructed survival curves for the detected amperometric foot signal durations (Fig. 6A) and fusion pore lifetimes (Fig. 6B) for control, blebbistatin-treated, and cytochalasin D-treated cells. The survival curves for blebbistatin-treated cells are very similar to those for control cells, whereas increased foot duration and fusion pore lifetime are evident for cytochalasin D-treated cells. Accordingly, single exponential fits provided similar time constants for foot duration and fusion pore lifetimes in control and blebbistatin-treated cells, but approximately twice as long for cytochalasin D-treated cells (Table 1). However, single exponential fits failed to reproduce the survival curves accurately, as is particularly evident in the logarithmic plots (Fig. 6C,D).
This indicates that the kinetics is not homogeneous but reflects an inhomogeneous population with a distribution of rate constants. A distribution of activation energies [or of log(k)] leads to kinetics that is better described by a power law function (Austin et al., 1973, 1975):

S(t) = (1 + kt/n)^(−n),

where k is the rate constant corresponding to the peak of the distribution and n corresponds to the width of the distribution (a small n indicates a broad distribution). The power law fits reproduced the data well (Fig. 6, dotted lines). Table 1 provides the parameters returned from the fitting procedure for each treatment group. Again, the parameters for blebbistatin-treated cells are very similar to those for control cells. For cytochalasin D-treated cells, the main difference is a much smaller parameter n, which indicates a much broader distribution of rate constants, extending to much longer foot durations and fusion pore lifetimes when actin polymerization is inhibited. The fraction of amperometric spikes with a detectable foot signal and of fusion pores measured by capacitance measurements with a lifetime ≥15 ms was increased in cytochalasin D-treated cells compared with control and blebbistatin-treated cells (Table 2). This is consistent with the prolonged foot signal duration (Fig. 4G) and the increased fusion pore lifetime in cytochalasin D-treated cells (Fig. 5E), which should increase the fraction of foot signals or fusion pores longer than the detection limits of 5 and 15 ms, respectively.

[Figure legend fragment: Differences between treatment groups were tested for statistical significance by Student's unpaired t test and are indicated by single (p < 0.05), double (p < 0.01), or triple (p < 0.001) asterisks.]

[Figure 5. A, The real (blue trace) and imaginary (red trace) parts of the complex admittance are converted into fusion pore conductance G_P (green dots) and vesicle capacitance C_V (black dots). The fusion pore initial and average conductance are depicted by the dashed horizontal black lines, while the fusion pore expansion rate is the slope of the linear fit to the initial 15 ms segment of the conductance trace (solid black line). The fusion pore lifetime is the time for the conductance to reach 2 nS from its initial value. B–F, Vesicle step size (B), fusion pore initial conductance (C), fusion pore average conductance (D), fusion pore lifetime (E), and fusion pore initial expansion rate (F) for control (CT, n = 7 cells, 86 fusion pores), blebbistatin-treated (BL, n = 8 cells, 82 fusion pores), and cytochalasin D-treated (CD, n = 8 cells, 78 fusion pores) cells. Data are represented as mean ± SEM, where n is the number of cells. Statistically significant differences (p < 0.05) are indicated by single asterisks.]

The fitted data sets included only foot signals with a duration ≥5 ms and fusion pores with a lifetime ≥15 ms, since shorter durations were affected by the respective low-pass filters used and could thus not be reliably quantified. Table 2 compares the fraction of amperometric spikes and fusion events that fulfilled these criteria with the fraction of events predicted by the power law fits. While the fractions of fusion pore lifetimes ≥15 ms are in rather good agreement with the predictions from the power law fits, the fraction of amperometric spikes with a detectable foot signal is much lower than predicted by the power law fit. However, this is not unexpected, since foot signals may escape detection not only because of short duration but also because of small amplitude.
The mean foot current amplitude calculated for all detected foot signals (not averaged per cell) was 3.5 Ϯ 2.8 pA (mean Ϯ SD) for control cells and similar for drug-treated cells. Since the detection limit was 1 pA, a significant fraction of foot signals with duration Ͼ5 ms will not be detected due to small amplitude. Reduced vesicular motion following inhibition of myosin II activity Inhibition of myosin II reduced chromaffin granule mobility, consistent with previous reports (Lang et al., 2000;Neco et al., 2004). In contrast to cytochalasin D treatment, inhibition of myosin II did not lead to reduction of cortical actin filaments, indicating that the role of myosin II in chromaffin vesicle motion near the cell surface is not mediated by disintegration of the actin-rich cortex. Although myosin motor function is highly regulated (Somlyo and Somlyo, 2003), myosin activity at resting calcium concentration appears to contribute to vesicle mobility. Frequency of exocytotic events Inhibition of actin polymerization by cytochalasin D or latrunculin A led to a 66% increase in the number of exocytotic spikes consistent with the role of actin as a physical barrier to exocytosis (Aunis and Bader, 1988). Blebbistatin treatment of chromaffin cells, however, did not result in a change of the number of measured exocytotic spikes, consistent with the presence of normal cortical actin filaments in these cells. In contrast, the nonspecific MLCK inhibitor ML-7 reduced the number of exocytotic events, suggesting that ML-7 inhibits exocytosis via a mechanism that may not be mediated by inhibition of nonmuscle myosin II (Tokuoka and Goda, 2006). Inhibition of myosin II function or actin polymerization slows catecholamine release during amperometric spike phase Inhibition of myosin II increased the average amperometric spike half-width, consistent with experiments using chromaffin cells overexpressing an unphosphorylatable mutant of the myosin II RLC (Neco et al., 2004). Inhibition of actin polymerization by cytochalasin D broadened the amperometric spikes even more than blebbista-tin. Myosin II could thus exert its role via interaction with or independent of actin. It has been suggested that tension in the vesicle membrane drives fusion pore expansion (Monck et al., 1991) and myosin II and actin may contribute to increased membrane tension helping to expand the fusion pore. It has so far not been possible to measure directly the fusion pore conductance in chromaffin cells during the amperometric spike. However, fusion pore dynamics can be resolved for the early fusion pore that gives rise to the amperometric foot signal. If F-actin and myosin II accelerate fusion pore expansion, we would expect that this should be reflected in the dynamics of the early fusion pore. Modulation of early fusion pore expansion by F-actin but not myosin II activity Indeed, inhibition of actin polymerization resulted in prolonged fusion pore lifetimes indicated by increased amperometric foot signal durations and increased narrow fusion pore lifetimes determined by cell-attached capacitance measurements. The fusion pore expansion rate was reduced while the initial and average fusion pore conductance as well as the average foot signal amplitude were unchanged. We conclude that cortical actin does not determine the structure of the early fusion pore, but facilitates the process of fusion pore expansion. 
Survival curves constructed for amperometric foot signal durations and fusion pore lifetimes were well fitted with power laws, as expected for processes that reflect distributed kinetics based on a distribution of activation energies (Austin et al., 1973, 1975; Lindau and Rüppel, 1983). Fusion pore expansion is modulated by many factors including Ca2+ concentration (Fernández-Chacón and Alvarez de Toledo, 1995; Hartmann and Lindau, 1995) and PKC (Scepek et al., 1998), such that a kinetic heterogeneity is not unexpected. Inhibition of actin polymerization broadened the kinetic distribution toward longer fusion pore lifetimes, providing the first direct evidence that actin contributes to fusion pore expansion in chromaffin cells. Inhibition of myosin II activity, on the other hand, altered neither the early fusion pore structure, nor the fusion pore expansion rate or the early fusion pore lifetime, suggesting that myosin II is not mediating the role of actin during the early fusion pore. In contrast to our results, the expansion of the early fusion pore was slower in chromaffin cells overexpressing an inactive form of myosin II RLC (Neco et al., 2008). One possible explanation for this apparent discrepancy would be that blebbistatin inhibition of myosin II may be incomplete and that the residual myosin II activity in blebbistatin-treated cells is sufficient to maintain normal fusion pore expansion kinetics. However, alternative explanations are at least equally possible. In our experiments blebbistatin treatment was performed for 15 min before the experiment. In contrast, cells overexpressing the inactive form of myosin II RLC were used 1 or more days after infection. Blebbistatin inhibition thus reveals the immediate consequences of myosin II inhibition and presumably its direct function in the release event. On the other hand, overexpression experiments may in addition reveal longer term consequences. Clearly, vesicle mobility is affected by myosin II inhibition, and the changes in early fusion pore expansion may reflect longer term consequences of myosin II inhibition such as changes in vesicle maturation, docking, or priming. The two experimental approaches are thus not directly comparable and provide complementary information. Despite normal early fusion pore dynamics, amperometric spike half-width was significantly increased in blebbistatin-treated cells, suggesting that the increased amperometric spike half-width may not be due to slower fusion pore expansion. It was suggested that dissociation from the granular matrix is the major process determining amperometric spike half-width (Jankowski et al., 1993; Wightman et al., 2002). The amperometric spike time course shows no strong correlation with quantal size (Schroeder et al., 1996), as would be expected for a rate-limiting fusion pore. The time course of release of different granular contents from cytochalasin D-treated PC-12 cells was also not correlated with the size of the particular compound, as would be expected for fusion pore limited release (Felmy, 2007). These results suggest that association with and dissociation from the intragranular matrix determine the kinetics of release. Additional support for this view came from a recent study showing that release events from chromogranin A null mice exhibit reduced amperometric spike half-widths (Montesinos et al., 2008).
Possible mechanisms for F-actin and myosin II function in exocytosis
The relaxation of membrane tension exerted by actin filaments on the cell plasma membrane in cytochalasin D-treated cells may be responsible for slower fusion pore expansion. In contrast, inhibition of myosin II had no detectable effect on early fusion pore expansion, suggesting that actin mediates fusion pore expansion by its interactions with other proteins (Dillon and Goda, 2005; Cingolani and Goda, 2008). Myosin II, however, contributes to accelerating release during the amperometric spike. How can interactions of the extragranular actin and nonmuscle myosin II modulate catecholamine release kinetics from chromaffin granules? Our results suggest that mechanical forces (tension) on the granules may promote dissociation from the matrix and thus expel catecholamines. It has been proposed that in Xenopus eggs, cortical granules are compressed by F-actin during exocytosis, contributing to the driving force for granules to secrete their contents (Sokac et al., 2003). Nonmuscle myosin II may exert its mechanical function on chromaffin granules by its ability to bind and contract filamentous actin. Release from the matrix appears to be governed by a low effective diffusion coefficient within the matrix (Amatore et al., 1999). The change in amperometric spike width might be a consequence of a changed effective diffusion coefficient that could result from mechanical forces exerted on the matrix affecting its catecholamine binding interactions. Alternatively, it could be a consequence of a changed rate at which the surface of the granule matrix is exposed to the extracellular medium (Amatore et al., 1999) or of the size of the exposed matrix area during the rapid release phase giving rise to the amperometric spike. However, considering that the amperometric spike time course appears to be independent of vesicle size, the latter mechanism would require that the rate at which the membrane surrounding the vesicle is unwrapped or the finally exposed area is increased for larger vesicles. In either case, the role of myosin II is likely to exert mechanical forces on the granule by matrix compression or by expelling the matrix more rapidly, thus facilitating release by exposing the whole granule core to the extracellular solution and accelerating dissociation from the granular matrix. The interactions between the vesicles and the cortical actin cytoskeleton could be mediated by myosin V, which has been localized to chromaffin granules (Rosé et al., 2003), providing a possible link between an actin-myosin II scaffold and the secretory granule. However, interactions of myosin II with chromaffin granules should not be ruled out. The interaction of the secretory granules with actin filaments appears to be mediated by localized adaptor molecules, such as N-Wasp and ARP2/3 (Gasman et al., 2004) or Rab27A and MyRip (Desnos et al., 2003). One possibility is that upon stimulation the actin cortex redistributes to allow granules to collapse (Doreian et al., 2008).

Table 1. Fit parameters returned for the single exponential and power law (1/k, n) fits to the foot signal duration (amperometry) and fusion pore lifetime (capacitance) survival curves of each treatment group (columns grouped under Amperometry and Capacitance).
However, residual polymerized actin at the immediate fusion site may persist due to localized accessory molecules, allowing actin-regulating proteins such as myosin II to exert control on granule fusion, consistent with the unchanged localization of myosin II in cytochalasin D- or ionomycin-treated cells where cortical actin is dramatically reduced. It thus appears possible that myosin II may dynamically interact with actin and secretory granules via currently unidentified adaptor proteins.
ECG calibration signal database construction based on IEC 60601-2-25 using MATLAB ECG machines should be calibrated and tested to assure their accuracy. The IEC 60601-2-25 standard describes signals for calibrating ECG amplitudes and frequencies. The problem is that this standard describes neither the formulas of the calibration signals nor the complete database of these signals. The aim of this study was to obtain a database of ECG calibration signals for testing based on IEC 60601-2-25 clause 201.12.1.101. The data were constructed from a series of sine functions in Matlab software to simulate the P, Q, R, S, and T segments. The data were compared visually and statistically with the data from the commercial CTS database. Data were constructed for 3 different leads of 12 ECG calibration signals. Four ECG calibration signals with ST-segment elevation or depression were excluded from this study. This study demonstrated that the constructed ECG calibration signals were slightly different visually and statistically differed in some of the S waves and most of the T waves. These data can be used by designers or manufacturers, but testing laboratories are recommended to use a commercial product.

Introduction
An electrocardiograph (ECG) is a medical device used for recording heart activity by measuring the electrical signals produced by the heart muscle [1,2]. An ECG signal can provide a great range of information, particularly about the heart's structure and performance (rate, rhythm, size, position), heart muscle damage, cardiac drug influences, and implanted pacemaker performance [3]. The complex waveform signal is captured using electrodes attached to the skin. The amplitude of the signal is very weak, specifically in the range of 0.2-5 mV [2,4]. However, in practice, strong background noise from the human body frequently contaminates the heart signal [2,4]. An ideal ECG can differentiate the ECG signal from background noise by having a high common-mode rejection ratio and high gain [2]. To ensure that an ECG operates properly, it must be analysed and calibrated regularly in order to maintain its performance. An uncalibrated ECG may lead to misreading and, furthermore, to mistaken treatment of the patient because of misdiagnosis [5]. One important ECG test is the essential performance and accuracy test of medical electrical equipment in accordance with IEC 60601-2-25:2011, especially sub-clause 201.12.1.101. The aim of the test is to measure the accuracy of the amplitude and frequency of the ECG signal by comparing the ECG signal with the CTS (Conformance Testing Services) database. The CTS database is a set of artificial ECG waveforms used to automatically test the amplitude and interval measurements during ECG testing. However, the problem is that the standard does not clearly describe the formula of the calibration signals nor the complete database of these signals; IEC 60601-2-25 only provides several reference values. Therefore, the purpose of this study is to establish a complete database of ECG calibration signals using the sine function in Matlab software and the reference values from the standard. It is hoped that this method can be used by manufacturers to verify their products or by other laboratories to help them conduct the test.

Figure 1. Nomenclature of calibration ECGs.

An ECG signal consists of a P wave as atrial depolarization, a QRS complex as ventricular depolarization, and a T wave as ventricular repolarization, as shown in Figure 1.
The X-axis and Y-axis of the electrocardiogram represent time (in seconds) and amplitude (in millivolts), respectively. Each section of P, Q, R, S, and T has intervals and amplitudes that are adjusted to the requirements in the standard (Annex HH in IEC 60601-2-25). Each segment was constructed with a sinusoidal function, y(t) = A sin(2πft + φ) (Equation 1), where A is the amplitude, f is the frequency determined by the segment duration, and φ is the phase that offsets the segment from the previous one. The segments were combined into a full signal and then plotted in Matlab, and the Matlab function was run to obtain the signal data. The signal data from Matlab were compared with the data from the CTS database, both visually and statistically. The significance of the difference between the CTS database and the Matlab construction was analysed using the Mann-Whitney statistical method. The constructed calibration signals, listed in Table 1, are differentiated by peak voltage, heart rate, and QRS-form. These signals were chosen because they do not have an elevated or depressed ST segment. The peak voltage is the maximum or minimum voltage value of the signal, while the heart rate is the number of heart contractions (heartbeats) per minute. A normal heart rate is 60 beats per minute; a heart rate of 120/min means there are two signals per second, or in other words one signal takes only half a second. QRS-form is the shape of the QRS-complex segment.

Results and Discussion
The graphs show that the CTS database and Matlab signals have the same pattern for all calibration signals. However, on closer inspection, the CTS database peaks are narrower than the Matlab peaks, even though they have the same interval and peak value. These visual results are consistent with the statistical results. The Mann-Whitney results show that the level of difference between the CTS database and Matlab for the whole signal has a probability above 0.05, which means they are not significantly different. In addition, the data were also compared for each segment using the Mann-Whitney method, as shown in Table 2. It is demonstrated that the CTS database and Matlab are significantly different for some calibration signals in most of the T waves and some of the S waves. Further study is still needed to create a more general function so that ST-segment elevation or depression can also be represented by the function. Because the data compiled in Matlab have the same segment durations and amplitudes, they can be used by designers or manufacturers to verify their products. However, testing laboratories are recommended to use a commercial product (the CTS database) for fully accurate data.

Conclusion
The sine function in Matlab can be used to establish an ECG database, but the waveforms are slightly different and cannot be used for the waves that have ST-segment elevation or depression. These data can be used by designers or manufacturers, but testing laboratories are recommended to use a commercial product.
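To make the construction concrete, the sketch below assembles one beat from half-sine segments and runs a Mann-Whitney comparison against a reference trace. It is a minimal sketch in Python rather than the Matlab implementation used in the study, and the amplitudes, durations, and the reference trace are illustrative placeholders, not the Annex HH values or the CTS data.

```python
# Minimal sketch of the segment-wise construction described above: each wave
# (P, Q, R, S, T) is a half-sine y(t) = A*sin(2*pi*f*t), with f chosen so that
# one half-period spans the segment duration.  All values are placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

FS = 1000  # sampling rate in Hz

def half_sine_segment(amplitude_mv, duration_s):
    """One wave as a positive or negative half-sine over its duration."""
    t = np.arange(0, duration_s, 1.0 / FS)
    f = 1.0 / (2.0 * duration_s)          # half a period fits the segment
    return amplitude_mv * np.sin(2.0 * np.pi * f * t)

def flat_segment(duration_s):
    return np.zeros(int(duration_s * FS))

# Illustrative P-QRS-T beat: (amplitude in mV, duration in s) per segment.
beat = np.concatenate([
    half_sine_segment(0.15, 0.08),   # P
    flat_segment(0.06),              # PQ segment
    half_sine_segment(-0.10, 0.02),  # Q
    half_sine_segment(1.00, 0.04),   # R
    half_sine_segment(-0.20, 0.02),  # S
    flat_segment(0.10),              # ST segment
    half_sine_segment(0.30, 0.16),   # T
])

# Hypothetical comparison against a reference waveform (stand-in for CTS data).
reference = beat + np.random.default_rng(0).normal(0, 0.005, beat.size)
stat, p = mannwhitneyu(beat, reference, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.3f} (p > 0.05 -> no significant difference)")
```

A per-segment comparison, as reported in Table 2, would simply apply the same test to each wave's samples separately.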
Characterization of Cell Cycle-Related Competing Endogenous RNAs Using Robust Rank Aggregation as Prognostic Biomarker in Lung Adenocarcinoma Lung adenocarcinoma (LUAD), one of the most common pathological subtypes of lung cancer, has been of concern because it is the leading cause of cancer-related deaths. Because of its poor prognosis, this study performed an integrative analysis to screen crucial RNAs and discuss their cross-talks in order to identify a prognostic biomarker. The messenger RNA (mRNA) profiles were primarily screened using robust rank aggregation (RRA) across several datasets, and these deregulated genes showed important roles in multiple biological pathways, especially the cell cycle and oocyte meiosis. Then, 31 candidate genes were obtained by integrating 12 algorithms, and 16 hub genes (containing homologous genes) were further screened according to their potential prognostic values. These hub genes were used to search for their regulators and biologically related microRNAs (miRNAs). In this way, 10 miRNAs were identified as candidate small RNAs associated with LUAD, and miRNA-related long non-coding RNAs (lncRNAs) were then obtained. In-depth analysis showed that 4 hub mRNAs, 2 miRNAs, and 2 lncRNAs were potential crucial RNAs in the occurrence and development of cancer, and a competing endogenous RNA (ceRNA) network was then constructed. Finally, we identified the cell cycle-related CCNA2/MKI67/KIF11:miR-30a-5p:VPS9D1-AS1 axis as a prognostic biomarker, which revealed RNA cross-talks among mRNAs and non-coding RNAs (ncRNAs), especially at the level of multiple isomiRs that further complicate the coding–non-coding RNA regulatory network. Our findings provide insight into the complex cross-talks among diverse RNAs, particularly those involving isomiRs, which will enrich our understanding of mRNA–ncRNA interactions in coding–non-coding RNA regulatory networks and their roles in tumorigenesis.

INTRODUCTION
Lung cancer, one of the most common fatal cancers, has been the leading cause of cancer-related deaths, with an increasing incidence worldwide (1). This cancer can be categorized into 2 major types, non-small cell lung cancer (NSCLC; ~85%) and small cell lung cancer (SCLC; ~15%). The former is further classified into three major subtypes according to histopathology and clinical features: lung adenocarcinoma (LUAD; ~40%), lung squamous cell carcinoma (LUSC; ~25%-30%), and large cell carcinoma (LCC; ~10%-15%). LUAD and LUSC are the most common pathological subtypes of lung cancer (2-4), and LUAD is specifically the most frequent subtype in never or light smokers (5). LUAD is mainly caused by a combination of multiple genetic and environmental factors (6). The prognosis of NSCLC patients is not optimistic, and the 5-year survival rate is less than 1% (7,8), which is mainly attributed to regional or distant metastasis (9,10). Patients often have little opportunity of receiving effective treatments because they lack specific clinical symptoms and are therefore diagnosed at a very late stage. Characterization of new cancer-specific diagnostic and prognostic biomarkers is therefore necessary and will greatly assist in timely diagnosis, prognosis, treatment selection, and guiding further clinical treatment. In recent years, non-coding RNAs (ncRNAs), mainly including microRNAs (miRNAs), long ncRNAs (lncRNAs), and circular RNAs (circRNAs), have been widely studied as a class of important regulatory molecules, especially for their crucial roles in tumorigenesis (11-13).
These ncRNAs have been of interest because of their potential roles as biomarkers for the diagnosis and prognosis of various cancers (14)(15)(16). The interactions with messenger RNAs (mRNAs), especially via competing endogenous RNAs (ceRNAs), indicate that ncRNAs and mRNAs can function as ceRNAs by competitively binding with miRNAs through sharing miRNA recognition elements to regulate their expression levels (17). Based on this hypothesis, relevant RNAs have been studied, particularly for their potential prognostic roles in tumorigenesis. For example, the circRNA hsa_circ_0072088, miRNAs (hsa-miR-532-3p and hsa-miR-942-5p), and mRNAs (IGF2BP3, MKI67, CD79A, and ABAT) may serve as prognostic markers in LUAD via a circRNA-mediated ceRNA network (18); LINC00324/miR-9-5p (miR-33b-5p)/ GAB3 (IKZF1) may play a pivotal role in regulating TAM risk and prognosis in LUAD patients (19), and some studies focus on cancer-related lncRNAs to search crucial RNA interactions based on ceRNA networks (20,21). These studies provide potential crucial gene interactions in tumorigenesis, which are quite necessary to reveal the detailed molecular mechanism of diverse cancers. However, it is not enough to present these interactions from these RNA levels, because the small regulatory RNA, miRNA, is not a single sequence but a series of multiple isomiRs (22)(23)(24)(25)(26). Do these small flexible isomiRs also contribute to RNA cross-talks and the occurrence and development of cancers? It is urgent to explore these interactions at the isomiR levels, which will help us understand the interesting cross-talks in the RNA world. In this study, to further understand the potential cross-talks among diverse RNAs in LUAD (Figure 1), we mainly discuss the interactions among ncRNAs and mRNAs, particularly from the isomiR level. Firstly, via an integrative analysis of several datasets, consistent deregulated genes are surveyed using robust rank aggregation (RRA) algorithm, and their functional implications are queried to understand the potential contributions in tumorigenesis. Secondly, protein-protein interaction (PPI) networks are used to screen potential hub genes associated with cancer through integrating multiple algorithms, and these hub genes are further screened by survival analysis. Thirdly, relevant miRNAs of these hub genes are obtained, and then these interacted miRNAs are used to survey related lncRNAs. Finally, based on the potential biological interactions, a ceRNA network is constructed, and involved RNAs are further analyzed to understand their expression correlations and potential roles in tumorigenesis, especially for the analysis at the isomiR level. Our study will provide insight into RNA cross-talks and more references for potential crucial RNAs associated with lung cancer, particularly focusing on coding-non-coding RNA interaction networks at the isomiR level. These findings will contribute to discovering the novel potential anticancer drug target in precision medicine. Screening and Identification of Deregulated RNAs The limma (29) was used to screen and identify deregulated RNAs in GEO and TCGA datasets using the Bioconductor packages. The common candidate cancer-associated mRNAs were firstly screened using R package RobustRankAggreg (30) in 9 GEO datasets, and candidate mRNAs were further analyzed with deregulated mRNA profiles from TCGA dataset. mRNAs with |log 2 FC| > 1 and padj < 0.05 were primarily identified as abnormally expressed genes. 
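As a concrete illustration of this screening step, the sketch below applies the |log2FC| > 1 and adjusted p < 0.05 cutoffs to a toy differential-expression table. It is only a minimal sketch: the study itself used limma and RobustRankAggreg in R, whereas this example uses pandas, and the column names and values are hypothetical.

```python
# Minimal sketch of the thresholding step described above (|log2FC| > 1 and
# adjusted p < 0.05), using pandas in place of the R/limma workflow.
import pandas as pd

def filter_deregulated(df, lfc_cutoff=1.0, padj_cutoff=0.05):
    """Return up- and downregulated genes passing the fold-change and FDR cutoffs."""
    sig = df[(df["log2FC"].abs() > lfc_cutoff) & (df["padj"] < padj_cutoff)]
    up = sig[sig["log2FC"] > 0]
    down = sig[sig["log2FC"] < 0]
    return up, down

# Toy differential-expression table (placeholder values only).
table = pd.DataFrame({
    "gene":   ["CCNA2", "MKI67", "KIF11", "GENE4", "GENE5"],
    "log2FC": [2.3, 1.8, 1.4, 0.4, -2.1],
    "padj":   [1e-6, 3e-4, 2e-3, 0.20, 1e-5],
})
up, down = filter_deregulated(table)
print(f"{len(up)} upregulated, {len(down)} downregulated genes pass the cutoffs")
```

In the study itself this filter was applied to the TCGA profile and intersected with the RRA consensus from the nine GEO datasets.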
Functional Enrichment Analysis of Gene Sets
To understand the detailed functional implications of the differentially expressed gene sets or screened specific genes, the Database for Annotation, Visualization and Integrated Discovery (DAVID) version 6.8 (31) and clusterProfiler (32) were used to perform functional analysis. Simultaneously, based on the identified Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, z scores were estimated according to the following formula (33): z = (up − down)/√count, where up and down indicate the numbers of upregulated and downregulated genes, respectively, and count is the total number of involved deregulated genes. Furthermore, to understand the detailed expression patterns of the screened genes, their expression distributions in KEGG pathways were also queried, and significantly enriched pathways were further presented using Pathview (34,35). A p-value <0.05 was considered statistically significant.

Screening and Identification of Potential Cancer-Associated Hub Genes
To survey the potential hub genes in LUAD, PPI networks were first constructed based on the deregulated mRNA profiles using the STRING online database with default parameters (36). Networks were constructed using upregulated and downregulated genes. For the PPI network, the candidate key genes were first screened based on the potential modules using the CytoHubba plug-in in Cytoscape 3.7.2 (37). Then, we selected the top 10 node genes from the results of 12 algorithms (including Betweenness, BottleNeck, Closeness, ClusteringCoefficient, Degree, DMNC, EcCentricity, EPC, MCC, MNC, Radiality, and Stress) as candidate genes. Genes with degree scores <10 were excluded, and the remaining genes detected by more than 4 other algorithms were finally selected as candidate hub genes. We also used the PageRank algorithm to explore hub genes among the significantly differentially expressed genes. As a method of evaluating the importance of nodes, PageRank is a useful algorithm for exploring relative topological importance, and it has been used, for example, to discover the relative importance of herbs and determine core herbs (38). For the primarily screened hub genes, further analysis was performed to understand their potential roles in tumorigenesis, mainly including drug sensitivity and correlations between hub genes and immune infiltrates (http://bioinfo.life.hust.edu.cn/web/GSCALite/) (39). Moreover, gene set variation analysis (GSVA) scores for the hub gene sets were also estimated using GSCALite.

Characterization of Potential Prognostic Values of Candidate Genes
It was necessary to query the potential prognostic values of the screened cancer-associated hub genes, which helps us to understand their roles in tumorigenesis. Survival analyses were therefore used to estimate the correlations of the candidate genes (also including the further screened candidate miRNAs and lncRNAs) with cancer prognosis. The clinical data, mainly including survival status, cancer stage and grade, survival time, and molecular subtype, were obtained from TCGA using the "TCGAbiolinks" package (28). The log-rank test was used to estimate the potential differences, and statistical significance was set at p < 0.05. Simultaneously, in order to obtain integrated results and ensure the potential prognostic values of the screened genes, prognostic results were also obtained from the GEPIA (40,41) and StarBase (42,43) databases.
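For illustration, the pathway z score introduced above can be computed as sketched below, taking z = (up − down)/√count and interpreting count as up + down, i.e., the deregulated genes assigned to the pathway. The pathway names and gene counts are placeholders, not the study's results.

```python
# Minimal sketch of the per-pathway z score reconstructed above:
# z = (up - down) / sqrt(count), with count taken here as up + down.
import math

def pathway_z_score(up, down):
    count = up + down
    return (up - down) / math.sqrt(count) if count else 0.0

# Hypothetical pathway counts: (upregulated, downregulated) deregulated genes.
pathways = {"Cell cycle": (38, 4), "Oocyte meiosis": (20, 5), "Axon guidance": (6, 18)}
for name, (up, down) in pathways.items():
    print(f"{name}: z = {pathway_z_score(up, down):+.2f}")
```

A strongly positive z indicates a pathway dominated by upregulated genes, a strongly negative z one dominated by downregulated genes.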
Screening and Identification of Relevant Cancer-Associated Non-Coding RNAs Candidate hub mRNAs with potential prognostic values were firstly used to screen related miRNAs based on biological interactions because the small ncRNAs have been widely studied as a class of important regulators in gene expression. The miRNA: mRNA interactions were firstly collected from the StarBase database (42,43), and those miRNAs remained as candidaterelated miRNAs if they had opposite expression patterns with target mRNAs and had significant prognostic results. Here, due to the phenomenon of multiple isomiRs in the miRNA locus (22)(23)(24)(25)(26), we selected the most dominant isomiR as the classical miRNA to perform the relevant analysis. The detailed isomiR expression patterns were further queried for the final screened cancerassociated crucial miRNAs, because the multiple isomiRs may lead to perturbed coding-non-coding RNA regulatory network (44) that may also perturb the ceRNA network. Next, based on the screened miRNAs that were crucial intermediate nodes correlating mRNAs and lncRNAs, miRNArelated deregulated lncRNAs were further surveyed from LncBase Predicted v.2 (45), and lncRNAs were identified if they had opposite expression patterns with miRNAs and had potential prognostic values in cancer prognosis. Construction of Competing Endogenous RNA Network to Screen Cancer-Associated Crucial RNAs According to screened cancer-associated abnormal RNAs, mainly including hub genes, interacted miRNAs, and associated lncRNAs, a ceRNA network was constructed based on their regulatory relationships using the R package of "networkD3" (https:// CRAN.R-project.org/package=networkD3). The primary constructed ceRNA network contained a series of mRNAs and ncRNAs, and then these related mRNA:miRNA and miRNA: lncRNA pairs were further queried for their expression relationships. A correlation analysis was used to estimate their expression correlations, and if the correlation coefficient was less than −0.20, p < 0.05, and the average expression level (log 2 TPM) was more than 10 (ensure the abundant enrichment level), further analysis of the genes remains to be performed. In-Depth Analysis for Screened Crucial RNAs Moreover, although all of the above-screened associated genes were dominantly and abnormally expressed in tumor samples, and they also had significant correlations with cancer prognosis, it is necessary to further understand the expression patterns across diverse cancer types (46) that will help us assess the potential expression and function of genes in different tissues and tumorigenesis. Therefore, a pan-cancer analysis was used to track their expression patterns. Simultaneously, the binding events of diverse RNAs were visualized using DIANA (http://carolina. imis.athena-innovation.gr/diana_tools/web/index.php?r=site% 2Findex) (47,48), which could indicate the interactions among different RNAs in the ceRNA network. Furthermore, the screened crucial mRNAs were queried for the potential roles in immune infiltrates in LUAD (46), which would contribute to understanding the biological role of the hub genes. Statistical Analysis and Network Visualization An unpaired t-test and the Wilcoxon rank-sum test were used to estimate differentially expressed genes for the unpaired samples. For interactions between related genes, especially among different RNAs, further network visualization was presented using Cytoscape 3.8.2 (37). 
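The pair-screening rule described above (correlation coefficient below −0.20, p < 0.05, and mean target expression above 10 in log2 TPM) can be sketched as follows. This is a minimal illustration with randomly generated expression vectors standing in for the TCGA profiles; SciPy's Spearman correlation replaces whatever correlation routine was actually used.

```python
# Minimal sketch of the pair-screening rule: keep a miRNA:target pair when the
# Spearman correlation is below -0.20 with p < 0.05 and the mean target
# expression (log2 TPM) exceeds 10.  The expression vectors are placeholders.
import numpy as np
from scipy.stats import spearmanr

def keep_pair(mirna_expr, target_expr, r_cutoff=-0.20, p_cutoff=0.05, expr_cutoff=10.0):
    rho, p = spearmanr(mirna_expr, target_expr)
    return rho < r_cutoff and p < p_cutoff and np.mean(target_expr) > expr_cutoff

rng = np.random.default_rng(1)
mirna = rng.normal(8, 1, 100)                      # hypothetical log2 TPM values
target = 25 - 1.2 * mirna + rng.normal(0, 1, 100)  # anti-correlated target gene
print("retain pair:", keep_pair(mirna, target))
```

Applying the same rule to miRNA:lncRNA pairs yields the lncRNA side of the ceRNA network.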
A Pearson's or Spearman's correlation coefficient was estimated to assess expression relationships among different RNAs. All of these statistical analyses were performed using the R programming language (version 3.4.3), and Venn distributions were performed with a publicly available tool (http://bioinformatics.psb.ugent.be/ webtools/Venn/). Messenger RNA Expression Profile in Lung Adenocarcinoma According to 9 GEO datasets (Table S1 and Figure 1), the RRA algorithm was used to screen deregulated mRNAs, and a total of 787 abnormally expressed genes were obtained based on distributions of scores in the RRA algorithm (Figures 2A and S1A). Subsequently, 5,476 abnormally expressed genes were obtained from TCGA data (Figure S1A), and 710 genes (including 474 downregulated genes and 236 upregulated genes) with consistent expression patterns were collected as candidate genes to perform further analysis ( Figure 2B). Some abnormal genes were reported with important roles in tumorigenesis. For example, upregulated CST1 can promote gastric cancer migration and invasion through activating the Wnt pathway (49), and CST1 also promotes cell proliferation, clone formation, and metastasis in breast cancer cells, indicating that CST1 is a novel potential prognostic biomarker and therapeutic target for breast cancer (50). The screened upregulated and downregulated genes were further queried for their expression patterns, respectively, and we found that both of them showed significant expression differences ( Figure 2C, p = 2.20e−16 for the upregulated genes and p = 2.20e−16 for the downregulated genes). Most of them showed abundant expression distributions, indicating that these screened candidate genes were dominantly expressed in LUAD. To understand whether these surveyed genes had a potential function, functional enrichment analysis was performed. Both upregulated and downregulated genes showed significant Gene Ontology (GO) terms ( Figures S1C, D), indicating that these abnormal genes might contribute to multiple biological processes. These primarily screened genes were also enriched in several KEGG pathways, especially for cell cycle and oocyte meiosis pathways ( Figures 2D, E, and S2). In the detailed pathways, many relevant genes were involved in deregulated expression patterns ( Figure S3A), which may perturb the relevant pathways. Screening of the Potential Most Influential Genes in Protein-Protein Interaction Networks Based on the obtained upregulated and downregulated gene sets, the PPI network was constructed. According to the primarily constructed complex networks, the potential hub genes were further screened using 12 different algorithms. Based on the top 20 genes in the PPI network ( Figures 3A, B), some genes were detected with a higher ranking score, such as upregulated genes in the EPC network and downregulated genes in the EcCentricity network ( Figures 3A, B). Most genes were filtered if they were not simultaneously detected by Degree and other >4 algorithms, and only 31 genes (including 13 upregulated genes and 18 downregulated genes) were obtained as candidate hub genes associated with LUAD. Many hub genes were detected in multiple algorithms and simultaneously had higher degree scores, and most showed consistent scores in specific algorithms ( Figure 3). These implied that candidate hub genes had higher confidence levels and might be the most influential proteins in PPI networks, further indicating that they might be crucial genes in tumorigenesis. 
To validate whether these candidate hub genes indeed had crucial roles in tumorigenesis, 31 genes were queried for the potential roles in biological pathways, apoptosis, cell cycle, DNA damage response, etc. These candidate hub genes were found to activate and inhibit some biological pathways ( Figures 4A and S3A), implying their roles in relevant pathways that were crucial in the occurrence and development of cancer. Simultaneously, we have performed the analysis of the association between immune cells' infiltrates and hub genes' CNV levels. The results showed that CD4 + cells had a higher copy number variation (CNV) level in the hub gene CNV amplificated group than that in the wild-type group, and CD8_native cells had a significant CNV level in hub gene CNV deleted group compared with the wild-type group ( Figure 4B). These genes did not show a significant difference between tumor and normal samples (p = 0.1200), but they showed significant differences among different subtypes of LUAD (p = 4.00e−13) and different stages of LUAD (p = 8.32e−4, Figure 4C). These varieties revealed that these screened genes were associated with subtypes and diverse stages. Moreover, these genes had positive or negative correlations with some drugs ( Figure S3B). For example, trametinib was positively correlated with CCNA2, KIF11, MKI67, and MAD2L1. In June 2017, the Food and Drug Administration (FDA) approved trametinib plus dabrafenib for the treatment of BRAF V600E mutation-positive metastatic NSCLC patients. These showed the potential associations with anticancer drugs and roles as potential drug targets in future cancer treatment. Further Validation of Hub Genes and Relevant Non-Coding RNAs To further survey and validate the hub genes associated with LUAD, their potential prognostic values were queried as an important index. A total of 16 genes were detected with significant prognostic values ( Figure 4D), and all of them showed significantly deregulated expression patterns based on median expression values of tumor and normal samples. The overall survival curve of these genes showed that patients with lower expression had a higher survival probability than those with higher expression levels ( Figure 4D). Accordingly, these candidate genes were identified as hub genes associated with LUAD, which were used to survey relevant miRNAs to explore the potential interactions among diverse RNAs, especially among mRNAs and ncRNAs. Interestingly, some of them were homologous genes in a specific gene family, including CCNA2, CCNB1, and CCNB2. Some of them, CCNA2, MKI67, and KIF11, were identified as cell cycle-related factors, implying their roles in the cell cycle pathway. A series of relevant miRNAs were surveyed based on the potential biological relationships with the 16 hub genes. Based on expression patterns and the significant correlations with cancer prognosis (log-rank p < 0.05), 10 miRNAs were obtained ( Figures 5A, B). These miRNAs showed significant abnormal expression in LUAD, including 6 downregulated and 4 upregulated miRNAs, and all of them were detected with abundant enrichment levels. Of these, 3 of them were identified as homologous miRNAs, in let-7 gene family, and these miRNAs also had similar sequence, expression distributions, and biological roles. These miRNAs had opposite expression patterns with their target mRNAs ( Figure 5C), implying their potential regulatory roles in the relevant mRNA expression process. 
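The prognostic screening reported above, in which patients are split by the median expression of each hub gene and their overall survival compared (Figure 4D), can be sketched as follows. This is a minimal illustration using the lifelines package with a randomly generated toy cohort and hypothetical column names, not the TCGA clinical data or the tools used in the study.

```python
# Minimal sketch: split a cohort at the median expression of a hub gene and
# compare overall survival between the two groups with a log-rank test.
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

def median_split_logrank(df, gene_col, time_col="time", event_col="event"):
    high = df[gene_col] >= df[gene_col].median()
    result = logrank_test(df.loc[high, time_col], df.loc[~high, time_col],
                          event_observed_A=df.loc[high, event_col],
                          event_observed_B=df.loc[~high, event_col])
    return result.p_value

# Toy cohort (placeholder values only).
rng = np.random.default_rng(2)
cohort = pd.DataFrame({
    "CCNA2": rng.normal(12, 2, 200),     # hypothetical log2 expression
    "time": rng.exponential(1000, 200),  # days to event or censoring
    "event": rng.integers(0, 2, 200),    # 1 = death observed
})
print(f"log-rank p = {median_split_logrank(cohort, 'CCNA2'):.3f}")
```

A gene would be retained as a prognostic hub gene when this p-value falls below 0.05 and the direction of the survival difference is consistent with its expression pattern.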
Then, the primarily screened miRNAs were used to survey relevant lncRNAs based on their biological relationship. According to expression patterns and prognostic values, 2 lncRNAs as well as 4 mRNAs and 2 miRNAs were finally identified as candidate relevant RNAs, and most paired RNAs showed significant expression correlations ( Figures 5D and S3C). These diverse RNAs showed potential regulatory relationships, and these obtained lncRNAs also had significant correlations with cancer prognosis and were detected with abundant enrichment levels ( Figure 5E). Among these, both miR-145-5p and miR-30a-5p were identified regulators with 3 mRNAs and 1 lncRNA, respectively. These screened RNAs have been reported with important biological roles. Competing Endogenous RNA Construction and In-Depth Analysis A total of 8 diverse RNAs were used to construct a ceRNA network based on their expression correlations ( Figure 6A), showing their potential interactions across different RNAs, especially among ncRNA and mRNAs. LncRNA may control mRNA expression via binding to the regulator of mRNA and miRNA, and the complex interactions might further complicate the coding-non-coding RNA regulatory network. Based on involving RNAs in the ceRNA network, further analysis was performed to verify their regulatory interaction, mainly including expression level, expression correlation, and survival analysis. Finally, the 5 RNAs, including CCNA2, MKI67, KIF11, miR-30a-5p, and VPS9D1-AS1, were further identified as candidate crucial RNAs associated with cancer. A significant expression correlation could be found between miRNA and its relevant mRNA and lncRNA ( Figure 6B), and an in-depth analysis of the three RNAs was performed to verify their potential biological roles. To understand the potential roles of surveyed RNAs in other cancer types, a pan-cancer analysis was performed to discuss their expression patterns. Involved genes (CCNA2, MKI67, and KIF11) were found with abundant expression levels in many tissues, and they showed a significantly upregulated expression pattern in many cancer types ( Figure 6C). Simultaneously, lncRNA VPS9D1-AS1 also showed a significant overexpression pattern in many cancer types ( Figure 6D), and the consistent expression trends implied their competition binding with miR-30a-5p. Moreover, although miR-30a-5p was identified as a crucial miRNA, it is not a single miRNA but a series of multiple isomiRs. Then, based on dominantly expressed isomiRs, 6 abundant isomiR were selected, and they showed diverse expression patterns than mRNAs and lncRNAs ( Figure 6E). The dynamic expression of isomiRs implied their flexible regulatory expression, which may contribute to specific biological pathways in different tissues based on their broadspectrum target RNAs. Further, these 6 dominant isomiRs were found with the consistent 5′ ends and seed sequences (nucleotides 2-8) that were binding sites with target RNAs, and they were only involved differently in the 3′ ends and diverse expression patterns. It is unclear whether the length difference would influence stability or regulation efficiency, but most of them were found with unexpected enrichment levels that ensured their biological function. These isomiRs with the same seed sequences have diverse length and expression levels, which would further complicate the interaction network among coding-non-coding RNA regulatory networks. Furthermore, the crucial genes, CCNA2, MKI67, and KIF11, were further queried for their roles in immune infiltration in LUAD. 
In different immune cell types, all of them showed a significant positive correlation with immune infiltration (Figures 7A-C). These results showed that a higher expression level of CCNA2, MKI67, and KIF11 might lead to higher infiltration levels, implying their roles in immune infiltration, a key step in the pathological process of cancer. Potential Prognostic Marker via RNA Cross-Talk As cancer-associated crucial RNAs, the 5 screened RNAs showed a significant difference between groups with high and low expression, and patients with higher expression of mRNAs and lncRNA had a poorer prognosis than those with lower expressions (p < 0.0001, p = 0.00014, p < 0.0001, and p = 0.0088, Figure 7D). However, patients with lower expression of miR-30a-5p had a poorer prognosis than those with higher expressions (p = 0.0016). Their prognostic values were also verified by analysis of hazard ratio (the global log-rank p = 1.43e−06, Figure 7E). These results significantly showed that different RNAs, CCNA2/MKI67/KIF11:miR-30a-5p:VPS9D1-AS1 axis-related cell cycle, could be a potential prognostic marker via RNA cross-talk, especially for the cross-talks among ncRNAs and mRNAs ( Figure S3D). Furthermore, CCNA2 and KIF11 were identified as core essential genes according to the common data of Hart et al. (51), Blomen et al. (52), and Wang et al. (53). CCNA2 contributed to the cell cycle pathway, and it also had a role in the hallmarks of cancer in reprogramming energy metabolism. These contributions implied their key role in the occurrence and development of LUAD, even in cancer diagnosis and prognosis. The interactions with CCNA2, MKI67, and KIF11, particularly for the small and long ncRNAs, may have great importance as potential drug targets based on their contributions in multiple biological pathways (Figures S3A, B). DISCUSSION Based on the potential interactions or cross-talks among different RNAs, it is quite necessary to perform an integrative analysis to survey the relevant RNAs as a potential prognostic marker. Due to the fact of being the leading cause of cancer-related death, lung cancer has been widely of concern, and it is urgent to obtain prognostic markers with higher sensitivity that will largely contribute to adjusting drugs and cancer treatment, especially in precision medicine. Herein, based on an integrative analysis of diverse RNAs from different datasets, CCNA2/MKI67/KIF11:miR-30a-5p: VPS9D1-AS1 axis-related cell cycle is identified as a potential prognostic marker via constructing a ceRNA network and indepth analysis, and all of them are characterized as crucial RNAs in the occurrence and development of LUAD. Of the three mRNAs, CCNA2 has been studied because of its role in cancer, including its prognostic value in breast cancer (54)(55)(56), colorectal cancer (57), pancreatic cancer (58), LUAD (59), gastric cancer (60), bladder cancer (61), etc. MKI67, a marker gene in the cell cycle, also has been reported with prognostic value in NSCLC (62) and breast cancer (63). Furthermore, the prognostic value of KIF11 has been reported in oral cancer (64) and colorectal cancer (65). Our analysis shows that CCNA2 is an important gene in the cell cycle, and it is significantly upregulated in many cancer types. The disorder of CCNA2 contributes to multiple cancers, implying its potential role in cancer diagnosis and prognosis. 
Tanshinone IIA can significantly downregulate the expression of the CCNA2-CDK2 complex and suppress the progression of LUAD by inducing cell apoptosis and arresting the cell cycle (66). One of its regulators, miR-30a-5p, also has been widely of concern as an important miRNA, especially for its role via crosstalk with other RNAs in some pathways in different cancers (67)(68)(69). The overexpression of another ncRNA, lncRNA VPS9D1-AS1, a potential prognostic marker, can be used to predict poor prognosis in NSCLC (70), and its role in cancer has been validated (71,72). All of these RNAs have been validated with roles in tumorigenesis, and this axis may be a proper marker to predict cancer progression. Meanwhile, based on the widespread phenomenon of isomiRs occurring in the miRNA locus, the screened crucial miR-30a-5p is also further analyzed at multiple isomiR levels. A series of multiple isomiRs can be detected, and dominantly expressed isomiRs are also unexpectedly enriched, which may ensure their regulatory roles. Although these isomiRs are not involved in causing the differences of 5′ ends and seed shifting events, their expression and length difference still provide a possibility to perturb the original coding-non-coding RNA regulatory network. The main reason may possibly be derived from these isomiRs with expression and sequence heterogeneities, but it is unclear whether these isomiRs may competitively bind to target RNA (mRNA and lncRNA). If the 5′ ends are involved differently, the novel seed sequences will be found, which may lead to some novel targets simultaneously losing some targets. It is quite necessary to perform analysis from the multiple isomiR levels despite many studies only focusing on the traditional/ classical miRNAs. The small ncRNAs largely contribute to the complex cross-talks among diverse RNAs, especially in codingnon-coding RNA regulatory network, which is more complex than we thought because of the phenomenon of isomiRs in the miRNA locus. Taken together, based on the potential cross-talks among diverse RNAs, this study finally screened and identified CCNA2/ miR-30a-5p/VPS9D1-AS1 axis as a potential prognostic marker in LUAD. All of the relevant RNAs have been widely studied with roles in the occurrence and development of cancers, indicating their crucial roles in tumorigenesis, especially for association with cell cycle via direct or indirect contribution. Further study should focus on their values as a potential therapeutic target for cancer treatment. Our findings will provide insight into cross-talks among diverse RNAs, especially from the unique perspective of multiple isomiRs from a given miRNA gene locus, which will enrich our understanding of mRNA-ncRNA interactions in coding-non-coding RNA regulatory network in tumorigenesis. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Dual Boundaries: The Mechanism of Boundary Construction Operating in Interethnic Settings in Georgia This paper addresses the question of the boundary construction mechanism between different ethnic groups in Georgia. It demonstrates the duality of boundary construction strategies that operate distinctively in the public and private domains of life. By exploring this substantive issue, I utilize relatively new theoretical perspectives in the study of interethnic boundary construction by concentrating on its multilevel operational character. Drawing on rich data sources within a mixed method approach, I provide empirical evidence concerning how ethnic and national codes of identity are negotiated and combined in everyday interethnic settings. The analyses focus on three ethnic groups residing in the Republic of Georgia – Georgians, Armenians, and Azerbaijanis. Introduction What mostly characterises ethnic boundary theory is its linearity. In almost all theoretical models, which explore the mechanism of ethnic boundary construction, the main question is directed at the set of factors defining the ethnic closure. According to the approved theoretical schemes, ethnicity, as well as ethnic identity, is always defined through a certain repertoire of symbolic codes without considering how each of these codes operate and become interpreted at different levels of life (Shils 1957, Eisenstadt 1998, Eisenstadt & Giesen 1995. Precisely, how and to what extent a multidimensionality of human life creates preconditions for the particular types of boundary construction. Otherwise, how a micro and macro, public and private level of human life generates a unique context for interpretation of each defining code of ethnicity. Thereby, what often remains beyond attention is the multilevel character of the boundary construction process. And, the fact that each boundary defining factor itself does not function identically in the micro and macro, as well as the private and public spheres of life, and that the logic of the boundary making mechanism is essentially dual. The boundaries, constantly reproduced and maintained on the private level of human life, can become simultaneously crossed on the public level. Indeed, the ethnic boundary lines, that are conserved and maintained in the private sphere, simultaneously can become eliminated in the public domain. This multilevel operational feature of the boundary construction mechanism provides new perspectives in the study of ethnicity. Therefore, the private/public operational structure as a conceptual framework can be useful to understand the complex logic of the ethnic boundary construction mechanism in everyday life. As mentioned above, this issue has been underestimated by scholars of ethnic boundary theory. Regardless of this theoretical neglect, there have been certain studies undertaken in this spirit. The dual character of boundary construction has been articulated by Frederic Barth. He explicitly noted that interethnic boundaries can be crossed and simultaneously maintained (Barth 1967). However, Barth didn"t explore further the operational logic of this mechanism in everyday settings, especially in the context of diversified life domains. He observed that the contrastive cultural characteristics of ethnic minority groups are located in the non-articulating sectors of life, defined by the author as a "backstage", where so called "stigmatic" characteristics from the perspective of the dominant majority culture can be covertly reproduced (Barth 1967). 
In fact, he implicitly describes how most "stigmatic" and "contrastive" characteristics can be maintained and reproduced in the "private" (backstage) sphere of life. Based on the empirical data, sociologist Shirley Kolack (1987) emphasizes the unequal level of the value internalization process in the public and private spheres of life. Family life, on the one hand, accompanied with traditions and religious practices represents a cultural enclosure of each ethnic group. Whereas, on the other hand, work and political activity represent a public sphere. In his paper, the author asks how the internalization processes of Soviet values in multiethnic Soviet countries has "been greatest in the areas of politics and work, the least in the areas of culture and family life." (Kolack 1987:44). Instead of the above mentioned conceptual and empirical explorations, the multilevel character of the boundary construction process has been poorly reflected in the field. In the following sections I will provide an overview of the relevant theoretical framework for this article as well as the empirical foundations. Theoretical Framing This research is framed within the theories of collective identity and substantially conceptualized through the boundary making approach (Shils, 1976. Eisenstadt & Giessen 1998, Cohen 1985, Delanty, G. 1999, 1995. The conceptual linkage between collective identities and boundaries is widely reflected in social sciences literature. Just as identity does not exist without boundaries, boundaries do not exist without identity. The constructivist approach to collective identity is displayed through a set of symbolic codes of distinctions between those inside and outside of the group and serves as a main conceptual instrument for analyzing the boundary construction process in interethnic settings (Eisenstadt & Giessen 1998). Following the constructivist model of collective identity, I differentiate primordial, civic and cultural codes of distinction which create an essential ground for the formation of interethnic boundary lines (Shils, 1975;Geertz, 1973). Additionally, I am adding a specific subjective code to the following model which reflects feelings of self-identification. Primordiality is associated with factors that are considered "objective", unquestionable and inherently natural. "The boundaries of primordial communities consist of strong lines separating incommensurable insides and outsides (Eisenstadt & Giessen 1998:78). Civic codes of identity represent distinctions related to social and civic routines as well as institutional or constitutional arrangements of community (Delanty 1998). Cultural codes of identity are linked to "the realm of the sacred …defined as God or Reason, Progress or Rationality (Tenbruck, F. H. 1989;Eisenstadt & Giessen 1998:82). Religion is considered quintessential for this type of scheme. Boundaries constructed on such a collectiveness can be easily crossed as everyone is "capable of overcoming his inferiority, his emptiness and his errors, by converting to the right faith, adopting the superior culture, and crossing the boundary (Eisenstadt & Giessen 1998:83). The boundary lines are mainly drawn on these codings with constantly varying compositions. "These codes have to be seen as ideal types, while real codings always combine different elements of these ideal types. Therefore, concrete historical codings of collective identity are not homogenous." (Eisenstadt & Giessen 1998:76). 
These symbolic codes of distinctions between ethnic groups serve as a crucial factor for interethnic boundary construction and are considered as constructors of collective identity per se (Eisenstadt & Giessen 1998:77). This study illustrates that the coding repertoire of each ethnic group"s identity is presented as a combination of various components and vary in private and public level of life. Further, the multilevel approach of the boundary making process appears more suitable to illustrate a predominantly definitive character of the duality of boundaries in everyday life. The research programs of social constructivist theories that explain the reproduction of collectivity based on the boundary making approach emphasize however the defining factor of situational particularity in this process. It is the situation in general that activates a certain type of coding component of identity and ascribes them particular importance and priority. The group identity is always represented through various components of these symbolic codings, "the importance of which varies in different situations." (Eisenstadt & Giessen 1998:76) Almost all situationalist approaches (Spicer 1971, Nagel 1994) admit a definitive future of contextuality at the general level which gives a space for reinterpretation and modification of each symbolic code of distinctions between ethnic groups, without specifying a particular contextual arrangement that creates a unique basis for boundary definition. The empirical study of such contextuality can provide a new analytical tool for the multidimensional examination of the boundary making process. This research focuses on the distinctive levels of everyday life presented at the micro and macro level or, more specifically, in the private and public domains, which reveal a unique character of the boundary construction mechanism described here as duality. By using the term duality I try to characterize the multilevel operational nature of the boundary construction mechanism in everyday interethnic settings. Popularized by the influential theory of structuration, the term refers to two distinct and independent features a phenomenon can entail at the same time, as two different sides of one coin (Giddens 1984). The concept of duality has been accurately utilized within the theory of ethnic identity (Deaux 2006). As a form combining both ethnic and national identity, the dual identity has been defined as one of the alternatives in the multiple identity options ethnic minority groups have to choose from in everyday practice (Baysu, Phalet & Brown 2011). The conceptualization of this bipolar modus of identity implicitly underlines the dual character of the boundary making process in interethnic settings. Members of ethnic minority have to choose between different strategies of identity construction as well as various combinations of ethnic and national identities (Berry 2006). The proximity to one pole of the bidimensional scheme allows members of the ethnic minority group to distance themselves from another pole. In this extreme way, the predominantly ethnic ("separated") and predominantly national constructions of identities with respective coding combinations can be differentiated (Ruder, Alden, Paulhus 2000). Though the bi-dimensional scheme of identity can produce a dual identity form represented as a negotiated and combined construction of ethnic and national identity (Deaux 2006). 
Sharing a national identity with fellow citizens and at the same time their ethnic identity with their minority group members allows them to navigate successfully in everyday life. The duality of the boundary construction mechanism can be indicated as one of the strategies in the range of its potential application. The public and private normative context as well as the acceptance of cultural diversity by both majority and minority group members have critical importance in the dual identity construction process. Though in this and other theoretical implications (Alba, R. 2005) the crucial question remains the same: What are those specific preconditions that produce a dual identity construction, on which level of interpersonal relationships becomes possible the negotiation of interethnic distinctions? I address this question using a multilevel analysis of the boundary making process. More specifically, the research will reveal the duality of the boundary construction mechanism by examining its reproduction and formation on the public and private level of everyday life separately, which will be overviewed in the next section. Public and Private Structure of Life Public and private are one of those "grand dichotomies" (Bobbio 1989) developed in the human history of thought that functions as a conceptual tool for understanding the normative order of human life. Explicit demarcation of the human world into two domains with appropriate institutionalized normative orders and constantly reproduced boundaries can serve as a conceptual framework for understanding the logic of interethnic boundary construction in everyday life. Following the classic ancient legacy of private/public dichotomization, modern authors use this concept by addressing reference units in each sphere. Private is defined as a personal, and public as an impersonal domain of life (Arendt 1958, Habermas 1964, Silver 1997. According to Arendt, public is identified as "everything that appears . . . can be seen and heard by everybody and has the widest possible publicity . . . [and] appearancesomething that is being seen and heard by others as well as by ourselves -[is what] constitutes reality"" (1958:50) and is "distinguished from our privately owned place in it. (1958:52) In contrast, "To live an entirely private life means above all to be deprived of things essential to a truly human life: to be deprived of the reality that comes from being seen and heard by others"" (1958:58). Habermas (1964) conceptualizes a public sphere as a ""realm of our social life in which something approaching public opinion can be formed, access is guaranteed to all citizens [, and a portion of it] comes into being in every conversation in which private individuals assemble to form a public body . . . Today newspapers and magazines, radio and television are the media of the public sphere"" (1964:49). The explicit dichotomization in two normative order[s] is one of the crucial markers of modernity for Elias who equates the private to the intimate and secret mode of human behaviour in comparison to the public one. "… with the advance of civilization the lives of human beings are increasingly split between the intimate and a public sphere, between secret and public behaviour. And this split is taken so much for granted, becomes so compulsive a habit, that it is hardly perceived in consciousness." (Elias 1939:190). The private is considered something that "is hidden or withdrawn versus what is open, revealed, or accessible" (Weintraub 1997:5). 
The individual is perceived as an ontological opposition to the collective, which affects the interest of the collectivity of individuals. Private is not only ascribed to the family domain and primary groups; it functions on the basis of the intense emotional and intimate parts of human life. In this way "the contrast between the 'personal', emotionally intense, and intimate domain of family, friendship and the primary group and the impersonal, severely instrumental domain of market and formal institutions becomes explicit" (Weintraub 1997:20). Another framework is provided by feminist discourse, according to which the private sphere of life is identical to family and domestic settings, as opposed to the public one conceptualized as "gender-linked in terms of both social structure and ideology" (Weintraub 1997:28). In the framework of sociology, further enhancement of these conceptualizations is provided in light of modernity (Sennett 1977, Fischer 1981, Hunter 1985, Lofland 1998). In contemporary western urban society, the public sphere equals "the world of strangers, the cosmopolitan city", which contrasts with the private sphere of intimate relationships. "The absorption in intimate affairs is the mark of an uncivilized society" (Sennett 1977:340). The main criterion of distinction between these poles is the scale of social distance, presented as an alienated and estranged interpersonal relationship. "A world of strangers", as Fischer states, is a "world of people who are personally unfamiliar to one another" (Fischer 1981:307). Public and private are defined as different normative and mutually interdependent orders in which correlated modes of social practice are incorporated: the private, the parochial and the public social orders (Hunter 1985). The private order refers to the primary groups where "the values of sentiment, social support, and esteem are the essential resource"; the parochial order is "based on the local interpersonal networks and interlocking of local institutions that serve the diurnal and sustenance needs of the residential community"; and the public order is "located preeminently in the formal, bureaucratic agencies of the state" (Hunter 1985:233-234). Specifically, the author defined the private realm as a place of intimate ties between the primary group members who are mostly involved within "households" and "personal networks"; the parochial realm as a space of the sense of commonality between acquaintances and neighbours who are involved in interpersonal networks within "communities"; and the public realm as an opposite of the private sectors of urban areas, where individuals are personally unknown or only "categorically" known to one another (232-233). Structural positions within these different social orders create "equivalent" dynamics for interactions. By utilizing the public and private normative order as a conceptual framework, I attempt to reveal the multidimensional operational character of the boundary construction mechanism. More specifically, I will try to illustrate how these normative orders, which distinctively structure the domains of life, are responsible for the reproduction of dual boundaries between the ethnic groups.
Design and Methods To demonstrate how the dual boundary construction mechanism operates in interethnic settings, and more specifically, how boundary lines between the ethnic groups are defined according to the normative orders of certain life domains rather than by identity codings per se, a complex undertaking of empirical research will proceed. All dimensions of the research will be defined with a mixed method approach, integrating both quantitative and qualitative empirical data. At the first stage, I will measure the symbolic codes of collective identity for three ethnic groups. This will help explore how interethnic boundary lines are drawn and what kind of composition of the symbolic codes of interethnic distinctions constitute the main characteristics of boundary lines. I will also try to reveal how the combination of defining factors of interethnic boundaries are operating in the private and public spheres. By doing so, I try to demonstrate how distinctively each of these symbolic codes of collectivity are functioning on the public and private levels of everyday life and shift the dual nature of interethnic boundary lines. In a Caucasus Barometer questionnaire, which includes a section for measuring national identity, I selected variables compatible with the theoretical model of Eisenshtadt and Giessen (1998). The operationalization of the theoretical items helped to explore empirically how boundary lines of three ethnic groups are constructed. In sum, seven items for three groups of identity codings have been identified. The primordial dimension of symbolic distinctions which conceptually creates the basis for boundary line construction with reference to "origin" and "nature" includes three items such as kinship, birth and language. The civic dimension has been depicted by variables such as citizenship and acknowledgement of institutional arrangements. The cultural code consists of two items related to the systems of internalized normative order. After measuring each of these identity codes for three ethnic groups with an aim of exploring an interethnic boundary construction, I verified resulted models of boundary constructions on the private and public level. This enabled me to reveal a nonlinear, multilevel operational mechanism of the boundary making process, namely, the dual character of boundary construction, which functions distinctively at the private and public level (Wimmer 2004(Wimmer , 2013. As a key variable for measuring boundaries in the private sphere I select marriage, which alongside its substantially intimate and personal nature is considered to be a strong predictor of in-group solidarity. At the same time marriage as a sacral act in many cultures is closely related to the religious connotations and intensifies feelings towards ethnic affinities. In the public sphere I select the business partnership which, corresponding to organizational structure, is fairly distanced from the familiar and intimate orbit of the private sphere. According to its intrinsic instrumental logic it relatively lacks value orientations of a substantial nature (Weber 1970). Based on the empirical analysis the study reveals that the boundary lines constructed predominantly on primordial differences, become easily crossed in the public sphere of life. And conversely, the boundary lines essentially defined through the civic codes of identity appear to be strongly maintained in the private sphere. 
For strengthening the empirical evidence by verifying this theoretical statement, qualitative methods have also been utilized. Based on the research data generated from in-depth interviews and focus-groups I identify the strategies respondents use in constructing the interethnic boundaries in everyday settings. By analysing narratives, I try to understand how interethnic boundaries as well as identities are constructed and constituted (Somers 1994:607). In sum, 7 focus groups and 23 in-depth interviews were conducted. The fieldwork was carried out in summer 2017, 2018 and 2020 in three sites of Georgia: Tbilisi, Marneuli and Akalthsike. The main criteria of location selection was the quantitative distribution of the populated ethnic groups. The respondents have been selected by age, gender, location and ethnicity. The gender parity criteria was applied. The one generational cohort born in post-soviet period with the respondents aged 18-25 were chosen. Background Georgia represents an interesting site for the study of boundary construction in multiethnic societies. Multicultural and multiethnic composition has been a peculiar feature of the country throughout the centuries. Apart from ethnic majority Georgians; Jews, Armenians, Azerbaijanis, Greeks, Kurds, Russians, Ukrainians, Chechens/Kists, Ossetians, Abkhaz, and other ethnic groups constitute the multiethnic composition of society, which has undergone permanent changes in different historical periods. According to the census (CSEM 2016:2) in 1926 the minority groups comprised 33% of the entire population, with 11.51% of Armenians, 5.17% Turks and 3.60% Russians. The 1940s and 1980s represent historical periods of growth in ethnic diversity. Though after the 1990s, there was a significant decline in the minority ethnic groups, so that in 2014 the minorities represented 15% of the whole population, comprised of 6.27% Azeris, 4.53% Armenians and 0,71% Russians. The political, social and cultural exclusion of ethnic and religious minorities remains one of the major challenges for the Georgian state (NITG 2008). By analyzing interethnic relations (social cohesion) academics often limit their focus to the developments of the post-Soviet period. This underestimates the essence and complexity of the problem, which is largely shaped by the legacy of Soviet times. In the Georgian academic sphere, the interethnic thematic is broadly reflected, especially, in reference to national identity (Tevzadze 2009, Nodia 2009, Zedania 2011, Wheatley 2009, Reisner 2009, Kirvalidze 2014) and the collective memory formation process in post-Soviet Georgia. Under the ideological agenda of equality and rights, Soviet power institutionally maintained ethnicity with its crucial constitutional elements such as language, with the intention of eliminating primordial affinities through extended enforcement of social and political patterns of class identification (Kravetz 1980:14). The new secularized and ideologized patterns of identification have been assumed to establish a "set of overarching shared values of the country as a whole to replace the core values of the various ethnic groups." (Kolack 1987:38). In fact, this process was followed by explicit ethnic hierarchies, centralized authority structures, efforts of Russification and asymmetric power relations between nations and ethnicities (Suny 1993, Shanin 1989. 
It should be noticed that alongside Russian, used as a lingua franca, the "nationality label and native language" remained as the basic and most stable indicators of national and ethnic identity in the Soviet Union (Silvan 48). Soviet identity defined primarily by social class (brotherhood of workers) and secularized supranational civic codes (Soviet citizen) was assumed to operate as a substitute for other primordial codes of identification. The unequal level of internalization of Soviet values in the private and public sphere and the maintenance of the distinct languages, cultural and religious traditions in familiar spheres, each ethnic group sustained its cohesiveness and the interethnic boundaries were reproduced. The concentration of ethnic minority populations mainly in rural areas in a form of compact ethnic settlement has favored the survival of traditional agents of socialization, traditional social patterns, values, and modes of behavior. Another reinforcement of feelings of ethnic identity was the survival of religion, which was closely intertwined with a sense of ethnic identity of most ethnic groups (Silvan 85). Demographic conditions of ethnic minorities such as territorial and urban-rural dispersion was a strong predictor not only for ethnic identity maintenance but also for the degree of inequality between the ethnic groups (Silvan, 85). The concentration of the ethnic minority population mainly in rural areas in the form of a compact ethnic settlement has favored the enforcement of unequal living standards and environmental developments between the ethnic groups. The difference in the quality of life which "existed between the urban-rural settings has its effects on the different spheres of life (Kravetz 1980:152). Despite the efforts towards universal literacy accompanied by industrialization and economic developments the critical importance remained the problem of "the great disparity between rural and urban educational services: personnel, buildings, materials and access," on the one hand and, "the cultural divisiveness within ethnic groups which result[ed]s in less schooling for women and, generally, a negative view of schooling", on the other hand (Kravetz 1980: 22). After the collapse of the Soviet Union national minorities have faced "specific structural handicaps" and become "particularly vulnerable to impoverishment, isolation and under-education" (NITG 2010:11). This development made the issue of interethnic social cohesion particularly interesting to research. Quantitative Insights The Caucasus Barometer 2019 national survey data verifies the hypothesis of the dual boundary construction mechanism. The data reveals that interethnic boundaries are distinctively drawn in the private and public spheres of life. More specifically, interethnic boundaries are crossed on the public level when members of one ethnic group have business relations with representatives of another ethnic group. At the same time these ethnic boundaries are maintained and reproduced on the private level when it concerns intermarriage. There is a significant difference in how interethnic boundaries are drawn simultaneously in the two spheres of life. The low level approval of intermarriage in all three ethnic groups reveals that interethnic boundary lines are strictly drawn in the private sphere. Though at the same time interethnic boundaries are significantly eliminated in the public domain with a comparably high level of approval of business relationships among ethnic group members. 
According to the national survey data (Table 1) members of the Georgian ethnic group prefer to have business relations with the religiously different ethnic Azeris (71%) in comparison with the religiously familiar ethnic Armenians (64%). At the same time, more of the members of the Georgian ethnic group approve marriage with ethnic Armenians (40%) than with Azeris (33%). These empirical data suggest support for the hypothesis stated above that religious and other primordial boundaries are crossed differently in the private and public spheres of life. Additionally, it supports the second hypothesis that interethnic boundaries are defined by other social and symbolic codes of identity than the religious one. Table 1 shows that there is a relatively similar picture in the case of the Azeri and Armenian ethnic minorities. Table 2 shows that people who define their identity in general through the civic codes (Respect to the Georgian institutions and laws -90%), in the specific domain of life, namely, in the private sphere construct their boundary lines with primordial codes (Speaking Georgian -90%). 79 % of respondents who consider primordial codes important for their national identity appear to be less favorable to it when acting in the private sphere. Only 74 % of them with approval of interethnic marriages name ancestry as an important factor for their identification. On the other hand, in the public sphere almost 83 % of them consider this primordial code as a significant factor for their identification. By approval of business cooperation with other ethnic groups, this primordial factor of identity appears again redefined. The same picture can be displayed in the case of other identity codes. 71 % of respondents, who name religion as one of the cultural codes as an important factor for their personal identification, change their mind when considering it at the private level of life. Only 58 % of them name it in regards to intermarriage approval. The difference is also remarkable when analyzing the importance of these codes specifically in public life. Only 68% of them consider this code of identity as an important factor when approving interethnic business cooperation. The civic codes, as one of the most definitive factors of identity displayed in the table with 90 % of respondent"s approval, seems to be less appreciated in the private sphere of life. 86% of respondents who approve of a woman"s interethnic marriage name respecting Georgian institutions and laws as a significant aspect of their identity. The results of this empirical examination shows that identity defining codes appear to be operating distinctively in the private and public sphere of everyday life. This proves again the hypothesis of a dual boundary construction mechanism which operates simultaneously in the private and public domain and demonstrates a multidimensional feature of the boundary making process. Qualitative Insights The ethnic narratives and discourses attained through the in-depth interviews and focus groups provided in the following section serve for understanding the mechanism of boundary construction in interethnic settings. In particular, how ethnic, religious and other primordial codes of identity become subordinated to and eliminated by other social and everyday behavioural patterns of differences in light of the public and private spheres of life. The rich qualitative data reveals how this is harmonised within the logic of double boundary construction. 
Double Boundaries: Modern Daughters-in-Law One vivid illustration of the double boundary making mechanism is the case of "modern daughters-in-law". These are the young women, wives and daughters-in-law who break the practices and behavioural patterns traditionally followed by women within the Azeri ethnic group. They mostly represent families with stable and good economic resources. Their spouses are not necessarily educated but have a certain amount of economic and social capital in the inter and out-group milieu. These type of young daughter"s-in-law speak Georgian, are educated and mostly employed in public organizations. It is important to note that they have a high respect and symbolic capital within their own ethnic community. Respondents discuss cases of how they manage to cross ethnic boundaries and at the same time remain recognized as respectful members of the ethnic group. There is a category of young men who prefer to have an educated wife with a stable job. They also buy cars for them, like Georgians do. These young daughters-in-law are appreciated in religious circles, they participate in the traditional celebrations and do everything that they are supposed to do like every daughter-in-law in the ethnic minority community. (Samira,22,Marneuli) In one interview a daughter-in-law describes her feelings towards the interethnic boundary crossing in the private sphere of life, more specifically related to her marriage decision. In the situation of permanent interethnic boundary crossing in the public sphere, the interethnic boundaries are still strictly demarcated in the private sphere of life, namely in the case of partner selection for marriage. I know that my life style is different from most other Azeri girls... I studied at university and have mostly Georgian friends but I never thought I will marry a Georgian boy. I would not do this to my father, to my family. (Narsin,25,Marneuli) The new forms of practices as well as lifestyle are essentially perceived as an elimination of distinctions which create interethnic boundaries. But the crucial thing here is that this boundary crossing process takes place in the public sphere. They are dressed, speaking and behaving like Georgian daughters-in-law, you can hardly find a difference. And this is so different from our lifestyle… Though, they follow all religious and other traditions in their families and house. It cannot be otherwise. (Arzu,24,Marneuli) A boy from a village describes how boundaries are constructed between the two spheres of life in providing examples of young daughters-in-law: When I see them in bank office or in other public places it is so obvious that they are not behaving like most of our women do, they are different, almost like Georgians. But if you visit them at home, for example, during a religious celebration, I am sure you will recognize them as Azeri daughters-in-law. (Arsen 21, Marneuli) These passages articulate explicitly how interethnic boundary lines between ethnic groups, namely between Azeris and Georgians, are eliminated in the public sphere of everyday life. It is inverse as well -how the private sphere preserves and reproduces the symbolic repertoire for interethnic boundary lines between Azeris and Georgians. The discussions reveal that the space factor is closely related to the interethnic boundary construction process. Urban centres appear the crucial site for interethnic boundary crossing through the adoption of new behavioural patterns, modifying the established forms of social praxis etc. 
However, there are daughters-in-law who work in public institutions. This is commonly a category of family which lives in a capital or in Marneuli. Their behaviour is not considered bad. On the contrary. (Abas,26,Marneuli) To the question of whether this category of young women are using the traditional ethnic or religious attributes in their appearance the young Miranda replays: When it is necessary to have a "mantia" on religious celebrations, they will use it but never in a public space every day. (Said,22,Marneuli) The typical milieu and social circle of these young Azeris appears more interethnic in comparison to the traditional one. Relative frequency of contacts with Georgians, friendship with Georgian families etc. leads to interethnic boundary crossing. I know a few young daughters-in-law in my town who are employed in public space, have a friendly circle with Georgians and follow the religious traditions in the family. This happens often . (Hasan,27,Marneuli) From discussions it also becomes evident that economic as well as social status is not definitive for boundary crossing. And, that education and socialisation play decisive roles in this process. There are many wealthy families who have no education. And they are not interested in teaching the Georgian language and educating their children. They are more closed people; they mostly do not have relations with Georgians. (Vagaf,24,Marneuli) The girl from the village discusses how the frequency of relationships and shared social milieu can eliminate interethnic boundaries. My girlfriend was married to a Georgian boy. In their families none of them have education, nor in the boy's family. This girl's family is very close to Georgians, their parents work in the same workplace and know each other well. The religion of this girl is not a problem for the boy, they are familiar with each other's traditions and respect them. (Narmina, 24, Marneuli) Tradition and Education The discussions related to the topic of tradition and education reveal again the double character of the boundary construction mechanism respondents are using in everyday interethnic settings. In the narratives of young minorities, receiving an education (in Georgian) and maintaining the ethnic traditions are not strictly separated from each other as in the case of older generations. They perceive both as coexisting. The main thing is that they do not interfere or exclude each other. In most of the narratives of respondents they do not appear mutually exclusive. Going to the Georgian school or not has nothing to do with the traditions. (Gaiane, 23, Akaltsikhe) In the narratives, the education associated with modern values is mainly considered as a strategic resource for success in the public sphere. Tradition, in contrast, is mostly linked to their family life and the private sphere, which at the same time, seems to be sacred and remains preserved. This coexistence of inclusive and exclusive codes of identity produce dual boundary lines which operate simultaneously in everyday interethnic settings. In these narratives respondents reveal again the double character of the boundary construction mechanism they use in everyday interethnic settings. Education in Georgian as well as a modern way of life shared with their Georgian counterparts is more related to their social life, which enables them adaptation and success in the modern world. 
Religion and other ethnic markers seem to be easily overcome in the social sphere though their existence continues in the private domain. This illustrates how nonlinear the boundary construction mechanism is operating with this double mechanism in interethnic settings. Interethnic boundaries are constructed and crossed, maintained and eliminated at the same time. Respondents are discussing situations of everyday life when they reproduce the existing religious boundaries at the private sphere and cross them in the public sphere (Eller & Coughlan 1993). Respondents are talking about the radical change of attitudes towards education, but at the same time emphasize a modest circle of those who are seeking higher education. Only three from eight in my class have become enrolled in university. Others did not want to continue study. One went to Azerbaijan, one went to Turkey, one married, and so on. But today people have woken up, they understand that education is important. (Sevil, 25, Marneuli) Conclusion The aim of this paper is to open new directions in the nonlinear multidimensional study of boundary making processes and to tease out the significance of the private/public grand dichotomy in the boundary construction process. As crucial organizing categories of social and everyday life, public and private normative order create a unique basis for dual boundary construction processes. Both of these normative orders produce a space for the reinterpretation and modification of each symbolic code of identity that defines the lines of interethnic boundaries. This paper demonstrates that it is not the defining factors (symbolic codes) of identity per se that are responsible to the definition of the boundary lines between the ethnic groups, as it is accurately reflected in the theoretical paradigms of boundary construction, but the specific domains of life in which they are operating. It proves that the public and private level of life generates a particular logic for interpretation of each of the defining factors of ethnicity that produce a duality of the boundary construction mechanism. This evidence-based research highlights the multilevel operational character of the boundary construction mechanism that highlights a new direction in the empirical study of boundaries. The defining factor of situational particularity in group identity formation, especially through the in-group and out-group boundary drawing process, has been broadly explored within the framework of constructivist paradigms (Spicer 1971, Nagel 1991, Waters, M. C. 1990). This empirical study of particular contextual arrangements, that creates a unique basis for boundary definitions between the ethnic groups, extends the knowledge of multidimensional operational character of boundary construction and reveals its dual nature. With the attempt to navigate from the most abstract theorizing schemes of constructivist and moreover, situational paradigms, to the most practical and immediate domains of everyday life, it seeks to contribute in the multidimensional empirical study of the phenomenon (Lamont 1992, Wimmer 2013. The verification of this theoretical statement has been made on the basis of the empirical data related exclusively to the Georgian case. The further extension of the research focus with an aim of strengthening the following theoretical statement can be considered as a next step of this study. 
The list of selected variables can be regarded as another limitation of this research, as it does not cover all dimensions of the phenomenon. The aim of this paper was to demonstrate how interethnic boundary lines are constructed in everyday life. The complex empirical data reveal that interethnic boundaries are more strongly defined by the normative orders characteristic of the public and private spheres of life, which create duality in the boundary reproduction process. Public and private normative categories appear as strong definers of interethnic boundary lines rather than interethnic differences per se. Each of the symbolic codes of identity that define the boundary lines can be reinterpreted and modified following the normative orders operating in the private and public spheres of life. The normative patterns that govern the way the interethnic boundaries are manifested by social actors are predominantly determined by the categorical logic of the public and private orders of everyday life. The study reveals that it is not the composition of symbolic distinctions that defines the boundaries between the in-group and the out-group, but the normative order of everyday life according to which these distinctions are reinterpreted and reflected.
2020-12-21T04:46:58.258Z
2020-12-17T00:00:00.000
{ "year": 2020, "sha1": "c0d92a3ab3105cbb4b546df0f3d3a4c9931778d7", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/res/article/download/0/0/44408/47372", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c0d92a3ab3105cbb4b546df0f3d3a4c9931778d7", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
235906878
pes2o/s2orc
v3-fos-license
Use of cleaved wedge geometry for plan‐view transmission electron microscopy sample preparation A fast, convenient, and easy to perform method for preparing plan‐view transmission electron microscopy (TEM) specimens of brittle materials is proposed. The method is ideal for thin films/coatings and based on obtaining wedge‐shape geometries of the samples via conventional cutting and cleaving followed by gentle focused ion beam (FIB) milling to electron transparency. It enables multiple parallel windows for depth sectioning of the samples and facilitates FIB lift‐out procedure. The method has been successfully applied for preparing high‐quality plan‐view TEM samples for a range of films deposited on Si, SiC, and Al2O3 which significantly enhances throughput and reduces time at the FIB. The method further offers high success rate even for the novice, stable handling and reproducibility, which greatly widens the application of advanced plan‐view TEM studies in material science. | INTRODUCTION TEM is the most established tool for simultaneous acquisition of structural, crystallographic, and compositional analysis of micro-and nanoengineered films at the atomic level (Williams & Carter, 2009). Yet, TEM specimen preparation commonly presents a practical challenge for most samples and frequently limits the level of detail available through TEM characterization. The diversity of accessible preparation methods makes TEM sample preparation both an art and a science. It requires special skills and experience for achieving accurate control of the preparation process (Ayache, Beaunier, Boumendil, Ehret, & Laub, 2010). To provide consistent analysis about the film structure, it is essential to observe the film from two (minimum) perpendicular directions. To rectify these challenges, I have developed a fast and flexible solution for plan-view TEM sample with an emphasis on films deposited on brittle substrates. It combines the simplicity of conventional cleaving with the precision of FIB. The preparation procedure is provided in detail step-by-step, and key factors to ensure success are elucidated. | PREPARATION METHOD The proposed plan-view TEM sample preparation procedure consists of mechanical and ion-beam treatment stages. The description of involved stages, composed of number of steps, is discussed and demonstrated in detail below in Figures 1 and 2. The method is verified using a~2-μm-thick TiB 2+Δ film deposited on the Al 2 O 3 substrate (~500 μm thick), with an intentionally oxidized surface. F I G U R E 1 Mechanical treatment steps: 1.1 sawing (a-e), 1.2 cleaving (f-h), 1.3 mounting (i-k) and 1.4 back polishing (l, m) involved in the proposed preparation method | Mechanical processing The purpose of the mechanical processing is to obtain a sample that possess a wedge-shape geometry from the substrate-side, mounted and immobilized onto a support grid compatible with FIB processing and standard TEM holders. Tools and supplies used to exemplify this procedure included a low-speed diamond wheel saw (Model 650, South Bay Technologies), diamond blade (diameter: 76 mm and thickness: 0.2 mm), a razor blade (thickness: 0.15 mm), glue (Gatan G-1 epoxy), and conventional polishing paper. Optical micrographs of the mechanical processes are shown step-by-step in Figure 1. All in all, the mechanical processing of the sample, as shown below, required approximately 1.5-2 hr of work. 
| Sawing The preparation procedure starts by cutting a segment, ~1.8 mm wide, from the as-received sample using a low-speed wheel saw (Figure 1a,c,e). Prior to cutting, the film side of the sample was glued to a glass slide to protect the film surface during cutting and handling. The length of the strip will define the number of potential cleaved pieces and should not be shorter than ~1.6 mm. The collected segment (~1.8 mm wide) is remounted onto a fresh glass plate. A series of cuts into the sample is performed, at 90° to the section sides, with a separation of ~0.8 mm. The dimension of the segment (1.8 mm × 0.8 mm) is compatible with the grid size onto which the segment will be mounted (see Figure 1i-m). The cuts are performed from the substrate-side and must not penetrate all the way through the sample (Figure 1b,d). The depths of these cuts are of paramount importance. If the cuts are too deep, the pieces will separate while removing the sample from the glass plate. If they are too shallow, it will be challenging to cleave the sample in the next step. Preferably, the cuts should penetrate more than half-way through the substrate thickness (Figure 1d). It is important to note that the sample area under the cuts will eventually become the electron transparent window(s) after the FIB processing step (shown in Figure 2). | Cleaving The section now requires cleaving to obtain pieces exhibiting wedge-shaped geometries at the sample edge. The cleaving procedure is performed by using a razor blade, which is inserted in the produced cut and consecutively bent by applying a force parallel to the film surface (see the schematic shown in Figure 1g). As a result of cleaving, a wedge-shape geometry is produced in the sample area under the cut (Figure 1f-h). In case more sample pieces are produced than needed, they can be used to prepare cross-sectional TEM samples using, for example, the sandwich approach, as the dimensions of the pieces (1.8 × 0.8 mm²) are compatible with the openings of, for example, standard Ti TEM grids (Barna et al., 1999). | Mounting The cleaved sample piece needs to be mounted onto a standard half-moon grid (e.g., Cu) compatible with FIB sample preparation. The applied glue preferably needs to ensure fast hardening, and for this reason G1 epoxy was chosen. A minute amount of G1 epoxy was applied to the central part of the grid using a thin metal wire. The cleaved sample piece was then placed onto the grid with the film side of the sample facing the grid and the wedge sticking out from the grid (Figure 1i-k). The surface of the wedge must remain free from the applied glue. In case the sample gets contaminated with the glue (wrong positioning, excessive glue, or sliding of the piece), it may be cleaned using, for example, acetone and the gluing procedure repeated. For fast hardening of the G1 epoxy, the glass plate holding the grid and sample was placed onto a hotplate heated to ~200 °C for ~2 min. | Polishing The sample requires thinning for easy handling both in the FIB and in the TEM (Figure 1l-m). Prior to back polishing, the sample was fixed onto a glass plate using wax with the film side of the sample facing the glass plate. The substrate side of the sample was subsequently polished with an abrasive diamond paper of 30 μm roughness. The thickness of the sample was reduced from ~500 μm (the initial substrate thickness) to ~150-200 μm. The sample was cleaned using acetone and isopropanol and was then ready for FIB processing.
| Focused ion beam milling The purpose of the FIB milling procedure is to prepare electron transparent window(s) located at the apex of the wedge-shaped sample, ready for TEM examination. The milling shown in the examples below was performed in a FIB instrument (Carl Zeiss cross beam 1540 ESB system). SEM images of the ion-milling preparation steps from the major perspectives are shown in Figure 2. The procedure is straightforward and involves gentle milling of the sample apexes using low milling currents (20-200 pA). This part of the process required 0.5 hr and will scale linearly with the number of electron transparent windows. | Milling from the substrate-side The sample was loaded into the FIB system in the standard way (with the grid standing up). Once the site-specific area was identified along the apex, the sample was aligned through rotation to perform milling from the substrate-side. The sample stage was tilted to 54° (in the Carl Zeiss system, the FIB column is inclined 54° from the SEM column) to reach the configuration in which the FIB beam is parallel to the film surface (Figure 2a-c). As the purpose of the procedure is to obtain a plan-view TEM sample from the as-grown film, the ion milling procedure was set to locally remove the Al2O3 substrate and part of the film in a ~3-μm-wide window (Figure 2d-f). For ion-beam-sensitive samples, prior to ion milling, the targeted area is preferably protected by depositing a strip of, for example, protective Pt. Additionally, the width of the milling window (in this case ~3 μm) could be varied depending on the material. It was noted that some materials produce high-quality lamellas for wider windows (e.g., ~5 μm), while for others, for example, films with high internal stresses, the lamellas bend, and thus the window width should be reduced. The same applies to optimizing the milling angle within a range of a few degrees and the milling currents. | Depth sectioning This approach enables the preparation of multiple windows for depth sectioning of the film, as illustrated by the SEM images shown in Figure 3. Multiple windows are exemplified on a ~3-μm-thick Ti(Al)B2−Δ film deposited on a Si substrate, with an intentionally oxidized surface. [Figure 2 caption: SEM images recorded from the cleaved wedge (a-c) before ion milling, and FIB milling steps 2.1 milling from the substrate-side (d-f) and 2.2 milling from the film-side (f-h) involved in the proposed preparation method.] | Lift-out from the cleaved wedge The plan-view FIB lift-out technique has limited applications, as the procedure for it is much more stringent than for lift-out of cross-sections (Li et al., 2018; Stevie et al., 1998). It suffers from challenges related to, for example, the redeposition of sputtered material (causing problems in detaching the specimen from the bulk sample) together with impeded monitoring of the milling processes underneath the target area, resulting in specimen failures. Further, for such cases as preparing a TEM sample (from a bulk sample) on microelectromechanical systems (MEMS) chips, the FIB lift-out procedure is the only option (Duchamp, Xu, & Dunin-Borkowski, 2014). In light of the current challenges and needs, the cleaved wedge geometry offers unique opportunities for an efficient lift-out approach, as demonstrated in Figure 4. The plan-view FIB lift-out technique is verified using a ~400-nm-thick TiB2−Δ film deposited on an Al2O3 substrate (~500 μm thick) and lifted out onto a MEMS heating chip.
| Milling trench from the substrate-side The sample was loaded into the FIB system in the standard way (with the grid standing up) and tilted to 54°, identical to the Figure 2 configuration. The ion milling procedure was set to locally remove the Al2O3 substrate in a ~20-μm-wide and ~4-μm-broad area (Figure 4a). The milling was performed using a 2 nA (30 kV) current. | Milling frame around lamella from the substrate-side The sample was tilted to 0° and the substrate-side oriented towards the FIB gun through a stage rotation of 180°. The ion milling procedure was set to locally remove the material around the lamella in a frame fashion, which keeps the lamella attached to the bulk through the connecting bridge (Figure 4b). The frame milling was performed using a 2 nA (30 kV) current. | Milling electron transparent window The sample was tilted back to 54°, the same configuration as in step 4.1. To finalize the pre-lift-out procedure, an electron transparent window was obtained by milling a ~3-μm-wide window at the right end of the lamella (opposite to the bridge) from both the substrate- and film- (rotated 180°) surface sides (Figure 4c, identical to Figure 2d-h). [Figure 3 caption: SEM images recorded from the multiple windows with varying depth, indicated by the arrows, obtained using the proposed preparation method.] The milling was performed initially using 50 pA (30 kV) currents, while the final cleaning was carried out using 20 pA (30 kV) currents. | Lift-out onto MEMS chip For executing the lift-out procedure, the sample needs to be reloaded into the FIB system with the grid lying down (instead of standing up) and the substrate-side facing the SEM column. The manipulator is inserted and welded to the lamella, followed by cutting the connecting bridge. The lamella is transferred onto the lying-down MEMS chip and welded to it (Figure 4d). Finally, the lamella is cut loose from the needle and the procedure is complete (Figure 4e). It is important to note that, although the electron transparent window was obtained before the actual lift-out, it provides high-quality STEM images after complete processing (Figure 4f). To achieve this, the electron transparent window should not be imaged with the FIB/SEM during steps 4.4-4.5, as this might result in contamination. Alternatively, the lift-out procedure can be attempted after milling the frame around the lamella (4.2) and reloading the sample, while the final milling can be performed after completing the sample transfer onto the MEMS chip. In the case that plan-view FIB lift-out from the cleaved wedge onto a standard (e.g., half-moon Cu) grid is desired, the lift-out procedure needs to be executed after milling the frame around the lamella (4.2), without the need to reload the sample. | RESULTS AND DISCUSSION The resulting plan-view TEM sample characteristics, together with the microstructure of the film, were explored using scanning TEM high-angle annular dark-field (STEM-HAADF) imaging and selected area electron diffraction (SAED). Microscopy was performed in the double-corrected Linköping FEI Titan3 60-300, operated at 300 kV. Figure 5 presents a series of STEM images, with increasing magnification, acquired from the plan-view TEM sample shown in Figure 2. The width and height of the lamella were estimated at ~3 and ~5 μm, respectively. The homogeneous STEM contrast within the lamella region indicates no bending artifacts (consistent with the SEM observations in Figure 2g) or pronounced thickness variations. Figure 5b shows a higher-magnification STEM image from the TiB2+Δ film.
The lamella thickness was rather uniform, although monotonically increasing while moving away from the edge, as judged by the STEM intensity increase, indicated as a line profile in Figure 5b. Additionally, a thin amorphized layer was present on top of the lamella, which is an artifact of the milling process and is typical for the employed approach. Although it does not affect the structure below, it can be avoided by depositing a protective layer before the ion milling procedure. The microstructure of the TiB2+Δ film is easily accessed and exhibits a dense nanocolumnar grain structure. The SAED pattern shows that the film is constituted of the TiB2 phase with a pronounced (0001) texture. In Figure 5c, the high-resolution STEM image revealed that the nanocolumns are composed of subcolumns, which are separated by dark-contrast regions attributed to the high boron content typically observed in overstoichiometric TiB2+Δ films (Mayrhofer, Mitterer, Wen, Greene, & Petrov, 2005). Figure 5b,c reveals the atomic-scale characteristics of the film and proves the proposed preparation method to be capable of delivering a high-quality specimen for plan-view TEM analysis. In fact, the maturity of the proposed preparation method is verified by its successful application in a handful of studies elsewhere (Bakhit, Palisaitis, Thörnberg, et al., 2020; Bakhit et al., 2021; Dorri et al., 2021; Mockute et al., 2019; Mockuté et al., 2017; Nedfors et al., 2016; Nedfors et al., 2019; Nedfors et al., 2020; Novoselova et al., 2018; Palisaitis et al., 2021; Thörnberg et al., 2020). [Figure 4 caption: SEM images showing the plan-view FIB lift-out steps from the cleaved wedge: 4.1 milling trench from the substrate-side (a), 4.2 milling frame around lamella (b), 4.3 milling electron transparent window from the substrate- and film-sides (c), 4.4 transfer to the MEMS chip (d), and 4.5 final sample (e); high-magnification STEM images recorded from the as-prepared film.] Further, it was observed that for films with a weak adhesion to the substrate, the proposed sample cleaving (Figure 1) produces the cleaved wedge with the substrate-free film sticking out from the wedge. In such cases, the FIB milling procedure is even more time-efficient, as it enables the preparation of electron transparent window(s) without the need to mill the substrate. Depth sectioning is commonly neglected in plan-view TEM investigations. The applied approach facilitated the multiple-window sectioning of this partly oxidized film for decoding the oxidation mechanisms in understoichiometric Ti(Al)B2−Δ films (see Figure 6), as further described elsewhere (Bakhit, Palisaitis, Thörnberg, et al., 2020). A range of films deposited on Si, SiC, and Al2O3 brittle substrates have been successfully prepared for plan-view TEM investigations by the proposed preparation method. Plan-view samples of films as thin as ~100 nm have been successfully realized; however, additional attention must be paid during milling of thin-film samples, and milling from the film-side could be omitted if needed. [Figure 5 caption: STEM images recorded from a plan-view TEM sample shown in Figure 2: (a) an overview image of the electron transparent window located in the center, (b) an overview image displaying the coating's microstructure, with SAED shown as an inset, and (c) a high-resolution image from the coating, with a higher-magnification image shown in the inset. Figure 6 caption: SAED, an overview and higher-magnification STEM images recorded from the multiple windows with varying depth shown in Figure 3.]
The relatively large sample dimensions (1.8 × 0.8 × 0.15 mm³) and the immobilization onto the half-moon grid reduce the mechanical stress and support the "curved-in" electron transparent windows after the sample preparation. This, in turn, ensures the TEM sample's rigidity and minimizes the failure risk. In the very rare case of sample separation from the grid, the sample can easily be retrieved and glued back onto the grid. If a window gets broken, a new one can be easily produced from the existing piece. Finally, the plan-view FIB lift-out of lamellae from the cleaved wedge-shaped apexes was successfully demonstrated for preparing TEM samples on MEMS heating chips (Figure 4). The cleaved wedge geometry eases the lift-out approach due to the minute amount of material that needs to be removed (before the lift-out) and the uninhibited monitoring of the milling process. DATA AVAILABILITY STATEMENT Data available on request from the authors.
2021-07-16T06:16:33.094Z
2021-07-15T00:00:00.000
{ "year": 2021, "sha1": "2129dd65c69a2cb5746acfac784f2731d35e4329", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jemt.23876", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "cf5e902b37ae2032ea4449f1b0e97b55dacd93a2", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
221017139
pes2o/s2orc
v3-fos-license
Early Transcriptomic Changes upon Thalidomide Exposure Influence the Later Neuronal Development in Human Embryonic Stem Cell-Derived Spheres Stress in early life has been linked with the development of late-life neurological disorders. Early developmental age is potentially sensitive to several environmental chemicals such as alcohol, drugs, food contaminants, or air pollutants. The recent advances using three-dimensional neural sphere cultures derived from pluripotent stem cells have provided insights into the etiology of neurological diseases and new therapeutic strategies for assessing chemical safety. In this study, we investigated the neurodevelopmental effects of exposure to thalidomide (TMD); 2,2′,4,4′-tetrabromodiphenyl ether; bisphenol A; and 4-hydroxy-2,2′,3,4′,5,5′,6-heptachlorobiphenyl using a human embryonic stem cell (hESC)-derived sphere model. We exposed each chemical to the spheres and conducted a combinational analysis of global gene expression profiling using microarray at the early stage and morphological examination of neural differentiation at the later stage to understand the molecular events underlying the development of hESC-derived spheres. Among the four chemicals, TMD exposure especially influenced the differentiation of spheres into neuronal cells. Transcriptomic analysis and functional annotation identified specific genes that are TMD-induced and associated with ERK and synaptic signaling pathways. Computational network analysis predicted that TMD induced the expression of DNA-binding protein inhibitor ID2, which plays an important role in neuronal development. These findings provide direct evidence that early transcriptomic changes during differentiation of hESCs upon exposure to TMD influence neuronal development in the later stages. Introduction There has been worldwide concern over the increasing number of patients with depression and children with developmental disorders [1,2]. Recent studies suggest that the increasing prevalence of developmental disability in children is due to not only genetic factors but also some environmental factors [3]. Environmental factors including exposure to chemicals such as pesticides and air pollutants during the developmental age could play a role in the development of neurodevelopmental diseases [4,5]. In addition, the developing brain has been shown to be more sensitive to environmentally hazardous chemicals than the adult brain [6,7]. Recent studies indicate that in vitro models are starting to replace traditional in vivo models for evaluation of the effects of external substances on fetuses and for the assessment of neurotoxicity to chemical exposure; this alternative approach can bridge the mechanistic gap between humans and animals and can be used to elucidate new therapeutic approaches [8][9][10]. Therefore, systems using human embryonic stem cells (hESCs) and human induced pluripotent stem cells (hiPSCs) have been developed to directly predict human risks; the development of these systems would provide important information to elucidate the neurodevelopmental toxicities of numerous environmental chemicals. We previously developed in vitro models using hESCs for studying the neurodevelopmental toxicities caused by various environmental pollutants [11][12][13]. Our previous work also showed that thalidomide (TMD) inhibits the development of dopaminergic neurons from neuronal progenitor cells [14]. Anti-depressant-like effects of maternal exposure to TMD was also observed in mice [15,16]. 
Early exposure to TMD in the pregnant period caused developmental abnormality in the human brain [17,18]. TMD is currently used to treat multiple myeloma, while it is a known teratogen and neurodevelopmental toxicant. Although it was recently shown that the degradation of spalt-like transcription factor 4 (SALL4) may be an essential component of TMD-induced teratogenicity that causes severe birth defects in the fetus [19,20], this mechanism is not enough to explain developmental neurotoxicity of TMD observed in in vitro and in vivo experiments. In addition, 2,2 ,4,4 -tetrabromodiphenyl ether (BDE-47), bisphenol A (BPA), and 4-hydroxy-2,2 ,3,4 ,5,5 ,6-heptachlorobiphenyl (4OH-PCB187) were also included in this study in comparison to TMD, since they show a high association with neuronal developmental disorders in epidemiologic studies and in animal and cellular experiments [21][22][23][24]. Grandjean and Landrigan suggested polybrominated diphenyl ethers (PBDEs) as one group of newly recognized developmental neurotoxicants including organophosphate pesticides, herbicides, fungicides and manganese. Bisphenol A is also suggested as another suspected developmental neurotoxicant [25]. 4OH-PCB187 is one of main metabolites for PCBs and they concentration was found in blood at a higher concentration, rather than other congeners [26]. PCB and their metabolites are very similar to the structures of PBDEs and their hydroxyl metabolites [27]. Collectively, to understand the molecular events underlying the neurodevelopmental effects of environmental chemicals including drugs, endocrine disruptors, and flame retardants, we have studied the effects of TMD and three environmental pollutants including BDE-47, BPA, and 4OH-PCB187 on global gene expression during neurosphere formation and during the following differentiation into neuronal cells. Morphological Analysis of the Effect of Chemical Exposure at the Early Stage of Development on the Neuronal Differentiation from hESC-Derived Spheres In order to investigate the neurodevelopmental effects, we generated a protocol for sphere formation from hESCs and differentiation to neuronal cells ( Figure 1A). In this model, we confirmed the differentiation of spheres into neuronal cells on Day 28 by immunostaining with anti-microtubule-associated protein 2 (MAP2) and anti-tyrosine hydroxylase (TH) antibodies ( Figure S1A). Next, we exposed the spheres to each chemical, and examined the effects of chemical exposure on differentiation potency. Our results showed that TMD significantly increased the total cell numbers, and the presence of MAP2-positive and TH-positive neuronal cells in a dose-dependent manner ( Figure 1B and Figure S1A). Slight induction of TH-positive cells but not MAP2-positive cells was observed with exposure to 10 −8 M BDE-47 ( Figure 1C). No adverse effects were observed upon exposure to BPA and 4OH-PCB187 ( Figure 1D,E, respectively). Furthermore, the normalized MAP2-positive area and TH-positive area with the total cell number showed that the neuronal differentiation-promoting effect was only observed with TMD ( Figure S1B). However, there was no significant change in TH/MAP2 ratio, which may be due to the promotion of nervous system differentiation or the expansion of the sphere itself. At that expansion stage, the TMD-treated cells in spheres may not be involved in promoting or suppressing the differentiation of TH. 
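A minimal sketch of how the morphological readouts described above could be tabulated is shown below (hypothetical numbers and column names; the normalization of marker-positive areas by total cell count and the TH/MAP2 ratio follow the description in the text, not the authors' actual analysis script):

```python
import pandas as pd

# Hypothetical per-condition image quantification: stained areas (in pixels)
# and total cell counts from nuclear staining (e.g., counted nuclei).
measurements = pd.DataFrame({
    "condition":   ["vehicle", "TMD low", "TMD high"],
    "map2_area":   [12000, 21000, 30000],   # MAP2-positive area
    "th_area":     [3000, 5200, 7600],      # TH-positive area
    "total_cells": [800, 1000, 1250],       # total cell number
})

# Normalize marker-positive areas by the total cell number, and form the
# TH/MAP2 ratio used to ask whether dopaminergic fate is specifically favored.
measurements["map2_per_cell"] = measurements["map2_area"] / measurements["total_cells"]
measurements["th_per_cell"] = measurements["th_area"] / measurements["total_cells"]
measurements["th_map2_ratio"] = measurements["th_area"] / measurements["map2_area"]

print(measurements[["condition", "map2_per_cell", "th_per_cell", "th_map2_ratio"]])
```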
The increase in the number of neuronal cells upon TMD exposure is not consistent with another study that investigated high-concentration exposure to this chemical [28]. These results indicate that early in vitro exposure to TMD promotes the neuronal development of hESCs at a later stage, in contrast to previous reports in which hESC-derived neuronal progenitor cells were exposed at a later stage [14]. The increase in TH-positive neuronal cells upon BDE-47 exposure is consistent with another study in which a similar dose range of BDE-47 slightly increased the expression of TUBB3, which is associated with dopaminergic neurons [29]. These results highlight the importance of evaluating the risk of environmental chemicals, such as their effect on neuronal development, in the hESC-derived sphere model at realistic blood concentration levels and timings of exposure. Transcriptional Analysis of the Effect of Chemical Exposure at the Early Stage of Neuronal Development Derived from hESCs Next, to explore the molecular mechanism underlying the effects of chemical exposure on the differentiation of hESCs, we performed microarray-based transcriptome analysis to examine the global gene expression changes at the sphere stage (on Day 7). The chemical concentration for the microarray analysis was determined according to the results of the morphological analysis. For BPA and PCB, the highest concentrations were chosen since there were no significant effects at any concentration. For TMD and BDE, the lowest observed effect concentrations were chosen. Each chemical induced changes in gene expression at a comparable level. For this comparison, only genes with a fold change of more than two compared to the vehicle control were considered ( Figure 2A). Further, exposure to the different chemicals resulted in similar functional annotations of the differentially expressed genes ( Figure 2B). The percentage of differentially expressed transcription factors was approximately 9% for all chemicals ( Figure 2B). These results suggest that our experimental strategy was able to capture the comparable impact of the different chemical exposures on transcriptomic changes. To further understand the functional differences upon each chemical exposure, the differentially expressed genes were imported into the IPA program. Canonical pathway analysis showed the strongest impact of TMD exposure in regulating multiple biological pathways as compared to the other chemicals ( Figure 2C). In accordance with the findings of the morphological analysis ( Figure 1B), the gene expression of MAP2 was selectively induced upon TMD exposure ( Figure S1C). In addition, among the differentially expressed genes induced upon TMD exposure, we could identify an enrichment of genes encoding molecules regulating neuronal functions, such as "nNOS Signaling in Neurons", "Extrinsic Prothrombin Activation Pathway", "Circadian Rhythm Signaling", and "Synaptogenesis Signaling Pathway" ( Figure 2C). Gene expression profiling showed that TMD exposure selectively modulated the expression of CACNB4, CDH6, CPLX3, CREB5, EPHB1, GRIA3, GRIA4, GRIN2A, KALRN, NRXN1, PRKCD, SYT15, and SYT4 ( Figure 2D and Figure S2).
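The two-fold change screen against the vehicle control described above can be illustrated with a minimal sketch (hypothetical pandas code, not the authors' actual pipeline; the file name, column layout, and the two-fold cutoff applied in both directions are assumptions for illustration):

```python
import pandas as pd

# Hypothetical input: normalized microarray intensities (genes x samples),
# with one vehicle-control column and one column per chemical exposure.
expr = pd.read_csv("sphere_day7_expression.csv", index_col="gene")

def differentially_expressed(treated: str, control: str = "vehicle",
                             cutoff: float = 2.0) -> pd.DataFrame:
    """Return genes whose fold change vs. the vehicle control exceeds the
    cutoff in either direction (>= cutoff or <= 1/cutoff)."""
    fc = expr[treated] / expr[control]
    selected = fc[(fc >= cutoff) | (fc <= 1.0 / cutoff)]
    return selected.to_frame(name="fold_change").sort_values("fold_change")

deg_tmd = differentially_expressed("TMD")
deg_bde47 = differentially_expressed("BDE47")

# Conceptually, a chemical-specific list keeps genes responding to TMD
# but to none of the other chemicals (shown here for one comparison only).
tmd_not_bde47 = deg_tmd.index.difference(deg_bde47.index)
print(len(deg_tmd), "TMD DEGs;", len(tmd_not_bde47), "not shared with BDE-47")
```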
TMD-specific genes (377 candidates) were selected after comparing TMD with each of the three other chemicals (Figure 3A and Table S1) and were imported into IPA for pathway analysis. Interestingly, top network function analysis showed that a wide range of signaling pathways associated with neurological disease and embryonic development were specifically affected by TMD exposure (Figure 3B). The most highly populated network, entitled "Neurological Disease, Organismal Injury and Abnormalities, Cell Morphology", indicated a central role of the extracellular signal-regulated kinases ERK1/2 in controlling the transcriptional response of the hESC-derived spheres to TMD exposure (Figure 3C). Further upstream causal network analysis confirmed that TMD might exert its neurodevelopmental effect via suppression of ERK1/2 activation (Figure S3). In accordance with our findings in the hESC-derived sphere model, a previous study in a human neural stem cell model showed that inhibition of ERK by chemical inhibitors promoted neuronal generation, especially of TH-positive neurons [30].
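To make the selection criterion above concrete, the following is a minimal Python sketch (not the authors' actual pipeline) of how TMD-specific genes could be filtered from a table of expression ratios; the gene names and values in the toy data frame are invented, and the signed fold-change convention (ratios below 1 mapped to −1/ratio) follows the IPA rule described later in the Methods.

```python
import pandas as pd

# Hypothetical expression ratios (treatment / vehicle control), one column per chemical.
expr = pd.DataFrame({
    "TMD":        [3.2, 2.5, 2.8, 2.4, 1.1],
    "BDE47":      [1.3, 1.1, 1.5, 1.2, 1.0],
    "BPA":        [1.2, 1.4, 1.1, 1.3, 0.9],
    "4OH_PCB187": [1.1, 1.2, 1.3, 1.1, 1.0],
}, index=["ID2", "TET2", "HHEX", "NRP1", "ACTB"])

# IPA-style signed fold change: ratios >= 1 stay as they are, ratios < 1 become -1/ratio.
signed = expr.where(expr >= 1, -1.0 / expr)

# TMD-specific: |fold change| > 2 for TMD, but <= 2 for every other chemical.
tmd_specific = signed[(signed["TMD"].abs() > 2) &
                      (signed.drop(columns="TMD").abs() <= 2).all(axis=1)]
print(tmd_specific.index.tolist())  # ['ID2', 'TET2', 'HHEX', 'NRP1']
```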
To further understand the neurodevelopmental effect of TMD, its effect on biological pathways related to embryonic development was evaluated according to the activation z-score, a statistical measure in IPA that can be used to predict the activation state (activated or inhibited) of a biological molecule or function based on a statistically significant pattern match of up- and down-regulated gene expression [31]. Interestingly, the function "Differentiation of embryonic cells" was predicted to be activated by TMD based on the expression of enriched genes such as FRZB, TET2, COL12A1, ID2, NRP1, HHEX, IL6ST, RUNX1, KDM2B, NPY1R, FOXA2, NODAL, and ANGPT1 (Figure 3D,E). (Figure 3D,E caption: the biological pathways of embryonic development generated in IPA were ranked by z-score, which is used to identify likely regulating molecules and to predict their activation state (activated or inhibited); panel E shows the expression of enriched genes involved in the "Differentiation of embryonic cells" pathway.)
Integrated Network Analysis of Transcriptional and Morphological Changes during Neuronal Differentiation from hESC-Derived Spheres
Finally, integrated network analysis was performed to determine the connection between the genetic and morphological actions of TMD during neuronal differentiation from hESC-derived spheres. Candidate feature genes involved in the function "Differentiation of embryonic cells" (Figure 3E), with expression patterns correctly predicted in comparison with previous findings in the Ingenuity Knowledge Base, were selected for network analysis (Table S2). As a result, the expression data of four genes (TET2, HHEX, ID2 and NRP1) and two morphological measures of TH and MAP2 staining after chemical exposure were used for network analysis with three approaches: correlation-based network analysis, Bayesian network analysis and physiological network analysis. The correlation network was generated by calculating the Pearson correlation coefficient between each pair of genetic and morphological parameters (Figure 4A). The four genes were correlated with each other, as were the two morphological measures. For the relationships between the genetic and morphological parameters, NRP1 was correlated with TH and MAP2, while HHEX was correlated with MAP2. However, a limitation of correlation networks is that they can be confounded by indirect relationships. In contrast, methods that infer the structure from the data as a whole, such as Bayesian networks, include only direct effects and are considered more biologically interpretable because indirect correlations are removed [31]. The integrative Bayesian network showed that the node of the transcriptional regulator ID2 was located at the top of the network hierarchy and was positively related to MAP2 (Figure 4B and Figure S4). The enzyme TET2 was not connected to the network, suggesting that the network inferred with the Bayesian algorithm was able to remove indirect relationships. Finally, to verify the biological relevance of the inferred connections, a physiological network was generated based on findings from previous studies using the Ingenuity Knowledge Base. In accordance with the Bayesian network, the physiological network showed no direct relationships between TET2 and the functions "Neurogenesis" or "Differentiation of embryonic stem cells", whereas ID2 was connected to "Neurogenesis" (Figure 4C).
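The correlation step of such an integrated analysis reduces to pairwise Pearson coefficients between the gene expression values and the morphological read-outs; a minimal sketch, with entirely invented numbers and an arbitrary 0.8 threshold for retaining edges, might look as follows.

```python
import pandas as pd

# Hypothetical per-condition values: four genes and two morphological read-outs
# (rows = exposure conditions; all numbers invented for illustration).
data = pd.DataFrame({
    "TET2": [1.0, 1.8, 1.2, 0.9, 1.1],
    "HHEX": [1.0, 2.1, 1.1, 1.0, 1.2],
    "ID2":  [1.0, 2.6, 1.3, 1.1, 1.0],
    "NRP1": [1.0, 2.2, 1.4, 1.2, 1.1],
    "TH":   [1.0, 1.9, 1.2, 1.0, 1.0],
    "MAP2": [1.0, 2.4, 1.1, 1.0, 1.1],
}, index=["control", "TMD", "BDE47", "BPA", "4OH_PCB187"])

# Pairwise Pearson correlation coefficients between all measured variables.
corr = data.corr(method="pearson")

# Keep strong pairs (|r| >= 0.8; threshold arbitrary here) as candidate network edges,
# listing each unordered pair once.
edges = corr.where(corr.abs() >= 0.8).stack()
edges = edges[edges.index.get_level_values(0) < edges.index.get_level_values(1)]
print(edges)
```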
It has been reported that induction of ID2 gene expression, which was also observed in the hESC-derived spheres after TMD treatment in the present study, increased the differentiation of Tuj1- and GFAP-positive neurosphere cells [32]. The three environmental chemicals other than TMD caused very limited or no significant changes in neuronal differentiation in this model. BDE-47 has recently been reported to have inhibitory effects on human ESC-derived neuronal cells in a model similar to ours [21,33]. Similarly, BPA has been found to have suppressive effects on neuronal stem cells [34,35]. Regarding 4OH-PCB187 and the parent PCB compounds, epidemiological studies suggest a negative relationship with IQ and an association with ADHD [36], but no in vitro report has been available; to our knowledge, this is the first report of exposure of hESCs to 4OH-PCB187.
Chemical Exposures, Culture and Neuronal Differentiation of Human ESCs
Dimethyl sulfoxide (DMSO) and BPA were obtained from Sigma-Aldrich Co. (St. Louis, MO, USA); TMD was obtained from Wako Pure Chemicals (Tokyo, Japan); 4OH-PCB187 and BDE-47 were obtained from AccuStandard (New Haven, CT, USA). DMSO was used as the primary solvent for all chemicals, and the final concentration of DMSO in the media did not exceed 0.1% (v/v). Human embryonic stem cells (KhES-3) were maintained and differentiated as described previously [10,11]. The hESC line KhES-3 (XY genotype) was provided by Dr. Hirofumi Suemori, Research Center of Stem Cells, Institute for Frontier Medical Science, Kyoto University, according to the NIES institutional guidelines for the use of human ES cells in research [37]. All experiments using hESCs were approved by the ethics committees of the National Institute for Environmental Studies and the University of Tokyo in accordance with the guidelines of the Japanese Ministry of Education, Culture, Sports, Science, and Technology. The procedures for the maintenance of hESCs were performed as described previously [37][38][39]. MEFs were used as feeder cells for the culture and passage of the hESC line KhES-3 in DMEM/F12 medium containing 20% KSR, 100 µM NEAA, 2 mM l-glutamine, 100 µM 2-ME, and 5 ng/mL bFGF. After five passages with additional MEFs, the MEFs were eliminated by a brief enzymatic treatment, and the hESC colonies left on the dishes were harvested.
The hESCs (purity > 99%) were seeded at 9.0 × 10³ cells/well in medium containing DMEM/F12, 20% KSR, 100 µM NEAA, 2 mM l-glutamine, 100 µM 2-ME, and 10 µM of the ROCK inhibitor Y-27632 (Day 1). The generated EBs were cultured for 7 days in this medium, which was exchanged every two days, followed by growth in the medium without Y-27632 for two days. The growing EBs were then cultured for 2 additional days in NIM containing DMEM/F12:Neurobasal® Medium (1:1), N-2 Supplement, B-27® Supplement, GlutaMAX™-I, and Penicillin-Streptomycin to promote neuronal differentiation. The EBs were then re-plated onto O/L-coated 24-well plates at 20 EBs/well and cultured for 7 days in neuronal proliferation medium (NPM) containing DMEM/F12:Neurobasal® Medium (1:1), two-fold concentrations of N-2 Supplement, two-fold concentrations of B-27® Supplement, GlutaMAX™-I, Penicillin-Streptomycin, and 20 ng/mL bFGF; the medium was exchanged every 3 days. In brief, hESCs were allowed to form embryoid bodies (EBs) in round-bottom 96-well plates (Falcon 351177), and the EBs were seeded onto ornithine/laminin (O/L)-coated 24-well plates to promote neuronal differentiation, with sequential exchange of the appropriate neuronal differentiation media every other day. The schedules for sphere formation and neuronal differentiation of hESCs are summarized in Figure 1A. Briefly, hESCs were allowed to form spheres in the round-bottom 96-well plates for 7 days, and cells were exposed to each chemical from Day 3 to Day 7 during sphere formation. The doses used here were within the clinical dose range for TMD, and within the blood, urinary or breast milk concentrations reported in population studies for BPA, 4OH-PCB187 and BDE-47 [40][41][42][43][44][45]. The spheres were then seeded onto poly-ornithine/laminin 111-coated 24-well plates to promote proliferation and neuronal differentiation for 21 days, and the medium was refreshed every 3 days, as reported previously [11].
Immunocytochemistry and Image Analysis
Differentiated cells on Day 28 were immunolabeled with anti-MAP2 (M4403, 1:200, Sigma-Aldrich) or anti-TH (AB152, 1:200, Millipore, Burlington, MA, USA) antibodies, followed by staining with an Alexa 546-conjugated secondary antibody (1:1000, Invitrogen, Carlsbad, CA, USA). Nuclei were stained with Hoechst 33342 solution (Dojindo, Tokyo, Japan). The areas of the fluorescent signals were analyzed using an IN Cell Analyzer 1000 (GE Healthcare UK Ltd., Buckinghamshire, UK), as previously reported [11,46]. In brief, immunofluorescent images were automatically acquired with the IN Cell Analyzer to quantify the differences in cellular nuclei and cellular phenotypes. Fluorescent microphotographs (12 fields per well of a 24-well plate) were obtained automatically. Hoechst-positive nuclei and MAP2-positive or TH-positive neurites were recognized using IN Cell Developer software. Apoptotic or necrotic cells were excluded based on morphological features of the Hoechst-positive nuclei, such as nuclear fragmentation and chromatin condensation; viable and apoptotic cells were classified according to nuclear size and nuclear fluorescence signal density. The software was also used to identify neurons that stained positive with both the nuclear stain and MAP2 antibodies, and to characterize the neurites extending from these cells. The neurite length of each identified cell was measured, and data were expressed as the mean neurite area per cell.
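The per-well quantification described above comes down to simple ratios; as a rough illustration only (the field values below are invented and the actual IN Cell Developer output format will differ), the mean marker-positive area per cell could be computed as follows.

```python
import numpy as np

# Invented measurements for one well: 12 imaged fields.
hoechst_nuclei = np.array([210, 195, 230, 205, 220, 198, 240, 215, 225, 200, 210, 190])
map2_area_um2  = np.array([5200, 4800, 6100, 5000, 5900, 4700, 6400, 5300, 6000, 4900, 5100, 4600])

# Mean MAP2-positive area per cell for the well, pooling all fields.
map2_per_cell = map2_area_um2.sum() / hoechst_nuclei.sum()

# Express relative to the mean of the vehicle-control wells, as in the study.
control_mean_um2_per_cell = 24.0  # hypothetical control value
relative_map2 = map2_per_cell / control_mean_um2_per_cell
print(f"MAP2 area per cell: {map2_per_cell:.1f} um^2 ({relative_map2:.2f}x control)")
```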
Cells were cultured and stained in six independent wells for each condition; 12 fields per well were observed and the values calculated. Statistical analyses of cellular morphology were performed with Excel statistics (Microsoft 2016). All data were expressed relative to the means of the control groups, and all results are presented as mean ± standard error (SE). Data were analyzed by one-way analysis of variance (ANOVA), followed by Fisher's least significant difference (LSD) post hoc test, to compare the effect of each dose with the DMSO control group. p-values less than 0.05 were considered statistically significant.
Microarray Gene Expression Profiling
Four spheres from each experimental group were pooled separately, and RNA was isolated from each group on Day 7. To detect changes in gene expression in the spheres after chemical exposure, microarray analyses were performed on these RNA samples. Total RNA from spheres was isolated with the RNeasy Mini Kit (Qiagen, Hilden, Germany). Fifty nanograms of total RNA pooled from three independent samples was fluorescently labeled and hybridized to Agilent 8 × 60 K Human Genome Microarrays (SurePrint G3 Human GE 8 × 60 K Ver. 2.0, one-color; Agilent Technologies Inc., Santa Clara, CA, USA). The arrays were hybridized and scanned in accordance with the manufacturer's directions at the facility of Hokkaido System Science Co., Ltd. (Sapporo, Japan), as reported previously [10]. The raw data were filtered to remove entities with signal intensity values in the lowest 20th percentile and then filtered by flag values to remove entities that were not detected, using GeneSpring GX 12.10 software (Agilent Technologies Inc., Santa Clara, CA, USA). The microarray data were submitted to Gene Expression Omnibus (GEO) and registered as GSE151239 [47].
Knowledge-Based Pathway Analysis and Network Analysis
To explore the biological interpretation of the transcriptome data, canonical pathway analysis, disease and bio-function annotation, and upstream causal network analysis were performed using the knowledge-based functional analysis software Ingenuity Pathways Analysis (IPA, Ingenuity Systems, Redwood City, CA, USA). In the IPA analysis, the fold change is a ratio (case/control); up-regulated genes take values between 1 and +infinity, whereas a ratio (x) between 0 and 1 is converted to −1/x, so that down-regulated genes take values between −infinity and −1. Correlations between gene expression and morphological measures were calculated using the Pearson correlation coefficient method in R Bioconductor (https://www.bioconductor.org/). Bayesian network analysis was performed with the TAO-Gen algorithm using the web-based RX-Taogen software (http://extaogen.nies.go.jp/); the replica exchange time was set to 20,000, as reported previously [39]. The networks were visualized using the Gephi software (https://gephi.org/).
Conclusions
The recent technology of three-dimensional neuronal sphere models derived from pluripotent stem cells has provided new insights into the etiology of neurological diseases, new therapeutic strategies, and new approaches to assessing chemical safety. In this study, we explored the comparative effects of TMD and of environmental chemicals such as BDE-47, BPA, and 4OH-PCB187, at realistic blood concentration levels, on the neuronal differentiation of hESC-derived spheres.
In conclusion, exposure to TMD, but not to the other chemicals, at the early stage of development influenced the neuronal differentiation of MAP2-positive and TH-positive neuronal cells from hESC-derived spheres. Transcriptomic analysis and functional annotation at the early stage of neural differentiation revealed TMD-specific induction of genes associated with the ERK and synaptic signaling pathways. Computational network analysis of the genetic and morphological actions of TMD during neuronal differentiation predicted that TMD-induced expression of the DNA-binding protein inhibitor ID2 plays an important role in neuronal development from hESC-derived spheres. These findings provide direct evidence that chemical exposure-induced early transcriptomic changes during hESC differentiation influence neuronal development at later stages.
Supplementary Materials: Figure S1: Effects of chemical exposure on cellular morphologies and MAP2 gene expression during the neuronal differentiation from human embryonic stem cell (hESC)-derived spheres. Figure S2: Synaptogenesis Signaling Pathway. Figure S3: Upstream causal network. Figure S4: Representative network with probability values in the Bayesian network analysis; networks were generated with the TAO-Gen algorithm using the web-based RX-Taogen software. Table S1: List of the TMD-specific genes selected after comparison between TMD and each of the three other chemicals, used for IPA pathway analysis. Table S2: Prediction of expression patterns of genes involved in the function "Differentiation of embryonic cells" by IPA.
Conflicts of Interest: The authors declare no conflict of interest.
How can we minimize the risks by optimizing the patient's condition shortly before thoracic surgery?
The "moderate-to-high-risk" surgical patient is typically older, frail, malnourished, suffering from multiple comorbidities and presenting with an unhealthy lifestyle such as smoking, hazardous drinking and physical inactivity. Poor aerobic fitness, sarcopenia and "toxic" behaviors are modifiable risk factors for major postoperative complications. The physiological challenge of lung cancer surgery has been likened to running a marathon. Therefore, preoperative patient optimization or "prehabilitation" should become a key component of improved recovery pathways to enhance general health and physiological reserve prior to surgery. During the short preoperative period, patients are more receptive and motivated to adhere to behavioral interventions (e.g., smoking cessation, weaning from alcohol, balanced food intake and active mobilization) and to follow a structured exercise training program. Sufficient protein intake should be ensured (1.5–2 g/kg/day) and nutritional deficits should be corrected to restore muscle mass and strength. Currently, there is strong evidence supporting the effectiveness of various modalities of physical training (endurance training and/or respiratory muscle training) to enhance aerobic fitness and to mitigate the risk of pulmonary complications while reducing the hospital length of stay. Multimodal interventions should be individualized to the patient's condition. These bundles of care are more effective than single or sequential interventions owing to the synergistic benefits of education, nutritional support and physical training. An effective prehabilitation program is necessarily patient-centred and coordinated among health care professionals (nurses, primary care physician, physiotherapists, nutritionists) to help the patient regain some control over the disease process and improve the physiological reserve to sustain surgical stress.
Introduction
In thoracic cancer surgery, treatment modalities are usually discussed at Tumor Board meetings where information regarding patient history, comorbidities and quality of life, as well as tumor extent, pulmonary function and laboratory results, is presented and shared between oncologists, surgeons, anesthesiologists, pneumologists and radiologists. In the early cancer stages, surgical resection remains the best therapeutic option, as approximately 60% of patients are expected to survive at least 5 years after surgery compared with less than 15% under medical management [1]. Non-surgical treatments (e.g., chemo-, immuno- and radiotherapy) can be proposed to patients unable to sustain surgical stress given preexisting severe organ dysfunction or poor health condition [2]. Based on medical history, clinical examination and functional investigations, the anesthesiologist assesses and stratifies the patient's perioperative risks [3]. The use of simple questionnaires that address exercise tolerance (Metabolic Equivalent Task, MET) and daily life activities (Duke Activity Status Index [DASI]), or simple dynamic tests (e.g., timed up-and-go, gait speed), enables perioperative physicians to estimate the patient's aerobic fitness and functional capacity [Table 1] [4].
In thoracic surgery patients, cardiopulmonary exercise testing (CPET) on a cycle ergometer or a treadmill represents the reference tool to quantitate aerobic fitness by measuring peak oxygen consumption (peakVO2), anaerobic threshold, peak workload and ventilatory efficiency (slope or ratio of ventilation to carbon dioxide production). These CPET-derived parameters reflect the integrative response of the respiratory, circulatory and muscular systems during maximal exercise [5]. Alternatively, low-technology exercise tests (e.g., shuttle walk, stair climbing, six-minute walk distance) can be used as screening tools in the preoperative evaluation when CPET is not readily available [6]. Historically, research efforts were initially focused on cardiovascular assessment, since myocardial infarcts, arrhythmia, heart failure and stroke were the leading causes of operative mortality. Since 1990, the Goldman risk index and later the Revised Cardiac Risk Index (RCRI), coupled with the evaluation of aerobic fitness, have been largely adopted to stratify cardiovascular risks and guide further investigations and treatments before surgery [7]. Better management of coronary artery disease, arrhythmias and heart failure with myocardial revascularization, resynchronization/ablation techniques as well as pharmacological treatments has contributed to improving patients' cardiovascular condition and, in turn, to minimizing the perioperative risk of major cardiovascular events [7]. Nowadays, postoperative pulmonary complications (PPCs), namely atelectasis, pneumonia, acute respiratory distress syndrome, broncho-pulmonary fistula and pleural effusions, are the most common adverse events after thoracic surgery, exceeding by far the incidence of cardiovascular complications. These PPCs pose major healthcare challenges by increasing hospital length of stay and medical costs while decreasing long-term quality of life and survival [8].
Risk Factors and Mechanisms of Postoperative Complications
Surgical trauma induces neurohumoral and inflammatory responses that parallel the extent of tissue injury [9]. The resulting transient hypermetabolic status is manifested by a moderate elevation of body temperature, increased oxygen consumption and cardiac output, fluid retention, hyperglycemia due to central and peripheral insulin resistance, as well as by mobilization of energy reserves to ensure tissue repair. Importantly, the catabolic processes that exceed anabolic activities in the days following surgery result in muscle wasting, with the release of amino acids into the circulation and their preferential uptake by the liver to synthesize acute phase proteins and glucose (gluconeogenesis). Sufficient preoperative physiological reserves are required to meet the postoperative energy demand and to sustain the surgical stress-induced mobilization of muscle protein while preserving the patient's functional capacity to breathe and move adequately. The risk factors leading to poor postoperative outcomes have been identified by analyzing large databases. Advanced age, cardiopulmonary disease severity, complex and prolonged surgery, smoking and alcohol consumption, mechanical ventilation using large tidal volumes and driving pressures, poor nutritional status as well as low aerobic capacity (<5 MET or peakVO2 <16 ml/kg/min) are all strong predictors of PPCs [10][11][12]. Low aerobic fitness is reported in up to 20-30% of patients scheduled for lung cancer surgery and is predictive of poor survival.
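The fitness thresholds quoted above translate directly into a simple screening rule; the sketch below (illustrative only, not a validated clinical tool) flags low aerobic fitness using the cited cut-offs of <5 MET or peakVO2 <16 ml/kg/min.

```python
def low_aerobic_fitness(peak_vo2_ml_kg_min=None, met=None):
    """Return True when any available measure falls below the cut-offs cited in
    the text (peak VO2 < 16 ml/kg/min or functional capacity < 5 MET)."""
    flags = []
    if peak_vo2_ml_kg_min is not None:
        flags.append(peak_vo2_ml_kg_min < 16.0)
    if met is not None:
        flags.append(met < 5.0)
    if not flags:
        raise ValueError("Provide peak VO2 and/or MET.")
    return any(flags)

print(low_aerobic_fitness(peak_vo2_ml_kg_min=14.5))  # True  -> elevated PPC risk
print(low_aerobic_fitness(met=7))                    # False
```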
Likewise, sedentary individuals and patients with chronic inflammatory diseases, coronary artery disease (CAD), heart failure (HF), chronic obstructive pulmonary disease (COPD) and neurological disorders are all characterized by impaired cardiopulmonary exercise tolerance and a reduction in lean body mass, both of which represent risk factors for diminished long-term survival [13]. In the early postoperative period, lung volumes (end-expiratory and end-inspiratory) become smaller than in the preoperative phase for two main reasons: (1) the lungs are stiffer: the reduced pulmonary compliance results from inflammation and ventilation-induced lung injuries with surfactant dysfunction/depletion, consequent to the effects of anesthesia and to overdistension and/or collapse of different parts of the lung (bio-, volo-, baro- and atelectrauma); (2) the respiratory muscles are weaker: the impaired contractile performance of the inspiratory muscles results from residual depressant effects of anesthetic agents, surgery-induced systemic inflammation, ventilator-associated respiratory muscle disuse and incisional pain associated with inhibition of phrenic nerve activity [14]. Accordingly, weaker respiratory muscles are less "fatigue resistant", particularly when faced with the increased inspiratory loading conditions of stiffer lungs, which require higher transpulmonary pressures to mobilize air and open alveoli, particularly in dependent lung areas. Consequently, the inefficient respiratory pumping capacity results in lower functional residual volumes, promoting ventilation-perfusion mismatch and atelectasis, which paves the way for bacterial translocation and the later onset of pneumonia.
Preoperative Patient Assessment and Implementation of Improved Recovery after Surgery Pathways
During the preoperative visit, the anesthesiologist plays a crucial role, acting as a "gatekeeper" by judging the patient's ability to sustain the surgical procedure, mitigating the stress response with an individualized anesthesia/analgesia plan and, in selected patients, prescribing optimization therapies through nutritional support and exercise training to enhance physiologic reserves before surgery [15]. Regarding risk stratification, professional guidelines recommend using the American Society of Anesthesiologists Physical Status (ASA-PS) score, the MET (or DASI, CPET-derived parameters), the Revised Cardiac Risk Index (or the dedicated Thoracic RCRI), the Assess Respiratory Risk in Surgical Patients in Catalonia (ARISCAT) score and the Clinical Frailty Scale [Table 1] [3,16–18]. These scoring systems are also helpful to predict major postoperative complications, to identify "unfit" patients and to guide optimal curative or palliative treatments [19]. In very high-risk patients, alternative non-surgical treatments, a less invasive approach or palliative care should be considered and agreed upon by the thoracic team. In moderate-to-high-risk patients, "modifiable" risk factors should be the focus of interest, and a treatment strategy should be designed to solve potential problems and mitigate the risk. Whenever possible, sufficient time should be allowed to correct nutritional deficits and increase muscle mass and aerobic fitness, as well as to inform, educate and empower the patient regarding the risks induced by physical inactivity, smoking and alcohol consumption.
The anesthesiologist's assessment and proposals are incorporated into the fast-track clinical pathways or improved recovery after surgery programs that have been adopted in many hospitals. These evidence-based protocols are aimed at standardizing the processes of perioperative care and at improving clinical and functional outcomes while minimizing variability, errors and costs. For instance, health care professionals should adhere to specific recommendations, namely: carbohydrate drinks up to 2 hours before surgery; skin preparation and antibiotic prophylaxis; minimally invasive surgery; lung-protective ventilatory strategies; restrictive or goal-directed intravenous fluid management; prevention of nausea and vomiting; avoidance or early removal of drains and tubes; and early mobilization and resumption of oral feeding after surgery [20]. General recommendations are issued to optimize the patient's preoperative condition by stabilizing active illnesses (e.g., CAD, HF, COPD, asthma, infection), adjusting drug therapy (e.g., anticoagulants, antiplatelets, antihypertensive medications), correcting anemia and malnutrition, as well as by encouraging patients to adopt a healthier lifestyle (physical activity, dental care and mouth disinfection, smoking cessation and limitation of alcohol consumption) [20][21][22].
Preoperative Patient Nutritional Condition
The term malnutrition defines "unbalanced" nutritional states that encompass either over- or undernutrition, which are responsible for abnormalities in body compartments, immune defense and organ function [23]. In Western countries, undernutrition in surgical patients (here referred to as malnutrition) results from insufficient nutrient intake owing to socio-economic factors, chronic/acute inflammation, malabsorption or bowel obstruction, cardio-pulmonary insufficiency as well as drug-induced adverse effects. Malnutrition is reported in 10 to 50% of patients admitted to hospital, particularly among those with catabolic derangements and an insufficient energy balance [23]. In more than 50% of community-dwelling older subjects, the minimal dietary protein requirements are not met, which contributes to muscle wasting, reduced walking capacity, risk of falls and loss of physiological reserve [24]. Therefore, nutrition screening tools such as the MUST (Malnutrition Universal Screening Tool), the MNA (Mini Nutritional Assessment, Table 2) or the PONS (Preoperative Nutrition Score) should be routinely used to identify undernourished patients before major cancer surgery [25]. In addition, computed tomographic thoracic scans that are performed preoperatively may precisely detect patients with low muscle mass and fatty muscle infiltration, which are predictive of mortality, major complications and prolonged hospital stay after colorectal surgery and lung cancer resection [26,27].
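As an illustration of how such a screening tool is scored, here is a rough sketch of a MUST-style calculation, assuming the standard three components (BMI, unplanned weight loss over 3-6 months, and acute disease effect); the cut-offs are reproduced from the published tool from memory and should be checked against the original before any clinical use.

```python
def must_score(bmi, weight_loss_pct, acutely_ill_no_intake=False):
    """MUST-style malnutrition screening: returns (score, risk category)."""
    score = 0 if bmi > 20 else (1 if bmi >= 18.5 else 2)                        # BMI component
    score += 0 if weight_loss_pct < 5 else (1 if weight_loss_pct <= 10 else 2)  # weight loss component
    if acutely_ill_no_intake:  # acutely ill with no nutritional intake expected for > 5 days
        score += 2
    risk = "low" if score == 0 else ("medium" if score == 1 else "high")
    return score, risk

print(must_score(bmi=19.2, weight_loss_pct=8))  # (2, 'high')
```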
When severe malnutrition is identified (weight loss >10%, body mass index <18.5 kg/m², serum albumin <30 g/L), further investigations are required to estimate the energy needs (nitrogen balance, indirect calorimetry), to assess skeletal muscle mass and the fat compartment (mid-arm circumference, triceps skinfold, bioelectrical impedance analysis, dual-energy X-ray absorptiometry, computed tomography and magnetic resonance imaging), as well as the muscle strength (handgrip dynamometer, leg or chest press) [23]. Dietary adjustments are made with high-energy nutrients (~30-40 kcal/kg/day; carbohydrates, omega-3 fatty acids), high-quality protein sources (~1.5-2 g/kg/day of protein spread over several meals; creatine monohydrate, essential amino acids with arginine, glutamine and cysteine) and selective supplements (e.g., vitamin D, folic acid, cyanocobalamin, iron). For instance, consuming a multi-ingredient mixture composed of whey protein, creatine, calcium, vitamin D, and omega-3 polyunsaturated fatty acids has demonstrated favorable effects on lean body mass and muscular strength in the elderly, with further gains when nutritional support was combined with resistance exercise training [28]. In a meta-analysis of 56 trials including 6370 patients undergoing gastrointestinal surgery for cancer, perioperative nutritional support was associated with fewer postoperative complications (risk ratio 0.78, 95% confidence interval (CI) 0.72-0.85), particularly infectious complications, and a shorter hospital length of stay (pooled mean difference −1.6 days, 95% CI −1.8 to −1.3) [27]. Interestingly, the implementation of short-term nutritional support (probiotics, multivitamins, proteins, complex carbohydrates) among patients awaiting lung cancer resection has been associated with lower costs and better postoperative outcomes in terms of bowel recovery, major complications and hospital length of stay [29,30]. Given the association between undernutrition, poor physical fitness, decreased immune defense and the risk of postoperative complications, personalized diets should ideally be prescribed over 4 to 12 weeks to replenish muscle mass, correct nutritional deficiencies and restore both muscular strength and aerobic fitness [31]. A greater total protein intake coupled with active mobilization is often necessary to match the elevated protein turnover and anabolic resistance induced by surgical trauma, ongoing inflammation and malignant disease.
Preoperative Patient Physical Condition
Impact of aging, sedentary lifestyle and chronic diseases on skeletal muscles
Skeletal muscles represent 30-40% of total body mass and 70-80% of the organism's protein reserves. Muscle function supports key activities such as any movement (including breathing), static contractions (posture), thermoregulation and metabolic homeostasis. The age-associated reduction in aerobic capacity (5 to 15% per decade) and in muscle mass and strength (sarcopenia and dynapenia, respectively) begins as early as 25 years of age, accelerates after 60-70 years, and is associated with increased falls, fractures and mortality [32]. In surgical patients, preexisting poor muscle function and further muscle wasting due to ongoing inflammation and immobilization result in difficulties sustaining increased respiratory loads and standing up and walking in the early postoperative period.
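The energy and protein targets quoted above are straightforward to individualize by body weight; a minimal sketch follows, with the ranges taken directly from the text (actual prescriptions must of course account for renal function and other clinical factors).

```python
def preoperative_nutrition_targets(weight_kg):
    """Daily targets from the ranges cited above: ~30-40 kcal/kg/day of energy
    and ~1.5-2 g/kg/day of protein, returned as (lower, upper) tuples."""
    return {
        "energy_kcal": (30 * weight_kg, 40 * weight_kg),
        "protein_g":   (1.5 * weight_kg, 2.0 * weight_kg),
    }

print(preoperative_nutrition_targets(70))
# {'energy_kcal': (2100, 2800), 'protein_g': (105.0, 140.0)}
```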
The number and size of muscle fibers (type I, slow-twitch, oxidative, and type II, fast-twitch, glycolytic) decline with aging, in parallel with the loss of motoneurons, the rarefaction of capillaries and their replacement by fat and connective tissue [32]. Notably, type I fibers, through down-regulation of the peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PGC-1α), are more susceptible to inactivity-, immobilization- and denervation-induced atrophy, while type II fibers, through modulation of transforming growth factor beta (TGFβ) and nuclear factor kappa B (NF-κB), are more affected by cancer, diabetes and heart failure [32]. With aging, type I fibers are less affected than type II fibers owing to collateral re-innervation and a fast-to-slow fiber shift. The mechanisms and implications of muscle wasting are summarized in Figure 1. Compared with younger individuals, skeletal muscles in older persons exhibit reduced insulin-stimulated glucose uptake and oxidation due to decreased glycogen stores and transmembrane transporters (reduced GLUT4 protein in type II fibers). Moreover, aging mitochondria display morphological abnormalities, a decline in mitochondrial DNA and mRNA capacity, slower trafficking through the respiratory chain, reduced oxidative phosphorylation, impaired adenosine triphosphate (ATP) synthesis and excessive generation of reactive oxygen species (ROS), which contribute to the breakdown of myofibrillar proteins and to cellular autophagy/apoptosis [33]. Genetic factors determine about 20% to 40% of the oxygen transport and utilization capacity by influencing cardio-pulmonary function, hemoglobin content, muscle blood flow and mitochondrial ATP production [32]. Besides concomitant cardiopulmonary diseases and inappropriate nutritional intake, sedentary behavior, which is common among the elderly, may well be the prime cause of the aging-related decrease in aerobic capacity associated with the loss of muscle mass and strength [17].
Impact of preoperative exercise programs on postoperative outcome
Physical training programs mainly encompass resistance (strength-type) exercises and endurance (aerobic-type) exercises that are focused specifically on the respiratory muscles (inspiratory muscle training, IMT), on selected muscle groups (upper/lower body, trunk/abdomen) or on the whole body (e.g., running on a treadmill, cycling or rowing). Increasing muscle mass is usually achieved by "resistive work" or static (isometric) contractions with little change in muscle fiber length [32]. In contrast, dynamic (isotonic) muscle actions entail concentric and eccentric contractions leading to muscle shortening and lengthening, respectively [32]. Chronic endurance training (ET) in master endurance athletes (>60 years) is associated with preservation of aerobic capacity (~43 ml/kg/min VO2max vs. 27 ml/kg/min in age-matched controls) and a lesser decline in muscle strength [34].
Likewise, mitochondrial gene expression and the protein content of the electron transport chain complexes and of PGC-1α are all substantially greater in the vastus lateralis muscle of highly trained older individuals compared with younger individuals and age-matched controls [35]. Because PGC-1α levels are reduced in sedentary individuals and physical training upregulates its expression, ET could represent a simple measure to counteract the effects of aging and chronic diseases on mitochondrial biogenesis, oxidative capacity and muscle mass development [36]. Likewise, resistance training at moderate loads has been shown to induce hypertrophic changes of type II fibers with increased muscle strength, and these effects are augmented by the intake of dietary components (e.g., proteins, macronutrients) and nutritional supplements (e.g., creatine, vitamin D, omega-3 polyunsaturated fatty acids) [37]. In a meta-analysis of seven trials including 248 older individuals, inspiratory muscle performance was significantly improved after IMT at moderate intensity levels (30-80% of maximal inspiratory pressure) over at least 4 weeks compared with sham treatment [38]. Various physical training modalities have been applied within the limited time frame preceding thoracic surgery to enhance the patient's physiological reserve and facilitate postoperative recovery [39,40]. In a meta-analysis of 29 RCTs including 2070 patients scheduled for major surgery, preoperative exercise training resulted in enhanced aerobic fitness (~+12%) and maximal inspiratory pressure (~+15%), a decreased occurrence of PPCs (odds ratio 0.43, 95% confidence interval 0.31 to 0.59) and a shorter hospital length of stay (−2.4 days, 99% CI −4.1 to −0.8) [41]. The beneficial effects of exercise were observed across various surgical procedures (cardiac, abdominal and thoracic), even within a short time frame (1 to 8 weeks, sometimes as little as one week) and with different exercise modalities (ET, IMT or a combination of both). A strong body of scientific evidence supports the improved oxygen transport capacity and aerobic fitness following short-term ET, through upregulation of PGC-1α within skeletal muscles (respiratory and locomotor) and cardiovascular adaptive changes manifested by an expansion of the circulatory volume, improved ventricular and vascular relaxation, greater capillary density and reduced sympathetic activity with vagal neural predominance. Likewise, short-term IMT using resistive threshold-loading devices, volume incentive spirometry and/or breathing exercises has been shown to be effective in strengthening the inspiratory muscles and increasing diaphragm thickness, owing to hypertrophic changes of fast-twitch fibers and a higher proportion of slow oxidative fibers. Finally, both ET and IMT result in structural and adaptive changes within the respiratory muscles that confer higher strength and resistance to fatigue and therefore enable patients to sustain the higher ventilatory workload while improving gas exchange and minimizing atelectasis formation. With improved metabolic capacity and more efficient contraction-relaxation cycling of the respiratory muscles, there would be less muscle fatigue, which in turn would alleviate sympathetically mediated vasoconstriction and promote the redistribution of blood flow from the respiratory muscles towards the limb muscles (metaboreflex), thereby improving walking capacity.
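The IMT intensities cited above (30-80% of maximal inspiratory pressure) can be turned into a threshold-loading prescription with trivial arithmetic; a hedged sketch, purely for illustration:

```python
def imt_training_load(mip_cmH2O, fraction=0.3):
    """Threshold-loading pressure for inspiratory muscle training, set as a
    fraction of the measured maximal inspiratory pressure (MIP); the cited
    trials used 30-80% of MIP."""
    if not 0.3 <= fraction <= 0.8:
        raise ValueError("Intensity outside the 30-80% range used in the cited trials.")
    return fraction * mip_cmH2O

print(imt_training_load(80, fraction=0.5))  # 40.0 cmH2O
```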
Conclusions
Many patients scheduled to undergo curative lung cancer resection present with low physical fitness, poor muscle strength and mass, as well as inappropriate food intake. There is a sound physiological rationale and scientific evidence for training-induced improvements in aerobic capacity and nutrition-induced increases in muscle mass within the short time frame before surgery, with the aim of enhancing the patient's ability to sustain surgical stress and facilitating early functional recovery [Table 3]. Continuation of the exercise training program and adherence to a healthier lifestyle are necessary to consolidate functional gains and increase life expectancy [42]. Future studies will help to design an individualized optimization approach based on a greater understanding of the complex interplay between the patient's genetic background, pathophysiological responses to surgery and social environment.
Financial support and sponsorship: Nil.
Inference of Local Climate Zones from GIS Data, and Comparison to WUDAPT Classification and Custom-Fit Clusters
A GIS-based approach is used in this study to obtain a better LCZ map of Berlin in comparison to the remote-sensing-based WUDAPT L0 approach. The LCZ classification of land use/cover can be used, among other applications, to characterize the urban heat island. An improved fuzzy logic method is employed to classify the zone properties and yield the GIS-LCZ map over 100 m × 100 m grid tiles covering the Berlin region. The zone properties are calculated from raster and vector datasets with the aid of the Urban Multi-scale Environmental Predictor (UMEP), QGIS and Python scripts. The standard framework is modified by reducing the threshold of the zone property impervious fraction for LCZ E, to better detect paved surfaces in urban areas. Another modification is the reduction of the window size in the majority filter during post-processing, compared to the WUDAPT L0 method, to retain more detail in the GIS-LCZ map. Moreover, new training areas are generated considering building height information. The result of the GIS-LCZ approach is compared to the new training areas for accuracy assessment, which shows better overall accuracy compared to that of the WUDAPT L0 method. The new training areas are also submitted to the LCZ generator, and the resulting LCZ map gives a better overall accuracy value compared to the previous (WUDAPT) submission. This study reveals one shortcoming of the WUDAPT L0 method: it does not explicitly use building height information, which leads to misclassification of LCZs in several cases. The GIS-LCZ method addresses this shortcoming effectively. Finally, an unsupervised machine learning method, k-means clustering, is applied to cluster the grid tiles according to their zone properties into custom classes. The custom clusters are compared to the GIS-LCZ classes, and the results indicate that k-means clustering can identify more complex, city-specific classes or LCZ transition types, while the GIS-LCZ method always divides regions into the standard LCZ classes.
Introduction
The urban heat island (UHI) effect has been defined as a phenomenon whereby the near-surface urban canopy air temperature is, on average, higher than that of the surrounding countryside [1]. Its intensity is likely to keep increasing in the future due to population and urbanization growth [2]. The UHI intensity characterizes urban climates and is associated with negative environmental impacts: it increases energy demand due to greater use of air conditioning, elevates emissions of greenhouse gases and air pollutants, endangers human health and comfort, and impairs water quality [3]; therefore, the prediction of the UHI is becoming increasingly important. Mesoscale numerical climate simulation software such as the Weather Research and Forecasting (WRF) model [4], as well as simplified urban canopy models [5], is used to estimate the UHI intensity; however, these models require highly resolved and accurate land use/land cover data based on the LCZ scheme [21]. A GIS-LCZ map was also generated for Vienna, Austria by Hammerberg et al. [22]. That work applies 100 m × 100 m grid tiles as the basic spatial unit (BSU) and employs five zone properties: building height, aspect ratio, and the usual three surface fractions. The classification of the zone properties utilizes a naive Bayes classifier. No post-processing was performed in that study.
The classification result was evaluated by comparing it to the WUDAPT L0 map. Wang et al. [14] derived the GIS-LCZ map for Hong Kong, applying 100 m × 100 m grid tiles as the BSU. They used three zone properties and one additional land use dataset. The zone properties building height, building surface fraction, and sky view factor were employed to classify the built-up classes (LCZs 1-10), while the additional land use data were used for the classification of the land cover classes (LCZs A-G). The classification was achieved by modifying the standard rules proposed by Stewart and Oke [7]. An accuracy assessment of the resulting GIS-LCZ map was carried out by comparing it to the established validation samples. Another GIS-LCZ study was conducted by Estacio et al. [23] for Quezon City, Philippines. That study employed seven zone properties: sky view factor, building height, roughness length, surface albedo, and the three usual surface fractions. These zone properties were calculated over 100 m × 100 m grid tiles, which were then classified by applying a fuzzy logic algorithm modified from Lelovics et al. [18]. The classification result was aggregated using cellular automata to derive the LCZ map, which was validated using expert knowledge. The land surface temperature profile of each LCZ type was also assessed in that research. A clustering method was also applied in the work of Hidalgo et al. [24] to classify three cities in France (Toulouse, Paris and Nantes) based on GIS datasets. They used eight parameters to categorize the built-up classes (LCZs 1-10), of which only one, mean building height, corresponds to the LCZ framework. The other parameters are building density, building typology, majority building typology within the polygon, mean minimum distance, median minimum distance, main morphological group, and number of buildings. Morphological groups are used to identify LCZs 7 and 8, while k-means clustering is applied to classify the other built-up classes. LCZs A-G were classified with another set of parameters: urban vegetation data from satellite images (LCZs A and B), a road fraction indicator (LCZ E), and a water fraction indicator (LCZ G). In this paper, the GIS-LCZ method is applied with several novelties. It is used to define the LCZs of the city of Berlin, which, to the best of the authors' knowledge, has not previously been classified with a GIS-based method. The resulting GIS-LCZ map is expected to improve upon the existing remote-sensing-based LCZ map (WUDAPT L0). The classification of the zone properties employs an improved fuzzy logic algorithm and a modification of the standard LCZ framework. The post-processing applied to the GIS-LCZ classification result is a majority filter, as in the WUDAPT L0 approach, but in this paper the window size of the majority filter is reduced to retain more detail in the GIS-LCZ data. New training areas (TAs) of LCZ classes for Berlin are also generated considering building height information, which was not considered by the TAs generated for WUDAPT L0. The resulting GIS-LCZ map and the WUDAPT L0 map are compared to the TAs for accuracy assessment. Moreover, the TAs are submitted to the LCZ generator website to derive an improved RS-LCZ map. Furthermore, the k-means clustering method is used to cluster the grid tiles according to their zone properties into custom classes.
Finally, the custom clusters are compared to the GIS-LCZ classes to investigate their similarities and differences. Section 2 of this paper explains the study area and methodology, Section 3 describes the classification results and discussion, and Section 4 contains the conclusions and research outlook.
Methodology
In this study, a GIS-based method was employed to derive the LCZs of Berlin. The general GIS-LCZ mapping process is illustrated in Figure 1. The process starts by collecting vector and raster data that are later used to calculate the zone properties. The zone properties are calculated over a basic spatial unit (BSU), defined here as a grid tile. Each BSU is classified by applying a fuzzy logic algorithm to derive the LCZs. During post-processing, a filter is applied to the LCZ classification result. The final GIS-LCZ map is evaluated by comparing it to the new training areas. The new training areas are further used to evaluate the existing WUDAPT L0 map and to generate an LCZ map from the LCZ generator. Furthermore, the zone properties of the grid tiles are clustered and then compared to the GIS-LCZ result.
Study Area and Datasets
The focus of this study is the city of Berlin, which has an area of 892 km². As of 2019, its population was around 3.8 million, making it the most populous city in Germany. One of the reasons for choosing this city is the availability of the GIS data required to derive the GIS-LCZ map. A Google Earth image of Berlin and the newly created training areas (see Section 2.5) are shown in Figure 2. The DLR dataset is obtained from the work of Heldens et al. [25], who generated raster data of Berlin for a microclimate simulation. The raster dataset includes rasters of building height, terrain height, vegetation height, water, streets, and bridges, and is provided at several resolutions ranging from 1 m to 16 m. OSM is a vector dataset containing primarily building, land use, road, and water features; it is open-source data generated by a community of mappers [29]. Satellite imagery from Copernicus is also used in this study. Copernicus is the European Earth monitoring system, in which data are acquired from different sources such as in situ sensors and Earth observation satellites. Raster data of land cover and high-resolution layers, such as imperviousness density (IMD), are provided by Copernicus [27]. As mentioned previously, WUDAPT data are used for evaluation purposes in this paper. WUDAPT is a project initiated by the urban climate research community to provide universally coherent and consistent information on urban form and function for climate studies [9]. WUDAPT information consists of three levels of detail (L) gathered using distinct methodologies. Level 0 (L0) data comprise a local climate zone map based on the work of Stewart and Oke [7]. Level 1 (L1) data provide a better representation of each LCZ through sampling, offering information on urban form and function at a finer spatial resolution, with a three-dimensional representation. Level 2 (L2) data add detailed descriptions of urban parameter values for boundary layer modeling. The WUDAPT project provides Level 0 (L0) maps for many cities around the world; the WUDAPT L0 map for Berlin was downloaded from the WUDAPT portal [30].
The map was produced in 2016 and was derived from Landsat 8 images from March and April 2015. The resolution of the map is 100 m, resampled from the 30 m resolution of the Landsat 8 input images. The training areas can also be downloaded from the WUDAPT website [31]. The thermal property used for the GIS-LCZ classification of Berlin is the anthropogenic heat flux, obtained from AH4GUC. This dataset provides global present and future hourly data of anthropogenic heat flux (AHF) with a resolution of 1 km, derived from a global population density map, global gridded monthly air temperature, a global nighttime lights map, and a global combustion sources map [28].
Inference of Urban Morphology from GIS Data
According to the standard LCZ methodology, there are 10 zone properties defining the 17 local climate zones: sky view factor, aspect ratio, building surface fraction, impervious surface fraction, pervious surface fraction, height of roughness elements (building heights), terrain roughness class, surface admittance, surface albedo, and anthropogenic heat flux. However, due to limited data sources, in practice only a subset of these properties can be used for the classification of the LCZs. In this study, seven zone properties are calculated to generate the LCZ map: sky view factor (SVF), mean building height (H) or mean vegetation height (Hv), aspect ratio (H/W), building surface fraction (BSF), impervious surface fraction (ISF), anthropogenic heat flux (AHF), and roughness length (z0). The basic spatial unit for the classification of the zone properties is a grid tile with a size of 100 m × 100 m; every zone property is resampled to this resolution. The grid tile polygons are created in QGIS in shapefile format over the extent of Berlin, using the coordinate reference system (CRS) EPSG:25833 (ETRS89/UTM zone 33N). This CRS is used as the default for all zone property calculations. The total number of grid tiles for the area of Berlin is 90,517. The sky view factor (SVF) is calculated from the raster height data of buildings, terrain, and vegetation patches derived from the DLR dataset at a resolution of 5 m; the calculation is performed with the Urban Multi-scale Environmental Predictor (UMEP) plugin in QGIS by Lindberg et al. [32]. The DLR rasters of building and vegetation patch height at a resolution of 1 m are also used for the calculation of the mean building height (H) for urban classes and the mean vegetation height (Hv) for natural classes, respectively. The building surface fraction (BSF) defines the percentage of building area in a grid tile. Because not all building areas in Berlin are covered by the DLR data, additional building polygon data from OSM are used to define the BSF. The impervious surface fraction (ISF) is the percentage of the area covered by impervious (paved or rock) materials in a grid tile. The Copernicus imperviousness density (IMD) raster is used for the calculation of the ISF by taking the mean of the 20 m resolution raster data over each grid tile; however, the IMD cannot be used directly to represent the ISF needed by the LCZ framework, since the IMD also includes building information. The ISF as a zone property in the LCZ classification excludes buildings, since that information is already covered by the BSF.
Thus, the BSF should be subtracted from the IMD in order to obtain the ISF, which can be formulated as ISF = IMD − BSF; however, when this formula is implemented, it results in negative ISF values in several grid tiles. This can be due to the fact that the IMD raster is not fully harmonized with the BSF value, since they come from different data sources and have different resolutions and acquisition methods. To avoid negative values in the ISF, the IMD is corrected by taking the maximum between the original IMD and the BSF, which can be formulated as IMD = max(IMD, BSF). The aspect ratio (H/W) is the ratio between the mean building height (H) and the building spacing (or street width). The width of the building spacing (W) is estimated by an equation modified from Samsonov and Varentsov [33], based on S 0, the grid tile area, which in our case is 10,000 m², and N BLD, the number of buildings, which is obtained from the building data of OSM and DLR. The resulting H/W calculation contains outliers where grid tiles have a mean building height H but either no building width W or a very small value of it (less than 1 m). This results in incorrect values of H/W. These outliers occur, for example, in grid tiles covered mostly or fully by buildings. To solve this issue, another H/W is calculated from grid tiles of 250 m × 250 m to obtain a broader perspective of H/W. The new H/W is resampled to the 100 m × 100 m grid tiles and assigned to the grid tiles that have incorrect H/W values or a mean width W of less than 1 m. The anthropogenic heat flux (AHF) is obtained from the global AHF dataset with a resolution of 1 km. The AHF raster is resampled to 100 m × 100 m and used to calculate the mean AHF in a grid tile. The roughness length (z 0) is calculated from the formula suggested by Oke (cited from Estacio et al. [23]), z 0 = f 0 · z H, where f 0 is 0.1, a constant value generally used for surfaces, and z H is the average difference between the DSM and DTM, which is calculated as the sum of two rasters: building and vegetation patch heights at their 1 m resolution. Water raster data from DLR with a resolution of 1 m are also used in the classification of GIS-LCZ. The raster is applied directly to classify grid tiles that contain mainly water into LCZ G. The calculation is performed in QGIS by applying the Zonal Statistics tool. Table 1 summarizes the data sources and the calculation methods used to derive the zone properties and LCZ G. The calculation results of the zone properties, i.e., SVF, H, H v, H/W, BSF, ISF, AHF, and z 0, are shown in Figure 3. The zone properties are visualized in QGIS with an equal-count (quantile) classification into 5 categories. Classification to Local Climate Zones The zone properties used in the classification of the GIS-LCZ method in this study are simplified into 12 classes instead of 17 classes. LCZ 1 (compact high-rise), LCZ 7 (lightweight low-rise), LCZ C (bush and scrub), and LCZ F (soil/sand) are excluded due to the quasi-nonexistence of these LCZ classes in Berlin. For the classification, the grid tiles are divided into eight urban classes (LCZ 2, 3, 4, 5, 6, 8, 9, E) and three natural classes (LCZ A, B, D). The natural classes are categorized as grid tiles having BSF ≤ 10% and ISF ≤ 10% or containing water. The rest of the grid tiles, which are not natural classes, are classified as urban classes.
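To make the per-tile derivations described above concrete, the following is a minimal sketch in Python; the table layout, column names, and values are illustrative placeholders and are not taken from the Berlin dataset or the authors' code.

```python
import pandas as pd

# Hypothetical per-tile table; column names and values are illustrative only.
f0 = 0.1  # constant used in the roughness-length estimate suggested by Oke
tiles = pd.DataFrame({
    "IMD": [45.0, 20.0, 5.0],          # Copernicus imperviousness density, %
    "BSF": [30.0, 35.0, 2.0],          # building surface fraction, %
    "zH":  [12.0, 8.0, 0.5],           # mean height of roughness elements, m
    "is_water": [False, False, True],  # tile mainly water (DLR water raster)
})

# Correct the IMD so that ISF = IMD - BSF can never become negative.
tiles["IMD_corr"] = tiles[["IMD", "BSF"]].max(axis=1)
tiles["ISF"] = tiles["IMD_corr"] - tiles["BSF"]

# Roughness length from the rule of thumb z0 = f0 * zH.
tiles["z0"] = f0 * tiles["zH"]

# Pre-classification: natural tiles have BSF <= 10% and ISF <= 10%, or are water.
tiles["is_natural"] = ((tiles["BSF"] <= 10) & (tiles["ISF"] <= 10)) | tiles["is_water"]
print(tiles[["ISF", "z0", "is_natural"]])
```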
Furthermore, the grid tiles are classified into LCZ classes based on their zone properties, using the ranges of values adapted from the standard LCZ framework of Stewart and Oke [7], which are shown in Table 2. In this study, LCZ E is considered an urban class; therefore, its aspect ratio H/W is modified from the standard framework, where the W indicates building spacing instead of tree spacing. Its ISF range value is also modified from ISF ≥ 90 to ISF ≥ 60, so that it can better detect bare rock or paved surfaces. Table 2. Zone properties of LCZ classes (adapted from Stewart and Oke [7]). The zone properties are classified into LCZ classes by applying fuzzy logic with a trapezoidal membership function modified from Estacio et al. [23]. The membership of every zone property for every LCZ class is determined as shown in Figure 4 (example case of LCZ 2 and its ISF property). A property value which is in the specified range has a membership value of 1, and the membership value decreases linearly from 1 to 0 when the property value is out of the range. To understand how this membership function works, an example from Figure 4 is explained here. The zone property ISF of LCZ 2 has a range value from 30-50, which implies 30 as the left bound (LB), 50 as the right bound (RB) and length L = RB − LB = 20. When a grid tile has an ISF value which is in this range (30-50), the membership value will be 1. On the other hand, when a grid tile has an ISF which is not in this range, the membership value will depend on how far it is away from the LB or RB. The membership value becomes 0 for ISF values of less than 10 or more than 70. These values are called the left zero bound (LZB) and the right zero bound (RZB), respectively. The LZB and RZB are defined as LZB = LB − L = 10 and RZB = RB + L = 70. A problem arises for zone properties that do not define an RB, for example, the property value of H for LCZ 4, which is greater than 25 m. The LB is 25 and the RB goes to infinity. This is not a problem for defining the RZB, because it can be set to infinity as well; however, it is a problem for defining the LZB. The LZB cannot go to negative infinity, as this would overestimate the membership value of property values below the LB. For this case, Estacio et al. [23] chose a value of 0 as the LZB; however, choosing 0 as the LZB would still overestimate the membership value of H values of less than 25 m, because H would then obtain membership values for all the urban LCZ classes over the range from 0 (LZB) to 25 (LB). This is not desirable, since the purpose of the classification is to obtain a relatively distinct classification outcome. To tackle this issue, the LZB is chosen as the second highest RB among the range values of H defined in Table 2, which is 15 m. The other zone properties that do not define an LB or RB are SVF, BSF, and ISF. For these, the natural limits of the property are chosen as the LB or RB. For SVF, the LB would be 0 and the RB would be 1. For BSF and ISF, the LB would be 0 and the RB would be 100. Filtering The classification result is further processed by applying a post-processing step. As summarized by Quan and Bansal [17], previous GIS-LCZ studies carry out post-processing steps for two main reasons: diminishing unnecessary heterogeneity of the LCZ classes and maintaining their minimum area requirement. In this study, during post-processing, a filter was applied to obtain more homogeneous LCZ areas.
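Before turning to the specifics of the filter, the trapezoidal membership function described above can be illustrated with a minimal Python sketch; this is a plain re-implementation of the logic in the text, not the authors' code, and the example values are taken from the worked ISF/LCZ 2 case.

```python
def trapezoidal_membership(x, lb, rb, lzb=None, rzb=None):
    """Membership is 1 inside [lb, rb] and falls linearly to 0 at the zero
    bounds; by default LZB = lb - L and RZB = rb + L, with L = rb - lb."""
    length = rb - lb
    lzb = lb - length if lzb is None else lzb
    rzb = rb + length if rzb is None else rzb
    if lb <= x <= rb:
        return 1.0
    if lzb < x < lb:
        return (x - lzb) / (lb - lzb)
    if rb < x < rzb:
        return (rzb - x) / (rzb - rb)
    return 0.0

# ISF of LCZ 2 has the range 30-50, so LZB = 10 and RZB = 70.
print(trapezoidal_membership(40, 30, 50))  # 1.0, inside the range
print(trapezoidal_membership(60, 30, 50))  # 0.5, halfway between RB and RZB
print(trapezoidal_membership(75, 30, 50))  # 0.0, beyond RZB
# Open-ended range, e.g. H of LCZ 4 (> 25 m): RZB is infinity, LZB is set to 15 m.
print(trapezoidal_membership(20, 25, float("inf"), lzb=15.0, rzb=float("inf")))  # 0.5
```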
The filter is a majority filter and it is applied using SAGA with a filter radius of one pixel and the search mode of square. These parameters yield a window filter of 3 × 3 pixels. The majority filter is also used by the WUDAPT L0 approach but with a greater size of the window filter, which is 5 × 5 pixels. We reduced the window size to retain more details of the GIS-LCZ classification. New Training Areas In generating WUDAPT L0 map, training areas (TAs) are digitized by the local experts who know the overview of the urban morphology of the city. Based on its guidelines [34], the WUDAPT L0 approach specifies TAs of LCZ classes in the respective city based on the view on Google Earth imagery. This implies that the approach relies only on the 2D perspective, which is inconsistent with the guidelines of the original framework of LCZ. Stewart and Oke [7] indicate that the geometric properties of the LCZ (mean building height, aspect ratio, and sky view factor) need building height information. Therefore, in this study, new TAs are generated to improve the TAs of WUDAPT by considering the geometric properties calculated from the GIS-LCZ mapping method. The new TAs are digitized in QGIS by the aid of satellite imagery and, at the same time, by considering the calculated zone properties. Figure 2 shows the new TAs, which are later used for the evaluation of the resulting GIS-LCZ map and the existing WUDAPT L0 map, as well as to derive a new LCZ map from the LCZ generator. Custom Classification Using k-Means-Clustering The classification of LCZ based on the standard framework is very general, as it was developed to fit a majority of cities around the world; however, it does not fit any specific city perfectly well and it may be that some classes of the standard framework do not exist in a city, or the city has specific classes not found in the standard framework. As summarized by Quan and Bansal [17], some studies modified the original LCZ classes by removing, mixing, and adding the standard LCZ classes. In the reviewed studies, not all the standard LCZ classes are found and the classified zones do not have zone properties that fit to the range values defined by the standard. In this study, a custom classification to derive LCZs is introduced, where range values of the zone properties do not have to be defined. Instead, the grid tiles are clustered according to their zone properties into a number of custom classes, for which the average zone properties can be calculated to define the classes. This will give a result of distinct clusters, specific to the city, which contain grid tiles of similar zone properties. From this result, a custom land use/land cover map can be derived, and, together with the average zone properties, the urban morphology of a specific city may be described more accurately. This may also improve the accuracy of mesoscale climate simulation models that need highly resolved and accurate LULC maps. For the clustering purpose, the k-means clustering method is applied. k-means clustering is one of the most popular methods in cluster analysis. It uses the vector quantization method to partition N observations into k clusters, in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid). k-means clustering minimizes within-cluster variances (squared Euclidean distances) and optimizes squared errors. This method has been widely used in the classification of land use/land cover [35]. 
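As a sketch of this clustering step (the concrete settings, i.e. the five normalised zone properties, scikit-learn, and the choice of k, are described in the next paragraph), the following shows one way it could look in Python; the input array is a random placeholder and the number of clusters is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder input: one row per urban grid tile, one column per zone property
# (e.g. SVF, H, H/W, BSF, ISF); real values would come from the tile table.
rng = np.random.default_rng(0)
zone_properties = rng.random((1000, 5))

# Normalise the properties so that no single property dominates the distances.
scaled = StandardScaler().fit_transform(zone_properties)

# The number of clusters is matched to the number of GIS-LCZ classes used for
# the urban tiles; the value used here is purely illustrative.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(scaled)

labels = kmeans.labels_              # cluster id per grid tile
centroids = kmeans.cluster_centers_  # mean (scaled) zone properties per cluster
```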
We use the k-means clustering method to find clusters of grid tiles according to their zone properties. The clustering is only performed for the urban classes. In cases where H/W is null because of missing building height information, the zone property is set to 0. The five zone properties of the urban classes are normalized and clustered by applying the k-means clustering implementation of scikit-learn in Python. The number of clusters is set to be the same as the number of GIS-LCZ classes. The clustering results are compared to the GIS-LCZ classification result to investigate their relationship. Local Climate Zones Classification from GIS Data The GIS-LCZ of Berlin is shown in Figure 5. The result of the GIS-LCZ method is compared to the new training areas generated for Berlin, which are shown in Figure 2. The comparison is performed using the confusion matrix calculation in SAGA. Using a confusion matrix is a common method to assess the accuracy of a classification, where the classification result is compared with the reference or ground truth data. AccProd, or producer accuracy, describes the map accuracy from the perspective of the producer (map maker), i.e., the probability that the reference class is correctly classified in the classification result. On the other hand, AccUser, or user accuracy, is the probability that a classified class corresponds to the correct reference class; it specifies map accuracy from the perspective of a map user. Moreover, the overall accuracy value is the number of sites correctly classified divided by the total number of sites. The kappa coefficient can also be calculated. This value describes how well the classification was executed in comparison to a random classification. The value ranges from 0 to 1, where 1 represents a perfect match between the classification result and the reference data, and 0 indicates that the classification result is no better than a random classification [36]. The confusion matrix of GIS-LCZ compared to the TAs is tabulated in Table 3. The overall accuracy and kappa values (92.47% and 0.91, respectively) are excellent; however, LCZ 8 does not have a good producer accuracy value, since the training areas of LCZ 8 do not coincide with any LCZ 8 in the classification result. Nevertheless, LCZ 8 corresponds to only 0.1% of the GIS-LCZ map, which is also the lowest proportion of the LCZ classes in the GIS-LCZ result. Moreover, a TA for LCZ 3 could not be created, because insufficient suitable areas were identified. WUDAPT L0 is also compared with the TAs in a confusion matrix shown in Table 4. The overall accuracy and kappa values are 74.95% and 0.69, respectively, which are lower than those of GIS-LCZ. Generally, the producer accuracy of each LCZ class is lower than that of GIS-LCZ. The producer accuracies of WUDAPT L0 for LCZ 5 and 9 show significant discrepancies compared to those of GIS-LCZ; however, WUDAPT L0 detects LCZ 8 far better than GIS-LCZ. Moreover, LCZ 3 and E are not classified in WUDAPT L0. It is observed that WUDAPT L0 is not effective at detecting the zone properties related to building geometry (H, H/W, and BSF), which leads to misclassification of some LCZs. This situation is observed in WUDAPT L0 for LCZ 2, LCZ 4, and particularly LCZ 5, as implied by the confusion matrix of Table 4. LCZ 5 has a very low producer accuracy, since WUDAPT L0 classifies most of the TAs of LCZ 5 as LCZ 4. WUDAPT L0 also gives a completely random classification result on LCZ 6 and 9.
It is found that the tiles classified in these classes are in reality natural classes instead of urban classes, based on the zone properties and the view from the satellite imagery, as shown in Figure 6. On the other hand, GIS-LCZ considers the building geometry, which enables it to correctly detect LCZs that are misclassified by WUDAPT L0: LCZ 2 instead of LCZ 5, LCZ 5 instead of LCZ 2 and LCZ 4, and LCZ 6 instead of LCZ 5, as indicated in the confusion matrix of Table 3. Figure 7 shows misclassification in WUDAPT L0, where the method classified the tiles as LCZ 4, but the mean building height values of these tiles are actually in the range of LCZ 5. The GIS-LCZ method addresses this shortcoming effectively. One shortcoming of the GIS-LCZ method is its strong dependency on the availability of input data. It is observed that some tiles cannot be classified correctly due to the inadequacy of the data used to calculate the zone properties. GIS-LCZ also does not manage to detect LCZ 8, which is probably due to the unavailability of the pervious surface fraction property or the small extent of the grid tile, which is insufficient for defining LCZ 8. The majority filter applied can remove the granular view and produce more homogeneous LCZ classes; however, this filter can also diminish correctly classified GIS-LCZ classes. Some tiles of GIS-LCZ are found to belong partially to LCZ 6 or 8, which implies that a combination of two LCZs can represent a local climate zone, as also noted by other researchers [37]. The airport areas are classified as LCZ 5 in GIS-LCZ (rather than LCZ 8) because the building height property suits this class. This highlights a limitation of the LCZ framework, showing that its implementation cannot always be ideal for every city. LCZ Generator The new TAs based on GIS-LCZ data (see Figure 2) were also submitted to the LCZ generator, and the resulting new LCZ map gives an accuracy value of 84% (see Figure 8). The previous (WUDAPT) submission for Berlin to the LCZ generator obtained a lower accuracy value of 61% (see Figure 9). The accuracy evaluation of the LCZ Generator refers to five indexes [38]: overall accuracy (OA), overall accuracy for the urban LCZ classes only (OAu), overall accuracy of the built vs. natural LCZ classes only (OAbu), a weighted accuracy (OAw), and the class-wise metric F1. The accuracy indexes OA, OAu, OAbu, and OAw of the new LCZ map are 84%, 82%, 98%, and 97%, respectively. The corresponding indexes of WUDAPT are 61%, 59%, 94%, and 92%, respectively. The class-wise F1 evaluation also shows that in the new LCZ map, except for LCZ 8 and LCZ B, the accuracies of the other eight LCZ classes are better than those of the previous submission (data from [39]; see also [15]). Figure 9. Boxplot accuracy of the LCZ generator map for the previous WUDAPT training areas of Berlin; data from [40] (see also [15]). k-Means Clustering and Comparison to GIS-LCZ Classes First, we compare the result of the k-means clustering to the previously determined GIS-LCZ classes using average cluster properties. The average of the zone properties for each cluster is shown in Table 5. If we analyze each average cluster property separately, we can make the following observations. The clusters 0, 1, 2, 4 and 7, with building heights between 10 m and 20 m, may be midrise areas, while the clusters 3, 5, 6, having building heights between 5 m and 10 m, may be low-rise areas. The BSF values of the other clusters are between 10% and 20%; buildings in these areas are sparsely built.
This shows that the BSF of most clusters in Berlin is small. The ISF value of cluster 3 is 66.7%, which may correspond to bare rock or paved areas. The ISF value of cluster 2 is 56.0%, which may correspond to compact areas. The ISF values of clusters 0, 1, 4, and 7 are between 20% and 40% indicating open or sparsely built areas. The cluster 5 with an ISF value of 9.7% may belong to sparsely built areas. The clusters 2, 4, and 7 with an AHF more than 10 W m −2 may indicate compact, open or lightweight areas. On the other hand, the other clusters with an AHF less than 10 W m −2 may correspond to sparsely built or paved areas. Overall, it seems difficult to establish a clear relationship between the clusters and the LCZ classes using only average cluster properties, because, depending on the property analyzed, the most likely LCZ class corresponding to each cluster differs. Next, we performed a cross-analysis of the average zone-properties between the custom clusters and the GIS-LCZ classification. The resulting agreement values in percent are shown in Table 6. Percentages greater than 20 are marked to identify GIS-LCZ classes that are similar to the k-means clusters. It can be seen from Table 6 that 71.8% of cluster 1 belong to GIS-LCZ 5, 67.9% of cluster 6 belong to GIS-LCZ 6, and 69.3% of cluster 4 belong to GIS-LCZ 2. Most of the members of cluster 0, 2, and 3 belong to GIS-LCZ 5 and 6. Relatively speaking, the mean building height of cluster 2 is closer to that of a midrise area, while the mean building height of cluster 0 and 3 are closer to that of a low-rise area. In addition, there may be a large number of paved areas in cluster 3. Moreover, cluster 5 is mainly composed of GIS-LCZs 6 and 9, and cluster 7 is distributed among GIS-LCZs 2, 5 and 6. The classification of the GIS-LCZ method is based on the standard LCZ framework; therefore, the GIS-LCZ method is not able to identify zone types outside of the standard LCZ framework, in which the zone types have different range values than that defined in the standard. On the other hand, k-means clustering is based entirely on quantitative properties, which can effectively avoid the constraints of the standard LCZ framework and may distinguish zone types other than that defined by the standard LCZ framework; however, the resulting clusters do not correspond to clearly defined urban topologies, and so the researchers need to name them according to the characteristics of each cluster on a case by case basis. Conclusions In this study, it is shown that the GIS-LCZ method can improve the accuracy over the remote sensing-based WUDAPT L0 approach in deriving an LCZ map of Berlin. When compared to the new training areas, a high overall accuracy of 92.47% and a high kappa value of 0.91 were found for the GIS-LCZ map. This indicates a highly accurate classification result and a strong improvement over the previous WUDAPT L0 result with an overall accuracy of 74.95% and a kappa value of 0.69. It is observed that the WUDAPT L0 method misclassified some LCZs due to its shortcoming in detecting the zone properties related to building geometry. On the other hand, the GIS-LCZ approach calculated the zone properties from vector and raster datasets, which correctly detects LCZs that were misclassified by WUDAPT L0. Nevertheless, the GIS-LCZ approach strongly depends on the availability of the data to calculate the zone properties. It can lead to a misclassification of LCZs when the zone properties are not available. 
This study also shows the limitation of the LCZ framework in addressing the finer variety of zone property values of the grid tiles in Berlin, where the LCZ classes from the standard do not fit the calculated zone properties perfectly. The GIS-LCZ approach classified the zone properties using a fuzzy logic algorithm that was adapted from Estacio et al. [23] and modified in order to solve the membership value problem of the left zero bound of the trapezoidal function. The standard framework for LCZ E is also modified by defining the impervious surface fraction as greater than or equal to 60%, so that it can better detect paved surfaces in urban areas. The majority filter used in the post-processing employs a window filter of 3 × 3 pixels instead of the 5 × 5 pixels used in the standard WUDAPT L0, in order to keep the details of the GIS-LCZ classification. This study also shows that the new training areas generated with the consideration of building height information can increase the accuracy of the LCZ map produced by the LCZ generator. Moreover, the k-means clustering result shows that the GIS-LCZ method tends to divide regions into clearly differentiated LCZ types according to the standard framework, while k-means clustering can identify regions with city-specific characteristics or inter-LCZ transition types. From the cross-analysis between the clusters and the GIS-LCZ classes, it is found that some of the GIS-LCZ classes seem to be naturally distinguishable but some are difficult to separate. Future improvement of the methodology presented here could proceed in several directions. Only seven of the zone properties were used here for the classification of GIS-LCZ. It would be interesting to add other zone properties (pervious surface fraction, surface albedo, and surface admittance) and to see whether the result improves. The WUDAPT L0 map has been ineffective in detecting certain LCZs where geometric zone properties, such as H and H/W, are critical. The classification in WUDAPT L0 could be improved by introducing building height information to the machine learning algorithm. Integration of the GIS-LCZ and WUDAPT L0 methods could also be possible, combining the advantages of both by replacing the post-processing step (majority filter) of WUDAPT L0 with the aggregation step applied in the GIS-LCZ method by Gál et al. [20]. Existing investigations [21,41-44] can be enhanced or reapplied by using the result of the GIS-LCZ map of Berlin. This approach is particularly suitable for practitioners attempting to characterize UHI via climate simulation, as it can improve the accuracy of the simulation results by reflecting terrain features more precisely and consistently. This is especially valuable in the case of high horizontal grid resolution. The GIS-LCZ classes can also be used in the evaluation of simulation results from microscale numerical models, such as PALM-4U. Furthermore, studies that correlate UHI intensity with land cover characteristics can be extended to establish a correlation with the more accurate urban classification of the GIS-LCZ method. Another type of correlation can be carried out directly between the GIS-LCZ classes and remotely sensed land surface temperature (LST) or in situ measurements of near-surface air temperature. The GIS-LCZ classes can also be applied to design urban temperature measurement networks for the purpose of understanding the spatial and temporal characteristics of urban climate.
This network can be further used to analyze the air temperature conditions within the GIS-LCZ classes.
2022-05-20T15:21:21.884Z
2022-05-18T00:00:00.000
{ "year": 2022, "sha1": "aa1e8218f1b37dd144218dcff49d4644047095f1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-445X/11/5/747/pdf?version=1652933739", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "284cde2a84ad0053e2d0b1b0fe1f5b3f614ad710", "s2fieldsofstudy": [ "Environmental Science", "Computer Science" ], "extfieldsofstudy": [] }
208955659
pes2o/s2orc
v3-fos-license
Evidence for the path to cervical cancer elimination Elimination of cervical cancer is at the forefront of the global health agenda with the launch of the WHO Initiative for the Elimination of Cervical Cancer in 2018. Human papillomavirus (HPV) vaccination is expected to be a priority agenda item on health for the 2020 G20 Summit in Riyadh following the 2019 commitment to universal access to vaccination in the G20 Osaka Leaders’ Declaration and the Okayama Declaration of the G20 Health Ministers. This political momentum should be coupled with the latest evidence to pinpoint outstanding areas for action and to implement responsive strategies. Cervical cancer is one of the top five cancers with the greatest proportion of avoidable cancer mortality— premature deaths resulting from health system failures that could otherwise be avoided on the basis of existing medical advancements and with access to high-quality health care. As evidenced by the importance of quality markers in health care, including the effectiveness and timeliness of care, an immense opportunity exists to bridge the gap in what can be achieved and what is achieved on the prevention and treatment of cervical cancer. Fundamental to any response is reliable assessment of the global burden of cervical cancer. In their Article in The Lancet Global Health, Marc Arbyn and colleagues present the latest estimates of cervical cancer incidence and mortality worldwide, using the International Agency for Research on Cancer’s GLOBOCAN data from 2018 and updating a similar analysis done in 2008. Arbyn and colleagues estimated that approximately 570 000 cases of cervical cancer and 311 000 deaths from the disease occurring worldwide in 2018. The annual age-standardised incidence (ASI) worldwide of cervical cancer was estimated to be 13·1 per 100 000 women-years, with an approximate range of less than 2 to 75 per 100 000. Overtime comparison of global patterns revealed that cervical cancer shifted from being the third most common malignant tumour to the fourth most common. However, despite global progress on primary and secondary prevention, Arbyn and colleagues’ updated analysis indicated that about 84% of cervical cancer cases occur in low-resource countries (defined as countries with a human development index [HDI] value lower than 0·80), a minimal change in distribution of the burden from a decade ago when this proportion was 85%. Additionally, Arbyn and colleagues found an inverse trend between both the ASI and the age-standardised mortality rate and the level of human development, as derived from the HDI. These data show the ongoing presence of the global cancer divide—disparities in morbidity and mortality within and between countries that are concentrated among the poor. The younger age profile of the disease compared with that of most other types of cancer, with cervical cancer being among the top three cancers in 146 (79%) of 185 countries in women younger than 45 years, provides further cause for concern. Especially as, according to the CONCORD 3 study, the global 5-year cervical cancer survival ranged between 50% and 70% over the time period of 2000–14. The situation is starker in specific areas of the world. In eastern, middle, southern and western Africa, Arbyn and colleagues reported that cervical cancer was the leading cancer among women in 2018, highlighting these geographical areas as a pressing priority for the ambitious WHO cervical cancer elimination initiative. 
In 2008, cervical cancer was also the leading cancer among women in central America, south-central Asia, and Melanesia; however, persistent efforts have yielded health gains in these parts of the world. Arbyn and colleagues have made the GLOBOCAN data more accessible and strengthened the existing analysis of global disparities in cervical cancer. The authors have substantiated the grave impetus to ensure that cervical cancer remains a public health priority and called governments and the international community to task to accelerate progress on cervical cancer prevention and treatment alongside the broader aim to achieve universal health coverage. As the cervical cancer elimination movement moves forward, all valued health goals should be pursued. Previous findings on the burden of serious health-related suffering, which could be ameliorated with adequate palliative care that is insufficient in much of the world today, estimated that 80% of individuals who have serious health-related suffering live in low-income and middle-income countries and that nearly a quarter of this burden is associated with cancer, including cervical cancer. The alleviation of suffering with access to care throughout the care continuum must be a priority in the quest for better health for all initiatives, including disease-specific ones. Published Online December 4, 2019 https://doi.org/10.1016/S2214-109X(19)30523-6
Moreover, the disaggregation of data to examine health inequities (such as the burden of cervical cancer) among the most marginalised is necessary, to protect the needs of the populations most at risk, including indigenous communities, refugees and immigrants, and individuals in the lesbian, gay, bisexual, transgender, queer or questioning, and intersex community. In this era of misinformation and anti-immunisation campaigns, as well as persistent stigma surrounding cancer in various parts of the world, various challenges stand on the path to elimination of cervical cancer. Improved cancer surveillance and continued evidence-based public awareness activities can serve to counter some of these challenges.
2019-12-10T14:03:52.861Z
2019-12-04T00:00:00.000
{ "year": 2019, "sha1": "4f7bcbdc979b5f1807e4556d4c124468c9ca27a3", "oa_license": "CCBY", "oa_url": "http://www.thelancet.com/article/S2214109X19305236/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4182ffdbf87483a43426a4b256270c4489f521d2", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
260812540
pes2o/s2orc
v3-fos-license
Fusion of myofibre branches is a physiological feature of healthy human skeletal muscle regeneration Background The occurrence of hyperplasia, through myofibre splitting, remains a widely debated phenomenon. Structural alterations and fibre typing of skeletal muscle fibres, as seen during regeneration and in certain muscle diseases, can be challenging to interpret. Neuromuscular electrical stimulation can induce myofibre necrosis followed by changes in spatial and temporal cellular processes. Thirty days following electrical stimulation, remnants of regeneration can be seen in the myofibre and its basement membrane, in the form of small myofibres and encroachment of the sarcolemma and basement membrane (suggestive of myofibre branching/splitting). The purpose of this study was to investigate myofibre branching and fibre type in a systematic manner in human skeletal muscle undergoing adult regenerative myogenesis. Methods Electrical stimulation was used to induce myofibre necrosis in the vastus lateralis muscle of one leg in 5 young healthy males. Muscle tissue samples were collected from the stimulated leg 30 days later and from the control leg for comparison. Biopsies were sectioned and stained for dystrophin and laminin to label the sarcolemma and basement membrane, respectively, as well as for ATPase, and with antibodies against type I and II myosin and against embryonic and neonatal myosin. Myofibre branches were followed through 22 serial sections (264 μm). Single fibres and tissue blocks were examined by confocal and electron microscopy, respectively. Results Regular branching of small myofibre segments was observed (median length 144 μm), most of which were observed to fuse further along the parent fibre. Central nuclei were frequently observed at the point of branching/fusion. The branch commonly presented with a more immature profile (nestin+, neonatal myosin+, disorganised myofilaments) than the parent myofibre, together suggesting fusion of the branch, rather than splitting. Of the 210 regenerating muscle fibres evaluated, 99.5% were type II fibres, indicating preferential damage to type II fibres with our protocol. Furthermore, these fibres demonstrated 7 different stages of "fibre-type" profiles. Conclusions By studying the regenerating tissue 30 days later with a range of microscopy techniques, we find that so-called myofibre branching or splitting is more likely to be fusion of myotubes and is therefore explained by incomplete regeneration after a necrosis-inducing event. Supplementary Information The online version contains supplementary material available at 10.1186/s13395-023-00322-2.
Background Work from the late nineteenth century directed the hypothesis that we are born with a set number of myofibres, and that muscle growth is caused by an increased cell (myofibre) mass [1].However, since then, early human and rodent studies pointed towards an ability for muscle fibres to increase in number immediately post birth [2,3].Research in murine Duchenne muscular dystrophy models has reported longitudinal fibre splitting not observed in control mice [4,5].Similar observations have been made in muscle samples from two young boys with muscular dystrophy [6].This has led some to hypothesise that, in the adult state, muscle can undergo not only hypertrophy (increased cell size) but also a form of hyperplasia through muscle fibre splitting [7,8].While Duchenne represents a pathological condition characterised by chronic cycles of degeneration and regeneration [9], there is support for physiological fibre splitting in animals subjected to extreme loads, such as in rodents [10,11], birds [12,13] and amphibians [14].Just as fibre hyperplasia is a common developmental feature of drosophila [15]. Rodent and human studies generally agree that the processes involved in muscle injury, repair and regeneration are conserved across species [16], with most of the variation being accounted for by differences in the models employed.However, with regard to the question of hyperplasia vs. hypertrophy, data in humans are less clear than in animals, which limits the translation of concepts of muscle growth between species.In human powerlifters, indirect measures suggest the occurrence of hyperplasia.For example, in individuals who have performed heavy strength training for years, microscopy analysis of muscle tissue sections clearly shows visible clefts formed by encroachment of the myofibre membrane into the myofibre [17].In addition, powerlifters have larger total muscle size compared to previously untrained individuals who completed 6 months of resistance training, despite similar muscle fibre size between the two groups [18].Thus, either the powerlifters are born with more fibres or they create more.It should be noted, however, that some of the powerlifters were current or previous users of anabolic steroids and showed a higher number of central nuclei than what is normally seen [19,20], which could be indicative of some degree of pathology, or, as we propose here, regeneration. Indeed, the appearance of split or branched myofibres has been attributed to incomplete lateral fusion of myotubes during regeneration [21,22].This is in line with observations in human vastus lateralis muscle regenerating after myofibre necrosis induced by neuromuscular electrical stimulation [23,24].However, this has not been studied in a systematic manner, and as such represents an open question, which we address in this study.Additional unanswered questions relate to whether type II muscle fibres are more susceptible to injury and how to evaluate fibre type reliably during the highly dynamic process of muscle fibre regeneration, from expression of developmental myosins to their replacement by mature myosins organised in strict sarcomere register.The purpose of this study was therefore to investigate myofibre branching and fibre type in a systematic manner in human skeletal muscle undergoing adult regenerative myogenesis. 
Subjects and experimental design The study was approved by the Regional Scientific Ethical Committees of Copenhagen in Denmark (Ref: HD-2008-074) and conducted in accordance with the Declaration of Helsinki.The muscle biopsies analysed in this study are a subset of samples collected during a larger study [25].Briefly, volunteers were all young healthy males subjected to a muscle injury protocol consisting of 200 electrically stimulated eccentric contractions of the vastus lateralis muscle of one leg, as described in detail [25].For the present study, 5 subjects were selected based on the availability of muscle tissue collected from the injured leg on day 30 and the control leg of the same individual, as well as evidence of necrosis on cross-sections on day 7.The subject characteristics and… are shown in Table 1. Muscle biopsy sampling Muscle biopsies were collected from the vastus lateralis using the percutaneous needle biopsy technique of Bergström [26].Local anaesthetic (1% lidocaine: Amgros I/S, Copenhagen, Denmark) was applied subcutaneously, and tissue was extracted with 5-6-mm diameter biopsy needles and manual suction.Biopsy tissue was then divided into portions and preserved appropriately for cryosectioning, single fibres or transmission electron microscopy (TEM).For cryosectioning, tissue was embedded in Tissue-Tek (Sakura Finetek Europe, Zoeterwoude, the Netherlands) and frozen in isopentane, precooled by liquid nitrogen and stored at − 80 °C.For single fibres, fibre fascicles were fixed in Zamboni fixative (2% formaldehyde, 0.15% picric acid) and stored at − 20 °C in 50% glycerol in PBS [24,27].For TEM, a small part of the tissue sample was immersed in 2% glutaraldehyde in 0.05-M sodium phosphate buffer (pH 7.2) and stored at 4 °C. Cryosectioning A 12-μm-thick sections were cut from frozen samples in a cryostat, placed on glass slides (Superfrost Plus) and stored at − 80 °C.Two serial sections were cut for each slide.In total, twenty-two such slides were prepared from each sample for a series of histochemical (ATPase) and immunofluorescence staining, as indicated in Supplemental Table 1, resulting in 44 serial sections (covering a depth of 528 μm) in total from each specimen. Cryosection immunofluorescence Fourteen slides from each sample were immunofluorescently stained for MyHC I (myosin heavy chain type I), MyHC II (myosin heavy chain type II), MyHCe (myosin heavy chain-embryonic) or MyHCn (myosin heavy chain-neonatal).Along with these target proteins, the sarcolemma and basement membrane were labelled by antibodies against dystrophin and laminin, respectively (Supplemental Table 2).Primary antibodies were applied overnight to fixed (Histofix, 10 min), or unfixed, sections according to the respective antibody use instructions.Sections were then incubated in a cocktail of three secondary antibodies (Supplemental Table 2), for 45 min before being mounting in ProLong Gold Antifade Reagent, containing DAPI (Invitrogen, P36931). Cryosection microscopy ATPase and immunofluorescent cryosections were viewed on a widefield microscope (Olympus BX51).Brightfield and fluorescent images were captured by an Olympus DP71 camera (Olympus Deutschland GmbH, Hamburg, Germany), controlled by the Olympus cellSens software. Cryosection fusion/branching analysis Viewing the serial sections of the regenerating samples (Fig. 
1), several clear cases of myofibre branching or fusion were apparent, for example an average-sized fibre on one section appearing as several small fibres on subsequent sections.To study this in a systematic manner, such fibres were identified and followed through serial sections for as long as they could be followed or until we reached the last section in the series.Using staining for laminin and dystrophin, patterns of fibre branching or fusion were recorded.The onset of fibre branching was defined as a point where the sarcolemma (dystrophin) formed a cleft in the fibre, as described by Swash and Schwartz in their study of myopathic disorders [30].These clefts often developed further on subsequent sections and eventually extended fully through the fibre, resulting in what appeared to be two separate and fully dystrophin-enclosed fibres [30].In recording branching/ fusing events, the fibre branch length was calculated by locating the first section showing a dystrophin cleft.The termination of the branch was recorded as the section where the branch appeared to have fused with the parent fibre (no dystrophin cleft).Some branches remained as branches in the last available section, so their measured length was classified as a minimum length, whereas some branches could not be followed confidently (for example due to folds in the section) and were abandoned.In the end, twelve fibres were identified on sections from two participants and followed through 22 serial sections, covering 264 μm of tissue depth (starting with section no.13, according to the overview presented in Supplemental Table 1).To view such branches longitudinally, we next examined the single fibres. Transmission electron microscopy and tissue processing After 3 rinses in 0.15-M sodium phosphate buffer (pH 7.2), the tissue samples were postfixed in 1% OsO 4 in 0.15-M sodium phosphate buffer (pH 7.2) for 2 h.Following dehydration in a graded series of ethanol, the specimens were transferred to propylene oxide and embedded in Epon (TAAB Laboratories Equipment Ltd., Aldermaston, UK).Ultrathin sections were cut with a Reichert-Jung Ultracut E microtome (Leica Microsystems), collected on 1-hole copper grids with formvar supporting membranes (Merck, Darmstadt, Germany) and stained with uranyl acetate and lead citrate.A Philips CM 100 transmission electron microscope (Philips, Amsterdam, the Netherlands) was used to view the sections, and digital images were obtained with an Olympus Soft Imaging Solutions (OSIS) Veleta side-mounted CCD camera (Olympus, Tokyo, Japan). Determining fibre type From each individual, one cryosection from the control and one from the stimulated leg were analysed (n = 3).A total of 70-80 fibres per section were classified according to the presence or absence of MyHCI, MyHCII, MyHCn and MyHCe and the pattern of staining observed at the four ATPase pH levels (ATPase 4.37, ATPase 4.53, ATPase 4.58 and ATPase 10.3). Unambiguous fusion/branching evident in serial cryosections of regenerating muscle Muscle tissue from healthy individuals who had been subjected to 200 electrically stimulated eccentric (lengthening) contractions 30 days earlier was analysed at the ultrastructural level (TEM), with 3-dimensional reconstructions of single fibre confocal images, and with a thorough study of 22 serial cryosections stained with a variety of immature and mature myosin types as well as ATPase histochemistry. 
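Since the branch lengths reported in this study follow directly from the bookkeeping of section indices and the 12 μm section thickness described above, the arithmetic can be sketched as below; the section numbers in the example are illustrative, not measurements from the study.

```python
SECTION_THICKNESS_UM = 12  # cryosection thickness used in this study, in micrometres

def branch_length_um(first_cleft_section, fusion_section):
    """Length of a myofibre branch, following the bookkeeping described above:
    from the first section showing a dystrophin cleft to the section where the
    branch has fused with the parent fibre again (no dystrophin cleft)."""
    return (fusion_section - first_cleft_section) * SECTION_THICKNESS_UM

# Example: a branch first seen on section 26 and fused again by section 38
# spans 12 sections, i.e. 144 micrometres -- the median branch length reported here.
print(branch_length_um(26, 38))  # 144
```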
Immunofluorescence images show clear encroachment of the sarcolemma and basement membrane, into some myofibres.Following the same fibres along a series of cryosections, many examples of "branching/ splitting" were observed (Figs. 1 and 2).In some cases, these branches reconnected with the parent myofibre at another level in the tissue, and some remained independent.For example, in Fig. 1, the pink circle captures a fibre which from section no.24 shows the sarcolemma protruding into the fibre (branching point), and from section no.26, it appears that the fibre has split completely into three fibres.These three fibres remain visible until a further split by section no.30.On section no.31, the four fibres are each surrounded by their own sarcolemma.However, on section no.40, there is only one fibre, approximately equal in size to the sum of the four smaller fibres visible on section no.31, indicating therefore that the four fibres have fused.On sections no.40 and no.50, this fibre exhibits a normal shape and size with respect to the surrounding fibres in this sample. The additional immunofluorescence stainings provide further information about this fibre and its branches (Fig. 2).They all demonstrate strong immunoreactivity for MyHC II and MyHCn and are negative for MyHC I and MyHCe.The ATPase staining supports the classification of this fibre as type II, which was a common observation in our samples.Of the 210 regenerating fibres that were stained and analysed, only one regenerating fibre predominantly presented as a type I fibre.Similar patterns are evident in the fibres marked with additional outline shapes (Figs. 1 and 2). In general, branches presented with the same myosin expression profile as their parent myofibre.Occasionally, though, a fibre branch presented as MyHCe + MyHCn + , while the parent myofibre was MyHCe-MyHCn + (Fig. 2, white square).Another consistent observation was that the dystrophin, and laminin staining patterns were similar during branching/fusion, implying a continuous basement membrane with the sarcolemma during this process. Through the systematic analysis of 12 branching myofibres on cryosections such as those presented in Fig. 2, a total of 21 branches was observed, with a median length of 144 μm (range 24-264 μm).Four branches were still visible 9 sections deeper into the tissue, on the last section collected in our series (section no.43, Supplemental Table 1).For these 4 branches, we can conclude they have a minimum length of 372μm, with no evidence of refusion with the parent myofibre within this distance. While fusion/branching events were frequent and unequivocal in our samples, we could not determine from the cryosections whether these events were more likely to be fusion of myotubes or branching/splitting of the main myofibre.To investigate this further, we therefore moved on to more detailed imaging techniques, confocal imaging of single myofibres and TEM. The presence of nestin and central myonuclei support fusion rather than branching High-resolution confocal microscopy in 3 dimensions not only confirmed the evidence of branches seen in the cryosections but also provided additional information.Since the fixation procedure used to prepare these fibre bundles is not conducive to MyHCe and MyHCn immunofluorescence, we stained for nestin, a protein only found in regenerating or denervated myofibres, or confined to the NMJ and MTJ in the unperturbed state [32,33].As seen in Fig. 
3A-F, the myofibre branches display strong cytoplasmic immunoreactivity to nestin, where in particular the first myofibre branch displays most intense nestin signal towards the end of the branch (Fig. 3A-B).Although striations representing sarcomeres are visible in the nestin image, this same branch presents a striking lack of desmin + striations, in contrast to the desmin staining pattern of the parent fibre.Nestin in the branch along with desmin-negative striations suggests the branch is at an earlier stage of regeneration than the parent fibre.Chains of nuclei were common in regenerating single fibres, often coinciding with the point of branching, as reported earlier [30].This is especially apparent in Fig. 3G-H, where a branch appears to be connected to its parent myofibre at both ends, one of which immediately follows a chain of nuclei in the parent fibre.The sarcomeres of the branch segment in Fig. 3G-H are continuous with the sarcomeres of the fibre proper, and aside from the chain of nuclei, only nestin staining of the sarcolemma of the branch and parent myofibre disclose the incomplete regeneration stage of this fibre.Taken together, branch segments present with an earlier regeneration stage than the regenerating parent myofibre. Ultrastructural signs of fusion not branching Cross-sections from two subjects were imaged by transmission electron microscopy (Fig. 4).In Fig. 4A-F, a small myofibre is surrounded by larger myofibres.In the higher magnification images (Fig. 4B-F), it is clear that this smaller myofibre predominantly contains organised myofibrils, but also a relatively large area of disorganised myofilaments, corresponding to approximately one-third of the area of the fibre.The yellow asterisks demarcate the non-membranous border between these two zones.Notably, mitochondria align with this border, potential remnants of a population of sarcolemmal mitochondria from the time when this disorganised zone was a sarcolemma-defined branch, separate from its parent myofibre.Additional evidence of fusion, rather than branching, is presented in the high magnification Fig. 4H in the form of membrane-associated electrondense plaques, reported in drosophila as necessary for myoblast fusion [34]. Myosin maturation states of regenerating fibres Thirty days post electrical stimulation, while regeneration is well underway, it is clearly incomplete, and there is heterogeneity within a sample regarding stage of regeneration, with regard to the presence of mature (MyHC I and II) and immature (MyHCe, MyHCn) myosin types.We created additional myosin profiles to those observed in rested muscle samples to accommodate all myofibre profiles present in our specimens, based on the staining patterns on serial sections for MyHC I, MyHC II, MyHCe, MyHCn and ATPase staining at pH levels 4.37, 4.53.4.58 and 10.3 (Fig. 5A).A total of 210 regenerating muscle fibres and 182 control muscle fibres from 3 subjects was analysed.The control fibres fell into the two classic type I or type II fibre classifications, while the regenerating fibres presented with different patterns, requiring a total of 7 staining profiles (Fig. 
5A-B).Several findings are worth noting.Firstly, a similar proportion of type I fibres (47%) was detected in samples from the control and stimulated conditions, suggesting type I fibres were not damaged by the stimulation protocol.Secondly, only 9% of fibres in the stimulated leg could be categorised as classic type II fibres, corresponding to 53% in the control leg of these same individuals.The remaining 44% of fibres in the stimulated leg were represented by two major type II fibre profiles, both of which were positive for MyHCe Fig. 3 Confocal microscope images of 3 regenerating healthy human muscle fibres, 30 days post injury.A-F are stained for desmin (red), nestin (green), and nuclei (blue) and G-H are stained for actin (phalloidin, red), nestin (green) and nuclei (blue).A Maximum intensity projection of a 14-slice z-stack (100-μm scale bar), displaying a nestin + branch attached to a regenerating myofibre (the lower myofibre displayed here alongside an (upper) uninjured myofibre).B Maximum intensity projection of a 25-slice z-stack (20-μm scale bar).Note the striated and nestin + segment (arrows) tightly associated with the parent myofibre.This branch displays a gradual increase in nestin immunoreactivity from the point of branching (or fusion) towards its end (C orthogonal slices 7-8).D-E Maximum intensity projections of a 10-slice z-stack (20-μm scale bar).Note the small nestin + desmin + myofibre segment (arrows) nestled against the parent myofibre (nestin-desmin +).Note the approximately 10 juxtaposed myonuclei in D. G Three slices of a z-stack (scale bar, 50 μm), showing 2 myofibres.The presence of nestin at the perimeter of the upper myofibre in this image indicates ongoing regeneration, in contrast to the lower myofibre.Arrows point to a region of the regenerating myofibre that appears to be split for a length of approximately 100 μm, demarcated by nestin (H).It can be seen from the striated actin staining that the smaller segment is longitudinally continuous with the parent myofibre.and MyHCn.Of these, one category was also positive for MyHC I, while the other was negative for MyHC I, making up 18% and 24%, respectively, of the total fibre pool.The remaining three categories represent patterns which are only found in 1-2 regenerating fibres each and which could not be assigned to any of the existing fibre groups.These fibres did not reflect a clear type I or II fibre type, with conflicting ATPase and immunoreactivity staining patterns. Discussion The occurrence of hyperplasia in healthy adult skeletal muscle is widely debated, fuelled by observations of "splitting" myofibres.However, in principle, such features could be explained by fusion of a myotube to its "parent" myofibre.In our detailed examination of healthy human muscle undergoing adult regenerative myogenesis after myofibre necrosis, we find support for fusion rather than splitting, which argues against hyperplasia, in line with earlier hypotheses that observations of branching/splitting represent fusion and are a physiological process in healthy muscle. 
It is important to note that our study examines regenerating myofibres, as shown by positive staining for the developmental myosins (neonatal and embryonic MyHC), in healthy adult skeletal muscle. While there are some parallels between embryonic development and adult regenerative myogenesis [35], our findings may not necessarily represent other situations, such as development or muscle growth following heavy loading. This distinction is important, as data in larvae show that, specifically during metamorphosis, fibre splitting is a common feature in this species [15]. However, more commonly, changes in muscle mass are investigated following heavy loading, a situation characterised by repair of segmental muscle damage or growth. In any case, changes in muscle mass are most often observed by looking at cross-sections of either whole animal muscles or muscle biopsies in humans. In a comprehensive review from 2019, Murach and colleagues describe fibre splitting following extreme loading conditions in animals [7]. Among others, they use data from Roy and Edgerton [36] to illustrate how changes in fibre pennation angle can make it difficult to assess fibre number from a tissue cross-section. While this is correct, change in fibre length is an additional critical factor, as is pointed out in a letter to the editor by Jorgenson and Hornberger [37]. Fibre lengthening, which is primarily seen following eccentric resistance training [38], represents an additional challenge to counting fibres on muscle cross-sections. In this study, we show that branching seen during regeneration can further complicate this analysis. Figure 6 builds on the illustrations of Murach and colleagues [7] and Jorgenson and Hornberger [37], showing how changes to muscle morphology, and the angle and depth of the cross-section, can influence total fibre number counts. This emphasises that under non-control situations, e.g. following training, during regeneration or in pathological states, muscle cross-sections should be used with caution if the goal is to count the total number of fibres in a muscle. Furthermore, these methodological issues have likely clouded the data pertaining to hyperplasia.

We set out to assess whether muscle fibre branching represents complete fibre splitting, and, with this, myofibre hyperplasia, or alternatively whether the presence of branching fibres could be explained by fusion of myotubes rather than splitting. In this systematic analysis of regenerating human muscle, 30 days after injury induced by electrical stimulation, we find multiple signs of fibre splitting or branching, in line with earlier reports in powerlifters and anabolic drug users [19,20]. By following these fibres through a series of 22 consecutive sections, we frequently observed the branch fusing with the parent myofibre again (Fig. 1), with a median branch length of 144 μm (range 24-264 μm). The same fibre thus appears split in portions of its length and "normal" in other parts. We often observed a single fibre branching into several smaller fibres, similar to observations in rodents in pathological states [4] and following strenuous exercise [10]. This has previously been described as evidence of muscle fibre hyperplasia. Thus, when examined alone, a single cross-section can be difficult to interpret.
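One practical note on those numbers: the reported range of branch lengths is consistent with length being estimated as the number of consecutive 12-μm serial sections over which a branch could be followed, multiplied by the section thickness. The short Python sketch below only illustrates this arithmetic; it is our reading of how the bounds arise from the sectioning protocol, not a calculation described by the authors, and the 12-section example is hypothetical.

```python
SECTION_THICKNESS_UM = 12   # stated thickness of each serial cryosection
N_SECTIONS_FOLLOWED = 22    # consecutive serial sections examined per fibre

def branch_length_um(n_sections_spanned: int) -> int:
    """Estimate branch length as sections spanned x section thickness."""
    return n_sections_spanned * SECTION_THICKNESS_UM

# Bounds implied by the sectioning protocol: a branch visible in only 2 sections
# versus one spanning the whole 22-section series.
print(branch_length_um(2))                    # 24 um  (reported minimum)
print(branch_length_um(N_SECTIONS_FOLLOWED))  # 264 um (reported maximum)
# A hypothetical branch spanned by 12 sections would measure 144 um,
# which happens to equal the reported median.
print(branch_length_um(12))                   # 144 um
```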
While the serial sectioning analysis confirmed branching as a regular feature at this time point of 4 weeks post injury, a continuation of this process towards complete splitting could still be a viable explanation for our observations. To investigate further, we turned to high-resolution confocal imaging for viewing single fibres longitudinally in 3 dimensions. Small myofibre branches were observed, tightly associated with parent myofibres. These branches were often positive for neonatal and in some cases embryonic myosin (Fig. 2), as well as staining positive for nestin, which has been shown to stain fibres in the regenerating or un-innervated states [32,33,39]. Importantly, both the neonatal myogenic marker and nestin-positive staining indicate that the branch is at an earlier stage of development in the regeneration process than the parent fibre. This is further supported by the lack of desmin+ striations in these branch segments (Figs. 3 and 4). Disruption to the sarcomeric striations, reported by Crameri and colleagues in the days after maximal eccentric voluntary exercise [40], is a sign of fibre damage and degeneration, while at the 4-week time point in the present study it represents a different process - the formation of a new myofibre. Theoretically, fibre splitting would result in two, or more, similar fibres, rather than one fibre with mature sarcomeres and one without. Therefore, we find it most likely that the fibres are not splitting and dividing, but rather that the branch is in an earlier state of regeneration than the parent myofibre, where fusion between the branch and parent myofibre is an ongoing process. The chains of myonuclei extending from the point of branch fusion support this, as these nuclei likely mark the point of recent fusion between branch and parent sarcolemmas. Further substantiation for fusion can be found in the transmission electron microscopy images. Firstly, we observed myofibres with different states of sarcomere arrangement separated by non-membranous borders (Fig. 4), again supporting fusion of an immature branch with a more mature parent myofibre. Secondly, between a small myofibre (potentially a branch) and a larger, closely associated myofibre, we observed membrane-associated electron-dense plaques. Electron-dense plaques have been shown in Drosophila to be needed for the initial step of membrane fusion between myoblast and myotube [34,41,42]; however, electron-dense plaques are also observed in kette mutants, where fibres do not fuse [42]. As such, electron-dense plaques are not indicative of fusion per se but can be interpreted as membranes in the phase of fusing. Taken together, we propose that the branching signs we, and others, observe can be explained by incomplete, ongoing regeneration following fibre necrosis, and not splitting of myofibres leading to hyperplasia.
How these branches occur in the first place deserves some consideration. The appearance of split or branched myofibres has earlier been attributed to "incomplete lateral fusion of myotubes during regeneration" [21,22], which fits well with our understanding of how necrotic muscle fibres are replaced by new myofibres. Successful muscle regeneration requires the preservation of the myofibre basement membrane. The original basement membrane is eventually shed after providing essential scaffolding for myogenesis [16,43]. Basement membrane formation distinguishes foetal muscle development from adult myogenesis, because the basement membrane does not form until later stages of muscle development [43]. Within the original basement membrane, satellite cells are activated, proliferate, differentiate and eventually fuse with each other to form myotubes. Thus, one basement membrane scaffold will contain many myotubes, which in turn fuse with each other to form a single myofibre. With this in mind, it does not seem unreasonable that a myotube occasionally does not fuse in a synchronised manner with the other myotubes and becomes partially orphaned from the parent myofibre, forming its own sarcolemma and basement membrane. Alternatively, it is possible that the branch represents a lone myotube formed at a later stage of regeneration than the other myotubes.

While our observations support incomplete lateral myotube fusion as a possible explanation for the presence of branching myofibres in healthy regenerating muscle, alternative interpretations should be considered. The branches that were followed from the first point of branching (or fusion), through the 22 consecutive sections, did in some cases fuse again with the parent myofibre, while other branches were not observed to re-fuse and remained as branches with only a single point of attachment to the parent myofibre. This could be explained by the fibres not being followed far enough to detect fusion with the parent myofibre; however, it is also possible that these branches do not fuse, but rather split, for instance via myocyte grafting as suggested by Murach and colleagues [7]. A dual response could be expected if some fibres suffer only partial destruction. However, from a previous analysis of muscle biopsies taken from these same subjects 7 days following injury, we know that all fibre fragments studied were either completely normal or regenerating (alternating necrotic and regenerating zones, with macrophage infiltration) along their entire length [24], so segmental damage does not seem to be a feature of the electrical stimulation model used, in healthy humans, in the present study. Furthermore, based on the earlier regeneration state of the branch, the centralised nuclei potentially marking the site of recent fusion between the branch and parent myofibre (Fig. 3), and the presence of electron-dense plaques (Fig. 4) pointing towards the cells being on a path towards fusion, we find it more likely that fusion is ongoing rather than splitting. Whether some branches undergo fusion, while others split completely, could certainly be a feature of other models of muscle overload, injury or pathology, and is potentially species-specific.
One of the other interesting observations from the present study was the difficulty in categorising regenerating myofibres as clearly type I or type II. The fibres in the non-stimulated leg were straightforward to classify as type I or type II, whereas for the regenerating muscles we had to create additional profiles, seven in total, most of which expressed developmental myosins; the re-expression of these during regeneration has been well documented previously [44]. The different myosin profiles most likely represent different stages of regeneration, as in the case of the branches, with developmental myosin representing an immature state, as is seen in the prenatal environment [45]. In general, the MyHC I and II antibody staining was in agreement with the ATPase fibre categories. Some fibres stained positive with both myosin type I and myosin type II antibodies, and in other cases the ATPase staining profile was inconclusive. Thus, a combination of antibody staining and ATPase staining is helpful for defining a fibre as primarily expressing either type I or type II myosin under regeneration conditions. The other outcome from this analysis was that the regenerating fibres were almost exclusively type II. Crameri and colleagues have previously suggested, but were not able to show conclusively, that electrical stimulation primarily targets type II fibres [46]. However, Gregory and Bickel suggest in a perspectives paper [47] that it is more likely that electrical stimulation has a stochastic/non-selective fibre-type recruitment pattern when used as a model of exercise. We speculate that the finding that 99.5% of the affected fibres were type II in nature is not a feature of the electrical stimulation model per se but rather a combination of (1) the recruitment pattern seen in the case of very high strain - in this case, electrical stimulation in combination with eccentric contractions selectively recruits large motor units, at least initially, and with this the fast type II fibres - and (2) the regeneration stage, 4 weeks after the stimulated contractions. In rat soleus, a predominantly slow muscle, regenerating fibres initially express fast myosin transcripts and then switch to slow myosin upon innervation [48]. This supports type II myosin as the default myosin heavy chain type while the fibre is still developing (and not yet innervated). However, it is not known if this occurs in healthy adult human muscle, and it should be noted that the overall percentage of type I fibres (47%) did not differ between the regenerating muscle and the control muscle, which further supports the regenerating fibres being type II in nature. Our observations relating to branching and fusion therefore only reflect events in type II fibres, in a mixed muscle such as the vastus lateralis.

Conclusions
In this detailed study of branching myofibres in healthy regenerating human skeletal muscle tissue 4 weeks after a necrosis-inducing event targeting type II fibres, we find more evidence for incomplete, ongoing myotube fusion rather than splitting (and thereby hyperplasia) of myofibres. However, there may be alternative explanations, and further analysis of full single fibres from tip to tip is needed to definitively confirm this.
Fig. 1 Cross-sectional profiles of regenerating fibres. 12-μm-thick serial sections of a biopsy from regenerating, healthy, human vastus lateralis muscle 30 days after necrosis induced by electrically stimulated eccentric contractions. Sections were stained with dystrophin to label the sarcolemma. Note the change in myofibre shape and the features of branching and fusion in each highlighted area (coloured shape outlines). *Indicates the same (uninjured) myofibre throughout the series, for reference. Serial section numbers are indicated. Scale bar, 100 μm

Fig. 2 Cross-sectional profiles of regenerating fibres. Serial sections of a biopsy from regenerating human vastus lateralis skeletal muscle 30 days after injury induced by electrical stimulation-eccentric contractions. Sections were stained by ATPase or immunofluorescence, as indicated. In addition to features of branching and fusion, note the high prevalence of fibres positive for MyHCn (and to a lesser extent MyHCe), which mostly appear to have a type II fibre profile (see Fig. 5 for details). Coloured shape outlines highlight fibres demonstrating branching and/or fusion along this series. *Indicates the same uninjured type I myofibre on each section, for reference. Scale bar, 100 μm

Fig. 3 Confocal microscope images of 3 regenerating healthy human muscle fibres, 30 days post injury. A-F are stained for desmin (red), nestin (green) and nuclei (blue); G-H are stained for actin (phalloidin, red), nestin (green) and nuclei (blue). A Maximum intensity projection of a 14-slice z-stack (100-μm scale bar), displaying a nestin+ branch attached to a regenerating myofibre (the lower myofibre displayed here alongside an (upper) uninjured myofibre). B Maximum intensity projection of a 25-slice z-stack (20-μm scale bar). Note the striated and nestin+ segment (arrows) tightly associated with the parent myofibre. This branch displays a gradual increase in nestin immunoreactivity from the point of branching (or fusion) towards its end (C, orthogonal slices 7-8). D-E Maximum intensity projections of a 10-slice z-stack (20-μm scale bar). Note the small nestin+ desmin+ myofibre segment (arrows) nestled against the parent myofibre (nestin- desmin+). Note the approximately 10 juxtaposed myonuclei in D. G Three slices of a z-stack (scale bar, 50 μm), showing 2 myofibres. The presence of nestin at the perimeter of the upper myofibre in this image indicates ongoing regeneration, in contrast to the lower myofibre. Arrows point to a region of the regenerating myofibre that appears to be split for a length of approximately 100 μm, demarcated by nestin (H). It can be seen from the striated actin staining that the smaller segment is longitudinally continuous with the parent myofibre. *Central myonuclei potentially indicate the site of recent fusion. The position of the YZ orthogonal view images in C (20-μm scale bar), F (scale bar 5 μm) and H (scale bar 10 μm) is designated by dashed lines in B, E and G, respectively

Fig. 4 Transmission electron microscopy images of cross-sections of regenerating human muscle, 30 days post injury, in two subjects (one subject in A-F, the second subject in G-H). A shows a small fibre surrounded by larger myofibres (20-μm scale bar). B-F (scale bars: 10 μm (B), 5 μm (C), 5 μm (D), 2 μm (E), 1 μm (F)) show magnified images of A.
G A myofibre branch (*) closely associated with a parent myofibre (scale bar 10 μm), with evidence of membrane fusion, further magnified in H (scale bar 2 μm). The arrows point to membrane-associated electron-dense plaques, indicative of membranes in the phase of fusing. In all images, Z is z-disc, cap is capillary, m is myofibre, mn is myonucleus and mf is myofilament

Fig. 6 Illustration of the challenges of counting myofibre number on muscle cross-sections, due to changes in pennation angle, myofibre length and myofibre branching

Table 1 Subject characteristics and muscle injury markers 4, 7 and 30 days after injury. Age and anthropometric values are mean ± SD. Injury markers are median ± SD
Antipyretic Effect of Cinnamomum burmannii (Nees & T. Nees) Blume Infusion in Fever-induced Rat Models

Background: Fever is a frequent clinical sign encountered in humans, especially in children. Unfortunately, access to health care and medications (antipyretics) is hampered by shortages of services and affordability, which are accentuated by limited local resources, mainly for those living in remote areas. Therefore, herbal medicine should be developed as an alternative for treating fever and as a substitute for reliance on synthetic antipyretics. This study was conducted to observe the antipyretic effect of Cinnamomum burmannii (Nees & T. Nees) Blume infusion using Diphtheria Tetanus Pertussis (DTP) vaccine-induced fever in rats. Methods: Twenty-eight male Wistar rats (150-200 g) were randomly allocated into control and treatment groups. Fever was induced with DTP vaccine injected intramuscularly (0.7 mL/200 g body weight) and 4 hours later, distilled water (5 mL) was administered orally to the control group while the treatment groups received 5 mL of 3%, 6%, or 12% cinnamon infusion. Rectal temperature was measured before pretreatment, 4 hours after the fever-inducing DTP vaccine injection, and at 30-minute intervals during the 180 minutes after infusion administration. All procedures and protocols were performed in October 2012 at the Pharmacology Laboratory, Faculty of Medicine, Universitas Padjadjaran, Bandung. Results: Data analysis using one-way analysis of variance (ANOVA) showed a significant reduction (p<0.001) of rectal temperature after 30 minutes, and the Duncan post-hoc test showed a significant effect for the 6% and 12% cinnamon infusion groups. Conclusion: The antipyretic effect of 6% and 12% Cinnamomum burmannii (Nees & T. Nees) Blume infusion in fever-induced rat models is found in the first 30 minutes. [AMJ.2014;1(1):81-5]

Introduction
Fever is an increase of body temperature due to changes in the body's thermoregulatory set-point caused by pyrogens. It is a frequent clinical sign encountered in humans, which can be due to infectious or noninfectious diseases. Developing countries such as Indonesia, with high population density and a large geographical area, may face disparities in access to health care and medication due to limited resources, especially for those who live in remote areas where primary health care (PHC) facilities, such as pusat kesehatan masyarakat (Puskesmas), are still geographically difficult to reach. Antipyretic side effects such as hypersensitivity, nausea, and vomiting occasionally occur with higher dosage consumption. 1 Hypersensitivity reactions can range from mild rashes to more serious problems such as erythema multiforme, which is characterized by multiform skin lesions, Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN), which carry a poor prognosis. 2 Hence, herbal medicine is a kind of health treatment which we need to look into as an alternative in treating fever, since it is relatively affordable and accessible, so that we do not have to rely as much on synthetic drugs. Cinnamon is one example of an herbal medicine that is believed to have many beneficial health effects. With the genus name Cinnamomum, belonging to the family Lauraceae, it comprises many species, and one of the species commonly used in Indonesia is Cinnamomum burmannii (Nees & T. Nees) Blume. 3 In a pharmacological study, a reduction of body temperature in mice was observed after the administration of decoctions of the dried twigs of cinnamon. 4
Meanwhile, an in vitro finding reveals that cinnamaldehyde, a chemical constituent of cinnamon, can suppress production of endogenous pyrogens such as tumor necrosis factor (TNF), interleukin-6 (IL-6), and interleukin-1 (IL-1), 5 thus suggesting a role of cinnamaldehyde in providing hypothermic and antipyretic action. Accordingly, we were interested in conducting research on Cinnamomum burmannii (Nees & T. Nees) Blume using the simplest method of extracting active compounds, infusion, to provide evidence for a potential role of Cinnamomum burmannii (Nees & T. Nees) Blume in the treatment of fever.

Methods
A total of 28 male Wistar rats (150-200 g) obtained from Pusat Penelitian Antar Universitas (PPAU), Institute of Technology, Bandung were used in this research. The rats were housed at the animal facility of the Pharmacology Laboratory, Faculty of Medicine, Universitas Padjadjaran, Bandung under a standard temperature condition (25 ± 2°C). Rats that had been used previously in other studies were excluded. For seven days prior to the start of the experiment, the rats were kept under laboratory conditions and allowed unlimited food and water. The rats were sacrificed with formalin injection at the end of the experiment. Seven rats were used in each intervention group. The research protocols and animal care procedures were approved by the Health Research Ethics Committee of the Faculty of Medicine, Universitas Padjadjaran, Bandung.

The Cinnamomum burmannii (Nees & T. Nees) Blume, originating from Padang, Indonesia, was bought from a herbal store in October 2012. The sample was identified as Cinnamomum burmannii (Nees & T. Nees) Blume by the Laboratory of Plant Taxonomy, Biology Department, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran. The cinnamon barks were measured into weights of 0.15 g, 0.30 g and 0.60 g. Water was heated in the first-level pot until it boiled, and cinnamon barks mixed with 5 mL of water were placed in the second-level pot and heated at 90°C for 15 minutes, with the preparation stirred every 5 minutes.

Twenty-eight male Wistar rats (150-200 g) were randomly allocated into 4 groups. The normal body temperature of each rat was measured rectally at predetermined intervals and recorded. The rectal temperature was measured by gently inserting a digital thermometer to a length of approximately 2.5 cm into the rectum until a stable reading was obtained, or for up to 30-60 s. For this, rats were restrained manually at the base of the tail. The thermometer was accurate to 0.1°C. After measuring basal rectal temperature, animals were injected intramuscularly with 0.7 mL/200 g body weight of DTP vaccine. Rats were then returned to their housing cages.

Four hours after DTP vaccine injection, each rat's rectal temperature was measured again, as described previously. Only rats that showed an increase in temperature of at least 0.5°C were used for this study. The cinnamon infusion at doses of 3%, 6% and 12% was administered as 5 mL orally to 3 groups of animals. The control group received 5 mL of distilled water orally. Rectal temperature was measured at 30-minute intervals during a period of 180 minutes after the infusion and distilled water administration.

Differences in mean values between groups were analyzed for each 30-minute interval by one-way analysis of variance (ANOVA) followed by the Duncan post-hoc test. Statistical significance was assessed at p < 0.05.
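To make the dosing and the statistical procedure concrete, the sketch below shows (i) how the stated bark weights correspond to the 3%, 6% and 12% (weight/volume) infusion concentrations, and (ii) how a one-way ANOVA across the four groups at a single time point could be run. This is only an illustrative Python sketch: the temperature values are made up, the original analysis software is not specified here, and Duncan's post-hoc test is not reproduced because SciPy does not provide it.

```python
from scipy import stats

# (i) Infusion concentration as percent weight/volume: bark mass (g) per 5 mL water.
for bark_g in (0.15, 0.30, 0.60):
    print(f"{bark_g} g / 5 mL = {bark_g / 5 * 100:.0f}% w/v")   # 3%, 6%, 12%

# (ii) One-way ANOVA across the four groups at one 30-minute time point.
# Temperatures (degrees C) below are hypothetical; n = 7 rats per group as in the study.
control = [38.1, 38.4, 38.0, 38.3, 38.5, 38.2, 38.2]
cinn_3  = [38.0, 38.2, 37.9, 38.1, 38.3, 38.0, 38.1]
cinn_6  = [37.4, 37.6, 37.3, 37.5, 37.7, 37.4, 37.5]
cinn_12 = [37.3, 37.5, 37.2, 37.4, 37.6, 37.3, 37.4]

f_stat, p_value = stats.f_oneway(control, cinn_3, cinn_6, cinn_12)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # significance assessed at p < 0.05
```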
Results
Results of the antipyretic effect of the Cinnamomum burmannii (Nees & T. Nees) Blume infusion are presented in Figure 1. Thirty minutes after intervention (30'), an increase of rectal temperature by 0.5°C was observed in control animals given distilled water, producing a mean rectal temperature of 38.24 ± 0.24°C. Treatment with 5 mL of 6% (group 3) and 12% (group 4) cinnamon infusion significantly (p<0.001) reduced fever induced by DTP vaccine at 30 minutes after oral administration, by approximately 0.8°C. On the other hand, treatment with 5 mL of 3% cinnamon infusion (group 2) failed to reduce the temperature: it increased the rectal temperature by 0.4°C, only 0.1°C lower than in the control group at 30 minutes after oral administration. The antipyretic effects of the 2 higher doses of Cinnamomum burmannii (Nees & T. Nees) Blume were noted as early as 30 minutes after oral administration, but the effect was not maintained from the next 30 minutes through to 180 minutes after oral administration. Temperature values fluctuated with no clear increasing or decreasing trend. Rectal temperatures in all groups had not returned to normal by 180 minutes after oral administration.

Discussion
Theoretically, antipyretics work as inhibitors of prostaglandin synthesis by inhibiting the enzyme cyclooxygenase (COX). The cyclooxygenase enzyme is also influenced by the presence of cytokines such as tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), interleukin-1 (IL-1) and interferon-γ produced by monocytes and macrophages. Meanwhile, monocytes and macrophages are stimulated by exogenous pyrogens when the body encounters infection, toxins, or injury. In the absence of the COX enzyme, arachidonic acid cannot be converted to prostaglandin (PGH2), hence no other prostaglandins that act as mediators of fever can be yielded. 6 The Diphtheria Tetanus Pertussis (DTP) vaccine used in this research to induce fever in the rats was a combination vaccine against infection with diphtheria, pertussis and tetanus. It is believed to cause pyrogenic activity due to the pertussis component, in which the toxins present in the vaccine indirectly serve as exogenous pyrogens that cause fever. 7 The antipyretic effect of cinnamon might be due to its cinnamaldehyde content, which may have an inhibitory effect on secretion of IL-1 and thereby on prostaglandin synthesis. This corresponds to a study conducted by a group of researchers from Taiwan who revealed that cinnamaldehyde can suppress production of endogenous pyrogens (TNF, IL-6, and IL-1). 5 The antipyretic effect of Cinnamomum burmannii (Nees & T. Nees) Blume may be due to these properties, which break the chain and prevent the release of the prostaglandins that cause fever.

Lower doses of Cinnamomum burmannii (Nees & T. Nees) Blume infusion had less efficacy in reducing rectal temperature. The increase in temperature at minute 60 and the failure to reach basal temperature could be due to the short duration of action of cinnamon in decreasing the elevated temperature. Whether the thermoregulatory set point returns to normal may also depend on endogenous pyrogens persisting in the circulation for several hours. 8 Cinnamaldehyde, a chemical constituent of cinnamon with an inhibitory effect on the secretion of endogenous pyrogens, may not be sufficient to fully inhibit the release of endogenous pyrogens, COX and the prostaglandins. Therefore, further research should be conducted on Cinnamomum burmannii (Nees & T.
Nees) Blume using other methods of extracting the active compound, which may result in a higher concentration of cinnamaldehyde and thus increase its effectiveness.

For the control group, the decrease in temperature after 180 minutes can be explained by the fact that rats, like humans, probably have their own mechanisms for regulating body temperature. One of these is using the tail for thermoregulation, by dilating the tail blood vessels. 9 It can be concluded that 5 mL of 6% and 12% Cinnamomum burmannii (Nees & T. Nees) Blume infusion given orally had an antipyretic effect for the first 30 minutes. Additional studies are needed to determine whether the antipyretic effects of cinnamon were due to an inhibitory effect of cinnamaldehyde on the secretion of endogenous pyrogens, and to compare cinnamon with gold-standard antipyretics such as paracetamol to see whether there is a significant difference in antipyretic effect between them. The research should also be developed into a clinical trial and lastly expanded into phytopharmaca.

Figure 1 Change in rectal temperature after intramuscular injection of DTP vaccine and oral administration of 5 mL distilled water or one of 3 doses of Cinnamomum burmannii infusion to rats with fever. All drugs were administered at 0 minutes (0'); n = 7 for all groups
The Situate AI Guidebook: Co-Designing a Toolkit to Support Multi-Stakeholder Early-stage Deliberations Around Public Sector AI Proposals

Public sector agencies are rapidly deploying AI systems to augment or automate critical decisions in real-world contexts like child welfare, criminal justice, and public health. A growing body of work documents how these AI systems often fail to improve services in practice. These failures can often be traced to decisions made during the early stages of AI ideation and design, such as problem formulation. However, today, we lack systematic processes to support effective, early-stage decision-making about whether and under what conditions to move forward with a proposed AI project. To understand how to scaffold such processes in real-world settings, we worked with public sector agency leaders, AI developers, frontline workers, and community advocates across four public sector agencies and three community advocacy groups in the United States. Through an iterative co-design process, we created the Situate AI Guidebook: a structured process centered around a set of deliberation questions to scaffold conversations around (1) goals and intended use of a proposed AI system, (2) societal and legal considerations, (3) data and modeling constraints, and (4) organizational governance factors. We discuss how the guidebook's design is informed by participants' challenges, needs, and desires for improved deliberation processes. We further elaborate on implications for designing responsible AI toolkits in collaboration with public sector agency stakeholders and opportunities for future work to expand upon the guidebook. This design approach can be more broadly adopted to support the co-creation of responsible AI toolkits that scaffold key decision-making processes surrounding the use of AI in the public sector and beyond.

INTRODUCTION
Public sector agencies in the United States are rapidly adopting AI systems to assist or automate services in settings such as child welfare, credit lending, housing allocation, and public health. These tools have been introduced to help overcome resource constraints and limitations in human decision-making [12,36]. However, as a growing body of work has documented, public sector AI tools have often failed to produce value in practice, instead exacerbating existing problems or introducing new ones [10,30,42,65]. For example, the Michigan Unemployment Insurance Agency developed an AI-based fraud detection system (MiDAS); the agency stopped using the tool after realizing it falsely flagged over 90% of its cases, a discovery that was made only after the tool had been in use for over two years, impacting hundreds of thousands of people along the way [6]. Similarly, following the deployment of an AI-based tool for child maltreatment screening, Allegheny County's Department of Human Services faced significant criticism after the tool was found to exacerbate biases against Black and disabled communities [17,20,24,25]. Research indicates that these problems in deployment were a consequence of fundamental conflicts between the tool's design on the one hand, and data limitations and worker needs on the other [17,20,30,31]. Many other public sector agencies have dropped deployed AI tools for similar reasons, even after investing significant resources into their development (e.g., [28,57]).
Many failures in public sector AI projects can be traced back to decisions made during the earliest problem formulation and ideation stages of AI design [13,47,65,68]. AI design concepts that make it to production may be "doomed to fail" from the very beginning, for a variety of reasons. For example, AI design concepts have often been conceived in isolation from workers' actual decision-making tasks and challenges, leading to AI deployments that are not actually viable in practice [26,31,62,67,68]. Similarly, teams often propose design concepts for new tools that cannot possibly be implemented in an effective, safe, or valid way given technical constraints, such as the availability and quality of data [13,50,65,68]. However, discussion of such constraints is commonly left to later stages of the AI lifecycle, by which point teams have invested in an idea and may be more reluctant to explore alternative ideas [30,68]. While agencies utilizing AI may be motivated to try to mitigate issues at later project stages, such attempts are unlikely to yield meaningful improvements if fundamental issues around the problem formulation and solution design are left unaddressed [20,31,61,65,68].

In this paper, we ask: How can we support public sector agencies in deciding whether or not a proposed AI tool should be developed and deployed in the first place? Today, we lack systematic processes to help agencies make informed choices about which AI project ideas to pursue, and which are best avoided. As AI tools proliferate in the public sector, the failures discussed above indicate that agencies are repeatedly missing the mark with AI innovation. While existing responsible AI toolkits have provided guidance on ways to support AI development and implementation to ensure compliance with the relevant principles and values (e.g., [18,37,39,53]), most existing toolkits are designed for use in industry contexts. Furthermore, most toolkits start from the assumption that the decision to develop a particular AI tool has already been made.

To address these gaps, we introduce the Situate AI Guidebook: a toolkit to scaffold early-stage deliberations around whether and under what conditions to move forward with the development or deployment of a proposed public sector AI innovation. To ensure that our guidebook and process design is informed by existing organizational needs, practices, and constraints in the public sector, we partnered with 32 individuals, spanning a wide range of roles, across four public sector agencies and three community advocacy groups across the United States. Over the course of 8 months, we iteratively designed and validated the guidebook with a range of stakeholders, including (1) public sector agency leadership, (2) AI developers, (3) frontline workers, and (4) community advocates. The public sector agencies we partnered with represent different levels of experience and maturity with AI development and deployment: at the time of this research, some had just begun ideating ways to integrate AI tools into their agencies' processes; some were already in the process of developing new AI tools; and some had already experienced failures in AI tool deployment that led to halts in their use. The community advocacy groups include organizations that, among other areas of focus, represent and support community members in navigating challenging interactions with public services (e.g., parents negatively impacted by the child welfare system).
We conducted formative semi-structured interviews and iterative co-design activities that guided the content and process design of the Situate AI Guidebook. In particular:
• Through semi-structured interviews, we developed an understanding of public sector agencies' current practices and challenges around the design, development, and evaluation of new AI tools, in order to identify opportunities for new processes to improve current practice.
• Through co-design activities, participants ideated and iterated upon a set of questions that they believed were critical to consider before deciding to move forward with the development of a proposed AI tool. In addition, they described how they envisioned a deliberation process could be effectively structured for adoption at their agencies.

The resulting set of deliberation questions spanned a broad range of topics, from centering community needs to surfacing potential agency biases given their positionality - topics which are relatively understated in existing responsible AI toolkits developed for industry contexts. Notably, participants gravitated toward deliberation questions that promoted reflection on potential differences in perspective among the various stakeholders of public sector agencies (e.g., agency workers, frontline workers, impacted community members), surrounding topics such as the problem to be solved by an AI tool, notions of "community", or understandings of what it means for decision-making to be "fair" in a given context. This work presents the following contributions:
(1) The Situate AI Guidebook1 (v1.0), the first toolkit co-designed with public sector agencies and community advocacy groups to scaffold early-stage deliberations regarding whether or not to move forward with the development of a proposed AI tool.
(2) A set of 132 co-designed deliberation questions spanning four high-level topics: (1) goals and intended use, (2) societal and legal considerations, (3) data and modeling constraints, and (4) organizational governance factors. Participants indicated these considerations are critical to discuss when deciding whether to move forward with the development of a proposed AI tool, yet are not proactively or deliberately discussed today.
(3) Guidance on the overall decision-making process that the Situate AI Guidebook can be used to support, informed by how participants envisioned they would use the guidebook in their agencies and by prior literature discussing related challenges that threaten the practical utility of research-created artifacts [37,66].
(4) Success criteria for using the guidebook, informed by participants' existing challenges, prior literature, and signals that participants themselves described as valuable in assessments regarding the guidebook's ability to promote meaningful improvements in their agency.
In the following sections, we first overview relevant bodies of prior literature to help ground and motivate the creation of our toolkit (Section 2).We then describe the approach we took to collaboratively develop the Situate AI Guidebook (Section 3), and describe the guidebook's major components, including its guiding design principles (Section 4.1), deliberation questions (Section 4.2), process design (Section 4.3), and success criteria (Section 4.4).We conclude with a discussion of anticipated challenges, as well as directions for future research aimed at understanding how to implement such deliberation processes most effectively.We also discuss implications for future co-design of responsible AI toolkits intended to promote meaningful change in public sector contexts (Section 5).The public sector agencies we partnered with in this study plan to explore the use of the guidebook through pilots, to identify further avenues for improvement. BACKGROUND 2.1 Public Sector AI and Overcoming AI Failures In the United States, public sector agencies are government-owned or affiliated organizations occupying the federal, state, county, or city government, responsible for making decisions around the allocation of educational, welfare, health, and other services to the community [41].Public sector agencies across the United States are exploring how to reap the benefits of AI innovations for their own workplaces.AI tools promise new opportunities to improve the efficiency of public sector services, for example, by increasing decision quality and reducing agency costs [7,11,46,63].In 2018, 83% of agency leaders indicated they were willing or able to adopt new AI tools into their agency [5].In the public sector, there is also a recognition that developing AI tools in-house can help ensure that they are better tailored to meet agency-specific needs, ensure they are trained on representative datasets, and account for local compliance requirements [16].However, achieving responsible AI design in the public sector has proven to be an immense challenge [22,23,56,64].The domains where agencies are attempting to apply AI are often highly socially complex and high-stakes-including tasks like screening child maltreatment reports [58], allocating housing to unhoused people [35], predicting criminal activity [32], or prioritizing medical care for patients [45].In these domains, where some public sector agencies have a fraught history of interactions with marginalized communities [4,54], it has proven to be particularly challenging to design AI systems that avoid further perpetuating social biases [10], obfuscating how decisions are made [31], or relying on inappropriate quantitative notions of what it means to make accurate decisions [13].Public sector agencies are increasingly under fire for implementing AI tools that fail to bring value to the communities they serve, contributing to a common trend: AI tools are implemented then discarded after failing in practice [21,56,67,68]. 
Research communities across disciplines (e.g., HCI, machine learning, social sciences, STS) are beginning to converge toward the same conclusion: Challenges observed downstream can be traced back to decisions made during early problem formulation stages of AI design.Today, we lack concrete guidance to support these early stages of AI design [13,47,65,68].For example, after observing decades of failures to develop clinical decision support tools that bring value to clinicians, researchers have found that AI developers may lack an adequate understanding of which tasks clinicians desire support for, leading to the creation of tools that target problems that clinicians do not actually have [15,21,67].These trends are beginning to surface across other domains that have more recently begun to explore the use of new AI tools.In social work, researchers have found that technical design decisions in decision support tools reflect misunderstandings around the type of work that social workers actually do, leading to deployments where, for instance, the underlying logic of the model conflicts with how workers are trained and required to make decisions [30,31].Others have surfaced how seemingly technical design decisions made during early stages of model design actually embed policy decisions that conflict with community values and needs [20,59,61]. In addition to concerns regarding how well the problem formulation and design of a given AI tool reflects worker practices and community values, there is a concern that AI tools deployed in complex, real-world domains may be conceived without adequate consideration for the actual capabilities of AI.For example, examining a range of real-world decision support tools (e.g., in criminal justice, child welfare, tax lending), researchers have argued that existing AI deployments lack validity, due to limitations in the types of data that can be feasibly collected to train the desired model [13,42,50,65].This highlights the need for developers and organizations to reflect upon technical constraints and limitations at earlier stages of the AI development lifecycle, such as when evaluating whether or not to pursue a proposed AI project in the first place. While public sector agencies have emphasized the potential to improve decision accuracy and reduce bias as a key motivation to use new AI tools (e.g., [14]), these challenges around the problem formulation and design of AI systems implicate the veracity of these claims [13,31,50,61,65,68].We identify a significant opportunity to better support public sector agencies in making systematic, deliberate decisions regarding whether or not to implement a given AI tool proposal.Given the vast potential for harm, and the similarly vast potential for AI systems to meaningfully support workers and improve services in the public sector, it is critical to support agencies through concrete guidance and processes in making more informed decisions around which AI tool proposals to pursue, and which to avoid. 
Toolkits for Responsible AI Governance
In an effort to support responsible design and development of AI systems in practice, the HCI, ML and FAccT research communities have contributed a range of responsible AI toolkits. These toolkits are intended to support and document assessments of AI systems, including their (potential) impacts (e.g., [52]), intended use cases (e.g., [39]), capabilities and limitations (e.g., [53]), dataset quality (e.g., [19,49]), and performance measures (e.g., [39]). Many of these toolkits are intended to be used as communication tools. For example, Model Cards provide a structure for communicating information regarding the intended uses, potential pitfalls, and evaluation measures of a given ML model, to support assessments of suitability for a given application and context of use [39]. Recent research surveying these toolkits has found that the majority of existing toolkits frame the work of AI ethics as "technical work for individual technical practitioners" [66]. For the majority of existing responsible AI toolkits, the primary users are ML practitioners, limiting the forms of knowledge and perspectives that inform the work of "AI ethics" [66]. A smaller number of toolkits have been designed for use by organization-external stakeholders, to support impacted end-users in interrogating and analyzing deployed automated decision systems (e.g., the Algorithmic Equity Toolkit [33]); provide impacted stakeholders with an opportunity to share feedback on an AI system's use cases and product design (e.g., Community Jury [1]); or support philanthropic organizations in vetting public sector AI technology proposals (e.g., [3]). In all of these examples, the toolkit supports examinations of AI systems that have already been developed and sometimes even deployed.

Most existing toolkits assume that the decision to develop a particular AI system has already been made. Therefore, even when they are intended to support reflection and improvement of the AI system, the types of improvements that could stem from using the toolkit tend to be limited to those that would not require fundamental changes to the underlying technology. Meanwhile, while some existing responsible AI toolkits target earlier stages of AI development (e.g., [69]), these have primarily been designed for private sector contexts. Yet there is good reason to expect that public sector agencies would benefit from tailored responsible AI tools. For instance, compared with the private sector, there is a greater expectation that public sector agencies exist to serve people and are expected to make decisions that center communities' needs. When making decisions as critical as what new AI tools to deploy, agencies are expected to adhere strongly to values such as deliberative decision-making, public accountability, and transparency. To date, there exists minimal concrete and actionable guidance on how to support public sector agencies in scaffolding early-stage deliberation and decision-making.
A related existing artifact is the AI Impact Assessment, described as a "process for simultaneously documenting an [AI] undertaking, evaluating the impacts it might cause, and assigning responsibility for those impacts" [40].AI Impact Assessments have been proposed for both public and private sector contexts, and are intended to be completed either at an early stage of AI design (e.g., [2]), or after an AI system is developed or deployed (e.g., [38]).Another related artifact is the Data Ethics Decision Aid (DEDA) [18], a framework to scaffold ethical considerations around data projects proposed in the Dutch Government.However, neither of these examples are designed as deliberation toolkits, to promote collaborative reflection and discussion around the underlying problem formulation or solution design of an AI tool.AI Impact Assessments and ethical decision aids have also not typically been designed in collaboration with the stakeholders they intend to serve.With recent research suggesting low adoption of responsible AI toolkits in real-world organizational contexts [51,66], a co-design approach with organizational stakeholders has the potential to generate responsible AI tools that work in practice. METHODS: CO-DESIGN AND VALIDATION OF THE SITUATE AI GUIDEBOOK To iteratively co-design and validate the Situate AI Guidebook, we conducted semi-structured interviews and co-design activities with a range of stakeholders both within and outside of public sector agencies.In this section, we describe participants' backgrounds, the approach and resources used in our iterative co-design process, and our data analysis approach. Participants and Recruitment We co-designed the Situate AI Guidebook with individuals from four public sector agencies across the United States.Collectively, this set of public sector agencies has experienced a range of decisionmaking scenarios around the creation or use of AI-based decision tools.All four agencies are currently ideating new forms of AIbased tools, three have already implemented AI tools, and at least one had previously deployed an AI tool and subsequently decided to abandon it.From these agencies, we wanted to include stakeholders at different levels of the organizational hierarchy including those with experience making relevant decisions and those who are involved in the development or consumption of AI tools but who are not typically involved in decisions around development and deployment.We therefore included participants from three core stakeholder groups: 1) Agency leaders (L) who are in director or managerial roles, typically involved in agency-or department-level decisions including whether to design and deploy a particular AI tool, 2) AI developers, analysts, and researchers (A) who are in development, analysis, or research teams internal to a given public sector agency and typically build and evaluate AI tools, and 3) Frontline decision-makers (F) whose occupations bring them in direct contact with the community their agency serves and whom an AI tool may be intended to assist.Because we wanted to learn from additional frontline decision-makers but had access only to a limited number at the public sector agencies we connected with, we recruited additional participants beyond these agencies, with relevant professional backgrounds.These included social work graduate students with prior field experience making frontline decisions in public sector agencies. 
In addition, we co-designed the guidebook with individuals from three community advocacy groups across the United States, including family representation and child welfare advocacy groups.Individuals from these organizations created the fourth stakeholder group: 4) Community advocates (C) who represent and meet community members' needs around public services.While the Situate AI Guidebook is intended to be used by workers within a public sector agency, we included community advocates because we wanted the guidebook to represent their perspectives regarding the most critical considerations for moving forward with an AI tool design.As discussed in Section 4.3.2,we also worked with community advocates to begin envisioning what a future version of the toolkit, aimed at engaging community members in the deliberation process, might look like. In total, 7 agency leaders; 7 developers, analysts, and researchers; 7 frontline decision-makers; and 11 community advocates participated in the co-design process.To recruit public sector agencies, we contacted 19 U.S. public sector agencies at the state, city, or county level with human services departments.We received responses from five agencies.Following a series of informal conversations to share our research goals and study plans, four of the agencies decided to participate in the study.To recruit individuals from community advocacy organizations, we contacted community leaders and advocates across 8 organizations.While we requested individual study participation, some participants preferred to participate in the research study in small groups.By participating in groups, they believed they could provide a more extensive set of insights together.21 out of 25 sessions were conducted individually, and the remaining four were group interviews.For ease of communication, we will use the singular noun "participant" throughout the remainder of the paper. Iterative Co-Design and Validation The Situate AI Guidebook integrates findings across semi-structured interviews and co-design activities, which were conducted over the course of eight months between November 2022 and June 2023.The study sessions were ~90 minutes long for public sector workers, who were involved in both the interviews and co-design activities; the study sessions were ~60 minutes for community advocates, who were only involved in the co-design activities. 
Formative Semi-structured Interviews.To ensure that the Situate AI Guidebook is designed to address real-world needs and goals, we conducted semi-structured interviews with public sector agency workers to understand (1) their existing challenges and barriers to making decisions around AI systems and (2) desires for improving their current decision processes.Specifically, to understand existing decision-making processes, we asked each participant to recall a specific prior experience in which they or their agency needed to decide whether to move forward with the development or use of a new AI-based tool.As participants shared their stories, we asked follow-up questions to probe on possible causes behind the challenges that they described.For example, after describing how they previously made a related decision, we asked "What's challenging to do well now, when you're making those decisions?"or "What would you ideally want to discuss in conversations surrounding those decisions?"If a participant shared that they had not personally been involved in decisions around AI design and deployment-as was the case with community advocates and many frontline workers-these questions would be skipped, and more time would be spent on discussing these participants' desires for improved decision processes.We report findings on how agency decision-makers currently make decisions around AI, including how their decisions are shaped by complex power relations they hold with stakeholders external and internal to their agency (e.g., legal systems, frontline workers), in [29].In this paper, we share complementary findings that provide design rationale for the Situate AI Guidebook's design. 3.2.2 Co-Designing the Deliberation Questions.In the co-design activity, we first presented each participant with three potential scenarios: (1) Discussing ideas for new algorithms to improve services, (2) Deciding whether to pursue the development of a given algorithm design to improve services, and (3) Deciding whether to adopt an existing algorithm already implemented by others.We asked the participant to pick the scenario they had the most experience in or faced the most challenges for.For the scenario they selected, we asked the participant to think about what critical considerations and questions they believe should be on the table, when deliberating around these scenarios in an ideal future situation.If the participant was having a challenging time thinking of potential considerations and questions, we provided them with examples that were directly based on challenges they had brought up during the semi-structured interview (if applicable).To help document and organize, in real-time, the considerations and questions the participant was bringing up, we shared our screen and a link to an online board on Mural, a collaborative web application where multiple users can generate and arrange sticky notes.See Figure 2 for an example of a blank canvas. 
We asked each participant to brainstorm critical considerations and questions they would want future agencies to discuss.To avoid biasing participants, they were initially asked to openly ideate their own questions without viewing questions generated by prior participants.Following this, participants were shown existing questions, providing them an opportunity to comment upon and validate existing questions generated by other participants.As the participant openly generated ideas for questions, one of the members of our research team took post-it notes on what they were saying on the Mural board.The researcher would frequently check in with the participant, to ensure the post-its accurately represented their ideas.We also welcomed them to edit the post-its or create new ones.As they brainstormed, we asked follow-up questions to better understand how they think a given question could get answered, what makes it challenging to answer the question now, or how they are conceptualizing certain terms.For example, when a participant generated the question "How well are we involving community members in these decisions?, " we asked them to further elaborate on what this might look like in practice.This generated additional post-its, like "How well do we understand the costs, risks, and effort required of community members, if we invite them to contribute to model design decisions?"and "How are we weighting false positives and false negatives in a given algorithm, based on what type of mistake that is for the impacted community members?" As mentioned above, after the participant generated their own questions and considerations on the blank canvas, they were shown a list of topics and example questions for additional consideration.This helped scaffold further ideation on any considerations they may have missed in their initial ideation.In the first study session, we provided an initial list of eight broad topics, informed by prior literature: (a) Overall goal for using algorithmic tool, (b) Selection of outcomes that the algorithmic tool should predict, (c) Empirical evaluations of algorithmic tool, (d) Legal and ethical considerations around use of algorithmic tool, (e) Selection of training data for algorithmic tool, (f) Selection of statistical models to fit data, (g) Long-run maintenance of algorithmic tool, and (h) Organizational policies and resources around use of algorithmic tool.We prompted the participant to discuss any new ideas the provided topical categories inspired, or any disagreements they had with the categories.Figure 3 shows an example of what this list looked like in later stages of the co-design process. Between study sessions one or more researchers in our team iterated on the post-its generated during that study, to reduce redundancies and improve clarity.We then grouped the individual questions and considerations underneath the existing topical categories, while iteratively refining categories or creating new categories and subcategories as needed.The next participant was shown this updated version of the aggregated questions and topics at the end of the study. Guidebook Reflection and Validation. 
Participants who contributed to later stages of the co-design process were shown an overview of the aggregated questions, a recommended deliverable for the deliberation guidebook, and a high-level outline of a proposed deliberation process. We first showed participants the aggregated questions and asked if there were any questions that they felt were critical to include but missing. We additionally asked if they disagreed with the importance of any of the questions, or if the wording of any question was confusing in any way. We then showed the participant an overview of the deliberation process and asked for their perspectives on what they would like to change in the proposed process to have it fit better into their existing organizational decision-making processes. To address challenges in the potential use of the protocol, we also asked participants (especially frontline workers and community advocates) about challenges they anticipate with participating in the deliberation process. We then invited participants to discuss potential adjustments to the process, or alternative processes, that could help address these challenges and create a safer environment for them.

Qualitative Analysis

The study recordings from the semi-structured interviews and co-design activities were transcribed and then qualitatively coded by two members of the research team using a reflexive thematic analysis approach [8]. We ensured that all interviews were coded by the first author, who conducted all of the interviews, and, whenever applicable, another author who observed the interview. The first author coded one transcript first, then discussed the codes with other coders to align on coding granularity. Each coder prioritized coding the underlying reasons why participants generated certain questions during the co-design activity, while also remaining open to capturing a broader range of potential findings. We resolved disagreements between coders through discussion.

Figure 3: Screenshot of a Mural board populated with post-its after one participant's co-design activity. In our iterative co-design process, these post-its were refined by the research team then added to an aggregated list of questions that were successively grouped into higher-level categories.

THE SITUATE AI GUIDEBOOK

The Situate AI Guidebook is a process to scaffold early-stage deliberations around whether and under what conditions to move forward with the development or deployment of a proposed AI innovation. The current version of this toolkit is intended for use within public sector agencies at various stages of maturity in their use of AI tools, from those that are just beginning to consider the use of new AI tools to those that may already have years of experience deploying AI tools. The deliberation questions are designed to be discussed across different stakeholders employed in a public sector agency, such as agency leadership, AI practitioners and analysts, program managers, and frontline workers.

In this section, we provide an overview of the Situate AI Guidebook as an outcome of our co-design and validation sessions. We describe the Situate AI Guidebook through the following sections: (Section 4.1) Guiding Design Principles, (Section 4.2) Content Design, (Section 4.3) Process Design, and (Section 4.4) Success Criteria for Use.
To provide context for key design decisions, throughout each section, we elaborate on participants' existing practices, challenges, desires, and needs for improving their decision-making process, drawing upon our thematic analysis.Where appropriate, we describe how the Situate AI Guidebook compares with existing responsible AI toolkits.At times, participants diverged in their desires (e.g., regarding how the decision-making process should be integrated into their agency).In some of these cases, our research team integrated these disagreements into the design of the guidebook (see Design Principle 2); in other cases, we document how these disagreements present challenges for the use of the guidebook, suggesting opportunities for future work (Section 4.3 and Section 5). Guiding Design Principles The goal of the guidebook is to scaffold public sector agency decisionmaking around the following question: Should we move forward with developing or deploying a proposed AI tool?If yes, what are key considerations to plan for?The guidebook aims to support agencies in answering this question through a deliberation-driven process supported by the following materials: (1) Question prompts to support conversations around the social (organizational, societal, and legal) and technical (data and modeling) considerations that should inform their recommendation, (2) Pointers to external resources to help guide their responses, (3) Template for a recommended deliverable to help communicate rationales and evidence for the recommendation that results from these deliberations, (4) Proposed use cases that illustrate how agencies could adopt the guidebook into their existing work processes, and (5) Success criteria to signal whether the intended outcomes of the guidebook may be relevant and useful to agencies. In co-designing the guidebook towards this goal, we centered two core design principles: • (Design Principle 1) Promoting reflexive deliberation. The question prompts (Section 4.2) should support stakeholders in having reflexive discussions-for example, conversations that surface their own pre-existing assumptions and beliefs about human versus AI capabilities and limitations with respect to a given task and context, or that surface relevant tacit knowledge that may be helpful to share with others. 
The question prompts should be designed to avoid prompting simple yes or no responses, to ensure that responses to complex questions are not reduced to a simple compliance activity. In drawing on prior work that emphasizes the role of the toolkit as one that "prompts discussion and reflection that might not otherwise take place" [37,60], this design principle extends notions of effective toolkits from prior literature to topics of importance in public sector contexts. Prior research on public sector contexts (e.g., [27,30,61]), as well as findings from this study, suggests that agency stakeholders' differing backgrounds shape their assumptions and concerns around AI tools, motivating the need for a deliberative decision-making process that surfaces these individual differences. Throughout Section 4.2, we elaborate on participants' existing challenges and desires to illustrate the importance of Design Principle 1 in their contexts.

• (Design Principle 2) Ensuring practicality of the process. The guidebook should be designed to support a process (Section 4.3) that public sector agencies can feasibly understand, adopt, and adapt as needed. If an agency already has an existing decision-making structure, or conversations related to AI design already take place, the agency should find it easy to "fit" the guidebook into their existing organizational processes and conversations. This design principle is aimed at addressing concerns raised in prior literature that existing responsible AI toolkits are often designed in isolation from the organizational contexts they intend to augment (e.g., [66]). This design principle is also motivated by our observations of the four public sector agencies in our study, which each had their own existing or planned organizational processes for developing AI tools (Section 4.3.1). Further, by co-designing the process the toolkit should follow (in addition to the guidebook content), we further an understanding of how organizational, labor, and power dynamics implicate the potential effectiveness of responsible AI toolkits in the public sector (Section 4.3.2).

Content Design: Scaffolding Reflexive Deliberation

Participants ideated critical questions that spanned four high-level topics, 12 mid-level topics, and 20 low-level topics. In each of the four subsections below, we briefly describe why participants were interested in the overall category of questions and provide example questions. The full set of deliberation questions for the Situate AI Guidebook (v1.0) can be found in Appendix A.

4.2.1 Goals and Intended Use. This section is intended to scaffold conversations around the following broad questions: (1) Given our underlying goals and intended use case(s), is our proposed AI tool appropriate? (2) What evidence do we have to support our answer to the previous question? What additional tasks may be required in the future to help us gather more evidence and/or better understand the evidence we currently have?

Sample Questions.
• Overall goal for using algorithmic tool
  - Who is going to be affected by the decision to use this hypothetical AI tool?
  - What evidence do we have suggesting that the pain point this tool aims to solve actually exists? What evidence do we have suggesting that technology may offer a remedy to this pain point?
  - Recall the stakeholders who are the most impacted by this hypothetical AI tool. How do we bring their voices to the table when determining goals?
  - Are there differences between the goals the agency and community members think the tool should address?
    If so, what are they? If we are uncertain, what can we do to understand potential differences?
  - What biases (as a public sector agency) do we bring into this decision-making process?
• Selection of outcomes that the algorithmic tool aims to improve
  - Hypothetically, imagine that our tool does a perfect job of improving the outcome that it targets. What additional problems might this create elsewhere in the system?
• Empirical evaluations of algorithmic tool
  - Once the tool is deployed and in use, how can we evaluate how well it is working in the short term? How can we evaluate how well it is working in the longer term?
  - How can we effectively evaluate the tool from the perspective of impacted community members?
  - How might frontline workers respond to the tool? How can we better understand their underlying concerns and desires towards the tool?

The deliberation questions focus on promoting conversations that bridge reflection and understanding of the goals of the proposed AI tool, as well as how these goals will be operationalized into measurable outcomes. The 52 questions within the Goals and Intended Use section are divided into nine subsections: (1) Who the tool impacts and serves, (2) Intended use, (3) How agency-external stakeholders should be involved in determining goals, (4) Differences in goals between the agency and impacted community members, (5) Envisioned harms and benefits, (6) Impacts of outcome choice, (7) Measuring improvement based on outcomes, (8) Centering community needs, and (9) Worker perceptions. For the purpose of this paper, we sample one question from each topical subsection.

Several of the questions in this section are designed to help surface underlying assumptions regarding who benefits from the use of the tool, and to support discussion around what evidence suggests that these assumptions are true. These questions stem from participants' concerns around whether their AI systems are targeting areas that would bring the most benefits, and to whom these benefits apply. For example, one participant noted that their agency had invested a lot of effort into assessing and trying to improve fairness in their algorithms. However, the participant wondered whether they should have been having conversations around larger, "more challenging" questions. For instance, they wondered whether "correcting for bias" in an algorithm within an inherently biased system is a meaningful or feasible goal. They further elaborated: "I think there my concern often has to do with [the] unexamined belief that an algorithm is always an improvement. [...] I think [questions on broader goals and benefits are] more challenging and that people [who] are running the system may not always see [...] Personally, I think there's a lot of stuff that can be done with machine learning that doesn't have to [target] decision-making at the participant level. [...] But those are the kinds of questions the immediate focus [is] on. 'Oh, we're going to use this to make decisions at critical points in programs.' Those are things that to me still need to be discussed. And it may be that those conversations are happening at tables that I'm just not at."
(A02)

Other participants expressed concerns about how frontline workers in their agency (the majority of whom are currently not involved in early-stage conversations around the goals of the AI tool) may be misunderstanding the intended uses and capabilities of their AI tools. For example, one participant described that frontline workers may be concerned that the AI tools will displace them, even though their agency does not intend to use them to automate workers' jobs. They described: "There's almost like a mystique around machine learning algorithms, like there's some amazing thing that is all knowing and all seen, and therefore can predict all these different things. [...] helping people [... understand] what it's able to do and not able to do, I think, is something we've struggled with" (A04). Other questions are intended to help forefront considerations around what additional planning and resources may be needed in order to adequately complete a related task in the future. For example, workers within agencies often described that involving community members in their AI design and evaluation process can be challenging, given the current lack of infrastructure to support such collaborations. However, community advocates described how involving community members is often an afterthought. One community advocate described the importance of being intentional and proactive in community engagement practices, because "it's easy to let that be something that gets back burning, like throughout the process to just have that be something we'll get to, and then we end up in that feedback loop where the feedback is provided but the tool is already created" (C2). Questions in this section help promote earlier reflection and planning on how community members could be involved, so that agencies could conduct appropriate empirical evaluations regarding community members' perceptions of the AI tool.

4.2.2 Societal and Legal Considerations. This section is intended to scaffold conversations around the following broad questions: (1) Given the societal, ethical, and legal considerations and envisioned impacts associated with the use of AI tools for our stated goals, is our AI tool appropriate? (2) What evidence do we have to support our answer to the previous question? What additional tasks may be required in the future to help us gather more evidence and/or better understand the evidence we currently have?

Sample Questions.
• Legal considerations around the use of algorithmic tool
  - Do the people impacted by the tool have the power or ability to take legal recourse?

Overall, the goal of this section is to help promote a systematic, deeper conversation on the various dimensions of social and ethical concerns relevant to the design of an AI tool. The 38 questions within the Societal and Legal Considerations section are divided into seven subsections: (1) Legal considerations around the use of the algorithmic tool, (2) Impacted community member needs, (3) Involving impacted communities, (4) Clarity of ethics goals and definitions, (5) Operationalization of ethics goals, (6) Envisioning potential negative impacts, and (7) Social and historical context surrounding the use of the algorithmic tool. Again, for the purpose of the paper, we sample one question per topical subsection.
Participants shared that they did not currently have structured opportunities to proactively discuss social and ethical considerations surrounding AI tool design. While participants described that their teams spent a lot of time working on related data- and model-specific fairness tasks (e.g., using bias correction methods to improve the fairness of their AI tool), several participants noted a desire to discuss normative concerns regarding the design of an AI tool that could only be addressed in earlier problem formulation stages. Moreover, participants' past experiences illustrated an opportunity to better support cross-stakeholder communication around the ethical considerations that should inform AI design, by equipping teams with a shared knowledge base and vocabulary for ethical concerns. For instance, one participant described how a leadership team tasked them with creating a predictive algorithm to assist decisions about fraud investigation. The participant's team tried to "get them away from this" because the task was technically infeasible (producing high false positive rates) and ethically risky: the cost of errors is high, given that decisions to investigate are highly intrusive to the individual. This section's questions intend to support agency stakeholders in forming a more complete understanding of the different ethical factors that could make a proposed AI tool design "appropriate" or "inappropriate." We note that the guidebook does not exclusively surface societal and ethical considerations in this section; the prevalence of relevant questions included in the other three topical sections (Goals and Intended Use, Data and Modeling Constraints, Organizational Governance) reflects how social and ethical considerations are intertwined with all facets of a proposed AI tool.

4.2.3 Data and Modeling Constraints. This section is intended to scaffold conversations around the following broad questions: (1) Given the availability and condition of existing data sources, and our intended modeling approach, is our proposed AI tool appropriate? (2) What evidence do we have to support our answer to the previous question? What additional tasks may be required in the future to help us gather more evidence and/or better understand the evidence we currently have?

Sample Questions.
• Understanding data quality
  - Has the definition of the data changed over time? (E.g., in child welfare, has reunification always meant to reunify with the parent?)

This section intends to forefront conversations around data and technical work that may be critical to have earlier on. The 18 questions within the Data and Modeling Constraints section are divided into three subsections: (1) Understanding data quality, (2) Process of preparing data, and (3) Selection of statistical models to fit data. For the purpose of the paper, we provide a subsample of questions under each topical subsection.

Participants who had experience developing AI tools often underscored the importance of ensuring that they had the computing resources and data needed to develop their proposed AI tool. For example, they described the importance of forming a context-specific understanding of the data labels, which may be challenging without relevant domain knowledge (e.g., whether certain labels like "reunification" have changed definitions over time). Others described the importance of deliberating who should be involved in data inclusion and exclusion decisions when cleaning their data.

4.2.4 Organizational Governance Factors.
This section is intended to scaffold conversations around the following broad questions: (1) Given our plans for ensuring longer-term technical maintenance and policy-oriented governance, do we have adequate post-deployment support for our proposed AI tool? (2) What evidence do we have to support our answer to the previous question? What additional tasks may be required in the future to help us gather more evidence and/or better understand the evidence we currently have?

Sample Questions.
• Long-run maintenance of algorithmic tool
  - Do we expect there will be shifts in performance metrics over time? If so, why? What are our plans for identifying and mitigating those shifts?
  - Do we have the mechanisms to monitor whether the tool is having unintended consequences?
• Organizational policies and resources around the use of algorithmic tool
  - Is there training for frontline workers who will be asked to use the tool? What evidence suggests that this training is adequate?
  - Imagine that we could assemble the "ideal team" to monitor and govern the tool after it is deployed: What are the characteristics of this ideal team?
    * Who is the actual team that will monitor and govern the tool after it is deployed?
    * Given the gaps between the "ideal team" and the actual team we expect to have: What risks to post-deployment monitoring and governance can we anticipate? How might we mitigate these risks?
• Internal political considerations around the use of algorithmic tool
  - How well do we understand system administrators' and leadership's perspectives around the use of this tool?
  - How well do staff and leadership understand 'why' the tool could bring value?

The 24 questions within the Organizational Governance Factors section are divided into five subsections: (1) Measuring changes in model performance over time, (2) Mechanisms to identify long-term changes in model performance, (3) Policies around worker interactions with the AI tool, (4) Governance structures around the AI tool, and (5) Internal political considerations around the use of the AI tool. As with prior sections, we include in this paper a sample of questions across these topical subsections.

Similar to considerations around the Societal and Legal Considerations of AI design (Section 4.2.2), participants often described encountering challenges when attempting to meet organizational governance-related needs of the AI tool, like maintaining their AI tool over time, ensuring workers are adequately trained, or communicating the goals and capabilities of the AI tool to agency leadership. Participants highlighted that many of these challenges arise because such considerations are discussed in an ad hoc manner, too late in the AI development process. Given that several of these needs may require longer-term planning and preparation (e.g., gathering resources for model maintenance), public sector agencies may be better equipped to meet these governance needs if they were discussed in the early stages of model design (rather than after an AI tool has already been developed). For example, participants described how they currently lack domain experts who could help maintain and improve their model post-deployment, a gap in their AI development process that they felt was critically important to address. While agencies currently discuss maintenance-related concerns at the deployment stage, this may not allow them enough time to deliberate who should be involved in maintenance, or how to allocate additional roles for a maintenance team.
Process Design: Designing for Practicality and Adaptability

The overall goal of the Situate AI Guidebook is to help public sector agencies make more informed, deliberative decisions about whether and how to move forward with implementing a proposed AI tool. Prior literature studying existing responsible AI toolkits has started to surface concerns around how such toolkits may be used inappropriately, or not used at all in practice, due to misalignments with the organizational contexts they are designed to support [37,66]. In this section, we describe findings related to the broader deliberation process that participants envisioned the deliberation questions (Section 4.2) could be used to support.

Below, we first present our proposed use case for the Situate AI Guidebook, along with an example instantiation of the use case and an explanation of how participants' existing practices informed this use case. We then discuss participants' desires for alternative use cases and processes around deliberation. Participants across agencies and roles expressed interest in using the questions in a few different ways, based on their concerns around cross-stakeholder power dynamics and desires to enable deliberation practices that align with their organizational values [51,66]. Given participants' interests in adapting the guidebook to different use cases, a key component of the Situate AI Guidebook is that it is designed to allow users to select which topics and questions they would like to focus on: the deliberation questions are categorized and grouped into modular components, and users have the flexibility to select from a large set of deliberation questions within each component to identify a subset that is most relevant to their use case.

4.3.1 Proposed Use Case: Using the Guidebook to Support Structured and Iterative Deliberations. Participants envisioned that the guidebook could be effectively used to support structured, iterative deliberation through formal workshops between members of their agency. In this section, we elaborate on one possible way this use case can be instantiated into an overall deliberation process, then discuss how this compares to participants' existing practices within their agency. We provide an example of one possible implementation of a formal deliberation process using the guidebook.

Example instantiation of proposed deliberation process. This process involves four stages, where the public sector agency first appoints one or more facilitators to help organize the overall decision-making process:

Stage 1: Topic and Attendee Identification. The facilitator will identify which of the four guidebook topics, if not all, they are interested in convening a deliberation workshop on. Based on broad guidance provided in the guidebook, the facilitator will then identify the stakeholders who should be included in the deliberation workshop based on the selected topics. For example, the Societal and Legal Considerations section is designed to be used by a more diverse range of stakeholders (e.g., AI practitioners, frontline workers, community members, legal experts) compared to the Data and Modeling Constraints section (e.g., AI practitioners only).
Stage 2: Question Selection and Deliberation.After finding a shared time for the deliberation workshop, the facilitator should share the goals and topics of deliberation (included in the guidebook) with the group.The guidebook includes a large number of questions for each topic of deliberation.For example, the Goals and Intended Use section alone has 52 questions.To ensure that the questions can be feasibly discussed within the allocated time, we highlight 1-2 recommended questions per major subsection, resulting in a smaller number of questions (19 questions for the Goals and Intended Use section).The remaining questions are also available in the guidebook as "optional questions." We recommend that, at the start of the workshop, the facilitator provides the attendees with the opportunity to identify any questions from the "optional questions" category they would like to additionally or alternatively discuss in the workshop.As the attendees are discussing each question, the facilitator should take note of their responses and points of disagreement.If there are disagreements that are challenging to resolve in response to a question, the facilitator should help the group identify action items to help gather more information or perspectives and plan to revisit the question at a later time.If the group finds they currently lack the resources or knowledge to fully address a question, the facilitator should also make note of this and plan to revisit the question at a later time. Stage 3: Deliberation Synthesis and Action Items.After the deliberation workshop, the facilitator should summarize the discussions and outcomes into the deliberation report template which we include in the guidebook.The template includes the following questions: (1) What is your recommendation?(2) Please list core reasons for your recommendation, based on the deliberation workshop, (3) Are there any follow-up tasks you must complete, in order to fully support this recommendation?If yes, please write the task(s) and plan(s) for completing it, and (4) What core counterarguments against this recommendation arose during the deliberation workshop?Please describe each counter-argument, including how you addressed or plan to address each one.Based on the report responses, the facilitator should continue to iteratively revisit the deliberation questions, organizing additional deliberation workshops with the attendees as needed, as they complete the follow-up tasks included in the report. Stage 4: Public Report and Improvement.The facilitator should work with their agency to publicly share the deliberation report with agency-external stakeholders, including impacted community members and related organizations.To promote conversations and bidirectional learning between community members and agency-internal stakeholders, the agency should hold public convenings and host an online commenting forum for any individuals who would feel more comfortable contributing anonymously online.The agency should then synthesize the themes that emerged from the conversations, identify action items to address any concerns, and share these results of the community conversations with the public.In the guidebook, we plan to include links to existing community review efforts to provide examples of what this interaction could look like.However, we note that effectively completing this step requires additional research and resource creation (which we discuss in the Discussion Section 5). 
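To make the modular question structure and the Stage 3 report template more concrete, the following minimal Python sketch shows one possible way a facilitation team could represent them digitally. It is illustrative only and is not part of the published guidebook: the names Question, Subsection, Topic, DeliberationReport, and workshop_agenda are hypothetical, and the "recommended versus optional" flag simply mirrors the 1-2 recommended questions per subsection described above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical data model (illustrative names, not from the published guidebook).
# Topics contain subsections, subsections contain questions, and a small number of
# questions per subsection are marked "recommended"; the rest are "optional".

@dataclass
class Question:
    text: str
    recommended: bool = False

@dataclass
class Subsection:
    title: str
    questions: List[Question] = field(default_factory=list)

@dataclass
class Topic:
    title: str
    subsections: List[Subsection] = field(default_factory=list)

@dataclass
class DeliberationReport:
    # Mirrors the four-part report template from Stage 3.
    recommendation: str
    core_reasons: List[str]
    follow_up_tasks: List[str]
    counterarguments: List[str]

def workshop_agenda(topic: Topic, extra_optional: Optional[List[str]] = None) -> List[str]:
    """Default agenda: the recommended questions for the selected topic, plus any
    optional questions the attendees ask to add at the start of the workshop."""
    agenda = [q.text for s in topic.subsections for q in s.questions if q.recommended]
    agenda.extend(extra_optional or [])
    return agenda

if __name__ == "__main__":
    goals = Topic(
        title="Goals and Intended Use",
        subsections=[
            Subsection(
                title="Who the tool impacts and serves",
                questions=[
                    Question("Who is going to be affected by the decision to use this tool?", recommended=True),
                    Question("What evidence do we have that the pain point this tool aims to solve actually exists?"),
                ],
            )
        ],
    )
    print(workshop_agenda(goals, ["How might frontline workers respond to the tool?"]))
```

In a representation like this, the Stage 3 report could be serialized and shared as the starting point for the public report and community conversations envisioned in Stage 4.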
How participants' existing organizational practices informed the proposed process design. As reflected in the process above, participants raised several important considerations to ensure the process is practical and meaningful to their agency. For example, participants in agencies that are actively developing new AI tools described that there are already AI design and development processes in place that support focused discussions on improving, for example, the algorithmic fairness of their AI tool. These participants did not want the Situate AI Guidebook to replace these conversations and work sessions. Instead, they desired a process that could augment their existing processes. For instance, participants noted that holding these deliberation workshops earlier on, before they developed or analyzed any AI model, can help promote reflexive conversations about what it means to do the work of AI fairness and what important considerations should guide this work (e.g., whether there is a definition of "fairness" that agency workers agree on). Relatedly, as reflected in the process description, several participants described the importance of revisiting these deliberation questions iteratively, throughout the AI development and deployment process (rather than only discussing these at the early ideation stages of a development process). For instance, participants described that their understanding of some of the question responses (e.g., the intended outcomes that the AI tool should help achieve) may evolve as the AI tool development evolves (e.g., depending on what is technically possible, given data constraints or prediction errors). Participants noted that designing the guidebook to center an iterative process would help ensure the conversations complement their existing AI development process, which is also iterative in nature. We originally designed the process so that the deliberation workshop attendees would be required to come to a consensus on the final recommendation before moving forward. However, the participants we spoke with emphasized that this may be impractical and unnecessary based on their end goals for the guidebook. We elaborate on this point in Section 4.4 when we discuss the guidebook's Success Criteria.

We discuss limitations and future work related to this proposed use case in the Discussion, drawing on prior literature suggesting ways in which research-based conversational tools may not be adopted in practice or may be used in inappropriate ways.
4.3.2 Empowering Participation: Accounting for Organizational Power Dynamics. Findings from this study strongly suggest that frontline workers (those who would be asked to use the AI tool once deployed) are interested in engaging in early-stage deliberations around what the AI tool should be designed to assist. Prior literature also suggests that ensuring agency leadership and AI developers understand frontline workers' needs and challenges is critical to ensuring that the "right" AI tools are being developed (e.g., [29,30,58,67]). However, effectively supporting conversations between roles with prominent power and knowledge differentials remains a challenging task [27]. For the current version of the Situate AI Guidebook, we begin accounting for this challenge by editing the language used in some of the guidebook sections to ensure it is understandable to those without prior knowledge of technology. We additionally asked frontline workers about their desires for how they would want to participate in the deliberation process, to ensure they feel safe to share any concerns. In this section, we discuss these findings. However, we note that future work is needed to ensure that the Situate AI Guidebook (v1.0) adequately accounts for organizational power dynamics. In the Discussion, we discuss implications for complementary policy interventions that may be needed for an agency to effectively facilitate a multi-stakeholder deliberation process.

Frontline workers' preferred processes for participating in multi-stakeholder deliberations. Frontline workers in our study had a range of perspectives around how best to involve them and their colleagues in the deliberation process. One frontline worker suggested that their agency should require all frontline workers to attend the deliberation workshops. They described that, without making participation in these discussions mandatory, frontline workers may opt to skip meetings given their busy schedules. The participant further expressed concerns that, if participation was on a voluntary basis, the frontline workers who join may be harmfully self-selective: "... particularly for people with marginalized identities, it's important for them to be a part of these spaces and voice their concerns. I think that if it wasn't mandatory, it might be, you know, [a] self-selecting group" (F7). This frontline worker, along with another worker, also described how adjustments to the proposed process could help frontline workers feel more comfortable raising concerns. For example, the participants expressed that, for multi-stakeholder conversations, some frontline workers may feel more comfortable having a separate frontline worker-only deliberation workshop, synthesizing and formalizing their perspectives, and then going to a group meeting with other agency stakeholders to present their perspectives. As the participant described: "A lot of social workers are very non-confrontational [...] we are our clients' best advocates but not for ourselves. And so I definitely do think that people might be more comfortable, you know, having their own sort of peer group discussion or colleague group discussion. And then that being you know, sort of the concerns being written down and formalized, and that being taken rather than a more informal sort of like anyone who has concerns just raise their hand and say their piece. I feel like that might be a bit daunting for some people" (F7).
Alternative use case: Using the guidebook to empower everyday conversations.Complementing frontline workers' desires for engagement, participants from agencies that were not yet developing new AI tools described a different use case for the Situate AI guidebook.These participants envisioned that the guidebook could be used by teams to support everyday conversations, with the aim of proactively avoiding pitfalls in AI project ideation and selection.For instance, one participant described that they wanted the guidebook to be used more casually, by everyone in the agency, to help all staff members feel empowered to "be able to do a little more of the innovation" (L7).The participant described that, even if they get stuck or need help, "it would be awesome for them to have a library of resources that they can look at" to help them get started.The participant further described that workers in their agency should "have the flexibility to structure the deliberation workbook to their needs," for example, deciding which questions to discuss, how much time to take in discussing the questions, and who to talk with.By having a guidebook that empowers any staff member to discuss topics around AI, the participant hoped that these deliberations could have rippling effects on their agency's overall culture: "Maybe we're trying to get from [...] 'let's do this big project right with the right leaders and things in the room' to 'how do we create a culture of improvement' [...]Not just how do we do a technology project the right way, but actually can it have a broader impact on culture.This is how we do anything.It's always with this batch of questions in mind and thinking about how we can be people around problem solving" (L7). Desires to expand participation to community members for future versions of the guidebook.Several participants across agencies and community advocacy groups were interested in involving community members in the deliberation workshops supported by the guidebook.Workers within the agency wanted guidance on how to do this effectively.In our conversations with community advocates, we probed on how they would like to be involved in deliberations around the Situate AI guidebook.They described the importance of compensating community members for their time, providing multiple channels for communication (e.g., online forums and in-person meetings), and following up with the outcomes of the conversations: "that happens a lot, you know-agencies are like 'oh, we engage with the community, and we brought them into the space with us.' But then there's no follow up or follow through from those conversations.And that's been a historical thing" (C2). Importantly, we note that the guidebook, in its current form, is designed to support conversations across workers within a public sector agency.It is not designed to directly support conversations between agency-internal workers and agency-external stakeholders (e.g., community members).In the Discussion section, we discuss opportunities to expand participation through design improvements. 
Success Criteria

What outcomes do public sector agencies and impacted community members consider "meaningful" when assessing the effectiveness of the Situate AI Guidebook? What are their underlying theories of change around how their public sector agency could move towards more responsible early-stage AI design practices, and how do they envision the Situate AI Guidebook can help them progress towards that path? Overall, through the guidebook, participants wanted to form an understanding of the disagreements and tensions that agency workers felt most strongly about, to help position themselves to better address these disagreements through changes to the problem formulation or design of a proposed AI tool. Importantly, as indicated by the process design in Section 4.3, participants described that the goal of the Situate AI Guidebook should not be to resolve these tensions and disagreements across individuals. Participants described that this is an infeasible task, given underlying differences in values and goals across agency stakeholders. In this section, we elaborate on four success criteria for the guidebook. These success criteria intend to help communicate the intended goals and boundaries of the guidebook, including how they are informed by participants' own notions of success for deliberations around the design of AI tools.

4.4.1 Make it easier for different agency stakeholders to communicate with each other about AI design, evaluation, and governance considerations. Many of the challenges that participants in our study described could be addressed if better communication channels existed between different agency stakeholders, including AI developers, agency leadership, and frontline workers. This challenge is also well-documented in prior literature studying public sector AI decision-making [27,29,31,58]. For instance, in current practice, frontline workers are often not meaningfully involved in early-stage deliberations around AI design and adoption. As a result, agency leadership and AI developers have interpreted workers' concerns around AI as a signal that workers do not understand what the AI tool does. Involving frontline workers in these earlier discussions can both help more proactively inform workers of AI capabilities and empower them to engage in constructive conversations that would improve the design of AI tools.
4.4.2 Bring context-specific needs for resources to the forefront of AI project selection conversations. While participants often knew which resources they needed to successfully implement a given AI tool, their past challenges sometimes surfaced a missed opportunity to identify these needs at an earlier stage of their AI design process. Moreover, related to the previous success criterion, our conversations with participants surfaced ways in which having agency workers with different roles and perspectives engaged in these early-stage deliberations can strengthen their ability to anticipate the potential impacts of AI design decisions. For example, frontline workers voiced that AI tools they had used in the past were designed in ways that conflicted with their existing decision-making policies; other workers described that AI deployments may add additional labor to their day-to-day tasks, given they may be asked to more diligently collect data. In current AI development processes, where frontline workers may only be meaningfully engaged in AI implementation or piloting stages, mitigating these negative impacts may require more substantive tasks like redesigning the AI tool. Participants further described that these resource-related needs were highly context-specific. For instance, when discussing the importance of anticipating how their AI tool may impact community members, one participant recalled how even the definition of "community" may differ across agencies and AI tools: "Because we would always say, 'we're doing stuff [where] the community is informing us.' And then we realized, 'oh wait, it wasn't necessarily the group of people who were impacted by [our decisions]'" (L7).

4.4.3 Make social and ethical considerations a first-order priority in conversations around whether to move forward with an AI tool idea. As described in Sections 4.2.2 and 4.2.4, participants described that their past assessments around whether an AI tool was appropriate to implement centered algorithmic considerations, whether that be the quality of their training data or the outcomes of algorithmic fairness or accuracy metrics. While these considerations are critically important, others also discussed a desire to rigorously deliberate the underlying values embedded in design decisions, and the social and ethical impacts of a proposed AI tool. This echoes concerns from prior literature discussing how technical design decisions often include hidden policy decisions and value judgements [20,61]. Prior work also suggests the importance of these considerations, noting that existing AI ethics toolkits have largely framed the work of ethics as "technical work" [66].
4.4.4 Make "fitting" an AI tool into a workplace a design problem, rather than an implementation problem.This success criterion intends to avoid practices where the AI tool idea is conceived before fully understanding context-specific practices and needs, and in turn, creating AI tools that frontline workers must then attempt to "fit" into their existing workflow.This tendency to treat "fitting" AI tools as an implementation problem, and its negative impacts on workers' ability to improve their existing decision-making practices, is also well documented in prior literature [67,68].Indeed, participants-including both AI developers and frontline workers-described that they wished they could have had better conversations, early on, to understand what the actual goal of the tool they were building should be.As one AI developer described, recalling a past experience in their team where leadership had asked them to create an AI tool: "It was kind of hard to get a sense of what the actual issue was that was being asked to be solved.It sounds kind of a lot like, 'Here's a bunch of different potential places an algorithm might fit in'" (A04).Ultimately, they were asked to create a predictive model to "to find fraud where there wasn't already suspicion of fraud." However, the AI developer described feeling leadership had proposed the idea as "this cool thing we could do" but, in reality, realizing that creating such a tool would create more problems downstream in the system (in this case, it would create too many referrals to be able to investigate).By promoting early-stage, structured deliberations around critical topics related to AI tools, public sector agencies could be supported in identifying higher-value, lower-risk opportunities to innovate with AI systems. DISCUSSION Public sector agencies in the U.S. are increasingly exploring how new AI tools can assist or automate services in child welfare, homelessness housing, healthcare, and policing, among other domains [11,22,45,58,67].In the U.S., these public services have historically been characterized by racial inequity, procedural injustice, and distrust from the impacted communities [9,17,61].While agencies have rapidly begun to deploy AI tools to improve their services, ensuring responsible development has proven to be an immense challenge.In the past decade, such AI tools have often failed to serve the needs of the communities that agencies are expected to serve [6,17,20,24,25].A growing body of literature has recognized that many downstream harms resulting from AI tools can be traced back to decisions made during the earliest problem formulation and ideation stages of the AI lifecycle.Yet, there are few, if any, effective resources for public sector agencies in making more deliberate decisions regarding whether a given AI proposal should be developed in the first place. 
Through iterative co-design sessions with 32 individuals (agency leaders, AI developers, frontline decision-makers, and community advocates) across four public sector agencies and three community advocacy groups, we created the Situate AI Guidebook (v1.0). The guidebook, designed for public sector agency workers, scaffolds the process for early-stage deliberations around whether and under what conditions to move forward with the implementation or adoption of a new AI tool or idea. To support this process, the guidebook presents a set of 132 deliberation questions (which participants indicated are critical to consider yet are often overlooked today) spanning both social (organizational, societal, and legal) and technical (data and modeling) considerations around AI, along with guidance on the overall deliberative decision-making steps and success criteria for use. In this section, we discuss the design decisions we made in creating the guidebook, along with limitations and opportunities for future work. For each section of this discussion, we begin by summarizing relevant portions of the findings. Then, we elaborate on limitations and future opportunities to improve upon the existing guidebook.

Overcoming Low Adoption Rates for Responsible AI Toolkits in the Public Sector

As the research community continues to develop new responsible AI toolkits, recent literature has raised concerns regarding the practical efficacy of such toolkits. Prior work has found that the majority of AI ethics toolkits fail to account for the relevant organizational context, hindering their usability (e.g., overlooking guidance on how different stakeholders should be engaged) and effectiveness (e.g., focusing on the technical but neglecting the social aspects of AI ethics work) [66]. Public sector decision-making around service allocation is often shaped by resource and staffing shortages, and requires balancing tradeoffs to meet the competing needs of a range of stakeholders (e.g., impacted community members, policymakers and regulators, politicians) [64]. Moreover, AI tools in the public sector often target socially high-stakes decisions (e.g., whether to screen in a family for child maltreatment investigation, or whether to provide an individual with a credit loan) that have disproportionately negatively impacted the lives of historically marginalized communities.

Prior work has shed light on the downstream impacts that public sector AI systems have had (e.g., [9,55]), along with challenges to ensuring their responsible design and use (e.g., [30,58,64]). Through our study, we demonstrated how collaborating with public sector agencies and community members to co-design a responsible AI toolkit, including its process and content design, can help surface and account for some of these challenges. That said, future research is needed to understand how effective the toolkit is in practice, and to surface other challenges that can only be observed through actual use (rather than through our co-design and interview study format). In the following subsections, we briefly discuss related findings and opportunities to improve the contextual design and use of the Situate AI Guidebook for public sector settings.

5.1.1 Designing for more inclusive forms of worker participation.
While AI tools for public sector contexts implicate a range of different agency-internal stakeholders, these agency workers, from agency leaders to frontline workers, often operate in silos, separated by power imbalances and knowledge differentials. We found that participants desired a range of participation structures to account for these differences. For example, some frontline workers wanted to first gather amongst others with similar occupations to prepare for the deliberation workshop, and then send in one frontline worker to attend the workshop and represent their perspectives. On the other hand, other participants suggested that there should be an organizational policy that requires all frontline workers to attend the deliberation workshops alongside the AI developers and agency leaders. Future work is needed to understand the broader range of solutions that could best address these differences in workers' preferences. For example, future work could pilot different processes, where agency workers are grouped in deliberation workshops in specified configurations depending on their role and background. Through observations and retrospective interviews of these configurations, we could better understand whether having a set of deliberation questions alone is adequate to prompt meaningful conversations. Future work could additionally explore how additional resources and tools could be used alongside the deliberation toolkit, in order to effectively scaffold conversations around the appropriateness of AI design ideas. This direction would be especially critical to pursue in order to ensure that the deliberation toolkit is accessible to those who may not have had prior exposure to AI technologies.

5.1.2 Incentivizing and governing responsible use. Ensuring responsible use and adoption of the Situate AI Guidebook may require complementary efforts from governing bodies. For example, while the U.S. does not currently require public sector agencies to document early-stage deliberations around AI, having external forces that incentivize agencies to engage in early-stage deliberation may help ensure that the deliberation toolkit is used effectively. One way to incentivize public sector agencies may be to clearly communicate how the toolkit aligns with and complements existing voluntary guidelines, such as those in the NIST AI Risk Management Framework (RMF) [44]. While the NIST RMF and NIST RMF Playbook [43] both focus on providing higher-level guidance on steps to follow for responsible AI design, research-based, co-designed toolkits like the Situate AI Guidebook can help bridge gaps between their proposed policy guidance and real-world practice. In future work, we plan to map the guidebook to the four functions captured in the AI RMF Core: Govern, Map, Measure, and Manage. For example, each question or category of questions could be assigned one or more of the AI RMF Core functions.

In future work, we plan to explore with public sector agencies, community advocates, and other stakeholders how new policy and organizational interventions can support them in using the Situate AI Guidebook. The public sector agencies in our study, including the frontline workers, expressed interest in exploring how to use the guidebook in practice through pilots.
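As one concrete illustration of the mapping idea described in Section 5.1.2, the sketch below shows a hypothetical way to tag deliberation questions with AI RMF Core functions so that a workshop summary can report which functions were touched on. The tag assignments are invented for illustration and do not reflect an official crosswalk between the guidebook and the NIST AI RMF.

```python
from enum import Enum
from typing import Dict, List, Set

class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

# Hypothetical tags: each guidebook question (or category of questions) is assigned
# one or more AI RMF Core functions. These example assignments are illustrative only.
QUESTION_TAGS: Dict[str, List[RMFFunction]] = {
    "Who is going to be affected by the decision to use this tool?": [RMFFunction.MAP],
    "How can we evaluate how well the tool is working in the short term?": [RMFFunction.MEASURE],
    "Who will monitor and govern the tool after it is deployed?": [RMFFunction.GOVERN, RMFFunction.MANAGE],
}

def functions_covered(discussed_questions: List[str]) -> Set[RMFFunction]:
    """Return the AI RMF Core functions touched by the questions a workshop discussed."""
    return {f for q in discussed_questions for f in QUESTION_TAGS.get(q, [])}

if __name__ == "__main__":
    covered = functions_covered(list(QUESTION_TAGS))
    print(sorted(f.value for f in covered))
```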
Expanding the Situate AI Guidebook to Engage Community Members

In public sector contexts, there is often a greater expectation that decisions center the needs of the community, including by being transparent to and engaging with the community during the decision-making process. In our study, participants expressed a desire for guidance on how to engage with community members in discussing complex topics around AI design. While the deliberation protocol is not currently designed to support such conversations, the current version poses questions that suggest follow-up tasks involving conversations with community members. For example, the question "Are we assessing the tool from the perspective of impacted community members? What evidence do we have to suggest that we are genuinely understanding their concerns and desires?" suggests that the agency should talk with impacted community members to understand their perspectives, a task that would require additional guidance and resources to complete successfully. Agency workers acknowledged that they are often pushed to involve community members in their AI design work, but without actionable guidance on how to do so effectively. Participants suggested linking existing relevant resources from the guidebook to assist agencies in this regard. Moreover, community advocates in our study expressed interest in engaging in the deliberation workshops themselves. Future work should explore ways to improve the design, structure, and process of the deliberation guidebook so that it is well-equipped to support conversations between agency-internal workers and agency-external community representatives. For example, to help bridge a shared vocabulary about AI between agency workers and community representatives, future work could begin by integrating existing resources and guidance from publicly available guides like A People's Guide to Tech [48]. It is possible that the specific questions and topics this deliberation guidebook addresses require additional scaffolding and support. Future work should explore ways to provide this support through continued collaborations with community advocacy groups. Community advocates in our study additionally expressed interest in having both the option to attend in-person workshops and the option to participate anonymously online. Future work could explore ways to support more democratic forms of participation online using social computing platforms (e.g., [34]) intended to facilitate and analyze deliberation about specific topics around AI.
Exploring How the Situate AI Guidebook Can Support Deliberation in Non-Public Sector Contexts

While the Situate AI Guidebook was originally designed for high-stakes public sector decision-making domains, there is an opportunity to adapt it to meet the needs of other AI use cases. Private and public sector settings share many organizational challenges (e.g., communication barriers across teams and occupations) and development tendencies (e.g., targeting problem spaces that AI capabilities may be ill-suited towards) that could implicate the effective design of responsible AI toolkits. Moreover, by designing for a setting with relatively high expectations and standards for responsible design (i.e., high-stakes AI applications in the public sector), the Situate AI Guidebook sets a high bar for the kinds of questions and processes that should be followed to responsibly evaluate early-stage AI design concepts elsewhere. For these reasons, we expect that the guidebook may be (at least partially) applicable to other AI deployment contexts, including certain high-risk applications in industry (e.g., healthcare, credit lending). Indeed, many of the questions that the participants generated are relevant to AI deployment contexts beyond the public sector. Most deliberation questions target core issues relevant to all AI deployments (i.e., around the goals, ethical implications, technical constraints, and governance practices surrounding an AI deployment). Besides the deliberation questions, the design principles underlying the guidebook may also help make the guidebook useful for non-public sector contexts. Because the public sector agencies we partnered with differed in their organizational practices (e.g., who is involved in decision-making around AI) and priorities (e.g., types of services provided), we intentionally designed the guidebook to allow for flexible adoption and personalization. For instance, in designing towards Design Principle 2, we categorized the questions into modular topics and subtopics that can be selected and combined for a given deliberation workshop. There is an opportunity for future work to expand the set of labels attached to the deliberation questions. For example, participants described an interest in future versions of the toolkit that categorize questions based on the type of technology (e.g., generative AI, predictive analytics) or the type of deliberation (e.g., individual assumption-checking, knowledge-sharing, future task identification) that the question is intended to support. Future work could similarly aim to understand the types of questions that are most critical for certain AI deployment contexts (e.g., public sector social work, private sector healthcare).

CONCLUSION
increasingly turn to AI tools to increase the efficiency of their services, it becomes critical to ensure these tools are designed responsibly.While much research and development efforts have been dedicated to better scaffolding responsible AI development and evaluation practices, real-world AI failures often point to fundamental problems in the problem formulation of an AI tool-problems that should be addressed before proceeding with any decision to develop an AI tool.Yet, we currently lack effective processes to support such early-stage, deliberate decision-making in the public sector.This paper introduces the Situate AI Guidebook: the first toolkit that is co-designed with public sector agencies and community advocacy groups to scaffold early-stage deliberations regarding whether or not to move forward with the development of an AI design concept.Through co-design sessions conducted over the course of 8 months, participants generated 132 questions which we organized under four high-level categories including (1) goals and intended use, (2) social and legal considerations of a proposed AI tool, (3) data and modeling constraints, and (4) organizational governance factors.In this paper, we elaborate on how participants' practices, challenges, and concerns shaped the Situate AI Guidebook's guiding design principles, the deliberation questions they believed were critical for early-stage decision-making, the overall organizational and team decision-making process the guidebook should scaffold, and the success criteria used to assess the effectiveness of the guidebook.We additionally discuss opportunities for future work to improve the design and implementation of the Situate AI Guidebook, including via continued partnership with public sector agencies in our study, who plan to pilot how the guidebook can be used in their agency. example, historical agency metrics, legislature, community members, research reports.)-What evidence suggests the specific form of technology we are envisioning (e.g., predictive analytics) may offer a remedy?• What are the additional challenges and risks associated with pursuing a technological solution to this problem? Involving agency-external stakeholders in determining the goals. • Think about the most impacted stakeholders you identified in response to the questions above.How do we bring their voices to the table when determining goals?Envisioned harms and benefits. • What are the potential harms and benefits of the tool, and to whom? -Do benefits outweigh the harms?-Do we expect there to be tradeoffs between accuracy, fairness, explainability?For example: making decisions in a completely random fashion may look "fair", but is not necessarily accurate.Centering community needs. • How can we effectively evaluate the tool from the perspective of impacted community members?-E.g., what does false positive, false negative mean for different impacted communities?How are we weighting false positives and false negatives, in a given use case, based on the relative costs of each type of error for the impacted stakeholders? • How might front-line workers respond to the tool?How can we better understand their underlying concerns and desires towards the tool?• How do front-line workers perceive the algorithm?(e.g., do they consider it a top-down requirement or a useful tool) • Do domain experts also believe the model 'makes sense', e.g., selection of important features? 
A.2 Societal and Legal Considerations The set of questions below are intended to support conversations around the following broader question: Given the societal, ethical, and legal considerations and envisioned impacts associated with the use of AI tools for our stated goals (identified in Facet 1), is our proposed AI tool appropriate?This stage would benefit from the expertise of the following stakeholders at the minimum, amongst others: AI practitioners, frontline workers, community members, legal experts. A.2.1 Legal considerations around the use of algorithmic tool. • Do the people impacted by the tool have the power or ability to take legal recourse?-How well can we interpret case-specific considerations in the context of legal documentation/guidelines (e.g., when there is a lot of grey in practice, but the law is written in black and white)?* E.g., in child maltreatment: "threat of harm" or "physical abuse" allegation type sounds black/white but there are various factors that make this grey.E.g., how hard did it hit them?Did it leave a mark?Action occurred but no impact from the action? A.2.2 Ethical and fairness considerations around the use of algorithmic tool. Impacted Community Member Needs.Involving Impacted Communities. • What are underlying assumptions that tool developers/researchers may have, regarding the soundness of the design decisions made in the tool?• How can we set up external participation opportunities, to increase access?-E.g., avoiding scheduling during a 9-5pm period (to open involvement to those who want to be involved) -E.g., is it possible to involve groups that are not involved and paid by the agency, to get input and feedback?-Do we know who should be included?How can we build the right network of people to talk with?• Who has a seat at the table, to decide how the tool impacts you? • How are you engaging with people closest to the problem (e.g., frontline workers, community members, or others impacted by the decisions)?• Have you communicated the limitations and historical context of the data, to community members?• How well do we understand the costs, risks, and effort required of community members, if we invite them?E.g., many were directly harmed by decisions made by the agency. • When do we start to engage impacted communities into discussions around the design or use of the tool? Clarity of Ethics Goals and Definitions. • Can we agree on a definition of fairness and equity in this context?What would it look like if the desired state is achieved? Operationalization of Ethics Goals. • Are fairness and equity definitions and operationalizations adequately context-specific?(For example, in the child welfare domain: children with similar profiles receive similar predictions irrespective of race?) • Do we know how to appropriately operationalize our fairness formulation in the algorithm design?• Can we mitigate biases in the model? • How can we balance tradeoffs between false negatives and false positives?• How well are we integrating domain-specific considerations into the design of the tool?• Have we recognized and tried to adjust for implicit biases and discrimination inherent in these social systems that might get embedded into the algorithm? Envisioning Potential Negative Impacts. • Do we understand the negative impacts of the decision made across sensitive demographic groups?• What are the externalities / long-run consequences of the decisions? A.2.3 Social and historical context surrounding the use of algorithmic tool. 
• Have we recognized and tried to adjust for implicit biases and discrimination inherent in these social systems that might get embedded into the algorithm?• How might we clearly communicate the limitations and historical context of the data to community members?• Are you modeling historical, systemic patterns? A.3 Data and Modeling Constraints The set of questions below are intended to support conversations around the following broader question: Given the availability and condition of existing data sources, and our intended modeling approach, is our proposed AI tool appropriate?This stage would benefit from the expertise of the following stakeholders at the minimum, amongst others: AI practitioners. A.3.1 Understanding data quality. • How does the data quality and trends compare with an 'ideal' state of the world?-What does our data look like, in terms of different demographic outcomes?• Has the definition of the data changed over time?(E.g., in child welfare, has reunification always meant to reunify with the parent?)• What data do we have access to? -Do we have the data/feature set to replicate the tool/analysis/and predictive accuracy of the existing tool? • How well do we understand the meaning and value of the data that will be used to train an algorithm?• How is the quality of this data? -How accurate is the data?-How recent is the data?-How relevant is the data?-Has the data been consistently collected? A.3.2 Process of preparing data. • How are we preprocessing the data? • Who should be involved in making decisions around whether to include or exclude certain data points or features?Do we have plans for involving those people?• How do we address bias in the data?• Do we have metrics for feature importance, that we could show relevant domain experts?• How well do we understand the data collection process?• Data leakage questions: Are we preventing oversampling of certain populations?-E.g., in child welfare: Are we pulling one child per report, and one report per child, to ensure there's no information leakage between training and test sets? • Is our model appropriate given the available data?Why or why not? A.4 Organizational Governance Factors The set of questions below are intended to support conversations around the following broader question: Given our plans for ensuring longer-term technical maintenance and policy-oriented governance, do we have adequate post-deployment support for our proposed AI tool?This stage would benefit from the expertise of the following stakeholders at the minimum, amongst others: Agency leaders, AI practitioners, frontline workers. Measuring changes in model performance over time. • Do we expect there will be shifts in performance metrics over time?If so, why?What are our plans for identifying and mitigating those shifts?• Do we expect that the data collection process will improve over time?What might this imply for how we maintain the tool?E.g., Is there a need for adjusting thresholds over time? Mechanisms to identify long-run changes. • Are we repeating feature engineering efforts over time?-Are we detecting how trends shift over time at the population level?• Are there mechanisms in place that track whether certain data features have changed over the years?• Do we have mechanisms to track longer-term outcomes over time, so that we can monitor for changes in model performance?• Do we have the mechanisms to monitor whether the tool is having unintended consequences? A.4.2 Organizational policies and resources around the use of algorithmic tool. 
Policies around worker interactions.
• Is there training for frontline workers who will be asked to use the tool? What evidence suggests that this training is adequate?
• How are frontline workers trained?
• Is it clear to workers what information the tool can access, and what information it cannot? -How is this communicated to workers?

Governance structures.
• Imagine that we could assemble the "ideal team" to monitor and govern the tool after it is deployed: What are the characteristics of this ideal team? -Who is the actual team that will monitor and govern the tool after it is deployed? -Given the gaps between the "ideal team" and the actual team we expect to have: What risks to post-deployment monitoring and governance can we anticipate? How might we mitigate these risks?
• Are there appropriate forms of governance around the implementation? -Do those involved in governance have domain knowledge in the application context and knowledge of the implementation process?
• Are there sufficient guardrails in place to ensure algorithms wouldn't get weaponized? -E.g., IRB-like programs and researchers at the same table, to minimize the risk of weaponizing?

A.4.3 Internal political considerations around the use of algorithmic tool.
• How well do we understand system administrators' and leadership's perspectives around the use of this tool?
• How well do staff and leadership understand 'why' the tool could bring value?
• Do system administrators and leadership perceive this tool positively?
• Does leadership support the future use of the tool? -Do we have backing at a leadership level? E.g., director, agency, governor, community partners?
• Is there sufficient buy-in from middle managers and executive support?
• Do we have mechanisms to address concerns that could come up during the ideation and design process?

Figure 2: Screenshot of a blank board shown to participants at the start of the co-design activity on Mural. This board presents Scenario 1: Discussing ideas for new algorithms to improve services.

Figure 4: A high-level overview of the main stages involved in the proposed use case for the Situate AI Guidebook, intended to support structured and iterative deliberations within a public sector agency.

• How can we open opportunities for those who are most impacted by the new tool to inform the decision-making process?
• When will we start to engage impacted communities in discussions around how the tool should be designed or used?
• To what extent are we optimizing the things the agency cares about, versus what impacted community members care about?
• How rare is the event we are trying to predict? If it is rare, how reliably do we think we can predict it?
• How does the inclusion of additional information (e.g., attributes) improve outcomes?

A.1.3 Empirical evaluations of algorithmic tool.
Measuring improvement based on outcomes.
• Once the tool is deployed and in use, how can we evaluate how well it is working in the short-term? How can we evaluate how well it is working longer-term?
• What are some ways we might evaluate whether this tool is successful in improving the targeted outcomes? Are we using appropriate evaluation methods, e.g., synthetic controls, discontinuity analysis when cutoffs on risk exist?
• What outcome measures are we evaluating on? What can these measures tell us, and what can they not tell us?
• Hypothetically, imagine that our tool does a perfect job of improving the outcome that it targets. What additional problems might this create elsewhere in the system?
• Are there differences in the goals the agency versus community members think the tool should address? If so, what are they? If you are uncertain, what are your plans for understanding potential differences? -What are the envisioned harms and intended benefits from the tool that impact the community and the agency?
• Can we have impacted community's representatives or advocates at the table, to inform the design and use of the tool?
• How well are we engaging people closest to the problem and those impacted through the entire design, development, implementation, and maintenance process?
• Are the outcomes intended for agency or community benefit?
• How well do we understand what outcomes the community wants to improve?
• Do we understand how impacted stakeholders perceive each decision? E.g., emotional valence, potential impacts, etc.
• To what extent are we optimizing the things the agency cares about versus what impacted community members care about?
EVALUATION OF HEAD LOSS, SEDIMENT VALUE AND COPPER REMOVAL IN SAND MEDIA (RAPID SAND FILTER)

Along with technological development and the increasing consumption of water resources, the quality of these resources is declining. Copper causes serious environmental pollution, threatening human health and ecosystems. This metal is found in varying amounts in water resources and industrial activities. Therefore, water resources need to be treated to remove these excessive amounts. Different methods have been used for this purpose, but the most widely used method in recent years has been adsorption by economical adsorbents such as sand. Rapid sand filters are commonly used in water and wastewater treatment plants for water clarification. In this research, a single-layer gravity rapid sand filter was used to reduce different concentrations of copper. The sediment value and the head loss arising in the filter media were simulated using a combination of the Carman-Kozeny, Rose and Gregory models at different discharges of the rapid sand filter. Results showed that with increasing discharge and decreasing input copper concentration, the time to reach a given head loss increases. In addition, results demonstrated that with increasing copper concentration in the influent, removal efficiency decreases somewhat. The results of this research can be applied to the appropriate design of rapid sand filters for copper removal, to predicting the ability of a rapid sand filter to remove copper, and to estimating the head loss arising during filter operation, and thus to evaluating the backwash time interval.

Copper content in water and its removal

The increasing discharge of heavy metals from wastewater, their toxicity, their detrimental effect on water supplies (Nuhoglu & Oguz, 2003) and their persistence (non-degradability) in the environment have given them special importance (Saxena & Souza, 2006). Considering the growth of industrial activity and the problems caused by heavy metals, removing them or reducing their concentration to acceptable levels before discharge into the environment is essential (Banejad et al., 2010). Copper is one of the metals found in many water supplies, where it can be considerably troublesome. Copper causes serious environmental pollution, threatening human health and ecosystems (Wang & Chen, 2009). Removal of metal ions from industrial wastewater has been achieved by ion exchange, membrane separation (Katsumata et al., 2003), evaporation (Mouflih et al., 2005), electrolysis, adsorption processes and reverse osmosis (Sarioglu et al., 2005; Pehlivan et al., 2006). Choosing the best method for water and wastewater treatment depends on the concentration of heavy metals in the wastewater and on the treatment costs (Daneshi et al., 2009). Precipitation has been used extensively for the removal of heavy metals due to its low operating costs; however, a drawback of this method is the production of a high volume of sludge (Raju, 2003). On the other hand, sorption methods such as ion exchange are easy to apply for metal removal, but ion exchange resins are expensive (Katsumata et al., 2003; Aslam et al., 2004). Among the mentioned methods, one should look for a method that is economical, easily applicable in developing countries, and efficient in use. Adsorption has been suggested for the removal of heavy metals because it is cheaper and more effective than other technologies (Pehlivan et al., 2006). Such a method for metal removal can be applied to industrial wastes without prior treatment, using solid adsorbents such as sand and silica (Yabe & Oliveira, 2003).
Rapid sand filter and head loss

Filtration is the process in which suspended particles are removed from a flow by passing it through a porous medium (Hamoda et al., 2004; Iritani, 2003). Particle removal varies with the size and nature of the particles (Clasen, 1998). Rapid sand filters are used extensively for the treatment of water and wastewater (Raju, 2003). In rapid sand filters, the effective size and uniformity coefficient are usually taken as 0.45-0.7 mm and 1.3-1.7, respectively (Punmia et al., 1995). In water and wastewater treatment, granular media (rapid gravity) filters are used. Filters become clogged with deposits, and this leads to head loss through the filter media; filter backwashing therefore becomes necessary. To design a rapid sand filter that can be used effectively for the removal of a specific pollutant, predicting the head loss before construction is essential. For this purpose, equations describing the relationships between the hydraulic parameters involved must be used.

Granular media hydraulic equations

During filtration, the clogging of the pores increases, and with it the resistance of the filter bed. When the filter reaches the maximum available head loss, it needs to be backwashed to avoid a decrease in the filtration velocity. The factors affecting head loss can be expressed as HL = f(L, d, V_s, g, e, ν), where HL = head loss over a depth L of filter; d = filter media diameter; V_s = flow velocity across the media; g = gravitational acceleration; e = filter porosity; and ν = kinematic viscosity. The most common equations used to calculate head loss are (1) Carman-Kozeny, (2) Rose and (3) Gregory.

Modified Carman-Kozeny equation

The Carman-Kozeny equation is a semi-empirical relationship, and its extension to the particle deposition phase has to be based on experimental data, because no theoretical description of the processes governing head loss development has been derived that describes head loss as a function of time or of increasing solids deposits. The summaries of the wide variety of head loss development models during filtration by Herzig et al. (1970) and Sakthivadivel et al. (1972) also show that all head loss models are based on modifications of the Carman-Kozeny equation (Boller & Kavanaugh, 1995). The changes of the various parameters as porosity decreases and as the internal surface and the tortuosity of the flow increase during solids deposition are incorporated into the Carman-Kozeny equation (Boller & Kavanaugh, 1995). It should be noted that the Carman-Kozeny equation can be used to estimate head loss, but only for clean filter beds; it has therefore been extended and modified over time. Most of the models lead to an equation (equation 1) relating the head loss gradient I at a certain floc volume deposit σ_v to the initial head loss gradient I_0, in which p, x and y are empirical constants equal to 35, 1.5 and -1, respectively, and h, h_0 and L are the head loss, the initial head loss and the depth of the purification layer, respectively.
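The equation bodies themselves are not reproduced above. For orientation, a minimal sketch of the clean-bed head-loss relation as it is commonly written in the filtration literature is given below, using the variable definitions above; this is the standard textbook form, not necessarily the exact modified form (with constants p, x, y) used in this study.

```latex
% Clean-bed Carman-Kozeny head loss (standard textbook form, for reference only)
h = \frac{f'}{\psi}\,\frac{(1 - e)}{e^{3}}\,\frac{L}{d}\,\frac{V_s^{2}}{g},
\qquad
f' = 150\,\frac{(1 - e)}{\mathrm{Re}} + 1.75,
\qquad
\mathrm{Re} = \frac{\psi\, d\, V_s}{\nu}
```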
Rose equation

The Rose equation, for use with a rapid sand filter whose bed is considered homogeneous, is given as equation 2, where g = gravitational acceleration; h_0 = head loss between the top and bottom of the porous media; l = length of the path that the fluid travels through the media; d = effective size of the bed particles; f_0 = initial porosity involved in filtration; and C_D = Newton drag coefficient. C_D is a function of the Reynolds number and can be obtained from equation 3. Ψ is the particle shape factor, obtained from Ψ = A_0/A, where A_0 = surface area of a sphere having the same volume as the filter media particle and A = actual surface area of the filter media particle. Values of this parameter between 0.79 and 1 are suggested for sand (Tebbutt, 1998). After filter backwashing and the start of filtration, the fluid velocity in the porous media produces an initial pressure gradient between the top and bottom of the media. By entering this gradient into the Rose equation, the initial porosity involved in filtration, f_0, can be obtained.

Gregory equation (Tebbutt, 1998)

The Gregory equation is presented as equation 4, where ν = apparent fluid velocity; f = porosity involved in filtration corresponding to head loss h; t = time (minutes); C_0 = concentration of the substance in the fluid that leads to head loss; and K = the Gregory equation coefficient, which varies with the conditions. In this study, by combining the modified Carman-Kozeny, Rose and Gregory equations, the time at which the head loss in the granular media reaches a prescribed level is estimated. This method is a useful aid in designing the filter.

Methodology

For this study, a single-layer rapid sand filter with the following characteristics was constructed. The filter cross-section is 17 x 17 cm; the effective treatment layer is 70 cm deep and consists of sand with a 0.42-1.8 mm diameter, a particle density of 2.65 g/cm3, an effective size of 0.6 mm and a uniformity coefficient of 1.5. The filter media is supported on a base material consisting of graded gravel layers (table 1). The gravel should be free from clay, dirt, vegetable and organic matter, and should be hard, durable and round; its total depth is 120 cm, laid in the layers shown in figure 1 (Schematic of Filter). To achieve different copper concentrations (25, 75, 125 and 175 ppm), copper nitrate salt was used. Each solution was then separately fed to the top of the filter and passed through the granular media at various discharges (1.5, 2, 2.5 and 2.9 L/min). The characteristics of the water used to prepare the solutions are shown in table 2. Sampling was carried out from a tap installed under the filter drain. The samples were immediately acidified with nitric acid, and the copper concentration in the effluent was then determined by atomic emission spectrometry with an ICP source. One of the most important factors in the modified Carman-Kozeny equation is f_0. Since it is impossible to know exactly how much of the porosity participates in filtration, especially when deposits with complex morphology form in the granular media, and since f_0 varies from one discharge to another, estimating this factor is difficult. To this end, for each discharge the initial head loss (h_0) was read from a piezometer installed below the purification (upper) layer. C_D was then calculated from equation 3; in this study ψ was taken as 0.85. f_0 was calculated from the Rose equation. Particular attention must be paid to l in the Rose equation: in the case of granular media, l is the length of the path that the fluid travels through the filter. For this reason, the purification layer height was multiplied by a tortuosity coefficient, which Carrier (2003) gives as two.
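As a minimal illustrative sketch of the f_0 estimation step described above, the snippet below computes the Reynolds number, the drag coefficient and the initial porosity for one discharge. It assumes the standard Rose drag-coefficient correlation (C_D = 24/Re + 3/Re^0.5 + 0.34) and the standard Rose head-loss equation, since the exact equation forms used in the study are not reproduced here; the initial head loss value is hypothetical, while the filter geometry, grain size and ψ follow the values quoted above.

```python
import math

# Illustrative inputs; the geometry, grain size and shape factor follow the text,
# the initial head loss reading is a hypothetical piezometer value.
Q = 2.0 / 1000 / 60          # discharge, m^3/s (2 L/min)
A = 0.17 * 0.17              # filter cross-section, m^2 (17 x 17 cm)
v = Q / A                    # apparent (superficial) velocity, m/s
d = 0.6e-3                   # effective grain size, m
psi = 0.85                   # particle shape factor used in the study
nu = 1.0e-6                  # kinematic viscosity of water, m^2/s
g = 9.81                     # gravitational acceleration, m/s^2
l = 2 * 0.70                 # flow path: purification-layer depth x tortuosity of 2, m
h0 = 0.20                    # measured initial head loss, m (hypothetical)

Re = v * d / nu                              # Reynolds number (< 1, i.e. laminar, as reported)
Cd = 24 / Re + 3 / math.sqrt(Re) + 0.34      # assumed standard Rose drag correlation

# Assumed standard Rose equation: h0 = 1.067 * (Cd/psi) * (l/d) * (v**2/g) * (1/f0**4),
# solved here for the initial porosity involved in filtration, f0.
f0 = (1.067 * Cd * l * v**2 / (psi * d * g * h0)) ** 0.25
print(f"Re = {Re:.2f}, Cd = {Cd:.1f}, f0 = {f0:.3f}")
```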
Head loss in filter and porosity relationship for different discharges

In this step, a range between the initial head loss (h_0) and the permissible head loss was assumed. For each discharge and assumed head loss, σ_v was calculated from the modified Carman-Kozeny equation. The f_0 needed in the modified Carman-Kozeny equation was obtained, for each discharge, from step 2.1.

Gregory equation adaptation

The unknown parameters in the Gregory equation are K and f. In each step of the experiment, f is obtained from the equation below. To obtain K, the following steps must be performed. A: Calculate the copper removal efficiency of the filter in the various steps, and from it determine the concentration of trapped copper that leads to head loss in the filter (C_0). B: Read h_0 from the installed piezometer at the beginning of filtration for each discharge, and read h from the piezometer at a certain time after the start of filtration (in this case 50 minutes) for each inlet copper concentration. C: Enter C_0, f, h, h_0, v and t into the Gregory equation for each experimental step. K is then available for each step of the experiment.

Time estimation for reaching a given head loss

In this step, an assumed range of head loss (h), between the initial head loss and the permissible head loss, is considered. From section 2.2, the decreased porosity corresponding to the assumed head loss (f) is available. By entering h_0, C_0, v, h and K into the Gregory equation for all situations (assumed range of head loss, varied discharge and different inlet copper concentrations), the time to reach a given assumed head loss (t in the Gregory equation) is obtained.

Results and Discussion

Hydraulic parameters for different discharges

The values obtained for the initial head loss, initial head loss gradient, Reynolds number, drag coefficient and initial porosity are shown in table 3. As observed, all Reynolds numbers are less than one; thus laminar flow dominates in the filter bed.

Assumed head loss versus f diagrams for all discharges

Figure 2 describes the relationship between head loss and decreased porosity (f) at different discharges. From fig. 2 and table 3 it can be seen that f_0 decreases as the discharge increases. In addition, the slopes of the lines in fig. 2 are approximately the same, so the porosity-decreasing trend can be expected to be similar at the different discharges; in other words, the rate of deposit build-up is similar across the discharge range. To obtain C_0 in the Gregory equation, the copper removal efficiency of the rapid sand filter (E%) must first be calculated (fig. 3). Then, using the equation below, C_0 is obtained: C_0 = C_1 - C_2, where C_1 and C_2 are the inlet and outlet copper concentrations, respectively. (Figures 4-7 present the results for the different inlet copper concentrations, e.g. 125 ppm and 175 ppm.) Although an increase in discharge means that more copper enters the filter, the higher water velocity in the bed causes the removal efficiency to decrease; in addition, the deposit forms in a more compact way in the granular media (because of the larger hydrodynamic force). Thus, under the same circumstances (same inlet copper concentration and given head loss), an increase in discharge leads to a decrease in σ_v. In other words, the hydrodynamic force of the water is more influential on head loss during copper filtration than the inlet load of copper. Comparison of the line slopes at the same discharge in each of figures 4, 5, 6 and 7 shows that the slope is greater at lower inlet copper concentrations. Therefore, at lower inlet copper concentrations the deposit distribution over the bed depth is expected to be more homogeneous, whereas at higher inlet copper concentrations most of the deposit forms in the upper layers of the bed.
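A minimal sketch of the C_0 calculation described above is given below; the outlet concentration is a hypothetical value used only to illustrate the relationship between removal efficiency and C_0.

```python
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Copper removal efficiency E (%) of the filter."""
    return (c_in - c_out) / c_in * 100

def trapped_concentration(c_in: float, c_out: float) -> float:
    """C0 = C1 - C2: concentration of trapped copper that causes head loss."""
    return c_in - c_out

c_in, c_out = 125.0, 40.0   # ppm; inlet value from the study, outlet value hypothetical
print(removal_efficiency(c_in, c_out), trapped_concentration(c_in, c_out))
```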
Conclusion

An increase in copper concentration leads to a decrease in removal efficiency. Therefore, if high concentrations of copper are present, a series of rapid sand filters must be used. Considering that a rapid sand filter has relatively low construction and reclamation costs compared with other methods for copper removal, this type of filter is recommended for copper removal from water and wastewater. At lower inlet copper concentrations, the deposit distribution over the bed depth is more homogeneous; therefore, if high concentrations of copper are present, a series of rapid sand filters should be arranged from a filter of lesser depth to a filter of greater depth. With increasing discharge and decreasing inlet copper concentration, the time to reach a given head loss increases. Following the approach of this study can be useful for better rapid sand filter design (depth of filter, discharge, and grain size of the filter media). Determining the head loss arising during filtration with the method presented in this research allows a more exact estimation of the time interval for rapid sand filter backwashing. Using variable filter media sizes in the calculations and following the methodology described can aid in selecting an appropriate rapid sand filter particle size.

Table 3: Initial head loss, initial head loss gradient, Reynolds number, drag coefficient and initial porosity with respect to apparent velocity. Table 4: K (Gregory coefficient) values under the different conditions. Figures 4-7: Estimated time to reach a given head loss (minutes) for the different copper concentrations and discharges.
Estimation of costs for control of Salmonella in high-risk feed materials and compound feed Introduction Feed is a potential and major source for introducing Salmonella into the animal-derived food chain. This is given special attention in the European Union (EU) efforts to minimize human food-borne Salmonella infections from animal-derived food. The objective of this study was to estimate the total extra cost for preventing Salmonella contamination of feed above those measures required to produce commercial feed according to EU regulation (EC) No 183/2005. The study was carried out in Sweden, a country where Salmonella infections in food-producing animals from feed have largely been eliminated. Methods On the initiative and leadership of the competent authority, the different steps of feed production associated with control of Salmonella contamination were identified. Representatives for the major feed producers operating in the Swedish market then independently estimated the annual mean costs during the years 2009 and 2010. The feed producers had no known incentives to underestimate the costs. Results and discussion The total cost for achieving a Salmonella-safe compound feed, when such a control is established, was estimated at 1.8–2.3 € per tonne of feed. Of that cost, 25% relates to the prevention of Salmonella contaminated high-risk vegetable feed materials (mainly soybean meal and rapeseed meal) from entering feed mills, and 75% for measures within the feed mills. Based on the feed formulations applied, those costs in relation to the farmers’ 2012 price for compound feed were almost equal for broilers and dairy cows (0.7%). Due to less use of protein concentrate to fatten pigs, the costs were lower (0.6%). These limited costs suggest that previous recommendations to enforce a Salmonella-negative policy for animal feed are realistic and economically feasible to prevent a dissemination of the pathogen to animal herds, their environment, and potentially to human food products. I n the European Union (EU), efforts are in place to minimize human food-borne Salmonella infections from animal-derived food. Special attention is given to animal feed (1) in line with EU regulation (EC) No 178/ 2002 (known as the 'Food Law'), which considers animal feed as the first link of the animal-derived food chain. A quantitative risk assessment concluded that in both breeder and slaughter pigs, infected incoming pigs and Salmonella-contaminated feed are the two major sources of Salmonella (2). A similar situation also applies for poultry (3). The importance of feed is further emphasized in that Salmonella-safe feed is required to maintain breeding animals free from Salmonella. In the same way that Salmonella-contaminated food is the main route for transmission of Salmonella infections in humans, ingestion of Salmonella-contaminated feed is a key route of transmis-sion in animals (4). A striking example emphasizing the potential of contaminated animal feed to act as a source of Salmonella infections in humans occurred when Salmonella Agona emerged as a public health problem in several countries due to the widespread use of contaminated fish meal that was imported as feed material. In the period 1968Á1972, a rapid increase of human infections with S. Agona occurred in the United States as well as in Europe (5). Since then, S. Agona is among the most prevalent serotypes in humans. 
It is estimated that the serotype up to 2001 caused 1 million human illnesses in the United States alone since it was introduced in animal feed in 1968 (5). An integrated approach needed to prevent Salmonella contamination of feed is reviewed (6). Jones (7) has separated the control measures into three major strategies: 1) prevention of contamination, 2) reduction of multiplication, and 3) procedures to kill the pathogen. In spite of all the challenges involved, it is possible to successfully produce Salmonella-safe feed, even for young broilers (8), under commercial and industrial conditions as demonstrated, for example, in the Nordic countries (Denmark, Finland, Norway, and Sweden). The young broiler is very sensitive to peroral exposure to Salmonella and can become infected from ingestion of just a few Salmonella bacteria (9). In Sweden, with a long tradition of control of Salmonella in feed, the incidence of Salmonella in broiler production, based on an approach where each flock is tested before slaughter, is found to be very low (8). The average annual (1996Á2010) incidence of Salmonellainfected flocks (annual production 75 million chickens; average flock size, 20,000 chickens) was 0.2% based on testing prior to slaughter. Only 0.03% of carcasses were found to be Salmonella-contaminated when tested after slaughter (10). Also, in other food-producing animal species, the annual incidence of Salmonella is relatively low (11). During the same period (1996Á2010), Salmonella was isolated from only 0.13% of lymph nodes of fattening pigs indicating a low prevalence of Salmonella contamination of feed. In addition to the control of Salmonella in feed, this relatively positive situation is also the result of actions taken at the farm when Salmonella is detected in animals (12). However, the control of feed is considered to be essential. In contrast to available data on how to prevent and control Salmonella contamination of feed, there is a considerable gap of published data on the actual cost of those actions (13). It is currently also important to fill out that gap when considering that the costs, although unspecified, are sometimes used as an argument against implementing a control (14). The objective of this study is therefore to estimate the total extra cost for preventing and controlling Salmonella contamination of some high-risk feed materials (mainly soybean meal and rapeseed meal) and compound feed to food-producing animals and also the cost in relation to the price of feed. The study was carried out in Sweden because, as described above, the strategies applied for the prevention of Salmonella contamination in the feed industry result in a Salmonellasafe feed. These estimations should be of general value since the feed production generally includes the same technical approach in most countries and the price for feed materials and compound feed follows the global prices on feed commodities. General approach In Sweden, different measures are taken with regard to the manufacture of commercial feed in order to realize the ambition of producing Salmonella-safe feed. These measures normally result in extra costs above the cost for those measures required to produce commercial feed according to requirements for feed hygiene as described in EU regulation (EC) No 183/2005. However, that regulation does not include any specific requirements concerning reducing the contamination of Salmonella. 
This assessment estimates the extra costs, in addition to the requirements under this EU regulation, for the prevention and control of Salmonella contamination in animal feed. Special attention is given to the production of high-risk feed materials (as defined below) during the production process in crushing plants, when used in feed mills, for the manufacture of compound feed. 1 The total cost of compound feed production, as well as costs in relation to the price of the feed in question, is also estimated. It was not possible to specify the extra costs for feed intended to be used for different food-producing animal species. Therefore, the estimations cover the feed production for all food-producing terrestrial animals in Sweden, predominantly cattle, swine, and poultry, and to a lesser extent sheep. Feed for other species such as pet animals and farmed fish is not considered. Legislative demands and strategies for the control The minimum requirements in Sweden for the prevention and control of Salmonella in animal feed are provided for in national legislation (12). Some feed materials are classified (S1 to S3) according to risk for Salmonella contamination. The highest risk class (S1) only includes feed materials of animal origin, which are currently used only to a very limited extent in animal feed (e.g. animalderived fat, fish meal, some milk and egg products). Risk class S2 includes meals and expellers (cakes) from the oil crushing industry (e.g. babassu, coconut expeller, palm kernel expeller, rapeseed, and soybean meals) and maize gluten feed and meal. When handling such feed materials, crushing plants and feed mills are required to have a Salmonella control program in place. In this program, identified critical control points have to be tested for contamination based on a minimum number of samples at specified intervals according to HACCP principles. Swedish retailers/operators of feed mills are not allowed to use feed materials classified as S2 from other countries until a Salmonella control with a negative result has been carried out on every single lot received. Risk class S3 includes feed materials with a lower probability for Salmonella contamination. These also need to be tested, but the material can be used before the test result is available. Currently, feed materials classified as S3 only include rice. For other feed materials of vegetable origin, such as cereals, there are no detailed obligations regarding Salmonella laid down in national legislation. When feed for animals other than poultry is produced, a minimum of two environmental samples should be tested each week from the top of the storage bin for compound feed (1) and from the intake pit/bottom of elevator for feed materials (2). Special attention is given to the production of compound feed to poultry. For such production, the following five control points are specified as a minimum requirement: 1) intake pit/bottom of elevator for feed materials; 2) pneumatic aspiration (excavations) from feed materials or central aspiration; 3) top of pellet cooler; 4) dust from room for pellet cooler; and 5) storage bin for produced compound feed. Consequently, a minimum of five samples should be taken every week with regard to the production of poultry feed. The HACCP program, including hygienic procedures for cleaning, has to be adapted to each feed operation and has to be checked by the Competent Authority, the Swedish Board of Agriculture. In addition, all feed for poultry should be heat treated, to a minimum at 758C. 
Operators of feed mills have regularly identified feed materials as one of the control points in particular for oil seed meals from abroad. Testing for the absence of Salmonella (irrespective of serovar) is often conducted by operators of feed mills even for feed materials where testing is not mandatory before they are allowed to enter the plant or used for feed production. Consignments found to be Salmonella contaminated are decontaminated, followed by re-testing with a negative result before use (15). Decontamination is regularly done by treatment with organic acids (15). The surveillance of consignments of risk feed materials is based on a sampling procedure that takes account of the potential for an uneven distribution of Salmonella and is designed to detect contamination in 5% of the batch with 95% probability (16). This means that from consignments of 101Á10,000 tonnes, a minimum of eight samples must be analyzed, each consisting of 10 pooled incremental samples of 2.5 g. Where possible, sampling on a moving stream principle is applied. Feed material produced domestically is normally not specifically tested for Salmonella in the feed mill but the control is instead based on the control program for the producing plant, as described above. All samples are tested by standard bacteriological procedures (17) and in particular according to the NMKL-71 method (18). Consignments of feed materials from abroad are now often initially tested by a PCR technique, which in cases of a positive result are verified with the bacteriological methods (19). The analyses are always done at accredited laboratories. The mandatory samples from the feed production must be sent to the National Veterinary Institute for analysis (15). The national legislation also specifies measures to be taken by crushing plants and feed mills when Salmonella, irrespective of serovar, is isolated from: 1. Feed materials 2. Production lines for non-heat-treated feed 3. Production lines (before heat treatment/unclean part) for heat-treated feed 4. Production lines (after heat treatment/clean part) for heat-treated non-poultry feed 5. Production lines (after heat treatment/clean part) for heat-treated poultry feed These measures seek to identify and eliminate contamination of Salmonella and are always undertaken when Salmonella is isolated, irrespective of the serovar involved. Slightly more stringent measures are in place for poultry feed. In contrast to feed for other animal species, the delivery of compound feed to poultry producers has to be stopped directly when Salmonella is detected on the clean side of the production line. Note, however, that the operators of feed mills, irrespective of the animal species intended, now generally apply such procedures. Estimation of cost The study was done on the initiative and under the leadership of the Swedish Board of Agriculture. Initially, the different steps of feed production associated with prevention and control of Salmonella contamination were identified as a joint effort with the industry. The annual mean costs during 2009 and 2010 were then estimated, by representatives of the three major feed producers for the Swedish market. 2 One was a producer of crushed feed material from rapeseed at one plant with an annual crushing capacity of 300,000 tonnes of rapeseed for the production of 180,000 tonnes of rapeseed meal and the other two were major producers of compound feed, with an estimated 90% share of the Swedish feed market. 
Their production was located at approximately 15 feed mills with a total annual production capacity of 1.6-1.8 million tonnes of compound feed. Based on the legislative demands and associated control strategies and possible additional measures as described above, the costs for preventing and controlling Salmonella contamination were identified for 1) high-risk feed material, which is split up into domestic production and imported ready-to-use feed material, each of which is further split up into four subareas as specified in Tables 1 and 2; 2) compound feed, which entirely concerns domestic production in feed mills, with this cost split up into seven groups as specified in Table 2; and finally 3) the total cost for those two groups of feed, related to the commodity price as specified in Table 4. The three participating producers (AAK, http://www.aak.com; Lantmännen, http://www.lantmannenlantbruk.se/; and Svenska Foder, http://www.svenskafoder.se/) were competing on the market and therefore insisted on carrying out the analyses individually and not as a joint effort. Their estimations of the total cost for the different identified areas are therefore not further split up, but include the costs of labor, laboratory analyses, equipment, voluntary industry-based additional controls (for example, heat treatment of non-poultry feed), extra sampling and biosecurity, or other related costs as relevant, as judged by the estimators, who are key persons for the area and familiar with similar estimations. The cost was distributed across the total amount of feed produced. The data were analyzed and summarized by the Swedish Board of Agriculture. When necessary, clarification was sought. Diverging estimations of cost are presented as a range.

Control of Salmonella

The costs were originally given in Swedish kronor (SEK) but are here recalculated to euro (€) at an estimated mean currency exchange rate during recent years of 1 € = 9 SEK. Where necessary, conversion to/from US currency for prices on feed commodities was conducted at 1 $ = 7.20 SEK.

Manufacture of crushed feed material in Sweden

Only one commercial plant crushes oil seeds classified as S2 (see above) in relation to the risk for Salmonella contamination (20). This plant produces rapeseed meal (nr 2.07 in Regulation (EC) No 242/2010) using both domestic and non-domestic sources. In line with legal requirements, the establishment has a declared ambition to deliver only Salmonella-safe feed materials to customers/feed mills. This means that the product is not delivered until it has tested negative for Salmonella contamination (18). The feed safety GMP program of the plant is certified by VFK (The Association for Safe Feed Materials, http://www.vfk.se). The specific costs for delivering Salmonella-safe feed materials are presented in Table 1 and are in total estimated at 2 € per tonne.

As a clarification of the legislative demands described above, it should be noted that the demand for testing of rapeseed (risk class S2) originating in other countries for Salmonella is not applied to the crushing plant, as rapeseed is regarded as a raw material for further processing into a final expeller/meal. However, if Salmonella is detected in the produced rapeseed meal, the product is to be decontaminated by heat treatment or by organic acid and tested free from Salmonella contamination before delivery, as described before.
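As a rough, illustrative check of the consignment surveillance design described earlier (detection of contamination present in 5% of a batch with 95% probability), the sketch below computes the minimum number of independent incremental samples this implies under a simple binomial model. It assumes that each increment is an independent draw and that pooling increments does not reduce analytical sensitivity; both are simplifying assumptions.

```python
import math

def min_increments(prevalence: float, detection_prob: float) -> int:
    """Smallest n with 1 - (1 - prevalence)**n >= detection_prob."""
    return math.ceil(math.log(1 - detection_prob) / math.log(1 - prevalence))

needed = min_increments(prevalence=0.05, detection_prob=0.95)   # -> 59 increments
taken = 8 * 10                                                  # 8 pooled samples x 10 increments each
print(needed, taken >= needed)                                  # 59 True
```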
High-risk feed materials originating from other countries for use in compound feed

The choice of good suppliers: Although it is normally not possible to buy feed materials with any kind of 'Salmonella guarantee', experience has shown that the Salmonella status of feed materials placed on the market by producers in other countries varies (15). Efforts are therefore made to choose suppliers of high-risk feed materials with statistically good records with regard to Salmonella contamination, to avoid the extra costs associated with Salmonella contamination as described below. So far, the only known example of a foreign producer with good statistical records for the absence of Salmonella contamination is a Norwegian crusher of soybean, which is known to have a good self-auditing hygiene program that is completely transparent to Swedish customers and authorities (1). Commodities leaving that plant have been tested free from Salmonella contamination before delivery, making the additional testing on reception that is otherwise required at the Swedish feed mills unnecessary. The extra cost due to Salmonella control for soybean meal from that plant, in relation to soymeal from other producers, is estimated by Swedish operators of feed mills at 3.3 € per tonne. When an intended supplier of feed materials has faced problems with Salmonella in their establishment, or Salmonella has been detected in their products, Swedish feed mills have sometimes been forced to temporarily choose another supplier, which is associated with different kinds of extra costs estimated at approximately 3.3 € per tonne. However, due to the low annual incidence of such events, the extra cost here is considered negligible.

Control of Salmonella

Sampling and analysis: These costs include the mandatory sampling and testing of all consignments from other countries of S1, S2 and S3 feed materials for the absence of Salmonella contamination, and the holding of consignments in quarantine pending the results. Category S1 and S2 feed materials are not to be used in compound feed production until a negative result for Salmonella is available. Sampling and testing may also be applied on a voluntary basis for domestically produced feed materials, although a legal demand for such a procedure is not in place because, in contrast to the situation for non-Swedish feed production, domestic production can be controlled by the competent Swedish authorities. However, every consignment of feed materials found positive for Salmonella, whether risk-categorized or not, has to be handled according to the provisions laid down in national legislation. Holding consignments under quarantine means extra costs for storage facilities. The total cost for this control on receipt is estimated at 0.4-0.9 € per tonne.

Acid treatment when Salmonella has been detected: When a consignment is found to be Salmonella contaminated, when tested as described above under 'Sampling and analysis', it must either be sent back to the consignor or undergo a decontamination process. In practice, returning it to the consignor is seldom an option. The dominant way of decontamination is the use of organic acid (15, 21). After decontamination, further sampling and testing is conducted to verify that the decontamination has been successful. Then the feed material can be used in the production of compound feed, but only in heat-treated compound feed. The storage place for the contaminated feed materials subsequently requires cleaning and disinfecting.
Costs for de-contaminating feed materials include capital for investment in equipment, extra storage space, and variable costs including costs for organic acid, labor, sampling, and testing. The total extra costs for the decontamination of a Salmonella-contaminated consignment, which have to be paid by the Swedish consignee, usually the feed mill, are estimated at around 16.7Á22.2 t per tonne of treated feed material. When estimating the mean proportion (7%) of consignments of non-Swedish S2 feed materials, mostly soy and rapeseed meal, being Salmonella contaminated, the mean extra cost for control of Salmonella in these feed materials is estimated at 1.1Á1.7 t per tonne. General heat treatment of a feed material: Acid treatment is only performed when Salmonella has been confirmed in a consignment. However, on a voluntary basis, prophylactic heat treatment is often performed when some feed materials, usually soy and rapeseed meal, are delivered directly to farmers. The cost for this heat treatment is estimated at 11Á22 t per tonne of treated feed material. Compound feed Technical standard of feed mills For decades, feed mills in Sweden have been continuously improved by taking on board new technologies for the production of feed that is Salmonella-safe. They are constructed in a way that separation can be made between an unclean and a clean section where the borderline is the heat treatment step. The development of more effective long-duration conditioners and pellet presses has resulted in ongoing replacement of equipment. Such modern, long-duration conditioners allow for heat treatment of the feed for a longer period of time. Additionally, the construction of coolers which are used to lower the temperature and the humidity in the pelleted material, has been improved. Over-pressure is used to prevent microorganisms from being introduced into the cooler where hot and humid feed is handled. The annual capital costs for the technical improvements were estimated at 0.1Á0.2 t per tonne. Heat treatment According to national legislation, feed for poultry has to be heat treated. The operator has to ensure that the temperature has reached 758C in the feed before it may be passed over to the cooler. In practice, most commercial compound feed for food-producing animals is heat treated, usually during a pelleting process. Pelleting the feed could result in nutritional and other advantages but is also an effective way, with effective conditioners, of preventing the spread of Salmonella through feed. This procedure is considered an extra cost, noting prolonged residence in the conditioner and the use of high amounts of steam. The extra total costs for having an effective de-contamination effect in the conditioner and in the pellet press are estimated at 0.8Á1.1 t per tonne of feed. HACCP-associated sampling According to national legislation, environmental sampling for control of Salmonella contamination (HACCP) has to be conducted on a weekly basis as described above. Very often the operator takes more samples than laid down as a minimum requirement in the national legislation. In some cases, a large quantity of samples is taken at one specific time in the year in order to get a good overview of the hygienic situation in the establishment. The extra costs for all the HACCP-associated sampling are estimated to be around 0.2Á0.3 t per tonne of feed. 
Measures when Salmonella has been detected in a feed mill When Salmonella has been detected in a feed mill, further monitoring by sampling is carried out in order to gain an understanding of the extent of the contamination. The contaminated area or object is cleaned and disinfected and the effect verified by repeated sampling. Depending on the situation, production and deliveries may be affected. The competent authority has to be notified when Salmonella has been identified in the clean section of the production line. When Salmonella has been detected in the unclean section and the follow-up sampling indicates only local contamination, a local cleaning/ disinfection on the spot is required. In such cases, the competent authority, when notified, gives guidance to the operator. The overall goal is to keep the unclean section of the establishment free from Salmonella contamination even though it is known that Salmonella is occasionally found in that section. However, permanent Salmonella contamination is not allowed to be established in any part of the plant. When Salmonella is detected in the clean section, production is normally stopped in the production line involved to prevent the spread of Salmonella within and from the plant. Dispatch of feed from the establishment is, in practice, immediately stopped. Under guidance of the competent authority a thorough environmental sampling scheme is then followed. Samples that have to be taken and kept on a routine basis when feed is delivered to customers (dispatch samples) are retrospectively checked for Salmonella for a certain period (specific to every situation). Measures are undertaken to eliminate the contamination. Thorough cleaning and disinfection is always carried out and the concerned production line is not re-started until follow-up environmental sampling has confirmed that the decontamination efforts have been successful. Considerable extra costs could result if feed from other feed mills has to be provided and delivered to the customer/animal holdings of the plant during the halt in production. In some situations, in particular when difficulties in eliminating Salmonella contamination occurs, the costs during this critical period may be considerable when, for example, repetitive cleaning operations are required. Based on previous incidences when Salmonella is detected within a feed mill, the associated extra costs are estimated at 0.1 t per tonne. Measures when Salmonella has been detected in the farm of a customer Measures in the feed mill: According to national legislation, a feed operator has to take action when Salmonella has been detected in the herd of a customer. This includes monitoring by sampling of the production line from which the feed to the customer was delivered. Dispatch samples from feed delivered to the customer are normally checked as well. The costs for the measures described in this point are estimated as: B0.1 t per tonne of feed. However, if Salmonella is detected in the plant, the associated costs are as described above. Measures when delivering feed to herds under quarantine: When Salmonella is isolated in food producing animals the actual herd is put under restrictions aimed at preventing the spread of the infection. Measures are taken to eliminate the infection from the herd (11). The delivery of feed to such farms has to follow certain procedures including cleaning and disinfecting certain parts of the vehicle to prevent the spread of infection to other herds and to the feed mill. 
The extra costs for these procedures are estimated at <0.1 SEK per tonne of feed.

Total extra costs due to combating Salmonella in feed
The total extra costs for the Swedish feed industry to fulfill the ambition that Salmonella should not be transmitted by feed are summarized below and partly also presented in Tables 2 and 3. The costs for the following procedures are specifically estimated as described above.

Cost for achieving Salmonella test-negative high-risk feed materials (mostly rape- or soy meal)
The high protein-rich feed materials of vegetable origin used in Sweden, mainly soy and rapeseed meal, are either crushed in Sweden or originate in other countries. As presented in Table 2, the specific cost for producing such a feed material that is tested for Salmonella contamination, so that it can be used in compound feed, is:
2.0 € per tonne for domestically crushed rapeseed produced according to Swedish legislation for feed production;
3.3 € per tonne for soy meal from a non-Swedish crushing plant operating in accordance with standards laid down in Swedish legislation for feed production;
2.1 € per tonne for protein sources (mainly soy and rapeseed meal) from non-Swedish crushing plants selected for having a relatively good hygiene standard in relation to Salmonella contamination but produced without connection to the demands in Swedish legislation for feed production.

Cost for production of a Salmonella-safe compound feed
The estimated cost for the production of Salmonella-safe compound feed, excluding the corresponding cost for the high protein-rich feed materials as described above, is presented in Table 3. It can be seen that the cost is approximately 1.55 € (1.3–1.8 €) per tonne. An estimate was also made of the total cost for producing Salmonella-safe compound feed. Because different formulae for the inclusion of high protein-rich feed materials in compound feed are applied, it was assumed that the three types of proteins (Table 2) were equally mixed into compound feed to a concentration of 10–30%. It was also assumed that the mean cost for achieving those products testing negative for Salmonella was 2.5 € per tonne. Based on those assumptions, the total cost for producing a Salmonella-safe compound feed is 1.8–2.3 € per tonne.

Cost for control of Salmonella in relation to the commodity price
The cost for control of Salmonella as described above has also been related to the estimated mean market price for the different feed products during 2009/10, as well as during 2012, a year when global feed prices were generally higher than the average prices of previous years. The result is presented in Table 4. It can be seen that at the feed price level of 2010, the mean estimated cost for the control of the high-risk feed materials was approximately 1.0% (0.7–1.2%) of the commodity prices. At the price level of 2012, the cost decreased to 0.7% (0.5–0.8%). For the corresponding calculation for compound feed, based on the assumptions specified in Table 4, the relative cost for the control of Salmonella at a 20% inclusion of soy or rape meal was approximately 0.8% at the price level of 2010 and 0.6% at the higher price level of 2012; the compound feed price is the final price paid by the farmer. By the use of the data presented above and the feed formulations applied to the major animal species in Sweden, the cost at the 2012 price level for receiving Salmonella-safe compound feed can be calculated.
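As a cross-check, the 1.8–2.3 € per tonne total quoted above can be reproduced from the stated components; the way they are combined below is an assumed reading of the calculation, not the authors' own worksheet.

```python
# Sketch of how the 1.8-2.3 € per tonne figure for Salmonella-safe compound feed can be
# reproduced from the components quoted above (assumed combination, for illustration only).
mill_cost = 1.55          # € per tonne, mill-level control (quoted range 1.3-1.8)
protein_control = 2.5     # € per tonne of high-risk protein material, mean control cost
inclusion = (0.10, 0.30)  # assumed inclusion of high-risk protein in compound feed

total = [mill_cost + protein_control * f for f in inclusion]
print(f"total control cost: {total[0]:.1f}-{total[1]:.1f} € per tonne")  # 1.8-2.3
```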
For broilers, with the inclusion of 23% soy meal in the feed, that cost is 0.0021 €/kg, or <1% (0.7%) of the price (0.33 €/kg) of commercially produced feed. The corresponding estimated cost for fattening pigs using 14% rapeseed meal is slightly lower: 0.0018 €/kg, or 0.6% of the feed price (0.30 €/kg). For a dairy cow using approximately 25% feed concentrate (of equal parts of rape and soy meal), that cost is similar to broilers: 0.0018 €/kg, or 0.7% of the feed price (0.32 €/kg). For broilers this means a total cost of approximately 0.008 € per bird when slaughtered at 2 kg and consuming 3.6 kg of feed. By the use of these data, the production cost for the farmer for different food-producing animal species or production systems can be calculated, depending on the proportionate use of compound feed or decontaminated high-risk feed materials (mostly rape and soy meal).

Table 3. Estimated cost for the production of Salmonella-safe compound feed for food-producing animals in Sweden during 2009/10 (cost per tonne of produced feed, without specific costs for high-risk feed materials).

Discussion
This paper presents the cost for all the steps included in the Swedish strategy to control Salmonella contamination of feed. Some of the costs could be estimated in detail, but for others only a rough estimation was possible. The study is of interest because similar data are not readily available, and the results are largely based on rather solid data from the feed industry, with no known incentive to underestimate the cost. This is especially so because there is no national or other economic compensation for the control of Salmonella in feed, which instead is paid for by the producers and included in their feed prices. Interestingly, the estimated costs were relatively similar between different producers. The results are also valuable because long-term documented data, in particular from the poultry and swine industry (8), indicate that the control is effective and has largely eliminated feed as one of the major sources of Salmonella infections in food-producing animals. The total cost was estimated at 1.8–2.3 € per tonne of compound feed, which, based on the 2010 price level for feed material, is approximately 0.8% of the farmer's cost for the compound feed. At the higher global feed prices during 2012, the relative cost decreased to 0.6%. Based on the feed formulations applied to different animal species, the above relative cost for achieving Salmonella-safe compound feed was almost equal for broilers and dairy cows (0.7% at 2012 feed prices) and, due to less use of protein-rich feed, lower (0.6%) for fattening pigs. Control is based on two major steps. The first step aims at preventing Salmonella-contaminated feed material from entering feed mills. Those feed materials found to be of highest risk, in particular non-Swedish feed materials of vegetable origin, mainly soy and rapeseed meal, have to be tested and found negative for Salmonella contamination before being allowed to enter the feed mill and be used as ingredients in compound feed (15). The second step is based on continuous control within the feed mill according to HACCP principles. Out of the total cost per tonne of compound feed, the estimated mean cost for the first step accounted for 25% (0.5 €; 0.2–0.7 €, depending on the protein concentration) compared to 75% (1.5 €) for the second step (i.e., the control within the feed mill).
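The per-species figures quoted above, and the 0.6–0.7% shares restated in the Discussion, can be reproduced by combining the mill-level control cost with the protein-control cost in proportion to the inclusion rate; the combination below is an assumed illustration, not the authors' worksheet.

```python
# Sketch reproducing the broiler and fattening-pig figures quoted above.
mill_cost = 1.5          # € per tonne of compound feed, control within the feed mill
protein_control = 2.5    # € per tonne of high-risk protein material, mean control cost

for name, inclusion, price_per_kg, intake_kg in [
    ("broiler", 0.23, 0.33, 3.6),        # 23% soy meal; 3.6 kg feed per bird
    ("fattening pig", 0.14, 0.30, None), # 14% rapeseed meal
]:
    cost_per_kg = (mill_cost + protein_control * inclusion) / 1000.0  # €/kg of feed
    share = 100.0 * cost_per_kg / price_per_kg
    print(f"{name}: {cost_per_kg:.4f} €/kg ({share:.1f}% of the feed price)")
    if intake_kg is not None:
        print(f"  lifetime cost per animal: {cost_per_kg * intake_kg:.4f} €")
# broiler ~0.0021 €/kg (~0.6-0.7% of the feed price, ~0.008 € per bird);
# fattening pig ~0.0019 €/kg (~0.6%).
```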
The cost (2 € per tonne) for producing a Salmonella-safe high-risk protein-rich feed material (rapeseed meal) is specified for one Swedish crushing plant, and this is the only known data of its kind found in either the scientific or gray literature. It is not known to what extent this cost is specifically added to the final price of rapeseed meal, which also includes other properties with an added value. Another, non-Swedish, crushing plant is known to produce Salmonella-safe soybean meal, but the specific cost for the elimination of Salmonella contamination is not known. Instead, it is estimated that an extra price of 3.3 € per tonne is paid. This product also has other properties of added value, so it is not known to what extent the extra cost for a Salmonella-safe source refers only to the cost of eliminating the Salmonella contamination. This latter plant can only supply part of the demand from the Swedish feed mills, and both crushing plants are the only ones known so far to provide high-risk protein-rich feed material with any kind of guarantee for the absence of Salmonella. This means that, at the EU level, it is currently not possible simply to pay an extra fee for the sourcing of Salmonella-safe feed materials, as seems to be assumed in a recent cost–benefit study (13). Due to the lack of crushing plants that can provide Salmonella-safe feed materials, the feed mills are instead obliged to buy feed materials with an unknown Salmonella status and a high risk of Salmonella contamination. There is therefore a need to have such products tested and, if necessary, decontaminated, at an estimated cost of 2.1 € per tonne. Earlier, up to 30% of consignments of feed materials from crushing plants with an unknown Salmonella status were found Salmonella contaminated upon arrival in Sweden, but the situation has improved, usually to an annual incidence of <10% during recent years (15). Very few data are available on the possible human health impact of that source of contamination, which should also vary with the preventive measures applied in the whole feed chain. In one study from Denmark, it was estimated that 2.1% of domestically acquired human Salmonella infections during 1999–2003 could be attributed to feed-borne serotypes acquired through the consumption of Danish pork and beef, and the dominating source of Salmonella was contaminated imported soybean products (14). However, apart from less intensive sampling of the feed material than in the current study, major human pathogenic serovars (S. Typhimurium, S. Enteritidis) with a special ability to establish themselves in food animal populations were not included in that estimation. The crushing plants in, for example, the continental EU Member States often have four to five times the production capacity of the Swedish crushing plant described in this paper (1). This means that the relative cost for Salmonella control per volume of feed (apart from decontamination of Salmonella-contaminated consignments) would be significantly lower than at the Swedish plant. However, this is not the case for control at feed mills, because the mean size of the Swedish feed mills is large compared to most EU countries (22). The cost described above relates only to the legislation on control of Salmonella in compound feed and associated voluntary measures.
However, in Sweden other legislation contributes to minimizing the risk of contamination of feed at the farm, and when Salmonella infections occur in animals, the feed is generally controlled, including possible spread via compound feed from feed mills. This contributes to ensuring that feed producers remain alert, as does the fact that they pay for the cost of the control as described above. Of fundamental importance for the efficiency of the control measures is that the monitoring of feed materials, as well as the sampling within the HACCP program, is combined with methods to eliminate those Salmonella contaminations that occur. Soy is the protein-rich feed material most frequently used by the EU livestock industry and a significant risk for Salmonella contamination of the food chain. Approximately 97% of soy is imported from third countries and mostly crushed before it reaches the EU (22). Other oilseed meals produced within and outside the EU also have a high risk of being Salmonella contaminated. Therefore, a most effective way to strengthen the ongoing effort to minimize the prevalence of Salmonella on EU farms would be to implement stringent measures for the elimination of Salmonella contamination already at the crushing plants. The current study has shown that this can be done at a cost of around 2 € per tonne of feed material. Another important area of focus for the control of Salmonella relates to the production of compound feed at the feed mills, which can be done at a cost of 1.5 € per tonne. The total cost for the production of Salmonella-safe feed was estimated at 0.5–0.7% of the farmer's price for compound feed during 2012. This can be considered a fairly low price, particularly as cost has sometimes been used as an argument against implementing more stringent measures for preventing Salmonella contamination of feed. It should also be emphasized that the estimated cost for achieving a Salmonella-safe compound feed does not include the capital costs previously laid down for the technical improvement of feed mills, or the management skills of their staff, which in Sweden have been built up over decades. However, it is not known to what extent some of those costs are included in the measures required to produce commercial feed according to EU Regulation (EC) No 183/2005. This study indicates that a Salmonella-negative standard for animal feed, as recommended by the FDA in 1991 and by Crump et al. (5) and as successfully implemented for decades in Sweden, can, due to its relatively limited cost, be considered a realistic approach. From a One Health perspective, it is thus important that efforts are made to prevent Salmonella-contaminated animal feed materials and animal feed, and thus also possibly associated antibiotic resistance genes, from moving between and within countries and from resulting in a widespread dissemination of the pathogen to animal herds and their environment, with the potential for subsequent contamination of a range of human food products (6).

The authors thank the feed companies involved for providing data on the cost for control of Salmonella in the enterprises of their concern. The authors also thank R. Erixon, Kalmar Lantmän, Sweden (http://www.kalmarlantman.se/sitebase/) for providing estimated data on feed prices; special thanks are due to L.M.
Widell, Analysis Unit, Swedish Board of Agriculture, SE-551 82 Jönköping, Sweden, for checking the approach applied for the economic estimations, and Dr Simon More, UCD Centre for Veterinary Epidemiology and Risk Analysis, University College Dublin, Belfield, Dublin 4, Ireland, for valuable critical review and advice on the language.
Peripheral chemosensitivity is not blunted during 2 h of thermoneutral head out water immersion in healthy men and women

Abstract
Carbon dioxide (CO2) retention occurs during water immersion, but it is not known if peripheral chemosensitivity is altered during water immersion, which could contribute to CO2 retention. We tested the hypothesis that peripheral chemosensitivity to hypercapnia and hypoxia is blunted during 2 h of thermoneutral head out water immersion (HOWI) in healthy young adults. Peripheral chemosensitivity was assessed by the ventilatory, heart rate, and blood pressure responses to hypercapnia and hypoxia at baseline, 10, 60, 120 min, and post HOWI and a time-control visit (control). Subjects inhaled 1 breath of 13% CO2, 21% O2, and 66% N2 to test peripheral chemosensitivity to hypercapnia and 2–6 breaths of 100% N2 to test peripheral chemosensitivity to hypoxia. Each gas was administered four separate times at each time point. Partial pressure of end-tidal CO2 (PETCO2), arterial oxygen saturation (SpO2), ventilation, heart rate, and blood pressure were recorded continuously. Ventilation was higher during HOWI versus control at post (P = 0.037). PETCO2 was higher during HOWI versus control at 10 min (46 ± 2 vs. 44 ± 2 mmHg), 60 min (46 ± 2 vs. 44 ± 2 mmHg), and 120 min (46 ± 3 vs. 43 ± 3 mmHg) (all P < 0.001). Ventilatory (P = 0.898), heart rate (P = 0.760), and blood pressure (P = 0.092) responses to hypercapnia were not different during HOWI versus control at any time point. Ventilatory (P = 0.714), heart rate (P = 0.258), and blood pressure (P = 0.051) responses to hypoxia were not different during HOWI versus control at any time point. These data indicate that CO2 retention occurs during thermoneutral HOWI despite no changes in peripheral chemosensitivity.

The chemical control of ventilation in humans is tightly regulated by the central and peripheral chemoreceptors which detect changes in arterial blood gases and pH (Kara et al. 2003). Chang and Lundgren (1995) have shown that central chemosensitivity is not altered during 10 min of water immersion, which indicates that the central chemoreceptors are not affected by brief thermoneutral water immersion. The peripheral chemoreceptors, comprised of the aortic and carotid bodies, are the primary oxygen sensors in the body (Kara et al. 2003; Prabhakar and Peng 2004). In addition to oxygen sensing (Kara et al. 2003; Prabhakar and Peng 2004), the peripheral chemoreceptors are activated when exposed to acute hypercapnia and contribute to the acute hypercapnic ventilatory response (Kara et al. 2003). In fact, the peripheral chemoreceptors account for approximately 35% of the increase in ventilation during acute hypercapnia (Smith et al. 2006; Wilson and Teppema 2016). Therefore, a reduction in peripheral chemosensitivity could contribute to CO2 retention during water immersion. A possible mechanism which could contribute to the reduction in the chemical control of ventilation during water immersion is the interaction between the arterial baroreceptors and the peripheral chemoreceptors (Heistad et al. 1975; Koehle et al. 2010). Peripheral chemosensitivity is blunted during baroreceptor loading (Heistad et al. 1975); therefore, central hypervolemia during water immersion (Arborelius et al. 1972; Pendergast et al. 2015) could blunt peripheral chemosensitivity and play a role in CO2 retention. The purpose of our study is to test the hypothesis that peripheral chemosensitivity is blunted during HOWI in humans.
Subjects
Ten subjects (age: 23 ± 2 years, BMI: 26 ± 2 kg/m², 3 women) participated in four visits: a screening visit, a familiarization visit, and two randomized experimental visits. Subjects self-reported to be active, nonsmokers, not taking medications, and free from any known cardiovascular, metabolic, neurological, or psychological disease. Women were not pregnant, confirmed via a urine pregnancy test prior to the familiarization and experimental visits, and were tested during the first 10 days following self-identified menstruation to control for menstrual cycle hormones (Minson et al. 2000). Each subject was informed of the experimental procedures and possible risks before giving informed, written consent. During the familiarization visit, all subjects were acquainted with the breathing apparatus (i.e., the mouthpiece and pneumatic switching valve) and gases that would be used during the experimental visits. The study was approved by the Institutional Review Board at the University at Buffalo, and performed in accordance with the standards set forth by the latest revision of the Declaration of Helsinki.

Instrumentation and measurements
Height and weight were measured with a stadiometer and scale (Sartorius Corp., Bohemia, NY). Urine-specific gravity was measured using a refractometer (Atago USA, Inc., Bellevue, WA). The partial pressure of end-tidal carbon dioxide (PETCO2) was measured using a capnograph (Nonin Medical, Inc., Plymouth). Since PETCO2 reflects PaCO2 throughout a wide range of physiological dead space (McSwain et al. 2010), including water immersion (Salzano et al. 1984; Mummery et al. 2003; Cherry et al. 2009), PETCO2 was used as a marker of PaCO2 in our study. Arterial oxygen saturation (SpO2) was measured using a finger pulse oximeter (Nonin Medical, Inc.) and beat-to-beat blood pressure was measured via the Penaz method (ccNexfin Bmeye NA, St. Louis, MO) on a hand that was supported above the water during HOWI. Blood pressure was corrected to heart level using a height correction sensor (ccNexfin Bmeye NA). Heart rate was measured continuously from a three-lead ECG (DA100C; Biopac Systems, Inc., Goleta, CA). Inspired and expired ventilation were measured continuously using nonheated and heated pneumotachometers, respectively (Hans Rudolph, Inc., Shawnee, KS), that were attached to a two-way nonrebreathing valve and mouthpiece (Hans Rudolph, Inc.). Hemodynamic data were obtained at 500 Hz and ventilation data were captured at 62 Hz by a data acquisition system (Biopac MP 150, Goleta, CA) and stored on a personal computer for offline analyses. Minute ventilation, tidal volume, and respiratory rate were determined using the breath-by-breath respiratory analysis program of the data acquisition system (AcqKnowledge 4.2, Goleta, CA) by a blinded researcher. Aberrant breaths (e.g., sigh, breath hold, etc.) were excluded and ventilation data are presented in BTPS. The rate of CO2 production (VCO2) was calculated from the mean expired CO2 partial pressure (i.e., derived from the CO2 waveform) divided by barometric pressure minus the water vapor pressure of the body (Siobal et al. 2013). Alveolar ventilation was calculated as the product of VCO2 and 863 divided by PETCO2 (West 2012). Dead space ventilation was calculated as minute ventilation minus alveolar ventilation. Stroke volume was determined via the arterial pressure waveform using Modelflow (ccNexfin Bmeye NA) and cardiac output was calculated as the product of heart rate and stroke volume.
Total peripheral resistance was calculated as mean arterial pressure divided by cardiac output. The ratio of alveolar ventilation to cardiac output was calculated as an index of the ratio of alveolar ventilation to pulmonary perfusion (Derion et al. 1992; Levitzky 2013).

Experimental approach
Subjects reported to the laboratory for two randomized experimental visits: (1) a HOWI visit and (2) a time-control dry visit (control). Subjects arrived at the laboratory having refrained from strenuous exercise, alcohol, and caffeine for 12 h, and food for 2 h, for both visits. Subjects also arrived at the laboratory euhydrated for both HOWI and control visits (urine-specific gravity: 1.012 ± 0.007 and 1.015 ± 0.006, respectively). Subjects assumed a seated position for instrumentation in a temperature-controlled laboratory (25 ± 2°C, 49 ± 8% relative humidity). Following at least 10 min of seated rest, baseline peripheral chemosensitivity to hypercapnia and hypoxia was measured. It has been suggested that peripheral chemosensitivity to both acute hypercapnia and hypoxia should be used in order to completely assess the peripheral chemoreflex (Chua and Coats 1995). Upon the completion of baseline measurements, the subjects either entered a pool (HOWI) or continued seated rest (control) for 2 h. HOWI consisted of seated rest in thermoneutral water (35.1 ± 0.2°C) up to the suprasternal notch. Over the next 2 h, peripheral chemosensitivity to hypercapnia and hypoxia was measured at 10, 60, and 120 min. Then, subjects exited the pool (HOWI) or remained seated (control), and peripheral chemosensitivity to hypercapnia and hypoxia was measured after 10 min of seated rest (i.e., post). During the peripheral chemosensitivity to hypercapnia and hypoxia tests, subjects were encouraged to breathe spontaneously as they viewed a nonstimulating documentary.

Peripheral chemosensitivity to hypercapnia
Peripheral chemosensitivity to hypercapnia was measured via four carbon dioxide administrations (i.e., 13% CO2, 21% O2, and 66% N2) separated by 3 min of room air breathing. Briefly, using a pneumatic switching valve (Hans Rudolph, Inc.), subjects were rapidly switched between breathing room air and carbon dioxide, and back to room air. All four carbon dioxide administrations consisted of one breath each. Peripheral chemosensitivity to hypercapnia was calculated by plotting the mean of the three highest consecutive ventilations (e.g., individual breaths extrapolated to minute values) versus the maximum PETCO2 value within 2 min following each carbon dioxide administration (Chua and Coats 1995; Edelman et al. 1973; Pfoh et al. 2016). Furthermore, recent findings indicate that activation of the peripheral chemoreceptors also modulates hemodynamics (Stickland et al. 2007, 2008; Niewinski et al. 2014a; Limberg et al. 2015). Therefore, peripheral chemosensitivity to hypercapnia was also calculated by plotting the peak heart rate and the peak mean arterial pressure versus the maximum PETCO2 value within 2 min following each carbon dioxide administration, using methods similar to those that have been used during acute hypoxia (Niewinski et al. 2013, 2014a; Limberg et al. 2016). Peripheral chemosensitivity to hypercapnia data are reported as the slope of the linear regression line for the ventilatory, heart rate, and blood pressure responses to hypercapnia. This test of peripheral chemosensitivity is reliable and reproducible within subjects over 1 month (Chua and Coats 1995).
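For concreteness, the slope calculation just described can be sketched as follows; the numbers and the use of a simple least-squares fit are illustrative assumptions, not the authors' analysis code.

```python
# Minimal sketch of the chemosensitivity slope: for each of the four CO2 administrations,
# the mean of the three highest consecutive breaths (extrapolated to minute ventilation)
# is paired with the maximum PETCO2 within 2 min, and chemosensitivity is taken as the
# slope of the linear regression. Values below are illustrative only.
import numpy as np

# hypothetical data for one subject at one time point (four administrations)
max_petco2 = np.array([52.0, 53.5, 51.0, 54.0])        # mmHg
peak_ventilation = np.array([14.0, 16.5, 12.5, 17.5])  # L/min (mean of 3 highest breaths)

slope, intercept = np.polyfit(max_petco2, peak_ventilation, 1)
print(f"ventilatory chemosensitivity: {slope:.2f} L/min per mmHg")

# the heart rate and mean arterial pressure responses are computed the same way,
# replacing peak_ventilation with peak HR (beats/min) or peak MAP (mmHg).
```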
Peripheral chemosensitivity to hypoxia
Peripheral chemosensitivity to hypoxia was measured via four nitrogen administrations (i.e., 100% N2) separated by 3 min of room air breathing. Briefly, using a pneumatic switching valve (Hans Rudolph, Inc.), subjects were rapidly switched between breathing room air and nitrogen, and back to room air. The first two nitrogen administrations consisted of two and four breaths, respectively, for all subjects. The number of nitrogen breaths for each of the remaining two nitrogen administrations was determined based on the SpO2 values achieved during the first two nitrogen administrations, and kept consistent within a subject during each peripheral chemosensitivity test for both experimental visits. Our goal was to achieve a range of nadir SpO2 values (80–95%) following the nitrogen administrations. Peripheral chemosensitivity to hypoxia was calculated by plotting the mean of the three highest consecutive ventilations (e.g., individual breaths extrapolated to minute values) versus the nadir SpO2 value within 2 min following each nitrogen administration (Edelman et al. 1973; Weil and Zwillich 1976; Chua and Coats 1995; Niewinski et al. 2013, 2014a; Limberg et al. 2015; Pfoh et al. 2016). Peripheral chemosensitivity to hypoxia was also calculated by plotting the peak heart rate and the peak mean arterial pressure versus the nadir SpO2 value within 2 min following each nitrogen administration (Edelman et al. 1973; Chua and Coats 1995; Niewinski et al. 2013, 2014a; Limberg et al. 2016). Peripheral chemosensitivity to hypoxia data are reported as the absolute value of the slope of the linear regression line for the ventilatory, heart rate, and blood pressure responses to hypoxia. This test of peripheral chemosensitivity was chosen to avoid the ventilatory decline that is associated with longer hypoxic durations (Powell et al. 1998; Steinback and Poulin 2007; Pfoh et al. 2016). This test of peripheral chemosensitivity is also reliable and reproducible within subjects over 1 month (Chua and Coats 1995).

Data and statistical analyses
Resting data were determined using the mean values from the last 2 min of each seated rest period, prior to the tests of peripheral chemosensitivity. Data were assessed for approximation to a normal distribution and sphericity, and no corrections were necessary. Outliers were identified and removed using a nonlinear regression analysis with the ROUT method in Prism (Motulsky and Brown 2006). The Q value, or false discovery rate, was set conservatively (i.e., 0.1%) so that only definitive outliers were removed, and the n is reported for each result. Objectively determined outliers were removed from the statistical analyses for the ventilatory responses to hypercapnia and hypoxia (n = 2) and for the blood pressure responses to hypercapnia and hypoxia (n = 1). All data were analyzed using a two-way repeated measures ANOVA. If a significant interaction or main effect was found, the Holm-Sidak multiple comparisons post hoc test was used to determine where differences existed. Data were compared to baseline within each visit and between visits at five time points (i.e., baseline, 10, 60, 120 min, and post). Data were analyzed using Prism software (Version 6; GraphPad Software Inc., La Jolla, CA). Data are reported as means ± SD and exact P-values are reported where possible.
The alveolar ventilation to perfusion ratio (Fig. 3F) was not statistically different during HOWI versus control at any time point (condition main effect: P = 0.820). Moreover, the alveolar ventilation to perfusion ratio was not statistically different versus baseline at any time point in either condition (time main effect: P = 0.456).

Peripheral chemosensitivity to hypercapnia
Ventilatory responses to hypercapnia (Fig. 4A) were not statistically different during HOWI versus control at any time point (condition main effect: P = 0.898). Moreover, ventilatory responses to hypercapnia were not statistically different versus baseline at any time point in either condition (time main effect: P = 0.951). Heart rate responses to hypercapnia (Fig. 4B) were not statistically different during HOWI versus control at any time point (condition main effect: P = 0.760). Moreover, heart rate responses to hypercapnia were not statistically different versus baseline at any time point in either condition (time main effect: P = 0.339). Mean arterial pressure responses to hypercapnia (Fig. 4C) were not statistically different during HOWI versus control at any time point (condition main effect: P = 0.092). However, mean arterial pressure responses to hypercapnia were higher at 120 min (P = 0.049) and post (P = 0.043) versus baseline during control. Maximum PETCO2 values during the peripheral chemosensitivity to hypercapnia tests are presented in Table 1. Maximum PETCO2 was not statistically different during HOWI versus control at any time point (condition main effect: P = 0.398). Maximum PETCO2 was not statistically different versus baseline at any time point in either condition (time main effect: P = 0.789).

Peripheral chemosensitivity to hypoxia
Ventilatory responses to hypoxia (Fig. 5A) were not statistically different during HOWI versus control at any time point (condition main effect: P = 0.714). Moreover, ventilatory responses to hypoxia were not statistically different versus baseline at any time point in either condition (time main effect: P = 0.099). Heart rate responses to hypoxia (Fig. 5B) were not statistically different during HOWI versus control at any time point (condition main effect: P = 0.258). Moreover, heart rate responses to hypoxia were not statistically different versus baseline at any time point in either condition (time main effect: P = 0.235). Mean arterial pressure responses to hypoxia (Fig. 5C) were not statistically different during HOWI versus control at any time point (condition main effect: P = 0.051). Moreover, mean arterial pressure responses to hypoxia were not statistically different versus baseline at any time point in either condition (time main effect: P = 0.246). Nadir SpO2 values during the peripheral chemosensitivity to hypoxia tests are presented in Table 1. Nadir SpO2 was not statistically different during HOWI versus control at baseline (P = 0.367), 10 min (P = 0.440), or post (P = 0.340), but was lower during HOWI versus control at 60 min (P = 0.010) and 120 min (P = 0.042). Nadir SpO2 was not statistically different versus baseline at any time point in either condition (time main effect: P = 0.135).

Discussion
Our study demonstrates that PETCO2 increases during 2 h of thermoneutral HOWI in humans without a change in ventilation or peripheral chemosensitivity (Figs. 1, 4, and 5). Contrary to our hypothesis, peripheral chemosensitivity to hypercapnia and hypoxia was not blunted during HOWI (Figs. 4 and 5).
Collectively, these data indicate that activation of the peripheral chemoreceptors by a brief hypercapnic or hypoxic stimulus is not altered during HOWI. Consequently, our data do not support a role for the peripheral chemoreceptors in the retention of CO2 during thermoneutral HOWI in humans.

Ventilation
Similar to previous findings (Jarrett 1966; Salzano et al. 1970, 1984; Kerem et al. 1995; Cherry et al. 2009; Miyamoto et al. 2014), we observed a significant increase in PETCO2 during HOWI (Fig. 1A). It has been shown that CO2 retention occurs during water immersion at depth due to a reduction in alveolar ventilation that is caused by increased dead space (Salzano et al. 1984; Mummery et al. 2003). However, our subjects were studied at the surface (i.e., 1 ATA) and therefore the increase in dead space in our subjects was most likely lower compared to subjects that have been studied at depth (Salzano et al. 1984; Hickey et al. 1987; Mummery et al. 2003; Cherry et al. 2009). The breath-by-breath ventilatory data from our study indicate that ventilation was not altered throughout HOWI. In addition to an increase in dead space, it has been suggested that an increase in PETCO2 may be due to an increase in CO2 redistribution and storage throughout body tissues (Farhi and Rahn 1960; Matalon and Farhi 1979; Serrador et al. 1998). It is unclear if CO2 redistribution and storage occurred during our study. Recent evidence indicates that thermoneutral HOWI shifts the respiratory operating point (i.e., PETCO2 vs. minute ventilation) to the right to increase the likelihood of CO2 retention (Miyamoto et al. 2014). Our data agree with the idea that thermoneutral HOWI shifts the respiratory operating point, as we observed an increase in PETCO2 without a change in ventilation. Previous findings indicate that minute ventilation and alveolar ventilation are reduced during water immersion, primarily as a function of increased breathing gas density (Salzano et al. 1984; Cherry et al. 2009). It is also thought that central hypervolemia and increased work of breathing during water immersion contribute to the reductions in minute and alveolar ventilation (Lanphier and Bookspan 1999; Lundgren and Miller 1999). Our data (Fig. 2A and B) do not confirm the reductions in minute and alveolar ventilation. However, we did observe an increase in dead space ventilation at 10 min of HOWI, which is similar to other investigations (Mummery et al. 2003; Cherry et al. 2009). Thus, the CO2 retention that we observed during water immersion might be related to the increased dead space and not a reduction in alveolar ventilation. This idea warrants future investigation. Changes in breathing pattern might also contribute to the increased CO2 retention during water immersion. We observed decreases in tidal volume and increases in respiratory rate throughout HOWI compared to baseline (Fig. 2D and E). Water immersion has been shown to increase the work of breathing (Otis et al. 1950; Collett and Engel 1986) but previous studies suggest that this is not directly related to CO2 retention (Thalmann et al. 1979; Hickey et al. 1987; Norfleet et al. 1987). Thus, the changes in breathing pattern that we observed, possibly due to the enhanced negative pressure breathing (Pendergast and Lundgren 2009), could be responsible for CO2 retention during HOWI.
However, it is unknown if the increased work of breathing is mitigated via alterations in breathing pattern. A reduced alveolar ventilation, which is proposed to be one of the main causes of CO2 retention (Salzano et al. 1984; Mummery et al. 2003), is thought to occur in place of increasing the work of breathing to prevent hypercapnia during water immersion (Lundgren and Miller 1999). On the basis of our alveolar ventilation and dead space data, we speculate that HOWI may induce alterations in breathing pattern to minimize the work of breathing, which subsequently leads to CO2 retention.

Hemodynamics
The prevailing theory is that mean arterial pressure initially increases during water immersion due to a cephalad fluid shift, which subsequently causes diuresis and a return of blood pressure to baseline values after continued water immersion (Arborelius et al. 1972; Pendergast et al. 2015). However, some investigators have also found that mean arterial pressure does not change (Bonde-Petersen et al. 1992; Sramek et al. 2000; Watenpaugh et al. 2000; Pendergast et al. 2015) or slightly decreases (Craig and Dvorak 1966). We observed a decrease in mean arterial pressure at 10 min and 60 min of HOWI compared to baseline (Fig. 3A), which could be explained by a decrease in total peripheral resistance (Fig. 3C) (Arborelius et al. 1972; Bonde-Petersen et al. 1992; Pendergast et al. 2015) and/or diuresis without a change in cardiac output (Fig. 3B). The water temperature we used (~35°C; Pendergast et al. 2015) may have slightly heated the integument because of the thermal gradient between the water and the skin (~33–34°C; Bierman 1936), which may have increased intersubject variability in total peripheral resistance. It is thought that inequality of the alveolar ventilation to perfusion ratio (i.e., <1) occurs during diving as a function of the reduced alveolar ventilation and the increased blood flow. However, previous findings indicate that the alveolar ventilation to perfusion ratio is unaffected during thermoneutral HOWI (Derion et al. 1992). Our data agree with the findings of Derion et al., and can be explained by the fact that we did not observe a reduced alveolar ventilation and/or an increased cardiac output during water immersion. Thus, we suggest that alveolar ventilation to perfusion mismatching does not occur during water immersion and does not contribute to the explanation of CO2 retention.

Peripheral chemosensitivity to hypercapnia
Our data indicate that ventilatory and hemodynamic responses to acute hypercapnia are not blunted during 2 h of thermoneutral HOWI (Fig. 4A). Therefore, it appears as though CO2 retention during HOWI is not due to a reduction in the sensitivity of the peripheral chemoreceptors to a brief hypercapnic stimulus. Furthermore, there is an interaction between the central and peripheral chemoreceptors such that the ventilatory response to central chemoreceptor stimulation is reliant upon activation of the peripheral chemoreceptors (Rodman et al. 2001; Smith et al. 2006, 2015; Blain et al. 2010). Based on our finding that the ventilatory response to hypercapnia is not blunted during 2 h of thermoneutral HOWI, it is likely that central chemosensitivity is also not changed. However, it is not known if central chemosensitivity is altered beyond 10 min of thermoneutral HOWI (Chang and Lundgren 1995).
Peripheral chemosensitivity to hypoxia
Similar to the peripheral chemosensitivity to hypercapnia, we found that the ventilatory and hemodynamic responses to acute hypoxia are not blunted during HOWI (Fig. 5A). In support of our findings, the use of lower body positive pressure to increase central blood volume does not alter the ventilatory response to hypoxia (Koehle et al. 2010). However, Heistad et al. (1975) demonstrated that baroreflex loading lowers the ventilatory response to peripheral chemoreceptor activation. Thermoneutral HOWI induces central hypervolemia of ~1 L (Arborelius et al. 1972), which should be sufficient to load the arterial baroreceptors (Pendergast et al. 2015). However, we did not observe an increase in mean arterial pressure during HOWI. Therefore, we might not have sufficiently loaded the baroreceptors to cause a decrease in peripheral chemosensitivity during HOWI (Heistad et al. 1975). It is currently not known if further activation of the sympathetic nervous system modulates peripheral chemosensitivity during HOWI, as circulating catecholamines have been shown to be important modulators of peripheral chemosensitivity (Prabhakar and Peng 2004; Stickland et al. 2007, 2008; Niewinski et al. 2014b) and there is evidence that circulating catecholamines are lower during thermoneutral HOWI (Norsk et al. 1990; Stadeager et al. 1992).

Perspectives
Although the degree of CO2 retention induced by 2 h of resting thermoneutral HOWI is not large enough to produce CO2 narcosis, CO2 retention merits formal investigation because of the likelihood of CO2 narcosis during diving (Warkander et al. 1990; Lanphier and Bookspan 1999). Our data indicate that peripheral chemosensitivity is not changed, and it does not appear that the peripheral chemoreceptors contribute to CO2 retention during 2 h of thermoneutral HOWI. Moreover, Chang and Lundgren have previously shown that central chemosensitivity is not altered during 10 min of thermoneutral HOWI and thus most likely does not contribute to CO2 retention. However, Cherry and colleagues have shown that CO2 retention occurs in a graded response to multiple factors, including increased gas density and breathing resistance, as well as minor factors such as baseline central chemosensitivity and baseline aerobic fitness (i.e., maximal oxygen consumption). Furthermore, they also showed that greater decreases in ventilation lead to greater CO2 retention. However, we showed that CO2 retention may occur independently of any changes in ventilation. Thus, it is important to further evaluate other possible mechanisms that may contribute to the degree of CO2 retention during HOWI (e.g., central chemosensitivity, hyperoxia, breathing resistance, immersion depth, and oxygen consumption).

Considerations
Our study has several limitations. First, the tests of peripheral chemosensitivity were not randomized. Throughout the protocol, subjects always experienced four nitrogen administrations followed by four carbon dioxide administrations. However, it has previously been shown that repetitive hypoxic administrations do not induce long-term facilitation of ventilation in humans (McEvoy et al. 1996; Powell et al. 1998). In spite of our efforts to blind subjects to the gas administrations (i.e., timing and content), they were most likely aware of when they inhaled the hypercapnic gas due to its acidic taste, and subjects were able to see and/or hear the pneumatic switching valve.
Consequently, it is possible that subjects altered their ventilation upon administration of the hypoxic or hypercapnic gases. However, we believe that this effect was minimized by familiarizing the subjects with the gases and the switching valve prior to the experimental visits. Peripheral chemosensitivity to hypercapnia was assessed using only 1 breath of hypercapnic gas during each administration. Thus, our hypercapnic stimulus (i.e., maximum PETCO2) was similar following each gas administration and we did not obtain a range of maximum PETCO2 values, in contrast to the range of nadir SpO2 values obtained during the hypoxic administrations. It is unclear if ventilatory responses to acute hypercapnia are linear throughout a wide range of maximum PETCO2. The nadir SpO2 values during the tests of peripheral chemosensitivity to hypoxia indicate that the hypoxic stimulus was greater at 60 min and 120 min of HOWI versus control (Table 1). However, our calculation of peripheral chemosensitivity is based on a linear relationship between SpO2 and minute ventilation, which is linear until SpO2 falls below 70% (Chua and Coats 1995). Finally, CO2 retention occurred during HOWI. Therefore, during the HOWI visit, the tests of peripheral chemosensitivity took place against a mild hypercapnic background, which may have activated the central chemoreceptors and potentially masked changes in peripheral chemosensitivity (Somers et al. 1989; Smith et al. 2006; Blain et al. 2010). However, because ventilation was unchanged throughout HOWI, we speculate that this did not contribute to our findings.
The physical basis of osmosis

It is surprising that osmosis, a phenomenon so central to biology, has been cloaked in misunderstanding for so long. The authors show that the most plausible account for what drives water fluxes is one put forward by Peter Debye in 1923, where the repulsion of solute molecules from the semipermeable membrane generates a pressure drop, which draws water from a chamber with low solute concentration to one that is high.

Introduction
Osmosis is one of the most powerful forces that organisms must counteract to survive. An index of its importance is that animal cells, of all kinds, spend about a quarter of their energy resisting the osmotic challenge induced by the presence of impermeant molecules in cells (i.e., the Donnan effect, Appendix 1; Rolfe and Brown, 1997; Kay, 2017). An unchecked Donnan effect would lead to a continuous influx of water until the cell bursts. The need to maintain osmotic balance is unrelenting, interrupted neither by sleep nor hibernation. Furthermore, osmosis is quite literally at the root of plant physiology (Niklas and Spatz, 2012). The phenomenological thermodynamics of osmosis has long been clear, at least for osmotic equilibrium. van 't Hoff's equation for the equilibrium pressure difference can be derived by equating the chemical potentials of the water in the two compartments (Dill and Bromberg, 2003; Phillips et al., 2012) separated by a semipermeable membrane, but this thermodynamic derivation provides no insight into the molecular mechanisms that generate the pressure difference. Indeed, the molecular basis of osmosis continues to be widely mischaracterized and hence misunderstood, although a consistent mechanistic understanding was presented 100 yr ago (Debye, 1923b). In this paper, we will show why a molecular basis for osmosis that is most often given in biology textbooks is invalid. This misconception consists in the belief that the osmotic water flux is driven by a gradient in water concentration across the membrane. We will show how the osmotic and diffusive fluxes of water can be separately measured across a semipermeable membrane. This can then be used to demonstrate that diffusion alone cannot account for the osmotic flux across membranes with aqueous pores. We will then show how a physical mechanism that was first presented by Peter J.W. Debye in 1923 can generate a macroscopic pressure and provides the most plausible account of osmosis. We refer to it as the "Debye model." Debye was perhaps the first to recognize that osmosis arises from the mechanical interaction of an impermeant solute with a semipermeable membrane but does not depend on the precise chemical nature of the solute or the solvent. We believe that the Debye model has failed to take hold in biology for several reasons, inter alia, a lack of understanding of the physical argument, its requirement for mathematical explication, and the availability of other simple, seemingly plausible, but flawed arguments. In addition, textbooks, besides omitting the Debye model, have not raised any inconsistencies in the conventional approach. There has hence seemed little need to question what at first blush seems a simple phenomenon.
There have been several attempts primarily directed at biologists to set the record straight on the physical basis of osmosis (Stein, 1966; Kramer and Myers, 2012), as well as accounts of the Debye model in journals (Manning, 1968; Oster and Peskin, 1992; Borg, 2003 Preprint; Marbach and Bocquet, 2019; Song et al., 2021) and textbooks (Villars and Benedek, 1974; Weiss, 1996; Baumgarten and Feher, 2011; Nelson, 2014), but despite these efforts, misconceptions have persisted. The apparent simplicity of osmosis may have masked what is at bottom a rather subtle phenomenon with enormous implications for biology (Dick, 1966; House, 1974; Andersen, 2015). It is, we think, worth readdressing the physical basis of osmosis because it may open new ways of looking at water and solute transport that have remained hidden because of flawed beliefs, and it is important to ensure that our understanding is firmly rooted in well-established physical principles. The osmotic flux of water is important in several biological disciplines; indeed, it is a challenge to find one where it is not. However, different branches of science have developed unique terminologies, which may confuse someone familiar with the terms of one field in reading the literature of another. The unified view and terminology presented here may help to bring consilience to the study of osmosis. We first provide a review of the basic empirical information about osmosis, including a discussion of some misconceptions. Then, we give an account of the Debye model, both as presented by Debye himself to derive van 't Hoff's law for osmotic equilibrium, and as extended to apply to osmotic flow (Manning, 1968). The seminal contributions of the Norwegian physicist Lars Vegard are integrated into this account (Vegard, 1908). Finally, we discuss water flow across biological membranes in the context of Debye's model.

The rudiments of osmosis and common misconceptions
To illustrate the process of osmosis, we will consider a semipermeable membrane, namely, one that is permeable to water but completely impermeable to solute molecules, separating two solutions. We will restrict our discussion to water, but it also applies to any other solvent. If the osmolarities (i.e., the total concentrations of solutes in moles per unit volume) of the solutions differ, water flows from the solution with the lower osmolarity to that with the higher osmolarity. In the situation diagrammed in Fig. 1, the movement of water can be prevented if the piston exerts excess pressure on the solution with higher osmolarity equal, if the solutions are dilute, to RTΔc_s, where Δc_s is the osmolarity difference. This experimental observation is encapsulated by van 't Hoff's equation,

ΔP = RTΔc_s,    (1)

where ΔP is the pressure difference under no-flow, equilibrium conditions between two solution chambers separated by a semipermeable membrane. The definitions of symbols in the equations can be found in Table 1. The term RTc_s in a free-standing solution with solute concentration c_s is often referred to as the "osmotic pressure" of the solution. However, this imprecision is the source of some confusion since an actual pressure difference can only arise between two solutions with different osmolarities separated by a semipermeable membrane. It is worth emphasizing that osmotic pressure is not a physical property of a free-standing aqueous solution. We will lay out our argument in terms of the osmolarities of the solutions.
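To get a feel for the magnitudes implied by Eq. 1, the following short calculation evaluates van 't Hoff's law for a 100 mOsm difference at body temperature; the numbers are illustrative only.

```python
# Illustrative evaluation of van 't Hoff's law (Eq. 1): a 100 mOsm difference across a
# semipermeable membrane at body temperature. Values are examples, not data from the text.
R = 8.314          # J / (mol K)
T = 310.0          # K
delta_c = 100.0    # mol / m^3  (i.e., 100 mOsm/L expressed in SI units)

delta_P = R * T * delta_c                                  # Pa
print(f"{delta_P:.3e} Pa = {delta_P / 101325:.2f} atm")    # ~2.6e5 Pa, ~2.5 atm
```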
However, to understand the osmotic flux of water in cells, it is important to consider that macromolecules in both the cytoplasm and extracellular solutions may exclude water. The osmotically active solute concentration within a cell is determined by the number of moles of solute per mass of freely exchangeable water molecules, namely, the osmolality (Boron and Boulpaep, 2016). For dilute solutions, which we are considering, the osmolarity and osmolality are essentially identical. Our objective is now to understand what generates such a pressure difference across a semipermeable membrane separating solutions with different osmolarities.

Figure 1. Classical demonstration of osmosis. (a) A U-tube with a semipermeable membrane separating pure water on the left from an aqueous solution with an impermeant solute of concentration c_s on the right. (b) With time, water will move from left to right, elevating the column of solution on the right, until its gravitational weight stops the flow. (c) Alternatively, the flow of water can be prevented if a piston applies a pressure equal to RTc_s (in the dilute regime).

To begin our analysis, we review first the hydraulic flow of water in response to a hydrostatic pressure difference and relate this motion to that induced by a difference in osmolarity. We consider a membrane with pure water on both sides when a transmembrane hydrostatic pressure difference ΔP is imposed (for example with a piston). The volume flux of water per unit area of a membrane is given by the empirical relationship (Weiss, 1996)

Φ_V = L_p ΔP,    (2)

with the water flux being directed to the side with lower pressure, and L_p is the hydraulic permeability. The value of L_p depends on the specific composition and structure of the membrane that allows water to move across it. Eq. 2 is Darcy's law, which can be derived from the Navier-Stokes equation for the convective flow of a liquid. The volume water flux across a semipermeable membrane subject to both a hydrostatic pressure difference and a difference in osmolarity can be derived by combining Eqs. 1 and 2,

Φ_V = L_p (ΔP − RTΔc_s).    (3)

Eq. 3 has a long history and has been proposed by many scientists in different fields, sometimes only in words. It is sometimes called Starling's equation in physiology (Starling, 1896; Blaustein et al., 2019), and for dilute solutions with impermeable solutes, it is part of the Kedem and Katchalsky (1958) equations, but it could without exaggeration be called the "Fundamental Law of Osmosis." A remarkable feature of Eq. 3 is that two physically distinct driving forces, an imposed hydrostatic pressure difference ΔP and an osmolarity difference RTΔc_s, produce the same flux of water. The connection between force and flow is given by the same coefficient L_p in both cases. The implication for the underlying physical mechanisms of pressure and osmotic flow is that these mechanisms must be one and the same. Note also that van 't Hoff's law at equilibrium is recovered from the Fundamental Law by setting the flux Φ_V equal to zero. If the coefficients for the two driving forces were different, van 't Hoff's equation would be violated. When the volume flux is carried only by the water, the number of moles of water flowing across a unit area of membrane can be derived for dilute solutions from the molar volume of water v_w^0 (Finkelstein, 1987):

Φ_w = Φ_V / v_w^0.    (4)
Substituting Eq. 3 into Eq. 4 gives an alternative form of the Fundamental Law of Osmosis,

Φ_w = (L_p / v_w^0)(ΔP − RTΔc_s) = P_f (ΔP/RT − Δc_s),    (5)

where P_f ≡ L_p RT / v_w^0 is the osmotic permeability coefficient. P_f can be determined from the measurement of water fluxes induced either by a hydrostatic pressure difference or a difference in osmolarity across a membrane (Fettiplace and Haydon, 1980; Finkelstein, 1987). The foregoing observations give rise to several questions, which we will pick up later. What is the physical reason for the observed equivalence of hydraulic and osmotic flow? It is counterintuitive that the same coefficient, L_p or P_f, should apply to both. Why, from a molecular point of view, must an impermeable solute concentration be balanced at equilibrium by a difference in hydrostatic pressure, and why should van 't Hoff's law be so similar to the equation of state of an ideal gas?

How water moves across membranes
The flow of water is composed of two components, a convective component and a diffusive component (Truskey et al., 2009). Both may be present simultaneously but to different degrees depending on the nature of the flow. For macroscopic flow, the convective movement dominates, but we will give an example of flow through a lipid bilayer that is entirely diffusive. We will describe the convective and diffusive contributions in turn. Convection is the bulk flow of liquid induced by a force. It is what we are able to see when water runs in a brook or through a pipe and is described mathematically by the Navier-Stokes equation (Truskey et al., 2009; Phillips et al., 2012). At the molecular level, in convective flow, clusters of closely packed water molecules move in concert in the direction of the force. However, because molecules in a liquid can move relative to each other, they are always in random motion, which drives diffusive movement. If, in addition to thermal motion, a mechanical force F acts on the molecules, their random movements are biased in the direction of the force, and each molecule acquires a drift velocity μF in the direction of the force. The proportionality constant μ is called the diffusional mobility of the molecule, and it is connected to the diffusion constant D through the Einstein relation μ = D/RT. Molecules within a liquid flowing convectively under a force therefore simultaneously exhibit diffusive motion that is superimposed upon the convective flow. More specifically, the average velocity of a molecule in a flowing liquid is the sum of the convective velocity and the diffusive drift velocity. As an example, a pressure gradient in water simultaneously induces both convective flow according to the Navier-Stokes equation and a diffusive drift of water molecules along the gradient. Clusters of water molecules move as a whole along the pressure gradient, while each individual molecule responds to the gradient by drifting stochastically away from regions of higher pressure and toward regions of lower pressure. The reason that an individual molecule drifts toward a region of lower pressure is that less work is required at lower pressure to accommodate the molecular volume. For flow through membranes, we can quantify the relative importance of the convective and diffusive contributions with the dimensionless P_f/P_d ratio. The overall permeability P_f has already been defined as characterizing the flow observed when a pressure or osmolarity difference is imposed on the two sides of the membrane in accordance with the Fundamental Law of Osmosis, Eq. 5.
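Before turning to the diffusional permeability, a short sketch of the bookkeeping around Eq. 5 may help; the hydraulic permeability used here is a made-up value chosen purely for illustration.

```python
# Bookkeeping sketch for Eq. 5: converting a (purely illustrative) hydraulic permeability
# L_p into the osmotic permeability coefficient P_f, and evaluating the water flux driven
# by a 100 mOsm gradient with no imposed hydrostatic pressure difference.
R, T = 8.314, 310.0     # J/(mol K), K
v_w = 18.0e-6           # m^3/mol, molar volume of water
L_p = 1.0e-12           # m^3/(m^2 s Pa) -- hypothetical value for illustration only

P_f = L_p * R * T / v_w                 # m/s, osmotic permeability coefficient
delta_c = 100.0                         # mol/m^3 (100 mOsm/L) osmolarity difference
phi_w = P_f * delta_c                   # mol/(m^2 s), magnitude of the molar water flux
print(f"P_f = {P_f:.2e} m/s; water flux = {phi_w:.2e} mol m^-2 s^-1 "
      "toward the higher-osmolarity side")
```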
The diffusional permeability coefficient P d is what the permeability would be in the absence of convection. Then, only the diffusion of the water molecules is effective in the transport. Significantly, P d can actually be measured in a separate experiment from the observed diffusional flux ϕ * w of trace concentrations of isotopically labeled water (Mauro, 1957;Fettiplace and Haydon, 1980), in the absence of either a pressure or osmolarity difference, and Δc * w is the difference in concentration of the water isotope across the membrane. 4 That the P d in this equation is the same P d appearing in the P f /P d ratio requires proof, which is provided in Appendix 2. It is likely that the diffusional and convective flows of water are additive, so we write P f P c + P d , where P c is the contribution from convection, and then when we divide both sides by P d , we find that for the P f /P d ratio, from which a useful interpretation of P f /P d emerges. From its meaning, convection is represented by a positive value of P c , the smallest possible value of P f /P d is unity, and then the flow is entirely determined by the diffusion of the water molecules. But if P f /P d is much greater than unity, convection dominates osmotic flow through the membrane. Mauro (1957) and Robbins and Mauro (1960) measured P f and P d for a series of synthetic collodion membranes of increasing density in polymer material. For the most open membrane, the diffusive component of water flow was a small fraction, 1/730 of the overall observed flow, while for the mostdense membrane, the diffusive contribution was somewhat more important, but still just 1/36 of the total. Their experiments showed conclusively that the water flow in these membranes is dominated by convection, like water running in a brook, perhaps obstructed in its course by rocks (in the membrane, by polymer material). Unlike most synthetic membranes, biological membranes are heterogeneous, with protein channels like aquaporin spanning the lipid bilayer (White et al., 2022). Water is transported independently through both the bilayer and the channels, as illustrated in Fig. 2. The P f /P d ratio provides insight in the biological case also. For isolated lipid bilayers, measurements show P f /P d 1 (Fettiplace and Haydon, 1980), so there is no convective flow component. Water crosses the lipid bilayer diffusively as dispersed independent molecules. However, the measurements of Hevesy et al. (1935) in frog skin many years ago showed that P f was greater than P d . This inequality was also found to be true in red blood cells (Paganelli and Solomon, 1957). These experiments provided the first evidence of water channels; however, it took a long time to identify and isolate aquaporin channels (Agre et al., 1995). There is no convective (Navier-Stokes) water flow in the strict sense through aquaporin channels since the water molecules move in a single file. Nonetheless, the molecules are thought to be in close proximity in the channel, and observed values greater than unity of the P f /P d ratio could reflect their influence on each other during osmotic flow. Common misconceptions about osmosis Diffusion is not the primary driver of osmosis. A major obstacle bedeviling our understanding of the molecular level of osmotic pressure and osmosis, for well over a century, is the belief that diffusion is the sole driver of osmosis. 
Here is a typical statement: "Water spontaneously moves 'downhill' across a semipermeable membrane from a solution of lower solute concentration (relatively high water concentration) to one of higher solute concentration (relatively low water concentration), a process termed osmosis or osmotic flow. In effect, osmosis is equivalent to 'diffusion' of water across a semipermeable membrane" (Lodish et al., 2021). Or, "…water moves slowly into or out of cells down its concentration gradient, a process called osmosis" (Alberts et al., 2015). Although water diffusion may seem to provide a reasonable mechanism for osmosis, measurements from membranes with aqueous water channels show conclusively that diffusion alone cannot account for the osmotic flux. The fact that P f > P d demonstrates that there is a significant convective component to osmotic water fluxes that cannot arise by diffusion. This disparity points to the need for a driver in addition to the water gradient. This is precisely what the Debye model does, showing how the collision of the solute molecules with the membrane generates a pressure drop that drives water across the membrane. It is incorrect to characterize the osmotic flow of water as essentially a Fick's law diffusion of water molecules between aqueous solutions of differing water concentrations. The difference in water concentration (moles per unit volume) in pure water and in an aqueous solution is not simply a function of solute concentration alone. A straightforward calculation shows that it also depends on the ratio of the partial molar volume of the solute species to the molar volume of pure water (see Appendix 3). This ratio is specific to the particular solute species. The same concentration of solute, but for different solute species, leads to water concentration differences between the two solutions that are specific to the specific solute species. If the osmotic flow were caused by the difference in water concentrations, the water flux would then be specific to the solute species used to establish it. Such a dependence on impermeable solute species is not observed for dilute solutions, and moreover, would contradict both van 't Hoff's equation and the Fundamental Law of Osmosis. The mechanism of osmosis cannot be inferred from the properties of free-standing solutions. Another misconception arises from a focus on the bulk properties of the solutions bathing the membrane, while ignoring the physical implications of the most obvious property of the membrane itself, namely, its mechanical interaction with the solute making it impermeable to the solute molecules. The most common mistake, which has recurred persistently, is the idea that in a free-standing solution, the solute and solvent possess independent pressures, just like a mixture of ideal gases. Modern thermodynamic and statistical mechanical ideas of liquid solutions have fortunately taken root, and today this erroneous picture is only rarely invoked. The modern thermodynamic analysis of osmotic pressure is correct but provides no information about the mechanism. It compares the chemical potentials of water in a free-standing solution with the chemical potential of water in pure water with no reference to the physical interaction of membrane with solute. Osmotic transport is not different from transport induced by pressure differences. Another misconception is to deny the reality of the pressure underlying the movement of water across a semipermeable membrane. 
Here is an example: "The relationship (van 't Hoff) however arises directly from the parallels in the thermodynamic relationships and should not be interpreted in the molecular mechanistic sense since the osmotic pressure is in fact a property ensuring equilibrium of the solvent and solute, and has its effect only via its reduction of the chemical potential of this solvent" (Tombs and Peacocke, 1974). The identification of hydrostatic pressure-driven flow and flow driven by a concentration imbalance of impermeable solute is embodied in the Fundamental Law of Osmosis, and we will demonstrate how the Debye model explains this equivalence in a physically transparent way. A mechanistic model for osmosis: The Debye model Several different mechanisms have been proposed to explain how osmosis arises, with Guell (1991) listing 14. 5 We will argue that there is in fact a parsimonious explanation for osmosis that relies on the mechanical interaction between the membrane and impermeable solute molecules, and that we will refer to as the Debye model as it was first proposed by Debye in 1923. Despite Debye's reputation, the model made little impact on our understanding of osmosis-disappearing for decades, probably because biologists were not aware of it and chemists and physicists were largely uninterested-until the 1960s. Unfortunately, the connection to the original work was lost and we reestablish it here (see Box 1 for a short history). Debye recognized that the physical principles underlying the development of an osmotic pressure must be centered on the interactions of the membrane with the solute molecules since osmotic pressure is not observed in the absence of a membrane. As Debye put it in his 1923 paper, "We express the quality of semi-permeability of the membrane by saying that the potential energy of a solute molecule increases from zero to infinity when it is transported across the membrane from the solution side." An equivalent statement would be that the membrane exerts a repulsive force F on a solute molecule that is strong enough to prevent the solute molecule from entering the membrane and crossing over to the pure solvent side. The Debye model leads to van 't Hoff's law Debye was concerned only with osmotic equilibrium, so we begin by following his derivation of van 't Hoff's law for osmotic pressure at equilibrium. Afterward, we discuss steady-state osmotic flow as a straightforward extension of his model (Manning, 1968). Consider a semipermeable membrane separating two chambers at equilibrium, with the x coordinate increasing from left to right, the semipermeable membrane perpendicular to the x axis, the solution compartment with solute concentration c s,r to the right of the membrane, and the pure solvent to the left (Fig. 3 A). We are in effect looking at an infinite 2-D membrane, with all values isotropic in the y and z directions. Our first goal is to obtain an equation characterizing the solute concentration profile c s (x). For that, we write an equation for the flux j s of solute molecules in the absence of applied pressure, where the first term on the right-hand side of the equation is Fick's law for solute flux in the presence of a solute concentration gradient in dilute conditions, and the second term is the contribution to the solute flux from the mechanical force F exerted by the membrane on nearby solute molecules. Einstein's relation D = RTμ (Einstein, 1905) will allow us to convert the solute mobility μ to its diffusion coefficient D. 
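The derivation that follows (the displayed forms of Eqs. 10-13 did not survive extraction) can be checked symbolically. The sketch below is only that, a sketch: the symbol names are mine, and a simple linear concentration profile is used as a stand-in for the true interfacial profile, whose exact shape drops out of the result.

import sympy as sp

x, R, T, c_sr, L = sp.symbols('x R T c_sr L', positive=True)
# A concentration profile rising from 0 (just inside the membrane) to the bulk
# value c_sr across an interface of width L; the shape is a stand-in only.
c_s = c_sr * x / L

F = R * T * sp.diff(c_s, x) / c_s          # Eq. 10: force that makes the solute flux vanish
dP_dx = c_s * F                            # Eq. 11: mechanical balance on a thin slice
delta_P = sp.integrate(dP_dx, (x, 0, L))   # integrate across the interface
print(sp.simplify(delta_P))                # -> R*T*c_sr, van 't Hoff's law; F has cancelled out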
The semipermeability property of the membrane means that passage of solute into and through the membrane is completely blocked by the force F . Therefore, there must be a gradient of solute concentration across the membrane-solution interface where from left to right the solute concentration increases steeply from zero just inside the membrane to the constant value c s,r of solute concentration in the solution chamber. Moreover, since the membrane excludes the solute, the solute flux across the interface must vanish. Setting j s = 0 and then using Einstein's relation and canceling D, we obtain an equation to characterize the solute concentration profile c s (x), To connect this equation to the pressure that develops across the membrane, we can visualize a volume element of the solution near the membrane as a thin slice of thickness dx parallel to the membrane (see rectangular blue box in Fig. 3). When the system is at equilibrium, the slice, in particular, must be in mechanical equilibrium, meaning that all of the forces acting on and inside the slice must balance out to zero. The intermolecular forces among the molecules inside the slice cancel each other as a consequence of Newton's law of action-reaction, leaving the requirement that the forces on the slice originating from outside it must balance it to zero. These forces are the repulsive force F from the membrane acting on each solute molecule in the slice and the hydrostatic pressures from the fluid surrounding the slice and pushing from outside the slice on each of the side surfaces of the slice. With P(x), the pressure at x, the zero balance is expressed by the equation dP/dx − c s F 0 or 7 dP dx c s F . Eq. 10 and Eq. 11 can be combined: The van 't Hoff equation for osmotic equilibrium, follows after integration from left to right (pure solvent to solution) with P r the pressure in the solution compartment, P 0 the pressure in the pure solvent compartment, and of course c s,l = 0 in the pure solvent compartment. We are now in a position to recognize the genius of Debye's insight, simple as it is. At the heart of his derivation is the membrane-solute force F , which would be different for every membrane and every solute. How can it lead to van 't Hoff's equation, which is applicable generally to any membrane-solute pair? The reason, as we have just seen, is that it produces compensating physical effects, and F cancels from the final result. It is worthwhile considering an alternative approach, first used by Ehrenfest (1915) and then by others (Kiil, 1982;Borg, 2003 Preprint;Bowler, 2017) to employ the virial theorem to understand osmosis. The virial theorem from the statistical mechanics of a fluid is a relation between the pressure of the fluid and its total time-averaged energy, kinetic plus potential. The potential energy accounts for the forces of interaction among the particles of the fluid. For a real gas, the virial theorem was developed by Mayer (Uhlenbeck and Ford, 1963) into his virial expansion, an infinite series for the pressure in which the first term gives the ideal gas equation of state and the higherorder terms account successively for corrections due to interactions among the gas molecules. The McMillan and Mayer (1945) theory gives an analogous virial expansion for the osmotic pressure that arises when a solution is separated from Box 1. 
A short history of the Debye model The investigation of osmosis has an interesting history that has been told by others (Smith and Smith, 1960;Hammel and Scholander, 1976;Mason, 1991). In this section, we will focus on the history of the Debye model. Although the experimental demonstration of osmosis by Jean-Antoine Nollet (1748) predates that of diffusion by Thomas Graham (1828), the development of the theoretical basis of diffusion proceeded with little controversy (Einstein, 1905;Jacobs, 1935;Berg, 1993). In contrast, the theoretical underpinnings of osmotic pressure proved contentious from the start. 6 There is a fascinating story recounted by Wald (1982) that it was Hugo de Vries (a botanist and one of the rediscoverers of Gregor Mendel's work) who told van 't Hoff about Pfeffer's experiments (Pfeffer, 1890) on semipermeable membranes when their paths crossed while walking in Amsterdam. van 't Hoff was awarded the first Nobel Prize in Chemistry in 1901 largely for his work on osmosis. At our historical remove, it may seem strange to award the prize for what seems like such a simple finding. However, it provided one of the first experimental confirmations of atomic theory. What we have called the Debye model was first proposed by Peter J.W. Debye in a paper first published in French (Debye, 1923b) and then in German (Debye, 1923a), and primarily devoted to further developments of Debye's theory of ionic solutions. Debye remarks in a footnote "Among the large number of authors who have already dealt with the kinetic theories of osmotic pressure, we must cite above all: L. Boltzmann, H.A. Lorentz, Ph. Kohlstamm, C. Jäger, O. Stern, P. Langevin, J.J. van Laar, P. Ehrenfest," but does not cite any of their papers, because they failed to pin down the mechanism. In the intervening years, there have been very few references to Debye's paper. Joos developed a simplified derivation of the mechanism in what is essentially a didactic paper (Joos, 1941), acknowledging that his work was derived from an idea in a paper by Debye (1923b). The derivations were included in Joos's influential textbook of physics (Joos and Freeman, 1959). Manning (1968) was probably the first to rederive the Debye model in the second half of the 20th century. Manning based his derivation on a textbook by Rutgers (1954), who said that his argument was derived from Debye, but Rutgers does not quote the paper. It is worth noting that Debye provided a foreword to the Rutgers textbook. The textbook by Villars and Benedek, 1974 is the source most often quoted for the solute-membrane repulsion model, but it has no references at all. In the biological literature, Mauro (1979) appears to be the first to have referred to Manning and to Villars and Benedek in the context of osmosis. It is puzzling that Debye's work on osmosis made little impact since he was a major figure in the development of physics in the 20th century, receiving the Nobel Prize in 1936. It is even more so because he was a professor at Cornell University (Ithaca, NY, 1939-66) during the period when the debate about the molecular origins of osmosis was revived. Indeed, from the mid-1950s to the 1990s, several theories competed about the origin of osmotic pressure (Hammel, 1979;Hildebrand, 1979;Mauro, 1979;Soodak and Iberall, 1979;Yates, 1979;Essig and Caplan, 1989). Prominent among the contesting theories was the controversial solvent tension theory (Hammel and Scholander, 1976). 
However, the Debye model never seemed to have made an appearance in the debate, at least in its quantitative form. In an interview in 1964, Debye himself provides a possible key to this enigma. When asked which periods of his work stand out to him "…I think they are important at the moment when I am doing them. Later I forget about them. So it's only during the time that I have fun with them that they seem important" (Corson et al., 1964). 6 "Again we have the basically pointless question: What exerts osmotic pressure? Really, as already emphasized, I am concerned in the end only with its magnitude; since it has proved to be equal to the gas pressure one tends to think that it comes about by a similar mechanism as with gases. Let he, however, who is led down the false path by this rather quit worrying about the mechanism."-van 't Hoff (1892) translation from Weiss (1996). pure solvent by a semipermeable membrane. The first term of the expansion gives the van 't Hoff equation and the higherorder terms account successively for solute-solute interactions as mediated by the solvent. However, this approach cannot provide any insight into the membrane-solution forces that generate pressure. It is effectively equivalent to the thermodynamic analysis of osmosis using the chemical potential of water (see above). The Vegard pressure profile We now move from considering osmotic equilibrium to the situation where the pressures in the chambers are constrained to be the same and both chambers are very large and well stirred. Under these conditions, which we will refer to as the osmotic steady state, an osmolarity gradient across the semipermeable membrane will drive a steady flow of water across the membrane. We will show that extension of the Debye model to osmosis demonstrates that there must be a pressure drop from the solution to just inside the membrane equal to RTc s . Since the pressure is lower on the solution side (just inside the membrane) than on the pure solvent side, there is necessarily a pressure gradient across the membrane. In a simple 1-D visualization, the expected pressure profile is shown in Fig. 3 a. However, the pressure gradient within the membrane may have a more complicated form shaped by the molecular structure of the membrane. In a prescient 1908 paper, Lars Vegard, who was a student of J.J. Thompson at the time, appears to have been the first to propose this pressure profile (Vegard, 1908). He suggested, based on osmotic transport measurements with synthetic membranes, that somehow the solute generated a pressure gradient within the membrane but did not propose a mechanism. Such pressure profiles were rediscovered by several workers (Dainty, 1965;Mauro, 1965;Manning, 1968) in the 1960s. Manning first made the connection between the profile and the Debye model (see Fig. 3 C of Manning [1968]). We term this peculiar pressure profile the Vegard pressure profile and the pressure drop in the narrow interface region on the solution side the Vegard pressure drop. The Vegard pressure profile provides a graphic description of the force that drives the osmotic flow of water. The intramembrane pressure gradient drives water from the side with the lower osmolarity (pure solvent in Fig. 3) to the side with the higher osmolarity. 
In the narrow interface region on the high osmolarity side, the pressure drop by itself would drive water back toward the membrane, but in this region the Debye model shows that it is balanced by the forces from the membrane that drive the impermeable solute molecules away. The Vegard pressure drop drives osmosis With reference to Eq. 11 and the discussion above it, we have explained that the difference dP/dx − c s F represents the net force on a volume element of the solution near the membranesolution interface and at equilibrium it equals zero. When the system is not in equilibrium, the difference dP dx − c s F dV is still the total net force on a volume element dV at the membranesolution interface, but it is not zero and gives rise to a volume flux Φ V . If the flux is not too large, we can set down a linear relationship between the net force and the volume flux, where we will verify the identification of the proportionality constant as hL p , where h is the width of the membrane and L p is the permeability in Darcy's law, Eq. 2. The relation between the membrane force F and the solute concentration gradient at the membrane-solution interface, Eq. 10, remains valid in the steady-state case since we assume well-stirred conditions at the interface so that this expression for Φ V becomes Now, we integrate both sides of this equation across the membrane-solution interface from just inside to just outside. The integral involving the volume flux Φ V is small because it is proportional to the narrow width of the interface. But the integrals of the pressure and concentration derivatives do not depend on the width of the interface. The integral of the pressure derivative across the interface equals the pressure difference across the interface. The integral of the derivative of solute concentration equals the difference of solute concentrations across the interface. This latter difference equals the bulk solute concentration in the solution because the concentration of impermeable solute just inside the membrane is zero. The result then of integrating both sides of Eq. 15 across the membrane-solution interface is that from outside to inside there is a pressure drop equal to RTc s,r at the interface. In other words, the pressure just inside the membrane on the solution side is lower by this amount than the pressure P 0 of the solution outside. Since the pressure is P 0 in both chambers outside the membrane, there must be a pressure gradient across the entire membrane from P 0 on the pure solvent side to P 0 −RTc s,r on the solution side, and hence we have produced the Vegard pressure profile and pressure drop. We can take the derivation one step further, and in doing so, both illuminate the action of the pressure gradient and verify the choice of coefficient hL p . The solute concentration is zero inside the membrane and so its gradient is also zero there. Setting dcs dx 0 in Eq. 15, we see that inside the membrane, an equation that explicitly exhibits the volume flux as driven by a pressure gradient inside the membrane when the pressures in both solution and pure solvent compartments are equal. Moreover, with these coefficients, this equation is equivalent to Darcy's law (Eq. 2). 
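The resulting profile can be sketched numerically. In the snippet below the external pressure P0 is the same in both chambers, the Debye-Vegard drop RTc_s,r appears at the solution-side interface, and the internal gradient reproduces the Darcy-type flux of Eq. 3; the membrane thickness, L_p, and solute concentration are assumed values chosen only for illustration.

import numpy as np

R, T = 8.314, 310.0
c_sr = 100.0                  # mol m^-3, impermeant solute in the solution chamber (assumed)
P0 = 1.0e5                    # Pa, equal external pressure on both sides
drop = R * T * c_sr           # Debye-Vegard pressure drop at the solution-side interface

h = 100e-9                    # m, membrane thickness (assumed)
L_p = 1.0e-12                 # m s^-1 Pa^-1 (assumed)

x = np.linspace(0.0, h, 50)
# Pressure equals P0 at the pure-solvent face, falls linearly through the membrane
# to P0 - drop just inside the solution-side face, then jumps back to P0 across the
# narrow interface where the membrane pushes the solute molecules away.
P_inside = P0 - drop * x / h

flux = L_p * drop             # Darcy-type flux driven by the internal gradient,
                              # identical to L_p * R * T * c_sr from Eq. 3
print(P_inside[0], P_inside[-1], flux)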
Applications of the Debye model to osmotic flow through biological membranes Stiff porous membranes The Debye model, based as it is on fundamental physical principles, should be applicable to osmotic flow across any pressure-bearing membrane, including synthetic polymer-based membranes, the copper-ferrocyanide membrane used by Vegard, and the collodion membranes in Mauro's measurements. In the latter, a P f /P d ratio much greater than unity suggests a pressure-driven bulk water flow inside the membrane with Debye-Vegard pressure drops at the solution-membrane interfaces and a pressure gradient traversing the membrane of the Vegard type (see Fig. 3). These synthetic membranes should be realistic models for biological structures such as the walls of microvessels. The smallest pores crossing capillary walls are about 5 nm wide (Michel and Curry, 1999), much larger than a water molecule (∼0.3 nm), thus carrying water in more or less its ordinary bulk liquid form. The osmotic water flow across capillary walls is hence expected to be consistent with the Debye-Vegard model. Cell membranes Plant cell membranes are supported by a pressure-bearing cell wall and the Debye model for osmotic flow is expected to hold true. Although animal cell membranes lack a cell wall, they are reinforced by a submembrane cytoskeletal network (Kapus and Janmey, 2013). The lipid bilayer in some biological cell membranes is spanned by aquaporin water channels (Preston et al, 1992;Walz et al, 1997). Lipid bilayers are very permeable to water (Fettiplace and Haydon, 1980); however, in particular cells, the water flux is accelerated by specific aquaporins, but not all cells express aquaporins (Verkman, 2012). Since proteins are relatively stiff (Krieg et al., 2018), the Debye model is expected to account for the osmotic flow through aquaporins, just as it does through any pressure-bearing semipermeable membrane. There are Debye-Vegard pressure drops at both ends of the channel, with the larger drop occurring at the end abutting the solution of greater osmolarity. The two ends of the channel could face unequal pressures and the water molecules in the interior of the channel are therefore subjected to a force directed toward the lesser of the two pressures. Aquaporin channels are very narrow with cross-sectional areas just sufficient to accommodate a single water molecule. The single-file movement of a column of water through such channels cannot be described as bulk convective flow, even though experimental measurements show that the P f /P d ratio is significantly greater than unity. Although the osmotic movement of water across these channels may be pressure-driven, a precise description of the dynamics of the water molecules inside the channel is a subject of current investigation (Kavokine et al., 2020). Lipid bilayers Lipid bilayers self-assemble in vitro and may be studied in isolation. Their P f /P d ratios are found to be equal to unity, indicating that water inside them exists, and flows, as independent molecules. Although the parallel arrangement of hydrocarbon tails permits their diffusion within the plane of the bilayer, facilitating the passage of water, out-of-plane movements of the tails are more constrained and therefore may be compatible with an internal pressure gradient. 
Since the external pressures on the two sides of a bilayer membrane may be equal when water transport occurs in response to a difference in osmolarity, we think the possibility of the Debye-Vegard pressure drop and interior pressure gradient are realistic. We show in Appendix 4 that the Debye model for a lipid bilayer leads to the result P f KD/h, where K is the partition coefficient (the ratio of water concentration inside the membrane to that outside), D is the self-diffusion constant of independent water molecules inside the membrane, and h is the thickness of the membrane. Since it is clear from inspection that the same result holds for P d , we conclude that P f /P d 1, in agreement with experimental measurements. Discussion Our primary objective in this paper is to provide a persuasive argument for the Debye model grounded in well-established principles of physics. It begins with the Fundamental Law of Osmosis which implies that whatever happens to drive water across a membrane in the presence of an osmotic gradient must be the same as for the pressure-driven flow in the absence of an osmotic gradient. The Vegard pressure drop, on the side of the membrane adjacent to the solution with the higher osmolarity, provides a plausible mechanical basis for the law, since the osmotic flow is then also pressure driven. A number of scientists have given verbal accounts that accord well with the Debye mechanism and are worth recalling: "To the extent that it is possible to visualize molecular events, this process could perhaps be pictured (at least for narrow pores) as a molecular piston pump, with solute molecules playing the role of the piston" (Dainty and Ferrier, 1989). And from the great epithelial physiologist Hans Ussing: "The pore contains pure water all the way through, so the driving force cannot be a difference in the chemical activity. Obviously, the driving force is the 'suction' created by the osmotic pressure difference at the dotted line. But suction is only another word for hydrostatic pressure difference" (Ussing and Andersen, 1955). Physiologists often refer to what is termed the "colloid osmotic (or oncotic) pressure," which is the osmotic pressure that can be attributed to blood plasma proteins (Boron and Boulpaep, 2016). As blood flows into a capillary bed, the hydrostatic pressure filters plasma into the interstitial fluid leaving behind the impermeant proteins in the blood. This has the effect of decreasing the osmolarity of the interstitial fluid relative to the blood. As blood flows out of the capillaries, the hydrostatic pressure declines and now the osmotic gradient across the capillary wall drives interstitial fluid back into the blood. This interaction between hydrostatic and osmotic gradients, which is of immense importance in clearing the interstitial space, was first postulated by Starling. Although the term "colloid osmotic pressure" is useful in physiology, its mechanistic origins can also be accounted for by the Debye model. However, it is worth noting that a protein molecule contributes more than a small solute molecule to the osmolarity through an excluded volume effect (Guttman and Anderson, 1995). 
Our focus has primarily been on the physical basis of osmosis, but there are several allied phenomena and concepts that we have not touched on which are worth mentioning for readers interested in exploring further ramifications of osmosis, namely: depletion forces (Asakura and Oosawa, 1958), diffusioosmosis (Marbach and Bocquet, 2019), osmotic stress (Parsegian, 2002), reflection coefficients (Finkelstein, 1987), and virial coefficients (Neal et al., 1998). The history of attempts to find a molecular basis for osmosis is surprisingly long and tangled for what on the surface seems like a simple phenomenon. One of the primary difficulties with establishing the physical basis of osmosis is setting up the initial scenario and isolating the essential forces at play. The picture that emerged from the Debye model raised hackles and unfounded thermodynamic arguments were used to counter it. What made this situation even more complicated is that there appeared to be no way of testing the predictions of the theories. After a flurry of activity with no resolution, the debate died out, leaving the erroneous water concentration gradient model uncontested in some textbooks. An odd element that added to the confusion is that even wrong arguments led to the van 't Hoff equation. It is worthwhile comparing the evolution of our understanding of diffusion to that of osmosis. In the case of diffusion, Einstein's explanation in 1905 was rapidly confirmed by Jean Perrin's experiments in 1909(Perrin, 1909, 1910. In contrast, it has taken a very long time for a consistent mechanistic account of osmosis to emerge. To add to that the absence of experiments addressing the osmotic mechanisms at the nanometer scale has perhaps retarded the acceptance of the Debye model. Molecular dynamics provides a method for exploring what occurs at a molecular level in a phenomenon like osmosis (Roux, 2021). In molecular dynamics, which is now a well-established discipline in molecular physics, Newton's laws of motion are used to computationally model the collisions of individual molecules. Molecular dynamic simulations using simple particles to represent the solvent and solute together with an energy barrier to model the membrane successfully recapture van 't Hoff's law (Murad and Powles, 1993;Zhu et al., 2002;Luo and Roux, 2010;Lion and Allen, 2012). This confirms that the nature of the solvent and solute are irrelevant in generating an osmotic flux. However, molecular dynamics has not been used to model the Vegard pressure profile in steady-state osmosis but has been used to predict P f /P d from the molecular structure of aquaporins (Zhu et al., 2002;Portella and De Groot, 2009) and to visualize the pressure drop within a polyamide membrane when hydrostatic pressure is applied across it (Wang et al., 2023) With the development of techniques that allow one to probe below the nanometer scale, the precise molecular mechanics of osmotic transport and the validity of the Debye model should be within reach of experiments. It is not inconceivable that molecular sensors could be designed to detect the pressure gradient's presence and extent. It should therefore be possible to probe the pressure profile first postulated by Vegard in 1908, to confirm a simple and unified view of the physical basis of osmosis. Appendix 1 The Donnan effect Since the Donnan effect plays an important role in water transport in cells, it is worthwhile delving into its nature. To do this, we consider a simplified model introduced by Post and Jolly (1957). 
Let us consider a spherical "cell" with a pliant membrane that is permeable to an uncharged molecule A and water. If we place the cell in an infinite bath with A at a concentration of [A] e , and assume that the passage of A into and out of the cell is governed by the same first-order rate constant k, then: Therefore, at equilibrium cell will be stable. Now, if we introduce b moles of an uncharged impermeant molecule B into the cell, Eq. 17 remains unchanged, but the equation for osmotic equilibrium becomes: where w is the volume of the cell. The cell must follow the osmotic constraint and the kinetic constraint, and the only way that it can do this is if w→∞. So, water flows in continuously and the cell volume grows without ceasing. Although we have shown the case for an uncharged molecule, the same holds true for charged solutes. The volume can be stabilized by introducing an impermeant molecule in the extracellular space. However, this is not what animal cells do; instead, they pump molecules out of the cell to stabilize cellular volume (Tosteson and Hoffman, 1960). In the case of the toy model that we have introduced here, it can be shown that if A is pumped out of the cell, the volume can be stabilized in the presence of B. Appendix 2 Identification of P d Eq. 7 is a statement of Fick's law for the diffusion of tracer molecules when there is a concentration gradient of the tracer. Therefore, P d D/h, where D is the self-diffusion constant of water molecules in the membrane and h is the membrane thickness. The question is whether this P d also characterizes the diffusive component ϕ w,d of water flux, not tracer molecules, when a force per molecule f is imposed. The answer is yes, as we show here. Note that an arbitrary multiplicative factor, such as a partition coefficient, does not affect the conclusion. Using the Einstein relation between diffusion constant and diffusional mobility, we have ϕ w,d = (D/RT) (N w /Ah)f, where N w is the number of water molecules in the membrane and Ah is the volume occupied by the membrane, A being the cross-section area and h the membrane thickness. The total force F on the water in the membrane is N w f, and D/h P d . Then, ϕ w,d P d (F/A)/RT), where we use a molar flux. The net force per unit area can be imposed by a pressure difference and then ϕ w,d −P d ΔP/RT, completing the proof. Appendix 3 Relation between water and solute concentrations The sum of the water and solute concentrations c w + c s is (n w + n s )/ V, where n i is the number of moles of species i and V is the volume of solution, equal to (n w v w + n s v s ), where v i is the partial molar volume of species i. A straightforward rearrangement leads to where X s [=n s /(n w + n s )] is the mole fraction of solute. Only if the solute species is essentially identical to water, for example, D 2 O, can we say v s v w v 0 w , where the latter is the molar volume of pure water, and thus obtain from this equation the simple result c w + c s = 55.5 M. In this situation, the water concentration depends only on the solute concentration and is independent of the specific solute species. In general, however, the concentration of water and that of solute are not simply related.
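A quick numerical illustration of this last point, using approximate partial molar volumes that are assumed here purely for illustration:

v_w = 0.018      # L mol^-1, molar volume of pure water (about 55.5 M water)
c_s = 0.1        # mol L^-1 of solute in each test solution

def water_concentration(c_s, v_s):
    """c_w from c_w*v_w + c_s*v_s = 1, i.e. ideal dilute mixing."""
    return (1.0 - c_s * v_s) / v_w

c_w_pure = 1.0 / v_w
for name, v_s in [("urea", 0.044), ("sucrose", 0.211)]:   # approximate partial molar volumes
    deficit = c_w_pure - water_concentration(c_s, v_s)
    print(name, round(deficit, 2), "M less water than pure water")
# The deficit differs roughly five-fold between the two solutes even though both
# solutions are 0.1 M, yet the osmotic flux they drive is the same; a water
# concentration gradient therefore cannot be the driving variable.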
Paper—A New Algorithm to Detect and Evaluate Learning Communities in Social Networks: Facebook ... A New Algorithm to Detect and Evaluate Learning Communities in Social Networks: Facebook Groups This article aims to present a new method of evaluating learners by communities on Facebook groups which based on their interactions. The objective of our study is to set up a community learning structure according to the learners' levels. In this context, we have proposed a new algorithm to detect and evaluate learning communities. Our algorithm consists of two phases. The first phase aims to evaluate learners by measuring their degrees of ̳Safely‘. The second phase is used to detect communities. These two phases will be repeated until the best community structure is found. Finally, we test the performance of our proposed approach on five Facebook groups. Our algorithm gives good results compared to other community detection algorithms. Keywords—Community detection, evaluation, centrality, social network, safely, learning communities. Introduction Social networks represent a space for discussing and sharing information, they became channels of knowledge for communication and interaction between its users. Today, social networks are used by multiple users in several disciplines. In particular, social networks play a very important role in the field of education. Typically, they bring learners together from different places in real time to facilitate the interaction between them. This interaction allows creating a positive and active environment for online learning to follow students' news and to evaluate them in real time. Students use a variety of social networks such as Facebook, Pinterest, Twitter, Instagram, Snapchat, etc. According to the Diplomeo survey on digital practices of students in France, Facebook is the first social network has been used by students, 93% of 17 -27 years are indeed registered on Facebook. 82% of students are registered on Snapchat, whereas only 64% of students have account on Instagram and 53% on Twitter 1 . Social networks offer several measures that we can use to follow users' actions:  The number of mentions  Hashtags Learners' traces in social networks can be studied and analyzed to evaluate them. In a simulation study which we have performed on two types of interactions ‗comment' and ‗like'. We found that the interaction via comments are important, it makes possible to see exactly what learners think and what their difficulties are. On the other hand, the interaction via likes show that some learners received only the information and they could not interact with their colleagues [1] Social Network Analysis (SNA) is used to model and describe the relationships between users. In this context, social networks modeling is based on graph theory. Each network can be presented and viewed as a graph which contains nodes (users) and links (the relationship between users). These relationships can be structural (used by, colleague of), factual (communicate with, interact with), or declarative (like, comment, subscriber) [2]. A graph contains several knowledge that we use to assess learners in a social network. The notions most commonly used in the Social Network Analysis (SNA) are the centrality and the density of link of the network [3]. Centrality (Eigenvector centrality, PageRank, Betweenness centrality, etc.) is more subtle notion; it seeks to highlight the most important users in the network. 
The density of links represents the number of internal links between users in a community. The creation of the virtual communities is an important property in social networks. McMillan and Chavis proposed a community model consisting of four main elements: Membership, influence, needs fulfillment, and emotional connection. These elements can be directly applied to create online communities in an educational context [4]. Community detection provides an interesting lighting into the network structure. A good community structure gives a microscopic view of complex systems. The detection of communities differs from one aspect to another. For example, web pages communities include pages that deal with the same subject [5]. Thereby, linguistic communities contain people who communicate with the same linguistic tools [6]. Learning communities also regroup learners who have and share the same level of learning and the common interests in social networks [7]. Bielaczyc and Collins defined the learning community as: -a community is a social unit where a learning culture manifests itself in which all are involved in a collective effort of understanding.‖ [8]. The main problem of community detection is to form groups in such a way that users within these groups are strongly connected. Today, there are several algorithms to detect communities in social networks. These algorithms seek to optimize a quality function called modularity (Q), which it measures the density of internal links of communities [9]. This article discusses two important aspects of research: Online learners' evaluation and community detection. The main idea of our work is to propose a method that facilitates the online assessment of learners. Instead of evaluating learner by learner, we proposed to evaluate them by communities. Therefore, we provide a pedagogical community representation of learners in a network. In this sense, we present our algo-rithm which allows detecting and evaluating communities in social networks. The most community detection solutions focus on the position of initial nodes of communities. In this approach, we will define a new measurement called -Safely Centrality‖ which it identifies safe learners in the network. Then, communities will be formed around safe learners by looking for their neighbors. Therefore, the main contributions of this research paper are:  We propose a new method to identify and to evaluate learning communities in social networks in which teachers can easily assess their learners  We define a new measurement of centrality called -Safely centrality‖ to evaluate learning community in social networks  We evaluate the performance of our method on five Facebook groups using the modularity. The experimental results demonstrate the effectiveness of our algorithm to identify at-risk learners in social networks The rest of the paper is organized as the following: In the second section, we present some existing approaches related to our work. Then, section 3 describes in detail the different steps of our algorithm. The results of our experience are presented in section 4. Finally, the conclusion and future directions of our research will be presented in section 5. Related Works In this section, we present some existing works related to our proposed method. First, we cite some studies that demonstrate the educational potential of using social networks. Then, we present some existing approaches to detect communities in social networks. 
The educational uses of social networks Nowadays, social networks represent an important part of our daily life. They offer a simple and convenient solution to learn online. Jeon et al demonstrated that college students can use Facebook as helpful venues for information seeking. In this study, authors use an App Facebook called -College Connect‖ that helps students to identify useful resources by visualizing their personal social ties with their friends who have the same interest. [10]. In addition, a study has done on a Facebook group which devoted to chemistry, according to students' view, this group is practice to promote their skills and also to motivate themselves to learn online [11]. In this sense, Seidel suggested a descriptive study of the evolution of a Facebook group named -Breast Imaging Radio Logist‖ for radiologists interested in breast imaging. The purpose of this study is to analyze affiliations of this Facebook group. In this context, radiologists find it useful to use Facebook groups as a forum to exchange information [12]. In Malaysia, because of the language problems facing students, a study examines how learners make up for their inadequate linguistic repertoire, and it also improves their online discussion using communication strategies on Facebook groups [13]. In addition, Inderawati et al. proposed an innovative approach to evaluate 48 students of Sriwijaya University in a Facebook group of English writing courses. This method based on the quality of students' comments. Authors check the reliability of comments based on two kinds of rubric _rating scale_ containing scoring systems. They propose a system to assess learners consist of four scores: score D (bad) score C (average) score B (good) and score A (very good) [14]. Another study conducted at Mzuzu University in Malawi makes it possible to integrate Twitter and blogs into two undergraduate courses at the library information department. This study showed that students used these technologies correctly to share the course materials, and to communicate actively and instantly between themselves and with their teachers [15]. According to Anggraeny, he examined the students' point of view on the use of Instagram in teaching and learning processes. The importance of this study is to help teachers to communicate with these learners and also to better understand their barriers [16]. Community detection in social networks Community detection is a problem that widely studied in the field of Social Networks Analysis. Several methods have been proposed to detect communities in social networks. Blondel et al. proposed a fast and easy method called -Louvain method‖ for detecting communities in large networks based on the optimization of the modularity [17]. Modularity (Q) is a measurement function introduced by Newman et al. It makes it possible to evaluate the quality of the community structure which was obtained, the modularity is a value between -1 and 1 that measures the density of edges within communities compared to the edges connecting communities to each other [18]. In addition, Raghavan et al. designed a simple method based on nodes' labels to detect communities in a graph. Initially, each node is initialized by a unique label. In the different iterations, each node takes the label shared by the majority of its neighbors. If there is no single majority of labels, one of the labels is chosen randomly. In this way, most of the labels are propagated in the graph. 
The algorithm stops when each node has the majority label of its neighbors. Communities are defined as sets of nodes with identical labels [19]. In this context, some community detection algorithms use centrality measures. For example, Ahajjami et al. have proposed a new scalable leader-community detection approach for community detection in social networks based on leadership. This study is divided into two steps: the first step consists to select the network leadership by the eigenvector centrality measure. In the second step, they detected the communities by the similarity of nodes [20]. Otherwise [21] suggested a new community representation of a network, they defined two measures of centrality -leading degree‖ and -following degree‖ to measure the representation of a node and its relations in a graph. A community is made up of a leader and his followers. In a graph, leading nodes have a Higher degree of leading, whereas the other nodes have a low degree of leading and a higher degree of following in relation to leading nodes. Proposed Algorithm General approach A learning community is made up of learners, young people or adults who interact with each other in order to develop their personal and collective knowledge [22]. Our approach is used to detect and evaluate learning communities in social networks. Figure 1 summarizes the different steps of our approach. In this article, we propose a new algorithm called Evaluation and Detection Community Algorithm (EDCA). EDCA is built in two phases:  Learners' evaluation to detect safe learners in the network  Building communities by detecting neighbors. Notations and definitions Let an undirected and weighted graph G (V, E, WV, WE) with:  V = {Ui}: is the set of nodes (learners)  E = {Aij}: is the set of arcs that represents the interaction between the learners  WE= {mij}: is the set of arcs' weights that indicates the total number of interactions between two learners  Wv = { Di} : is the set of nodes' weights, it represents the node degree, that is to say, the number of incoming and outgoing interactions  Ωi { Ui , Status} : is the set of communities detected and evaluated in the social network. In which ‗Status' can be safe or at-risk  Safe : safe learners are the active learners in the network  At-risk: at-risk learners represent students who have problems to interact with their colleagues  Safe community: contains the most active users in the network  At risk Community: possesses at-risk learners who have difficulty to interact with each other Safely centrality Safe learners represent the learners' principal of the network. They can easily interact and exchange information with each other. Detecting safe learners in a social network is the main challenge faced by researchers. The Social Learning Analytics (SLA) allows presenting several measures, the most important centrality measures are: betweenness centrality, closeness centrality and degree centrality that we used in our previous work [23]. These measures make it possible to measure the representation of a learner in a network. In our approach, we defined a new measure of centrality called -Safely centrality‖ defined by equation (1). This measure detects safe learners in a social network. ∑ (1) With N is the number of learners in the network, d(i,j) is the distance between i and j. To judge if a community is safe or at-risk, it is necessary to compare the degree of safely with a threshold (S) that varies from one community to another. 
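The displayed form of equation (1) is garbled in this extraction; from the accompanying description (N learners, d(i, j) the distance between learners i and j) a closeness-style score of the form safely(i) = N / Σ_j d(i, j) seems intended. The sketch below implements that reading; the function names, the threshold value, and the stand-in graph are assumptions, not taken from the paper.

import networkx as nx

def safely_centrality(G):
    """Assumed reading of Eq. (1): safely(i) = N / sum of shortest-path distances from i."""
    N = G.number_of_nodes()
    scores = {}
    for i in G.nodes:
        total = sum(nx.single_source_shortest_path_length(G, i).values())
        scores[i] = N / total if total > 0 else 0.0
    return scores

def classify(G, threshold):
    """Phase 1 of EDCA: label each learner safe or at-risk against a threshold S."""
    s = safely_centrality(G)
    return {i: ("safe" if s[i] >= threshold else "at-risk") for i in G.nodes}

G = nx.karate_club_graph()                 # stand-in for a Facebook-group interaction graph
labels = classify(G, threshold=0.55)
print(sum(1 for v in labels.values() if v == "safe"), "safe learners")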
Community detection In a network, it is easy to observe the interaction between nodes, but it is difficult to see its community structure. Therefore, we are introducing a new community representation to better reflect the community learning structure of a network. A community is usually formed from an initial node. In our approach, we chose nodes that have a higher degree of ‗Safely centrality' compared with other nodes as the initial nodes of communities. These nodes called safe nodes. Then we seek for their neighbors according to this equation: http://www.i-jet.org With , and three nodes of a network, such as is a safe node. are two at-risk nodes. The detection of the neighbors of the initial nodes is done by equation (3) according to these properties: Property 1: and are part of the same community if its weight is higher than other links' weight in the network. Property 2: If a risky node does not have a relationship with safe nodes then we place this node in a separate community. Property 3: If a safe node does not have a relationship with at-risk nodes then we place this node in a separate community. Renew the graph At each iteration, EDCA uses a new graph of which vertices (V') are the communities discovered during the previous iteration. For this purpose, the weight of links between these new vertices is given by the sum of links' weights that existed between the nodes of these two communities. The links that existed between vertices of the same community create loops on this community in the new graph. The new graph G'(V',E',W'E,W'V ) is defined as the following: With N' is the number of nodes of G' such that N' < N. {A'ij} is the set of links between the new nodes of the network. And } is the set of links' weights between the nodes. Evaluation and detect community algorithm (EDCA) As shown in Table 1, the proposed algorithm EDCA is divided into two phases: Phase 1: Evaluate community The initial partition consists to place each node in a separate community. Thus, this partition is composed of N communities. Afterwards, for each community, we calculate the "safely centrality" measure. If this measure is higher than the threshold (S) then the community is considerate safe, if it is not the community will be at-risk. Phase 2: Community detection For each safely node, we detect its neighbors by equation (3) to create communities, and we calculate the modularity of this partition. Thereafter, if the value of the modularity is differing from the previous value then we renew the graph by equation(4). Again we apply repeatedly the first and the second phase of the algorithm on the new graph. In each iteration of EDCA, we calculate the modularity. If we obtain a fixed modularity in two followed iterations, then the algorithm stops and it takes the community structure of the highest modularity. Experimental Study In this section, we present experimental results that are obtained in our study. In addition, for measuring the performance of our algorithm EDCA, we compare it with three community detection algorithms: Edge betweenness centrality [24], Label propagation [19] and leading eigen [25]. Before discussing our results, we describe the dataset in which we apply these algorithms, and the quality metric used in this research. Datasets description Our experimental study was chosen to adapt to the real environment of online social networks. Actually, we aim to generate our dataset that contains learners' interactions on Facebook groups. 
The purpose of this article is to evaluate the performance of our algorithm, for that reason we use a dataset contains users' interactions on Facebook groups. In our case, we considered that each user is a learner. The data were collected from Cheltenham's Facebook groups 1 ; discussions within these groups consist to exchange the major issues of users. Five open groups were selected, which are described in the following: Performance metrics Quality indicators answer the question: What is the right community structure for a network? They are generally based on the local properties of communities. One of the quality functions called Modularity was introduced by Newman et al. [9] This function makes it possible to evaluate the quality of the detected community structure. Modularity calculates the density of links in a community. With ∑ is the sum of weights of links attached to the vertex i, Ωi is the community to which the vertex i is assigned, is the Kronecker delta which is equal 1 if u=v and 0 otherwise, and ∑ . Discussion and evaluation We implemented our algorithm EDCA with the R language. "igraph" and "cluster" are two libraries that we used to interact with the network. Community detection and analysis is an important methodology to understand the organization of various networks. In general, community detection algorithms are always based on a characteristic or information such as labels, leadership, shortest path, etc. In our algorithm, we used the "safely centrality" measure to detect the safe nodes in the network, which present the initial nodes of communities. Figure 3 shows the community structure which is detected by EDCA for five Facebook groups. While red clusters represent at-risk communities, green clusters represent safe communities. The results of our algorithm prove its performance in detecting and evaluating learning communities. More concretely, the community structure was obtained by EDCA allowed us to easily identify the most active users and the less interacted ones in a group, especially, learners who face barriers to learning. As shown in Figure 4, the x-axis represents the number of iterations of our algorithm, and the y-axis shows the value of the modularity. The modularity varies according to the number of iterations. Fig4.a and fig4.b illustrate the improvement of modularity. However, during the evolution of the modularity, it rises a little then it goes down after it increases until it reaches the maximum threshold, so that it takes a fixed value (see fig4.c, fig4.d, and fig4.e). On the other hand, as mentioned above, we compared the performance of our algorithm with different community detection algorithms. The objective of this comparison is to assess the internal connectivity of communities using the modularity measure. Figure 5 illustrates the modularity obtained for each algorithm. We note that the modularity obtained by EDCA and leading eigen are close, this result implies that both algorithms have give a similar partition. On the other hand, we see that the modularity of the EDCA algorithm is higher compared to other algorithm, Edge betweenness and Label probagation. In all five Facebook groups, EDCA produces the highest modularity value compared to other algorithms. These results mean that our proposed method is more flexible than other methods. Conclusion Nowadays, learners and teachers use social networks as a learning environment to facilitate the interaction between them. 
This study shows that the use of social networks as an informal learning activity allows learners to learn together without constraints of time and place. In this article, we have proposed a new algorithm for detecting and evaluating learning communities. Our approach begins with the identification of the safe nodes, based on the "safely centrality" measure; these nodes represent the initial nodes of communities. For each safe node, we look for its neighbors to build communities. The experimental results illustrate the performance of our proposed algorithm and provide evidence that the community structure obtained by EDCA is more flexible than those obtained by the other algorithms. These results open the opportunity to use this algorithm in other areas such as e-commerce, e-mailing and scientific citation networks, so that groups of people can be analyzed and evaluated. As a perspective, we aim to collect our own dataset from Facebook groups on which to apply our algorithm; we also aim to optimize the algorithm to minimize its execution time.
2019-12-12T10:33:54.591Z
2019-12-06T00:00:00.000
{ "year": 2019, "sha1": "b5b9d81410fa96d013a0659fc7fb053fdc813aa7", "oa_license": "CCBY", "oa_url": "https://online-journals.org/index.php/i-jet/article/download/10889/6201", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e23a5522efc42a9315faaca0db872e9a1cd31a87", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [ "Computer Science" ] }
62790211
pes2o/s2orc
v3-fos-license
Impact of Parthenium hysterophorus L . ( Asteraceae ) on Herbaceous Plant Biodiversity of Awash National Park ( ANP ) This study was conducted in Awash National Park (ANP), East Shewa Zone of Oromia National Regional Sate, Ethiopia, aimed at determining the impact of parthenium weed (Parthenium hysterophorus L.) on herbaceous diversity. A transect belt of 13.5 km * 0.10 km of parthenium weed infested land was identified for the determination of the impact. Four quadrats were purposively laid every 250 m interval two for infested and two for non-infested each from both sides of the road and a total of 216 quadrats of 2 m x 2 m (4 m 2 ) were considered. A total of 91 species were identified from which five of them were out of the quadrats. All species were categorized into 21 families, from which Poaceae and Fabaceae shared about 40%. The species in the non-infested quadrats were found to be more diverse and even when compared to those of the infested quadrats. Infested quadrats were found to be more abundant and dominant. Tetrapogon tenellus was found the dominant specie in the noninfested quadrats while Parthenium hysterophorus was found dominant in the infested followed by T. tenellus. There was no statistically significant difference between the total stand crop biomass of the infested and noninfested. Parthenium weed have been found creating great challenge on herbaceous plant diversity of ANP. Invasive Alien Species yet identified in Ethiopia.Since its introduction in 1976 into Ethiopia (Tefera 2002) parthenium weed has been reported as relentlessly spreading throughout the agricultural lands, forests, orchards, poorly managed arable crop lands and rangelands, almost throughout the country.EARO (2002) Study area The Impact of Parthenium Weed on Herbaceous Diversity A total of 91 species were identified Herbaceous Stand Crop Biomass Valuable species in the infested area which were essential for grazing reported as, Awash National Park, one of the prominent national parks in Ethiopia and where a number of wild animals and various woody and herbaceous species inhabit has been at risk due to the aggressive spread of the weed to the park.Herbaceous vegetations are the dominant component of most wildlife reserve areas.It was reported as parthenium weed has the potential to decline adversely the herbaceous components of the vegetation upto lantana weed (Lantana camara) and witch weeds (Striga species) are among the major Abstract two for infested and two for non-infested each from both sides of the road and a total of 216 quadrats of 2 m x 2 m (4 m 2 ) were considered.A total of 91 species were identified from which five of them were out of the quadrats.All species were categorized into 21 families, from which Poaceae and Fabaceae shared about 40%.The species in the non-infested quadrats were found to be more diverse and even when compared to those of the infested quadrats.Infested quadrats were found to be more abundant and dominant.Tetrapogon tenellus was found the dominant specie in the noninfested quadrats while Parthenium hysterophorus was found dominant in the infested followed by T. 
tenellus.There was no statistically significant difference between the total stand crop biomass of the infested and noninfested.Parthenium weed have been found creating great challenge on herbaceous plant diversity of ANP.Resumen Este estudio se realizó en el Parque Nacional de Awash (ANP), Oriente de Shewa, zona de Oromia del Estado Regional Nacional, Etiopía, con el fin de determinar el impacto de la mala hierba parthenium (Parthenium hysterophorus L.) sobre la diversidad herbácea.Se realizó un transecto de 13,5 km * 0,10 kilometros en la tierra infectada por la mala hierba parthenium para determinar su impacto.Se colocaron cuatro cuadrantes en intervalos de 250 m, dos para infectados y dos para no infectados a cada uno de los lados de la carretera, considerando un total de 216 cuadrantes de 2 m x 2 m (4 m2).Se identificaron un total de 91 especies, cinco de ellas ubicadas fuera de los cuadrantes.Todas las especies fueron clasificadas en 21 familias, de las que Poaceae y Fabaceae compartían alrededor del 40%.Se encontró mayor diversidad de especies en los cuadrantes no infectados, e incluso en comparación con los cuadrantes infectados.Los cuadrantes infectados resultaron ser más abundantes y dominantes.Tetrapogon tenellus se encontró como especie dominante en los cuadrantes no infectados, mientras que Parthenium hysterophorus se encontró dominante en el infectado seguido de T. tenellus.No hubo diferencias estadísticamente significativas entre la biomasa total de la cosecha de pie de los infectados y no infectados.La mala hierba Parthenium se presenta como un gran reto en la creación de diversidad de plantas herbáceas en el ANP.Palabras claveEste de Shewa, infectación, invasión de especies alóctonas, Oromia, turismo. Figure 1.Map of study area (Awash National Park) (Source: Ethiopia Institute of Agricultural Research, GIS unit) ( Apendix1) and grouped into 21 families, of which, Poaceae is Shannon and Yi = the abundance of species I = the sum of the lesser scores of species i where it occurs in both quadrats m = number of species highway run east-west a total length of 50 m x 13.5 km was considered in both sides of the road.A preferential sampling method was used.The sampling plots were arranged on the transect line laid on both side of the road.A quadrat of 2mx2m was laid in an interval of 250.At each point two quadrats, one from infested (IN) and one from non-infested (NI) and a total of 216 quadrats were considered.Each species available in the quadrat was counted and recorded.Visual cover estimation of each specie was taken. The because of its direct threat to the habitat of species that are key to the tourism industry(Raghubanshi et al. 
2005; www.unep.org).ANP has been providing tourism and conservation services for the country but currently because of parthenium weed it has been losing its previous value.Parthenium weed caused a decline in stand density of herbaceous species by an average 69% within a few years from its introduction into ANP.This is in agreement with whatEvans (1997) stated that parthenium weed has the potential to replace dominant the same line a decline in biomass of up to 41% was recorded in ANP.This was recorded from the middle of the Park where interference of livestock was said minimum.Although, not thoroughly seen in this study, areas around the border of the Park where regularly visited by livestock of the more abundant than the noninfested quadrats (Table3).This could be because of that parthenium weed is an addition on the prevailing vegetation and it is moreover denser than any of the others vegetation where infestation did not take place.Although most of the associated species were found susceptible to the competition and allelopathic effect, the over all stand density of the infested was found greater.Despite the increment in total stand density and canopy cover, the species diversity (H') and evenness (J) value declined in infested quadrats.This could be due to the fact that some species Table 1 . Comparison in absolute frequency (AF), absolute stand density (AD) and mean absolute stand density (MAD) Table 2 . The Czekanowski similarity coefficient (Sc) among the IN, NI and Table 4 . Stand crop biomass ANOVA summary Table 5 . Stand crop biomass mean separation *Means with the same letter are not significantly different at α= 0.05. Table 2 Herbaceous species index of Awash National Park (ANP)
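The diversity and similarity figures reported above rest on a handful of standard quantities: the Shannon diversity index H', Pielou's evenness J, and the Czekanowski similarity coefficient Sc, which the study uses to compare infested (IN) and non-infested (NI) quadrats. The following is a minimal sketch of these calculations, assuming the standard formulations that match the definitions quoted alongside the tables (the abundance of each species and the sum of the lesser scores of species shared by two quadrats); the species counts in the example are illustrative only, not the study's data.

```python
# Minimal sketch (not the authors' code) of Shannon diversity H',
# Pielou's evenness J, and the Czekanowski similarity coefficient Sc.
import math

def shannon_h(counts):
    """H' = -sum(p_i * ln p_i) over species with non-zero abundance."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def evenness(counts):
    """J = H' / ln(S), with S the number of species present."""
    s = sum(1 for c in counts if c > 0)
    return shannon_h(counts) / math.log(s) if s > 1 else 0.0

def czekanowski(x, y):
    """Sc = 2 * sum(min(x_i, y_i)) / (sum(x_i) + sum(y_i)), where the
    minimum is the lesser score of species i in the two quadrats."""
    w = sum(min(a, b) for a, b in zip(x, y))
    return 2.0 * w / (sum(x) + sum(y))

# Illustrative abundances for the same species list in IN and NI quadrats.
infested     = [40, 5, 3, 0, 2]   # e.g. dominated by P. hysterophorus
non_infested = [0, 12, 9, 7, 10]  # e.g. dominated by T. tenellus

print("H' (IN):", round(shannon_h(infested), 3),
      " H' (NI):", round(shannon_h(non_infested), 3))
print("J  (IN):", round(evenness(infested), 3),
      " J  (NI):", round(evenness(non_infested), 3))
print("Sc (IN vs NI):", round(czekanowski(infested, non_infested), 3))
```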
2018-12-18T10:29:22.797Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "26145dae68025482572e488430bfa222fac5a571", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3391/mbi.2011.2.1.07", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "26145dae68025482572e488430bfa222fac5a571", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Geography" ] }
223039174
pes2o/s2orc
v3-fos-license
“Big data, 4IR and electronic banking and banking systems applications in South Africa and Nigeria” Efficient banking solutions are an integral part of the business integration of South African and Nigerian economies as the two largest economies in the continent. Security, effectiveness, and integration of banking systems are critical to the sustainable develop- ment of the African continent. Therefore, an empirical analysis of the production of research on banking services and systems was conducted. The aim of the study was to examine the robustness of the research findings on banking systems in terms of their importance for the economic sustainability of the continent in the era of the fourth in- dustrial revolution. The study adopted a bibliometric analysis using software clusters to visualize the results. Due to higher visibility of outputs and likely citations, the results showed that the key terms from Google Scholar are ranked higher than outputs from Scopus. Main research interests were related to internet banking (f = 70), e-payment systems (f = 57), telephone banking (f = 56), automated teller machines (f = 54), and mobile banking (f = 40). The results also showed a very low research interest in the technical aspect of online banking services such as security (f = 19, TLS = 40), authentication (f = 17, TLS =33), network security (f =13, TLS = 33), computer crime (f = 16, TLS = 42), and online banking (f = 11, TLS =32). The study found there were insufficient outputs in the area of the fourth industrial revolution (4IR) and banking services in Africa. Future research trends should examine the impact of the 4IR and big data on the banking system, regional economic integration, and sustainable growth in the continent. 2019), including banking operations. Banking systems play a key role in fa-cilitating sustainable growth and regional integration. In this era of integrated Information Technology (IT), the networked environment is characterized by cloud computing, the Internet of Things, and big data technology. Hence, virtual business information systems must be adapted by banks to take advantage of these technologies. A recent study hypothesized that mobile banking services, cryptocurrency, and e-commerce platforms that is built on sustainable, but enhanced networked solutions, will drive future global enterprises, including banking solutions (Pezzuto, 2019). And future banking systems will continue to provide e-payment services, mobile or virtual banking, and electronic fund transfers, provided that matured business and IT alignment is maintained. Considering the above, do the two leading African countries have an improved IT infrastructure and broadband to sustain their economic growth and agile banking services? In the INTRODUCTION A real-time flow of information that facilitates the rapid exchange of knowledge through the adoption of big data will drive the growth, competitiveness, and productivity of companies (Pezzuto, 2019), including banking operations. Banking systems play a key role in facilitating sustainable growth and regional integration. In this era of integrated Information Technology (IT), the networked environment is characterized by cloud computing, the Internet of Things, and big data technology. Hence, virtual business information systems must be adapted by banks to take advantage of these technologies. 
A recent study hypothesized that mobile banking services, cryptocurrency, and e-commerce platforms that is built on sustainable, but enhanced networked solutions, will drive future global enterprises, including banking solutions (Pezzuto, 2019). And future banking systems will continue to provide e-payment services, mobile or virtual banking, and electronic fund transfers, provided that matured business and IT alignment is maintained. Considering the above, do the two leading African countries have an improved IT infrastructure and broadband to sustain their economic growth and agile banking services? In the integrated business environment, with advances in the IT infrastructure, banking services must be configured to respond and adapt to an integrated and networked technology to provide virtual banking services. This study, therefore, provides a bibliometric analysis of knowledge production in this research domain. The importance of a robust banking service that can ensure the sustainability of a national economy cannot be overestimated. A study by the World Bank indicated that 66% of the African population does not use formal banking systems. The African continent leads the world in terms of the use and deployment of mobile money transfers (Ohene-Afoakwa & Nyanhongo, 2017). The African continent remains largely dependent on the banking and financial services from the United States of America. Unless Africa develops and consolidates its business information systems with a unique banking system architecture, there will be no significant and sustainable development in the continent. For example, some observe that the sustainable growth of any economy depends on how efficient the banking system is in driving economic trajectories (Jeucken, 2010). Besides, the usual importance of an efficient banking system is to promote economic growth, ensure green investment financing, and facilitate agile regional economic integration. A robust banking services architecture across the continent will eliminate the over-dependence on foreign exchanges for all regional transactions. However, some of the current challenges in the continent hinge on the inability of academics to assist the government (through research) in prototyping continental banking system architecture that responds to the embryonic African market. Knowledge production for agile banking services, especially in the era of big data and cloud computing, is limited. There is a lack of research on the agile banking system architecture, and insufficient literature on the plethora of challenges around banking systems in the continent, especially in Nigeria and South Africa. Some of the available literature only focuses on the consumers' attitudes towards internet and cellphone banking services in South Africa (Maduku, 2011(Maduku, , 2013. There is a low uptake in market penetration of banking services in South Africa (Gill, 2010), and the question remains, how do users measure e-banking service quality, and is there a mechanism to validate electronic banking (ebanking) transactions? Recently, in South Africa, there have been cases of a lack of synergy between epayments and account settlements. In which a third party deducts from a client bank account when the third-party contract had been terminated. This meant that although the contractual obligations were no longer in effect, the merchants kept deducting money from the clients, and the clients/bank could not stop such transactions. 
The weak interfaces of internet applications and mobile apps show that the service functions and business requirements of the mobile and internet banking infrastructure are not robust enough to respond to user needs. However, studies in this domain have failed to highlight these challenges. Problem statement This paper examined the governance of banking information systems. It evaluated the production of knowledge on banking services and banking systems in Africa, based on the results of scientific publications. One challenge of mobile banking is the lack of a strong network infrastructure (Chaudhry, 2015), and this can pose a serious threat to economic activities, especially within the rural/farming towns in Nigeria that have no easy access to physical banking premises. Despite con-siderable efforts and outputs on banking services and electronic accounting information systems, several gaps have not been covered exhaustively in the literature. The unavailability and misuse of automated teller machines (ATMs) are some of the problems facing Nigerian banking systems. Some studies have examined the negative impact of an ATM, as it exacerbates fraud in Nigeria and leads to other barriers of mobile banking (Agwu, Okpara, Ailemen, & Iyoha, 2014; Ogbuji, Onuoha, & Izogo, 2012). It has been established that innovation in networked technologies is central to banking services deliveries (Ilo, Wilson, & Nnanyelugo, 2014). Despite the reported fraud cases and appre-hensions about online banking risks, banking services are still being introduced to users. Another barrier to introducing internet banking services in Nigeria hinges on the constant technological changes (Agwu, 2015) that have led to an increase in the level of threats and the risk of cyber frauds. It is expected in this era of technological advancement and its ability to contribute to developing e-commerce that most research in banking applications should focus on the nexus between banking systems and cloud computing, cloud infrastructure, Internet of Things, and cryptocurrency amongst others. In recent years, some of the research focus has been on the demographic effects on the mobile banking adoption by clients (Abayomi et al., 2019), digital signature, e-banking authentication, (Okereke & Ezugwu, 2014), and this study on e-signature and authentications did not even examine cryptography and cloud infrastructure and other relevant technologies. Other research focuses were on the quality of e-banking in Nigeria (Olowokere & Olufunmilayo, 2018) and strategies for improving the e-banking security framework (Sarjiyus, Oye, & Baha, 2019). None analyzed the advanced technologies, such as big data and cloud computing. But this study showed a lack that existed in knowledge production in banking information systems vis-à-vis the impact of big data and 4IR on banking services, especially in Nigeria and South Africa. As most research interest in banking services was on corporate governance (Ngerebo-A & Yellowe, 2012), customers' use of internet banking (Agwu, 2017;Ozuru & Opara, 2014), the impact of ATM on the automated banking services (Ali, 2016), and e-banking authentication (Okereke & Ezugwu, 2014). Unfortunately, challenges facing the continent in terms of financial services architecture are beyond these research focus. For example, to travel from South Africa to Nigeria, travelers still rely on buying the US dollar, since Nigerian currency is not available at the airport in Johannesburg. 
At the same time, South African rand could be purchased at the airport in Lagos. But how can African development be strengthened, while the economies depend heavily on the Euro and the US dollar? Besides, when paying for purchases in the US dollar using VISA or MASTER bank cards, an additional fee of USD 23.55 is charged. Such extra costs from a million South Africans and Nigerian could (potentially) transfer almost USD 24 million out of the continent. Yet, African scholars' knowledge production has failed to address some of these critical challenges in the banking system architecture. Therefore, this study uses bibliometrics as a tool for mapping knowledge production (Ajibade & Mutula, 2018), which is important in evaluating the research focus of banking services in Nigeria and South Africa. Banking services in Nigeria An attempt in Nigeria, 50 years ago, to examine the importance of online payments and electronic banking services (Agboola, 1970), as well as technologies and internet services, has increased the flexibility of banking services (Sarjiyus, Oye, & Baha, 2019). Sarjiyus, Oye, and Baha (2019) focused on evaluating the online security risk of electronic banking systems. However, they did not examine the role of the fourth industrial revolution (cloud infrastructure and big data) in providing e-banking and likely associated security implications. In Nigeria, it was established that poor electricity and ICT infrastructure were major impediments facing banking services (Essia & Anwana, 2012). However, up to date scientific inquiries must reflect all the major problems facing the banking systems, including the implications of using cryptocurrencies and fraud prevention. Besides, the fourth industrial revolution (4IR), big data, and the 5G (generation) mobile network infrastructure are likely to revolutionize future banking services. Thus, this bibliometric analysis is important for mapping the direction of research in this domain to establish the relevance of knowledge production in the continent. There are many problems in the continent associated with the adoption and use of banking services. Using Nigeria as a case study regarding what has not been reported in recent studies. For example, in Ekiti State, Nigeria, there are major banks with no physical banking halls/buildings in major towns and sub-rural communities in the Ekiti-West, Oye, and Ikole Local Governments of the state. These problems are beyond the fairytale and cosmetic analysis of challenges that have been reported in the literature, such as customer satisfaction with e-banking (Musa, Habib, & Muhammad). For instance, a study reported ease of capital movement using the e-banking system across the border (Nnamani & Makwe, 2019). Howbeit, the ease of cash movement depends on one's geographical location within the country. But it would have been acceptable to have robust banking applications and mobile banking systems that can bridge this digital divide in mobile banking systems. Unfortunately, major infrastructural challenges are inhibiting the adoption of mobile applications in most of these areas. Due to lack of access to mobile data or internet facilities, a lack of electricity supply to recharge flat phone batteries, etc. A recent study indicated that only 20% of Nigerian households had access to networked information technology. Another challenge of e-banking systems is the ability to confirm the identities of transacting parties to curb fraud and increase user satisfaction (John & Roitimi, 2014). 
Banking services in South Africa South Africa and Nigeria are the two largest economies in Africa; therefore, monetary services must be robust to drive the two economies. However, the financial architecture/infrastructure between the two nations is not aligned or integrated. Furthermore, there is a lack of literature to support executable strategies aimed at harmonizing business and IT banking services to stimulate regional economic integration between Nigeria and South Africa (Ajibade & Mutula, 2020a). Many studies (Maduku, 2013) have explored banking services in South Africa and the predictability of bank clients' attitudes. In 2018, Nyoka's study focused on mobile banking in South Africa and mediating factors for its adoption (Nyoka, 2018). While the focus on mobile banking services, criteria for transactional banking services preference, and factors determining the consumer use of mobile banking is noteworthy (Kabanda, Downes, & dos Ramos, 2012), some of the research focus of banking services in South Africa should be on integrating the fourth industrial revolution (4IR) technology into banking services. There seems to be a huge gap in the literature, especially in terms of the interconnectedness of the 4IR and banking services in Africa. For instance, the challenge of some of these open-source technologies, especially cryptocurrencies, as a disruptive technology to banking services in Africa should be addressed. There are malicious parties using cryptocurrencies to ask for ransom. For example, a website belonging to the Johannesburg Municipality was hacked, and the hacker demanded ransom in Bitcoins for the site to be functional again. Some of the knowledge production trends on digitization and mobile banking in the networked environment have not covered these challenges in South Africa and Nigeria. METHODOLOGY The study used a bibliometric analysis as a useful quantitative tool to map the trend of knowledge production (Ajibade & Mutula, 2018). Data were extracted from the Web of Science (WoS) (n = 58), Google Scholar (GS) (n = 662) and Scopus databases. However, there were 776 total global outputs from the Web of Science. Only 58 outputs were from scholars from Africa, with these metrics: h-index = 9, AVC = 4.91 (average citations per item), and a total sum cited times STC = 285 from 269 citing articles. Because most outputs in the WoS were replicated in Scopus, it was decided to use the data from Scopus and GS for the analysis. However, as the outputs in the Scopus database was insignificant, it was decided to include all outputs from all countries in the database using the following search strings (wildcard such as "* or ?" 
were not used, since the search strings include the name of countries, a term known with no ambiguity): TITLE-ABS-KEY ("banking services") AND (Limit-To (Affilcountry, "South Africa") Or Limit-To (Affilcountry, "Nigeria") Or Limit-To (Affilcountry, "Ghana") Or Limit-To (Affilcountry, "Egypt") Or Limit-To (Affilcountry, "Ethiopia") Or Limit-To (Affilcountry, "Tunisia") Or Limit-To (Affilcountry, "Morocco") Or Limit-To (Affilcountry, "Zimbabwe") Or Limit-To (Affilcountry, "Kenya") Or Limit-To (Affilcountry, "Mauritius") Or Limit-To (Affilcountry, "Cameroon") Or Limit-To (Affilcountry, "Zambia") Or Limit-To (Affilcountry, "Libyan Arab Jamahiriya") Or Limit-To (Affilcountry, "Sudan") Or Limit-To (Affilcountry, "Tanzania") Or Limit-To (Affilcountry, "Uganda") Or Limit-To (Affilcountry, "Algeria") Or Limit-To (Affilcountry, "Botswana") Or Limit-To (Affilcountry, "Malawi") Or Limit-To (Affilcountry, "Rwanda") Or Limit-To (Affilcountry, "Somalia")). The authors processed outputs from Google Scholars (GS) and extracted the abstracts to compare the key terms with those in Scopus. Out of 1,986 terms/keywords in the GS output, it was decided to find out terms that occurred at least fifteen (15) times. Only 39 terms met this minimum threshold. After that, 60% of the most relevant of the 39 terms were selected, which is 23 terms. Co-authorship metric Besides, other terms such as institution, author names, and research terms that were not relevant to banking services and systems were filtered out (see Table 4). Availability of banking services in Nigeria A living-lab method was used, which is an innovative research methodology often applied to open technological research projects (Almirall & Wareham, 2008). It is useful in determining the failure or success of deployed technological products (Coorevits, Seys, & Schuurman, 2014), such as banking systems and services, to test the availability of banking services. Undoubtedly, all banks have their branches in Lagos, but the distance from Lagos to Ekiti State on E1/A122 expressway is 394 km. Ekiti State population is 2.3 million, and agriculture provides over 75% of employment and income for the population. Yet, mobile access, network services connectivity are limited in most farm locations. However, the study tested mobile banking services and the proximity of banking halls in "Oke Ayedun" of Ikole Local Government of Ekiti State (see Figure 1). Unfortunately, it appeared there was only one Wema banking hall in the town where the researcher had searched for the banking hall using the GPS-enabled device on the mobile phone. However, agile banking ap-plications would have been ideal instead of traveling between 23 km to 51 km to the state capital to access banking locations and round trips of 102 km to set up a mobile banking application should there be a need. Therefore, it would be very convenient to have mobile banking systems to cater for the people in these areas. Figure 1 shows the current nearest access and union banks, respectively, and the distance to the researcher's current location, a reality that none of the previous studies have examined. Therefore, there is a need to assess knowledge production in this research area and to show some of the existing gaps in the body of knowledge. Co-citation analysis is important to show the relatedness of outputs based on the number of times they are cited together. For the analysis, co-citation was chosen, whereas for the unit of analysis, cited sources were selected and the full counting methods were applied. 
Out of the total number (n = 9,285) of global sources, journal sources were selected with at least 10 citations per source to calculate the number of co-citations. Only 54 sources met this minimum threshold (see Figure 2 and Table 1). The finding showed the top 5 ranked citations by sources, which accounted for almost a quarter (28.1%) of the total outputs (f = 2,250). Journal sources with higher total link strength (TLS) provided the strength of the impact of collaboration links among journal sources, and this was reported in other bibliometric visualization in a different field of studies (Nadzar, Bakri, & Ibrahim, 2018; Wang, Xing, Zhu, Dong, & Zhao, 2019). The network analysis (see Figure 2) indicated the TLS of outputs, the similarity, the percentages of the total link strength from the top five journal sources that accounted for 32.3% in relation to the other sources. This indicated that these journals were closely cited by sources working on banking services and systems in comparison with the other 54 journal sources in this analysis. This means that the visibility of authors and their outputs will be higher, probably with greater impact than if the results were published in the lower 67%, which accounted for the remaining journal sources. Co-authorship by countries The collaboration scheme shows countries that are conducting joint research on banking systems in Nigeria and South Africa. It revealed a growing trend between scholars in the continent and the rest of the world (see Figure 3). However, South African scholars enjoyed more international collaboration compared to Nigerian scholars, as indicated in the co-authorship network (see Table 2): South African scholars accounted for over 50.7% from (n = 278) outputs, while Nigerian scholars' (n = 150) outputs only accounted for 27.4%. The citation analysis indicated that South African citations (RSA) accounted for 50.7% of the total outputs in comparison with Nigerian (NIG) 27%. Similarly, the RSA citation is 1,226, accounting for 36%, and NIG citation is 974, accounting for 24.6% (see Table 2), showing the top 10 countries with which scholars from Nigeria and South Africa col- Note: TC -total citation, TLS -total link strength. laborate on banking system research. By comparison, South African outputs were almost doubled that of Nigeria in terms of the total outputs and citations counts ( Table 2). Mapping of key phrases and co-occurrence analysis of banking systems Co-occurrence analysis examined the most important terms and directions of research in banking services/systems since co-occurrence analysis revealed the direction of research and concentration of the researcher focus (Wang et al., 2019). However, by visualizing co-occurrence analysis, it is possible to identify research areas not extensively covered, thus, showing gaps that need to be explored for future studies. All keywords that occurred at least ten (10) times were analyzed and selected, and out of the total 2,493 keywords, only 35 keywords met at least 10 thresholds. The visualization network (Figure 4) and the breakdown (Table 3) indicated the predominant trend and area of focus of banking systems and mobile applica- tion knowledge production in Nigeria and South Africa. Anyway, South African output visibility (ranked 3 rd ) was higher than the Nigerian output that was ranked 14 th (see Table 4). 
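The occurrence counts (f) and total link strengths (TLS) quoted throughout this analysis follow the usual co-occurrence logic of bibliometric mapping tools such as VOSviewer: a keyword's occurrence is the number of documents containing it, the strength of a link between two keywords is the number of documents in which they co-occur, and a keyword's TLS is the sum of the strengths of its links. The sketch below illustrates this calculation on made-up records; the function name and the example keywords are illustrative, not drawn from the study's data.

```python
# Minimal sketch (not the authors' code) of keyword occurrence (f),
# co-occurrence links, and total link strength (TLS).
from collections import Counter
from itertools import combinations

def cooccurrence(documents, min_occurrences=10):
    """`documents` is a list of keyword sets, one per publication."""
    occurrences = Counter(k for doc in documents for k in set(doc))
    kept = {k for k, f in occurrences.items() if f >= min_occurrences}
    links = Counter()
    for doc in documents:
        for a, b in combinations(sorted(set(doc) & kept), 2):
            links[(a, b)] += 1          # documents in which a and b co-occur
    tls = Counter()
    for (a, b), strength in links.items():
        tls[a] += strength
        tls[b] += strength
    return {k: (occurrences[k], tls[k]) for k in kept}

# Illustrative records only:
docs = [{"mobile banking", "security"},
        {"mobile banking", "internet banking"},
        {"security", "authentication", "mobile banking"}]
for term, (f, t) in cooccurrence(docs, min_occurrences=2).items():
    print(f"{term}: f = {f}, TLS = {t}")
```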
The comparative data (see Tables 3 and 4 From Figure 4, all relevant keywords related to crimes were further isolated (see Figure 5), as this clustering network was useful for accessing areas that have been researched in banking systems about crimes, frauds, etc. and other areas not yet explored in banking research. It would also help mobile banking (Apps) designers to come up with areas to further explore in predicting service and business requirements for their system solutions. It showed the interrelatedness of key phrases, based on the number of documents in which they occurred together, to reveal the focus of research and the prevalence of knowledge production. CONCLUSION This bibliometric study quantitatively analyzes outputs on banking services and systems (BS) in Africa, with a focus on Nigeria and South Africa. The study presented main research areas, most relevant key terms, and their TLS, the direction of knowledge production in banking services in Nigeria and South Africa, and technologies related to the analysis of banking systems. The study concluded there was a lack of growth in terms of outputs and focus on banking applications in terms of cryptocurrency technologies, the impact of big data, cloud infrastructure and authentication of an online transaction, artificial intelligence, and e-commerce payment execution amongst others. The study summarized outputs on banking services, top 10 high-impact journals, where these outputs were published, major countries collaborating with scholars from the continent to publish on banking systems, and trends in banking research, by summarizing past and present and using visualization networks to provide possible gaps. Future studies should, and researchers are expected to find it useful to look at the basic terms related to banking fraud or crime to formulate future research trends. Figure 5. Isolated key phrases related to crime and banking systems The results of the latest studies showed that the major outputs in Africa have not delved into the fourth industrial revolution (4IR) in the study of electronic banking services and systems. This revealed a gap that scholars in banking and financial studies can explore. This study showed that mobile banking, internet banking, and electronic commerce were exhaustively covered and highly ranked. However, the results revealed that major technical research areas have not been given much priority. These include security (f = 19, TLS = 40), authentication and information security (f = 17, TLS =33), computer crime (f = 16, TLS = 42), automatic teller machines (f =14, TLS = 12), network security (f = 13, TLS = 33) and online banking (f = 11, TLS =32). As such, future studies should consider research trajectories on these topics. Although this study has demonstrated a growing interest in BS, research trends and the predominant focus of research in BS using bibliometric analysis, future studies should use Google Scholar database, adding search terms not used in this study or improving Boolean operators, and compare such findings to enrich the ideas gathered from this study. LIMITATIONS AND STRENGTHS Although this paper covered numerous parameters in studying the banking services and banking systems literature from different sources and databases, one limitation of the study is that articles not published in English were excluded. Due to many journal sources, authors and countries with low outputs were excluded as only the top outputs were displayed in the tables. 
Also, a recently published quality article might not have attracted many citations, whereas older articles could have received higher citations, which is a limitation. For this reason, the total outputs by institutions, authors and journal sources were used to rank outputs instead of the total citation scores. Therefore, whenever older outputs with higher citations and newer articles are included in the same analysis, researchers could use the g-index metric instead of the h-index metric to mitigate this problem. Alternatively, citation counts alone should not be used as the sole means to measure the relevance of selections. Nevertheless, the bibliometric analysis was an important tool for evaluating the gaps in the literature and the research trends in the banking industry in terms of technological innovation. The analysis can help researchers identify gaps for future research. Institutions can use the findings to identify likely contributors with whom to collaborate, and journals with higher impact factors to which articles can be submitted in order to increase the visibility of research outputs. The data obtained are therefore reliable, replicable and verifiable, and the academic community in Africa and the banking industry can use them to adjust their future priorities to reflect technological integration and the impact of the fourth industrial revolution in driving innovation in banking system research.
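The suggestion above to complement the h-index with the g-index is straightforward to operationalize: both indices are computed from the list of per-article citation counts, with the g-index giving more weight to a few highly cited articles. The following is a minimal sketch under the standard definitions; the citation counts shown are illustrative only.

```python
# Minimal sketch (not the authors' code) of the h-index and g-index
# computed from per-article citation counts.
def h_index(citations):
    """Largest h such that h articles have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def g_index(citations):
    """Largest g such that the g most-cited articles together have at
    least g**2 citations; it rewards a few highly cited articles."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

citations = [45, 20, 9, 4, 3, 1, 0]    # illustrative counts only
print("h-index:", h_index(citations))  # -> 4
print("g-index:", g_index(citations))  # -> 7
```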
2020-06-25T09:10:02.417Z
2020-06-24T00:00:00.000
{ "year": 2020, "sha1": "e6d113f148e872bbe44788bca773922c4db08bb9", "oa_license": "CCBY", "oa_url": "https://www.businessperspectives.org/images/pdf/applications/publishing/templates/article/assets/13662/BBS_2020_02_Ajibade.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "ca2ee5ae9ba7f63e932e6eeec2a2f1a70a995c39", "s2fieldsofstudy": [ "Economics", "Business", "Computer Science" ], "extfieldsofstudy": [ "Business" ] }
22311902
pes2o/s2orc
v3-fos-license
Practice standards for quality clinical decision-making in nursing Clinical decision-making is a critical component of nursing practice, as the life of the patient is at stake. The quality of clinical decision-making is, therefore, essential in delivering quality nursing care. The facilitation of quality clinical decision-making in nursing requires the development of standards to monitor, evaluate and implement remedial actions that improve on the quality of clinical decision-making (Muller, 2002:203; Beyea & Nicoll, 1999:495). However, there are no such practice standards against which the quality of clinical decision-making by nurses can be evaluated and assessed. Introduction The development and setting of quality standards are the first and most basic steps in the process o f conducting quality assurance activities.In their Draft Charter for Nursing Practice, the South African Nursing Council (SANC, 2004:10) re-emphasise their commitment to the delivery of high quality nursing care by nurses.Clinical decision-making is a critical component of nursing practice.Nurses make daily clinical decisions that impact on the lives of their patients.The quality of these decisions therefore lies at the heart of the process to deliver quality nursing care.The achievement o f such outcom es re q u ires the developm ent and im plem entation of mechanisms to facilitate the quality of clinical decision-m aking.One such mechanism that can be implemented to ensure the quality thereof, and therefore the quality of nursing care, is to formulate ap p ro p riate p ro fessio n al stan d ard s (Beyea & N icoll, 1999:495;SANC, 2004:29).In this vein , stan d ard formulation is an essential activity of quality improvement. T he ov erall p ro cess o f q u ality improvem ent includes the setting of standards, practice m onitoring, the evaluation of identified practice problems, and resolving those practice problems (Muller, 2002:203).The development and use of standards are emphasised in the literature about the quality of care, as standards are used to derive criteria against which care, or the processes to deliver such care, are measured for the purposes of quality improvement (Dozier, 1998:22).Standards can be defined as statem ents relating to the scope of nu rsin g p ractice, in clu d in g both standards of care: aspects of the nurse's role such as assessment, planning and e v alu atio n ; and standards of p ro fessio n al p erfo rm ance, such as aspects of the nurse's role in quality assu ran ce and research (A m erican Nurses Association, 1991:1;Deab-Baar, 1993:33).Bachman and Malloch (1998:26) also noted that the concept of standards carries with it incredible confusion.Based on a literature review, Patterson (1988:625) also found evidence of such confusion.She identified and defined two concepts that need clarification: standard o f care and standard o f practice.A standard of care focuses on the recipient of care (the patient) and a standard o f practice focuses on the provider of care (the nurse).A standard of care is written about patient outcomes, whereas a standard of practice is written about the nursing pro cess (Jo h n so n & M cC loskey, 1992:53).Standards of practice are sometimes referred to as professional standards.Alternatively, standards can be classified as regulatory, voluntary and involuntary (Beyea& Nicoll, 1999:495).R egulatory standards are based on regulation usually m andated by the government.Voluntary standards are those dev elo p ed by health care practitioners and are often the 
work of a p ro fessio n al o rg a n isa tio n .B oth regulatory and voluntary standards can be paralleled with professional standards, which are promulgated by professional o rg an isatio n s, and accrediting and reg u lato ry bodies and in stitu tio n s (Dozier, 1998:22).Involuntary standards are those defined by professional liability insurance carriers.Standards may also be categorised according to the scope of influence, e.g.national, state, local or institutional standards (Beyea & Nicoll, 1999:495).It is important to draw a distinction betw een a standard and clinical guidelines, as these concepts are often confused or used interchangeably.Standards are different from guidelines. By com parison, guidelines refer to recommended approaches to managing patient/client conditions, focusing on specific aspects of patient care delivery connecting interventions and expected outcomes (Dozier, 1998:23).Clinical p ractice g u id elin es are statem ents designed to assist practitioners with decision-making about appropriate care for specific clinical circum stances.Clinical guidelines reflect the state of current clinical knowledge, as published in the scientific health care literature, reg ard in g the effec tiv en e ss and ap p ro p riaten ess o f procedures or practices (Child & Holmes, 1999:73).However, both guidelines and standards can serve as the basis for many activities, either w ithin nursing or the larger healthcare system.Guidelines reflect standards.They describe care delivery that is consistent with standards.Both can enhance m u ltid iscip lin ary collaboration (Childs & Holmes, 1999:74). Decision-making is a process carried out by the nurse (the provider of care), but it is focused on the patient (the recipient of care).In this vein, decision-making forms part of the nurse's daily practice.Therefore, standards for quality clinical decision-m aking can be regarded as practice standards, as they focus on the functions of the provider of care.Practice standards on decision-making in nursing refer to descriptive statements that reflect the minimum expected level of care and that settle disputes about the expected level of performance during a nurse's clinical decision-making. 
The importance of quality in health care has become more marked in the past few years.Measures to improve the quality of care, in the context of the reduced availability of health care staff, have led to the questioning o f the accepted boundaries of professional roles.One such role in question is that of the nurse as decision-maker.The need to improve the quality of clinical decision-making in nursing is one of the most serious issues facing present clinical nursing practice.Effective and efficient decision-making practices are emphasized in the White Paper on the Transformation of Public Services (1997) in order to achieve a highly efficient public service, including healthcare services.Decision-making forms an integral part to attain the latter.However, the quality of the decisions taken determine whether an efficient health care service is attainable.The incredible amount of healthcare data, com plex and continuing regulatory changes and, most im portantly, the erosion of public confidence in health care quality require significant action.In this vein, Malloch (1999:1) indicates that selected strategies must address the q u ality control needs and the u n p recedented dem ands placed on health care leaders.This is particularly relevant to the service-delivery point in the healthcare sector, where nurses' clinical decisions have a direct impact on the health status of the patient.Thus, developing quality control programmes that identify, m onitor and document quality outcomes is essential to restore public trust and confidence in healthcare.To do so, a collaborative approach to health care decision-making in general, but to clin ical decision-m ak in g in particular, is required. C linical decision-m aking is both a cognitive and an affective problem solving activity that focuses on defining p atien t problem s and selectin g appropriate treatm ent interventions (B uckingham & A dam s, 2000:981;Deloughery, 1998:47).In a clinical nursing practice setting, nurses work as members of a healthcare team and must communicate decisions to other members of the multi-disciplinary healthcare team to ensure the co n tin u ity and c o ordination of patient care.Therefore, co operative and collaborative efforts during clinical decision-making should be emphasised and reflected in standards of professional practice in terms of clinical decision-making. 
Nurses form the largest proportion of the healthcare delivery resources in the Healthcare sector.They therefore play an important role in the delivery of quality healthcare, in general, and in nursing care, in particular.Quality clinical decision making is an important process through which the nurse delivers nursing care.Q uality clinical decision-m aking in nursing refers to a rational, interactive, co llab o rativ e , co n su ltativ e and scientifically-based process.During this process, nurses m ake goal-directed choices between perceived alternatives, based on their abilities, within a specific context, with the purpose of promoting the health of the individual, group or community.These choices coincide with pred eterm in ed standards (A rries, 2002:308;Noone, 2002:21-22).The quality of decision-making will influence 63 Curationis March 2006 the quality of the outcome, viz.health prom otion and em powerm ent of the individual, group or community.In addition, not only does the n u rse's quality o f clinical decision-m aking influence the outcome thereof, but it also has fin an c ial im p licatio n s for the institution at large.Furthermore, the nurse, as a so -called independent practitioner, is not only responsible and accountable for quality clinical decision m aking to facilitate quality nursing specifically, but also for quality healthcare in general.The nurse therefore requires practice guidelines on clinical decision making that reflect excellence and are presented in the form of standards and criteria that are user-friendly and realistic. Problem statement Clinical decision-making in nursing is regarded as an important activity by the nurse, since a decision is a prerequisite for any significant action by the nurse to care for the patient.However, from unstructured observation by studying the South African Nursing C ouncil's (1993Nursing C ouncil's ( -1998) ) disciplinary reports, the following about nurses' clinical decision making has been observed: (i) an increase in the num ber of disciplinary cases am ong n u rses, and (ii) that these disciplinary cases reflect situations within which the nurse had made decisions to either maintain, restore or promote the health of the patient.It was however concluded from these observations that n u rses' clinical decision-m aking is ineffective, as it does not adhere to the framework of clinical, ethical and legal correctness for any nursing action, including clinical decision-making. As a possible solution to the afore mentioned problem, practice standards for quality clinical decision-making in nursing are required.The aim of these practice standards should be to evaluate, monitor and remedy actions implemented to improve the quality of clinical decision making, a process nurses follow during patient care.However, there are no such practice standards in the South African context, against which one can evaluate and assess n u rs e s ' q u ality o f clin ical decision-making. Purpose of the study The purpose of the study is to formulate practice standards for quality clinical decision-making in nursing. 
Definition of terms Practice standards Practice standards focus on the provider of care (the nurse) and are written about the nursing process.A practice standard on clinical decision-making is a written description about the desired level of performance during clinical decision making that reflects the connotative characteristics associated with excellence for measuring and evaluating the actual quality thereof (M uller in Booyens, 1998:606;Dozier, 1998:23). Quality Defining the term quality is almost an impossible task, as it has a multifaceted nature (D avis, 1987in Johnson & McCloskey, 1992:45).For the purposes o f this study, quality is defined as reflecting the characteristics of excellence as described in predetermined standards. Quality clinical decision-making Q uality clinical decision-m aking, a cognitive-affective problem -solving activity, refers to the outcomes of rational interactive collaborative and consul tativ e dynam ic p ro b lem -so lv in g processes, in which nurses and members o f the m ultidisciplinary health team engage to define patient problems, to select and im plem ent ap p ro p riate treatm en t in te rv e n tio n s, and to communicate decisions in accordance with predetermined standards to ensure the quality, continuity and coordination of patient care in order to facilitate health (Arries, 2002:308). Research design and method A qualitative, explorative, descriptive standard formulation research design (Mouton & Marais, 1990:45-46;Muller, 1990:49-55) has been followed to develop standards for quality clinical decision making in nursing.Standard develop ment requires a unique method. Standard development was based on the prin cip les described by M uller (in Booyens, 1998:607-608;636-637), and co n sists o f d ev elo p m en t and quantification phases that are modified to meet the requirements, as described by Lynn (1986:382-385), for instrument development.The development phase requires input from expert and grassroot level practitioners.The purpose is to determine what specialists in the various fields of nursing practice regard as good practice.Both inductive and deductive approaches can be employed to achieve the latter and ensure ow nership and trustworthiness of the standards.The quantification phase deals with the formal validation of the draft standard and the evaluation of the level of performance in nursing practice. The above process o f stan d ard development was modified in this study.The quantification phase was omitted, as the researcher argued that by following the principles of logical deduction and induction, credible and re aso n ab le standards could also be form ulated.Both in d u ctiv e and d ed u c tiv e approaches were followed during this process.See Table 1. The research was conducted in four phases, namely an empirical phase, a conceptualisation phase, a standard formulation phase and the last phase was the conceptualisation of a system for quality clinical decision-m aking in nursing.These four phases w ill be described in detail below. 
Phase 1: empirical phase To meet the first criterion proposed by Muller (in Booyens, 1998:607), that is, input from expert and grassroot level practitioners, empirical exploration and description of the expectations of the stakeholders in terms of quality clinical decision-making were carried out (see Table 1).To obtain richness in data about the expectations of these stakeholders, a multi-method approach was followed.F ocus group interview s (K reu g er, 1994:39-74;De Vos, 1998:313-324), individual interviews (De Vos, 1998:297-312) and naïve sketches (Giorgi, 1985:10-14) were employed.A non-probability, purposive and convenient sample (Bums & Grove, 2001:374;De Vos, 1998:199) of the stakeholders was conducted.Data was analysed by means o f the open coding approach described by Tesch (Tesch, 1990).To ensure the credibility of the results of the first phase, principles of trustworthiness (Lincoln & Guba, 1986:289-331), viz.p ro lo n g ed engagement, triangulation, co-coding, dense-description, step-wise repetition and an investigative audit, were adhered to. PHASE FOUR: A system for quality clinical decision-making nursing Research method: -Conceptualisation -Characteristics of a system according to systems theoretical perspective (Bertallaffny, 1950) Trustworthiness Standards of good conceptualisation (FUNDISA, 2000) making was carried out.A purposeful selection (Bums & Grove, 2001:376) of both national and international literature sources, viz.thesauruses, journal articles and subject-specific literature on the themes that emerged from the empirical phase, was conducted.The aim of the literature study was twofold, on the one hand to analyse the concept's quality and clinical decision-making respectively and, secondly, to integrate the results with th ose o f the em p irical phase in a conceptual framework, by employing both inductive and deductive reasoning strategies. This conceptual framework was used as a deductive guide to form ulate the standards for quality clinical decision m aking in n u rsing.To ensure the trustworthiness of the conceptualisation, principles for credible conceptualisation (F U N D IS A , 2 0 00), to g eth er w ith triangulation and scheduled peer group discussions, were employed. Phase 3: standard formulation phase D uring the th ird phase, p ractice standards for clinical decision-making in n u rsin g w ere fo rm u lated .The formulation of these practice standards was based on the statements logically derived from the conceptual framework.By employing reasoning strategies of an a ly sis, sy n th esis and inference, practice standards for quality clinical d ecisio n -m ak in g w ere derived inductively and deductively.To ensure the credibility of the standards for clinical decision-making in nursing, principles of log ic, p ro lo n g ed engagem ent, triangulation, peer-group discussion, dense description, step-wise repetition and an investigative audit (Lincoln & Guba, 1985:289-331) were applied.Two experts on standard form ulation in nursing were also consulted during this process. Phase 4: a system for quality clinical decision-making in nursing Based on the findings of the preceeding phases, a system for quality clinical d ecisio n -m ak in g in nursing was co n c ep tu alised .F igure one is the conceptual presentation of this system (see Figure 1).Before embarking on a description of the standards for quality clinical decision making, a description of the conceptual framework, on which the standards are based, is given. 
Quality clinical decision making in nursing Standards can be derived from different sources based on frameworks as diverse as the nursing process, health care needs, body systems or the process of care.Standard developm ent is based on a conceptual framework of a system for quality clinical decision-m aking in nursing (Figure 1). Q uality clinical decision-m aking in nursin g occurs in a m u lti-le v el, m u ltid im en sio n al co n tex t. The multidimensional nature of the context within which clinical decision-making occurs has sev eral u n co n tro lled dimensions that influence the quality thereof.It is therefore important for the nurse to consider these dim ensions during clinical decision-making.The context of quality clinical decision making brings about certain expectations of the stakeholders involved in such a decision.In nursing, stakeholders regard factors such as abilities (knowledge, skills and values) and resources (including both material and human resources) as im portant inputs for quality clinical decision-m aking.These inputs are transformed during the process of clinical decision-making into outcomes, viz. the pro m o tio n o f h ea lth and the empowerment of the individual, group or community.Argumentation, the logic of quality clinical decision-making, requires a rational interactive approach.This im p lies th at the nurse engages in d ialo g u e w ith o th e r ap p ro p riate health care p ro fessio n als through a process of collaboration, consultation and arg u m e n tatio n . R a tio n a l argumentation refers to a communicative and collaborative process of advancing, supporting, criticising and modifying claims, and the reciprocal statement of argum ents that all stakeholders are capable of understanding so that they may grant or deny adherence (Rossouw, 1993:293). Through a process of rational interaction, collaboration and consultation, nurses engag e w ith m em bers o f the multidisciplinary health team to define patient problems, select and implement appropriate treatment interventions, and communicate decisions in accordance with predetermined standards to ensure the quality, continuity and coordination of patient care (Arries, 2002:308).The 66 Curationis March 2006 aim of this interaction is to promote the health o f the in d iv id u a l, g ro u p or community through empowerment. Practice standards for clinical decision-making in nursing Practice standards for clinical decision making in nursing will be presented.The nurse, as a provider of healthcare, is an independent practitioner who m akes clinical decisions in collaboration with a multi-professional health team.As a decision-maker, the nurse synthesises theoretical, scientific and contemporary clinical knowledge and experience to assess the health status of the individual, group or community, and to promote their h ea lth and em pow erm en t on the wellness-illness continuum of health. 
I have indicated elsewhere in this article that practice standards focus on the nurse as a provider of nursing care. Therefore, these standards are sometimes referred to as professional practice standards. Unlike standards of care, which focus on the individual patient and his/her specific health status and on using an accepted, scientifically-based process such as the nursing process to address his/her health problems, standards of professional practice relate to the professional behaviour of the nurse while doing so, and particularly to using the process of clinical decision-making. The intention of professional practice standards on quality clinical decision-making is to provide direction for nursing practice regardless of the practice setting. Standards of professional practice usually involve the dimensions of quality of care, performance appraisal, education, collegiality, ethics, collaboration, research and the utilisation of resources (Childs & Holmes, 1999:74). Practice standards for quality clinical decision-making in nursing will be discussed under two main clusters: those that relate to professional practice, and those that relate to the clinical decision-making process and empowerment as the outcome of the latter. The dimensions listed by Childs and Holmes (1999:74) are integrated in the aforementioned clusters for the sake of simplicity and understanding.

1.2 Clinical decision-making takes place in the relevant professional, practice-specific framework of nursing practice. The nurse:
1.2.1 demonstrates insight and can describe relevant legislation, standards, policies and procedures that affect his/her clinical decisions as a nurse;
1.2.2 demonstrates responsibility and accountability for own clinical decisions and professional conduct;
1.2.3 demonstrates a commitment to ethical practice and a responsible attitude towards patients/families/members of the multidisciplinary team;
1.2.4 maintains current registration as a nurse;
1.2.5 practises clinical decision-making within his/her own level of clinical competence;
1.2.6 meets the requirements for continuing clinical competence with regard to clinical decision-making and nursing practice, including investing own time, effort or other resources to meet identified learning outcomes;
1.2.7 maintains own physical, psychological and emotional fitness for nursing practice;
1.2.8 continually identifies, monitors and documents evidence of clinical decision-making practice accurately and legally in relation to legislation and policies;
1.2.9 continuously refines and adapts practices of clinical decision-making to conform to legislation, standards and policies; and
1.2.10 identifies and understands the legal-ethical and clinical implications of his/her clinical decisions.
1.3 The nurse applies relevant professional ethics and philosophical frameworks to clinical decision-making. The nurse:
1.3.1 describes the ethical standards established by the respective professional or registering body relevant to clinical decision-making;
1.3.2 upholds the values contained in the South African Nursing Council (SANC) Code of Ethics, namely safe, competent and ethical care, choice, dignity, confidentiality, justice, accountability and a quality practice environment;
1.3.3 consistently demonstrates ethical attitudes, values and behaviours that are conducive to ethical clinical decision-making and practice;
1.3.4 consistently practises according to the responsibility and accountability statements in the SANC Code of Ethics;
1.3.5 identifies key strategies to resolve ethical dilemmas arising from clinical decisions;
1.3.6 critically reflects on the morality of clinical decisions and incorporates current evidence on moral reasoning in clinical decision-making;
1.3.7 is committed to his/her own professional development as a clinical decision-maker; and
1.3.8 demonstrates a commitment to confidentiality and respect for diversity.

1.4 The clinical context (micro-context) is conducive to rational interaction during clinical decision-making.
1.4.1 The nurse understands the context and systems in which healthcare is provided, and applies this knowledge to optimise healthcare.
1.4.2 The organisational structure, culture and climate are conducive to rational, interactive and collaborative clinical decision-making.
1.4.3 There is evidence of applicable collaboration, consultation and cooperation among members of the multidisciplinary health team.
1.4.4 There is evidence of continuous empowerment strategies to develop nursing staff's clinical decision-making competencies.
1.4.5 There is evidence of written, relevant and up-to-date policies, guidelines, protocols and procedures that guide clinical decision-making.
1.4.6 The nurse recognises the interdependence between diverse care providers while understanding the limitations and opportunities inherent in complex systems.
1.4.7 There is evidence of cost-effective strategies that ensure the availability of relevant resources to enhance the quality of clinical decision-making.
1.4.8 The nurse considers factors related to safety, effectiveness and cost in planning and clinical decision-making.

1.5 The nurse demonstrates appropriate and relevant clinical competencies (a specialised body of knowledge, skills and values) and utilises evidence from nursing science and the humanities to make clinical decisions. The nurse:
1.5.1 knows how and where to find relevant evidence to support the making of safe, appropriate and ethical clinical decisions;
1.5.2 interprets and uses current evidence from research and other credible sources to make clinical decisions;
1.5.3 understands and communicates the nursing contribution to clinical decision-making in health care practice;
1.5.4 shares nursing knowledge about clinical decision-making with patients, colleagues, students and others;
1.5.5 uses relationship and communication theories appropriate to interaction with colleagues, patients and others; and
1.5.6 interprets and uses current evidence from research and other credible sources to make clinical decisions.

Clinical decision-making
Rational clinical decision-making is believed to refer to an interactive process of assessment, diagnosis, planning, implementation and evaluation.
(i) Assessment
2.1 The nurse performs a comprehensive, functional, relevant and holistic assessment, using a developmental, bio-psycho-social approach as the framework for understanding the nature of the health problems patients present with. The nurse:
2.1.1 obtains and accurately documents a relevant, comprehensive and problem-focused health history, considering both bio-psycho-social and cultural changes;
2.1.2 assesses the dynamic interaction between the current complaint and the known acute/chronic health problems, in accordance with developmental status;
2.1.3 performs and accurately documents a comprehensive and problem-focused physical examination, considering both bio-psycho-social and cultural changes;
2.1.4 assesses and accurately documents relevant, comprehensive and problem-focused laboratory and diagnostic data, considering bio-psycho-social and cultural changes;
2.1.5 performs appropriate screening evaluations that are age, gender and development specific (including mental health, substance abuse, violence, behaviour, speech/language development, learning disabilities, etc.);
2.1.6 analyses the multiple effects of pharmacological agents, including home-made remedies and shop-purchased preparations, relating to the individual/group/community with health problems;
2.1.7 performs, accurately assesses and documents the impact of the environment on the health status of the individual, group, family or community, considering bio-psycho-social and cultural changes;
2.1.8 identifies health, bio-psycho-social and environmental risk factors for the individual, group or community (including developmental level, risk-taking behaviour, nutritional status, environmental factors, family issues, social support and immunisation status);
2.1.9 analyses roles, tasks and stressors of formal/informal systems/family caregivers for the individual, particularly for vulnerable and frail groups;
2.1.10 discriminates between multiple potential mechanisms causing signs and symptoms of health problems commonly diagnosed in the individual/group/community; and
2.1.11 analyses and synthesises the data collected to determine the health status of the individual/group/community.

(iv) Implementation
2.4 The nurse implements the identified plan of care in a legal-ethical, clinically correct and culturally congruent manner:
2.4.1 The nurse co-ordinates the delivery of care by:
2.4.1.1 employing strategies to promote the health and safety of the environment;
2.4.1.2 providing leadership in coordination with multidisciplinary health teams for delivering an integrated patient care service;
2.4.1.3 synthesising data and information to advocate the necessary system and community support measures, including environmental modifications; and
2.4.1.4 coordinating resources to enhance the delivery of care across the multidisciplinary healthcare continuum.
2.4.2 The nurse collaborates with other members of the multidisciplinary health team/patients/families in the identified plan of care, to enhance the abilities of others and to effect change by:
2.4.2.1 functioning as a member of the multidisciplinary health team to provide nursing expertise;
2.4.2.2 integrating the treatment plan with the goals of the multidisciplinary health team;
2.4.2.3 maintaining responsibility for the more specialised health treatment plan goals and communicating these goals to the rest of the multidisciplinary health team;
2.4.2.4 synthesising clinical data, experience, theoretical frameworks and evidence when providing consultation;
2.4.2.5 facilitating the effectiveness of consultation and collaboration by involving the relevant stakeholders in decision-making and negotiating role responsibilities;
2.4.2.6 communicating consultation and collaborative recommendations that influence the identified plan, facilitating understanding among stakeholders, enhancing the work of others and effecting change;
2.4.2.7 collaborating with nursing colleagues and other healthcare personnel to implement the care plan, if appropriate;
2.4.2.8 supporting collaboration with nursing colleagues and other members of the health team to implement the plan of care;
2.4.2.9 establishing and sustaining therapeutic and ethically sound relationships with patients/families/members of the multidisciplinary health team;
2.4.2.10 advocating and developing policies that clearly outline responsibility and accountability for everyone involved in clinical decision-making; and
2.4.2.11 communicating, collaborating and consulting with registered nurses and other members of the healthcare team about the provision of healthcare services.

2.4.3 The nurse consults with other members of the multidisciplinary health team during the identified plan of care to enhance the abilities of others and to effect change by:

(v) Evaluation
2.5 The nurse evaluates progress in the attainment of outcomes by:
2.5.1 conducting a systematic, ongoing and criterion-based evaluation;
2.5.2 systematically evaluating outcomes in relation to the structure and processes prescribed by the plan;
2.5.3 including the individual, group or community involved in the care/situation in the evaluative process;
2.5.4 using ongoing assessment data to revise the diagnosis, the outcomes and the plan, as needed;
2.5.5 evaluating the effectiveness of the planned strategies in relation to patient responses and the attainment of the expected outcome;
2.5.6 documenting and disseminating, as appropriate, the results of the evaluation, including any need for managerial action;
2.5.7 evaluating the accuracy of the diagnosis and the effectiveness of the interventions in relation to the patient's attainment of the expected outcome;
2.5.8 synthesising the results of the evaluation analyses to determine the impact of the plan on the affected individual, group or community; and
2.5.9 using the results of the evaluation analyses to make recommendations for process or structure changes, including policy, procedure or protocol documentation, as appropriate.
Outcome: empowerment
3.1 There is written evidence that clinical decision-making in nursing facilitates the empowerment of the individual, group or community, as measured by the following criteria:
3.1.1 Individuals, groups or communities are able to make informed decisions about identifying and prioritising problems that affect them.

Critique of the standards
Developing standards requires a structured approach that can incorporate either an empirical or a normative approach.

Empirical approach
The empirical approach, also called the inductive approach, requires a survey of what is currently regarded as good practice in similar circumstances (Muller, 2002:206). To meet these criteria, the expectations of stakeholders in terms of quality clinical decision-making in nursing were explored and described. Based on these results, principles for standard formulation were generated by using the inductive and deductive reasoning strategies of analysis, synthesis and inference.

Normative approach
In the normative approach, the objective is to determine what specialists in the various fields regard as good practice (Muller, 2002:206), in other words, what ought to happen during clinical decision-making. These criteria were met by conducting a literature study on clinical decision-making, i.e. its structure, process and outcome. Again, based on these results, principles for standard formulation were generated using the inductive and deductive reasoning strategies of analysis, synthesis and inference.

In following the two processes above, it was ensured that reasonable standards were formulated based on what is considered to be "right" inside and outside nursing. A conceptual framework was thus constructed, based on the results of the empirical and normative approaches. The general value system, as set out in the philosophical, legal and ethical framework of nursing, also gives direction to what could be considered right and wrong during clinical decision-making.

The standards for quality clinical decision-making met the following criteria: they are realistic, understandable, manageable and achievable.

Realistic standards
The standards are realistic as they were inferred from both empirical and conceptual data. Consensus discussion with two experts on standard formulation confirms the realistic nature of these standards.

Understandable, manageable and achievable standards
The standards are understandable, as they are written in a language known to local nurses in the country. During the literature study phase it was ensured that differences in language and nuances in meaning were overcome through the re-interpretation of the structure, process and outcome of clinical decision-making as it operates within nursing. Thirty-six standards for quality clinical decision-making in nursing were initially formulated (Arries, 2002:327-354). Following the recommendations of experts on standard formulation, and considering the criteria of manageability and achievability, these standards were re-organised and categorised. The thirty-six standards were reduced to twelve standard statements, each with its own criteria for measurement.
Conclusion and recommendations
Practice standards for quality clinical decision-making in nursing were developed. These standards were based on the expectations of stakeholders regarding quality clinical decision-making in nursing and on an in-depth literature study. In employing the rules of inductive and deductive logic, it is believed that reasonable and trustworthy standards, based on the empirical findings on the expectations of stakeholders and the conceptualisation of clinical decision-making, were developed. The following recommendations are made on how these standards could be used to guide clinical decision-making in nursing:

Figure 1: A system for quality clinical decision-making in nursing.

(i) Nursing practice
(a) Standards for quality clinical decision-making could be utilised as a foundation for interdisciplinary and inter-institutional consensus building.
(b) By defining the scope of clinical decision-making for nurses, these standards could be utilised as an infrastructure for the development of institutional standards of care and guidelines.
(c) Using these standards to link key concepts such as clinical decision-making, its contextual dimension, ethics and empowerment outcomes could serve as a foundation to reduce fragmentation.

(ii) Nursing education, management and research
(a) By defining the scope of clinical decision-making, these standards could be utilised as an infrastructure for competency-based education programmes.
(b) The standards can be utilised to develop educational sessions and for curriculum development emphasising competencies for clinical decision-making in nursing.
(c) These standards can be utilised to plan, organise and evaluate clinical decision-making practices in nursing.
(d) Lastly, these standards can be utilised to evaluate and enhance multidisciplinary collaboration during clinical decision-making among healthcare professionals.

References
standards: Linking care, competence, and quality. Journal of Nursing Care Quality, 12(4):22-29.
FUNDISA 2000: Standards for good conceptualisation in research. (Unpublished.) Johannesburg: Department of Nursing Science.
GIERE, RM 1984: Understanding scientific reasoning. Second edition. New York: Holt, Rinehart and Winston.
GIORGI, A 1985: Phenomenology and psychological research. Pittsburgh: Duquesne University Press.
JOHNSON, M & MCCLOSKEY, JC 1992: The delivery of quality health care. Series on Nursing Administration, 3. St Louis: Mosby Year Book.
KRUEGER, RA 1994: Focus groups: A practical guide for applied research. Second edition. London: Sage Publications.
LINCOLN, YS & GUBA, EG 1985: Naturalistic inquiry. New York: Sage.
LYNN, NR 1986: Determination and quantification of content validity. Nursing Research, 35(6):382-385.
MALLOCH, K 1999: The performance measurement matrix: A framework to optimise decision-making. Journal of Nursing Care Quality, 13(3):1-12.
MORGAN, LD 1998: Planning focus groups. London: SAGE Publications.
MOUTON, J & MARAIS, HC 1990: Metodologie vir die geesteswetenskappe: Basiese begrippe. Pretoria: Raad vir Geesteswetenskaplike Navorsing.
MOUTON, J 1996: Understanding social research. Pretoria: Van Schaik.
MULLER, M 1990: Navorsingsmetodologie vir die formulering van verpleegstandaarde. Curationis, 13(3 & 4):49-55.
MULLER, M 2002: Nursing dynamics. Third edition. Sandown: Heinemann.
NOONE, J 2002: Concept analysis of decision-making. Nursing Forum, 37(3):21-32.
PATTERSON, C 1988: Standards for patient care: The Joint Commission focus on nursing quality assurance. Nursing Clinics of North America, 23:625-638.
SOUTH AFRICA (REPUBLIC) 1997: White Paper on the transformation of public service delivery (Batho Pele). Notice 1459 of 1997. Pretoria: State Press.
ROSSOUW, GJ 1993: Moral decision-making amidst moral dissensus: A postmodern approach to moral decision-making in business. Koers, 58(3):283-298.
SOUTH AFRICAN NURSING COUNCIL 2004: Draft Charter for Nursing Practice. Pretoria: SANC.
SOUTH AFRICAN NURSING COUNCIL 1993-1998: Disciplinary reports. Pretoria: SANC.
TESCH, R 1990: Qualitative research: Analysis types and software tools. New York: The Falmer Press.
VAN VEUREN, P 1993: Kritiese denke as opvoedkundige imperatief. Koers, 58(3):273-282.
WALKER, LO & AVANT, KC 1995: Strategies for theory construction in nursing. Third edition. Norwalk: Appleton & Lange.
EEG-microstate dependent emergence of perceptual awareness We investigated whether the differences in perceptual awareness for stimuli at the threshold of awareness can arise from different global brain states before stimulus onset indexed by the EEG microstate. We used a metacontrast backward masking paradigm in which subjects had to discriminate between two weak stimuli and obtained measures of accuracy and awareness while their EEG was recorded from 256 channels. Comparing targets that were correctly identified with and without awareness allowed us to contrast differences in awareness while keeping performance constant for identical physical stimuli. Two distinct pre-stimulus scalp potential fields (microstate maps) dissociated correct identification with and without awareness, and their estimated intracranial generators were stronger in primary visual cortex before correct identification without awareness. This difference in activity cannot be explained by differences in alpha power or phase which were less reliably linked with differential pre-stimulus activation of primary visual cortex. Our results shed a new light on the function of pre-stimulus activity in early visual cortex in visual awareness and emphasize the importance of trial-by-trials analysis of the spatial configuration of the scalp potential field identified with multichannel EEG. INTRODUCTION Under certain circumstances, sensation and perception can be dissociated such that the same physical stimulus gives rise to different perceptual outcomes. Phenomena like multi-stable perception (e.g., the Necker cube and binocularly rivalry) or stimuli presented at perceptual thresholds share the fact that the same stimulus can be perceived one way or another or that it can either be perceived or not. These conditions allow us to study perceptual awareness independent of sensory processing. Since such differences in perceptual awareness cannot arise from physical differences in the stimuli, they might arise from differences in the brain state before the stimulus is encountered (for a recent review see . Imaging techniques with high temporal resolution, such as EEG and MEG, provide a means of distinguishing pre-stimulus activity from post-stimulus activity. The EEG measures the electrical field generated by the brain by using electrodes placed across the scalp to differentially measure the summation of all concurrently active intracranial sources at a given time point. The EEG measurement can be considered as a matrix with space in one dimension and time in the other dimension. The analyses of the EEG can focus on the temporal dimension and assess differences in frequency power or phase at selected electrodes, or it can focus on the spatial dimension and assess topographic differences of the electric field. Both characteristics of the EEG have been shown to vary before stimulus onset and to influence how upcoming stimuli can be treated and perceived. Differences in perceptual awareness of stimuli presented at the detection and discrimination thresholds could be related to differences in pre-stimulus power and phase in the alpha frequency band. The alpha band comprises frequencies between 8 and 12 Hz, and its functional significance has been most commonly described by reflecting cortical excitability to which its power is inversely related (Pfurtscheller, 1992), with higher levels of alpha power corresponding to lower levels of excitability, and vice versa. 
In line with this notion, it has been shown that the detection of a light pulse presented at the sensory threshold depends on the pre-stimulus alpha power: undetected stimuli were preceded by increased alpha power compared to detected stimuli (Ergenoglu et al., 2004). Likewise, illusory visual percepts (phosphenes) induced by a TMS-pulse have been shown to depend on both inter- (Romei et al., 2008b) and intra-individual (Romei et al., 2008a) differences in pre-stimulus alpha power. In addition to detection, discrimination ability in a backward masking task has been related to both inter- (Hanslmayr et al., 2005) and intra- (Hanslmayr et al., 2007;van Dijk et al., 2008) individual differences in pre-stimulus alpha power. Also, the perceptual reversals of a Necker cube have been shown to be preceded by decreased alpha power (Isoglu-Alkaç et al., 1998, 2000Isoglu-Alkaç and Strüber, 2006). Similarly, perceptual reversals during binocular rivalry are preceded by decreased gamma power (Doesburg et al., 2005). In addition, the detection of a near-threshold stimulus (Busch et al., 2009) and the efficiency of metacontrast masking (Mathewson et al., 2009) have also been shown to be related to local differences in the pre-stimulus alpha phase. Taken together, these results suggest that the ability to detect and discriminate stimuli presented at the perceptual threshold can vary as a function of the pre-stimulus alpha power and phase, and, hence, on the excitability of early visual cortex through pulsed inhibition (Mathewson et al., 2011). The problem is that amplitude, power and phase modulations of EEG waveforms are local measures that vary with the reference; in addition, amplitude and power modulations vary at every instant, and the phase is different at every electrode (Lehmann and Michel, 1989), which makes it difficult to interpret the physiological meaning of local differences in power or phase between conditions. The EEG scalp potential field on the other hand is a global and reference-free measure of overall brain activity. Different topographies of the potential field directly indicate differences in the configuration of the underlying sources (Helmholtz, 1853;Vaughan, 1982). Unlike the constantly changing amplitude and power modulations, the configuration of the scalp topography remains stable for brief periods (∼80-120 ms) with sharp transitions between subsequent states. These brief states of stable topography have been named the "EEG microstates". Microstates have been shown to characterize the contents of spontaneous thoughts (Lehmann et al., 1998(Lehmann et al., , 2010, to explain the trial-totrial differences in the hemispheric lateralization of emotional word processing (Mohr et al., 2005) and to determine the topography of ERPs (Kondákor et al., 1995;Kondakor et al., 1997). We have recently shown that microstates can be considered the electrophysiological correlate of resting-state networks identified with fMRI which suggests that the momentary scalp configuration represents the activity in a specific neurocognitive network Van de Ville et al., 2010). More recently, we have started to investigate the notion that the global state of the brain indexed by the pre-stimulus microstate can determine the perceptual awareness of multi-stable stimuli. These stimuli are physically identical but can have different perceptual interpretations. 
We could show that the perceptual reversals of ambiguous figures (Britz et al., 2009) and during binocular rivalry arise as a direct consequence of the pre-stimulus microstate. In both studies, stimuli were presented intermittently, and we identified two microstate topographies immediately before stimulus onset that dissociated perceptual reversals from perceptual stability. Statistical parametric mapping of their concomitant source differences showed that the reversals were caused by increased neuronal activity in the right inferior parietal lobe in both cases. Microstates have not yet been used to investigate the emergence of perceptual awareness at sensory thresholds. Metacontrast masking is a powerful technique for experimentally manipulating the visibility of stimuli. A briefly presented target stimulus is followed by a mask with the same inner cutout of the same contour as the stimulus. The visibility of the target varies as a U-shaped function of the interval between stimulus and mask: at very brief and long inter-stimulus intervals (ISIs), the target is visible, but at intermediate ISIs, the mask efficiently renders the target invisible. Moreover, within those intermediate ISIs, there appears to be a "sweet spot" at which the masking effect is efficient in roughly 50% of cases, i.e., the same stimulus is perceived in about half the trials and not perceived in the other half of the trials. The efficiency of making is commonly explained by disruption of re-entrant processing between higher and lower visual areas after stimulus onset (Fahrenfort et al., 2007(Fahrenfort et al., , 2008 and recurrent processing within early visual areas (Boehler et al., 2008). We assessed a different hypothesis, namely that perceptual awareness and the efficiency of masking might depend on the global brain state at the time of stimulus arrival. In the present study, we used Electrical Neuroimaging (Murray et al., 2008;Michel et al., 2009) to investigate whether differences in perceptual awareness can arise from differences in the prestimulus brain state indexed by the EEG microstate immediately before stimulus onset. We used a metacontrast backward masking paradigm where subjects had to discriminate between two targets and assessed differences in subjective awareness while performance was kept constant for physically identical stimuli. We compared the same physical stimulus when it was correctly identified with and without awareness, and equating performance and stimulus properties avoided the confound of awareness with performance and stimulus properties. Subjective awareness and objective performance have been shown to be independent, and awareness is not necessary for correct performance (Schwiedrzik et al., 2011). We hypothesized that different pre-stimulus microstates and thus different neuronal networks in the brain are active when subjects will become aware of a stimulus in a given trial than when they do not, and our goal was to identify two states that dissociate correct stimulus identification with and without awareness. Statistical parametric mapping of their concomitant intracranial generator differences will then reveal the location of activity differences for stimuli that were correctly identified with and without awareness. In addition to the global measure of prestimulus microstates, we investigated local differences in alpha power and phase in order to relate our findings to those from previous studies. 
Pre-stimulus differences in alpha power have been independently related to performance and awareness. If pre-stimulus alpha power over visual cortex is related to visual awareness, we expect to find higher alpha power before stimuli that were correctly detected with than without awareness. Similarly, differences in alpha phase over occipital electrodes at stimulus onset should vary as a function of awareness.

SUBJECTS
Twenty-three healthy adults (7 male, mean age 23.3 years, range 18-37) were initially screened for the EEG study, and a separate group of eight adults (3 male, mean age 28.87 years, range 22-37 years) participated in a behavioral pre-test. All subjects were right-handed as assessed by the Edinburgh Handedness Inventory (Oldfield, 1971) and had normal or corrected-to-normal visual acuity as assessed with the Freiburg Visual Acuity Test (Bach, 1996); the mean decimal visual acuity across subjects was 1.698. None of the subjects reported a history of psychiatric or neurological impairments. Subjects participated for monetary compensation of CHF 20/h after giving informed consent approved by the Ethics Committee of the University Hospital of Geneva. Eight participants did not continue to the EEG study because of their behavioral results in a training period (either too many aware or too many unaware trials). A total of 15 subjects (4 male, mean age 23.81 years, range 19-37) completed the EEG experiment. The data from four subjects were excluded from the analysis because of primarily unaware responses in one case, primarily correct aware (CA) responses in another case, and chance performance and an insufficient number of acceptable trials due to data quality in the two other cases. Thus, the behavioral and EEG data from a total of 11 subjects were submitted to further analysis.

STIMULI AND PROCEDURE
Figure 1 illustrates the stimuli and experimental procedure. Target stimuli were a square and a diamond (the square rotated by 45°), subtending 1° of visual angle. The mask was a larger contour of the two superimposed targets, which subtended 2° of visual angle, with an inner cutout of the same contour. All stimuli were presented in white (67.21 cd/m²) on a black background in the center of a CRT screen with a refresh rate of 75 Hz. Stimulus presentation and timing were achieved using E-Prime 2 (Psychology Software Tools, Inc., Pittsburgh, USA). Each trial began with the presentation of a fixation cross (1°) at the center of the screen for 500 ms. After a blank interval of 500 ms, one of the two possible targets (square or diamond) was presented for 39 ms. The target was followed by a blank interval of variable duration (39, 52, 65, 104 ms). Subsequently, the mask was presented for 52 ms. After the offset of the mask, subjects first had to indicate which target stimulus they saw, yielding a measurement of accuracy. They then had to indicate whether they actually saw the target or whether they guessed the answer, yielding a measurement of awareness. All responses were made with the index and middle fingers of the right hand (index finger for the square and middle finger for the diamond for the first question, and index finger for aware and middle finger for unaware for the second one). Each session started with a practice run of 520 trials, and subjects performed 8 blocks of 98 trials for a total of 784 trials.

FIGURE 1 | Experimental procedure. A diamond or square was presented for 39 ms. After a variable ISI (39, 52, 65, 104 ms), it was followed by a contour mask. Subjects first had to indicate which stimulus they saw (accuracy measurement) and then whether they saw the stimulus or whether they were guessing (awareness measurement).

Since the objective of the current study was to assess differences in awareness when performance was kept constant for physically identical stimuli, we compared correctly identified stimuli which differed in awareness. We therefore first identified the ISI at which subjects had roughly equal numbers of aware and unaware correct trials. The paradigm was validated in a behavioral pretest in which we tested 7 ISIs (13, 26, 39, 52, 65, 78 and 104 ms) with 8 subjects. This behavioral experiment showed that most subjects had equal numbers of aware and unaware correct trials at ISIs of 39, 52 and 65 ms. We used those ISIs in the subsequent EEG experiment, in addition to an easily visible condition (104 ms), in order to reduce frustration.

EEG RECORDING AND RAW DATA PROCESSING
The EEG was continuously recorded from 256 carbon-fiber coated Ag/AgCl electrodes using a Hydrocel Geodesic Sensor Net®. The EEG was digitized at 1 kHz with a band-pass filter of 0-100 Hz and a recording reference at the vertex; impedances were kept below 30 kΩ. Electrodes located on the cheeks and in the nape were excluded, and 204 electrodes were maintained for subsequent analysis. Before selecting the relevant epochs, the EEG was re-referenced to the common average reference and digitally filtered between 1 and 30 Hz. We used a 2nd-order Butterworth filter with a −12 dB/octave roll-off; the filter was computed linearly with two passes, one forward and one backward, in order to eliminate phase shifts, and with poles calculated each time to the desired cut-off frequency. We extracted epochs of 50 ms before stimulus onset for the CA and Correct Unaware (CU) conditions at each subject's ideal ISI condition, and trials contaminated by oculomotor and other artifacts were excluded. For each participant, channels exhibiting substantial noise were interpolated using a 3D spherical spline interpolation procedure (Perrin et al., 1989). On average, 6.3 channels were interpolated for each subject. The analysis was performed using the Cartool software by Denis Brunet.

ANALYSIS OF PRE-STIMULUS MICROSTATES
As mentioned above, the topography of the scalp electric field remains quasi-stable for brief periods of ∼80-120 ms, the so-called EEG microstates (Lehmann et al., 1987; Koenig et al., 2002). During these periods of stability, only the strength, but not the topography, of the field can change. The strength of the scalp field is reflected in the Global Field Power (GFP), which is computed as the spatial standard deviation of the potential field (Lehmann and Skrandies, 1980; Skrandies, 1990). Local maxima of the GFP are hence the best representative of a given microstate in terms of signal-to-noise ratio. Previous studies have shown that only the microstate immediately before stimulus onset is crucial for the determination of the fate of an upcoming stimulus (Kondákor et al., 1995; Kondakor et al., 1997; Lehmann et al., 1998; Mohr et al., 2005; Britz et al., 2009), which is why we restricted our analysis to the microstate immediately before stimulus onset. The microstate analysis comprised five steps: First, we determined for each subject the ISI at which there were a similar number of trials in the CA and CU conditions.
Second, we extracted the topographic map at the GFP maximum closest to stimulus onset in the 50 ms time window before stimulus onset. Because the topography remains stable for ∼100 ms with abrupt transitions between subsequent states, we reasoned that the GFP peak closest to stimulus onset in the 50 ms time window before stimulus onset was the best representative of the pre-stimulus microstate in a given trial. We did this for the CA and CU conditions for each subject. Third, we jointly submitted the pre-stimulus microstate maps from all subjects in the CA and CU conditions to a k-means spatial cluster analysis to identify the templates of the most dominant microstate maps in the two conditions. We wanted our analysis to be strictly data-driven and made no a priori assumptions regarding the number of clusters or the amount of global explained variance (GEV). We performed a cluster analysis with 20 different solutions ranging from 1 to 20 clusters and determined the best solution by means of the minimum of the cross-validation criterion (CV). The CV is a measure of predictive residual variance, i.e., the difference between the data and the model, and its minimum identifies the solution for which the residual variance is minimal or-in other words-the minimum number of clusters that best explain the data. Fourth, we computed a strength-independent spatial correlation between the template maps representing the optimal solution of the cluster analysis and the topographic map of the single trials. We matched, i.e., labeled each single trial pre-state microstate map with the template map it best corresponded with, thereby assessing its GEV. The GEV is the sum of the explained variance weighted by the GFP. It is a measure of how well a map explains the data both in terms of strength and in terms of frequency of occurrence. This was done to determine how well the templates identified by the cluster analysis are represented in the raw data of each subject. Fifth, we finally determined which maps dissociated the CA and the CU conditions by statistically comparing their GEV between these conditions. ANALYSIS OF PRE-STIMULUS SOURCE DIFFERENCES We extracted the single trials labeled by the templates of the maps that dissociated the CA and CU conditions and estimated the magnitude of their intracranial generators with a local autoregressive average (LAURA) inverse solution (Grave de Peralta Menendez et al., 2004). LAURA was computed with a locally spherical realistic head model (LSMAC; Brunet et al., 2011) using the ICBM 152 non-linear atlas of the Montreal Neurological Institute (MNI; Fonov et al., 2011) as the standard brain for all subjects. The LSMAC model does not require the estimation of a best fitting sphere. Instead, it uses the realistic head shape and estimates the local thickness of scalp, skull and brain underneath each electrode. Then, these thicknesses are used in a 3-shell spherical model with the local radii, which allows taking into consideration the real geometry between the electrodes and the solution points. First, the brain surface was extracted from this atlas, and then the gray matter was extracted from the brain. A total of 4766 solution points was regularly distributed in the gray matter of the cerebral cortex and limbic structures. The forward problem was solved with an analytical solution with a 3-layer conductor model. This somewhat simplified realistic head model allows an accurate and rapid analytical solution of the forward problem. 
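To make the GFP-peak extraction and single-trial labeling steps described above more concrete, the sketch below illustrates, in Python with NumPy, how the map at the GFP peak closest to stimulus onset could be obtained for each trial and how single-trial maps could be assigned to template maps with a strength-independent spatial correlation, yielding a per-template GEV. All function and variable names are illustrative assumptions; the published analysis was performed with Cartool, and details such as polarity handling may differ from this sketch.

```python
import numpy as np

def gfp(v):
    """Global Field Power: spatial standard deviation of one average-referenced map."""
    v = v - v.mean()
    return np.sqrt(np.mean(v ** 2))

def prestim_map(epoch, sfreq=1000, window_ms=50):
    """Map at the GFP peak closest to stimulus onset.

    `epoch` is an array (n_channels, n_samples) that ends at stimulus onset;
    only the last `window_ms` are searched for local GFP maxima."""
    n = int(window_ms * sfreq / 1000)
    seg = epoch[:, -n:] - epoch[:, -n:].mean(axis=0)          # average reference
    g = np.sqrt(np.mean(seg ** 2, axis=0))                    # GFP time course
    peaks = np.flatnonzero((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:])) + 1
    t = peaks[-1] if peaks.size else int(np.argmax(g))        # peak closest to onset
    return seg[:, t]

def label_and_gev(trial_maps, templates):
    """Assign each single-trial map to its best-matching template map and
    return the labels plus the global explained variance (GEV) per template.

    The spatial correlation is strength-independent (maps are z-scored across
    channels); taking the absolute correlation additionally ignores polarity,
    which is an assumption of this sketch."""
    X = np.asarray(trial_maps, dtype=float)      # (n_trials, n_channels)
    T = np.asarray(templates, dtype=float)       # (n_templates, n_channels)
    Xn = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    Tn = (T - T.mean(axis=1, keepdims=True)) / T.std(axis=1, keepdims=True)
    corr = Xn @ Tn.T / X.shape[1]                # Pearson correlation per trial/template pair
    labels = np.argmax(np.abs(corr), axis=1)
    g = X.std(axis=1)                            # GFP of every trial map
    gev = np.zeros(T.shape[0])
    for k in range(T.shape[0]):
        sel = labels == k
        gev[k] = np.sum((g[sel] * corr[sel, k]) ** 2) / np.sum(g ** 2)
    return labels, gev
```

In a full pipeline, a spatial k-means clustering of the single-trial maps would precede the labeling step to obtain the template maps, with the number of clusters selected by the cross-validation criterion; that step is not shown here.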
It has been shown to give similar results to boundary element head models (Guggisberg et al., 2011). Numerous experimental and clinical studies have shown that this model provides reliable and accurate estimations of intracranial currents (Brodbeck et al., , 2011Groening et al., 2009;Vulliemoz et al., 2009;Plomp et al., 2010). When considering estimations of intracranial current distributions, one is faced with the problem of thresholding. There cannot be a predefined threshold that indicates when an estimated source can be reliably considered as "active". One way of overcoming this is by statistically comparing the estimated intracranial currents between conditions (James et al., 2008(James et al., , 2011Britz et al., 2009Plomp et al., 2009Plomp et al., , 2010. Comparable to statistical parametric mapping used in fMRI research, we statistically compared the estimated intracranial currents in the CA and CU conditions at every solution point. We did this analysis twice: once using only those trials that were labeled as the microstate maps that dissociated the CA and CU conditions and once using all trials irrespective of their microstate labeling. For all analyses reported in the manuscript, we used the False Discovery Rate (Benjamini and Hochberg, 1995) to control for multiple comparisons. ANALYSIS OF PRE-STIMULUS POWER AND PHASE In addition, we analyzed local pre-stimulus differences in alpha power and phase in order to relate our findings of the microstate analysis to those of previous studies (Ergenoglu et al., 2004;Hanslmayr et al., 2005Hanslmayr et al., , 2007Busch et al., 2009;Mathewson et al., 2009). In order to assess the distribution of phase angles and lags, we performed the analysis at all 204 electrodes, and in order to assess the effect of the chosen reference on the distribution of the phase angles and lags, we repeated the phase analysis using five different references: the average reference, averaged mastoids, FPz, Cz and Oz. We applied a discrete Fourier Transform with a Blackman window to the raw EEG in the 200 ms window before stimulus onset and extracted the power and phase for every trial at all electrodes. With a sampling rate of 1000 Hz and a time window of 200 ms, the frequency resolution of the FFT is 5 Hz. We compared the power at 10 Hz in the pre-stimulus period for all trials and those trials classified as the microstate maps that dissociated the CA and CU conditions. For the analysis of phase, we first computed the mean phase angle for each subject in the CA and CU conditions at each electrode. We then assessed where these phases differed significantly using a Watson-Williams test (Watson and Williams, 1956) as implemented in the CircStat Matlab toolbox (Berens, 2009). Pilot study We assessed awareness as a function of accuracy at the 7 ISIs and the majority of subjects showed equal numbers of trials in the CA and CU condition at an ISI of 39 ms, the results are plotted in Figure 2. Figure 3 summarizes the behavioral results. The majority of subjects (75%) showed similar numbers of trials in the CA and CU condition at an ISI of 39 ms, 16.7% of subjects at an ISI of 52 ms and 8.3% at 65 ms. Performance was well above chance at each of these ISIs (73, 76 and 80%, respectively). On average, subjects had 40% (SD = 11) of trials in the CA condition and 34% (SD = 9) of trials in the CU condition, this difference was not significant (t (1,10) = 1.06, p = 0.31). Mean reaction times were 665 ms (SD = 212 ms) in the CA condition and 819 ms (SD = 230) in the CU condition. 
This difference was significant (t (1,10) = 3.8, p = 0.0036). Although some subjects reported having seen more squares than diamonds or vice versa, there was no difference in the identification accuracy for both types of stimuli (squares: 81% (SD = 11), diamonds: 70% (SD = 22); t (1,10) = 1.52, p = 0.15). There were no learning effects in the main EEG experiment: neither accuracy rates (F < 1) nor awareness (F (1,7) = 1.33, p = 0.249) differed between blocks. PRE-STIMULUS MICROSTATES After artifact rejection, on average 152 trials were retained for every subject. The GFP peak closest to stimulus onset occurred on average 12.71 ms before the stimulus. The prestimulus microstate maps at the GFP peak closest to stimulus onset of these trials were submitted to a k-means spatial cluster analysis. The cross validation criterion yielded 16 maps as the best solution which explained 76.26% of the Global Variance. We then computed a strength-independent spatial correlation between the template maps identified in the cluster analysis and those of each trial and statistically assessed which template maps best dissociate the CA and CU conditions. Two microstate maps dissociated the CA and the CU condition with respect to GEV; their templates are displayed in Figures 4A and 4B, respectively. Map 3 had a significantly higher GEV in the CU than the CA condition (t (1,10) = −2.67, p = 0.0234) and Map 16 had a significantly higher GEV in the CA than the CU condition (t (1,10) = 2.98, p = 0.014). On average, 15 (+/− 2.8) % of trials were classified as Map 3 or Map16. PRE-STIMULUS SOURCE DIFFERENCES We computed distributed LAURA inverse solutions for trials classified as microstate maps 3 and 16 and assessed their statistical difference at every solution point (Figure 4). We found statistically significant increased activity in bilateral Cuneus and Lingual Gyrus in the CU compared to the CA condition (MNI coordinates of maximal difference: x = −3.03, y = −98.08, z = −5.7, t = −4.72, p = 0.00082). When considering all trials of the CU and CA conditions irrespective of their microstate map classification, we found no differences in current density anywhere in the brain. PRE-STIMULUS POWER AND PHASE DIFFERENCES We found no pre-stimulus power differences at 10 Hz. This holds for all trials as well as for trials classified as microstate maps 3 and 16. Figure 5 displays the results of the phase analysis. Panel 5a shows the topographic distribution of the phase angles in the CA and CU conditions (left and middle panels) and the phase lags (the difference of the phase angle in the CA and CU conditions) at all electrodes (right panel). We found significant phase differences between the CA and CU conditions at 94 out of 204 electrodes. Nearly opposite phase angles (phase lags of >170 • ) were found at only 13 out of 204 electrodes. At five of those electrodes (107,115,129,131,205), phase lags were >170 • , and at the other eight (49,100,122,139,142,143,149,211), the phase lags were < −170 • . The topographic distribution of the phase angles in the CA and CU conditions and the phase lags as well as the location of significant phase lags depended strongly on the chosen reference (Figure 5c). 
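For readers who wish to reproduce the frequency-domain part of this analysis, the following minimal sketch shows how pre-stimulus power and phase at 10 Hz could be extracted with a Blackman-windowed DFT and how the mean phase directions of two conditions could be compared. The Watson-Williams implementation below is a simplified version that omits the small-sample correction factor applied by the CircStat toolbox used in the actual analysis; all function names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f as f_dist

def alpha_power_phase(epoch, sfreq=1000, win_ms=200, freq=10.0):
    """Blackman-windowed DFT of the pre-stimulus window of one trial.

    `epoch` is (n_channels, n_samples) ending at stimulus onset. With 200 ms
    at 1 kHz the frequency resolution is 5 Hz, so 10 Hz falls on bin 2."""
    n = int(win_ms * sfreq / 1000)
    seg = epoch[:, -n:] * np.blackman(n)
    spec = np.fft.rfft(seg, axis=1)
    k = int(round(freq * n / sfreq))
    return np.abs(spec[:, k]) ** 2, np.angle(spec[:, k])   # power, phase per channel

def circ_mean(phases):
    """Circular mean of phase angles in radians."""
    return np.angle(np.mean(np.exp(1j * np.asarray(phases))))

def watson_williams(ph1, ph2):
    """Two-sample Watson-Williams test for a common mean direction
    (simplified: the bias-correction factor used by CircStat is omitted)."""
    ph1, ph2 = np.asarray(ph1), np.asarray(ph2)
    r1 = np.abs(np.sum(np.exp(1j * ph1)))
    r2 = np.abs(np.sum(np.exp(1j * ph2)))
    r = np.abs(np.sum(np.exp(1j * np.concatenate([ph1, ph2]))))
    n = ph1.size + ph2.size
    F = (n - 2) * (r1 + r2 - r) / (n - r1 - r2)
    return F, f_dist.sf(F, 1, n - 2)
```

The reference dependence reported here can be probed by re-referencing the epochs (for example, subtracting the mean of the mastoid channels or the signal at a single electrode) before the DFT and repeating the comparison.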
For an average mastoid reference, we found significant phase differences >170 • at five electrodes (149,152,157,193) and significant phase differences < −170 • at another The left column depicts the distribution of phase angles in the CA condition, the middle column depicts the distribution of phase angles in the CU condition and the right column depicts the distribution of the phase lags between the CA and the CU conditions. The first row depicts the results for an average mastoid reference, the second row for an FPz reference, the third row for a Cz reference and the fourth row for an Oz reference. DISCUSSION We show that differences in visual awareness of physically identical stimuli can be related to differences in pre-stimulus microstates and their concomitant neuronal generators. We used a metacontrast masking paradigm in which subjects had to discriminate between a square and a diamond target followed by a mask and compared physically identical stimuli that were correctly identified with and without awareness. We identified two global brain states indexed by the pre-stimulus microstate on a trialby-trial basis that dissociated the CA and the CU conditions. Statistical parametric mapping of their concomitant intracranial generators revealed increased current density in the Cuneus and Lingual Gyrus before the onset of stimuli that were identified without awareness. These anatomically defined areas are part of the primary visual cortex. Because different topographies necessarily imply different generators (Helmholtz, 1853;Vaughan, 1982), these results indicate that primary visual cortex is more strongly pre-activated when subjects fail to become aware of a stimulus presented at the threshold of awareness. This finding might initially sound counterintuitive, since one might assume that "more activity" directly implies "better performance" or "increased awareness", which is likely the case for above threshold stimuli but not necessarily for near-threshold stimuli. Our interpretation of this finding is that the pre-activation of visual cortex apparently interferes with adequate processing of weak stimuli. The effects of masking are commonly explained by disruption of re-entrant processing between higher and lower visual areas (Fahrenfort et al., 2007(Fahrenfort et al., , 2008 and recurrent processing within early visual areas (Boehler et al., 2008) by the mask after the stimulus is encountered. We hypothesized that the pre-stimulus brain-state can also influence the efficiency of masking and could identify two pre-stimulus brain states indexing differential activity in early visual cortex that dissociate efficient from inefficient masking. Our results can complement the prevailing view of the mechanisms underlying masking: the pre-stimulus activity in early visual cortex can be considered as an alternative source of interference with re-entrant processing. An alternative explanation is that if the primary visual cortex is already active before the onset of a weak stimulus at the threshold of awareness, such a weak stimulus cannot provide sufficient additional activity to attain awareness. In other words, the brain appears to be unable to distinguish between the spontaneous pre-activation of primary visual cortex and the post-stimulus activity evoked by a weak near-threshold stimulus. 
This finding is supported by a recent study by He (2013) where she elegantly shows how behaviorally relevant negative interactions between pre-and post-stimulus activity can be observed on a trial-bytrial basis in the absence of mere amplitude differences between conditions. This inverse relation between pre-and post-stimulus activity sheds a new light on the functional relation between spontaneous and evoked activity. In another modality, trial-totrial differences in output force have been found to be inversely related to levels of pre-stimulus activity in primary motor cortex (Fox et al., 2007). These results underline the importance of recent methodological advances that consider trial-to-trial variations in ongoing activity instead of averaged differences between conditions which reveal important new insights into brain function. Traditionally, trial-to-trial variations in behavioral and neuronal measures are dismissed as noise and eliminated by signal averaging, however, it is becoming increasingly evident that these variations are functionally significant activity with an important impact on perception and cognition (Mohr et al., 2005;Fox et al., 2007;Britz et al., 2009Michel, 2010, 2011;Garrett et al., 2011Garrett et al., , 2013Kanai and Rees, 2011;Tzovara et al., 2012Tzovara et al., , 2013He, 2013). Our current results further support the significance of such apparently slight trial-to-trial variations; even though we considered only a subset of trials, we identified that proportion which yielded consistent differences across all single trials from all subjects. We of course can not rule out that activity differences in other brain areas-most probably in parietal and frontal areas-might have also contributed to the differences in the emergence of perceptual awareness. However, the contributions of other brain areas are less strong and less consistent than those in early visual cortex immediately before stimulus onset. In order to compare our results to previous studies, we assessed pre-stimulus alpha power and phase which are considered to index different aspects of cortical excitability. Alpha power is considered as an index of alertness and general excitability of visual cortex to which it is inversely related (Pfurtscheller, 1992). Alpha phase on the other hand is assumed to reflect cyclic variations in cortical excitability (Busch et al., 2009;Mathewson et al., 2009;Scheeringa et al., 2011). Several EEG and MEG studies have shown increased pre-stimulus alpha power for undetected compared to detected stimuli (Ergenoglu et al., 2004;Hanslmayr et al., 2007;Romei et al., 2008aRomei et al., ,b, 2010. Likewise, the ability to correctly distinguish between two stimuli presented at the discrimination threshold depends on the prestimulus alpha power (Hanslmayr et al., 2005;van Dijk et al., 2008). Simultaneous EEG-fMRI studies however provide mixed results about the relation between alpha power and activity in primary visual cortex indexed by the BOLD response (Becker et al., 2011;Scheeringa et al., 2011). In the present study, we found no pre-stimulus differences in alpha power at 10 Hz. This is surprising given that other studies have shown that both awareness and discrimination ability can vary as a function of pre-stimulus alpha power. 
In those studies, performance was close to chance, i.e., subjects were able to detect or to correctly discriminate stimuli in roughly 50% of cases and in such cases alpha power appears to be a powerful tool to distinguish between differences in detection or discrimination ability. Here, we analyzed differences in awareness for correctly identified stimuli, and performance was very high (subjects responded correctly in about 80% of trials). When performance is close to ceiling, alpha power no longer appears to be a good parameter to distinguish between correct discrimination with and without awareness. Instead, we could relate the differences in awareness for correct target discrimination to a global pre-stimulus brain state that reflects differential pre-stimulus activity in primary visual cortex. Other studies have related awareness to local differences in phase of the alpha and theta band (Busch et al., 2009;Mathewson et al., 2009;Dugué et al., 2011). These differences in awareness as a function of the pre-stimulus alpha phase, i.e., that a stimulus was perceived when it occurs during a certain phase and that it was not perceived during the opposite phase, were interpreted as cyclic variations of cortical excitability or inhibition. However, this claim is difficult to support because local variations in phase are reference dependent, which renders the functional interpretation of a peak or trough very challenging. We analyzed the pre-stimulus alpha phase at all 204 electrodes using five different references. We found significant phase inversions between the CA and CU conditions, but both their location and their direction varied strongly with the chosen reference. Not a single electrode out of the 204 showed consistent phase inversions across the five references we used, which renders the functional interpretation of the location of phase differences on the one hand and that of peaks and troughs at best arbitrary. We thus replicate the results from previous studies that show differences in awareness as a function of pre-stimulus alpha phase, but we also show that such local phase differences have to be interpreted with a lot of caution. The link between visual cortex excitability and alpha phase has been claimed without a direct demonstration; differences in excitability are generally inferred from the fact that a stimulus is perceived or not. Here, we show that this link between local phase, awareness and pre-stimulus activity in primary visual cortex is not as direct as previously claimed. We show that the global brain state immediately before stimulus onset can be more unambiguously linked to pre-stimulus differences in primary visual cortex activity than local differences in alpha phase, and the present results corroborate the importance of the state of visual cortex at the time of stimulus arrival for visual awareness. The present results extend the results from our prior studies in which we showed that the changes in the perceptual awareness for ambiguous stimuli and during binocular rivalry arise as a direct consequence of pre-stimulus microstates (Britz et al., 2009;. These studies revealed that the right inferior parietal cortex is implicated in the generation of perceptual reversals of multi-stable stimuli and that inferior temporal areas are involved in percept stabilization during binocular rivalry. 
Here, we show that the emergence of perceptual awareness for correctly identified stimuli presented at the threshold of awareness can likewise be linked to the pre-stimulus microstate, which indexes that primary visual cortex is differentially active immediately before stimulus onset. Activity in primary visual cortex is necessary but not sufficient to attain awareness (Tong, 2003), and there is ample evidence that the dynamic interplay of activity in lower visual and higher-order brain areas in parietal and frontal cortex is crucial for awareness (Lumer et al., 1998; Dehaene et al., 2006; Lamme, 2006; Lau and Passingham, 2006). Using fMRI, Lau and Passingham (2006) have shown that prefrontal cortex comes into play when subjects become aware of stimuli that are equated for performance but not physical identity, thus confounding awareness and stimulus properties. Because of the slow temporal dynamics of the hemodynamic response function, the precise temporal allocation of fMRI effects remains a challenge. Several EEG studies, however, indicate that parietal and prefrontal areas might come into play only after stimulus onset (Sergent et al., 2005; Fahrenfort et al., 2007, 2008; Genetti et al., 2010) when subjects become aware of stimuli. In the present study, we bridged the gaps between awareness, accuracy, and physical identity by assessing awareness when accuracy was kept constant for physically identical stimuli. For every subject, we compared physically identical stimuli that were correctly identified but that differed in awareness. To our knowledge, this is the first study that has attempted to equate both physical identity and behavioral accuracy when assessing differences in awareness. The apparent differences between awareness and accuracy during experimental manipulations of stimulus visibility have recently been challenged as being due to conservative response criteria for the awareness ratings and as being abolished by using the bias-free measurement of d' (Ko and Lau, 2012; Lloyd et al., 2013). It should be noted, though, that the inclusion of non-stimulus trials necessary for the computation of d' might itself introduce more conservative response criteria because subjects have to distinguish between stimuli with different degrees of visibility and the physical absence of stimuli. Furthermore, the order of identity and awareness judgments might likewise influence the awareness ratings. In the present study, subjects knew that there was always a stimulus present and that they had to indicate whether or not they saw it after they indicated its identity, which should not have strongly biased their awareness judgment. However, future studies are needed to address these issues in more detail. Taken together, the same physical stimuli can undergo very different perceptual fates as a function of the state of the brain before stimulus arrival: differences in frequency power or phase on the one hand, and differences in the overall configuration of intracranial generators indexed by the scalp topography on the other hand, yield different perceptual outcomes of the same stimulus. These findings are important to consider when comparing ERPs to differences in perceptual awareness: differences in topography, power, or phase between single trials in the "baseline" period can be easily eliminated and translated into a post-stimulus effect by performing a baseline correction.
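The last caveat can be made concrete with a small simulation (arbitrary values, not the study's data): a difference confined to the pre-stimulus baseline vanishes from the baseline and reappears as an apparent post-stimulus effect once each condition is baseline-corrected.

```python
import numpy as np

fs = 500
t = np.arange(-0.2, 0.5, 1 / fs)                      # epoch from -200 to +500 ms
evoked = np.where(t > 0, np.exp(-(t - 0.1) ** 2 / 0.002), 0.0)

# Two conditions with identical post-stimulus responses; condition A has
# elevated activity that is confined to the pre-stimulus period.
cond_a = evoked + np.where(t < 0, 1.0, 0.0)
cond_b = evoked.copy()

baseline = t < 0
def baseline_correct(x):
    return x - x[baseline].mean()

# In the raw data the conditions differ only before stimulus onset.  After
# baseline correction the pre-stimulus difference is removed and re-appears
# as a spurious post-stimulus difference of the same size.
diff_raw = (cond_a - cond_b)[~baseline].mean()
diff_corrected = (baseline_correct(cond_a) - baseline_correct(cond_b))[~baseline].mean()
print(f"post-stimulus difference, raw data:           {diff_raw:.2f}")
print(f"post-stimulus difference, baseline-corrected: {diff_corrected:.2f}")
```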
Previous studies have claimed that differences in awareness result from differences in pre-stimulus alpha power or opposite pre-stimulus alpha phase, which supposedly reflect cyclic variations in the excitability of primary visual cortex. However, a direct demonstration of the relationship between alpha power, alpha phase, visual awareness, and activity in primary visual cortex has been lacking. In the present study, we show that differences in awareness for the same stimuli arise from differences in a global pre-stimulus brain state that reflects differential pre-stimulus activity in primary visual cortex. AUTHOR CONTRIBUTIONS Juliane Britz, Laura Díaz Hernàndez and Tony Ro designed research. Juliane Britz and Laura Díaz Hernàndez performed research. Juliane Britz and Laura Díaz Hernàndez analyzed the data; Juliane Britz, Laura Díaz Hernàndez, Tony Ro, and Christoph M. Michel wrote the manuscript.
2016-05-04T20:20:58.661Z
2014-03-31T00:00:00.000
{ "year": 2014, "sha1": "2d3eb76f6ddb316acdf9cf69b59dc522a99a9f38", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnbeh.2014.00163/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "011f1af9609083d511207471632b8d1bdc7768fe", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
251929389
pes2o/s2orc
v3-fos-license
Similarity-based Link Prediction from Modular Compression of Network Flows Node similarity scores are a foundation for machine learning in graphs for clustering, node classification, anomaly detection, and link prediction with applications in biological systems, information networks, and recommender systems. Recent works on link prediction use vector space embeddings to calculate node similarities in undirected networks with good performance. Still, they have several disadvantages: limited interpretability, need for hyperparameter tuning, manual model fitting through dimensionality reduction, and poor performance from symmetric similarities in directed link prediction. We propose MapSim, an information-theoretic measure to assess node similarities based on modular compression of network flows. Unlike vector space embeddings, MapSim represents nodes in a discrete, non-metric space of communities and yields asymmetric similarities in an unsupervised fashion. We compare MapSim on a link prediction task to popular embedding-based algorithms across 47 networks and find that MapSim's average performance across all networks is more than 7% higher than its closest competitor, outperforming all embedding methods in 11 of the 47 networks. Our method demonstrates the potential of compression-based approaches in graph representation learning, with promising applications in other graph learning tasks. INTRODUCTION Calculating similarity scores between objects is a fundamental problem in machine learning tasks, from clustering, anomaly detection, and text mining to classification and recommender systems. In Euclidean feature spaces, similarities between feature vectors are commonly calculated as lengths, norms, angles, or other geometric concepts, possibly using kernel functions that perform implicit non-linear mappings to high-dimensional feature spaces [36]. For relational data represented as graphs, methods using the graph topology to calculate pairwise node similarities can address learning problems such as graph clustering, node classification, and link prediction. For link prediction, recent works take a multi-step approach and separate representation learning and link prediction [6,30]: First, they learn a latent-space node embedding from the graph's topology, using methods such as graph or matrix factorisation [59,64], or random walk-based techniques [31,62,76]. Then, they interpret node positions as points in a high-dimensional feature space, possibly applying downstream dimensionality reduction. Finally, they use node positions in the resulting feature space to assign new "features" to pairs of nodes, which can be used to predict links. In an unsupervised approach, links are predicted based on node similarities [45], calculating distance metrics or similarity scores between node pairs to rank them. Alternatively, we can use a supervised approach [46] by (i) using binary operators like the Hadamard product [31], (ii) sampling negative instances (node pairs not connected by links), and (iii) using the features of positive and negative instances to train a supervised binary classifier [31]. Advances in graph embedding and representation learning have considerably improved our ability to predict links in networks, with applications in biological [73] and social [35] networks and in recommender systems [13].
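As a rough illustration of the supervised variant of this pipeline, the sketch below builds Hadamard-product edge features from placeholder node embeddings (random vectors standing in for any learned embedding) and trains a logistic-regression classifier on observed links and sampled non-links; the example graph, dimensions, and all parameter values are illustrative assumptions.

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
G = nx.gnp_random_graph(100, 0.06, seed=0)                   # example graph
embedding = rng.standard_normal((G.number_of_nodes(), 16))   # placeholder node embedding

def edge_features(u, v):
    """Binary operator turning two node vectors into one edge feature vector (Hadamard product)."""
    return embedding[u] * embedding[v]

positives = list(G.edges())
negatives = []
while len(negatives) < len(positives):        # sample non-links as negative instances
    u, v = rng.integers(100, size=2)
    if u != v and not G.has_edge(u, v):
        negatives.append((u, v))

X = np.array([edge_features(u, v) for u, v in positives + negatives])
y = np.array([1] * len(positives) + [0] * len(negatives))
clf = LogisticRegression(max_iter=1000).fit(X, y)

# The predicted probability for an unseen pair serves as its link score.
print(clf.predict_proba(edge_features(3, 7).reshape(1, -1))[0, 1])
```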
However, these methods introduce challenges for real-world link-prediction tasks: First, they require specifying hyperparameters that control aspects regarding the scale of patterns in graphs, the influence of local and non-local structures, and the latent space dimensionality [81]. Network-specific hyperparameter tuning addresses these issues, but is challenging in real applications and aggravates the risk of overfitting; recent systematic comparisons reveal that the performance of different methods varies largely across data sets [6,30]. These challenges make it difficult for practitioners to choose and optimally parametrise an embedding method. Second, using latent metric spaces implies symmetric similarities, limiting the performance when predicting directed links [39,59]. Third, compared with hand-crafted features, embeddings tend to have low interpretability: We can assess the similarity of nodes, but we cannot explain why some nodes are more similar than others [6,30,64]. Nevertheless, recent graph neural network-based approaches focus on learning features for link prediction from local subgraphs [85], overlapping node neighbourhoods [84], or shortest paths [87], achieving favourable performance. Finally, recent works highlight fundamental limitations of low-dimensional representations of complex networks [69], questioning to what extent Euclidean embeddings can capture patterns relevant to link prediction.
[Figure 1 caption: Blue and orange nodes have a unique codeword within their module, shown next to the nodes and derived from their stationary visit rates. Decimal numbers show the theoretical lower limit for the codeword length in bits. Map equation similarity, MapSim for short, derives description lengths for predicted links; connecting more similar nodes uses fewer bits. Intra-community links tend to have shorter description lengths than inter-community links.]
Motivated by recent works highlighting the importance of community structures for link prediction [23,30,86], we propose a novel approach to similarity-based link prediction that addresses these issues. Our contributions are: • We introduce map equation similarity, MapSim for short, an information-theoretic method to calculate asymmetric node similarities. MapSim builds on the map equation [67], a framework that applies coding theory to compress random walks based on hierarchical cluster structures. • Unlike other random walk-based embedding techniques, our work builds on an analytical approach to calculate the minimal expected description length of random walks, requiring neither simulating random walks nor tuning hyperparameters. • Following the minimum description length principle, MapSim incorporates Occam's razor and balances explanatory power with model complexity, making dimensionality reduction superfluous. With hierarchical cluster structures, MapSim captures patterns at multiple scales simultaneously and combines the advantages of local and non-local similarity scores. • We validate MapSim in an unsupervised, similarity-based link prediction task and compare its performance to six well-known embedding-based techniques in 47 empirical networks from different domains. This analysis highlights challenges in the generalisability of embedding techniques and parametrisations across different networks. • Confirming recent surveys, we find that the performance of popular embedding techniques for unsupervised link prediction without network-specific hyperparameter tuning depends on the data.
In contrast, MapSim provides high performance across a wide range of networks, with an average performance 7.7% and 7.5% better than the best competitor in undirected and directed networks, respectively. MapSim outperforms the chosen baseline methods in 11 of the 47 networks with a worst-case performance 44% and 33% better than popular embedding techniques in undirected and directed networks, respectively. In summary, we take a novel perspective on graph representation learning that fundamentally differs from other random walkbased graph embeddings. Instead of embedding nodes into a metric space, leading to symmetric similarities, we develop an unsupervised learning framework where (i) positions of nodes in a coding tree capture their representation in a non-metric latent space, and (ii) node similarities are calculated based on how well transitions between nodes are compressed by a network's hierarchical modular structure (figure 1). Apart from node similarities that can be "explained" based on community structures captured in the coding tree, MapSim yields asymmetric similarity scores that naturally support link prediction in directed networks. We provide a simple, non-parametric, and scalable unsupervised method with high generalisability across data sets. Our work demonstrates the power of compression-based approaches to graph representation learning, with promising applications in other graph learning tasks. RELATED WORK AND BACKGROUND We first summarise recent works on graph embedding and similaritybased link prediction. Then, we review the map equation, an information-theoretic objective function for community detection and the theoretical foundation of our compression-based similarity score. Related Work Focusing on unsupervised similarity-based link prediction, we consider methods that calculate a bivariate function sim( , ) ∈ R , where , ∈ are nodes in a directed or undirected, possibly weighted graph = ( , ) [49,50]. While similarity metrics often consider scalar functions ( = 1), recent vector space embeddings use binary operators to assign vector-valued "features" with > 1 to node pairs. Since vectorial features are typically used in downstream classification techniques, this can be seen as an implicit mapping to similarities, for example "similar" features being assigned similar class probabilities. We limit our discussion to topological or structural approaches [49], and consider functions sim( , ) that can be calculated solely based on the edges in graph without requiring additional information such as node attributes or other non-topological graph properties. Several works define scalar similarities based on local topological characteristics such as the Jaccard index of neighbour sets, degrees of nodes, or degree-weighted measures of common neighbours [1]. Other methods define similarities based on random walks, paths, or topological distance between nodes [11,45,47,48]. Compared to purely local approaches, an advantage of random walk-based methods is their ability to incorporate both local and non-local information, which is crucial for sparse networks where nodes may lack common neighbours. Since walk-based methods reveal cluster patterns in networks [67], they generally perform well in downstream tasks such as link prediction and graph clustering [30]. 
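For concreteness, the local indices mentioned above (the Jaccard coefficient and the degree-weighted common-neighbour score of Adamic and Adar) can be computed directly from neighbour sets; a minimal sketch with networkx on an arbitrary example graph follows.

```python
import networkx as nx

G = nx.karate_club_graph()       # small example graph, not from the paper

# Jaccard coefficient and Adamic-Adar index for a few candidate node pairs;
# both scores depend only on the local neighbourhoods of the two nodes.
pairs = [(0, 1), (0, 33), (5, 16)]
for u, v, score in nx.jaccard_coefficient(G, pairs):
    print(f"Jaccard({u},{v})     = {score:.3f}")
for u, v, score in nx.adamic_adar_index(G, pairs):
    print(f"Adamic-Adar({u},{v}) = {score:.3f}")
```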
Graph factorisation approaches that use eigenvectors of different types of Laplacian matrices that represent relationships between nodes share this high performance [9], likely because (i) Laplacians capture the dynamics of continuous-time random walks [52], and (ii) spectral methods can capture small cuts in graphs [8]. Building on these ideas, recent works on graph representation learning combine random walks and deep learning to obtain highdimensional vector space embeddings of nodes, serving as features in downstream learning tasks [6,81]: Perozzi et al. [62] generate a large number of short random walks to learn latent space representations of nodes by applying a word embedding technique that considers node sequences as word sequences in a sentence. This corresponds to an implicit factorisation of a matrix whose entries capture the logarithm of the expected probabilities to walk between nodes in a given number of steps [82]. Following a similar walk-based approach, Grover and Leskovec [31] generate node sequences with a biased random walker whose exploration behaviour can be tuned by search bias parameters and . The resulting walk sequences are used as input for the word embedding algorithm word2vec [54], which embeds objects in a latent vector space with configurable dimensionality. Tang et al. [76] construct vector space embeddings of nodes that simultaneously preserve first-and secondorder proximities between nodes. Similar to Adamic and Adar [1], second-order node proximities are defined based on common neighbours. Extending the random walk approach in [62], Perozzi et al. [63] learn embeddings from so-called walklets, random walks that skip some nodes, resulting in embeddings that capture structural features at multiple scales. The abovementioned graph embedding methods compute a representation of nodes in a, compared to the number of nodes in the network, low-dimensional Euclidean space. A suitably defined metric for similarity or distance of nodes enables recovering the link topology with high fidelity [12], forming the basis for similaritybased link prediction. In contrast, Lichtenwalter et al. [46] argued for a new perspective that uses supervised classifiers based on (i) multi-dimensional features of node pairs, and (ii) an undersampling of negative instances to address inherent class imbalances in link prediction. Recent applications of graph embedding to link prediction have taken a similar supervised approach, for example using vector-valued binary operators to construct features for node pairs from node vectors [31,50,62]. Despite good performance, recent works have cast a more critical light on such applications of low-dimensional graph embeddings. Questioning the distinction between deep learning-based embeddings and graph factorisation techniques, Qiu et al. [64] show that popular embedding techniques can be understood as (approximate) factorisations of matrices that capture graph topology. Thus, low-dimensional embeddings can be viewed as a (lossy) compression of graphs, while link prediction or graph reconstruction can be viewed as the decompression step. Fitting this view, a recent study of the topological characteristics of networks' low-dimensional Euclidean representations has highlighted fundamental limitations of embeddings to capture complex structures found in real networks [69]. Techniques like node2vec, LINE, or DeepWalk have been reported to perform well for link prediction despite those limitations. 
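A compressed sketch of this walk-then-embed recipe (DeepWalk-style): generate short random walks, feed them to a word-embedding model as if they were sentences, and score candidate links with the sigmoid of the embedding dot product. The walk counts, walk length, and Word2Vec settings below are arbitrary choices and assume the gensim 4.x API; they are not the configurations used in the cited papers.

```python
import random
import numpy as np
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()
random.seed(0)

def random_walk(G, start, length):
    """Uniform random walk of fixed length, returned as a list of node labels."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(list(G.neighbors(walk[-1]))))
    return [str(n) for n in walk]

# Corpus of walks: a few short walks per node, treated like sentences.
walks = [random_walk(G, n, 20) for n in G.nodes() for _ in range(10)]
model = Word2Vec(walks, vector_size=32, window=5, min_count=1, sg=1, seed=0)

def score(u, v):
    """Similarity score for a candidate link: sigmoid of the embedding dot product."""
    dot = np.dot(model.wv[str(u)], model.wv[str(v)])
    return 1.0 / (1.0 + np.exp(-dot))

print(score(0, 1), score(0, 33))
```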
However, recent surveys concur that fine-tuning their hyperparameters to the specific data set is required [30,56,86], which can be problematic in large data sets and increases the risk of overfitting. When used for link prediction, graph embedding methods are typically combined with dimensionality reduction and supervised classification algorithms, possibly using non-linear kernels. Comparative studies found that the performance of Euclidean graph embeddings for link prediction is connected to their ability to represent communities in graphs as clusters in the feature space [30], which, due to the non-linear nature of graph data [75], strongly depends on their topology. Using symmetric operators or distance measures in metric spaces limits their ability to predict directed links because the ground truth for (u, v) can differ from (v, u) [39]. These issues raise the general question whether we should use low-dimensional Euclidean embeddings for link prediction tasks. Recent works addressed some of those open questions, for example with hyperbolic or non-linear embeddings [23,75], extensions of Euclidean embeddings for directed link prediction [39], or embeddings that explicitly account for community structures [10,79,86]. However, existing works still use hyperparameters, require separate dimensionality reduction or model selection to identify the optimal number of dimensions, fail to capture rich hierarchically nested community structures present in real-world networks [68], or do not integrate community detection with representation learning. Addressing all issues at once, we take a novel approach that treats graph representation learning as a compression problem: We use the map equation [67], an analytical information-theoretic approach to compress flows of random walks in directed or undirected, possibly weighted networks based on their modular structure. Unlike recent work by Ghasemian et al. [28] that predicts links based on how they influence the map equation's estimated codelength, requiring inefficient recalculations, we take advantage of the map equation's coding machinery without any computational overhead. The map equation's hierarchical coding tree with node assignments provides an embedding in a discrete, non-metric latent space of possibly hierarchical community labels with automatically optimised dimensionality using a minimum description length approach. Following the map equation's compression principles, we relate the similarity between nodes u and v to how efficiently we can compress the link (u, v) with respect to the network's modular structure. As an analytical approach, our method neither introduces hyperparameters nor needs to simulate random walks, and naturally yields asymmetric node similarities suitable to predict directed links.
[Figure 2 caption: The corresponding coding tree. Links are annotated with transition rates to calculate similarities in the information-theoretic limit. Each coding tree path corresponds to a network link, which may or may not exist. The coder remembers the random walker's module but not the most recently visited node. Describing the intra-module transition from node 5 to 3 requires −log2(3/12) = 2 bits. The inter-module transition from node 5 to 7 requires three steps and −log2(1/12 · 1/2 · 2/10) ≈ 6.9 bits.]
Background: the map equation The map equation is an information-theoretic objective function for community detection that, conceptually, models network flows with random walks [67].
To detect communities, the map equation compresses the random walks' per-step description length by searching for sets of nodes with long flow persistence: network areas where a random walker tends to stay for a longer time. Consider a communication game where the sender observes a random walker on a network, and uses binary codewords to update the receiver about the random walker's location. In the simplest case, all nodes belong to the same module and we use a Huffman code to assign unique codewords to the nodes based on their stationary visit rates. With a one-module partition, M_1, the sender communicates one codeword per random-walker step to the receiver. The theoretical lower limit for the per-step description length, which we call the codelength, is the entropy of the nodes' visit rates [70], L(M_1) = H(P), where H is the Shannon entropy, P = {p_v | v ∈ V} is the set of the nodes' visit rates, and p_v is node v's visit rate. In networks with modular structure, we can compress the random walks' description by grouping nodes into more than one module such that a random walker tends to remain within modules, and module switches become rare. This lets us re-use codewords across modules and design a codebook per module based on the nodes' module-normalised visit rates. However, sender and receiver need a way to encode module switches. The map equation uses a designated module-exit codeword per module and an index-level codebook with module-entry codewords. In a two-level partition, the sender communicates one codeword for intra-module random-walker steps to the receiver, or three codewords for inter-module steps (figure 2). The lower limit for the codelength is given by the sum of entropies associated with module and index codebooks, weighted by their usage rates. Given a partition of the network's nodes into modules, M, the map equation [67] formalises this relationship, L(M) = q H(Q) + Σ_{m∈M} p_m H(P_m). Here q = Σ_{m∈M} q_m is the index-level codebook usage rate, q_m is the entry rate for module m, and Q = {q_m | m ∈ M} is the set of module entry rates; q_m^exit is the exit rate for module m, p_m = q_m^exit + Σ_{v∈m} p_v is the codebook usage rate for module m, and P_m = {q_m^exit} ∪ {p_v | v ∈ m} is the set of node visit rates in m, including m's module exit rate. The map equation can detect communities in simple, weighted, directed, and higher-order networks, and can be generalised to hierarchical partitions through recursion [68]. To make use of node metadata for detecting communities, we can either incorporate a corresponding term in the map equation [21], design metadata-informed flow models [7], or introduce a prior network and reinforce link weights between nodes with the same metadata label [71]. MAPSIM: NODE SIMILARITIES FROM MODULAR FLOW COMPRESSION Compression-based similarity measures consider pairs of objects more similar if they jointly compress better. Extending this idea to networks, we exploit the coding of network flows based on the map equation, and use it to calculate information-theoretic pairwise similarities between nodes: MapSim. We interpret a network's community structure as an implicit embedding and, roughly speaking, consider nodes in the same community as more similar than nodes in different communities. To calculate node similarities, we begin with a network partition and its corresponding modular coding scheme, which can be visualised as a tree, annotated with the transition rates defined by the link patterns in the network (figure 2).
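Before turning to similarities, the codelength bookkeeping above can be sketched in a few lines. The two-level map equation is evaluated for a toy partition whose visit, entry, and exit rates are made-up illustrative numbers; the helper names are not from the paper.

```python
from math import log2

def entropy(rates):
    """Shannon entropy of a set of rates, normalised to a distribution."""
    total = sum(rates)
    probs = [r / total for r in rates if r > 0]
    return -sum(p * log2(p) for p in probs)

def map_equation(modules):
    """Two-level map equation L(M) = q H(Q) + sum_m p_m H(P_m).

    Each module is a dict with its node visit rates, entry rate, and exit rate.
    """
    q_total = sum(m["enter"] for m in modules)
    index_term = q_total * entropy([m["enter"] for m in modules]) if q_total > 0 else 0.0
    module_term = 0.0
    for m in modules:
        p_m = m["exit"] + sum(m["visits"])
        module_term += p_m * entropy([m["exit"]] + m["visits"])
    return index_term + module_term

# Toy two-module partition with made-up rates (node visit rates sum to 1).
partition = [
    {"visits": [0.20, 0.15, 0.15], "enter": 0.05, "exit": 0.05},
    {"visits": [0.20, 0.15, 0.15], "enter": 0.05, "exit": 0.05},
]
one_module = [{"visits": [0.20, 0.15, 0.15, 0.20, 0.15, 0.15], "enter": 0.0, "exit": 0.0}]
print(f"L(M)  = {map_equation(partition):.3f} bits per step")   # modular partition
print(f"L(M1) = {map_equation(one_module):.3f} bits per step")  # one-module baseline
```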
While the network's topology constrains random walks to transitions along existing links, the coding scheme is more flexible and can describe transitions between any pair of nodes. To describe the transition from node to , we find the corresponding path in the partition tree and multiply the transition rates along that path, that is, we use the coarse-grained description of the network's community structure, not the network's actual link pattern; it can describe any transition regardless of whether the link ( , ) exists in the network or not. The description length in bits for a path with transition rate is − log 2 ( ). For example, consider the scenario in figure 2 where we calculate similarity scores for the two directed links (5, 3) and (5, 7), neither of which exists in the network. Nodes 5 and 3 are in module , and the rate at which a random walker in visits node 3 is 3/12, requiring − log 2 (3/12) = 2 bits to describe that transition. Node 7 is in module , and a random walker in exits at rate 1/12, enters at rate 1/2, and then visits node 7 at rate 2/10, that is, at rate 1/120, requiring − log 2 (1/120) ≈ 6.9 bits. Paths to derive similarities emanate from modules, not from nodes, because the model must generalise to unobserved data. If compression was our sole purpose, we would use node-specific codebooks containing codewords for neighbouring nodes, but no longer detect communities, and only be able to describe observed links. Instead, the map equation's coding scheme is designed to capitalise on modular network structures: The modular code structure provides a model that generalises to unobserved data, coarse-grains the path descriptions, and prevents overfitting. For the general case, where M can be a hierarchical network partition, we number the sub-modules within each module m from 1 to m -we refer to these numbers as addresses -such that an ordered sequence of addresses uniquely identifies a path starting at the root of the partition tree. We let addr : M × → (N) be a function that takes a network partition and a node as input, and returns the node's address in the partition. To calculate the similarity of node to , we identify the longest common prefix of the nodes' addresses, addr (M, ) and addr (M, ), and select the partition tree's sub-tree M ⟨ ⟩ that corresponds to the prefix : M ⟨ ⟩ is the smallest sub-tree that contains and . We obtain the addresses for and within sub-tree M ⟨ ⟩ by removing the prefix from their addresses. That is, addr (M, ) = + + addr(M ⟨ ⟩ , ) and addr (M, ) = + + addr(M ⟨ ⟩ , ), where + + is list concatenation. The rate at which a random walker transitions from to is the product of (i) the rate at which the random walker moves along the path addr(M ⟨ ⟩ , ) in reverse direction, rev(M ⟨ ⟩ , addr(M ⟨ ⟩ , )), that is from to the root of ⟨ ⟩ , and (ii) the rate at which the random walker moves along the path addr( where is the longest common prefix shared by the addresses of and in the partition tree defined by M. To express map equation similarity in terms of description length, we take the − log 2 of MapSim and regard pairs of nodes that yield a shorter description length as more similar. MapSim is asymmetric since module entry and exit rates are, in general, different and and can have different visit rates. MapSim is zero if one node is in a disconnected component; the exit rate for regions without out-links is zero, so the corresponding description length is infinitely long. 
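For the two-level case, the description-length bookkeeping in the worked example above can be written down in a few lines. This is a sketch, not the authors' implementation; the toy partition below is chosen so that its rates reproduce the figure 2 numbers, and the node labels and helper names are otherwise arbitrary.

```python
from math import log2

# Toy two-level partition: module usage totals of 12 and 10 "tickets", and an
# index level where each module is entered half of the time, matching figure 2.
partition = {
    "A": {"visits": {"1": 4, "2": 2, "3": 3, "5": 2}, "exit": 1},
    "B": {"visits": {"6": 4, "7": 2, "8": 3}, "exit": 1},
}
entries = {"A": 1, "B": 1}
module_of = {node: m for m, data in partition.items() for node in data["visits"]}

def usage(m):
    """Total codebook usage of module m: exit rate plus node visit rates."""
    return partition[m]["exit"] + sum(partition[m]["visits"].values())

def mapsim_bits(u, v):
    """Description length (in bits) of the transition u -> v under the coding scheme."""
    m_u, m_v = module_of[u], module_of[v]
    if m_u == m_v:                      # one intra-module codeword
        rate = partition[m_v]["visits"][v] / usage(m_v)
    else:                               # exit the source module, enter the target, then visit
        rate = (partition[m_u]["exit"] / usage(m_u)
                * entries[m_v] / sum(entries.values())
                * partition[m_v]["visits"][v] / usage(m_v))
    return -log2(rate)

print(f"5 -> 3: {mapsim_bits('5', '3'):.1f} bits")   # intra-module, 2.0 bits
print(f"5 -> 7: {mapsim_bits('5', '7'):.1f} bits")   # inter-module, ~6.9 bits
```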
This issue can be addressed with the regularised map equation [71], a Bayesian approach that introduces an empirical prior to model incomplete data with weak links between all pairs of nodes, where prior link strengths depend on the connection patterns of each node. We calculate node similarities in three steps: (i) inferring a network's community with Infomap [20], a greedy, search-based optimisation algorithm for the map equation, (ii) representing the corresponding coding scheme in a suitable data structure, and (iii) using MapSim to computing similarities based on the coding scheme. The overall approach is illustrated in figs. 1 -3 and algorithm 1. Algorithm 1: Pseudo-code of function MapSim to calculate similarity score for node pair ( , ). EXPERIMENTAL VALIDATION We evaluate the performance of MapSim in unsupervised, similaritybased link prediction for 47 real-world networks, 35 directed (table 1) and 12 undirected (table 3), retrieved from Netzschleuder [61] and Konect [42]. Details of the directed and undirected networks are shown in tables 2 and 4, respectively. Our analysis is based on a Python-implementation available on GitHub 2 , building on Infomap, a fast and greedy search algorithm for minimising the map equation with an open source implementation in C++ [19,20]. As baseline, we use four random walk and neighbourhood-based embedding methods: DeepWalk [62], node2vec [31], LINE [76], and NERD [39], using the respective author's implementation. We also include results for MapSim based on the one-module partition for each network for comparison, which ignores community structure. Adopting the argument by [31], we exclude graph factorisation methods and simple local similarity scores because they have already been shown to be inferior to node2vec. We include NERD because it is a recent random walk-based embedding method proposed for directed link prediction with higher reported performance than other walk-based embeddings [39]. Unsupervised Link Prediction Different from works that use graph embeddings for supervised link prediction, we address unsupervised link prediction. Like Goyal and Ferrara [30] and Khosla et al. [39], we take a similarity-based approach that does not require training a classifier. We compute similarity scores based on node embeddings, rather than applying a supervised classifier to features computed for node pairs. We adopt the approach by Khosla et al. [39] and calculate node similarities as the sigmoid over the feature vectors' dot product. Considering how different embedding techniques generalise across data sets, we purposefully refrained from hyperparameter tuning. We chose a single set of hyperparameters for each method, informed by the default parameters given by the respective authors and recent surveys' discussion regarding which hyperparameter values generally provide good link prediction performance. For DeepWalk and node2vec, we sample = 80 random walks of length = 40 per node, and use a window size of = 10. For both methods, the underlying word embedding is applied using the default model parameters fixed by the authors, = 1, = 10 and = 0. For node2vec we set the return parameter to = 1. Since for = = 1 node2vec is identical to DeepWalk, we use = 4, which was found to provide good performance for link prediction [30]. We run LINE with first-order (LINE 1 ), second-order (LINE 2 ), and combined first-and-second-order proximity (LINE 1+2 ), use 1,000 samples per node, and = 5 negative samples. 
For NERD, we use 800 samples and = 3 negative samples per node. We set the number of neighbourhood nodes to = 1, as suggested by the authors for link prediction. We use = 128 dimensions for all embeddings. Since MapSim is a non-parametric method, it does not require setting any hyperparameters. However, to avoid local optima when heuristically minimising the map equation, we run Infomap 100 times and select the partition with the shortest description length. We use 5-fold cross-validation to split links into train and test sets, treating weighted links as indivisible. We calculate the node embedding (for MapSim the coding tree) in the training network, derive predictions based on node similarities, and evaluate them based on the links in the validation set. For each fold, we restrict the resulting training network to its largest (weakly) connected component. For a validation set with positive links, we sample negative links uniformly at random, and calculate scores for all 2 links. In undirected networks, for each positive link ( , ), we also consider ( , ) as positive, and, therefore, sample two negative links per positive link. Varying the discrimination threshold, we obtain a receiver operator characteristic (ROC) per fold, and calculate the area under the curve (AUC). Detailed results, including average and worst-case performance, are shown in tables 2 and 4; we also report precision-recall performance (table 5). We include MapSim based on the one-module partition 3 in the results and note that it performs better than using a modular partition in some cases: this suggests that the network does not have a strong community structure, which could be addressed with the regularised map equation [71]. When mentioning MapSim in the following, we refer to using modular partitions. On average, MapSim outperforms all baseline methods across the 47 data sets in terms of AUC and AUPR (figure 4); for detailed results on a per-network basis see tables 2, 4, and 5 in the appendix. Using a one-sided two-sample -test, we find that MapSim's average performance across all networks is significantly higher than that of the best graph embedding method, LINE 1+2 , both in directed and undirected networks ( ≈ 0.008 and ≈ 0.039, respectively). MapSim provides the best performance in 11 of the 47 networks, with a standard deviation of the AUC score less than half of that of the best embedding-based method (LINE 1+2 ). For undirected networks, MapSim achieves the best performance for five of the 12 networks, while none of the embedding methods beats Map-Sim's performance in more than two networks. We find the largest performance gain in the directed network linux, where MapSim yields an increase of AUC of approximately 22.6% compared to the best embedding (NERD). MapSim's worst-case performance across all networks is approximately 44% and 33% above that of the best-performing embedding for directed and undirected networks, respectively. MapSim' performance advantage can be as high as 84%, for example = 0.988 of MapSim in foursquare-friendships-new vs. = 0.537 for node2vec. While node2vec performs best in the largest directed network, MapSim performs best in the largest undirected network and in several small networks, suggesting that MapSim works well both for small and large networks. 
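A minimal sketch of the evaluation protocol described above: hold out one fold of links, sample an equal number of non-links, and rank both with a similarity score. The example graph and the common-neighbour score are stand-ins, not the networks or methods evaluated in the paper.

```python
import random
import networkx as nx
from sklearn.metrics import roc_auc_score

random.seed(0)
G = nx.karate_club_graph()
edges = list(G.edges())
random.shuffle(edges)
test_edges = edges[: len(edges) // 5]            # one fold of a 5-fold split
G_train = G.copy()
G_train.remove_edges_from(test_edges)

nodes = list(G_train.nodes())
negatives = []
while len(negatives) < len(test_edges):          # sample non-links uniformly at random
    u, v = random.sample(nodes, 2)
    if not G.has_edge(u, v):
        negatives.append((u, v))

def similarity(u, v):
    """Placeholder similarity score: number of common neighbours in the training graph."""
    return len(set(G_train[u]) & set(G_train[v]))

y_true = [1] * len(test_edges) + [0] * len(negatives)
y_score = [similarity(u, v) for u, v in test_edges + negatives]
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```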
We attribute those encouraging results to multiple features of our method: Different from graph embedding techniques that require downstream dimensionality reduction, MapSim's compression approach implicitly includes model selection and avoids overfitting. Moreover, the representation of nodes in the coding tree is integrated with the optimisation of hierarchical community structures in the network. Due to its non-parametric approach and the use of the analytical map equation, MapSim performs well in absence of tuning to the specific data set. Scalability Analysis We analyse MapSim's scalability in synthetically generated networks with modular structure and tunable size and link density. We generate -regular random graphs with nodes and (mean) degree . To avoid trivial configurations where a modular structure 3 With the one-module partition, MapSim becomes equivalent to preferential attachment. is absent, we create a network by first generating two -regular random graphs with 2 nodes each and "cross" two links, one from each of the two graphs, to obtain a single connected network with strong community structure. We then apply Infomap to (i) minimise the map equation and extract the network's modular structure, and (ii) construct the coding tree for calculating node similarities. We repeat this 10 times for random networks with different numbers of nodes and degrees . The average run times are reported in figure 5, which shows that, for sparse networks, the runtime of MapSim is linear in the size of the network. Edler et al. [19] report that the theoretical asymptotic bound of computational complexity for the optimisation of the map equation is in O ( ), which is the same as for vector space embedding techniques like node2vec and DeepWalk 4 . Thus, MapSim does not entail higher computational complexity compared to popular graph embeddings. This makes it an interesting choice for practitioners looking for a simple and scalable method that works well in small, large, directed, and undirected networks. CONCLUSION AND OUTLOOK We propose MapSim, a novel information-theoretic approach to compute node similarities based on a modular compression of network flows. Different from vector space embeddings, MapSim represents nodes in a discrete, non-metric space of communities that yields asymmetric similarities suitable to predict links in directed and undirected networks. The results are highly interpretable because the network's modular structure explains the similarities. Using description length minimisation, MapSim naturally accounts for Occam's razor, which avoids overfitting and yields a parsimonious coding tree. Performing unsupervised link prediction, we compare MapSim to popular embedding-based algorithms on 47 data sets covering networks from a few hundred to hundreds of thousands of nodes and millions of edges. Our analysis shows that the average performance of MapSim is more than 7% higher than its closest competitor, outperforming all competing methods in 11 of the 47 networks. Taking a new perspective on graph representation learning, our work demonstrates the potential of compression-based methods with promising applications in other graph learning tasks. Moreover, recent generalisations of the map equation to temporal and higher-order networks [19] suggest that our method also applies to graphs with non-dyadic or time-stamped relationships.
2022-08-31T01:16:27.976Z
2022-08-30T00:00:00.000
{ "year": 2022, "sha1": "2be51cc4b972d429b67bc95b7ffae4d17f05f1d7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2be51cc4b972d429b67bc95b7ffae4d17f05f1d7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
124322347
pes2o/s2orc
v3-fos-license
MONOTONE GENERALIZED CONTRACTIONS IN ORDERED METRIC SPACES In this paper, we prove some existence and uniqueness results on coincidence points for g-monotone mappings satisfying linear as well as generalized nonlinear contractivity conditions in ordered metric spaces. Our results generalize and extend two classical and well known results due to Ran and Reurings (Proc. Amer. Math. Soc. 132 (2004), no. 5, 1435-1443) and Nieto and - (Acta Math. Sin. 23 (2007), no. 12, 2205-2212) besides similar other ones. Finally, as an application of one of our newly proved results, we establish the existence and uniqueness of solution of a first order periodic boundary value problem. Introduction The abstract monotone iterative techniques and corresponding fixed point results on ordered sets are natural as well as general enough to cover a variety of situations. There exists an extensive literature on this theme, but keeping in view the requirements of this presentation, we merely refer to ([4], [6], [9], [10], [11], [13], [14], [18], [25], [26], [38], [39], [40]). In recent years, a multitude of fixed point theorems have been proved in ordered metric spaces wherein the involved contraction conditions are merely assumed to hold on elements which are comparable in the underlying partial ordering. Thus, in this context, the usual contraction condition is considerably weakened but at the expense of monotonicity of the involved mapping. The techniques involved in the proofs of such results is the combination of ideas used in the proof of contraction principle together with the one involved in monotone iterative technique. This trend was essentially initiated by Turinici [39]. Later, Ran and Reurings [35] proved a slightly more natural version of the corresponding fixed point theorem of Turinici (cf. [39]) for continuous monotone mappings with some applications to matrix equations, which runs as follows. Theorem 1.1 (Ran and Reurings [35]). Let (X, ) be an ordered set equipped with a metric d and f a self-mapping on X. Suppose that the following conditions hold: 1) such that d(f x, f y) ≤ αd(x, y) ∀ x, y ∈ X with x y, (vi) every pair of elements of X has a lower bound as well as an upper bound. Then f has a unique fixed point. Thereafter, Nieto and Rodríguez-López [32] slightly modified the assumptions (iii) and (vi) of Ran and Reurings' fixed point theorem and also given some applications to ordinary differential equations. Theorem 1.2 (Nieto and Rodríguez-López [32]). Let (X, ) be an ordered set equipped with a metric d and f a self-mapping on X. Suppose that the following conditions hold: (i) (X, d) is complete, (ii) f is monotone, (iii) either f is continuous or X satisfies the following property: If {x n } is a sequence in X such that x n d −→ x whose consecutive terms are comparable, then there exists a subsequence {x n k } of {x n } such that every term is comparable to the limit x, (iv) there exists x 0 ∈ X such that x 0 f (x 0 ) or x 0 f (x 0 ), (v) there exists α ∈ [0, 1) such that d(f x, f y) ≤ αd(x, y) ∀ x, y ∈ X with x y, (vi) every pair of elements of X has a lower bound or an upper bound. Then f has a unique fixed point. 
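As a quick numerical illustration of the setting of Theorem 1.2, consider X = R with the usual order and metric and an increasing α-contraction; starting from a point x0 with x0 ⪯ f(x0), the Picard iterates increase monotonically to the unique fixed point. The particular map below is an arbitrary illustrative choice, not an example from the paper.

```python
# Illustrative instance of the setting of Theorem 1.2: X = R with the usual order
# and metric, f increasing and an alpha-contraction with alpha = 1/2 (f is an
# assumption made for illustration, not an example from the paper).
def f(x):
    return x / 2 + 1           # increasing; |f(x) - f(y)| = |x - y| / 2

x = 0.0                         # x0 = 0 satisfies x0 <= f(x0) = 1
for _ in range(30):
    x_next = f(x)
    assert x_next >= x          # the Picard iterates form an increasing sequence
    x = x_next

print(f"approximate fixed point: {x:.10f}")   # converges to the unique fixed point 2
```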
The aim of this paper is to extend the core results of Ran and Reurings [35] (i.e., Theorem 1.1) and Nieto and Rodríguez-López [32] (i.e., Theorem 1.2) to a pair of mappings f and g defined on ordered metric space X whenever f is either g-monotone linear contraction or g-monotone nonlinear contraction in two different ways namely: X is complete or alternately any subspace Y of X satisfying f (X) ⊆ Y ⊆ g(X) is complete. This paper is a continuation of our earlier work carried out in [2]. Preliminaries In this section, to make our exposition self-contained, we recall some basic definitions, relevant notions and auxiliary results. Throughout this paper, N stands for the set of natural numbers, while N 0 for the set of whole numbers (i.e., N 0 = N ∪ {0}). Definition 2.1 ( [28]). A set X together with a partial order (often denoted by (X, )) is called an ordered set. In this context, we write x y instead of y x. Two elements x and y in an ordered set (X, ) are said to be comparable if either x y or y x and we denote it as x ≺≻ y. Clearly, the relation ≺≻ is reflexive and symmetric, but not transitive in general. is a metric space and (X, ) is an ordered set. Definition 2.3 ([12] ). Let (X, ) be an ordered set and f and g two selfmappings defined on X. We say that f is g-increasing (resp. g-decreasing) if for any x, y ∈ X g(x) g(y) ⇒ f (x) f (y) (resp.; f (x) f (y)). In all, f is called g-monotone if f is either g-increasing or g-decreasing. Notice that under the restriction g = I, the identity mapping on X, the notions of g-increasing, g-decreasing and g-monotone mappings reduce to increasing, decreasing and monotone mappings respectively. ). Let (X, ) be an ordered set and f and g two selfmappings defined on X. If f is g-monotone and g(x) = g(y), then f (x) = f (y). Definition 2.4 ( [22,24]). Let X be a nonempty set and f and g two selfmappings on X. Then (i) an element x ∈ X is called a coincidence point of f and g if (ii) if x ∈ X is a coincidence point of f and g, then x ∈ X with x = g(x) = f (x), is called a point of coincidence of f and g, (iii) if x ∈ X is a coincidence point of f and g such that x = g(x) = f (x), then x is called a common fixed point of f and g, (iv) the pair (f, g) is said to be commuting if (v) the pair (f, g) is said to be weakly compatible or coincidentally commuting if f and g commute at their coincidence points, i.e., Definition 2.5 ( [23,37]). Let (X, d) be a metric space and f and g two selfmappings on X. Then (i) the pair (f, g) is said to be weakly commuting if It is clear that, in a metric space, commutativity ⇒ weak commutativity ⇒ compatibility ⇒ weak compatibility but reverse implications are not true in general. Definition 2.6 ( [36]). Let (X, d) be a metric space, f and g two self-mappings on X and x ∈ X. We say that f is g-continuous at x if for all sequences {x n } ⊂ X, Notice that with g = I (the identity mapping on X) Definition 2.6 reduces to the definition of continuity. Now, we formulate the variants of bounded sequences and monotone sequences with respect to relation ≺≻. Definition 2.7. Let (X, ) be an ordered set and {x n } a sequence in X. 
Then (i) {x n } is said to be termwise bounded if there is an element z ∈ X such that each term of {x n } is comparable with z, i.e., x n ≺≻ z ∀ n ∈ N 0 so that z is a c-bound of {x n } and (ii) {x n } is said to be termwise monotone if consecutive terms of {x n } are comparable, i.e., Clearly all bounded above as well as bounded below sequences are termwise bounded and all monotone sequences are termwise monotone. Let (X, d, ) be an ordered metric space and {x n } a sequence in X. If {x n } is termwise monotone and x n d −→ x, then we denote it symbolically by x n x. Next, we formulate the following notion using certain property utilized by Nieto and Rodríguez-López [32] (see assumption (iii) in Theorem 1.2). Definition 2.8. Let (X, d, ) be an ordered metric space. We say that (X, d, ) has TCC (termwise monotone-convergence-c-bound) property if every termwise monotone convergent sequence {x n } in X has a subsequence, which is termwise bounded by the limit (of the sequence) as a c-bound, i.e., Definition 2.9. Let (X, d, ) be an ordered metric space and g a self-mapping on X. We say that (X, d, ) has g-TCC property if every termwise monotone convergent sequence {x n } in X has a subsequence, whose g-image is termwise bounded by g-image of limit (of the sequence) as a c-bound, i.e., Notice that under the restriction g = I, the identity mapping on X, Definition 2.9 reduces to Definition 2.8. 10. An ordered set (X, ) is called sequentially chainable if range of every termwise monotone sequence in X remains a totally ordered subset of X. Proposition 2.2. The following are equivalent: (i) (X, ) is sequentially chainable, (ii) ≺≻ is transitive on range of every termwise monotone sequence in X, (iii) for every termwise monotone sequence {x n } in X, The following family of control functions is essentially due to Boyd and Wong [7]. Mukherjea [29] introduced the following family of control functions: : ϕ(t) < t for each t > 0 and ϕ is right continuous . The following family of control functions found in literature is more natural. The following family of control functions is due to Lakshmikantham andĆirić [27]. : ϕ(t) < t for each t > 0 and lim r→t + ϕ(r) < t for each t > 0 . The following family of control functions is indicated in Boyd and Wong [7] but was later used in Jotic [21]. Proposition 2.3 ([2] ). The class Ω enlarges the classes Ψ, Θ, ℑ and Φ under the following inclusion relation: The following known results are useful in the proof of our main results. is a sequence such that a n+1 ≤ ϕ(a n ) ∀ n ∈ N 0 , then lim n→∞ a n = 0. (iv) the following four sequences tend to ǫ when k → ∞: Lemma 2.3 ([15] ). Let X be a nonempty set and g a self-mapping on X. Then there exists a subset E ⊆ X such that g(E) = g(X) and g : E → X is one-one. Results on coincidence points Firstly, we prove a coincidence point theorem under generalized ϕ-contractivity condition as follows. Theorem 3.1. Let (X, d, ) be an ordered metric space and f and g two selfmappings on X. Suppose that the following conditions hold: is sequentially chainable. Then f and g have a coincidence point. , then x 0 is a coincidence point of f and g and hence we are through. Otherwise, if g(x 0 ) = f (x 0 ), then owing to assumption (a) (i.e., f (X) ⊆ g(X)), we can choose x 1 ∈ X such that g(x 1 ) = f (x 0 ). As f (X) ⊆ g(X), we can choose x 2 ∈ X such that g(x 2 ) = f (x 1 ). 
Continuing this process, we define a sequence {x n } ⊂ X of joint iterates such that Now, we show that {gx n } is a termwise monotone sequence, i.e., To prove (2), we distinguish four cases (owing to conditions (b) and (c): In cases (i) and (ii), we conclude that {gx n } is respectively increasing and decreasing sequence (for proof see lines of the main results of [2]). In case (iii), using (1) ). Hence, on using assumption (b) and (1), we get g( ). Further, on using assumption (b) and (1), we get g( . Continuing this procedure inductively, we obtain In the similar manner, in case (iv), we obtain Therefore, in all the cases, (2) holds for all n ∈ N 0 . If g(x n0 ) = g(x n0+1 ) for some n 0 ∈ N, then using (1), we have g(x n0 ) = f (x n0 ), i.e., x n0 is a coincidence point of f and g so that we are through. On the other hand, if g(x n ) = g(x n+1 ) for each n ∈ N 0 , we define a sequence On using (1), (2), (3) and assumption (d), we obtain Hence by Lemma 2.1, we obtain Now, in view of (4) and Lemma 2.2, there exist ǫ > 0 and two subsequences and Proposition 2.2, we have g(x m k ) ≺≻ g(x n k ). Hence, on using (1) and assumption (d), we obtain On taking limit superior as k → ∞ in (6) and using (5) and the definition of Ω, we have which is a contradiction. Therefore {gx n } is a Cauchy sequence. Now, we use assumptions (e) or (e ′ ) to accomplish the proof. Firstly, assume that (e) holds. By assumption (e1) (i.e., the completeness of X), there exists z ∈ X such that On using (1) and (7), we obtain In view of assumption (e3) (i.e., continuity of g) in (7) and (8), we have . As lim n→∞ f (x n ) = lim n→∞ g(x n ) = z (due to (7) and (8)), on using assumption (e2) (i.e., compatibility of f and g), we obtain Now, we show that z is a coincidence point of f and g. To accomplish this, we use assumption (e4). Suppose that f is continuous. On using (7) and continuity of f , we obtain On using (10), (11), (12) and continuity of d, we obtain Thus z ∈ X is a coincidence point of f and g and hence we are through. Alternately, suppose that (X, d, ) has g-TCC property. As g(x n ) z (due to (2) and (7)), ∃ a subsequence {y n k } of {gx n } such that Since g(x n k ) d −→ z, so equations (7)-(12) also hold for {x n k } instead of {x n }. On using (13) and assumption (d), we obtain Now, we asserts that (14) d(f gx n k , f z) ≤ d(ggx n k , gz) ∀ k ∈ N. On account of two different possibilities arising here, we consider a partition In case (i), on using Proposition 2.1, we get d(f gx n k , f z) = 0 ∀ k ∈ N 0 and hence (14) holds for all k ∈ N 0 . In case (ii), owing to the definition of Ω, we have d(f gx n k , f z) ≤ ϕ(d(ggx n k , gz)) < d(ggx n k , gz) ∀ k ∈ N + and hence (14) holds for all k ∈ N + . Thus (14) holds for all k ∈ N. On using triangular inequality, (9), (10), (11) and (14), we get Thus z ∈ X is a coincidence point of f and g and hence we are through. Now, assume that (e ′ ) holds. Then the assumption f (X) ⊆ Y and completeness of Y ensure the existence of y ∈ Y such that f (x n ) d −→ y. Again owing to assumption Y ⊆ g(X), we can find u ∈ X such that y = g(u). Hence, on using (1), we get (15) lim Now, we show that u is a coincidence point of f and g. To accomplish this, we use assumption (e ′ 2). Firstly, suppose that f is g-continuous, then using (15), we get On using (15) and (16), we get Secondly, suppose that f and g are continuous. Owing to Lemma 2.3, there exists a subset E ⊆ X such that g(E) = g(X) and g : E → X is one-one. 
Without loss of generality, we are able to choose E ⊆ X such that u ∈ E. Now, define T : g(E) → g(X) by As g : E → X is one-one and f (X) ⊆ g(X), T is well defined. Again, as f and g are continuous, it follows that T is continuous. Since {x n } ⊂ X and g(E) = g(X), there exists {e n } ⊂ E such that g(x n ) = g(e n ) ∀ n ∈ N 0 . On using Proposition 2.1, we get f (x n ) = f (e n ) ∀ n ∈ N 0 . Therefore, in view of (1) and (15), we get Thus u ∈ X is a coincidence point of f and g and hence we are through. Finally, suppose that (Y, d, ) has TCC property. As g(x n ) g(u) (due to (2) and (15)), ∃ a subsequence {gx n k } of {gx n } such that (19) g(x n k ) ≺≻ g(u) ∀ k ∈ N 0 . On using (1), (19) and assumption (d), we obtain We asserts that On account of two different possibilities arising here, we consider a partition In case (i) holds, on using Proposition 2.1, we get d(f x n k , f u) = 0 ∀ k ∈ N 0 , which implies that d(gx n k +1 , f u) = 0 ∀ k ∈ N 0 and hence (20) holds for all k ∈ N 0 . If case (ii) holds, then owing to the definition of Ω, we have d(gx n k +1 , f u) ≤ ϕ(d(gx n k , gu)) < d(gx n k , gu) ∀ k ∈ N + and hence (20) holds for all k ∈ N + . Thus (20) holds for all k ∈ N. On using (15), (20) and continuity of d, we get . Hence u ∈ X is a coincidence point of f and g. This completes the proof. If we set ϕ(t) = αt (with α ∈ [0, 1)) in Theorem 3.1 and remove the assumption (f), then we obtain the following coincidence theorem for α-contraction. Theorem 3.2. Let (X, d, ) be an ordered metric space and f and g two selfmappings on X. Suppose that the following conditions hold: ( such that d(f x, f y) ≤ αd(gx, gy) ∀ x, y ∈ X with g(x) ≺≻ g(y), (e) (e1) (X, d) is complete, (e2) (f, g) is compatible pair, (e3) g is continuous, (e4) either f is continuous or (X, d, ) has g-TCC property, or alternately (e ′ ) (e ′ 1) there exists a subset Y of X such that f (X) ⊆ Y ⊆ g(X) and (Y, d) is complete, (e ′ 2) either f is g-continuous or f and g are continuous or (Y, d, ) has TCC property. Then f and g have a coincidence point. Proof. We use the same structure as in the proof of Theorem 3.1. By following its lines, we derive By classical techniques, it can be easily shown that {gx n } is a Cauchy sequence. Here it is noticed that there is no need to use the assumption (f) mentioned in Theorem 3.1, because we do not need to apply the contractivity condition to d(gx m k , gx n k ). Finally, we accomplish the proof by using (e) and (e ′ ) same as in the proof of Theorem 3.1. Corollary 3.1. Theorem 3.1 (also Theorem 3.2) remains true if we replace (e ′ 1) by one of the following conditions besides retaining the rest of the hypotheses: (e ′ 1) ′ (X, d) is complete and one of f and g is onto, (e ′ 1) ′′ (X, d) is complete and has a closed subspace Y with f (X) ⊆ Y ⊆ g(X). Proof. If (e ′ 1) ′ holds, then either f (X) = X or g(X) = X so that either f (X) or g(X) is complete and hence assumption (e ′ ) is applicable. If (e ′ 1) ′′ holds, then Y is complete and hence assumption (e ′ ) is applicable. As commutativity ⇒ weak commutativity ⇒ compatibility for a pair of mappings, therefore the following consequence of Theorem 3.1 (also of Theorem 3.2) trivially holds. In the following lines, we present the results regarding the uniqueness of a point of coincidence and common fixed point corresponding to Theorems 3.1 and 3.2. Remark 3.2. In Theorem 3.3, we can replace (u 0 ) by the following condition: (u ′ 0 ) for every x, y ∈ X, ∃ z ∈ X such that f (x) ≺≻ g(z) and f (y) ≺≻ g(z). 
Indeed (u 0 ) and (u ′ 0 ) are equivalent. The implication (u 0 ) ⇒ (u ′ 0 ) is trivial. Conversely, if (u ′ 0 ) holds, then we have the following possibilities: (i) f (x) g(z) and f (y) g(z) so that g(z) ∈ g(X) is an upper bound of {f x, f y}, is an upper bound of {f x, f y}, and hence (u 0 ) holds. Thus (u 0 ) ⇔ (u ′ 0 ). Theorem 3.4. In addition to the hypotheses of Theorem 3.3, suppose that the following condition holds: (u 1 ) one of f and g is one-one. Then f and g have a unique coincidence point. Theorem 3.5. In addition to the hypotheses of Theorem 3.3, suppose that the following condition holds: (u 2 ) (f, g) is weakly compatible pair. Then f and g have a unique common fixed point. Corresponding fixed point theorems By particularizing g = I, the identity mapping on X, in Theorems 3.2 and 3.1 (together with Theorems 3.3-3.6), we respectively derive the following fixed point theorems. Let (X, d, ) be an ordered metric space and f a self-mapping on X. Suppose that the following conditions hold: Then f has a fixed point. Moreover, if in addition the following also holds: (vi) every pair of elements of f (X) has a lower bound or an upper bound, then f has a unique fixed point. Remark 4.1. Notice that Theorem 4.1 improves Theorem 1.1 (i.e., the main result of Ran and Reurings [35]) and Theorem 1.2 (i.e., the main result of Nieto and Rodríguez-López [32]) in the following respects: • In the context of hypothesis (i), the completeness of X is not necessary. Alternately, it can be replaced by the completeness of Y , where f (X) ⊆ Y ⊆ X. • In the context of hypothesis (vi), the requirement of a lower bound or an upper bound is not required on whole of X but it suffices to take the same merely on the subset f (X). • The assumption (vi) is unnecessary for the existence part and it is merely utilized to establish the uniqueness of fixed point. Let (X, d, ) be an ordered metric space and f a self-mapping on X. Suppose that the following conditions hold: (vi) (f X, ) is sequentially chainable. Then f has a fixed point. Moreover, if in addition the following also holds: (vii) every pair of elements of f (X) has a lower bound or an upper bound, then f has a unique fixed point. Examples In this section, we furnish some examples establishing the genuineness of our main results. , then ϕ ∈ Ω. Now, for all x, y ∈ X with x y, we have so that f and ϕ satisfy assumption (v) of Theorem 4.1. Observe that all the other conditions of Theorem 4.1 are also satisfied. Therefore, f has a unique fixed point (namely: x = 0). Notice that f is not a linear contraction. To substantiate this, choose x = 0 and y = ǫ, where ǫ is arbitrarily small but positive. If we take a constant α such that d(f x, f y) ≤ αd(x, y), then α ≥ 1 1+ǫ , which amounts to say that α ≥ 1 so that α ∈ [0, 1). Henceforth, f is not a linear contraction. Thus, Example 5.1 establishes the utility of Theorem 4.1 over well known fixed point theorems of Ran and Reurings and Nieto and Rodríguez-López (i.e., Theorems 1.1 and 1.2). Example 5.2. Consider X = R equipped with usual metric and usual partial order. Define f, g : X → X by f (x) = 5 and g(x) = x 2 − 4 ∀ x ∈ X. Then f is g-monotone. Let ϕ ∈ Ω be arbitrary. Now, for x, y ∈ X with g(x) g(y), we have , gy)). Notice that neither f nor g is one-one, i.e., (u 1 ) does not hold and hence, we can not apply Theorem 3.4, which guarantees the uniqueness of coincidence point. Observe that there are two coincidence points (namely: x = 3 and x = −3). 
Also, the pair (f, g) is not weakly compatible, i.e., (u 2 ) does not hold and hence, we can not apply Theorem 3.5, which ensures the uniqueness of common fixed point. Notice that there is no common fixed point of f and g. Example 5.3. Let X = R. On X, consider usual metric d and partial order defined by x y ⇔ x ≤ y and xy ≥ 0. Then (X, d) is a complete metric space. Define f, g : , gy)). Therefore, f , g and ϕ satisfy assumption (d) of Theorem 3.1. By a routine calculation, one can also verify all the conditions mentioned in (e) (of Theorem 3.1). Thus, all the conditions of Theorem 3.1 are satisfied and f and g have a coincidence point in X. Moreover, the condition (u 0 ) also holds and therefore, in view of Theorem 3.6, f and g have a unique common fixed point (namely: x = 0). Application In this section, as an application of Theorem 4.2, we prove an existence and uniqueness of solution of the following first order periodic boundary value problem which is essentially inspired by [32]. where T > 0 and f : I × R → R is a continuous function. Let C(I) denote the space of all continuous functions defined on I. Now, we need to recall the following definitions: Definition 6.1 ([32]). A function α ∈ C 1 (I) is called a lower solution of (21), if α ′ (t) ≤ f (t, α(t)), t ∈ I α(0) ≤ α(T ). Let F denote the family of functions φ : [0, ∞] → [0, ∞] satisfying the following conditions: (i) φ is continuous and increasing, Typical examples of F are φ(t) = αt, 0 ≤ α < 1, φ(t) = t 1+t and φ(t) = ln(1 + t). Also, clearly F ⊂ Ω. Now, we prove the following result regarding the existence and uniqueness of the solution of the periodic boundary value problem described by (21) in the presence of a lower solution or an upper solution. Theorem 6.1. In addition to the problem described by (21), suppose that there exist λ > 0 and φ ∈ F such that for all x, y ∈ R with x ≤ y Then the existence of a lower solution or an upper solution of problem (21) ensures the existence and uniqueness of the solution of the periodic boundary value problem described by (21). Proof. The problem (21) can be rewritten as: Notice that the problem (23) is equivalent to the integral equation where the Green function G(t, ξ) is given by Define a function A : C(I) → C(I) by Evidently, if u ∈ C(I) is a fixed point of A, then u ∈ C 1 (I) is a solution of (24) and hence of (21). On C(I), define a metric d given by: Also, on C(I), define a partial order given by: Now, we check that all the conditions of Theorem 4.2 are satisfied for Y = X = C(I). (i) Clearly, (C(I), d) is a complete metric space. On using (24), (25) and the fact that G(t, ξ) > 0 for (t, ξ) ∈ I × I, we get which implies that A(u) A(v) so that A is decreasing. (iii) Take a sequence {u n } ⊂ C(I) such that u n u ∈ C(I). Then for each t ∈ I, {u n (t)} is a sequence in R converging to u(t). Hence, {u n (t)} has a monotone subsequence {u n k (t)}. Therefore, for all k ∈ N 0 and for all t ∈ I, we have u n k (t) ≤ u(t) if {u n k (t)} is increasing u n k (t) ≥ u(t) if {u n k (t)} is decreasing, which implies that u n k ≺≻ u ∀ k ∈ N 0 so that (C(I), d, ) has TCC property. Choose arbitrary u, v ∈ C(I), then w := max{Au, Av} ∈ C(I), which yields that w is an upper bound of {Au, Av}. Thus, by Theorem 4.2, A has a unique fixed point, which is, indeed, a unique solution of problem (21).
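To make the application concrete, the following sketch iterates the operator A from the proof of Theorem 6.1 numerically for a toy periodic problem. The right-hand side f, the choice λ = 1, the horizon T, and the grid size are illustrative assumptions, and the explicit expression used for G is the standard periodic Green function for u′ + λu, supplied here as an assumption; the snippet only demonstrates that the Picard iterates of A settle on a periodic fixed point.

```python
import numpy as np

# Illustrative fixed-point iteration for u'(t) = f(t, u(t)), u(0) = u(T).
T, lam, N = 1.0, 1.0, 400
t = np.linspace(0.0, T, N)
w = np.full(N, t[1] - t[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal quadrature weights

def f(t, u):
    return -0.5 * u + np.cos(2.0 * np.pi * t / T)        # toy right-hand side (assumption)

# standard periodic Green function for u' + lam*u (assumption)
TT, XI = np.meshgrid(t, t, indexing="ij")
G = np.where(XI <= TT, np.exp(lam * (T + XI - TT)), np.exp(lam * (XI - TT))) / (np.exp(lam * T) - 1.0)

def A(u):
    # (A u)(t) = int_0^T G(t, xi) [ f(xi, u(xi)) + lam * u(xi) ] d xi
    return G @ (w * (f(t, u) + lam * u))

u = np.zeros(N)                       # initial guess
for _ in range(60):                   # Picard iteration; contraction factor 1/2 for this toy f
    u = A(u)

print(abs(u[0] - u[-1]))              # periodicity residual, small (quadrature-level)
print(np.max(np.abs(A(u) - u)))       # fixed-point residual, essentially zero
```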
Deception-As-Defense Framework for Cyber-Physical Systems We introduce deceptive signaling framework as a new defense measure against advanced adversaries in cyber-physical systems. In general, adversaries look for system-related information, e.g., the underlying state of the system, in order to learn the system dynamics and to receive useful feedback regarding the success/failure of their actions so as to carry out their malicious task. To this end, we craft the information that is accessible to adversaries strategically in order to control their actions in a way that will benefit the system, indirectly and without any explicit enforcement. Under the solution concept of game-theoretic hierarchical equilibrium, we arrive at a semi-definite programming problem equivalent to the infinite-dimensional optimization problem faced by the defender while selecting the best strategy when the information of interest is Gaussian and both sides have quadratic cost functions. The equivalence result holds also for the scenarios where the defender can have partial or noisy measurements or the objective of the adversary is not known. We show the optimality of linear signaling rule within the general class of measurable policies in communication scenarios and also compute the optimal linear signaling rule in control scenarios. Introduction All warfare is based on deception. Hence, when we are able to attack, we must seem unable; when using our forces, we must appear inactive; when we are near, we must make the enemy believe we are far away; when far away, we must make him believe we are near. -Sun Tzu, The Art of War [27] can be used as a defense strategy by making the opponent/adversary to perceive certain information of interest in an engineered way. Indeed, deception is also not limited to hostile environments. In all non-cooperative multi-agent environments, as long as there is asymmetry of information and one agent is informed about the information of interest while the other is not, then the informed agent has power on the uninformed one to manipulate his/her decisions or perceptions by sharing that information strategically. Especially with the introduction of cyber connectedness in physical systems, certain communication and control systems can be viewed as multi-agent environments, where each agent makes rational decisions to fulfill certain objectives. As an example, we can view transmitters (or sensors) and receivers (or controllers) as individual agents in communication (or control) systems. However, classical communication and control theory is based on the cooperation between these agents to meet certain challenges together, such as in mitigating the impact of a noisy channel in communication or in stabilizing the underlying state of a system around an equilibrium through feedback in control. However, cyber connectedness makes these multi-agent environments vulnerable against adversarial interventions and there is an inherent asymmetry of information as the information flows from transmitters (or sensors) to receivers (or controllers)1. Therefore, if these agents are not cooperating, e.g., due to adversarial intervention, then the informed agents, i.e., transmitters or sensors, could seek to deceive the uninformed ones, i.e., receivers or controllers, so that they would perceive the underlying information of interest in a way the deceiver has desired, and correspondingly would take the manipulated actions. 
Our goal, here, is to craft the information that could be available to an adversary in order to control his/her perception about the underlying state of the system as a defensive measure. The malicious objective and the normal operation of the system may not be completely opposite of each other as in the framework of a zero-sum game, which implies that there is a part of malicious objective that is benign and the adversary would be acting in line with the system's interest with respect to that aligned part of the objectives. If we can somehow restrain the adversarial actions to fulfill only the aligned part, then the adversarial actions, i.e., the attack, could inadvertently end up helping the system toward its goal. Since a rational adversary would make decisions based on the information available to him, the strategic crafting of the signal that is shared with the adversary, or the adversary can have access to, can be effective in that respect. Therefore, our goal is to design the information flowing from the informed agents, e.g., sensors, to the uninformed ones, e.g., controllers, in view of the possibility of adversarial intervention, so as to control the perception of the adversaries about the underlying system, and correspondingly to persuade them (without any explicit enforcement) to fulfill the aligned parts of the objectives as much as possible without fulfilling the misaligned parts. In this chapter, we provide an overview of the recent results [25,21,24] addressing certain aspects of this challenge in non-cooperative communication and control settings. For a discrete-time Gauss Markov process, and when the sender and the receiver in a non-cooperative communication setting have misaligned quadratic objectives, in [25], we have shown the optimality of linear signaling rules2 within the general class of measurable policies and provided an algorithm to compute the optimal policies numerically. Also in [25], we have formulated the optimal linear signaling rule in a non-cooperative linear-quadratic-Gaussian (LQG) control setting when the sensor and the controller have known misaligned control objectives. In [21], we have introduced a secure sensor design framework, where we have addressed the optimal linear signaling rule again in a non-cooperative LQG setting when the sensor and private-type controller have misaligned control objectives in a Bayesian setting, i.e., the distribution over the private type of the controller is known. In [24], we have addressed the optimal linear robust signaling in a non-Bayesian setting, where the distribution over the private type of the controller is not known, and provided a comprehensive formulation by considering also the cases where the sensor could have partial or noisy information on the signal of interest and relevance. We elaborate further on these results in some detail throughout the chapter. In Section 2, we review the related literature in economics and engineering. In Sections 3 and 4, we introduce the framework and formulate the deception-as-defense game, respectively. In Section 5, we elaborate on Gaussian information of interest in detail. In Sections 6 and 7, we address the optimal signaling rules in non-cooperative communication and control systems. In Section 8, we provide the optimal signaling rule against the worst possible distribution over the private types of the uninformed agent. In Section 9, we extend the results to partial or noisy measurements of the underlying information of interest. 
Finally, we conclude the chapter in Section 10 with several remarks and possible research directions. Notation: Random variables are denoted by bold lower case letters, e.g., x x x. For a random vector x x x, cov{x x x} denotes the corresponding covariance matrix. For an ordered set of parameters, e.g., x 1 , . . . , x κ , we use the notation x k:l = x l , . . . , x k , where 1 ≤ l ≤ k ≤ κ. N(0, ·) denotes the multivariate Gaussian distribution with zero mean and designated covariance. For a vector x and a matrix A, x and A denote their transposes, and x denotes the Euclidean 2 -norm of the vector x. For a matrix A, Tr{A} denotes its trace. We denote the identity and zero matrices with the associated dimensions by I and O, respectively. S m denotes the set of m-by-m symmetric matrices. For positive semi-definite matrices A and B, A B means that A − B is also positive semi-definite. Deception Theory in Literature There are various definitions of deception. Depending on the specific definition at hand, the analysis or the related applications vary. Commonly in signaling-based deception definitions, there is an information of interest private to an informed agent whereas an uninformed agent may benefit from that information to make a certain decision. If the informed and uninformed agents are strategic while, respectively, sharing information and making a decision, then the interaction can turn into a game where the agents select their strategies according to their own objectives while taking into account the fact that the other agent would also have selected his/her strategy according to his/her different objective. Correspondingly, such an interaction between the informed and uninformed agents can be analyzed under a game-theoretic solution concept. Note that there is a main distinction between incentive compatible deception model and deception model with policy commitment. Definition 1 We say that a deception model is incentive compatible if neither the informed nor the uninformed agent have an incentive to deviate from their strategies unilaterally. The associated solution concept here is Nash equilibrium [2]. Existence of a Nash equilibrium is not guaranteed in general. Furthermore, even if it exists, there may also be multiple Nash equilibria. Without certain commitments, any of the equilibria may not be realized or if one has been realized, which of them would be realized is not certain beforehand since different ones could be favorable for different players. Definition 2 We say that in a deception model, there is policy commitment if either the informed or the uninformed agent commits to play a certain strategy beforehand and the other agent reacts being aware of the committed strategy. The associated solution concept is Stackelberg equilibrium, where one of the players leads the game by announcing his/her committed strategy [2]. Existence of a Stackelberg equilibrium is not guaranteed in general over unbounded strategy spaces. However, if it exists, all the equilibria would lead to the same game outcome for the leader of the game since the leader could have always selected the favorable one among them. We also note that if there is a favorable outcome for the leader in the incentive compatible model, the leader has the freedom to commit to that policy in the latter model. Correspondingly, the leader is advantageous by acting first to commit to play according to a certain strategy even though the result may not be incentive compatible. 
Game theoretical analysis of deception has attracted substantial interest in various disciplines, including economics and engineering fields. In the following subsections, we review the literature in these disciplines with respect to models involving incentive compatibility and policy commitment. Economics Literature The scheme of the type introduced above, called strategic information transmission, was introduced in a seminal paper by V. Crawford and J. Sobel in [9]. This has attracted significant attention in the economics literature due to the wide range of relevant applications, from advertising to expert advise sharing. In the model adopted in [9], the informed agent's objective function includes a commonly known bias term different from the uninformed agent's objective. That bias term can be viewed as the misalignment factor in-between the two objectives. For the incentive compatible model, the authors have shown that all equilibria are partition equilibria, where the informed agent controls the resolution of the information shared via certain quantization schemes, under certain assumptions on the objective functions (satisfied by quadratic objectives), and the assumption that the information of interest is drawn from a bounded support. Following this inaugural introduction of the strategic information transmission framework, also called cheap talk due to the costless communication over an ideal channel, different settings, such as • Single sender and multiple receivers [11,12], • Multiple senders and single receiver [13,17], • Repeated games [18], have been studied extensively; however, all have considered the scenarios where the underlying information is one-dimensional, e.g., a real number. However, multidimensional information can lead to interesting results like full revelation of the information even when the misalignment between the objectives is arbitrarily large if there are multiple senders with different bias terms, i.e., misalignment factors [3]. Furthermore, if there is only one sender yet multidimensional information, there can be full revelation of information at certain dimensions while at the other dimensions, the sender signals partially in a partition equilibrium depending on the misalignment between the objectives [3]. The ensuing studies [11,13,12,17,18,3] on cheap talk [9] have analyzed the incentive compatibility of the players. More recently, in [16], the authors have proposed to use a deception model with policy commitment. They call it "sender-preferred sub-game perfect equilibrium" since the sender cannot distort or conceal information once the signal realization is known, which can be viewed as the sender revealing and committing to the signaling rule in addition to the corresponding signal realization. For information of interest drawn from a compact metric space, the authors have provided necessary and sufficient conditions for the existence of a strategic signal that can benefit the informed agent, and characterized the corresponding optimal signaling rule. Furthermore, in [28], the author has shown the optimality of linear signaling rules for multivariate Gaussian information of interest and with quadratic objective functions. Engineering Literature There exist various engineering applications depending on the definition of deception. Reference [19] provides a taxonomy of these studies with a specific focus on security. 
Obfuscation techniques to hide valuable information, e.g., via externally introduced noise [15,8,30] can also be viewed as deception based defense. As an example, in [15], the authors have provided a browser extension that can obfuscate user's real queries by including automatically-fabricated queries to preserve privacy. Here, however, we specifically focus on signaling-based deception applications, in which we craft the information available to adversaries to control their perception rather than corrupting it. In line with the browser extension example, our goal is to persuade the query trackers to perceive the user behavior in a certain fabricated way rather than limiting their ability to learn the actual user behavior. In computer security, various (heuristic) deception techniques, e.g., honeypots and honey nets, are prevalent to make the adversary perceive a honey-system as the real one or a real system as a honey-one [26]. Several studies, e.g., [7], have analyzed honeypots within the framework of binary signaling games by abstracting the complexity of crafting a real system to be perceived as a honeypot (or crafting a honeypot to be perceived as a real system) to binary signals. However, here, our goal is to address the optimal way to craft the underlying information of interest with a continuum support, e.g., a Gaussian state. The recent study [20] addresses strategic information transmission of multivariate Gaussian information over an additive Gaussian noise channel for quadratic misaligned cost functions and identifies the conditions where the signaling rule attaining a Nash equilibrium can be a linear function. Recall that for scalar case, when there is no noisy channel in-between, all the equilibria are partition equilibria, implying all the signaling rules attaining a Nash equilibrium are nonlinear except babbling equilibrium, where the informed agent discloses no information [9]. Two other recent studies [1] and [10] address strategic information transmission for the scenarios where the bias term is not common knowledge of the players and the solution concept is Stackelberg equilibrium rather than Nash equilibrium. They have shown that the Stackelberg equilibrium could be attained by linear signaling rules under certain conditions, different from the partition equilibria in the incentive compatible cheap talk model [9]. In [10], the authors have studied strategic sensor networks for multivariate Gaussian information of interest and with myopic quadratic objective functions in dynamic environments and by restricting the receiver's strategies to affine functions. In [1], for jointly Gaussian scalar private information and bias variable, the authors have shown that optimal sender strategies are linear functions within the general class of measurable policies for misaligned quadratic cost functions when there is an additive Gaussian noise channel and hard power constraint on the signal, i.e., when it is no longer cheap talk. Deception-As-Defense Framework Consider a multi-agent environment with asymmetry of information, where each agent is a selfish decision maker taking action or actions to fulfill his/her own objective only while actions of any agent could impact the objectives of the others. As an example, Fig. 1 illustrates a scenario with two agents: Sender (S) and Receiver (R), where S has access to (possibly partial or noisy version of) certain information valuable to R, and S sends a signal or signals related to the information of interest to R. 
Definition 3 We say that an informed agent (or the signal the agent crafts) is deceptive if he/she shapes the information of interest private to him/her strategically in order to control the perception of the uninformed agent by removing, changing, or adding contents. Deceptive signaling can play a key role in multi-agent non-cooperative environments as well as in cooperative ones, where certain (uninformed) agents could have been compromised by certain adversaries. In such scenarios, informed agents can signal strategically to the uninformed ones in case they could have been compromised. Furthermore, deceiving an adversary to act, or attack the system in a way aligned with the system's goals can be viewed as being too optimistic due to the very definition of adversary. However, an adversary can also be viewed as a selfish decision maker seeking to satisfy a certain malicious objective, which may not necessarily be completely conflicting with the system's objective. This now leads to the following notion of "deception-as-defense". Definition 4 We say that an informed agent engages in a deception-as-defense mode of operation if he/she crafts the information of interest strategically to persuade the uninformed malicious agent (without any explicit enforcement) to act in line with the aligned part of the objective as much as possible without taking into account the misaligned part. We re-emphasize that this approach differs from the approaches that seek to raise suspicion on the information of interest to sabotage the adversaries' malicious objectives. Sabotaging the adversaries' malicious objectives may not necessarily be the best option for the informed agent unless the objectives are completely opposite of each other. In this latter case, the deception-as-defense framework actually ends up seeking to sabotage the adversaries' malicious objectives. We also note that this approach differs from lying, i.e., the scenario where the informed agent provides a totally different information (correlated or not) as if it is the information of interest. Lying could be effective, as expected, as long as the uninformed agent trusts the legitimacy of the provided information. However, in noncooperative environments, this could turn into a game where the uninformed agent becomes aware of the possibility of lying. This correspondingly raises suspicion on the legitimacy of the shared information and could end up sabotaging the adversaries' malicious objectives rather than controlling their perception of the information of interest. Once a defense mechanism has been widely deployed, this can cause the advanced adversaries learn the defense policy in the course of time. Correspondingly, the solution concept of policy commitment model can address this possibility in the deception-as-defense framework in a robust way if the defender commits to a certain policy that takes into account the best reaction of the adversaries that are aware of the policy. Furthermore, the transparency of the signal sent via the committed policy generates a trust-based relationship in-between S and R, which is powerful to persuade R to make certain decisions inadvertently without any explicit enforcement by S. Game Formulation The information of interest is considered to be a realization of a known, continuous, random variable in static settings or a known (discrete-time) random process in dynamic settings. 
Since the static setting is a special case of the dynamic setting, we formulate the game in a dynamic, i.e., multi-stage, environment. We denote the information of interest by {x x x k ∈ X}, where X ⊂ R m denotes its support. Let {x x x k } have zero mean and (finite) second-order moment Σ k := cov{x x x k } ∈ S m . We consider the scenarios where each agent has perfect recall and constructs his/her strategy accordingly. S has access to a possibly partial or noisy version of the information of interest, x x x k . We denote the noisy measurement of x x x k by y y y k ∈ Y, where Y ⊂ R m denotes its support. For each instance of the information of interest, S selects his/her signal as a second-order random variable s s s k = η k (y y y 1:k ), correlated with y y y 1:k , but not necessarily determined through a deterministic transformation on y y y 1:k (i.e., η k (·) is in general a random mapping). Let us denote the set of all signaling rules by Υ k . As we will show later, when we allow for such randomness in the signaling rule, under certain conditions the solution turns out to be a linear function of the underlying information y y y 1:k and an additive independent noise term. Due to the policy commitment by S, at each instant, with perfect recall, R selects a Borel measurable decision rule γ k : S k → U, where U ⊂ R r , from a certain policy space Γ k in order to make a decision knowing the signaling rules {η k } and observing the signals sent s s s 1:k . Let κ denote the length of the horizon. We consider that the agents have cost functions to minimize, instead of utility functions to maximize. Clearly, the framework could also be formulated accordingly for utility maximization rather straightfor-wardly. Furthermore, we specifically consider that the agents have quadratic cost functions, denoted by U S (η 1:κ , γ 1:κ ) and U R (η 1:κ , γ 1:κ ). An Example in Non-cooperative Communication Systems Over a finite horizon with length κ, S seeks to minimize over η 1: by taking into account that R seeks to minimize over γ 1: where the weight matrices are arbitrary (but fixed). The following special case illustrates the applicability of this general structure of misaligned objectives (3) and (4). Suppose that the information of interest consists of two separate processes {z z z k } and {t t t k }, e.g., x x x k := z z z k t t t k . Then (3) and (4) cover the scenarios where R seeks to estimate z z z k by minimizing whereas S wants R to perceive z z z k as t t t k , and end up minimizing An Example in Non-cooperative Control Systems Consider a controlled Markov process, e.g., where w w w k ∼ N(0, Σ w ) is a white Gaussian noise process. S seeks to minimize over by taking into account that R seeks to minimize over with arbitrary (but fixed) positive semi-definite matrices Q S and Q R , and positivedefinite matrices R S and R R . Similar to the example in communication systems, this general structure of misaligned objectives (8) and (9) can bring in interesting applications. Suppose the information of interest consists of two separate processes {z z z k } and {t t t k }, e.g., x x x k := z z z k t t t k , where {t t t k } is an exogenous process, which does not depend on R's decision u u u k . 
For certain weight matrices, (8) and (9) cover the scenarios where R seeks to regularize {z z z k } around zero vector by minimizing whereas S seeks R to regularize {z z z k } around the exogenous process {t t t k } by minimizing We define the deception-as-defense game as follows: Definition 5 The deception-as-defense game G := (Υ, Γ, {x x x k }, {y y y k }, U S , U R ) is a Stackelberg game between S and R, where • {x x x k } denotes the information of interest, • {y y y k } denotes S's (possibly noisy) measurements of the information of interest, • U S and U R are the objective functions of S and R, defined respectively by (3) and (4), or (8) and (9). Under the deception model with policy commitment, S is the leader, who announces (and commits to) his strategies beforehand, while R is the follower, reacting to the leader's announced strategies. Since R is the follower and takes actions knowing S's strategy η 1:κ ∈ Υ, we let B(η 1:κ ) ⊂ Γ be R's best reaction set to S's strategy η 1:κ ∈ Υ. Then, the strategy and best reaction pair (η * 1:κ , B(η * 1:κ )) attains the Stackelberg equilibrium provided that Quadratic Costs and Information of Interest Misaligned quadratic cost functions, in addition to their various applications, play an essential role in the analysis of the game G. One advantage is that a quadratic cost function can be written as a linear function of the covariance of the posterior estimate of the underlying information of interest. Furthermore, when the information of interest is Gaussian, we can formulate a necessary and sufficient condition on the covariance of the posterior estimate, which turns out to be just semi-definite matrix inequalities. This leads to an equivalent semi-definite programming (SDP) problem over a finite dimensional space instead of finding the best signaling rule over an infinite-dimensional policy space. In the following, we elaborate on these observations in further detail. Due to the policy commitment, S needs to anticipate R's reaction to the selected signaling rule η 1:κ ∈ Υ. Here, we will focus on the non-cooperative communication system, and later in Section 7, we will show how we can transform a non-cooperative control setting into a non-cooperative communication setting under certain conditions. Since the information flow is in only one direction, R faces the least mean square error problem for given η 1:κ ∈ Υ. Suppose that R R R R is invertible. Then, the best reaction by R is given by almost everywhere over R r . Note that the best reaction set B(η 1:κ ) is a singleton and the best reaction is linear in the posterior estimate E{x x x k |s s s 1:k }, i.e., the conditional expectation of x x x k with respect to the random variables s s s 1:k . When we substitute the best reaction by R into S's cost function, we obtain where M S := R S (R R R R ) −1 R R Q R . Since for arbitrary random variables a a a and b b b, E{a a aE{a a a|b b b}} = E{E{a a a|b b b}E{a a a|b b b}}, the objective function to be minimized by S, (15), can be written as where H k := cov{E{x x x k |s s s 1:k }} denotes the covariance of the posterior estimate, and the constant c is given by We emphasize that H k ∈ S m is not the posterior covariance, i.e., cov{E{x x x k |s s s 1:k }} cov{x x x k |s s s 1:k } in general. The cost function depends on the signaling rule η 1:κ ∈ Υ only through the covariance matrices H 1:κ and the cost is an affine function of H 1:κ . 
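The reduction above, namely that S's expected cost depends on the signaling rule only through H_k = cov{E{x_k | s_1:k}} and is affine in it, is easy to verify numerically in the single-stage Gaussian case. The sketch below uses arbitrary toy choices of Σ, L, Σ_o, and Q (none of them taken from the text) and checks both the identity E{a a E{a a|b b}} = E{E{a a|b b} E{a a|b b}} and the resulting affine dependence of a quadratic estimation cost on H.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3
M = rng.standard_normal((m, m)); Sigma = M @ M.T + m * np.eye(m)   # toy prior covariance
L = rng.standard_normal((m, m))                                    # linear part of a signaling rule
Sigma_o = np.eye(m)                                                # covariance of independent noise

# closed-form covariance of the posterior estimate H = cov(E{x|s}) for s = L x + n
S_cov = L @ Sigma @ L.T + Sigma_o
H = Sigma @ L.T @ np.linalg.solve(S_cov, L @ Sigma)

# Monte-Carlo check of  E{x xhat'} = E{xhat xhat'} = H,  where xhat := E{x|s}
N = 200_000
x = rng.multivariate_normal(np.zeros(m), Sigma, size=N)
n = rng.multivariate_normal(np.zeros(m), Sigma_o, size=N)
s = x @ L.T + n
K = Sigma @ L.T @ np.linalg.inv(S_cov)        # E{x|s} = K s for zero-mean jointly Gaussian (x, s)
xhat = s @ K.T

print(np.allclose(x.T @ xhat / N, xhat.T @ xhat / N, atol=0.1))
print(np.allclose(xhat.T @ xhat / N, H, atol=0.1))

# hence any cost E{(x - xhat)' Q (x - xhat)} equals Tr{Q Sigma} - Tr{Q H}, an affine function of H
Q = np.eye(m)
mc_cost = np.mean(np.einsum('ij,jk,ik->i', x - xhat, Q, x - xhat))
print(np.isclose(mc_cost, np.trace(Q @ Sigma) - np.trace(Q @ H), atol=0.1))
```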
By formulating the relation, we can obtain an equivalent finite-dimensional optimization problem over the space of symmetric matrices as an alternative to the infinite-dimensional problem over the policy space Υ. Next, we seek to address the following question. • ? Relation between η 1:κ and H 1:κ What is the relation between the signaling rule η 1:κ ∈ Υ and the covariance of the posterior estimate H 1:κ ? Here, we only consider the scenario where S has access to the underlying information of interest perfectly. We will address the scenarios with partial or noisy measurements in Section 9 by transforming that setting to the setting of perfect measurements. There are two extreme cases for the shared information: either sharing the information fully without any crafting or sharing no information. The former one implies that the covariance of the posterior estimate would be Σ k whereas the latter one implies that it would be cov{E{x x x k |s s s 1:k−1 }} since R has perfect memory. • ? Sufficient Condition What would be the sufficient condition? Is the necessary condition on H k ∈ S m (22) sufficient? The sufficient condition for arbitrary distributions is an open problem. However, in the following subsection, we show that when information of interest is Gaussian, we can address the challenge and the necessary condition turns out to be sufficient. Gaussian Information of Interest In addition to its use in modeling various uncertain phenomena based on the central limit theorem, Gaussian distribution has special characteristics which make it versatile in various engineering applications, e.g., in communication and control. The deception-as-defense framework is not an exception for the versatility of the Gaussian distribution. As an example, if the information of interest is Gaussian, the optimal signaling rule turns out to be a linear function within the general class of measurable policies, as to be shown in different settings throughout this chapter. Let us first focus on the single-stage setting, where the necessary condition (22) is given as The convention here is that for arbitrary symmetric matrices A, B ∈ S m , A B means that A − B O, that is positive semi-definite. We further note that the space of positive-semi-definite matrices is a semi-cone [29]. Correspondingly, Fig. 2 provides a figurative illustration of (23), where H 1 ∈ S m is bounded from both below and above by certain semi-cones in the space of symmetric matrices. With a certain linear transformation bijective over (23), denoted by L 1 : S m → S n , where n ∈ Z is not necessarily the same with m ∈ Z, the necessary condition (23) can be written as As an example of such a linear mapping when Σ 1 ∈ S m is invertible, we can consider and n = m. If Σ 1 is singular, then the following lemma from [23] plays an important role to compute such a linear mapping. Lemma 1 Provided that a given positive semi-definite matrix can be partitioned into blocks such that a block at the diagonal is a zero matrix, then certain off-diagonal blocks must also be zero matrices, i.e., Let the singular Σ 1 ∈ S m with rank n < m have the eigen-decomposition where Λ 1 O n . Then, (23) can be written as where we let be the corresponding partitioning, i.e., N 1,1 ∈ S n . Since U 1 H 1 U 1 O m , the diagonal block N 2,2 ∈ S m−n must be positive semi-definite [14]. Further, (27) yields that −N 2,2 O m−n , which implies that N 2,2 = O m−n . Invoking Lemma 1, we obtain N 1,2 = O n×(m−n) . 
Therefore, a linear mapping bijective over (23) is given by where the unitary matrix U 1 ∈ R m×m and the diagonal matrix Λ 1 ∈ S n are as defined in (26). With the linear mapping (29) that is bijective over (23), the necessary condition on H 1 ∈ S m can be written as since the eigenvalues of I n weakly majorize the eigenvalues of the positive semi-definite L 1 (H 1 ) from below [14]. Up to this point, the specific distribution of the information of interest did not play any role. However, for the sufficiency of the condition (23), Gaussianness of the information of interest plays a crucial role as shown in the following theorem [24]. Theorem 1 Consider m-variate Gaussian information of interest x x x 1 ∼ N(0, Σ 1 ). Given any stochastic kernel η 1 ∈ Υ 1 , we have Furthermore, given any covariance matrix H 1 ∈ S m satisfying we have that there exists a probabilistic linear-in-x x x 1 signaling rule where L 1 ∈ R m×m and n n n 1 ∼ N(0, Σ o 1 ) is an independent m-variate Gaussian random variable, such that cov{E{x x x 1 |η 1 (x x x 1 )}} = H 1 . Let L 1 (H 1 ) ∈ S n have the eigendecomposition L 1 (H 1 ) =Ū 1Λ1Ū 1 andΛ 1 = diag{λ 1,1 , . . . ,λ 1,n }. Then, the corresponding matrix L 1 ∈ R m×m and the covariance Σ o 1 O m are given by where the unitary matrix U 1 ∈ R m×m and the diagonal matrix Λ 1 ∈ S n are as defined in (26) Proof Note that for Gaussian information and the signaling rule (32), the covariance of the posterior estimate is given by Given H 1 ∈ S m satisfying (47), for (33) and (34), the linear-in-x x x 1 signaling rule (32) yields that cov{E{x x x 1 |L 1 x x x 1 + n n n 1 }} = H 1 . • > Implication of Theorem 1 If the underlying information of interest is Gaussian, instead of the functional optimization problem we can consider the equivalent finite-dimensional problem Then, we can compute the optimal signaling rule η * 1 corresponding to the solution of (37) via (32)-(34). Without any need to solve the functional optimization problem (36), Theorem 1 shows the optimality of the "linear plus a random variable" signaling rule within the general class of stochastic kernels when the information of interest is Gaussian. • ! Versatility of the Equivalence Furthermore, a linear signaling rule would still be optimal even when we introduce additional constraints on the covariance of the posterior since the equivalence between (36) and (37) is not limited with the equivalence in optimality. Recall that the distribution of the underlying information plays a role only in proving the sufficiency of the necessary condition. Therefore, in general, based on only the necessary condition, we have min The equality holds when the information of interest is Gaussian. Therefore, for fixed covariance Σ 1 ∈ S m , Gaussian distribution is the best one for S to persuade R in accordance with his/her deceptive objective, since it yields total freedom to attain any covariance of the posterior estimate inbetween the two extremes Σ 1 H 1 O. The following counter example shows that the sufficiency of the necessary condition (47) holds only in the case of the Gaussian distribution. A Counter Example for Arbitrary Distributions For a clear demonstration, suppose that m = 2 and Σ 1 = I 2 , and correspondingly x x x 1 = x x x 1,1 x x x 1,2 . The covariance matrix H := 1 0 0 0 satisfies the necessary condition (47) since which implies that the signal s s s 1 must be fully informative about x x x 1,1 without giving any information about x x x 1,2 . 
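A compact way to see Theorem 1 at work is to construct, for a full-rank Σ_1 and any H with Σ_1 ⪰ H ⪰ 0, a linear-plus-noise rule whose posterior-estimate covariance is exactly H. The sketch below is one valid construction in the spirit of (32)-(34); it uses the symmetric square root of Σ_1 instead of the decomposition in (26) (an equivalent whitening, chosen for brevity), and all numerical values are toy assumptions. The last two lines specialize to the Gaussian counterpart of the two-dimensional example just introduced (Σ_1 = I_2, H = diag(1, 0)): one signal component equals x_1,1 (up to sign) with no added noise, while the other carries pure noise.

```python
import numpy as np

def linear_plus_noise_rule(Sigma, H):
    """One valid construction in the spirit of Theorem 1 (full-rank Sigma assumed):
    returns (L, Sigma_o) with cov(E{x | L x + n}) = H for x ~ N(0, Sigma), n ~ N(0, Sigma_o)."""
    lam, U = np.linalg.eigh(Sigma)
    Sig_ihalf = U @ np.diag(1.0 / np.sqrt(lam)) @ U.T
    tbar, Ubar = np.linalg.eigh(Sig_ihalf @ H @ Sig_ihalf)   # eigenvalues lie in [0, 1]
    tbar = np.clip(tbar, 0.0, 1.0)
    L = np.diag(np.sqrt(tbar)) @ Ubar.T @ Sig_ihalf
    Sigma_o = np.diag(1.0 - tbar)
    return L, Sigma_o

def posterior_cov(Sigma, L, Sigma_o):
    """cov(E{x|s}) for the jointly Gaussian model s = L x + n."""
    S = L @ Sigma @ L.T + Sigma_o
    return Sigma @ L.T @ np.linalg.solve(S, L @ Sigma)

rng = np.random.default_rng(3)
m = 3
M = rng.standard_normal((m, m)); Sigma = M @ M.T + np.eye(m)
H = 0.5 * Sigma                                            # a feasible target: Sigma >= H >= 0
L, Sigma_o = linear_plus_noise_rule(Sigma, H)
print(np.allclose(posterior_cov(Sigma, L, Sigma_o), H))    # True

# the Gaussian version of the two-dimensional example: reveal x_{1,1} only
L2, So2 = linear_plus_noise_rule(np.eye(2), np.diag([1.0, 0.0]))
print(np.round(L2, 6)); print(np.round(So2, 6))
```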
Note that Σ 1 = I 2 only implies that x x x 1,1 and x x x 1,2 are uncorrelated, yet not necessarily independent for arbitrary distributions. Therefore, if x x x 1,1 and x x x 1,2 are uncorrelated but dependent, then any signaling rule cannot attain that covariance of the posterior estimate even though it satisfies the necessary condition. Let us now consider a Gauss-Markov process, which follows the following firstorder auto-regressive recursion where A ∈ R m×m and w w w k ∼ N(0, Σ w ). For this model, the necessary condition (22) is given by for k = 2, . . . , κ. Given where Λ k O n k , i.e., Σ k − AH k−1 A has rank n k . The linear transformation L k : k i=1 S m → S n given by is bijective over (41). With the linear mapping (43), the necessary condition on H 1:κ ∈ κ i=1 S m can be written as which correspondingly yields that L k (H 1:k ) ∈ S n k has eigenvalues in the closed interval [0, 1]. Then, the following theorem extends the equivalence result of the single-stage to multi-stage ones [24]. Furthermore, given any covariance matrices where H 0 = O m , then there exists a probabilistic linear-in-x x x k , i.e., memoryless, signaling rule where L k ∈ R m×m and {n n n k ∼ N(0, Σ o k )} is independently distributed m-variate Gaussian process such that cov{E{x x x k |η 1 (x x x 1 ), . . . , η k (x x x 1:k )}} = H k for all k = 1, . . . , κ. Given H 1:k−1 , let L k (H 1:k ) ∈ S n k have the eigen-decomposition L k (H 1:k ) =Ū kΛkŪ k andΛ k = diag{λ k,1 , . . . ,λ k,n k }. Then, the corresponding matrix L k ∈ R m×m and the covariance Σ o k O m are given by where the unitary matrix U k ∈ R m×m and the diagonal matrix Without any need to solve the functional optimization problem min Theorem 2 shows the optimality of the "linear plus a random variable" signaling rule within the general class of stochastic kernels also in dynamic environments, when the information of interest is Gaussian. Communication Systems In this section, we elaborate further on the deception-as-defense framework in noncooperative communication systems with a specific focus on Gaussian information of interest. We first note that in this case the optimal signaling rule turns out to be a linear deterministic signaling rule, where S does not need to introduce additional independent noise on the signal sent. Furthermore, the optimal signaling rule can be computed analytically for the single-stage game [28]. We also extend the result on the optimality of linear signaling rules to multi-stage ones [25]. In the single stage setting, by Theorem 1, the SDP problem equivalent to the problem (15) faced by S is given by We can have a closed form solution for the equivalent SDP problem (15) [28]. If Σ 1 ∈ S m has rank n, then a change of variable with the linear mapping L 1 : S m → S n (29), e.g., T := L 1 (S), yields that (52) can be written as where If we multiply each side of the inequalities in the constraint set of (53) from left and right with unitary matrices such that the resulting matrices are still symmetric, the semi-definiteness inequality would still hold. Therefore, let the symmetric matrix W ∈ S n have the eigen-decomposition where Λ + and Λ − are positive semi-definite matrices with dimensions n + and n − . Then (53) could be written as and there exists a T r ∈ R n + ×n − such that satisfies the constraint in (53). Then, the following lemma shows that an optimal solution for (56) is given by T * + = O n + , T * r = O n + ×n − , and T * − = O n − . 
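The closed form developed here can be reproduced numerically: whiten the problem, keep only the eigen-directions of the whitened weight matrix with negative eigenvalues, and disclose exactly those directions with a deterministic linear rule. The sketch below uses the symmetric square root of Σ_1 as the whitening (equivalent to the decomposition in (26) up to a change of basis) and an arbitrary symmetric matrix V as a stand-in for the effective weight derived from S's objective; both are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
M = rng.standard_normal((m, m)); Sigma = M @ M.T + np.eye(m)   # toy full-rank prior covariance
B = rng.standard_normal((m, m)); V = (B + B.T) / 2.0           # stand-in weight in min_H Tr{H V}

lam, U = np.linalg.eigh(Sigma)
Sig_half = U @ np.diag(np.sqrt(lam)) @ U.T                     # Sigma^{1/2}
Sig_ihalf = U @ np.diag(1.0 / np.sqrt(lam)) @ U.T              # Sigma^{-1/2}

W = Sig_half @ V @ Sig_half                                    # weight in whitened coordinates
w, Uw = np.linalg.eigh(W)
U_minus = Uw[:, w < 0]                                         # eigenvectors with negative eigenvalues

H_star = Sig_half @ (U_minus @ U_minus.T) @ Sig_half           # optimal posterior-estimate covariance
L_opt = U_minus.T @ Sig_ihalf                                  # deterministic linear rule s = L_opt x

# feasibility and optimal value: Tr{H* V} equals the sum of the negative eigenvalues of W
assert np.all(np.linalg.eigvalsh(Sigma - H_star) > -1e-9)
assert np.all(np.linalg.eigvalsh(H_star) > -1e-9)
print(np.isclose(np.trace(H_star @ V), w[w < 0].sum()))        # True

# the noiseless rule indeed attains H*: cov(E{x|s}) = Sigma L'(L Sigma L')^{-1} L Sigma
S = L_opt @ Sigma @ L_opt.T
print(np.allclose(Sigma @ L_opt.T @ np.linalg.solve(S, L_opt @ Sigma), H_star))   # True
```

Consistent with the discussion of Theorem 3 below, no independent noise term is needed in this single-stage construction.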
Therefore, in (56), the second (negative semi-definite) term −Tr{T − Λ − } can be viewed as the aligned part of the objectives whereas the remaining first (positive semi-definite) term Tr{T + Λ + } is the misaligned part. Lemma 2 For arbitrary Proof The left inequality follows since Tr{ AB} = Tr{ A 1/2 BA 1/2 } while A 1/2 BA 1/2 is positive semi-definite. The right inequality follows since the diagonal entries of A are majorized from below by its eigenvalues by Schur Theorem [14] while the eigenvalues of A are weakly majorized from below by the eigenvalues of I n since I n A [14]. Based on (57), the solution for (56) implies that the optimal solution for (53) is given by By invoking Theorem 1 and (33), we obtain the following theorem to compute the optimal signaling rule analytically in single-stage G (a version of the theorem can be found in [28]). (3) and (4), respectively. Then, an optimal signaling rule is given by Theorem 3 Consider a single-stage deception-as-defense game G, where S and R have the cost functions almost everywhere over R m . The matrices U 1 ∈ R m×m , Λ 1 ∈ S n are as defined in (26), and U − ∈ R n×n − is as defined in (55). Note that the optimal signaling rule (60) does not include any additional noise term. The following corollary shows that the optimal signaling rule does not include additional noise when κ > 1 as well (versions of this theorem can be found in [25] and [23]). (3) and (4), respectively. Then, for the optimal solution S * 1:κ ∈ κ k=1 S m of the equivalent problem, P k := L k (S * 1:k ) is a symmetric idempotent matrix, which implies that the eigenvalues of P k ∈ S n k are either 0 or 1. Let n k,1 ∈ Z denote the rank of P k , and P k have the eigen-decomposition Corollary 1 Consider a deception-as-defense game G, where the exogenous Gaussian information of interest follows the first-order autoregressive model (40), and the players S and R have the cost functions Then, the optimal signaling rule is given by almost everywhere over R m , for k = 1, . . . , κ. The unitary matrix U k ∈ R m×m and the diagonal matrix Λ k ∈ S n k are defined in (42). Control Systems The deception-as-defense framework also covers the non-cooperative control settings including a sensor observing the state of the system and a controller driving the system based on the sensor outputs according to certain quadratic control objectives, e.g., (9). Under the general game setting where the players can select any measurable policy, the control setting cannot be transformed into a communication setting straight-forwardly since the problem features non-classical information due to the asymmetry of information between the players and the dynamic interaction through closed-loop feedback signals, which leads to two-way information flow rather than one-way flow as in the communication setting in Section 6. However, the control setting can be transformed into a non-cooperative communication setting under certain conditions, e.g., when signaling rules are restricted to be linear plus a random term. Consider a controlled Gauss-Markov process following the recursion (7), and with players S and R seeking to minimize the quadratic control objectives (8) and (9), respectively. Then, by completing to squares, the cost functions (8) and (9) can be written as where j = S, R, and and {Q j,k } follows the discrete-time dynamic Riccati equation: andQ j,κ+1 = Q j . 
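The completing-the-squares step relies, for each player j ∈ {S, R}, on a backward Riccati pass that also yields the gains K_{j,k} entering the transformed cost. The recursion below is the standard discrete-time Riccati form, supplied here as an assumption, and all problem data are toy choices; it is meant only to show how the misaligned weights of S and R translate into different preferred gains.

```python
import numpy as np

def riccati_gains(A, B, Q, R, kappa):
    """Backward pass of the standard discrete-time Riccati recursion (assumed form):
    returns the gains K_1, ..., K_kappa used in the completed-square form of the cost."""
    P = Q                                   # terminal condition
    gains = []
    for _ in range(kappa):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]

# toy data (assumptions): S and R share the dynamics but weight the state differently
kappa = 10
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q_R, R_R = np.diag([1.0, 0.1]), np.eye(1)
Q_S, R_S = np.diag([0.2, 1.0]), np.eye(1)

K_R = riccati_gains(A, B, Q_R, R_R, kappa)   # gains under R's objective
K_S = riccati_gains(A, B, Q_S, R_S, kappa)   # gains S would have preferred
print(K_R[0], K_S[0])                        # the misalignment shows up in the gains
```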
On the right-hand side of (63), the state depends on the control input u u u j,k , for j = S, R, however, a routine change of variables yields that where we have introduced the control-free, i.e., exogenous, process {x x x o k } following the first-order auto-regressive model and a linearly transformed control input • > Non-classical Information Scheme under General Game Settings The right-hand side of (68) resembles the cost functions in the communication setting, which may imply separability over the horizon and for the optimal transformed control input is given by u u u o k = −K j,k E{x x x o k |s s s 1:k } and the corresponding optimal control input could be computed by reversing the transformation (70). However, here, the control rule constructs the control input based on the sensor outputs, which are chosen strategically by the non-cooperating S while S constructs the sensor outputs based on the actual state, which is driven by the control input, rather than the control-free state. Therefore, R can have impact on the sensor outputs by having an impact on the actual state. Therefore, the game G under the general setting features a non-classical information scheme. However, if S's strategies are restricted to linear policies η k ∈ Υ k ⊂ Υ k , given by Therefore, for a given "linear plus noise" signaling rule, the optimal transformed control input is given by u u u o k = −K j,k E{x x x o k |s s s 1:k }. Then, (68) can be written as where we have introduced the augmented vectors u u u = u u u κ · · · u u u 1 and . To recap, S and R seek to minimize, respectively, the following cost functions We note the resemblance to the communication setting. Therefore following the same lines, S faces the following problem: where where Ξ k,i ∈ R m×m is an m × m block of Ξ ∈ R mκ×mκ , with indexing starting from the right-bottom to the left-top, and where The optimal linear signaling rule in control systems can be computed according to Corollary 1 based on (81). Uncertainty in the Uninformed Agent's Objective In the deception-as-defense game G, the objectives of the players are common knowledge. However, there might be scenarios where the objective of the uninformed attacker may not be known precisely by the informed defender. In this section, our goal is to extend the results in the previous sections for such scenarios with uncertainties. To this end, we consider that R has a private type ω ∈ Ω governing his/her cost function and Ω is a finite set of types. For a known type of R, e.g., ω ∈ Ω, as shown in both communication and control settings, the problem faced by the informed agent S can be written in an equivalent form as for certain symmetric matrices V ω,k ∈ S m , which depend on R's objective and correspondingly his/her type. If the distribution governing the type of R, e.g., {p ω } ω ∈Ω , where p ω denotes the probability of type ω ∈ Ω, were known, then the equivalence result would still hold straight-forwardly when we consider since (84) is linear in V ω,k ∈ S m . For the scenarios where the distribution governing the type of R is not known, we can defend against the worst possible distribution over the types in a robust way. In the following, we define the corresponding robust deception-as-defense game. 
Definition 6 The robust deception-as-defense game is a Stackelberg game [2] between S and R, where • Ω denotes the type set of R, • {x x x k } denotes the information of interest, • {y y y k } denotes S's (possibly noisy) measurements of the information of interest, • U r S and U ω R are the objective functions of S and R, derived based on (3) and (4), or (8) and (9). In this hierarchical setting, S is the leader, who announces (and commits to) his strategies beforehand, while R stands for followers of different types, reacting to the leader's announced strategy. Players type-ω R and S select the strategies γ ω 1:κ ∈ Γ and η 1:κ ∈ Υ to minimize the cost functions U ω R (η 1:κ , γ ω 1:κ ) and Type-ω R selects his/her strategy knowing S's strategy η 1:κ ∈ Υ. Let B ω (η 1:κ ) ⊂ Γ be type-ω R's best reaction set to S's strategy η 1:κ ∈ Υ. Then, the strategy and best reactions pair (η * 1:κ , {B ω (η * 1:κ )} ω ∈Ω ) attains the Stackelberg equilibrium provided that η * 1:κ ∈ argmin B ω (η 1:κ ) = argmin Suppose S has access to the perfect measurement of the state. Then, in the robust deception-as-defense game G r , the equivalence result in Theorem 2 yields that the problem faced by S can be written as where we have introduced the block diagonal matrices S := diag{S κ , . . . , S 1 } and V ω := diag{V ω,κ , . . . , V ω,1 }, and Ψ ⊂ S mκ denotes the constraint set at this new high-dimensional space corresponding to the necessary and sufficient condition on the covariance of the posterior estimate. The following theorem from [24] provides an algorithm to compute the optimal signaling rules within the general class of measurable policies for the communication setting, and the optimal "linear plus noise" signaling rules for the control setting. Proof There exists a solution for the equivalent problem (90) since the constraint sets are decoupled and compact while the objective function is continuous in the optimization arguments. Let (S * , p * ) be a solution of (90). Then, p * ∈ ∆ |Ω | is given by since the objective in (90) is linear in p ∈ ∆ |Ω | . Since p * ∈ ∆ |Ω| , i.e., a point over the simplex ∆ |Ω | , there exists at least one type with positive weight, e.g., p * ω > 0. Then, (93) yields and furthermore since for all ω o ∈ Ω such that p ω o > 0, we have Tr{V ω o S * } = Tr{V ω S * }. Therefore, given the knowledge that in the solution p * ω > 0, we can write (90) as To mitigate the necessity p * ω > 0 in the solution of the left-hand-side, we can search over the finite set Ω since in the solution at least one type must have positive weight, which completes the proof. • ! Irrelevant Information in Signals The optimization objective in (90) is given by which is convex in S ∈ Ψ since the maximum of any family of linear functions is a convex function [6]. Therefore, the solution S * ∈ Ψ may be a non-extreme point of the constraint set Ψ, which implies that in the optimal signaling rule S introduces independent noise. Note that Blackwell's irrelevant information theorem [4,5] implies that there must also be some other (nonlinear) signaling rule within the general class of measurable policies that can attain the equilibrium without introducing any independent noise. 
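The inner maximization over the type distribution makes the robust problem a minimax semi-definite program, and for a single stage with perfect measurements it can be posed directly with an off-the-shelf conic solver. The sketch below assumes cvxpy with an SDP-capable solver is available, takes the feasible set to be Σ ⪰ S ⪰ 0, and uses two arbitrary symmetric matrices as stand-ins for the type-dependent weights V_ω; it is a sketch under these assumptions, not the algorithm of the theorem.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
m = 3
M = rng.standard_normal((m, m)); Sigma = M @ M.T + np.eye(m)
V = []                                        # stand-ins for the effective weights of two types
for _ in range(2):
    B = rng.standard_normal((m, m)); V.append((B + B.T) / 2.0)

# single-stage robust problem:  min_S  max_omega Tr{V_omega S}   s.t.  Sigma >= S >= 0
S = cp.Variable((m, m), symmetric=True)
t = cp.Variable()
constraints = [Sigma - S >> 0, S >> 0] + [t >= cp.trace(Vw @ S) for Vw in V]
cp.Problem(cp.Minimize(t), constraints).solve()

print("worst-case cost:", float(t.value))
print("per-type costs:", [float(np.trace(Vw @ S.value)) for Vw in V])
```

The epigraph variable t implements the maximum over the finite type set, in line with the convexity observation above.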
Partial or Noisy Measurements Up to now, we have considered the scenario where S has perfect access to the underlying information of interest, but had mentioned at the beginning that results are extendable also to partial or noisy measurements, e.g., where C ∈ R m×m and v v v k ∼ N(0, Σ v ) is Gaussian measurement noise independent of all the other parameters. In this section, we discuss these extensions, which hold under certain restrictions on S's strategy space. More precisely, for "linear plus noise" signaling rules η k ∈ Υ k , k = 1, . . . , κ, the equivalence results in Theorems 1 and 2 hold in terms of the covariance of the posterior estimate of all the previous measurements3, denoted by Y k := cov{E{y y y 1:k |s s s 1:k }}, rather than the covariance of the posterior estimate of the underlying state H k = cov{E{x x x k |s s s 1:k }}. Particularly, the following lemma from [22] shows that there exists a linear relation between the covariance matrices H k ∈ S m and Y k ∈ S mk since x x x k → y y y 1:k → s s s 1:k forms a Markov chain in that order. Lemma 3 Consider zero-mean jointly Gaussian random vectors x x x, y y y, s s s that form a Markov chain, e.g., x x x → y y y → s s s in this order. Then, the conditional expectations of x x x and y y y given s s s satisfy the following linear relation: E{x x x|s s s} = E{x x xy y y }E{y y yy y y } † E{y y y|s s s}. Note that s s s 1:k is jointly Gaussian with x x x k and y y y 1:k since η i ∈ Υ i , for i = 1, . . . , k. Based on Lemma 3, the covariance matrices H k ∈ S m and Y k ∈ S mk satisfy where D k := E{x x x k y y y 1:k }E{y y y 1:k y y y 1:k } † ∈ R m×mk . Furthermore, y y y 1:k ∈ R mk follows the first-order auto-regressive recursion: y y y 1:k = E{y y y k y y y 1:k−1 }E{y y y 1:k−1 y y y 1:k−1 } † I m(k−1) =:A y k y y y 1:k−1 + y y y k − E{y y y k |y y y 1:k−1 } 0 m(k−1) . Therefore, the optimization problem faced by S can be viewed as belonging to the non-cooperative communication setting with perfect measurements for the Gauss-Markov process {y y y 1:k } following the recursion (101), and it can be written as min where W k := D k V k D k . • > Dimension of Signal Space Without loss of generality, we can suppose that the signal s s s k sent by S is mk dimensional so that S can disclose y y y 1:k . To distinguish the introduced auxiliary signaling rule from the actual signaling rule η k , we denote it byη k ∈Υ k and the policy spaceΥ k is defined accordingly. When the information of interest is Gaussian, for a given optimalη 1:i , we can always set the ith optimal signaling rule η i (·) in the original signal space Υ i as η i (y y y 1:i ) = E{x x x i |η 1 (y y y 1 ), . . . ,η i (y y y 1:i )}, almost everywhere over R m , and the right-hand-side is the conditional expectation of x x x i with respect to the random variablesη 1 (y y y 1 ), . . . ,η i (y y y 1:i ). Then, for k = 1, . . . , κ, we would obtain E{x x x k |η 1 (y y y 1 ), . . . , η k (y y y 1:k )} = E{x x x k |η 1 (y y y 1 ), . . . ,η k (y y y 1:k )}, almost everywhere over R m , since for η 1:κ ∈ Υ selected according to (103), all the previously sent signals {η 1 (y y y 1 ), . . . , η k−1 (y y y 1:k−1 )} are σ-{η 1 (y y y 1 ), . . . ,η k−1 (y y y 1:k−1 )} measurable. Based on this observation, for partial or noisy measurements, we have the equivalent problem Tr{Y k W k }, subject to cov{y y y 1: where Y 0 = 0. 
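Lemma 3 is a statement about estimator gains and can be verified in closed form for any jointly Gaussian chain x → y → s; no sampling is needed. The model below (y = C x + v, s = L y + n, with all matrices arbitrary toy choices) is an assumption used only for this check.

```python
import numpy as np

rng = np.random.default_rng(6)
m = 3
Sigma_x = np.eye(m) + 0.3 * np.ones((m, m))                  # toy prior covariance of x
C = rng.standard_normal((m, m)); Sigma_v = 0.5 * np.eye(m)   # y = C x + v
L = rng.standard_normal((m, m)); Sigma_n = 0.5 * np.eye(m)   # s = L y + n

Sigma_y  = C @ Sigma_x @ C.T + Sigma_v
Sigma_xy = Sigma_x @ C.T
Sigma_s  = L @ Sigma_y @ L.T + Sigma_n
Sigma_xs = Sigma_xy @ L.T        # n independent of (x, y)
Sigma_ys = Sigma_y @ L.T

# Lemma 3:  E{x|s} = E{x y'} E{y y'}^dagger E{y|s},  i.e. the estimator gains satisfy K_xs = D K_ys
K_xs = Sigma_xs @ np.linalg.inv(Sigma_s)   # E{x|s} = K_xs s
K_ys = Sigma_ys @ np.linalg.inv(Sigma_s)   # E{y|s} = K_ys s
D = Sigma_xy @ np.linalg.pinv(Sigma_y)     # E{x y'} E{y y'}^dagger
print(np.allclose(K_xs, D @ K_ys))         # True
```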
Given the solution Y * 1:κ , we can compute the corresponding signaling rulesη 1:κ according to Theorem 2 and then the actual optimal signaling rule η 1:κ ∈ Υ can be computed by (103). Conclusion In this chapter, we have introduced the deception-as-defense framework for cyberphysical systems. A rational adversary takes certain actions to carry out a malicious task based on the available information. By crafting the information available to the adversary, our goal was to control him/her to take actions inadvertently in line with the system's interest. Especially, when the malicious and benign objectives are not completely opposite of each other, as in a zero-sum game framework, we have sought to restrain the adversary to take actions, or attack the system, carrying out only the aligned part of the objectives as much as possible without meeting the goals of the misaligned part. To this end, we have adopted the solution concept of game theoretical hierarchical equilibrium for robust formulation against the possibility that advanced adversaries can learn the defense policy in the course of time once it has been widely deployed. We have shown that the problem faced by the defender can be written as a linear function of the covariance of the posterior estimate of the underlying state. For arbitrary distributions over the underlying state, we have formulated a necessary condition on the covariance of the posterior estimate. Then, for Gaussian state, we have shown the sufficiency of that condition since for any given symmetric matrix satisfying the necessary condition, there exists a "linear plus noise" signaling rule yielding that covariance of the posterior estimate. Based on that, we have formulated an SDP problem over the space of symmetric matrices equivalent to the problem faced by the defender over the space of signaling rules. We have first focused on the communication setting. This equivalence result has implied the optimality of linear signaling rules within the general class of stochastic kernels. We have provided the optimal signaling rule for single stage settings analytically and provided an algorithm to compute the optimal signaling rules for dynamic settings numerically. Then, we have extended the results to control settings, where the adversary has a long-term control objective, by transforming the problem into a communication setting by restricting the space of signaling rules to linear policies plus a random term. We have also addressed the scenarios where the objective of the adversary is not known and the defender can have partial or noisy measurements of the state. Some future directions of research include formulation of the deception-asdefense framework for • robust control of systems, • communication or control systems with quadratic objectives over infinite horizon, • networked control systems, where there are multiple informed and uninformed agents, • scenarios where the uninformed adversary can have side-information, • applications in sensor selection.
2019-02-04T18:29:40.000Z
2019-02-04T00:00:00.000
{ "year": 2021, "sha1": "00e7b312f7661f774e2c54ed0358864169f0b0ce", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1902.01364", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "00e7b312f7661f774e2c54ed0358864169f0b0ce", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
259172901
pes2o/s2orc
v3-fos-license
The complete chloroplast genome sequence of Phyllostachys incarnata Wen, 1982

Abstract

Phyllostachys incarnata Wen, 1982 is an important, high-quality Chinese bamboo species, valued both as a material bamboo and for its edible shoots. We report the complete chloroplast (cp) genome of P. incarnata in this study. The cp genome of P. incarnata (GenBank accession number: OL457160) has a typical quadripartite structure with a full length of 139,689 bp, comprising a pair of inverted repeat (IR) regions (21,798 bp) separated by a large single-copy (LSC) region (83,221 bp) and a small single-copy (SSC) region (12,872 bp). The cp genome contains 136 genes, including 90 protein-coding genes, 38 tRNA genes, and 8 rRNA genes. Phylogenetic analysis based on 19 cp genomes suggested that P. incarnata is relatively close to P. glauca among the species analyzed.

Introduction

Phyllostachys incarnata Wen, 1982 is a bamboo species with high-quality shoots that belongs to the genus Phyllostachys of the Poaceae Barnhart and originated from the Fujian and Zhejiang provinces of China (Chen et al. 2006). It is now also widely introduced and cultivated in Sichuan, Anhui, Jiangxi, and other regions because of the high yield and long growing season of its edible shoots. It has been praised for its early shoot emergence (April-May), strong shooting ability, long shooting duration, and excellent shoot quality. Furthermore, this bamboo species offers pest and disease resistance and tolerance of abiotic stresses such as drought, cold, and waterlogging; beyond that, its distinctive appearance allows gardeners to use it in cultivated landscapes (Leng 2017). Typical characteristics of P. incarnata include young culms that are thickly white-powdery, especially below the nodes; sheaths that are abaxially flesh-red or greenish, or distally green on slender culms, with sparse speckles; final branchlets with 3 or 4 leaves; well-developed auricles that are ovate or semicircular, greenish purple, with radial tassels; and a strongly projecting, purplish, upwardly acuminate ligule (Figure 1).

Chloroplast genes are related to many important traits in plants, including resistance to herbicides, insect resistance, and stress tolerance (Daniell et al. 2005). Currently, chloroplast genetic engineering has been applied to improve plant resistance to herbicides and insects, as well as to increase stress tolerance (Wani et al. 2015). Hence, analyzing the chloroplast genes of P. incarnata will aid in understanding its adaptability and expedite the improvement of its varieties. In this study, we report and characterize the chloroplast genome of P. incarnata. Using these data, we reconstruct the phylogenetic tree of this species to reveal its relationships and to provide useful information for further study of P. incarnata.

Materials and methods

Plantlets of P. incarnata were collected from Wenjiang District, Chengdu City, Sichuan Province, China (N30°42′6″, E103°51′30″) on 7 September 2021. The specimens were deposited in the herbarium room of Aba Teachers University, Aba, Sichuan, China (http://www.abtu.edu.cn/; contact person: LiHua Wang; Email: wanglh0823@163.com) under the voucher number ATUP02109070002. Total DNA was extracted from the fresh leaves of P.
incarnata. Sequencing libraries were prepared with the NEBNext Ultra DNA Library Prep Kit for Illumina (E7370L, New England Biolabs): the high-quality DNA was sheared into fragments of 350 bp in length for shotgun library construction, and the libraries were sequenced on the Illumina NovaSeq platform (Illumina Inc., San Diego, CA), generating 150 bp paired-end reads. The filtered reads were assembled into the complete chloroplast genome using A5-miseq v20150522 (Coil et al. 2015) and SPAdes v3.9.0 (Bankevich et al. 2012), with the P. edulis chloroplast genome (GenBank accession number: NC_015817) as a reference. The chloroplast genome was annotated with the online program CPGAVAS2, and the whole chloroplast genome map was drawn using CPGView (http://www.1kmpg.cn/cpgview/) (Liu et al. 2023). The annotated genomic sequence has been deposited in GenBank under accession number OL457160.

Results

The cp genome of P. incarnata has a typical quadripartite structure with a full length of 139,689 bp, comprising a pair of inverted repeat (IR) regions (21,798 bp) separated by a large single-copy (LSC) region (83,221 bp) and a small single-copy (SSC) region (12,872 bp) (Figure 2). A total of 136 genes, including 90 protein-coding genes, 38 tRNA genes, and 8 rRNA genes, were successfully annotated in the complete chloroplast genome sequence of P. incarnata. The complete chloroplast genome contains 109 unique genes, including 80 protein-coding genes, 29 tRNA genes, and 4 rRNA genes. The rps12 gene is trans-spliced and has three unique exons (Figure S1).

The maximum-likelihood (ML) phylogenetic tree was constructed based on 19 complete chloroplast genomes of Phyllostachydinae species, with Chimonobambusa purpurea as the outgroup. All sequences were obtained from NCBI GenBank. The complete plastome nucleotide sequences were extracted, aligned, and concatenated in PhyloSuite v1.2.2 (Zhang et al. 2020) and used to perform the ML inference in IQ-TREE Multicore version 1.6.12 (Gao et al. 2018). Phylogenetic analysis suggested that P. incarnata is relatively close to P. glauca among the species analyzed (Figure 3).

Discussion

The chloroplast genome of plants has a very conserved structure and genetic composition (Maier et al. 1995; Daniell et al. 2016; Yan et al. 2023). In this study, we obtained and analyzed the chloroplast genome of P. incarnata for the first time. We found that its genomic structure, gene content, and gene order are highly conserved and similar to those of other Phyllostachydinae species (Attigala et al. 2016; Huang et al. 2019; Zheng et al. 2021; Wu and Ge 2012; Zheng et al. 2020; Ma et al. 2014; Tu et al. 2022; Ma et al. 2017; Liu et al. 2021). We also analyzed the phylogenetic relationships of P. incarnata using complete plastome nucleotide sequences, which can provide valuable insights into the phylogenetic and evolutionary position of P. incarnata within the Phyllostachydinae subtribe and the Gramineae family.

Ethical approval

The study was approved by the institutional review board of Aba Teachers University, Aba, Sichuan, China. The collection of plant materials was conducted in accordance with guidelines provided by Aba Teachers University and Sichuan province regulations, and field studies complied with Sichuan province legislation.

Figure 2. Schematic map of overall features of the P. incarnata chloroplast genome. The map contains six tracks by default. From the center outward, the first track shows the dispersed repeats, which consist of direct (D) and palindromic (P) repeats, connected with red and green arcs. The second track shows the long tandem repeats as short blue bars.
The third track shows the short tandem repeats or microsatellite sequences as short bars with different colors. The small single-copy (SSC), inverted repeat (IRa and IRb), and large single-copy (LSC) regions are shown on the fourth track. The GC content along the genome is plotted on the fifth track, and the base frequency at each site along the genome is shown between the fourth and fifth tracks. The genes are shown on the sixth track, with the optional codon usage bias displayed in parentheses after the gene name. Genes are color-coded by their functional classification. The transcription directions for the inner and outer genes are clockwise and anticlockwise, respectively.

Author contributions

H. Wang, W. Liu, B.X. Wang, Y.K. Liu, and J.N. Wang designed the research study and obtained the funding. L.H. Wang and W. Liu carried out the experiment. B.X. Wang wrote the manuscript with support from Y.K. Liu, who provided substantial help with data analysis. All authors agree to be accountable for all aspects of the work.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The genome sequence data that support the findings of this study are openly available in GenBank of NCBI (https://www.ncbi.nlm.nih.gov/) under accession no. OL457160. The associated BioProject, SRA, and BioSample numbers are PRJNA893462, SRR22044020, and SAMN31424863, respectively.
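As a reproducibility aid, the deposited record can be inspected programmatically. The sketch below is not part of the original study; it assumes the GenBank record OL457160 has been downloaded locally (the file name is a placeholder) and uses Biopython to tally annotated features against the counts reported above.

    from collections import Counter
    from Bio import SeqIO

    # Assumes the GenBank flat file for OL457160 has been downloaded from NCBI.
    record = SeqIO.read("OL457160.gb", "genbank")

    # Expected full length: 139,689 bp (LSC 83,221 + SSC 12,872 + 2 x IR 21,798).
    print("Sequence length:", len(record.seq))
    gc = record.seq.count("G") + record.seq.count("C")
    print("GC content (%):", round(100 * gc / len(record.seq), 2))

    # Tally annotated feature types (gene, CDS, tRNA, rRNA) for comparison with the text.
    feature_counts = Counter(f.type for f in record.features)
    for ftype in ("gene", "CDS", "tRNA", "rRNA"):
        print(ftype, feature_counts.get(ftype, 0))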
2023-06-17T05:10:10.802Z
2023-06-03T00:00:00.000
{ "year": 2023, "sha1": "465acf3486e4461c9ec2b28ccf5066f4b7c43e6e", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "465acf3486e4461c9ec2b28ccf5066f4b7c43e6e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
24739812
pes2o/s2orc
v3-fos-license
Collecting shoulder kinematics with electromagnetic tracking systems and digital inclinometers: A review

The shoulder complex presents unique challenges for measuring motion, as the scapula, unlike any other bony segment in the body, glides and rotates underneath layers of soft tissue and skin. The ability of clinicians and researchers to collect meaningful kinematic data is dependent on the reliability and validity of the instrumentation utilized. The aim of this study was to review the relevant literature pertaining to the reliability and validity of electromagnetic tracking systems (ETS) and digital inclinometers for assessing shoulder complex motion. Advances in technology have led to the development of biomechanical instrumentation, like ETS, that allows for the collection of three-dimensional kinematic data. The existing evidence has demonstrated that ETS are reliable and valid instruments for collecting static and dynamic kinematic data of the shoulder complex. Similarly, digital inclinometers have become increasingly popular among clinicians due to their cost effectiveness and practical use in the clinical setting. The existing evidence supports the use of digital inclinometers for the collection of shoulder complex kinematics, as these instruments have been demonstrated to yield acceptable reliability and validity. While digital inclinometers are at a disadvantage relative to ETS with regard to accuracy and precision and are limited to two-dimensional, static measurements, they provide clinically meaningful data that allow clinicians and researchers to measure, monitor, and compare shoulder complex kinematics.

INTRODUCTION

The ability to objectively measure shoulder complex kinematics is key to gaining a thorough understanding of normal and abnormal movement, and may assist clinicians in the diagnosis and management of shoulder dysfunction [1]. Earlier studies [2,3] exposed participants to potentially harmful radiography in order to assess static, two-dimensional motions of the shoulder complex that may inaccurately describe what is actually occurring three-dimensionally [4,5]. Subsequently, technological advances have allowed for noninvasive three-dimensional analysis of glenohumeral and scapulothoracic kinematics utilizing electromagnetic tracking systems (ETS) [6-11]. The main obstacle to analyzing three-dimensional shoulder movements is the difficulty of tracking the movements of the scapula. Unlike the upper and lower extremity segments, the scapula glides and rotates underneath layers of soft tissue and skin, requiring investigations into the ability to accurately and repeatedly measure scapular kinematics using noninvasive measures [7-9,11-20]. Furthermore, other real limitations exist in that these systems are neither cost effective nor practical for the clinical setting [21,22]. Due to these difficulties, other methods of measuring shoulder complex kinematics that are easily accessible in the clinical setting have been investigated [21-23]. The availability of reliable and valid clinical instrumentation enables clinicians to make sound clinical decisions that are effective, efficient, and safe. Clinically accessible methods have been established that qualitatively and quantitatively assess scapular resting position and scapular orientation during humeral elevation [24-28]. Of the two, quantitative methods improve objectivity, which may lead to decreased clinician error.
Several studies have utilized the digital inclinometer to investigate various kinematic measures of the shoulder complex. While a three-dimensional analysis provides a thorough investigation of glenohumeral and scapulothoracic kinematics, the digital inclinometer provides clinicians with a simpler means of analyzing kinematic data. Instances in the literature exist where inclinometers were validated against three-dimensional scapular kinematic data collected by ETS [21,23]. Other studies have established criterion-related validity and reliability of other clinical instruments against data collected with a digital inclinometer [22]. To our knowledge, no articles have been published that review the reliability and validity of ETS and digital inclinometers as measurement tools for collecting shoulder complex kinematics. The purpose of this paper is to provide such a review, with emphasis placed on the various factors, methods, and motions that affect reliability and validity, and on selected clinical applications utilizing these instruments.

ETS

ETS permit investigators to track the position and orientation of sensors in space. These systems utilize an electromagnetic transmitter that generates an electromagnetic field and a series of sensors tethered to a computer system. Combined, the transmitter and computer system are able to detect the location and orientation of the sensors, allowing for the six degrees of freedom required for three-dimensional analysis. In the field of biomechanics, these sensors can be mounted to the surface of the skin overlying various anatomical landmarks, which enables the measurement of body segment kinematics. Currently, there are two ETS (Polhemus, Colchester, VT and Ascension Technology Corporation, Burlington, VT) that are commonly used in the study of biomechanics. In order to acquire and analyze data collected by the hardware, users must either write their own code using a commercially available product such as MATLAB (The MathWorks, Inc., Natick, MA) or purchase a commercially available software interface, such as MotionMonitor® (Innovative Sports Training, Inc., Chicago, IL), which is a comprehensive turnkey data acquisition and analysis system. Once the data have been acquired, post-processing is performed to quantify shoulder kinematics. Presently, in order to facilitate the reporting of shoulder kinematics among researchers and clinicians, the International Society of Biomechanics has published standards for joint coordinate systems and rotation sequences for the thorax, clavicle, scapula, and humerus [29].

Calibration

Accuracy and precision are necessary in order to effectively utilize any data that are collected by laboratory or clinical instruments. Ascension has published accuracy information for the Flock of Birds (FOB) system, with root mean square (RMS) errors of 7.62 mm for linear position and 0.5° for orientation. However, the environment in which these data were obtained is unclear. It is well understood that metallic objects within the vicinity of the electromagnetic transmitter will alter the magnetic field, thus affecting the accuracy of the ETS [30,31]. Milne et al [30] demonstrated significant alterations in measurement accuracy (positional difference of 5.26 cm and angular difference of 9.75°, P < 0.001) when mild steel was introduced into the electromagnetic field of the ETS. They collected the kinematic data utilizing the default settings with a sampling frequency of 103 Hz.
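It may help to make explicit how such accuracy figures are typically computed: position and orientation samples from a sensor held at known reference poses are compared with those references, and the root mean square (RMS) of the differences is reported. The short sketch below illustrates the calculation on made-up data; it is not drawn from any of the cited studies.

    import numpy as np

    # Made-up example: measured vs. reference sensor positions (mm) and orientation angles (deg).
    reference_pos = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [100.0, 100.0, 0.0]])
    measured_pos  = np.array([[0.9, -0.4, 0.2], [101.2, 0.3, -0.5], [99.1, 100.8, 0.6]])

    reference_ang = np.array([0.0, 45.0, 90.0])
    measured_ang  = np.array([0.4, 45.6, 89.3])

    def rms(error):
        """Root mean square of an array of error values."""
        return float(np.sqrt(np.mean(np.square(error))))

    pos_rms = rms(np.linalg.norm(measured_pos - reference_pos, axis=1))   # mm
    ang_rms = rms(measured_ang - reference_ang)                           # degrees

    print(f"Positional RMS error: {pos_rms:.2f} mm")
    print(f"Orientation RMS error: {ang_rms:.2f} deg")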
LaScalza et al [31] investigated different sampling frequencies and their effects on accuracy when aluminum and steel were introduced into the electromagnetic field. While both materials had significant effects (P < 0.0001) on measurement error, a significant interaction of sampling frequency and metal type (P < 0.0001-0.0016) indicated errors in all three coordinates. The FOB system was found to be more accurate at lower frequencies (i.e., 20 Hz) when aluminum was placed within the electromagnetic field, whereas the system was more accurate at higher frequencies (i.e., 120 Hz) when steel was present [31]. Therefore, users of ETS should be cognizant of their testing environment and utilize calibration procedures to adjust for interferences created in the electromagnetic field.

Earlier studies [6,7,9,10,13] investigating scapular kinematics utilizing ETS were limited to static measurements through a given range of motion. Meskers et al [32] investigated the accuracy of the FOB system before and after a static calibration procedure, with positional measurements collected within a 1 m³ measurement space [32]. Others have reported static RMS errors of 5.3 mm in position, 3.1 mm in linear displacement, and 0.23° in orientation and have suggested that system accuracy be established for each testing environment [14]. As methodologies [4,8,33] have evolved, the collection of dynamic scapular kinematics has become the norm; therefore, an understanding of the dynamic accuracy of the FOB is necessary. McQuade et al [34] investigated the dynamic accuracy and repeatability of the FOB utilizing a dynamic pendulum calibration technique. RMS errors were reported for position (3.7-10 mm), angular displacement (0.3°-0.5°), and angular velocity (1.1°-2.2°/s). In addition, the authors suggested that studies examining motions with speeds greater than 250°/s would incur large errors in accuracy [34]. Therefore, studies investigating high-velocity, uncontrolled athletic movements should use caution in the reporting of results.

Scapula tracking methods

The ability to accurately and precisely track dynamic movements of the scapula in a noninvasive manner has been a limiting factor in analyzing detailed kinematics of the shoulder complex. The current gold standard for tracking scapular kinematics involves the use of invasive, transcutaneous cortical pins placed in the scapular spine [4,8]. While this method allows for dynamic assessment of scapular motion, it is obviously undesirable in large-scale clinical studies. Nonetheless, cortical pins provide a means of directly collecting bony kinematic data, albeit one that is less comfortable for the patient. The usefulness of this methodology can be seen in the study by Karduna et al [8] validating the scapular tracker and acromion method, both of which are noninvasive. Three noninvasive methods have been described for use with an ETS to track scapula orientation: the scapula locator, the scapula tracker, and the acromion method. Each of these noninvasive methods has been described and validated based on the associated measurement error when comparing novel approaches.

Scapula locator: Johnson et al [13] first described the scapula locator as a means to record three-dimensional scapular orientations in space. The measurement jig consisted of a housing that supports three rods that could be positioned over the posterolateral acromial angle, the root of the scapular spine, and the inferior angle of the scapula.
An electromagnetic sensor affixed to the jig allowed orientation of the locator, relative to the thorax, to be recorded by an ETS during quasi-static trials. Quasi-static trials involved the participant moving to selected positions and holding those positions while the scapula locator was used to collect orientation data. This apparatus eliminated the need to individually digitize the three anatomical landmarks as described by van der Helm [10], which decreased error and increased the speed of analyses [7,13]. Three studies evaluated the reliability of the scapula locator and found it to be applicable in three-dimensional kinematic studies of the scapula [7,9,21]. Johnson et al [13] reported 95% confidence interval ranges for intra-observer and inter-observer errors. They reported intra-observer errors ranging from 0.89° to 2.34° for anterior-posterior tilt, 0.91° to 1.87° for medial-lateral tilt, and 1.05° to 2.69° for upward-downward rotation, while inter-observer errors ranged from 4.98° to 7.88°, 4.5° to 6.04°, and 5.64° to 11.02°, respectively. Following designed modifications and improvements, Barnett et al [9] reported 95% confidence intervals for inter-observer errors that ranged from 2.55° to 2.72° for anterior-posterior tilt, 3.57° to 3.63° for medial-lateral tilt, and 3.47° to 3.85° for upward-downward rotation. Similarly, Meskers et al [7] reported standard deviations for inter-observer errors, which were 2.73° to 2.87° for anterior-posterior tilt, 2.98° to 3.21° for medial-lateral tilt, and 3.80° to 3.91° for upward-downward rotation. In addition to inter-observer errors, they reported inter-trial (1.93°-1.96°; 2.26°-2.46°; 2.37°-2.53°, respectively), inter-day (2.83°-3.03°; 4.01°-4.17°; 3.43°-3.73°, respectively), and inter-subject (7.81°-8.02°; 7.86°-9.02°; 6.05°-7.04°, respectively) variability [7]. The reported error measures for the scapula locator indicate sufficient reliability for its use in clinical research [7,9,13]. The fairly low inter-day error measures reported by Meskers et al [7] demonstrate the ability to reliably align the scapula locator with adequate precision, especially considering the amount of error that may be associated with identifying anatomical landmarks. In a more recent modeling study, Langenderfer et al [20] indicated that variability in scapular kinematic descriptions could range as high as 11.7° in anterior-posterior tilt, 16.6° in medial-lateral tilt, and 12.3° in upward-downward rotation when allowing for 4 mm of anatomical landmark variability. Nonetheless, Meskers et al [7] reported considerably smaller errors caused by palpation when digitizing the anatomical landmarks with the scapula locator (0.53°-1.52°). Although the scapula locator has been demonstrated to be a reliable method for measuring quasi-static scapula kinematics, its relevancy falls short given the inherent dynamics of normal human movement. Furthermore, the locator has not been compared against the gold standard method to establish accuracy.

Scapula tracker: Karduna et al [8] first described the scapula tracker as a valid method for noninvasive tracking of three-dimensional scapula motions. The scapula tracker was a custom-made plastic jig made of three parts: a base, an arm, and a footpad. The base was affixed to the skin overlying the spine of the scapula. The attached arm was adjustable to reach the acromion and was affixed to the flat part of the acromion via the footpad. An ETS sensor was connected to the base of the scapula tracker, allowing dynamic tracking of three-dimensional scapula kinematics. The scapula tracker was compared to simultaneous measurements captured by a sensor attached to transcutaneous cortical pins that were drilled into the spine of the scapula. In an effort to validate the scapula tracker, the authors reported RMS errors of 3.2° to 10° for all scapular orientation angles (anterior-posterior tilt, medial-lateral tilt, and upward-downward rotation) during four active motions of the shoulder complex (scapular plane elevation, sagittal plane elevation, horizontal abduction, and external rotation). Interestingly, while most efforts to validate an instrument involve an assessment of concurrent validity through correlation analyses, an evaluation of RMS error was utilized instead. In these instances, while no acceptable level of error was defined, investigators sought to define methods that resulted in as little error as possible. Given the nonlinear nature of the data, the use of RMS appears to have served as an appropriate alternative for establishing validity. No articles were found that specifically addressed reliability for the scapula tracker.

Acromion method: The acromion method is a skin-fixed method by which an ETS sensor is adhered to the flat surface of the acromion, allowing noninvasive tracking of three-dimensional scapula motions [8,33,35]. This method allows for dynamic tracking of the scapula that does not restrict the motions of subjects, is more comfortable, reduces the data collection time and the motor noise associated with other palpation methods (i.e., scapula locator), and does not require a custom designed piece of equipment (i.e., scapula locator and scapula tracker) [11]. Karduna et al [8] established concurrent validity of the acromion method against an invasive method whereby an ETS sensor was attached to transcutaneous cortical pins that were drilled into the spine of the scapula. They reported RMS errors of 3.7° to 11.4° for all scapular orientation angles (anterior-posterior tilt, medial-lateral tilt, and upward-downward rotation) during four active motions of the shoulder complex (scapular plane elevation, sagittal plane elevation, horizontal abduction, and external rotation). Generally, the acromion method underestimated the bone-fixed measurements; however, upward rotation was overestimated [8]. In contrast, Meskers et al [11] found the acromion method underestimated all scapular orientation angles by an average of 6.5° (maximally 13°) when compared to measurements obtained with a scapula locator. Karduna et al [8] found that RMS errors increased for all scapular orientation angles as humeral elevation increased, indicating the presence of skin motion artifacts. Due to the relationship between error and elevation, they indicated that the acromial method was acceptable for tracking scapular motions below 120° of elevation. A systematic error pattern was identified for upward rotation; therefore, the authors presented a correction model that reduced the overall RMS error of upward rotation from 6.3° to 2°. In likeness, Meskers et al [11] were able to reduce RMS errors for scapular orientation angles to approximately 2° by applying a linear regression model to correct skin motion artifact, improving the RMS error calculated between the acromion method and scapula locator. It was confirmed that measurement error increased as elevation increased, indicating the sensor was sensitive to skin motion artifact [11]. In contrast, Lin et al [16] found no significant differences or significant correlations in scapular orientation angles that would have suggested skin motion artifact. They concluded that skin motion artifact had little impact on the scapular kinematics when evaluating four functional tasks. Alternate methods to improve the accuracy of tracking scapular motions, which have been described as less complex than skin motion artifact correction models, have been proposed in studies utilizing optoelectronic tracking systems [36,37]. Brochard et al [36] developed a double calibration technique of the local scapula coordinate system that resulted in lowered RMS errors ranging from 2.96° to 4.48° as compared to the larger RMS errors of a single calibration (6°-9.19°). Shaheen et al [37] reported that optimal positioning of the acromial marker (the meeting point of the spine of the scapula and acromion) and angle of abduction (90° of shoulder elevation) during the initial calibration of the local scapular coordinate system resulted in improved RMS errors (3° to 5°). While the reductions in RMS errors reported by Brochard et al [36] and Shaheen et al [37] were not as substantial as those of Karduna et al [8] and Meskers et al [11], the simplicity of the techniques is appealing. Therefore, investigation into the utilization of these calibration techniques [36,37] with ETS is warranted.

The reliability of tracking scapular motion during isolated humeral planar motions with ETS utilizing the acromion method has been relatively strong over time (Table 1). Inter-trial and within-day, inter-session reliability in both healthy and impaired subjects has been demonstrated to be good to excellent. In addition, inter-day, intra-observer reliability demonstrated moderate to excellent results in healthy subjects, with the exception of Scibek and Carcia [14], where inter-day reliability was found to be fair to excellent. Instances of lower inter-session or inter-day reliability may be due to anatomical landmark digitization error [14,20] and sensor placement error [11,14,37]. Thigpen et al [15] suggested that scapular orientation angles should be collected in the sagittal plane in order to best detect changes in kinematics, due to the larger CMCs (0.82-0.94) and smaller RMS errors (3.43°-5.76°) compared to the scapular and frontal planes. With the exception of Scibek and Carcia [14], similar results were reported by Roren et al [18] (ICC = 0.77-0.93) and Haik et al [19] (ICC = 0.70-0.82) regarding inter-day, intra-observer reliability measures during sagittal plane elevation. However, less favorable results (ICC = 0.58-0.88) have been found for the descending phase of motion in the sagittal plane [19]. Regarding error in the sagittal plane, Roren et al [18] found small SEM (0.69°-1.61°) and smallest real difference (SRD) (1.90°-4.47°) values, whereas Haik et al [19] found relatively large SEM (2.77°-6.79°) and minimal detectable change (MDC) (6.43°-15.76°) values. These differences are likely due to the lower range of motion (0°-90°) [18] studied as compared to the other two studies (30°-120° [15] and 0°-120° [19]), considering the known associated errors with higher levels of elevation [8].
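The SEM, SRD, and MDC values cited above are related quantities, and it may help to show the usual calculation. The sketch below is a generic illustration with arbitrary numbers (not values taken from the cited studies): SEM is derived from the between-subject standard deviation and the ICC, and the MDC (or SRD) at the 95% level follows from the SEM.

    import math

    def sem_from_icc(sd_between, icc):
        """Standard error of measurement from the between-subject SD and reliability (ICC)."""
        return sd_between * math.sqrt(1.0 - icc)

    def mdc95(sem):
        """Minimal detectable change (a.k.a. smallest real difference) at the 95% level."""
        return 1.96 * math.sqrt(2.0) * sem

    # Arbitrary illustrative values: between-subject SD of 8 degrees, ICC of 0.85.
    sem = sem_from_icc(sd_between=8.0, icc=0.85)
    print(f"SEM   = {sem:.2f} deg")
    print(f"MDC95 = {mdc95(sem):.2f} deg")   # the change a single subject must exceed to be considered real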
While these studies have demonstrated acceptable reliability for assessing scapular kinematics in isolated planar motion, the large SRD and MDC values call into question the ability of ETS to detect meaningful changes in scapular kinematics. SRD and MDC measurements have substantial value to clinicians, especially when determining the outcomes of an intervention. Only two studies in the literature were found that investigated the reliability of tracking dynamic scapular orientation angles during functional movement patterns with ETS utilizing the acromion method [16,18]. Lin et al [16] investigated the reliability of tracking shoulder complex motions during four functional activities (overhead height task, shoulder height task, sliding a box task, and reaching for a salt shaker task). They reported inter-trial ICC values based on peak scapular orientation angles that ranged from 0.78 to 0.99 for kinematic descriptions of the shoulder complex (scapular orientation angles and humeral orientation angles). Measurement error was reported with SEM values that were less than 2° for all kinematic variables. In addition, the authors reported Pearson bivariate correlation values that ranged from 0.81 to 0.97, which served as an index of similarity across the trials of the recorded movement patterns during each respective functional task. Roren et al [18] assessed the reliability of tracking two functional movement patterns (simulated back washing and hair combing) based on scapular orientation angles at rest, 30°, and 90° of humeral elevation (only rest and 30° for back washing). They reported ICC values that ranged from 0.83 to 0.98 for inter-trial reliability; 0.64 to 0.92 for inter-day, intra-observer reliability; and 0.35 to 0.89 for inter-day, inter-observer reliability. SEM values ranged from 0.77° (MDC = 2.12°) to 1.67° (MDC = 4.64°) for inter-day, intra-observer measures, and from 1.05° (MDC = 2.91°) to 3.23° (MDC = 8.96°) for inter-day, inter-observer measures. The repeatability of functional movement patterns has been demonstrated to yield good to excellent inter-trial reliability [16,18]. While Lin et al [16] did not report inter-session or inter-day measures of reliability, Roren et al [18] demonstrated fair to excellent inter-day reliability. Of the two movement patterns, the hair combing movement pattern consistently demonstrated larger ICCs and smaller SEMs and MDCs. The authors speculated that the less favorable measures for the back washing movement may be due to the subjects not being able to see the arm motion while looking ahead, and thus not receiving visual feedback of the movement. Another note of importance that may have impacted the results of Roren et al [18] was that the authors utilized the original standardization protocol [38] instead of the most current one [29]. Other studies have suggested that higher measures of reliability are obtained by restricting humeral elevation to one plane of motion for the collection of scapular kinematics [14,15]. The results of these two studies have demonstrated the ability to repeatedly measure functional tasks of the upper extremity that involve multi-planar motions [16,18]. However, some caution should be taken when comparing inter-day, inter-observer scapular kinematic data.

Humeral tracking method

As stated earlier regarding the tracking of scapular motions, the ability to accurately and precisely track dynamic movements of the humerus in a noninvasive manner is necessary to garner relevant data about shoulder complex kinematics.
As with the scapula, bone-fixed sensors have been used to track the humerus directly; however, these types of studies are not applicable to large-scale clinical studies due to the invasive nature of the method. The most commonly used noninvasive method for tracking humeral kinematics with an ETS utilizes a hook-and-loop strap that secures a sensor to the surface of the upper arm (humeral cuff) and avoids the use of cortical pins, making it more desirable for large-scale clinical studies. Ludewig et al [39] simultaneously compared the tracking of humeral kinematics with a humeral cuff to a sensor affixed to an external humeral fixator in a single subject. Dynamic three-dimensional kinematic data were collected for humeral elevation in the scapular and sagittal planes and for internal and external rotation with the upper arm maintained at the side. Different Euler angle rotation sequences were used to describe humeral rotation angles with respect to the trunk (z, y', z") and scapula (y, x', z"). The humeral cuff was found to closely match humeral rotation angles, with maximal underrepresentation of external rotation of 5.7° during elevation in the scapular plane and 15.6° of external rotation with the arm at the side. RMS errors for humeral rotation angles ranged from 1.3° to 7.5° for all respective motions. In an effort to establish a noninvasive method, LaScalza et al [40] compared humeral kinematic data collected with a humeral cuff against a bone-fixed sensor in five cadaver specimens. The scapula of each specimen was prevented from moving by being rigidly fixed to a testing apparatus. The arms were directed through several motions including abduction, flexion, external rotation, three simulated reaching tasks, and a simulated overhand throw. Measurement errors calculated for all humeral rotation angles between the humeral cuff and the bone-fixed sensor were reported as SEMs that ranged from 0.0° to 1.5°. Hamming et al [41] established concurrent validity of a humeral cuff against an invasive method whereby ETS sensors were attached to transcutaneous cortical pins that were placed into the clavicle, acromion, and humerus. They reported average errors for all humeral orientation angles (angle of elevation, plane of elevation, and axial rotation) during five dynamic motions of the shoulder complex (frontal plane elevation, scapular plane elevation, sagittal plane elevation, axial rotation with the arm at the side, and axial rotation with the arm at 90° of abduction). For all five motions, the mean errors for angle of elevation and plane of elevation ranged from 1.0° to 2.3°. However, mean errors for axial rotation were much larger for all five motions: mean errors ranged from 4.8° to 5.5° for the three motions of elevation, whereas the mean errors for the two rotation motions ranged from 11.5° to 14.3°, with maximal differences approaching 30°. Furthermore, the authors found that differences in body mass index impacted measurement error, with significant increases when subjects had index measures greater than 25. These studies validate the use of the humeral cuff for tracking humeral kinematics [14,39-41]. In contrast to Ludewig et al [39], LaScalza et al [40] and Hamming et al [41] reported fairly large measurement errors for tracking humeral axial rotation during any type of shoulder complex motion.
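The Euler sequences mentioned above (e.g., z, y', z" for the humerus relative to the thorax) determine how a measured rotation is decomposed into clinically interpretable angles, and the same rotation yields different angle triplets under different sequences. A minimal, hypothetical illustration using SciPy (not code from the cited studies) is:

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    # A made-up humerus-relative-to-thorax rotation, defined by an intrinsic z, y', z'' sequence
    # (plane of elevation 10 deg, elevation 40 deg, axial rotation 20 deg).
    rot = R.from_euler("ZYZ", [10.0, 40.0, 20.0], degrees=True)

    # Decompose the same rotation with two different intrinsic sequences.
    zyz_angles = rot.as_euler("ZYZ", degrees=True)   # recovers the original triplet
    yxz_angles = rot.as_euler("YXZ", degrees=True)   # an alternative sequence gives different numbers

    print("z, y', z'' angles:", np.round(zyz_angles, 2))
    print("y, x', z'' angles:", np.round(yxz_angles, 2))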
Furthermore, all three studies observed fairly slow movements (approximately ≤ 40°/s), limiting the effects of skin artifacts caused by inertial movements of the sensor during faster motions. The measurement error reported for all elevation movements may support the use of the humeral cuff, based on the significant effects that anatomical landmark digitization can have on humeral kinematic descriptions. Langenderfer et al [20] indicated that variability in humeral orientation angle descriptions could range as high as 7.3° for elevation angle, 15.8° for plane of elevation, and 11.3° for axial rotation when allowing for 4 mm of anatomical landmark variability. Nonetheless, caution should be used when interpreting measures of humeral axial rotation, as the validity and reliability of this measure are questionable. Although the aforementioned studies bring forth skepticism in utilizing the humeral cuff, other research has demonstrated its effectiveness in collecting kinematic data; Scibek and Carcia [14], for example, established criterion-related validity for kinematic data collected with the humeral cuff.

DIGITAL INCLINOMETER

Many clinicians have limited or no access to state of the art three-dimensional biomechanical instrumentation for collecting kinematic data. Furthermore, clinicians do not have the time that is needed to set up subjects, collect, and process the data collected with ETS. Clinicians need access to simple instrumentation that is both cost effective and practical for the clinical setting. Quantitative, rather than qualitative, measurement of shoulder movement is much more meaningful in the clinical setting. In addition, valid and reliable instruments provide clinicians with the ability to accurately measure, monitor, and compare changes in shoulder movement that may lead to better patient outcomes. The digital inclinometer can record neither three-dimensional nor dynamic shoulder movements. However, this tool provides clinically meaningful measures of two-dimensional shoulder kinematic data [42,43].

Scapular measurements

The digital inclinometer has been demonstrated to be a valid instrument for measuring two of the three axes of scapular motion: upward rotation [21] and anterior-posterior tilt [23]. Johnson et al [21] and Scibek and Carcia [23] established criterion-related validity of a modified digital inclinometer against data collected with an ETS. Both studies utilized Pearson product moment correlations, demonstrating strong relationships that ranged from 0.74 to 0.92 (mean differences 7° to 14°) for upward rotation [21] and 0.63 to 0.86 (mean differences 3.66° to 4.75°) for anterior-posterior tilt [23]. The smaller mean differences found with anterior-posterior tilt are most likely attributable to the smaller range of motion that occurs during humeral elevation as compared to the larger range of motion associated with upward rotation. Additionally, Johnson et al [21] compared static inclinometer measures to dynamic ETS measures, with Pearson product moment correlations that ranged from 0.59 to 0.73. While the relationships were strong, the less favorable correlations reflected the expected inherent differences when comparing static to dynamic kinematics [17]. Regression analyses indicated positive relationships between the digital inclinometer and ETS.
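Criterion-related validity of the kind described above is usually summarized by a correlation and a calibration slope between paired measurements from the two instruments. The following is a generic sketch on simulated data, not a reanalysis of the cited studies:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Simulated paired measurements of scapular upward rotation (degrees).
    ets = rng.uniform(0, 50, size=40)                        # ETS "criterion" values
    inclinometer = 2.0 + 0.95 * ets + rng.normal(0, 3, 40)   # inclinometer tracks ETS with noise and offset

    r, p = stats.pearsonr(inclinometer, ets)
    slope, intercept, r_reg, p_reg, se = stats.linregress(ets, inclinometer)

    print(f"Pearson r = {r:.2f} (p = {p:.3g})")
    print(f"Inclinometer change per 1 deg of ETS change: {slope:.2f} deg")
    print(f"Mean difference: {np.mean(inclinometer - ets):.2f} deg")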
Johnson et al [21] reported that the inclinometer detected 0.92° to 1.20° of change for every 1° detected by the ETS for upward rotation, while Scibek and Carcia [23] reported slightly less favorable results, with the inclinometer detecting 1° of change in tilt for every 0.5° detected by the ETS for anterior-posterior tilt. It should be noted that Johnson et al [21] utilized participants with healthy and impaired shoulders while Scibek and Carcia [23] utilized only healthy participants, highlighting the need for further investigation into the clinical usefulness of measuring anterior-posterior tilt in unhealthy shoulders. Regarding reliability, Johnson et al [21] reported intra-rater, inter-trial reliability with ICC values that ranged from 0.89 to 0.96, and SEM values that ranged from 2.0° to 2.8°. Similarly, Scibek and Carcia [23] reported excellent inter-trial reliability with ICC values that ranged from 0.97 to 0.99. It appears that upward rotation can be repeatedly measured with acceptable consistency; however, no articles were found that have specifically assessed the reliability of measuring anterior-posterior tilt with a digital inclinometer.

Humeral measurements

Similar to scapular measurements, few investigations have reported on the validity of digital inclinometers for humeral measurements. Two studies by Kolber et al [44,45] determined concurrent validity between measures collected with the inclinometer and a standard goniometer, with ICC values for scaption (0.94), flexion (0.86), abduction (0.85), external rotation (0.97), and internal rotation (0.95) indicating good to excellent measures. Laudner et al [43] determined concurrent validity by measuring the relationship between horizontal adduction motion and internal rotation motion. Significant (P < 0.01) Pearson product moment correlations ranged from 0.52 to 0.72 between methods, signifying an association of a loss of motion with contracture of the posterior capsular structures of the glenohumeral joint. While differences in methodology make comparisons difficult, these studies have demonstrated the digital inclinometer to be a valid instrument. Two-dimensional measurements of shoulder motion utilizing a digital inclinometer have been demonstrated to exhibit moderate to excellent measures of reliability and validity. Similar to ETS, inter-observer measurements resulted in less favorable reliability as compared to intra-observer measurements when utilizing digital inclinometers (Table 2). Therefore, caution must be taken when comparing angular measures of the shoulder complex that have been obtained by two different observers, and when measures recorded with different instrumentation are being compared.

CLINICAL APPLICATIONS

Electromagnetic tracking systems and inclinometers have both been shown to be valid and reliable means of collecting shoulder complex kinematic data specific to movement of both the humerus and the scapula. When attempting to monitor clinical outcomes, the ability to accurately quantify motions of these bony segments can provide useful data that could be used to drive clinical decision making. A variety of studies have demonstrated the usefulness of ETS in addressing clinically related questions, specifically those whose aim is to quantify shoulder kinematics associated with various shoulder patient populations.
Electromagnetic tracking systems have been useful in describing shoulder kinematics exhibited by the scapula and humerus in patients with rotator cuff pathology [46][47][48][49][50][51][52] . Lukasiewicz et al [46] noted altered scapular kinematic patterns in patients presenting with shoulder impingement when compared to participants with healthy shoulders. Similarly, in a study designed to compare three-dimensional shoulder kinematics in subjects with and without shoulder impingement, McClure et al [52] noted differences in scapular kinematics between groups, which were attributed to compensation strategies utilized for glenohumeral weakness and shoulder motion loss. In a treatment based study McClure et al [48] assessed scapular kinematics in patients with shoulder impingements before and after a six week intervention. While patients noted improvements in pain and shoulder function, no changes were noted in scapular kinematics following the intervention program [48] . Mell et al [47] utilized an ETS to identify variations in scapulohumeral rhythm between rotator cuff tear, tendinopathy, and healthy control subjects. Using the same equipment, others have investigated the role that pain and rotator cuff tear size has on scapulohumeral rhythm [49,50] and shoulder movement velocity [51] . Similarly, ETS have been utilized to capture three-dimensional scapular kinematics in patients with multidirectional instability [53] , in a patient that had undergone shoulder arthroplasty [54] , and in patients with frozen shoulders [55,56] . In all but one case [54] , a noninvasive approach was utilized in conjunction with the ETS. In each case, data were obtained that enabled the clinicians to quantify the three-dimensional motion associated with the shoulder complex. Electromagnetic tracking systems have also been useful in some clinically based studies designed to monitor three-dimensional scapular kinematics following an intervention. Wang et al [57] utilized an ETS to monitor alterations in scapular orientation following a stretching and strengthening protocol in a small sample of subjects presenting with forward shoulder posture. Similarly, Ebaugh et al [58,59] , in two separate studies, evaluated the impact of shoulder muscle fatigue on the glenohumeral and scapular kinematics in samples of twenty healthy subjects. Others have also utilized ETS to monitor changes in scapular kinematics and scapulohumeral rhythm following fatigue protocols [60][61][62] . When evaluating the impact of glenohumeral internal rotation deficit (GIRD) in the shoulders of 23 subjects, Borich et al [63] noted that a significant relationship exists between GIRD and scapular orientation. Although, ETS have been utilized in a variety of clinically based studies, the number of participants in these studies is relatively small. Often, access to these testing systems is limited due to the financial and physical resources necessary to own and operate this sophisticated equipment. Furthermore, although there are a variety of software packages and platforms that allow for data capture and analysis, the amount of time that must be invested in learning how to utilize these systems along with the time associated with setting up subjects is considerable and likely exceeds the available time for most clinicians. Still, investigators continue to utilize this equipment for their research; however, the number and size of these clinically based shoulder studies is limited. 
Interestingly, many of the studies involving the shoulder and ETS are validation studies designed to verify the clinical usefulness of a new, clinically available method of kinematic assessment. Johnson et al [21] took this approach when validating the digital inclinometer for use with assessing scapular upward rotation, which was replicated by Scibek and Carcia [23] for the monitoring of scapular anteriorposterior tilt. Still others have utilized ETS to establish the validity of a visual and clinically based scapular dyskinesis screening [26,27] . Ultimately, while ETS allow for accurate quantification of three-dimensional shoulder kinematics, accessibility limitations, along with physical and financial limitations make other tools and systems, such as inclinometers, an attractive option for clinical use and for addressing clinical questions. In addition to the work of Johnson et al [21] and Scibek and Carcia [23] , other investigators have suggested that inclinometers offer a cost effective and clinically useful means by which to quantify shoulder and scapular kinematics [64,65] . A number of studies involving assessment of the shoulder have relied on the use of inclinometers to quantify both scapular motion and glenohumeral motion. Borsa et al [66] utilized a digital inclinometer to quantify scapular upward rotation during humeral elevation in subjects with healthy shoulders. Scibek and Carcia [42] utilized a digital inclinometer to evaluate scapulohumeral rhythm in unimpaired subjects. A variety of clinically based studies have incorporated inclinometers when quantifying scapular motion and glenohumeral motion in overhead athletes and in patients with shoulder pathologies [43,[67][68][69][70][71][72] . Dover et al [67] utilized inclinometers to measure glenohumeral range of motion and to evaluate proprioception in female softball athletes. Witwer and Sauers [68] evaluated scapular upward rotation in a group of collegiate water polo players. Similarly, Laudner et al [72] incorporated a digital inclinometer when comparing scapular upward rotation between baseball pitchers and positions players. Another inclinometer-based study examined scapular kinematics in 72 overhead athletes, with healthy and injured shoulders [69] . Interestingly, these studies where clinical data were obtained using an inclinometer routinely presented with larger sample sizes as compared to those clinically based studies that utilized ETS. Certainly, the statistical designs of these studies that utilized inclinometers may have required larger sample sizes; however, the ease of use associated with the inclinometer made it feasible to test large pools of subjects. While there are few studies where ETS were used to measure changes in shoulder kinematics following an injury intervention program [48] , inclinometers have been shown to be plausible options. Following the establishment of the digital inclinometer as a valid and reliable tool for assessing posterior shoulder tightness [43] , Laudner et al [71] evaluated the acute effects of a sleeper stretch designed to increase posterior shoulder flexibility. Using the inclinometer, the investigators were able to observe significant increases in shoulder internal rotation and posterior shoulder motion following the stretching intervention [71] . Similarly, utilizing an inclinometer, McClure et al [73] compared the effectiveness of two stretching protocols, a sleeper stretch and cross body stretch, to increase shoulder range of motion. 
Although the randomized controlled trial utilized smaller sample sizes, they were able to detect significant and clinically meaningful increases in shoulder motion using an inclinometer [73]. Although not an intervention-based study, Thomas et al [70] utilized a digital inclinometer to monitor changes in shoulder range of motion and scapular upward rotation in overhead athletes over the course of their competitive seasons. Based upon the observed changes in glenohumeral and scapular motion across their sport seasons, it was suggested that changes in motion should be monitored during the competitive season so as to address any changes that might contribute to the occurrence of shoulder injuries [70]. While both ETS and inclinometers can be utilized to monitor changes in shoulder complex kinematics over time or following intervention strategies, inclinometers provide an accessible, affordable, and clinically useful strategy for monitoring various aspects of shoulder motion.

CONCLUSION

The ability to gain valuable insight into the kinematics of the shoulder complex is heavily reliant on the accuracy and precision of the instrumentation utilized. The evidence presented in this review demonstrates that ETS and digital inclinometers are reliable and valid instruments. Similarly, it is apparent that ETS have an advantage regarding accuracy, precision, and the ability to capture three-dimensional and dynamic analyses, while digital inclinometers are much more cost effective and practical in clinical settings. Reliability of both of these instruments is highly dependent on the user, as inter-rater measures were found to be less desirable when compared to intra-rater measures, with palpation error likely contributing to the increased variability. Although some evidence has been presented regarding the minimal detectable changes captured with ETS for scapular kinematics, further study is warranted to expand our understanding of the clinical usefulness of ETS. Conversely, inclinometers provide a clinically useful means to monitor kinematic changes during outcomes-based studies.

Table 2. Reliability of digital inclinometer measurements of shoulder motion (selected values)
Kolber et al [44] - Scaption: intra-day, inter-observer ICC 0.89, SEM 3.4°, MDC 9°; inter-day, intra-observer ICC 0.88, SEM 3.4°, MDC 9°
Laudner et al [43] - Horizontal adduction: intra-observer ICC 0.93, SEM 1.64; inter-observer ICC 0.91, SEM 1.71
de Winter [65] - Abduction: inter-observer ICC 0.28-0.83; External rotation: inter-observer ICC 0.56-0.90
ICC: Intraclass correlation coefficient; RMS: Root mean square; SEM: Standard error of measurement; MDC: Minimal detectable change.
2018-04-03T05:40:14.860Z
2015-11-18T00:00:00.000
{ "year": 2015, "sha1": "5211251f6f25d0cf7c2ecf3c3ed764fb9b3b8155", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5312/wjo.v6.i10.783", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "3271b57aefa551e81da9286271cf0f7db7178b57", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
51893829
pes2o/s2orc
v3-fos-license
Dissemination of novel biostatistics methods: Impact of programming code availability and other characteristics on article citations

Abstract

Background: As statisticians develop new methodological approaches, there are many factors that influence whether others will utilize their work. This paper is a bibliometric study that identifies and quantifies associations between characteristics of new biostatistics methods and their citation counts. Of primary interest was the association between numbers of citations and whether software code was available to the reader.

Methods: Statistics journal articles published in 2010 from 35 statistical journals were reviewed by two biostatisticians. Generalized linear mixed models were used to determine which characteristics (author, article, and journal) were independently associated with citation counts (as of April 1, 2017) in other peer-reviewed articles.

Results: Of 722 articles reviewed, 428 were classified as new biostatistics methods. In a multivariable model, for articles that were not freely accessible on the journal's website, having code available appeared to offer no boost to the number of citations (adjusted rate ratio = 0.96, 95% CI = 0.74 to 1.24, p = 0.74); however, for articles that were freely accessible on the journal's website, having code available was associated with a 2-fold increase in the number of citations (adjusted rate ratio = 2.01, 95% CI = 1.30 to 3.10, p = 0.002). Higher citation rates were also associated with higher numbers of references, longer articles, the SCImago Journal Rank indicator (SJR), and the total number of publications among authors, with the strongest impact on citation rates coming from SJR (rate ratio = 1.21 for a 1-unit increase in SJR; 95% CI = 1.11 to 1.32).

Conclusion: These analyses shed new insight into factors associated with citation rates of articles on new biostatistical methods. Making computer code available to readers is a goal worth striving for that may enhance biostatistics knowledge translation.

Introduction

Knowledge translation is fundamental to advancing science. There are multiple routes by which scientists disseminate their findings to aid in translation, but publishing findings in peer-reviewed journals is perhaps one of the most important means for doing so. For biostatisticians, as with most scientists who work in academic settings, publishing manuscripts that highlight contributions to their respective fields is key to career advancement, including promotion and tenure. Although publishing is personally important to the authors, it is important to recognize that reasons for publishing also include advancing the field of biostatistics and influencing the application of biostatistics methods to real-world settings [1-3]. Prior research suggests that the uptake of new statistical methods has much room for improvement [4]. Pullenayegum et al. (2016) provide some examples, including researchers not incorporating measurement error in regression models when indicated and the failure to utilize more adaptive designs in clinical trials [5]. Inverse-intensity weighting, one specific example of a class of statistical techniques designed in the early 2000s to handle longitudinal data with random follow-up times related to the outcome [6], has rarely been implemented, despite this phenomenon occurring often in certain types of chart review studies. Prior to Pullenayegum et al.'s 2016 paper, the method had only been used once as a primary analysis [7].
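To make the idea of inverse-intensity weighting concrete, a rough workflow is sketched below. It is only a schematic under simplifying assumptions (discrete visit times, a logistic visit model, and a weighted pooled regression standing in for a weighted GEE with an independence working correlation), not the estimator of [6]; the column names and data are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per subject-by-time slot, with 'observed'
    # flagging whether a visit (and outcome measurement) actually occurred at that slot.
    rng = np.random.default_rng(3)
    df = pd.DataFrame({
        "id": np.repeat(np.arange(200), 5),
        "time": np.tile(np.arange(5), 200),
        "severity": rng.normal(size=1000),   # prior health status influencing visit frequency
    })
    df["observed"] = (rng.uniform(size=1000) < 1 / (1 + np.exp(-df["severity"]))).astype(int)
    df["outcome"] = 1.0 + 0.5 * df["severity"] + rng.normal(size=1000)

    # Step 1: model the visit (observation) process given the observed history.
    visit_model = smf.logit("observed ~ severity + time", data=df).fit(disp=0)
    df["weight"] = 1.0 / visit_model.predict(df)

    # Step 2: weighted outcome regression using only the observed visits, with each visit
    # weighted by the inverse of its estimated observation probability.
    obs = df[df["observed"] == 1]
    outcome_model = smf.wls("outcome ~ severity + time", data=obs, weights=obs["weight"]).fit()
    print(outcome_model.params)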
A survey conducted by Canada's Natural Sciences and Engineering Research Council also suggests that researchers in certain fields related to mathematics, including statisticians, engage in knowledge transfer less often than colleagues in other natural sciences and engineering disciplines [8]. In our own prior work, we found that only 1.7% of articles published in the field of general/internal medicine research from 2000-2009 included a citation of an article published in the biostatistics literature during that same time frame, perhaps providing further evidence of suboptimal translation of the knowledge gained through biostatisticians' primary research [9]. While there are likely many ways that biostatisticians could improve the rate at which knowledge translation occurs, information within biostatistics publications themselves might provide insight into means of "successful" knowledge translation. Since biostatistics methods are algorithms, providing the algorithms in user-friendly formats to the reader, via printed computer code or some other means (e.g. on a personal or journal website, through e-mail upon request), could likely facilitate the use of that method by other scientists as well as help ensure reproducibility [10][11][12][13]. The ability to reproduce results is becoming more important as researchers' analytic techniques and data structures become more complex while searching for weaker associations among variables. As researchers attempt to reproduce other investigators' results and identify potential errors, having as much information as possible in primary publications (including accompanying data and/or code) will help ensure the reliability of the dissemination of new methods [11]. Researchers have promoted strategies for increasing citation frequencies, including publishing in journals that are read by large numbers of readers or that have high impact value, having the article freely accessible online (e.g. in a self-archive, public repository, or open-access format), including authors from multiple institutions and/or multiple countries, using more references, writing longer articles, sharing research data, and publishing across disciplines [14][15][16]. Publishing articles with tables and figures may also tend to increase readership as well [17], although this is practically universally done in biostatistics methods papers. Based on earlier findings suggesting that expository articles may tend to disseminate faster than traditional statistical methodological articles [18], we hypothesized that illustrating the use of the new method on a "real-world" dataset might also be associated with higher citation counts. We also recognize, however, the possibility that this might have the reverse effect if the examples are not used or presented appropriately. This paper is a bibliometric study that illustrates the associations between characteristics of published articles summarizing new biostatistics methods and one knowledge translation metric, their citation counts. While all of these characteristics are of interest with regard to their relationship with citation counts, of primary interest was the extent to which making computer software code available to the reader is associated with future citations. 
Methods To review biostatistical methods summarized in articles from a broad range of journals, we initially identified 85 peer-reviewed English-language journals that were listed among journals in the "Statistics and Probability" category for the year 2010, according to SCImago Journal & Country Rank [19]. From this list, we excluded journals that were not primarily focused on statistical methods or deemed not relevant to biostatistics, leaving 35 journals. Additionally, in Feb 2016, we conducted an informal 1-question e-mail survey of 49 biostatisticians linked to biostatistics, epidemiology, and research design (BERD) programs at institutions with Clinical and Translational Science Awards (CTSAs) that asked: "If you were going to submit a manuscript for publication that discussed a new biostatistical method (or an extension of an existing method) that you developed, in what journal would you prefer to publish? Please name up to 5 journals." Among 20 responders (41% response rate), all but two biostatistics journals mentioned were already included in our list of 35; these two were added to our list of journals. We then randomly selected from each of these journals up to 20 articles whose publication date was in the calendar year 2010. For journals with fewer than 20 articles published in 2010, all articles were selected. The year 2010 was chosen to provide a reasonable amount of time (i.e. 7 years) for knowledge translation to have occurred and also to ensure that the articles in question reflect methods which are still relatively novel. The goal was to identify at least 400 articles considered to be new statistical methods published across a range of journals, in order to provide 80% power to detect modest differences (effect sizes equivalent to 0.4 standard deviation units in a 2-sample t-test framework) in citation counts, and modest increases in the proportion above the median numbers of citations (e.g. 45% vs. 62%, equivalent to an odds ratio around 2.0, in a 2x2 chi-square test framework), assuming 2-sided testing and an alpha level of 0.05. An article was considered to be about a new biostatistical method if it described a novel statistical technique or algorithm or extended an existing technique or algorithm, which could potentially be used during the design, conduct, or analysis of a biomedical research study. If a method was statistical in nature but designed for a non-biomedical application (e.g. economics, agriculture), we did not necessarily exclude that article, since many statistical methods designed for non-biomedical disciplines are eventually adapted for use in biomedical research. Articles that compared already-existing biostatistical methods but did not extend those prior methods in any novel way were excluded, as were papers merely descriptive in nature about a particular study's design, analysis, or results. Articles that appeared to be purely mathematical and/or statistical proofs were excluded, because in such cases the authors were not typically describing new techniques and/or algorithms that could be readily adapted for use in other studies. In using this type of rubric for classification, the intent was to be highly specific, meaning that there would be little doubt that the final list of journal articles all represented novel biostatistical methods. Questions that helped decide whether an article was outlining a new biostatistical method included: 1) Does the article describe something that could be used for helping design a study? 
2) Does the article describe something that could be used for helping analyze data from a study? 3) Does the article describe something for which computer code could be used to help make use of this new method? For articles classified as new methods, a number of characteristics were abstracted and entered into a REDCap database. These characteristics included publication dates (in print and on-line, if available), page counts (range: 1 to 54), number of references cited (range: 2 to 95), number of authors (range: 1 to 7), number of publications by most published author (range: 2 to 1653), total number of author publications (range: 3 to 1745), whether any of the authors' institutional involvement was located in the U.S., whether the authors collectively were affiliated with more than one institution and/or country, whether there was evidence that any of the authors' primary discipline was clinical or otherwise non-methodological (i.e. as indicated by their degree or affiliation), whether computer code was made available to the reader (i.e. in the actual article, appendix, or supplementary material; on a referenced web page; or upon request from one of the authors), and whether a real-life application of the method was provided in the article. If an article specifically referenced computer code uploaded or published elsewhere (e.g. CRAN, GitHub), then that article was classified as having provided computer code. The publication dates were used to create a representation of the duration of follow-up time, defined as the number of months from publication (on-line or in print, whichever was earlier) until April 1, 2017. We also recorded the article's 2010 SCImago Journal Rank (SJR) indicator (range: 0.670 to 6.036), its journal's h-index (range: 14 to 133), whether the article was available in the PubMed Central repository [19], whether the article was freely accessible on the journal's website, and whether a version (e.g. pre-or post-print) of the article was published on any freely accessible website (including the journal's website). For this study, both the SJR indicator and journal h-index provided journal ranking scores based on citations received from other publications. While the SJR indicator and h-index are both metrics that quantify citation rates, a journal's SJR indicator is a broad measure of its citations over a 3-year time period, while a journal's h-index is more reflective of its most highly cited articles. The full details for how these scores are calculated can be found on the SCImago Journal & Country Rank website [19]. Finally, we noted the number of citations each article had (as of April 1, 2017) and the number of publications each author had (as of May 1, 2017), according to the Scopus1 citation database [19,20]. Although it is possible that articles with higher citation counts may be of higher quality than others (i.e. more effective and reproducible), we did not specifically examine or judge the quality of the new methods being proposed in these articles. Articles were reviewed and characterized by two of the three biostatisticians listed as authors (AEW and PJN, or LNM and PJN). When there was disagreement, for example, on whether an article truly represented a novel statistical method or whether code was available, the biostatisticians re-evaluated their assessments until consensus was reached. 
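The sample-size justification sketched earlier in this section (80% power to detect a 0.4-SD difference in a two-sample t-test framework, or a 45% vs. 62% split in a 2x2 chi-square framework, at two-sided alpha = 0.05) can be checked with standard power routines. The snippet below is a rough Python sketch, not the authors' own calculation (which was done for the SAS-based analysis), and the variable names are ours.

```python
# Rough check of the sample-size targets described in the Methods (not the authors' code).
from statsmodels.stats.power import TTestIndPower, NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

alpha, power = 0.05, 0.80

# Two-sample t-test framework: detect a 0.4 standard-deviation difference.
n_per_group_t = TTestIndPower().solve_power(effect_size=0.4, alpha=alpha,
                                            power=power, alternative='two-sided')

# 2x2 chi-square framework: detect 45% vs. 62% above the median citation count
# (odds ratio around 2.0); Cohen's h is the usual effect-size metric here.
h = proportion_effectsize(0.45, 0.62)
n_per_group_p = NormalIndPower().solve_power(effect_size=abs(h), alpha=alpha,
                                             power=power, alternative='two-sided')

print(f"t-test framework:     ~{n_per_group_t:.0f} articles per group")
print(f"chi-square framework: ~{n_per_group_p:.0f} articles per group")
# Both come out on the order of 70-100 articles per group, so a pool of at least
# 400 new-methods articles comfortably covers these targets even when the
# comparison groups (e.g. code available vs. not) are unevenly sized.
```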
The biostatisticians agreed approximately 80% of the time on whether an article should be classified as a new method, but after discussion consensus was achieved 100% of the time. A complete list of the articles included in the final analyses and their abstracted characteristics is included as supporting information (S1 Dataset). The SAS code reflecting the analyses has also been submitted as supporting information (S1 File). Since the primary hypothesis centered around whether making code available in the article would be associated with higher numbers of citations, characteristics of new methods articles with and without code available were compared using generalized linear mixed models (GLMMs) that included random journal effects to account for within-journal correlation. The various article characteristics served as dependent variables, incorporating an appropriate distributional assumption (e.g. binomial, multinomial, Poisson, or normal) and link function (e.g. log, linear), depending on whether the characteristic was a categorical, count, or continuous variable. A binary variable indicating whether the article had code available served as the independent variable in each of the models. A similar strategy was then used to determine which article, author, and journal characteristics were associated with the number of citations during the study follow-up period. For these analyses, we used a series of GLMMs to examine whether article characteristics (independent variables) were associated with number of citations (dependent variable). The models assumed a lognormal distribution for the number of citations, and each utilized a log link function. All of these GLMMs incorporated random journal effects to account for within-journal correlation. For each variable, a rate ratio was calculated by exponentiating the estimated regression coefficient; the rate ratio and its 95% confidence interval were reported. The rate ratio reflects the fold-increase in number of citations associated with having vs. not having the factor of interest (for categorical variables) or with a 1-unit increase (for SJR indicator and number of authors) or a 10-unit increase (for other continuous and count variables). Spearman correlations assessing the magnitude of the associations between number of citations and continuous characteristics were also calculated. Finally, we created a multivariable GLMM using the number of citations as the dependent variable. Assuming a lognormal distribution for the number of citations provided superior model fit with normally distributed residuals, as assessed via diagnostic plots, when compared to models that assumed other distributions, including normal, Poisson, and negative binomial. In this process, all main effects for the characteristics were included. We also investigated all two-way interactions between an indicator variable representing whether or not code was available in the article and each of the remaining characteristics. Using a forwards selection approach, only a single significant interaction was added. When characteristics were moderately to highly correlated with each other (rho>0.5), only the one most correlated with number of citations was selected to be included in the multivariable model. The model included random journal effects to account for clustering of articles within journals. 
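A minimal sketch of the modelling strategy just described, using Python's statsmodels rather than the authors' SAS code (S1 File): a mixed model on log-transformed citation counts with a random journal intercept, the code-by-free-access interaction, and exponentiated coefficients interpreted as rate ratios. The input file and column names are illustrative, not the study's actual variable names (the analysis data set itself is provided as S1 Dataset).

```python
# Minimal sketch (not the authors' SAS code) of a mixed model for log citation
# counts with a random journal intercept, mirroring the GLMM strategy above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("new_methods_articles.csv")      # hypothetical file name
df = df[df["citations"] > 0].copy()               # lognormal model needs nonzero outcomes
df["log_citations"] = np.log(df["citations"])

model = smf.mixedlm(
    "log_citations ~ code_available * free_on_journal + n_references"
    " + page_count + sjr + total_author_pubs + followup_months",
    data=df,
    groups=df["journal"],                         # random journal effects
)
fit = model.fit()

# Exponentiated coefficients play the role of rate ratios (fold-changes in citations).
rate_ratios = np.exp(fit.params)
print(rate_ratios)

# Conditional rate ratios for code availability implied by the interaction term
# (assuming 0/1 coded indicator columns):
rr_code_not_free = np.exp(fit.params["code_available"])
rr_code_free = np.exp(fit.params["code_available"]
                      + fit.params["code_available:free_on_journal"])
```

Under this parameterization, exp(beta_code) is the rate ratio for code availability among articles that are not freely accessible on the journal website, and exp(beta_code + beta_interaction) is the corresponding rate ratio among freely accessible articles; this is how the 0.96 and 2.01 figures reported in the Results are to be read.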
Since the lognormal modelling approach requires outcomes to be nonzero, n = 19 articles with 0 citations were not included in the primary multivariable model; however, we conducted a sensitivity analysis to see whether any bias was introduced by this omission by assessing whether the study conclusions changed when the natural logarithm of the number of citations plus 1 was treated as the dependent variable in the multivariable model. We also conducted a sensitivity analysis in which several articles with extremely high numbers of citations were excluded from the multivariable analyses. Results A total of 722 articles were reviewed, of which 428 (59%) were classified as being novel biostatistics methods. Among the 428 new methods articles, 19 (4.4%) were never cited during the study follow-up period, and the maximum number of citations for an article was 535. The mean (± SD) number of citations was 16.7 (±38.9), and the median (interquartile range) was 8.0 (3 to 15). Table 1 lists descriptive statistics associated with the primary abstracted data elements, stratified by whether or not computer code was available. Characteristics were generally similar between articles with and without code available; however, compared to articles without code available, articles with code available were more likely to include a coauthor who was not a methodologist (9.6% vs. 3.8%, p = 0.04). The bivariate relationships for each abstracted data element with the number of citations (adjusted for length of follow-up time) are presented in Table 2. Articles with code available had somewhat higher citations, on average, than articles without code, but this bivariate association was not statistically significant (rate ratio = 1.23, 95% CI = 0.96 to 1.57). Articles that were freely accessible on the journal website had about 1.5 times as many citations as articles that were not free (rate ratio = 1.49, 95% CI = 1.05 to 2.11), and articles that were freely accessible in some form on any website had about 1.6 times as many citations as articles that were not free (rate ratio = 1.55, 95% CI = 1.23 to 1.96). Most of the continuous and count-level variables were also associated with the number of citations, including higher numbers of references cited, longer articles, articles with more authors, higher SCImago Journal Rank indicators, and higher journal h-indices. The final multivariable generalized linear mixed model is summarized in Table 3, and a number of interesting relationships were noted. All of the reported rate ratios, 95% confidence intervals, and p-values are adjusted for all factors included in the model. A key interaction was identified between having code available and whether the article was freely accessible on the journal website. For articles that were not freely accessible on the journal's website, having code available appeared to offer no boost to the number of citations (rate ratio = 0.96, 95% CI = 0.74 to 1.24, p = 0.74); however, for articles that were freely accessible on the journal's website, having code available was associated with a 2-fold increase in the number of citations (rate ratio = 2.01, 95% CI = 1.30 to 3.10, p = 0.002). Having a version of the article freely accessible in some form on any website increased the citation counts by another 50% (rate ratio = 1.47, 95% CI = 1.15 to 1.88, p = 0.002). Each additional 10 references cited was associated with a 19% increase in the number of citations (rate ratio = 1.19, 95% CI = 1.09 to 1.30, p = 0.0001). Although page length varied considerably (ranging from 1 to 54 pages), longer articles were cited more often than shorter articles; for example, articles that were 20 pages long had about 1.3 times as many citations, on average, compared with articles that were 10 pages long. The strongest association (i.e.
the association with the smallest p-value) was noted for the 2010 SCImago Journal Rank indicator, which exhibited a 21% increase in citations for a 1-unit increase in the SJR indicator (rate ratio = 1.21; 95% CI = 1.11 to 1.32). A small but significant association was also noted between citation counts and total number of publications among authors (i.e. a 1% increase in citation counts with every 10 additional publications). All significant associations would remain significant at a Benjamini-Hochberg [21] false discovery rate of 5%, with the exception of the association between citation counts and total number of publications among authors. When the sensitivity analyses were conducted to determine whether our findings were similar when the natural logarithm of the number of citations plus 1 was treated as the dependent variable in the multivariable model or when several articles with extremely high number of citations were excluded from the multivariable analyses, the study findings remained essentially unchanged. The interaction between having code available and whether the article was freely available on the journal website remained highly significant in each case, and all other previously identified statistically significant associations remained significant. Discussion In this study characterizing 428 articles summarizing the development of new biostatistics methods, we found a number of article, author, and journal characteristics that were associated with increased citation counts over about seven years of follow-up. These significant bivariate relationships included the article being freely available, having computer programming code available, higher number of references, longer page length, higher numbers of authors, higher SCImago Journal Rank indicator, higher journal h-index, and higher total numbers of publications among the article's authors. In a multivariable model, having computer code made available was associated with a 2-fold increase in citation counts over the 7-year time period for articles freely available on the journal website but not for articles that were not freely available. A number of other associations with citation counts remained statistically significant in the multivariable model, with the strongest impact on citation rates coming from SJR, which showed a 21% increase in citations for every 1-unit increase in the SJR indicator. While some of these associations (i.e., number of references and SJR indicator) are particularly strong even after controlling for other factors, it is possible that unmeasured confounders could explain these associations. Although we observed higher citation rates when computer programming code was provided for articles freely available on journal websites, we cannot be certain that merely providing code for one's new methods will enhance future citations. Similarly, it may be that higher quality articles simply tend to have higher numbers of references, more pages, more authors, and more publications among its authors; thus, we do not suggest that intentionally trying to increase these factors would directly translate to higher numbers of citations. 
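The Benjamini-Hochberg false-discovery-rate check mentioned above is straightforward to reproduce; the following illustration uses placeholder p-values rather than the study's actual results.

```python
# Illustration of the Benjamini-Hochberg false-discovery-rate check at 5%;
# the p-values below are placeholders, not the study's reported values.
from statsmodels.stats.multitest import multipletests

p_values = [0.0001, 0.002, 0.01, 0.03, 0.04]      # hypothetical
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for p, p_adj, keep in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p:<7} BH-adjusted p = {p_adj:.4f}  significant at FDR 5%: {keep}")
```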
Factors that could potentially confound the observed associations include variables that would be extremely difficult to capture, such as the number of formal/informal presentations of the new method, whether the authors work at institutions that are better at supporting translation of scientific findings, or other less tangible factors such as the notoriety/popularity of the authors, how socially connected the authors are to potential users of their methods, the quality of the article, or the relevance of the topic to other investigators/disciplines. Identifying the impact of such confounders or latent variables is a topic for future research. Barriers to the adoption of new biostatistics methods by other 'users' have been described in the literature to some extent. Pullenayegum et al. (2016) note the following barriers: lack of expertise in the area, lack of software, and lack of time needed to understand and utilize new methods. Providing computer code to the reader may be one means of enhancing adoption of new methods [22] or, at a minimum, assisting in the reproducibility process. In this review, we observed a variety of different types of computer code being provided, with R scripts and packages being the most common. Although we did not investigate reasons why methodologists do not make code available, perhaps biostatisticians, like many scientists, feel a burden to have multiple publications within a short time frame for promotion and may thus forego the added steps necessary to provide code to the reader [23]. Our study has several strengths and limitations worth noting. We included a broad range of journals in which new biostatistical methods are summarized, and the sample size was sufficiently large to allow us to investigate a number of potential associations with citation frequencies. While we had sufficient power for our primary question of interest (the relationship with providing software code), it should be noted that we did not have sufficient power to assess all other covariates included in the multivariable model. For example, 77% of the articles provided a "real-life example", rendering insufficient power to detect a significant association for this covariate. Although there was some subjectivity in the article assessments, relevant characteristics were abstracted by two of three biostatisticians working independently. Because there are so many journals that highlight new biostatistical methods, we could not include them all in this bibliometric analysis; however, we did include all of those mentioned in responses to an informal e-mail survey of biostatisticians. Our study focused on papers published in 2010, and it may be that authors have already started providing software code on a more consistent basis; one journal has even recently published guidelines on structuring code to assist authors [13], and many now require or strongly recommend that data and code be submitted with the manuscript [11,24-28]. Such guidelines will help ensure that new methods are readily available for use by others. By focusing only on 2010 publications, however, we minimized substantial differences in follow-up time and were thus able to look forward for 7 years (until 2017) to capture citation counts. We also recognize that the sciences of biostatistics and bioinformatics are often blurred and that we did not specifically address new bioinformatics methods; whether our findings hold true in related fields is an area of future research.
Finally, there are ways in which authors of new methods can 'market' their findings and encourage knowledge transfer that are beyond what can be captured in a study such as this one; aspects such as authors' involvement in professional societies, the degree to which they speak in front of captive audiences, and personal web pages were beyond the scope of this study. These analyses shed new insight into factors associated with citation rates of biostatistical methods articles. We have demonstrated an association between citation rates and several article, journal, and author characteristics, including whether programming code is available, whether the manuscript is freely accessible, greater numbers of references, and length of article. For biostatisticians publishing their novel methods, ensuring that relevant computer code is available to readers and that their articles are freely available to readers are goals worth striving for, given their relationship with other investigators' use of the new methods. Formal analysis: Amy E. Wahlquist, Paul J. Nietert.
2018-08-14T13:08:09.694Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "77804e577063ab10547a3e64aad23d7d7f772ce6", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0201590&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77804e577063ab10547a3e64aad23d7d7f772ce6", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
118648306
pes2o/s2orc
v3-fos-license
Argyres-Douglas Theories and S-Duality We generalize S-duality to N=2 superconformal field theories (SCFTs) with Coulomb branch operators of non-integer scaling dimension. As simple examples, we find minimal generalizations of the S-dualities discovered in SU(2) gauge theory with four fundamental flavors by Seiberg and Witten and in SU(3) gauge theory with six fundamental flavors by Argyres and Seiberg. Our constructions start by weakly gauging diagonal SU(2) and SU(3) flavor symmetry subgroups of two copies of a particular rank-one Argyres-Douglas theory (along with sufficient numbers of hypermultiplets to guarantee conformality of the gauging). As we explore the resulting conformal manifold of the SU(2) SCFT, we find an action of S-duality on the parameters of the theory that is reminiscent of Spin(8) triality. On the other hand, as we explore the conformal manifold of the SU(3) theory, we find that an exotic rank-two SCFT emerges in a dual SU(2) description. Introduction and Summary N = 2 superconformal field theories (SCFTs) often have exactly marginal deformations that preserve N = 2 supersymmetry (SUSY). Such deformations are descendants of dimension two operators that we can add to the prepotential where the integration is taken over the N = 2 chiral Grassmann parameters. The λ i parameterize spaces commonly referred to as "conformal manifolds. The simplest isolated SCFTs we can consider gauging are just collections of free hypermultiplets. For example, taking a collection of eight hypermultiplets and gauging an SU(2) ⊂ Sp (8) flavor subgroup, we construct the SU (2) theory with N f = 4 and SO (8) flavor symmetry. As we vary the resulting exactly marginal coupling, τ = θ π + 8πi g 2 , the theory becomes strongly coupled. However, if we tune the coupling appropriately, a new weakly coupled S-dual description emerges at another cusp [2] which looks like the original theory up to an S 3 triality outer automorphism of the flavor Spin (8). The duality group in this case is SL(2, Z), and this construction extends the notion of N = 4 duality [3] to an N = 2 theory. More generally, if one starts from a Lagrangian theory and tunes the gauge coupling to another cusp, one often finds that a new isolated interacting SCFT emerges. For example, in [4], Argyres and Seiberg found that, by starting with the weakly coupled SU(3) gauge theory with six fundamental flavors and varying the gauge coupling, a new cusp emerges with an S-dual description in which the Minahan-Nemeschansky (MN) theory with E 6 global symmetry [5] is weakly coupled to a doublet of SU(2) via an SU(2) ⊂ E 6 gauging. This type of duality has been generalized by Gaiotto [6] and many other authors (see, 1 On general grounds, such conformal manifolds are Kähler [1]. 2 They can also couple sectors with co-dimension one or higher conformal manifolds. However, we can often continue this process iteratively until we have a collection of isolated theories. e.g., [7] and [8]). All other examples of S-duality discussed in the literature essentially share the general characteristics of the above two cases, but with varying numbers of cusps and isolated sectors of varying ranks (i.e., varying dimensions of their Coulomb branches). In particular, all the instances of S-duality that we are aware of involve N = 2 scalar chiral primaries (we mean operators annihilated by all the anti-chiral Poincaré supercharges; these operators are often called "Coulomb branch" operators) of integer dimension. 
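For readability, the schematic marginal deformation and the coupling conventions referred to above can be restated as follows; this is our transcription of the standard formulas, and the S and T generators quoted here are the ones used later in Section 3.

```latex
% Schematic exactly marginal deformation and coupling conventions
\delta S = \sum_i \lambda_i \int d^4x \, d^4\theta \; \mathcal{O}_i + \mathrm{h.c.},
\qquad
\tau = \frac{\theta}{\pi} + \frac{8\pi i}{g^2},
\qquad
S:\ \tau \mapsto -\frac{1}{\tau}, \quad T:\ \tau \mapsto \tau + 1 .
```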
In this paper, we will generalize S-duality to theories with non-integer scaling dimension Coulomb branch primaries. Since Lagrangian theories have only integer dimension N = 2 chiral operators, our theories of interest are never completely weakly coupled. Instead, we will find various cusps where weakly coupled gauge fields emerge and couple various isolated strongly coupled sectors that are related to each other in interesting ways. 3 The original examples of theories with non-integer dimension chiral operators were discovered as special points in the Coulomb branch of SU(3) super Yang-Mills by Argyres and Douglas [9] and in SU (2) SQCD with N f = 1, 2, 3 flavors in [10] (the N f = 1 SCFT is the same as the one in [9]). Following the notation of [11], we will refer to these theories as the I 2,3 , I 2,4 , and I 3,3 SCFTs respectively. 4 These theories are believed to be the only rank-one SCFTs with non-integer dimension N = 2 chiral operators. 5 Of course, there are also many higher-rank Argyres-Douglas (AD) theories (e.g., see the review in [14]). Although the above AD theories are isolated, they typically inherit some flavor symmetry from the UV gauge theories in which they are embedded. For example, the I 2,4 and I 3,3 theories have SU(2) and SU(3) flavor symmetry respectively (the I 2,3 theory has no flavor symmetry). Therefore, we can try gauging the flavor symmetries of the I 2,4 or I 3,3 theories in an exactly marginal fashion (adding additional sectors charged under a diagonal combination of flavor symmetries as necessary), studying the resulting conformal manifold, 3 Note that there can be conformal manifolds with only integer dimension Coulomb branch operators that do not have a Lagrangian limit because they have some exceptional flavor symmetry (for example, one can gauge an SU (3) subgroup of the flavor symmetry of the E 8 SCFT as in [7]). 4 These SCFTs also go under many different names. For example, they are sometimes referred to as the (A 1 , A 2 ), (A 1 , A 3 ), and (A 1 , D 4 ) theories [12] (this notation arises from the fact that the BPS quivers of these theories are the products of the corresponding ADE Dynkin diagrams). 5 More precisely, Kodaira's classification of elliptic fibrations over the complex plane [13] implies that the only consistent non-integer scaling dimensions of N = 2 Coulomb branch generators describing a rank one theory are 6/5, 4/3, or 3/2. These scaling dimensions are realized, respectively, by the I 2,3 , I 2,4 , and I 3,3 theories [10]. While it is not inconceivable that other inequivalent theories have the same spectrum, no such theories have been found to date. and finding the various S-dual frames. To that end, in the first part of this paper, we will study a particular rank three theory, which we denote as T 2, 3 2 , 3 2 . This theory consists of SU(2) gauge fields coupled to two I 3,3 theories and a doublet of hypermultiplets. As a result, T 2, 3 2 , 3 2 has one marginal coupling. We will see that this marginal coupling parameterizes a conformal manifold with three S-dual cusps, and that, at each of these cusps, SU(2) gauge fields emerge and couple two I 3,3 theories and a doublet of hypermultiplets (with the parameters of the theory mixed in interesting ways). After appropriately taking into account the mixing of the different parameters, we will find an analog of the triality discussed in [2]. 
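As a quick reference, the rank-one building blocks just discussed carry the following Coulomb branch dimensions and flavor symmetries (collected from the statements above); the flavor central charge of the I_{3,3} theory, which enters the marginality counts below, is k_{SU(3)} = 3.

```latex
% Rank-one Argyres-Douglas building blocks (data as quoted in the text)
\begin{array}{c|c|c}
\text{theory} & \Delta(u) & \text{flavor symmetry} \\
\hline
I_{2,3} & 6/5 & \text{none} \\
I_{2,4} & 4/3 & SU(2) \\
I_{3,3} & 3/2 & SU(3) \quad (k_{SU(3)} = 3)
\end{array}
```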
Furthermore, subject to some assumptions, we will prove that the T 2, 3 2 , 3 2 theory is the minimal (i.e., lowest-rank) theory with non-integer dimensional Coulomb branch operators that has a marginal gauge coupling and exhibits S-duality. As such, our discussion of the T 2, 3 2 , 3 2 SCFT represents the minimal generalization of Seiberg and Witten's analysis of SU (2) with N f = 4 [2]. In the second part of the paper, we look for the lowest-rank generalization of Argyres-Seiberg duality. We will argue that the simplest generalization is given by a rank four theory we call T 3,2, has an SU(2) gauge theory realization in which the gauge group is coupled to a single I 3,3 theory and a more exotic theory of rank two with spectrum 3, 3 2 that we will call T 3, 3 2 (this theory plays the role of the E 6 SCFT in our duality). The latter has a G T 3, 3 2 ⊃ SU(3) × SU(2) flavor symmetry, of which we gauge the SU(2) factor. T 3, 3 2 has not been explicitly discussed in the literature (although it appears implicitly in the classification of [11,15]), and our analysis will elucidate some of its interesting properties. For example, our results imply that the flavor symmetry does not suffer from Witten's anomaly [16]. Moreover, it follows from our analysis that the SU(2) and SU(3) flavor central charges are The result for k is somewhat unconventional, since it does not follow from the usual rule of thumb for relating flavor central charges to (in our normalization) twice the scaling dimension of some Coulomb branch operator in the theory; indeed, the T 3, 3 2 theory has no dimension- 5 2 operator. Also, using the results of [17], we can immediately conclude that since the I 3,3 theory does not have exotic N = 2 chiral primaries, neither does the T 3, 3 2 theory. The rest of this paper is organized as follows: In Section 2 we describe the tools that let us identify the AD building blocks in the various S-dual frames. In Section 3 we give the details of the rank three example generalizing the S-duality of SU (2) with N f = 4, while in Section 4 we discuss the rank four generalization of Argyres-Seiberg duality. We briefly conclude in Section 5. In Appendix A, we sketch out the Hitchin system derivation of the various Seiberg-Witten curves we use in the main part of the paper. Appendix B exhibits the equivalence of the (III , F ) theory to I 3,3 plus a triplet of hypermultiplets. The Strategy The idea of using isolated sectors to construct conformal manifolds of N = 2 SCFTs by weakly gauging flavor symmetry subgroups is rather general. In order to make sense of the vast set of possible building blocks and the S-dual cusps that can emerge, we should find some simple, universal, and invariant characterizations of the physics on an N = 2 conformal manifold, M. For example, we can study: (i) The a and c conformal anomalies. (ii) The set of flavor symmetries (in our conventions, these are symmetries commuting with the N = 2 superconformal algebra and not related by supersymmetry to higherspin symmetries), G = i G i , and the corresponding flavor central charges, k i . (iii) The spectrum, S, of Coulomb branch operators. These quantities do not change as we travel along M. 6 As we go between different cusps of the conformal manifold, the various quantities in (i), (ii), and (iii) are "partitioned" among the different emergent sectors. 
One interesting aspect of the Argyres-Seiberg-like dualities is that, unlike in the case of SU(2) gauge theory 6 The a and c central charges are invariant under exactly marginal deformations by the usual anomaly matching arguments (conformal symmetry is unbroken as we move along M). The flavor symmetries are also invariant (at the cusps, where weakly coupled gauge fields emerge, we also have emergent flavor symmetries; however, these symmetries are arbitrarily weakly gauged) since the exactly marginal primaries, O i , are uncharged under the flavor symmetries (this follows, e.g., from the analysis of the O i O † j OPE in [18]). As a result, by anomaly matching arguments, the k i are constant on M. The invariance of the N = 2 chiral spectrum follows from [19], which shows that the number of such operators cannot change as we traverse the conformal manifold, and from [20], which shows that the dimensions of these operators do not change either. Note that this reasoning applies also to the "exotic" higher-spin N = 2 chiral primaries considered in [17]. with N f = 4, these quantities are generally distributed differently at the different cusps. For example, in the case of [4], at the SU(3) cusp we have (2.1) The first contributions in a and c come from the SU(3) gauge sector, while the remaining contributions come from the six flavors (this partition reflects the fact that there are seven corresponding N = 2 stress tensor multiplets and hence seven different N = 2 sectors). 7 Finally, the flavor symmetry comes from the hypermultiplets, and the gauge sector gives all the contributions to S (the elements of S are the scaling dimensions of the generators of the N = 2 chiral ring-in this case the Casimirs of SU (3)). On the other hand, at the SU(2) cusp, we find three distinct N = 2 sectors (with three independent N = 2 stress tensor multiplets) (2. 2) The first contributions in the above partitions are from the gauge sector, the second contributions come from the MN theory, which has rank one (its Coulomb branch chiral ring has a single generator of dimension three), and the third contributions come from the doublet of hypermultiplets. Now let us turn to theories with fractional-dimensional operators. In the case of the T 2, 3 2 , 3 2 theory mentioned in the introduction, we have In our conventions, a = 3 32 3TrR 3 − TrR and c = 3 32 3TrR 3 − 5 3 TrR , whereR = 1 charge, and I 3 is the Cartan of SU (2) R (a free N = 2 U (1) vector multiplet scalar primary has I 3 = 0 and where the first contributions are from the SU(2) gauge sector, the second and third contributions are from the two I 3,3 SCFTs, and the final contributions are from the hypermultiplets. The global symmetry group is U(1) 3 since we gauge a diagonal SU(2) ⊂ (2), where the SU(3) factors come from the I 3,3 sectors and the Sp(2) factor comes from the two hypermultiplets. This gauging is marginal since k SU (2) = 2k I 3,3 SU (2) + k 2⊕2 = 2 · 3 + 2 = 8. 8 On the other hand, in the case of the T 3,2, 3 conformal manifolds is simple. We first take the data in (2.3) and (2.4) and match it to data for the corresponding theories in the infinite class of AD SCFTs described in [11,15]. In particular, we will argue that T 2, where the theories listed on the RHS of (2.5) are defined in [11,15]. 9 Using our methods, it is clear that one can explore infinitely many generalizations of the conformal manifolds we will discuss in this text. In ( , since the embedding index of SU (2) ⊂ SU (3) is unity. 
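The conventions referenced in the footnotes above, restated in standard form, together with the one-loop condition that makes the SU(2) gauging of the T_{2,3/2,3/2} theory exactly marginal:

```latex
% Conformal anomaly conventions (footnote 7) and the marginality condition (footnote 8)
a = \frac{3}{32}\left(3\,\mathrm{Tr}\,R^{3} - \mathrm{Tr}\,R\right), \qquad
c = \frac{3}{32}\left(3\,\mathrm{Tr}\,R^{3} - \tfrac{5}{3}\,\mathrm{Tr}\,R\right), \qquad
k_{SU(2)} = 2\,k^{I_{3,3}}_{SU(2)} + k_{\mathbf{2}\oplus\mathbf{2}} = 2\cdot 3 + 2 = 8 .
```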
9 Evidence for the first equality in (2.5) was presented at the level of the BPS spectra in [22] (note that the methods in [23] are also useful for finding the BPS spectrum in this case). We will describe how S-duality works in this theory. Note also that, as we explain in more detail below, the superscript "3 × [2, 2, 1, 1]" in the second equality refers to certain Young tableaux that define the III 3×[2,2,1,1] 6,6 SCFT. ifications of the A k (2, 0) theory and are therefore referred to as being of class S). 10 These theories can be succinctly described in terms of Hitchin systems, 11 and the corresponding Seiberg-Witten (SW) curves come from the spectral covers of these Hitchin systems. Using the resulting curves, we can then explore the various cusps of the conformal manifolds and find new S-dual frames. As an alternate derivation, we will also show how to obtain the SW curves directly from certain UV-complete linear quiver theories. Crucially, the Hitchin systems also give us direct access to the quantities (i)-(iii) without the need to fully analyze the SW curves. 12 As a result, we can immediately generate conjectures about different S-dual frames and perform some checks on our guesses before verifying them by analyzing the SW curve. Indeed, in the examples below, we will essentially be able to conjecture the S-dualities from studying the different ways in which the quantities in (i)-(iii) can be partitioned. To confirm these guesses, we then study various limits of the SW curve. The reasons we can proceed in this way are as follows: • The Casimirs of the adjoint Higgs field in the Hitchin system description allow us to find the Coulomb branch spectrum, S = {∆ 1 , · · · , ∆ N }. By the results of [25], this data also fixes 13 • Using the recipes in [11,15,26] (see also the discussion in [27] and [28]), we can give a Lagrangian description of the three-dimensional mirror of the S 1 compactification of our theory, T 3dm . Although this description is not always "good" (in the sense that the IR superconformal R-symmetry can mix with accidental symmetries), we can unambiguously compute the dimension of the corresponding Coulomb branch, dimM 3dm C , and hence a − c via the relation (2.7) 10 In fact, there is some redundancy in this description, and, as we will see, both the T 2, theories can also be realized as the IR description of M 5 branes wrapping a sphere with one irregular and one regular puncture. 11 See [24] for a beautiful account of the relationship between theories of class S and Hitchin systems. 12 One apparent exception to this statement is the set of flavor anomalies. 13 A condition for using the results in [25] is that our theory has a freely generated Coulomb branch. All the theories we study in this paper satisfy this condition. We expect (2.7) to hold in all theories that have a genuine Higgs branch (all the superconformal theories of class S discussed in [11,15] with non-integer dimension Coulomb branch operators come from genus zero compactifications of the (2, 0) theory and therefore have Higgs branches). 14 • The three-dimensional mirror often allows us to fix the precise flavor symmetry of the theory via the monopole analysis of [29] or, sometimes, from applying mirror symmetry again and reading off the flavor symmetry directly. 
Note that we can essentially always find the number of mass parameters of the theory in this way (we can also do this by studying Casimirs of the adjoint Higgs field), and we can read off the full flavor symmetry as long as the IR behavior is under sufficient control. 15 We should note that from the perspective of the compactification of the A k (2, 0) theory, it may be somewhat surprising that we have an exactly marginal parameter at all. Indeed, in the case of Gaiotto's theories [6], marginal parameters in the four-dimensional field theory are identified with complex structure deformations of the Riemann surfaces on which the parent six-dimensional theory is compactified. Clearly, the punctured spheres we consider do not have any complex structure deformations. Instead, it turns out that the exactly marginal deformations in our theories arise from certain dimensionless parameters of the co-dimension two defects used in defining the six-dimensional parent theory. 16 Finally, before we proceed, we should also note that in studying the behavior of our theories at different cusps in the marginal coupling space, we will often find it necessary to renormalize some of our parameters by multiplying them by functions that either vanish or diverge at a given cusp. The reason we do this is simple. We must demand that our parameterization of the Coulomb branch is non-singular so that the BPS masses are finite and non-trivial functions of the Coulomb branch coordinates. Presumably this criterion can be also understood as the necessity of renormalizing the operators whose vevs parameterize the Coulomb branch as we traverse the conformal manifold. In [20], this renormalization was interpreted as the statement that operators can pick up non-trivial phases or mix in interesting ways as we travel along closed loops in the marginal coupling space (i.e., 14 The first equality in (2.7) is a natural generalization, to strongly coupled theories with a Higgs branch, of the weakly coupled result that a − c = − 1 24 (n H − n V ), where n H is the number of hypermultiplets and n V is the number of vector multiplets. The second equality in (2.7) follows from mirror symmetry (in particular, the exchange of Higgs and Coulomb branches under this duality) and the fact that the Higgs branch does not receive quantum corrections as we go to long distances compared to the S 1 radius. 15 As we will discuss, in the case of the T 3, 3 2 theory, this analysis is much more subtle. 16 We thank G. Moore for a discussion of this point. operators transform as sections of certain bundles over the conformal manifold). We will find some evidence for this picture, since our normalizations introduce monodromies in the marginal parameter space. A minimal generalization of Seiberg and Witten's S-duality In this section, we will study the T 2, 3 2 , 3 2 theory introduced above. In the first subsection, we find the invariant quantities (i)-(iii) of the I 4,4 theory [11] and show that they match . 17 We also argue that, subject to some assumptions, the only potential cusps of the T 2, 3 2 , 3 2 theory involve an SU(2) gauge sector coupled to two I 3,3 sectors and a doublet of hypermultiplets (in other words, we argue that there is no emergent rank-two sector with Coulomb branch spectrum 3 2 , 3 2 ). We then find further evidence for this picture by analyzing the SW curve of the I 4,4 theory. 
Moreover, we find an S-duality action on the parameters of the theory that is reminiscent of the Spin(8) triality of the SU(2) gauge theory with N f = 4. As a result, this discussion represents a simple generalization of Seiberg and Witten's analysis [2]. In the final subsection, we show how T 2, 3 2 , 3 2 can be derived from a UV-complete linear quiver. Before proceeding to the calculations, let us show that our theory is the simplest (i.e., lowest-rank) example of an S-duality with non-integer dimension Coulomb branch operators under certain reasonable assumptions: (a) the only rank-zero theories are collections of free hypermultiplets and (b) the only rank-one theories with non-integer scaling dimension primaries are the I 2,3 , I 2,4 , and I 3,3 theories. 18 Under these assumptions, it follows from the fact that I 2,3 has no flavor symmetry and the fact that k that the lowest rank theory we can imagine constructing-let us call it T rk2 -involves an SU(2) gauge theory coupled to one copy of the I 3,3 theory (via a gauging of the SU(2) ⊂ SU(3) flavor symmetry) and five hypermultiplets (via a gauging of SU(2) ⊂ Sp (5)) so that k SU (2) = k I 3,3 SU (2) + 5k 2 = 8. However, T rk2 is inconsistent, because the gauged SU(2) suffers from Witten's SU(2) anomaly [16]. To understand this last statement, note that the I 3,3 theory cannot have such an anomaly. Indeed, as we described above, the I 3,3 theory can be obtained as the IR endpoint of an RG flow from the asymptotically free limit of SU(2) SQCD with N f = 3 [10] (the 17 Note that we can also realize our theory in terms of the (I 3,3 , S) Hitchin system. This system has lower rank than the I 4,4 Hitchin system, but it also has an additional regular singularity. 18 It might be possible to prove assumption (a) by generalizing [25] and using the N = 2 version of the arguments presented in [30]. short-distance limit clearly has vanishing Witten anomaly since we can give SU(2) ⊂ SO(6)preserving masses to the squarks). This flow preserves an SU(3) ⊂ SO(6) flavor symmetry of the gauge theory, and, moreover, this symmetry is identified with the flavor symmetry of the I 3,3 theory in the deep IR. Since the RG flow does not leave any additional massless matter besides the I 3,3 theory at long distances, it must be the case that the Witten anomaly for the I 3,3 theory matches the (vanishing) Witten anomaly for the UV theory. Therefore, T rk2 has the same Witten anomaly as the five half-hypermultiplet doublets. This anomaly is clearly non-vanishing, and so the T rk2 theory is inconsistent. On the other hand, theory has an even number of hypermultiplet doublets, it is a consistent theory. theory match the corresponding quantities for the I 4,4 theory (evidence for the equivalence of the BPS spectra of these theories was given in [22]). To that end, we first note that, as desired, the I 4,4 SCFT has the following N = 2 chiral spectrum [11,15] Evidence that T 2, As a result, using (2.6), we find [15] 2a Next, we can write down a good UV description of the three dimensional mirror theory. 19 According to [15], this theory is described by a quiver involving four U(1) nodes with a bifundamental between each node and the overall U(1) decoupled. 20 Deleting a redundant 19 By this we mean a theory in which the IR superconformal R symmetry is visible in the UV. 
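To make the bookkeeping behind (3.1)-(3.2) explicit: reading the Coulomb branch spectrum off the subscripts of T_{2,3/2,3/2}, and assuming that (2.6) is the relation of Shapere and Tachikawa [25] between 2a − c and the Coulomb branch dimensions, one finds

```latex
% Coulomb branch spectrum of I_{4,4} = T_{2,3/2,3/2} and the combination 2a - c,
% assuming (2.6) is 2a - c = (1/4) \sum_i (2\Delta_i - 1)
S_{I_{4,4}} = \left\{ 2,\ \tfrac{3}{2},\ \tfrac{3}{2} \right\}, \qquad
2a - c = \frac{1}{4}\sum_{i}\left(2\Delta_i - 1\right) = \frac{1}{4}\,(3 + 2 + 2) = \frac{7}{4} .
```

The same value follows from the weakly coupled description: the SU(2) gauge sector (Coulomb branch generator of dimension 2) contributes 3/4, each I_{3,3} sector (generator of dimension 3/2) contributes 1/2, and the hypermultiplets contribute nothing.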
More precisely, we have in mind a theory in which the IR superconformal R-symmetry (or R-symmetries if there are multiple sectors) descends from a symmetry (or symmetries if there are multiple sectors) of the RG flow. 20 In the prescription of [11,15], this statement follows from the fact that the irregular singularity of the corresponding Hitchin system has boundary conditions specified by three 4×4 matrices whose eigenvalues are generically different (and whose degeneracies are therefore in one-to-one correspondence with three Young tableaux of the form [1, 1, 1, 1]). Note also that we have written the remaining U (1) factors in certain linear combinations that are convenient for applying the mirror symmetry algorithm in [31]. U(1) factor, we find the theory As a result, we conclude that dimM 3dm C = 3 and therefore [15] Finally, we can check that the flavor symmetries match. One way to do this is to take the mirror transform of the above theory (using the algorithm in [31]) We see that this theory has a U(1) 3 flavor symmetry, and so Alternatively, we can find the same result directly in the mirror theory by noting that there are three U(1) Coulomb branch symmetries that shift the three independent dual photons by constants. Any additional symmetries would correspond to currents that sit in monopole multiplets of dimension one [29]. However, the monopole multiplets have dimension where − → a = (a 1 , a 2 , a 3 ) ∈ Z 3 is a magnetic U(1) 3 charge vector. Note also that (3.6) is consistent with the claim that we have a good description of the IR theory since there are no free (or unitarity bound violating) monopole operators in our microscopic description. These results strongly indicate that T 2, 3 Let us now ask about possible S-dual descriptions. One possibility is that we have various dual descriptions involving an SU(2) gauge group coupled to two I 3,3 sectors and a doublet of hypermultiplets. A more exotic possibility would involve a dual description with an SU(2) gauge group coupled to a rank-two theory with Coulomb branch spectrum 3 2 , 3 2 . While we cannot prove that this second possibility does not occur without the SW analysis of the next section, we can already see it is unlikely. Indeed, it is reasonable to assume that any of the sectors that emerge at the cusps of the conformal manifold are also of class S and can be realized as compactifications of the (2, 0) A k theory (since the parent I 4,4 theory is in this class). However, there are no rank-two theories with spectrum 3 2 , 3 2 that can be built from the recipes in [11,15] (besides two decoupled copies of the I 3,3 theory). In the next subsection, we will demonstrate that the first option described in this paragraph is indeed realized. Analysis of the SW curve We begin by writing down the Seiberg-Witten curve for the I 4,4 theory 0 = x 4 + qx 2 z 2 + z 4 + c 30 x 3 + c 03 z 3 + c 20 x 2 + c 11 xz + c 02 z 2 + c 10 x + c 01 z + c 00 . (3.7) The Seiberg-Witten 1-form is given by λ = xdz. Since the mass of a BPS state is given by λ, the 1-form λ has scaling dimension one. This observation fixes the scaling dimensions of x, z, c ij and q as In order to make contact with the T 2, 3 2 , 3 2 theory discussed above, we should first show that an SU(2) gauge symmetry emerges. To that end, let us turn off all the c ij except for c 00 . The SW curve is given by In terms of y = −i(c 00 ) is expressed as with the 1-form now λ = u dx/y. 
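Equation (3.8) appears to have dropped out of this copy. Under the natural assumptions that every monomial in the curve (3.7) carries the same scaling dimension and that [λ] = [x] + [z] = 1, the assignment can be reconstructed as

```latex
% Reconstruction of (3.8): homogeneity of (3.7) forces [x] = [z], and [x] + [z] = [\lambda] = 1,
% so every monomial carries dimension 2 and the coefficient of x^i z^j has dimension 2 - (i+j)/2
[x] = [z] = \tfrac{1}{2}, \qquad [q] = 0, \qquad [c_{ij}] = 2 - \tfrac{i+j}{2} ,
```

so that c_{00} has dimension 2, c_{10} and c_{01} have dimension 3/2, c_{11}, c_{20} and c_{02} have dimension 1 (the three mass parameters associated with the U(1)^3 flavor symmetry), and c_{30}, c_{03} have dimension 1/2, the couplings conjugate to the dimension-3/2 operators; q is exactly marginal, as claimed.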
The equation (3.10) is precisely the curve for SU (2) with [2,32], where u is the Coulomb branch parameter of dimension 2. The parameter f is related to the exactly marginal gauge coupling τ = θ π + 8πi g 2 . 21 The equivalence of (3.9) and (3.10) suggests that the I 4,4 curve contains a sector described by a conformal SU (2) vector multiplet. The above SU(2) gauge theory has cusps at q = ∞ and q = ±2 where the curve (3.10) degenerates and different S-dual descriptions of the theory become weakly coupled. We can go between the cusps via the transformations T : τ → τ + 1 and S : τ → −1/τ [2]. In terms of q, these are expressed as T : q → 12−2q 2+q , S : q → −q. It turns out that S and T can be extended to the full I 4,4 curve (3.7). To that end, first consider The equation (3.7) is invariant under this transformation after we perform a one-form- x. Next, consider the T transformation. We first shift x → x + c 30 /(2q − 4) and z → z + c 03 /(2q − 4) so that the curve (3.7) is 30 x +c 03 z) +c 20 x 2 +c 11 xz +c 02 z 2 +c 10 x +c 01 z +c 00 . (3.12) This shift keeps λ invariant up to an exact term. While the relation between c kℓ andc kℓ is generically complicated, it reduces toc kℓ = c kℓ when q → ∞. Now consider the following transformation:T where g is a linear map defined by g( 21 Without loss of generality, we can take keeps the 1-form invariant up to an exact term. Hence, the I 4,4 curve is invariant under the transformations generated byS andT . As we will show in the remaining parts of this subsection, the cusps of the conformal SU(2) gauge theory persist in the presence of the fractional dimensional operators, and, at each of the cusps q = ∞, ±2, a weakly coupled SU(2) gauge group couples two I 3,3 theories and a doublet of hypermultiplets. We go between the cusps via theS andT transformations (and we use this freedom to study the cusp at q = ∞ and then study the q = ±2 cusps via these symmetries). Moreover, we see in (3.11) and (3.13) that these transformations act non-trivially on the various parameters and vevs. Note that theS andT transformations take a particularly simple form when acting on the independent physical mass parameters (i.e., the independent residues of the one-form), m i (i = 1, 2, 3), of the theorỹ S : where the m i are the independent eigenvalues of the simple poles in the Hitchin field at z = ∞. 22 As a result, we see that the duality group acts on the residues via S 3 . This situation is somewhat reminiscent of the action of the SL(2, Z) duality group of the SU(2) N f = 4 gauge theory on the mass parameters via triality [2] (although here we only have a U(1) 3 flavor symmetry instead of SO(8), and we have a non-trivial action of the duality group on the various non-integer dimension parameters of the theory). Indeed, it would be interesting to make this analogy more precise. 23 Cusp at q = ∞ Consider the I 4,4 curve (3.7) near q = ∞. Since one of the coefficients is divergent in this limit, it is not clear whether our parameterization of the curve describes the Coulomb branch in a non-singular fashion. As discussed in the introduction, we should normalize the c ij so that the masses of BPS states are non-trivial functions of these quantities. Let us first consider the Coulomb branch parameter c 10 of dimension 3 2 . When all the other deformations of the conformal point are turned off, the curve is given by 0 = x 4 + qx 2 z 2 + z 4 + c 10 x . To evaluate the periods of this curve, let us change variables as (x, z) → (x, w) with w = z/x. 
Neglecting a trivial branch (x = 0), we find (3.14) The 1-form is λ = 1 2 x 2 dw up to exact terms. The curve (3.14) is a triple covering of the w-plane with branch points at the roots of 1 + qw 2 + w 4 and at w = ∞. Let us define the roots w ± = ± 1 2 (−q + q 2 − 4). In the limit q → ∞, the 1-cycle with the largest absolute value of the period of the one-form is the one around w = ∞ and w = w + (or w − ). Its period behaves in the limit as with a q-independent constant κ. Since so that the period with the largest absolute value remains finite and non-vanishing in the limit q → ∞. We renormalize all the c ij except for c 00 in the same way (i.e., we demand that the largest period created by each c ij = 0 remains finite and non-vanishing in the limit q → ∞). The only deformation we need to study more carefully is c 00 . When only c 00 is turned on, the curve is the genus one curve (3.9). With an appropriate choice of two independent 1-cycles A and B, their periods behave in the limit q → ∞ as Since the ratio of the two periods is divergent, the curve is pinched in the limit q → ∞. This is the signature of a light W-boson and an infinitely massive monopole. A natural normalization in this case is c 00 → qc 00 so that 1 2πi A λ ∼ √ c 00 and 1 2πi B λ ∼ 1 πi √ c 00 log q. 24 24 We could also normalize c 00 as c 00 → q(log q) 2 so that 1 2πi A λ ∼ √ c 00 / log q and 1 2πi B λ ∼ 1 πi √ c 00 . Here we use the traditional normalization in which the period of the pinched cycle is finite and non-vanishing. Note that for c ij = c 00 there is a unique renormalization up to q-independent rescaling. The reason for this is that no 1-cycle created by c ij = c 00 is pinched in the limit q → ∞. As a result, the curve near q ∼ ∞ is written as 0 = x 4 + qx 2 z 2 + z 4 + q Let us now study the behavior of this curve in the limit q → ∞. It turns out that the curve splits into three sectors. • In the region |z/x| ∼ 1/ √ q, the curve is well-described by the new set of variables In the limit q → ∞, the curve reduces to By shiftingz →z − c 11 /(2x), the curve is written as . • In the region |z/x| ∼ √ q, the curve is well-described by the new variablesz = q − 1 4 z andx = q 1 4 x. The 1-form is now λ =xdz. After shiftingx →x − c 11 /(2z), the curve in the limit q → ∞ is written as . • In the region |z/x| ∼ 1, the curve in the limit q → ∞ is given by 0 = x 2 z 2 + c 11 xz + c 00 . This curve describes the SU(2) superconformal QCD in the weak coupling limit with c 11 a mass parameter for a fundamental hypermultiplet. We can eliminate this term by shifting x → x−c 11 /(2z). The curve after the shift is 0 = x 2 z 2 +(c 00 −c 2 11 /4), which describes the pinched W-boson cycle of the weak-coupling SU(2) curve. The mass of the W-boson is proportional to c 00 − c 2 11 /4. 26 The monopole cycle is overlapping between |z/x| ∼ √ q ∼ ∞ and |z/x| ∼ 1/ √ q ∼ 0; its period is divergent. To recapitulate: the first two sectors describe two I 3,3 theories while the third sector describes an SU(2) vector multiplet coupled to a fundamental hypermultiplet. The W-boson mass implies that the SU(2) sector is gauging the SU(2) flavor subgroups of the I 3,3 sectors. Hence, the I 4,4 curve (3.7) near q ∼ ∞ describes the Coulomb branch of the weak coupling limit of the T 2, 3 2 , 3 2 theory defined in the introduction. Cusps at q = ±2 Let us briefly discuss the other cusps at q = ±2. Since they are mapped to q = ∞ by the where ǫ = q ∓ 2. 
It is straightforward to show that, in the limit q → ±2, the curve splits into two I 3,3 curves connected by an SU(2) curve. A difference from the previous cusp is that the parameters c ij are now mixed among the three sectors. In terms of the linear map g defined below (3.13), one of the I 3,3 curves is characterized by g(c 30 ), g(c 20 ), g(c 10 ) and g(c 00 ) − g(c 11 ) 2 /4 while the other is governed by g(c 03 ), g(c 02 ), g(c 01 ) and g(c 00 ) − g(c 11 ) 2 /4. The SU(2) vector multiplet and a fundamental hypermultiplet are characterized by g(c 00 ) and g(c 11 ). The linear quiver In this section, we would like to demonstrate how the T 2, 3 2 , 3 2 theory can be engineered from a UV-complete linear quiver. To that end, consider the theory in Figure 1. Following [33], 26 The shift of the W-boson mass squared by a hypermultiplet mass squared is a common phenomenon. See for example [2]. we can write the corresponding SW curve as follows: (3.24) The SW differential has the form λ = v t dt. In the above formula u i , u ′ 2 andũ 2 are the Coulomb branch coordinates of the theory, m 1 and m 3 encode the mass parameters for the two SU(2) doublets, m 2 is related to the mass of the fundamental hypermultiplet of SU(3), µ 1 and µ 2 are associated with the mass parameters of the bifundamental hypermultiplets. 27 q 1 , q 2 and Λ are, respectively, the marginal couplings of the SU(2) gauge groups and the dynamical scale of the SU(3) group. If we send one of the q i couplings to zero, the curve reduces to that of the linear quiver with the SU(2) group replaced by two hypers in the fundamental of SU(3), which is indeed the expected degeneration in the "ungauging" limit. Setting q 1 = q 2 = 0 the curve reduces to that of SU(3) SQCD with N f = 5. If we send to zero Λ, thus ungauging SU(3), the quiver breaks into two pieces, each describing a scale invariant SU(2) theory. Depending on how we write the curve, in the degeneration limit we are left with the curve for one of these two sectors. For example, in the above formula, only the terms proportional to a positive power of t remain. We can change description and keep the other sector simply with the redefinition t → t/Λ. With a constant shift of v, which does not affect the form of the SW differential, and a suitable redefinition of the parameters, we can bring the curve to the following form, which is more convenient for our later discussion: (3.25) 27 Notice that the above curve is schematic: the parameters m i are not the physical masses (i.e., the residues of the SW differential) but are instead combinations of the mass parameters and the dynamical scale of the theory. We are interested in the origin of the moduli space of this theory (i.e., the point in the moduli space we get by setting all the parameters in (3.25) to zero except q i ) where the curve reduces to The resulting curve is singular and, as usually happens in N = 2 theories, the degeneration of the curve signals the presence of a superconformal fixed point, whose SW curve can be extracted starting from (3.25) by taking a suitable scaling limit. First of all we define new variables (3.27) In terms of these variables, (3.25) becomes (3.28) The SW differential is λ = (v/z)dz. Then, sending Λ to infinity, we get the curve To obtain this formula we divided the whole curve by a constant and rescaled z to set to one the coefficient of z 2 v and to −1 − g and g the coefficients of the terms v 2 and v 3 /z 2 respectively. 
This manipulation is also accompanied by the proper redefinition of the parameters. Notice that this transformation does not affect the SW differential. Since we are discussing a superconformal theory, all the parameters appearing in (3.29) should have a definite scaling dimension. This can be read from (3.27) using the UV dimension of the parameters appearing in (3.25). Notice that the above curve is homogeneous, in the sense that assigning dimension one to v (which is consistent with the constraint on the SW differential) and 1/2 to z we find that all the terms in (3.29) have dimension two. This is precisely the property we expect for the curve describing an SCFT. A minimal generalization of Argyres and Seiberg's S-duality In this section, we turn our attention to the rank four T 3,2, gauge fields coupled to an I 3,3 theory and an exotic rank two theory we call T 3, 3 2 (in the language of [15], this theory can be written as III can be embedded in a UV-complete linear quiver. Before proceeding to the calculations, let us show that-under the same assumptions we used at the beginning of Section 3 to demonstrate the minimality of our first examplethere are no rank three theories that exhibit Argyres-Seiberg-like duality. We can prove this statement as follows. Let us consider the possible rank three theories. They break up into two cases: (a) a rank one gauge theory coupled to either a rank two sector or to two rank one sectors, and (b) a rank two gauge theory coupled to a rank one sector. Let us consider (a) first. In this case, the gauge theory must be SU(2). Let us suppose that it is coupled to two rank one sectors. Vanishing of the one-loop beta function implies that the only possibility is that SU(2) is coupled to two copies of the I 3,3 theory with an additional doublet. This is the T 2, 3 2 , 3 2 = I 4,4 theory we studied in Section 3 and showed did not exhibit Argyres-Seiberg-like behavior. Next let us suppose that the SU (2) gauge theory is coupled to a rank two sector. In order to have an Argyres-Seiberg-like duality, such a theory must be dual to a rank two gauge theory coupled to a rank one sector with a non-integer dimension Coulomb branch operator as in (b). The possible rank two gauge groups are: SU(2) × SU(2), SU(3), Sp (2), and G 2 . We can rule out Sp(2) and theory we are about to study). We will now compute the quantities (i)-(iii) described in Section 2 for the III 3×[2,2,1,1] 6,6 theory and show that they match the quantities given in (2.4) for the T 3,2, 3 2 , 3 2 SCFT. We will then motivate the existence of an SU(2) gauge theory cusp and demonstrate how these quantities are partitioned at such a point on the conformal manifold. Preliminary evidence that T 3,2, We first note that the Hitchin system description of III 3×[2,2,1,1] 6,6 is specified by three 6 × 6 matrices whose eigenvalue degeneracies are encoded in three copies of the Young tableaux [2, 2, 1, 1] (i.e., each matrix has two sets of two-fold degenerate eigenvalues, see Appendix A.2). 28 Next, from the three Young tableaux [2, 2, 1, 1], we use the rules described in [11] to write down the three-dimensional mirror where the subscripts in the U (2) To read off the symmetries of the theory, we can apply mirror symmetry again and find This theory has a U(3) flavor symmetry, and so we conclude that Alternatively, we can work directly in the mirror theory. Clearly, there is a Coulomb branch symmetry that shifts the three dual photons by independent constants. 
To see the symmetry enhancement to U(3) in the IR, we should study the monopole operators with dimension one [29]. The general formula for the dimensions of the monopole operators is where − → a = (a 1,1 , a A,1 , a A,2 , a B,1 , a B theory. From the discussion in [4], we then expect that there should be a degeneration limit where an SU (2) gauge group emerges. As we will see, the presence of fractional dimensional operators does not spoil this picture, although the emergent sectors that appear are quite different than in [4]. What can this cusp look like? We again expect a decomposition into sectors of class S (of type A k ). One possibility is an SU(2) gauge group coupled to a rank one theory with a dimension three Coulomb branch operator, a rank two theory with spectrum 3 2 , 3 2 , and some number of fundamentals. However, as we argued in the previous section, such a rank two sector is unlikely to exist in the A k theories of class S, and, since our parent theory is of this type, such an option should not be realized. Another possibility is an SU(2) gauge group coupled to a rank three theory with spectrum 3, 3 2 , 3 2 . However, just as in the case of the rank two theory with spectrum 3 2 , 3 2 , such a theory cannot be constructed from the recipes in [11,15]. As a result, the last possibility is an SU(2) gauge group coupled to a rank two theory with Coulomb branch spectrum, 3, 3 2 , and a copy of the I 3,3 theory. We denote this rank two theory, T 3, for the total theory is supplied by the I 3,3 sector. 29 We will now argue that the . In other words, we claim that the irregular singularity of the Hitchin system describing this theory has three 6 × 6 matrices with the first two (i.e., those multiplying the third and second order poles at z = ∞ in the Higgs field) having three doubly degenerate eigenvalues theory is quite subtle. From the Hitchin system, we can deduce the following UV description of the three dimensional mirror where the subscripts in the U(2) B,C representations denote charges under the corresponding U(1) subgroups. 29 One nice check of our discussion is the following. If our conjecture is correct, then the fundamental hypermultiplets at the SU (3) cusp are monopoles in the SU (2) gauge theory description (our argument is similar in spirit to the argument in [4]). With this understanding, let us consider k U(1) . On the SU (2) gauge theory side of the duality, it is natural to take k U(1) = k Note that all of the nodes in this description are "good" in the sense of [29]. In particular, the U(1) 1 node has N f − 2N c = 0 and so too do the U(2) B,C nodes. The U(2) A node is also good since it has N f − 2N c = 1. Therefore, is is natural to guess that this theory should have no monopole operators of dimension ∆ ≤ 1 2 and that the flavor symmetry should be SU(3) × SU(2). 30 We also find dimM 3dm C = 6 and therefore a − c = − 1 4 . 31 This result is certainly compatible with what we expect from (4.8) and what we obtain in the next section. However, there is a wrinkle (note that we do not expect the discussion that follows to affect dimM 3dm C or therefore a − c). Indeed, we can compute the dimensions of the monopole operators [29] where − → a = (a 1,1 , a A,1 , a B,1 , a B,2 , a C,1 , a C,2 ) ∈ Z 6 is a U(2) 3 magnetic charge vector (we have used the freedom of shifting the flux by a charge corresponding to the overall U(1) in order to set the magnetic flux from the U(1) 1 node to zero). 
It is easy to check that, up to Z 3 2 permutations, the dimension one monopole operators are M ± 1 = (0, 0, ±1, 0, 0, 0), M ± 2 = (0, 0, 0, 0, ±1, 0), M ± 3 = ±(0, 0, 1, 0, 1, 0), M ± 4 = ±(1, 1, 1, 1, 1, 1), M ± 5 = ±(2, 0, 2, 0, 2, 0), M ± 6 = ±(1, −1, 1, −1, 1, −1). However, there is also a dimension half monopole operator, M ± = ±(1, 0, 1, 0, 1, 0). The heuristic reason for this result is that the extra U(2) A we have added to connect the two linear quivers that produce the SU(3) and SU(2) symmetries gives large quantum corrections to the theory. Therefore, even though the quiver is "good" by the usual tests, it actually has an apparent dimension half free monopole operator! It is not immediately clear to us how to proceed (in particular, we are not sure how to deduce the flavor symmetry group from the three dimensional perspective). 32 30 If we regard the theory as an N = 1 theory, then the flavor symmetry would, intriguingly, be SU (3) × SU (2) × U (1). 31 Note that this value of a − c rules out another potential candidate for describing T 3, theory. 32 One naive way of analyzing the theory is to turn on a vev Φ A = Φ B = Φ C = diag(v 1 , 0) and move out onto the Coulomb branch. The remaining massless theory splits into two sectors: one is identical to the three-dimensional reduction of I 3,3 (with the (Q AB ) 1 1 , (Q BC ) 1 1 , and (Q CA ) 1 1 hypermultiplets as matter Analysis of the SW curve The SW curve for the III 3×[2,2,1,1] 6,6 theory is given by The theory has an exactly marginal coupling q and three mass deformation parameters m i . The b i are relevant couplings associated with two Coulomb branch operators of dimension 3 2 , whose vevs are identified with c i . There are also Coulomb branch operators u, v of integer dimensions. In order to make contact with the T 3,2, In terms ofũ = u/[2(q + 1 q )],ṽ = v/[2 √ 2(q + 1 q )], f = 4/(q + 1 q ) 2 ,x = xz/ √ 2 and y = x 3 + √ 2z 2x2 /(q + 1 q ) +ũx +ṽ, the curve is expressed as (4.14) content), and the other looks like the I 3,3 quiver but with one node having an extra flavor attached to it (with the (Q AB ) 2 2 , (Q BC ) 2 2 , (Q CA ) 2 2 , and (Q A1 ) 2 hypermultiplets as matter content). This second quiver again leads to an apparently free monopole operator which we can again attempt to decouple by moving out onto the Coulomb branch of this reduced theory. Proceeding in this way, we ultimately find two decoupled U (1)'s and two copies of the S 1 reduction of the I 3,3 theory (the (Q A1 ) 2 hypermultiplet becomes massive and is integrated out). While we find the correct a − c in this case, we also find an Sp(2) × SU (3) 2 flavor symmetry. This result is clearly incompatible with the four-dimensional discussion below, and so we predict that the three-dimensional behavior of the T 3, 3 2 theory is more complicated. We might try to rescue this interpretation by gauging an SO(3) ⊂ Sp(2) × SU (3) × SU (3) diagonal subgroup, where the first SU (3) is from the T 3, 3 2 theory and the second is from the I 3,3 sector. This would leave an SO(2) × SU (3) flavor symmetry (the commutant of SO(3) in SU (3) has dimension zero). However, as we will see below, we have a mass parameter in the I 3,3 sector and so this interpretation is not correct. The 1-form is written as up to exact terms, where P =x 3 +ũx +ṽ. These are the one-form and curve for the SU(3) gauge theory with N f = 6 [32]. The parameter f is identified with a modular function of the exactly marginal gauge coupling τ = θ π + 8πi g 2 . 
33 The emergence of (4.14) suggests that the III 3×[2,2,1,1] 6,6 curve contains a sector described by a conformal SU(3) vector multiplet. The curve (4.14) is known to be invariant under Γ(2) ⊂ SL(2, Z), which is generated by T 2 : τ → τ + 2 and S : τ → −1/τ [32]. In terms of q, these correspond to as long as we also send x → ix, z → −iz (which keeps the 1-form invariant). The two transformations q → 1/q and q → −q will be important later in this subsection. The SU(3) superconformal QCD described by (4.14) has a weak-coupling cusp at τ = i∞ and a strong coupling cusp τ = 1. In terms of q, these correspond to q = 0, ∞ and q = ±1, respectively. Below we study the behavior of the full III We first renormalize all the deformations of the curve so that the largest period created by each deformation is finite and non-vanishing in the limit q → 0. The renormalized curve is written as which turns out to split into three sectors as follows. 33 Once again, we take • In the region |z/x| ∼ q, we definez = q − 1 2 z andx = q 1 2 x so that |z/x| ∼ 1. In terms ofx andz, the curve in the limit q → 0 is written as 17) and the 1-form is given by λ =xdz up to exact terms. Let us shiftz →z − 1 3 (x + b 1 + m 3 /x). This curve can be identified with that of the (III 27 . This means that the sector near |z/x| ∼ q describes the Coulomb branch of the (III • In the region |z/x| ∼ 1/q, we definez = q 1 2 z andx = q − 1 2 x. The curve in the limit q → 0 is now written as • In the region |z/x| ∼ 1, the curve is On the other hand, the above discussion shows that the (III Cusps at q = ±1 We now turn to the points q = ±1 in the marginal coupling space. Since these two points are related by the symmetry transformation q → −q, we will, without loss of generality, focus on q = 1. Note that there is no symmetry transformation which maps q = ±1 to q = 0, ∞. Therefore, we expect to have a different weak coupling description in this case. To understand the above statement, we first renormalize the deformations so that the largest period created by each deformation is finite and non-vanishing in the limit q → 1. The correct renormalization turns out to be where ǫ = 1 − q. Therefore the renormalized curve is written as x + v , (4.21) x−z . In the limit q → 1, or equivalently ǫ → 0, the curve splits into three sectors, depending on |ζ|. • In the region |ζ| ∼ 1, the curve in the limit q → 1 is given by theory. • Finally, let us look at the region |ζ| ∼ ǫ 1 2 , which is between the above two regions. . It follows that finitex andz correspond to |ζ| ∼ ǫ 1 2 in the limit q → 1. The curve in terms of these variables reduces to 0 =x 2 (x 2z2 +û) , (4.24) in the limit q → 1. Apart from the trivial branchx 2 = 0, this is the weak coupling limit of the SU(2) curve. The period of the pinched cycle is proportional to √û , which is identified with the central charge of the SU(2) W-boson. Hence, in the limit q → 1, the III theory and an I 3,3 theory; see Figure 3. SCFT. We see that these results immediately imply that k The linear quiver In this subsection, we will show that the T 3,2, 3 2 , 3 2 theory can be embedded in a UV-complete linear quiver theory. To understand this claim, let us consider the theory in Figure 4. The SW curve can be written as follows [33]: The notation is identical to that of Section 3.3. 34 The SW differential is again λ = (v/t)dt. 
After the shift v → v − m 3 and a suitable redefinition of the parameters we find the curve By setting all the parameters to zero (apart from q 1 and q 2 ) in (4.26) the curve becomes singular. Our next task is to extract the SW curve describing the effective low-energy theory at this singular point. As in the previous example, we extract the curve starting from (4.26) and taking a scaling limit. We change variables as follows: Rewriting (4.26) in terms of the new variables and taking the limit Λ → ∞ we find the curve (4.28) In the above formula we have divided everything by a constant and rescaled z to set to one the coefficient of the terms z 2 v 2 and v 4 /z 2 . This transformation does not change the SW differential λ = (v/z)dz. We are then left with a single marginal parameter that we call q. We claim that the above curve describes the theory III 3×[2,2,1,1] 6,6 . Indeed, setting x = v/z we bring the SW differential to the canonical form λ = xdz. The resulting curve is precisely (4.11) with the identification u 2 = u and u 3 = v. The only difference is a factor of two in the definition of m 1 and m 2 . 34 As in Section 3.3, the above curve is schematic, and the parameters m i , µ i do not correspond to the physical mass parameters of the theory. Conclusions In this paper, we found minimal generalizations of Seiberg and Witten's S-duality in SU (2) gauge theory with four fundamental flavors and Argyres and Seiberg's S-duality in SU (3) gauge theory with six fundamental flavors to theories with non-integer dimensional Coulomb branch operators. Along the way, we found an S-duality action on the parameters of the Appendix A. Hitchin system perspective In this appendix we briefly review how one obtains the SW curves of various Argyres-Douglas type theories from the corresponding Hitchin system [11,24]. A class of Argyres-Douglas theories are obtained by compactifying the 6d (2,0) theory on a punctured sphere. The Coulomb branch of such a 4d theory (or more precisely its reduction to 3d) is described by At the punctures on P 1 , we impose BPS boundary conditions. Since we can trivialize the gauge bundle around the puncture, the boundary condition is given by specifying the singular behavior of Φ near the puncture. For a trivialized gauge field,∂ A Φ = 0 implies Φ is meromorphic. The singularity at a puncture is called "regular" or "irregular" if Φ has a simple or higher-order pole there, respectively. It was shown in [11] that the resulting 4d theory is an Argyres-Douglas type theory only if there is a single irregular singularity on P 1 with at most one additional regular singularity. Below, we review the SW curves of several Argyres-Douglas theories of this type. A.1. I n,n theory The I n,n theory is obtained from the A n−1 Hitchin system on P 1 with an irregular singularity. 35 Suppose that the singularity is at z = ∞. The boundary condition of the Higgs 35 Here we use the notation of [15]. The same theory is called (A n−1 , A n−1 ) in the language of [12]. field Φ(z) is given by where M i are traceless n-by-n matrices. By using gauge transformations, M i can be simultaneously diagonalized. For the I n,n theory, the matrices M i can be any diagonal traceless matrices. The lower-order terms of O(z −2 ) are not fixed by the boundary condition at z = ∞ but are subject to the constraint that Φ(z) is not singular at z = ∞. The SW curve of the I n,n theory is then given by the spectral curve det(xdz − Φ(z)) = 0. The second non-trivial example is the I 4,4 theory. 
The boundary condition (A.3) is now given by 4 × 4 matrices M i . Up to coordinate changes, the spectral curve is written as 0 = x 4 + qx 2 z 2 + z 4 + c 30 x 3 + c 03 z 3 + c 20 x 2 + c 11 xz + c 02 z 2 + c 10 x + c 01 z + c 00 . (A.5) Here the dimensions of the parameters are given in (4.12). In particular, this theory has a single exactly marginal coupling, q. theory, which is obtained from the A 5 Hitchin system on P 1 with an irregular singularity. Suppose that the singularity is at z = ∞. The boundary condition for the Higgs field is characterized by (A.3) with three six-by-six matrices M i . In the case of a type III theory, we specify the number of coincident eigenvalues of M i by Young tableaux [11]. Since our Young tableaux are now [2, 2, 1, 1], we demand that M i are of the form M 1 = diag(ã 1 ,ã 1 ,ã 2 ,ã 2 ,ã 3 ,ã 4 ) , up to gauge equivalence. We implicitly assume the tracelessness of these matrices. The resulting spectral curve is written as 0 = x 2 z 2 (z + x) 2 + x 2 z 2 (z + x)b + xz m 1 z(x + z) + m 2 x(z + x) + b 2 4 xz + xz (c + bm 1 2 )z + (c + In the above table, the subscripts in the representations signify the charges of the fields under the corresponding U(1) subgroups. Note that the existence of the M ± 3 dimension half monopole operator follows immediately from the fact that the U(2) C node is "ugly" in the classification of [29] (it has N f − 2N c = −1). To find the remainder of the theory in the IR, we can follow [29] and move along the Coulomb branch of the U(2) C node by taking Φ C = diag(v 1 , 0) (where the Φ C is the adjoint chiral multiplet of U(2) C ) and examine the remaining massless theory. 36 Turning on this vev in the N = 4 superpotential leaves (besides a decoupled U(1) parameterizing the moduli space of the free M ± 3 theory [29]) a massless theory with U(2) C → U(1) C and the following matter multiplets: (Q AB ) a , (Q BC ) 2 , (Q CA ) a 2 , (Q A1 ) a (along with the corresponding hypermultiplet partners; here a is an SU(2) A index and 2 is a U(2) C index).
2015-01-20T03:09:31.000Z
2014-11-21T00:00:00.000
{ "year": 2015, "sha1": "a0e54f371041f2745c96a4f797a52e0e2700d41b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP02(2015)185.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "a0e54f371041f2745c96a4f797a52e0e2700d41b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259688193
pes2o/s2orc
v3-fos-license
Does Precision Technologies Adoption Contribute to the Economic and Agri-Environmental Sustainability of Mediterranean Wheat Production? An Italian Case Study : The European Green Deal has set a concrete strategic plan to increase farm sustainability. At the same time, the current global challenges, due to climate change and fuels and commodity market crises, combined with the COVID-19 pandemic and the ongoing war in Ukraine, affect the need for quality food and necessitate the reduction of negative external effects of agricultural production, with fair remuneration for the farmers. In response, precision agriculture has great potential to contribute to sustainable development. Precision agriculture is a farming management system that provides a holistic approach to managing the spatial and temporal crop and soil variability within a field to improve the farm’s performance and sustainability. However, farmers are still hesitant to adopt it. On these premises, the study aims to evaluate the impacts of precision agriculture technologies on farm economic, agronomic, and environmental management by farmers adopting (or not) these technologies, using the case study method. In detail, the work focuses on the period 2014–2022 for two farms that cultivate durum wheat in central Italy. The results suggest that the implementation of precision technologies can guarantee economic and agri-environmental efficiency. The results could serve as a basis for developing a program to start training in farms as well as to suggest policy strategies. Introduction The transition towards a sustainable agricultural system is a priority to ensure the Sustainable Development Goals (in particular, SDGs 2.3 and 12.4) of the United Nations Agenda 2030, as well as the European Green Deal objectives.In particular, the European Commission has set a concrete strategic plan to reduce the use of chemicals and fertilizers, enhance biodiversity, and assist farmers in decision-making processes to increase farm sustainability.In addition, the current historical period and geopolitical framework lead to significant impacts on the agricultural sector.In particular, wheat production is currently affected by a significant stock depletion and price volatility.Starting with COVID-19 in 2020, the unexpected spread of the pandemic and the resulting lockdowns and closures around the globe led to an unavoidable critical situation related to the export restrictions and the changes to the purchasing behavior of wheat derivatives, such as flour [1].These circumstances have put Europe and countries such as Italy in severe deficit conditions in terms of stocks, which also derives from the increased price volatility.Price volatility can be partially traced to uncertainty over the flow of supplies, depending principally on current production and existing stocks.The U.S. Department of Agriculture estimates that global wheat ending stocks for the 2022/2023 marketing year will be around 267 million metric tons.More than half of these stocks will be held by China, while EU, USA, and other major exporters account only for 20%.China's wheat stocks increased by over 160% between 2012 and 2020.This was largely due to changes in China's agricultural policy, which increased producer support prices, resulting in the accumulation of large government stockpiles [2].By contrast, wheat stocks held by the rest of the world declined by 12% over same period. 
Moreover, the ongoing war in Ukraine has contributed to reduce the wheat production in the country, disrupting the markets worldwide.The Russia-Ukraine war has caused the highest increase, since 2008, in levels and volatility of prices in agricultural markets for wheat, creating an ongoing vulnerability for global food security [3][4][5][6].One difference between the two periods is the scale of the disruptions in staple food markets.While the period of initial pandemic lockdowns saw some isolated volatility, the Russia-Ukraine war is affecting all major food staples [7].The relative tightness of global stocks suggests that price volatility will continue to remain high in respect to the past 10 years.Going forward, rebuilding inventories of wheat and other key global crops would help to reduce both prices and price volatility.By the same token, tight stocks mean that an unforeseen production shortfall in a major wheat producing region would likely send prices sharply higher again (as in 2010/11 and 2012/13) and result in increased price volatility. In addition, fertilizer prices are a determinant factor more now than at the beginning of the pandemic, where the situation was already compromised.Even if the prices were at extremely high levels before the war began, they are still continuously rising; nonetheless, Russia, an important fertilizer producer, is considering an export ban.Furthermore, the energy crisis due to the high prices for natural gas, an essential feedstock to produce nitrogen-based fertilizers such as urea and ammonia, is contributing to boost fertilizer prices as well [8].Higher fertilizer prices could depress production, leading to less grain on the market in 2022 and putting further upward pressure on already-high food prices [9,10].In this context, it is important to analyze the cereal sector with reference to wheat production, which remains a mainstay of nutrition both in Italy and worldwide.This is because it is essential to understand how to cope with the current crises, considering the market dynamics that are being determined such as the rise in fertilizer prices and the volatility of wheat prices, factors that would make the cultivation of wheat (and cereals in general) unprofitable. As a consequence of the global instability, the implementation of sustainable resilient strategies in agriculture is crucial.This entails the implementation of innovative agricultural practices that increase the productivity and income of farmers and, at the same time, can help to maintain ecosystems.Therefore, one of the actions to implement is reducing the quantity of inputs, in particular fertilizers, while maintaining production to protect both the environment and the income of farmers [11,12]. A key factor for sustainable agriculture is the introduction of digital technologies, which can help farm management through better-informed and timely decisions.These new technologies are known by the term Precision Agriculture Technologies (PATs), a farming management concept based on observing, measuring, and responding to inter and intra-field variability in crops [13].According to an official report jointly published by ITU and FAO in 2020 [14], "digital agriculture has the potential to contribute to a more economically, environmentally, and socially sustainable agriculture, while meeting the agricultural goals of a country more effectively". 
From the beginning of the 1990s, different authors have discussed the agri-environmental and economic effects derived from the application of PATs [11,15-21]. In detail, most papers deal with environmental sustainability. The environmental benefits of precision agriculture (PA) derive mainly from the optimization of the management of crop inputs, such as seeds, fertilizers (especially the efficient use of nitrogen), pesticides, irrigation water, and diesel, which often results in a reduction in their consumption without a decrease in yield. Notably, some studies report that the quantity of inputs does not decrease, but their use is optimized to avoid waste and pollution [12,22]. In addition, from an environmental point of view, PATs make it possible to improve soil properties (sustainable nutrient management) and reduce greenhouse gas emissions [23-25]. Finally, the optimal management of weeds is underlined [26,27].

Research on precision agriculture applied to cereal farming started later, in 1997. The increasing interest of academia in this topic is evident from Figure 1, with an exponential increase in the number of documents (articles and reviews) available per year. The highest-producing countries regarding PA adoption in cereal farming are the United States, China, and Australia, while Italy ranks just fifth.

Considering the growing population and the need for safe food, establishing methods to increase the yield of staple crops, such as wheat, without compromising the sustainable development of future generations is a challenging task. The implementation of PATs, such as variable rate application systems, could improve productivity, providing support to both producers and consumers [28-33]. In the pool of available documents, only a few studies assess the economic sustainability of PA application in the cereal sector [34-37]. The economic benefits involve a general reduction of production costs, especially due to the correct management of crop inputs (reduction of pesticides and nitrogen), and an increase in farm productivity. The major economic benefit is recorded in the decrease in labor costs and in fuel savings. However, an increase in total costs due to the capital invested in technology is also highlighted.

However, adoption of PA tools is still far behind expectations, in part due to limitations in quantifying and demonstrating its economic and environmental benefits, insufficient detailed knowledge of technological functions, small farms managed by older farmers, and the lack of an incentive system [13,38-46].

On these premises, this paper evaluates the impacts of PATs on farm economic, agronomic, and environmental management by farmers adopting (or not adopting) these technologies, using the case study method proposed by Yin (2009). This study is part of the activities of the Operational Group SMART AGRICULTURE TEAM financed by the Rural Development Program (RDP) Marche 2014/2020, sub-measure 16.1 (Appendix A, Figure A1). The objective of the project is to evaluate how PATs could support the optimization of nitrogen fertilizer management in durum wheat production. Contextually, the Operational Group aimed to evaluate the economic and environmental sustainability of cereal farms adopting or not adopting PATs. The work focuses on the period 2014-2022 for two farms (A and B) that cultivate durum wheat in central Italy. Farm A has used PATs since 2018; farm B uses conventional agronomic management. Based on the objective of the Operational Group, this paper tries to answer the following research questions:
i. How does the durum wheat profitability evolve if a farm adopts or does not adopt precision agriculture technologies?
ii.
Could the application of precision agriculture technologies improve and make the nitrogen use more efficient within the context under investigation?

The economic trend of durum wheat production is explored using a profitability ratio analysis. In addition, to understand what would happen to farm B if it decided to adopt the PATs package of farm A, a simulation was performed for the year 2022.

From an agri-environmental perspective, fertilization management is one of the most relevant targets of PA. In particular, the nitrogen (N) derived from fertilizers, when inefficiently used in crop production systems, can move from agricultural fields and contaminate surface and groundwater resources, as well as contribute to greenhouse gas emissions (GHG) [47]. Since the interaction between the N rate, soil, weather, and crop response is a complex system, the management of this nutrient is the key aspect that distinguishes PA from conventional management [48,49]. Thus, the N environmental and agronomic efficiency is measured in this paper with the estimation of the nitrogen agronomic efficiency (NAE) index.

This paper is structured as follows: Section 2 describes materials and methods; Section 3 presents the results and discussion. Finally, Section 4 concludes.

Study Area and Data Set

This work focuses on the period 2014-2022 (9 years) for two farms (A and B) that cultivate durum wheat (Triticum turgidum subsp. durum Desf.) in the Marche Region (central Italy) in rotation with maize (Zea mays L.) (Figure 2). The climate of the study area is meso-Mediterranean based on the Walter and Leith climate classification (Figure 3), characterized by a mean annual precipitation of about 768 mm and a mean annual temperature of 17.2 °C, with monthly means ranging from 9 °C in February to 29 °C in August. There is a potential for frost from February until March and a period with a high probability of drought from June to August.

The physical and chemical compositions of the soil for the compared farms are reported in Table 1.

The two farms are agronomically managed differently; farm A acquired the first PA package in 2018 and started to adopt it in 2019. This period is considered the years of "technical change", in which farm A fully implemented the use of the PAT package considered in the present study. In line with the subdivision made by Finco et al. [21], the PA package acquired by farm A includes:
i. Guidance systems (driver assistance, machine guidance, controlled traffic farming)
ii. Recording technologies (soil mapping, soil moisture mapping, canopy mapping, yield mapping)
iii. Reacting technologies (variable-rate irrigation and weeding, and variable-rate application of seeds, fertilizers, and pesticides)

In detail, farm A invested EUR 531,000 in PATs (Appendix A, Table A1). The investments in agricultural machinery equipped with PA technologies were financed for 40% of the total amount by joining Measure 4.1 ("Support for investments in farms") of the PSR Marche 2014-2020. The use of this equipment is not limited to wheat cultivation; it is also used for the management of other crops, such as corn, on a surface that is four times larger than that of farm B. In 2018, based on the cash flows estimated at the time, the expected payback period (PBP) for the entire technology package was 5 years. Despite this, given the peculiar market trend during the period 2020-2022, the PBP dropped to 3 years, and currently the entire investment is paid off (a simple payback sketch is given below).

On the other hand, farm B used conventional agronomic management in all the years of this study. Figure 4 represents the experimental design of the case study.

As already mentioned, this study is based on an Italian EIP-AGRI Operational Group Project called the Smart Agriculture Team (SAT), whose main goals have been the following:
• To ensure correct management of nitrogenous inputs on durum wheat through precision agriculture technologies, in order to reduce the environmental impact of cereal cropping systems
• To evaluate the economic, environmental, and social sustainability of investments in these technologies.
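To make the payback reasoning above concrete, the following is a minimal sketch of how such a payback period can be estimated from the net capital outlay and yearly incremental cash flows. The EUR 531,000 purchase price and the 40% RDP grant are taken from the text; the yearly cash-flow figures are purely hypothetical placeholders, not farm A's actual accounts.

```python
# Minimal payback-period sketch for a precision agriculture (PA) investment.
# Purchase price and grant share are taken from the text; the yearly
# incremental cash flows are hypothetical placeholders, not farm A's data.

def payback_period(net_outlay: float, yearly_cash_flows: list[float]) -> float | None:
    """Return the (fractional) number of years needed to recover net_outlay,
    or None if the cash flows never cover it."""
    cumulative = 0.0
    for year, cf in enumerate(yearly_cash_flows, start=1):
        previous = cumulative
        cumulative += cf
        if cumulative >= net_outlay:
            # Interpolate within the year in which recovery happens.
            return year - 1 + (net_outlay - previous) / cf
    return None

purchase_price = 531_000          # EUR, PA package bought by farm A in 2018
grant_share = 0.40                # share financed by RDP Marche Measure 4.1
net_outlay = purchase_price * (1 - grant_share)

# Hypothetical incremental cash flows (EUR/year) generated by the PA package.
ordinary_years = [65_000] * 5       # pre-crisis expectation
high_price_years = [110_000] * 5    # 2020-2022-like market conditions

print(f"Net outlay: {net_outlay:,.0f} EUR")
print(f"PBP, ordinary scenario:   {payback_period(net_outlay, ordinary_years):.1f} years")
print(f"PBP, high-price scenario: {payback_period(net_outlay, high_price_years):.1f} years")
```

Under the pre-crisis cash-flow assumption the sketch recovers the net outlay in roughly five years, while the high-price scenario recovers it in about three years, consistent with the expected and realized payback periods reported above.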
Farm A was selected as a case study as it represents one of the few pioneering Italian farms that decided to adopt PATs.Likewise, farm B has been selected as a "control" case for not yet adopting PA technology after an in-depth analysis of all the Marche region farms associated with the largest trade association of Italian farmers.Thus, the three criteria on which farm B was examined were the following: • The presence of a strong and real willingness to adopt the PA technologies investigated in this study • The presence of a comparable size of the Utilized Agricultural Area (UAA) devoted to cereal farming and of a minimum total farm size of 100 ha to be defined as a large farm according to FADN statistical standards • Farm B is a more efficient farm than the average in terms of productivity and profitability even without the implementation of PA technologies.In this regard, as it can be seen from the data (Table 2) that farm B is capable of levels of profitability almost in line to the median operating profit per hectare (calculated net of European CAP supporting payments applied to the durum wheat production) obtained by farms larger than 100 ha and specialized in cereal farming in central Italy.Furthermore, farm A and farm B are both located in the top 25%-in terms of operating profit per hectare from durum wheat farming-cereal farms in central Italy.Farm A and farm B, while both producing durum wheat, are different from each other both structurally and from the point of view of the entrepreneurial and management logic that guides their strategic and operational choices.Farm A is a farm of about 400 ha, is cultivated using minimum tillage regime, and is almost entirely irrigated.Three quarters of the hectares are positioned in flat areas, and the rest are in hilly areas.Farm A's mission is explicitly oriented towards technological innovation.About half of the farm UAA is used for the cultivation of cereals, including corn, while about a quarter of the UAA is used in the production of industrial legumes.It is important to note that farm A is integrated up-stream along the supply chain with an important Italian seed industry.Farm B is a farm of about 110 non-irrigated hectares cultivated using minimum tillage regime, located in hilly areas, and almost entirely occupied by cereal and forage crops. For contextualizing the two case studies in the territorial framework, a comparison between them and our elaboration on the FADN sample of cereal farming in central Italy was carried out using basic profitability indicators, i.e., productivity, average value (price), gross profit, operating profit (Table 2).The historical series analyzed in Table 2 is divided into two periods according to the year of adoption of the PA by farm A in 2018. Based on Table 2: 1. Productivity: For both periods considered (2014-2018 and 2019-2022), the two case studies are both considerably more productive than the median value of productivity referred to in the sample of farms (greater than 40 ha) producing durum wheat in central Italy.Nevertheless, in the period 2019-2022, that is, the period after the acquisition of the PA technology by farm A, both farms A and B slightly lost productivity compared to their levels in the previous period. 2. 
Price of the durum wheat produced: in the period 2014-2018, farm A proves to possess a capacity to enhance production with a notable premium price compared to central Italy (+16%) and farm B (+20%).This difference in price is due to the fact that farm A markets its product as seed wheat, a niche market in respect to the mainstream production of semolina wheat.In the period 2019-2022, post-PA adoption by farm A, the world changed drastically due to the double crisis (pandemic and the war in Ukraine) which, as we know, has led to a shock on the commodity market.Therefore, the surge in profit margins per hectare experienced by both case studies is due to the short-term economic prospects. 3. Profitability (2014-2018): in the period 2014-2018, the operating income generated by every hectare of durum wheat produced by farm A was 69% higher than that of central Italy and 70% higher than that of farm B. This evidence indicates a much greater cost efficiency experienced by farm A in its PA pre-adoption period with respect both to the median context and to farm B. On the other hand, during 2019-2022, both case studies show an operating income which increased considerably because of the supply shock within the European market.In this regard, it is interesting to note that the difference in competitiveness between the two case studies observed in the previous period disappeared, as indicated by the operating income settling on the same level for both the farms. The economic results obtained by farm A to produce durum wheat in the pre-adoption period (2014-2018) in comparison to that of the reference context are not surprising; in fact, from a managerial point of view, farm A is a farm characterized for being explicitly oriented towards efficiency and for having a very high propensity to innovate, which is an atypical attribute in the agricultural context investigated.As confirmed by the Smart AgriFood Observatory in 2021, the Italian UAA managed with precision agriculture techniques is around 4%. Farm A relies on a managerial structure given by three managers-i.e., the managerial structure coincides with the farm ownership-plus three full-time workers (all three highly skilled agricultural technicians).One of the three managers is a young, specialized technician who has been responsible for the computerized and automated farm management since the acquisition of the PA technology in 2018. Despite being a larger and more profitable wheat producer compared to the median value of the sample of cereal farms in central Italy, farm B is characterized by a traditional management structure which does not employ full-time workers and where the management work and the work in the fields are both carried out directly by the entrepreneur and his family. Finally, focusing on nitrogen management, Table 3 lists all the practices applied by both farmers, acquired through the field notebooks. Economic Analysis The economic analysis aims to explore farm profitability in adopting or not adopting PATs through indicators by comparing two case studies (farm A and B) based on Yin's case study design [50].This approach was chosen because the focus of the study is a contemporary phenomenon characterized by a small number of pioneers adopting PA. 
To carry out this study, a profitability analysis was performed employing financial ratios [51,52]. This analysis allows comparing the two cereal farms, A and B, placed in the same locality and similar in the UAA devoted to the production of durum wheat; in this way, the discrepancies in the results, determined by a different management approach and a different propensity to adopt new technologies, can emerge [53]. We restate that farm A has invested in precision farming technology since 2018, while farm B has not (yet) invested in precision farming technology but operates under the conventional management system, and it is considered a possible "target farm" that could adopt PA. Thus, we designed our case study as follows:
• The profitability of durum wheat production performed by the PA-adopting case study (farm A) has been assessed by comparing how the profitability indicators evolve before and after the adoption period (2014-2018 vs. 2019-2022).
• In addition, the profitability of durum wheat production has been assessed by comparing the economic indicators of the PA-adopting farm (farm A) to those of the non-adopting farm (farm B).

Being limited to a specific crop, the analysis has been conducted using margin ratios (income statement analysis) as indicators of profitability, but not return ratios (balance sheet analysis), since this type of indicator would have required an analysis of the profitability of the farm business taken as a whole. Instead, this study focuses only on durum wheat profitability, meeting the objectives of the Operational Group SMART AGRICULTURE TEAM financed by the Rural Development Program (RDP) Marche 2014/2020, sub-measure 16.1.

It is also important to point out that this economic analysis was not constructed as an experimental field trial but as a comparative case study conducted within real farms operating on the real market. In fact, our goal is not to directly (experimentally) evaluate the effect of some PA device on crop profitability; rather, the objective is to analyze basic crop profitability measures and indices during the period of the PA adoption process. In this regard, while supporting the necessity of carrying out experimental trials to verify the economic efficacy of adopting specific technologies for specific crops, we underline that the evaluation of the economic effectiveness of technology adoption carried out in the "real farm" productive space can also generate further elements of analysis useful in understanding the determinants of the adoption process. Our work falls into this second category of studies on technology adoption effectiveness.

The measurements and ratios [52] utilized to perform the profitability analysis refer to durum wheat production and are listed below:
• Productivity (t/ha)
• Operating profit margin (operating profit/RV)

All the data useful for this analysis were obtained by means of in-depth interviews with the agribusiness entrepreneurs of the two farms.
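As an illustration of how the margin indicators listed above can be derived from per-hectare accounts, the following is a minimal sketch. It assumes, in line with the way the gap between gross and operating margin is discussed in the results, that operating profit is gross profit less the yearly depreciation share; all numeric inputs are hypothetical examples, not data from farm A or farm B.

```python
# Minimal sketch of the per-hectare margin indicators used in the analysis.
# Operating profit is taken here as gross profit minus depreciation (an
# assumption consistent with the results discussion); all numbers below
# are hypothetical examples, not actual farm data.

from dataclasses import dataclass

@dataclass
class WheatHectare:
    yield_t_ha: float        # productivity (t/ha)
    price_eur_t: float       # average selling price (EUR/t)
    variable_costs: float    # seeds, fertilizers, pesticides, fuel, ... (EUR/ha)
    depreciation: float      # yearly share of machinery/PA capital (EUR/ha)

    @property
    def revenue(self) -> float:          # RV (EUR/ha), net of CAP payments
        return self.yield_t_ha * self.price_eur_t

    @property
    def gross_profit(self) -> float:     # revenue minus variable costs
        return self.revenue - self.variable_costs

    @property
    def operating_profit(self) -> float: # gross profit minus depreciation
        return self.gross_profit - self.depreciation

    @property
    def operating_margin(self) -> float: # operating profit / RV
        return self.operating_profit / self.revenue

    @property
    def variable_costs_ratio(self) -> float:  # variable costs / RV
        return self.variable_costs / self.revenue

# Hypothetical example, not actual farm data.
example = WheatHectare(yield_t_ha=5.5, price_eur_t=320.0,
                       variable_costs=850.0, depreciation=120.0)
print(f"Gross profit:      {example.gross_profit:7.1f} EUR/ha")
print(f"Operating profit:  {example.operating_profit:7.1f} EUR/ha")
print(f"Operating margin:  {example.operating_margin:7.2%}")
print(f"Var. costs ratio:  {example.variable_costs_ratio:7.2f}")
```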
The Nitrogen Agronomic Efficiency Index (NAE)

To measure the environmental and agronomic efficiency, the nitrogen agronomic efficiency (NAE) index was calculated with the following formula (Equation (1)):

NAE = Yield harvested (kg/ha) / Nitrogen provided to the crop (kg/ha)    (1)

The NAE is the ratio between the total yield harvested (kg/ha) and the nitrogen provided to the crop (kg/ha). The higher the NAE value, the greater the nitrogen use efficiency for production purposes. At crop maturity, the yield data were collected with a combine harvester for the entire durum wheat production area. The yield data (t/ha) were calculated from measurements taken at the time of delivery to the consortium.

Economic Results

In this paragraph, the main economic results are presented. Table 4 shows the comparison between farms A and B from 2014 to 2022 in terms of productivity, cost efficiency of the production process, and profitability.

(1) Productivity: Land productivity is a very complex indicator that depends on many variables. In our case study, the data show that the most productive farm is the one that does not adopt PA: farm B. Moreover, a slight declining trend in productivity is noted for both farms, and perhaps this evidence could be related to the change in atmospheric and climatic conditions in the medium term. However, this is a hypothesis that should be verified using statistically representative samples of cultivated areas. Focusing on the post-adoption period, we note that farm A shows an increase in productivity in the 2019-2020 period followed by a decrease in productivity in the period 2021-2022. Again, the owners/managers of farm A attribute these trends essentially to environmental conditions and not directly to the use of PATs which, among other things, should not be a factor of productivity increase but of cost optimization for any given level of productivity.

(2) Cost efficiency: Regardless of the use of the PATs, looking at the trend of variable costs and the variable costs ratio, it emerges that farm A is structurally more efficient than farm B, while, in terms of PA cost effectiveness, until 2021 the variable costs ratio remains substantially constant. Therefore, no signs of PA adoption efficacy are observed. Things change in 2022. Indeed, the variable costs ratio between farm A and farm B falls from 0.83-0.87 (in trend) to 0.76. Although this is an observation of only one year, so not very meaningful if seen in isolation, it still allows us to make a hypothesis: with raw material prices at the levels of 2022, the cost optimization of the production process using PATs could become significant and relevant. Obviously, this hypothesis should be tested experimentally; nevertheless, our data indicate that the farm that adopts PAT management shows a resilience to the increase in production cost per hectare that is much greater than that of the case study that does not adopt PATs.

(3) Gross profitability: Interesting information emerges from observing the gross profit. First, in the pre-adoption period, farm A was shown to be capable of much higher profitability than the "control" case study (farm B). Since 2018, in conjunction with the investment in the PAT package, farm A apparently loses its profitability advantage with respect to farm B.
Indeed, in the period 2021-2022, the gross profit ratio between the two case studies is reversed compared to previous years-the wheat produced by farm B becomes more profitable than that produced by farm A-and this is due to three underlying forces acting simultaneously: wheat selling price, productivity, and contingency of exceptional environmental conditions. a. Selling price: Since 2018, the difference between the two case studies in terms of average revenue narrowed, until it disappeared in 2022.The exceptional increase in prices in the three-year period, 2020-2022, favored an upward squeezing of the price differentials, which was previously linked mainly to product quality.b. Productivity: Farm B remains a structurally more productive farm even in the post-adoption period of the PA package by farm A. The higher productivity of the durum wheat produced by farm B lies in the genetics of the seeds used.Farm A produces durum wheat for seed.The varieties used are generally less productive than semolina varieties, but they usually tend to have a higher market value even if, as we have seen in 2021-2022, the price of the two case studies flattens out on the same level due to the market shock.c. Environmental conditions: Although the use of PATs allows a greater timeliness of action in crop management, even without the use the technologies, farm B was able to manage the 2021 sowing period more effectively than farm A. The 2021 sowing was very difficult in the survey area due to exceptionally prolonged rain events.Farm A was unable to sow before December 2021 (two months of delay), and this strongly influenced the low productivity of the 2022 harvest, while farm B found useful windows for sowing in the right period, i.e., October 2021. (4) Operating profit: the fundamental information contained in the comparison between the two case studies, in terms of operating profit, is the incidence of the depreciation share of the PA capital invested by farm A in 2018.This factor, combined with the alignment of the prices of wheat sold starting from 2020 and the higher productivity of farm B, determines an inversion of the profitability of the two case studies in 2021-2022, when farm B becomes more profitable than farm A. The weight of the share of depreciation of the PA capital on the profitability per hectare of farm A also emerges from the joint comparison of the gross margin and the operating margin (Figure 5).In fact, the narrowing of the distance between the two indicators that can be seen when passing from the gross margin to the net margin is essentially due to the depreciation rate of the PA capital discounted by farm A. Agronomy 2023, 13, x FOR PEER REVIEW 12 of 20 rain events.Farm A was unable to sow before December 2021 (two months of delay), and this strongly influenced the low productivity of the 2022 harvest, while farm B found useful windows for sowing in the right period, i.e., October 2021.(4) Operating profit: the fundamental information contained in the comparison between the two case studies, in terms of operating profit, is the incidence of the depreciation share of the PA capital invested by farm A in 2018.This factor, combined with the alignment of the prices of wheat sold starting from 2020 and the higher productivity of farm B, determines an inversion of the profitability of the two case studies in 2021-2022, when farm B becomes more profitable than farm A. 
In 2020-2022, the operating profit of farm A improved to levels far above pre-adoption conditions, and this is especially due to the market price trend (Figure 6).

In addition, farms adopting PATs create advantages over traditional farming in terms of better management of resource efficiency. This aspect is particularly relevant for the use of N fertilizer. In fact, after the COVID pandemic and as a consequence of the Russia-Ukraine war, this input increased its price by 176% from January 2020 to December 2022 (Figure 7). In this scenario, PATs allowed farm A to optimize N distribution according to the specific necessity of the crop, as shown in the following paragraph (NAE index). In this way farm A achieves both a better quality of production and a minimization of the negative impacts on the environment.
Finally, to understand what will happen to farm B if it decides to adopt the PATs package of farm A, a simulation was performed for the period 2020-2022 (Table 5).A depreciation cost of the same PA capital acquired by farm A is considered with a variating depreciation rate according to the durum wheat farm UAA.It emerges that, if farm B had acquired the same PA package as farm A, the operating margin of farm B improves thanks to the new market conditions in the period 2020-2022 despite the cost of PA capital.This evidence suggests that PA adoption by farm B could be feasible in economic terms thanks to a sufficiently profitable, productive, and extensive farm structure in which implementing the new technologies in this new market conditions (which, however, are constantly changing). Nevertheless, despite the favorable economic situation, farm B is currently not prone to technological change.The motivation could not be purely economic, but it could be linked to the characteristics of the owner.As the literature suggests [54][55][56], older farmers show a lower propensity to adopt as compared to their younger counterparts.Old farmers' may be loath to changes and they may not see longer-term benefits perhaps because they lack training and their bond to conventional agricultural management [57].Moreover, access to credit is certainly another possible constraint to adoption. Agronomic Results Farm A supplied on average less nitrogen (−63%) than farm B for each year under analysis (Table 6).While evaluating the average yield, during the five growing seasons, it shows that farm B achieves 10 percent more than the farm A. The NAE index, which is an index designed to assess nitrogen fertilizer use efficiency, shows that farm A obtained a higher value (+0.47) than farm B (Table 6). Mineral nitrogen contributes to better growth by giving the crop the nutrient when it needs it most, which results in higher production [59,64] and quality grain levels [65].Farm B obtained a higher production level than farm A due to the higher nitrogen provided to the crop. Ref. [66], with 20 years of data on durum wheat production, shows that nitrogen is the key driver of the production.The authors have shown that increasing the nitrogen supplied to durum wheat allows a significant increase in yield. It is also true that as the dose increases, the yield of durum wheat does not increase proportionally.Ref. [67] showed that nitrogen doses above 150 kg N/ha do not increase yield but, on the contrary, result in a higher protein percentage. When the nitrogen is not absorbed by the crop, it can only have two fates, leaching [68] and denitrification [69], which have negative environmental impacts, without considering the economic damage suffered by the farmer and that such production inputs are less available and increasingly expensive. Given its consequences at the agronomic, economic, and environmental levels, the management of nitrogen fertilization has always been an important topic of scientific research [28].Today, to optimize nitrogen fertilization at the farm level, precision farming has a strong impact on both the environment and the economy [63,70,71]. 
Precision agriculture is agronomic management based on the spatial and temporal variability of agronomic components, such as the soil's chemical and physical variability [49] and crop needs.Analyzing spatial and temporal variability, prescription maps [62] [72] can be generated that allows the nitrogen dose to be adjusted according to crop needs and therefore improve the nitrogen use efficiency (NUE). Several authors have reported that precision farming allows an increase in NUE.Ref. [73], in China, showed the yield and NUE results of precision agronomic management.The authors report an increase in yield and NUE compared to conventional agriculture of 10% and 51-97%, respectively. In Switzerland with winter wheat (Triticum aestivum) [74], it was reported that precision nitrogen management improved the NUE by an average of 10%.Moreover, in Umbria, Italy, it was reported that the variable rate technology improved the NUE by 15% compared to the flat rate [75]. In accordance with all the previous works, an increase in the NUE was also achieved in our case study.Farm A, which uses the variable rate technology, obtained a higher NAE of 15% than farm B, which distributes nitrogen evenly.Farm A is more environmentally and profit-friendly than farm B. Conclusions This study is based on a double case study in central Italy and explores durum wheat profitability and the optimization of the nitrogen fertilization as a function of the management of the production process through PA technologies for the period 2014-2022. Farm A acquired and implemented the PA package in 2018-2019, while farm B has not (yet) invested in PA technology but works under the conventional management system.Therefore, it can be considered a possible "target" farm that could adopt PA since it is a larger farm compared to the local farm average size.Moreover, the owner of farm B shows a high propensity to adopt these technologies and participated in this study precisely to have more points of reference for deciding on possible PA investments, especially in light of the growing cost of production inputs. Since the adoption of PA is still at a pioneering state in central Italy, our case study can represent a useful benchmark for both agricultural entrepreneurs and policymakers with respect to the economic effects of PA technology adoption applied to durum wheat production.In detail, from the economic analysis, it emerges that, in terms of gross profit, there are substantial differences between the two case studies.Farm A is characterized by a gross profit that is, on average, higher than farm B in the pre-crisis period 2014-2020.Farm A' s economic indicators have been affected by the PAT depreciation schedule coinciding with the technological change.Despite this, the economic efficiency of farm A improved to levels above pre-adoption conditions, thanks to the new market conditions in the period 2020-2022.In addition, farms adopting PATs optimize the use of inputs such as nitrogen fertilization according to crop needs; at the same time, it favors the farm management's efficiency in terms of human resources. In the 2014-2021 period, our study did not show any clear savings in terms of wheat production costs that could be attributed to the use of the PA technology package by farm A. 
However, things changed in 2022.In fact, with the surge in the price of inputs, the index of variable costs of farm A increased by 29%, while that of farm B increased by 46% with respect to 2021.The hypothesis is that this 17% difference, corresponding to about EUR 60 per hectare, could be due to the use of PATs by farm A. Despite this possible savings in terms of variable costs, it is necessary to consider that: • The depreciation share of the financial capital invested by farm A in the PA package was EUR 89.18 per hectare in 2022. • The agricultural area on which this share of depreciation is calculated is four times higher than the agricultural area available to farm B. As mentioned by Schimmelpfennig [76], large farms may present economies of scale when adopting PATs because they have more hectares over which to spread investment costs.Moreover, large farms are also more likely to have the type of variability that makes PATs [77]. Thus, because of the depreciation share, farm B, while being able to save around EUR 60 per hectare in terms of variable costs under the exceptional market conditions of 2022, does not show a broad economic incentive to invest financial capital in PA, probably due to a lack of economies of scale, while farm A does.These findings confirm our previous analysis relating to the dimensional thresholds necessary to create an economic incentive for investment in PA by a specialized cereal farm in Italy which is at least 200 ha in a hilly area [78].Nevertheless, the incentive in using PA technology could be present even for smaller farms in terms of payment of a rent for a PA-type management assistance at service, rather than in the purchase of technological capital. Regarding the willingness to adopt PA technologies for the cultivation of wheat, Hanson et al. [79] verified that, in North Dakota, wheat may have negative effects on PAT adoption due its lower cost of production with respect to other crops such as corn.This assertion is confirmed by our case study, given that corn is a fundamental crop within the farm A production structure while it is absent within the production structure of farm B. This study is not without limitations.The first limitation of this study is the fact of having compared the pioneering case study A with a single control farm (B) rather than with a pool of farms.There were two constraints which prevented the initial intention of comparing case study A with a sample of farms: • The research project from which this article derives involved an agronomic experimentation in the case study farms, and it would have been beyond the possibilities of the project to develop this experimentation in more than two farms (farm A and B). • Therefore, the first issue was to identify a "control" farm (case study B) available to host the agronomic experimentation for the participation in the comparative study.This farm should have been available to provide all its economic and accounting data. In this regard, it should also be clarified that farms in Italy in many cases do not keep detailed analytical accounting relating to long historical series in their archives.For this reason, even working with just one farm, it was not easy to reconstruct the analytical accounting data set necessary for carrying out this study [80].On the other hand, there were no difficulties with farm A, since it keeps track of its own detailed analytical accounting using advanced management software (as Geofolia, Isagri). 
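A small sketch of the per-hectare arithmetic behind this economies-of-scale argument. The EUR 89.18/ha depreciation share, the roughly EUR 60/ha variable-cost saving, and the fourfold area difference come from the text above; the assumption that the same PA capital and depreciation schedule would simply be spread over a quarter of the area is mine and is only illustrative.

```python
# Per-hectare incidence of the same PA investment on the two farm sizes (illustrative).
depreciation_per_ha_farm_a = 89.18   # EUR/ha charged by farm A in 2022 (from the text)
area_ratio_a_over_b = 4              # farm A's durum wheat area is about four times farm B's
estimated_saving_per_ha = 60.0       # EUR/ha of variable costs possibly saved with PATs in 2022

# The same capital spread over a quarter of the area weighs four times more per hectare.
depreciation_per_ha_farm_b = depreciation_per_ha_farm_a * area_ratio_a_over_b
net_incentive_farm_b = estimated_saving_per_ha - depreciation_per_ha_farm_b

print(f"Depreciation share if charged to farm B: {depreciation_per_ha_farm_b:.2f} EUR/ha")
print(f"Saving minus depreciation for farm B:    {net_incentive_farm_b:.2f} EUR/ha")  # negative -> weak incentive
```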
The second limitation consists in having only one farm (farm A) implementing a broad package of PATs and managed according to a logic that we can define as PA-oriented. However, this limitation could be explained by the fact that there is a lack of diffusion of these technologies, so it becomes rather impossible to work with broader samples, and it becomes necessary to develop research based on a few case studies. Therefore, according to this second limitation, it would be appropriate to create an infrastructure that allows researchers to acquire reliable economic, financial, and agronomic data on which to perform analyses on the effects of PA technologies. In addition, being a pioneering technology in this area, few farms have purchased and adopted precision farming techniques because they would like to understand whether such technology provides an economic and environmental benefit. This aspect is closely linked to the objective of the present work. In conclusion, policymakers are advised to encourage the adoption of these technologies given that the current market conditions generate incentives to adopt, specifically, very high costs of input but very high prices of output [81]. Finally, both from an economic and an agronomic point of view, it is important to consider these aspects in order to appreciate all the advantages of this type of innovation that hinges on the automation of the production process. The farm deciding to adopt PATs must already

Figure 1. Number of available papers (articles and reviews) and the top 10 countries producing papers about PA applications in cereal/wheat production (note: the papers considered for 2023 are published until March).
Figure 2. Marche region and experimental locations: A and B represent the position of farms A and B, respectively.
Figure 3. Walter and Lieth climate diagram of the study area (2000-2021 long-term series).
Figure 5. Variation of gross and operating margin for farms A and B in the considered period (2014-2022).
Table 1. Soil physical and chemical compositions for the experimental farms.
Table 2. Performance indicator comparison between FADN database and the two selected case studies. Index base value: central Italy.
Table 5. Simulation of PAT adoption for farm B for the period 2020-2022.
Table 6. Total nitrogen provided, crop yield, and NAE per farm each year.
2023-07-12T05:38:24.283Z
2023-07-08T00:00:00.000
{ "year": 2023, "sha1": "1c4570075e87cadcce89c082ff16536258318d8c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4395/13/7/1818/pdf?version=1688807098", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "3c2d59168d4c2826a4e064ac9e4d0d52c6ec4c68", "s2fieldsofstudy": [ "Economics", "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
258562435
pes2o/s2orc
v3-fos-license
Enhanced Recovery After Caesarean Delivery: A Narrative Review Intan , INTRODUCTION Law Number 44 of 2009 declares a hospital is a health service institution providing comprehensive individual health services in inpatient, outpatient, and emergency.Various disciplines complement patient care in hospitals with each other (Budhi Setianto, et al., 2021).In addition, clinical considerations implementation is regulated in the regulation of the Minister of Health of the Republic of Indonesia, Number 5 of 2016.It states that clinical advisory is crucial in implementing National Health Insurance.It can ensure quality control and cost control so that the health services provided are effective and efficient according to patients' needs.The clinical advisory also provides certainty in resolving clinical problems in health services during the implementation of National Health Insurance.Furthermore, Clinical Pathway (CP) is an evidence-based integrated performance planning concept with measurable outcomes for healthcare performance standards, standards of care, and home care.It summarizes every step given to the patient based on patient service standards.In addition, it includes evaluation, diagnosis, information support, rehabilitation, and clinical evaluation.Variances in clinical pathways have been identified to contribute to hospital duration of stay, medicinal drug utilization, hospital outcomes, and costs.ERACS is an intuitive first step to lower variances to improve patient care (Mullman, 2020). Enhanced recovery after surgery (ERAS) is a philosophy of perioperative care that has been used in other fields since the 1990s but has only recently been applied to obstetric care in the form of Enhanced Recovery After Caesarean Delivery (ERACS).This review highlights perioperative care in ERACS, ERACS guidelines, and the benefits of ERACS.ERACS is a multimodal-based perioperative management protocol to recover the patient's condition immediately.It maintains preoperative organ function and reduces stress response during surgery.The primary keys in this protocol include preoperative counseling, optimization of nutrition, use of standard anesthetic and multimodal analgesia drugs, and early mobilization (Kurniawaty & Anindita, 2018).The ERACS protocol can increase patient satisfaction, reduce patient length of stay, and reduce costs.The protocol covers perioperative care, from preadmission, preoperative, and intraoperative.In addition, it includes postoperative, involving a multidisciplinary team of anesthesiologists, surgeons, nurses, and nutritionists.Recent studies revealed that ERACS improved patient outcomes, reduced postoperative complications, accelerated postoperative recovery, and supported faster patient discharge.Further, it could lower hospital costs (Kurniawaty & Anindita, 2018). The Perioperative Care in ERACS Preoperative, intraoperative, and postoperative care are critical in implementing ERACS (Tika et al., 2022). Preoperative care Pre-admission information, education, and counseling will be provided in preoperative care.Patients should get sufficient information on the surgical and anesthetic procedures the patient will undergo. 
Ideally, the patient and family meet with the surgeon, anesthesiologist, and nurse for discussion.It can reduce fear and patient anxiety and accelerate patient recovery and discharge.In addition, psychological counseling aims to reduce stress to accelerate wound healing and recovery after surgery.Counseling provides information, leaflets, or multimedia information provided to patients.It can improve patient involvement in perioperative nutrition, mobilization, pain control, and physiotherapy.Besides, it reduces complications after surgery. Education and counseling are generally necessary for the success of the ERACS.The education and counseling provided contain information about the procedure and what to expect when the patient is in the operating room.In addition, there are surgical plans, pain management plans, goals for nutrition, and early mobilization.Other information provided to the patient is the nutritional information for pregnant women, nursing mothers, length of stay, and criteria for patient discharge (Habib & Ituk U, 2018) Both tests are carried out after the patient has obtained other test results that the anesthetist uses as a standard for surgical feasibility. Intraoperative care Before undergoing the surgical procedure, the patient must fast to avoid postoperative vomiting.The recommended fasting duration before anesthesia is six to eight hours for solid foods and two hours for high-calorie fluids.Taking high-calorie drinks two hours before surgery can reduce thirst, hunger, and anxiety before surgery.There will be ranitidine or omeprazole capsules provisioned two hours before the procedure.In addition, a single dose of broad-spectrum prophylactic antibiotics is provided 30-60 minutes before the ERACS procedure (Kurniawaty & Anindita, 2018).Furthermore, scheduled acetaminophen and non-steroidal anti-inflammatory drugs (NSAIDs) are provided.It includes restricting the dose of neuraxial opioids (morphine), preventing hypothermia and nausea, and assisting mothertoddler bonding (Kurniawaty & Anindita, 2018). Multimodal analgesia has become a key component for most surgeries or anesthetics.ERACS protocols involve medications and techniques beyond routine surgical anesthesia.Analgesic drugs may be administered immediately preoperative, intraoperative, and continued postoperatively.Non-opioid analgesics minimize opioid consumption in ERACS (Patel & Zakowski, 2021). Postoperative care Early mobilization begins by sitting on the edge of the patient's bed.The patient can stroll from the patient's bed to the restroom because the catheter was removed no later than 6 hours after the procedure to avoid urinary tract infections complication in postoperative patients.After removing the catheter, the patient can breastfeed the baby in a comfortable sitting position so there is correct baby attachment when breastfeeding.The patient can be discharged one day after the ERACS procedures (the second day of hospitalization).The criteria for discharge are patients without pain or tolerated pain without additional anti-pain medications such as anti-pain patches or infusions.Based on respondent interviews, there was an increase in patient satisfaction after using the ERACS method among patients who have experienced regular cesarean sections.In addition, patients undergoing SC surgery for the first time thought that SC surgery was not as painful as imagined.So they wanted to give birth with the same method for the next delivery (Tamang, 2021). 
ERACS Guidelines

Medical societies, the American College of Obstetricians and Gynecologists (ACOG), and the Society for Maternal-Fetal Medicine (SMFM) prepare ERACS guidelines based on clinical evidence. Then, they submit those guidelines for practitioners to review, consider, and adopt (Liu, Du, and Yao, 2020). The ERACS guidelines enhance healing after surgical procedures. In addition, the guidelines comprise recommendations to improve all elements of patient care. The ERACS guidelines are an evidence-based practice, and obstacles to their implementation should be removed. Thus, training health workers regarding ERACS guidelines and clinical audits is vital (Bowden, 2019).

Table 1. ERACS Guidelines (Liu, Du, and Yao, 2020)
Preoperative care protocol:
- Antenatal care
- Inpatient care
- Education and counseling (anesthesia procedures, pain management, nutrition, early mobilization, criteria for patient discharge)
- Intake of solid food six to eight hours before surgery
- Intake of high-calorie drinks two hours before surgery
- Ranitidine or omeprazole provision two hours before the procedure
- A single dose of broad-spectrum prophylactic antibiotics 30-60 minutes before the procedure

The Benefits of ERACS

There are several reasons why the clinical results of performing ERACS are so impressive. First, preoperative education and detailed psychological counseling about the ERACS protocol can help reduce psychological stress and improve patient adherence (Fajriani, 2016). Second, the ERACS protocol reduces hunger, increases carbohydrate intake, relieves stress, and reduces insulin resistance and food loss after surgery (Kurniawaty & Anindita, 2018). Third, the ERACS protocol recommends faster removal of urinary catheters and earlier mobilization to reduce the risk of postoperative urinary tract infections and venous thromboembolism. Fourth, standard nursing practice, broad-spectrum prophylactic antibiotics, and early mobilization with the ERACS protocol decrease postoperative infection risks such as postoperative wound infections, lung infections, and urinary tract infections (Tamang, 2021). Fifth, multimodal analgesia and intraoperative care can increase patient comfort during surgery (Liu, Du, and Yao, 2020). Last, early postoperative oral nutrition is vital to speed recovery by maintaining body homeostasis so patients can perform daily activities.

According to the latest research, ERACS showed a decreased length of stay in patients. The underlying reason is a significant pain reduction with multimodal analgesia, so that post-Sectio Caesarea patients can mobilize at two hours and continue at six hours after surgery. Length of stay (LOS) is one indicator used to assess hospital quality. Length of stay describes the length of time a patient is hospitalized, from when the patient is recorded at admission until the hospital issues a discharge plan. These data are essential in the medical record to consider patient costs. The hospital expenditure budget is the most significant contributor to state budget expenditure, so the number of patient days or LOS needs to be considered to estimate the management of hospital expenses and financing.
CONCLUSION The ERACS method as a perioperative program for cesarean section patients has many benefits, including shortening the duration of hospitalization, reducing anxiety and stress, reducing the risk of postoperative infection, and accelerating the body's recovery.In addition, there is faster functional recovery, minimal complications, and a shorter length of stay.Furthermore, it can improve the quality of patient care and reduce opioid exposure and dependence.ERACS aims to provide a comfortable patient experience by accelerating the process of patient care and recovery by prioritizing patient safety. However, the obstacle is consistency in carrying out the ERACS protocol in each related service unit, such as polyclinic, operating rooms, and treatment rooms, to implement each protocol comprehensively and optimally. Intraoperative care Protocol -Prevent hypotension due to anesthetic drugs.-Spinal anesthesia -Multimodal non-opioid analgesia -Optimal uterotonic with a low dose -Improved mother-baby bonding -Phenylephrine is the vasopressor of choice to prevent maternal hypotension.-A low dose of 0.5% bupivacaine, Fentanyl, and morphine -Paracetamol IV dan NSAID -Low dose oxytocin infusion 15-18 IU/hour -Delayed Cord Clamping and early initiation of breastfeeding Postoperative care Protocol -Early oral intake -Early mobilization -Drinking water for 0-30 minutes post-op.-Food intake 4 hours post-op -Mobilization Level 1: sitting back in bed for 15 to 30 minutes.-Mobilization Level 2: sit on the side of the bed with legs dangling for 5 to 15 minutes.-Mobilization Level 3: Standing -Mobilization Level 4: Walking around the patient ward -Early urinary catheter removal no later than 6 hours after the procedure to minimize the risk of urinary tract infection. It is essential to provide educational materials that can be accessed via the web or taken home to help patients become familiar with the ERACS concept.During the Covid-19 pandemic, patients with an elective SC surgery plan will undergo PCR (polymerase chain reaction) tests.In addition, patients with emergency surgery will undergo a COVID-19 antigen test.
2023-05-09T15:01:48.117Z
2023-02-28T00:00:00.000
{ "year": 2023, "sha1": "856cb0cd44d758978da6470f4484fc50c171eacf", "oa_license": "CCBYSA", "oa_url": "https://journal2.unusa.ac.id/index.php/JHS/article/download/3098/2057", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a1e48977bedc63195744eb1108d6e10622b715d8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
4382575
pes2o/s2orc
v3-fos-license
Neural Nets via Forward State Transformation and Backward Loss Transformation This article studies (multilayer perceptron) neural networks with an emphasis on the transformations involved --- both forward and backward --- in order to develop a semantical/logical perspective that is in line with standard program semantics. The common two-pass neural network training algorithms make this viewpoint particularly fitting. In the forward direction, neural networks act as state transformers. In the reverse direction, however, neural networks change losses of outputs to losses of inputs, thereby acting like a (real-valued) predicate transformer. In this way, backpropagation is functorial by construction, as shown earlier in recent other work. We illustrate this perspective by training a simple instance of a neural network. Introduction Though interest in artificial intelligence and machine learning have always been high, the public's exposure to successful applications has markedly increased in recent years. From consumer-oriented applications like recommendation engines, speech face recognition, and text prediction to prominent examples of superhuman performance (DeepMind's AlphaGo, IBM's Watson), the impressive results of machine learning continue to grow. Though the understandable excitement around the expanding catalog of successful applications lends a kind of mystique, neural networks and the algorithms which train them are, at their core, a special kind of computer program. One perspective on programs which is relevant in this domain are so-called state-andeffect triangles, which emphasize the dual nature of programs as both state and predicate transformers. This framework originated in quantum computing, but has a wide variety of applications including deterministic and probabilistic computations [6]. The common two-pass training scheme in neural networks makes their dual role particularly evident. Operating in the "forward direction" neural networks are like a function: given an input signal they behave like (a mathematical model of) a brain to produce an output signal. This is a form of state transformation. In the "backwards direction", however, the derivative of a loss function with respect to the output of the network is backpropagated [7] to the derivative of the loss function with respect to the inputs to the network. This is a kind of predicate transformation, taking a real-valued predicate about the loss at the output and producing a real-valued predicate about the source of loss at the input. The main novel perspective offered by this paper uses such state-and-effect 'triangles' for neural networks. We expect that such more formal approaches to neural networks can be of use in trends towards explainable AI, where the goal is to extend automated decisions/classifications with human understandable explanations. In recent years, it has become apparent that the architecture of a neural network is very important for its accuracy and trainability in particular problem domains [3]. This has resulted in a profligation of specialized architectures, each adapted to its application. Our goal here is not to express the wide variety of special neural networks in a single framework, but rather to describe neural networks generally as an instance of this duality between state and predicate transformers. Therefore, we shall work with a simple, suitably generic neural network type called the multilayer perceptron (MLP). 
We see this paper as one of the recent steps towards the application of modern semantical and logical techniques to neural networks, following for instance [1,2].

Outline. In this paper, we begin by describing MLPs, the layers they are composed of, and their forward semantics as a state transformation (Section 2). In Section 3, we give the corresponding backwards transformation on loss functions and use that to formulate backpropagation in Section 4. Finally, in Section 5, we discuss the compositional nature of backpropagation by casting it as a functor, and compare our work in particular to [1].

Forward state transformation

Much like ordinary programs, neural networks are often subdivided into functional units which can then be composed both in sequence and in parallel. These subnetworks are usually called layers, and the sequential composition of several layers is by definition a "deep" network. There are a number of common layer types, and a neural network can often be described by naming the layer types and the way these layers are composed. Feedforward networks are an important class of neural networks where the composition structure of layers forms a directed acyclic graph: the layers can be put in an order so that no layer is used as the input to an earlier layer. A multilayer perceptron is a particular kind of feedforward network where all layers have the same general architecture, called a fully-connected layer, and are composed strictly in sequence. As mentioned in the introduction, the MLP is perhaps the prototypical neural network architecture, so we treat this network type as a representative example. In the sequel, we will use the phrase "neural network" to denote this particular network architecture.

More concretely, a layer consists of two lists of nodes with directed edges between them. For instance, a neural network with two layers may be depicted as a graph of nodes and directed edges; we will represent such a network via special arrows 3 ⇒ 4 ⇒ 2, where the numbers 3, 4, and 2 correspond to the number of nodes at each stage. These arrows involve weights, biases, masks, and activations, see Definition 2.1 below. The (forward) semantics of these arrows is given by functions R^3 → R^4 → R^2. They will be described in greater detail shortly, in Definition 2.3.

We first concentrate on individual layers. In the definition below we shall write M(n) = R^n and P(n) = {k ∈ N | k ⊆ n}. In this description of the powerset P we identify a natural number n ∈ N with the n-element subset of numbers {0, 1, . . . , n − 1} below n. We shall have more to say about M and P in Remark 2.2 below.

Definition 2.1 A single layer n ⇒ k between natural numbers n, k ∈ N is given by three functions:

    T : n + 1 → M(k)    the transition function,
    M : n → P(k)        the mask function,
    α : R → R           the activation function.

The intuition is as follows, for i ∈ n and j ∈ k:
- j ∈ M(i) means there is a mutable connection from node i to node j, with weight T(i)(j);
- j ∉ M(i) and T(i)(j) = 0 means there is no connection from node i to node j;
- j ∉ M(i) and T(i)(j) ≠ 0 means there is a non-mutable connection from node i to node j, with weight T(i)(j).

The activation function α : R → R is required to be differentiable. Mutability is used only to determine which weights should be updated after back propagation. In particular, M is not used in forward propagation, and we often omit M in situations where it plays no role, including forward propagation.

Remark 2.2 The operations M and P are called multiset and powerset. They both form a monad on the category Set of sets and functions.
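A minimal sketch of how a layer in the sense of Definition 2.1 could be represented in code. The class and function names are my own; the paper gives no implementation. For simplicity the mask below has the same shape as the weight matrix (including the bias column), whereas in the paper the mask only covers the non-bias weights.

```python
import numpy as np

class Layer:
    """A layer n => k: a k x (n+1) weight matrix T whose last column holds the biases,
    a mask of the same shape marking mutable entries, and a differentiable activation
    applied coordinate-wise."""
    def __init__(self, T, mask, activation, activation_deriv):
        T = np.asarray(T, dtype=float)
        mask = np.asarray(mask, dtype=float)
        assert T.shape == mask.shape
        self.T = T                      # transition map as a matrix
        self.mask = mask                # 1.0 = mutable entry, 0.0 = fixed (or absent) connection
        self.act = activation           # alpha : R -> R
        self.act_deriv = activation_deriv

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_deriv(z):
    s = sigmoid(z)
    return s * (1.0 - s)
```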
In general, they are defined on a set I as:

    P(I) = {U | U ⊆ I},        M(I) = {ϕ : I → R | supp(ϕ) is finite},

where supp(ϕ) = {i ∈ I | ϕ(i) ≠ 0} is the support of ϕ. Such a function ϕ can also be written as a formal sum:

    ϕ = r_1 · i_1 + · · · + r_m · i_m,    where supp(ϕ) = {i_1, . . . , i_m} and r_ℓ = ϕ(i_ℓ).

This explains why such an element ϕ ∈ M(I) is sometimes called a multiset on I: it counts elements i_k ∈ I with multiplicity r_k = ϕ(i_k) ∈ R. In this paper we shall use these monads P and M exclusively on natural numbers, as finite sets; in that case M(n) = R^n, as used above. We shall not really use that P and M are monads, except for the following construction: each function T : I → M(J) has a 'Kleisli' or 'linear' extension T_* : M(I) → M(J) given by:

    T_*(ϕ)(j) = Σ_{i ∈ I} ϕ(i) · T(i)(j).    (1)

The transition map T in a layer n ⇒ k is the linear part of the associated function R^n → R^k, and the activation function α is the non-linear part. This linear role of T is emphasised by using this linear extension T_*. Notice that if T(i)(j) = 0, then the input from node i does not contribute to the outcome. Hence this corresponds to not having a connection i → j in the layer. When it comes to updating, we have to distinguish between a weight being 0 because there is no connection (so that it remains 0) and weights that happen to be zero at some point in time, but may become non-zero after an update. This is done via the mask function M.

Definition 2.3 A layer (T, α) : n ⇒ k is interpreted as the function [[T, α]] : R^n → R^k defined by:

    [[T, α]](x) := α( T_*(x, 1) ).    (2)

Notice that we use notation x ∈ R^n to indicate a vector of reals x_i ∈ R. Similarly, the notation α is used to apply α : R → R coordinate-wise to T_*(x, 1) ∈ R^k, where T_* is defined in (1). The additional input 1 in T_*(x, 1) is used to handle biases, as will be illustrated in the example below. The function [[T, α]] : R^n → R^k expresses (forward) state transformation. Sometimes we use alternative notation ≫ for state transformation, defined as:

    (T, α) ≫ x := [[T, α]](x).

This notation is especially suggestive in combination with loss transformation ≪, working backwards. The interpretation function [[T, α]] performs what is often called forward propagation. We will refer to vectors x ∈ R^n as states; they describe the numerical values associated with n nodes at a particular stage in a neural network. We can then also say that forward propagation involves state transformation: a layer n ⇒ k transforms states in R^n to states in R^k.

The following example illustrates how the interpretation function works. We shall describe this network as two layers. In this network all connections are mutable, as indicated via the function M which sends each i ∈ 2 to the whole subset M(i) = 2 ⊆ 2. The activation function is the so-called sigmoid function σ, for both layers, given by σ(z) = 1/(1 + e^{−z}). The two transition functions T, S have type 3 → M(2). Their definition is given by the labels on the arrows in the network (3); alternatively, one may see T, S as matrices. Applying (2) to these matrices gives the outputs of the two layers. We see how the bias is described via the arrows out of the 'open' nodes • in (3) and is added in the appropriate manner to the outcome, via the value '1' on the right-hand-side in (2).

We write NN for the category of neural networks, as in [1]. Its objects are natural numbers n ∈ N, corresponding to n nodes. A morphism n → k in NN is a sequence of layers n ⇒ · · · ⇒ k, forming a neural network. Composition in NN is given by concatenation of sequences; a (tagged) empty sequence is used as identity map for each object n. Next, we write RF for the category of real multivariate differentiable functions: objects are natural numbers and morphisms n → k are differentiable functions R^n → R^k.
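Continuing the Layer sketch above, the following is a minimal rendering of the forward semantics of Definition 2.3, equation (2). The concrete weight matrices of the two-layer example are not reproduced in this text, so the numbers below are made up purely for illustration; only the shape of the 2 ⇒ 2 ⇒ 2 network with sigmoid activations follows the example.

```python
import numpy as np

def forward(layer, x):
    """Interpretation [[T, alpha]](x) = alpha(T_*(x, 1)) from equation (2):
    append 1 for the bias, apply the linear part T, then the activation coordinate-wise."""
    x1 = np.append(x, 1.0)              # the pair (x, 1) in R^{n+1}
    return layer.act(layer.T @ x1)

def forward_network(layers, x):
    """A network is a list of layers; forward propagation composes the layers in order."""
    for layer in layers:
        x = forward(layer, x)
    return x

# Hypothetical 2 => 2 => 2 network with all connections mutable (weights are illustrative only).
T = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6]])
S = np.array([[0.7, 0.8, 0.9],
              [0.1, 0.2, 0.3]])
mask = np.ones_like(T)
net = [Layer(T, mask, sigmoid, sigmoid_deriv), Layer(S, mask, sigmoid, sigmoid_deriv)]
print(forward_network(net, np.array([0.05, 0.10])))
```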
Proposition 2.5 Forward state transformation (propagation) yields a functor NN → RF, which is the identity on objects. A morphism n → k in NN, given by a sequence of layers ℓ_1, · · · , ℓ_m, is sent to the composite

    [[ℓ_m]] ◦ · · · ◦ [[ℓ_1]] : R^n → R^k,

with the understanding that an empty sequence ⟨⟩ : n → n in NN gets sent to the identity function R^n → R^n. This yields a functor by construction. In line with this description we shall interpret a morphism N = ℓ_1, . . . , ℓ_m : n → k in the category NN as a function [[N]] := [[ℓ_m]] ◦ · · · ◦ [[ℓ_1]] : R^n → R^k, and write N ≫ x := [[N]](x).

Backward loss transformations

In the theory of neural networks one uses 'loss' functions to evaluate how much the outcome of a computation differs from a certain 'target'. A common choice is the following. Given outcomes y ∈ R^k and a target t ∈ R^k one takes as loss:

    ½ · Σ_{j ∈ k} (y_j − t_j)².

Here we abstract away from the precise form of such computations and use a function L for loss. In fact, we incorporate the target t in the loss function, so that for the above example we can give L the type L : R^k → R, with definition L(y) := ½ · Σ_j (y_j − t_j)². Validity is written as:

    y |= L := L(y).

The validity notation |= emerges from the view that vectors y ∈ R^k are states (of type k), and loss functions L : R^k → R are predicates (of type k). The notation y |= L then expresses the value of the loss L in the state y.

We now come to backward transformation of loss along a layer. We ignore mutability because it does not play a role.

Definition 3.1 Let (T, α) : n ⇒ k be a single layer. Each loss function L : R^k → R on the codomain k of this layer can be transformed into a loss function (T, α) ≪ L : R^n → R on the domain n via:

    ((T, α) ≪ L)(x) := L( (T, α) ≫ x ),    that is,    (T, α) ≪ L = L ◦ [[T, α]].

For a morphism N = ℓ_1, . . . , ℓ_m : n → k in the category NN of neural networks we define:

    N ≪ L := ℓ_1 ≪ ( · · · (ℓ_m ≪ L) · · · ).

We can now formulate a familiar property for validity and transformations, see e.g. [4,6].

Lemma 3.2 For any neural network N : n → k in NN, any loss function L : R^k → R and any state x ∈ R^n, one has:

    x |= N ≪ L  =  N ≫ x |= L.

Proof By the definition of these notations:

    x |= N ≪ L = (N ≪ L)(x) = L(N ≫ x) = N ≫ x |= L.

Many forms of state and predicate transformation can be described in the form of a 'state-and-effect triangle', where 'effect' is used as alternative name for 'predicate', see [6]. Here this takes the following form (Theorem 3.3): a triangle of functors relating the category NN to a category of states, with Stat(n) = R^n and state transformation ≫ on morphisms, and to a category of predicates (loss functions), with Pred(n) the loss functions R^n → R and loss transformation ≪ on morphisms. The triangle commutes in one direction: Hom(−, R) ◦ Stat = Pred. In order to obtain commutation in the other direction one typically restricts the category Set to an appropriate subcategory of algebraic structures. For instance, in probabilistic computation, states form convex sets and predicates form effect modules, see e.g. [4,5]. In the present situation with neural nets it remains to be investigated which algebraic structures are relevant. That is not so clear in the current general set up, for instance because we impose no restrictions on the loss functions that we use.

Back propagation

In the setting of neural networks, back propagation is a key step to perform an update of (the linear part of) a layer. Here we shall give an abstract description of such updates, in terms of a loss function L as used in the previous section. In fact, we assume that what is commonly called the learning rate η is also incorporated in L. Let (T, M, α) : n ⇒ k be a layer. Given an input state a ∈ R^n and a (differentiable) loss predicate L : R^k → R we will define a gradient ∇_(a,L)(T) and use it to change T into

    T − M ⊙ ∇_(a,L)(T),

where the mutability map M : n → P(k) is used as k × n Boolean matrix (with 0's and 1's only), and where ⊙ is the Hadamard product, given by elementwise multiplication. It ensures that only mutable connections are updated.
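A short sketch of Definition 3.1 and Lemma 3.2 in code, reusing the forward-propagation functions above. The squared-error loss with target (0.01, 0.99) is the one used later in the paper's example; the network weights are still the hypothetical ones from the earlier block.

```python
def loss_transform(layer, L):
    """(T, alpha) << L = L o [[T, alpha]] (Definition 3.1): a loss on the layer's
    outputs becomes a loss on its inputs."""
    return lambda x: L(forward(layer, x))

def network_loss_transform(layers, L):
    """N << L for a whole network: transform the loss backwards, last layer first."""
    for layer in reversed(layers):
        L = loss_transform(layer, L)
    return L

# Lemma 3.2 in code: the transformed loss of an input equals the loss of the output.
target = np.array([0.01, 0.99])
squared_error = lambda y: 0.5 * np.sum((y - target) ** 2)
x = np.array([0.05, 0.10])
assert np.isclose(network_loss_transform(net, squared_error)(x),
                  squared_error(forward_network(net, x)))
```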
Definition 4.1 In the situation just described, the gradient can be given as:

    ∇_(a,L)(T) := d/dX [ L( α( X_*(a, 1) ) ) ] (T).

We have introduced a new bound variable X, to clearly indicate the derivative that we are interested in. The type of X is the same as T, namely a k × (n + 1) matrix. In order to compute this gradient, we recall that the derivative of a (differentiable) function f : R^n → R^m is the m × n 'Jacobian' matrix of partial derivatives:

    f′(x) = ( ∂f_i(x) / ∂x_j ),    for 1 ≤ i ≤ m and 1 ≤ j ≤ n.

Lemma 4.2 (i) The gradient ∇_(a,L)(T) can be calculated as:

    ∇_(a,L)(T) = s · (a, 1)^T,    where    s := L′( (T, α) ≫ a ) ⊙ α′( T_*(a, 1) ) ∈ R^k.

(The superscript T in (−)^T is for 'matrix transpose', and is unrelated to the transition map T.)

(ii) In the special case where α is the sigmoid function σ, the vector s in point (i) is a Hadamard product:

    s = L′( (T, σ) ≫ a ) ⊙ ( (T, σ) ≫ a ) ⊙ ( 1 − (T, σ) ≫ a ).

Proof The chain rule for multivariate functions gives a product of matrices:

    ∇_(a,L)(T) = L′( (T, α) ≫ a ) · α′( T_*(a, 1) ) · d/dX [ X_*(a, 1) ] (T).    (6)

We elaborate the three parts one-by-one.

• The derivative of the loss function L : R^k → R is given by its partial derivatives, written as L′ : R^k → R^k. Thus, the first part L′((T, α) ≫ a) of (6) is in R^k.

• The derivative of the coordinate-wise application α : R^k → R^k of α : R → R, applied to the sequence T_*(a, 1) ∈ R^k, consists of the k × k diagonal matrix with entries α′(T_*(a, 1)_j) at position j, j. We shall write this diagonal as a vector α′(T_*(a, 1)) ∈ R^k. The product of the first two factors in (6) can thus be written as a Hadamard (coordinatewise) product ⊙: L′((T, α) ≫ a) ⊙ α′(T_*(a, 1)).

• For the third part in (6) we notice that X ↦ X_*(a, 1) is a function R^{k×(n+1)} → R^k. The j-th row of its Jacobian consists of the k × (n + 1) matrix with a, 1 at row j and zeros everywhere else. Indeed, the j-th coordinate X_*(a, 1)_j is given by:

    X_*(a, 1)_j = X_{j1} a_1 + · · · + X_{jn} a_n + X_{j(n+1)}.

Next we are interested in gradients of multiple layers.

Proposition 4.3 Consider two consecutive layers m ⇒ n ⇒ k, given by (S, β) : m ⇒ n and (T, α) : n ⇒ k, with initial state a ∈ R^m and loss function L : R^k → R. The gradient for updating S is:

    ∇_(a, (T, α) ≪ L)(S).

The derivative of the transformed loss (T, α) ≪ L is, by the chain rule:

    ((T, α) ≪ L)′(y) = ( L′( (T, α) ≫ y ) ⊙ α′( T_*(y, 1) ) ) · [T],    (7)

where [T] is the k × n matrix obtained from the k × (n + 1) matrix T by omitting the last column. More generally, the same works for appropriately typed neural nets N, M.

Proof The first equation in the above proposition obviously holds. We concentrate on the second equation (7):

    ((T, α) ≪ L)′(y) = ( L ◦ α ◦ T_*(−, 1) )′(y) = L′( (T, α) ≫ y ) · α′( T_*(y, 1) ) · T_*(−, 1)′(y).

We still need to prove T_*(−, 1)′(y) = [T], where [T] is obtained from T by dropping the last column. The function T_*(−, 1) has type R^n → R^k, so the derivative T_*(−, 1)′(y) is a k × n matrix with entry at i, j given by:

    ∂ T_*(y, 1)_i / ∂ y_j = T_{ij}.

Together these T_{ij}, for 1 ≤ i ≤ k and 1 ≤ j ≤ n, form the k × n matrix [T].

Remark 4.4 Equation (7) reveals an important point: for actual computation of backpropagation we are not so much interested in loss transformation, but in erosion transformation, where we introduce the word 'erosion' as name for the derivative L′ of the loss function L. For this erosion transformation we introduce new notation ≪. Let (T, α) : n ⇒ k be a single layer, and let E : R^k → R^k be an 'erosion' function. We transform it into another erosion function (T, α) ≪ E : R^n → R^n, by following (7):

    ( (T, α) ≪ E )(y) := ( E( (T, α) ≫ y ) ⊙ α′( T_*(y, 1) ) ) · [T].    (8)

By construction we have:

    (T, α) ≪ (L′) = ( (T, α) ≪ L )′.    (9)

Conceptually, we consider loss transformation more fundamental than erosion transformation, because loss transformation gives rise to the 'triangle' situation in Theorem 3.3. In addition, erosion transformation can be expressed via derivatives and loss transformation, as the above equation (9) shows. In the obvious way we can extend ≪ in (8) from single to multiple layers (neural networks).
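The two formulas that matter computationally, Lemma 4.2(i) and the erosion transformation (8), are sketched below in code, continuing the earlier Layer and forward-propagation sketches. The `erosion` argument plays the role of the derivative L′.

```python
def layer_gradient(layer, a, erosion):
    """Lemma 4.2(i): s = L'((T,alpha) >> a) (.) alpha'(T_*(a,1)) and
    grad = s (a,1)^T, a k x (n+1) matrix. `erosion` is the derivative L'."""
    a1 = np.append(a, 1.0)
    z = layer.T @ a1                                    # T_*(a, 1)
    s = erosion(layer.act(z)) * layer.act_deriv(z)      # Hadamard product, a vector in R^k
    return np.outer(s, a1)                              # k x (n+1)

def erosion_transform(layer, a, erosion):
    """Equation (8) evaluated at the point a: push the output erosion back to an
    input erosion, using [T], the weight matrix without its bias column."""
    a1 = np.append(a, 1.0)
    z = layer.T @ a1
    s = erosion(layer.act(z)) * layer.act_deriv(z)
    return s @ layer.T[:, :-1]                          # a vector of length n
```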
In case α is the sigmoid function σ, the right-hand-side of (8) simplifies to: We illustrate back propagation for the earlier example. The target in this example is 0.01, 0.99 ∈ R 2 , so that the loss function L : R 2 → R and its 'erosion' derivative E = L ′ : R 2 → R 2 are: The learning rate η is set to 0.5. The updating of the transition matrices T, S works in backward direction. By Lemma 4.2 we get as gradient: Our next aim is to update the preceding, first transition function / matrix T . The updated first matrix of the neural network is then: Functoriality of backpropagation In a recent paper [1] a categorical analysis of neural networks is given. Its main result is compositionality of backpropagation, via a description of backpropagation as a functor. In this section we first give a description of the functoriality of backpropagation in the current framework, and then give a comparison with [1]. We write SL for the category of 'states and losses'. • The objects of SL are triples (n, a, L), where a ∈ R n is a state of type n and L : R n → R is a (differentiable) loss function of the same type n. • A morphism N : (n, a, L) → (k, b, K) is a neural network N : n → k, in the category NN, such that both: b = N ≫ a and K = N ≪ L. There is an obvious forgetful functor U : SL → NN given by U(n, a, L) = n and U(N ) = N . Definition 5.1 Define backprop B : SL → NN in the following way. On objects, we simply take G(n, a, L) = n. Next, let N = ℓ 1 , . . . , ℓ m be a morphism (n, a, L) → (k, b, K) in SL, where ℓ i = (T i , α i , M i ). We write: • a 0 := a and a i+1 := ℓ i ≫ a i ; this gives a list of states a 0 , a 1 , . . . , a m with a m = b, by assumption; • K m := K and K i−1 := ℓ i ≪ K i ; this gives a list of loss functions K 0 , . . . , K m with K 0 = L. Then B(N ) : n → k is defined as a list of layers, of the same length m as N , with components: (Recall, M i is a Boolean 'mask' matrix that takes care of mutability, and ⊙ is the Hadamard product.) Proof This is 'immediate', but writing out the details involves a bit of book keeping. Let (n, a, L) and similarly p j = T P j , α P j , M P j . The procedures in the two bullets in Definition 5.1 yield for the maps N and K separately: From the perspective of the composite sequence ℓ 1 , . . . , ℓ u , p 1 , . . . , p v we can go through the same process and obtain sequences a ′ 0 , . . . , a ′ u+v and F ′ 0 , . . . , F ′ u+v with: We can now describe the components of the updated network B(P • N ). For 1 ≤ i ≤ u and 1 ≤ j ≤ v, We conclude this section with a comparison to [1], where it was first shown that backpropagation is functorial. The approach in [1] is both more abstract and more concrete than ours. (i) Here, a layer (T, α) : n ⇒ k of a neural network consists of linear part T : n + 1 → M(k) and a nonlinear part α : R → R. We ignore the mutability matrix M for a moment. As shown in Definition 2.3, the layer (T, α) gives rise to an interpretation function [[ T, α ]] : R n → R k that performs forward state transformation (T, α) ≫ (−). In [1] there is no such concrete description of a layer. Instead, the paper works with 'parametrised' functions P × R n → R k . Our approach fits in this framework by taking the set of linear parts P = M(k) n+1 as parameter set. These parametrised functions are organised in a category Para, which is shown to be symmetric monoidal closed. (ii) The comparison of the outcome of a state transformation by a network n ⇒ k and a target t ∈ R k is captured here abstractly via a loss function L : R k → R. 
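Putting the pieces together, the sketch below performs one backpropagation pass in the spirit of Definition 5.1: forward-propagate the states, backward-propagate the erosion, and update only the mutable weights. It reuses the earlier hypothetical 2 ⇒ 2 ⇒ 2 network; the target (0.01, 0.99) and the learning rate 0.5 are the ones quoted in the example above, while in the paper the learning rate is folded into the loss rather than passed separately.

```python
def backprop_update(layers, x, erosion, learning_rate=1.0):
    """One backprop pass over a list of layers: compute states a_0,...,a_m, then walk
    backwards updating each layer with its masked gradient and transforming the erosion."""
    states = [x]
    for layer in layers:                              # forward pass
        states.append(forward(layer, states[-1]))
    E = erosion                                       # erosion at the network output
    for layer, a in zip(reversed(layers), reversed(states[:-1])):
        grad = layer_gradient(layer, a, E)
        next_erosion = erosion_transform(layer, a, E)  # erosion for the next (earlier) layer
        layer.T -= learning_rate * layer.mask * grad   # only mutable entries change
        E = lambda y, v=next_erosion: v                # constant function: value already evaluated at a
    return layers

# Hypothetical run: squared-error loss with target (0.01, 0.99), learning rate 0.5.
erosion = lambda y: y - target                         # derivative of 0.5 * sum((y - target)^2)
backprop_update(net, np.array([0.05, 0.10]), erosion, learning_rate=0.5)
```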
This more general perspective allows us to define loss transformation N ≪ L along a network N . We have thus developed a view on neural network computation, with forward and backward transformations, that is in line with standard approached to (categorical) program semantics. It gives rise to the pattern of a state-and-effect triangle in Theorem 3.3. Moreover, we show that there is an associated 'erosion transformation' function, that is suitable related to loss transformation via derivatives, see (9). In the formalism of [1] backward computation also plays a role, via a function 'r', of type P × R n × R k → R n , for a network n ⇒ k. It corresponds to our erosion transformation (8), roughly as: r(ℓ, a, b) = (ℓ ≪ L ′ b )(a), where L ′ b is the derivative of the loss function L b associated with the 'target' b. (iii) Here we have concentrated on the sequential structure. In [1], parallel composition is also taken into account in the form of symmetric monoidal structure. For us, such additional structure is left as future work. Conclusions In this paper, we have examined neural networks as programs in a state-and-effect framework. In particular, we have characterized the application of a neural network to an input as a kind of state transformation and backpropagation of loss along the network as a kind of predicate transformation on losses. We also observed that the compositionality of backpropagation corresponds to the functoriality of a mapping between a category of states-and-effects to the category of neural networks. For the sake of illustrating this perspective on neural networks, we have deliberately chosen a simple subclass of the known network architectures and built a category of multilayer perceptron (MLPs). However, we believe it is possible to develop a richer categorical structure capable of capturing a much wider variety of network architectures. This may be the focus of future work. We also considered a single training scheme: backpropagation paired with stochastic gradient descent (with a fixed learning rate). We are interested in modeling other kinds of neural network training categorically. As mentioned in the discussion following Theorem 3.3, there is typically a category of algebraic structures in the upper right vertex of the state-and-effect triangle which we have not determined yet.
2018-03-25T22:01:32.000Z
2018-03-25T00:00:00.000
{ "year": 2018, "sha1": "8e4476260ae6160ff85df23842805fbdec8b727a", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.entcs.2019.09.009", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "aa2f756b4420ed97da3eae183e7694219332d183", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
256632072
pes2o/s2orc
v3-fos-license
Melanophages give rise to hyperreflective foci in AMD, a disease-progression marker Retinal melanosome/melanolipofuscin-containing cells (MCCs), clinically visible as hyperreflective foci (HRF) and a highly predictive imaging biomarker for the progression of age-related macular degeneration (AMD), are widely believed to be migrating retinal pigment epithelial (RPE) cells. Using human donor tissue, we identify the vast majority of MCCs as melanophages, melanosome/melanolipofuscin-laden mononuclear phagocytes (MPs). Using serial block-face scanning electron microscopy, RPE flatmounts, bone marrow transplantation and in vitro experiments, we show how retinal melanophages form by the transfer of melanosomes from the RPE to subretinal MPs when the “don’t eat me” signal CD47 is blocked. These melanophages give rise to hyperreflective foci in Cd47−/−-mice in vivo, and are associated with RPE dysmorphia similar to intermediate AMD. Finally, we show that Cd47 expression in human RPE declines with age and in AMD, which likely participates in melanophage formation and RPE decline. Boosting CD47 expression in AMD might protect RPE cells and delay AMD progression. Supplementary Information The online version contains supplementary material available at 10.1186/s12974-023-02699-9. Introduction Age-related macular degeneration (AMD) affects more than 150 million people worldwide (early AMD) and 10 million patients suffer from debilitating late stage AMD [1,2]. Early/intermediate AMD is characterized by pigmentary changes and lipoproteinaceous debris accumulation between the photoreceptors and the melanosome-rich retinal pigment epithelium (RPE, pseudodrusen) or below the RPE (soft drusen). Later, AMD can be complicated by central choroidal neovascularization (neovascular AMD, late form) and ultimately a disciform scar (neovascular AMD end stage), or by an extending lesion of the photoreceptors, RPE, and choroid that often starts parafoveally (geographic atrophy, GA, late form) [3]. Patients with early/intermediate AMD can progress and develop late AMD (~ 15% in the Beaver Dam study over 10 years; ~ 30% in the Blue mountain study over 6 years), but a large part of patients stay stable for years [4,5], underlining the potential usefulness of progression biomarkers of AMD. Recently, retinal imaging by spectral-domain optical coherence tomography (SD-OCT) identified hyperreflective foci (HRF), as a highly predictive biomarker for progression from intermediate to late AMD [6,7]. HRF are defined as discrete, wellcircumscribed intraretinal lesions with reflectivity comparable to the retinal pigment epithelium (RPE) band on SD-OCT [6,7]. Their presence is also associated with the two major genetic AMD risk factors the CFH H402 variant and a 10q26 haplotype [8]. In a direct comparison study of post-mortem SD-OCT and histology of drusenoid pigment epithelium detachment (a known precursor to GA), HRF were identified to be caused by melanosome/melanolipofuscin-containing sub-and intra-retinal cells (MCCs) [9]. As MCCs occur in intermediate AMD before major RPE death occurs and the RPE is the only melanin containing cell type under the healthy retina, it is widely believed that HRF are caused by migrating retinal pigment epithelium [2,9]. 
However, a direct comparison of SD-OCT and immunohistochemistry of a laser lesion in a human patient revealed that HRFs co-localized with MPs [10] and immunohistological studies of MP distribution in AMD, revealed the presence of pigment-containing MPs [11,12], raising the possibility that MPs that ingested melanosomes/melanolipofuscin can be MCCs and by extension the anatomical equivalent of HRF and a bad prognostic factor in AMD. MPs are a family of cells that include monocyte (Mo), resident macrophages (rMφ) such as microglial cells (MC), and monocyte-derived inflammatory macrophages (iMφ) that arise during inflammation [13]. Their accumulation has been shown to play an important role in the pathogenesis of many chronic, age-related diseases [13,14], including late AMD [3,10] where they have been shown to play a critical role in neovascularization and photoreceptor degeneration [3]. Importantly, MP accumulation is also observed around reticular pseudodrusen [15] and large drusen that characterize early/intermediate AMD [11,16]. At this earlier stage they might fulfill a homeostatic role, controlling debris accumulation, or provoke further degeneration, possibly depending on the patients AMD risk factors that determine the MPs pathogenic potential [3]. We recently showed that the homeostatic elimination of infiltrating MPs is dependent on Thrombospondin 1 (TSP1)-mediated activation of the CD47 receptor and Thbs1 −/− -and Cd47 −/− -mice develop age-related subretinal MP accumulation [17]. We demonstrated that both major genetic AMD risk factors, the CFH H402 variant and a 10q26 haplotype, inhibit TSP1-mediated CD47 activation and MP elimination, promoting pathogenic inflammation [17,18]. Independently of TSP1, CD47, expressed on many cell types, also functions as the ligand for signal regulatory protein α (SIRPα) [19]. SIRPα is expressed on all myeloid cells, including monocytes, macrophages and microglia, and its ligation by CD47 induces a "don't eat me" signal inhibiting the myeloid cell-mediated removal of the CD47-expressing cell [19]. Together, these observations raise the question whether melanosomes/melanolipofuscin particle-containing MPs can represent the underlying anatomical structures of HRF and how MPs ingestion of RPE melanosomes/melanolipofuscin particle affects RPE homeostasis. Using human AMD sections, we here show that intraretinal pigment, internally to the RPE cell layer, is never found in cells positive for RPE or macroglial cell markers but locates to melanin/melanolipofuscin-laden macrophages previously described as melanophages in hyperpigmentation disorders or melanotic lesions of the skin [20][21][22]. Using CD47 −/− -mice and CD47 blocking antibodies, we demonstrate that the lack of the "don't eat me signal" on RPE is sufficient to generate subretinal melanophages, associated with RPE dysmorphia, strikingly similar to intermediate AMD lesions. Last but not least, we show that Cd47 expression declines in human RPE with age and in AMD patients, which likely participates in melanophage formation and associated RPE deterioration in AMD. 
Immunohistochemistry on donor eyes sections Four control donor eyes and 12 donor eyes of 11 donors with a known history of AMD, melanosomes/ melanolipofuscin-containing cells visible in unstained bright-field microscopy, and histological evidence for intermediate AMD (sizeable drusen), neovascular AMD (subretinal presence of vessels without gliosis), disciform scar (subretinal presence of vessels with gliosis), or GA (lesions with complete outer retinal and RPE atrophy) were used in this study (see Table 1). Informed consent was obtained for all donors by the Minnesota Eye bank and experiments conformed to the principles set out in the WMA Declaration of Helsinki. The death to ocular cooling time and the death to enucleation time comprised between 45 min-6h45min and 2h45 and 7h15, respectively. The posterior segment was fixed 4 h in 4% paraformaldehyde. 8-μm horizontal sections of paraffin embedded human tissues crossing the optic nerve and perifovea were cut with a microtome (Microm Microtech France). The sections were de-paraffinized by 30 min incubation in QPath ® Safesolv (VWR Chemicals) and rehydration was performed in 5-min serial baths of alcohol (100/95/70%) and water. Antigen retrieval was performed in boiling citrate buffer pH6 for 20 min. Sections were blocked with 1% horse serum 30 min and then exposed overnight to recombinant rabbit monoclonal anti-RPE65 (ab231782, 1:200, Abcam), rabbit anti-human peropsin (LS-A1150, 1:200, LSBio), mouse anti-human CD68 (NCL-L-CD68, 1:40, Leica Biosystems), mouse anti-human CD163 (NCL-L-CD163, 1:25, Leica Biosystems), rabbit anti-IBA1 (019-19741, 1:200, Fujifilm Wako), mouse anti-GFAP (G3893, 1:200, Sigma-Aldrich) antibodies. After washing, sections were incubated with appropriate secondary goat anti-rabbit or goat anti-mouse antibody conjugated to an alkaline phosphatase (1:500, Ther-moFisher Scientific) for 60 min, followed by revelation with Fast-Red (Sigma-Aldrich) following the manufacturer instructions. Sections were counterstained with Hoechst 33342 (1:1000, ThermoFisher Scientific). Autofluorescence was observed in the green channel (excitation filter bandpass 470/40 and suppression filter bandpass 525/50). The slides were then washed, mounted, and viewed and photographed with a Leica DM550B fluorescence microscope (Leica Biosystems). The total surface, and immuno-stained surface covered by intraretinal pigment were measured for each retinal pigmented focus for each antibody and the percentage of immune-stained surface of the total pigmented surface was calculated for each eye. Control experiments omitting the first antibody gave no staining (data not shown). Consecutive serial slides were used to carry out stainings with multiple antibodies. Control human tonsil sections processed using the same experimental procedure, were used to validate the antibodies. Animals Wild-type (WT) C57BL/6J control, Cd47 −/− and Thbs1 −/− -mice were obtained from Charles River. All mice used in this study were male and were rd8 mutation free, as this mutation can lead to an AMD-like phenotype. Male mice were used to eliminate the influence of the reproductive cycle. The mice were kept to the indicated ages under specific pathogen-free condition in a 12 h/12 h light/dark (100 lx) cycle with no additional cover in the cage and with water and normal chow diet available ad libitum. 
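The surface-based read-out described in the immunohistochemistry section above boils down to a simple area ratio: for each antibody and each eye, the immunopositive surface within pigmented foci divided by the total pigmented surface. The following is a minimal, hypothetical Python sketch of that calculation; the area values and field names are invented stand-ins and do not come from the study's image-analysis workflow.

```python
# Minimal sketch: percent of pigmented-focus surface that is immunopositive,
# per donor eye and per antibody. Areas (in µm^2) are hypothetical stand-ins
# for values exported from the microscope/image-analysis software.
measurements = [
    # (eye_id, antibody, total_pigmented_area_um2, immunopositive_area_um2)
    ("AMD-01", "CD68", 1520.0, 1180.0),
    ("AMD-01", "RPE65", 1490.0, 0.0),
    ("AMD-02", "IBA1", 980.0, 910.0),
]

def percent_positive(total_um2: float, positive_um2: float) -> float:
    """Share of the pigmented surface covered by the immunostain, in percent."""
    return 100.0 * positive_um2 / total_um2 if total_um2 > 0 else 0.0

for eye, antibody, total, positive in measurements:
    pct = percent_positive(total, positive)
    print(f"{eye} {antibody}: {pct:.1f}% of pigmented surface immunopositive")
```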
All experimental protocols and procedures were approved by the French Ministry of Higher Education, Research and Innovation (authorization numbers #00075.01, #2218 2015090416008740 v4). All procedures were performed under anesthesia and all efforts were made to minimize suffering.
Optical coherence tomography (OCT) imaging in mice
Pupils were dilated with tropicamide (Mydriaticum, Théa, France) and phenylephrine (Neosynephrine, Europhta, France). Animals were then anesthetized by intraperitoneal injection of ketamine (50 mg/kg) and xylazine (10 mg/kg). SD-OCT images were taken with the SD-OCT imaging device (Bioptigen 840 nm HHP; Bioptigen, North Carolina, USA). Eyes were kept moisturized with 9‰ NaCl during the whole procedure. Image acquisitions were performed with the Bioptigen acquisition software and processed with the open-source Fiji software (http://fiji.sc/Fiji).
Immunohistochemistry on mouse eye sections
Mice were killed by CO2 asphyxiation and eyes were enucleated. Eyes were fixed for 1 h in 4% PFA, then rinsed and sectioned at the limbus; the cornea and lens were discarded. Eyecups were incubated in 30% sucrose overnight at 4 °C, then embedded in OCT and sectioned (10 µm). Cryosections were blocked with PBS containing 1% horse serum and 0.1% Triton for 1 h at room temperature and exposed overnight to rabbit anti-IBA1 antibody (019-19741, 1:200, Fujifilm Wako) at 4 °C. After washing, sections were incubated for 2 h with an Alexa Fluor 488-conjugated donkey anti-rabbit IgG (1:500, ThermoFisher Scientific) and counterstained with Hoechst 33342 (1:1000, ThermoFisher Scientific). The slides were then washed, mounted, viewed, and photographed with a Leica DM550B fluorescence microscope (Leica Biosystems).
MP and RPE quantification on mouse RPE/choroidal flatmounts
Mice were killed by CO2 asphyxiation and eyes were enucleated. The globes were fixed in 4% PFA for 45 min and then sectioned at the limbus; the cornea and lens were discarded. RPE/choroid tissues were separated from the retina and incubated overnight with rabbit anti-IBA1 antibody (019-19741, 1:400, Fujifilm Wako) and Alexa Fluor 594 phalloidin (1:100, ThermoFisher Scientific) in PBS containing 0.1% Triton. Tissues were rinsed, incubated for 2 h with an Alexa Fluor 488-conjugated donkey anti-rabbit IgG (1:500, ThermoFisher Scientific), and counterstained with Hoechst 33342 (1:1000, ThermoFisher Scientific). RPE/choroids were flatmounted, viewed, and photographed with a Leica DM550B fluorescence microscope (Leica Biosystems). MPs were counted on whole RPE/choroidal flatmounts. RPE nucleation and morphology were evaluated on randomized photographs taken between the optic nerve and the retinal mid-periphery. Melanophages on RPE/choroidal flatmounts were defined and quantified as IBA1+ MPs (green fluorescence) that visibly block the red Alexa Fluor 594-phalloidin fluorescence of the underlying RPE when viewed in the red channel.
Retinal flatmount preparation with adherent RPE
Mice were killed by CO2 asphyxiation and eyes were enucleated. The globes were transferred into PBS solution without calcium. After cleaning excess tissue from around the sclera, eyes were incubated for 40 min in a solution containing L-cysteine (0.035 mg/ml in PBS) and 10 units of papain (Worthington) at 37 °C, then transferred into DMEM containing 10% fetal bovine serum for dissection; the cornea was first removed by carefully cutting along the ora serrata, and the choroid with sclera was delicately detached by peeling up to the optic nerve.
Finally, the lens and iris were removed by cutting around them. Retinal/RPE tissues were then fixed in 4% PFA for 45 min, rinsed with PBS, flatmounted, and scanned with the Hamamatsu NanoZoomer Digital Pathology (NDP) 2.0 HT (Hamamatsu Photonics, France).
Serial block-face scanning electron microscopy
Mice were killed by CO2 asphyxiation and eyes were enucleated. The globes were fixed in PBS containing 2% paraformaldehyde and 1% glutaraldehyde for 1 h at room temperature. Samples were then prepared for serial block-face imaging using the NCMIR protocol (https://ncmir.ucsd.edu/sbem-protocol). They were post-fixed for 1 h in a reduced osmium solution containing 1% osmium tetroxide and 1.5% potassium ferrocyanide in PBS, followed by incubation with a 1% thiocarbohydrazide (TCH) solution in water for 20 min at room temperature. Subsequently, samples were fixed with 2% OsO4 in water for 30 min at room temperature, followed by 1% aqueous uranyl acetate at 4 °C overnight. The samples were then subjected to en bloc Walton's lead aspartate staining and placed in a 60 °C oven for 30 min. Samples were then dehydrated in graded concentrations of ethanol for 10 min each. The samples were infiltrated with 50% Agar low-viscosity resin (Agar Scientific Ltd) overnight. The resin was then changed and the samples further incubated for 3 h prior to embedding in inverted capsules and polymerization for 18 h at 60 °C. The polymerized blocks were mounted onto special aluminum pins for SBF imaging (FEI Microtome 8 mm SEM Stub, Agar Scientific) with a two-part conductive silver epoxy kit (EMS, 12642-14). Samples mounted on aluminum pins were trimmed and inserted into a TeneoVS SEM (ThermoFisher Scientific). Acquisitions were performed with a beam energy of 2 kV, a current of 100 pA, in HiVac mode with the filtering system, a dwell time of 1 µs per pixel, and sections of 50 nm. The pixel size was 10 nm. Images were processed for 3D reconstitution and segmentation using Imaris software (Oxford Instruments).
Bone marrow transplantation
Twenty-four hours before transplantation, 6-month-old WT and Cd47−/− recipient mice were lethally irradiated with 10 Gy (1 Gy/min) of total body irradiation from a 137Cs source. Bone marrow cells were collected from the tibias and femurs of age-matched wild-type mice, rinsed, and resuspended in PBS. Recipient mice were intravenously injected with 3 × 10⁶ bone marrow cells from donors via the tail vein. Six months after bone marrow transplantation, mice were killed and the eyes were enucleated for MP/melanophage quantification on RPE/choroid flatmounts.
CD47 expression in human tissues
The human ocular tissues used for CD47 expression analysis in this study were obtained from body donation for science, handled in accordance with the Declaration of Helsinki. Each donor had volunteered their body and had provided written consent to the Laboratory of Anatomy of our Faculty of Medicine (Saint-Etienne, France). After removal of the anterior segment through a circular incision at the equator and delicate removal of the neuroretina, 350 µl of RA1 buffer (Macherey Nagel) were added to the posterior segment, covering the central exposed RPE cells. After 5 min, the buffer containing the lysed RPE cells was pipetted up and down 5 times. The lysates were then processed following the supplier's instructions. Single-strand cDNA was synthesized with 1 µg of RNA pretreated with amplification-grade DNase, using oligo-dT as primer and Superscript II reverse transcriptase (Thermo Fisher Scientific).
For real-time PCR, 1/100 of the cDNA was incubated with the polymerase and the appropriate amounts of nucleotides (TaqMan Gene Expression Master Mix, Applied Biosystems; Power SYBR Green PCR Master Mix, Applied Biosystems). qPCR was performed with the QuantStudio real-time PCR system (Applied Biosystems) using the following parameters: 45 cycles of 15 s at 95 °C and 45 s at 60 °C. Results were normalized to the expression of RPS26 as a housekeeping gene.
RPE-normalized CD47 expression in the transcriptome dataset of RPE/choroid samples of Newman et al.
Intraretinal pigment in AMD is primarily located in melanophages
The high melanosome content of the healthy retinal pigment epithelium (RPE) is an important contributor to its appearance as a single hyperreflective band in spectral-domain optical coherence tomography (SD-OCT), a clinically used method to visualize the retina and choroid; an example of a healthy individual is shown here (blue arrows, Fig. 1A). In patients with intermediate AMD, hyperreflective foci (HRF), defined as discrete, well-circumscribed lesions with a reflectivity similar to the RPE, can regularly be observed sub- and intraretinally, shown here in a patient with intermediate AMD (red arrows, Fig. 1B). HRF have been shown to be a highly predictive biomarker for progression from intermediate to late AMD [6,7,25], and they are also more common in AMD carriers of the main genetic risk factors [8]. In unstained histological sections of AMD patients, foci of pigmentation that resemble the RPE pigment are regularly observed internally to the RPE monolayer (Fig. 1C). These melanin-containing cells (MCCs) are generally believed to be migrating RPE cells and start occurring in intAMD before major RPE cell death occurs. To identify the cell types that constitute the MCC pool, we performed immunostaining on paraffin sections of 12 eyes of 11 donors with a known history of AMD that all contained MCCs, visible by unstained bright-field microscopy (example Fig. 1C). The histological sections of one donor additionally had sizeable drusen without atrophy or CNV, seven donors had visible atrophic lesions on post-mortem fundus examination and in histology, sections from two donors featured subretinal CNV without gliosis, and three donors were characterized by a disciform scar on fundus examination and in histology (Table 1). MPs were detected by CD68 (Fig. 1D), CD163 (Fig. 1E), and ionized calcium-binding adapter molecule 1 (IBA1, Fig. 1F) staining; the RPE by retinal pigment epithelium-specific 65 kDa protein (RPE65, Fig. 1G) and peropsin (Fig. 1H) staining; and macroglia by glial fibrillary acidic protein (GFAP, Fig. 1I) immunohistochemistry. Healthy eye donors (inset Fig. 1G-I) and human tonsils (inset Fig. 1D-F) served as positive controls. We used a chromogenic substrate revealing method (alkaline phosphatase/Fast Red) that is visible in bright field and in red fluorescence, and we observed autofluorescence in the green channel (excitation filter bandpass 470/40 and suppression filter bandpass 525/50) and a Hoechst nuclear marker in the blue channel. Bright-field photographs of immunostaining for CD68, CD163, and IBA1 revealed a red staining in the typical form and location of MPs in stained tonsil sections (inset Fig. 1D-F) and choroid (where CD68 only stains a subset of choroidal macrophages, as previously reported [12]), but no red staining was detected in the RPE, identified as the typical monolayer of pigmented cells, demonstrating the MP specificity of the staining.
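The RT-qPCR normalization to RPS26 described in the methods above is typically reported as relative expression via a delta-Ct transform. The snippet below is a generic 2^(-dCt) calculation under that assumption; the Ct values are invented and this is not necessarily the exact quantification model used in the study.

```python
import math

# Relative CD47 expression normalized to the RPS26 housekeeping gene, 2^(-dCt).
# Ct values are hypothetical examples for three RPE donor samples.
samples = {
    "donor_62y": {"CD47": 24.8, "RPS26": 18.9},
    "donor_75y": {"CD47": 25.9, "RPS26": 18.7},
    "donor_88y": {"CD47": 27.1, "RPS26": 19.0},
}

for donor, ct in samples.items():
    delta_ct = ct["CD47"] - ct["RPS26"]   # larger dCt means lower relative expression
    rel_expr = math.pow(2.0, -delta_ct)   # expression relative to RPS26
    print(f"{donor}: dCt = {delta_ct:.2f}, relative CD47 expression = {rel_expr:.5f}")
```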
Observation of retinal pigmented foci revealed that a substantial portion of the pigmented intraretinal structures appears red (positive for the FastRed pigment used to reveal the immunohistochemistries) for each of the MP markers (arrows, Fig. 1D-F) compared to their brown color of unstained sections (Fig. 1C), to negative controls omitting the primary antibody (not shown), and to retinal pigmented foci in sections stained for RPE and glial markers (Fig. 1G-I). Accordingly, micrographs/ images of fluorescence microscopy, taken with the same exposure times for pigmented foci and the RPE, reveals a strong red component in the foci (red fluorescence and red/green double positive fluorescence appearing as yellow), compared to the green only autofluorescence emanating from the RPE of the same sections ( Fig. 1D-F), and to pigmented foci stained for RPE and glial markers ( Fig. 1G-I). Immunostaining for RPE65 and peropsin, two RPE-specific proteins, stained the RPE mono-layer of healthy control donors red in bright-field observation (insets Fig. 1G and H) and strongly marked the RPE in AMD sections visible in bright field and in red fluorescence, but failed to stain any pigmented foci of the retina ( Fig. 1G and H). Similarly, GFAP staining revealed a staining in an astrocyte distribution in healthy controls (inset Fig. 1I) and a staining pattern typically observed in activated Müller cells in AMD sections, but the staining never overlapped with pigmented foci, despite coming very close (Fig. 1I). We next measured the surface of the immunohistochemistry sections that contained melanosomes/melanolipofuscin particles and the surface that was additionally positive for the immunostaining and calculated the percentage of the surface of the retinal sections that was double positive over the total surface containing melanosomes/melanolipofuscin granules for each immunostaining in each of the 12 donor eyes (Fig. 1J). Strikingly, retinal pigmented foci were never found to be positive for either RPE or macroglial cell markers in any of our donor eyes. However, we found that between 70 and 100% of pigmented foci surface was positive for at least one of the MP markers in each of the patients. As we were technically unable to simultaneously stain for all three MP markers, we do not know whether MP-marker negative pigmented foci with one staining would have stained positive for one of the other used MP markers (or markers not used in this study) and it is additionally possible that not all retinal pigment is located intracellularly. Interestingly, in sections of two patients with disciform scars, a substantial number of pigmented cells found within the scar (but not in the retinal parenchyma) stained in part positive for RPE65 and peropsin (data not shown), suggesting that islands of RPE cells have become surrounded by the scar tissue on certain sections. Taken together, our data show that retinal MCCs found in AMD never stain positive for macroglial or RPE cellspecific marker, but the majority can be stained with specific MP markers. Our results confirm a recent report of CD68, CD163 and RPE65 immunohistochemistry [26]. As MPs have been shown to infiltrate the diseased retina in intermediate and late AMD [3,12,15,16,27], these results strongly suggest that the majority of HRF in AMD are caused by pigment-laden MPs called melanophages that have been well described in skin diseases [20][21][22]. 
Subretinal pigment-laden MPs accumulate in Cd47−/− mice with age but not in Thbs1−/− mice
While melanophage formation in the dermis through ingestion of melanosomes from neighboring melanocytes has been previously described [28], their generation in the retina with an intact RPE cell layer and the mechanisms involved have not been clearly demonstrated. We previously showed that the homeostatic elimination of infiltrating MPs is dependent on Thrombospondin 1 (TSP1)-mediated activation of the CD47 receptor on MPs [17]. Thbs1−/− and Cd47−/− mice therefore develop comparable age-related subretinal MP accumulation, contrary to WT mice, confirmed here on phalloidin (red fluorescence staining) and IBA1 (green fluorescence staining) stained RPE/choroidal flatmounts of 12-month-old WT mice (Fig. 2A), Thbs1−/− (Fig. 2B), and Cd47−/− mice (Fig. 2C). However, on closer observation, there is a remarkable difference between the two knockout mouse strains: in Cd47−/− mice the majority of the subretinal IBA1+ MPs are bloated with a dense pigment which blocks the visualization of the underlying RPE phalloidin stain (red fluorescence; asterisks, Fig. 2C) and of the IBA1 stain of the MP, which remains visible only at the border and in the dendrites of the MPs. This was not observed in Thbs1−/− mice. IBA1 (green fluorescence) stained cryosections of 12-month-old Cd47−/− mice confirmed the presence of pigmented foci (red arrow, Fig. 2D) in the outer retina, internally to the pigmented RPE band (blue arrow, Fig. 2D). These pigmented foci were invariably IBA1-positive (green arrow, Fig. 2E). Quantification of subretinal MPs at 2, 6, 12, and 18 months on IBA1-stained RPE flatmounts corroborates and extends our previous observation that Thbs1−/− and Cd47−/− mice accumulate subretinal MPs at 12 months, showing that the infiltration reaches a plateau from 12 months of age onwards (Fig. 2F). Quantification of subretinal pigment-laden MPs on flatmounts revealed that 80% of all IBA1+ subretinal MPs in Cd47−/− mice are filled with pigment to a point that they visibly block the red fluorescence RPE phalloidin staining when the flatmount is viewed in the red channel. This phenomenon was not observed in WT and Thbs1−/− mice, in which the phalloidin RPE staining was continuous and not obscured by overlying MPs (Fig. 2G). The average size of the bloated subretinal Cd47−/− MPs, measured as the area they cover on flatmounts, was tripled compared to control and Thbs1−/− mice (Fig. 2H). In summary, our study reveals that pigment-containing MPs can form in the retina, as seen here in aged Cd47−/− mice, similar to dermal melanophages previously described [28], independently of the TSP1-mediated CD47 pathway.
Massive intracellular accumulation of RPE-derived melanosomes/melanolipofuscin particles in subretinal MPs of Cd47−/− mice causes subretinal melanophage formation and their clinical appearance as hyperreflective foci
Melanin, the main pigment found in mammals, is located in melanosomes that are formed in melanocytes and in RPE cells. With age and lipofuscin accumulation, melanosomes can fuse to form melanolipofuscin particles. In the dermis, macrophages ingest melanosomes from neighboring melanocytes to become melanophages, similar to keratinocytes in the epidermis [28]. Subretinal MPs in Cd47−/− mice, however, do not have direct contact with melanocytes but with RPE cells, which do not physiologically traffic melanosomes to other cells.
To better define the nature of the pigment that accumulates in subretinal MPs in Cd47−/− mice, we performed serial block-face scanning electron microscopy (SBF-SEM) on 12-month-old Thbs1−/− (Fig. 3A-D) and Cd47−/− mice (Fig. 3E-H). Two blocks each of 12-month-old Thbs1−/− and Cd47−/− mice were serially cut until a subretinal MP was captured from beginning to end. Representative images (Fig. 3) show the nuclei of subretinal MPs in the midst of photoreceptor outer segments and adjacent to the RPE nuclei of Thbs1−/− and Cd47−/− mice (red asterisks, Fig. 3A and E). A detailed view of the subretinal Thbs1−/− MP (Fig. 3B) shows a melanolipofuscin particle (white arrow) and one orthogonally cut, and two spindle-shaped longitudinally cut, electron-dense melanosomes (magenta arrows), indistinguishable in electron density and shape from RPE melanosomes (blue arrows). Using Imaris imaging software, we next determined the border of the subretinal MP (Fig. 3C; green color) and the surface of all melanosomes and melanolipofuscin granules (magenta) and other organelles such as mitochondria (white) on every SBF-SEM section containing the MP (Additional file 1: Movie S1). The reconstruction of the cell reveals that the retinal Thbs1−/− MP contains only very few intracellular melanosomes/melanolipofuscin granules (<20; Fig. 3D). In contrast, the body of captured subretinal Cd47−/− MPs contained densely packed melanosomes (Fig. 3F, magenta arrows), and the three-dimensional reconstruction reveals the extent of melanosome/melanolipofuscin granule accumulation (several hundreds) in the bloated cell body of the Cd47−/− MP (Fig. 3G and H and Additional file 2: Movie S2). Transmission bright-light micrographs of RPE/retinal flatmounts, in which the RPE and retina were kept together using a protocol we usually use for RPE/retina culture, revealed the densely pigmented RPE in 12-month-old WT and Thbs1−/− mice (Fig. 3I, J). In contrast, on micrographs of age-matched Cd47−/− flatmounts taken under the same conditions (exposure time, aperture), the RPE pigmentation is much diminished and the cells appear pale in comparison to the densely pigmented subretinal melanophages (Fig. 3K). Taken together, these experiments strongly suggest that melanosomes and melanolipofuscin granules from the RPE massively accumulate in subretinal Cd47−/− MPs, inducing the melanophage phenotype, similar to AMD patients. The fact that subretinal melanophages are visible in Cd47−/− mice as HRF in OCT examination further supports the hypothesis that melanophages are the underlying anatomical features of HRFs in AMD.
Melanophage accumulation in Cd47−/− mice is associated with RPE dysmorphia
To assess whether melanophages form in Cd47−/− mice because subretinal MPs phagocytose dying RPE cells, we next assessed RPE density on phalloidin/IBA1 double-labeled RPE/choroidal flatmounts of 12-month-old Thbs1−/− and Cd47−/− mice (Fig. 4A). Quantifications of RPE cell numbers per square millimeter revealed no significant difference between WT, Thbs1−/−, and Cd47−/− mice at 12 months, and RPE cell density was similar between WT and Cd47−/− mice at 6 months of age (Fig. 4B), showing that the accumulation of subretinal melanophages in Cd47−/− mice was not associated with a significant RPE cell loss compared to controls.
The age-related MP accumulation in Thbs1−/− and Cd47−/− mice also revealed no significant loss of photoreceptor cell nuclei rows at 12 months, quantified on histological sections (data not shown), contrary to MP accumulation in Cx3cr1−/− and ApoE2-isoform-expressing mice. However, while in 6-month-old WT and Cd47−/− mice and in 12-month-old WT and Thbs1−/− mice RPE cells were hexagonal in 80% and 60% of cases (the remainder being pentagonal cells), the percentage of hexagonal cells fell to 40% in 12-month-old Cd47−/− mice (Fig. 4C). Conversely, the percentage of dysmorphic RPE cells with fewer than five or more than six neighbors/sides was significantly elevated in 12-month-old Cd47−/− mice compared to the other strains (Fig. 4A, yellow asterisks; Fig. 4D). The vast majority of RPE cells of 12-month-old Cd47−/− mice contained one or two nuclei, as in the other strains. However, a small but significant population of RPE cells had three instead of a maximum of two nuclei (Fig. 4E). In summary, our morphological analysis of the RPE demonstrates that the accumulation of melanophages in Cd47−/− mice is not primarily due to the phagocytosis of dead RPE cells. However, the RPE of 12-month-old Cd47−/− mice had undergone significant morphological changes, which might be due to the chronic contact with melanophages.
CD47-deficient RPE cells lose melanosomes/melanolipofuscin to melanophages
Physiologically, the tips of the outer segments (OS) of the photoreceptors are phagocytosed by the RPE, but when subretinal MPs accumulate, such as with age or in Cx3cr1−/− mice, they also phagocytose OS [3,29]. However, MPs rarely seem to phagocytose melanosome-containing microvilli of the RPE cells, likely due to inhibitory "don't eat me" signals of the RPE, such as CD47. To test whether MPs would phagocytose material from living RPE cells, we incubated unlabeled human monocytes with the human RPE cell line ARPE19, which we had previously labeled with FarRed CellTrace (FRCT), either in the presence of a control IgG or of the anti-CD47 blocking antibody B6H12. After 2 h of incubation, when no cell death occurs in either RPE cells or monocytes (data not shown), we observed FarRed CellTrace uptake in both conditions by flow cytometry, but the population of CD14+ monocytes had become significantly more FRCT-positive when the CD47-blocking antibody was present in the co-culture (Fig. 5A and B), showing that CD47 blockage significantly increases the transfer of FRCT+ cytoplasm from ARPE19 cells to monocytes even in this short time period. The forward scatter area (FSC-A, which reflects cell size) of CD47-blocking antibody-treated CD14+ FRCT+ Mos increased only slightly (10-15%) compared to control monocytes (data not shown), demonstrating that monocytes did not phagocytose whole ARPE19 cells (three times the size of the monocyte), but cell parts or vesicles. Next, to test whether this trafficking would take place in vivo, we transplanted CD47+/+ WT bone marrow from mice of a CD45.1 genetic background into 6-month-old lethally irradiated CD47+/+ and CD47−/− recipient mice of a CD45.2 genetic background. CD47−/− bone marrow-transplanted animals in CD47+/+ recipients were not viable, confirming previous studies that showed that the transplanted CD47−/− bone marrow gets eliminated by the recipients' splenic dendritic cells and macrophages [30].
The animals were kept for 6 months after the transplantation to allow the replacement of the retinal microglia by bone marrow-derived cells in this irradiation model without head sparing [31]. At 12 months, 6 months after the lethal irradiation, flow cytometry confirmed the successful engraftment of CD45.1 bone marrow in the recipient mice (data not shown). Phalloidin/IBA1 double-labeled RPE/choroidal flatmounts of the CD47+/+ BM/CD47+/+ recipient WT transplanted mice revealed unpigmented subretinal MPs, morphologically akin to the subretinal MPs observed in aged WT and Thbs1−/− mice (Fig. 5C, upper panels). Subretinal MPs in CD47+/+ BM/CD47−/− recipient chimeras, however, were heavily pigmented and blocked the red phalloidin fluorescence of the underlying RPE (Fig. 5C, upper panels), similar to the melanophages observed in Cd47−/− mice (Fig. 2). Quantification of the subretinal MP density in both transplanted groups was in a range comparable to that of 12-month-old WT mice, as would have been expected for the accumulation of CD47+/+ MPs. However, quantification of the percentage of melanophages in the subretinal MP population (defined as IBA1+ MPs that visibly block the red phalloidin fluorescence of the underlying RPE when viewed in the red channel) revealed that 80% of all subretinal MPs in CD47+/+ BM/CD47−/− chimeras were melanophages, comparable to Cd47−/− mice. Melanophages were not observed in CD47+/+ BM/CD47+/+ WT-transplanted mice, similar to WT mice. Together, these experiments reveal that the in vitro inhibition of the CD47-mediated "don't eat me" signal induces phagocytosis of RPE cells by monocytes, and we demonstrate, in vivo, that lack of CD47 on RPE cells is sufficient to induce the accumulation of subretinal melanophages (Fig. 6).
RPE CD47 expression decreases with age and in intermediate AMD in humans
Our data demonstrate that the vast majority of retinal MCCs in AMD are melanophages. We show that in mice melanophages form due to melanosome/melanolipofuscin transfer from the RPE to subretinal MPs secondary to the reduced expression of the "don't eat me" signal CD47. Quantitative RT-PCR of macular RPE mRNA preparations from 35 "healthy" subjects older than 60 years revealed that RPE Cd47 mRNA expression significantly decreases with age (Fig. 6A; slope significantly different from zero, p = 0.0261), the most important risk factor for AMD. RPE mRNA preparations were obtained by applying 350 µl of RA1 lysis buffer directly to the posterior pole of donor eyes from which we had previously removed the retina. These mRNA preparations are highly enriched for RPE transcripts and contain little choroidal contamination (data not shown). The subjects had a normal post-mortem fundus appearance and no known history of AMD or other retinal diseases. Subanalysis of Cd47 mRNA expression levels according to CFH Y402H and the 10q26 variant revealed no influence of these genetic risk factors on Cd47 expression (data not shown). We next analyzed Cd47 expression in the publicly available transcriptome data of RPE/choroid samples of healthy subjects and controls from Newman et al. [23]. We first filtered the data to keep only subjects older than 60 and analyzed the data from the "normal" samples (controls, n = 36) and from intermediate AMD patients (n = 18) classified in Newman et al.
as "MD2" (n = 4; soft distinct drusen > 63 µm/pigmentary changes) and "dry AMD" (n = 14; soft indistinct drusen > 125 µm, reticular drusen, soft distinct drusen in association with pigmentary changes, soft distinct drusen in association with pigmentary changes). Samples from late AMD patients were not included in the analysis as there were only two to four per subgroup in the dataset. As the amount of RPE mRNA might vary from sample to sample depending on how many viable RPE cells the RPE/choroidal extractions contained, we normalize the expression data of each of the RPE/ choroid samples for the content of RPE mRNA using 40 RPE-specific transcripts. These RPE-specific transcripts were selected from the single cell transcriptomic dataset of Voigt et al. [24]. Similar to our RPE mRNA preparations, normalized Cd47 mRNA expression in the central RPE of healthy control subjects diminished significantly with age ( Fig. 6B; n = 36, p = 0.0156 deviant from zero). In intermediate AMD patients, the linear regression line of Cd47 mRNA expression with age was below that of the healthy subjects and did not reach significance (intAMD n = 18; p = 0.0704 deviant from zero). Last but not least, Cd47 mRNA expression in the Newmann data set revealed a significantly lower expression of CD47 mRNA in the central samples of intAMD patients compared to controls [23] (*Mann-Whitney p = 0.0217). Taken together this data demonstrates that RPE Cd47 transcriptions diminishes with age and in intermediate AMD. This diminished expression of one of the "don't eat me" signals might promote melanophage formation and associated RPE dysmorphia in AMD where subretinal MPs accumulate. Discussion Not all patients with early/intermediate AMD progress to develop late debilitating AMD and many patients stay stable for years [4,5]. Hyperreflective foci (HRF) on SD-OCT, provoked by melanosome/melanolipofuscin-containing cells (MCCs) interior to the RPE band [9], have been recognized to be a highly reliable progression biomarker for neovascular AMD and geographic atrophy [6,25,32,33]. The mechanism responsible for AMD progression must therefore cause the appearance of retinal MCCs or MCCs themselves trigger progression to late AMD. It has widely been assumed that MCCs causing HRFs are provoked by RPE cells that migrate into the retina [9,34]. In a process known as type-2 epithelial-mesenchymal transition (EMT) epithelia from kidney, lung an intestine have been shown to transdifferentiate into mesenchymal cells and participate in fibrosis in inflammatory conditions [35] and a similar transition has been proposed to occur in retinal disease [36]. However, the identification of MCCs as RPE cells in AMD patients is based only on the fact that both contain melanosomes and melanolipofuscin [9], despite the fact that melanosomes, lipofuscin, and melanolipofuscin particles are not specific marker of RPE cells. In retinitis pigmentosa models subretinal macrophages have been beautifully shown to be autofluorescent and to contain melanofuscin particles that are indistinguishable from the RPE [37,38]. Melanosomes are also produced in melanocytes and can be trafficked from cell to cell. Melanocytes in the skin transfer melanosomes to keratinocytes in the epidermis, but also to macrophages in the normal dermis [28] and in skin disorders [20][21][22], giving rise to melanophages. Therefore, MCCs, provoking HRF in the retina in AMD, are not necessarily migrating RPE cells. 
Indeed, our study of sections from 12 eyes with MCCs from 11 AMD patients failed to detect RPE-specific peropsin or RPE65 in any retinal MCCs in any of the sections, but strongly marked all RPE cells in the monolayer even those close to atrophic lesions where RPE cell death occurs. Additionally, we found no evidence that MCCs are positive for GFAP, a marker of astrocytes in the healthy retina and of astrocytes and activated Müller cells in AMD. On the other hand, the majority of MCCs stained positive for MP markers CD68, CD163, and IBA1, which never stained RPE cells integrated in the monolayer and are not known to be expressed by any other retinal or mesenchymal cells, showing a high degree of specificity in this tissue. These results confirm a recent report using similar markers [26]. 70 to 100% of the surface of the histological section that contained pigment was positive for at least one of the MP markers, identifying the majority of MCCs as autofluorescent, melanin-containing MPs, melanophages. In reality the part of melanophages in the MCC population is likely higher as (i) we were not able to stain simultaneously for the three MP markers as they require different substrate incubation times and incompatibilities of secondary antibodies; (ii) MCCs might be negative for our chosen MP markers but positive for others; and (iii) all pigment is not necessarily within cells at all times. It has been argued that the migrating RPE are de-differentiated to a point that they cease to express RPE-specific markers, or that the MCCs positive for MP markers are RPE cells that transdifferentiated into MPs [26,39]. While RPE cells have been shown to undergo type-2 EMT in vitro, there is no evidence a transdifferentiation into a highly differentiated MP is possible. Keeping in mind that intracellular melanosomes and melanolipofuscin particles are not specific to RPE cells but exist in other cell types and notably in macrophages in the form of melanophages, it seems most plausible that autofluorescent, pigmented cells positive for MP markers (in a condition where MPs infiltrate the tissue) are indeed just that: melanophages. While melanophages form in the dermis through ingestion of melanosomes from neighboring melanocytes [28], very little is known about the mechanisms of retinal melanophage formation. In GA one might assume that MPs infiltrating the atrophic lesions will phagocytose RPE debris from dead RPE cells, but HRFs, and by extension melanophages, also appear in intermediate-and neovascular-AMD where RPE death is not a prominent feature [32]. These clinical observations raise the question how retinal melanophages form and whether their presence is sufficient to give rise to HRF on SD-OCT. We recently showed that both major genetic AMD risk factors, the CFH H402 variant and a 10q26 haplotype, inhibit TSP1-mediated CD47 activation and subretinal MP elimination, promoting pathogenic inflammation [17,18]. We here confirm that both aged Thbs1 −/− -and Cd47 −/− -mice develop subretinal MP accumulation [17]. However, there was a remarkable difference between the two knockout mouse strains: in Cd47 −/− -mice the cell bodies of subretinal MPs were bloated with a dense pigment, which SBF-SEM reveals was due to densely packed intracellular melanosomes/melanolipofuscin particles, only rarely observed in Thbs1 −/− -mice (Figs. 2 and 3) and with striking similarities to melanophages in the dermis of mice [28] and the retinae of AMD patients (Fig. 1). 
At the same time, the RPE surrounding the melanophages in Cd47−/− mice was markedly less pigmented compared to WT and Thbs1−/− mice, suggesting that melanosomes and melanolipofuscin particles had been transferred from the RPE to the melanophages. In SD-OCT examination, numerous hyperreflective foci adjacent to the RPE line were visible only in Cd47−/− mice, characterized by their accumulation of subretinal melanophages. These results confirm experimentally that retinal melanophages can form subretinally and provoke HRFs in SD-OCT imaging. The analysis of the RPE monolayer of Cd47−/− mice revealed no age-related or strain-related change in RPE cell numbers, suggesting that the melanosomes/melanolipofuscin of the melanophages did not stem from the phagocytosis of dead RPE cells but were transferred from live RPE cells. However, the percentage of regularly shaped hexagonal RPE cells was significantly reduced in favor of irregularly shaped RPE cells, with an increased variability in size, in aged Cd47−/− mice (Fig. 4), a feature also observed in wild-type mice twice the age [40], hyperinflammatory mice [41], and most importantly in intermediate AMD [42]. To date, we do not know whether these morphological RPE changes are directly due to the absence of CD47 in the RPE or to the chronic presence of melanophages. CD47, independently of TSP1, functions as the ligand for signal regulatory protein α (SIRPα) [19]. SIRPα is expressed on all MPs, and its ligation by CD47 induces a "don't eat me" signal that inhibits the phagocytosis of the CD47-expressing cell [19]. Melanosomes observed in melanophages of Cd47−/− mice could therefore stem from aberrantly phagocytosed parts of the RPE, such as the RPE microvilli to which melanosomes migrate after light onset [46]. Indeed, when we co-cultured monocytes with a FarRed CellTrace pre-stained human RPE cell line for 2 h, we observed a transfer of CellTrace-stained material from the RPE to the monocytes when CD47 was inhibited. Importantly, using CD47+/+ bone marrow transplantation into CD47+/+ and CD47−/− recipients, we created an in vivo mouse model where subretinal CD47+/+ MPs accumulate adjacent to either CD47+/+ or CD47−/− RPE cells. As expected, the level of subretinal CD47+/+ MP accumulation of around 170 subretinal MPs/eye (10 MPs/mm²) in both chimeras was comparable to wild-type mice. However, we only observed melanophages in CD47−/− recipient mice, confirming that the lack of the "don't eat me" signal in the recipient mice is responsible for the occurrence of the subretinal melanophage phenotype. Last but not least, we demonstrate that RPE Cd47 transcription diminishes in human subjects with age, the most important AMD risk factor, and in intermediate AMD compared to control subjects (Fig. 6). This diminished expression of a "don't eat me" signal on the aging RPE likely promotes melanophage formation and associated RPE dysmorphia if it coincides with subretinal MP accumulation in AMD, similar to our observations in mice. Interestingly, in aged Cd47−/− and Cd47−/−-recipient transplanted mice, melanophages constituted 80% of the subretinal MP population. It is not yet clear whether the "unpigmented" 20% of MPs constitute a different subtype of MPs or whether they infiltrated more recently and had not yet acquired the melanophage phenotype.
In the dermis, it has been shown that tattoo pigment, which is phagocytosed and kept intracellularly by dermal "pigmented" MPs similar to melanophages, is released upon the death of the pigment-containing MP and taken up by infiltrating MPs that thereby become "pigmented" in a pigment capture-release-recapture cycle [28]. Although purely hypothetical at this stage, a modified pigment capture-release-recapture cycle might take place in the retina: anti-VEGF treatment, which accelerates retinal MP elimination/death in the laser-induced model of neovascular AMD [47], also astonishingly decreases the number of HRFs in patients [32]. If anti-VEGF treatment in patients increased the death of melanophages, the liberated melanin-containing particles would be passively transported towards the RPE due to the directional flow of water and ions [48] and could be re-phagocytosed by the RPE. Indeed, RPE cells eagerly take up melanosomes they are in contact with in vitro [49]. The concept that HRFs in AMD are provoked by melanophages, the melanosome/melanolipofuscin-containing subgroup of MPs that infiltrate the retina in AMD, is also supported by the observation that they are more common in carriers of both major genetic AMD risk factors, the CFH H402 variant and a 10q26 haplotype [8], which we showed promote the accumulation of MPs in the retina [17,18]. In summary, our study provides several lines of evidence that retinal melanophages are at the origin of HRFs and shows how they might form in AMD. Our immunohistochemistry on AMD donor eyes demonstrates that MCCs express MP markers but not RPE or macroglial cell markers. We show how retinal melanophages can form in mice in vivo prior to RPE cell death, due to the lack of the CD47 "don't eat me" signal on RPE cells, and give rise to HRF in SD-OCT. Importantly, we demonstrate that Cd47 transcription diminishes with age and in intermediate AMD, which likely promotes melanophage formation in AMD (Fig. 7). Together, with our previous
Assessment of Bacteriocin-Antibiotic Synergy for the Inhibition and Disruption of Biofilms of Listeria monocytogenes and Vancomycin-Resistant Enterococcus
In this study, we have evaluated the effects of previously characterized bacteriocins produced by E. faecium strains ST651ea, ST7119ea, and ST7319ea against biofilm formation and biofilms formed by L. monocytogenes ATCC15313 and vancomycin-resistant E. faecium VRE19. The effects of the bacteriocins on the biofilms formed by L. monocytogenes ATCC15313 were evaluated by crystal violet assay and further confirmed by quantifying viable cells and cell metabolic activities through flow cytometry and TTC assay, respectively, indicating that the bacteriocin activities required to completely eradicate biofilms are at least 1600 AU mL−1, 3200 AU mL−1, and 6400 AU mL−1, respectively, for each bacteriocin evaluated. Furthermore, bacteriocins ST651ea and ST7119ea require at least 6400 AU mL−1 to completely eradicate the viability of cells within the biofilms formed by E. faecium VRE19, while bacteriocin ST7319ea requires at least 12800 AU mL−1 to obtain the same observations. Assessment of synergistic activities between selected conventional antibiotics (ciprofloxacin and vancomycin) and these bacteriocins was carried out to evaluate their effects on biofilm formation and pre-formed biofilms of both test microorganisms. Results showed that higher concentrations are needed to completely eradicate the metabolic activities of cells within pre-formed biofilms than to suppress the biofilm formation abilities of the strains. Furthermore, the synergistic activities of the bacteriocins with both ciprofloxacin and vancomycin are more evident against vancomycin-resistant E. faecium VRE19 than against L. monocytogenes ATCC15313. These observations can be further explored for possible applications of these bacteriocin-antibiotic combinations as a possible treatment of clinically relevant pathogens.
Introduction
Biofilms are typically composed of either a homogeneous population or a mixture of different species/strains forming a structured multi-cellular community, enclosed in a complex matrix that typically acts as a protective barrier against various antimicrobial substances [1,2]. Bacterial communities enclosed in this structure (biofilms) are usually comprised of highly dense cells in close proximity, made up of live microorganisms, dead cells, and numerous biopolymers. Furthermore, complex chemical gradients and compositions are also found within these ecosystems. This enables microorganisms within the system to occur in a wide array of functional physiological states that allow them to survive the fluctuating conditions within the film. Thus, biofilms provide an environment with a high probability of interspecies or intraspecies genetic material exchange, which, in turn, can result in the development of highly adaptive microorganisms such as antimicrobial-resistant strains [1,3,4]. Biofilm formation is regulated by intercellular signaling, through the release of specific metabolic products, that triggers a phenomenon called quorum sensing [5][6][7]. Bacterial biofilms formed by spoilage or food-borne pathogenic organisms within food systems have been one of the major problems faced by the industry [7]. This has also been discussed by Poulsen [8], including the various negative effects of biofilms in food processing involving engineering, health care, and food technological facets [9][10][11][12][13][14].
L. monocytogenes, a known food-borne pathogen that causes listeriosis, has been considered a primary safety concern in the food industry [15]. According to the regulations in the EU and the USA, a zero-tolerance policy for L. monocytogenes is applied in the food industry. This is due to its various adaptive mechanisms for surviving a wide range of environmental conditions, including adaptation to acidic and osmotic stress, and its psychrotrophic properties [16,17]. All these physiological characteristics enable this pathogen to survive the multiple hurdles employed in the production of fresh produce and processed foods, a huge and profitable industry [18]. Although biofilm formation is not considered a primary virulence factor for L. monocytogenes, the capacity of any potentially pathogenic bacterium to form a biofilm enhances its survival in aberrant niches, which amplifies its potential to cause serious contamination and health-associated consequences. This can be attributed to the adaptive capabilities of microorganisms enclosed in this film to survive in extreme environments such as the surfaces of fomites or in the presence of disinfectants or antimicrobials, especially in clinical and food-production settings, where these compounds are used frequently, consequently facilitating the selection and development of resistant pathogens [19][20][21]. The silent war against the continuous emergence of AMR or multidrug-resistant (MDR) microorganisms has been going on for decades. The frequent use and misuse of antibiotic drugs, amplified during the COVID-19 pandemic by reduced access to healthcare under sanitary restrictions, lockdowns, remote consultations, and uncontrolled antibiotic therapy for patients confined at home, are among the factors that may have accelerated the emergence, development, and selective survival of these pathogens. Thakur et al. [15] have predicted that about 10 million AMR infection-associated deaths will be recorded in the year 2050, surpassing deaths associated with cancer, measles, diarrheal diseases, and diabetes. Nosocomial infections associated with MDR organisms are frequent in immunocompromised individuals. One example is the emergence and increasing occurrence of vancomycin-resistant enterococci (VRE), especially in clinical settings. According to the CDC (2019), enterococcal infections have been a minor occurrence (<10%); however, the increasing number of nosocomial infections led the WHO to prioritize this pathogen, along with Salmonella, Helicobacter pylori, and Staphylococcus aureus, for the elucidation and discovery of alternative control agents [22,23]. Although enterococci are known to be common members of the human microbiota, typically localized in the lower gastrointestinal tract of humans, in some cases their occurrence in aberrant niches within the host poses a serious health problem. Serious conditions associated with these opportunistic pathogens include infective endocarditis, urinary tract infections (UTIs), rare cases of intra-abdominal infections and meningitis, and systemic infections such as bacteremia [24,25]. Another concern raised for this opportunistic pathogen is its ability to form biofilms on fomites, particularly catheters, which has been noted to contribute to at least 25% of catheter-associated UTIs [26].
As aforementioned, although the capacity to form biofilms has not been of primary concern, it has a cumulative impact on the threat these organisms pose; thus, it was included among the considerations raised by the European Food Safety Authority for the safety assessment of probiotic candidates belonging to the enterococci [27]. In the quest for naturally occurring alternatives to antibiotics, antimicrobial peptides or bacteriocins, small bioactive peptides that typically inhibit the growth of closely related microorganisms, can be considered promising candidates [28,29]. Although an arsenal of antimicrobial by-products is produced by LAB, bacteriocins have been identified as stable and highly potent [29,30]. In addition, these antimicrobials have long been employed as naturally occurring preservatives in various fermented goods and are also employed in fresh produce and minimally processed foods [31]. Their use as an alternative to antibiotics and other commercial antimicrobials has long been advocated by various scientific groups and individuals [32][33][34]. However, their effect on the biofilms of pathogenic microorganisms has also gained attention, owing to their potency, nature, stability under different environmental factors, and the precision of their target spectra [28,35]. In a previous study [36], bacteriocinogenic strains of Enterococcus faecium ST651ea, ST7119ea, and ST7319ea were isolated from Korean traditional soybean paste, and the expressed bacteriocins were characterized. It was shown that bacteriocins ST651ea, ST7119ea, and ST7319ea were proteinaceous in nature and remained bioactive after exposure to a wide range of temperatures and pH values and in the presence of chemicals commonly applied in protein purification processes and/or the food industry [36]. Moreover, based on the sequences of amplicons generated by PCR targeting known enterocin genes and the reconstructed amino acid sequences of the putative enterocins produced, it was concluded that E. faecium ST651ea, ST7119ea, and ST7319ea can be considered producers of modified enterocins A, B, and P [36]. Thus, this study aimed to evaluate the effects of previously characterized bacteriocins with potent inhibitory effects against Listeria spp. and VRE [36] against the biofilms formed by L. monocytogenes and vancomycin-resistant Enterococcus faecium. Furthermore, the study also aimed to assess the possible synergistic activities of the bacteriocins with ciprofloxacin, a broad-spectrum fluoroquinolone commonly used for UTIs and renal infections, or vancomycin, one of the drugs commonly used to treat systemic infections, against biofilm formation and biofilms formed by both test microorganisms.
Bacteriocin Preparation
The previously isolated and characterized bacteriocinogenic strains E. faecium ST651ea, ST7119ea, and ST7319ea [36], deposited in the collection of HEM Pharma Ltd. (Suwon, Korea), were grown in MRS (Difco, Franklin Lakes, NJ, USA) for 18 h at 37 °C. Bacteriocin-containing cell-free supernatants (CFS) were collected by centrifugation (4000× g at 4 °C, 30 min), filter-sterilized (0.22 µm Sartorius Minisart hydrophobic syringe filters, Göttingen, Germany), and heat-treated (80 °C for 10 min) to inactivate potentially produced heat-labile antimicrobial proteins or extracellular proteolytic enzymes. As previously shown by Fugaban et al. [36], the studied strains E. faecium ST651ea, ST7119ea, and ST7319ea produced bacteriocins with high similarity to enterocins A, B, and P, characterized as thermostable polypeptides.
Semi-purification of the bacteriocins was carried out as previously described by Fugaban et al. [36]. The bacteriocins expressed by the studied strains were precipitated from 500 mL of bacteriocin-containing CFS using ammonium sulfate at 60% protein saturation. Precipitated proteins were collected by centrifugation (20,000× g, 60 min, 4 °C), and the obtained pellets were re-suspended in 50 mL of 25 mM potassium phosphate buffer, pH 6.5. Hydrophobic column chromatography (Sep-Pak C18, Waters Millipore, Milford, MA, USA) was used to separate the precipitated proteins, eluted with a step gradient from 20% to 80% isopropanol in 25 mM phosphate buffer (pH 6.5). The obtained partially purified bacteriocins were stored at −20 °C and used throughout the study. Bacteriocin activity was evaluated as previously described by Fugaban et al. [36]. Appropriate controls were applied to confirm that the observed inhibition properties were consequences of the effect of the bacteriocins and not of the chemicals applied in the purification process.
Determination of Minimum Inhibitory Concentrations (MIC) of Antibiotics against Planktonic Cells of L. monocytogenes ATCC15313 and E. faecium VRE19
The MICs of the antibiotics vancomycin (CheilJedang Pharma Co., Seoul, Korea) and ciprofloxacin (Sigma-Aldrich, St. Louis, MO, USA) were determined for L. monocytogenes ATCC15313 and E. faecium VRE19 (provided by Prof. Kwak, Handong Global University, Pohang, Korea) via broth microdilution assay according to the recommendations of the Clinical and Laboratory Standards Institute (CLSI). The test organisms L. monocytogenes ATCC15313 and E. faecium VRE19 were grown in BHI for 18 h at 37 °C, and the cells were harvested (4000× g, 10 min) and washed twice with sterile 1× PBS (Lonza, Basel, Switzerland) before being re-suspended in the same solution. The antibiotics used in the assay were prepared as suggested by the guidelines. For both antibiotics, 256 µg mL−1 was used as the highest final concentration, and two-fold serial dilutions were prepared. The previously prepared antibiotics were distributed in a 96-well flat-bottom microplate (SPL Life Sciences, Pochon, Kyonggi-do, Korea) to a final volume of 60 µL, leaving the last two columns as controls (growth and sterility controls). Inocula were prepared by adjusting the harvested cells to 0.5 McFarland units (approximately 10⁷ CFU mL−1) and distributing them in the corresponding plates for each antibiotic. Plates were incubated for 18 h at 37 °C, and the MIC, defined as the lowest antibiotic concentration that completely inhibits bacterial growth, was determined by visual assessment and confirmed by spectrophotometry (OD600 nm).
Determination of Minimum Inhibitory Concentrations (MIC) of Bacteriocins against Planktonic Cells of Target Microorganisms
Activities of the semi-purified bacteriocins ST651ea, ST7119ea, and ST7319ea were assessed as suggested by Todorov and Dicks [37] and Todorov et al. [38] against the planktonic cells of L. monocytogenes ATCC15313 and E. faecium VRE19. Sterile BHI was inoculated with 10% of 18-h-old cultures of the selected test organisms. Eighty microliters of the prepared bacterial suspension were distributed to the first 11 columns of sterile 96-well microtiter plates. Different concentrations of the semi-purified bacteriocins were prepared by two-fold dilution in sterile 100 mM potassium phosphate buffer, pH 6.5.
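The broth-microdilution MIC read-out described above (visual scoring confirmed by OD600) reduces to finding the lowest concentration in the two-fold series whose turbidity stays at the sterility-control level. The sketch below applies that definition to a single dilution series; the OD values and the no-growth threshold are assumptions for illustration and are not CLSI-prescribed numbers.

```python
# Two-fold dilution series starting at 256 µg/mL, as in the assay set-up above.
concentrations = [256 / (2 ** i) for i in range(10)]   # 256 ... 0.5 µg/mL
od600          = [0.05, 0.05, 0.06, 0.05, 0.07, 0.31, 0.62, 0.78, 0.81, 0.83]
sterility_od   = 0.05                                   # uninoculated BHI (hypothetical)
growth_od      = 0.85                                   # untreated culture (hypothetical)

# Assumed cut-off: a well counts as "no visible growth" if its OD stays close
# to the sterility control.
threshold = sterility_od + 0.1 * (growth_od - sterility_od)

no_growth = [c for c, od in zip(concentrations, od600) if od <= threshold]
mic = min(no_growth) if no_growth else None
print(f"MIC = {mic} µg/mL" if mic is not None else "MIC > highest tested concentration")
```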
Equal amounts of the corresponding bacteriocin dilutions were dispensed into the wells of the first 10 columns to obtain a 1:1 ratio of bacterial culture and bacteriocin. The untreated column was used as growth control, while sterile BHI added to the 12th column was used as sterility control. All setups were incubated at 37 °C for 18 h. The MIC was determined as the lowest concentration required to completely inhibit bacterial growth. Molecular Detection of Vancomycin Resistance-Associated Genes of E. faecium VRE19 The clinical isolate E. faecium VRE19, identified as resistant to vancomycin based on antibiogram profiling carried out through microbroth dilution assay and confirmed through ETEST® antibiotic strips (bioMérieux, Marcy-l'Étoile, France), was screened further for the presence of vancomycin resistance genes including vanA, vanB, vanC, vanD, vanE, and vanG. Bacterial cells of E. faecium VRE19, grown in 100 mL of BHI overnight at 37 °C, were used for DNA isolation with the ZR Fungal/Bacterial DNA Kit (Zymo Research, Irvine, CA, USA), carried out according to the manufacturer's recommendations. The DNA concentration and purity were assessed using a SPECTROstar Nano nanodrop (BMG LABTECH, Rotenberg, Germany) before the PCR assay, which was carried out as previously described by Fugaban et al. [36]. Biofilm Formation of L. monocytogenes ATCC15313 and E. faecium VRE19 The ability of L. monocytogenes ATCC15313 and E. faecium VRE19 to form biofilms was assessed as suggested by Doijad et al. [39] with some modifications. Briefly, 18 h-old cultures of the respective strains were inoculated in sterile BHI at a final cell concentration of ~10⁵ CFU mL⁻¹. One hundred and fifty microliters were transferred to the first 10 columns of sterile 96-well flat-bottom microtiter plates (SPL Life Sciences), while the last column received sterile BHI only to serve as sterility control. Prepared plates were incubated at 37 °C for 24-36 h to allow the setups to form biofilms. Quantification of Biofilms by Crystal Violet Assay After allowing the biofilms to form, a crystal violet assay was carried out to quantify the biofilms as suggested by Todorov et al. [38] with some modifications. The assay was carried out by carefully discarding the cultures, followed by washing with 1× PBS. The attached biofilms were fixed with 120 µL of methanol for 15 min, and the excess was discarded. Subsequently, the plates were left to dry for an additional 10 min and stained with 120 µL of 1% (w/v) crystal violet for 15 min. The excess crystal violet was flushed out using distilled water, and the plates were left to dry for 30 min. The crystal violet adhered to the biofilms was extracted with 95% ethanol (v/v) and incubated for 15 min before the absorbance reading at OD 550 nm (SPECTROStar). The biofilm formation ability of the test organisms used in this study was assessed based on the guidelines described by Stepanović et al. [40], and the statistical evaluation of significant differences among samples was carried out using t-test analysis (p < 0.05). Quantification of Viable Cells from Bacteriocin-Treated Biofilms of L. monocytogenes ATCC15313 and E. faecium VRE19 by Flow Cytometry The proportions of viable, damaged, and dead bacterial cells from bacteriocin-treated setups were quantified using a dye-exclusion assay with propidium iodide (PI). Biofilms were allowed to form in flat-bottom 12-well sterile microtiter plates containing 1 mL of BHI inoculated with ~10⁶ cells mL⁻¹ for 24-36 h.
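As a compact illustration of the Stepanović et al. [40] guidelines cited above, the following Python sketch (our own illustrative reading of those cut-offs, not code from the study; all OD values are hypothetical) classifies biofilm formation from crystal violet OD550 readings, using a cut-off ODc defined as the mean of the sterility-control wells plus three standard deviations.

from statistics import mean, stdev

def classify_biofilm(sample_ods, control_ods):
    # ODc = mean OD of the negative (sterility) controls + 3 standard deviations.
    odc = mean(control_ods) + 3 * stdev(control_ods)
    od = mean(sample_ods)
    if od <= odc:
        return "non-producer"
    if od <= 2 * odc:
        return "weak"
    if od <= 4 * odc:
        return "moderate"
    return "strong"

# Hypothetical OD550 values for sample wells and sterile BHI controls.
print(classify_biofilm([0.92, 0.88, 0.95], [0.08, 0.09, 0.07]))  # -> "strong"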
Bacteriocins ST651ea, ST7119ea, and ST7319ea were prepared in aliquots of different concentrations using 100 mM phosphate buffer (pH 6.5). The liquid culture from the plates was discarded and replaced with 1 mL of previously prepared bacteriocin, whereas sterility control and growth control wells received sterile phosphate buffer. The biofilm challenge assay was carried out for 1 h. Determination of viable bacterial cells by dye-exclusion assay with PI (Sigma-Aldrich, St. Louis, MO, USA) and flow cytometry was carried out as suggested by R&D Systems (Sigma-Aldrich). Samples of 0.5 mL from each well were drawn, and cells were harvested by centrifugation at 10,000× g for 10 min. The obtained pellets were resuspended in 1× staining buffer formulated with 1× PBS, 0.5% bovine serum albumin (BSA, Sigma-Aldrich), and 0.05% NaN₃ (Sigma-Aldrich). Bacterial suspensions were stained with PI (final concentration of 30 µg mL⁻¹) for 5 min in the dark. Sorting and quantification of cells were performed using a ZE5 flow cytometer and analyzed using Everest software v 2.2.08.0 (Bio-Rad Laboratories, Hercules, CA, USA). Growth control and sterility control were included. Determination of Metabolic Activity Detection of microbial viability was carried out as suggested by Krajenc et al. [41] with modifications as follows. The pre-formed biofilm challenge was carried out as previously described, but instead of crystal violet staining, 100 µL of BHI supplemented with 0.1% triphenyl tetrazolium chloride (TTC, Sigma-Aldrich) was added to each well and incubated for 6 h at 37 °C. The medium was discarded, and metabolic activity was then assessed based on the development of a red color, which denotes successful extraction of formazan from the viable cells, by adding 150 µL of a 70:30 ethanol:acetone solution to each well and incubating it for 18 h at 37 °C. Complete abrogation of metabolic activity was used to determine and analyze the synergistic activities against the test organisms and setups used. Assessment of Synergistic Activities of Bacteriocins and Antibiotics against Biofilm Formation of L. monocytogenes ATCC15313 and E. faecium VRE19 The synergistic activities of each bacteriocin with either vancomycin or ciprofloxacin were assessed as binary combinations using the previously identified MICs as baselines for the highest concentrations of the combination cocktails. Each binary antimicrobial cocktail was prepared using a 1:1 (v/v) ratio of the designated bacteriocin and the corresponding antibiotic at the designated concentrations. All the bacteriocins studied were prepared in two-fold dilutions as previously described, whereas the antibiotics were prepared as described by the CLSI for the preparation of antibiotics for antimicrobial susceptibility testing (AST). BHI seeded with 18 h-old cultures of the corresponding test organisms (L. monocytogenes ATCC15313 and E. faecium VRE19) was distributed individually in 96-well flat-bottom sterile microtiter plates. Each well received 70 µL of test organism, leaving the last two columns for sterility control and growth control. A total of 70 µL of the previously prepared binary antimicrobial cocktails of corresponding concentrations and ratios was dispensed accordingly. Plates were incubated for 36 h at 37 °C and quantified and analyzed as previously described. All setups were carried out in duplicate.
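The flow-cytometry quantification described above ultimately reduces to simple proportions of gated events; the sketch below (hypothetical gate counts only, since the actual gating was performed in the Everest software) shows how the viable, damaged and dead fractions of a bacteriocin-treated biofilm sample can be expressed as percentages.

def pi_fractions(viable, damaged, dead):
    # Convert gated event counts from the PI dye-exclusion assay into percentages.
    total = viable + damaged + dead
    return {label: round(100 * count / total, 1)
            for label, count in (("viable", viable), ("damaged", damaged), ("dead", dead))}

# Illustrative counts, e.g. for a biofilm challenged with a high bacteriocin concentration.
print(pi_fractions(viable=1200, damaged=300, dead=8500))  # -> {'viable': 12.0, 'damaged': 3.0, 'dead': 85.0}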
Synergistic activities were interpreted using the fractional inhibitory concentration (FIC) index, calculated as FIC index = FIC(A) + FIC(B) = (MIC of A in combination/MIC of A alone) + (MIC of B in combination/MIC of B alone), where A is the bacteriocin used in the setup and B is the corresponding antibiotic. Results were interpreted as suggested by Faleiro and Miguel [42], where indices between 0 and 0.5 indicate synergistic activity in a two-component system; values between 0.5 and 1.0 are considered to have an additive effect on bacterial inhibition; values between 1.01 and 2.0 are indicative of indifference between the two combined inhibitory substances; and values between 2.0 and 4.0 indicate antagonism. Evaluation of Synergism of Bacteriocins and Antibiotics on the Biofilm Formed by L. monocytogenes ATCC15313 and E. faecium VRE19 The pre-formed biofilms of the test organisms assessed in this study were challenged using the same binary antimicrobial cocktails as previously described. Formation of L. monocytogenes ATCC15313 and E. faecium VRE19 biofilms was carried out in 96-well flat-bottom sterile microtiter plates using BHI seeded with 10% of each test organism. Each well was inoculated with 120 µL of the appropriate bacterial suspension, along with the growth control, while the same volume of BHI was used for the sterility control. Each corresponding setup was carried out in triplicate. All prepared biofilm plates were incubated for 36 h at 37 °C. Before the biofilm challenge, planktonic cells from the biofilm plates were removed by discarding the culture followed by washing the plates twice with sterile 1× PBS. Plates were left to dry for 15 min in a sterile environment. Bacteriocins of the corresponding concentrations were prepared as previously described, and 100 µL of each corresponding treatment was distributed accordingly. The biofilm challenge assay was carried out for 2 h at 37 °C. Remnant biofilms after the assay were quantified as previously described. Synergy was assessed through calculated FIC values. MIC of Antimicrobials Used Bacteriocins produced by E. faecium strains ST651ea, ST7119ea, and ST7319ea were obtained from CFS collected after cultivation in MRS for 24 h at 37 °C and precipitation with ammonium sulfate (60% saturation). After chromatography on SepPakC18, fractions eluted with 60% isopropanol in 25 mM phosphate buffer (pH 6.5) presented the highest bacteriocin activity. Taking into consideration the levels of bacteriocin activity and the color of the fractions eluted with 40%, 60%, and 80% isopropanol in 25 mM phosphate buffer (pH 6.5), the 60% isopropanol fraction was selected for further application. The semi-purified bacteriocins produced by E. faecium strains ST651ea, ST7119ea, and ST7319ea, previously characterized by Fugaban et al. [36], were further assessed for their minimum inhibitory concentrations and their potential to inhibit biofilms. In this study, confirmation of the MICs against planktonic cells of both test organisms was conducted in liquid culture as suggested by Todorov et al. [38]. The recorded concentrations of semi-purified bacteriocins ST651ea, ST7119ea, and ST7319ea needed to completely inhibit the growth of planktonic L. monocytogenes ATCC15313 were 1600 AU mL⁻¹, 3200 AU mL⁻¹, and 3200 AU mL⁻¹, respectively, while the MICs recorded for E. faecium VRE19 were 1600 AU mL⁻¹, 3200 AU mL⁻¹, and 6400 AU mL⁻¹, accordingly.
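For clarity, the FIC index defined earlier in this section can be computed and interpreted (per the Faleiro and Miguel [42] ranges quoted above) with a few lines of Python; the MIC values used below are illustrative placeholders, not measured data from this study.

def fic_index(a_alone, b_alone, a_combo, b_combo):
    # A = bacteriocin, B = antibiotic; the "combo" values are the MICs of each agent when combined.
    return a_combo / a_alone + b_combo / b_alone

def interpret(fic):
    if fic <= 0.5:
        return "synergy"
    if fic <= 1.0:
        return "additive"
    if fic <= 2.0:
        return "indifference"
    return "antagonism"

# Hypothetical example: bacteriocin MIC 3200 AU/mL, vancomycin MIC 128 mg/L,
# each effective at a quarter of its individual MIC when combined.
fic = fic_index(3200, 128, 800, 32)
print(fic, interpret(fic))  # -> 0.5 synergy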
These recorded activities were used as reference points for the identification of the minimum inhibitory concentrations of the studied bacteriocins against the planktonic cells of L. monocytogenes ATCC15313 and E. faecium VRE19. On the other hand, the MICs of ciprofloxacin and vancomycin were quantified using microbroth dilution. The MICs of ciprofloxacin against L. monocytogenes ATCC15313 and E. faecium VRE19 were 512 mg L⁻¹ and 128 mg L⁻¹, respectively, while vancomycin, a glycopeptide antibiotic, required at least 64 mg L⁻¹ against L. monocytogenes ATCC15313 and 128 mg L⁻¹ against E. faecium VRE19 to completely inhibit the growth of their planktonic cells. Molecular Detection of Vancomycin Resistance-Associated Genes in E. faecium VRE19 Confirmation of the phenotypic vancomycin resistance previously observed in the test organism E. faecium VRE19 was carried out through a PCR-based approach. Results indicated that E. faecium VRE19 carries vancomycin resistance coded by the vanA and vanB genes. The phenotypic demonstration of this resistance was the survival of the resistant enterococci at high concentrations of vancomycin (≤250 mg L⁻¹). In this study, the previous MIC detection assays confirm the phenotypic manifestation of this observation. Biofilm Inhibition by Partially Purified Bacteriocins ST651ea, ST7119ea, and ST7319ea The biofilm eradication activities of the partially purified bacteriocins produced by strains E. faecium ST651ea, ST7119ea, and ST7319ea were assessed by challenging the pre-formed biofilms of L. monocytogenes ATCC15313 and E. faecium VRE19 for 1 h. Following CV staining and absorbance reading at 550 nm, a significant reduction (p < 0.05) of biofilm mass was observed with treatment of at least 3200 AU mL⁻¹ for all bacteriocins evaluated against the biofilm formed by L. monocytogenes ATCC15313. This was also the minimum concentration required for bacteriocins ST7119ea and ST7319ea against the biofilms formed by E. faecium VRE19, whereas ST651ea (Figures 1a and 2a) required a two-fold higher concentration to significantly destroy the biofilms formed by this microorganism. The last observations agree with the fact that the MICs of the bacteriocins produced by E. faecium ST651ea, ST7119ea, and ST7319ea were 6400 AU mL⁻¹, 6400 AU mL⁻¹, and 12,800 AU mL⁻¹, respectively. Additionally, quantification of the rates of viable/live, dead, and damaged cells within the bacteriocin-treated biofilms was carried out after the 1 h challenge, showing that the minimum concentrations needed for the evaluated bacteriocins to completely damage or kill the cells within the biofilms formed by L. monocytogenes were 1600 AU mL⁻¹, 3200 AU mL⁻¹, and 6400 AU mL⁻¹ for bacteriocins ST651ea, ST7119ea, and ST7319ea, respectively. On the other hand, two-fold higher concentrations were required for bacteriocins ST651ea and ST7119ea to obtain the same effects against the VRE biofilm, while a minimum of 12,800 AU mL⁻¹ was required to eliminate the viability of the cells within the biofilm based on this assay (Figures 1b and 2b). Similar results were observed when viable cells were visualized by the TTC experimental approach (Figures 1c and 2c) for L. monocytogenes ATCC15313 and E. faecium VRE19, respectively.
Assessment of Synergism of Bacteriocins and Antibiotics against Biofilm Formation of L. monocytogenes ATCC15313 and E. faecium VRE19 In this study, the possible synergism between the bacteriocins produced by E. faecium ST651ea, ST7119ea, or ST7319ea and vancomycin or ciprofloxacin was evaluated with respect to their ability to inhibit the formation of biofilms of L. monocytogenes ATCC15313 and E. faecium VRE19 in vitro. Results showed that synergistic activities were demonstrated by all bacteriocins individually paired with ciprofloxacin against both test microorganisms (Table 1) (topographic presentation of the biofilm formed after 36 h shown in Figures 3a, 4a and 5a), using the guidelines for combinations of antimicrobial substances. Conversely, vancomycin demonstrated synergism when paired with bacteriocins ST651ea, ST7119ea, or ST7319ea against E. faecium VRE19 (Table 1), while only ST651ea worked in synergy with ciprofloxacin to inhibit the formation of L. monocytogenes ATCC15313 biofilm. The combinations of bacteriocins ST7119ea or ST7319ea with ciprofloxacin instead showed an additive effect against the formation of Listeria monocytogenes ATCC15313 biofilm in this assay (Table 1). Topographic presentation analysis of the biofilm formation of both assessed test organisms is demonstrated in Figures 4a and 5a. Some of the combinations of bacteriocins and antibiotics were identified to work synergistically against the formation of biofilms of both test organisms, with a significant reduction in the concentrations required for the inhibition of biofilm formation compared to the individual inhibitory activities recorded for each antimicrobial. General observations indicate that combinations of bacteriocins and ciprofloxacin had synergistic activity in the inhibition of L. monocytogenes ATCC15313, while combinations of bacteriocins and vancomycin had synergistic activities against E. faecium VRE19 biofilm formation.
Assessment of Synergism of Bacteriocins and Antibiotics against Pre-Formed Biofilms of L. monocytogenes ATCC15313 and E. faecium VRE19 Pre-formed biofilms were treated with antimicrobial combinations composed of either bacteriocin ST651ea, ST7119ea, or ST7319ea and vancomycin or ciprofloxacin. After the challenge, the residual biofilms were quantified by the crystal violet biofilm staining assay while the cellular metabolism of the residual biofilms was simultaneously monitored in a parallel setup. The topographic representation of the biofilm results after the challenge is presented in Figures 3b, 4b and 5b. Observations of the activities of the antimicrobial combinations showed a decreased effect against the biofilms formed by both test organisms based on the FIC indices shown in Table 1. Biofilms, known to provide a protective layer for these microorganisms, act as adaptive and defense mechanisms against the antimicrobials employed. The topographically visualized levels of activity (Figures 3b, 4b and 5b) of the combinations of bacteriocins and antibiotics against both test organisms and their corresponding FIC indices were calculated and presented in Table 1, demonstrating that, for the majority of the combinations, higher amounts of each component are needed to eradicate previously formed biofilms relative to the concentrations needed to inhibit biofilm formation by both test microorganisms. FIC indices showed that combinations of bacteriocins with ciprofloxacin mostly demonstrated an additive effect on the pre-formed biofilms of both test organisms, while synergistic activities were noted when bacteriocins were combined with vancomycin against E. faecium VRE19 but not against L. monocytogenes ATCC15313 (Table 1). The viability of the residual biofilms formed by L. monocytogenes ATCC15313 or E. faecium VRE19, measured by the TTC assay, coincided with the previous results (Figures 3c, 4c and 5c). Discussion Bacteriocins produced by E. faecium ST651ea, ST7119ea, and ST7319ea, previously characterized by Fugaban et al. [36], were further assessed in this study for their potential activities against biofilms formed by L. monocytogenes and vancomycin-resistant enterococci. It has been reported that E. faecium ST651ea harbors genes coding for enterocins B and P, while both E. faecium ST7119ea and ST7319ea have genes for enterocins A and B [36]. Based on the obtained nucleic acid sequences targeting genes associated with the production of enterocins A, B, and P, recorded in E. faecium ST651ea, ST7119ea, and ST7319ea, respectively, the putative amino acid sequences were reconstructed, and some mutations in the protein structure were observed [36]. Moreover, based on the comparative analysis of the spectrum of activity of the bacteriocins expressed by E.
faecium ST651ea, ST7119ea, and ST7319ea, along with additional physiological and biochemical properties of the studied bacteriocins, it was suggested that they most probably belong to class IIa [36]. Moreover, it has been mentioned by Nes et al. [43,44] that the majority of the known bacteriocins produced by Enterococcus spp. belong to class I (lantibiotics) and class II bacteriocins (small unmodified peptides), whose mode of action is cell lysis [45][46][47]. Target molecules, such as lipid II for L. monocytogenes or the sugar permease systems found on the surface of target microorganisms, serve as the docking points for bacteriocins [30,32,43]. The modifications in the functionality of these docking molecules by the bacteriocins disturb the integrity of the cell membrane, thereby leading to leakage of intracellular components, which eventually leads to the death of the target cell. In this study, the bacteriocins produced by E. faecium ST651ea, ST7119ea, and ST7319ea were partially purified by ammonium sulfate precipitation (60% protein saturation), obtained at 60% isopropanol in 100 mM phosphate buffer (pH 6.5) in a step-gradient elution assay, and their activity was quantified against L. monocytogenes. Application of the bacteriocins as a crude extract, partially purified preparations, or pure (homogeneous) protein is strictly dependent on the experimental model. Purification is a costly procedure, and normally pure bacteriocins are applied in analytical procedures or medical applications. For most food-associated experiments and/or sanitization purposes, a crude extract or partially purified bacteriocins are typically applied. The previously identified MICs coincide with the ≤MIC95 of the bacteriocins measured in this study, which was used for subsequent evaluations. Furthermore, the current data further strengthen the findings from that study, as matching observations were demonstrated through the inhibitory kinetics of the assessed bacteriocins against actively growing cells of the target microorganisms sampled after 3, 6, 9, and 24 h of incubation [36]. The MICs of two selected antibiotics, ciprofloxacin and vancomycin, were also determined against the planktonic cells of both test organisms used. Ciprofloxacin, a known fluoroquinolone antibiotic, has been used as the benchmark in quantifying and comparing the efficacy of newly discovered or elucidated fluoroquinolones [48]. It has been employed as a treatment against a wide range of pathogenic microorganisms, including infection-causing members of Enterobacteriaceae, Neisseria-associated meningococcal infections, and Pseudomonas infections, among others. Additionally, it has been used as a common drug to treat UTIs and renal infections [48][49][50], although, in some cases, it has been demonstrated that ciprofloxacin-resistant L. monocytogenes typically accounts for around 30-35% of all the strains evaluated [51]. It has also been demonstrated that an inherent adaptive system is expressed by L. monocytogenes when exposed to the disinfectant benzalkonium for an extended time, consequently resulting in resistance to ciprofloxacin [4,24,52,53]. On the other hand, ciprofloxacin is primarily administered as a treatment for uncomplicated UTIs only. Although ciprofloxacin is not considered a primary drug for enterococcal-associated UTIs due to its modest activity against this pathogen, it has still been employed successfully as a treatment. Perry et al.
[54] stated that higher concentrations of ciprofloxacin are needed to assess the sensitivity of enterococci to this drug (5 µg per disc instead of 1 µg). Thus, in this study, we evaluated the minimum inhibitory concentration of ciprofloxacin against L. monocytogenes ATCC15313 and E. faecium VRE19 independently through microbroth dilution. Vancomycin is a tricyclic glycopeptide antibiotic that was initially isolated from Streptomyces orientalis and whose mechanism of action involves interference in the early stage of cell wall synthesis [55,56]. This glycopeptide antibiotic is typically administered intravenously due to its low absorption by oral intake. Furthermore, vancomycin has been used as one of the "last resort" drugs for the treatment of severe systemic infections caused by multi-drug-resistant Gram-positive bacteria. However, the exorbitant usage of this antibiotic has led to the development and occurrence of vancomycin-resistant enterococci and staphylococci [57,58], which pose a serious threat in medical practice. The occurrence of antibiotic resistance in this group is not unusual, noting that inherent resistance against vast groups of antibiotics has been observed, especially against β-lactams (cephalosporins and penicillins), fluoroquinolones, clindamycin, and low concentrations of aminoglycosides [59][60][61]. In this study, the MICs of vancomycin against planktonic cells of L. monocytogenes ATCC15313 and E. faecium VRE19 were determined as previously described, noting that a minimum of 64 mg L⁻¹ and 128 mg L⁻¹, respectively, is needed to completely inhibit the growth of each test organism. This further confirms that E. faecium VRE19 is indeed resistant to vancomycin based on the cut-offs suggested by both CLSI and EFSA. All values measured against planktonic cells of both test microorganisms were used as the basis of all succeeding experiments. To secure the integrity of the succeeding assays, confirmation of the presence of antibiotic resistance genes harbored by E. faecium VRE19 was carried out, identifying the presence of the vanA and vanB genes. The selective pressure generated by excessive vancomycin treatment has caused the rise of different genotypic classifications of resistance to this drug, including the resistance phenotypes vanA, B, C, D, E, and G. Plasmid-associated resistance has been elucidated to be responsible for vanA and vanB resistance, but the distinction between the two includes co-resistance to teicoplanin, characterized only for vanA phenotypes due to the associated modifications in the N-acetylmuramic acid (NAM) of vancomycin-resistant E. faecium and E. faecalis [62,63]. The vanB phenotype, which is typically characterized by high resistance to vancomycin (≤250 mg/L), is usually located on a plasmid, which increases the threat it poses regarding the transfer of resistance genes. On the other hand, the vanC and vanD resistance-associated genes are chromosomally located and non-transferrable, manifesting as low resistance to vancomycin (16-32 mg L⁻¹). Although these are still considered low concentrations of vancomycin, other factors, such as the occurrence of pathogenicity-associated insertion sites, cast the presence of these genes in a negative light; thus, their surveillance is of importance [64][65][66]. Additionally, vanE and vanG are both non-transferrable genes and are also characterized by resistance to low concentrations of vancomycin [67].
The biofilm inhibition and eradication capacities of the semi-purified enterocins produced by E. faecium strains ST651ea, ST7119ea, and ST7319ea were evaluated in two different assays, as shown in Figures 1a and 2a, and further confirmed for the retention of bioactivity after treatment through the triphenyl tetrazolium chloride (TTC) (Figures 1c and 2c) and flow cytometry (Figures 1b and 2b) assays. The observations support the hypothesis that higher concentrations of antimicrobials are needed to destroy or kill microorganisms protected within biofilms [1,2]. In addition, a study conducted by Pérez-Ibarreche et al. [68] on bioengineered nisin with activity against S. uberis biofilms demonstrated the same pattern, in which increased concentrations of bacteriocins are needed against biofilms compared with their planktonic cell counterparts. Similar observations were noted in the treatment of planktonic cells and biofilms of Pseudomonas aeruginosa with chemical disinfectants and antibiotics, as demonstrated by [69]. Although these results are promising, the use of high concentrations of antimicrobials, including bacteriocins, may lead to the development of resistance to these antimicrobial peptides [70]. With the continuous development of antimicrobial-resistant pathogens, bacteriocin-based treatments or methods of control against various pathogens have been advocated for the past decades [28,30,32,43]. Furthermore, the increasing occurrence and persistence of "superbugs" in the clinical setting, and the threats they pose amidst the current COVID-19 pandemic, which has drastically increased the consumption of various antibiotics, now act as a selective pressure for the dominance of these pathogens [71,72]. O'Toole [71] also mentioned that the increased occurrence and outbreaks of clinically acquired extended-spectrum β-lactamase-producing Kl. pneumoniae, metallo-β-lactamase-producing carbapenem-resistant Enterobacterales, carbapenem-resistant A. baumannii, and vancomycin-resistant enterococci are now an alarming concern worldwide. Therefore, it is imperative to find solutions to these arising concerns with the use of different possible alternatives to conventional antibiotics, including bacteriocins, which are antimicrobial peptides produced by various microorganisms. These antimicrobials are particularly distinct from antibiotics due to their narrower spectrum and lack of elaborate modifications in their peptide sequences [28,30,32,43,44]. These antimicrobial peptides have been in the spotlight, particularly those produced by lactic acid bacteria, due to the associated safety status of these microorganisms. Aside from this, the specificity of bacteriocins against their targets, in comparison with antibiotics, can be used as a key tool for targeted infection treatment. However, the handling and purification of these naturally occurring antimicrobials are still part of the challenge that needs development in this field. Likewise, their applications, although mostly assessed against planktonic cells of food contaminants, still need further evaluation to assess in which other ways these antimicrobial peptides can be employed and advanced as an important tool in both the clinical setting and the food industry.
In this study, the possible synergistic activities of the bacteriocins in combination with either ciprofloxacin or vancomycin were evaluated, and the effects of their combinations were quantified through the FIC indices shown in Table 1, identifying that combinations of vancomycin with bacteriocins work synergistically in the eradication of vancomycin-resistant E. faecium VRE19. This re-sensitization phenomenon can be attributed to various factors, one being the different mechanisms of action of the two antimicrobial compounds used in the cocktails, the antibiotic and the bacteriocin. The resistance mechanism of VRE to vancomycin has been identified to be associated with alteration of the peptidoglycan structure resulting in decreased binding, thus limiting the ability of the antibiotic to carry out its function [73]. On the other hand, Diep et al. [45] hypothesized that enterocins, which primarily cause membrane perforation, use Man-PTS as a docking molecule, which has been supported by Barraza [74]; this may aid in exacerbating the activity of vancomycin in VRE cells. Synergistic activities of bacteriocins and antibiotics have also been demonstrated by Singh et al. [75] using nisin and β-lactam antibiotics as an adjunct treatment for MDR Salmonella enterica, whose mechanism of synergy was associated with the different mechanisms of action of the antimicrobials used. On the other hand, most of the setups for vancomycin in combination with the bacteriocins did not result in synergism but only demonstrated additive functionalities. The application of a combination of bacteriocins and antibiotics in the control of biofilms has previously been suggested and explored [21,28,34,38]. The application of antimicrobials with different or the same modes of action has its arguments for the better control of biofilm-associated pathogens. On one side, antibiotics such as vancomycin and bacteriocins from class IIa are known to use the same receptor, lipid II, in the interaction between antimicrobials and target cells [30,32,43]. In these processes, both antimicrobials (antibiotic and bacteriocin) may have an extended effect on the target pathogens. Moreover, it was previously shown that, when applied in high concentrations, nisin can act bactericidally even if the lipid II receptor is not biologically available [30,32,43]. Thus, it can be argued that in the combined application of vancomycin and the studied bacteriocins there is a possibility that the applied antibiotic targets the test pathogens via the lipid II receptor while the bacteriocins interfere with the target cells via different mechanisms. When ciprofloxacin was applied, the opposite scenario was realized in the inhibition of the target pathogens. Ciprofloxacin is classified as a bactericidal antibiotic belonging to the fluoroquinolone drug class. Its mode of action is associated with inhibition of DNA replication by interfering with bacterial DNA topoisomerase and DNA gyrase [76]. In this type of application, the applied bacteriocins were most probably responsible for pore formation as a consequence of their interactions with lipid II, thereby facilitating the bactericidal effect of ciprofloxacin. Conclusions The use and application of bacteriocins as a promising alternative to conventional antibiotics have been proposed by various scientists for decades.
The elucidation of their function and possible applications is now eyed as a possible solution to the alarming emergence of AMR/MDR pathogens. In this study, we evaluated the possible use of bacteriocins in combination with selected conventional antibiotics as a treatment against biofilms formed by L. monocytogenes and vancomycin-resistant E. faecium, food-borne and clinically significant pathogens, respectively. The findings showed that combinations of naturally occurring antimicrobial peptides produced by beneficial enterococci with conventional antibiotics have more notable effects against both planktonic cells and biofilms of vancomycin-resistant E. faecium, although higher concentrations of both bacteriocins and conventional antimicrobials are needed to completely eradicate functional cells or abolish metabolically active cells. This perspective can be further explored as an alternative way of addressing the current issue of increasing infections associated with AMR pathogens, but the use of high concentrations of antimicrobials, be they bacteriocins or conventional antibiotics, intended for any application should be regarded carefully and regulated, as it may act as another layer of selective pressure for the development of new resistant strains, proving a bane rather than a boon in addressing this issue. Data Availability Statement: All data related to this work are available upon request.
Innovative pedagogical technologies in education system The society in which we live is constantly evolving and changing. The modern world educational space is constantly being replenished with new content of knowledge and new qualifications. New spheres of relations are emerging, along with new specialties that form new disciplines. World higher education is undergoing reform, which has led to the search for new forms and technologies of education. The harmonization of higher education with the requirements and standards of the world space, and its development, are carried out according to certain principles, first of all the priority introduction of innovative achievements in education and science. It is known that it is the innovative path of development of society that can ensure the formation of a generation of people who think and work in a new way. As a result, the main attention will be paid to the development of personality, cultural and communicative preparedness, the ability to independently acquire and develop knowledge, and the formation of information and social skills. Considering this, the main purpose of the article is to study the main aspects of innovative pedagogical technologies in the education system. INTRODUCTION The analytical report of UNESCO "The Post-2015 Sustainable Development Program" noted that in the new information era, it is higher education that should become the main element in the direction of progress, and innovations in various spheres of public activity should include high dynamism and rapid change in knowledge, information, and technology. In such conditions, the social importance of the state is increasing in ensuring access to quality education, a high level of knowledge, and the possibility of acquiring relevant skills and competencies through the provision of academic mobility and freedom to higher educational institutions. In the context of the formation of an innovative society, the functional features of education include not only providing students with the amount of knowledge and skills accumulated in previous years, but also the ability to perceive and use in practice new scientific ideas, tools, and methods. The world in which a person lives is becoming complex and contradictory. In order to develop a reasonable strategy for one's own life, it is necessary to have a sufficiently high intellectual and creative potential and high professionalism; therefore, one of the most important tasks of higher school is the personal and professional development of students. Pedagogical practice requires the creation of a relatively simple and at the same time maximally universal toolkit for the implementation of the personal and professional development of students. This toolkit should reveal the structure of this development and its dynamics in innovative learning technologies and in the modeling of the educational environment itself. In this context, the main components of education should be reviewed: content, forms, methods, teaching technologies, methodological support (including textbooks), and the teacher's functions. The concept of "pedagogical technologies" has been transformed into new concepts: educational technologies, pedagogical technologies, and teaching technologies.
Educational technologies reflect the general strategy for the development of education and a unified educational space; their purpose is to predict the development of education, its specific design and planning, and its results, as well as to determine the corresponding educational goals and standards. Examples of educational technologies are concepts of education and education systems; at the present stage, this is the humanistic concept of education, the education system, etc. If educational technologies reflect the strategy of education, then pedagogical technologies embody the tactics of its implementation in the educational process by introducing models of the latter and corresponding models for managing this process, for example, a model of personality-oriented developmental learning, modular developmental learning, problem-based learning, etc. Thus, pedagogical technology reflects the model of the educational and management processes of an educational institution and combines the content, forms, and means of each of them. Close to, but not identical with, the concept of pedagogical technology is the concept of teaching technology. It reflects the way of mastering specific educational material (a concept) within the corresponding academic subject, topic, or issue, and requires a special organization of educational content and adequate forms and methods of teaching. But the following options are also possible: the content and methods of teaching are matched to the forms of teaching, or the forms and content of teaching are structured around the methods (Mtawa, Masanche Nkhoma, 2020). For example, this can be subject learning, game technology, problem learning technology (at the method level), information technology, technology for using support schemes and abstracts, classical lecture training, training using audiovisual technical means or books, consulting, individual learning, distance learning, computer training, etc. Pedagogical technique reflects the level of the teacher's skill: the degree of development of the subjects of training and education depends on how and which methods of teaching and upbringing the teacher commands. This means that the concept of "pedagogical technology" is undoubtedly associated with the concepts of "educational technology", "pedagogical technique", and "teaching technology". Pedagogical technology must meet some basic methodological requirements, or criteria of technological effectiveness (Kolgatin, Kolgatina, 2019): conceptuality (reliance on a certain concept containing philosophical, psychological, didactic, and socio-pedagogical justifications of educational goals); consistency (a pedagogical development must have all the features of a system), meaning the consistency of the process, the interconnection of all its parts, and its integrity; manageability (the possibility of goal planning, designing the learning process, step-by-step diagnostics, and varying means and methods in order to correct the results); efficiency (cost optimality and guaranteed achievement of the planned result, a certain training standard); reproducibility (the possibility of use in other conditions of the same type, by other subjects); and the unity of the content and procedural parts, their interdependence. Modern pedagogical technology is a synthesis of the achievements of pedagogical science and practice, a combination of traditional elements of past experience with that generated by social and technical progress, the humanization and democratization of society, and the technological revolution.
The sources and components of new pedagogical technologies are: social transformations and pedagogical thinking; social, pedagogical, psychological sciences; modern advanced teaching experience; historical domestic and foreign experience (acquisition of previous generations); folk pedagogy (Iqbal, 2020). In modern pedagogical theory and practice, there are many options for pedagogical technologies. Each pedagogical technology has its own procedural characteristics (motivational, managerial, category of students), also has software and methodological support (curriculum and programs, teaching aids, didactic materials, visual and technical teaching aids, diagnostic interpretations). Recently, educational interactive technologies have been actively involved in the practice of higher education. The essence of interactive technologies is that learning occurs with the interaction of students. The teacher and students are subjects of learning. Innovative pedagogical technologies in education system The special value of interactive learning is that students learn to work effectively in a team (unfortunately, students do not have teamwork skills). With the correct, planned and systematic use of interactive technologies, this problem can be solved. Interactive teaching methods are part of student-centered learning, since they contribute to the socialization of the individual, awareness of oneself as part of a team, of one's role and potential. What does the term "innovative learning" mean? Innovative learning is a constant striving to reassess values, preserving those of them that are of undeniable importance and rejecting those that are already outdated. Innovations in educational activities are associated with the active process of creating and disseminating new methods and means of solving didactic problems of training specialists in a harmonious combination of classical traditional methods and the results of creative search, the use of non-standard, progressive technologies, original didactic ideas and forms of ensuring the educational process. In the modern world, it is necessary to solve urgent problems of pedagogy effectively and consistently, and in a fairly short time, because the needs of restructuring education and the development of an appropriate educational and material base in our country are already obvious today. New pedagogical and information technologies can help in this. It is impossible to separate one from the other, since only the widespread introduction of new pedagogical technologies will make it possible to change the very paradigm of education, and only new information technologies will make it possible to most effectively realize the possibilities inherent in new pedagogical technologies. It is the new information technologies that make it possible to fully disclose the pedagogical and didactic functions of the methods, to realize the potential capabilities inherent in them (Awe, Church, 2020). METHODOLOGY The main purpose of the study is is to study the main aspects of innovative pedagogical technologies in education system. For this, a number of methods were applied, which form the research methodology. The study was carried out using the following theoretical methods: systems analysis and synthesis, induction and deduction, comparison, classification, generalization and systematization, idealization and abstraction. 
RESULTS AND DISCUSSION Innovative pedagogical technology is considered as a special organization of activity and thinking aimed at organizing innovations in the educational space, or as a process of assimilation, implementation and dissemination of new things in education. Innovation of the pedagogical process means the introduction of something new into the goals, content, forms and methods of teaching and upbringing, and into the organization of the joint activities of the participants in the educational process. Innovative technologies used in the higher education system are considered as the teacher's modeling of the content, forms and methods of the educational process in accordance with the set goal, using novelty. In the practice of educational activities of a modern university, such teaching technologies are used as differentiated, problem-based and contextual learning, game learning technologies, information technologies, credit-modular technology, student-centered learning, etc. Modern didactic searches for contextual learning technologies are characterized by an orientation towards a close connection between education and the immediate life needs, interests and experience of undergraduates. Each master's student is a bearer of individual personal experience, which should be taken into account and relied upon in the process of professional training. This approach to organizing the process of vocational training helps to create an atmosphere of professional competence formation, which turns a master's student not only into a subject of knowledge, but also into a subject of his or her own professional and personal development (Bingimlas, 2009). One of the types of application of modern innovative teaching technologies in the process of professional training of a future teacher is information teaching aids, for the successful and purposeful use of which university teachers must know their didactic capabilities and principles of functioning. The effectiveness of the use of modern information technologies in the development of the foundations of the pedagogical skills of the future teacher is ensured by the variety of forms in which information is presented, a high degree of clarity, and the possibility of organizing collective and individual research work. The introduction of innovative technologies into the process of professional training of future teachers helps them to master the educational material at an individual pace, independently, using convenient ways of perceiving information, which evokes positive emotions and forms a positive motivation for learning. In order to intensify the professional training of students in universities through the introduction of computer presentations, electronic dictionaries, textbooks and manuals, test programs, textbook programs, training programs, dictionaries, reference books, encyclopedias, video tutorials, libraries of electronic visual aids, thematic computer games, etc., a professionally oriented educational information environment is created that contributes to the development of the foundations of the pedagogical skills of future teachers (Cooper, 1998). Educational innovations are characterized by a purposeful process of partial changes leading to the modification of the goals, content, methods and forms of education, methods and style of activity, and the adaptation of the educational process to the modern requirements of the time and the social demands of the labor market.
In addition, the introduction and approval of something new in educational practice is driven by positive transformations; therefore, it should become a means of solving the urgent problems of a particular educational institution and withstand experimental testing before the final application of innovations. First of all, this should consist in modern modeling; the organization of non-standard lectures, practicals and seminars; the individualization of teaching aids; office, group and additional training; optional deepening of knowledge at the choice of students; problem-oriented learning; scientific and experimental work in the study of new material; the development of a new control system for assessing knowledge; the use of computer and multimedia technologies; and educational and methodological products of a new generation. Currently, the following pedagogical technologies are most often used in educational practice (González-Zamar, et al, 2020): -structural and logical technologies: the phased organization of the training system, providing a logical sequence for the formulation and solution of didactic problems based on the phased selection of content, forms, methods and means, taking into account the diagnosis of results; -integration technologies: didactic systems that ensure the integration of interdisciplinary knowledge and skills and different types of activities at the level of integrated courses (including electronic ones); -professional and business gaming technologies: didactic systems for using various "games", during which the skills of solving problems are formed on the basis of a compromise choice (business and role-playing games, simulation exercises, individual training, computer programs, etc.); -training tools: a system of activities for the development of certain algorithms for solving typical practical problems using a computer (psychological trainings for intellectual development, communication, solving managerial problems, etc.); -information and computer technologies implemented in didactic computer training systems based on the "man-machine" dialogue with the help of a variety of training programs (training, control, information, etc.); -dialogue and communication technologies: a set of forms and methods of teaching based on dialogic thinking in interacting didactic systems at the subject-subject level. In educational practice, the diversification of teaching technologies allows them to be actively and effectively combined through the modernization of traditional education and its reorientation towards an effective, purposeful one. With this approach, the emphasis is placed on the personal development of future specialists, the ability to master new experiences of creative and critical thinking, and role-based and simulation modeling of the search for solutions to educational problems. In our case, in the educational environment of innovative communication technologies, the basis of training should be holistic models of the educational process, based on the dialectical unity of the methodology and the methods of its implementation. Let us consider individual teaching methods from the standpoint of their novelty, efficiency, effectiveness, and expediency of use in the modern conditions of informatization of higher education. In today's market for educational services, these are innovative active and interactive teaching methods.
Since the creative component of education is growing significantly, the role of all participants in the educational process is becoming more active and the creative and search independence of students is strengthened, the concepts of problem-based and interactive learning associated with the use of computer systems have acquired particular relevance. During such an educational process, the student can communicate with the teacher online, solve creative, problem-based tasks, and simulate situations, drawing on analytical and critical thinking, knowledge, search abilities, etc. The algorithm of the teacher's work during an interactive lesson is as follows (Kryshtanovych, Kryshtanovych, Stechkevych, Ivanytska, Huzii, 2020): 1) determining the appropriateness of using interactive techniques in this particular lesson; 2) careful selection and analysis of educational material, including additional material (tests, examples, situations, tasks for groups, etc.); 3) lesson planning: stages, timing, approximate division into groups, roles of participants, questions and possible answers; 4) development of criteria for assessing the effectiveness of the work of groups and classes; 5) motivation of educational activity by creating a problem situation, presenting interesting facts, etc.; 6) ensuring that students understand the content of their activities and the formation of expected results when the topic is announced and presented; 7) providing students with the necessary information to complete practical tasks in the shortest possible time; 8) ensuring the assimilation of educational material by students through an interactive exercise (at the choice of the teacher); 9) reflection (summing up) in different forms: individual work, work in pairs or groups, discussion, drawings, diagrams, graphs, etc. Despite the abundance of approaches in pedagogical and psychological science and of models in the practice of higher educational institutions, there are four main options for pedagogical technologies (Table 1). The first comprises technologies for the spiritual and moral formation of the personality, the ecological purity of the approach to the nature of the student, and the upbringing of noble virtues on the basis of faith in the student's innate mission and diverse possibilities. This option is most consistently and fruitfully reflected in the pedagogical system of V. A. Sukhomlinsky: a young person is a phenomenon, a carrier of a mission and of the energy of spirit. The priority is the development of the spiritual world, the upbringing of excellent and good thinking, and responsibility for one's thoughts and aspirations, and not only for one's actions. Another system of teaching principles and actions is thus promoted, in which pedagogy is considered the highest form of human thinking, part of human culture. The content of information and development technologies, whose purpose is to develop the foundations of the pedagogical skills of a future teacher who has the necessary system of knowledge and a large supply of information, includes lectures, seminars, practical classes, independent study of literature, etc., and should take into account the individual, authorial manner of the lecturer, the specifics of the academic discipline, and the level of preparation of the student audience. The use of information technologies in practical classes opens up broad prospects. An extremely effective teaching tool is the development of theoretical material using presentations and mind-mapping technologies (creating logical diagrams).
The technical advantage of information technology is the use of hypertext information, which provides easy access to reference data, glossaries, and animation applications. The availability of software allows students to carry out reflexive activities and to be aware, in real time, of the level of their professional progress in developing the foundations of pedagogical excellence. This helps to differentiate educational material by level of complexity and to create a positive emotional background for the student's work with information teaching aids by means of the interface (Ambra, Ferraro, Girardi, Iavarone, 2020). An important component of pedagogical skill is the information culture of the future teacher, that is, the ability to productively read books, find the necessary information, and comprehend and transmit it to users. The use of information technologies in this context contributes not only to a higher level of motivation of the future teacher and to his or her critical thinking, but also to the formation of a telecommunications community for the implementation of active forms of constructive communicative interaction (Crawford, 2020). The development of information culture is facilitated by the independent and research work of students, which requires an individual approach and affects the individual style of their professional activity as it forms. The productive methods of such work include the implementation of individual educational and research tasks, such as a scientific report, which is a publicly delivered message, a detailed presentation of a specific scientific problem. One of the most important components of the educational process at the university is the research activity of students, including the preparation of scientific reports, articles, and abstracts and the writing of term papers, diploma theses, and other works. The emergence of network communications and the World Wide Web contributes to the introduction of problem-research computer teaching methods into the process of professional training of a future teacher. Among them, one can single out project-based teaching technology, which helps students independently solve professional problems with the obligatory presentation and defense of the results of their scientific work. Consequently, the research work of students is an integral part of the application of information technology and contributes to the development of the information competence and the foundations of the pedagogical skills of the future teacher. In the process of scientific activity, the future teacher acquires knowledge that constitutes the informative basis of heuristic activity, masters the methods and pedagogical actions that determine the operational basis of search cognitive activity, and gains experience in information activities in the field of software, as well as experience in the "student-computer" relationship (Beauchamp, 2004).
According to generally accepted scientific provisions, teaching methods can be classified according to the following criteria: the type of student work (oral, written; classroom, independent, extracurricular); the form of organization (collective, group, individual, etc.); the source of knowledge acquisition and of the formation of skills and abilities (lecture, document analysis, work with the legislative framework, use of visual aids, Internet resources, etc.); the degree of independence and the nature of student participation in the educational process (active, interactive, passive); the level of sustainability and novelty (traditional, classical, innovative, new); authorship (original, copyright, general, didactic), etc. However, examining scientific didactic and pedagogical developments, we can say that in the modern teaching methodology of higher education the most acceptable classification is based on an effectiveness-oriented approach to teaching. According to this classification, the following groups of methods are distinguished (Burkšaitienė, 2018): a) methods ensuring mastery of the academic subject (verbal, visual, practical, reproductive, problem-search, inductive, deductive); b) methods stimulating and motivating educational and scientific activity (educational discussions, problem situations, professionally oriented business games, creative tasks, search and research tasks, experiments, contests, quizzes, etc.); c) methods of control and self-control in educational activities (survey, test, exam, control work, test tasks, self-control questions, including through computer educational systems). At present, as confirmed by practice and, in particular, by the works of scientists, holistic models of the educational process, based on the dialectical unity of the methodology and the means of its implementation, form the basis of teaching in the educational environment of innovative communication technologies. Let us consider individual teaching methods from the standpoint of their novelty, efficiency, effectiveness, and expediency of implementation in the modern conditions of the informatization of higher education. In today's market for educational services, these are innovative active and interactive teaching methods. Since the creative component of education is growing significantly, the role of all participants in the educational process is becoming more active, the creative and search independence of students is strengthened, and the concepts of problem-based and interactive learning associated with the use of computer systems have acquired particular relevance. During such training, the student can communicate with the teacher online, solve creative and problem-based tasks, and simulate situations that engage analytical and critical thinking, knowledge, and search abilities. For example, the modern methodology of teaching the legal sciences has a certain arsenal of various methods, techniques, and means, both general didactic (used in the teaching of any academic subject) and branch-didactic (reflecting the specifics of a particular discipline or a number of related disciplines). As can be seen, we are talking about an innovative teaching methodology. Therefore, it is necessary to clarify the concept of "innovative teaching methods".
In our opinion, this concept is multicomponent, since it unites all those new and effective ways of teaching (obtaining, transferring, and producing knowledge) that contribute to the intensification and modernization of the educational process and develop the creative approach and personal potential of applicants for higher education. Among the interactive methods, forms, and techniques most often used in the educational work of higher education, the following can be named: analysis of errors, collisions, and incidents; the audiovisual teaching method; brainstorming; Socratic dialogue; the "decision tree"; discussion with invited specialists; business (role-playing) games in which students act in the capacity of a legislator, expert, legal adviser, notary, client, judge, prosecutor, lawyer, or investigator; taking a position; commenting on and evaluating (or self-evaluating) the actions of participants; master classes; the method of analysis and diagnosis of a situation; the interview method; the project method; modeling; the training "polygon"; the PRES formula (Position - Reason - Explanation or Example - Summary); the problem (problem-search) method; public speaking; work in small groups; individual and group trainings (of both individual and complex skills); and others. Among the innovative mechanisms for enhancing pedagogical and scientific processes, the need to revive the idea of competition in all areas of life is increasingly mentioned; in particular, this refers to the "race for the leader" method. The authors of this methodology illuminate the history, meaning, and content of the concept of "competition", reveal the methodological aspects of the use of non-traditional (artificial) competition, provide sensible proposals for scoring the main types of educational activities, offer specific formulas for calculating the total number of points, and focus on the development of nominations (Marek et al., 2020). With the introduction of distance learning, many universities are already using the online seminar technology called the "webinar", which makes it possible to demonstrate comparative tables, presentations, videos, etc. With the help of Internet technologies, the webinar retains the main feature of the seminar - interactivity - by modeling the functions of speaker and listener, who work interactively, communicating together according to the scenario of such a seminar. Practitioners have also developed and experimentally tested a model for organizing the independent work of correspondence law students, which provides for three stages: indicative (preparatory), activity (executive), and control and correction (final). This model should contribute, first of all, to increasing the level of students' individual psychological readiness for independent study, the acquisition of appropriate qualifications by future lawyers, the acquisition of the skills and habits of legal work, and the development of professionally significant personality traits.
In the process of organizing the independent work of correspondence law students, the content and significance of such didactic principles as independence, intensification and activation, individualization and differentiation, professional and practical orientation, continuity, the involvement of students' life and practical experience in the educational process, feedback, and the developmental and educational nature of the educational process are revealed, with the aim of forming students' skills of self-education and self-improvement and of applying andragogical approaches to learning (Škobo, Đerić-Dragičević, 2019). In addition, when describing active methods of teaching applicants for higher education, attention must also be paid to socio-psychological training, whose main principle is the active position of each of its participants. The essence and classification of training, the main types of exercises and procedures, the stages of training work, etc. all converge on "feedback", which consists in each participant expressing their own opinion on the individual issues of the training session. The inclusion of active forms of education in the educational process, including psychological trainings, has a significant impact on the development of the professional and personal qualities of the future specialist. It is expedient to consider modern systems of interactive teaching in legal disciplines as complexes of technologies ordered in a certain way (including distance learning technologies), with their own specifics and logic. For example, an interactive system for teaching law may include such blocks as: a competence-based approach to the study and teaching of law (the method of Socratic dialogue); the "technology of productive activity of intelligence"; a course to improve creative competence; a collection of articles and teaching materials "Using interactive methods in teaching legal sciences"; training courts, that is, possible questions for role-playing and business feedback (general information, preparation and methodological recommendations for conducting a civil procedure, an assessment form); interactive teaching methods as part of public education; small groups and rules for effective group work; how to organize effective work in small groups; training in professional skills (interactive teaching methods); feedback and practical advice; the development and use of role-playing games; work in "legal clinics", etc. (Charalambos, 2014). Thus, the essence and structure of the innovative educational process in higher education must correspond to the nature and speed of social change in society and to high European standards for training competitive specialists of an innovative type. The modern content of higher education should focus on the use of information technology and the dissemination of interactive e-learning with access to digital resources and intelligent learning for the future. CONCLUSION Innovative educational activity is a complex process that requires skillful, constructive management. The introduction of innovative pedagogical technologies significantly changes the educational process, making it possible to solve the problems of developmental, student-centered learning, differentiation, humanization, and the formation of an individual educational perspective. In the modern learning process, both traditional and innovative teaching methods should be used; the former are no less effective, and in some cases they simply cannot be dispensed with.
It is necessary that they remain in constant relationship and complement each other; these two approaches must exist on the same level. Summing up, it can be noted that with the introduction of innovative technologies into the pedagogical process of the system of higher educational institutions, there is an increase in the pedagogical skill and professional competence of future teachers - the participants in innovative processes - and an improvement in the quality indicators of students' educational achievements. At the same time, the regional education system as a whole is being modernized; the development of universities proceeds on the basis of the search for, development of, and implementation of innovative pedagogical technologies; and scientific and methodological support for the development of the educational institution is provided. At the level of the specialist's personality, the formation of a modern style of thinking with its characteristic features is observed: creativity, consistency, flexibility, dynamism, perspective, objectivity, conceptuality, etc. Consequently, when innovative pedagogical technologies take their place in the educational process, they will gradually, and quite naturally, replace traditional methods and forms of work. In this case, higher education institutions will be able to develop an optimal approach to organizing the educational process, taking into account the specifics of higher education in the world and the international cultural environment. Authors' Contributions: Pliushch, V.: conception and design, acquisition of data, analysis and interpretation of data, drafting the article, critical review of important intellectual content; Sorokun, S.: conception and design, acquisition of data, analysis and interpretation of data, drafting the article, critical review of important intellectual content. All authors have read and approved the final version of the manuscript. Ethics Approval: Not applicable.
2022-03-15T15:09:51.394Z
2022-03-12T00:00:00.000
{ "year": 2022, "sha1": "418ecd25d6f53231b13e3cc610427e89163af2c5", "oa_license": "CCBY", "oa_url": "https://seer.ufs.br/index.php/revtee/article/download/16960/12585", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a6d5e188188e47d8248c10da5a948351923a283e", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
254006038
pes2o/s2orc
v3-fos-license
Assessing the Innovation of Mobile Pedagogy from the Teacher's Perspective: This paper focuses on the use of mobile technology to assist teaching and learning in distance education. It aims to investigate teaching behaviour in mobile pedagogy and examine the impact of technology on current education. A case study was conducted through semi-structured interviews with a cohort of 30 Chinese lecturers who taught English through online tutoring. Thematic analysis was used to analyse the interview data, and the assessment was based on teacher perceptions of mobile pedagogy. The impact of technology on the current educational environment is discussed through an analysis of mobile pedagogy and teacher perceptions. The findings show that mobile pedagogy is highly regional in practice and nature and features in-country software applications and social communication tools. Despite the attributes of connectivity and flexibility, mobile pedagogy only disrupted traditional teaching methods, leading to minimal changes to the education system. This study provides recommendations for the sustainable development of mobile pedagogy for future education systems in the digital age. Introduction Given the abrupt transition from classroom lectures to online instruction caused by COVID-19 in 2020, mobile pedagogy has influenced the norms of teaching and learning in different education systems, from primary to higher education. According to the Sustainable Development Goals (SDG) report, the COVID-19 pandemic has deepened the global learning crisis, resulting in 147 million children missing over half of their in-person instruction in 2020-2021 [1]. In response, the SDG goal proposed for education is to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all. Technology-driven distance learning is an important tool for making education practicable and equitable. Studies show that mobile learning and mobile pedagogies are playing an important role in education [2,3]; however, the effectiveness of mobile pedagogy is a major concern for educators [4]. Although modern teachers have the skills to manage virtual learning environments and motivate students to learn from a distance [5], educators disagree on how to use mobile pedagogies to lead effective and sustained teaching practices in education systems [6,7].
Long before COVID-19, educators had been making use of mobile phones as a learning tool in and beyond the classroom, using technological advances to improve teaching and learning in a global context [8].As a result, mobile learning and pedagogies rebuilt the concept of disruptive pedagogy, encouraging ubiquitous learning environments, where teachers design innovative pedagogies to achieve real-time immediacy and extensive connections [9,10].Accordingly, mobile pedagogy provides opportunities for learners to enhance ubiquitous learning and seamless experiences [11,12].For example, mobile pedagogy has been widely adopted for English language learning to enable authentic language interaction [13,14].A systematic review of mobile learning related to mathematical studies showed positive learning outcomes [15].Seamless learning environments support a variety of educational university programs to enable lifelong learning across the world [16].Particularly, emergency remote teaching during the COVID-19 lockdown has proven the effectiveness of mobile pedagogy in facilitating the delivery of online courses [17].Thanks to the flexibility to learn and teach at any time and from anywhere, mobile pedagogy ensures that educational activities run on schedule. However, there is a need to explore the feasible innovation of mobile pedagogy in the current educational environment, especially considering the impact of COVID-19.Scholars have discussed the potential for the sustainable development of mobile pedagogy in education systems.The authors [15] indicate that the effectiveness of mobile pedagogy across formal and informal contexts is unclear.The authors [18,19] argue that mobile pedagogy has not fully shown how to exploit the connectivist, real-time, feasible characteristics of mobile learning scenarios.Research by the authors [17,20] points out that the limitations of mobile pedagogy, such as peer isolation, poor interactivity, and increased screen time, have undermined its use.Educators must design effective mobile-technology-supported learning experiences aimed at formal and informal environments to increase mobile pedagogical use [21].There is a lack of assessment to identify teachers' perceptions of their experiences regarding the use of mobile pedagogy [22].The authors [23] conducted a comparative study in the UK, Australia, Belgium, Cyprus, Ireland and the Netherlands to identify and evaluate innovative mobile pedagogies during the pandemic, which showed that teachers are adapting traditional classroom pedagogies for use online by transferring former face-to-face teaching to online teaching.Furthermore, there is limited research discussing the use of mobile pedagogy in China in terms of use of technology and teachers' perceptions.As such, this study examines disruptive technologies and teaching practices in China, thus enriching the knowledge of mobile pedagogy in worldwide practice. This study aims to address the issue of mobile pedagogy in China, examining its innovations in education and the use of technology.The research questions are as follows: • How are mobile teaching practices affecting distance learning in China's higher education? • How can mobile pedagogy be evaluated based on teachers' perceptions? 
The remainder of this paper proceeds as follows. First, relevant research is reviewed to address the theoretical background and empirical research on mobile pedagogy. Then, the research design and thematic analysis process are explained, followed by the coding findings. Following this, the results are discussed. Finally, a conclusion and the limitations of the study are given. Disruptive Technologies The term 'disruptive technology' was coined by McGraw, who explained it as simple commercialised products in an emerging market [24]. Christensen believes that the force of emerging disruptive technology enables customization, and pushes schools to shift from a monolithic class model to a student-centric model [24]. The use of technology for learning provides opportunities to modularise the education system and personalise learning. Burden et al. discussed the usability of disruptive devices, including hardware, software and communication technology, foreseeing learning potential with disruptive technologies [25]. Likewise, Alexander argued that the combination of mobile technologies would transform education, turning learner recipients into learner nomads [26]. Pedagogical innovation is supposed to change with disruptive technologies, but this is arguably not reflected in actual practice [27]. Based on longitudinal action research in mobile learning, Cochrane and Thomas highlighted the use of technology for reforming teaching practices in different learning contexts [28]. Disruptions to pedagogical practices have been investigated in virtual learning environments for educational innovation. However, little evidence showed the anticipated fruition of teaching practices from existing pedagogical implementation using disruptive technologies [29]. As seen by [19], current pedagogies lag behind and are misaligned with diverse learning opportunities in the digital age. Mobile Pedagogy Using Disruptive Technologies Many studies have discussed disruptive technologies and teaching with mobile pedagogy. Technological improvement adds flexibility to the teaching and learning process [27,28]. Digital tools and internet connectivity expanded seamless and ubiquitous learning environments to facilitate distance education [30]. Mobile pedagogical practices, such as open online classes, mobile-assisted language learning, and smart teaching and learning are research trends in distance education. Table 1 shows a review of mobile pedagogies depicted by scholars. As mentioned by scholars, mobile pedagogy has emerged with the popularity of mobile learning, highlighting portability and flexibility in order to provide inclusive learning opportunities, that is, technology-supported, learner-centered, borderless learning models. Hence, mobile pedagogy requires the ability to use technology to facilitate the transfer of knowledge and training skills in space and time. The UK Open University defines mobile pedagogy as "new pedagogies making use of technologies to go further, to open up new possibilities" [33]. In addition to the application of advanced educational technologies, emphasis should be focused on the intended use of technology to fit educational environments. The prospect of mobile pedagogy is to stimulate effective learning in an equitable and inclusive way. Cochrane et al.
concluded that pedagogy should be designed with a focus on the context and content of the students [34], as illustrated in Figure 1. Advances in educational technology have had a significant impact on flexible learning environments. The authors [35] investigated mobile pedagogical practices in K-12 using a systematic review; the results indicated that high-level innovation only occurs when there is a radical shift in student engagement across boundaries. However, student agency and learning outcomes depend heavily on high levels of autonomy. As such, mobile pedagogy is digitally disrupted by technological interventions [36,37], with the connectivity and flexibility of mobile learning [25,35,38,39] aiming to meet the needs of learners in the digital age. Disruptive Technologies and Mobile Pedagogy in China China is advancing information technology in the form of digitalisation, networked connectivity and smart technology to improve rural education [40]. Government policies have emphasised a close relationship between information technology and all-round education in the digital era. In addition, China provides opportunities for teachers to further their lifelong learning by providing continuous support in ICT knowledge and skills [41], especially in the rural regions, to achieve equitable quality education. As a result, the use of mobile pedagogy has rocketed from concept to practice in order to improve the quality of education around China [42,43]. Mobile pedagogy has shown a positive effect in improving modernised education in China [44,45]. In addition, learners have responded favourably to learning Apps and social media [46]. However, recent studies show that MOOC education has gone from a high to a medium or even low point in universities [47][48][49][50]. Indeed, mobile pedagogy in higher education is facing poor performance and high student dropout rates [51,52]. There is a need to assess the innovation and feasibility of mobile pedagogy from the teacher perspective in order to achieve its sustainability. Research Design This study adopted a qualitative methodology to investigate teachers' perceptions. The research design used in this study is the interpretive paradigm. The interpretive paradigm, mostly used in the social sciences, is constructed on the basis of individual subjective experiences that influence an understanding of the world [53]. Individual experience is emphasised in the interpretive paradigm [54] to explore perceptions, experiences and backgrounds. Thematic analysis is used to better understand the phenomenon in virtual learning environments. In this study, data were collected through interviews, transcribed verbatim into instrument data, and then summarised, compared and categorised using codes and coding [55]. Sample Population The sample was a cohort of 30 university lecturers who taught English as a foreign language to undergraduate students. All of them had been using mobile technologies to conduct online teaching since the end of 2019. In order to fully reflect a scenario of teaching practice using mobile pedagogy, each participant had to take a certain number of online classes, i.e., 10 class periods online per week or above. Demographic information is presented in Table 2.
In-Depth Interview We followed the interview procedures of [56]. Based on a semi-structured interview protocol, we encouraged participants to share their online teaching experiences, both operational and perceptional. The interviews lasted an average of 35 min. After the interviews were completed, memos were written and organised. With participants' consent, all interviews were recorded and transcribed verbatim through Iflyrec, a transcription App. In order to achieve in-depth communication [57], the interview process was conducted in Chinese, the native language of the participants. This study used the back-translation technique used in cross-cultural research [58], which means that all transcripts were translated from Chinese into English and then back into Chinese. Afterwards, these documents were sent to the participants for translation consent and authorisation. Ethical Considerations We described the purpose of this study to the participants and obtained their consent prior to the interviews. Moreover, we gave each participant a consent form explaining the details. To ensure the privacy of the participants, we anonymised their names and any possible identifiable indicators that appeared in the quotations. Data Analysis Data analysis was performed in two stages. The first was a coding process, in which the data were coded according to the text relevant to the research. In the second stage, we generated themes from the coding. Specifically, we compared the patterns of coding, divided them into groups, and summarised the categories with themes. The streamline of the codes-themes-assertions model [55] is shown in Figure 2. Credibility and Reliability A controversial claim about qualitative analysis is that researchers may fail to be impartial when dealing with interview data and may interpret the data with bias. Therefore, self-evident analysis is necessary [59]. We outline here the process with credibility and reliability, as shown in Figure 3. Coding Results The example of open coding is shown in Table 3. In this process we categorised the original data into groups. After open coding, we used axial coding to build the link between codes. In this process, the codes were sorted to establish the connection (see Table 4). In addition, we used a coding-theme concept map (see Figure 4) derived from NVivo to display the relationship between themes and subthemes. The participants are anonymous for confidentiality [60].
Assessing Disruptive Technologies In terms of technology, innovative practices involving software and hardware are well-represented in current education. Indeed, market demand for educational applications and digital gadgets is surging despite the economic downturn in 2022 [3]. Software applications, including learning management systems (LMS) and online teaching platforms, such as Tencent Meeting, DingTalk, Chaoxing Learn, Ulearning, and Rain Class, are popular tools for online teaching and learning in China. Social communication applications, such as Tencent QQ and WeChat, are used frequently to interact with students. Most of these applications are in-country technology, as described below: Remark 1. I recommend domestic applications and platforms. They focus on learning experiences to achieve the goals of China's higher education. Of course, I recommend that students use applications from abroad to enhance their learning. But for me, Chinese applications are easier to handle. The digital platforms are easier to use. Participants emphasised the user-friendliness of in-country applications, which made it simple for them to apply mobile pedagogy. Despite some technical issues, most of the participants shared their successful experiences of implementing mobile pedagogy, as stated by P10: Remark 2. It is good and successful. I had a few technical problems, but they were all solved quickly. We fully prepared the devices for teaching, and the university supported us in every aspect. In addition, participants strongly agreed that the interactive function of mobile pedagogy in distance teaching and learning offers students the freedom to ask and answer questions without being noticed by their peers. Remark 3. I find some students, who are not active in face-to-face class, are quite brave in online classes. For example, they leave messages in the chat box.
It provides opportunities for inactive students to participate in class, and students feel less embarrassment if they make mistakes (Participants 7, 11, 23). The attribute of enabling learners to interact seamlessly across distant geographies and to become a borderless learning community is a common benefit of mobile pedagogy. Teachers' Attitudes The majority of the participants showed supportive attitudes towards the use of mobile pedagogy. Their attitudes are summarised in Figure 5. Supportive attitudes include the tendency of future education, convenience in teaching, etc. Unsupportive attitudes include ineffective teaching, distraction, technical concerns, and privacy exposure. The main comments in support of favorable attitudes are listed below: Remark 4. I personally have a more agreeable attitude because I think mobile learning and teaching is an inevitable trend. It assists teaching and exceeds traditional teaching methods, helping to promote students' enthusiasm and interest in learning. It is beneficial for learning. Remark 5. It should be promoted and encouraged because students are used to the model of mobile learning. It is an educational environment, a tendency, and students' learning habit. Remark 6. The current era is changing fast. I may describe it as an era of post-modernism. Network ecology is very developed. Mobile pedagogy is the teaching tool in our time. If you don't catch up with the advanced technology, you will be doomed to be left far behind. In addition to the supportive attitudes, a further quarter expressed unsupportive opinions, as described below. Remark 7. I don't think mobile pedagogy will last long. It is an emergency teaching behaviour in response to COVID-19. Teaching and learning will go back to normal face-to-face class. Anyway, they have to take paper exams. Remark 8. My university still requires students to take paper-based exams with the monitoring of teachers. For me, students have to follow all the rules of paper exams, which discourages the use of technology in teaching. Remark 9. I am very worried when I use mobile pedagogy. You know, there are so many private incidents that come to light in webcasts. Technical problems, i.e., unstable Wi-Fi, web cameras, unmuted microphones, and an unresponsive system, happen unexpectedly. In particular, teachers are concerned that their privacy will be exposed during the use of technology, for example, by forgetting to turn off the camera, unmuting the microphone, or the appearance of a family member on camera.
Feasibility and Sustainability of Innovation As for the effectiveness of mobile pedagogy, the participants did not have clear visions for its future benefit.Because mobile pedagogy was an emergency plan in response to the COVID-19 outbreak, the participants showed ambiguous perceptions, viewing it as a replacement and the only option for class teaching under these circumstances.Although the government and institution demanded the use of mobile pedagogy during the COVID-19 outbreak, the higher education system did not prepare to set any specific goals for distance schooling.Furthermore, universities still rely on traditional methods to evaluate teaching outcomes.P16 pointed out the dilemma of using mobile pedagogy: Remark 10.I do not think there are any differences before and after using mobile pedagogy.Furthermore, the results of using mobile pedagogy are poor, hardly achieve teaching objectives with traditional teaching methods. In addition, there was no difference in course syllabi, examinations and student assessment after technology mediation.P16 elaborated her viewpoints as follows: Remark 11.Everything is similar, teaching objectives, teaching design, assessment etc.We use mobile pedagogy, but we still require students to attend paper-based exams if possible.Mobile pedagogy enhances teaching, but we still use traditional ways to assess teaching outcomes. Mobile pedagogy enhances teaching models; however, the assessment system has not changed.The adoption of mobile pedagogy changed teaching strategies, but the education system, including assessment and evaluation, did not change.There are some pilot online exams, for example, the IELTS indicator.However, online exam results have less credibility compared to paper exams. Remark 12.I don't trust online exam results.Students have a lot ways to cheat online.For example, how do we authenticate the identity of test-takers in distance? Technology is effective in designing distance learning models, but at the same time, it facilitates cheating and plagiarism.The reliability and validity of online tests in a virtual environment are issues that affect the sustainability of mobile pedagogy in the education system. Efficacy of Mobile Pedagogy Mobile pedagogy is considered as an advancement in teaching across national boundaries, as reviewed in the literature (see Section 2.2); however, this study found that the use of mobile pedagogy is limited with a strong regional dimension.This corroborates the studies of [61,62], which point to the prevalence of in-country applications and platforms in distance education.These applications focus on specific knowledge of regional course descriptions, and the test system caters to the needs of local learners.From this perspective, educational technology reflects the local education system and represents the characteristics of regional education, and is hardly universal, despite its attributes of connectivity and ubiquity. 
Innovations using disruptive technologies are mainly found at the student level.As described by [35,38], technology-enhanced pedagogy invoked learner-centered, reflective and collaborative learning processes.In this regard, mobile pedagogy identifies the learning community, and seeks to address the needs of learners by facilitating and developing interaction between participative subjects through social interaction tools.Meanwhile, the potential of technology-based teaching practices is obvious in the digital age, that is, a positive relationship is observed in technology mediation [25], where teachers adopted technology effectively to support online teaching and learning activities. Regarding the use of mobile pedagogy, the results showed limited innovation in teachers' pedagogies and conception.As indicated by [63], educational strategies bear the hallmarks of traditional classroom pedagogy.This is consistent with the report of [25], which classified disruptive mobile pedagogies as medium or low in innovation.Furthermore, this study found that teachers were under enormous pressure.They are more concerned about the unpredictable failures associated with the use of technology.In contrast to face-to-face teaching, mobile pedagogy is subject to unpredictable errors that are irrelevant to teaching but may have a negative effect on teachers.As indicated by [35], mobile pedagogy has blurred the boundary of formal and informal environments. Impact on Higher Education This study sheds light on the reform of the higher education system.Mobile pedagogy has been used for schooling innovation; however, traditional teaching methods are only disrupted.The use of mobile pedagogy is limited by the purpose of achieving existing curriculum goals [64].Practically, mobile pedagogy is considered innovative because it enhances teaching strategies for learning.However, innovation in formal educational environments changed little.Consistent with [65], teaching staff continue to focus on traditional practices to achieve course objectives.The very important issue is that, despite the diversity and flexibility of instructional designs, the education assessment system remains the same. As discussed in the literature, the integration of technology, pedagogy, and content has led to new educational activities, but this professional, applied knowledge requires a collaborative effort between content experts, educational technology developers, educational researchers, and pedagogical practitioners [20,66,67].Most participants perceived paper-based exams as a formative assessment to measure student learning.The effectiveness of appropriate online assessment systems in a virtual environment is a pressing need for educational trends. To sum up, the innovative attributes of mobile pedagogy have raised questions about the current curricula and testing system in higher education.Nevertheless, little has been done in terms of the higher education system [68].As online learning and teaching are becoming mainstream for universities, it is necessary that virtual online exams become part of the education system.However, higher-order thinking skills are not suitable for designing online exams and many subjects are limited by virtual environments [69].The effectiveness of online assessment requires institutional, administrative, and pedagogical support. 
Sustainable Development of Mobile Pedagogy in the Post−COVID-19 Era Disruptive technologies and teaching with mobile pedagogy have flourished due to social distancing and lockdown policies to address COVID-19.Mobile pedagogy has worked effectively in the emerging situation to achieve the purpose of education.However, sustained use of technology in education is obscure.A key corollary to this issue is the digital-use divide, as articulated by [23].Defined by [70], the digital-use divide requires that we not only have appropriate access to technology, but also expertise on its optimal use, particularly to promote interactivity in teaching and learning, which is the main problem that exists in current mobile pedagogical practice.Teachers are replicating 'traditional' classroom methods and incorporating them into online teaching by recording instructional videos, organizing activities, and sharing assignments online.All of these can be done in face-to-face class and may achieve a better result than in the virtual-learning environments.As researched by [18], mobile pedagogy is promising for student-centred online activities, but teachers play a weakened role.Therefore, the sustainable use of technology in teaching has been questioned by scholars [71][72][73], especially in the post-COVID-19 era.Apparent weaknesses, such as lack of social interaction, poor communication and poor student performance are seen as ineffective for disruptive technologies in mobile pedagogy. As described in Table 1, disruptive technologies are perceived as the way to change a teacher-led class to a student-centric one.However, emergency teaching with disruptive technologies exposed the disconnect between teachers' positive perceptions of integrated technology and its adoption in online situations.The future use of technology depends to a large extent on how teachers use mobile pedagogy to explore viable pedagogical innovations to improve teaching outcomes [19,35,74]. Conclusions This study investigated the use of mobile pedagogy in distance teaching practice based on interviews with 30 lecturers.The findings show that mobile pedagogy is highly regional in nature and reflects in-country educational characteristics despite its connectivity and ubiquity.The adoption of mobile pedagogy has innovated teaching methods, but it has not touched the education system, which is an issue that hinders the sustainability of mobile pedagogy in formal and non-formal education.There is a need to improve the education system so that it is adapted to the digital age. From the managerial perspective, this study provides enlightenment on the sustainable use of technology in current educational environments.It is proven that advances in educational technology have had a significant impact on flexible learning environments.Mobile pedagogy enhances teaching and learning through the provision of online tutorial services.However, teachers, as practitioners, are overlooked in the practice of assessing mobile pedagogy.The analysis of this study provides empirical insights for educators and institutions to improve pedagogy-related systems in distance teaching, particularly in the aspect of policies and institutions.In addition, it offers suggestions for future pedagogical innovations using technology in the digital age. 
This study has several limitations. First, it has been conducted at a university located in China. Due to differences in the software market, applications and websites may vary from region to region. Additionally, the sample may be limited in scope and may have overlooked teaching performance in other subject domains. Last, the interpretation of the interview data may be impacted, to a minor extent, by the researchers' reflexivity. Teaching using mobile pedagogy is still at a basic stage of being influenced by disruptive technologies to accomplish traditional teaching objectives. Issues related to online syllabi and assessment have rarely been innovated in pedagogical performance. It would be useful to explore specific courses featuring the use of mobile pedagogy, which would be valuable to guide teachers in their innovation. Figure 3. Thematic Analysis Flow. Table 1. Review of Mobile Pedagogy. Table 2. Description of the sample.
2022-11-27T16:18:26.487Z
2022-11-25T00:00:00.000
{ "year": 2022, "sha1": "e956dbabd325c9a19a86dbec3e23d3890961aa4f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/14/23/15676/pdf?version=1669368437", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4d80fd847fb8d94bcd2e62dedc505f11710c6c5e", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [] }
54748396
pes2o/s2orc
v3-fos-license
Tight binding within the fourth moment approximation: Efficient implementation and application to liquid Ni droplet diffusion on graphene Application of the fourth moment approximation (FMA) to the local density of states within a tight binding description, to build a reactive interatomic interaction potential for use in large scale molecular simulations, is a logical and significant step forward to improve the second moment approximation, which stands at the basis of several widely used (semi-)empirical interatomic interaction models. In this paper we present a sufficiently detailed description of the FMA and its technical implications, containing the essential elements for an efficient implementation in a simulation code. Using a recent, existing FMA-based model for C-Ni systems, we investigated the size dependence of the diffusion of a liquid Ni cluster on a graphene sheet and find a power law dependence of the diffusion constant on the cluster size (number of cluster atoms) with an exponent very close to -2/3, equal to a previously found exponent for the relatively fast diffusion of solid clusters on a substrate with incommensurate lattice matching. The cluster diffusion exponent gives rise to a specific contribution to the cluster growth law, which is due to cluster coalescence. This is confirmed by a simulation for Ni cluster growth on graphene, which shows that cluster coalescence dominates the initial stage of growth, overruling Ostwald ripening. I. INTRODUCTION The interest in modeling finite temperature equilibrium, dynamic, and reactive properties of large systems at the atomic scale with sufficient accuracy has increased considerably in the last decades with the appearance and improvement of various types of (semi-)empirical interatomic potentials that are computationally much more efficient and faster than ab initio methods. The (semi-)empirical approaches include Stillinger-Weber-type models, [1][2][3] (modified) embedded atom methods, [4][5][6][7][8] empirical bond order potentials (BOPs), [9][10][11][12][13][14][15][16] and higher-order, so-called analytical bond order potentials (ABOPs), [17][18][19][20][21][22][23][24][25] which remain closer to tight binding (TB) models building on the works in, for example, Refs. 26-29. It is clear that the qualification "sufficient accuracy" strongly depends on the application. It is generally assumed, and to a certain extent confirmed by experiments, that nowadays state-of-the-art ab initio methods are more accurate than (semi-)empirical methods in situations to which the latter models have not been fitted. In fact, a serious difficulty when using a (semi-)empirical model is that it is not so easy to know how accurate it actually is, which undermines its predictive qualities. Normally, such a potential gradually reveals its properties and failures while it is being tested and applied to a variety of systems and conditions for which experimental data are available and/or which allow for comparison with ab initio calculations. After that, models can be improved or another model may be selected for a certain application. The fact remains that for many applications empirical models give access to thermodynamic and kinetic properties via simulation for which ab initio methods, say within density functional theory (DFT), and even standard TB methods are simply too slow, and for which the assumed additional accuracy of these latter methods seems only a matter of details. Of course, there are many other applications which absolutely require an ab initio approach.
[31][32][33][34] In fact, they can be derived starting from the so-called second moment approximation (SMA) to the electronic local density of states (LDOS) at the atomic positions. Usually, the final model takes a purely analytical form. The SMA involves only interactions between atoms within a close neighborhood (sometimes reaching beyond first nearest neighbors) and by this it benefits from the locality in the dependencies of the terms describing the total energy of the system. Here we are paying a first price for such a description, as we know that quantum mechanics is essentially a nonlocal theory. Although most current quantum mechanical computation models also build on certain local approximations, they are clearly less local than the empirical models. Fortunately, for many systems, and especially for pure systems, the assumption of a local theory is not as bad as it seems for a property like the cohesive energy, due to an intrinsic mechanism which strongly favors states for which local charge neutrality is preserved. While the SMA has been fairly successful for metals, its limitations clearly show up when applying it to covalent systems. For example, for carbon, the large variation in the bond strength between two carbon atoms, including single, double, triple, and conjugated bonds, requires a description which goes beyond nearest neighbors, as is illustrated in Fig. 1. A logical next step to improve the models based on the SMA is to build a model in which the LDOS is treated within the fourth moment approximation (FMA). This approximation stands at the basis of a recently published interaction model for nickel-carbon systems.25 It does not belong to the abovementioned ABOPs, as the analyticity is lost to a certain extent. In turn, more quantum aspects are preserved, including the explicit evaluation and filling of the LDOS. Retaining the electronic structure up to the fourth moment level would, in principle, make it possible to consider electronic (transport) properties and to deal with charge transfer effects, etc. For the description of the energetics of a pure metal, for example, it seems unnecessary to retain the electronic structure up to this level. However, for systems composed of more than one component, such as carbon-nickel systems,25 the importance of such an approximation becomes evident. In particular, it is the lowest-order (simplest) approximation that takes into account both diagonal disorder (difference in on-site energy levels) and off-diagonal disorder (difference in hopping matrix elements). For example, this property seems to be crucial for a proper description of the mixing behavior of transition metal alloys, as has been demonstrated very recently.35 The major aim of this work is to give a basic description of the FMA and its (technical) implications, considering it a natural next step and for certain systems the necessary improvement beyond the so widely used SMA. This analysis directly applies to the model in Ref.
25, being a prototype FMA model, and stands at the basis of a very efficient implementation in a Monte Carlo (MC) simulation code, which we recently realized and of which we will provide the most important ingredients. The code, which gave an improvement in speed by a factor 100 to 2000 with respect to a previous version based on straightforward implementation, shows a linear dependence of the simulation time on the system size (number of atoms). For a system of a few thousand atoms it is several orders of magnitude faster than standard TB, using diagonalization of the TB Hamiltonian matrix, and only up to one order of magnitude slower than SMA-based models.[37][38][39][40][41][42][43][44] It should be noticed, however, that, due to a smaller prefactor, FMA is significantly faster than, for example, the method involving the approximation of the Fermi function by a polynomial of an unavoidable, relatively high order 41,43 requiring as many moments to be calculated, albeit that the latter method may provide higher accuracy depending on the type of system. As an example of a simulation study of an interesting and relevant physical problem, which has become feasible with the new code, we used it to simulate the diffusion of liquid Ni clusters on a graphene sheet. Here our aim is to investigate the dependence of the cluster diffusion constant on its size N (number of Ni atoms in the cluster), in order to establish whether there is a power law behavior D_N ∝ N^{-α} and, if so, to extract the exponent α. The fact that there is no true time scale in MC simulation has sometimes led to the idea that MC simulation cannot be used for studying dynamical processes. However, as has been shown in, for example, Ref. 45, under certain conditions one can assume that MC "time," taking it as the average number of displacement attempts per atom, is proportional to the real time except for short time scales. In the present application, keeping the acceptance rate for the MC displacement trials constant is enough to recover the essential features of diffusion in our MC simulations. It is indeed not possible to determine (directly) absolute values of D_N, but this does not hinder the study of the size dependence of D_N at a fixed temperature T. An estimate of absolute values can be obtained by choosing a suitable reference process with an (experimentally) known time scale, like the atomic (self-)diffusion, as we will do. The value of the above-mentioned exponent α is important for the contribution of cluster coalescence to cluster growth on a two-dimensional (2D) substrate, as a second mechanism besides Ostwald ripening. For Ostwald ripening the (average) linear cluster size grows as t^β with β = 1/3 if diffusion is the rate limiting process 46 and β = 1/2 when surface kinetics is the slowest process.47
For compact, 3D clusters this would give rise to a contribution N(t) ∝ t^{3β} to the cluster growth law. For cluster coalescence, an analytical solution 48 predicts the average density of clusters to behave as ρ_cl(t) ∝ t^{-3/(3+γ)}, implying N(t) ∝ t^{3/(3+γ)}, where γ is the exponent in the power law dependence D(r) ∝ r^{-γ} of the diffusion constant on the cluster radius r. So in any case, since normally γ > 0, implying 3β > 3/(3+γ), Ostwald ripening will be the fastest, and thus prevailing, growth process for large times. However, for small and intermediate time scales cluster coalescence may contribute significantly to the growth, depending on the kinetic prefactors. The two growth mechanisms together, and using α = γ/3, give rise to a predicted crossover in the prevailing mechanism. Here K_CC and K_OR denote the kinetic constants for cluster coalescence and Ostwald ripening, respectively. While the diffusion of solid clusters has been studied frequently in the past, both experimentally [49][50][51][52][53] and theoretically, [54][55][56][57][58][59][60][61] investigations of liquid cluster diffusion are limited to only a few.62,63 Most of the works on solid clusters focus on 2D clusters, epitaxially attached to the substrate, for which the diffusion is quite slow, of the order of 10^{-17} cm^2/s, and takes place by single-atom events. Different mechanisms were identified, giving rise to power law dependencies ranging from D(r) ∝ 1/r^3 for periphery diffusion to D(r) ∝ 1/r^2 and D(r) ∝ 1/r for, respectively, a correlated and an uncorrelated evaporation and condensation mechanism (see Ref. 56), r being the 2D cluster radius now. However, these single-atom mechanisms cannot explain the very fast diffusion, of the order of 10^{-8} cm^2/s at room temperature, reported in Refs. 51 and 52. A plausible explanation for this fast diffusion is given in Ref. 60, in which it is shown by simulations based on Lennard-Jones (LJ) interactions for 3D clusters on a substrate, that is, the partial wetting case, that the diffusion constant can increase by many orders of magnitude when changing from a situation in which the cluster and the substrate lattice parameters are commensurate to a situation in which they are incommensurate. The exponent α in the power law D_N ∝ N^{-α} was found to vary between α = 2/3 for the incommensurate case, with a Brownian-like mechanism, and α = 1.4 for the small mismatch case, with a hopping-like mechanism. In both mechanisms the cluster moves as a whole, contrary to the case of single-atom mechanisms. It is not so clear to what extent 3D liquid cluster diffusion on a substrate, as considered here, is qualitatively different from that of 2D and/or 3D solid clusters. In the simulation study of Ref. 62 of 3D liquid gold (Au) cluster diffusion on an amorphous frozen-in substrate, an exponent α = 1.3 was found for the smaller clusters, but the largest cluster (555 atoms) was found to diffuse even slower than predicted by this power law. Here a rolling-like, or rather a stick-and-roll, mechanism was identified, which to some extent corresponds to the stick-and-glide mechanism observed in the small mismatch case of solid cluster diffusion. This could explain the similarity in the power law exponents, that is, 1.3 versus 1.4. In the incommensurate case the energy barriers for diffusion are much smaller, giving rise to a random walk mechanism. It seems that the substrate properties used in Ref.
Apparently, the cluster is able to find relatively stable positions on the surface, with a relatively low escape probability. In our simulations the substrate is crystalline and its atoms are allowed to move. While the (111) surface of Ni matches almost perfectly with graphene, one expects no particular lattice matching for a liquid cluster. In addition, the effect of energy barriers is reduced at high temperature. This makes us expect a power law behavior similar to that for the incommensurate solid cluster case, with α = 2/3, that is, a D_N which is inversely proportional to the contact area.

In the next two sections we give a description of the TB model within the fourth moment approximation (TBFMA), such as it is applied in Ref. 25, and all important implications and ingredients for constructing a fast MC simulation code. Section IV is devoted to its application to liquid Ni cluster diffusion and growth on graphene, while Sec. V contains a summary and conclusions.

II. TIGHT BINDING WITHIN THE FOURTH MOMENT APPROXIMATION

The total energy E of the TBFMA model in Ref. 25 is the sum of atomic energies E_i,

E = Σ_{i=1..N_at} E_i, with E_i = E_R,i + E_C,i,

where E_R,i and E_C,i are the atomic repulsive and cohesive energies for atom i, respectively, and N_at is the number of atoms in the system. The repulsive energy of atom i reads

E_R,i = F( Σ_{j≠i} V_R(r_ij) ),

where V_R(r_ij) is a repulsive pair potential and F is an embedding function to extend the transferability of the model to different coordination environments. A finite cutoff distance at which V_R(r_ij) smoothly vanishes limits the sum over j to atoms within this distance from atom i. The atomic cohesive energy is given by

E_C,i = 2 ∫^{E_F} (E − ε_i) n_i(E) dE,

where the prefactor 2 accounts for the two spin states, E_F is the Fermi energy, ε_i is the average orbital energy per electron for an isolated atom i, and n_i(E) is the LDOS of atom i, consisting of a sum of contributions n_i,λ(E) from the different orbital groups or bands λ involved (e.g., 2s and 2p for carbon), forming the basis of orbitals. Within the TB description the LDOS for band λ is defined as

n_i,λ(E) = −(1/π) lim_{ε→0+} Σ_m Im ⟨iλ_m| Ĝ(z) |iλ_m⟩,

where the sum runs over the n_λ orbitals λ_m in band λ (e.g., 2p_x, 2p_y, and 2p_z for the 2p band, yielding n_λ = 3) and where Ĝ(z) = (z − Ĥ_TB)^(−1) is the Green's function operator, with z = E + iε and Ĥ_TB the Slater-Koster 64 TB Hamiltonian.

Employing Lanczos tridiagonalization of Ĥ_TB with the appropriate initial Lanczos vector and using a recursive relation for the cofactors of a tridiagonal matrix, Ĝ_ii,λλ(z) = Σ_m ⟨iλ_m|Ĝ(z)|iλ_m⟩ can be rewritten as a continued fraction (CF) expansion 27:

Ĝ_ii,λλ(z) = Σ_m 1 / ( z − a_1^{iλ_m} − b_1^{iλ_m} / ( z − a_2^{iλ_m} − b_2^{iλ_m} / ( z − ... ) ) ),

where the continued fraction coefficients (CFCs) a_n^{iλ_m} and b_n^{iλ_m} = (β_n^{iλ_m})² are the diagonal and the squares of the off-diagonal elements of the Lanczos tridiagonal matrix, respectively. Alternatively, Ĝ_ii,λλ(z) can be expanded in moments as

Ĝ_ii,λλ(z) = Σ_m Σ_{n≥0} μ_n^{iλ_m} / z^(n+1),

where μ_n^{iλ_m} = ⟨iλ_m| (Ĥ_TB)^n |iλ_m⟩ represents the nth moment for atom i and band λ. The moment μ_n^{iλ} involves all closed hopping pathways consisting of n nearest-neighbor hoppings and/or on-site loops beginning and ending on a λ_m orbital of atom i (see Fig. 1). There is a one-to-one correspondence between the moments and the CFCs. The CF expansion is much more suitable for evaluation of the LDOS than the moments expansion, and normally the CFCs can be calculated accurately using the Lanczos algorithm. However, an efficient implementation of the TBFMA model in a MC code requires the moments, as will be shown in Sec. III A.
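The correspondence between the moments and the CFCs can be made concrete with a small sketch. The snippet below is only an illustration, not the code described in this paper: it computes the first four local moments of a toy one-orbital chain Hamiltonian as diagonal elements of H^n (sums over closed hopping pathways) and converts them to CFCs through a plain Gram-Schmidt construction of the monic orthogonal polynomials. The two levels needed for the FMA are handled adequately by this naive conversion; for longer expansions it becomes ill-conditioned, which is the reason for the stable recursion discussed next.

```python
import numpy as np

# Illustrative only: the dense toy Hamiltonian and the plain Gram-Schmidt
# conversion are assumptions of this sketch; the paper's code works with
# sparse matrix blocks and the stable recursion of Ref. 65 instead.

def local_moments(H, orb, n_max=4):
    """mu_n = <orb| H^n |orb> for n = 0..n_max (closed hopping pathways)."""
    e = np.zeros(H.shape[0]); e[orb] = 1.0
    mu, v = [], e.copy()
    for _ in range(n_max + 1):
        mu.append(float(e @ v))     # mu_0 = 1, mu_1, ..., mu_n_max
        v = H @ v
    return mu

def cfc_from_moments(mu):
    """CFCs (a_n, b_n), n = 1..(len(mu)-1)//2, via monic orthogonal polynomials."""
    inner = lambda p, q: sum(pi * qj * mu[i + j]
                             for i, pi in enumerate(p)
                             for j, qj in enumerate(q))
    a, b = [], []
    p_prev, norm_prev = np.array([1.0]), mu[0]      # p_0 = 1
    p_pp, norm_pp = None, None
    for _ in range((len(mu) - 1) // 2):
        xp = np.concatenate(([0.0], p_prev))        # x * p_{n-1}(x)
        a_n = inner(xp, p_prev) / norm_prev
        p_new = xp.copy()
        p_new[:len(p_prev)] -= a_n * p_prev
        if p_pp is not None:
            p_new[:len(p_pp)] -= (norm_prev / norm_pp) * p_pp
        norm_new = inner(p_new, p_new)
        a.append(a_n); b.append(norm_new / norm_prev)
        p_pp, norm_pp, p_prev, norm_prev = p_prev, norm_prev, p_new, norm_new
    return a, b

# Toy check: end orbital of a 6-site chain with unit hoppings.
H = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
mu = local_moments(H, orb=0, n_max=4)   # [1.0, 0.0, 1.0, 0.0, 2.0]
a, b = cfc_from_moments(mu)             # a = [0, 0], b = [1, 1]
print(mu, a, b)
```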
Since the moments μ_n^{iλ} rapidly diverge for increasing n, contrary to the CFCs, a naive calculation of the CFCs from the moments can easily lead to large inaccuracies in the CFCs. This problem can be solved by using a numerically stable algorithm 65 in which the CFCs a_n^{iλ} and b_n^{iλ} for n = 1, ..., m/2 (m even) are calculated iteratively from the first m moments μ_n^{iλ} (n = 1, ..., m).

In principle there are several options to terminate the CF expansion. The simplest option would be to truncate it at some level n by just taking b_n^{iλ} = 0. This leads to a LDOS containing only Dirac peaks, maximally n. For the application to bulk phases, a more realistic LDOS, containing an energy band, is obtained by taking the CFCs constant beyond a certain level. In particular, within the TBFMA model of Ref. 25 all CFCs beyond n = 2 are taken constant and equal to a_n^{iλ} = a_2^{iλ} and b_n^{iλ} = b_2^{iλ}. Furthermore, charge transfer is neglected, so that for each atom i the Fermi energy is replaced by a local Fermi level E_F,i determined from the charge-neutrality condition 2 Σ_λ ∫^{E_F,i} n_i,λ(E) dE = Z_i (Eq. (9)), where E_F,i is the highest occupied level of the LDOS of atom i and Z_i is the number of valence electrons for atom i in the chosen basis of orbitals for that atom. For C, described within the (2s,2p) basis, Z_i = 4, whereas Z_i = 8 for Ni, described within the 3d basis. The neglect of charge transfer is a reasonable approximation for C-Ni systems, 25 but not a necessary requirement for the analytical analysis in Sec. III B.

III. EFFICIENT IMPLEMENTATION

A. (Re)calculating moments in an efficient way

For standard MC simulation, in order to calculate the change in the total energy for a new trial configuration generated by a random displacement of a randomly chosen atom i, only certain moments of atoms up to second neighbors of atom i need to be recalculated. To facilitate the discussion here, let us consider a move which does not cause a coordination change (no cutoff radius is crossed). After such a move, in principle all four moments for atom i change. For the nearest neighbors j of atom i, the second, the third, and the fourth moments change. However, for a second-nearest neighbor k of atom i, only the fourth moment changes (see Fig. 2). In addition, the change of the fourth moment of such an atom k is only due to the change of a very limited number of closed hopping pathways, namely, only those pathways which pass by the displaced atom i. Also for the first nearest neighbors j, only a fraction of the pathways pass by atom i and contribute to the changes in the second, third, and fourth moments. To exploit these facts we designed a very efficient and relatively simple algorithm for updating the moments after a move, which is outlined in Table I.
In the description in Table I, H ij represents a (n i xn j ) matrix block of the TB hamiltonian.For j = i, H ij is diagonal and contains the average band energy levels, while for j = i, H ij contains the probabilities for hopping from each of the n i orbitals of atom i to each of the n j orbitals of atom j .H T ij and H 2 T ik are just the transposed matrices of H ij and H 2 ik , respectively, and D[• • •] stands for "diagonal of".The n i components of the vector μ i,n contain the nth moment for each orbital on atom i.In particular, the change of the fourth moment of all second-nearest neighbors can be computed very efficiently from the matrix blocks H 2 ki = H 2 T ik which have already been computed in the recalculation of the moments of the central atom i! B. Analytic integration of the local density of states Within the FMA, the integrated, normalized LDOS for band λ at atom i, I iλ , reads where Allan et al. 66 have found an analytical solution for the general variant of the integral (10) and the corresponding energy integral with a CF expansion of arbitrary length terminated by a (z) of the above-described form.It was TABLE I. Computation steps for efficient recalculation of the moments after a displacement of an atom i in MC simulation.Note that the quantities δμ k,m , δμ j,m (m = 2,3,4) include only a part of closed hopping pathways for atoms k and j , namely, only the ones that change.The quantities with and without prime indexes indicate the values after and before the displacement of atom i.The terms "nn" and "nnn" stand for nearest and next-nearest neighbors respectively. (1) Atom i all moments change -Compute: (3) Nearest neighbors j of atom i μ j,2 , μ j,3 , and μ j,4 change -Compute: H 2 jj = k H jk H kj only for j = i and j nn of i shown that these solutions allow a much faster and more accurate evaluation of the relevant quantities in comparison to numerical integration.We have worked out the solutions for the case of the FMA, which allows additional simplifications and explicit analytical expressions for the integrals in terms of the four CFCs, as given below. It is convenient to apply the following change of variable: by which Eq. ( 10) transforms into where In fact, in the transformed CF expansion, a iλ 2 = 0 and b iλ 2 = 1.Hence, by this variable change, consisting of a scaling and a shift of the energy, we have obtained a simplified CF expansion with only two parameters, a iλ 1 and b iλ 1 .The Eq. ( 9) to find E F,i can be rewritten in terms of the transformed integral as Solving Eq. ( 13) for E F,i (and E F,i ) with the analytical expressions for I iλ given below, has to be done numerically with an appropriate method, such as Newton-Raphson and/or bisection.Once we have E F,i , the cohesive energy for atom i can be determined by the energy integral (4), which after transformation becomes where So to find E C,i we have to perform the integrals I iλ (E F,iλ ) and I E,iλ (E F,iλ ), expressed in terms of a iλ 1 , b iλ 1 , and E F,iλ .In the further description of this problem we drop the prime indexes and superscript/subscript iλ for convenience. Solving the quadratic equation for , the integrand of Eq. ( 12) can be rewritten as Ĝ where , the signs ∓ corresponding to the two roots ± = (z ± √ z 2 − 1)/2.For real z = E, Eq. 
( 16) can be worked out to where S = i for |E| 2, S = −1 for E < −2, and S = 1 for E > 2, the signs of S being chosen such that the LDOS is always positive.For energies within the interval −2 E 2, Ĝ(E) has a continuous imaginary part, giving rise to an energy band.Other contributions to the LDOS may come from poles on the real axis giving rise to Dirac peaks.Normally, Dirac peaks only occur for strongly distorted local configurations. One should be careful with the interpretation of the details of the LDOS, like the Dirac peaks (normally indicating singular states), since the LDOS within the TBFMA is only an approximation.Fortunately, integrated quantities like the total energy are not so sensitive for these details.From Eq. ( 16) it readily follows that any pole z i is a solution of the equation which immediately shows that for a real pole z i = E i it holds that |E i | 2; that is, real poles are at the edge or outside the band.However, a real root E i of Eq. ( 18) is not always a pole.As Eq. ( 18) was obtained after multiplying numerator and denominator of Eq. ( 17) with . It also follows from Eq. ( 18) that for any value of a 1 , there is always a b 1 value for which there is a root E i = −2 at the lower band edge and a b 1 value yielding a root E i = 2 at the upper band edge.Indeed, substitution of E i = −2 into Eq.( 18) yields b 1 = 2 + a 1 , whereas substitution of Moreover, a negative root E i of Eq. ( 18) is only a pole when b 1 > 2 + a 1 , whereas a positive E i is only a pole when b 1 > 2 − a 1 .The general, complex roots of Eq. ( 18) for b 1 = 1 are for i = 1,2, showing that the LDOS may only contain pole contributions (Dirac peaks) for b 1 > 1 − a 2 1 /4.The coefficient a 1 describes the asymmetry of the band.Indeed, for a 1 = 0, we have Im to E = 0.The coefficient b 1 controls the tendency to form a band gap.Examples of n(E) for symmetric and asymmetric bands are shown in Figs. 3 and 4, respectively.Figure 5 gives a graphical representation of the properties of n(E) as a function of b 1 for the symmetric and an asymmetric case with a 1 = −1.For b 1 < 2 − |a 1 | there are no Dirac peak contributions to the LDOS.For 2 − |a 1 | b 1 2 + |a 1 | the LDOS contains one Dirac peak and for b 1 > 2 + |a 1 |, it contains two Dirac peaks, as also shown in Fig. 6.The quantities W p1 and W p2 in Fig. 5 are the electronic weight factors (residues) of the poles.In the limit of very large b 1 , the band contribution vanishes and the LDOS consists of just two Dirac peaks corresponding to the two poles with W p1 + W p2 tending to one.This situation can occur for a dimer. Following Allan et al., 66 the band contribution I b to I = I b + I p can be found to be where and with t F = arcsin(E F /2), u F = tan(t F /2), and z i (i = 1,2) given by Eq. ( 19) and where u ± i are the roots of the equation u 2 − (4/z i )u + 1 = 0, reading 17) and the real poles E pi are indicated on the left vertical axis, whereas the corresponding weight factors W pi (residues) are given on right vertical axis.For a 1 = 0, both roots become poles for b The imaginary parts of both complex logarithms in Eq. ( 22) have to be taken within the interval [0, 2 π ). There are two cases where the evaluation of I b by the above equations becomes numerically unstable.The first case is when a root z i becomes very large due to a b 1 value close to one.Then, for the corresponding I b,i both the second and the third term on the right-hand side of Eq. 
( 22) diverge, leading to inaccuracy in the sum of them, knowing that the sum remains finite since I b (E F ) 1 by construction.The second problematic case for similar reasons occurs when c i diverges for b 1 tending 1 − a 2 1 /4.In MC simulation, especially at high temperature where the domain of accessible CFC values increases, these situations unavoidably occur so that one has to deal with it rigorously.We solved these difficulties in a practical way by bridging the parameter intervals, which cause troubles with linear interpolations.For example, for b 1 within the interval [b 1,min ,1], with b 1,min close to 1 we calculate I b,i as FIG. 6. (Color online) Domains in the parameter space spanned by the transformed CF coefficients a 1 and b 1 for which the LDOS contains 0, 1, and 2 Dirac peaks.In the domain where it contains 2 Dirac peaks, there is one at the left and one at the right side of the band.In the domain enclosed by the dashed line and the a 1 -axis the roots of Eq. ( 18) are complex. where I b,i (b 1,min ; E F ) and I b,i (1; E F ) are the band integrals for b 1 = b 1,min and b 1 = 1, respectively.Typically, a value of b 1,min = 0.975 is enough to avoid numerical problems.Strictly speaking, this interpolation introduces kinks in the energy curves, which would be a problem for molecular dynamics (MD) simulations requiring continuity of the derivative of the energy.However, for the MC implementation considered here it is not a problem.Moreover, in practice, within the mentioned small parameter intervals, the kinks are so weak that they cannot or can hardly be detected. The case b 1 = 1 (and a 1 = 0) is a special case, where the denominator of Ĝ(E) is a linear function of E and has one, real root equal to E 1 = (a 2 1 + 1)/a 1 , implying |E 1 | 2. For a 1 > 0, E 1 2 and is only a pole when a 1 > 1, whereas for a 1 < 0, E 1 −2 and is only a pole when a 1 < −1.In Eq. ( 20), now the most right-hand side contains only one term, c 1 I b,1 , instead of two with c 1 = −1/(2πa 1 ) and I b,1 given by Eq. ( 22) for i = 1 with z 1 = E 1 , u ± 1 from Eq. ( 23), and t F and u F as before. An even more special and rare case occurs when b 1 = 1 and a 1 = 0.In that case, The pole contribution, I p , to the integrated density of states, I = I b + I p , is given by where is Heaviside step function, N p (1 or 2) is the number of real poles, f ip ∈ [0,1] is a filling factor, E pi is the energy of the pole, and W pi its weight factor.For E F > E pi , the filling factor f pi = 1, but when E F = E pi only part of the pole level may be filled so that f pi 1 in that case.Unless b 1 = 1, W pi (i = 1,2) is given by whereas for b 1 = 1 and |a 1 | > 1, that is, a case with a single Dirac peak, W p1 is equal to Unless b 1 = 1 and a 1 = 0, the band and pole contributions to the energy integral I E = I E,b + I E,p are given by (29) with N r (1 or 2) the number of (complex) roots of Eq. 
( 18) and respectively.For b 1 = 1 and a 1 = 0, there is only a band contribution, which is equal to There are two other special cases which have not been considered so far.In nontransformed coefficients, these cases are b iλ 1 = 0 and b iλ 2 = 0, corresponding to a free atom and a dimer, respectively.In these two cases the transformation ( 11) is useless and impossible, respectively, and the cohesive energy should be calculated directly without this transformation.For b iλ 1 = 0, the contribution from band λ to the LDOS consists of just one Dirac peak due to a pole at for → 0, we find with f iλ ∈ [0,1] a filling factor as before.For b iλ 2 = 0 (and /2 and we find /(E + − E − ) and f ± iλ again filling factors. IV. NICKEL DROPLET DIFFUSION ON GRAPHENE Details of simulations.To investigate the size dependence of liquid cluster diffusion on graphene, a prototype crystalline membrane, we performed six simulations for clusters containing N = 19, 38, 92, 147, 276, and 405 Ni atoms, initially positioned in the middle of a 71.3 × 72.4 Å2 graphene sheet containing 1972 C atoms.Periodic boundary conditions were applied in both directions parallel to the sheet.In contrast to previous work, 62 the substrate was not taken static, but the MC displacement trials were applied randomly to both Ni and C atoms.A Metropolis acceptance rule was used.The temperature was taken equal to 2000 K, which is close to the Ni bulk melting temperature, T m = 2010 K, according to our model, but well above the melting temperatures of all clusters considered here. 67Indeed, during a first run at the given temperature melting of the clusters took place in all cases.Instead, the graphene substrate did not melt, in agreement with the recently estimated melting temperature of 4900 K for graphene. 68For each cluster size, the simulation consisted of 2.5 × 10 7 MC cycles.One cycle corresponds to, on average, one trial displacement per atom.In contrast to single-particle (self-)diffusion in a bulk phase, where the statistics is collected by averaging over all the particles, here we have just a single cluster and sufficient statistics has to be collected by running long simulations (see below). To investigate the growth process in terms of Eq. ( 1), we also performed a simulation of the growth of liquid Ni clusters on graphene at 2000 K, starting from an initial configuration with 400 Ni atoms randomly distributed on a graphene sheet of 123.0 × 123.5 Å2 containing 5800 carbon atoms. Ni-graphene adhesion.To obtain information on the adhesion of Ni with graphene according to our TBFMA model, we investigated the low-temperature energetics of several reference structures. For a monolayer of Ni on graphene, the adhesive energy is equal to −0.25 eV per Ni atom, the optimal geometry being that with all Ni atoms positioned above the centers of the hexagons of the graphene substrate.Adding more layers, forming a Ni slab with the (111)-face in contact with the graphene substrate, the adhesive energy reduces from −0.078 to −0.055 to −0.024 eV per Ni interface atom for two, three, and more than three layers, respectively.This weak Ni bulk-on-graphene adhesion, in spite of the almost perfect lattice matching of the Ni-(111) surface with the graphene substrate, is in agreement with DFT calculations. 
69,70For clusters with numbers of atoms ranging between 55 and 201 atoms, the cohesive energies were found to vary between −0.1 and −1.2 eV per Ni interface atom, depending on the shape of the cluster, the Ni-interface orientation, and the size of the clusters, the adhesion being stronger for small clusters. These results show that there is a weak to moderate chemical interaction between Ni and the graphene substrate.They are indicative for the nature and magnitude of the Ni-graphene interaction, although the adhesion for liquid clusters can be expected to be weaker than for solid clusters. Analysis of the simulations.Our MC "time" unit τ was chosen to be equal to 500 MC cycles.Assuming that MC "time" is proportional to real time for not-too-short time scales, the real (physical) time interval t per MC "time" unit τ is equal to t = D MC τ/D, where D MC τ is the mean squared center-of-mass displacement (MSD) of the cluster per MC "time" unit τ and D is the diffusion constant in real units.Then, for normal diffusion in 2D, Einstein's Brownian motion formula tells us that where R 2 (n) is the average MSD of the cluster after n MC "time" units, defined as where M n = M − n + 1, with M the total MC simulation "time" and where we defined We corrected for substrate motion defining where R cl and R gr represent the cluster and graphene center of mass positions, respectively.Typically, the "time" interval over which Eq. ( 34) can be verified reliably is much smaller than the total simulation "time" M due to the lack of statistics for large "times" within the interval [0,M].So, before plotting the MSD versus n, we first investigated the statistics by calculating the average MSD distribution function, which we formally define as for a given MC "time" n, where δ is the Kronecker δ function, and the prefactor 1/π normalizes the 2D space integral of ρ MC to one.For normal diffusion, ρ MC ( R 2 ; n) should correspond to the analytical solution of the diffusion equation ∂ρ/∂t = D∇ 2 r ρ(r) in 2D, which for the initial condition of a particle placed at the origin r = 0 at t = 0 is given by This solution, also known as the diffusion propagator, gives the probability that the particle has moved over a distance r within the time interval t.Hence, the statistics of our simulation can be checked by calculating ρ MC ( R 2 ; n) for a given MC "time" n and compare it to the analytical shape (A/π) exp(−A R 2 ) of Eq. (38).Typically, when n is a large fraction of the total simulation "time," M, statistics will be poor and ρ MC will not have converged to the analytical shape.Results.To check the statistics for our single cluster diffusion simulations, we plotted the MSD distribution functions ρ MC ( R 2 ; n) as a function of R 2 for a given, fixed MC "time" interval n.Examples are given in Fig. 7 for the cluster with 92 atoms for four different "times" n.The dashed lines in the graphs represent a best fit of the analytical form (A/π)exp(−A R 2 ) with only one fitting parameter A = 1/(4D).For small "time" intervals n, ρ MC ( R 2 ; n) follows closely the fit, indicating good statistics, while for the largest "time" interval it deviates considerably indicating poor statistics, due to the reduced number of contributions to the sum in Eq. (37). The results for the MSD distribution functions for various "time" intervals and the six clusters indicated that we can expect reliable values for the MSD as a function of n for "time" intervals up to (at least) n = 100, which is indeed confirmed by Fig. 8. 
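A minimal sketch of this statistics check is given below, assuming the cluster center-of-mass trajectory (already corrected for substrate motion as described above) is available as an array sampled once per MC "time" unit; the array names, the moment-based estimate of the single parameter A, and the synthetic random-walk example are illustrative choices, not the procedure of the paper.

```python
import numpy as np

# `traj` is assumed to be the cluster center-of-mass trajectory, corrected for
# substrate drift, sampled once per MC "time" unit (shape (M, 2)).

def msd(traj, n):
    """Average MSD after n MC "time" units, averaged over all available time origins."""
    d = traj[n:] - traj[:-n]
    return float(np.mean(np.sum(d * d, axis=1)))

def msd_distribution_check(traj, n, bins=40):
    """Histogram of Delta R^2(n) against the exponential shape implied by the 2D
    diffusion propagator, Eq. (38).  Note: histogramming in Delta R^2 gives the
    density A*exp(-A*Delta R^2); the paper's rho_MC carries the extra 1/pi
    because it is normalized as a density in the 2D plane."""
    r2 = np.sum((traj[n:] - traj[:-n]) ** 2, axis=1)
    hist, edges = np.histogram(r2, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    A = 1.0 / np.mean(r2)            # single fit parameter, <Delta R^2> = 1/A
    return centers, hist, A * np.exp(-A * centers)

# Self-contained example on a synthetic 2D random walk (not simulation data):
rng = np.random.default_rng(0)
traj = np.cumsum(0.05 * rng.standard_normal((50_000, 2)), axis=0)
centers, hist, model = msd_distribution_check(traj, 100)
for n in (10, 100, 5_000):
    print(n, msd(traj, n))           # grows ~linearly in n while n << M
```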
Apart from a relatively small initial "time" interval, Fig. 8 shows a linear relationship between R²(n) and n, allowing for a straightforward determination of the MSD per MC "time" unit, D_MC τ, for each cluster. Subsequently plotting D_MC τ as a function of N in a logarithmic plot (see inset of Fig. 8) shows a power law behavior D_MC τ ∝ N^(−α) with an exponent equal to α = 0.645, practically equal to the value 2/3 found in Ref. 24 for solid clusters with a lattice parameter incommensurate with respect to that of the substrate. The considerably larger exponent α = 1.3 found for a liquid cluster on a rigid, amorphous substrate in Ref. 62 might be attributed to the observed stick-and-roll diffusion mechanism. We visually checked for rolling by marking the atoms at one side of the cluster (the one with 92 Ni atoms) with a different color and following these atoms in successive MC snapshots. After only a few MC "time" intervals the marked atoms were distributed almost randomly throughout the cluster, while the cluster as a whole had hardly moved. This suggests that the diffusion of the clusters does not proceed by rolling in our case.

The difference in "time" scale between cluster diffusion and atomic self-diffusion inside the cluster is also demonstrated by the average MSD as a function of n for atomic diffusion, shown by the dotted line in Fig. 8. This self-diffusion curve was determined for the biggest cluster, with 405 Ni atoms, by initially selecting all atoms inside a spherical region around the center of the cluster and following the average MSD only for these atoms, to limit the effects of the cluster boundaries for some "time". Indeed, for large "time" scales we found that the MSD versus n curve starts to fluctuate around a constant value due to the finite cluster size, but for smaller "time" scales the linear MSD versus n behavior was recovered, as shown in Fig. 8, allowing for the determination of the atomic MSD per MC "time" unit, D_MC,at τ. As a result we find that, at 2000 K, the atomic self-diffusion is more than two orders of magnitude faster than the cluster diffusion for the cluster with 405 atoms. Using this ratio from our MC simulations and the literature value D_at = 7.0 × 10^−5 cm²/s for the atomic self-diffusion constant, 71 we obtain a cluster diffusion constant of D_405 = 5.7 × 10^−7 cm²/s, comparable to the experimentally found, large cluster diffusion constant of the order of 10^−8 cm²/s for nonepitaxially oriented gold and antimony clusters on graphite. 51,52 This large diffusion constant suggests a mechanism dominated by random motion of the whole cluster rather than by single-atom events, although the latter is present as well. We note that in the above analysis we tacitly made the assumption that the real time per MC "time" unit is the same for both diffusion processes; that is, we assumed that D_MC,at τ/D_at = D_MC τ/D, which for the present rough estimation is reasonable.
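The two numerical steps just described, the log-log power-law fit and the rough conversion to real units via the atomic self-diffusion reference, can be sketched as follows. All input values except the cluster sizes and the literature value of D_at are synthetic placeholders (the measured D_MC τ values are those of the inset of Fig. 8), so only the structure of the calculation is meant to be taken from it.

```python
import numpy as np

# (i) Power-law fit D_MC*tau ~ N^(-alpha) on a log-log scale.
N_atoms = np.array([19.0, 38.0, 92.0, 147.0, 276.0, 405.0])
D_mc_tau = 0.3 * N_atoms ** (-2.0 / 3.0)   # placeholder values, not simulation data

slope, _ = np.polyfit(np.log(N_atoms), np.log(D_mc_tau), 1)
alpha = -slope                              # the text quotes alpha = 0.645

# (ii) Rough conversion to real units, assuming the real time per MC "time"
# unit is the same for cluster diffusion and atomic self-diffusion:
#     D_cluster / D_at = (D_MC tau) / (D_MC,at tau)
D_at_real = 7.0e-5                          # cm^2/s, literature value cited in the text
D_mc_at_tau = 2.0e2 * D_mc_tau[-1]          # placeholder: ~2 orders of magnitude faster
D_405_real = D_at_real * D_mc_tau[-1] / D_mc_at_tau
print(alpha, D_405_real)                    # text: alpha ~ 0.645, D_405 ~ 5.7e-7 cm^2/s
```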
Our simulation of the growth process of liquid Ni clusters on graphene is illustrated in Fig. 9. The snapshots in panels (b) and (c) of this figure are separated by only a limited number of MC "time" units, during which the four clusters in the upper right corner have merged into two clusters, which suggests that cluster coalescence represents an important contribution to the growth process. We also see single Ni atoms, detached from the cluster, diffusing on the graphene surface, demonstrating the presence of Ostwald ripening. Note the apparently much larger mobility of the single atoms than that of the clusters by comparing again the images of Figs. 9(b) and 9(c). Finally, the evolution of the average cluster size, ⟨N(t)⟩, is shown in Fig. 9(d) and compared with a best fit of the form N_0 + K_CC t^(3/5) + K_OR t^(3β) from Eq. (1) with α = 2/3 (dashed line). This good agreement could only be obtained by assuming a value for K_OR much smaller than that for K_CC, which implies that the growth process is almost completely dominated by cluster coalescence for the "time" scales considered here.

V. SUMMARY, CONCLUSIONS

We have presented an analytical description of TBFMA. While the TBFMA model can be considered as a next step beyond the SMA model, it is, in fact, closer to the classical TB model, of which it contains the essential physics. Important implications and technical details are discussed, including, for example, an overview of the shapes of the local densities of states possible within the FMA and the analytical expressions for the relevant integrals, obtained by applying and elaborating the general solution in Ref. 66 to the FMA. These and several other important ingredients provide the basis for a very efficient implementation of the FMA model in a MC simulation code, which allows for a simulation speed approaching that of the simpler, SMA-based models to within one order of magnitude and which scales linearly with the system size.

Due to the possible discontinuities in the derivatives of the energy, mentioned in Sec. III B, constructing a rigorous MD implementation is not a straightforward task. We are currently working on an MD version of the model.

Our MC implementation of the TBFMA model allowed us to simulate the diffusion and growth of liquid Ni droplets on a graphene sheet. Despite the absence of a true time scale in MC simulation, an analysis of our simulations confirms that the MC "time," taken equal to the number of MC cycles, can be assumed to be proportional to the real time for not-too-short time scales, as has also been demonstrated previously. 45 Simulation of the single-droplet diffusion for different droplet sizes revealed a power law behavior of the diffusion constant, D_MC ∝ N^(−α) with α = 0.645, very close to the value α = 2/3 found earlier for solid clusters with an incommensurate matching to the substrate. 60 As in the latter case, the main diffusion mechanism is the random motion of the whole cluster, giving rise to a much faster diffusion (10^−7 cm²/s) than the diffusion dominated by single-atom events (10^−17 cm²/s). Our exponent α is different from the value α = 1.3 found for a liquid cluster on an amorphous, rigid surface, where the diffusion was shown to proceed via a stick-and-roll mechanism 62 not observed in our simulations. These facts are likely to explain the different exponents.
The exponent α = 2/3 for the size dependence of the diffusion constant gives rise to a contribution proportional to t^(3/5) in the growth of the average cluster size by coalescence in an ensemble of clusters on a 2D substrate. Our simulation of such a cluster growth process on a graphene sheet is well described by the law t^(3/5) and suggests that cluster coalescence is by far the dominant process for the "time" scale and system size considered here, which did not allow us to see the crossover to a regime where Ostwald ripening becomes the dominant process.

FIG. 1. (Color online) The bond energies of a CC bond between two threefold coordinated carbon atoms i and j depend significantly on the coordinations of the other neighbors, and thus on the second-nearest neighbors. In panel (a) these neighbors are saturated with coordination 4, giving rise to a double CC bond, whereas in panel (b) the environment is sp², like, for example, in graphene.

FIG. 2. (Color online) Schematic representation of all the atoms whose energies change after a displacement of the central atom i. The hopping pathways from i to a second neighbor k1 can be reused to compute the change in the fourth moment of the atom k1, as these are the only pathways for atom k1 which change after a displacement of the atom i. For atom k1 this includes only one pathway. For atom k2, we have drawn all pathways that change and that start and end on k2, which include four pathways in this case. All the possible changed pathways are automatically included in the algorithm presented in Table I.

FIG. 3. (Color online) Local density of states for a1 = 0 (symmetric case) and four values of b1.

FIG. 4. (Color online) Local density of states for a1 = −1 and four values of b1.

FIG. 5. (Color online) Graphical representation of the LDOS properties as a function of the parameter b1 for the symmetric case (a1 = 0, top graph) and a nonsymmetric case (a1 = −1, bottom graph). The energies of real roots E_ri of the denominator in Eq. (17) and the real poles E_pi are indicated on the left vertical axis, whereas the corresponding weight factors W_pi (residues) are given on the right vertical axis. For a1 = 0, both roots become poles for b1 > 2 − a1 = 2. For a1 = −1, E_r1 becomes a pole E_p1 for b1 > 2 − a1 = 3, whereas E_r2 becomes a pole E_p2 for b1 > 2 + a1 = 1.

FIG. 7. (Color online) The mean squared displacement distribution function from MC simulation, ρ_MC(ΔR²) (solid lines), and the best fitting analytical solution (38) (dashed lines) for different "time" intervals n, as indicated in the graphs. These results are for the Ni cluster with 92 atoms.

FIG. 8.
(Color online) The mean squared displacement (MSD) as a function of MC "time" from the MC simulations (red solid lines) for the six different clusters, as indicated by the total number of Ni atoms and the average number of atoms in the Ni cluster during the simulation (number in parentheses). We note that at the given temperature Ni atoms can detach from the cluster and eventually rejoin the cluster at a later "time." The dashed line gives the best linear fit. The dotted line gives the MSD for atomic self-diffusion inside the cluster with 405 atoms. The inset gives the MSD per MC "time" unit, D_MC τ, as a function of the average number of atoms inside the cluster, using logarithmic scales, obtained from the slopes in the main figure (solid diamonds) and from the best fit of ρ_MC(ΔR²) using the analytical solution (38) (open circles). The dashed line in the inset gives the best fitting power law, resulting in an exponent α = 0.645.

FIG. 9. (Color online) Snapshots of the simulation of the liquid droplet growth on graphene starting from randomly deposited Ni atoms (a). The snapshots (b) and (c), separated by only a limited number of MC "time" units, clearly show the occurrence of cluster coalescence. Graph (d) gives the average cluster size ⟨N(t)⟩ (in number of atoms) as a function of the MC "time" t_MC.
2018-12-13T15:12:00.218Z
2011-08-01T00:00:00.000
{ "year": 2011, "sha1": "af730c814501e18f1398952f3b04e014f2a0672e", "oa_license": "CCBYNC", "oa_url": "https://dspace.mit.edu/bitstream/1721.1/69609/2/Los-2011-Tight%20binding%20within%20the%20fourth%20moment%20approximation.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "af730c814501e18f1398952f3b04e014f2a0672e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
236308311
pes2o/s2orc
v3-fos-license
Food Estate Management as Global Food Crisis Prevention Through the Implementation of the BNI Tani Card Program at Pulang Pisau The soaring demand for food in the world demands that a country, including Indonesia through its government, can partially synergize together to prepare a strategy for food security (Aday, 2020) that is able to fight the issue of the Global Food Crisis (Pierre, 2020). The central and regional governments are jointly trying to overcome the issue of the global food crisis by boosting food production by developing food estates or food gardens. In fact, food estate is a grand design of economic development called the Master Plan for the Acceleration and Expansion of Indonesian Economic Development (MP3EI) 20112025. This food estate program was created to anticipate the food crisis as predicted by the World Food Agency (FAO) (Bhwana, 2020) making it a center for food agriculture for strategic logistics reserves for national defense (Laborde, 2020). The program will be one of the National Strategic Programs (PSN) 2020-2024 and is expected to be able to become one of the pillars supporting national food security, including contributing to economic stability, politics and national security (Issabella, 2020). The food estate itself in this study Abstract I. Introduction The soaring demand for food in the world demands that a country, including Indonesia through its government, can partially synergize together to prepare a strategy for food security (Aday, 2020) that is able to fight the issue of the Global Food Crisis (Pierre, 2020). The central and regional governments are jointly trying to overcome the issue of the global food crisis by boosting food production by developing food estates or food gardens. In fact, food estate is a grand design of economic development called the Master Plan for the Acceleration and Expansion of Indonesian Economic Development (MP3EI) 2011-2025. This food estate program was created to anticipate the food crisis as predicted by the World Food Agency (FAO) (Bhwana, 2020) making it a center for food agriculture for strategic logistics reserves for national defense (Laborde, 2020). The program will be one of the National Strategic Programs (PSN) 2020-2024 and is expected to be able to become one of the pillars supporting national food security, including contributing to economic stability, politics and national security (Issabella, 2020). The food estate itself in this study Abstract Needs realization of the food estate concept partially in strengthening food security in the agricultural sector cannot be done by the government and society, but also involves corporations. One form of corporate involvement in this research is a program from BNI in the form of farmer cards. The farmer card program is expected to provide efficiency for farmers so that they can receive the distribution of government support in the right amount, the right type, the right time, the right place, the right quality and the right price. This study aims to measure the implementation of the BNI farmer card implementation to provide a pattern of relationship to the potential development of the food estate concept in Pulang Pisau. implementation of BNI farmer cards in the Pulang Pisau community through indicators of understanding and compliance (X1), behavior and culture (X2), economic conditions (X3), policy issues (X4), facilities and infrastructure (X5), and stakeholder support (X6) for Realization food estate concept. 
The research design was carried out in a cross-sectional manner using a quantitative approach through smart PLS. The results show the value of the six factors measured through the implementation of the BNI farmer card, which later on this value will become a basis for sustainability and policy adjustments in the realization of food estate in the region itself through government, community and corporate cooperation in realizing food security against global food crisis. Keywords food estate; farmers card; food security; BNI will be set in Central Kalimantan, which is a program that is becoming a trend by empowering transmigrants. Komarayanti et al (2018) stated that Local fruits and vegetables contribute to food security in the region through optimizing the utilization of resources of local fruits and vegetables as a provider of food. The need for the partial realization of the food estate concept in strengthening food security in the agricultural sector cannot be done only by the central government and the community, but also by the participation of several SOEs as corporations that help realize government programs (Masudin, 2020). This research will focus on the corporate role of PT Bank Negara Indonesia Persero Tbk (BNI) as an agent of development which has an obligation and a role to succeed in the government's program in combating the Global Food Crisis (Laborde, 2020). One of the manifestations of this support is the realization of an integrated Food estate program with the BNI farmer card program. The BNI farmer card is an instrument which is a program to support the food estate concept issued by BNI as a corporation (Ashari, 2019). As a company with a great target in 2021 in national economic recovery during the Covid-19 pandemic, BNI plans a Food Security target in Indonesia which is one of the government's focuses and is one of the Nawacita programs through this farmer card program. At least more than 2.5 million farmer cards have been distributed in 2020, which aims to be efficient in the form of agricultural subsidies from the government for the community, through subsidies for fertilizers and agricultural needs (Chakim, 2019). Dianto et al (2020) In an increasingly advanced era like now it's not just money that is used as a means of payment but there are also ATM cards or debit cards and cards that are a symbol of lifestyle. The farmer card program is expected to provide efficiency for farmers to receive the right amount of government support, the right type, the right time, the right place, the right quality, and the right price. Therefore, every element in the realization and optimization of farmer cards through the government, BNI and the community must be a force and synergy that supports this vision. Previous research described several factors that could hinder the optimization of the implementation of the farmer card in the development of food estates, including public misunderstanding (Fahmi, 2020) and government socialization (Kurniawati, 2020). In fact, the implementation of the BNI farmer card implementation in the community can be measured in six (6) factors, including 1) understanding and compliance with regulations; 2) behavioral and socio-cultural factors of the Kendal community; 3) factors of community economic conditions; 4) the issue of fertilizer availability when a new policy emerges; 5) factors of facilities and infrastructure; and 6) stakeholder support factor. 
These six factors can measure the implementation of the BNI farmer card program in community groups. A farmer card program in it will be interpreted as an integrated program in the unity between the government, the community and companies or corporations in participating in providing synergies on the values of the food estate concept (Fahmi, 2020). Thus, in this study, the authors explain the acceptance of the implementation of the farmer card in the community which is a part of the acceptance and success of a systematic food estate program in a well-defined program, where each of these factors can contain an intervention. which allows speeding up or even slowing down Cai, 2020) effectiveness in achieving the realization of the food estate concept in Pulang Pisau. This can be translated into a conceptual framework as follows: Figure 1. Conceptual Framework for Implementing Farmer Cards Measured by 6 Indicators As for the previous description, the conceptual framework in this study will refer to the instrument to measure the implementation of the BNI farmer card (X) in the Pulang Pisau community, namely through indicators that will be translated into 6 assessment factors (X1-X6) to influence the realization of the food concept. estates (Y). Therefore, this study aims to measure the implementation of the BNI farmer card implementation to provide a relationship pattern to the potential development of the food estate concept in Pulang Pisau itself. The research hypothesis shows that there is an influence from the implementation of the BNI farmer card in the Pulang Pisau community through indicators of understanding and compliance (X1), behavior and culture (X2), economic conditions (X3), policy issues (X4), facilities and infrastructure (X5), and stakeholder support (X6) towards the realization of the food estate concept. Thus, the positive synergy of the factors measured through the implementation of the BNI farmer card is expected to have a foothold on sustainability and policy adjustments in the realization of food estate in the region itself through the cooperation of the government, communities and corporations. II. Research Methods The research design was cross-sectional using a quantitative approach. This research was conducted in Pulang Pisau Regency. The population in this study used farming communities who were willing to be contacted and then given online instruments using a form via the Google Form application (due to the Covid-19 pandemic). The time in distributing the form is limited to a period of 14 days to select the research sample. The sample is limited to farmers who have a BNI farmer card and are willing to become participants. Knowledge of the concept of food estate is also one of the screenings in determining the sample as a participant. Participants will scan questions in the form of: (1) are you registered and have a BNI farmer card?, (2) Are you willing to be a participant by answering 21 questions about farmer cards and the food estate program in Pulang Pisau Regency?. In the end the number of samples was found to be 115 participants who were willing to fill out the Gform, then the sample as participants was found to be 115 respondents or 89.1% of the total population. The number of samples was taken according to the number of samples in the PLS (Partial Least Squares) guidelines. To obtain the necessary data using a questionnaire. 
The way to measure the realization of the food estate concept, understanding and compliance, social and cultural, economic conditions, policy issues, facilities and infrastructure and stakeholder support is by using a questionnaire with a semantic differential scale, namely a scale to measure attitudes and others, but the form is not multiple choice. or checklist and arranged in a continuum line where positive answers are located on the right of the line, and negative answers are located on the left of the line, or vice versa. The data obtained by measuring the semantic differential scale is interval data and is used to measure certain attitudes or characteristics of a person. Example: Respondents can give answers in the range of positive to negative answers. This depends on the perception of the respondents being assessed. Respondents who gave an assessment of 5, it means that the measurement of the realization of the food estate concept in Pulang Pisau Regency is positive and vice versa. The measurement exposure is presented as follows; The data obtained from the questionnaire results were recapitulated using the Excel program with the CSV extension and then processed using the SmartPLS program. Data analysis uses two models, descriptive analysis and Structural Equation Model (SEM), where descriptive analysis model is used to quantify the value of understanding and compliance, social and cultural factors, economic conditions, policy issues, facilities and infrastructure and stakeholder support for the realization of the food concept. estate, as well as describing the description of the research variables based on the answers to each questionnaire by giving a score for each answer. In the analysis using the average value and the percentage of the respondent's answer score. III. Results and Discussion The research was carried out during the pandemic by observing health protocols, where information was extracted via telephone, chat and filling out forms. This becomes a limitation of research measurement. The focus of the research is the implementation of the use of farmer cards, where cards are issued by banks to farmers to be used in subsidized fertilizer redemption transactions through Electronic Data Capture machines at authorized retailers (BNI, 2020). In the implementation study, the acceptance of BNI farmer cards in the Pulang Pisau community was assessed by 6 factors, namely understanding and compliance, social and culture, economic conditions, policy issues, facilities and infrastructure and stakeholder support. These six factors will show an influence on the support for the realization of the food estate concept that has been implemented by the local government. This research will be able to become a basis for evaluating the role of corporations through the farmer card program to support the food estate concept launched by the government as a form of food security against the global food crisis. The study will include 115 participants who work in Pulang Pisau District. The assessment was filled out by respondents to assess the direct or indirect influence between understanding and compliance, economic, social and cultural conditions, policy issues, facilities and infrastructure and stakeholder support on the realization of the food estate concept in Pulang Pisau Regency. The characteristics of the respondents include age and status of migrants as part of the program. 
The answer characteristic categories per variable from 115 participants were then processed into an assessment of the range based on variable descriptive statistics, namely: a. Understanding and Compliance Variable (X1) Understanding and compliance variables in this study were measured through 15 statement items with an assessment of 1-5. So the score of the questionnaire ranged from 15-75 and the actual score ranged from 48-68. Figure 2. Histogram of Understanding and Compliance Score Frequency The frequency distribution of respondents' scores on the understanding and compliance variables is as follows: Based on table 2, it shows that the average value is> 0.5. Then the composite reliability value> 0.7. So it can be concluded that the indicators in the study are able to measure well. b. Social and Cultural Variables (X2) Social and cultural variables in this study were measured through 15 statement items with an assessment of 1-5. So the score of the questionnaire ranged from 15-75 and the actual score ranged from 23-43. Figure 3. Histogram of Social and Cultural Score Frequency The frequency distribution of respondents' answers to social and cultural variables is as follows: Based on table 4, it shows that reward has a very weak effect on student satisfaction, then satisfaction has a moderate effect on student loyalty c. Variable Economic Condition (X3) Variable Economic conditions in this study were measured through 15 items with an assessment of 1-5. So the score of the questionnaire ranged from 15-75 and the actual score ranged from 48-68. Figure 4. Histogram of Economic Condition Score Frequency The frequency distribution of respondents' answers to the variable Economic conditions is as follows: d. Policy Issue Variable (X4) Variable Policy issues in this study were measured through 15 statement items with an assessment of 1-5. So the score of the questionnaire ranged from 15-75 and the actual score ranged from 23-43. Figure 5. Histogram of Policy Issues Score Frequency The frequency distribution of respondents' scores on the policy issue variable is as follows: e. Variable Facilities and Infrastructure (X5) The variables of facilities and infrastructure in this study were measured through 15 statement items with an assessment of 1-5. So the score of the questionnaire ranges from 15-75 and the actual score ranges from 40-61 Figure 6. Histogram of the Score Frequency of Facilities and infrastructure The frequency distribution of respondents' answers to the variables of facilities and infrastructure is as follows: f. Stakeholder Support Variable (X6) Stakeholder support variables in this study were measured through 15 statement items with an assessment of 1-5. So the score of the questionnaire ranged from 15-75 and the actual score ranged from 23-43. Figure 7. Histogram of Stakeholder Support Score Frequency The frequency distribution of respondents' answers to the Stakeholder Support variable is as follows: g. Variable Realization of Food Estate Concept (Y) Variables Realization of the concept of food estate in this study was measured through 15 statement items with an assessment of 1-5. So the score of the questionnaire ranged from 15-75 and the actual score ranged from 46-67. Figure 8. 
Histogram of the Score Frequency Realization of the Food Estate Concept The frequency distribution of respondents' answers to the variable Realization of the food estate concept is as follows: The results of the Chi Square test on the variables of Realization of the food estate concept (Y), Understanding and compliance (X1), Social and culture (X2), Economic conditions (X3), Policy issues (X4), Facilities and infrastructure (X5), Stakeholder support ( X6) with a significance level of 5%, all are greater than 0.05. This shows that all these variables have no relationship with the characteristics of the respondents. Indicator validity can be measured by evaluating the results of cross loading for all variables shown as follows: An indicator is declared valid if it has the highest loading factor for the intended construct compared to the loading factor for other constructs. The table above shows that the loading factor value for (X1-1) -(X1-3) is the highest for the understanding and compliance variable compared to other variables, so that the understanding and compliance variable is able to predict the factor loading value (X1-1) to (X1-3). ) is higher than the other variables. The results of the analysis of data processing show that the construct used to form a research model, in the confirmatory factor analysis process, has met the criteria of goodness of fit that have been determined. The probability value in this analysis shows a value above the significance limit of 0.05. From the results of data processing above, it is also seen that each indicator or dimension forming the latent variable shows good results, namely with a high loading factor value where each indicator is greater than 0.5. With these results, it can be said that the indicators forming the latent variables of the construct of understanding and compliance, economic conditions, policy issues, facilities and infrastructure, stakeholder support and the realization of the food estate concept have shown good results. Another way to test disciminant validity is through the Square root of variance extracted (AVE) value. The expected value is above 0.50. Below is the AVE table: seen that all variables are declared valid because they provide an AVE value above 0.5. So it can be concluded that the evaluation of the measurement model has a good or valid discriminant validity. Another method to assess discriminant validity is to compare the value of the Square root of variance variance extracted (AVE) of each construct with the correlation between the construct and other constructs in the model, so it is said to have a good discriminant validity value. After being tested for validity and declared that the variables and indicators have been valid, the reliability test is carried out. The reliability test is carried out by looking at the composite reliability value from the indicator block that measures the composite reliability result construct which will show a satisfactory value if it is above 0.70. The results of the evaluation of the reliability of the outer model can be seen in the table by evaluating the value of Cronbach's Alpha and composite reliability. Here are the values: Based on the table above, it shows that all variables are declared reliable because the Cronbach's Alpha and Composite reliability values are above 0.70 so it can be said that the construct has good reliability. 
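For reference, the convergent-validity and reliability statistics used above (AVE and composite reliability, with the 0.50 and 0.70 thresholds cited) can be computed directly from standardized indicator loadings. The loadings in the sketch below are placeholders, not the loadings obtained in this study.

```python
import numpy as np

# Illustrative computation of AVE and composite reliability for one construct
# from standardized indicator loadings (placeholder values).
loadings = np.array([0.72, 0.81, 0.77, 0.69, 0.84])

ave = float(np.mean(loadings ** 2))                       # acceptable if > 0.50
sum_l = loadings.sum()
composite_reliability = float(sum_l ** 2 / (sum_l ** 2 + np.sum(1 - loadings ** 2)))  # > 0.70

print(round(ave, 3), round(composite_reliability, 3))     # ~0.59 and ~0.877 here
```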
Furthermore, the Inner Model test is carried out, testing the structural model is done by looking at the R-Square which is the Goodness-fit test model. The following is the result of measuring the R-Square value, which is also the value of the goodnees-fit model. Based on the table above, it can be seen that the value of r square is most dominant when the components of economic, social and cultural conditions, policy issues, facilities and infrastructure and stakeholder support affect the subject well being. The results of the significant evaluation of the inner model are arranged in the SmartPLS output below by evaluating the reflection of the T statistic value of the indicator on the variable. The table above states that the statistical T value reflected on the variable is mostly > 1.96, thus indicating the indicator block has a positive and significant effect on reflecting the variable. The results of this study found several significant findings to describe the relationship between variables, namely: The findings of this study are: 1. There is a direct and magnitude effect between understanding and compliance, on the realization of the food estate concept of 27.34%, an indirect effect of 0.51% and the T statistic value of 5.426 and significant at 5% alpha. 2. There is a direct and magnitude effect between economic conditions on the realization of the food estate concept of 9.07%, an indirect effect of 0.01% and the T statistic value of 0.356 and significant at 5% alpha 3. There is a direct influence and magnitude between social and culture on the realization of the food estate concept of 7.21 and the T statistic value of 5.651 and significant at 5% alpha. 4. There is a direct and magnitude influence between policy issues on the realization of the food estate concept of 40.4 and the T statistic value of 1.899 and significant at 5% alpha 5. There is a direct influence and magnitude between facilities and infrastructure on the realization of the food estate concept of 15.9 and the T statistic value of 4.255 and significant at 5% alpha. 6. There is a direct and magnitude influence between stakeholder support, on the realization of the food estate concept of 26.4 and the T statistic value of 5.691 and significant at 5% alpha. After analyzing the data, then hypothesis testing is carried out on these variables, where this testing method is carried out by bootstrapping. The statistical test used is the t test. Based on the table above, it can be seen that all variables have a t-statistic value greater than 1.96% and the variables Understanding and compliance, Social and culture and Stakeholder support for Subject well being appear as the largest values, so H0 is rejected because the T-value is The statistic is far above the critical value (1.96) so it is significant at 5%. The percentage of influence between variables will then be presented as follows: Based on the table, it states that the attributes of understanding and compliance have a direct and indirect effect on the realization of the food estate concept. The results of the coefficient test produce 3 main parameters that have a direct influence, namely 1) Understanding and compliance with the realization of the food estate concept shows that there is a direct effect of 32.4%, 2) Social and cultural influence on the realization of the food estate concept shows that there is a direct effect of 15, 3%. And 3) Stakeholder support for the realization of the food estate concept shows that there is a direct effect of 16%. 
The direct effect of understanding and compliance on the realization of the food estate concept is calculated by multiplying the path coefficient of understanding and compliance on the realization of the food estate concept by the latent variable, and the same applies to the calculation for the other variables. Taken together, the direct effects of the exogenous latent variables correspond to the R-square: the variables understanding and compliance, economic conditions, policy issues, social and culture, facilities and infrastructure, and stakeholder support sum to 32.4% + 10.6% + 15.3% + 13.4% + 10.6% + 16.0% = 98.3%. The indirect effect of understanding and compliance on the realization of the food estate concept is 0.27%, of economic conditions 0.62%, of social and culture 0.29%, of policy issues 0.26%, of facilities and infrastructure 0.14%, and of stakeholder support 0.12%.

IV. Conclusion

The test results show that the realization of the food estate concept through farmer card ownership for the community, a result of the BNI corporate cooperation in Pulang Pisau Regency, was influenced by understanding and compliance (32.4%), economic conditions (10.6%), policy issues (15.3%), social and culture (13.4%), facilities and infrastructure (13.4%), and stakeholder support (10.6%). Based on these findings, it can be concluded that the implementation of the farmer card, measured through these six indicators, relates to success and policy making in the realization of the food estate concept in Pulang Pisau Regency. The study is limited by the sample size, the available time, and the restrictions on face-to-face contact for collecting in-depth data during the pandemic, but it is able to identify and analyze the factors that can affect the potential realization of the food estate concept in Pulang Pisau Regency through the procurement of BNI farmer cards.
The incidence of hypoglycemia among insulin-treated patients with Type 1 or Type 2 diabetes: Bangladeshi cohort of international operations-hypoglycemia assessment tool study

Objectives: The objective of this study was to assess the incidence of hypoglycemia in patients with type 1 diabetes mellitus (T1DM) or type 2 diabetes mellitus (T2DM) in the Bangladeshi cohort of the International Operations-Hypoglycemia Assessment Tool study. Materials and Methods: Patients diagnosed with either T1DM or T2DM, aged ≥18 years, treated with insulin (any regimen) for >12 months, and who completed self-assessment questionnaires (SAQs) to record demography, treatment information, and hypoglycemia during the 6-month retrospective and 4-week prospective periods (a total of 7 months) were enrolled in the study. Results: A total of 1179 patients were enrolled and completed the SAQ1 (T1DM, n = 25; T2DM, n = 1154). Almost all patients (T1DM: 100.0% [95% confidence interval (CI): 86.3%, 100.0%] and T2DM: 97.0% [95% CI: 95.9%, 97.9%]) experienced at least 1 hypoglycemic event prospectively. The estimated rates of any and severe hypoglycemia were 26.6 (95% CI: 19.8, 35.0) and 14.1 (95% CI: 9.3, 20.4) events per patient-per year (PPY), respectively, for patients with T1DM and 18.3 (95% CI: 17.4, 19.2) and 12.1 (95% CI: 11.4, 12.9) events PPY, respectively, for patients with T2DM during the prospective period. At baseline, mean glycated hemoglobin (HbA1c) (±standard deviation) was 8.1 (±1.8%) for T1DM and 8.8 (±1.8%) for T2DM. Hypoglycemic rate was independent of HbA1c levels and types of insulin. Conclusions: This is the first patient dataset of self-reported hypoglycemia in Bangladesh; results confirm that hypoglycemia is underreported.

The IO-HAT study was a real-world, 6-month retrospective and 4-week prospective assessment of self-reported hypoglycemia using a two-part self-assessment questionnaire (SAQ1 and SAQ2) and a patient diary (PD) kept for 28 days [Figure 1], designed to assess the incidence of hypoglycemia in patients with diabetes mellitus (DM) treated with insulin (premix, short-acting, long-acting, or insulin pump) in Bangladesh, Colombia, Egypt, Indonesia, the Philippines, Singapore, South Africa, Turkey, and the United Arab Emirates. [8] In this subanalysis, data on hypoglycemia were collected from all patients in the Bangladeshi cohort of the IO-HAT study who responded to SAQ1. The patients were recruited across 28 sites in Bangladesh between October 30, 2014, and April 15, 2015. The study was approved by the BIRDEM Ethical Review Committee and carried out in accordance with Good Pharmaco-epidemiological Practice and the Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Patients. [9,10] The study materials were translated into the local language, and the data acquired were translated back into English for analysis.

Patients
Eligible Bangladeshi patients with T1DM or T2DM, ≥18 years of age at baseline, ambulatory, literate, and treated with insulin for >12 months, who had given informed consent to participate in the study, were included. The patients were enrolled from primary or secondary care centers in Bangladesh during the course of their routine scheduled clinical consultation with their health-care provider.
Study assessments
The SAQ1 was used to record baseline demographics, treatment information, hypoglycemia unawareness, perceptions of hypoglycemia, history of severe hypoglycemia over the last 6 months, and any or nocturnal hypoglycemia over the previous 4 weeks leading up to baseline study entry, whereas SAQ2 was used to record the history of severe, any, or nocturnal hypoglycemia during the 4-week prospective period, based on symptoms or self-monitoring of blood glucose (SMBG) or both, as well as the effect of hypoglycemia on work/studies and the use of health system resources following hypoglycemia. The patients also recorded hypoglycemic events during the prospective period using the PD, which was used to assist the recall of events.

Study end points
The primary focus of the study was to assess the percentage of patients experiencing at least one hypoglycemic event during the 4-week prospective period among insulin-treated patients with T1DM or T2DM. The key secondary outcomes included the difference in the reported incidence of hypoglycemia between the 4-week retrospective (for any and nocturnal hypoglycemia) or 6-month retrospective (for severe hypoglycemia) period and the 4-week prospective period among insulin-treated patients with T1DM or T2DM, and the relationship between the incidence of hypoglycemia and the duration of diabetes in each quartile (1.0 to <7.0 years, 7.0 to <12.0 years, 12.0 to <18.0 years, and 18.0-60.0 years), glycated hemoglobin (HbA1c) at baseline (HbA1c level <7.0%, 7.0%-9.0%, and >9.0%), and insulin treatment. The use of health system resources following hypoglycemia and patient behaviors against hypoglycemia were also studied. Other secondary outcomes included patients' knowledge of hypoglycemia, hypoglycemic awareness, and the impact of hypoglycemia on work/studies. The use of health system resources following hypoglycemia was assessed by whether hypoglycemic events resulted in hospital admission, additional clinical appointments, or additional telephone contacts. Behavior against hypoglycemia was assessed on the following parameters: consulted a nurse/doctor, required any form of medical assistance, increased calorie intake, avoided physical exercise, reduced insulin dose, skipped insulin injections, and increased blood glucose monitoring. Patient knowledge of hypoglycemia was assessed by checking whether the patient's definition was consistent with the American Diabetes Association definition of hypoglycemia. [11] Hypoglycemic awareness was evaluated through the self-assessment question "Do you have symptoms when you have a low sugar level?", where the answers "always" and "usually" denoted normal awareness, "occasionally" denoted impaired awareness, and "never" denoted severely impaired awareness (unawareness). [12]

Classification of hypoglycemia
The following definitions of hypoglycemia were used to record the different types of hypoglycemia in SAQ1, SAQ2, and the PD. Severe hypoglycemia was defined as an event requiring assistance of another person to actively administer carbohydrate, glucagon, or other resuscitative actions. [11] Nocturnal hypoglycemia was defined as a hypoglycemic event occurring between midnight and the early morning.

Statistical analysis
For the primary end point, the percentage of patients who experienced at least one hypoglycemic episode during the prospective period among patients with insulin-treated DM was calculated together with a 95% confidence interval (CI).
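For the primary end point just described, the percentage of patients with at least one event is reported together with a 95% CI. A minimal sketch using an exact (Clopper-Pearson) binomial interval is shown below; it reproduces the kind of interval quoted for the T1DM group (100.0% [95% CI: 86.3%, 100.0%] for 25 of 25 patients), although whether this exact method is the one the study statisticians used is an assumption.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# T1DM cohort: all 25 of 25 patients reported at least one event prospectively.
k, n = 25, 25
lo, hi = clopper_pearson(k, n)
print(f"{100 * k / n:.1f}% (95% CI: {100 * lo:.1f}%, {100 * hi:.1f}%)")
# prints: 100.0% (95% CI: 86.3%, 100.0%)
```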
Hypoglycemic rates were reported as events PPY, calculated as the total number of events divided by the total follow-up time in patient-years, along with 95% CIs. The relationship between HbA1c at baseline and the log-transformed number of events for patients experiencing hypoglycemia was shown by a scatter plot with regression line and 95% CI. Statistical tests were two sided and regarded as exploratory, with the criterion for statistical significance set at P < 0.05. No adjustments were made for multiple comparisons. No imputation of missing data was performed, as the majority of analyses were descriptive in nature. Baseline refers to data collected using the SAQ1, and follow-up refers to data collected using the SAQ2 and, where applicable, the PD.

Patient characteristics
Descriptive baseline characteristics of the Bangladeshi cohort are provided in Table 1. [Figure 2]

For patients with T2DM, the rates of any and nocturnal hypoglycemia increased with the duration of diabetes [Figure 3a and b]. The rates of severe hypoglycemia increased slightly with the duration of diabetes in the retrospective period and were almost similar in all the quartiles in the prospective period [Figure 3c].

Hypoglycemia by glycated hemoglobin levels
No association was seen between the percentages of patients with any hypoglycemia and baseline HbA1c in the 4 weeks before the baseline period. All patients with T1DM experienced any hypoglycemia regardless of baseline HbA1c value in the 4 weeks after the baseline period. For patients with T2DM, 97.6%, 91.6%, and 93.1% of patients had experienced any hypoglycemia in the HbA1c <7.0%, HbA1c 7.0%-9.0%, and HbA1c >9.0% categories, respectively, in the 4 weeks after the baseline period. In addition, scatter plots with regression lines and 95% CIs for patients with T1DM or T2DM showed no association between HbA1c levels and hypoglycemic events (data not shown).

Hypoglycemia by insulin types
For patients with T1DM, overall, there were higher rates of severe hypoglycemia during the prospective period when compared with the retrospective period. Moreover, the reported rates of severe hypoglycemia were almost similar in the prospective period, irrespective of treatment [Figure 4c]. The overall rates of any and nocturnal hypoglycemia were higher in the retrospective period when compared with the prospective period [Figure 4a and b]. For patients with T2DM, there were higher reported rates of any and severe hypoglycemia during the prospective period when compared with the retrospective period. The reported rates of any and severe hypoglycemia were almost similar in the prospective period, irrespective of treatment [Figure 4a and c]. The overall rates of nocturnal hypoglycemia were higher in the retrospective period when compared with the prospective period [Figure 4b].

Use of health system resources
None of the hypoglycemic events in patients with T1DM resulted in hospital admission in either assessment period. Mostly, the impact of hypoglycemia on the medical system (hospital admissions, additional clinic appointments, and telephone contacts) in the retrospective period was slightly higher than that in the prospective period (both T1DM and T2DM) (data not shown).

Patient knowledge of hypoglycemia and hypoglycemic awareness
All (100.0%) patients with T1DM and 89.7% of patients with T2DM knew the overall definition of hypoglycemia before reading the SAQ1.
The majority of patients in both groups defined hypoglycemia on the basis of symptoms alone (T1DM: 76.0%; T2DM: 54.2%) [Table 2]. More patients with T1DM than with T2DM had hypoglycemic awareness (80.0% and 62.7%, respectively) [Table 2].

Impact of hypoglycemia on work/studies
Patients with T1DM (n = 14) and T2DM (n = 330) were studying or in full- or part-time employment. More patients with T1DM or T2DM reported an impact on work/study in the retrospective period when compared with the prospective period: absence from work or studies (T1DM/T2DM: 21.4%/7.1% vs. 14.8%/4.2%, respectively), late arrival to work or study (T1DM/T2DM: 35.7%/14.3% vs. 10.0%/2.1%, respectively), or early departure from work or study (T1DM/T2DM: 21.4%/0.0% vs. 14.5%/3.9%, respectively). Overall, 14 and 324 days were taken off work or study in the year prior to baseline, and 14 and 311 days were taken off work or study in the 4 weeks after baseline, by patients with T1DM and T2DM, respectively.

Discussion
This subanalysis of the IO-HAT study presents the first data set of self-reported hypoglycemia that studied hypoglycemic incidence and rates in insulin-treated patients with T1DM or T2DM in a Bangladeshi cohort. The study reported higher rates of any hypoglycemia in patients with T1DM, both in the retrospective and prospective periods (41.2 and 26.6 events PPY, respectively). The reported incidence of hypoglycemia is quite high in comparison to the hypoglycemic rates (150 episodes/100 patient-years) previously reported in European studies, [13][14][15] the PREDICTIVE trial, the Hypo Ana trial, and the UK Hypoglycemia study. [16][17][18][19] Patients with T2DM reported significantly increasing rates of any hypoglycemia from the retrospective period to the prospective period (P < 0.001). The reported rates were much higher than previously reported hypoglycemic rates from the PREDICTIVE trial, ACCORD, and the Veterans Affairs Diabetes Trial (383-1333/100 patient-years). [16,20,21] The study also reported a higher rate of severe hypoglycemia (7.0 and 14.1 events PPY, respectively) during the retrospective and the prospective periods in patients with T1DM, compared to 1.0-1.6 events PPY reported in the European study. [14,15] A higher rate of severe hypoglycemia was also reported for patients with T2DM in this cohort (2.4 and 12.1 events PPY, respectively) during the retrospective and the prospective periods. Again, the rate was much higher than the severe hypoglycemic rates reported (3-9/100 patient-years) in patients with T2DM on insulin in the UK Hypoglycemia study. [18,21] The reported rates were also aligned with the rates observed in the Global HAT study (T1DM: 4.9 events PPY; T2DM: 2.5 events PPY). [2] The study reported higher rates of nocturnal hypoglycemia for patients with T1DM or T2DM than earlier reported rates. [13][14][15][16][17][18] The lower prospective reporting in the case of nocturnal hypoglycemia in comparison to the retrospective period may be due to patients missing the entry of the nocturnal events in the PD at night. The other perspective is that the impact due to fear of nocturnal hypoglycemia may have driven the patients to remember these events, and hence patients were able to recall the nocturnal events accurately in the retrospective period. The study reported a higher percentage of patients (60% and 84.0%, respectively) with T1DM who experienced at least one severe hypoglycemic event during the retrospective and prospective periods.
These data are much higher than the annual prevalence of up to 30% of severe hypoglycemia in patients with T1DM reported in Northern European populations. [14,15] Moreover, there was a similarity in the frequency of severe hypoglycemia between patients with T1DM and T2DM (84.0% vs. 82.3%, respectively), as reported in an earlier hypoglycemic survey. [22] Overall, in this cohort, the percentages of patients with T1DM or T2DM who reported at least one hypoglycemic event were higher during the prospective period of the study than during the retrospective period. This could be due to the use of the PD, which served as a tool that assisted patients in recalling the events in the prospective period. In contrast, there could have been a recall bias while recollecting the retrospective events.

At baseline, glycemic control in terms of mean HbA1c in patients with T2DM in the Bangladeshi cohort of the IO-HAT study was poor, similar to the baseline HbA1c levels in patients with T2DM in the DiabCare Bangladesh 2008 study and the A1chieve study (8.8% vs. 8.6% vs. 10.0%, respectively). [23,24] However, we found that for patients with T1DM or T2DM, there was no association between the percentages of patients with any, severe, or nocturnal hypoglycemia and baseline HbA1c in the 4 weeks before and after the baseline period. Similar to the global HAT trial, the hypoglycemic rates had no significant association with HbA1c level in patients with either T1DM or T2DM. [2] The results showed that hypoglycemia is independent of HbA1c levels and was common at all HbA1c levels. [25] Patients should be given the confidence that tight glycemic control with the usage of insulin therapy will not increase the risk of hypoglycemia. One of the limitations of this study is recall bias. Educating patients to regularly document hypoglycemic events by themselves in a diary may be a possible remedy to lower recall bias. Patients with DM in the Bangladeshi cohort presented higher rates of hypoglycemia in spite of having a lower average duration of diabetes, a shorter duration of insulin use, and high HbA1c levels. While the status of health-care access in Bangladesh was not captured in the study, patients with DM had increased health-care costs in terms of increased hospital admissions, clinic appointments, and telephone contacts, suggesting that the impact of hypoglycemia on health care was extensive. A higher percentage of patients with T1DM or T2DM were absent from work or studies, arrived late to work/study, or left early from work or study. These results must be interpreted with caution, as a low number of patients with T1DM were present in this cohort. Overall, these results indicate that the incidence rates of hypoglycemia were high among patients with T1DM. However, the results may be skewed by the low number of patients with T1DM in this cohort. In patients with T2DM, significantly higher prospective reporting of hypoglycemia compared to the retrospective period indicated that patients had underreported hypoglycemia during the retrospective period. Though patients had good knowledge of hypoglycemia at baseline, it is observed that hypoglycemic rates are usually underestimated in Bangladesh on the basis of recall alone. The need of the hour is to educate DM patients in Bangladesh on hypoglycemia and encourage them to better document these events and perform regular SMBG.

Financial support and sponsorship
Financial support for the conduct of the research was provided by Novo Nordisk.
Novo Nordisk was involved in the study design; collection, analysis, and interpretation of data; and the decision to submit the article for publication. Statistical analysis was performed by Paraxel International.

Conflicts of interest
There are no conflicts of interest.
Impact of body composition in advanced hepatocellular carcinoma: A subanalysis of the SORAMIC trial Background: Body composition parameters have been reported to be prognostic factors in patients with oncologic diseases. However, the available data on patients with HCC are conflicting. The aim of this study was to assess the impact of body composition on survival in patients with HCC treated with sorafenib or selective internal radioembolization (SIRT) and sorafenib. Methods: This is an exploratory subanalysis of the prospective, randomized controlled SORAMIC trial. Within the palliative arm of the study, patients were selected if a baseline abdominal CT was available. A broad set of skeletal muscle and adipose tissue parameters were measured at the L3 level. Low skeletal muscle mass (LSMM) and density parameters were defined using published cutoffs. The parameters were correlated with overall survival. Results: Of 424 patients in the palliative study arm, 369 patients were included in the analysis. There were 192 patients in the combined sorafenib/SIRT and 177 patients in the sorafenib group. Median overall survival was 9.9 months for the entire cohort and 10.8 and 9.2 months for the SIRT/sorafenib and sorafenib groups, respectively. There was no relevant association of either body composition parameter with overall survival in either the overall cohort or in the SIRT/sorafenib or sorafenib subgroups. Conclusions: This subanalysis of the prospective SORAMIC trial does not suggest a relevant influence of body composition parameters of survival in patients with advanced HCC. Body composition parameters therefore do not serve in patient allocation in this palliative treatment cohort. INTRODUCTION HCC is the most common primary liver cancer and one of the most common causes of cancer-related mortality worldwide. [1] Main causes are alcohol-associated liver cirrhosis, increasingly NASH, as well as viral hepatitis B and C, with regional variations. [2] Currently, staging and treatment algorithms are based on the Barcelona Clinic Liver Cancer (BCLC) classification. For patients with advanced-stage HCC, the multikinase inhibitor sorafenib has been the standard of care for the past decade, with new treatment regimens added only in recent years. Locoregional therapies such as transarterial chemoembolization and selective internal radiation therapy (SIRT) are treatment options for patients with unresectable HCC and may be used in addition to systemic therapy. [3,4] The multicenter SORAMIC trial (EudraCT 2009-012576-27, NCT01126645) has compared the efficacy of sorafenib and SIRT with Yttrium-90 (90Y) resin microspheres to sorafenib alone, without identifying significant improvements in overall survival (OS). [5] In interventional procedures, patient selection remains pivotal. Multiple factors are known to influence survival in locoregional treatments. For patients treated with SIRT, the albumin-bilirubin ratio (ALBI) has been shown to be superior in predicting survival to the Child-Pugh classification. [6] The BCLC criteria themselves, while suitable for treatment allocation, are limited in their capacity to predict treatment outcomes and are unable to assess functional capacity. [7] In addition, the patient's performance status is not considered in these criteria. In recent years, parameters of body composition such as skeletal muscle mass (SMM) and adipose tissue (AT) have emerged as possible biomarkers influencing clinical outcomes in patients with HCC. 
The use of CT-derived measurements of skeletal muscle and abdominal fat tissue allows quantification of different body composition parameters in routine clinical use. For SMM, measurements of paraspinal, abdominal wall, and psoas muscles are usually performed at the L3 level. [8] Published studies on the association between body composition parameters and OS in HCC have predominantly been conducted in Asia. Because of the scarcity of data, the influence of SMM and AT in patients with advanced HCC undergoing palliative locoregional therapies remains unclear. Most published studies in the palliative setting are of retrospective design and include only small patient numbers. The present study is a subanalysis of the SORAMIC clinical trial. Using prospectively collected data, we aimed to assess the influence of baseline body composition parameters on OS in both treatment arms, using skeletal muscle and AT-derived parameters. Patient selection This is an exploratory post hoc substudy of the SORAMIC trial, a prospective, randomized controlled, phase II trial conducted at 38 clinical sites in 12 countries in Europe and Turkey. [9] The present study was performed within the palliative part of SORAMIC, where patients were randomized to receive sorafenib monotherapy or SIRT and sorafenib. [9] In short, patients were eligible if they had preserved liver function (Child-Pugh ≤ B7), an Eastern Cooperative Oncology Group performance status (ECOG PS) ≤ 2, and unresectable tumors not eligible for curative treatment or transarterial chemoembolization. The procedural details have been reported elsewhere. [5] The study was approved by the local ethics committees. Study procedures were performed in accordance with the protocol and ethical principles that have their origin in the Declaration of Helsinki and the International Council for Harmonization-Good Clinical Practice. All patients provided written informed consent to participate in the study (ClinicalTrials.gov No. NCT01126645; EudraCT 2009-012576-27). Overall, there were 424 patients involved into the palliative part of SORAMIC. In 55 patients, no computed tomographic images within 30 days before the procedure were available, and they were excluded from the present analysis. Therefore, the final cohort comprised 369 patients. The sorafenib/SIRT treatment group comprised 192 patients, and the sorafenib group included 177 patients. There were 56 women (15.2%) and 313 men (84.8%), with a mean age of 67 ± 8.6 years, median age of 66 years, and age range from 31 to 85 years. Baseline patient characteristics are summarized in Table 1A. Image analysis For all patients, the last available CT scan at baseline before therapy was used. All measurements of body composition parameters were performed in a semiautomatic fashion on axial images at the level of the third lumbar vertebra (L3) with the freely available Software ImageJ (version 1.53, National Institute of Health, USA). The soft tissue window was used [45-250 Hounsfield Units (HU)]. Any necessary adjustments were made by an experienced radiologist (Alexey Surov), blinded to the clinical course of patients. Acquired body composition parameters included the following: total adipose tissue, visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), and intramuscular adipose tissue. The relative distribution of abdominal body fat was assessed by the VSR, which was calculated by dividing VAT by SAT. 
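The quantities just described (the L3 areas and the VSR), together with the height-normalized indices and cutoffs specified below (Prado et al. thresholds for LSMM, 100 cm² for high VAT/SAT, 1.1 for high VSR), can be combined in a small helper. The input areas are hypothetical; this is a sketch of the definitions used in the study, not the study's analysis code.

```python
def body_composition_indices(muscle_cm2, vat_cm2, sat_cm2, height_m, sex):
    """L3-based indices, VSR, and categorical flags as defined in the study."""
    smi = muscle_cm2 / height_m ** 2    # skeletal muscle index, cm^2/m^2
    vati = vat_cm2 / height_m ** 2      # visceral adipose tissue index
    sati = sat_cm2 / height_m ** 2      # subcutaneous adipose tissue index
    vsr = vat_cm2 / sat_cm2             # visceral-to-subcutaneous ratio
    lsmm_cutoff = 52.4 if sex == "male" else 38.5   # Prado et al. thresholds
    return {
        "SMI": smi, "VATI": vati, "SATI": sati, "VSR": vsr,
        "LSMM": smi < lsmm_cutoff,
        "high_VAT": vat_cm2 > 100.0,
        "high_SAT": sat_cm2 > 100.0,
        "high_VSR": vsr > 1.1,
    }

# Hypothetical patient: 150 cm^2 muscle, 120 cm^2 VAT, 180 cm^2 SAT, 1.75 m, male.
print(body_composition_indices(150.0, 120.0, 180.0, 1.75, "male"))
```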
Thresholds for attenuation measurements were −190 to +30 HU for fat tissue and −29 to +150 HU for muscle tissue. Skeletal muscle area was defined as the cross-sectional muscle area, including the quadratus lumborum, psoas, rectus abdominus, and erector spinae muscles, and the internal transverse and external oblique muscles. Measurements of fat and muscle tissue were normalized for patients' body height in meters squared to attain the following indices: VAT index (VATI), subcutaneous adipose tissue index (SATI), total adipose tissue index (TATI), and skeletal muscle index (SMI). Low skeletal muscle mass (LSMM) was defined as SMI <52.4 cm²/m² for males and <38.5 cm²/m² for females, using the thresholds defined by Prado et al. [10] High VAT and high SAT were defined as an area >100 cm². High VSR was defined as >1.1. In addition, the radiodensity of the analyzed body compartments was measured. Finally, fat-free mass (FFM) and fat mass (FM) were calculated using the following formulae: [11] FFM (kg) = 0.30 × (muscle L3 cross-sectional area) + 6.06.

To assess the effect of body composition values on clinical variables and OS, a univariable Cox regression model was used. The proportional hazards assumption was checked using graphical diagnostics based on the scaled Schoenfeld residuals using the function ggcoxzph in the survminer R package. No major departures were found. To detect nonlinearity, the martingale residuals against continuous covariates were used to assess the functional form. Visual inspection of the graphs using the R function ggcoxfunctional in the survminer R package did not indicate major violations of linearity. HRs are presented together with 95% CIs. The resulting p-values were interpreted in an exploratory sense. Collected data were evaluated by means of descriptive statistics (absolute and relative frequencies). Kaplan-Meier curves were used for survival analysis.

Pooled OS
Baseline body composition results are shown in Table 1B. Median OS in the overall group was 9.9 months. LSMM was present in 206 (55.8%) patients. There was no relevant difference in OS between groups when stratified by SMI, VAT, SAT, or VSR (shown in Figure 1). Male patients showed a slightly higher SAT HU than female patients (p = 0.007, Supplemental Table S3, http://links.lww.com/HC9/A297). No relevant differences in body composition parameters were observed for stage BCLC B and stage BCLC C patients (Supplemental Table S4, http://links.lww.com/HC9/A297) or between patients aged below 60 and above 60 years (Supplemental Table S5, http://links.lww.com/HC9/A297). Further analyses (Supplemental Table S2C, D, http://links.lww.com/HC9/A297) did not reveal a relevant association between body composition and OS.

OS in sorafenib group
Median OS in the sorafenib group was 9.2 months. The prevalence of LSMM was 52.0% (92/177 patients). Median OS for the low SMI group was 12.2 months (95% CI, 10.1; 14.3), and median OS for the normal SMI group was 11.1 months (95% CI, 8.9; 13.3; p = 0.478). There was no relevant difference in survival between the groups when stratified by body composition parameters (Figure 2).

OS in the SIRT/sorafenib group
Median OS in the SIRT/sorafenib group was 10.8 months. A total of 114 patients had LSMM (59.4%). No relevant differences in survival were found when stratified by body composition parameters (Figure 3).
No important association between either analyzed body composition parameter and OS was found.

DISCUSSION
Our study investigated the influence of different body composition parameters on outcome in patients with HCC undergoing either sorafenib alone or SIRT and sorafenib for advanced HCC, applying a comprehensive range of body composition parameters. Neither skeletal muscle nor AT parameters showed the capability to predict OS in our cohort. The results of our study suggest that the pattern of body composition is not a relevant factor impacting survival in our palliative cohort of patients with compensated cirrhosis. To the best of our knowledge, this is the first study evaluating the association between sarcopenia and OS in HCC in distinct palliative treatment arms. Sarcopenia is a complex syndrome that has been linked to adverse outcomes in oncologic and nononcologic diseases. In clinical routine, it can be measured on CT imaging by the proxy parameter LSMM. Sarcopenia is common in patients with cirrhosis and HCC, with a prevalence of around 40%, and has been linked to adverse outcomes. [12,[13][14][15][16] In a recent meta-analysis including patients with HCC, around 39% of patients were affected by LSMM, with associations with worse OS and lower recurrence-free survival. [12] A meta-analysis with patients with HCC found an association between LSMM and OS both in the curative as well as in the palliative setting. Treatments in the 2 included studies with palliative patients comprised systemic chemotherapy with sorafenib in one and intraarterial chemoembolization or radiofrequency ablation in the other study, with cohort sizes of 116 and 93 patients, respectively. [17] Studies focusing on patients with advanced HCC undergoing sorafenib therapy are still scarce, based on relatively small cohorts, and retrospective in nature, with the literature showing conflicting results regarding the influence of LSMM on OS. [15,18,19] For example, Nault and colleagues [18] and Labeur and colleagues [19] did not find an association between SMI and OS, whereas Imai et al [20] and Hiraoka and colleagues [15] found an influence of the SMI and PMI, respectively. A comprehensive overview of the current evidence can be found in Labeur et al. [19] Our study is the first to investigate the association between body composition and the combination of SIRT and sorafenib in HCC. The available data on the influence of body composition in patients treated with SIRT are sparse. High VAT density was significantly associated with increased mortality and more adverse events in a Canadian study with 101 patients. There was no influence of SMI or LSMM on survival. [7] The prevalence of LSMM was 56%, with similar median values for body composition parameters compared with our cohort. However, our cohorts vary significantly: the rate of alcohol-associated cirrhosis was only 14%, whereas it was 35% in our cohort. Moreover, the rate of BCLC stage C patients was 68% in our cohort and only 25% in the cohort by Ebadi and colleagues, potentially accounting for differences in outcome. Sarcopenia as defined by FFM area measured in MRI predicted increased mortality in 2 studies including patients undergoing Y90-SIRT. [21,22] However, with sample sizes of 82 and 56 patients, respectively, the analyzed cohorts were relatively small. In contrast to skeletal muscle, analysis of AT in patients with HCC was performed in only a few studies, and the results are heterogeneous. With regard to AT, Ohki et al.
have shown that baseline visceral fat area was an independent factor for recurrence in non-viral HCC after radiofrequency ablation. [23] A study with a Swiss and UK cohort of patients with HCC at different stages found an association between SAT density and OS. [24] In a study by Montano-Loza et al, [25] a high VATI was identified as a risk factor for HCC and HCC recurrence after liver transplantation. Parikh et al [26] showed that high VAT radiodensity was linked with shorter OS in patients with HCC undergoing transarterial chemoembolization. In patients receiving sorafenib, Nault et al [18] reported an association between VATI and OS in a small cohort of 52 patients. While screening for body composition is pivotal to improve patients' functional capacity, our study does not suggest that LSMM or AT measurements can serve treatment decisions in palliative treatment arms in advanced HCC. There are several possible explanations for our findings. First, given the indications for radioembolization and the study inclusion criteria, patients were suffering from advanced tumor stages and high tumor burden. Hepatic tumor burden, macrovascular invasion, and the presence of extrahepatic metastases are known factors for adverse outcomes in HCC. [27] Advanced-stage BCLC C patients are a heterogeneous patient group. Gianni et al [28] have shown significant differences in OS in patients with stage BCLC C when stratified by performance status and tumor characteristics. In a study with patients with HCC with extrahepatic spread under sorafenib therapy, liver function according to Child-Pugh class and microvascular invasion were identified as prognostic factors for OS. [29] Other prognostic factors associated with OS are new extrahepatic lesions and new vascular invasion. [30] All patients in our cohort had compensated cirrhosis. These factors may diminish the influence of body composition. Second, OS in our cohort may be too short to account for influences of LSMM or AT. It has been reported that in patients with aggressive tumor characteristics and short OS, body composition parameters may not have a relevant influence on OS. [31] Our cohort therefore does not prove that there is no association between body composition and clinical outcomes. Yet patients in the selected treatment arms may not benefit from physical exercise and improved nutrition in terms of longer OS. Beyond survival time, multimodal interventions may yet improve quality of life or other functional parameters that we have not studied in our analysis. Third, the prevalence of LSMM in our cohort is higher than reported in most studies with patients under sorafenib therapy. With the exception of the studies by Ebadi et al, Antonelli et al, and Labeur et al, the reported rate of LSMM ranges between 11% and 25%. [7,15,19,32,33] However, both Antonelli and colleagues and Labeur and colleagues, with a prevalence of sarcopenia of 49% and 52%, respectively, applied the cutoff values by Martin et al [34] to their cohorts, with an additional stratification according to BMI. Labeur and colleagues did not find an association between either single body composition parameter and OS. Ebadi used predefined cutoff values for patients with cirrhosis awaiting liver transplantation. For SMI, we applied the fixed cutoff values by Prado et al. [10] We believe these to be best validated in various studies across different diseases.
As the SMI has already been normalized by body height, we do not think an additional cutoff based on BMI is necessary. Sensitivity analysis may have provided different cohort-specific cutoff values at the cost of reproducibility. The SORAMIC trial included patients with liver-dominant disease, and patients with pulmonary metastases were excluded, potentially leading to bias in our analysis. A limitation is the exclusion of patients without a baseline abdominal CT scan, which might lead to selection bias. Strengths of our study are the large sample size and the prospectively collected data within a clinical trial. In conclusion, in this substudy of the multicentric SORAMIC trial, we did not find an association between body composition parameters and OS. Body composition parameters therefore do not serve in patient allocation in this palliative treatment cohort.

AUTHOR CONTRIBUTIONS
Alexey Surov: conception and design of the study; generation, collection, assembly, analysis, and/or interpretation of data; drafting or revision of the manuscript; and approval of the final version of the manuscript. Maximilian Thormann: conception and design of the study; generation, collection, assembly, analysis, and/or interpretation of data; drafting or revision of the manuscript; and approval of the final version of the manuscript. Osman Öcal, Kerstin Schütte, Christoph J. Zech, Christian Loewe, Otto van Delden, Vincent Vandecaveye, Chris Verslype, Bernhard Gebauer, Christian Sengel, Irene Bargellini, Roberto Iezzi, Thomas Berg, Heinz J. Klümpen, Julia Benckert, Antonio Gasbarrini, Holger Amthauer, Bruno Sangro, Peter Malfertheiner, and Mattes Hinnerichs: generation, collection, assembly, analysis, and/or interpretation of data and approval of the final version of the manuscript. Max Seidensticker: generation, collection, assembly, analysis, and/or interpretation of data; drafting or revision of the manuscript; and approval of the final version of the manuscript. Ricarda Seidensticker: generation, collection, assembly, analysis, and/or interpretation of data; drafting or revision of the manuscript; and approval of the final version of the manuscript. Jazan Omari: generation, collection, assembly, analysis, and/or interpretation of data; drafting or revision of the manuscript; and approval of the final version of the manuscript. Andreas Wienke: generation, collection, assembly, analysis, and/or interpretation of data; drafting or revision of the manuscript; and approval of the final version of the manuscript. Jens Ricke: generation, collection, assembly, analysis, and/or interpretation of data; drafting or revision of the manuscript; and approval of the final version of the manuscript. Maciej Pech: conception and design of the study; generation, collection, assembly, analysis, and/or interpretation of data; drafting or revision of the manuscript; and approval of the final version of the manuscript.

FUNDING INFORMATION
SORAMIC is an investigator-initiated trial sponsored by the University of Magdeburg. Financial support was granted by Sirtex Medical and Bayer Healthcare. Jens Ricke received grants from Sirtex and Bayer and personal fees from Sirtex and Bayer.

CONFLICTS OF INTEREST
Max Seidensticker advises and received grants from Bayer. He received grants from Sirtex.
Kerstin Schutte advises Bayer. Chris Verslype advises, is on the speakers' bureau, and received grants from Bayer. He consults for Ipsen and Roche. Bernhard Gebauer received grants from Sirtex. Thomas Berg advises and is on the speakers' bureau for Roche, Bayer, Eisai, and Sirtex. He is on the speakers' bureau for Ipsen. Antonio Gasbarrini consults for AbbVie, Alfasigma, Lion Health, Roche, Sanofi, and Takeda. Holger Amthauer consults and received grants from Sirtex. Bruno Sangro consults, advises, is on the speakers' bureau, and received grants from Bristol-Myers Squibb and Sirtex. He consults, advises, and is on the speakers' bureau for AstraZeneca, Bayer, Eisai, Eli Lilly, Incyte, Ipsen, Novartis, Roche, and Terumo. He consults and advises Boston Scientific. Peter Malfertheiner consults for Aboca and Bayer, advises Allergosan, is on the speakers' bureau for Biocodex and Malesci, and received grants from Menarini. Maciej Pech consults and is on the speakers' bureau for Sirtex. The remaining authors have no conflicts to report. DATA AVAILABILITY STATEMENT All data generated or analyzed during this study are included in this article. Further enquiries can be directed to the corresponding author.
Prediction of response to anti-EGFR antibody-based therapies by multigene sequencing in colorectal cancer patients Background The anti-epidermal growth factor receptor (EGFR) monoclonal antibodies (moAbs) cetuximab or panitumumab are administered to colorectal cancer (CRC) patients who harbor wild-type RAS proto-oncogenes. However, a percentage of patients do not respond to this treatment. In addition to mutations in the RAS genes, mutations in other genes, such as BRAF, PI3KCA, or PTEN, could be involved in the resistance to anti-EGFR moAb therapy. Methods In order to develop a comprehensive approach for the detection of mutations and to eventually identify other genes responsible for resistance to anti-EGFR moAbs, we investigated a panel of 21 genes by parallel sequencing on the Ion Torrent Personal Genome Machine platform. We sequenced 65 CRCs that were treated with cetuximab or panitumumab. Among these, 37 samples were responsive and 28 were resistant. Results We confirmed that mutations in EGFR-pathway genes (KRAS, NRAS, BRAF, PI3KCA) were relevant for conferring resistance to therapy and could predict response (p = 0.001). After exclusion of KRAS, NRAS, BRAF and PI3KCA combined mutations could still significantly associate to resistant phenotype (p = 0.045, by Fisher exact test). In addition, mutations in FBXW7 and SMAD4 were prevalent in cases that were non-responsive to anti-EGFR moAb. After we combined the mutations of all genes (excluding KRAS), the ability to predict response to therapy improved significantly (p = 0.002, by Fisher exact test). Conclusions The combination of mutations at KRAS and at the five gene panel demonstrates the usefulness and feasibility of multigene sequencing to assess response to anti-EGFR moAbs. The application of parallel sequencing technology in clinical practice, in addition to its innate ability to simultaneously examine the genetic status of several cancer genes, proved to be more accurate and sensitive than the presently in use traditional approaches. Electronic supplementary material The online version of this article (doi:10.1186/s12885-015-1752-5) contains supplementary material, which is available to authorized users. patients amenable to cetuximab treatment. The focused use of cetuximab against tumors harboring wild-type RAS improved its overall usefulness. However, 35-45 % of wild-type RAS cases still do not respond to this treatment. Additional studies have now indicated that other elements of the MAPK and PI3K pathways, such as BRAF, PI3KCA, or PTEN, may be involved [1,[3][4][5]. These findings led to updated guidelines for CRC treatments, which advocated the inclusion of the mutational status of both KRAS and NRAS genes and the consideration of BRAF mutations in wild-type RAS cancers [6]. High-throughput sequencing methods, thanks to their ability to analyze several genes in parallel, could represent a helpful support in detecting the numerous genetic changes implicated in anti-EGFR moAb resistance. With massive parallel sequencing, millions of fragments of DNA can be sequenced in the same reaction, allowing the acquisition of in-depth information that traditional Sanger sequencing cannot readily achieve. For this reason, the use of parallel sequencing technologies is rapidly expanding. In addition to instruments that can sequence full human genomes, "bench" sequencers with lower throughput-but reduced running costs and faster turnaround time-are becoming common. 
These bench sequencing systems are more apt when a relatively small number of genes need to be sequenced. Sample preparation and data analysis are compatible with barcoding, meaning that multiple samples can be labeled and loaded in the same sequencing assay, allowing consistent time and cost savings. For these reasons, in addition to simpler data analysis, this type of sequencer can be more easily accommodated in a clinical setting. In this study, we selected a group of 21 genes involved in CRC [1,7] to sequence 65 CRCs from patients treated with cetuximab or panitumumab by using the Ion Torrent Personal Genome Machine (PGM) platform. The study proved the usefulness of parallel sequencing, confirmed earlier reports about the genes involved in cetuximab resistance, and revealed a potentially important role for FBXW7 and SMAD4 mutations in conferring therapy resistance to anti-EGFR moAbs.

Clinical samples
Samples were obtained from 65 patients with histologically confirmed colorectal adenocarcinoma undergoing surgery at the Masaryk Memorial Cancer Institute (MMCI, Brno, Czech Republic) between 2004 and 2011. Patient age ranged from 31 to 81 years, with a mean of 58 years. The Ethics Committee of the Masaryk Memorial Cancer Institute approved the study protocol. Written informed consent was obtained from all patients. All participants included in the study were anonymized by using sample identifiers that could not be connected with any individual. Clinicopathological features of the patients are summarized in Table 1 and Additional file 1: Figure S1. At the time the samples were collected, the TheraScreen K-RAS Mutation Kit CE-IVD was in use. The test allowed analysis of the mutational status at codons 12 and 13 of KRAS only. According to the results of this test, all 65 samples carried wild-type KRAS, and patients were treated with cetuximab or panitumumab. At the time of first diagnosis, tumors of some patients were at stages I-III, but at the time of anti-EGFR moAb treatment, all patients were at stage IV. Patients were regularly followed up after beginning this treatment. End points of follow-up were death and progression of disease. Cetuximab response was assessed according to RECIST (Response Evaluation Criteria In Solid Tumors) criteria. Enrolled patients were divided into two groups: one group (responders) included patients with a complete response (CR; 100 % reduction of metastasis), a partial response (PR; >30 % reduction of metastasis), or stabilization of the disease (SD), whereas a second group (non-responders) included patients with progressive disease (PD).

Gene selection and primer design
The CRC gene panel was assembled by considering the 19 most frequently mutated genes in non-hypermutated CRCs [7]; to these, the EGFR and BRAF genes were added for their involvement in the EGFR pathway [1,[3][4][5]. Gene regions and the 584 primer pairs are listed in Additional file 2: Table S1.

Data analysis and variant identification
Sequencing data analysis was conducted by using Torrent Suite software v. 3.4 (Life Technologies). Briefly, low-quality reads were removed, adapter sequences trimmed, and alignment against a reference genome (hg19) performed by using the Torrent Mapping Alignment Program. The Variant Caller plugin was used to identify variations from the reference sequence.
To identify pathogenic variations, mutations that did not affect the protein coding regions (intronic, 3' and 5' UTR variations and silent exonic mutations) were filtered out; insertions and deletions belonging to homopolymeric regions were removed, because the sequencing error rate is high in these regions; and alterations found at a frequency lower than 15 % were excluded, based on the hypothesis that mutations at lower frequencies could only marginally affect tumor behavior and because this filtering step allowed removal of most of the variations derived from formalin fixation artifacts [9]. Remaining mutations were compared with data present in the public databases dbSNP [10], COSMIC [11], and cBIO [12] to search for known pathogenic mutations. Annotated non-pathogenic variations were excluded from the results, whereas remaining potentially pathogenic variations and mutations of unknown significance were retained. At the same time, the Annovar [13] and Mutation Assessor [14] algorithms were used to predict damaging or potentially damaging changes in tumor suppressor genes.

Statistical analysis
The association of gene mutations with anti-EGFR treatment resistance was evaluated by using a two-tailed Fisher's exact test. GraphPad Prism 5 was used to perform survival analysis.

Sanger sequencing
Sanger sequencing was performed according to standard procedure. Amplicons were prepared using the same primer pairs employed for library preparation and were sequenced using the following sequencing primers:

Results
Detection of anti-EGFR treatment-related genes through next-generation sequencing
The DNA of primary tumor lesions from 65 advanced CRCs treated with cetuximab or panitumumab (Table 1, Additional file 1: Figure S1) was investigated. Thirty-seven patients were responsive to therapy, whereas 28 displayed progression of disease. The two groups were balanced for age, sex, stage, and grade. A slight difference in tumor localization was present, with a prevalence of right colon cancers in the non-responder group (p = 0.04). All samples were negative for KRAS mutations according to the TheraScreen K-RAS Mutation Test. We investigated the coding sequences of 21 genes, selected according to the previously reported genes most frequently mutated in CRCs [3,5,7] or present in public cancer mutation databases, such as COSMIC [11] and cBioPortal [12] (Additional file 2: Table S1). Amplicon libraries and sequencing reactions were performed as described in the Methods. All designed gene segments (n = 584) were sequenced with an average coverage of 506 reads each. Variations were identified in comparison with a human nucleotide reference sequence (hg19) by the Variant Caller plugin (Torrent Suite v. 3.4). To identify potentially pathogenic variations, we filtered out some identified nucleotide changes, as indicated in the Materials and Methods. All of the remaining variations are listed in Additional file 3: Table S2. At least one mutation was detected in each sample, the only exception being sample ID_5032. This sample showed a complete response to anti-EGFR treatment. The most frequently mutated genes were TP53 and APC, which were mutated in 40 (62 %) and 37 samples (57 %), respectively. These results are in agreement with the published literature, which, besides confirming the importance of these two genes for CRC pathogenesis, validates the reliability of our sequencing results.
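The two-tailed Fisher's exact test named in the Statistical analysis section above can be reproduced with standard tools; using scipy here is an assumption (the authors do not name their software for this test), and the 2x2 counts are taken, for illustration, from the combined five-gene comparison reported further below in the Results (13 of 28 non-responders versus 4 of 37 responders carrying a mutation).

```python
from scipy.stats import fisher_exact

# Mutation in at least one of NRAS, BRAF, PIK3CA, FBXW7, SMAD4:
#                  mutated  wild-type
# non-responders       13        15
# responders            4        33
table = [[13, 15], [4, 33]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
# p comes out close to the p = 0.002 reported in the Results
```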
The mutation rate for the remaining genes was as follows: 17 % for KRAS (additional mutations, previously undetected by TheraScreen test), 14 % for CSMD3, 14 % for TCF7L2, 9 % for PIK3CA and FBXW7, and less for the other genes. No mutation was detected in SMAD2 (Fig. 1). Mutation rates for each gene in the responder and non-responder groups are shown in Fig. 1. The two genes with the highest mutation frequency, TP53 and APC, displayed mutations in both groups: TP53 had a higher mutation frequency in responders than in non-responders (70 % versus 50 %), but the difference was not significant (p = 0.125); APC showed a similar mutation frequency in both responders and non-responders (59 % versus 54 %). Despite initial analyses based on TheraScreen KRAS Mutation Test indicating that all tumors carried a wild-type KRAS gene, the subsequent next-generation sequencing (NGS) analysis revealed that 11 primary tumors harbored a mutation in this gene. In particular, we found the following variations: G12C, G12V, G13D, Q61H, Q61L, and A146T. All of these alterations were previously described as pathogenic for the KRAS gene and recorded in the COSMIC database. Eight of the 11 samples were found in patients who were resistant to anti-EGFR treatment, whereas the other three KRAS mutated samples belonged to the responder group. Of these three responsive tumors, one sample with a Q61H variation (detected in 54 % of reads) was from a patient with stable disease, and two samples harboring Q61H and G13D (detected in less than 18 % of reads) were from patients who showed a partial response to treatment. These additional, previously undetected mutations in the KRAS gene alone were significantly associated with the appearance of drug resistance (p = 0.045) ( Table 2 and Additional file 4: Table S3). Besides KRAS, the mutational status of BRAF, NRAS, and PIK3CA was already shown to correlate with cetuximab resistance. In this study, BRAF and PI3KCA genes displayed an imbalance (>2 fold), albeit not a statistically significant one, in mutations detected in the responder patients as compared with those in the non-responder patients ( Table 2). We detected five BRAF mutations (V600E and S616F), two NRAS (Q61R and G12D), and six PIK3CA variations (R38H, I391M, and H1047L). All of these variations were annotated in the COSMIC database. Considering a combination of these three genes, eight mutations belonged to tumors that did not respond to therapy, and three mutations were in samples from patients that showed either a partial response or stable disease (Additional file 3: Table S2). Notably, mutations in KRAS, BRAF, NRAS, and PIK3CA appeared to be mutually exclusive ( Fig. 1 and Additional file 1: Figure S2). Because these four genes are downstream effectors of the EGFRinduced pathways, they appear to be functionally significant in driving cetuximab or panitumumab resistance. Combined NRAS, BRAF and PIK3CA mutation frequency significantly correlated to anti-EGFR resistance phenotype (p = 0.045) ( Table 2). If additional KRAS mutations, found by NGS, were considered in addition to the three genes of the panel, the combined mutation frequency became highly significantly correlated with resistance to anti-EGFR therapy (p = 0.001) (Additional file 4: Table S3). To validate these findings, we confirmed the presence of mutations in KRAS, BRAF, and NRAS genes by the standard Sanger sequencing method ( Fig. 2 and Additional file 1: Figure S3). 
From the chromatograms, we noticed that mutant nucleotides were called by Sanger sequencing only in the case of high-frequency mutations (e.g., KRAS G12C, found in 63 % of reads of sample ID_5060), whereas in the vast majority of samples, mutant nucleotides could be observed by visual inspection of the chromatograms but were not called by the analysis software, as wild-type nucleotides were prevalent. These results confirm the better qualitative and quantitative accuracy of NGS data. Besides genes of the EGFR pathway, the other genes most frequently mutated in CRCs included CSMD3 (14 %), TCF7L2 (14 %), and FBXW7 (9 %) (Fig. 1). Whereas CSMD3 and TCF7L2 mutations were equally distributed between the two groups, mutations in FBXW7 were found almost exclusively in the non-responder group (Table 2). The FBXW7 mutations included a nucleotide insertion involving amino acid 481 (A481fs) in 15 % of reads; a nonsense mutation R479* (COSM206697) in 57 % of reads; two missense variations (D399Y and N81S) in 19 and 57 % of reads, respectively; the variation R505C in 24 % of reads (COSM22975); and the missense mutation S582L (COSM22979), found in 21 % of reads. With the exception of N81S, which is a variation of unknown clinical significance annotated in the dbSNP database (rs139738471), all of the other variations occur inside WD-40 domains (also known as WD or beta-transducin repeats) (Fig. 3), which are responsible for protein-protein interactions. These five mutations are predicted to have significant effects on protein stability, suggesting that they interfere with the production of a functional FBXW7 protein. As mentioned earlier, the only responder sample with a FBXW7 mutation (ID_5428, with mutation S582L in 21 % of reads) belonged to the stable disease group. If samples from the borderline stable disease group are not considered, the statistical association between FBXW7 mutations and resistance to cetuximab becomes significant (p = 0.05). Moreover, if the FBXW7 mutations are considered together with BRAF, NRAS, and PIK3CA alterations, the significance of the association with the chemoresponse is further increased (p = 0.016) (Table 2). Mutations in the SMAD4 gene also exhibited an imbalance (5.3-fold change) between the non-responder and the responder patients: four mutations were found in samples from the non-responders and one mutation was found in a patient with a partial response to anti-EGFR therapy (Fig. 1 and Additional file 1: Figure S2). As for all of the other individual genes, with the exception of KRAS, the difference was not statistically significant (p = 0.16) (Table 2). However, if all of the genes with imbalanced mutations (excluding KRAS) are considered in responders versus non-responders, 13 cases (46 %) exhibit a mutation in at least one of the five genes (NRAS, BRAF, PI3KCA, FBXW7, and SMAD4) among the non-responders and 4 (11 %) among the responders. This difference is statistically highly significant (p = 0.002) (Table 2). If the additional KRAS variations are included within the five-gene panel, significance strikingly improves (p = 0.0001) (Additional file 4: Table S3).

Discussion
Anti-EGFR therapy based on cetuximab or panitumumab moAbs is administered to treat advanced CRCs that carry a wild-type KRAS gene [1,[3][4][5]. Nonetheless, some patients do not respond to this therapy, as genes other than KRAS are involved in the resistance to anti-EGFR molecules.
Indeed, previous work has reported the involvement of mutations in genes such as BRAF, PIK3CA, and PTEN [1,[3][4][5]. All of these genes represent down-stream effectors or modulators of the EGFR pathway, thus establishing a rationale for the anti-EGFR moAb response. More recently, new guidelines for the use of cetuximab and panitumumab treatment in CRC patients supported the analysis of the mutational status of both KRAS and NRAS genes and BRAF in wild-type RAS cancers [6]. Samples analyzed in this study were antecedent to these guidelines and were evaluated only for the mutational status of KRAS (codons 12 and 13). Here, we performed a targeted resequencing of a group of genes previously reported as the most frequently mutated genes in non-hypermutated CRCs [7]: TP53, APC, KRAS, CSMD3, TCF7L2, PI3KCA, FBXW7, SOX9, SMAD4, PTPRD, GPC6, EDNRB, GNAS, AMER1, NRAS, KIAA1804, CTNNB1, ACVR1B, and SMAD2. Analysis of EGFR and BRAF were added for their known involvement in the EGFR signaling pathway. Using this 21-gene panel, we investigated 65 CRC samples from patients treated with anti-EGFR moAbs to uncover genes whose mutational status could be associated with differential sensitivity to therapy. This study proves the feasibility of highthroughput sequencing of several genes in large numbers of samples to get detailed information about the mutational status of analyzed genes and it highlights the better sensitivity of NGS technologies compared to traditional capillary sequencing. The most frequently mutated genes in our samples were TP53 and APC. Since these two genes have been previously described as the most frequently mutated in CRC [7,15], this finding validates the reliability of our sequencing results. Although a higher percentage of mutant TP53 was detected in responders, no significant correlation between the mutational status of these two genes and the resistance phenotype was found (p = 0.125). We also found that a number of genes, including KRAS (previously undetected mutations), BRAF, PI3KCA, FBXW7, and SMAD4, exhibited a higher frequency (>2-fold) of mutations in non-responders than in responders. Notably, although the samples were classified as wildtype KRAS by the TheraScreen KRAS Mutation Test, sequencing of primary lesions identified the presence of additional mutations in the KRAS gene in 11 samples. This discrepancy partially occurred because the Therascreen test consists of an allele-specific PCR able to detect seven mutations at codons 12 and 13 of the KRAS gene, whereas codons 61 and 146, whose clinical significance has now been proven, were not covered by the assay. However, five of the KRAS mutations were at codons 12 and 13. A similar underestimation of KRAS mutated samples using this assay was previously found by Dono and colleagues [16], and differences between NGS and routine clinical assays have been previously described [17]. Surprisingly, mutations in KRAS were also discovered in samples belonging to the responder group. Contrasting significance had already been reported for the KRAS-G13D mutation, found in sample ID_5443 [18]. Our results do not help to clarify the issue, since another G13D mutation was detected in a non-responsive patient (ID_5074). Contrasting data were also obtained for the Q61H mutation: two cases were found within the responders (ID_5064 and ID_5408) and one within the non-responders (ID_5454). An additional different mutation at codon 61 (Q61L in sample ID_5430) was found in non-responders. 
A Q61R mutation was also found to affect NRAS in a nonresponder case (ID_5035). These findings suggest that a complex picture emerges in deciphering the role of the RAS mutation in conferring resistance to moAbs against EGFR; in particular, the response of mutations at codons 13 or 61 may depend on the type of mutation and possibly on other tumor alterations that may affect individual susceptibility. With the exception of NRAS, alterations in genes involved in the EGFR pathway other than KRAS generally exhibited an imbalance, albeit at lower frequencies, between responder and non-responder patients. Overall, mutations in NRAS were found in 3 % of samples, in BRAF in 8 %, and in PIK3CA in 9 %. The detected mutation rates overlapped those reported by Smith and colleagues in a study performed on a large series of CRCs [19]. Our data confirmed that mutations in these genes were generally mutually exclusive. The combined mutations in NRAS, BRAF, and PIK3CA could predict resistance to anti-EGFR moAbs with a statistically significant value, as recently also highlighted by Ciardiello and colleagues [17]. The association of mutations in these three genes with the resistant phenotype had a p-value of 0.045. Although the newly identified KRAS mutations alone significantly associated with anti-EGFR resistant phenotype (p = 0.045), we excluded KRAS gene from correlation calculations because patients' cohort was initially selected for wt KRAS status, thus potentially producing a bias in KRAS mutation frequency. It should be highlighted that, if KRAS gene is added to the panel, prediction significance will strikingly increase (Additional file 4: Table S3)". Two other genes, the E3 ubiquitin protein ligase F-box and WD repeat domain containing 7 (FBXW7 or FBW7) and the SMAD family member 4 (SMAD4), exhibited an imbalance in mutations between responders and nonresponders. Mutations in FBXW7 were identified in six samples (9 %): five in the non-responders and one in the stable disease subgroup of the responders. No FBXW7 mutation was detected in patients with a partial or a complete response. FBXW7 mutations alone were associated with a resistant phenotype with a p-value of 0.077. However, by adding FBXW7 to the NRAS-BRAF-PIK3CA mutation panel, the significance of the panel became stronger (p-value of 0.016). The involvement of FBXW7 in resistance to traditional chemotherapies has been previously reported [20] (for a review, see [21]). Mutations in FBXW7 in human CRCs were previously found in 11-12 % of cases [7,15] and its low expression was shown to be correlated with a poor prognosis [22]. Downregulation of FBXW7 expression was also reported in other cancers [23,24] and leukemias [25][26][27]. F-box proteins constitute one of the subunits of the ubiquitin protein ligase complex and they function in the ubiquitin-mediated degradation of several cellular proteins. Genes belonging to the ubiquitin proteasome complex are often mutated in cancer, leading to reduced oncoprotein turnover [28]. In particular, FBXW7 is the component for substrate recognition [21] and, through its F-box domain, is able to bind and mediate the degradation of some known oncogenes, including cyclin E [29], c-Myc [30], c-Jun [31], c-Myb [32], Notch [31], and mTOR [33]. 
Since nearly all of the mutations we found in FBXW7 affected the WD-40 domains, which are responsible for protein-protein interactions, and they all appear to be inactivating mutations able to interfere with the production of a functional FBXW7 protein, the results of this work provide evidence for a potential role of inactivating mutations in conferring resistance to anti-EGFR moAbs. Importantly, previous mutational studies and animal models have shown that even monoallelic mutations in FBXW7 could be sufficient to promote tumorigenesis [34,35], suggesting that these mutations could dominantly affect the functionality of the ubiquitin proteasome complex. The mechanism through which mutations in FBXW7 could impair cetuximab or panitumumab efficacy requires further studies. It is possible that, since some of its proven targets are downstream effectors of EGFR, mutations of FBXW7 could impair their degradation and thus contribute to the resistant phenotype. One other gene that appeared potentially interesting is SMAD4. Like the mutations in FBXW7, mutations in SMAD4 appear to be unbalanced, with one mutation in a responder sample versus four mutations in resistant samples. Although not significant in itself (p-value of 0.16) in our sample cohort, combining mutations in SMAD4 with those in NRAS-BRAF-PIK3CA-FBXW7 further improved the significance of the panel (p-value of 0.002). Given the function of the SMAD4 protein, a possible involvement of a non-functional TGFβ pathway in conferring resistance to anti-EGFR moAbs might be suggested. Conclusions By sequencing a panel of 21 genes involved in CRC, we found that, besides the KRAS gene, whose previously undetected mutations reached statistical significance as an individual gene, the combined mutations of other genes belonging to the EGFR pathway (NRAS, BRAF, and PIK3CA) together with mutations in the FBXW7 and SMAD4 genes achieved a significant association with resistance to anti-EGFR therapy. These results indicate that mutations in KRAS, combined with the five-gene panel identified in this study, have a strong potential for predicting the response to anti-EGFR moAbs. This work supports the usefulness of NGS technology and multigene sequencing over traditional capillary sequencing for improving patient management in a clinical setting. Availability of supporting data Sequencing data supporting the results of this article are available in the ArrayExpress database [36], under accession number E-MTAB-3883; hyperlink to dataset: http://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-3883.
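The headline five-gene panel comparison above (13 of 28 non-responders versus 4 of 37 responders carrying a mutation in at least one of NRAS, BRAF, PIK3CA, FBXW7, or SMAD4; reported p = 0.002) can be cross-checked with a few lines of Python. The text does not state which test was used for this particular comparison, so the sketch below assumes a two-sided Fisher's exact test on the 2x2 contingency table; it is an illustrative check, not the authors' analysis pipeline.

```python
# Sanity check of the combined five-gene panel comparison reported above:
# 13 of 28 non-responders vs 4 of 37 responders carried a mutation in at least
# one of NRAS, BRAF, PIK3CA, FBXW7, or SMAD4. A two-sided Fisher's exact test
# on the 2x2 table is assumed here (the paper does not name the test), so the
# result is only an approximate cross-check against the reported p = 0.002.
from scipy.stats import fisher_exact

#                 mutated  wild-type
table = [[13, 28 - 13],   # non-responders
         [4,  37 - 4]]    # responders

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")
```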
2017-06-21T16:32:55.972Z
2015-10-27T00:00:00.000
{ "year": 2015, "sha1": "3f7110191bd4f1f6720dc9dfdb2ed16354923346", "oa_license": "CCBY", "oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-015-1752-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "10819b2bb2bbcb5f7ebbe3fc15cb69c3592f1b9d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
208212707
pes2o/s2orc
v3-fos-license
Association of IKZF1 SNPs in cold medicine-related Stevens–Johnson syndrome in Thailand Purpose Our meta-analysis of several ethnic groups (Japanese, Korean, Indian, Brazilian) revealed a significant genome-wide association between cold medicine-related SJS/TEN (CM-SJS/TEN) with severe ocular complications (SOC) and IKZF1 SNPs, suggesting that IKZF1 might be a potential marker for susceptibility to CM-SJS/TEN with SOC. In this study, we examined the association between CM-SJS/TEN with SOC and the IKZF1 SNPs in the Thai population. Methods 57 CM-SJS/TEN with SOC and 171 control samples were collected at Chulalongkorn University and Mahidol University. Genomic DNA samples were genotyped for the IKZF1 SNPs at Kyoto Prefectural University of Medicine in Japan using the TaqMan SNP genotyping assay. Results The four SNPs previously reported to be associated with CM-SJS/TEN with SOC in the Japanese were examined in the Thai samples. Although the number of Thai cases (n = 57) was small, a significant association between CM-SJS/TEN with SOC and IKZF1 SNPs was identified, including rs4917014 (T vs G, OR = 2.9, p = 0.0012, Pc = 0.0049), rs4917129 (T vs C, OR = 2.8, p = 0.0026, Pc = 0.010), and rs10276619 (G vs A, OR = 1.8, p = 0.012, Pc = 0.048). Conclusion In addition to the Japanese, Korean, and Indian populations, Thai cases with CM-SJS/TEN and SOC were significantly associated with IKZF1 SNPs. Together with our previous report of the critical role of IKZF1 in mucocutaneous inflammation, these results suggest that IKZF1 is important in the pathogenesis of CM-SJS/TEN with SOC. To the editor Stevens-Johnson syndrome (SJS) and its severe type, toxic epidermal necrolysis (TEN), are acute inflammatory vesiculobullous reactions of the skin and mucosa, including the ocular surface, oral cavity, and genitals. Severe ocular complications (SOC) appear in about half of SJS/TEN patients diagnosed by dermatologists [1]. Cold medicines (CM), including multi-ingredient cold medications and non-steroidal anti-inflammatory drugs (NSAIDs), were the main causative drugs of SJS/TEN with SOC [2]. In the acute stage, in addition to skin eruption and erosion, SJS/TEN with SOC patients manifest severe conjunctivitis with corneal and conjunctival erosion and pseudo-membranes. Despite healing of the skin lesions, in the chronic stage of SJS/TEN with SOC, ocular surface sequelae such as severe dry eye, symblepharon, trichiasis, scarring of the palpebral conjunctiva, and conjunctival invasion into the cornea may persist [3]. While the reported annual incidence of SJS/TEN is very rare (only 1-6/10^6 individuals), its mortality rate is high (3% for SJS and 27% for TEN) [4]. We previously reported that the IKZF1 gene was strongly associated with CM-SJS/TEN with SOC in Japanese patients [5]. In addition, a meta-analysis of several ethnic groups (Japanese, Korean, Indian, Brazilian) revealed a significant genome-wide association between CM-SJS/TEN with SOC and IKZF1, suggesting that IKZF1 might be a potential marker for susceptibility to CM-SJS/TEN with SOC [5]. In this study, we examined the association between Thai CM-SJS/TEN with SOC and the IKZF1 SNPs known to be associated with Japanese CM-SJS/TEN with SOC. The CM-SJS/TEN with SOC and control samples were collected at Chulalongkorn University (King Chulalongkorn Memorial Hospital) and Mahidol University (Ramathibodi Hospital and Siriraj Hospital).
Genomic DNA samples were genotyped for the IKZF1 SNPs at Kyoto Prefectural University of Medicine in Japan. The study was approved by the institutional review boards of all institutes. The protocol was explained to all participants and written informed consent was obtained before the start of the experimental procedures. All experimental processes complied with the principles set forth in the Declaration of Helsinki. The diagnostic criteria for SJS/TEN were based on a confirmed history of acute onset of high fever and skin eruption with at least two sites of serious mucocutaneous involvement, including the oral mucosa and the ocular surface [5]. Thai healthy volunteers were used as controls. CM was defined as a drug that patients took to relieve cold symptoms, including non-steroidal anti-inflammatory drugs (NSAIDs), acetaminophen, and other multi-ingredient cold medications [6]. We previously reported that, in Japanese SJS/TEN with SOC, acetaminophen was a main CM: 48% of the patients had taken acetaminophen before developing SJS/TEN with SOC [6]. In Thailand, paracetamol (which is equivalent to acetaminophen) might also be an important causative CM, because 20 of 57 (35%) patients had taken paracetamol before developing it. The patients were classified as having SOC if the following manifestations were detected: severe conjunctivitis, pseudomembrane, and epithelial defect on the ocular surface in the acute stage, and/or ocular sequelae such as dry eye, trichiasis, symblepharon, and conjunctival invasion into the cornea in the chronic stage [6]. Of the 57 CM-SJS/TEN with SOC cases, 23 were male and 34 were female; their age ranged from 6 to 73 years [median 42.3 ± 15.6 (SD) years]. The age at SJS/TEN onset ranged from 2 to 54 years (median 24.8 ± 13.8 years). The controls were 85 males and 86 females; their median age was 39.5 ± 14.3 years. Some of the CM-SJS/TEN patients and some of the controls had been included in our earlier studies [7]. DNA was extracted from subjects' whole peripheral blood using the PAXgene blood DNA kits (Qiagen, Hilden, Germany) or from saliva using Oragene DNA (Kyodou International, Kanagawa, Japan). The TaqMan SNP genotyping assay (Applied Biosystems, Foster City, CA) was used to genotype the IKZF1 SNPs as previously reported [5]. The chi-squared test was applied to two-by-two contingency tables for the allele frequency and for the dominant and recessive models. The four SNPs previously reported to be associated with CM-SJS/TEN with SOC in the Japanese were examined in the Thai samples. Although the number of Thai cases (n = 57) was small, we again found a significant association between CM-SJS/TEN with SOC and three IKZF1 SNPs, including rs4917014 (T vs G, OR = 2.9, p = 0.0012, Pc = 0.0049), rs4917129 (T vs C, OR = 2.8, p = 0.0026, Pc = 0.010), and rs10276619 (G vs A, OR = 1.8, p = 0.012, Pc = 0.048) (Table 1). Our previous meta-analysis in the Japanese, Korean, Indian, and Brazilian populations showed significant associations for three IKZF1 SNPs: rs4917014 (T vs G, OR = 2, p = 8.5 × 10^−11; equivalent to G (minor allele) vs T (major allele), OR = 0.5, in the previous paper), rs4917129 (T vs C, OR = 2, p = 8.0 × 10^−9; equivalent to C (minor allele) vs T (major allele), OR = 0.5, in the previous paper), and rs10276619 (G vs A, OR = 1.8, p = 4.3 × 10^−9) [5]. The present results in the Thai population are in concordance with our previous report.
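The allele-level statistics reported above (an odds ratio, a chi-squared p-value, and a corrected Pc) can be reproduced from a 2x2 allele-count table. The counts in the sketch below are purely hypothetical placeholders, since per-allele counts are not given in this text, and the correction is assumed to be a Bonferroni adjustment over the four SNPs tested; the code only illustrates the arithmetic.

```python
# Illustration of the allele-frequency test used above (chi-squared on a 2x2
# table, odds ratio, and a Bonferroni-style corrected p-value).
# NOTE: the allele counts below are hypothetical placeholders, NOT the study data.
from scipy.stats import chi2_contingency

cases_T, cases_G = 95, 19         # hypothetical counts: 57 cases x 2 alleles
controls_T, controls_G = 250, 92  # hypothetical counts: 171 controls x 2 alleles

table = [[cases_T, cases_G],
         [controls_T, controls_G]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
odds_ratio = (cases_T * controls_G) / (cases_G * controls_T)
p_corrected = min(1.0, p * 4)  # assumed Bonferroni correction for the 4 SNPs tested

print(f"OR = {odds_ratio:.2f}, chi2 p = {p:.4g}, Pc = {p_corrected:.4g}")
```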
Our functional analysis of SNPs of the IKZF1 gene revealed that the ratio of the splicing isoforms Ik2/Ik1 could be affected by the IKZF1 SNPs significantly associated with susceptibility to CM-SJS/TEN with SOC [5]. The quantity of the Ik2 isoform is increased in the disease-protective genotypes of IKZF1 (rs4917014 G/G and rs10276619 A/A) [5]. The Ik2 isoform, Ikaros 2, lacks DNA-binding ability and appears to act in a dominant-negative manner. It is therefore possible that the function of Ikaros, the protein encoded by IKZF1, is enhanced in CM-SJS/TEN with SOC [5]. Ikaros is a transcription factor that regulates numerous biological events. It was reported that Ikaros-null mice lack B-lineage cells, NK cells, and peripheral lymph node and fetal T cells; thus, Ikaros family members regulate important cell-fate decisions in the development of the adaptive immune system [8]. On the other hand, we have reported that the epithelium might contribute to the pathobiology of CM-SJS/TEN with SOC [9]. We therefore produced K5-Ikzf1-EGFP transgenic mice (Ikzf1 Tg) by introducing the Ik1 isoform into cells expressing keratin 5, which is expressed in epithelial tissues such as the epidermis and conjunctiva, and found that mucocutaneous inflammation was exacerbated in Ikzf1 Tg mice. They developed dermatitis, with some also developing blepharoconjunctivitis [10]. Histological analysis showed not only dermatitis but also tissue inflammation in the blepharoconjunctiva, tongue, and paronychia [10], similar to the findings in patients in the acute stage of SJS/TEN with SOC [9]. Our studies demonstrated that IKZF1 could play a critical role in maintaining mucocutaneous homeostasis [10] and suggested that it might be implicated in the aggravation of mucocutaneous inflammation seen in CM-SJS/TEN with SOC [10]. In addition to the Japanese, Korean, and Indian populations in our previous report [5], CM-SJS/TEN with SOC was significantly associated with IKZF1 SNPs in the Thai cases. Together with our previous report of the critical role of IKZF1 in mucocutaneous inflammation [8], these results suggest that IKZF1 is important in the pathogenesis of CM-SJS/TEN with SOC.
2019-11-22T16:20:07.831Z
2019-11-22T00:00:00.000
{ "year": 2019, "sha1": "0be8eacd1ac4e0f26323b93c13033f761b327b37", "oa_license": "CCBY", "oa_url": "https://ctajournal.biomedcentral.com/track/pdf/10.1186/s13601-019-0300-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0be8eacd1ac4e0f26323b93c13033f761b327b37", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
106991208
pes2o/s2orc
v3-fos-license
Accounting Framework to Measure the Environmental Costs and Disclosed in Industrials Companies—Case Study of Societe Cement Hamma Bouziane (SCHB) in Constantine Accounting, being an integrated information system, is not merely influenced by its environment, but also affects environment as well, due to its crucial role in generating the necessary information to decision-makers. It has two main pillars: measuring and declaring the costs that result from the activities of companies, especially industrial companies because they affect the environment. Therefore, the objective of this research is to investigate how well industrial companies are committed to measuring their environmental duties and declaring them in their balance sheets. This topic will be developed over two parts: scientific basis of environmental costs and accounting; environmental measuring and disclosure. Introduction The interaction with the environment in its various facets requires the contribution of different arenas from miscellaneous fields and scientific specialties-that is because the environmental studies often depend on the new fruitful results arrived at in such fields-this could patiently show the importance of the environment in tracing and evaluating the direction of scientific and technological developements, taking into account that the environment is the main source of the resources used by the industrial institutions in their productive processes. There are those who believe that the industrializing and developing processes have greatly, but negatively, contributed to environment pollution-that is due to the negligence of environment-proper considerations when planning out the industrial process. As a consequence, it caused the prodigal loss of primary resources as well as energy. In addition, it ensued from the so-called ambitions plans with regard to realizing the fast growth, together with the excess in intensity of the competition among the institutions functioning in similar sectors. The latter resulted in negative effects on the environment. The accountants encounter difficulty in measuring the charges ensued from these monetarily. The industrial institutions are held responsible for that, because they know the environmental costs. As a result of all that, it is mandatory that all the various institutions, particularly those functioning in environment sensitive industries, take into account all the environmental consideration, when evaluating the environmental revenue. This would require them to develop a professional method in measuring, so as to seriously care about the environment-proper matters which the accountancy possesses within its branches, a side relating to the environment and its issues. Therefore all that is executed within the framework of environmental costs account. Accounting aims at availing future effective information for decision-takers and the environmental policies-makers, and this would purposefully indicate all the environmental social costs of all the processes and activities targeting the protection of the environment from the direct and the indirect disadvantages. In doing so, they depend on two firm bases: measurement and the accounting disclosure about the environmental performance. Problems The problems are summed up in the attempt to know the extent to which the industrial institutions reached in the accounting measurement, their environmental effects, and their disclosure about them. 
Measuring environmental costs and disclosing them becomes a necessity, especially because of their large monetary amounts and the obligations arising from the pollution that industrial institutions cause in the environment. The problems can therefore be summed up in the following main question: To what extent do industrial institutions fulfil their obligation to measure their environmental costs and disclose them in their financial statements? Significance of Research The significance of this study stems from the growing interest in assuming social and environmental responsibility, with social returns taking precedence over purely monetary returns. In addition, the efficiency of an accounting system is measured by the availability of a subsystem that provides information about the environmental performance of institutions and its effects on society. The study is also important to institutions of different plans and orientations, helping them incorporate the social and environmental dimensions into future planning. That would help them achieve their aims, support society's continued development, safeguard their continuity, and make it possible to evaluate performance objectively and compare it with that of other institutions. Moreover, it would give them a competitive advantage, as information about environmental performance is of a quantitative and monetary nature and therefore directly affects the financial position of the institution and the result of its activity. Objectives of the Research This research aims to assess the state of accounting measurement and disclosure of environmental charges in industrial institutions. This is pursued through the following sub-objectives:  clarifying the nature of environmental costs, the ways of measuring them, and the requirements for their disclosure;  addressing the basics of environmental accounting;  analyzing the reasons why industrial institutions limit the accounting measurement of their environmental performance and its disclosure, and the most important obstacles that impede them from doing so. Research Methodology The study of the topic "Accounting framework to measure the environmental costs and disclosed in industrial enterprises", with its various aspects and the attempt to link them, leads to the use of two approaches: the analytical descriptive method, used in the theoretical part of this study because it relies on describing the phenomenon under study and summing up the most important results that can be reached; and a case study in the practical part. Scientific Basis of Environmental Costs and Accounting Environmental Costs-Definition and Classification Environmental costs-Definition. The business sector has realized that industrial enterprises must include environmental considerations and work constantly to improve environmental performance within a long-term environmental strategy, in order to ensure the survival of the institutions in the market. Accountants are faced with the problem of measuring the costs of pollution in monetary terms, which is expressed in environmental costs (Al Doussari, 2007, p. 18). Environmental costs are expenditures incurred to prevent, contain, or remove environmental contamination. Such costs are generally expensed.
However, only in the following cases, the company may elect to either expense or defer the costs: (1) the expenditures either extend the life or capacity of the asset or increase the property's safety; (2) the expenditures are made to get the property ready for sale; and (3) the expenditures prevent or lessen environmental contamination that may result from future activities of property owned (Retrieved from http://www.answers.com/topic/environmental-costs) According to USA Environmental Protection Agency, the definition of environmental cost depends on utilization of information in a company and the environmental costs can include conventional costs (raw materials and energy costs with the environmental relevance), potentially hidden costs (costs which are captured by accounting system but then lose their identity in overheads), contingent costs (costs in a future time-contingent liabilities), and image and relationship costs (Betianu, 2013, p. 125) Classification of environmental costs. The novelty of this type of costs differed researchers in the classification and a review of these views can divide environmental costs into the following (Al Sharairi & Al Awawdeh, 2011, p. 80). The division for sustainable development of the United Nations has proposed a definition of environmental costs that distinguishes four types of costs:  The first one is related to all the efforts made by organizations to reduce the environmental effects of their activities, by using "end-of-pipe" measures and technologies;  The second one is related to all activities made by organizations to prevent their environmental effects before the end of the production process, for example, by using cleaner technologies or by establishing environmental management systems;  The third and fourth types of cost are defined on the idea that anything that does not enter the product produced by a company is a non-product output, such as wastes, waste water, or lost energy, and that all costs associated to this non-product output are regarded as environmental costs. These include both the purchasing value of the materials and the production costs of producing the non-product output. The guidance document of International Federation of Accountants (IFAC) on Environmental Management Accounting (EMA) draws that distinction between "waste and emission control costs" and "prevention and other environmental management costs", which, together with research and development projects, helps to reduce the material costs of non-product output and thus increases co-efficiency. The environmental cost categories stated by United Nations are as follows.  Waste and emission treatment includes: depreciation for related equipment; maintenance and operating materials and services; related personnel; fees, taxes, charges; fines and penalties; insurance for environmental liabilities; and provisions for cleanup costs, remediation;  Prevention and environmental management includes: external services for environmental management; personnel for general environmental management activities; research and development; extra expenditure for cleaner technologies; and other environmental management costs;  Material purchase value of non-product output includes: raw materials; packaging; auxiliary materials; operating materials; energy; and water;  Processing costs of non-product output includes: labor costs and energy cost. The IFAC environmental cost categories are as follows. 
 Materials cost of product outputs includes the purchase costs of natural resources such as water and other materials that are converted into products, by products, and packaging;  Materials cost of non-product outputs includes the purchase (and sometimes processing) cost of energy, water and other materials that become non-product output (i.e., waste and emissions);  Waste and emission control cost includes: cost for handling, treatment and disposal of waste and emissions; remediation and compensation cost elated to environmental damage; and any control related regulatory compliance cost;  Prevention and other environmental management cost includes the cost of preventive environmental management activities such as cleaner production projects; cost for other environmental management activities such as environmental planning and systems, environmental measurement, environmental communication, and any other relevant activities;  Research and development cost includes the cost for research and development projects related to environmental issues;  Less tangible cost includes both internal and external costs related to less tangible issues. Examples include liability, future regulations, productivity, company image, stakeholder relations, and externalities. Also, the cost related to environment can be described as cost within internal management account, or external financial accounts. In this approach, internal environmental cost to the firm is composed of direct cost, indirect cost, and contingent cost. These typically include such things as remediation or restoration cost, waste management cost, or other compliance and environmental management cost. Internal cost can usually be estimated and allocated using the standard costing models that are available to the firm. Direct cost can be traced to a particular product, site, and type of pollution or pollution prevention program (e.g., waste management or remediation costs at a particular site). Indirect cost, such as environmental training, research and development, record keeping and reporting, is allocated to cost centers such as products and departments or activities. External cost is the cost of environmental damage external to the firm. These cost can be "monetized" (i.e., their monetary equivalent values can be assessed) by economic methods that determine the maximum amount that people would be willing to pay to avoid the damage, or the minimum amount of compensation, that they would accept to incur it (Betianu, 2013, pp. 125-126). Objectives of the Environmental Cost Environmental cost of several targets can be displayed as follows (El Fadal, Nour, & Aldogdji, 2002, p. 227). 
 It enables the environmental cost of institutions to study the negative impact of operational processes on the environment and related programs for the protection and budget for these programs, and their impact on profitability and the discovery of new ways to reduce the negative environmental impacts;  The inclusion of environmental cost in the annual reports contributes to the competent organs of the state to assist in the preparation;  Long-term plans for natural resources and environmental indicators reports of the various regions and the state needed to achieve the control over the elements of the pollution of the environment;  Better management of environmental cost must be reviewed periodically to reveal shortcomings in the accounting software used and enable organizations to measure revenue and environmental benefits;  When investing in shares of companies, environmental information makes supply decision-makers invest in areas with high efficiency in the fight against pollution and avoid not taking into account the cost of environmental pollution in the preparation of its financial statements;  Disclosure of the environmental costs of the institutions provides information on the nature of the activity, environmental legislation and discretionary capital expenditures and the actual implications for compliance with those regulations, and the associated cost, and their impact on the financial position, liquidity and equity returns. Concept of Environmental Cost Accounting Several developments have been got in various fields. The industrial sector was a part of them, as one of the sectors most closely to the environment and its issues. The industrial enterprises have faced many of these issues to develop their practices in maintain the ecological balance. So they have adopted integrated economic accounting that alternates the traditional ones in the context of environmental cost accounting (Roger, 2004, p. 13). Environmental accounting is generally perceived as accounting for cost, related to the environment (Reyes, 2002, p. 3). EMA, for internal use, is as following:  Cost control and management;  Correlate business and environmental performance;  Proper resource management;  Prioritization tool. Financial, for external release:  Governed by G.A.A.P.;  Financial reports to stakeholder, lenders, etc.;  Frequently, reporting liabilities financially relevant cost. Environmental Full Cost Accounting (EFCA) generally refers to the process of collecting and presenting information-about environmental, social, and economic costs and benefits/advantages (collectively known as the "triple bottom line") -for each proposed alternative when a decision is necessary. It is a conventional method of cost accounting that traces direct cost and allocates indirect cost. (Schaltegger & Burritt, 2000, p. 111) Importance of Environmental Accounting The environmental accounting is very important for the following reasons:  It can integrate business and environmental planning;  It can identify company priorities;  It can improve product and process design (LCA);  It can give more accurate cost-benefit evaluations;  It can be an important step in pollution prevention activities;  It can support budget planning and resource management (Mike, 2013, p. 
3);  There are demands for environmental excellence from various sectors;  Traditional accounting systems are primarily geared towards external reporting and disclosures;  Accurate information on environmental cost is a key tool for internal business decisions (Reyes, 2002, p. 7). Accounting Treatment of the Environmental Effects Environmental accounting is an evolving service developed to promote environmental initiatives and policies by including the costs and environmental benefits that result from institutions' activities; its importance is not limited to governments and environmental protection agencies, and a framework for environmental accounting is needed in economic institutions so that both the accountant and the decision maker can benefit from it. Institutions, especially industrial ones, carry out many activities in their attempts to reduce and control the effects of pollution, as well as to treat and remove its effects on the environment and society; whether these activities are voluntary or mandatory, institutions bear the cost of carrying them out, and adhering to them can result in income and benefits. Since accounting, as an information system, deals with the events, processes, and conditions resulting from the activities of economic institutions, a method must be established for determining environmental costs and tracing them through the accounting information systems of organizations. In this area, some studies have identified four ways to identify and trace environmental costs, namely (Dhahabi & Mouwafak, 2009, pp. 10-11):  through the traditional accounting system;  through the use of a cost accounting system;  through the use of activity-based costing (ABC) to link environment-related activities with the costs incurred because of them;  through the use of total cost estimation, especially in investment decisions. The success of the environmental impact assessment process requires the adoption of integrated economic accounting instead of traditional accounting. This new kind of accounting can be considered a method for assessing the environmental and social impact of economic projects. The method aims to provide current and forward-looking information to decision-makers and environmental policy-makers. Beyond providing information, the purpose is to identify all the environmental and social costs related to the activities undertaken by industrial companies to protect the environment from direct and indirect damage.
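As an illustration of the third approach listed above (activity-based costing applied to environmental costs), the short sketch below allocates an environmental overhead pool to products according to activity drivers. All figures, product names, and driver rates are hypothetical and serve only to show the mechanics of the allocation; they are not data from the case study.

```python
# Minimal activity-based costing (ABC) sketch for environmental overheads.
# All numbers and names below are hypothetical illustrations, not data from the study.

# Environmental cost pools and their activity drivers (annual totals).
cost_pools = {
    "waste_treatment": {"cost": 120_000.0, "driver": "tonnes_waste"},
    "emission_control": {"cost": 80_000.0, "driver": "machine_hours"},
    "environmental_monitoring": {"cost": 30_000.0, "driver": "inspections"},
}

# Driver consumption per product line (hypothetical).
products = {
    "cement_bulk":   {"tonnes_waste": 400, "machine_hours": 9_000, "inspections": 10},
    "cement_bagged": {"tonnes_waste": 100, "machine_hours": 3_000, "inspections": 14},
}

# Total driver volumes, then a rate per driver unit for each pool.
totals = {pool["driver"]: sum(p[pool["driver"]] for p in products.values())
          for pool in cost_pools.values()}
rates = {name: pool["cost"] / totals[pool["driver"]] for name, pool in cost_pools.items()}

# Allocate each pool to products in proportion to their driver consumption.
for product, drivers in products.items():
    allocated = sum(rates[name] * drivers[cost_pools[name]["driver"]]
                    for name in cost_pools)
    print(f"{product}: environmental overhead allocated = {allocated:,.0f}")
```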
(2) The accounting measurement of environmental processes can be defined as determining the values of all cost elements generated by industrial enterprises' commitment to their social and environmental responsibilities, whether this commitment is voluntary or required by law (Chhadha, 2010, p. 283). Accounting measurement of environmental assets and liabilities. The objective of the accounting measurement of environmental assets and liabilities is to establish the fundamentals for measuring both the positive and the negative environmental contribution. Accounting measurement of environmental assets. The accounting measurement of environmental assets reflects environmental expenditures whose benefits extend over more than one accounting period. In addition, the restrictions imposed on polluters to reduce their waste increase the costs placed on them, and measuring the costs of treatment is intended to:  provide cost information that helps determine how much cost is borne by the institution to address the pollution damage resulting from its activity;  link the level of treatment required to obtain approval with its cost;  determine the impact of treatment costs on cost prices in industrial enterprises. There are, however, many difficulties in achieving the objectives of measuring these costs, including the difficulty of obtaining information on the cost of the damage and on pollution treatment costs borne by individuals; these difficulties are due to the limited attention paid to the problem of pollution treatment, as well as to weak environmental awareness. Because of the difficulties that increase the role of personal judgment in the accounting evaluation of the effects of environmental processes on the assets and liabilities of industrial enterprises, and because of the absence of accounting standards to evaluate and measure such impacts, these processes generally affect the assets and liabilities of institutions. The most prominent effects on environmental assets include the following (Bouhafs, 2007, p. 120): (1) the accounting measurement of technological systems aimed at addressing environmental pollution emitted at the end of the production line; (2) the evaluation of reductions in fixed assets or inventory because of environmental impacts, resulting from environmentally damaged or obsolete inventory or from the loss of fixed assets due to environmental accidents. Accounting measurement of environmental liabilities. Environmental obligations are defined as the amounts of money to be paid by industrial companies in the future to repair the environmental damage caused by their activities. Problems of measurement and evaluation in environmental accounting. The accounting measurement of environmental processes has not received adequate attention with respect to this kind of measurement, owing to the unavailability of cost prices and market prices that would allow a financial accounting evaluation of such operations.
Damage to the environment also raises several questions related to the particular difficulties of accounting for the costs of the environmental performance of industrial enterprises, the foremost of which are:  the difficulty of determining the causal relationship between the offending act and the damage it caused;  the difficulty of determining, once and for all, what constitutes pollution;  the difficulty of containing damage to the environment;  the diversity of forms of environmental degradation;  the absence of a recognized accounting standard for the independent accounting treatment of ongoing environmental expenses, especially those that do not generate any direct cash return. Disclosure of Accounting for Environmental Performance Concept of environmental disclosure. Disclosure, in its comprehensive sense, means making information public rather than keeping it confidential, and this is reflected in accounting disclosure. It is the commitment to publish all the facts and information related to the activity of industrial companies that may influence investors' decisions. Environmental disclosure is the method or way in which companies inform the community, with its different constituencies, about their various environment-related activities through financial statements or reports, the latter being an appropriate tool for achieving this objective. The United States and Britain are among the countries most interested in subjecting the environmental performance of institutions to accounting disclosure; this interest is shown by the specialized professional organizations in the two countries, as well as by the securities commissions, which require disclosure of environmental impacts in terms of cost and yield, and by the accounting standards committees, such as that of England, which require the disclosure of environmental information and reporting by institutions on the costs and benefits resulting from those activities. In Australia and New Zealand, some studies have shown that only a few institutions have attempted to disclose their activities in the area of environmental disclosure (Tahar, 2011, pp. 447-450). Motives and mechanisms for the disclosure of environmental information. With the increased need of users of financial statements for the disclosure of environmental information, and to address the inadequacies of traditional disclosure, there was an urgent need to develop an accounting standard covering environmental disclosure. Accounting disclosure in its current form does not meet the need for information and data on the institution's social responsibility for protecting the environment; hence the urgent need to develop a disclosure standard in accounting thought that includes environmental disclosure, either as supplements to the traditional statements and reports or as independent statements and reports. This would increase the efficiency with which decision makers use the information and thus rationalize their decisions on the assessment of financial assets and economic performance, taking into account the environmental and economic responsibility of the institution. Voluntary environmental disclosures are based on a number of factors, summarized as follows (Ben Bouzian & Ben Dhab, 2012, p.
273): (1) working on building better relations between the institution and stakeholders, such as government agencies and inter stock and employees of the institution and the customers and suppliers and financiers and pressure groups, and the use of disclosure as a way to inform the community as a whole that the institution is voluntary disclosure of environmental information; (2) trying to improve the image of the institution within the community which is engaged by the activity, especially for organizations that have been heard for the damage caused by the occurrence of accidents or environmental disasters, which supports the trust and respect of society and individuals in institutions, thereby increasing demand for its products and the expansion of its investments, which is reflected in its impact on the end result of its activity and its money, and the value of it; (3) getting ready for the application of environmental laws and regulations that will require the disclosure of environmental information and are expected to be binding on all institutions; (4) using the disclosure as a means to reach a competitive position in the field of advanced enterprise activity and the preservation of its current location; (5) getting the tax treatment distinctive in terms of the exemption or reduction of taxes imposed on it, and the United States is one of the first countries interested in encouraging institutions to protect the environment; (6) reducing the cost of production because of material support, low-cost funding, or distinctive tax treatment leading to increase the size of the institution's activities. Preassigned institution uses its resources as efficiently as possible and at the same time protects the environment from the harmful effects of pollution to help them increase profits; (7) disclosure of environmental expenditures separately in the financial statements will allow measurement of the utility, such as helping investors to see clearly the policies adopted by the institution for the protection of the environment, and then rationalize their decisions concerning the institution. Conclusions This research, with its practical and theoretical sides, has ended with theoretical results: Pollution is one of the important environmental impacts which require a big part of research to control other factors of environmental impacts. The business sector has realized that institutions, especially those active in industries sensitive to the environment must include continuous environmental aspects that improve the environmental performance of the environmental strategy of long-term. This is to ensure the survival of industrial enterprises in the market. The resulting costs of this are included in what is known about the environmental costs. This latter is treated with a related accountancy framework and it is considered as one of the social accountancy components that aim at protecting renewable environmental and the nonrenewable sources from exhaustion and deterioration because of its being a complementary system for information in an exchangeable relationship with the environment. The effective role plays in decision-taking is realized via the availability of the accounting information relating to the environmental activities performed by the industrial institutions within their financial reporting, as it is considered as the main helpful tool in planning and decision-making and drawing the targeting policies of protection against the environmental effects. 
Accounting is then able to exercise its role effectively by relying on two main pillars: measurement and accounting disclosure. The aim of this study is to determine the extent to which industrial institutions are committed to measuring their environmental effects and disclosing them, through a case study of the cement company Hamma Bouziane (SCHB) in Constantine, an industry with a negative effect on the environment. This methodological framework helps to identify such costs in other industries and serves as a call for economists to highlight the importance of the environment in evaluating performance and results. Here are some conclusions and recommendations:  The accounting measurement of environmental effects and their disclosure in financial reporting are limited; in practice, environmental disclosure is restricted to the taxes and fines the institution pays and, as fixed assets, to the investments acquired to protect the environment;  There are many impediments that constrain the institution from disclosing its environmental performance in its accounts, the most prominent being the scarcity of training programs defining the requirements for disclosing environmental performance;  There is a lack of mandatory rules on disclosing environmental performance, together with difficulties in measuring environmental costs;  There is a lack of harmony between the accounting systems followed by the institution and the developments in the new social and economic environment, in particular the inefficiency of these systems in analyzing the components of environmental performance costs;  This research could not obtain all the information about the cost components in the factory and what each section accounts for, due to the lack of a developed analytical cost accounting system;  As for the results concerning the cost of treating industrial pollution, it is clear that the institution under study exerts considerable efforts to remove dust, which helps reduce the gap between the damage caused by this pollution and the investment directed at solving the problem. From these conclusions, this research arrives at a set of suggestions, the most important of which are:  There is a need for complementarity between the organizational factors and the accounting policies suitable for analyzing the elements of environmental performance costs, aggregating them, and relating them to the institution's activity cycle; the role of accounting measurement will then be shown through qualitative and quantitative environmental data and information relating to the environmental cost variables that influence the economic performance of the institution, and through disclosure of its role in protecting the environment and reducing the negative environmental effects of its activities;  An information bank should be created to make available the information needed to measure and treat the damage caused by pollution;  A suitable taxation system and policy for social and environmental accounting should be adopted;  An environmental information system should be established to improve environmental management, which is a principal condition for integrating environmental considerations into the various activities.
2019-04-11T13:15:46.187Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "5425563f4bc1289a81ec3d71029e217df1b9d700", "oa_license": "CCBYNC", "oa_url": "http://www.davidpublisher.com/Public/uploads/Contribute/550a74a00925c.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7462f0c15d49681662eabb02ed5f2a52a0214117", "s2fieldsofstudy": [ "Environmental Science", "Business" ], "extfieldsofstudy": [ "Engineering" ] }
226074271
pes2o/s2orc
v3-fos-license
Diagram of concrete dynamic deformation confined by CFRP jackets A review of the current state of the problem of describing the deformation of columns confined by CFRP jackets under dynamic loading is carried out. It is noted that elements under axial dynamic load have been insufficiently studied. The dynamic increase factors (DIF) for concrete and CFRP, obtained from the results of existing experiments, are substantiated. Using the principle of the invariance of the potential strain energy of concrete at the moment of its failure under regime loading, a formula is obtained for determining the DIF for the ultimate relative strain of unconfined concrete. Based on the assumption that concrete follows the same law of deformation under static and dynamic loading, a diagram of the dynamic deformation of concrete confined by CFRP jackets is obtained. The diagram is valid in the strain-rate range 10^-3 to 10^2 s^-1. A comparison is made between the static diagram and the dynamic diagram derived from it, and the main sections of the diagram are characterized. An increase in the strength and ductility of confined concrete at the initial stage of loading is obtained, while at stresses approaching the strength of confined concrete its ductility is somewhat reduced. It is shown that a significant increase in the bearing capacity of confined concrete begins at strain rates of 10 s^-1 or more. Introduction Currently, composite materials are most in demand for reinforcing reinforced concrete structures damaged as a result of accidents, which allows their original bearing capacity to be restored [1]. However, the method is also applicable to resisting accidental impacts, that is, strengthening carried out before an emergency occurs. Most often, such situations are accompanied by dynamic loads. The study of reinforced concrete deformation diagrams for the development of methods for calculating structures under dynamic effects is therefore quite an urgent task. When describing diagrams at high loading rates, the specific behavior not only of the individual materials but also of the system as a whole should be taken into account. There are many publications devoted to the behavior of elements strengthened with CFRP jackets at high strain rates. However, most of these works focus on emergency shock loads in the horizontal direction, for example during explosions and collisions with vehicles [2,3]. The technology of reinforcing columns with composite materials is also quite relevant in seismically hazardous areas, due to the low weight of the strengthening elements, which explains the ongoing work on the study of structures strengthened with CFRP jackets under seismic loading [4]. Nevertheless, few publications are devoted to the behavior of columns confined by CFRP jackets under axial dynamic loading, although such a loading scenario is quite common when performing building calculations for progressive collapse [5]. In the case of concrete confined by transverse steel reinforcement, this issue has been studied in sufficient detail [6,7]. The work [8] is devoted to the analysis of the behavior of elements strengthened with CFRP jackets under dynamic loading; a method for determining the bearing capacity of elements based on a nonlinear deformation model is developed, and the technique is consistent with experimental data. Thus, the development of a dynamic deformation diagram of concrete confined by CFRP jackets remains a rather urgent task.
In this paper, an attempt is made to describe such a diagram by modifying the corresponding deformation diagram under quasistatic loading. Quasistatic diagram To describe the deformation diagram of concrete confined by CFRP jackets, a number of dependencies have been proposed [9,10,11] and have received experimental confirmation. In this paper, the diagram obtained in the study [9] is taken as the basis. The diagram has two characteristic sections: a parabolic one, at the initial stage of concrete deformation before significant damage to its structure occurs, and a rectilinear one characterizing the work of concrete in a pseudoplastic state [12], when only the CFRP jacket resists the development of transverse deformations. The diagram is written as a system of equations, system (1), where f_R and f_E are the ultimate strength and the modulus of elasticity of the CFRP, respectively. The strains corresponding to the transition from the parabolic section of the diagram to the linear one are determined by a corresponding formula. In the absence of confinement, the first expression of system (1) describes the deformation of unconfined concrete in accordance with the diagram proposed in [13]. The value of the ultimate relative strain is taken as that proposed in [9]. To construct the deformation diagram of concrete confined by CFRP jackets under dynamic loading, we use the approach adopted in [6,14]. Suppose that the deformation of concrete strengthened with CFRP jackets under dynamic loading obeys the same laws as under static loading. In this case, the parameters describing the deformation diagram, f_co and ε_co, are modified depending on the strain rate by introducing the corresponding dynamic increase factors (DIF). To indicate the parameters of the diagram under dynamic loading, the superscript "d" is used. Dynamic increase factors of concrete The strength properties of concrete increase under dynamic load, while its plasticity decreases with increasing strain rate. The increased dynamic strength of concrete is associated with the appearance of inertial forces of viscous resistance, which restrain the development of transverse deformations. To describe the behavior of concrete under dynamic loads, many dependencies have been proposed [6,15,16,17]. As studies show, the DIF of concrete depends not only on the strain rate but also on the type of stress state [16] and on the concrete strength [15]. However, regarding the latter, researchers have no consensus. In the work [17] it is noted that, in view of the dimensionlessness of the DIF, results obtained on concrete of one class are applicable to others. Dynamic loads on building structures most often occur as a result of accidents, such as fires. In such cases, the DIF of concrete is a function not only of the strain rate but also of the temperature [18,19]. The DIF of concrete is determined by a corresponding expression; to describe it, we use the logarithmic dependence of [17], equation (11), which is valid over the strain-rate range considered here. Studies show that concrete deformation in the elastic stage does not depend on the loading rate; therefore, the initial moduli of elasticity under static and dynamic compression are most often taken equal to each other [15]. In addition, it was indicated in the work [16] that this type of deformation is also valid for the pseudoplastic stage, characterized by the intensive growth and opening of macrocracks.
However, the influence of the strain rate does manifest itself in the section of the diagram bounded by the strains corresponding to the lower and upper limits of crack formation. Taking this into account is of little practical interest, since it affects only the shape of the curved section of the confined-concrete diagram and not the ultimate stresses and strains. In this work the initial moduli of elasticity under static and dynamic loading are therefore taken equal to each other.
To determine the ultimate relative strains under dynamic loading, the postulate of invariance of the potential energy of concrete deformation at the moment of its failure is used, which was first proposed in [20]. It states that the work done on the concrete up to the moment of failure is constant regardless of the strain rate. It is worth noting that this approach was applied to concrete with indirect transverse steel reinforcement in [6]. The postulate is written in the form of equation (14). To describe the ascending branch of the deformation diagram of unconfined concrete, the first expression of system (1) is used, as noted earlier. Substituting it into (14), integrating and collecting like terms, we obtain the required expression for the DIF of the ultimate relative strain.
Dynamic increase factors of CFRP. In contrast to concrete, under dynamic loading of CFRP not only the strength increases but also the ultimate tensile strain and the modulus of elasticity [21,22,23]; the linear stress-strain relation is maintained. It is worth noting that under dynamic loads the risk of delamination of the composite along the adhesive joint or at the contact with the concrete surface increases, as does the risk of damage to the matrix, rupture of individual fibres and delamination into separate layers [24]. Such failure scenarios should be taken into account by an appropriate safety factor. However, it can be assumed that failure due to peeling of the composite during a single dynamic loading without unloading is unlikely, since even in the absence of adhesion to the concrete surface the jacket will resist the development of deformations in the transverse direction. To describe the change of the mechanical characteristics of the CFRP with the strain rate, the dependencies proposed in [21] are used, which were verified in the corresponding range of strain rates. Analysing formulas (18)-(20), it can be noted that the resulting values do not quite correctly correspond to the linear relationship between stresses and strains; it is therefore recommended to replace formula (20). Note that it is expression (20) that is replaced, and not (18).
Dynamic diagram. The deformation diagram of compressed concrete confined by a CFRP jacket under dynamic loading can be represented as a set of equations (22). The resulting diagram described by system (22) is presented in Figure 1. At the initial stage of loading the static and dynamic curves are fairly close, since, as already noted, the initial elastic moduli of concrete under static and dynamic compression were taken equal to each other; at this stage the jacket is not yet stretched enough to significantly change the slope of the curved section of the diagram. Note that at the inflection point of the dynamic diagram not only the strength of the confined concrete is higher, but also the plastic deformations. This is caused by the specific behaviour of CFRP under dynamic load.
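The energy-invariance postulate and the remark on the CFRP factors can both be checked numerically. The sketch below uses a parabolic ascending branch with equal static and dynamic initial moduli and solves for the dynamic ultimate strain at which the static failure energy is reached; the material values, the strength DIF and the shape of the unconfined curve are assumptions for illustration, not the closed-form result of the paper.

```python
import numpy as np

# Parabolic ascending branch for unconfined concrete (peak stress f, initial
# modulus E0, peak strain 2f/E0); beyond the peak the stress is held at f.
def sigma(eps, f, E0):
    eps0 = 2.0 * f / E0
    x = np.clip(eps / eps0, 0.0, 1.0)
    return f * (2.0 * x - x ** 2)

def ultimate_strain_dif(f, E0, eps_u, k_f):
    """Dynamic ultimate strain from the energy-invariance postulate:
    the area under the dynamic curve up to failure equals the static one."""
    grid = np.linspace(1e-6, 2.0 * eps_u, 20001)
    deps = grid[1] - grid[0]
    w_static = np.trapz(sigma(grid[grid <= eps_u], f, E0), dx=deps)
    # same initial modulus E0 under static and dynamic loading, higher strength
    w_dynamic = np.cumsum(sigma(grid, k_f * f, E0)) * deps
    eps_u_d = grid[np.searchsorted(w_dynamic, w_static)]
    return eps_u_d / eps_u          # DIF of the ultimate relative strain

print(ultimate_strain_dif(f=40.0, E0=32000.0, eps_u=0.0035, k_f=1.3))

# CFRP: with a linear stress-strain law f_R = f_E * eps_fu, the three DIFs are
# not independent; the ultimate-strain factor must equal k_f / k_E.
k_f_frp, k_E_frp = 1.20, 1.08       # assumed CFRP DIFs for strength and modulus
print("consistent CFRP strain DIF:", k_f_frp / k_E_frp)
```

Here the ultimate-strain factor comes out below 1, i.e. the deformability at failure decreases as the dynamic strength rises, which matches the qualitative statement that the plasticity of concrete drops with increasing strain rate.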
When moving to the straight section, the contribution of the composite material to the resistance increases. The slope of the linear section of the diagram is determined only by the increased mechanical characteristics of the CFRP under dynamic loading. At strain rates below 50 s^-1 the slope of the linear section does not differ significantly from that obtained under quasistatic loading.
Figure 2. Dependence of the DIF for the ultimate strength of confined concrete on the strain rate.
Analysing the dependence presented in Figure 2, it can be noted that a significant increase in the bearing capacity of the confined concrete is observed at strain rates above 10 s^-1. The results obtained from the diagram require experimental confirmation. In addition, these dependences are valid only for sufficiently short columns, for which the influence of longitudinal bending under dynamic actions is not significant [25].
Conclusions. The following results were obtained:
1. Based on the assumption that concrete deforms according to the same law under static and dynamic loading, a deformation diagram of concrete confined by CFRP jackets was obtained for strain rates in the range 10^-3 to 10^2 s^-1. This diagram is a modification of the quasistatic diagram obtained in [9].
2. Using the principle of invariance of the potential energy of concrete deformation at the moment of its failure under regime loading, a formula was obtained for the DIF of the ultimate relative strain of unconfined concrete, k_d,eps.
3. It was found that a significant increase in the bearing capacity of the confined concrete begins at strain rates of 10 s^-1 or more.
2020-07-16T09:02:42.382Z
2020-04-02T00:00:00.000
{ "year": 2020, "sha1": "29ebe6eb91df9f8418c754d8bb225d2bd0e240d4", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/869/5/052046", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8fd752699651bb00c405c9f76ca959aff4c85dd6", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
118416851
pes2o/s2orc
v3-fos-license
Dephasing in a quantum dot coupled to a quantum point contact We investigate a dephasing mechanism in a quantum dot capacitively coupled to a quantum point contact. We use a model which was proposed to explain the 0.7 structure in point contacts, based on the presence of a quasi-bound state in a point contact. The dephasing rate is examined in terms of charge fluctuations of electrons in the bound state. We address a recent experiment by Avinun-Kalish {\it et al.} [Phys. Rev. Lett. {\bf 92}, 156801 (2004)], where a double peak structure appears in the suppressed conductance through the quantum dot. We show that the two conducting channels induced by the bound state are responsible for the peak structure. We investigate a dephasing mechanism in a quantum dot capacitively coupled to a quantum point contact. We use a model which was proposed to explain the 0.7 structure in point contacts, based on the presence of a quasi-bound state in a point contact. The dephasing rate is examined in terms of charge fluctuations of electrons in the bound state. We address a recent experiment by Avinun-Kalish et al. [Phys. Rev. Lett. 92, 156801 (2004)], where a double peak structure appears in the suppressed conductance through the quantum dot. We show that the two conducting channels induced by the bound state are responsible for the peak structure. Introduction. Coherent transmission of electrons through a quantum dot (QD) has been investigated using an Aharonov-Bohm (AB) interferometer to understand phase coherent transport transport [1,2]. For this purpose, controlled dephasing experiments are essential. In Refs. [3,4], experiments were performed using mesoscopic structures with QDs. Ref. [3] measured the suppression of coherent transmission through a QD embedded in an AB ring. A quantum point contact (QPC) is capacitively coupled to a QD in the Coulomb blockade regime. Adding an electron to the QD changes the transmission probability T through the QPC by ∆T . When the source-drain voltage V QPC through the QPC is finite, there are the current fluctuations, i.e. the shot noise. The QPC then induces dephasing in the QD. The visibility of the AB interference pattern is 1 − α [3,5,6,7] with α = γ/Γ, where Γ is the level width of the QD and Recently a controlled dephasing experiment was investigated for the QD in the Kondo regime [8]. Preceding this experiment, Silva and Levit addressed the problem using the slave boson mean field theory [9]. The conductance G of the QD is suppressed by ∆G = −G(V QPC = 0)α with where T K is the Kondo temperature. Later, Kang [10] using the 1/N expansion calculated that ( The experiment demonstrated several interesting features. One of them is the magnitude of ∆G. It is about 30 times larger than Eq. (2). Kang [10] showed that the dephasing rate can be large when the QPC is geometrically asymmetric. Another intriguing result is that ∆G shows a double peak structure as a function of T . This result has not yet been addressed and it is natural to associate the problem with another intriguing feature of the QPC, the 0.7 structure [11,12]. In many experiments, the conductance G QPC through the QPC shows an additional plateau near G QPC = 0.7 × 2e 2 /h at zero magnetic field [11,12,13,14,15,16]. Further experiments have been performed [17,18,19,20,21] to understand the features that cannot be explained by the conventional point contact model. 
In parallel, many theoretical studies have been made using different models, including an antiferromagnetic Wigner crystal [22], spontaneous subband splitting [23], spindependent electron correlations [24,25,26,27], and numerical calculations using the density functional theory [28,29,30,31,32]. Refs. [29,31,32] demonstrated the formation of a quasi-bound state in the QPC, which is responsible for localized spins near the QPC. Grounded in this finding, a generalized Kondo model has been invoked to describe transport properties through the QPC [33]. The bound state and the Coulomb interaction in the QPC cause an additional plateau of G, which exhibits the Kondo effect. Kondo physics has been observed at low temperature and voltage bias [15]. In addition, recent experiments [34,35] measured the shot noise through the QPC as a function of magnetic field. The results indicate two conducting channels with different transmission amplitudes. Ref. [36] showed that the model [33] is consistent with the experimental results in Ref. [34,35]. In this paper, we investigate a dephasing mechanism in a QD using the generalized Kondo model [33] in a QPC. The dephasing rate is examined in terms of charge fluctuations of the quasi-bound state in a QPC. The presence of the state in the QPC accounts for a dephasing mechanism which is qualitatively different from the mechanism without the bound state. The two conducting channels due to the bound state are responsible for a double peak structure of the dephasing rate, which is observed in Ref. [8]. Model. We consider a QD-QPC hybrid system as depicted in Fig. 1(a). The model Hamiltonian of the system consists of three parts, H QPC , H QD , and H QPC−QD as shown below. The model Hamiltonian of the QPC proposed in Ref. [33] is the generalized s-d model: H sd with andc kσ creates an electron with momentum k and spin σ in lead L and R; E (1) = E 0 and E (2) = E 0 + U with the energy level of local spin state E 0 and the Coulomb energy U . S is the local spin due to the localized state. We assume J The Hamiltonian of the QD is the conventional Anderson model: an electron in the QD with spin σ, whilef kσα creates an electron with momentum k and spin σ in the lead α attached to the QD with the tunneling matrix element V T ; n σ =d σ d σ , and ε 0 and U d are the energy level and the Coulomb energy in the QD, respectively. The third part of the Hamiltonian, H QPC−QD describes the interaction between the QPC and the QD. Localized electrons in the QPC interact with the electrons in the QD: where n QPC is the number of localized electrons in the QPC, and W is the coupling constant. The energy level in the QD is shifted by H QPC−QD : ε 0 → ε 0 + W n QPC . Two channel induced dephasing. The conductance through the QPC was calculated using second order perturbation theory [33,37]: r ) 2 (8) and the density of states ρ L (ρ R ) in the left (right) leads. We have introduced a renormalized coupling constant ] that characterizes the Kondo effect in the QPC [33]. The right-hand side of Eq. (8) consists of three terms proportional to (J (1) ) 2 , (J (2) ) 2 , and J (1) J (2) . This combination of the terms indicates an AB interferometer picture with the J (1) and J (2) channels in the QPC as depicted in Fig. 1 (b). Note that the appearance of the J (1) J (2) term is not peculiar to the s-d model (5). When multi channels involve electron transport, interference between them occurs. 
The electron transport through the QPC induces fluctuations of n QPC since it takes place via the co-tunneling processes described by J (i) in Eq. (6). If no current flows through the QPC, n QPC = 1. When electrons pass through the J (1) channel, virtual excitations from n QPC = 1 to n QPC = 0 are involved, while when electrons pass through the J (2) channel, excitations to n QPC = 0 are involved. These situations are depicted in Fig. 1 (c) with n QPC − 1. This change in n QPC shifts ε 0 in the QD. In this way, the transmission of electrons through the QPC is monitored by electrons in the QD. The current fluctuations (shot noise) through the QPC lead to fluctuations in n QPC and eventually in ε 0 . It has been shown that the fluctuations of ε 0 due to the external environment lead to dephasing in the QD, where the time evolution of d σ shows a exponential decay due to the fluctuations [6]. Transport through the "AB ring" in the QPC is monitored by the QD through these charge fluctuations. The terms proportional to (J (1) ) 2 and (J (2) ) 2 give the transmission probability with n QPC − 1 = ∓1, respectively. These processes are monitored by the QD. The J (1) J (2) term, on the other hand, describes the interference between the excited states with n QPC − 1 = ±1. This indicates that the term, compared to the (J (1) ) 2 and (J (2) ) 2 terms, involves smaller charge change in n QPC after an electron passes through the QPC. In other words, the current fluctuations of the (J (i) ) 2 terms contribute to the dephasing in the QD while those of the J (1) J (2) term can be negligible. The dephasing rate γ is then the sum of the dephasing rates of the two independent channels, (J (1) ) 2 and (J (1) ) 2 terms. In each channel, we use the result of the previous theories [3,5,6,7] for a single channel QPC model. The measured ∆T characterizes the interaction between the QPC and QD. The total dephasing rate γ is, instead of Eq. (1), with γ 0 (T ) = [∆T (T )] 2 /T (1 − T ), where T i is the transmission probability through the channel J (i) : The common factor of ∆T (T ) appears for both transmission channels. This is because ∆T is measured by adding an electron to the QD, and this affects both channels equally. We calculate γ in Eq. (9) as a function of T in Eq. (8). We use a perturbative approach with in place of T and T i . This corresponds to taking into account the perturbative corrections to H Lead by H sd . The current though the QPC is calculated in the following way. We expand the Keldysh action T c exp(−iS sd ), where S sd is the action of the s-d interaction (5), and the time order is taken along the Keldysh contour. We expand the action up to the second order in J (i) , and reduce it to the bilinear form with respect to conduction electron fields using Wick's theorem [38]. Then we have a noninteracting model without H sd and with the renormalized actionS 0 for the kinetic term of conduction electrons. Then the current through the QPC is calculated withS 0 and the current operator The transmission probability is then given by Eq. (11). If the renormalization of S 0 is disregarded, whereS 0 = S 0 with the action S 0 for Eq. (4), the transmission probability is given by Eq. (8). The origin of T in the denominator of Eq. (11) is the s-d scattering in each lead, while T in the numerator is the scattering between two leads. Since the s-d coupling constants are equal for both scatting processes, the same factor of T appears. 
In a similar way, the transmission probability T i through the channel J (i) acquires the denominator, 1 + T i . Comparison with experiment. We need to find the T dependence of γ 0 from the experimental data. In Fig. 2(a), symbols indicate the two sets of the experimental data in Ref. [8]. To fit these data, we use γ 0 (T ) = 0.9 T /0.2 × 10 −5 when T < 0.2 and γ 0 (T ) = 0.9 exp(1 − (T /0.2) 0.7 ) × 10 −5 when T > 0.2. The plot is shown by the solid line in Fig. 2(a). This choice of γ 0 reflects the fact that ∆T is a highly asymmetric function with respect to T ; The maximum of ∆T is located at T = 0.2. Other choices of γ 0 will give qualitatively similar results. In the experiments, the differential conductance through the QPC exhibited a zero bias anomaly (ZBA) while no clear sign of the 0.7 structure was observed. In Ref. [15], a ZBA was observed, which confirms that it originates from the Kondo effect. The absence of a clear 0.7 structure does not contradict the Kondo effect but rather it indicates that the effect is strong. In Fig. 2(b), G QPC is plotted as a function of the Fermi energy E F of conduction electrons with ρ(V (1) ) 2 /|E 0 | = 0.25, ρ(V (2) ) 2 /|E 0 | = 0.025, and U/|E 0 | = 1.5. The parameters are chosen so that the QPC does not show a clear 0.7 structure in G QPC . In Fig. 2(c), γ/(eV QPC /8π) = γ 0 (T 1 ) + γ 0 (T 2 ) is plotted as a function of T by the thick solid line, while for comparison γ 0 (T ) for the conventional single channel QPC model is shown by the thin solid line. A double peak structure of γ appears as in the experiment, in contrast to a single peak structure. The peak positions are located at T ∼ 0.25 and T ∼ 0.7. According to Ref. [10], ∆G is given by Eq. (3). It is proportional to γ 2 ∝ [γ 0 (T 1 ) + γ 0 (T 2 )] 2 when γ ≪ T K . The broken line in Fig. 2 (c) shows the result for this case. The double peak structure becomes more pronounced. We should mention a consequence of the asymmetric line shape of ∆T , which questions the dephasing theory based on the conventional model of the QPC. The dephasing rate is too small when T ∼ 0.7 besides the absence of the extra peak. If ∆T were symmetric, γ 0 (T ) would be symmetric around T = 1/2. The difference between the experiment and theory was then quantitative, but not qualitative. The experiment revealed an essential feature of the QPC. For the two channel model used here, on the other hand, this asymmetry helps to show the double peak structure. Discussion. If the 0.7 structure of G QPC is observed, the second peak near T = 0.7 of γ is sharper than the one without the 0.7 structure. This is because the conductance is changed noticeably near T = 0.7, and then the shot noise through the J (2) channel changes abruptly as well. We did not address the amplitude of ∆G. As pointed out by Kang [10], the asymmetrical structure of the QPC induces an larger dephasing rate in the experiment. In this case, the dephasing rate depends on not only ∆T but also on the change of the phase shift through the QPC, which requires additional information from experiments, such as measurements in the device setup in Ref. [4]. In conclusion, we have discussed the dephasing mechanism due to charge fluctuations of a quasi-bound state in a quantum point contact. The bound state is responsible for there being two transmission channels. The dephasing rate is proportional to the sum of the transmission probability through these two channels. 
This mechanism explains the double peak structure of the suppression rate of the conductance, observed in a recent experiment [8]. The result is qualitatively different from the rate without the bound state in the QPC.
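A compact numerical sketch of this two-channel picture is given below. It reuses the piecewise fit for gamma_0(T) quoted in the comparison with experiment above; the way the total transmission is split between the two channels (a simple sequential opening) is an assumption made only to illustrate how the sum gamma_0(T1) + gamma_0(T2) develops two maxima, and is not the perturbative result of Eq. (11).

```python
import numpy as np

# Fitted single-channel dephasing rate gamma_0(T) from the text,
# in units of e*V_QPC/(8*pi):
def gamma0(T):
    T = np.asarray(T, dtype=float)
    return np.where(T < 0.2,
                    0.9 * (T / 0.2) * 1e-5,
                    0.9 * np.exp(1.0 - (T / 0.2) ** 0.7) * 1e-5)

# Assumed sequential opening of the two channels induced by the bound state:
# channel J(1) carries the transmission first and saturates, channel J(2)
# opens afterwards.  This split is illustrative only; in the paper T1 and T2
# follow from the s-d couplings via Eq. (11).
def channel_split(T):
    T1 = np.minimum(1.0, 2.0 * T)
    T2 = np.maximum(0.0, 2.0 * T - 1.0)
    return T1, T2

T = np.linspace(0.0, 1.0, 201)
T1, T2 = channel_split(T)
gamma_two    = gamma0(T1) + gamma0(T2)   # two channels induced by the bound state
gamma_single = gamma0(T)                 # conventional single-channel QPC
# gamma_two shows two maxima (here near T ~ 0.1 and T ~ 0.6), while
# gamma_single has a single maximum at T = 0.2.
```

With this illustrative split the two maxima appear near T of about 0.1 and 0.6; the peak positions reported in the paper (about 0.25 and 0.7) follow instead from the actual couplings J(1) and J(2).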
2007-09-13T04:20:47.000Z
2007-09-13T00:00:00.000
{ "year": 2008, "sha1": "cd5e044875bfafe4e7195ef2b879109a7497131f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0709.1991", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cd5e044875bfafe4e7195ef2b879109a7497131f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
236609851
pes2o/s2orc
v3-fos-license
THE LEGAL PROTECTION WEAKNESSES ON COSTUMERS OF ONLINE SHOP TRANSACTIONS Ukie Tukinah Sekolah Tinggi Ilmu Ekonomi Semarang Electronic transactions that are practiced in online transactions create unequal bargaining power between businesses and customers. Business actors often use the weak position of customers to get the maximum benefit from customers. This study uses a normative juridical approach. The research results obtained include the weaknesses of law enforcement, both from the Customer Protection Agency and the Indonesian Customers Foundation, arguing that there are factors that cause customer protection conditions in Indonesia to be so alarming: First, there is still an asymmetrical relationship between producers and customers. Second, customers generally do not meet sufficient bargaining power against business actors. Third, the Government in general tends to side with business actors. Fourth, there is no sense of concern from existing law enforcement institutions, both from the Police, Attorney General's Office, and the Court. A. INTRODUCTION The Indonesian nation has a noble legal instrument as the foundation of national and state life, namely Pancasila and the 1945 Constitution. The consequence of making Pancasila as the basis of the nation's philosophy means that in every life of the nation and state, Pancasila must be the basis that animates every step of development including the development of the Indonesian National Law System., both in the development of legal substance, legal structure and legal culture. 1 Technological developments are very significant, resulting in developments in various aspects of people's lives. Including the business community, while the trading community has taken advantage of technological advances. Not only that happens in trade traffic, but also in trade relations. 2 Online transactions are a new way of buying and selling by utilising advances in information technology. Online transactions develop in society as a result of technological developments and the increasing number of internet users in Indonesia. 3 Customer-only transactions on the Internet (World Wide Web) involve retail trade operators (run by individuals, families or groups, or incorporated companies) who take orders from customers and fulfill them directly from their own inventory or, if the retail operator does not own any savings, indirectly through manufacturers or wholesalers who pack and deliver goods to customers on their behalf. 4 Companies use e-commerce on many levels. There are those who just use e-mail for certain parts, for example: only applied in the sales department. But there are also those who use web pages to display company and product profiles. 5 In the business world, a website in the form of e-commerce is already a necessity for a business that has advanced today for business development because there are various benefits that e-commerce has. Among them are customers who do not need to come directly to the store to choose the items they want to buy and for companies to carry out transaction activities for 24 hours. Second, from a financial point of view customers can save costs and for entrepreneurs can save on promotional costs, if the location of the store is far away, customers can save on travel costs by being replaced by cheaper shipping costs and for entrepreneurs they can market their shops to a wider area. 6 Along with the development of today's business world, e-commerce is a necessity to increase and win business competition and product sales. 
In the process of using e-commerce, buying and selling and marketing activities are more efficient where the use of e-commerce will show ease of transactions, reduce costs and speed up the transaction process. The quality of data transfer is also better than using manual processes, where there is no re-entry which allows human errors to occur. 7 The development of electronic transactions is inseparable from the growth rate of the internet, because electronic transactions run through the internet network. The rapid growth of internet users is a fact that makes the internet an effective medium for businesses to introduce and sell goods or services to potential customers from all over the world. 8 Electronic trading in practice is similar to traditional trading, but has advantages that can directly benefit to increase company income and profits. With its flexibility, electronic commerce can cut marketing costs with its ease and sophistication in conveying information about goods and services directly to customers wherever they are. Companies that do business electronically can also cut store operating costs because they don't need to display their goods in large stores with many employees. 9 Electronic transactions that are practiced in online transactions create unequal bargaining power between businesses and customers. It can be explained by the fact that business actors selling their goods and / or services online often include standard contacts so as to create asymmetric bargaining power (unequal bargaining power). Business actors often use the weak position of customers to get the maximum benefit from customers. The factor of customer ignorance is due to unclear information on goods / services provided by business actors, customers do not understand the transaction mechanism. Therefore, in order to create a healthy business climate for customers in conducting trade transactions through e-commerce, it is necessary to seek a new and adequate form of legal regulation capable of regulating all their activities. 10 The purpose of writing in this study is to determine and analyze the weaknesses of legal protection in unjust online shopping (e-commerce), due to the wide impact that can be generated. It is hoped that it can be useful theoretically for the development of legal science, especially customer protection (e-commerce) and business actors, as well as this research is expected to find new theories of law enforcement to fulfill the sense of justice in society. B. RESEARCH METHODS The researcher used a normative juridical research method with 3 (three) approaches to examine the two problems discussed by this normative research method, namely the legal approach and the conceptual approach. A statutory approach is needed to trace the legislative ratios and the ontological basis for the formation of legislation. 11 The specification of this research is descriptive analysis, which is research that not only describes the state of the object but provides an overview of the weaknesses of legal protection in unjust e-commerce, due to the wide impact it can have. It is hoped that it can be useful theoretically for the development of legal science, in particular. customer protection (ecommerce) and business actors, as well as this research is expected to find new theories of law enforcement to fulfill the sense of justice in society. 12 C. RESULTS AND DISCUSSION 1. 
Juridical Weaknesses In Online Transactions The development of e-commerce is very significant, so it must be balanced with legal certainty that regulates protection for e-commerce customers. Until now, protection for e-commerce customers has not been made. This worsens the condition of customer protection due more to the weakness of the system, seen from the weak coordination between departments or institutions, for example in the issuance of integrated regulations. In terms of protecting customers involved in online transactions, Rothchild highlighted that the inherently international nature of electronic commerce presents an "opportunity" and a "challenge" to customer protection policies as a result of "disinter mediation". Compared to customer contracts through suppliers in the same jurisdiction, disinter mediation allows customers to directly enter into contracts with sellers (business actors) who are in other jurisdictions. This disintermediation of customer contracts provides benefits and challenges to businesses and customers. 13 Customer rights as stated in Article 4 of Act No. 8 of 1999 concerning Customer Protection are broader than the basic rights of customers as first put forward by the President of the United States J.F. Kennedy before the congress on March 15, 1962, which consisted of: a. The right to security; b. The right to vote; c. Right to information; d. The right to be heard. 14 The customer protection system in Indonesia still has many weaknesses. Weaknesses in this system result in many violations of the rights of customers or Indonesian society. a. Weaknesses of the Customer Protection Act 1) The first weakness There is a legal vacuum from e-commerce customer protection. Currently, the National Customer Protection Agency is working to implement Act No. 8 of 1999 concerning Customer Protection. However, the 20-year-old Law is considered unable to accommodate the current rapid technological developments, especially technological developments, especially the development of e-commerce, related to the protection of e-commerce customers. 2) The second weakness This happened because the Personal Data Protection Act was not immediately promulgated into a regulation. At the same time, customer personal data is used in a number of digital applications. If you buy goods on sites of online business actors such as 3) The third weakness This happens because of the many channels of complaints for the community, especially e-commerce customers. For example, the BPKN only acts as a complaint receiving agency. For the settlement process, the authorized institution is the Customer Dispute Resolution Agency. b. Weaknesses of the Law on Electronic Information and Transactions E-commerce has weaknesses, namely the method of electronic transactions that does not bring business actors together with customers directly, and there is no opportunity for customers to see directly the goods ordered have the potential to cause problems that harm customers, including mismatching of the type and quality of goods promised, inaccuracies time of delivery of goods, insecure transactions ranging from payments using other people's credit cards (piracy), illegal access to information systems (hacking), website destruction to data theft. Furthermore, payment by filling in a credit card number in a public internet network also carries no small risk, because it opens up opportunities for fraud or theft. Transaction problems through e-commerce have a large enough risk. 
Especially regarding the payment there is a risk of loss on the part of the customer, because the customer is usually required to make a payment in advance, while he cannot see the quality of the goods ordered and there is no guarantee of certainty that the goods ordered will be sent according to the agreement. From a legal point of view, the problems are related to legal certainty. These problems include, for example, the validity of business transactions from the aspect of civil law (for example if it is carried out by someone who is not yet capable / mature), the problem of digital signatures or electronic signatures and data mesage. In addition to other problems that arise, for example, with regard to guaranteeing the authenticity of data, confidentiality of documents, obligations in relation to taxes, the law appointed in case of breach of agreement or contract, issues of legal jurisdiction and also which legal issues should be applied in the event of a dispute. If you pay attention to the legal protection for customers, it is quite complete, especially from e-commerce, there are other supporting regulations, namely Law No. 11 of 2008 concerning the Law on electronic information and transactions. In fact, customers are still victims of transactions in e-commerce because business actors do not pay attention to business ethics, entrepreneurs who are not responsible for losses or delays in delivery times, as well as defective goods, these business actors take advantage of the situations and conditions of the existence of customers who are far away and not face to face with business actors. From the customer side, namely the lack of knowledge about e-commerce, there needs to be an even and continuous socialisation of the applicable rules and regulations. Customers must pay attention to the slogan carefully before buying. Customers themselves must also know what their rights and obligations are. 15 The agreement that exists in e-commerce also applies to the principles in the Civil Code. In the Criminal Code, customer protection exists in article 378, which protects customers from fraud, including those committed by business actors. Regarding evidence of electronic transactions since the enactment of Law no. 11 of 2008, then electronic transaction files or e-commerce files can be used as evidence. E-commerce transaction security guarantees are needed to protect customers and foster customer confidence, and in the end it is hoped that an increase in the volume of transactions through ecommerce is expected. In relation to protection guarantees, not a few customers are unaware of the ITE Law or protection guarantees provided by other laws and regulations, including the Customer Protection Law. This is especially so for customers who are unfamiliar and inexperienced in online transactions with concerns that their transactions are not as expected. This concern is mainly for sellers or business actors who offer their goods through online shops or electronic system providers that facilitate transactions without certification. When compared with transactions in the real world, transactions or buying and selling relations in cyberspace have the potential for crime or at least harm other parties, which is much greater, in addition to the benefits of each party. This is due to the easier interaction between business actors and customers who transcend national boundaries. 
Although in various countries, even internationally, various regulations have been established that attempt to eliminate actions in transactions that are detrimental to other parties, this is not fully controllable by state agents having the authority to do so. The difficulty in controlling legally is mainly due to issues of jurisdiction and legal substance that are not yet fully harmonised between one country and another, including dispute resolution mechanisms or procedures. The gap in positions between one company and another business actor or between business actors and customers conducting transactions is not easily aligned, with the position of customers being weaker than business actors. This appears to be an inherent trait in the principle of freedom of contract. Therefore, it is not uncommon for a contract to emerge, which substantively, the whole of its intent, whether understood or not understood by the customer, puts the customer at a disadvantage. It is not an exaggeration if the Customer Protection Law sets limits for business actors. These limits can mean giving additional strength to customers so that their weak position can be protected from the abuse of the strong position of business actors to gain profits at the expense of customers. In the context of electronic trading, the ITE Law does not impose limits on business actors or business actors in their relationship with customers, so that it is fully based on the Customer Protection Law. The regulation of the ITE Law is general in relation to the implementation of electronic transactions and electronic systems. The limitations regulated by the Customer Protection Law in relation to electronic contracts and matters relating to electronic contracts do not in itself guarantee the absence of potential customer losses caused by contract terms and contracts made by companies or business actors. This shows that the legal terms of the agreement in article 1320 of the Criminal Code are not yet effective because the information on contract terms is incomplete and clear where the competent element has not been regulated even though someone can make buying and selling transactions at the age of 18 years and over, the element of agreement is that there are still many customers who are disadvantaged. Because the making of standard or standard contracts in electronic commerce places customers in a weaker position compared to business actors, for certain elements customers suffer a lot because the ordered goods received by customers do not match what is offered, such as in the purchase of household appliances, there are many disappointments that experienced by customers and the elements of halal causes, many prohibited items are easily obtained through electronic trading. Apart from customer knowledge of the provisions of the Law on Electronic Information and Transactions, actually in the Law on Electronic Information and Transactions there are several provisions that indirectly provide protection for customers. The provisions referred to include: electronic information as valid legal evidence (Article 5), digital certificate, which in Articles 13 and 14 of the Law on Electronic Information and Transactions is called "electronic certificate", the obligation to administer an electronic system in an electronic manner reliable and safe and responsible. In fact, this is difficult to enforce, so there is a possibility that electronic system operators that are not certified may emerge. 
Legal Protection for E-Commerce customers based on the value of justice Electronic commerce which continues to develop will also be followed by the development of laws that provide customer protection. This is directly proportional to the level of customer confidence and the increase in electronic transactions (e-transactions). Therefore, laws and regulations on customer protection as well as the Law on Information and Electronic Transactions are needed which can guarantee the settlement of the main problems of concern in electronic trading. As stated by Sutatip Yuthayotin in a study in the European Union that: A recent EU survey on customer confidence revealed that the confidence level of E-Commerce is extremely low. Approximately 56% of the surveyed e-customers gave the reason that foreign businesses are less likely to comply with customer protection law. Around 71% of the customers surveyed cite the extreme difficulty in resolving dispute, e.g., those that may arise from return of goods, price settlement, warranties, etc., as the reason for such lack of confidence. About 65% of the surveyed customers believe it would be problemetic if they change their mind and return the products that they purchased online for a merchant locating abroad. 16 A recent European Union survey of customer confidence revealed that the level of confidence in e-commerce is very low. An estimated 56% of surveyed e-commerce customers argued that foreign businesses were poorly compliant with customer protection laws. Approximately 71% of the customers studied stated that it was very difficult to resolve disputes, such as returning goods, settling prices, guarantees, and so on, as reasons for their lack of trust. About 65% of customers studied believe this to be a problem when they change their intentions and return products that have been purchased online from sellers located abroad. Furthermore, it is said that: Looking at the above data, it can be said that customers around the world have no great confidence in E-Commerce and this constitutes an important barrier that prevent customers from participating in online sales. This, as a result, undermines the potential growth of B2C etransactions. In a global debate on customer protection in the online market, a lack of customer trust in the existing customer protection standard is the main problem that needs to be addressed. 17 There is a need for a new concept regarding the law of electronic commerce (e-commerce law), in particular the law that guarantees customer protection. The customer protection law and the Information and Electronic Transaction Law in this case are an instrumental form of law, which organises the achievement of the goals of a fair and efficient customer market. This means that in the customer market in cyberspace, the customer-business actor relationship does not merely give the business actor a greater weight of profit, but is balanced with meeting the needs or interests of customers. Nonetheless, neither the customer protection law nor the Electronic Information and Transaction Law does not hinder the development of electronic commerce. As a comparison, it is also a concern that UNCITRAL's Model Law on Electronic Commerce (1996) was formed, especially in relation to limitations or requirements regarding writing and electronic signatures. As Paul Todd said, "UNCITRAL appears to have feared that development of state laws, such as those in Utah, Germany and Italy, might impede the development of E-Commerce." 
(UNICITRAL appears to be concerned that developments At first customer protection law in particular, was developed from the concept of inequality of bargaining power, contemporary customer protection law is the best conceptualization as regulation of the customer market and includes an analysis of the relative role of public, private and self-regulatory techniques. A study of institutional discretion and the issue of guaranteeing the effectiveness and the formation of accountable laws, standard sets and enforcement. This regulatory perspective removes the traditional distinction between private and public law, and includes soft law, moral and other "non-legal" techniques in this area. Instrumental conceptualization undermines legal reasoning autonomy because analysis of customer protection law and policy refers to disciplines such as economics and sociology that contribute to understanding customer behaviour and the consequences of different policy choices. 18 The law of electronic trading is currently experiencing a lack of uniformity (equality), an unfavorable situation since electronic trading transactions are often carried out by people from different countries. Apart from differences, the world's e-commerce laws have other drawbacks, 19 so that the Law on Information and electronic transactions or the provisions regarding contracts in the Civil Code also still need harmonisation to approach the uniformity of the law. The reconstruction in this case uses a comparative law and culture approach, as expressed by Jean Brissaud, a legal historian, who has compared society with biological organisms when discussing the history of law in France. In the same way, legal systems are like living things. Legal systems have specific and separate organs, each carrying out its own function and as a whole declaring something that is alive and continues (survives). In this connection, legal reconstruction such as major surgery. 20 In this surgery, organ transplants that are imported from other people's bodies can be performed. Likewise, the reconstruction of national law is related to weaknesses in dealing with electronic commerce, especially in terms of information on contract terms. This reconstruction is related to the substantive customer protection law. As stated by Lorna E. Gillies, this substantive customer protection law is to provide material justice to customers in recognition of the inequality of bargaining power between the parties. 21 Furthermore, by quoting Ramsay's opinion, Gillies emphasised that the substantive customer protection law must be critical of the inequality of bargaining power as a reason for customer 18 protection. The quote referred to, namely: ......the concept for those market and private-law failure which cause customers to suffer economic detriment. It is therefore necessary in a case of alleged unequal bargaining power to diagnose its particular source, for example, information failure, or the high transaction costs of redress or of customer organization. 22 In another section, Gillies emphasised that the main approaches for legal regulation of electronic trading activities are individual state regulation, model laws and harmonisation. Since the objective of substantive customer protection law is to protect customers, customers are undoubtedly given juridical protection through international private law, regardless of the methods used by the parties in one-to-another contracts. 
To achieve this goal, the government must adapt the rules regarding electronic trading contracts in its jurisdiction. 23 D. CONCLUSION Weaknesses in law enforcement, both from the Customer Protection Agency and the Indonesian Customers Foundation, argue that there are factors that cause the condition of customer protection in Indonesia to be so alarming: First, there is still an asymmetrical relationship between producers and customers. Second, customers generally do not meet sufficient bargaining power against business actors. Third, the Government in general tends to side with business actors. Fourth, there is not enough care from existing law enforcement institutions, both from the Police, the Attorney General's Office, and the Court. Weaknesses in customer protection, which indicate an imbalance in the position between business actors and customers, among others, the Law on Information and Electronic Transactions as well as the Law on Customer Protection and information on contract terms that may even lead to unbalanced standard contracts information on contract terms is difficult to access; and information on contract terms is not complete and clear. 22 Ibid. 23 Ibid., page.45.
2021-08-02T00:06:54.943Z
2021-04-17T00:00:00.000
{ "year": 2021, "sha1": "83bc22b16b66b869f66ce0fce5d7d29803ba3d77", "oa_license": "CCBY", "oa_url": "http://jurnal.unissula.ac.id/index.php/PH/article/download/15380/5411", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bd9e0ebacf7b4ad4edf32dd1c60168aa40e9ecf6", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Business" ] }
233796121
pes2o/s2orc
v3-fos-license
Fuzzy Logic Approach for Routing in Internet of Things Network. The performance of a network is evaluated using several parameters; the network lifetime depends on factors such as residual energy, link lifetime and delay. A major challenge in the IoT is to increase the lifetime of low-power and lossy networks running RPL. The proposed approach considers these factors as inputs and outputs to evaluate network performance, and uses a Fuzzy Inference System (FIS) to select the best path so as to maximize the network lifetime. The results were obtained in MATLAB and show improved network performance: an excellent route is selected when the residual energy is 194, the link quality is 51.2 and the delay is 1.05, giving a route quality of 73.4%.
INTRODUCTION. The term "Internet of Things (IoT)" [1] acts as an umbrella covering a variety of systems: the deployment of embedded devices with sensing capabilities that communicate with other embedded devices and link the physical and digital worlds. The IoT provides smarter services and is a constantly evolving technology [2]. As discussed in [3], the IoT offers a promising opportunity to build powerful industrial systems and applications by leveraging the growing ubiquity of radio-frequency identification (RFID) and of wireless, mobile and sensor devices. To support the growth of the IoT in industry, the authors of [3] review current IoT research, key enabling technologies and major industrial IoT applications, and identify research trends and challenges; their key contribution is a systematic summary of the current state of the IoT and its use in industry.
RPL [4] is regarded as the standard for organizing routing for collection-type traffic patterns. Starting from a border router, RPL builds a DODAG using one or several metrics. The DODAG is constructed taking into account link costs, node constraints and an objective function, and the rank of each node in the DODAG is computed by the objective function. RPL supports several kinds of traffic, such as multipoint-to-point, point-to-multipoint and point-to-point. To keep the topology loop-free, the rank must strictly increase from the root towards the leaves of the DODAG. In complex scenarios the lossy network may be divided into several partitions depending on the application, so several disjoint DODAGs with independent roots can be formed. RPL supports multiple instances that can run simultaneously on the network devices, and nodes participating in a DODAG may use different routing metrics to find the best path for transporting data. In this paper we use three main parameters, residual energy (RE), link lifetime (LT) and delay, to select the best route.
The main objective of this research is to design and develop a routing algorithm for IoT networks by proposing a node selection algorithm. The goal is a novel routing strategy in which the best route is selected on the basis of residual energy, link quality and delay. The major contribution of the paper is the proposed node selection algorithm, which obtains the best route quality and thereby improves the performance of the IoT network. The rest of the paper is organized as follows: the literature survey is presented in Section 3, Section 4 covers the problem definition, Section 5 deals with the proposed factors affecting the route, and Section 7 presents the proposed algorithm.
The results, with outcome in section 8 and, section 9 deliberates the conclusion. RELATED WORKS 2.1 RPL Overview RPL routing protocol remains to exploit the complete generation of the system by attractive maintenance of the most energy-constrained nodes. RPL planned the Expected ELT for meaning the outstanding instance of the node. They created a DODAG constructed on the ELT metric for precisely approximating the period of all the routes near the boundary router and envisioned a device for observing bottlenecks designed for dispersion the circulation load to numerous parents. RPL [4] has mostly four control messages, DODAG Information Solicitation (DIS), Information Object (DIO), Advertisement Object (DAO) and Advertisement Object -Acknowledgement (DAO-ACK). Firstly, the DODAG request is carried out in two ways • Applicant node directs the DIS demand to DODAG • DODAG directs the DIO demand messages to all contributor nodes. The DODAG permits the drop timer and the contributor node wants to transmit DAO controller communication to DODAG inside the time intermission. Then, the DODAG direct DAO-ACK controller communication to entirely contributor nodes. Challenges 1.1 The steady system is conserved by decreased the overhead and end-end delay [5]. 1.2 The routing in the system in serious condition due to convergence problems [6]. 1.3 The main factors related to security tasks are network topology [7]. 1.4 The IoT used mainly the relay function for proper functioning of sensor node. LITERATURE SURVEY Many types of researches have areas completed work on energy-aware routing in RPL and in this, it will minimize energy consumption and increase network lifetime. In [8] this offerings the routing protocols for the Internet of Things which is supportive in transporting the data into the vapors or to the operators. Several of the general directionfinding protocols are studied in this laterally with the submissions of IoT. In this paper stretches a short-term opinion of the tasks which originate when by IoT for realtime. Here IPv6, CoAP, MQTT and RPL routing protocols are conferred and enlarged. IoT consumes the possible to yield a huge quantity of facts into the folders and the data will be transmitted proficiently. Secure Multi-hop Routing Protocol (SMRP) [9] protocol attentions on collective the security of the data by avoiding spiteful outbreaks. This direction-finding protocol allows the IoT strategies to confirm previously starting a novel network or construction a standing one. The confirmation uses multilayer restrictions such as User-Controllable ID, user's pre-agreed submission(s) and list of allowable strategies into routing algorithms for joining the confirmation and routing procedures without suffering substantial expenses. As per observation by Sharief M. A et al. [10], given that IoT system fits to dissimilar holders, PAIR protocol announce a estimating perfect for assistances the transitional nodes to acquire the economic assistance as they apply their properties for transmitting. As estimating perfect of PAIR protocol is based on many restrictions like Residual energy and power consumption, recent weight and buffer space, Distance to neighbours. The persistence of the routing network designed for IoT (AOMDV-IoT) [11] is to find and generate the linking among expected nodes and the Internet nodes. The protocol defines as reactive protocol that defines the pathway on request. 
In this paper, the author contributions an expansion of AOMDV improved used for IoT, which can choice a steady Internet broadcast pathway energetically through informing the Internet linking the table. Using reproductions authors presented that the package defeat is better-quality then the end to end delay is reduced. The main detached of the Energy-aware Ant Routing Algorithm (EARA) is to adjust the routing process for exploiting the lifetime of network [12]. It defines as the swarm intelligence algorithm and reflects the similarly equal number of nodes. As the remaining drive in the IoT strategies deviations finished phase, the authors had announced the instrument near appraise energy evidence. Routing protocol originated on link and residual energy (REL) [13] usages the linkage excellence of remaining energy and wireless network throughout the pathway collection procedure to growth organizations dependability then offers QoS towards the various IoT requests. The load balancing device of this protocol circumvents the extreme use of a solitary track or solitary knot which can additional support in dropping the spots or energy hovels in the system. The energy application will be unchanging in the system. In this paper [14], the authors spoke the network lifespan optimization for the wireless sensor system. The Authors defined the strategy and investigation of numerous energy complementary methods. For a consistent grid topology, we resulting an ideal explanation. The authors demanded that the location of the base position (in the corner) streamlines the optimization problem. They presented that variable the base station location presents new dissimilarities restrictions to the problematic. Authors in [15] reflect together energy and delay metric to discovery and best pathway with lowest energy ingesting and a lowest end to end delay for real-time circulation in wireless sensor systems. This total is calculated as a linear grouping of the broadcast delay and node's energy on the pathway. PROBLEM DEFINITION Internet of things having an increase in the number of devices due to this strategy traffic will increase which is beyond the capacity of the network. The outcome will be to decrease the performance of the network. It is necessary to find proper routing paths that will give good network performance. THE PROPOSED WORK We suggest an enhanced type of RPL network. The fuzzy logic approach to excellent the finest direction to transmission the facts proficiently. The proposed algorithm finds out the quality of the selected node and it compares with the set of nodes and then selects the finest node in DODAG and the remaining nodes send data through the finest node The factors consider as below. Residual Energy Consumption Residual Energy ingestion of node is calculated after every time interval t. With the following equation, it is possible to find out the value of every node with some time interval [16]. Where: EN t -After time t energy spent by node N, N t -Total of transferred packets, N r -Total of expected packets, E t -Energy of transferred packet, E r -Energy of acceptance the packet. The remaining energy is intended by the variance among primary energy and consumed energy End to End Delay As per specified [17] as average interval occupied by data packets to effectively communicating messages crossways the system from source to destination 1 ( ) Link Lifetime The system link lifespan is predicated from the quantity of transmissions. It represents forward and reverses data delivery. 
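The first two route metrics can be computed per node directly from the definitions above. The consumed-energy expression below is reconstructed from the listed variables (the equation itself is not reproduced in the text), so it should be read as an assumption; the delay is the average per-packet time from source to destination. The link-lifetime metric, defined next, is handled analogously.

```python
# Sketch of the per-node routing metrics described above.  The consumed-energy
# formula EN_t = N_t*E_t + N_r*E_r is inferred from the variable definitions
# in the text and should be treated as an assumption.

def consumed_energy(n_tx, n_rx, e_tx, e_rx):
    """Energy spent by a node after the interval t."""
    return n_tx * e_tx + n_rx * e_rx

def residual_energy(initial_energy, n_tx, n_rx, e_tx, e_rx):
    """Residual energy = initial energy - consumed energy."""
    return initial_energy - consumed_energy(n_tx, n_rx, e_tx, e_rx)

def end_to_end_delay(send_times, arrival_times):
    """Average time taken by data packets from source to destination."""
    delays = [a - s for s, a in zip(send_times, arrival_times)]
    return sum(delays) / len(delays)

# Example with assumed numbers: 120 packets sent, 80 received,
# 0.4 / 0.3 energy units per transmitted / received packet.
print(residual_energy(200.0, 120, 80, 0.4, 0.3))
print(end_to_end_delay([0.0, 1.0, 2.0], [0.9, 2.1, 3.2]))
```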
The link quality of a path is calculated from N_i, the link lifetime, F_d, the number of data packets that reach the destination successfully, and R_d, the number of acknowledgement packets received successfully by the sender [18].
FUZZY LOGIC BASED ROUTING ALGORITHM IN RPL. Fuzzy logic is applied to the routing process to select the best route for transporting data efficiently, taking into account the three parameters residual energy consumption, delay and link lifetime. The fuzzy set was introduced in 1965 as a mathematical way to represent linguistic vagueness (Zadeh, 1965) [21]. According to the fuzzy logic concept, features and measures can be classified without sharp boundaries, which makes fuzzy logic very useful for addressing real-world problems that typically contain a degree of uncertainty.
Figure 1. Fuzzy inference system.
The FIS takes the (linguistic) inputs, processes the information and outputs the performance measure [19].
Fuzzification. Fuzzification converts crisp input values into fuzzy values: the inputs (residual energy, link lifetime, delay) are expressed through linguistic variables and membership functions.
Linguistic variables. A linguistic variable represents an input or output of the system. Residual energy has three linguistic values: High, Average and Low. The output variable has the linguistic values Awful, Bad, Degraded, Average, Acceptable, Good and Excellent.
Membership function. A membership function maps the real-world measurement values to membership degrees, so that the fuzzy operations can be applied to them; the membership values lie in the range 0 to 1.
Fuzzy rule base. The decisions made by the FIS are derived from the rules stored in the rule base. These are "if-then" statements that are intuitive and easy to understand, since they read like plain English statements.
Defuzzification. Defuzzification converts the fuzzy output into a crisp value. The output ranges between 0 and 100 and a single crisp value is delivered; a weighted-average method is used for defuzzification [20]. The fuzzy inference system thus determines the optimal path from a source node to the destination node, which improves the performance of the network.
PROPOSED WORK BASED ON RANK CALCULATION. The rank of a node is computed starting from the root node and increases by 1 at each level. The rank value is calculated using the defuzzification process, and the rank equation is defined accordingly.
Node selection process. The node selection process is based on the construction of membership functions and a rule-based system, implemented with the FIS.
Results and discussion. This section evaluates the proposed system with fuzzy inference by simulating routing for an IoT network. The performance of the proposed node selection algorithm is assessed using the residual energy, delay and link quality factors, with the route quality as the output parameter, and the analysis is performed by varying these parameters. The fuzzy set groups the different metrics, each metric covering a specific fuzzy variable. The rule base consists of 3^3 = 27 fuzzy rules, constructed from the input variables and their membership functions.
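A minimal Mamdani-style sketch of the pipeline just described (fuzzification with triangular membership functions, an if-then rule base, min for the rule firing strength and a weighted average for defuzzification) is given below. The membership-function breakpoints, the output scores and the handful of rules shown are illustrative assumptions; the paper's MATLAB implementation uses the full 27-rule base over its own ranges.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed membership functions for the three inputs (ranges are illustrative).
energy_mf = {"low": (0, 0, 120), "average": (80, 160, 240), "high": (180, 250, 250)}
link_mf   = {"low": (0, 0, 40),  "average": (20, 50, 80),   "high": (60, 100, 100)}
delay_mf  = {"low": (0, 0, 2),   "average": (1, 3, 5),      "high": (4, 8, 8)}

# Representative crisp scores for the output linguistic terms (assumed).
route_score = {"awful": 5, "bad": 20, "degraded": 35, "average": 50,
               "acceptable": 62, "good": 78, "excellent": 92}

# A few representative if-then rules (the full base has 3**3 = 27 rules).
rules = [
    (("high", "high", "low"),       "excellent"),
    (("high", "average", "low"),    "good"),
    (("average", "average", "low"), "acceptable"),
    (("average", "low", "average"), "degraded"),
    (("low", "low", "high"),        "awful"),
]

def route_quality(energy, link, delay):
    num, den = 0.0, 0.0
    for (e_term, l_term, d_term), out_term in rules:
        # Rule firing strength: min of the antecedent membership degrees.
        w = min(tri(energy, *energy_mf[e_term]),
                tri(link,   *link_mf[l_term]),
                tri(delay,  *delay_mf[d_term]))
        num += w * route_score[out_term]     # weighted-average defuzzification
        den += w
    return num / den if den else 0.0

print(route_quality(194, 51.2, 1.05))   # inputs from the worked example below
```

With these assumed membership functions the worked-example inputs fall mainly under the "Average"/"High" energy, "Average" link-quality and "Low" delay terms, so the "good" and "acceptable" rules fire and the weighted average lands in the upper part of the scale, in the spirit of the 73.4% result reported below.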
In the rule table, the first column gives the rule number, columns 2 to 4 give the input fuzzy variables, and the last column gives the output variable in the form of route quality. The inference uses the max operator for aggregation and the min operator for implication. From Fig. 7, we observe that the route quality is 73.4%, which means an excellent route is selected. For this case, the input variables are a residual energy of 194, a link quality of 51.2, and a delay of 1.05. According to the membership functions, the linguistic value of energy is "Average and High" with membership values 0.5 and 0.5, the linguistic value of link quality is "Average" with a membership value of 1, and the linguistic value of delay is "Low" with a membership value of 1. Rules (2) and (5) therefore fire, producing route-quality outputs of Acceptable and Good. The defuzzification step is then applied using a weighted-average formula, Route quality = sum(w_i * c_i) / sum(w_i), where w_i is the firing strength of rule i and c_i is the crisp value of its output. From this outcome, an excellent route is selected with the proper choice of input variables. The surface view can be represented with the parameter details.

CONCLUSION In this paper, a fuzzy logic approach for the RPL network is utilised for the IoT network system. Input and output parameters are considered in a FIS to generate the required outcome in the form of route quality. The selection of the route is based on three factors, residual energy, link lifetime, and delay, to generate a proper route selection that increases the network lifetime. The suggested algorithm enables the effective evaluation and selection of a feasible and excellent path. The output of the proposed algorithm is evaluated by selecting an excellent route: if the residual energy is 194, the link quality is 51.2, and the delay is 1.05, then the route quality is 73.4%, i.e., excellent. The Matlab simulation gives the outcome in the form of route quality, and future work will consider the deployment of the nodes in a real-time environment.

Notice This paper was presented at IC2ST-2021, the International Conference on Convergence of Smart Technologies, organized in Pune, India by Aspire Research Foundation, January 9-10, 2021. The paper will not be published anywhere else.
2021-05-07T00:03:16.448Z
2021-03-04T00:00:00.000
{ "year": 2021, "sha1": "73412bc0794135cffd681c04c270466976431308", "oa_license": "CCBY", "oa_url": "https://hrcak.srce.hr/file/367611", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cd3fd7535410d262b4e89229463e47dcc31c90cd", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
223102081
pes2o/s2orc
v3-fos-license
Mechanical Behavior of Sandwich Panels with Hybrid PU Foam Core The traditional composite sandwich structures have the disadvantages of low shear modulus and large deformation when used in civil engineering applications. To overcome these problems, this paper proposes a novel composite sandwich panel with upper and lower GFRP skins and a hybrid polyurethane (PU) foam core (GHP panels). The hybrid core is composed of foam layers of different densities (150, 250, and 350 kg/m3) divided functionally by horizontal GFRP ribs. The hard core is placed in the compression area to resist the compressive stress and improve the stiffness of the composite sandwich structure, while the soft core is placed in the tension area. Six GHP panels were tested in 4-point bending to study the effect of the horizontal ribs and the hybrid core configuration on the stiffness, strength, and failure modes of GHP panels. Experimental results show that, compared to the control panel, maximum increases of 54.6% and 50% in strength and bending stiffness can be achieved, respectively. GHP panels with the hybrid PU foam core show obvious secondary stiffness. Finally, analytical methods were proposed to predict the initial stiffness and peak load of the GHP panels, and the results agree well with the experimental results.

Introduction Composite sandwich panels with two high-stiffness skins and a light middle core are increasingly applied in civil engineering applications [1][2][3][4][5][6]. However, there have been very limited attempts to use these structures for large-scale structural elements. The main reason is that the core material currently used is foam or light wood, and the deformation of the composite panels is large because of its low Young's modulus. Till now, researchers have done a lot of research on these problems. Steeves and Fleck [7,8] investigated the mechanism of the composite panel with GFRP skins and PU foam using experimental and analytical methods and obtained its typical failure modes. Umer et al. [9] investigated the bending properties of composite panels with various core densities. The study showed that the load bearing capacity of sandwich panels increased with increasing foam density. Sharaf et al. [10] researched the flexural properties of ten sandwich panels, and the results showed that the shape and density of the sandwich plate play an important role in the failure mode, load bearing capacity, and stiffness of the panel. Dweib et al. [5], Keller et al. [11], and Fam and Sharaf [12] studied the bending properties of composite sandwich panels reinforced with GFRP ribs. The results showed that longitudinal ribs can significantly increase the bending stiffness and strength of the structure. Zi et al. [13], Moon et al. [14], and Mohamed et al. [15] researched the bending behavior of sandwich panels with transverse ribs, which shows that the reinforcement can alter the failure mechanism of the composite panels. Wang et al. [16] studied the bending properties of foam-filled sandwich plates using the 4-point bending test. Compared to the reference material, the final bending strength is greatly increased by the introduction of lattice ribs. Manalo et al. [17,18] and Awad et al. [19] studied the bending behavior of fiber composite sandwich panels with horizontal ribs in the middle. The study also illustrated that the horizontal rib can alter the failure mechanism of the composite panels.
In order to further increase the stiffness and the ultimate bearing strength of sandwich structures, the authors have developed a novel composite sandwich panel with GFRP skins, lattice ribs, and a PU foam core. Test results indicated that the lattice ribs can greatly increase the ultimate load bearing capacity of sandwich panels [20]. This paper presents a detailed analysis of the flexural properties of the composite sandwich panel (GHP panels) with GFRP skins and a hybrid PU foam core (Figure 1). The hybrid foam core is functionally designed with PU foam layers of different densities (150, 250, and 350 kg/m3) divided by horizontal ribs. The hard core (350 kg/m3) is located in the compression area to resist the compressive stress and improve the stiffness of the composite sandwich structure, while the soft core (150 kg/m3) is located in the tension area. This paper studies the flexural properties of the GHP panels to evaluate their potential as structural panel elements. Six panels with the same size (1400 x 120 x 80 mm3) were tested to evaluate their ultimate bending strength, failure mechanism, and bending stiffness. An appropriate analytical model was proposed to predict the bending stiffness and strength of the proposed GHP panels.

Experimental Program The properties of the GFRP skins and ribs were given in the paper by Zhang et al. [6]. The hybrid PU foam core with different densities (150, 250, and 350 kg/m3) is divided functionally by horizontal GFRP ribs. The GFRP face sheets and ribs were composed of [0/90] symmetric E-glass woven fiber (800 g/m2) and HS-2101-G100 unsaturated polyester resin. The GFRP laminates were manufactured by the Vacuum Infusion Process (VIP). The GFRP fiber and the resin were provided by Nanjing Spare Composites Co., Ltd. The mechanical properties of the GFRP laminates and ribs were examined by tensile, compression, and shear testing according to the ASTM standards. Table 1 summarizes the detailed material property results. In this study, six panels with the same dimensions (1400 x 120 x 80 mm3) were fabricated. Table 2 shows the summary of the test parameters. Specimen GHP-CON, the control sandwich panel, is composed of GFRP skins and a 150 kg/m3 density PU foam core. Specimens GHP-1-1 and GHP-2-1 were fabricated with horizontal ribs and a single 150 kg/m3 density PU foam core to evaluate the bending properties of sandwich panels with different spacings of horizontal ribs. Specimens GHP-1-2 and GHP-2-2 were fabricated with two different foam core densities (150 and 350 kg/m3), divided by horizontal ribs. Specimen GHP-2-3 was fabricated with a functionally multilayered PU foam core (150, 250, and 350 kg/m3) and horizontal ribs. The composite sandwich structure is composed of two 3.6 mm-thick GFRP skins. The thickness of the horizontal ribs is 2.4 mm, and the detailed thickness of each layer of the PU foam core is shown in Table 2. The manufacturing process and the tested specimens are shown in Figure 2; the detailed manufacturing process was described in the paper by Zhang et al. [6]. Four-point bending tests were conducted on each panel according to ASTM C393 [21]. The net loading span L_c of the sandwich beam is 1200 mm, and the spacing between the loading points is 300 mm. The panel deflections were measured using three 100 mm linear variable differential transducers (LVDTs, with a precision of 25 micrometers). Resistance strain gauges were applied to the upper and lower faces to measure the longitudinal tension and compression strains of the GFRP materials. Failure Mechanism.
The sample failures can be divided into two main types (see Figure 3): (1) complete core shear failure: the panels lost their bearing capacity completely when the foam core failed in shear, which occurred in the control specimen and panels GHP-1-1 and GHP-2-1 (see Figures 3(a)-3(c)); (2) core shear failure occurring step by step: when the soft core failed in shear, the panel still had the ability to carry the load because of the existence of the hard PU foam core, which continued to provide bending stiffness and strength. Finally, the specimens collapsed in core shear failure when the stress reached the peak strength of the hard PU foam core, which occurred in specimens GHP-1-2, GHP-2-2, and GHP-2-3 (see Figure 3). The reasons for the corresponding failure modes are as follows: (1) the shear strain went beyond the maximum shear failure strain of the PU foam; (2) the shear strain of the soft foam core exceeded its maximum failure strain, and the crack development path was blocked due to the existence of the horizontal GFRP ribs and the hard PU foam core. The ribs, the hard foam core, and the GFRP skins formed a new sandwich panel. Thus, the GHP panels could continue carrying the load until the shear strain of the hard core went beyond its maximum value. The results indicated that the GHP panels with the hard PU foam core in the compression area can alter the failure mechanism of the composite panels. Comparing panels GHP-CON, GHP-1-1, and GHP-2-1, one can see that the horizontal GFRP ribs have no effect on the failure mode of the composite panel. Comparing panels GHP-1-2, GHP-2-2, and GHP-2-3 with panel GHP-CON, one can see that the hybrid PU foam core made the panel show more ductile behavior.

Load-Deflection Curves The load-deflection curves of the GHP panels are shown in Figure 4. The figure shows that the load-deflection curve of GHP-CON exhibits linear behavior up to 6 kN, and when it reached the peak load of 10.8 kN, the curve dropped sharply. Similar load-deflection curves were obtained for panels GHP-1-1 and GHP-2-1 (see Figure 4(a)), which failed at 11.1 kN and 10.6 kN, respectively, approximately equal to that of specimen GHP-CON. The results indicated that the horizontal ribs have little effect on the load-deflection curve of GHP panels. This is because the ribs distributed within the hybrid PU foam core contribute little to the bending stiffness compared with ribs placed at the top or bottom GFRP skins. The failure loads of panels GHP-2-2 and GHP-2-3 were approximately 45.4% and 54.6% higher than that of panel GHP-CON. Comparing panels GHP-1-2, GHP-2-2, and GHP-2-3 to panel GHP-CON, one can see that the hybrid PU foam core can significantly improve the ultimate load and the bending stiffness of the GHP panels. Figure 5 gives the load-strain relationship of the GHP specimens. The figure shows that the load-strain curves of the GHP panels exhibited almost linear behavior. Table 3 shows the measured mean tensile and compressive strains of the midspan top and bottom GFRP skins. The ultimate tensile and compressive strains are 0.36% and 0.43%, respectively, which are lower than the values obtained from the material tests (Table 1). For panels GHP-CON, GHP-1-2, and GHP-2-2, the compressive strain of the upper GFRP skin is nearly the same as the strain of the lower GFRP skin.
For panel GHP-2-3, the average strain of the lower skin is 20% higher than that of the upper skin, which indicates that the high tensile strength of the GFRP laminates can be fully utilized when panels are built with a hybrid PU foam core.

Analysis and Discussion The following part gives the results of the bending stiffness and ultimate load bearing capacity of the tested panels, and a comparison between the experimental and analytical results. Stiffness Analysis. Manalo et al. [17] showed that the experimental bending stiffness EI_ex of the sandwich panels can be obtained from the bending formula of the composite panel under the 4-point bending test. Using the linear elastic segment of the curve (Figure 4), EI_ex is expressed in terms of (dP/dd), the slope of the initial linear part of the test load-deflection curve, and L_c, the net loading span of the test panel. The predicted bending stiffness is calculated from the equivalent bending stiffness EI_eq, where E_f and E_ic are the elastic moduli of the GFRP laminates and the PU foam core, respectively, t_s is the GFRP skin thickness, c_i and c_j are the thicknesses of the PU foam core layers and the GFRP ribs, respectively, and d_s, d_i, and d_j are the distances from the panel's neutral axis to the centroids of the GFRP skins, PU foam core layers, and GFRP ribs, respectively. The equivalent shear rigidity GA_eq is obtained from the shear modulus and cross-sectional area of the core. The total midspan deformation D of the GHP panels in the 4-point bending test is the sum of D_b, the deformation attributed to bending, and D_s, the shear deformation, where a is the distance of the loading points from the supports (450 mm). A numerical sketch of this stiffness and deflection calculation is given at the end of this Discussion. Ultimate Load Bearing Capacity. The stress distribution along the thickness (z-axis) of the GHP composite panel can be described as in [22], where j denotes either the skins or the PU foam core layers and S_j is the first moment of area of each part of the GHP panel.

4.3. Discussion The predicted bending stiffness of the GHP panels can be obtained from equations (1) and (2), as presented in Table 3. For specimen GHP-CON, the predicted bending stiffness EI_eq was 50% higher than that obtained experimentally. For panels with GFRP ribs and different laminated PU foam core configurations, the predicted bending stiffness EI_eq was 7-53% larger than that obtained experimentally, which indicates that the contribution of shear deformation cannot be neglected in the GHP panels. The predicted midspan deflection and ultimate bending strength can be calculated according to equations (3)-(8), as presented in Table 4. Compared to the experimental results, there is an average error of 13%. The calculated deflections contributed by bending and by shear deformation are also shown in Table 4. The analytical method conservatively estimates the experimental ultimate load of the GHP panels with a maximum error of 22%. The maximum difference between the analytical and test results is 3.53 kN, which occurred in specimen GHP-2-3, because the analytical method neglects the contribution of the GFRP skins and the horizontal ribs to the ultimate shear bearing capacity.
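A minimal numerical sketch of the equivalent-stiffness and midspan-deflection estimate discussed in the Stiffness Analysis section is given below. The layer thicknesses, moduli, and load are placeholder values rather than the measured properties of Tables 1 and 2, and the bending and shear deflection expressions are the classical simply supported 4-point-bending formulas, assumed here in place of the paper's exact equations (3)-(6).

```python
# Sketch of the equivalent-stiffness and deflection estimate for a GHP panel
# under 4-point bending. All numeric inputs are placeholder assumptions.

b = 120.0            # panel width (mm)
t_s = 3.6            # GFRP skin thickness (mm)
layers = [           # (thickness mm, elastic modulus MPa, centroid distance from neutral axis mm)
    (t_s, 25000.0, 38.2),    # top GFRP skin
    (36.0, 120.0, 18.0),     # hard PU foam layer
    (2.4, 25000.0, 0.0),     # horizontal GFRP rib
    (34.4, 60.0, -19.2),     # soft PU foam layer
    (t_s, 25000.0, -38.2),   # bottom GFRP skin
]
G_core = 25.0        # effective core shear modulus (MPa), placeholder

# Equivalent bending stiffness: sum of E*(I_own + A*d^2) over all layers.
EI_eq = sum(E * (b * t**3 / 12.0 + b * t * d**2) for t, E, d in layers)

# Equivalent shear rigidity, approximated with the foam core cross-section only.
core_depth = sum(t for t, E, d in layers if E < 1000.0)
GA_eq = G_core * b * core_depth

# Midspan deflection under a total load P with loading points at distance a
# from the supports (classical Timoshenko-beam result for this load case).
P, L_c, a = 10000.0, 1200.0, 450.0           # N, mm, mm
delta_b = (P / 2.0) * a * (3 * L_c**2 - 4 * a**2) / (24.0 * EI_eq)
delta_s = (P / 2.0) * a / GA_eq
print(EI_eq, GA_eq, delta_b + delta_s)
```

The split between delta_b and delta_s illustrates why the predicted bending stiffness alone overestimates the measured stiffness: for a soft core the shear term is a non-negligible share of the total midspan deflection.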
Conclusions This paper researched the bending properties of a novel composite sandwich panel composed of upper and lower GFRP skins and a hybrid PU foam core (GHP panels). According to the test and analysis results, the following conclusions can be obtained: (1) Compared to the control specimen, panels with a hybrid functional PU foam core exhibited the highest bending stiffness and ultimate load bearing capacity, with maximum increases of 50% and 54.6%, respectively. (2) The panel with the hybrid PU foam core shows obvious secondary stiffness, its deformation capacity can be increased substantially, and the hybrid PU foam core made the panel show more ductile behavior. (3) The horizontal GFRP rib shows little effect on the failure mode of the composite panel, and it also has little effect on the ultimate load bearing capacity of the sandwich structures. Only a limited number of specimens has been examined in this study; the minimum weight sequence is recommended after establishing an appropriate measurement basis and testing more specimens.

Data Availability The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest The authors declare that they have no conflicts of interest.
2020-08-06T09:06:22.003Z
2020-08-04T00:00:00.000
{ "year": 2020, "sha1": "4c8b3b3ac5551689ef79066e8bea4426be62c27d", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ace/2020/2908054.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d33adf5bfe92b08aa9bae2959ca9487606e781a7", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
236723743
pes2o/s2orc
v3-fos-license
Analysis of accumulation models of middle Permian in Northwest Sichuan Basin Citation: Li, B., Li, Q., Mei, W., Zhuo, Q., & Lu, X. (2020). Analysis of accumulation models of Middle Permian in Northwest Sichuan Basin. Earth Sciences Research Journal, 24(4), 418-427. DOI: https://doi.org/10.15446/esrj.v24n4.91149

ABSTRACT Great progress has been made in middle Permian exploration in Northwest Sichuan in recent years, but there are still many open questions in understanding the hydrocarbon accumulation conditions. Due to the abundance of source rocks and the multi-stage tectonic movements in this area, the hydrocarbon accumulation model is relatively complex, which has become the main problem to be solved urgently in oil and gas exploration. Based on the different tectonic backgrounds of the middle Permian in the northwest Sichuan Basin, the thrust nappe belt, the hidden front belt, and the depression belt are taken as the research units to sort out and compare the geological conditions of the middle Permian reservoir. The evaluation of source rocks and the comparison of hydrocarbon sources suggest that the middle Permian hydrocarbons mainly come from the bottom of the lower Cambrian and the middle Permian, and that the foreland orogeny promoted the thermal evolution of the Paleozoic source rocks in northwest Sichuan to the high-maturity and over-maturity stage. Based on a large amount of reservoir physical property data, the middle Permian reservoir has the characteristics of low porosity and low permeability, among which the thrust nappe belt and the hidden front belt have relatively high porosity and relatively developed fractures. The thick mudstone of the Longtan Formation constitutes the regional caprock in the study area, and the preservation condition is good as a whole; however, the thrust faults destroyed the sealing ability of the caprock in the thrust nappe belt. Typical reservoir profiles revealed that the trap types differ across the study area: thrust fault traps are mainly developed in the thrust nappe belt, fault anticline traps are developed in the hidden front belt, and structural-lithological traps are developed in the depression belt. The different structural belts in northwest Sichuan have different oil and gas accumulation models. This paper built three hydrocarbon accumulation models through the analysis of reservoir formation conditions. The comprehensive analysis suggests that the hidden front belt is close to the lower Cambrian source rock, its reservoir heterogeneity is weak, and source-connecting faults are developed, so it is a favorable oil and gas accumulation area for the middle Permian.
Introduction The middle Permian in Northwest Sichuan is an important field of natural gas exploration in the Sichuan Basin. In recent years, several wells have obtained high production from the Qixia Formation and the Maokou Formation (Dahai et al., 2016; He, 2014; Shen et al., 2016; Yuanjiang, 2019), showing good exploration prospects. Several sets of source rocks, such as carbonate rocks, argillaceous rocks, and coal, were developed in the Paleozoic (Hui et al., 2012; Lu et al., 2017; Dong et al., 2011); they are superimposed and cross each other on the plane, resulting in oil and gas channeling and mixing in the formations and therefore in complex gas-source relationships (Li et al., 2015; Zhang et al., 2019; Jie et al., 2016). At the same time, multi-stage tectonic movements caused multi-stage hydrocarbon generation, migration and accumulation, and adjustment and transformation, which complicated the oil and gas system, led to an unclear understanding of the conditions and modes of hydrocarbon accumulation, and seriously hindered further exploration progress. In recent years, the discovery of large gas fields in the lower Paleozoic Cambrian Longwangmiao Formation and the Sinian in the middle Sichuan paleouplift area (Gu et al., 2016; Shen et al., 2016; Wei et al., 2008; Xingzhi et al., 2019) indicates that the deep marine strata in the Northwest Sichuan Basin may have good prospects for the discovery of large-scale oil and gas reservoirs (Yang et al., 2019; Yu et al., 2018; Xiao et al., 2018). Therefore, an in-depth study of the basic geological conditions of the Middle Permian hydrocarbon accumulation in Northwest Sichuan and the establishment of a hydrocarbon accumulation model will be helpful for the re-understanding of marine hydrocarbon distribution in this area.

Geological settings Northwestern Sichuan is located on the western margin of the Yangtze Plate between the Sichuan Basin and the Songpan-Ganze Orogenic Belt (Fig. 1). This region extends northwards to the Chaotian District of Guangyuan City and southwards to the Mianyang City-Yanting County area. Northwestern Sichuan has experienced the following tectonic evolution stages.
(1) The Northwestern Sichuan Basin was a depression during the early Cambrian that eventually became the most important source rock horizon in the Paleozoic system due to the deposition of the Lower Cambrian marine fragmental rocks (Fig.1b). (2) At the end of the Silurian Period, the Silurian and Ordovician strata were strongly eroded due to the Caledonian orogeny. This region began to sink again after the Caledonian orogeny leading to the deposition of marine and carbonate-dominated Devonian and Carboniferous strata . (3) The Yunnan movement at the end of the Carboniferous period severely eroded the Carboniferous strata (Gu et al., 2016;Yingqiang et al., 2016). The northwestern Sichuan Basin was again sea covered at the beginning of the Permian period, which led to the deposition of the Permian and Middle-Lower Triassic marine strata. (4) The basin then contracted during the Late Triassic due to the Indo-China orogeny that led to the seawater retreat that thus marks the beginning of terrestrial sedimentation (He, 2014). (5) After the Jurassic period, a sharp uplift occurred in the Longmen Mountains of northwestern Sichuan due to the Yanshanian and Himalayan orogeneses (Meng et al., 2016). This resulted in severe stratigraphic erosion in these regions. This series of events ultimately gave rise to the current tectonic state of the region. The study area developed three tectonic units: the thrust nappe belt, the hidden front belt, and the depression belt according to the characteristics of tectonic development in Northwest Sichuan. Driven by tectonic movement, thrust occurred in Longmen Mountains to form nappe tectonic belt (Gu et al., 2016;Renqi et al., 2016;Yingqiang et al., 2016) became the western boundary of the basin, and typical structures developed included Hanwan chang-Kuangshan anticline, etc. High-yielding gas reservoirs were found in the Qixia Formation and Maokou Formation, and the key exploration wells were K1, K2, K3, H1, H2, H12, and HS1. In the front edge of the thrust fault, the hidden front belt was formed, which is mainly distributed in the Shuangyushi-Zhongba structural belt, and exploration wells S1, ST1, and ST3 were drilled in the middle Permian with high-yielding gas reservoirs. The area east to Nanjiang-Cangxi-Yanting is the depression belt with typical structures such as Wujiaba-Jiulongshan structure, etc., and the exploration wells in the middle Permian are WJ1, L17, L16, GJ, B1, etc. (Fig 1c). Samples and data In this study, a total of 128 samples were selected, including 62 hydrocarbon source rocks, 12 natural gas samples, and 54 reservoir samples, based on the comprehensive sampling of different structural belts and different strata. The source rock samples were mainly taken from Wells HS1, K1, K3, WJ1, S1, H2, GJ, L17, ST2, and ST3, as well as the field sections of Northwest Sichuan Tongkou, Changjiang ditch, Qiaoting, and Tongkou. Natural gas samples were taken from wells ST1, ST2, and ST3. Reservoir samples were taken from wells K1, K2, K3, ST3, L16, and WJ1, etc. At the same time, some oil and gas field reservoir physical property test and analysis data were also collected. Besides, seismic data on middle Permian gas reservoirs have been collected from the Exploration and Development Research Institute of PetroChina Southwest Oil & Gasfield Company, including seismic interpretation section (AA ') and seismic interpretation section of wells K1, S1, H2, ST3, and G5. 
The single well analysis data, including formation pressure and formation water data of wells ST2, L17, B1, GJ, H1, and K3, and data of some wells with denudation thickness of regional strata and basin simulation in Northwest Sichuan were collected from the same institute. Solid bitumen reflectance The solid bitumen reflectance was measured using an MPV-3 microphotometer, related accessories, and a QDI302 microscope. The standard used for these measurements was a yttrium aluminum garnet (whose Ro is 0.92% in standard conditions). The measurements were performed with a magnification of ×500 (the magnification of the oil immersion objective was×50) and an input wavelength of λ=546 nm (green light) under the standard of SY/T 5124-2012. Organic carbon The organic carbon measurements in the hydrocarbon source rocks were performed by first grinding the samples, followed by the addition of hydrochloric acid to remove the inorganic carbon. The samples were then inserted into a CS230SH sulfur/carbon analyzer under the standard of GBT 19145-2003. Gas chromatography-mass spectrometry The chloroform bitumen 'A' content in the samples was obtained by performing chloroform extraction on the crushed samples. After the asphalt was precipitated with n-hexane, the filtrate was passed through a silica gel column. Solvents with different polarities were then used to sequentially separate the saturated hydrocarbons, aromatic hydrocarbons, and non-hydrocarbons. Gas chromatography-mass spectrometry (GC-MS) analyses were performed on the saturated hydrocarbons using a Thermo Scientific Trace GC Ultra-DSQ GC-MS. The GC-MS analyses were performed using an HP-5MS elastic quartz capillary (60m×0.25mm×0.25 mm) GC column. The temperature program began with an initial temperature of 100 °C (held for 5 min), then increased at a rate of 3 °C/min up to 320 °C where the temperature was then held for 20 min. The carrier gas was 99.999% helium, and the inlet temperature was 280 °C. The sample was injected at a constant flow rate of 1 mL/min and the transmission line temperature was 300 °C. Electron impact ionization (70 eV) with a filament current of 100 mA and an ion source temperature of 250 °C was used as the mass spectrometry ionization method. The GC-MS analysis on the source rocks was also performed using the Trace GC Ultra-DSQ GC-MS under the standard of GBT30431-2013. Natural gas composition and carbon isotope analyses The natural gas composition analyses were performed using an HP6890 gas chromatograph, with an SGE-60 GC column (50m×0.25mm×0.25 mm). The inlet temperature of 300 °C was used with nitrogen as the carrier gas. The flow rate was 1 mL/min with a split ratio of 50:1. The temperature of the column was increased from 30 °C to 260 °C at a rate of 3 °C/min. Source rock conditions Field profile and drilling data in Northwest Sichuan have confirmed that multiple sets of source rocks developed in the Paleozoic era (Gu et al., 2016;Wu et al., 2012;Dong et al., 2011;Li, 2018). Based on the test of 62 samples of multiple sets of source rocks (Tab. 1), Qiongzhusi formation of the lower Cambrian has the highest abundance of organic carbon, and Toc distributed between 0.6 and 6.2%, with an average value of 2.67%. The second source rock is Longmaxi formation and Longtan formation, with mean values of 0.91% and 1.89% respectively. The average abundance of organic carbon in Maokou Formation and Qixia Formation is 0.87% and 0.81%, both of which have the basic conditions to form effective source rocks (Tab. 1). 
The carbon isotope measurements for the natural gas samples were performed using a Finnigan MAT-252 mass spectrometer under the standard of GBT 13610-2014. The carbon isotope values were then compared with the GBW 04405 reference to obtain the corresponding Pee Dee Belemnite (PDB) values. The standard deviation of these values was±0.3‰. Thermal history model Burial-thermal history analysis was prepared using PetroMod 1D software. The stratigraphic sequences, denudation thickness, and tectonic events were derived from geological reports of well K1, ST3, and L16. The source rock assignment and their respective properties, total organic carbon (TOC) and hydrogen indices (HI) implemented in order to simulate the thermal maturity of source rock. This study built three thermal evolution models of well K1, well ST3, and well L16 in different structural belts, as shown in Fig 2. It shows that the major Paleozoic source rocks in this area entered the oil generation threshold in the Late Caledonian, entered the condensate gas-moisture stage in the Late Triassic, and the dry gas stage in the early Cretaceous. At present, they are generally in the high-to-over mature stage, mainly producing dry gas. From Figure 3, the lower Cambrian heat evolution rate is low and was in the mature and over mature stage for a long time (Fig 3). Relatively speaking, the lower Silurian and middle Permian source rock with a higher thermal evolution rate showed that the foreland orogeny movement in western Sichuan promoted the source rock to enter the mature and high mature stage. Laterally, the source rocks in the thrust nappe belt, hidden front concealed belt, and depression belt enter the mature stage later, shows that the foreland movement has a delayed effect on the thermal evolution of source rocks in different structural belts. The source rocks of Lower Cambrian were in the stage of oil generation for a long time in the thrust nappe belt, which offered a good material base for the reservoir formation in this area. The thermal evolution history of Lower Silurian and Permian source rocks was close to Permian source rocks. Due to the large buried depth of the hidden front belt, the Lower Silurian source rock entered the dry gas stage from the Late Cretaceous. Due to the influence of the Caledonian paleouplift, the Silurian Longmaxi Formation, except for the large thickness of the Chaotian and Nanjiang Qiaoting sections, was pointed out southward in the Guangyuan-Cangxi area and had relatively limited influence on the oil and gas reservoirs. Therefore, it can be seen that the Lower Cambrian and Permian source rocks are the main sources of oil and gas accumulation in this area. Reservoir conditions Based on the physical test of 54 reservoir samples and a large number of collected reservoir data, the basic characteristics of the Middle Permian reservoirs in the study area can be seen, as shown in Table 2. It shows that the lithology of the middle Permian reservoirs in Northwest Sichuan is mainly crystalline dolostone, followed by dolomitic limestone and sparite limestone. The porosity of the thrust nappe belt is between 0.13-16.51% with an average of 2.26, and the permeability is between 0.0-784 md with an average of 0.608 md. The porosity of the hidden front belt was 0.23-7.59 % with an average of 1.825 %, and the permeability was 0-94.2md with an average of 5.14md. 
The porosity of the depression belt ranged from 0.1 to 0.92%, with an average porosity of 0.49%, and the permeability ranged from 0.0 to 17.7md, with an average of 1.73md. In comparison to the sedimentary rocks, the middle Permian reservoir exhibits low porosity and low permeability. In contrast, different structural belts that the thrust nappe belt and the hidden front belt have relatively high porosity and relatively developed fractures, which improve the permeability of the reservoir. On the other hand, the depression belt has poor porosity and permeability affected by the depth of burial. Caprock and preservation conditions Two sets of important caprock developed in Permian and overlying strata of Northwest Sichuan. One is the regional mudstone of the Longtan Formation of Upper Permian, the thickness can reach more than 100 m. The mudstone of Longtan formation is both a good caprock of the underlying reservoir and the main source rock of the overlying reservoir, which constitutes a good regional caprock of the middle Permian reservoir. The other is the gypsum rocks developed in the Middle Triassic, with a cumulative thickness of between 50 m and 450 m. They are the indirect caprock of Permian oil and gas reservoirs (Chen Cong, 2019). In this paper, the characteristics of Paleozoic formation water and formation pressure coefficients were statistically analyzed, as shown in Table 3. It is shown that the wells H1 and K3 in thrust nappe belt from deep to shallow, the water type transition from calcium chloride type to sodium sulfate type, and formation pressure coefficient from 0.96 to 0.87, indicating that the strata were destroyed and the preservation conditions of Qixia Formation and Maokou Formation are poor. The water type of ST2 in the hidden front belt is calcium chloride type, and the formation pressure coefficient is 1.36, which is in the state of overpressure state and conducive to oil and gas preservation. The formation pressure coefficient of wells L17 and GJ well in the depression belt is higher and the preservation condition is better. However, the water type of Maokou formation in well B1 is sodium bicarbonate type, and the formation pressure coefficient is 1.01, which is relatively low, indicating received the influence of Micangshan Fault-fold belt. Overall, the multi-stage tectonic movement has a great influence on the preservation conditions of the thrust nappe belt. Note: Trap development and distribution This paper compiled the reservoir profiles of the key exploration wells in the research area, as shown in Figure 4. It indicates that the trap types of different belts are different due to the influence of the multi-stage tectonic movement. The thrust nappe belt mainly developed thrust blocks and thrust anticlinal traps (Fig 4a, 4d), while the hidden front belt formed a large number of fault-block traps under Caledonian -Hercynian tensile action, and some of the fault-block traps degenerated into compressed anticlinal traps under the compression action of Indo-China period (Fig 4b,4e). Due to weak tectonic activity, anticline and structure-lithologic traps were developed in the depression belt (Fig 4c). Overall, the traps in Northwest Sichuan are mostly structural traps, with the most developed fault traps, but the traps are small in scale. Oil and gas source Due to the few exploration wells and little data in Northwest Sichuan, the oil and gas source of middle Permian has not been clearly understood. 
At present, there is still a great dispute over the gas source of the middle Permian. Some scholars believed that the oil and gas mainly came from the Cambrian source rocks of the Lower Paleozoic through paleo-reservoir studies (Wu et al., 2012;Dong et al., 2011;Li, 2018). Other scholars proposed that oil and gas came from middle Permian source rock (Lu et al., 2017;Liu Chun, 2008) on the ground of some geochemistry test. Besides, some scholars also supposed that some gas possibly came from the coal-measure source rocks of overlying Longtan Formation according to the carbon isotope of gas composition (Shen Ping, 2015). According to the chromatographic analysis of the source rock and the bitumen in the reservoir (Fig. 5), the peak types of n-alkanes in the source rocks of Qixia and Maokou formation in the middle Permian were all singlepeak types, with the main peak carbon close to each other. The ratio of primers to phytane was less than 0.7, which showed a good similarity. The ethane carbon isotopic data of Qixia formation, Maokou formation reservoir, and Paleogenic source rock kerogen are compared as shown in Figure 6. As you can see that the ethane carbon isotopes of Maokou formation in Hewanchang structure are relatively light, indicating that the gas reservoir in the thrust nappe belt was not only charged from the Permian source rocks but also underlying Cambrian source rocks (Huang Dong, 2011). In contrast, the carbon isotope of ethane of Qixia Formation and Maokou Formation in the Shuangyushi area is heavy, and the isotope of methane and ethane is reversed and revealed the character of the gas mixture. It is assumed that in addition to the natural gas generated from the source rocks of the Middle Permian and the Lower Cambrian, the coaliferous gas of the Upper Longtan Formation is mixed in the hidden front belt. The distribution of ethane isotopes in Maokou formation in Jiulongshan ranges from -35 ‰ to -26 ‰, and methane and ethane isotopes are reversed, which shows typically mixed genesis. It is inferred that the Permian oil and gas in the depression belt come from their source rocks and overlying Longtan formation. In summary, these results show that the middle Permian oil and gas are mainly charged by the lower Cambrian system and the authigenic source rocks. Migration conditions Due to a large number of faults, fractures, and unconformities under the influence of multi-stage tectonics in Northwest Sichuan (Dahai et al., 2016;Yang et al., 2019), a variety of oil and gas migration system has been formed, resulting in extensive longitudinal distribution of oil and gas in Paleozoic (Yuanjiang et al., 2019). Therefore, the understanding of the migration system was still relatively vague (Tian et al., 2012;Xiao et al., 2018). Based on the typical hydrocarbon accumulation profiles and the variation characteristics of natural gas carbon isotope in gas reservoirs, this study established the schematic map of middle Permian hydrocarbon accumulation to provide a new reasonable explanation for the secondary migration of oil and gas in this area, as shown in Figure 7. It can be seen that there are obvious differences of different structural belts in northwest Sichuan, in which a large number of imbricate thrust faults developed in the thrust nappe belt connected the Lower Paleozoic source rocks and form the migration channel of the Hewanchang anticline, resulting in light carbon isotopes of ethane in the gas reservoirs in this area. 
Interlayer faults and carrier beds in Shuangyushi and Wujiaba area are the main channels for oil and gas migration, resulting in ethane carbon isotopes of Permian gas reservoirs in the hidden front belt close to their source rocks, while the light part of ethane carbon isotope shows the characteristics of multi-source charging. The migration channel of the depression belt is mainly interlayer faults and fracture formed under the state of late Caledonian tension. The oil and gas are dominated by a close vertical migration system, and the ethane isotopes of the gas reservoir are close to the Permian source rocks. Hydrocarbon accumulation model The oil and gas accumulation in the middle Permian in Northwest Sichuan is characterized by "multiple sources and multiple periods", and the process of accumulation is relatively complicated Gu et al., 2016;He, 2014;Yu, et al., 2018). Based on the characteristics of reservoir conditions, this study established the oil and gas accumulation model of middle Permian in this area to guide the study of oil and gas accumulation in different structural belts (Fig. 8). The Lower Cambrian source rocks are developed at the bottom of the thrust nappe belt in the study area, and the reservoirs of Maokou Formation and Qixia Formation are well developed in fractures and pores, and the reservoir physical properties are high. The caprock is the source rock and tight limestone at the bottom of the Middle and Upper Permian. At present, a large number of bitumen and paleo-reservoirs are found in Cambrian and Devonian in Nianziba and Tianjingshan areas, indicating that the liquid hydrocarbons produced by the Lower Cambrian source rocks in the late Caledonian and Hercynian periods migrate upward to the Devonian reservoir along the faults connected source rock (Xiao et al., 2018). Some of the crude oil buried in the early stage was cracked into natural gas and migrated upward to the Permian, and then the gas reservoirs were formed in the Kuangshanliang and Hewanchang areas. Some reservoirs were transformed and destroyed by the Indo-China movement in the later stage and exposed to the surface. Therefore, the Paleozoic oil and gas reservoirs in this area have a reservoir formation model of " far-source charging, the assemblage of the lower source and upper reservoir", and the preservation condition is the key controlling factor. Two sets of source rocks in the Lower Palaeozoic and Lower Permian are developed in the hidden front belt, and fracture-pore reservoirs are developed in the middle Permian (Benjian, 2019). The caprock is the source rock and tight limestone at the bottom of the Permian. Due to the multi-source hydrocarbon charging, the ethane carbon isotope in the gas reservoir of the Shuangyushi structure is light. Because the hidden front belt is on the thrown side of the thrust fault, the traps were preserved well. Therefore, the middle Permian gas The depression belt is located on the side of the rift belt in western Sichuan. It is far from the lower Cambrian source rocks, and the Permian source rocks are well developed with a high abundance of organic matter. The source rock and tight limestone at the bottom of the middle and upper Permian formed the caprock. And the tectonic and lithologic traps are developed. 
According to the analysis of the existing gas reservoirs in this area, the ethane carbon isotopes of middle and upper Permian gas reservoirs are similar to Permian kerogen, which indicates that the natural gas is mainly migrated upward to the nearby reservoirs along the interlayer faults. Therefore, the belt developed the characteristics of self-generation, self-reservoir, and self-caprock combination. Due to the large buried depth and good preservation conditions of the middle Permian in this area, formed the model of near-source charging, and interlayer fracture is the key factors for hydrocarbon accumulation. Based on the above geological conditions and reservoir formation model, it is supposed that the middle Permian in Northwest Sichuan has better reservoir formation conditions, in which the hidden front belt is relatively weak in tectonic deformation, the developed anticlinal traps are well preserved, and the fault connected source rock is well developed. Since the remarkable discovery of well ST1 and ST3 in the Shuangyushi structure after 2014 (Bai Xiaoliang, 2019)), proved that the hidden front belt in the western Sichuan Basin may be a favorable area for oil and gas breakthrough in the deep marine field. Conclusion (1) The hydrocarbon source correlation supposes that oil and gas in Middle Permian mainly come from the bottom of the lower Cambrian and the Permian. The foreland orogeny movement on the longitudinal promoted the Paleozoic source rocks in the high-to-over mature stage, and on the transverse have delayed effect, causing hydrocarbon source rock of the thrust nappe belt, the hidden front belt, and the depression belt entered the mature stage of later and later. (2) The structural traps are developed in the Northwest Sichuan Basin, among which the thrust nappe belt mainly develops thrust fault block traps, while the hidden front belt is dominated by compressed fault anticline trap, and the depression belt develops tectonic-lithologic traps, which have the characteristics of small in scale. (3) The different structural belts in Northwest Sichuan have different oil-gas accumulation models, among which the nappe belt was affected by thrust fault and develops the assemblage of lower source rock and upper reservoir, and the preservation condition is the key. Under the influence of deep faults in the hidden front belt, the oil and gas come from the lower and self-source rocks. However, the depression belt is mainly controlled by the interlayer fracture and develops a model of self-generation and selfreservoir and near-source charging. (4) Based on the accumulation conditions comprehensive analysis, the hidden front belt is close to the Cambrian source rock, with good source charging, well-developed fault connected source rock and good trap preservation conditions, and is a favorable oil and gas accumulation area in the Northwest Sichuan Basin. Highlights: (1) The middle Permian oil and gas in the northwest Sichuan Basin mainly come from the lower Cambrian and middle Permian source rocks, which are in the stage of the high-to-over mature stage. (2) The middle Permian in northwest Sichuan developed an assemblage of lower source rock and upper reservoir, and faults had a great influence on hydrocarbon migration and accumulation. (3) The hidden front belt is close to the lower Cambrian source rocks, with well-developed fault connected source rock and good trap preservation conditions. It is a favorable oil and gas accumulation area for the middle Permian.
2021-08-03T00:05:12.670Z
2021-01-26T00:00:00.000
{ "year": 2021, "sha1": "54ceee91d375e6f551fa78e79f3a5b4871e4993c", "oa_license": "CCBY", "oa_url": "https://revistas.unal.edu.co/index.php/esrj/article/download/91149/77808", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6274582ba3127819e9846d4d691f1e2eb3df4ceb", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
219631936
pes2o/s2orc
v3-fos-license
Optimization of Friction Stir Welding process parameters of Aluminium alloy AA7075-T6 by using Taguchi method The Taguchi technique has been used to determine the most important control variables that result in better mechanical characteristics (tensile strength and hardness) of FSW joints of similar AA 7075 plates. The Taguchi Design of Experiments (DOE) and optimization method was used to optimize the effect of process parameters, including tool rotational speed and weld travel speed, on the tensile strength and hardness of friction stir welded similar AA 7075 aluminium alloy. The optimum levels of the process parameters were identified using the Taguchi parametric design concept. The results show that welding speed is a more contributing process parameter than rotational speed in obtaining optimum mechanical properties (UTS and HV). The predicted optimal values of ultimate tensile strength and hardness of the friction stir welded similar AA 7075 joints are 197 MPa and 93 HRB, respectively. Further tests confirmed these results. Key Words: Aluminium alloy, Friction Stir Welding, Taguchi Technique

I. INTRODUCTION Friction stir welding (FSW) has emerged as a technique of widespread interest due to its many advantages, the most significant being its ability to weld alloys that are generally considered unweldable. FSW is a solid-state joining technique in which the material is welded below its melting point. The welding parameters are important factors during FSW: an optimum selection of welding parameters (tool travel and rotational speeds) leads to better weld quality and better mechanical and microstructural properties. The Taguchi method has been broadly used, with minor variations, to optimize and measure the effect of different process parameters on performance. Here, the Taguchi method was applied to optimize the process parameters for weld tensile strength and hardness. Taguchi proposes using the S/N ratio to assess the quality characteristics that deviate from the target values. The S/N ratio of each level of each process parameter is calculated through the S/N analysis. A Taguchi L9 orthogonal array is used to find the influence of each processing parameter (i.e., rotational speed, traverse speed) on the tensile strength and hardness of FSW joints of AA7075. Referring to Fig. 1, FSW uses a non-consumable tool to join two facing aluminum plates without melting the workpiece metal. Heat is generated by friction between the tool and the workpiece materials, which leads to a softened region close to the FSW tool. As the tool is passed along the joint path, the two pieces of metal are mechanically mixed, and the mechanical pressure exerted by the tool forges the heated and softened material.

A. FSW (Operating Principle) A rotating cylindrical tool with a profiled probe is plunged into a butt joint between two clamped Al plates until the shoulder, which is larger than the pin, touches the workpiece surface. The pin is slightly shorter than the required weld depth, with the shoulder of the tool riding on the work surface. After a short dwell time, the tool is traversed at the predefined weld speed along the joint line. Friction between the wear-resistant tool and the workpieces generates heat. This heat, together with that produced by the mechanical mixing process and the adiabatic heating inside the material, causes the stirred material to soften without melting. As the tool moves forward, the profiled probe forces plastic flow of material from the leading face to the rear, where the high forces help forge the weld.
This passage of the tool along the weld line through a plasticized tubular shaft of metal results in severe solid-state deformation and contributes to dynamic recrystallization of the base metal.

B. AA7075 AA7075 is an aluminum alloy with zinc as the main alloying element. It is strong, with a strength comparable to many steels, excellent fatigue strength, and good machinability. Its abrasion resistance is lower than that of many aluminum alloys, but its corrosion resistance is substantially better than that of the 2000 series alloys. Its fairly high cost limits its applications. Because the aluminum 7075 alloy has high strength, it is largely used in aircraft manufacturing and other applications in the aviation industry. Due to its high strength-to-weight ratio, 7075 is also used in marine, automotive, and transport applications. For the same reasons, AA7075 is used in mountain climbing equipment, bicycle parts, inline skating frames, and hang glider airframes. Hobby RC designs generally use AA7075 and AA6061 for chassis plates. AA7075 is also used in the production of U.S. military M16 rifles; in particular, the lower and upper receivers of high-quality M16 rifles and their extension tubes are made only of AA7075-T6 alloy.

C. Tool travel speed and rotational speed These are the key process parameters during FSW and are primarily responsible for heat generation. The rotation may be clockwise or counterclockwise. The rotation and translation of the tool produce frictional heat inside the aluminum plates; the metal is brought to a plasticized state, and the applied pressure causes the weld to form. The travel speed of the tool also plays an important role; it depends on a number of factors such as the type of alloy, the rotational speed, the depth of penetration, and the type of joint.

D. Tool tilt and plunge depth An appropriate tilt angle of the spindle offers an extra advantage for containing the stirred material and ensuring adequate movement of the plasticized material. The tilt angle is represented by θ. The plunge depth influences weld bead formation through its effect on the temperature distribution and on the blending of material. An adequate plunge depth of the pin into the workpieces is essential to produce sound welds with smooth tool shoulder contact.

E. Tool characteristics The tool design affects heat generation, plastic flow, power requirements, and weld joint homogeneity. The tool profile, which comprises the tool shoulder and pin, contributes to almost all aspects of weld formation. The pin dimensions and pin profile directly affect the material mixing and flow. The pin profiles frequently used are spherical, cylindrical, threaded, square, octagonal, etc. Tool steel, carbide, high-speed steel, etc., are the commonly used tool materials for FSW.
They use rotational speeds 850rpm, 1050rpm, 1200rpm, traverse speeds 40mm/min, 58mm/min, 78mm/min and axial forces are 4kN, 5kN, 6kN.The optimal combination for FSW process parameters is a spindle speed of 1200 rpm, a translational feed of 78mm / min, an axial load of 6 KN, which achieves maximum tensile strength. .The maximum contribution of translation feed was 81.31% and rotational velocity was 15.44% accompanied by axial force with minimal influence of 2.03% on tensile strength. B. Influence of Tool Pin Geometric shapes on Friction Stir Welded similar Aluminum Alloy Joints In this study, an effort was produced to evaluate the tensile strength under distinct tool pin geometries of comparable joints of FSW structural aluminum alloy plates. .The instrument pin geometries used in this investigation were triangular, rounded and hexagonal. In this case study, AA 6082-T6 sheets 200 mm X 80 mm X 8 mm were used. Based on ASTM-B557, the 19.05 mm wide and 158.57 mm2 cross sectional area were prepared.(refer fig. 2 Tensile Test sample) .Then tensile test was performed on UTM to define the tensile strength of 9 samples were welded using different pin profiles. . Fig. 2 Tensile test specimen They use rotatory speeds 1200 rpm, 1400 rpm, 1600 rpm and traverse speeds 10mm/min, 12 mm/min, 14 mm/min. It has been noted that the Hexagonal tool pin profile has maximum tensile strength 82.1 Pa. At a rotatory speed of 1200 rpm and through velocity of 10 mm / min. Maximum tensile strength of 83 MPa at 1200 rpm tool rotatory speed and 10 mm / min tool traverse velocity for Triangular tool pin profile. Moreover, the highest tensile strength was noted for the same rotatory speed and through velocity for the rounded pin profile 78.2 MPa. Comparing all outcomes, Triangular tool pin profile at 1200 rpm tool rotatory speed and 10 mm / min trough velocity results in greater Ultimate traction strength compared to other tool pins. In this , M.V.R.Durga Prasad, Kiran Kumar Namala [5] studied on AA5083 andAA6061 plates of 200 x 100 x 5mm Thickness was quantified. A rotatory speed, weld speed and tilt angle of the tool were the process parameters in this investigation at 3 levels each. Tests are performed in the L9 orthogonal array of Taguchi. The mechanical properties of welded samples are the % of elastic deformation and durability (Vickers). B. Optimzation of processs parameters in FSW by ANOVA analysis Tool used in this study is H13 tool steel with taper cylindrical pin geometry. The tool's profile is 18 mm shoulder dia and 6 mm pin dia with 14 ° taper angle. .They use rotational speeds 800rpm, 1200rpm, 1600rpm, traverse speeds are 20mm/min, 50mm/min, 80mm/min and tilt angle in degrees are 0, 1, and 2. The main factor is the welding speed, which leads to the percentage elongation effect by 54.88%, and the less influence of the tool velocity is 5.39%. The ideal state to achieve a good percentage extension is 800rpm tool rotation speed; 20mm / min weld speed and a 1degree angle tilt. Welding speed was the main factor in the effect of hardness in the weld zone at 67.52% and the tool rotatory speed influence at 4.39%. The optimal tool rotatory speeds 800rpm, weld speed 20mm / min and tool tilt angle 2 degrees have been noticed to achieve well hardening in the weld zone. C. Influence of process parameters on FSW of AL6063 In the study, R.Muthu Vaidyanathan, MahboobPatel, N.SivaRaman, D.Tedwors [7] investigate about Optimum parameters to joining AA6063 butt joints. 
Rotational speed, traverse speed, and axial force were the key factors considered in the study. The AA6063 material was cut to a size of 150 × 100 × 5 mm. The plates were positioned in a butt joint configuration 100 mm long and 150 mm wide, and the FSW process was performed normal to the plates. They used rotational speeds of 1000 and 1500 rpm, traverse speeds of 0.5 and 1 mm/s, and axial loads of 4000 and 6000 N. A group of 4 samples was prepared for determining the mechanical properties using an EDM wire cutting machine. Analysis of variance (ANOVA) was used to find out which factor most affects the tensile strength and hardness. From the ANOVA they concluded that welding speed was the key input parameter with the greatest statistical effect on mechanical properties such as tensile strength, deformation, and hardness. A maximum nominal ultimate stress (101 MPa) was obtained with the optimal process parameters of 1000 rpm tool rotational speed, 6000 N axial force, and 1 mm/s traverse speed. Axial force and rotational speed are the dominant parameters for the equivalent stress induced in the tool. It was found that the percentage of deformation was low for all samples, showing that the amount of heat liberated in the process was low.

A. Experimental Procedure
(i) First, AA7075 plates were cut to dimensions of 100 mm × 50 mm × 5 mm. (ii) These plates were then friction stir welded under the process parameters of 500 N axial load, tool rotational speed in the range of 1000 to 1400 rpm, and tool travel speed in the range of 20 to 40 mm/min. (iii) The tensile test specimens, with dimensions as shown in the sample specimen figure, were cut on an EDM (electric discharge machining) machine. (iv) The tensile specimens were subjected to tensile testing on a ZWICK ROELL UTM (universal testing machine). (v) Finally, hardness tests were performed on the weld nugget zone of the weld joint using a Rockwell hardness machine. (vi) Using the data obtained from these two tests, ANOVA and Taguchi analyses were carried out in MINITAB 17. From the Taguchi and ANOVA analyses, the optimum process parameters and the contribution of each parameter to tensile strength and hardness can be determined. These process parameters are then used to obtain sound welded joints with the friction stir tool.

Figure: FSW joint with hardness indentation.
The Rockwell hardness test involves penetration of the sample material by a diamond cone or a hardened steel ball. The indenter is first forced onto the sample under a small initial load F0, generally 10 kgf (Figure 4A). When equilibrium is achieved, an indicating device that follows the indenter's movement, and thus responds to changes in its depth of penetration, is set to a datum position. An additional major load is then applied while the preliminary minor load is maintained, producing a further increase in penetration (Figure 4B). When equilibrium is reached again, the additional major load is removed while the initial minor load is still retained. Removal of the extra major load allows partial recovery, thus lowering the penetration depth (Figure 4C). The permanent increase in penetration depth arising from the application and withdrawal of the additional major load is used to calculate the Rockwell hardness number (in the standard formulation, HR = N − e, where e is that permanent depth increase expressed in units of 0.002 mm and N is a scale constant, e.g., 100 for diamond-cone scales and 130 for ball scales).

Fig. 5: Tensile test specimen used for the experiment.
The nine tests based on the L9 orthogonal array were performed. The responses of the Al 7075 alloy were investigated for various parameters such as tool rotational speed, longitudinal feed, and axial force.
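For reference, the sketch below lays out the standard L9(3^4) orthogonal array and maps its first two columns onto the two factors varied here. Since the paper reports only the ranges of the settings, the intermediate levels (1200 rpm and 30 mm/min) are assumptions made purely for illustration; this is a minimal Python sketch, not the authors' MINITAB analysis.

```python
# Minimal sketch of the L9 experimental layout (assumed levels, see note above).
import numpy as np

# Standard Taguchi L9(3^4) orthogonal array, levels coded 1-3.
L9 = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
])

# Only two factors are varied in this experiment, so only the first two
# columns are assigned; the remaining columns are left unused.
rotational_speed = {1: 1000, 2: 1200, 3: 1400}  # rpm (1200 is an assumed midpoint)
travel_speed     = {1: 20,   2: 30,   3: 40}    # mm/min (30 is an assumed midpoint)

for run, row in enumerate(L9, start=1):
    print(f"Run {run}: {rotational_speed[row[0]]} rpm, "
          f"{travel_speed[row[1]]} mm/min travel speed")
```

With two three-level factors assigned to the first two columns, the nine runs cover every combination of levels exactly once, which is why nine welds suffice for the analysis that follows.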
Graph 2: UTS vs. rotational speed at different weld speeds.

C. Taguchi Analysis
The Taguchi technique, also known as Dr. Taguchi's robust design technique, significantly enhances engineering productivity. It is a powerful statistical method for improving product/process design and solving manufacturing problems. It is based on the concept that quality must be evaluated not by compliance with predefined tolerance limits, but by the deviation from the defined target value. Quality cannot be guaranteed through inspection and rework; it should be built in through suitable process and product design. Three ideas in particular are the primary contributions of the Taguchi method: orthogonal arrays, robustness, and the quality loss function. The Taguchi technique helps identify good control factor settings in order to achieve the best possible results from the process. A set of tests is conducted using orthogonal arrays (OA), and these experimental values are used to analyze the data and forecast product quality. As mentioned above, a complete factorial design requires a very large number of tests to be performed; as the number of factors increases, it becomes laborious and complicated. To resolve this issue, Taguchi proposed the use of specially designed "orthogonal arrays" to investigate the whole parameter space with fewer experiments. Taguchi further prescribes using the loss function to quantify performance characteristics that deviate from the desired target value; the value of this loss function is then converted into the signal-to-noise (S/N) ratio. There are usually three classes of performance characteristic used to evaluate the S/N ratio: nominal-the-best, larger-the-better, and smaller-the-better.

Table 3: Factors and levels.

(1) Nominal-the-best: used when a specified target value is most favored, with neither smaller nor larger values being desirable (e.g., chemical composition ratios or parts with nominal dimensions in mechanical fitting). The S/N ratio model in this case is S/N = 10 log10(ȳ²/s²), where ȳ and s² are the mean and variance of the response. The number of operating conditions is equal to the number of rows in the orthogonal array and must be equal to or greater than the total degrees of freedom.

Steps in Taguchi's technique: the basic steps of Taguchi parameter design are followed, the last of which is to investigate the data and identify the optimum level and performance of each control factor.

Taguchi analysis for hardness and UTS: In this analysis, the main objective is to find the optimum parameters that affect the hardness of the FSW joint. Here the larger-the-better S/N ratio model, S/N = −10 log10((1/n) Σ 1/yᵢ²), is used to obtain the optimum parameters. Delta is the difference between the highest and lowest mean S/N value of the corresponding factor in the response table; according to the value of delta, ranks are assigned to the factors that affect the hardness and ultimate tensile strength. From an F table, the critical F-value is found. If this critical value is greater than the calculated F-value in the ANOVA table, then the factor is not significant. If the factor is significant, the P-value is less than the alpha value; if the factor is not significant, the P-value is greater than the alpha value.

I. ANOVA analysis for hardness
General linear model: hardness versus rotational speed (rpm) and welding speed (mm/min); analysis of variance for the transformed response. As a worked example of reading the F table: suppose an F-ratio of 3.96 is obtained with (2, 20) degrees of freedom. Going across 2 columns and down to the row for 20 denominator degrees of freedom, the critical F value is 3.49. Since the obtained F-ratio is greater than this critical value, the effect is unlikely to have occurred by chance (p < 0.05).
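The following minimal Python sketch illustrates the calculations just described: the larger-the-better S/N ratio, the delta-based ranking of factors from the response table, and the comparison of a computed F-ratio against the critical F value. The response values and the example F-ratio are placeholders rather than the measured data from this study.

```python
# Minimal sketch (placeholder data): larger-the-better S/N ratios,
# delta-based factor ranking, and an F-test significance check.
import numpy as np
from scipy import stats

# Hypothetical UTS responses (MPa) for the nine L9 runs -- NOT the study data.
y = np.array([248.0, 252.0, 244.0, 261.0, 266.0, 258.0, 240.0, 236.0, 243.0])
speed_code = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])  # rotational-speed level per run
feed_code  = np.array([1, 2, 3, 1, 2, 3, 1, 2, 3])  # welding-speed level per run

# Larger-the-better S/N ratio: S/N = -10*log10(mean(1/y^2)); with one
# observation per run this reduces to 20*log10(y).
sn = -10.0 * np.log10(1.0 / y**2)

def level_means(codes, sn):
    """Mean S/N ratio at each level (1-3) of a factor (one row of the response table)."""
    return np.array([sn[codes == k].mean() for k in (1, 2, 3)])

for name, codes in [("rotational speed", speed_code), ("welding speed", feed_code)]:
    means = level_means(codes, sn)
    delta = means.max() - means.min()  # delta used to rank the factors
    print(f"{name}: mean S/N per level = {means.round(2)}, delta = {delta:.2f}")

# Significance check: compare a computed F-ratio with the critical F value
# at alpha = 0.05 and (2, 4) degrees of freedom (about 6.94, as used below).
f_crit = stats.f.ppf(0.95, 2, 4)
f_calc = 7.7  # e.g. the welding-speed F-ratio for hardness quoted in the text
print(f"F critical = {f_crit:.2f}, significant: {f_calc > f_crit}")
```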
Table 11: Table of F values.

1) Contribution of factors on hardness: For rotational speed, the F-value from the ANOVA table is 1.79, while the critical value at (2, 4) degrees of freedom is 6.94. Since the critical value is higher than the calculated value, rotational speed does not have a significant effect on hardness. Its contribution percentage is 15.56%. For welding speed, the F-value from the ANOVA table is 7.7, which is greater than the critical value of 6.94 at (2, 4) degrees of freedom, so welding speed is significant (i.e., it has an effect on hardness). Its contribution percentage is 67.03%.

2) Contribution of factors on ultimate tensile strength: For rotational speed, the F-value from the ANOVA table is 1.31, while the critical value at (2, 4) degrees of freedom is 6.94. Since the critical value is higher than the calculated value, rotational speed does not have a significant effect on ultimate tensile strength. Its contribution percentage is 25.85%. For welding speed, the F-value from the ANOVA table is 1.77 (Table 11), which is also below the critical value of 6.94 at (2, 4) degrees of freedom, so welding speed is likewise not significant (i.e., it has little effect on ultimate tensile strength). Its contribution percentage is 34.80%.

IV. CONCLUSIONS
AA7075 alloy plates with dimensions of 100 × 50 × 5 mm were friction stir welded under the process parameters of 500 N axial load, tool rotational speed in the range of 1000 to 1400 rpm, and tool travel speed in the range of 20 to 40 mm/min. After creating the FSW joints, hardness tests were performed on a Rockwell hardness machine.
2019-10-24T09:07:19.694Z
2019-10-10T00:00:00.000
{ "year": 2019, "sha1": "77b346486ba3c16a8e5716d4111131510b48e9f5", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijitee.l3911.1081219", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "906e4ae98a4d663902d9f97485ce5e91941435f9", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [] }
204815434
pes2o/s2orc
v3-fos-license
Influence of some methylated hepatocarcinogenesis-related genes on the response to antiviral therapy and development of fibrosis in chronic hepatitis C patients

Background and Aim: Epigenetics is involved in multiple normal cellular processes. Previous research has revealed the role of hepatitis C virus infection in accelerating the methylation process and affecting the response to treatment in chronic hepatitis patients. This work aimed to elucidate the role of promoter methylation (PM) in the response to antiviral therapy, and its contribution to the development of fibrosis, through hepatocarcinogenesis-related genes.

Methods: A total of 159 Egyptian chronic hepatitis patients and a control group of 100 healthy subjects were included. The methylation profile of a panel of 9 genes (SFRP1, p14, p73, APC, DAPK, RASSF1A, LINE1, O6MGMT, and p16) was detected in patients' plasma using methylation-specific polymerase chain reaction (MSP).

Results: Clinical and laboratory findings were gathered for patients receiving combined pegylated interferon and ribavirin antiviral therapy. Regarding the patients' response to antiviral therapy, the percentages of non-responders with methylated APC, O6MGMT, RASSF1A, SFRP1, and p16 genes were significantly higher than those of responders (P<0.05). Of the 159 included patients, the most frequently methylated gene was SFRP1 (102/159), followed by p16 (100/159), RASSF1A (98/159), then LINE1 (81/159), P73 (81/159), APC (78/159), DAPK (66/159), O6MGMT (66/159), and p14 (54/159). A total of 67/98 (68.4%) cases with the RASSF1A methylated gene (P=0.024) and 62/100 (62%) cases with the P16 methylated gene (P=0.03) were associated with mild-degree fibrosis.

Conclusions: To recapitulate, the PM of the SFRP1, APC, RASSF1A, O6MGMT, and p16 genes increases in chronic hepatitis C patients and can affect patients' response to antiviral therapy. The RASSF1A and P16 genes might have a role in the distinction between mild and marked fibrosis.

INTRODUCTION
Chronic liver disease may be defined as a disease of the liver that lasts over a period of 6 months. It comprises liver pathologies such as chronic hepatitis, liver cirrhosis, and hepatocellular carcinoma. 1 Hepatitis C virus (HCV) infection is one of the causes associated with chronic liver diseases. Infections with HCV are pandemic, and the World Health Organization (WHO) estimates a worldwide prevalence of 3%. In Middle Europe, about 1% of the population is infected, mostly with genotype 1 (85% in Austria). In developing countries, chronic hepatitis C (CHC) is the most prominent cause of liver cirrhosis, hepatocellular carcinoma, and liver transplantation. 2 Ribavirin/pegylated-interferon combination therapy is currently the most effective treatment for hepatitis C infection, and clearance of the virus can be predicted by a sustained virological response (SVR). 3 The main predictors of SVR are HCV genotype, stage of fibrosis, baseline HCV RNA levels, the dose and duration of therapy, IL28B polymorphism, body mass index (BMI), age, insulin resistance, gender, the levels of alanine aminotransferase (ALT) and gamma glutamyl-transferase (GGT), and co-infection with human immunodeficiency virus (HIV) or other hepatotropic viruses. 4 Many authors have found that different types of cancer, including hepatocellular carcinoma (HCC), show distinct DNA methylation profiles, suggesting the existence of cancer-type specific methylation signatures. 5
Others have shown that the presence of hepatitis viruses, especially HCV, could play a role in accelerating the methylation process involved in HCC development, potentiate the progression of HCV-related liver disease, and affect its response to treatment. 6,7 The molecular pathogenesis of hepatocarcinogenesis is still unclear. However, it has been revealed that epigenetic changes, especially global DNA hypomethylation concomitant with locus-specific DNA hypermethylation in gene promoters, play vital roles in carcinoma progression. 8,9 DNA methylation markers could be utilized to detect human cancers in blood, plasma, secretions, or exfoliated cytology specimens and to predict the risk of cancer development. 10,11 Thus, cell-free DNA circulating in the plasma of chronic liver disease patients may represent a promising non-invasive alternative for HCC screening and monitoring. Progression from chronic hepatic inflammation to the fibrotic/cirrhotic stage is supported by numerous core pathways, observed in other fibrotic diseases, as well as tissue- or injury-specific pathways that are only activated in particular conditions. 12,13 Therefore, the present work was carried out to verify the previous results 7,14 and elucidate the role of promoter methylation (PM) in the response to antiviral therapy, and its contribution to the development of fibrosis, using hepatocarcinogenesis-related genes such as SFRP1, p14, p73, APC, DAPK, RASSF1A, LINE1, O6MGMT, and p16.

Patient specimens
This study was done on 159 Egyptian patients with chronic genotype 4 hepatitis C in addition to a control group of 100 healthy subjects. These patients were eligible for ribavirin/pegylated interferon combination therapy. Selection of patients was based on clinical and histological examinations. Inclusion criteria were morphologic evidence of chronic hepatitis, normal renal function (normal creatinine level), normal prothrombin time, elevated liver function markers (elevated bilirubin, aspartate aminotransferase, and ALT levels), normal cardiac enzymes, HIV antibody (Ab) negative by ELISA, hepatitis B surface antigen (HBsAg) negative by ELISA and hepatitis B virus (HBV) DNA negative by polymerase chain reaction (PCR), and anti-HCV positive by ELISA. Informed consent was obtained from all the participants enrolled in the study, which was performed in accordance with the Declaration of Helsinki and local and national laws.

Laboratory investigations
Laboratory investigations were performed, and HCV RNA was quantified using quantitative real-time PCR 15 at baseline and after 12, 24, 48, and 72 weeks of antiviral therapy. Histological examination was done on core needle biopsies prior to treatment to determine the grade of necro-inflammation and the stage of fibrosis according to the Metavir scoring system. Steatosis was confirmed histologically and expressed as the percentage of fatty change; it was also checked by abdominal ultrasonography, and the criteria for none, minimal, mild, and moderate steatosis are presented in Table 1. Clinical and laboratory follow-up was done for every patient to record any adverse side effects and the treatment response according to interferon treatment guidelines.

DNA extraction
DNA was extracted from the patients' plasma, before they received ribavirin/pegylated interferon combination therapy, according to a previously published protocol. 16 DNA was extracted through a phenol/chloroform treatment.
Briefly, an equal volume of buffer-equilibrated phenol (pH 7.0-7.5) was added to the samples and vortexed. The upper aqueous layer was removed with a "cut down" pipette tip, and an equal volume of phenol/chloroform (1:1) was then added to the aqueous supernatant and vortexed. The upper aqueous layer was removed again in a similar fashion, and an equal volume of chloroform/isoamyl alcohol (24:1) was then added and vortexed. Sodium acetate (3 M, pH 4.7-5.2) was added to the aqueous supernatant, followed by ice-cold ethanol. Samples were then incubated overnight at -8°C. After decantation of the liquid, the DNA pellet was recovered and dissolved in sterile water. The purity and integrity of the DNA were confirmed by carrying out β-actin gene amplification.

Bisulfite conversion and methylation-specific polymerase chain reaction (MSP)
After DNA extraction, the DNA was subjected to bisulfite treatment using the EZ DNA Methylation kit, which uses 300 ng of the extracted nucleic acid. This was followed by MSP using the primer sequences and methylation-specific PCR conditions illustrated in Table 2. DNA methylation of the CpG islands of the SFRP1, p14, p73, APC, DAPK, RASSF1A, LINE1, O6MGMT, and p16 genes was determined using specific primers for methylated (M) and unmethylated (UM) DNA.

Statistical analysis
Statistical analysis was done using IBM SPSS Statistics 21.0 (International Business Machines Corporation, New York, NY, USA). For categorical variables, percentages were calculated, and differences were analysed with chi-square tests or Fisher's exact test when appropriate. Continuous variables were reported as mean±standard deviation or median and range as appropriate. Differences among normally distributed continuous variables were analysed by Student's t-test; comparison between three groups was done using the Kruskal-Wallis test (the non-parametric analogue of analysis of variance). A P-value of less than 0.05 was considered statistically significant.

Clinico-pathological features of the patients
The demographic, laboratory, and histopathological data of the 159 patients (81 responders and 78 non-responders) are illustrated in Table 1. No significant difference was observed between the two groups (responders and non-responders) regarding age, sex, haematological parameters, liver profile, or HCV viral load. However, significant differences were found in other variables such as BMI, fibrosis, necroinflammatory activity, and steatosis (Table 1).

HCV RNA results
For HCV RNA levels measured by RT-PCR, there was no significant difference (P=0.789) between responders (193.000±108) and non-responders (338.000±237) among the 159 CHC patients before treatment (Table 1). HCV RNA was measured at the different treatment end points and during follow-up of our patients to determine the treatment response, as shown in Table 3.

Promoter methylation index
The methylation index, defined as the ratio between the number of methylated genes and the total number of studied genes for each sample, was calculated for all patients. 19 For the methylation index, no significant difference was found between responders and non-responders (2.65±1.31 and 2.71±1.23, respectively; P=0.67). Also, there was no significant difference between mild fibrosis (F1 and F2) and marked fibrosis (F3 and F4) except for the RASSF1A (P=0.024) and p16 (P=0.03) methylated genes.
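As an aside, the categorical comparisons reported above (methylation status cross-tabulated against response or fibrosis group and tested with chi-square or Fisher's exact tests) follow a standard pattern, illustrated by the minimal Python sketch below. The counts in the contingency table are hypothetical placeholders, not data from this study, and the analysis in the paper itself was performed in SPSS.

```python
# Minimal sketch (hypothetical counts): chi-square test of methylation
# status against treatment response, with Fisher's exact test as the
# fallback when expected cell counts are small.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

#                  responders   non-responders
table = np.array([[30, 48],    # gene methylated
                  [51, 30]])   # gene unmethylated

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():          # small expected counts -> use Fisher's exact test
    _, p = fisher_exact(table)
    test_used = "Fisher's exact"
else:
    test_used = "chi-square"

print(f"{test_used} test: P = {p:.4f}, significant at alpha 0.05: {p < 0.05}")
```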
DISCUSSION
The foremost predictors of response to interferon-based HCV therapy include both patient and viral factors. Patient factors that were associated with a worse response to interferon-based therapy included male gender, older age, high BMI, advanced liver fibrosis, history of failed treatment, black race, non-CC IL28B genotype, and the presence of certain comorbid conditions, such as HIV coinfection, insulin resistance, or diabetes. Viral factors that were associated with a worse response included non-genotype-2 infection, high viral load, and unfavourable viral kinetics during treatment. 4,20 Some authors have revealed that hepatitis virus infection might play a role in fast-tracking the methylation process involved in HCC development and may affect its response to treatment. 6,19,21,22 Progression from chronic hepatic inflammation to the fibrotic/cirrhotic stage is supported by numerous core pathways, observed in other fibrotic diseases, as well as tissue- or injury-specific pathways that are only activated in particular conditions. 12,13 In an early work by our group, 16 PM of APC, FHIT, p15, p16, and E-cadherin (range, 67.9-89.2%) was detected in the plasma and tissues of 28 chronic HCV- and/or HBV-associated HCC patients, with high concordance for all studied genes. However, no significant association was found in that study between the methylation status of any gene and the presence of hepatitis virus infection, which was partially attributed to its small sample size. We then assessed the contribution of methylation status to the development and progression of HCV-associated HCC and CH in Egyptian patients using a specific panel of genes (APC, FHIT, p15, p73, p14, p16, DAPK1, CDH1, RARb, RASSF1A, O6MGMT). 19 We found that HCV infection may contribute to hepatocarcinogenesis through enhancing PM of certain genes. A panel of 4 genes (APC, p73, p14, O6MGMT) out of the 11 tested genes successfully classified cases into HCC or CH with high accuracy (89.9%), sensitivity (83.9%), and specificity (94.7%). A more extended confirmatory study, including 516 Egyptian patients with HCV-related liver disease (208 HCC, 108 liver cirrhosis, 100 CHC, and 100 controls), was then performed to detect PM of P14, P15, P73, and the mismatch repair gene O6MGMT in patients' plasma using EpiTect Methyl qPCR Array technology. 23 The candidate gene selection for the present work (SFRP1, p14, p73, APC, DAPK, RASSF1A, LINE1, O6MGMT, and p16) was analyzed with the Gene Expression Profiling Interactive Analysis database. In the current study, significant effort was made to elucidate the role of PM in the response to antiviral therapy and its contribution to the development of fibrosis using these hepatocarcinogenesis-related genes. The percentages of non-responders among patients with methylated APC, O6MGMT, RASSF1A, SFRP1, and p16 genes were significantly (P<0.05) higher than those of responders. The most frequently methylated gene in the 159 CHC patients was SFRP1 (102/159), followed by p16 (100/159), RASSF1A (98/159), then LINE1 (81/159), P73 (81/159), APC (78/159), DAPK (66/159), O6MGMT (66/159), and p14 (54/159). In a previous study, Iyer et al. 16 detected a high frequency of 5 methylated genes (APC, FHIT, p15, p16 and E-cadherin), ranging from 67.9% to 89.2%, in the plasma and tissues of 28 chronic HCV and/or HBV-associated HCC patients. However, no significant association was found in their study between the methylation status of any gene and the presence of hepatitis virus infection, which could be attributed to the small sample size.
Also, in a previous study done by our group, 7 we assessed the contribution of methylation status to the development and progression of HCV-associated HCC and CH in Egyptian patients using a specific panel of genes (APC, FHIT, p15, p73, p14, p16, DAPK1, CDH1, RARb, RASSF1A, O6MGMT). We found that HCV infection may contribute to hepatocarcinogenesis through enhancing the promoter methylation of certain genes. On the other hand, to determine whether methylation status in plasma could be employed for monitoring multistep carcinogenesis, Huang et al. 14 applied multiplex MSP to assay the methylation status of p16, SFRP1, and LINE1 in plasma specimens from 119 HCC patients, 105 LC patients, 52 patients with benign lesions, and 50 healthy people. Huang et al. 14 found that modification in the expression of the p16, SFRP1, and LINE1 genes might be involved in the process of hepatocarcinogenesis. Regarding the PM of the studied genes and the degree of fibrosis, 67/98 (68.4%) cases with the RASSF1A methylated gene (P=0.024) and 62/100 (62%) cases with the p16 methylated gene (P=0.03) were associated with mild fibrosis. This finding was close to the results found by Zekri et al., 7 who stated that only PM of the RASSF1A gene was significantly associated with mild fibrosis in the studied patients (P=0.019). However, their study examined six genes (p14, p73, APC, DAPK, RASSF1A, and O6MGMT) in 53 chronic HCV patients, while our study examined nine genes (SFRP1, p14, p73, APC, DAPK, RASSF1A, LINE1, O6MGMT, and p16) in 159 CHC patients. This finding might be explained by the fact that DNA methylation is modulated by the HCV core protein, which inhibits the expression of the CDKN2A gene, encoding p16INK (an inhibitor of cell proliferation), by up-regulating the methyltransferases DNMT1 and DNMT3b. 24,25 Moreover, the HCV core protein also increases the methylation of the RASSF1A promoter, a negative regulator of the Ras pathway, by inducing the histone methyltransferase SMYD3. 25,26 Therefore, our results provide evidence for the role of the RASSF1A and p16 genes in the induction of fibrogenesis in chronic HCV patients.

In conclusion, the PM of the SFRP1, APC, RASSF1A, O6MGMT, and p16 genes increases in CHC patients. These methylated genes can significantly affect patients' response to antiviral treatment, whereas the RASSF1A and p16 genes are involved in the process of fibrogenesis and may have a role in the distinction between mild and marked fibrosis in those patients.

Ethics approval and consent to participate
This study was performed in compliance with relevant laws and institutional guidelines and in accordance with the ethical standards of the Declaration of Helsinki. The Institutional Review Board (IRB) of the NCI approved the protocol. Informed written consent was obtained from all patients and individuals enrolled in the study.
2019-10-22T13:03:23.111Z
2019-10-22T00:00:00.000
{ "year": 2019, "sha1": "bba63a928d66699b67b65fff85e3932a5ae2f54c", "oa_license": "CCBYNC", "oa_url": "https://www.e-cmh.org/upload/pdf/cmh-2019-0051.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2c08d4dfc4c722715e0c9606489137c464908157", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
86026746
pes2o/s2orc
v3-fos-license
Moving Toward Sustainability with Alternative Containers for Greenhouse and Nursery Crop Production: A Review and Research Update S UMMARY . Market researchers have found that nursery and greenhouse production practices that reduce plastic use can increase consumer interest. However, there are broader crop performance, production efficiency, and environmental factors that must be considered before adopting containers made with alternative materials. This review highlights current commercially available alternative containers and parent materials. In addition, findings from recent and ongoing nursery, greenhouse, and landscape trials are synthesized, identifying common themes, inconsis- tencies, research gaps, and future research needs. P lastic containers have been the predominant container type in U.S. greenhouse and nursery production since the 1980s. Serving a variety of functions and found in a multitude of shapes, sizes, and colors, plastic containers are used for propagating, growing, transporting, and marketing ornamental crops (Evans and Hensley, 2004;Hall et al., 2010;Helgeson et al., 2009). This reliability and flexibility come at a relatively inexpensive price, which has helped establish the prominence of plastic containers in ornamental production. Unfortunately, this combination of characteristics also creates an overabundance of unreclaimed plastic waste each production cycle. Most plastics are derived from petroleum-a nonrenewable resource that, while still relatively inexpensive, is subject to price fluctuations (Knox and Chappell, 2011). Furthermore, given limited access to recycling centers, high collection and sanitation costs, and chemical contamination concerns, used plastic containers are primarily disposed of in landfills (Garthe and Kowal, 1993;Hall et al., 2010;Helgeson et al., 2009). Amidon Recycling (1994) estimated that the United States used 521 million pounds of plastic in agriculture in 1992. Of this, 66% was used in the nursery industry in the form of containers. The most recent estimate of plastic use for ornamental plant containers raises this to 1.66 billion pounds (Schrader, 2013). Many consumers view the use of plastic products in ornamental plant production as an unsustainable practice (Behe et al., 2013). In market studies where various sustainable greenhouse plant attributes were tested, container type was consistently listed as having the greatest impact on consumer product perception . These findings, coupled with more general market studies on green consumer habits have motivated some growers to explore avenues for making their businesses more ''green''-both in terms of environmental impact and public perception Hall et al., 2009). Green industry stakeholders (i.e., nursery, greenhouse, and landscape professionals) have identified the use of plantable or compostable biodegradable container alternatives as a marketable way to improve the sustainability of current production systems. This article provides an update on advancements in the development of alternative biocontainers in nursery and greenhouse production, with the hope of fostering future research and adoption by the green industry. Types of alternative containers Alternative containers were developed to replace traditional petroleumbased plastic containers in nursery and greenhouse production. Plasticbased containers consume landfill space and can remain in our environment indefinitely. Sustainable containers are designed to decompose rather than contribute to landfill waste. 
The ability to degrade when planted or composted is a major marketing focus that distinguishes biocontainers from their conventional plastic counterparts. As such, alternative containers are classified as plantable, compostable, or recycled plastic, based on their requirements for and ability to degrade at the end of their crop production life and parent materials (Table 1). [Table 1 footnotes: quality ratings reflect postproduction visual quality after a 14-week greenhouse production period, rated good if not different from '5' (intact, no visible changes in color or construction) and low if significantly less than '5' (based on Lopez and Camberato, 2011; Koeser et al., 2013a); VP = vertical and punch strength, rated good if wet vertical and punch strengths are at least 2 kg (4.4 lb) after a 4-week greenhouse production period with daily overhead irrigation (based on Evans et al., 2010); T = tensile strength, rated good if not different from the plastic control or at least 2 kg after a 15-week greenhouse production period with ebb-and-flood irrigation (based on Beeks and Evans, 2013b); degradation requirements based on current manufacturer guidelines.]

PLANTABLE. Plantable biocontainers can be planted directly into the soil. These containers are intended to withstand watering and handling during short-term production and shipping conditions. Once planted, the containers are intended to rapidly break down and allow plant roots to penetrate the pot and grow into the soil. The use of plantable containers eliminates container removal and disposal costs and can reduce the cleanup time required at installation. Plantable containers could eliminate root disruption and transplanting shock (Khan et al., 2000). To function as claimed, it is imperative that plantable containers do in fact break down quickly once installed to allow root establishment into surrounding soil (Evans and Hensley, 2004). The rate of container biodegradation in landscapes depends on many factors. The container material, available nitrogen, moisture, temperature, pH, microbes, and other soil factors can all impact degradation (unpublished data). In addition, regional differences may occur due to different soil types and climates.

COMPOSTABLE. Plants must be removed from compostable containers at installation and the containers are composted separately. These containers do not degrade quickly or completely in the landscape. Most bioplastics, as well as hard rice hull, peat, and thick-walled paper or wood fiber containers intended for longer term production fall into this category. Compostable materials can be further differentiated based on whether they require industrial composting facilities to break down completely. Industrially compostable containers may not break down in a typical backyard compost pile due to unsuitable temperature, moisture, pH, aeration, and microbial populations. ASTM D6400 is the main standard for certification of industrially compostable plastics in the United States (ASTM, 2004). According to this standard, bioplastics must be at least 60% degraded within 90 d at or above 140°F to be considered compostable.

RECYCLED PLASTIC. These containers are produced from recycled plastic water and soft drink bottles. The used bottles are converted into a liquid and blended with biodegradable natural fibers, such as cotton, jute, vegetable fibers, or bamboo. When heat pressed, the
mixture bonds to produce a fabric-like geotextile that is sewn into a container. These containers are not biodegradable or compostable but will slowly disintegrate to a point that leaves behind much less residue (much reduced carbon footprint) compared with plastic containers derived entirely from petrochemicals. An example of this type of product is the Root Pouch (Root Pouch, Hillsboro, OR). Materials used to produce alternative containers Alternative containers can be made from a variety of natural materials. These containers are generally made from renewable materials that are often by-products of an industrial process. Their use in the manufacture of containers can significantly reduce landfill waste by using waste from another process. PRESSED FIBER. There are a wide variety of hot-pressed fiber containers available on the market. These are constructed from fibrous materials such as rice hulls (Oryza sativa), wheat (Triticum aestivum), paper, peat, wood pulp, spruce fibers (Picea sp.), coir fiber from coconut palm (Cocos nucifera), rice straw, bamboo (subfamily Bambusodeae), or composted cow manure. Fiber containers are semiporous and promote water and air exchange between the rooting substrate and surroundings. The containers may be biodegradable or compostable depending on the material and the manufacturing process. Some containers include a natural or synthetic binding material such as resins, glue, wax, latex, or cow manure. Other containers depend on the material itself to provide structural stability and extended life span for long-term use. Pressed fiber containers tend to have varying degrees of rigidity, material strength, and decay resistance depending on source material and processing. Unlike plastic, which provides relatively consistent performance in a mechanized production system, the resiliency of pressed fiber containers depends on the container (source material, material moisture content, binder, etc.). Production practices affect the environment to which the containers are subjected (irrigation, use of shade/ supplemental lighting, ambient temperature, etc.). Plant rooting pattern, pot spacing, and production duration can also influence container performance and lifespan. Also, some types of fiber containers weigh significantly more than a thin-walled plastic container-especially when saturated with water, which impacts container movement during production as well as shipping costs. BIOPLASTICS. Bioplastics are similar to traditional plastics and are created from either biopolymers (nonpetroleum based) or a blend of biopolymers and petrochemicalbased polymers. Biopolymer-based plastics are produced using renewable raw materials. Starch or cellulose is obtained from organic feed stocks [i.e., beet (Beta vulgaris), corn (Zea mays), potato (Solanum tuberosum), cassava (Manihot esculenta), sugarcane (Saccharum officinarum), palm fiber, or wheat]. Protein is acquired from soybeans (Glycine max) or keratin from waste poultry feathers. Lipids are derived from plant oils and animal fats. These raw materials are usually blended with fossil fuelbased polymers derived from petrochemical refining to reduce cost, enhance performance, or both (Riggi et al., 2011). There are two main types of bioplastics currently used in the manufacture of nursery containers: 1) starch-based plastics and 2) poly lactic acid (PLA). 
Starch-based plastics are water soluble, so starch blends are produced by linking 20% to 80% of starch with either biobased or fossil fuel-based polymers to improve their physical and chemical characteristics. Poly lactic acid is produced by anaerobic fermentation of feedstock and is mainly used with starch blends due to their slow biodegradability in soil. Bioplastics can be processed on equipment designed for petrochemical plastics, eliminating the need to develop new industrial machinery (Koeser et al., 2013a). The advantages of bioplastics are their physical properties including light weight, structural stability, rigidity, resistance to decay, and being the most similar to traditional plastics, which allows them to be easily integrated into a wide variety of production systems involving both short-term and long-term crops. Most bioplastic containers are intended to be removed and either composted or anaerobically digested at the end of plant production. The slow degradability inherent to bioplastics would affect root establishment if the container was not removed before transplanting. Some containers such as the SoilWrap (Ball Horticultural Co., West Chicago, IL), a bioplastic-based sleeve design (see below), will degrade in the soil and are considered plantable pots.

SLEEVES. There are several types of containers available in small sizes that are simply growing substrate wrapped in a paper, fiber, or bioplastic sleeve. These are not true containers, as they must be kept in a tray until the plant's roots hold the substrate together. These are often paper containers, which are plantable and fully degrade in a single season in the central and southern United States. Further north, they may persist for over 1 year. An example of a commercially available sleeve is the Ellepot (Blackmore Co., Belleville, MI), made from paper.

Effect of alternative containers on plant production
While biocontainers can reduce waste going into the landfill, that is only one of many environmental and economic aspects that may change as a grower transitions from conventional plastic pots to alternative containers. Past and ongoing research has documented differences and similarities regarding plant growth, plant quality, water requirements, mechanized production success, transplant shock, and a variety of container-related physical attributes. This section summarizes the current knowledge and potential issues associated with production and postproduction biocontainer use.

PLANT GROWTH AND QUALITY. Positive and negative impacts of using biocontainers compared with plastic containers have been reported on plant growth and development during production or establishment into the landscape. At the Center for Applied Horticulture Research (CAHR, Vista, CA), tomato (Solanum lycopersicum) plants grown in plastic containers had greater shoot dry weight than plants grown in wood fiber (Fertil Pot/DOT Pot; Fertil International, Boulogne-Billancourt, France), decomposed cow manure (CowPot; East Canaan, CT), and coconut coir pots but not different from plants grown in recycled paper (Western Pulp; Corvallis, OR) containers (CAHR, 2009). Root dry weight was greater for plants in plastic containers compared with all other container types. When planted in the field, recycled paper and coir containers degraded more slowly than Fertil Pot/DOTPot and CowPot.
'Midnight' (Dreams) petunia (Petunia ·hybrida) grown in bioplastic (Soil-Wrap) and slotted rice hull (NetPot; Summit Plastic Co., Akron, OH) containers had a larger growth index compared with plants grown in plastic pots; whereas, plants grown in bioplastic (Terra Shell/OP47, Summit Plastic Co.), coir, and plastic pots were not different (CAHR, 2010). Petunia flower number was not different during production or postproduction for plants grown in bioplastic (SoilWrap), rice hull (NetPots), and coir containers compared with plants in plastic control containers (CAHR, 2010). Similarly, recycled paper, peat (Jiffy-Pot; Jiffy International, Kristiansand, Norway), bioplastic (Terra Shell/OP47), rice straw, cow manure, coconut coir, and rice hull container types produced marketable transplants ['Score Red' geranium (Pelargonium ·hortorum), 'Dazzler Lilac Splash' impatiens (Impatiens wallerana), and 'Grape Cooler' vinca (Catharanthus roseus)] within the same time frame . Kuehny et al. (2011) also investigated shoot dry weight of 'Dazzler Lilac Splash' impatiens produced in 4-and 5-inch biocontainers at three sites. For the 5-inch size containers, there was no difference in shoot dry weight at any location. For the 4-inch size, no container type was superior for all measurements (root and shoot dry weight and root: shoot ratio) at all three locations. Following greenhouse production, plants in plantable containers were installed in the Longwood Gardens (Kennett Square, PA) landscape and generally performed no differently than plants produced in plastic containers . 'Eckespoint Classic Red' poinsettia (Euphorbia pulcherrima) plants grown for 12 to 16 weeks in recycled paper (Western Pulp) containers under greenhouse conditions were reported to have increased root and shoot dry weight, plant height, and bract area index compared with plants grown in straw (StrawPot; Ivy Acres, Baiting Hollow, NY), composted cow manure (CowPot), coconut coir, rice hull (NetPot), wheat starchderived bioresin (Terra Shell/OP47), plastic, and sphagnum peatmoss and wood pulp (Jiffy-Pot) containers (Lopez and Camberato, 2011). In an experiment using ebb-and-flood irrigation, shoot dry weight of 'Rainier Purple' cyclamen (Cyclamen persicum) grown in bioplastic, solid rice hull, slotted rice hull, recycled paper, peat, cow manure, rice straw, and coconut coir containers for 15 weeks was greater than for plants grown in plastic containers (Beeks and Evans, 2013a). A 3-month study showed no negative impact of plantable containers [bioplastic (SoilWrap), paper (Ellepot) and slotted rice hull] on root and shoot development of two sedum species (Sedum hybridum 'Immergrunchen' and Sedum spuricum 'Red Carpet Stonecrop') and 'Big Blue' liriope (Liriope muscari) during production in a quonset and in the landscape (Ingram and Nambuthiri, 2012). WATER USE. Evans and Hensley (2004) found that peat containers wicked water from the substrate causing 'Janie Bright Yellow' marigold (Tagetes patula), 'Cooler Blush' vinca, and 'Orbit Cardinal' geranium plants to wilt. Plants grown in peat (Jiffy-Pot) containers had the lowest shoot dry weight of all three container types, whereas plants had the greatest shoot dry weight when grown in plastic containers followed by plants grown in poultry feather containers. Tomato seedlings grown in corn/ palm-derived biocontainers had reduced biomass compared with those in plastic containers (Sakurai et al., 2005a). 
Further, seedlings in biocontainers had slower initial establishment in the field compared with those grown in plastic containers (Sakurai et al., 2005b). The researchers attributed this to inadequate irrigation and temporary root restriction of plants grown in biocontainers (Sakurai et al., 2005a, 2005b). Plant biomass is sometimes greater when plants are produced in alternative containers, but in other research, plants produced in conventional plastic containers have greater growth. The inconsistency may be due to increased potential for water loss through biocontainer sidewalls and related factors, which are the subject of this section. Some disparity in alternative container research results may be due to irrigation frequency, individual crop species water use, evaporative demand in the experiment location, plant size, and differences in container size and dimension. Because of the semiporous nature of some alternative materials, there is an increased potential for water loss through biocontainer sidewalls during plant production, wicking moisture from the substrate and increasing the crop water requirement. Also, the combination of stage of production and evaporation through biocontainer sidewalls may influence water loss. For example, as plants grew larger, a once per day irrigation regime was no longer sufficient for crops grown in coconut coir containers but was sufficient for other alternative containers and conventional plastic containers during experiments conducted at CAHR (L. Villavicienco, personal communication). The average water use of Gold Splash wintercreeper (Euonymus fortunei) plants grown outdoors in 1-gal paper and recycled paper containers was 30% to 50% higher than those grown in standard plastic containers in a 4-month study in Texas (unpublished data). In a separate study, the highest rate of sidewall water loss was for wood fiber (Fertil), followed by peat (Jiffy-Pot) and composted manure (CowPot), and lower sidewall water loss was noticed with coir, rice straw, and slotted rice hull (unpublished data). The lowest sidewall evaporation was observed for bioplastic (Terra Shell/OP47), solid rice hull, and traditional plastic containers. Total loss of water after 8 h in an environment-controlled chamber under a vapor pressure deficit of 2.6 kPa was ≈15% for plastic- and rice-hull-based containers, whereas the loss was ≈50% for recycled paper containers. Porous containers (wood fiber, manure, and straw) required more water and produced smaller 'Yellow Madness' petunia plants compared with plants grown in plastic containers in a greenhouse study (Koeser et al., 2013b). The amount of water required for producing a 4-inch 'Score Red' geranium ranged from 0.55 gal/container [plastic, rice hull, and bioplastic (Terra Shell/OP47)] to 1.1 gal/container [wood fiber, peat (Jiffy-Pot), rice straw, and recycled paper (Western Pulp)] under greenhouse and retail environments, necessitating an increased irrigation frequency for plants grown in biocontainers. In a greenhouse study, 'Cooler Blush' vinca and 'Dazzler Rose Star' impatiens grown in peat-based containers required three times more water than plants grown in plastic containers (Evans and Karcher, 2004). Similarly, plants grown in feather-based containers required 2.5 times more water than those grown in plastic containers. The greater irrigation requirement was due to evaporation through sidewalls of peat and feather containers compared with impermeable plastic.
The greater drying rate of biocontainers means increased irrigation volume, more frequent irrigation, or both, which could adversely affect the economic and environmental sustainability of alternative containers. Ebb-and-flood irrigation was found to be a viable option for conserving water when using biocontainers (bioplastic, solid rice hull, slotted rice hull, recycled paper, and coconut coir) in greenhouse production (Beeks and Evans, 2013b). Reusable plastic shuttle trays to support biocontainers may further reduce irrigation requirements (Koeser et al., 2013b). Irrigation is an important aspect of container plant production. Refining irrigation practices to match water loss and identifying water-conserving strategies such as use of shuttle trays will be critical to successfully adopting biocontainers in nursery and greenhouse production. Additionally, research is needed to examine how potentially greater or more frequent irrigation influences weed pressure, and herbicide and fertilizer longevity.

SUBSTRATE TEMPERATURE. The importance of keeping substrate temperature below 100°F to avoid root injury is well documented (Kramer, 1949). However, during warmer months supraoptimal substrate temperatures can occur due to large solar influx and lack of heat dissipation from the nonporous black plastic containers (Beattie et al., 1987). In the southeastern United States, it is common for the substrate temperature in black plastic containers to exceed 107.5°F for several hours and injure roots (Ruter and Ingram, 1990). Using porous containers (clay, paper, peat, etc.) is one method of mitigating heat stress as the root zone experiences a slower increase in temperature than when in nonporous containers (plastic, glass, paraffin protected, etc.) due to the high latent heat of vaporization of water (Jones, 1931). The maximum root-zone temperature of wood fiber containers was ≈8°F lower than that of black plastic containers in a southern Georgia study (Ruter, 2000b), as the fiber containers are semiporous and promote water and air exchange between the rooting substrate and surroundings. Shrub rose (Rosa sp.) grown in Texas in black plastic pots experienced substrate temperatures of about 130.8°F compared with only 97.4°F for those grown in fabric (Smart Pots) containers (Arnold and McDonald, 2006). Fiber containers were found to improve plant production, survival, and quality by moderating the substrate temperature of 'Otto Luyken' cherry laurel (Prunus laurocerasus) (Ruter, 1999) and Gold Splash wintercreeper (Wang et al., 2012) compared with plastic containers. 'Cunningham's White' rhododendron (Rhododendron ×) grown in fiber containers were never exposed to temperatures greater than 104°F, whereas the plants grown in plastic containers experienced supraoptimal temperatures (>122°F), especially when located on the southwest part of the production area (Svenson, 2002). 'Aztec Gold' daylily (Hemerocallis) grown in fiber containers produced three times greater foliage dry weight and four times greater root dry weight than those grown in plastic containers, which was attributed to lower substrate temperature and improved aeration for the plants grown in fiber containers (Ruter, 2000a).
In a laboratory study, greater substrate temperatures were observed in plastic, bioplastic (Terra Shell/OP47) and rice hull containers compared with lower heat buildup in decomposed cow manure (CowPot), wood fiber (Fertil), coir, peat (Jiffy-Pot), rice straw, and slotted rice hull containers due to the increased evaporative cooling and the fact that fiber containers reflect more light compared with standard plastic containers. Use of fiber containers may be a management tool for growing plants that are particularly temperature-sensitive.

CONTAINER NUTRIENT SOURCE. The aforementioned lack of consistency in growth may be partly due to the capacity of containers from different materials to affect the plant nutrient status by supplying different levels of nitrogen during production. For example, Schrader et al. (2013b) found that containers composed exclusively of soybean-based bioplastic supplied excess nitrogen, whereas those composed of a blend of PLA and soy-based plastic added nitrogen at a desirable rate. Additionally, Schrader et al. (2013b) found that removing, crushing, and placing the container in the ground near the root zone at transplant increased fruit production as well as shoot dry weight, shoot volume, and plant quality index compared with polypropylene plastic. More research is needed to determine the influence of parent container material on plant growth, plant nutrient status, and fertilization requirements during and postproduction.

CONTAINER INTEGRITY AND APPEARANCE. Alternative containers vary in both integrity and strength during production and marketing. Greenhouse managers have reported loss of saleable products and potential injury liability as some biocontainers are easily broken during shipping or by mechanized systems. The physical strength of peat and cow manure containers indicates that some biocontainers may tear or break during greenhouse production, packaging, shipping, and retailing, especially when wet (Koeser et al., 2013a), necessitating more careful handling until they are installed in the landscape. Evans et al. (2010) found rice hull, coir, and recycled paper containers have the greatest wet and dry vertical and lateral strength among biocontainers tested, similar to those of plastic containers, and had no algal or fungal growth on container walls. Porous rice straw containers and bioplastic (Terra Shell/OP47) with a thin wall had the lowest dry punch strengths. Containers composed of fiber, composted manure (CowPot), or peat (Jiffy-Pot) had low wet vertical strength and intermediate dry vertical strength. Poor wet strength was reported for wood fiber (Fertil), peat (Jiffy-Pot) and composted manure pot (CowPot). Containers with sidewalls that absorb moisture can soften and develop algal and fungal growth and a subsequent reduction in strength (Evans and Karcher, 2004). In a 14-week greenhouse evaluation of 'Eckespoint Classic Red' poinsettia production, color and integrity of plastic, rice hull, wheat starch (TerraShell/OP47), and recycled paper (Western Pulp) containers remained unchanged, whereas plants with acceptable quality grown in peat (Jiffy-Pot) and composted cow manure (CowPot) containers were not marketable due to loss of container integrity or mold and/or algal growth creating a poor appearance (Lopez and Camberato, 2011). Bioplastic, solid rice hull, and slotted rice hull containers were good plastic alternatives in a 15-week greenhouse study with 'Rainier Purple' cyclamen (Beeks and Evans, 2013b).
During this research, these containers had similar irrigation requirements, retained high levels of punch and tensile strength, and supported no microorganism growth. However, peat, cow manure, wood fiber, and rice straw containers were not acceptable replacements for plastic containers due to substantial microorganism growth on the containers, weak container strength at the end of the production period, and more frequent irrigation requirements when compared with plastic containers (Beeks and Evans, 2013b). Growers and consumers have to consider that the same physical characteristics that promote degradation in the landscape or during composting could also contribute to premature degradation during production, transportation, and/or point of purchase, hampering efficient handling during production and retail sales. Some alternative container types may be more suited to long-term production crops, while others may be better suited to short-term greenhouse production.

BIODEGRADABILITY POST-TRANSPLANTING. Biodegradability of containers in soil depends on soil organic matter content, rooting pattern of the crop, weather, cultural practices, and the carbon-nitrogen ratio of the containers (unpublished data). Plantable containers reduce waste and increase labor efficiency during installation. As previously mentioned, degradation before planting can be detrimental, yet porosity and the ability to degrade in soil are essential characteristics that support use of alternative containers in landscapes. In a landscape trial conducted at multiple locations, none of the five plantable biocontainer types used were completely degraded 8 weeks after planting; composted manure (CowPot), which has high cellulose and nitrogen content, had the greatest container decomposition, whereas peat (Jiffy-Pot) and rice straw containers showed moderate degradation, and coir containers had the lowest level of decomposition, likely due to their high lignin content (Evans et al., 2010). Similarly, a field study using tomato transplants (Vista, CA vicinity) found faster degradation of composted cow manure (CowPot) and wood fiber plus peat (Fertil Pot/DOTPots) compared with recycled paper and coir containers (CAHR, 2009). For landscape beds replanted each season, these data suggest that certain types of containers would need to be removed or manually broken apart and incorporated into the soil before the bed can be replanted. Slow container degradation posttransplanting could cause root circling, leading to restricted water and nutrient movement and ability to adequately anchor (Appleton, 1993). More research is required to develop standards for 'biodegradability' of alternative containers.

LIFESPAN. Container life span can vary from a few months to several years to match the crop production cycle. One economic consideration with many alternative containers is the inability to reuse them, either because they are designed to be planted with the crop or because they will degrade substantially during one production cycle. This may increase production costs for some growers. Studies are ongoing to extend the lifespan of biocontainers using various natural or synthetic adhesives, resins, waxes, and binding agents that determine the rate of container biodegradability or compostability (Schrader et al., 2013a).
In general, biocontainers designed for short-term crops such as vegetables, herbs, and seasonal flowering plants must last a few months, whereas nursery containers must last from one to three years and usually are not quickly biodegradable, but may be compostable. Marketing. Biocontainers can be considerably more expensive, typically 10% to 40% more than their plastic counterparts (Robinson, 2008). This increased cost means that growers must be able to realize a premium price for plants grown in biocontainers or reduce production costs for the system to be economically viable. Hall et al. (2010) reported an increased customer demand for biodegradable containers compared with the traditional plastic container. Another study determined the willingness of consumers to pay more for biodegradable containers using experimental auctions in which consumers made purchases (Yue et al., 2010a). This approach allowed researchers to determine what consumers will actually do compared with what they say they will do on a survey. The results revealed that consumers are willing to pay $0.58 more for a chrysanthemum (Dendranthema ×morifolium) in a 4-inch rice hull container, $0.37 more for a straw container, and $0.23 more for a bioplastic container than for a chrysanthemum in a traditional black plastic container. During the 2010 National Poinsettia Cultivar Trials at Purdue University (West Lafayette, IN), consumers were willing to pay $0.50 or $1 more for 'Eckespoint Classic Red' poinsettias grown in hard rice hull, Terra Shell/OP47, recycled paper, and coir fiber containers than for those grown in plastic containers (Camberato and Lopez, 2010). Environmentally friendly packaging was found to increase the likelihood of purchasing fresh-cut flowers and floral products (Rihn et al., 2011). These studies reflect a potential area for improving marketing and sales of nursery and floriculture products. Future prospects. The limited supply of petrochemicals for conventional plastic containers and the increasing worldwide demand for petroleum will continue to dictate greenhouse and nursery container prices (U.S. Energy Information Administration, 2013). Additionally, consumers are becoming increasingly aware of and interested in the green industry's impact on the environment. Therefore, economic and social pressure to reduce plastic use and increase sustainable production practices will only increase. The green industry must consider greater reuse and recycling of plastic products as well as containers made of alternative materials to satisfy the demands of their businesses and customers. Growers and landscapers must evaluate the compatibility of the entire production system, from planting and irrigation to harvesting, transportation, and marketing, as well as crop species, business location, level of mechanization, and many other factors, to successfully integrate a new container type. Identification of container types suited to crop cycles of varying duration (i.e., short-term, long-term) is needed. The economic and environmental viability of alternative containers, including the carbon and water footprints associated with manufacture, transport, and use of these new products, is not yet fully understood. The environmental benefits of using alternative containers must be weighed against potential challenges and associated losses incurred due to the decrease in container integrity over time as well as other increased costs (e.g., increased water usage and energy requirements of industrial composting). 
Recently, alternative containers impregnated with various components such as natural color, slow release fertilizers, fungicides, insecticides, and plant growth regulators that are released during plant growth are gaining entry to the market and could enhance production system efficiency. Members of the green industry, allied industries, and researchers must continue to work together to develop and fine-tune sustainable alternative containers that are compatible with current production practices and are economically feasible.
2019-03-30T13:13:15.303Z
2015-02-01T00:00:00.000
{ "year": 2015, "sha1": "709ea9e283d66337a11c0f8b3df2a4dc2f44425e", "oa_license": null, "oa_url": "https://journals.ashs.org/downloadpdf/journals/horttech/25/1/article-p8.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "739c21379b682c37afe0aef08c3d9d2d3c61f227", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Business" ] }
56082752
pes2o/s2orc
v3-fos-license
The Tibial Nerve and Its Vasculature: An Anatomical Evaluation The study has contributed to an anatomical evaluation of the tibial nerve and its vasculature. Ten preserved cadavers (5 male, 5 female) were used for this study. Each cadaver was injected with red latex and, through incisions, the tibial nerve was exposed at the level of bifurcation of the sciatic nerve. The tibial nerve in 85 % of cadavers was located between the middle and lower thirds of the thigh at the upper angle of the popliteal fossa, whereas in 15 % of cadavers it was present below the piriformis muscle in the gluteal region. The total length of the tibial nerve was at a mean of 65.26 ± 14.42 cm in males and 64.79 ± 67.61 cm in females, without a significant difference. Its total diameter was at a mean of 5.51 ± 1.55 mm at its origin, with a mean of 4.11 ± 0.88 mm at the popliteal fossa and a mean of 3.24 ± 0.81 mm at its termination deep to the flexor retinaculum in male cadavers. In females, the means were 5.11 ± 0.21 mm, 3.97 ± 1.78 mm, and 3.14 ± 0.03 mm, respectively, without a significant difference. It was concluded that the tibial nerve has a sufficient and good blood supply. Moreover, it can be utilized as an allogeneic vascularized nerve graft to repair sizable nerves after limb salvage. INTRODUCTION Morphology is a diversified sub-classification of science, which includes the structural study of different organisms and their specific features. Moreover, morphology also deals with the relationship between features that are associated with each other in the development of systematic structures. The main contribution of this study is in the domain of anatomical evaluation. The study has mainly focused on the anatomic evaluation of the tibial nerve, which would be helpful for professionals developing differential techniques for grafting. Thus, the study has aimed to evaluate the tibial nerve and its vasculature anatomically. The tibial nerve is considered the largest terminal branch of the sciatic nerve, derived from the ventral branches of the 4th and 5th lumbar and 1st to 3rd sacral ventral rami. It descends along the back of the thigh and the popliteal fossa to the distal border of the popliteus muscle (Ndiaye et al., 2003; Standring, 2005). It further passes deep to the gastrocnemius and soleus muscles, and then anterior to the arch of the soleus muscle with the popliteal artery. Furthermore, it runs into the leg, where it descends with the posterior tibial vessels to end deep to the flexor retinaculum by dividing into the medial and lateral plantar nerves. Sometimes, the tibial nerve is given the name of the posterior tibial nerve at the lower border of the popliteus muscle (Standring). In 90 % of cases, its bifurcation was recorded deep to the flexor retinaculum 1 cm from the line between the medial malleolus and the medial calcaneal tubercle (Ndiaye et al.). The tibial nerve becomes superficial in the distal part of the leg, covered only by skin and fascia, supplying the muscles of the flexor compartment of the leg (Apaydin et al., 2008). 
The tibial nerve can be subjected to stretch during lower limb movements or different limb positions, especially ankle joint dorsiflexion and inversion of the foot. Consequently, the nerve has to adapt itself to such changes through its mechanical properties. It can adapt to repeated forces and can also slide in relation to the surrounding tissues (Shacklock, 2005; Apaydin et al.). Forces and stretch can jeopardize the blood supply to the nerve, leading to ischemia that can affect nerve function. The tibial nerve has its own clinical importance. Its branches to both heads of the gastrocnemius and to the posterior soleus muscle were suggested for use as donors to restore the function of the deep fibular nerve in cases of high sciatic nerve injury (Flores, 2009). The anatomy of the tibial nerve is important for successful clinical management. It was reported that percutaneous tibial nerve stimulation can improve the sexual function of females with an overactive urinary bladder. Transcutaneous stimulation of the posterior tibial nerve (the name given to the tibial nerve distal to the lower border of the popliteus muscle) was also reported to improve the condition of overactive bladder in children (Patidar et al., 2015), including refractory cases (Boudaoud et al., 2015). The endoneurial microenvironment and the internal milieu include the endoneurial blood flow, which represents a cornerstone of nerve function and of regenerative power after injury (Yasuda & Dyck, 1987). Peripheral nerves receive their blood supply from regional arteries. Thus, involvement of the feeding blood vessels is of great importance in different kinds of neuropathies (Bradley et al., 2000; Kogawa et al., 2000). It was also reported that vascularized nerve grafts have better regenerative power and a shorter recovery time (Schupeck et al., 1989). The size, length, and group of muscles supplied by a peripheral nerve, together with its vascularity, can affect the choice of a suitable nerve for vascularized nerve grafts. Therefore, the aim of the study was to examine some morphometric measures of the tibial nerve together with its detailed arterial supply, thus helping its clinical applications and its possible use as a vascularized nerve graft. MATERIAL AND METHOD Twenty lower limbs from 10 preserved plastinated cadavers (5 males and 5 females) were used for this study. The research followed the regulations of the ethical committee of the Faculty of Medicine, Umm Al-Qura University, which follows the international ethical rules for research on human cadavers. The external iliac artery of each cadaver was injected with red latex to visualize the arterial tree of the lower limbs. A longitudinal incision was made on the posterior surface of each limb to dissect the tibial nerve from its origin to its termination. The back of the thigh, the popliteal fossa, and the back of the lower leg were dissected. The skin and subcutaneous tissue were removed. The tibial nerve was exposed at the level of bifurcation of the sciatic nerve and at its terminal division deep to the flexor retinaculum, between the medial malleolus and the medial process of the calcaneal tuberosity. 
Quantitative and qualitative measurements. The following measurements were made on each dissected nerve (in the male and female cadavers): a) the distance from the tibial nerve origin (at the end of the sciatic nerve) to its terminal division deep to the flexor retinaculum; the length of the tibial nerve was measured with a standard metric tape; b) the diameter and thickness of the tibial nerve at the site of the bifurcation, at the popliteal fossa, and at the site of terminal division; the diameter and thickness of the tibial nerve at the three positions were measured with a Vernier Swiss digital caliper with 0.05 accuracy; c) the number of vascular pedicles to the tibial nerve and the source of each were also identified. Statistical analysis. The means and standard deviations of all measurements were calculated. The measurements were statistically compared using the t-test with a significance level of p ≤ 0.05, and the results, tables, and histograms were developed. RESULTS The morphometric measurements. The tibial nerve in 17 dissected limbs (85 %) originated in the thigh between the middle and lower thirds at the upper angle of the popliteal fossa (Fig. 1A). It emerged below the piriformis muscle in the gluteal region in three dissected limbs (15 %) (Fig. 1C). The total length of the tibial nerve from its origin (bifurcation of the sciatic nerve) (Fig. 1) to its termination deep to the flexor retinaculum (Fig. 2) was at a mean of 65.26 ± 14.42 cm (range 55.1 to 72.01 cm) in males and 64.79 ± 67.61 cm (range 55.7 to 70.04 cm) in females; the two were not significantly different (p < 0.9822). The total diameter of the tibial nerve in male cadavers was at a mean of 5.51 ± 1.55 mm (range 5.40 to 5.55) at its origin, at a mean of 4.11 ± 0.88 mm (range 4.01 to 4.2 mm) at the popliteal fossa, and at a mean of 3.24 ± 0.81 mm (range 3.1 to 3.34 mm) at its termination deep to the flexor retinaculum. The total diameter of the tibial nerve in female cadavers was at a mean of 5.11 ± 0.21 mm (range 4.90 to 5.23) at its origin, at a mean of 3.97 ± 1.78 mm (range 3.43 to 4.10 mm) at the popliteal fossa, and at a mean of 3.14 ± 0.03 mm (range 2.98 to 2.32 mm) at its termination deep to the flexor retinaculum. There were no significant differences between the measurements in males and females (p < 0.4388, p < 0.827, and p < 0.336, respectively), as shown in Table II. There was no significant difference between the total diameters on the right and the left sides in either males or females (p < 0.9648 and p < 0.3232, respectively). Also, there was no significant difference between the right and left sides in either the males or the females (p < 0.5429 and p < 0.6714, respectively) (Table II). Table I. Diameter of the tibial nerve at different sites in mm (mean and standard deviation) in the male and female cadavers. The blood supply of the posterior tibial nerve. The inferior gluteal and first perforator arteries: the tibial portion of the sciatic nerve received arterial supply from the inferior gluteal artery through its branch that accompanied the posterior cutaneous nerve of the thigh (Fig. 2A), and from the descending branch of the first perforator branch of the deep artery of the thigh (arteria profunda femoris) (Fig. 2B). The popliteal artery: at the popliteal fossa, the popliteal artery gave off the medial sural artery, which sent an arterial pedicle to the tibial nerve. It was accompanied by two venae comitantes that drained into the venae comitantes accompanying the medial sural artery (Fig. 3A). The popliteal artery also gave off a branch that ramified on the tibial nerve (Fig. 3B). 
The fibular artery: it had a mean diameter of 3.38 ± 0.1 mm (ranging from 2.5 to 4.5 mm) at its origin from the posterior tibial artery. It gave arterial pedicles to the tibial nerve at a mean of 4 ± 0.1 (ranging from 3 to 6). The mean diameter of these pedicles was 0.6 mm (ranging from 0.5 to 0.7). Some of these pedicles reached the nerve via a mesentery-like structure in the outer sheath of the tibial nerve. They divided into ascending and descending branches that shared with other pedicles in the formation of a central longitudinal artery (Figs. 4, 5B). Fascicular branches from the central artery passed to supply the different nerve fascicles inside the tibial nerve. Other branches passed directly into the nerve to join the central tortuous artery (Fig. 5C). The posterior tibial artery: it had a mean diameter of 4.24 ± 0.2 mm (ranging from 3.5 to 5 mm) at its origin from the popliteal artery. It gave arterial pedicles to the tibial nerve at a mean of 3 (ranging from 2 to 5 pedicles) with a mean diameter of 0.7 mm (ranging from 0.5 to 0.8). These pedicles entered the nerve and joined the central longitudinal tortuous artery (Figs. 5A, 5C). The venous drainage of the tibial nerve. The feeding arterial pedicles to the nerve were accompanied by two venae comitantes that drained into the regional veins (Figs. 4, 5 and 6). DISCUSSION The posterior tibial nerve has various clinical applications. Its stimulation is effective in the treatment of non-neurogenic overactive bladder in children and women (Kummer et al., 1994). Similarly, it is also a useful and safer option in the management of neurogenic lower urinary tract dysfunction (Preyer et al., 2015). It is also recommended in the treatment of refractory cases of urinary dysfunction in children (Schneider et al., 2015), in urinary and fecal incontinence (Patidar et al.), and in the treatment of chronic anal fissures (Lecompte et al., 2015). The procedure proved to have fewer side effects, commonly pain at the site of stimulation. Percutaneous tibial nerve stimulation has been reported in the treatment of fecal incontinence (Manríquez et al., 2016), for the treatment of refractory spastic foot and its consequences (Ducic & Felder, 2012), and for suppression of irritation-induced bladder overactivity (Fouad, 2011; Tai et al., 2011). Tibial nerve block has been reported as a safe and effective method for controlling pain after outpatient surgery for hallux valgus (Burton et al.). The precise surface anatomy of the nerve in the popliteal fossa is of extreme importance for selective tibial nerve block, which is used for postoperative analgesia after total knee arthroplasty in combination with femoral nerve block. Tibial nerve decompression by release of the known anatomical compression points, the soleus arch and the tarsal tunnel, can be accomplished safely and effectively via minimized skin incisions (Martín et al.). 
The blood supply of the posterior tibial nerve is crucial for recovery following its decompression and for effective stimulation. Moreover, vascularized nerve grafts were found to result in rapid and sound nerve repair; in this respect, the length, diameter, and blood supply are extremely important. The current study showed that the total length of the tibial nerve was at a mean of 65.26 ± 14.42 cm in males and 64.79 ± 67.61 cm in females, without a significant difference between the sexes. Also, its diameter at its beginning, at the popliteal fossa, and at its end showed no significant differences between males and females. However, in both sexes, the diameter of the nerve progressively decreased as it passed distally: it was at a mean of 5.51 ± 1.55 mm at its origin and at a mean of 3.74 ± 0.81 mm at its end. The nerve received different arterial pedicles from the popliteal artery in the popliteal fossa, and from the fibular and posterior tibial arteries at the back of the leg. These arterial pedicles shared in a continuous central tortuous artery inside the nerve among its fascicles. Such an arrangement could be useful to maintain the arterial supply to the tibial nerve in different movements and positions. It also explains its adaptation to stretch during different movements of the ankle, or even to compression (Apaydin et al.). Moreover, it is helpful for obtaining vascularized tibial nerve grafts based on one large arterial pedicle when needed. The fibular artery or the posterior tibial artery (mean diameters of 3.38 ± 0.1 mm and 4.24 ± 0.2 mm, respectively) could be elevated with a segment of the nerve and transferred to distant areas as a free vascularized nerve graft. The presence of the central artery was helpful in maintaining the blood supply of the free segment of nerve. Moreover, a vascularized peroneus brevis or longus muscle graft with removal of the fibular artery did not affect the vascularity of the tibial nerve, since feeding vessels from the posterior tibial artery maintain the central artery within the tibial nerve and thus its blood supply. Occlusion of one of these pedicles during walking, running, or sitting therefore should not severely affect the vascularity of the nerve. The large diameter and reasonable length at the back of the leg, together with the rich arterial supply and the presence of a large feeding artery, can be a reason for successful free vascularized tibial nerve grafts. The arterial pedicles were found during the current study to be more numerous at the back of the leg distal to the popliteal fossa, where they originated from the fibular and posterior tibial arteries. Elevation of a vascularized segment of the tibial nerve in the leg, distal to the popliteal fossa, would not seriously affect the movements of the leg: the extensor and fibular (lateral) compartments of the leg remain intact, together with the gracilis, popliteus, and soleus muscles, and movements of the ankle joint might not be seriously affected (Ahmad et al., 2012). Tibial neurotomy has also been advised as a treatment of lower limb spasticity (Buffenoir et al., 2004). 
The vasa nervorum originate from different anastomoses and sources, which are mutually held in connective tissue sheaths with regard to nerves and arteries, as well as within nerve fascicles and between the nerve fibers. There is a level of overlap among vascular territories that allows preservation of the blood supply and collateral circulation in situations where one or more regional vasa nervorum are interrupted. This pattern of vascularization is present in the major trunks of the tibial and sciatic nerves. Two to six arteries are needed to form the extraneural arterial chain of the sciatic nerve, which pass to it from neighboring arteries (the perforating, popliteal, inferior gluteal, and medial circumflex femoral arteries) at certain levels (Ugrenovic et al., 2013).
Fig. 1. Sciatic nerve and tibial nerve. A: bifurcation (arrowhead) of the sciatic nerve (Sc) at the middle of the thigh into the common fibular nerve (CFN) and the tibial nerve (TN). B: the tibial nerve from its origin (arrowhead) to the popliteal region (arrow). C: high bifurcation of the sciatic nerve. P < 0.9999, insignificant changes.
Fig. 3. Tibial nerve and arterial pedicle. A: arterial pedicle (arrows) to the tibial nerve (TN) from the medial sural artery (MSA), a branch of the popliteal artery (POA). B: arterial pedicle (arrowhead) from branches of the popliteal artery. MH, LH: medial and lateral heads of gastrocnemius; PV: popliteal vein.
Fig. 4. Arterial pedicle and fibular artery. B is a magnified portion of A: an arterial pedicle (arrow) from the fibular artery (FA) gives ascending (a) and descending (b) branches in a mesentery-like structure (me) in the outer sheath of the tibial nerve (TN); these branches form a central artery (CA) inside the nerve, and another feeding pedicle (arrowhead) from the fibular artery joins the central longitudinal artery (CA). FHL: flexor hallucis longus; v1 and v2: venae comitantes.
Fig. 6. Diagrammatic illustration of the arterial supply of the tibial portion of the sciatic nerve and the tibial nerve. AP: arterial pedicle; CFN: common fibular nerve; LPN: lateral plantar nerve; MPA: medial plantar artery; a and b: ascending and descending branches.
Fig. 7. The central artery of the tibial nerve is shared by the feeding pedicles.
Fig. 8. Differences in the diameter of the tibial nerve (in mm) in males and females at different sites.
Table II. Diameter of the tibial nerve at different sites in mm between the right and left tibial nerves (mean and standard deviation).
2018-12-10T04:32:46.683Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "05f582b512f7189117769e71832578d5b6ecc52b", "oa_license": "CCBYNC", "oa_url": "https://scielo.conicyt.cl/pdf/ijmorphol/v35n3/art04.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "05f582b512f7189117769e71832578d5b6ecc52b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
237553817
pes2o/s2orc
v3-fos-license
Statistical Perspective on Hyper Spectral Classification Systems for Accuracy Improvement Image classification to identify the label of each pixel is the essential step [1]. Even though a large number of hyperspectral image classification methods have been analyzed, there are still issues related to training samples. Classification accuracy is one of the most critical parameters analyzed in various articles using training samples, since it is well known that training samples are very limited in number for HSI classification. Unsupervised classification also remains controversial and is the main challenge, since it leads to challenging and complicated high-dimensional data observations that call for a richer combination of spectral information [2]. Image classification has been a topic of research for more than two decades now; it includes multiple levels of processing for the input image. These levels are depicted with the help of Figure 1, where the input images can be normal images, hyperspectral images, or military images; the flow always remains constant. The images are collected and labeled according to the classes needed at the output. For example, for crop classification we need the output to contain classes like cotton crop, wheat crop, and bajra crop [3], among other types, so we collect images and label them with the given classes. This step is critical and defines the accuracy of the overall classification process; a thoroughly selected dataset ensures better classification results. The collected images are then given to a pre-processing and noise removal block, where the images are cleaned of any noise and processed so that they are ready for feature extraction [4]. In this paper, we have compared various algorithms for the hyperspectral image classification system and identified the optimum algorithms for a given application. The next section describes the algorithms in brief, followed by a comparison of results between the algorithms. Finally, we conclude the paper with some interesting observations about the compared algorithms and propose future work which researchers can perform in order to further analyze these algorithms. 
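As a rough illustration of the generic flow just described (labelled data, preprocessing, feature extraction, classification), the following minimal per-pixel sketch uses scikit-learn; the array shapes, band count, and class labels are placeholder assumptions, not taken from any of the surveyed works.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 1000 labelled pixels, each a 176-band spectrum, 5 crop classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 176))       # spectral feature vector per pixel
y = rng.integers(0, 5, size=1000)      # class label per pixel (e.g., cotton, wheat, bajra, ...)

# Preprocessing (scaling), feature extraction (PCA), and classification (SVM) chained together.
pipeline = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf"))
pipeline.fit(X, y)
print("training accuracy:", pipeline.score(X, y))
```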
II. LITERATURE SURVEY Remote sensing or hyperspectral image classification has limited training datasets; the ones which are used are listed in Table I. There are other datasets as well, but some parameters of these are still unclear, so the word "not-sure" is used to replace the unknown values in the table. Table 1. Datasets used for evaluation. A case study defining the use of deep learning algorithms for the classification of hyperspectral images is presented in [1]. In [1], researchers Yabin Hu, Jie Zhang, Yi Ma, Xiaomin Li, Qinpei Sun and Jubai-An have proposed a deep convolutional neural network (DCNN) for classifying Huanghe (Yellow) River Estuary coastal wetland images. These images were taken in real time, and both spectral and textural features were selected for classification. Classes like Reed, Tamarix, Spartina, Water, Tidal flat, Farmland and OCA [1] were selected. The accuracies of SVM-linear, SVM-polynomial, SVM-RBF, SVM-sigmoid, and the proposed DCNN were compared. It was found that the proposed DCNN algorithm outperforms the other algorithms in terms of core accuracy by more than 8%, and thereby can be used for real-time hyperspectral classification applications. The kappa coefficient (which is a measure of an algorithm's effectiveness) is also evaluated, and the results indicate that the proposed DCNN is at least 10% better than the others in terms of kappa. While DCNN is found to be superior to SVM, the work done by Yanhui Guo, Xijie Yin, Xuechen Zhao, Dongxin Yang and Yu Bai in [2] uses SVM with a guided filter to improve the classification performance. The guided filter acts as a feature improvement algorithm and helps in describing the images with better accuracy, thereby improving the effectiveness of the algorithm. In their work [2], the researchers have compared the results of SVM, SVM-EPF, Co-SVM, Co-SVM-EPF, GF-SVM, and the proposed GF-SVM-EPF. They found that the proposed GF-SVM-EPF outperforms the other algorithms by at least 6%. A comparison of GF-SVM-EPF with DCNN has not been done, which can be an interesting research direction to be pursued by any reader of this text. DCNN is a variant of CNN. In [3], the researchers Hongmin Gao, Yao Yang, Chenming Li, Xiaoke Zhang, Jia Zhao and Dan Yao have proposed the use of simple small CNNs for spectral-spatial classification of multispectral and hyperspectral images. They have proposed a small-scale architecture for the classification of these images. Using their architecture, the images are divided into different sectors, and each sector is able to perform one task very precisely. For example, the first sector is for pre-processing of images using Gaussian filters; this section performs the task and makes sure that all images are properly processed using the filter. Similarly, there are multiple such sectors, each of which performs a small but effective task for hyperspectral classification. They have compared six different CNN architectures and found that their proposed architecture gives better accuracy than the others. 
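For concreteness, a minimal patch-based CNN of the small, sector-by-sector kind discussed above could look like the PyTorch sketch below; the layer widths, the 5×5 patch size, and the 176-band / 13-class setting (roughly the KSC dataset used later in the paper) are illustrative assumptions, not the architectures evaluated in [1] or [3].

```python
import torch
import torch.nn as nn

class SmallPatchCNN(nn.Module):
    """Tiny spectral-spatial CNN that classifies band-stacked image patches."""
    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # pool the spatial dimensions to 1x1
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                            # x: (batch, bands, patch, patch)
        return self.classifier(self.features(x).flatten(1))

# Assumed setting: 5x5 patches, 176 spectral bands, 13 land-cover classes.
model = SmallPatchCNN(n_bands=176, n_classes=13)
logits = model(torch.randn(8, 176, 5, 5))            # -> (8, 13) class scores
```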
The CNN proposed in [3] can be combined with deep CNNs to further optimize accuracy. CNN, SVM, and deep CNN are the main classes of learning algorithms considered here. The research done in [4] evaluates different algorithms for deep learning-based hyperspectral classification. The authors have compared SVM, EMP, JSR, EPF, 3D CNN, CNN-PPF, Gabor-CNN, S-CNN, 3D GAN, and DFFN models in order to identify the best working algorithm. From their extensive research, it is found that the DFFN (deep feed-forward network) can be a good option for classification in the hyperspectral space. Their analysis is done on more than 10 classes and can thus be considered a good starting point for study by any researcher. Similar to DFFN, the work done in [5] uses cascaded recurrent neural networks for the classification of hyperspectral images. They use the concept of adding multiple networks together in order to perform classification. From their study, the combination of 10 recurrent neural networks with a loss function and a sum operator is enough to obtain accuracies in the range of 90% to 95%. They have compared the accuracy rates of CasRNN, CasRNN-F, CasRNN-O, and SSCasRNN, and found that the proposed SSCasRNN [5] method is excellent at performing the classification tasks. It outperforms the other algorithms by more than 15% in terms of core accuracy. SVMs are nonparametric statistical approaches for addressing supervised classification and regression problems. Accordingly, no assumption is made about the underlying data distribution. The mathematical foundation of SVMs can be found in [6], [7], and [8]. In the original formulation of SVMs, the method is given a set of data samples, and the SVM training algorithm aims to determine a hyperplane that separates the data set into a discrete predefined number of classes in a manner consistent with the training examples [9]. The term optimal separating hyperplane is used to refer to the decision boundary that minimizes the misclassification achieved during the training stage. Learning refers to finding an optimal decision boundary that separates the training samples and then separates the test data under the same arrangement [10]. A detailed description of the SVM algorithm as a tool for pattern recognition can be reviewed in [11] and [12]. The essential part of any kernel-based method, including SVMs, is the proper definition of a kernel function that accurately reflects the similarity among samples. Some commonly used kernels for generating different SVMs and other kernel-based classifiers satisfying Mercer's condition [13] are the linear kernel, the polynomial kernel, the radial basis function (RBF) kernel, and the sigmoid kernel, among others. 
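The kernels listed above can be compared directly; the minimal scikit-learn sketch below does so with synthetic stand-in data rather than an actual hyperspectral scene, so the numbers it prints carry no meaning beyond illustrating the procedure.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 50))          # stand-in feature vectors
y = rng.integers(0, 4, size=600)        # stand-in class labels

# The commonly used Mercer kernels mentioned in the text.
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    scores = cross_val_score(SVC(kernel=kernel), X, y, cv=5)
    print(f"{kernel:8s} mean cross-validated accuracy = {scores.mean():.3f}")
```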
3D CNNs have made their mark in hyperspectral classification. The work done in [14] is a milestone in the research on hyperspectral classification, because the authors have been able to successfully apply a 3D CNN model to the task of classification. They have further used transfer-learning mechanisms to improve the system performance. It is found that the proposed system is able to achieve more than 98% accuracy, which is a commendable number. Moreover, the algorithm is free from any bottlenecks and thus can be used for real-time applications. Another CNN design is presented in [15], wherein fully automatic classification is proposed. The authors have combined 1D CNNs with 3D CNNs in order to get the advantages of both architectures in terms of feature processing and classification, respectively. The resulting system is able to achieve more than 95% accuracy across multiple datasets. The results have been compared with RF-200, MLP, L-SVM, RBF-SVM, RNN, 1D CNN, 1D DCNN, and the proposed model; the proposed model outperforms all the other models by at least 5% in terms of core accuracy. Another interesting piece of work is done in [16], wherein researchers have used a discriminative compact representation for learning features and classifying hyperspectral images. The results show that boosting of the features is able to increase the accuracy of the classification system. Boosting belongs to a family of algorithms or procedures that are able to convert weak learners into strong learners. In general, a weak learner can be defined as a learner or model that is only slightly better than random guessing, whereas a strong learner's performance is close to the most accurate result. Boosting is a general technique for improving the performance of any learning method. In [17], a boosting scheme is proposed based on the idea that a weak learner can be boosted into a strong learner. Boosting is a forward additive model [18], and boosting uses the entire data set at each stage. This technique combines the outputs from numerous classifiers with the goal of creating a powerful ensemble of algorithms [19]. Random forest is one of the well-known ensemble classifiers and has gained much attention from researchers in the last decade. This ensemble method builds on the idea of multiple decision trees, each using a randomly chosen subset of the training data and variables [20]. Random forest has become a popular choice for image classification in the field of remote sensing since it delivers good classification accuracy. Random forest [21] has demonstrated its strength in various application areas [22][23][24]. The RF classifier is an ensemble of CARTs (classification and regression trees) used for the final prediction [25]. The CARTs are produced by drawing subsets of the training data through a bagging approach. This means that the same training sample may be used multiple times, while some samples may not be used even once. Around 70% of the samples are used for training the trees; these samples are also known as in-bag samples, and all remaining samples are known as out-of-bag samples. The out-of-bag samples are used in an internal cross-validation strategy to assess the performance of the resulting RF model; the corresponding error is referred to as the out-of-bag error. The method requires two parameters that should be set by the user: the first parameter is Ntree (the number of trees) and the second is Mtry (the number of features); every node in a tree is split using the Mtry parameter. RF creates trees that have low bias and high variance [26]. For the final classification, the class assignment probabilities determined by all trees in the forest are averaged [27]. Several studies in the literature have shown that classification accuracy is less sensitive to the parameter Ntree than to the other user-defined parameter Mtry [28]. RF is considered a computationally efficient classifier. Much research sets the value of the parameter Ntree to 500, because the error stabilizes at this value [29]. Nevertheless, many researchers in the literature have tested the performance of the RF classifier using different values of the Ntree parameter, such as 5000 [30], 1000 [31], or 100 [32]. 
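The two user-set parameters described above map directly onto scikit-learn's RandomForestClassifier: Ntree is n_estimators and Mtry is max_features (here the square root of the number of input variables), while the out-of-bag error is exposed via oob_score. The sketch below uses placeholder data and the commonly cited Ntree = 500.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 100))         # placeholder training samples
y = rng.integers(0, 6, size=800)        # placeholder class labels

rf = RandomForestClassifier(
    n_estimators=500,        # Ntree: number of trees
    max_features="sqrt",     # Mtry: square root of the number of input variables
    oob_score=True,          # evaluate on the out-of-bag samples
    random_state=0,
)
rf.fit(X, y)
print("out-of-bag error:", 1.0 - rf.oob_score_)
```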
However, some researchers have shown that the value of the Ntree parameter may be taken to be small compared with the above-mentioned value for a specific application and still achieve good classification results. The work in [33] uses RF to classify oil slicks from SAR data and concludes that a small number of trees (Ntree = 70) gives good classification results. On the other hand, the parameter Mtry is usually taken as the square root of the number of input variables [34]. In one study [35], the value of Mtry is taken as the total number of variables; however, this increases the computational complexity of the algorithm. Several investigations have shown that the RF classifier performs better than other classifiers such as linear discriminant analysis, artificial neural networks, binary hierarchical classifiers, and decision trees [37]. The support vector machine is a machine learning classifier that has delivered excellent results in terms of accuracy for various applications. Several investigations have shown that the performance of the RF classifier is close to that of the SVM [38], and RF produces good results for hyperspectral (high-dimensional) data. The work in [39] used the RF classifier for multi-scale object-based image analysis (MOBIA) on EO hyperspectral imagery and obtained very well classified images. Elhadim Adam compared the performance of the SVM and RF classifiers on RapidEye imagery and analyzed the importance of the different bands of the RapidEye satellite. On the other hand, some studies have reported that SVM gives a better classification in the field of object-based image analysis (OBIA). Baoxun Xu proposed an enhanced version of the RF classifier and claimed that it gives better results than the original RF method. Image classification is also conducted using a supervised feed-forward neural network trained with the backpropagation algorithm. Prior to training and classifying LU/LC from satellite images, a normalization procedure is applied to the training samples; this procedure avoids saturation during network propagation. In the BPNN, multiple hidden layers of the feed-forward network can be used, and the number of hidden layers can be changed as desired. The number of neurons in the output layer is equal to the number of classes (N), which depends on the coding followed for the output. The number of hidden-layer neurons is proposed according to several criteria, including that the number of hidden neurons should be in the range between the size of the input layer and the size of the output layer. Backpropagation is the most well-known technique that has been adapted to build the network model and to train the networks. Nowadays, there are other modern techniques for training, such as the conjugate gradient method and the Levenberg-Marquardt method. These methods have their own advantage, which is that they are faster. However, such an advantage holds only when the problem being solved by the neural network finds its solution through such a training process. This work favors the backpropagation algorithm over the modern algorithms because BPNN is a technique that works independently of any theoretical assumptions. That is to say, in contrast to other clever algorithms that work only occasionally, the backpropagation technique reliably works. 
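A minimal backpropagation-trained feed-forward network with the number of hidden neurons chosen between the input-layer and output-layer sizes, as suggested above, is sketched below with scikit-learn's MLPClassifier; all sizes and data are placeholder assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_inputs, n_classes = 64, 8             # assumed input feature size and number of classes
X = rng.normal(size=(1000, n_inputs))
y = rng.integers(0, n_classes, size=1000)

# Hidden layer size chosen between the input size (64) and the output size (8).
n_hidden = (n_inputs + n_classes) // 2
bpnn = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=500, random_state=0)
bpnn.fit(X, y)                           # weights fitted by gradient-based backpropagation
print("training accuracy:", bpnn.score(X, y))
```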
III. PROPOSED METHODOLOGY In our work, we address the limitations of previous work and present a comparative survey on HSI for improving classification accuracy using neural networks, considering the different problems faced by neural networks through a deep learning approach. Our significant contributions in the paper can be summarized as follows: (i) a discussion of the performance of various deep learning techniques such as CNN, ANN, SVM, and KNN; (ii) a classification of the different approaches; and (iii) the identification of specific gaps and research challenges regarding the present status of using neural networks. The main motive of this paper is to introduce a new algorithm on HSI data to achieve greater accuracy using a neural network. Figure 1. Flow of image classification. The processing includes image fusion, segmentation of the image, and any morphological structure operations on the image, among other steps which are usually application dependent. The processed image is then given to the feature extraction unit, where the features of the image are evaluated. Feature evaluation is another very critical step: it defines the accuracy with which features are evaluated for the image, and many methods, including Speeded Up Robust Features (SURF) and others, have been proposed specifically for hyperspectral image processing in order to have better feature extraction capability. Feature evaluation is usually accompanied by feature selection for large datasets, in order to remove any redundancy from the extracted features. After feature extraction, the classifier is trained with the input features. Training is done with the images extracted from the training set, while the actual classification is done by the evaluation block, where the trained classifier is used with the input features from the given image. The training and testing (evaluation) sets are decided based on the application; usually 70% of the data is used for training, while the remaining 30% of the data is used for evaluation. The evaluation process identifies the accuracy of the classifier used for the process, and can be used to re-train the algorithm in order to improve the accuracy based on the steps followed by the system. 
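The 70/30 training and evaluation protocol described above, together with the confusion-matrix-based indicators used in the next section, can be sketched as follows; the classifier, the 1200 samples, and the 13 classes are placeholder assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 80))          # placeholder feature vectors for 1200 samples
y = rng.integers(0, 13, size=1200)       # placeholder labels for 13 classes

# 70% of the data for training, the remaining 30% for evaluation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Confusion (error) matrix: rows are reference classes, columns are predicted classes.
cm = confusion_matrix(y_test, y_pred)
overall_accuracy = np.trace(cm) / cm.sum()
producer_accuracy = np.diag(cm) / cm.sum(axis=1)   # per class, relative to reference totals
user_accuracy = np.diag(cm) / cm.sum(axis=0)       # per class, relative to assigned totals
kappa = cohen_kappa_score(y_test, y_pred)
print(overall_accuracy, kappa)
```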
IV. RESULTS The results of the classification depend on the accuracy evaluation and the Kappa coefficient value. The accuracy level of the classification result for each classifier was determined by analyzing the confusion matrix, also called the error matrix. Besides this, several indicators are used to describe the classification results, such as overall accuracy, producer accuracy, user accuracy, and the Kappa coefficient value. Producer accuracy was calculated by dividing the number of correctly classified objects of a specific class by the actual number of reference data objects for that class, while user accuracy was determined by dividing the number of correctly classified objects of a specific class by the total number of objects assigned to that class. Producer accuracy thus measures the proportion of labeled objects in the reference data that were classified correctly. User accuracy, on the other hand, evaluates the proportion of objects assigned to a specific class that agree with the objects in the reference data; it indicates the probability that a specifically labeled object also belongs to that class in reality, and it reflects commission errors. The following table indicates the classification performance of the mentioned algorithms. All the algorithms were compared on the KSC dataset, with 1200 images for evaluation. The delay evaluated is the mean delay for classification of a single image with a 70% training and 30% testing dataset. It can be observed that DBN, random forest, and CNN outperform all other algorithms in terms of raw accuracy, but random forest and CNN have a high delay compared with DBN; thus, DBNs can be used for any real-time hyperspectral classification application requiring high accuracy and high speed. The other algorithms are good as well, but bagging, boosting, and AE are not advisable to use due to their low levels of accuracy and low kappa values. V. CONCLUSION From the results we can observe that deep belief networks are the most suitable option for hyperspectral image classification, followed by random forests and convolutional neural networks. Other algorithms like support vector machines and cascaded neural networks are also useful, but they do not reach the level of accuracy produced by the DBN, RF, or CNN algorithms, and thus should be used only for very high-speed applications where accuracy is not the primary concern and a moderate level of accuracy will suffice, such as land detection applications for town planning. Researchers can further check the performance of these algorithms on different datasets and examine the results in order to suit the application in use. Thus, from this review we conclude that deep learning-based algorithms provide better accuracy compared with their conventional counterparts. Combining more than one deep learning algorithm will generally be beneficial to system accuracy, but it will increase the computational complexity of the algorithm. Moreover, combining algorithms must always be done intelligently so that the shortcomings of one algorithm are covered by the other algorithm(s). Redundancy when combining algorithms must be reduced as much as possible. ACKNOWLEDGMENT I am very thankful to Dr. R. R. Sedamkar for his guidance and constant support in conducting this review. He is working as a Dean at Thakur College of Engineering and Technology, Mumbai.
2020-03-12T10:47:22.357Z
2020-02-29T00:00:00.000
{ "year": 2020, "sha1": "e56256cf1b1e1895bf79fc1ac5962c99a4c5d95c", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijeat.c5646.029320", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4a26898da96c806cc2d015e97a0dbabc3e1123e3", "s2fieldsofstudy": [ "Environmental Science", "Computer Science" ], "extfieldsofstudy": [] }
119739065
pes2o/s2orc
v3-fos-license
Deformation of Vect($\mathbb{R})$-Modules of Symbols We consider the action of the Lie algebra of polynomial vector fields, $\mathfrak{vect}(1)$, by the Lie derivative on the space of symbols $\mathcal{S}_\delta^n=\bigoplus_{j=0}^n \mathcal{F}_{\delta-j}$. We study deformations of this action. We exhibit explicit expressions of some 2-cocycles generating the second cohomology space $\mathrm{H}^2_{\rm diff}(\mathfrak{vect}(1),{\cal D}_{\nu,\mu})$ where ${\cal D}_{\nu,\mu}$ is the space of differential operators from $\mathcal{F}_\nu$ to $\mathcal{F}_\mu$. Necessary second-order integrability conditions of any infinitesimal deformations of $\mathcal{S}_\delta^n$ are given. We describe completely the formal deformations for some spaces $\mathcal{S}_\delta^n$ and we give concrete examples of non trivial deformations. Introduction Let vect(1) be the Lie algebra of polynomial vector fields on R. Consider the 1-parameter deformation of the vect(1)-action on the space R[x] of polynomial functions on R defined by: where X, f ∈ R[x] and X ′ := dX dx . Denote by F λ the vect(1)-module structure on R[x] defined by this action for a fixed λ. Geometrically, F λ is the space of polynomial weighted densities of weight λ on R: The space F λ coincides with the space of vector fields, functions and differential 1-forms for λ = −1, 0 and 1, respectively. Denote by D ν,µ := Hom diff (F ν , F µ ) the vect(1)-module of linear differential operators with the vect(1)-action given by the formula Each module D ν,µ has a natural filtration by the order of differential operators; the graded module S ν,µ := grD ν,µ is called the space of symbols. The quotient-module D k ν,µ /D k−1 ν,µ is isomorphic to the module of weighted densities F µ−ν−k , the isomorphism is provided by the principal symbol map σ pr defined by: (see, e.g., [10]). As vect(1)-module, the space S ν,µ depends only on the difference δ = µ − ν, so that S ν,µ can be written as S δ , and we have as vect(1)-modules. The space of symbols of order ≤ n is The spaces D ν,µ and S δ are not isomorphic as vect(1)-modules: D ν,µ is a deformation of S δ in the sense of Richardson-Neijenhuis [14]. In the last two decades, deformations of various types of structures have assumed an ever increasing role in mathematics and physics. For each such deformation problem a goal is to determine if all related deformation obstructions vanish and many beautiful techniques been developed to determine when this is so. Deformations of Lie algebras with base and versal deformations were already considered by Fialowski in 1986 [6]. It was further developed, introducing a complete local algebra base (local means a commutative algebra which has a unique maximal ideal) by Fialowski in (1988) [7]. Also, in [7], the notion of miniversal (or formal versal) deformation was introduced in general, and it was proved that under some cohomology restrictions, a versal deformation exists. Later Fialowski and Fuchs, using this framework, gave a construction for versal deformation [8]. We use the framework of Fialowski [7] (see also [1] and [2]) and consider (multi-parameter) deformations over complete local algebras. We construct the miniversal deformation of this action and define the complete local algebra related to this deformation. According to Nijenhuis-Richardson [14], deformation theory of modules is closely related to the computation of cohomology. 
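The explicit expression of formula (1.1) is not reproduced above; the standard Lie derivative of a $\lambda$-density along the vector field $X\frac{d}{dx}$ is $L_X^\lambda f = Xf' + \lambda X'f$ (an assumption here), and the short SymPy sketch below checks, under that assumption, that the bracket relation $[L_X, L_Y] = L_{[X,Y]}$ holds on sample polynomials, i.e. that $\mathcal{F}_\lambda$ is indeed a vect(1)-module.

```python
import sympy as sp

x, lam = sp.symbols("x lambda")

def lie(X, f, weight):
    # Assumed formula (1.1): L_X f = X*f' + weight*X'*f on polynomial weight-densities.
    return sp.expand(X * sp.diff(f, x) + weight * sp.diff(X, x) * f)

X, Y = x**2, x**3                                            # polynomial vector fields X d/dx, Y d/dx
f = 1 + x + x**4                                             # a sample polynomial density
bracket = sp.expand(X * sp.diff(Y, x) - Y * sp.diff(X, x))   # [X d/dx, Y d/dx] = (XY' - YX') d/dx

lhs = sp.expand(lie(X, lie(Y, f, lam), lam) - lie(Y, lie(X, f, lam), lam))
rhs = lie(bracket, f, lam)
assert sp.simplify(lhs - rhs) == 0                           # [L_X, L_Y] f = L_[X,Y] f
```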
More precisely, given a Lie algebra g and a g-module V , the infinitesimal deformations of the g-module structure on V , i.e., deformations that are linear in the parameter of deformation, are related to H 1 (g, End(V )) . Denote D := D(n, δ) the vect(1)-module of differential operators on S n δ . The infinitesimal deformations of the vect(1)-module S n β are classified by the space where H i diff denotes the differential cohomology; that is, only cochains given by differential operators are considered. Feigin and Fuchs computed H 1 diff vect(1), D λ,λ ′ , see [5]. They showed that non-zero cohomology H 1 diff vect(1), D λ,λ ′ only appear for particular values of weights that we call resonant which satisfy λ ′ − λ ∈ N. Therefore, in formula (1.2), the summation ⊕ λ,k is over all λ and k satisfying 0 In this paper we study the deformations of the structure of vect(1)-module on the space of symbols S n δ . We give the second-order integrability conditions which are sufficient in some cases. We will use the framework of Fialowski [6,7] and Fialowski-Fuchs [5] (see also [1] and [2]) and consider (multi-parameter) deformations over complete local algebra base. For some examples, we will construct the miniversal deformation of this action and define the local algebra related to this deformation. The space H 1 diff (vect(1), D λ,λ+k ) was calculated in [5], and for space H 2 diff (vect(1), D λ,λ+k ) we can deduce the dimension from [5], see also [4]. We give explicit expressions of some 2-cocycles that span H 2 (vect(1), D λ,λ+k ). where (X, Y, Z) denotes the summands obtained from the two written ones by the cyclic permutation of the symbols X, Y, Z. The First Cohomology Space The first cohomology space H 1 diff (vect(1), D λ,λ+k ) was calculated by Feigin and Fuks in [5]. The result is as follows (1), D λ,λ+k ) has the following structure: otherwise. These cohomology spaces are spanned by the cohomology classes of the 1-cocycles, C λ,λ+k : vect(1) → D λ,λ+k , that are collected in the following table. We write, for X d dx ∈ vect(1) and The maps C λ,λ+j (X) are naturally extended to S n δ = n j=0 F δ−j . The Second Cohomology Space Let g a Lie algebra and V a g-module, the cup-product defined, for arbitrary linear maps C 1 , C 2 : g → End(V ), is defined by: Therefore, it is easy to check that for any two 1-cocycles C 1 and C 2 ∈ Z 1 (g, End(V )), the bilinear map [[C 1 , C 2 ]] is a 2-cocycle. Moreover, if one of the cocycles C 1 or C 2 is a 1-coboundary, then [[C 1 , C 2 ]] is a 2-coboundary. Therefore, we naturally deduce that the operation (2.4) defines a bilinear map: Thus, we can deduce the expressions of some 2-cocycles by computing the cup-products of 1-cocycles. That is especially important if we know the dimension of H 2 (g, End(V )). Besides, by direct computation, as before, we show that the cup-products ) can be spanned respectively by the cohomology classes of the nontrivial 2-cocycles Ω λ,λ+3 and Ω λ,λ+4 defined by ✷ Now, we consider the cohomology spaces H 2 diff (vect(1), D λ,λ+k ) for k = 5, 6. These spaces are generically trivial, but, for k = 5 and λ = −4, 0 or k = 6 and λ = a 1 , a 2 (where a 1 = − 5+ ), they are two dimensional. In the following proposition we exhibit a basis for each of them. 2), are respectively spanned by the cohomology classes of the nontrivial following 2cocycles: , , , Proof. The 2-cocycles Ω 0,5 and Ω 0,5 are defined as follows: By Lemma 2.4, it is easy to show that these 2-cocyles are nontrivial. 
Indeed, for instance, compering the term in f in both the expressions of Ω 0,5 and of ∂b 0,5 given in (2.8), we see obviously that Ω 0,5 can not be a coboundary. ) are respectively spanned by the cohomology classes of the nontrivial following 2-cocycles: , Here we omit the explicit expressions of these last 2-cocycles as they are too long. But, as before, by direct computation, we show that they are nontrivial. The General Framework In this section we define deformations of Lie algebra homomorphisms and introduce the notion of miniversal deformations over complete local algebras. Deformation theory of Lie algebra homomorphisms was first considered with only one-parameter of deformation [8,14,17]. Recently, deformations of Lie algebras with multi-parameters were intensively studied ( see, e.g., [1,2,15,16]). Here we give an outline of this theory. Infinitesimal deformations Let ρ 0 : g → End(V ) be an action of a Lie algebra g on a vector space V . When studying deformations of the g-action ρ 0 , one usually starts with infinitesimal deformations: where x, y ∈ g, is satisfied in order 1 in t if and only if C is a 1-cocycle. Moreover, two infinitesimal deformations ρ = ρ 0 + t C 1 , and ρ = ρ 0 + t C 2 , are equivalents if and only if C 1 − C 2 is a coboundary: where A ∈ End(V ) and ∂ stands for differential of cochains on g with values in End(V ). So, the space H 1 (g, End(V )) determines and classifies the infinitesimal deformations up to equivalence. (see, e.g., [9,14]). If H 1 (g, End(V )) is multi-dimensional, it is natural to consider multi-parameter deformations. More precisely, if dimH 1 (g, End(V )) = m, then choose 1-cocycles C 1 , . . . , C m representing a basis of H 1 (g, End(V )) and consider the infinitesimal deformation with independent parameters t 1 , . . . , t m . In our study, an infinitesimal deformation of the vect(1)-action on S n δ is of the form where L X is the Lie derivative of S n δ along the vector field X d dx defined by (1.1), and and where t λ,λ+j and t 0,1 are independent parameters, δ − λ ∈ N, δ − n ≤ λ, λ + j ≤ δ and the 1-cocycles C λ,λ+j and C 0,1 are defined in Table 1. We mention here that the term t 0,1 C 0,1 (X) don't appear in the expression of L Integrability conditions Consider the problem of integrability of infinitesimal deformations. Starting with the infinitesimal deformation (3.10), we look for a formal series where the highest-order terms ρ ijk , . . . are linear maps from g to End(V) such that satisfies the homomorphism condition in any order in t 1 , . . . , t m . However, quite often the above problem has no solution. Following [6,7] and [2], we will impose extra algebraic relations on the parameters t 1 , . . . , t m . Let R be an ideal in C[[t 1 , . . . , t m ]] generated by some set of relations, the quotient is a local algebra with unity, and one can speak about deformations with base A, see [6,7] for details. The map (3.14) sends g to End(V ) ⊗ A. Example 3.1. Consider the ideal R generated by all the quadratic monomials t i t j . In this case and any deformation is of the form (3.10). In this case any infinitesimal deformation becomes a deformation with the base A since t i t j = 0 in A, for all i, j = 1, . . . , m. Given an infinitesimal deformation (3.10), one can always consider it as a deformation with base (3.16). Our aim is to find A which is big as possible, or, equivalently, we look for relations on t 1 , . . . , t m which are necessary and sufficient for integrability ( cf. [1], [2]). 
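As a toy instance of the infinitesimal-deformation machinery above (and one that happens to integrate to all orders), take the assumed density action $L_X f = Xf' + \lambda X'f$ and the candidate map $C(X)f = X'f$: adding $tX'f$ simply shifts the weight $\lambda$ to $\lambda + t$, so the deformed map satisfies the homomorphism condition exactly. The SymPy sketch below checks this on sample polynomials; it is an illustrative assumption, not one of the paper's deformations of $\mathcal{S}^n_\delta$.

```python
import sympy as sp

x, lam, t = sp.symbols("x lambda t")

def lie(X, f, weight):
    # assumed Lie-derivative action on weight-densities: L_X f = X*f' + weight*X'*f
    return sp.expand(X * sp.diff(f, x) + weight * sp.diff(X, x) * f)

def rho(X, f):
    # infinitesimal deformation rho(X) = L_X + t*C(X), with the candidate map C(X) f = X'*f
    return sp.expand(lie(X, f, lam) + t * sp.diff(X, x) * f)

X, Y = x**2, x**3
f = 1 + x + x**4
bracket = sp.expand(X * sp.diff(Y, x) - Y * sp.diff(X, x))

# Homomorphism condition rho([X,Y]) = [rho(X), rho(Y)]: here it holds identically in t,
# because L_X + t*X'*(.) is exactly the action on (lambda + t)-densities.
defect = sp.expand(rho(X, rho(Y, f)) - rho(Y, rho(X, f)) - rho(bracket, f))
assert sp.simplify(defect) == 0
```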
Equivalence and the miniversal deformation. The notion of equivalence of deformations over commutative associative algebras has been considered in [8]. Here I denotes the unity of the algebra End(V) ⊗ A. The following notion of miniversal deformation is fundamental. It assigns to a g-module V a canonical commutative associative algebra A and a canonical deformation with base A. (ii) In the notation of (i), if A is infinitesimal, then ψ is unique. If ρ satisfies only condition (i), then it is called versal. The miniversal deformation corresponds to the smallest ideal R. We refer to [8] for a construction of miniversal deformations of Lie algebras and to [2] for miniversal deformations of g-modules. We show that, in this way, we obtain the second-order integrability conditions for k = 7 and for generic λ. Besides, we study, as before, the singular values of λ and then obtain the corresponding second-order integrability conditions. More precisely, the map B_{λ,λ+7} has the following form. Hereafter we omit the expressions of the maps b_{λ,λ+k} and ω^2_{λ,λ+k} as they are too long. 2) Now, for k = 8 and 2λ = −7 ± √39, the spaces H^2_diff(vect(1), D_{λ,λ+8}) are spanned by the cohomology classes of the 2-cocycle Ω_{λ,λ+8} = [[C_{λ+4,λ+8}, C_{λ,λ+4}]], and generically the expression of B_{λ,λ+8} involves only this cup-product. But, for singular values of λ, other cup-products appear in the expression of B_{λ,λ+8}; more precisely, we give the corresponding expression. 3) For k = 9, the maps B_{λ,λ+9} exist only for some singular values of λ. More precisely, we give the expression for λ = −4. Examples. The second-order conditions given in Theorem 4.4 are not, in general, sufficient, but they are in some cases. In this section we give examples of symbol spaces S^n_δ for which the corresponding second-order integrability conditions are also sufficient, and we then describe completely the formal deformations of these spaces. Finally, we consider an example for which the second-order integrability conditions are not sufficient, but we exhibit the higher-order integrability conditions and then describe, in this case as well, the formal deformations. Σ t_{λ,λ+j} C_{λ,λ+j}(X) = (t_{1,1} C_{1,1} + t_{1,3} C_{1,3} + t_{2,2} C_{2,2} + t_{3,3} C_{3,3})(X). (5.27) We have a unique equation as necessary integrability condition of this infinitesimal deformation. The following proposition shows that this condition is also sufficient. (5.34) In this case also these conditions are sufficient, and any formal deformation of S^3_3 is equivalent to an infinitesimal one satisfying (5.34). (5.35) Proposition 5.3. Any formal deformation of S^4_5 is equivalent to a polynomial one of degree ≤ 3, under the following third-order integrability condition. It is easy to see that, under the conditions (5.36) and (5.38), the right-hand side of (5.39) is identically zero. Thus, the solution L^(4) of (5.39) can be chosen identically zero. Choosing the higher-order terms L^(m) with m ≥ 5 also identically zero, one obviously obtains a deformation (which is of order 3 in t). Now, by studying the equations (5.36) and (5.38), we can see that, up to equivalence, the Lie derivative on S^4_5 admits a formal deformation with seven independent parameters; this deformation corresponds to the solution t_{i,i} = t_{j,j} of the equations (5.36) and (5.38). A great number of nontrivial deformations with k independent parameters can be constructed if k < 7; each deformation corresponds to a solution of equations (5.36) and (5.38). All these deformations are polynomial of order at most 3 in t. ✷ Remark 5.5.
In the previous four examples we obtain the same results if we substitute S^m_{λ+n} for S^m_n, where λ ∈ R^*_+.
2019-04-12T09:22:52.807Z
2007-02-22T00:00:00.000
{ "year": 2007, "sha1": "0dda3f75b03216458b16a4d10cb6eca7577cbf57", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.geomphys.2009.12.002", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "3923cd9d5982d723a1b940112c6a1298d82e35b5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
233468176
pes2o/s2orc
v3-fos-license
Relations between motor skills and language skills in toddlers and preschool-aged children The purpose of this longitudinal study is (1) to examine the relations between language and motor-life skills in toddlers and preschool-aged children (n = 646) in real-life situations; and (2) to explore how the level of motor-life skills in toddlers (2 years and 9 months, T1) is related to language skills at preschool age (4 years and 9 months, T2). Data were collected through structured observation during play and daily life activities (authentic assessment) by staff in Norwegian Early Childhood Education and Care institutions. The correlations between motor-life skills and language skills at T1 were significant but small (r = .12 to .29) and were somewhat stronger at T2 (r = .18 to .46). The correlation between motor-life skills at T1 and language skills at T2 (total score) was small (rho = .25) but significant. However, the subgroups with weak and strong motor-life skills at T1 differed significantly in language skills at T2 (effect size: .40). These findings support and complement previous research, which indicates significant relations between the level of motor-life skills in toddler age and language skills in preschool age. Introduction The importance of early language learning for the development of language and literacy in later years is frequently emphasised in the literature (see, for example, Aukrust, 2005; Kuhl, 2011; Rogde et al., 2016). The specific importance of speech-, oro- or verbal-motor skills for language development is widely accepted (Dodd & McIntosh, 2010; Hotulainen et al., 2010; Nip et al., 2011). However, the need for a deeper understanding of the relationship between general motor skills and language skills has been highlighted in the literature (Iverson, 2010; Leonard & Hill, 2014; Son & Meisels, 2006). We agree with Iverson's (2010) characterization of the relations between motor and language development as 'complex and multi-faceted rather than simple and directional' (p. 258). This study aims to examine the relations between motor-life skills and language skills in a group of children at ages 2 years and 9 months (2;9, T1) and 4 years and 9 months (4;9, T2) and to explore how the level of motor skills in toddlers is related to language skills in preschool. Beyond that, we provide a more in-depth investigation by scrutinising the various subdimensions that constitute the motor and language domains. Our particular focus is the opportunity for staff in Early Childhood Education and Care institutions (ECECs) to assess children's motor and language development from a functional angle by applying structured observation tools implemented by the staff at these institutions. The relations between motor and language skills There is some evidence that motor skills and experiences from motor activities are related to language already at an early age (Cameron et al., 2012; Dodd & McIntosh, 2010; Leonard & Hill, 2014; Oja & Jurimae, 2002; Webster et al., 2005). According to Iverson (2010), 'Studying the ways in which motor achievements contribute to the development of language may not only yield a more comprehensive picture of the emerging language system; it may also provide fundamental insights into the processes underlying this emergence' (p. 258). Ionescu and Ilie (2018) have recently shown that embodied learning processes in early language development may be superior to learning processes that do not involve the child's body.
One possible explanation for this is that the ability to (inter) act intentionally with the environment requires sensory, physical and motor skills that are continuously developing throughout childhood. For instance, one of the earliest motor milestones, the onset of walking, affects how young children share objects with their mothers and, in turn, mothers' verbal responses to their children (Karasik et al., 2014); this underscores the close, mutual relationship between motor behaviour and verbal and non-verbal communication. Becker, McClelland, Loprinzi, and Trost (2014) observed that a higher level of physically active play in preschool age was positively related to self-regulation, which in turn increased literacy and mathematics scores. This improvement only emerged when self-regulation also improved, not as a direct consequence of more play activity by itself. These findings suggest that the positive outcomes of physically active play on literacy and mathematical skills may be engendered by reinforcing and integrating executive functions such as memory, attention and inhibitory control. Movement training and physically active play stimulate the development of motor skills (Logan et al., 2012;Wick et al., 2017) and, through this, can contribute to strengthening children's language skills. The importance of physically active play is also supported from a neuroscientific perspective. Kuhl (2011) emphasises that language performance is strongly related to children's experiences and brain development. Although results from brain research are still mostly correlational, the connection can be considered 'potentially causal and […] further research will allow us to develop causal explanations' (Kuhl, 2011, p. 13). Peer interactions and, to some degree, adult-child interactions rely heavily on motor skills (Leonard & Hill, 2014). Participating in play, playful activities and playful relationships with others requires sufficiently developed motor skills. Successful participation in play contributes to the physical mastery of play tasks, inclusion and appreciation in the peer group, as well as to being perceived as an attractive playmate for others. Since language learning is a highly social process, social settings that provide opportunities to experience and acquire complex language skills are one of the key demands for high-quality early childhood education, both within families and ECECs. Thus, an environment that provides affordances and opportunities to relate properly to peers and adults is of particular importance. This is consistent with the findings in a systematic review of the literature conducted by Leonard and Hill (2014), who explored the connections among motor development, social cognition and language. They concluded, 'It is evident from these studies that developing motor skills can influence the number and types of opportunities that infants and children have to interact with others, and the consequent development of social relations' (Leonard & Hill, 2014, p. 167). The comorbidity between impaired motor skills and challenges in other domains of development has been frequently stated (Gillberg & Kadesjo, 2003;Hill, 2001;Iverson, & Braddock, 2011;Visser, 2003) in regard to developmental and learning problems at a young age. Son and Meisels (2006) revealed that weak motor-life skills at ages 5 to 6 can be a marker for the risk of weak development of academic skills. 
Hill (2010) emphasised that a large proportion of children with specific language impairment (40%-91%) show weak motor skills similar to developmental coordination disorders. Poor or atypical motor development could be considered a possible moderating factor related to problems with language, communication and social interaction that arise in several neurodevelopmental disorders (Leonard & Hill, 2014, p. 167). However, the causality of the relationship has not yet been adequately clarified. Deficient motor skills might compromise preschool-aged children's participation and enjoyment (Bart et al., 2011) and thereby limit their opportunities for communication as the basis for language development. Another explanation suggested by Adi-Japha, Strulovich-Schwartz, and Julius (2011) could be that deficient skill acquisition in language might not be exclusively linked to the language system but could be tied to the procedural memory system, which may affect both the language and motor domains. This is in line with Webster et al. (2005), who concluded that factors causing weak motor performance may lead to language deficits as well. Assuming that multiple approaches to understanding comorbidity may be more relevant than single explanations, Carpenter and Drabick (2011) called for longitudinal studies that focus on specific subgroups of children and provide relevant knowledge of the predictors of developmental outcomes in these subgroups. Thus, in this study, we aim to contribute longitudinal data pertinent to educational practice regarding all children of toddler and preschool ages. Assessing children's motor-life skills and language skills from a functional perspective In characterising requirements for individual professional competence among ECEC staff, Urban, Vandebroek, Lazzari, Van Laere and Peeters (2012, p. 35) identified the following as core elements of educational practice: 'Observing children in order to identify their developmental needs', 'Documenting children's progress systematically in order to constantly redefine educational practices' and 'Identifying children with special educational needs and elaborating strategies for their inclusion'. Our specific interest lies in the relationships between the motor and language domains in children's natural environment in Norwegian kindergartens. In this way, we hope to promote knowledge that is especially relevant to employees' professional work with and for children. Professionals' educational and pedagogical activities are based on daily observations of interactions with children. In this study, we thus build on the information professionals themselves can generate in their work and support them with observation instruments that centre on children's actions and expressions in everyday life. The term 'functional perspective' denotes an understanding of assessment that is closely related to, and could potentially benefit to, the field of educational practice, particularly regarding the professional work carried out by staff in ECECs. From a sociocultural and ecological understanding of learning and development (Vygotsky, 1997), according to Säljö (2009, p. 207), a major area of interest lies in 'the study of how human skills-be they bodily, cognitive, perceptual or a mix of these dimensions-are appropriated by individuals'. Hence, we assume that knowledge relevant to educational decisions and actions requires observations of children in natural interactions with their social and physical environments. 
The Norwegian Framework Plan for Kindergartens (Norwegian Directorate for Education and Training, 2017) and the national regulation of preschool teacher education (at the bachelor's level) are highly process-and resource-oriented (see summary in Engel et al., 2015). Therefore, in this study, we are particularly interested in the opportunity for ECEC staff to implement the obtained knowledge in their pedagogical practice in ECECs. In line with Iverson (2010), we consider knowledge about the relations between motor and language skills, and the development of these skills, to be highly relevant for practitioners working with young children. Systematic observations of children's skills in everyday life in ECECs have a direct bearing on educational practice and practitioners. Adapted support for individual children by the staff requires reliable, valid observations of children's functional skills in their everyday activities and play. Thus, our methodology to assess functional skills among toddlers and preschool-aged children is consistent with the authentic assessment approach Macy & Bagnato, 2010). In this approach, toddlers' and preschool-aged children's interactions with their natural physical and social environments become the core objects of observation, and functional skills are understood as important prerequisites for meeting the challenges of daily life. Research questions The purpose of this study is to examine the relations between language and everyday motor-life skills in toddlers and preschool-aged children in real-life situations, as well as the association between these skills from toddler to preschool age (i.e., how the level of motor skills in toddler age is related to language skills in preschool age). The research questions are as follows: (1) (a) What is the relation between language skills and motor-life skills at age 2;9 and at age 4;9 respectively, (2) How are motor-life skills in toddler age related to language skills in preschool age? Design and method This study is part of the longitudinal, interdisciplinary Stavanger Project -The Learning Child following children's development from 2 ½ to 10 years of age (Reikerås et al., 2012). Instruments As discussed above, data generation is based on authentic assessment as a presumed reliable, valid and non-intrusive way of assessing children's skills in their play and everyday life activities in ECECs (Bagnato et al., 2014). The children's language skills were assessed using the observation material TRAS -Tidlig registrering av språkutvikling (Early registration of language development) (Espenakk, 2003). The TRAS consists of eight sections, including nine items in each section (for a total of 72 items): Language comprehension, Linguistic awareness, Attention, Communication, Interaction, Sentence production, Word production, and Pronunciation. There are three levels of difficulty within each section, with level 1 as the easiest and level 3 as the most difficult. The interrater reliability for the sections in TRAS varied from .69 to .83 (Espenakk, 2003). The material was developed in Norway for children between two and five years old and was constructed for use in ECECs. Natural situations and children's play activities are the main observational arenas. To make the observations easier for the staff in the ECECs and to strengthen the quality of the data, a detailed description of each item and guidelines for scoring were developed for the project (Helvig & Løge, 2006). 
The 3-point response scale ranges from 0 (proficiency not yet observed) to 1 (partial proficiency for the given task) to 2 (the child possesses competence in the given task). Motor-life skills were assessed by ECEC staff who applied the Early Years Movement Skills Checklist (EYMSC; Chambers & Sugden, 2002). Moser and Reikerås (2016) reported the adaptation of the material to fit the Norwegian context. The EYMSC provides information about motor skills in natural surroundings for children from three to five years old (Chambers & Sugden, 2006). The interrater reliability was .96 (p<.01), and the test-retest reliability was .95 (p<.01). A validation of the EYMSC (Chambers & Sugden, 2002) against the Movement Assessment Battery for Children (Henderson & Sugden, 1992) revealed a correlation of r = .76 (p<.01). The material is divided into four sections: Self-help skills (six items); Desk skills (five items); General classroom skills (five items); Recreational and playground skills (seven items). Each of the 23 items is scored on a four-point scale to register how well a child has mastered the particular skills. First, the teachers must decide whether the child can or cannot perform the task. Subsequently, the teachers concretize their choice by using two further subcategories: for children who can perform the task, the subcategories are (1) can do this task well or (2) can just do this task; for children who cannot perform the task, the subcategories are (3) can almost perform this task or (4) not close to performing this task. After the observation period, the scores for the items in each section are summed, and the sum of these section scores becomes the EYMSC total score. The lower the total score is, the larger the number of items that are well performed or mastered by the child. The EYMSC was developed for children between 3 and 5 years of age (Chambers & Sugden, 2002); the age group at T1 in the current study lies somewhat outside this range. However, a study comparing the same sample as in the current study at T1 with 3-year-old British children revealed relatively high motor competence in the Norwegian sample compared to the slightly older British sample (Moser & Reikerås, 2016). This indicates that the EYMSC can be used for the participants in our study, although they are slightly younger than the age span for which the material is designed. Notwithstanding, we must interpret the findings cautiously. Since both TRAS and the EYMSC are thought to identify developmental difficulties in children up to 5 years old, we expected ceiling effects at preschool age within a sample containing a majority of children assumed not to have such difficulties. Such ceiling effects at T2 are expected to reduce the information regarding average and high-performing children's skills at T2. A detailed description of each item and guidelines for scoring the EYMSC were developed (Iversen & Larsen, 2007) to help the staff gather the data, thereby strengthening the comparability of the assessments and increasing the reliability of the data collection. During the first round of data analysis, the response categories were re-coded so that high scores represented a better level of motor-life skills, while lower scores represented a weaker level. The recoded values are as follows: 1 = not close to performing this task; 2 = can almost perform this task; 3 = can just do this task; 4 = can do this task well.
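For readers who want to reproduce the scoring, a minimal sketch of the recoding and section summation is given below. The column names, the synthetic data, the use of Python/pandas and the assumption that the raw item codes run from 1 (can do this task well) to 4 (not close to performing this task) are all illustrative; the study itself scored the checklist in SPSS with its own variable names.

    import numpy as np
    import pandas as pd

    # Hypothetical EYMSC item columns grouped into the four sections.
    sections = {
        "self_help":  [f"selfhelp_{i}"   for i in range(1, 7)],   # 6 items
        "desk":       [f"desk_{i}"       for i in range(1, 6)],   # 5 items
        "classroom":  [f"classroom_{i}"  for i in range(1, 6)],   # 5 items
        "playground": [f"playground_{i}" for i in range(1, 8)],   # 7 items
    }
    item_cols = [c for cols in sections.values() for c in cols]

    # Synthetic raw scores for five hypothetical children (codes 1-4 per item).
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.integers(1, 5, size=(5, len(item_cols))), columns=item_cols)

    # Recode so that higher values mean better mastery
    # (assumed raw coding: 1 = can do well ... 4 = not close to performing).
    df[item_cols] = 5 - df[item_cols]

    # Section scores and the EYMSC total score.
    for name, cols in sections.items():
        df[f"{name}_score"] = df[cols].sum(axis=1)
    df["eymsc_total"] = df[[f"{n}_score" for n in sections]].sum(axis=1)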
This was done essentially for convenience; it is easier to understand and communicate the results when a higher numerical score expresses a higher level of motor skills. Recruitment, participants and dropout rate All public (61) and 50% of the private ECECs (25) of Stavanger municipality accepted the invitation to participate in the study. The parents of children born between July 1 st , 2005, and December 31 st , 2005, who attended one of the participating ECECs received oral and written information about the project and were asked for written consent for their child to participate in the study. Apart from this period of birth, no other criteria excluded a child from participating in the study. We consider the city of Stavanger to be representative of other Norwegian cities and urban settlements of a certain size in terms of ECEC services. The law nationally regulates ECEC, and the national curriculum guidelines for ECEC must be applied all over the country. The proportion of private and public ECECs in Stavanger corresponds to the national average. Because only half of the private institutions participated in the study, this could be a source of error. On the other hand, most national studies carried out in ECECs have usually not revealed any significant differences between private and public institutions. Nevertheless, due to the presence of the oil-related industry, the residents of Stavanger had higher average incomes during the data collection period (approximately 24% higher than the national average according to Kommunefakta, 2017). Nevertheless, we assume that the results are transferable to other Norwegian cities and urban settlements. At baseline (T1), we gathered data on language skills and motor-life skills for 1,077 children (529 girls, 548 boys). All children had been enrolled before they were 2;6 years of age. Between the first round of data collection (T1) and the second round (T2), 200 children moved out of the municipality, and two consent forms were withdrawn. In addition, some ECECs had not returned results from either TRAS or the EYMSC at T2 for 219 children. These ECECs did not observe the children in the proper time intervals, forgot where they had stored their observation schemes, or the children had been absent because of holiday or illness. Finally, for 10 of the children, we identified failures in the registration of data at T2. Thus, the study had a dropout of 431 children, equivalent to 40% of the baseline group at T1. For the remaining 646 children (323 girls, 323 boys), data on motor-life and language skills were available for T1 and T2. Compared to the remaining group, there was a slightly larger proportion of boys (225) than girls (206) in the dropout group. In addition, 15.3% of the children in the dropout group were multilingual, which is somewhat less than those in the remaining group (19%). On the basis of these analyses, we assume that dropouts do not seriously affect the findings. Socioeconomic status (SES) was measured by parents' education levels. In a questionnaire, both parents were asked to indicate their highest level of education achieved by choosing one of four levels (upper secondary school; high school; college/university education [1 to 3 years], and college/university education [>3 years]). Even though all parents received the questionnaire, not all of them returned a completed questionnaire. 
SES data are therefore only available for 269 (41.6%) of the participants, of which 263 contained answers for both the mother's and father's education level, five for only the mother's education level and one for only the father's education level (see Table 1). As shown in Table 1, the SES level for the participants is considerably higher than the educational level in all of Stavanger and in all of Norway. This difference in SES, possibly based on selection effects, may affect the results. Families with a higher level of parental education may have a better understanding of the study's relevance and/or may be more generally interested in educational issues regarding their children. The proportion of parents of multilingual children who answered the questionnaire was 39.8%, which was similar to the proportion of parents of children who are living in a monolingual Norwegian environment at home. Thus, there were no differences in the response rates between multilingual families and monolingual Norwegian-speaking families. Although the limitations of the study, due to the high share of missing SES data, should be noted when generalising the findings, t-tests between the groups with and without available SES data did not reveal any significant differences between the scores in the dependent variables (two-tailed, p<.05; see Table 2). Whether parental SES data were available apparently did not interfere with the children's scores for the dependent variables. Procedure Data were collected through structured observation of the children's motor and language competencies during play and daily life activities by the staff in the ECECs when the toddlers were between 30 and 33 months (T1) and when the children were between 54 and 57 months (T2). Two of the staff independently had to observe whether the children had partially or fully mastered the various items for both the EYMSC and TRAS. In addition, before the observation started, the staff in the ECECs received updated information on young children's language development and training to rehearse how to use the EYMSC and TRAS. Data analysis The Statistical Package for the Social Science (SPSS), Version 21.0 (IBM Corporation, 2013), was used for all statistical analyses. Two research assistants entered the data into an SPSS file. Alternately, one entered the data while the second controlled the results of the data input. After data entry, two other research assistants re-entered the data for a randomly selected 10% of the participants to compare the degree of deviation. The outcomes of this control procedure showed good consistency (nearly 100%) between the datasets. Furthermore, frequency analyses were conducted for all variables in the whole sample to check whether the values were within the range of possible values. The few deviations discovered in this control procedure were corrected in the data set. On an item level, the observations in the EYMSC and TRAS produced data on an ordinal scale. There is a ceiling effect for the data at T2; thus, nonparametric analysis was applied. The association between motor-life and language skills within and between T1 and T2 was analysed by Spearman-Brown correlations. To explore how the level of motor skills at T1 was related to language skills at T2, nonparametric group comparisons (the Kruskal-Wallis test) were used. We established three groups, with two groups representing toddlers with the weakest and strongest levels of motor skills and a middle group. 
The first two groups were defined by the 15% of children with the weakest and strongest motor-life skills, respectively, at T1. The middle group encompassed the 15% of children who scored closest to the mean EYMSC total score at T1. Ethical considerations The study was approved by the Norwegian Social Science Data Services and was conducted in accordance with the ethical regulations for research in Norway. Participation was based on the parents' voluntary and written consent. Applying authentic assessment as a respectful methodological approach is considered to place little strain on the participating children. In general, children's participation in the study did not notably affect their everyday lives in the ECECs. Results The first research question addresses the relations between language and motor-life skills at age 2;9 and 4;9. Table 3 provides an overview of the correlations between motor-life skills, including the four EYMSC section scores and the eight TRAS section scores, and the total scores at 2;9 years of age (T1). All the correlation coefficients are significant (p<.01) and vary between .12 and .29; they are thus considered small (Cohen, 1988). There are small variations in the correlation coefficients between the EYMSC total score and each of the TRAS section scores, as well as between the EYMSC section scores and the TRAS total score. Table 4 presents an overview of the correlations between motor-life skills, including the four EYMSC section scores and the eight TRAS section scores, and the total scores for the EYMSC and TRAS at 4;9 years of age (T2). Table 4 reveals considerably higher correlations between motor-life skills and language skills at age 4;9 (T2) compared to age 2;9 (T1). All correlations at the section and sum score levels are statistically significant (p<.01). Eighteen of these 45 correlation coefficients are of moderate size (Cohen, 1988), with the highest correlation between the TRAS and EYMSC total scores (.46). In general, there are larger differences between the intersectional correlations at age 4;9 than at age 2;9. Among all TRAS sections, Linguistic awareness had the strongest overall association with motor-life skills, while Attention and Pronunciation had the weakest relations with the EYMSC total score. Among the EYMSC sections, Desk skills were the most strongly related, while General classroom skills were weakly related to the TRAS total score. The second research question asks to what degree the level of motor-life skills at age 2;9 is related to language skills at age 4;9. To explore this relation, which is of particular interest for children with weak motor-life skills, we examined the degree to which the groups with the lowest, highest and middle scores of motor-life skills at 2;9 years of age (T1) differed in language skills at 4;9 years of age (T2). Table 5 shows the results of the Kruskal-Wallis test analysing the differences in language skills at T2 between the three groups with different motor-life skill levels at T1. Table 5. Differences in language skills (TRAS total score) at age 4;9 (T2) between groups with weak (n = 109), middle (n = 116) and strong (n = 99) motor-life skills at age 2;9 (T1; EYMSC total score); Kruskal-Wallis test. There were significant differences (p<.01) at T2 in the total TRAS score between groups with weak, middle and strong motor-life skills at T1.
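To make the group comparison concrete, a minimal sketch of the kind of analysis reported in Tables 5 and 6 is shown below on invented scores; the data, group means and use of Python/SciPy are purely illustrative (the study used SPSS), and the normal approximation used for the effect size ignores tie corrections.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Hypothetical TRAS total scores at T2 for the three motor-skill groups defined at T1.
    weak   = rng.normal(105, 15, 109)   # n = 109
    middle = rng.normal(115, 13, 116)   # n = 116
    strong = rng.normal(122, 11, 99)    # n = 99

    # Overall three-group comparison (as in Table 5): Kruskal-Wallis test.
    h, p_overall = stats.kruskal(weak, middle, strong)

    # Pairwise follow-up (as in Table 6): Mann-Whitney U with a Bonferroni-adjusted alpha.
    alpha = 0.05 / 3
    u, p_pair = stats.mannwhitneyu(weak, strong, alternative="two-sided")

    # Effect size r = |Z| / sqrt(N), with Z taken from the normal approximation to U.
    n1, n2 = len(weak), len(strong)
    z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    r = abs(z) / np.sqrt(n1 + n2)

    print(f"Kruskal-Wallis H = {h:.2f}, p = {p_overall:.4f}; "
          f"weak vs strong: p = {p_pair:.4f}, r = {r:.2f}, "
          f"significant at Bonferroni-adjusted alpha: {p_pair < alpha}")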
Mann-Whitney U tests were used to examine which groups significantly differed from one another, and effect sizes were calculated to estimate the effects of grouping for the total TRAS score at T2. To prevent a Type 1 error due to a three-way group analysis, a Bonferroni correction was applied. The significance level of .05 was divided by three; thus, the significance level was adjusted to .017. Table 6 shows the findings of these analyses. Table 6. Differences in language skills (TRAS total score) at age 4;9 (T2) between the groups with weak (n = 109), middle (n = 116) and strong (n = 99) motor-life skills at age 2;9 (T1); Mann-Whitney U tests (R = effect size; Cohen, 1988); * p<.017; comparisons: between weak and middle, between middle and strong, and between weak and strong. Based on the measurement at T1, the three motor-life skill groups differed substantially in their TRAS total score at T2. Significant differences were found between all three groups. The difference between the groups with weak and strong motor-life skills at age 2;9 was of medium strength, whereas the differences between the middle group and the two other groups were of small effect size. Discussion Regarding the first research question on the relations between language and motor-life skills at age 2;9 and at age 4;9 respectively, we discuss the findings for toddlers and preschool-aged children separately. In the discussion, we will apply the term gross motor skills, including the sections General classroom skills and Recreational/playground skills, and the term fine motor skills, including the EYMSC sections Self-help skills and Desk skills. For toddlers (T1; 2;9 years), only the correlation between the EYMSC total and TRAS total scores reached a moderate level (.31). The association between the EYMSC total score and the eight TRAS section scores varied between .20 and .29, indicating that the relations between motor-life skills and language skills are rather weak in this age group. None of the correlations between the EYMSC sections and the TRAS total score reached moderate size, even though all correlations were still statistically significant (p<.01). Only a few studies have examined these associations in a comparably young age group. Houwen, Visser, van der Putten and Vlaskamp (2016) found somewhat higher correlations (.27 to .46) for children 0;3 to 3;6 years of age; the somewhat younger average age of their sample (1;10 years) may have caused the differences. Additionally, Wang, Lekhal, Aarø, and Schjølberg (2014) found a high correlation (.72) at the very young toddler age of 1;6 but considerably lower correlations at age 3 (.29). Thus, the results at T1 in our study (.29) are comparable to the Norwegian sample of Wang et al. at age 3. The relations between the sections in TRAS and the EYMSC provide a more differentiated picture. The highest correlation between motor-life skills (EYMSC total score) and language skills for toddlers was in Linguistic awareness (.29). This section involves reflecting on language and includes items regarding children's skills in drawing attention to sound structure (phonological awareness), as well as their participation in language awareness activities (Frost, 2003). Prior research has shown (Stangeland et al., 2018) that participation in language awareness activities is the skill in this section most commonly mastered by toddlers. The strongest correlation between the TRAS and EYMSC sections at T1 was between Self-help skills and Linguistic awareness.
Mastering self-help skills requires a substantial amount of practice in terms of active participation in everyday life. Likewise, to develop language awareness skills, the child must take part actively and extensively in various activities that stimulate and require language awareness activities over time (Frost, 2003). Hence, the associations between these skills may mirror the toddler's level of participation in different activities. Participation therefore seems to be highly important as a prerequisite to developing skills in several developmental areas simultaneously, as emphasised by Leonard and Hill (2014). Further, the correlations between Language comprehension and Desk skills in toddlers are among the largest in the present study. However, the findings of Houwen et al. (2016) revealed a considerably higher correlation (.46) between the fine motor subscale in their instrument and receptive language. Additionally, our study indicates lower correlations between the EYMSC fine motor skills (the sections Self-help skills and Desk skills) and expressive language skills than those found by Houwen et al. (2016). These divergent findings can be explained by the different age spans, languages or scales applied. Differences in the children's development of motor skills between countries may also influence the results; for example, toddlers in Norway generally have a higher level of motor skills than British children of the same age (Moser & Reikerås, 2016). The correlations between EYMSC gross motor skills (General classroom skills and Recreational/playground skills) and the three sections characterising expressive language skills (Pronunciation, Word production and Sentence production) in toddlers are among the lowest in our study, though they are at the same level as those recently found by Houwen et al. (2016). The weak associations between gross motor skills and expressive language suggests that for toddlers, expressive language still might not be as important for their physically active play. In addition, the rather low correlations imply that the onset of the most prominent development of motor skills in early childhood has not yet begun (Williams et al., 2008). The relations between motor skills and the TRAS sections Interaction, Communication and Attention in our study are in line with the results of Giske et al. (2018) and Stangeland (2017), who found comparable relations between motor skills, social skills and language in toddlers. The strongest correlation appears between Interaction and Recreational/playground skills. This is commensurate with findings from Leonard and Hill (2014), showing that interactions between peers at a young age rely heavily on motor skills. In regard to preschool-aged children (T2, 4;9 years), the associations between motor-life skills and language skills become considerably more salient. All correlations are statistically significant (see Table 4; p<.01); the TRAS and EYMSC total scores correlated on a moderate level (.46), while the correlations between the EYMSC total score and the eight TRAS section scores varied between .25 and .44. Only Pronunciation and Attention were weakly related to motor-life skills, while the correlations in the other six sections achieved moderate strength. Three of the EYMSC sections correlated on a moderate level with the TRAS total score; only the correlation with General classroom skills was weak. 
It is striking that the two fine motor skill sections were moderately correlated with five of the language sections, four of which belong to Desk skills and one to Self-help skills. Only two correlations with gross motor skills were of moderate size, both belonging to Recreational/playground skills. Our findings of clearly stronger associations between motor-life skills and language skills in preschool-aged children compared to toddlers contrast with those of Wang, Lekhal, Aarø, Holte, and Schjølberg (2014), who found stronger relations between motor-life skills and language skills at age three than at age five. A methodological explanation could be the ceiling effect at T2 for both TRAS and EYMSC in the present study, which reduces the spread of the scores. More suitable instruments may contribute more information regarding high-performing children at age 4;9, possibly leading to other results. The strongest correlations at preschool age emerge between Linguistic Awareness and the two sections Self-help skills (.35) and Desk skills (.41). This association indicates that activities that demand fine motor skills create space for verbal communication, play and reflections on language between children, as well as between staff and children (Frost, 2003). At preschool age, Desk skills also correlate with a medium strength with the remaining four of seven language sections: Interaction (.35), Word production (.33), Language comprehension (.32) and Attention (.30). This is in line with the conclusions of a review study (Van der Fels et al., 2015) on the relations between the motor and cognitive domains in children aged 4 to 16, which showed that fine motor skills had the strongest relations with higher-order cognitive skills such as language. Attention, as a core component of self-regulation, has been proven in several studies to be related to motor skills (McClelland et al., 2016;Robinson et al., 2016). Becker et al. (2014) emphasised self-regulation as a moderating factor between motor and language skills. The moderate correlation between Recreational/playground skills and Linguistic Awareness in preschool age (.30) corresponds with the findings of Becker et al. (2014), Stangeland (2017), and Stangeland, Lundetrae and Reikerås (2018), which underscore the significance of participation in play for language development, as well as the findings of Bar-Haim and Bart (2006) and Giske et al. (2018), who confirmed the relations between social competence and motor skills. Play in early age builds heavily on motor skills and communication with peers; children's play both requires and strengthens language (Dickinson & Porche, 2011). This also appears to be a plausible explanation for the high correlations between Language comprehension and Recreational/playground skills. In addition, the correlation between Recreational/playground skills and Sentence production (.29) is one of the strongest in the gross motor domain in preschool age, indicating that expressive language plays a major role in children's gross motor play. Children use more words and apply more complex sentences in play situations compared to other classroom activities, as shown by Cohen and Uhry (2007) and Fekonja, Marjanovič Umek and Kranjc (2005). The second research question examines the degree to which the level of motor-life skills at age 2;9 is related to language skills at age 4;9. 
Weak, middle and strong motor skills groups based on the EYMSC scores at T1 were created, each comprising 15% of the total sample (see the method section). The Kruskal-Wallis test reveal significant grouping effects for the TRAS total score at T2 (Table 5). The largest effect size was between the strong and weak groups (.45; Table 6). Additionally, the differences between the weak and middle groups, as well as between the middle and strong groups, were significant but only indicated small effect sizes (.20 and .27, respectively; Table 6). These findings are in accordance with other Norwegian and international (Leonard & Hill, 2014) studies. To some extent, the rather weak association between motor-life and language skills at age 2;9 speaks against the alleged comorbidity between the areas (Hill, 2010;Webster et al., 2005). However, the clear differences in language skills between children with weak and strong motor skills at age 4;9 indicate that the two developmental domains are related from a longitudinal perspective. According to Williams et al. (2008), the most prominent changes in height, muscle strength, body mass and proportion appear between 3 and 5 years of age. These bodily changes allow children to achieve much more complex, well-coordinated movements by boosting their motor skills. Our assumption is that the baseline in motor skills at a young age to some degree determines the developmental track for motor skills and thereby affects children's opportunities to communicate with their social and physical environment (Bart et al., 2011;Kuhl, 2011) as a prerequisite for developing language skills. This implies that motor and language development are closely intertwined from 2;9 to 4;9 years of age and that motor development could be a driving force (Ionesco & Illie, 2018;Leonard & Hill, 2014). However, there are considerable dynamics and discontinuities in early motor skill development (WHO, 2006), and neither delayed nor advanced motor development in toddler age fully determines later motor development (Moser et al., 2018). Thus, an overall effect size of .45 in language differences between the weak and strong motor skill groups may be substantial. Summarising discussion and implications The present study contributes to the body of knowledge on the relations between motor-life skills and language skills, which are crucial for educational praxis in ECECs (Iverson, 2010). The findings in the present study support the conclusion of Leonard and Hill (2014) that motor development at an early age is not an independent process, but it has diverse, complex connections to several cognitive domains. In summarising the findings for our first research question, there are low to moderate correlations between everyday motor-life skills and language skills; these correlations are more prominent for preschool-aged children than toddlers. Although our study does not address causal relations, the results advocate for an embodied approach to language learning as appropriate for young children (Ionescu & Ilie, 2018). Promoting an activity-and movement-oriented pedagogy can strengthen children's motor competency and may simultaneously support development in other domains. An adequate level of motor skills may efficiently contribute to placing children in a position to better experience, understand and cope with demands and challenges that involve their own body, as well as the physical and social environment. 
These experiences are crucial for general cognitive development and learning, including language. Motor skills are not only a matter of bodily and physical development; they should also be thoroughly integrated in a holistic educational approach that addresses all developmental domains. To determine whether the observed associations between the two domains in the present study are based on a causal relationship or whether other factors create a purely correlative relation, further studies using appropriate experimental designs are needed. As Carpenter and Drabick (2011) underlined, processes from multiple domains are necessary to understand how risk and protective factors translate into different patterns of children's language functioning. Although this study does not allow for causal explanations, we assume that the development of language and motor-life skills mutually influence each other, and that they presuppose and stimulate one another. Young children's interactions rely on motor skills; it is through such interactions that language skills develop (Leonard & Hill, 2014). Children with sufficient language skills are attractive playmates and have a higher participation rate in play with their peers (Stangeland, 2017). Such participation in play is necessary to cultivate vital motor skills. This means that children who have difficulties in one or both areas may end up in a vicious cycle. Due to a lower level of development, they might not be included in play to the same degree as others, and thus have fewer opportunities to develop their skills and to catch up. Knowledge of the relations between the level of motor-life skills in toddler age and language skills in preschool age, may be useful for staff in ECECs in terms of identifying children with possible risk factors, who could then receive early intervention. Motor behaviour, based on motor skills, is an easily observable, core element in young children's everyday lives. Serious problems in achieving age-adequate motor tasks in daily life in ECECs may be trustworthy, initial indicators that we should also pay attention to other domains of development that are not as distinct as motor skills (Son & Meisels, 2006). Limitations The ceiling effect in both TRAS and the EYMSC, and the fact that the instruments produced ordinal data, limit the opportunity to apply more powerful statistical analyses. Future research should use instruments that offer enough variance for average and high-performing children. One can assume that standardised instruments and individual testing routines conducted by advanced students of psychology or special needs education would lead to more reliable assessment scores than those carried out by staff. However, our initial assumption that ECEC teachers tend to evaluate children's skills in a positively biased way due to positive attitudes towards children in general, and their desire for the children in their units to do well, does not appear to be the case. Even though the staff were trained in applying the instrument, the large number of data collectors still might be a problem. It may also be a limitation that it was different staff collecting the data at T1 and T2. The rating scale provides room for interpretation in assessing children's motor-life and language skills. Notwithstanding, these measurement errors would not have a systematic effect. 
This is supported by the fact that the data revealed a normal distribution of the results at T1, and that the variance in language and motor-life skills was sufficient. In addition, because two staff members conducted all observations independently, the findings' reliability is strengthened. Since many of the effect sizes found in our study are of small to medium size, this leaves plenty room for alternative interpretations, and we must be careful not to draw too strong conclusions. Our design and data do not allow statements about causality or determine whether the associations between motor-life and language skills are an expression of general development across domains or an expression of domain-specific trajectories. There might be a common developmental factor that influences all domains of development. Studies based on multivariate growth-curve models (Rhemtulla & Tucker-Drob, 2011) have shown that a global dimension of development accounts for as much as 42% of the variance across domains (linguistic, mathematics, reading, gross motor and fine motor skills). Author biography Elin Reikerås, PhD, is a professor in Early Childhood Education at The Department of Early Childhood, and leader of FILIORUM -Centre for research in Early Childhood, University of Stavanger. Her research interests are children's development and learning in Early Childhood Education, and children with special needs.
2021-04-30T18:59:28.290Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "e7f90e8df0bd1eb102157a4b2b069843e9ac2af6", "oa_license": "CCBY", "oa_url": "https://jased.net/index.php/jased/article/download/2417/4973", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e7f90e8df0bd1eb102157a4b2b069843e9ac2af6", "s2fieldsofstudy": [ "Psychology", "Education" ], "extfieldsofstudy": [ "Psychology" ] }
218596981
pes2o/s2orc
v3-fos-license
Identifying health priorities among workers from occupational health clinic visit records: Experience from automobile industry in India Context: Occupational health surveillance in India, focused on notifiable diseases, relies heavily on periodic medical examinations and isolated surveys. The opportunities to identify changes in morbidity patterns utilizing data available in workplace on-site clinics are less explored in the Indian context. Aims: The present paper describes a longitudinal assessment of morbidity patterns and trends among employees seeking care in an occupational health clinic (OHC). The study also intends to explore associations between work department, clinic visits and morbidity pattern. Materials and Methods: Record-based analysis was undertaken on data available (for the period 2010-2014) from two OHCs in a leading automobile industry in India. The doctor examining every employee documented the provisional diagnosis in specific software, which in turn provides a summary diagnosis based on the affected body organ system as per ICD-10 categories. This information was used to assess the morbidity pattern and trend among workers. The chi-square test of significance and the extended Mantel-Haenszel chi-square test were used to assess the association and its linear trend. Results: Respiratory, musculoskeletal and digestive system related diseases were the top three reasons for employees' visits to the OHC. The nature of morbidity varied across different departments in the industry. There was a significant increase in the proportion of employees visiting the OHC during 2010-2014. Conclusion: A clinic visit record, with its own strengths and limitations, provides information on morbidity patterns and trends among workers. Such information will help plan, implement and evaluate preventive, promotive and curative health services. Introduction Industrial workers are at dual risk from exposures present in the general population and in workplaces. Changing risks in the general population may influence morbidity patterns and health priorities in industrial workers. [1,2] Similarly, changes in work environment determinants can alter the morbidity pattern and health priorities of employees. It is in the best interest of employers and employees that industry health policies and programs take cognizance of these expected changes and identify them at the earliest through efficient surveillance systems. Occupational health surveillance in India is focused on detecting "notifiable diseases" specified in the Indian Factories Act, [3,4] mostly through periodical medical examination or isolated surveys for the same. However, opportunities to identify changes in morbidity patterns exist from data available in on-site clinics, health insurance claims and sickness absenteeism records. Occupational Health Centres (OHCs), colloquially called "clinics", are onsite out-patient health care facilities established in industries as per the Indian Factories Act. [3] The OHC records are maintained regularly and digitally in most large industries. Data from OHC records can help to identify leading health problems, their trends and distribution, and to understand whether they relate more to general population risks or to industry-related risks. However, evidence regarding their utility is limited in the Indian context. This paper describes a longitudinal assessment of morbidity patterns and trends among employees seeking care in an occupational health clinic (OHC) in a leading automobile industry in India between 2010 and 2014.
The study also intends to explore associations between work department, clinic visits, and morbidity pattern. Methodology This study is part of a larger, five-year (2010-2014) health and productivity study conducted in a leading automobile industry in India using multiple health related data sources in the year 2015-2016. This paper is limited to data from OHCs. We analyzed available data of employees visiting two OHCs (clinics), one each in Plant 1 and 2, to identify predominant reasons and trends for seeking care in OHC among employees and its association between type of workers and work department. The 24/7 OHCs provide first aid and basic occupational health services (primary care, emergency care, health surveillance, health promotion, physiotherapy, and record maintenance) and is staffed by doctor, nurses, and physiotherapist (as per norms specified by Indian Factories Act). [3] Employees visit OHC after prior permission from their work location supervisors. Each visit is registered and details of diagnosis, management, and referral are documented in their "Patient Health Record". Data is maintained in hard and digital format using company developed software. Data for this current study was extracted from this software for the year 2010-2014 (January-December) in MS excel format. Every employee visiting the OHC was seen by a doctor, who makes a provisional diagnosis which is entered in the occupational health management software. The software further classifies the provisional diagnosis into a "summary diagnosis based affected body organ system". This classification was as per broad ICD-10 categories. [5] Example: Provisional diagnosis: Lower Respiratory infection. Summary diagnosis: Respiratory system disease. Employee visiting OHC for non-health related reasons were classified as "Administrative" visits. Visits made to OHC for the sole purpose of Annual Medical examination were excluded from the study. The following variables were present in the extracted data of OHC visit records (Employee ID, Date of visit, Month, Name of employee, Department, and Diagnosis). Marital status, gender, date of birth, date of joining, and work department information was merged from master data sheet of employees available with Human Resources (HR) Department. Merging was done using VLOOK UP function in MS Excel using 'Employee ID' as unique ID. Data was further checked for consistencies in entry and outliers. Age of employee as on 31 st December 2014 was computed from date of birth. Work experience as on 31 st December 2014 was computed from date of joining. Number of employees in each respective year was sourced from master data in HR department. Statistical Analysis Number and proportion of OHC visits was calculated and presented year-wise. Worker to visit ratio (Visits per employee per year) was computed as ratio of total number of visits to total number of employees visiting OHC in the same year. Leading health conditions for each year is presented as frequency and percentage (provisional and summary diagnosis). Ranking of top five health conditions is provided for each year. Association between age group, work departments and leading health conditions was tested using Chi-square test of significance. Extended Mantel-Haenszel chi square test for linear trend was applied to test for significant change in proportion of employees visiting OHCs every year and visits due to particular health condition. Results were considered statistically significant at P < 0.05. 
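To illustrate the trend analysis described above, a small sketch of a chi-square test for linear trend in proportions (a formulation closely related to the extended Mantel-Haenszel trend test named in the paper) is given below; the yearly counts are invented and the study itself ran its analyses in OpenEpi and SPSS.

    import numpy as np
    from scipy.stats import chi2

    # Hypothetical counts: employees visiting the OHC at least once, and on-roll totals, per year.
    years   = np.array([2010, 2011, 2012, 2013, 2014])
    visited = np.array([2100, 2350, 2600, 2900, 3200])   # invented numbers
    on_roll = np.array([2960, 3150, 3320, 3500, 3560])   # invented numbers

    scores = years - years.min()            # equally spaced scores 0, 1, 2, 3, 4
    p_bar  = visited.sum() / on_roll.sum()  # pooled proportion across years

    # Cochran-Armitage style chi-square for linear trend (1 degree of freedom).
    t_obs = (visited * scores).sum()
    t_exp = p_bar * (on_roll * scores).sum()
    var_t = p_bar * (1 - p_bar) * ((on_roll * scores**2).sum()
                                   - (on_roll * scores).sum() ** 2 / on_roll.sum())
    chi2_trend = (t_obs - t_exp) ** 2 / var_t
    p_value = chi2.sf(chi2_trend, df=1)

    print(f"chi-square for linear trend = {chi2_trend:.2f}, p = {p_value:.3g}")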
Pearson's r was applied to test for association between number of OHC visits and on-roll employees. Analysis was performed using OpenEpi and SPSS. Ethical clearance was obtained from Institutional ethics committee of NIMHANS. OHC visit pattern (2010-2014) Between years 2010-2014, nearly 141,792 visits to OHC were made by employees. Proportion of on-roll employees visiting OHC (at least once) increased from 71% in year 2010 to 89.9% in year 2014 (P < 0.001). Increase in OHC visits correlated significantly with increase in on-roll employees (Pearson's r = 0.89, P < 0.01). Average yearly visits per employee increased from 5.1 to 7.5 between 2010 and 2014 [ Table 1]. Amongst employees who sought care in OHC (2010-14), nearly 63-73% were aged between 18 and 29 years and 59-70% were from Plant-1. Proportion of Plant-1 employees visiting OHC decreased from 70% in 2010 to 59% in 2014 whereas proportion of Plant-2 employees visiting OHC increased from 29.6% in year 2010 to 40.6% in year 2014 [ Table 2]. Proportion of on-roll employees visiting OHC is higher in Plant-2 in 2014. (Data not shown). Work department and OHC visits Employees from assembly and weld departments accounted for 46.0-50.0% of all employees visiting OHC between 2010 and 2014, followed by paint (15-16%) and office (11-17%) departments. The OHC visit pattern was similar to employment patterns in the respective departments. We analyzed for association between OHC visits and employees working in production departments (assembly, paint, weld, press) and non-production departments (office, quality, ILCD, Maintainence). Around 3862 (71.2%) of OHC visits were made by production department employees as against 1557, (28.8%) by those in non-production line departments between year 2010-14. A statistically significant association between work department and OHC visits was observed (Chi-square = 62.16, P = 0.0002) wherein production line departments visited OHC more compared to non-production line departments. In the year 2014, significant proportion (>90.0%) of all on-roll employees in each department, except maintenance and office department visited the OHC (Chi-square = 35.76, p=<0.0001). Proportion of employees visiting OHC in 2014 was highest among assembly (95.4%) and weld departments (94.6%) [ Table 3]. Morbidity pattern Respiratory, musculoskeletal, and digestive diseases were the top three reasons for employees visiting the OHC. Respiratory conditions were most common reason ranging from 25.0% to 36.0% of total visits between 2010-14 [ Table 4], followed by musculoskeletal conditions (18.8-26.6%). Decline in visits due to workplace injuries is observed (5.5% to 3.4%) between 2010 and 2014 [ Table 4]. Department wise distribution of morbidity pattern is presented in Tables 5. Employees seeking care for respiratory, musculoskeletal, digestive, injuries were significantly higher from assembly, office, weld, and paint departments (P < 0.05). Health care seeking in OHC for most conditions were higher in production department employees, except infectious diseases, which was higher in office employees [ Table 5]. Comparison across departments revealed that nearly 95.0% of the production line employees sought care from OHC as compared to 70.0% among office and maintenance department employees in 2014. (Data not shown). Repeat visits In 2010 it is observed that 388 (13.6%) employees had made >10 visits to the OHC and the same increased to 22% in 2013 and 28.3% in 2014. 
In year 2014, nearly 1/4 th of all visits to OHC were from employees who were frequent visitors (>10 times per year). The predominant conditions for repeat visits were respiratory, digestive and musculoskeletal. Most of employees who repeated more than 10 times per year were from Assembly, weld, and paint sections. (Data not shown). Discussion The present paper is part of a larger longitudinal study that examined the relationship between health and productivity in a leading automobile industry. [1] Early identification of trends in employee health and morbidity in an industry can be ascertained from periodical medical examination (PME), OH clinic records, independent surveys, and insurance claims records. These sources conceptually differ in profile of employees, type of information derived and its application. PME data comprises of apparently healthy volunteer employees and observations relate to screening for hidden diseases and incident health changes among employees. Its utility is closely associated with compliance to existing laws to identify exposure related occupational diseases. OHC data relates to morbidity pattern of employees seeking out-patient care due to their experience of ill-health whereas insurance data provides information about morbidities necessitating in-patient care or advanced diagnostic evaluation. Understanding employee morbidity pattern from periodical medical examination records is a common practice, but utility of OHC records to identify morbidity patterns and trends is not well established. [6][7][8] Though onsite clinics are proven to be cost-effective, [9] it has not been leveraged for evidence generation in Indian context. [10] This paper discusses inferences drawn from OHC data and its application for ensuring healthier workforce. [1] Number of clinic visits may increase with increase in employees. We observed a significant positive correlation between increase in employee strength and clinic visits. The increase could be a reflection of robust employee wellness initiatives undertaken, presence of pro-active OH staff, increased health consciousness amongst employees or in worst case, an indication of deteriorating health status of employees. In this study, we could not explore factors influencing care seeking as it was a secondary data-based study. Observed morbidity pattern in industrial workers is a combination of morbidity pattern in source population and morbidity specific to occupation related exposures. [2,11] The study revealed that morbidity pattern identified from OHCs (2010-2014) is likely to be reflection of patterns in general population, wherein respiratory, digestive, and musculoskeletal-related morbidities dominate out-patient care systems. This is similar to morbidity pattern expected in primary health clinics/centers. The predominant morbidity identified in clinic visit data in present study was respiratory system related morbidities. Surveys in nine countries, in 76 primary health care facilities revealed that proportion of patients with respiratory symptoms ranged from 8.4% to 37.0%. [12] Similarly, studies from India also have identified respiratory and gastrointestinal symptoms/conditions to be most common among primary care facility attenders. [13,14] Studies from OH clinics in developed countries also indicated that respiratory and musculoskeletal disorders were predominant reasons for visiting the OH clinics. 
[8,15] In India too respiratory, digestive, and musculoskeletal system related health conditions were most common among workers of automobile sector. [16,17] Utility of OHC data lies in identification of deviations from such expected patterns over a period of time or in identification of health issues specific to the workplace under consideration. Compared to respiratory and digestive disorders, musculoskeletal disorders (MSDs) are commonly prevalent in automobile industry, more so among those involved in production line [18][19][20][21] attributed to posture and monotonous nature of work involved. Posture assessment for employees in an automobile industry in assembly area using RULA score indicated that it was "at-risk" job for work-related musculo skeletal disorders. [22] Available literature indicates that nearly 59% of industrial workers reported presence of MSDs. [18] Present study also observed higher prevalence of musculoskeletal complaints among production line employees. The four departments-assembly production, office, paint production, and weld production contributed to nearly 80.0% of the total musculoskeletal related visits to OHC. MSDs are amenable to reduction by targeted interventions [19] and OHC record-based surveillance is useful to identify such reductions to quantify effectiveness of interventions. An increase in digestive disorders is observed between 2010 and 2014, from 9.3% to 18.4% of all OHC visits. Water and food testing and standards are adhered to strictly in the industry. Food sources and eating behavior outside the workplace may have contributed to these visits. The rate of industrial injuries per 1000 persons employed per year (fatal + nonfatal) in India is less than 2 per 1000 persons. [23] Collateral data regarding injury claims from the industry revealed 11.1 injury claims per 1000 employees and all these were non-work-related injuries. OHC data was useful to point out that injuries which necessitated care seeking in clinic were minor in nature, not needing admissions, as against injuries occurring outside workplace setting. Repeat visits Somatization, malingering, and repeated absence for work and presentism are issues closely linked with low productivity. [1] Increase in repeat visits may indicate chronic nature of illness, increased vulnerability to specific illness, non-relief from therapy or intentional absence from work location. Data from occupational health primary care in Finland of 3167 frequent attendees revealed that musculoskeletal system disorders, depression and anxiety were reasons for repeat visits to OHC clinics. It also indicated that working in industries is associated with frequent clinic visits. [8] Understanding OHC data is useful to stratify or label "red alert employees" who make more than usually expected number of clinic visits in a year, which in this case was defined as "employees with ≥10 visits per year". Benefits of clinic visit data analysis Though periodical medical examination continues to remain a standard occupational health surveillance strategy, analysis of clinic visit records has its own merits and applications. It enables monitoring of trends in out-patient related illness affecting employees, early detection of exposure-related out-breaks, examining associations between work department and out-patient visits, identify chronic OHC attendees and help to improve productivity. 
Such information will help to organize preventive, promotive, and curative health measures associated with seasonal changes, process changes, and employee recruitment, all of which contribute to reducing productivity losses. Deficiencies in current systems Medical records systems in the study industry were robust and up to date, which facilitated such an analysis. Unfortunately, not all industries across India have digitalized OHC records, thereby posing challenges in compiling large datasets for analysis. Doctors and nurses attending to the employees in these clinics are not formally trained in occupational health or basic ergonomics, and may therefore not appreciate the nuances of using such data to implement evidence-based OH programmes. The current system of worksite clinics in industries is modeled on the basis of The Indian Factories Act, which specifies norms according to employee strength and the hazardous nature of industries. [24,25] A system of periodical reporting of OHC visit patterns to the "enforcement authority" or to the "health department" is not established, thereby limiting the utility of OHC records. OHC records are usually maintained based on the interest and commitment levels of industry managements or their global health policies. There is no standard guideline for disease classification in OHCs, thereby limiting cross-industry comparison. With the overt emphasis on periodical medical examination as the health surveillance measure in industries, the role of monitoring clinic visit data is undermined. Occupational health and primary care The Basic Occupational Health Services (BOHS) are an application of primary health care principles in the occupational health sector, which aims at health promotion and the prevention of health problems among workers. In India, organized occupational health services are available to less than 10% of the worker population, since 90% of the roughly 500 million workers are employed in the unorganized sector. In this context, strong and close collaboration between occupational health and primary care is often recommended as a strategy for universal occupational health care. [26][27][28] It is with this premise that the current primary health care providers in the private and public health care systems in the country need to be oriented towards various aspects of occupational health, and especially towards the importance of maintaining and utilizing clinic visit records for assessing and monitoring morbidities among workers. Furthermore, in many instances even within the organized industrial sector, health services are often outsourced, usually to general practitioners in the private sector. Hence, they too need to be oriented about the BOHS, and this is one of the aims of the present article. Conclusion Our study clearly indicated that OHC records are a useful source to identify priority health problems affecting employees as well as to monitor trends over a period of time. Respiratory, musculoskeletal, and gastrointestinal problems are the most common reasons for seeking care in the OHC. Working directly in production-related departments was associated with increased clinic visits, thereby contributing to loss of work time. Given its utility and the inferences that can be drawn, there is a need to strengthen and standardize OHC records across all industries in India. There would be much benefit in integrating OHC data with other data sources in the industry to provide a comprehensive picture of the distribution of morbidity patterns, including lost work time.
A Prospective Study on the Roles of the Lymphocyte-to-Monocyte Ratio (LMR), Neutrophil-to-Lymphocyte Ratio (NLR), and Platelet-to-Lymphocyte Ratio (PLR) in Patients with Locally Advanced Rectal Cancer Rectal cancer constitutes over one-third of all colorectal cancers (CRCs) and is one of the leading causes of cancer-related deaths in developed countries. In order to identify high-risk patients and better adjust therapies, new markers are needed. Systemic inflammatory response (SIR) markers such as LMR, NLR, and PLR have proven to be highly prognostic in many malignancies, including CRC; however, their roles in locally advanced rectal cancer (LARC) are conflicting and lack proper validation. Sixty well-selected patients with LARC treated at the Maria Sklodowska-Curie National Research Institute of Oncology in Warsaw, Poland, between August 2017 and December 2020 were prospectively enrolled in this study. The reproducibility of the pre-treatment levels of the SIR markers, their correlations with clinicopathological characteristics, and their prognostic value were evaluated. There was a significant positive correlation between LMR and cancer-related inflammatory infiltrate (r = 0.38, p = 0.044) and PD-L1 expression in tumor cells, lymphocytes, and macrophages (combined positive score (CPS)) (r = 0.45, p = 0.016). The PLR level was correlated with nodal involvement (p = 0.033). The SIR markers proved to be only moderately reproducible and had no significant prognostic value. In conclusion, the LMR was associated with local cancer-related inflammation and PD-L1 expression in tumor microenvironments. The validity of SIR indices as biomarkers in LARC requires further investigation. Introduction Rectal cancer constitutes approximately 35% of all colorectal cancers (CRCs).Its incidence in the European Union is estimated at 125,000 per year, and this is predicted to rise due to sociodemographic changes [1,2].An alarming increase in the incidence of both colon and rectal cancers in young adults has been observed in recent years [3,4].Prognoses, especially in advanced stages of the disease, remain unsatisfactory [5].The current standard of care for patients with locally advanced rectal cancer (LARC) is neoadjuvant radiotherapy/chemoradiotherapy followed by surgery according to total mesorectal excision (TME) principles with or without postoperative chemotherapy [6][7][8].However, the impact of such an approach on overall survival (OS) remains unclear, and it may cause long-term toxicities and impaired quality of life [9,10].New markers are required to appropriately identify low-and high-risk patients, which is crucial for properly adjusting patients' therapy.Blood-based systemic inflammatory response (SIR) markers such as LMR, NLR, and PLR are simple and cheap biomarkers with proven prognostic value in CRC [11][12][13][14].However, the proper validation of these markers is lacking, and their roles in LARC are uncertain [15,16].We conducted a prospective study on a well-selected group of patients with LARC.We investigated the reproducibility of the SIR markers, their correlations with clinicopathological characteristics, and their prognostic value. 
Materials and Methods A single-arm prospective study among patients treated at the Maria Skłodowska-Curie National Research Institute of Oncology in Warsaw was conducted.The eligibility criteria were as follows: (1) the patients were diagnosed with primary locally advanced rectal cancer confirmed by histopathology; (2) their clinical records, including demographic data and laboratory data, were available and complete; (3) the performance statuses of the patients were ECOG 0-2, and the patients had qualified to receive radio/chemoradiotherapy by multidisciplinary teams; and (4) the patients were >18 years old.The exclusion criteria were as follows: (1) the presence of distant metastasis at the time of diagnosis; (2) the presence of malignant tumors in other organs; (3) the presence of acute or chronic inflammatory diseases, hematological malignancies, autoimmune diseases, and other medical conditions that could affect inflammatory markers; and (4) prior immunosuppressive therapy.Blood samples from the patients were obtained three times within a median period of 21 days (range of 7-55 days).All the tests were performed prior to any oncological treatments.The differential white blood cell counts were analyzed using a Sysmex XN-550 hematology analyzer following the manufacturer's protocol.The LMR, NLR, and PLR were calculated from the blood samples by dividing an absolute lymphocyte count by an absolute monocyte count, an absolute neutrophil count by an absolute lymphocyte count, and an absolute platelet count by an absolute lymphocyte count, respectively.The patients were divided in terms of the baseline values of their SIR markers into high and low LMR, NLR, and PLR groups.The cut-off values were determined based on our previous studies and the data available in the literature [17][18][19][20]. Formulas: LMR-absolute lymphocyte count (g/L)/absolute monocyte count (g/L) NLR-absolute neutrophil count (g/L)/absolute lymphocyte count (g/L) PLR-absolute platelet count (g/L)/absolute lymphocyte count (g/L) All the patients received neoadjuvant radio/chemoradiotherapy according to the multidisciplinary teams' decisions, which were based on the stage of the disease.Ten patients did not agree to proceed with surgery.Six patients progressed/proved to be inoperable before surgery.Surgery was performed on 44 patients. 
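As an illustration only (the ratios in the study were computed from the Sysmex XN-550 differential counts), a minimal sketch of how the LMR, NLR, and PLR and the high/low grouping could be derived from a table of blood counts is given below. The column names and example values are hypothetical; the cut-offs (LMR 2.6, NLR 3.0, PLR 150) are those stated above, and assigning values equal to the cut-off to the "high" group is an assumption.

```python
# Minimal sketch: compute SIR indices from a differential blood count and
# classify patients into high/low groups at the stated cut-offs. Column names
# and example counts (10^9 cells/L) are illustrative, not study data.
import numpy as np
import pandas as pd

CUTOFFS = {"LMR": 2.6, "NLR": 3.0, "PLR": 150.0}

def add_sir_markers(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["LMR"] = out["lymphocytes"] / out["monocytes"]
    out["NLR"] = out["neutrophils"] / out["lymphocytes"]
    out["PLR"] = out["platelets"] / out["lymphocytes"]
    for marker, cutoff in CUTOFFS.items():
        # values at or above the cut-off are treated as "high" (an assumption)
        out[f"{marker}_group"] = np.where(out[marker] >= cutoff, "high", "low")
    return out

example = pd.DataFrame({
    "lymphocytes": [1.8, 1.2],
    "monocytes":   [0.5, 0.6],
    "neutrophils": [4.1, 5.3],
    "platelets":   [240, 310],
})
print(add_sir_markers(example)[["LMR", "NLR", "PLR",
                                "LMR_group", "NLR_group", "PLR_group"]])
```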
Histopathological Analysis The post-surgical pathological results were collected and analyzed.There were 10 cases of complete pathological response (pCR).In two cases, no pathological specimens were available after surgery, and in three cases, the specimens were deemed not suitable for the histopathological analysis.Twenty-nine specimens were found suitable for the analysis.The presence of tumor-infiltrating immune cells in the tumor centers and the invasive margins was evaluated by immunohistochemistry using the antibodies for the CD8 antigen.For the immunohistochemical staining, primary monoclonal antibodies against CD8 (DAKO, Glostrup, Denmark, Cat.No IR623) with a DAKO EnVision FLEX detection sys-tem (DAKO, Denmark, Cat.No K8002) were used.Paraffin sections (4 µm on silanized slides) were deparaffinized, rehydrated, and then stained according to the manufacturer's procedures.In a semi-quantitative assessment, a four-digit scale (0: 0-10% of the area of scarce and mild staining, 1: 11-50% of the area of moderate or intensive staining, 2: 50-75% of the area of intermediate or intensive staining, and 3: >75% of the area of intermediate or intensive staining) of the density of lymphocytes was used in the measurements for the tumor invasive margins.The inflammatory infiltrates containing lymphocytes, plasmacytes, monocytes/macrophages, and neutrophils were assessed histologically on H&E basic stain at the invasive fronts of the tumors using the same semi-quantitative four-digit scale.An example of intensive inflammatory infiltrates and scarce inflammatory infiltrates at the invasive margins is presented in Figure 1.Primary antibodies against MSH6 (DAKO, Denmark, Cat.No IR086) and PMS2 (DAKO, Denmark IR087) were used to detect the expression of microinstability indicator proteins.The percentage of positive cancer cells was estimated in each case, and the internal positive control consisted of lamina propria inflammatory cells and/or nontumoral glandular cells.As for the PD-L1 expression, clone 22C3 of the monoclonal antibody (DAKO, Denmark, Cat.No SK006) was used, and the staining was performed automatically in a closed system as supplied by the manufacturer.The expression was calculated as a CPS given the number of the PD-L1-staining cells (tumor cells, lymphocytes, macrophages) relative to all viable tumor cells, multiplied by 100% (the range of the results was between 0 and 100).An example of high and low expression of PD-L1-staining cells is presented in Figure 2. 
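For clarity, the combined positive score described above can be written as the following formula; this is a standard formulation consistent with the description in the text rather than an equation reproduced from the original article:

$$\mathrm{CPS} = \frac{N_{\text{PD-L1-positive tumor cells}} + N_{\text{PD-L1-positive lymphocytes}} + N_{\text{PD-L1-positive macrophages}}}{N_{\text{viable tumor cells}}} \times 100$$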
Statistical Analysis The Shapiro-Wilk test was used to test the normality of the data distribution. The analysis of the repeatability of the measurements of SIR markers was evaluated using the Friedman test. Binomial variables were compared between measurements with the McNemar test. Additionally, confidence intervals for the proportions were calculated using a binomial exact calculation. Cohen's Kappa was calculated to assess the extent of agreement between the first and the second measurements, including 95% confidence intervals. The relationships between parameters were assessed using Pearson's correlation analysis. Statistical analyses were performed using the IBM SPSS Statistics ver. 23 software package and R software, version 4.0.5. The Kaplan-Meier procedure was performed to compare the survival and time without relapse between patients with low and high levels of the LMR, NLR, and PLR. The log-rank test was used to verify whether any significant differences between groups were present. The 95% confidence intervals were calculated for a cumulative proportion of the patients who did not die/relapse. Correlations between qualitative or semi-qualitative variables were verified using Spearman's correlation coefficients. The levels of the LMR, NLR, and PLR vs. the T, N, CR, and presence of progression were analyzed using Mann-Whitney U tests (comparison of 2 groups) or with a Kruskal-Wallis test (comparison of 3 groups), with a Dunn post hoc test. Ethical Considerations The study conformed to the provisions of the Declaration of Helsinki and was approved by the ethics committee of the National Institute of Oncology. All patients were informed of the investigational nature of this study and provided written informed consent.
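As a rough illustration of the agreement analysis described above (the study itself used IBM SPSS and R), Cohen's kappa for the high/low classifications at two measurements can be sketched as follows; the label arrays are hypothetical placeholders.

```python
# Minimal sketch (not the authors' SPSS/R code): percent agreement and
# Cohen's kappa for the high/low classification at two measurements.
import numpy as np

def cohens_kappa(labels_a, labels_b):
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    categories = np.union1d(labels_a, labels_b)
    p_observed = np.mean(labels_a == labels_b)                  # observed agreement
    p_expected = sum(                                           # chance agreement
        np.mean(labels_a == c) * np.mean(labels_b == c) for c in categories
    )
    return (p_observed - p_expected) / (1.0 - p_expected)

first  = np.array(["high", "high", "low", "low", "high", "low"])   # hypothetical
second = np.array(["high", "low",  "low", "low", "high", "high"])  # hypothetical
print(f"percent agreement = {np.mean(first == second):.2f}")
print(f"Cohen's kappa     = {cohens_kappa(first, second):.2f}")
```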
Results A total of 60 patients with rectal cancer treated at the Maria Skłodowska-Curie National Research Institute of Oncology in Warsaw between August 2017 and December 2020 were prospectively enrolled in the study.Forty-three males and seventeen females were included.The median age was 66.5 years (range of 29-89 years old).All the patients in the study were citizens of Poland of Caucasian ethnicity.The distributions of the cancer stages were as follows: stages II-IIIA, 8 (13%); stage IIIB, 41 (68%); and stage IIIC, 10 (17%).The stage of one of the patients remained undefined.There were no stage I or stage IV patients.All the rectal cancers were adenocarcinomas.The intermediate differentiation of the tumor was the most common-in 42 (70%) patients followed by the undefined differentiation-14 (23.3%).Two (3.3%) rectal cancers were well-differentiated (G1) and two (3.3%) poorly differentiated (G3).In terms of localization of the tumor within the rectum (distance of the lowest portion of the tumor from the anal verge), 28 (47%) patients had low, 24 (40%) middle, and 8 (13%) high rectal cancer.There were 15 (25%) smokers and 45 (75%) non-smokers.Most of the patients were overweight-23 (38%); 19 (32%) had normal weight; 17 (28%) were obese, and only 1 (2%) patient was underweight.Almost half of the patients (47%) had normal levels of carcinoembryionic antigen (<5.0 ng/mL).The characteristics of the patients are presented in Table 1.The median values of the lymphocytes, monocytes, neutrophils, and platelet counts, as well as their ratios, are shown in Table A1. Reproducibility The patients were divided into high and low groups according to the baseline values of each SIR marker.The predetermined cut-offs were 2.6 for the LMR, 3.0 for the NLR, and 150 for the PLR.The numbers of patients who belonged to each group in each measurement are presented in Table A2. Over half of the patients (56.7%) (95% CI, 43.2-69.4%)were classified as LMR high, and 61.7% (95% CI, 48.2-73.9%)and 51.7% (95% CI, 38.4-64.8%) of the patients were assigned to the NLR low and PLR low groups accordingly.After the second measurements, 81.7% (95% CI, 69.6-90.5%) of the patients belonged to the same groups (LMR high or LMR low).In terms of the NLR and PLR, 73.3% (95% CI, 60.3-83.9%)and 78.3% (95% CI, 65.8-87.9%) of the patients were in the same groups, respectively.After three measurements, the percentages of patients who stayed in the same groups were nearly identical, as follows: 68.3% (95% CI, 55.0-79.7%)for the LMR and NLR and 70.0%(95% CI, 56.8-81.2%)for the PLR.For the LMR, NLR, and PLR, there were no significant changes in the percentages of the patients classified as low or high between all three measurements (p > 0.05 in all comparisons).The mean percentage change between the third and the first measurements of the lymphocytes, monocytes, neutrophils, and platelet counts ranged from −5.59% to 4.76%, and the standard errors ranged from 2.0 to 3.9 (Table 2). 
The Cohen's Kappa statistic for the extent of the agreement between the first and second measurements for the LMR was κ = 0.59 (95% CI, 0.39-0.79)(p < 0.001).For the NLR, the Kappa was κ = 0.45 (95% CI, 0.22-0.68)(p < 0.001), and for the PLR, κ = 0.53 (95% CI, 0.32-0.75)(p < 0.001), meaning in all cases, there was a moderate agreement between both measurements.If the LMR at the first measurement was out of the range of 2.2-3.0 (±0.4 from the cutoff), then the risk of misclassification in the second measurement, defined as an affiliation to a different (high or low) group than initially, dropped to 5.0% (95% CI, 1.0-13.9%).In the case of the NLR, when it was outside of the range of 2.5-3.5 (±0.5) in the first test, it was 8.3% (95% CI, 2.8-18.4%),and in the case of a PLR outside of the range of 125-175 (±25), it was 10.0% (95% CI, 3.8-20.5%). An analysis of the correlation between the first and third measurements of the LMR, NLR, and PLR was conducted.The LMR values were correlated with a coefficient of 0.776 (p < 0.00001).The NLR and PLR were correlated with coefficients of 0.696 (p < 0.000089) and 0.751 (p < 0.00001), respectively (Figure A1). Correlation with Clinicopathological Characteristics There was no significant correlation between the LMR, NLR, and PLR and the tumor size.There were no relationships between the pre-treatment levels of the SIR markers and both the progression and inoperability after neoadjuvant therapy as well as complete pathological responses.There were significant differences in the PLR levels between the N0, N1, and N2 subgroups (p = 0.033).A post hoc analysis confirmed that the PLR level in the N0 group was lower (116.35(89.14-145.30) vs. N1, 147.27 (62.70-452.56);and vs. N2, 164.41 (93.47-321.83).There was no correlation between the LMR and the NLR, and the nodal involvement was observed (Table 3).LMR, lymphocyte-to-monocyte ratio; NLR, neutrophil-to-lymphocyte ratio; PLR, platelet-to-lymphocyte ratio; pCR, pathological complete response; *, average level from all three measurements with analyses using the Mann-Whitney U test (T; progression/inoperability, CR) or the Kruskal-Wallis test (N). There was no significant correlation between the LMR, NLR, and PLR and the pretreatment level of CEA (p > 0.05 in all cases) (Table A3).There was a significant positive correlation between the LMR and the cancer-related inflammatory infiltrates in the resected tissues (r = 0.38, p = 0.044) and the PD-L1 expression in the tumor cells and tumor-associated leukocytes (CPS) (r = 0.45, p = 0.016).The NLR and PLR were not related to the level of CPS or the inflammatory infiltrates.The correlation between the density of the CD8+ lymphocytes and the LMR, PLR, and NLR was not significant (Table 4).The combined positive score was significantly positively correlated with the CD8+ (r = 0.56, p = 0.002), as well as with the inflammatory infiltrates (r = 0.51, p = 0.005) (Table A4).There was only one case of mismatch repair deficiency among the twenty-nine histopathologically assessed specimens (3.45%). Prognostic Value The population of patients was analyzed in terms of recurrence-free survival (RFS) and OS depending on the pre-treatment levels of the LMR, NLR, and PLR. 
Lymphocyte-to-Monocyte Ratio The cumulative proportion of patients who did not relapse at the end of the observation period was 32% (95% CI = 8%; 100%) for the low LMR level group and 68% (95% CI = 53%; 87%) for the high LMR level group. The mean number of months without relapse was M = 39.03 for the low LMR level group and M = 47.01 for the high LMR level group (p = 0.641). At the end of the observation period, the cumulative proportion of alive patients was 80% (95% CI = 65%; 97%) for the low LMR level group and 80% (95% CI = 66%; 99%) for the high LMR level group. The mean time of survival was M = 44.81 months for the subjects with low LMR levels and M = 52.61 months for the subjects with high LMR levels (p = 0.597) (Figure 3). Neutrophil-to-Lymphocyte Ratio The mean number of months without relapse for the patients with low NLR levels was M = 48.79, and for the patients with high NLR levels, it was M = 36.91. The cumulative proportion of subjects who did not relapse at the end of the observation period was 71% (95% CI = 57%; 90%) for the low NLR level group and 30% (95% CI = 7%; 100%) for the high NLR level group (p = 0.225). No differences were detected between the survival times of the patients with low and high NLR levels (p = 0.927). The mean time of survival was M = 51.36 months for the subjects with low NLR levels and M = 45.66 months for the subjects with high NLR levels. The cumulative proportion of alive patients at the end of the follow-up period was 76% (95% CI = 59%; 98%) for the low NLR level group and 83% (95% CI = 69%; 100%) for the high NLR level group (Figure 4).
Platelet-to-Lymphocyte Ratio The cumulative proportion of patients who did not relapse at the end of the observation period was 63% (95% CI = 46%; 86%) for the low PLR level group and 47% (95% CI = 20%; 100%) for the high PLR level group. The mean number of months without a relapse was M = 40.86 among the patients with low PLR levels and M = 44.48 among the patients with high PLR levels (p = 0.869). The mean time of survival was M = 42.57 months for the patients with low PLR levels and M = 54.56 for the patients with high PLR levels. The cumulative proportion of alive subjects was 72% (95% CI = 56%; 94%) for the low PLR level group and 89% (95% CI = 78%; 100%) for the high PLR level group (p = 0.261) (Figure 5). Discussion Cancer may induce both local and systemic inflammatory reactions [21]. The LMR, NLR, and PLR are blood-based biomarkers of cancer-related inflammation. In our study, we proved that there was a strong correlation between the LMR and cancer-related inflammatory infiltrates in the resected tissues. Similar results have been reported for cholangiocarcinoma, colorectal, and breast cancers [22][23][24]. However, no correlation between the SIR markers and tumor-infiltrating CD8 lymphocytes was found, which was in line with other studies on both rectal and left-sided colon cancers [25,26]. This apparent discrepancy may have been due to the large populations of neutrophils, macrophages, or other subsets of lymphocytes in the inflammatory infiltrates. We found a correlation between the LMR and PD-L1 expression in the tumor cells and tumor-associated leukocytes relative to all the viable tumor cells (CPS). To our knowledge, these are the first data on a correlation between the SIR markers and CPS in colorectal cancer. In other malignancies, the data on correlations between SIR markers and PD-L1 expression are conflicting [27,28]. We found no association between the SIR markers and the level of CEA, which corresponded to a retrospective study on rectal cancer patients [29]. The PD-L1 expression in the immune cells was positively correlated with both the inflammatory infiltrates and the tumor-infiltrating CD8 lymphocytes. Similar relationships have been reported in hepatocellular carcinoma, cholangiocarcinoma, and colorectal cancer [30][31][32]. The LMR, NLR, and PLR are biomarkers with high prognostic value in many malignancies. However, their roles in LARC are not clear and lack proper validation. The number of studies assessing their reproducibility is very limited. To the best of our knowledge, our study is the first to directly investigate this subject in a prospectively enrolled cohort. Reference and
cut-off values for the SIR markers are not well-established.According to analyses of ostensibly healthy populations, the average values of the LMR, NLR, and PLR may differ depending on race, sex, and age.The mean values for the LMR in healthy individuals were significantly higher, and the mean values for the NLR and PLR were lower in comparison to our results [33][34][35].Our findings were based on a well-selected group of patients with untreated LARC with no concomitant acute or chronic diseases that could have influenced the levels of inflammatory markers, which suggests that all three SIR markers are only moderately reproducible.When divided into high and low groups, the percentages of patients who stayed in the same groups after three measurements were nearly the same for all the parameters (68.3% for the LMR and NLR and 70% for the PLR).Nearly one-third of the patients' affiliations with a group changed between the assessments.However, if the first measurement was out of the range of approximately ±15% from the cut-off, the risk of misclassification in the second measurement dropped significantly, and in terms of the LMR, this dropped to 5% (95% CI, 1.0-13.9%),while for the NLR, it dropped to 8.3% (95% CI, 2.8-18.4%),and for the PLR, it dropped to 10% (95% CI, 3.8-20.5%).These results were in line with our previous retrospective study on the reproducibility of the LMR in patients with LARC, where two peripheral blood tests within five weeks prior to beginning anti-cancer therapies were performed [20].The stability of the NLR over time, up to 100 days, has been demonstrated in cardiac surgery patients; however, it has not been confirmed in a cancer population [36].No other studies investigating the reproducibility of SIR markers have been found in the literature.We analyzed the RFS and OS of patients depending on the levels of their LMR, NLR, and PLR.We found no statistically significant correlations in terms of RFS and OS between the high and low LMR, NLR, and PLR groups.These results were not consistent with the majority of studies assessing the whole population of CRC [37][38][39][40].However, among trials restricted to LARC, the impacts of SIR markers on recurrences and survival have been conflicting.Wu et al. showed no correlation between the LMR and the DFS or OS in a non-metastatic rectal cancer population [15].Similarly, in a large study of over 1500 LARC patients by Dudani et al., no statistically significant correlation between the NLR, PLR, and DFS and the OS was proven [16].These findings were supported by the results of the study by Ishikawa and Portale et al. 
[41,42].Most meta-analyses have suggested that the SIR markers in CRC have prognostic value, and these have included patients with both metastatic and non-metastatic disease [43,44].The association between SIR markers and prognosis was less noted in non-metastatic stages.There are data that have indicated that SIR markers are associated with adverse OS in colon cancer but not in rectal cancer [45].Our results confirmed that the prognostic value of the SIR markers in LARC is less evident than those among the whole CRC population.The phenomenon of cancer-related inflammation is important for understanding the roles of SIR markers.The relationship between cancer and inflammation has been investigated since the 19th century when Virchow first observed that cancer tends to originate from chronically inflamed sites [46].Through the recruitment of inflammatory cells and cytokines, the production of reactive oxygen species, and the inhibition of repair programs, inflammation promotes the uncontrolled proliferation of defective cells and potentiates neoplastic risk.Inflammatory cells are abundant in a tumor's microenvironment [47].They reflect a reaction of the host towards a tumor, but they also serve as a product of cancer-related cells and a tumor's predisposition toward invading and suppressing the immune system [48].Lymphocyte counts reflect systemic inflammatory responses by inducing the production of anti-tumor cytokines, and cytotoxic activity suppresses a cancer's proliferation and spread [49].Monocytes, on the contrary, have proven to contribute to a tumor's progression and metastatic activity [50].Neutrophils, accounting for 50-70% of leukocytes, play a central role in cancer-related inflammation.Releasing reactive oxygen and nitrogen species that damage DNA, they play a substantial role in cancer initiation [51].Tumor progression is boosted by neutrophil-derived chemokines and cytokines that mediate the process of angiogenesis [52].Neutrocytes take part in suppressing T-lymphocyte proliferation, reducing the anti-tumoral effect of NK cells and promoting metastatic spread [53,54].Similarly, platelets, by releasing cytokines and growth factors, contribute to carcinogenesis.There is a substantial interaction between thrombocyte activation and cancer progression.Tumor cells produce cytokines, such as IL-6, that stimulate thrombocytosis.In turn, thrombocytes promote further tumor growth, leading to an even more intensive stimulation and activation of platelets [55].These immunological interactions have led to the introduction of SIR markers and the investigation of their potential roles in clinical practice.Our study revealed interesting aspects of the SIR markers in LARC. 
There were two main limitations concerning our study: (a) it had a relatively small group of patients, and (b) our studied population was homogenous, consisting entirely of Caucasian citizens of Poland. Moreover, other factors might have had an impact on the results of our study, such as the lack of well-defined cut-off values for the LMR, NLR, and PLR and the possible influence of other parameters (e.g., age, sex, comorbidities, smoking) on the level of the LMR, NLR, and PLR. The time between measurements of blood samples varied, which might have had an impact on the blood results. Finally, the immunohistochemical data may suffer a bias due to the fact that several patients either did not proceed with surgery or had complete pathological responses. Future studies should include larger and mixed populations to confirm our results. Despite its limitations, our study explored subjects that are rarely present in the literature. A better understanding of the roles of SIR indices in LARC and their relationships with other clinicopathological features may enable the application of these markers in clinical practice.
Conclusions The LMR, NLR, and PLR are peripheral blood-based markers of cancer-related inflammation. Our results suggest that the LMR is correlated with inflammatory infiltrates and PD-L1 expression in a tumor's microenvironment. However, the prognostic value of the SIR markers appears to be less evident among the patients with LARC compared to other colon cancers and most other malignancies, with no statistically significant impact on the RFS or OS in our study. The reproducibility of the SIR markers is moderate. More prospective studies are required to assess the validity of the SIR indices as biomarkers in LARC.
Figure 1. Inflammatory infiltrates at the invasive margins of the cancers. The intensive inflammatory infiltrate (A) versus nearly no inflammatory cells (B) at the invasive margins of the tumors (both H&E ×100).
Figure 2. PD-L1-staining cells at the invasive margins of the cancers. The high expression of PD-L1-staining cells (A) versus nearly no PD-L1-staining cells (B) at the invasive margins of the tumors (DAKO 22C3 antibody).
Figure 3. Overall survival curve for the patients with low and high LMR levels. LMR, lymphocyte-to-monocyte ratio.
Figure 4. Overall survival curve for the patients with low and high NLR levels. NLR, neutrophil-to-lymphocyte ratio.
Figure 5. Overall survival curve for the patients with low and high PLR levels. PLR, platelet-to-lymphocyte ratio.
Table 1. Characteristics of the patients.
Table 2. Calculations of the percentages of the changes between the third measurements vs. the first measurements.
Table 3. Average value of the LMR, NLR, and PLR depending on the size of the tumor, nodal status, complete pathological response, and presence of progression after neoadjuvant treatment.
Table 4. Correlation between the LMR, NLR, and PLR and the CPS, CD8+ lymphocytes, and inflammatory infiltrates.
Appendix A
Table A1. Median and mean values of the ALC, AMC, ANC, platelets, LMR, NLR, and PLR in three measurements.
Table A2. The LMR, NLR, and PLR high and low patients in each measurement.
Table A3. Correlations between the LMR, NLR, and PLR and the CEA. (LMR, lymphocyte-to-monocyte ratio; NLR, neutrophil-to-lymphocyte ratio; PLR, platelet-to-lymphocyte ratio; CEA, carcinoembryonic antigen; *, average level from all three measurements; r, Spearman's correlation coefficient.)
Table A4. Correlation between the CPS and the CD8+ lymphocytes and inflammatory infiltrates.
Determination of size, albedo and thermal inertia of 10 Vesta family asteroids with WISE/NEOWISE observations In this work, we investigate the size, thermal inertia, surface roughness and geometric albedo of 10 Vesta family asteroids by using the Advanced Thermophysical Model (ATPM), based on the thermal infrared data acquired by mainly NASA's Wide-field Infrared Survey Explorer (WISE). Here we show that the average thermal inertia and geometric albedo of the investigated Vesta family members are 42 $\rm J m^{-2} s^{-1/2} K^{-1}$ and 0.314, respectively, where the derived effective diameters are less than 10 km. Moreover, the family members have a relatively low roughness fraction on their surfaces. The similarity in thermal inertia and geometric albedo among the V-type Vesta family member may reveal their close connection in the origin and evolution. As the fragments of the cratering event of Vesta, the family members may have undergone similar evolution process, thereby leading to very close thermal properties. Finally, we estimate their regolith grain sizes with different volume filling factors. INTRODUCTION An asteroid family is usually supposed to be formed from the fragmentation of a parent body in the mainbelt. The family members may share similar composition and physical characteristics with their parent body. To identify and make a study of the asteroid families, one of the classical methods is to evaluate the proper elements and characterize the distribution of asteroids in proper element space by applying a clustering algorithm (e.g.,the Hierarchical Clustering Method (HCM)) (Zappalà et al. 1990). In addition, Nesvorný et al. (2015) provided an asteroid family catalogue that contains 122 families calculated from synthetic proper elements. The Vesta family, as one of the largest asteroid population, consists of over 15,000 members and locates in the inner region of the main-belt with the proper orbital elements: 2.26 ≤ a p ≤ 2.48 AU, 0.075 ≤ e p ≤ 0.122, and 5.6 • ≤ i p ≤ 7.9 • , where a p , e p and i p are the proper elements of semi-major axis, eccentricity and inclination, respectively (Zappalà et al. 1995). Taxonomically, basaltic asteroids are classified as V-type asteroids that have a photometric, visible-wavelength spectral and other observational relationships with (4) Vesta (Hardersen et al. 2014). For example, their optical spectrums are similar with that of (4) Vesta that displays a strong absorption band attributed to pyroxene centered near 9000Å (Binzel & Xu 1993). In the Vesta family, most of the members are believed to be V-type asteroids. But some V-type asteroids have been discovered outside the Vesta family recently, which may indicate the presence of multiple basaltic asteroids in the early solar system (Licandro et al. 2017). The Vesta family was inferred to be originated from (4) Vesta through a catastrophic impact event approximately 1 Gyr ago (Marzari et al. 1996). Carruba et al. (2005) further investigated the dynamical evolution of the V-type asteroid outside the Vesta family and showed the possibility that the members of the Vesta family migrated via Yarkovsky effect and nonlinear secular resonance. Hasegawa et al. (2014a) investigated the rotational rates of 59 V-type asteroids in the inner main belt region and showed that the rotation rate distribution is non-Maxwellian, this may be caused by the long-term Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect which can change the direction of asteroids' spin axis and rotation periods (Delbo et al. 2015). 
Additionally, by numerically integrating the orbits of 6600 Vesta fragments over a timescale of 2 Gyr, Nesvorný et al. (2008) demonstrated that a large number of family members can evolve out of the family borders defined by clustering algorithms and constrained the age of this family to be older than 1 Gyr. Also, Bottke et al. (2005) derived the cratering event may be occurred in the last 3.5 Gyr. According to Dawn spacecraft's observation, the two largest impact craters on Vesta were estimated to be formed about 1 Gyr ago (Marchi et al. 2012). Such early formation event of the family could provide sufficient time to yield the rotational distribution obtained by Hasegawa et al. (2014a) under the influence of YORP effect. Moreover, visible and infrared spectroscopic investigations imply that (4) Vesta may be the parent body of near-Earth V-type asteroids (Migliorini et al. 1997) and Howardite-Eucrite-Diogenite meteorites (HEDs) (Cruikshank et al. 1991;Migliorini et al. 1997;Burbine et al. 2001). The HEDs are believed to come from the melted basaltic magma ocean crystallization on the large asteroids (De Sanctis et al. 2012;Mandler & Elkins-Tanton 2013). Fulvio et al. (2012) performed the irradiation experiments in laboratory on HED meteorites to simulate space weathering on Vesta and Vesta family asteroids by using different ions. Their experimental results indicate that space weathering effect can give rise to the spectral differences between (4)Vesta and other V-type bodies. Vesta is known as one of the most frequently observed bodies (Reddy et al. 2013;Hasegawa et al. 2014b). Thomas et al. (1997a) explored the pole orientation, size and shape of Vesta by using images from Hubble Space Telescope (HST). The Hubble's observations unveil an amazing impact crater with a diameter 460 km, being supportive of the collision site (Thomas et al. 1997a). In 2011, the spacecraft Dawn arrived at Vesta and further discovered that the giant basin observed by HST is, as a fact, composed of two overlapping huge impact craters, Rheasilvia (500 km) and Veneneia (400 km), respectively, which their excavation was found to be sufficient to supply the materials of the Vesta family asteroids and HEDs (Schenk et al. 2012). The Rheasilva crater appears to be younger and overlies the Veneneia. Moreover, further study by Dawn mission revealed a wide variety of albedo on the surface of Vesta (Reddy et al. 2012) and a deduced core with a diameter 107 ∼ 112 km, indicating sufficient internal melting to segregate iron (Russell et al. 2012). As aforementioned, V-type asteroids and HEDs do provide key clues to formation and evolution scenario for the main-belt asteroids as well as essential information of the early stage of our Solar system, from a viewpoint of their similar orbits and the spectral properties. Therefore, the primary objective of this work is to investigate the thermophysical characteristics of thermal inertia, roughness fraction, geometric albedo and effective diameter etc., to have a better understanding of such kind of Vesta family members. This can help us establish the relation between Vesta family asteroids and other main-belt asteroids (MBAs) from a new perspective. In fact, thermal inertia plays an important role in determining the resistance of temperature variation over the asteroid surface, which is associated with surface temperature and materials. 
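For reference, thermal inertia Γ is conventionally defined from the surface material properties as

$$\Gamma = \sqrt{\kappa \rho C},$$

where κ is the thermal conductivity, ρ the surface density and C the heat capacity, which yields the units of $\rm J\,m^{-2}\,s^{-1/2}\,K^{-1}$ used throughout this work.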
Although, as major fragments of the parent body or the impactor, the family members may share similar orbital evolution and spectral features, each member of the Vesta family can differ in size, surface topography and roughness owing to secular surface evolution in space, thereby producing diverse thermal inertia values on the asteroids' surfaces. Moreover, the geometric albedo holds significant information on the asteroidal composition. Therefore, by comparing the thermal inertia and geometric albedo of the family members with those of the parent body, we can gain a deeper understanding of the origin of the Vesta family. In this work, we extensively investigate the thermal properties of 10 Vesta family asteroids whose polyhedral shape models and thermal infrared observational data are available, by using the Advanced Thermophysical Model (ATPM) (Rozitis & Green 2011; Yu et al. 2017; Jiang et al. 2019). Moreover, we derive the thermophysical parameters of the Vesta family asteroids on the basis of ATPM and the mid-infrared data and further explore the correlations among various thermal parameters, to provide implications for the impact history of (4) Vesta. Furthermore, we explore the homology of these Vesta family asteroids by comparing their thermal parameters with those of (4) Vesta. The 10 family members are (63) Ausonia, (556) Phyllis, (1906) Neaf, (2511) Patterson, (3281) Maupertuis, (5111) Jacliff, (7001) Neother, (9158) Plate, (12088) Macalintal and (15032) Alexlevin. The structure of this paper is as follows. Section 2 gives a brief description of the modelling of the thermal process as well as the convex shape models, the mid-infrared observations and ATPM. The radiometric results for each Vesta family asteroid and their analysis are presented in Section 3. Section 4 summarizes the discussion of the relationship between thermal inertia, effective diameter and geometric albedo and the evaluation of the regolith grain size. Section 5 gives the conclusions. Shape Model As mentioned above, here we adopt 3D convex shape models for the 10 Vesta family asteroids (Kaasalainen & Torppa 2001) from DAMIT (https://astro.troja.mff.cuni.cz/projects/asteroids3D/web.php). In ATPM, the asteroids are considered to be composed of N triangular facets, and we employ a fractional coverage of hemispherical craters to describe the surface roughness, where each crater is assumed to be composed of M smaller triangular sub-facets (Rozitis & Green 2011). Table 1 lists the parameters of the asteroids' shape models, including the number of facets, the number of vertices and the pole orientations. The shape models of the 10 Vesta family asteroids are plotted in Fig. 1, where the red arrow represents the spin axis of each asteroid. Observations In this work, thermal infrared data are obtained from three space-based telescopes: WISE/NEOWISE, the Infrared Astronomical Satellite (IRAS), and AKARI. WISE surveyed the sky in 4 wavebands (3.4, 4.6, 12.0 and 22.0 µm, denoted W1, W2, W3 and W4, respectively) until the solid hydrogen cryostat (which was utilized to cool the W3 and W4 bands) was depleted on September 30, 2010. Thereafter the satellite continued to operate in the W1 and W2 bands, known as NEOWISE. Note that, according to the orbital database at the AstDyS node (http://hamilton.dm.unipi.it/astdys/), we take (7001) Neother as a Vesta family asteroid because its proper orbital elements are within the range of those of the Vesta family (Zappalà et al. 1995); the other 9 Vesta family asteroids are simply adopted from the Vesta family list (Nesvorný et al. 2015). In this situation, we can download the data from two source tables of the WISE archive
(http://irsa.ipac.caltech.edu/applications/wise/): the WISE All-Sky Single Exposure (L1b) and the NEOWISE-R Single Exposure (L1b) tables. Here we should emphasize that the surface temperature of MBAs is relatively lower than that of NEAs, so that the data at shorter wavelengths (e.g., W1) can include a large percentage of reflected sunlight. As will be discussed in the following section, the W1 band contains roughly 90% reflected sunlight in these observations, indicating that the thermal portion is merely comparable to the uncertainty of the entire observed flux. For this reason, we do not adopt W1 band data of the Vesta family asteroids in our fitting. In the target searching, we employ a Moving Object Search with a search cone radius of 1". Similar to Masiero et al. (2011) and Grav et al. (2012), all data with artifact identification flags (CC FLAG) other than 0 and p are rejected, where 0 indicates no evidence of known artifacts whereas p means that an artifact may be present. Additionally, the modified Julian date needs to be within 4 s of the epochs given by the MPC. Subsequently, we follow the method described by Wright et al. (2010) to convert the magnitudes into fluxes, adopting color correction factors of 1.3448, 1.0006 and 0.9833 for the W2 ∼ W4 bands. Since the observed flux is proportional to the cross-sectional area of the asteroid in the direction of the observer, the thermal light curve of an asteroid should not have an amplitude that exceeds a certain value (Jiang et al. 2019). On this basis, we further screen the data set for each asteroid; the thermal light curves will be discussed later in this work. Table 2 reports the number of observations in the W2 ∼ W4 wavebands of WISE, the range of phase angle, the heliocentric distance and the distance from the asteroid to the observer. The detailed WISE/NEOWISE observations for each asteroid are summarized in the Appendix. The observational uncertainties here are set to be 10% for all Vesta family asteroids. The observations from AKARI and IRAS are applied only to the fitting of (63) Ausonia and (556) Phyllis and are not given in the table. Advanced Thermophysical Model with Reflected Sunlight The ATPM accepts global shape models in the triangular facet formalism and adopts hemispherical craters to represent the surface roughness. In order to constrain the thermal properties such as thermal inertia, roughness fraction and geometric albedo, we need to compute the temperature distribution over the asteroid's surface. For each shape facet, the temperature T can be determined by solving the 1D heat conduction equation with specific boundary conditions in which the shadowing effect, multiple scattering of sunlight and re-absorption of thermal radiation are taken into consideration (Rozitis & Green 2011). Here, κ, ρ and C represent the thermal conductivity, surface density, and heat capacity, respectively. The method to simplify and solve the heat-conduction equation is described in Spencer et al. (1989) and Lagerros (1996). (Note to Table 1: a, e, i represent the semi-major axis, eccentricity and inclination, respectively; P_orb is the orbital period and Absmag is the absolute magnitude; N_facets and N_vertices describe the numbers of shape facets and vertices in the shape models; P_rot is the rotation period.)
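To make the heat-conduction step more concrete, the following Python sketch integrates the 1D heat conduction equation beneath a single surface facet with a simple day/night insolation cycle. It is only a toy illustration of the kind of calculation performed by thermophysical codes such as ATPM: crater roughness, shadowing, multiple scattering and self-heating are omitted, and all parameter values (conductivity, density, heat capacity, insolation, rotation period) are assumptions chosen for illustration rather than values taken from this work.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(kappa=2.0e-3, rho=1500.0, c=600.0, albedo=0.1,
                        emissivity=0.9, S0=250.0, period=6.0 * 3600.0,
                        nz=50, n_rot=15, steps_per_rot=5000):
    """Toy explicit solver for rho*c*dT/dt = kappa*d2T/dz2 beneath one facet,
    driven by a half-sine day/night insolation cycle at the surface and a
    zero-flux boundary at depth. Returns the surface temperature over the
    last rotation. All numbers are illustrative only."""
    diffusivity = kappa / (rho * c)
    skin_depth = np.sqrt(diffusivity * period / np.pi)   # diurnal skin depth
    dz = 0.3 * skin_depth
    dt = period / steps_per_rot
    assert diffusivity * dt / dz**2 < 0.5, "explicit scheme would be unstable"
    T = np.full(nz, 160.0)                                # initial profile, K
    surface = []
    for step in range(n_rot * steps_per_rot):
        t = (step % steps_per_rot) * dt
        absorbed = (1.0 - albedo) * S0 * max(0.0, np.sin(2.0 * np.pi * t / period))
        # Surface energy balance (absorbed + conducted = radiated), solved by a
        # few Newton iterations for the surface temperature T[0].
        for _ in range(5):
            f = absorbed + kappa * (T[1] - T[0]) / dz - emissivity * SIGMA * T[0]**4
            dfdT = -kappa / dz - 4.0 * emissivity * SIGMA * T[0]**3
            T[0] -= f / dfdT
        # Explicit diffusion update for the interior, zero-flux lower boundary.
        T[1:-1] += diffusivity * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[-1] = T[-2]
        if step >= (n_rot - 1) * steps_per_rot:
            surface.append(T[0])
    return np.array(surface)

# Thermal inertia implied by the assumed material parameters.
print("Gamma =", np.sqrt(2.0e-3 * 1500.0 * 600.0), "J m^-2 s^-1/2 K^-1")
print("Peak surface temperature:", surface_temperature().max(), "K")
```

With these assumed material parameters the implied thermal inertia is sqrt(κρc) ≈ 42 J m⁻² s⁻¹ᐟ² K⁻¹, of the same order as the values derived for the family members, so the resulting day/night temperature contrast is indicative of the regime explored in the fitting.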
Once the temperature distribution is ascertained, we can use the Planck function to evaluate theoretical thermal emission of each facet, thus the total theoretical thermal emission of an asteroid can be written as where f r is the roughness fraction, is the monochromatic emissivity at wavelength λ which is assumed to be 0.9. For facet i and sub-facet ij, A and v denote the area and the view factor, respectively. B is the Planck function, described by and the view factor v is defined as where s i indicates whether facet i can be seen by the observer, n i and n obs represents the facet normal and the vector pointing to the observer respectively, and d ao is the distance between asteroid and observer. In addition, as pointed out by Myhrvold (2018), thermal infrared observations of shorter wavelengths (e.g., W1 and W2) are dominated by reflected sunlight. It is necessary to remove the effects of the reflected part when we use these observations in shorter wavelengths. Hence, we further deal with the reflected sunlight contained in the observed flux by using the method described in Jiang et al. (2019). We treat each facet i and sub-facet ij as Lambertian surface, and the reflect part can expressed as where B(λ, T ) is the Planck equation in the temperature of the Sun, R sun the radius of the Sun, r helio the heliocentric distance of the asteroid, A B the bond albedo and ψ the sine value of the solar incidence angle. S and v are the area and view factor, respectively. The entire reflected sunlight portion will be given by According to our sunlight reflection model, we assess the reflected sunlight contained in W1 ∼ W4 observations, which it covers ∼ 90% at W1, 30% ∼ 50% at W2 and can be negligible at W3 and W4. Therefore, as described above, W1 band is not adopted in this work. The reflected sunlight contributes a significant part in W2 observation, but the proportion is no more than 50%. As will be discussed in the following section, we account for an overall contribution of thermal emission and reflected sunlight to fit the observations. In addition, WISE only surveyed the sky for roughly 9 months in 2010, being suggestive of the observations of Main-Belt asteroids in W3 and W4 simply covering a very narrow range of solar phase angle. In comparison, the utilization of W2 data can provide diverse observational solar phases, wavelengths, as well as observational numbers, which make the fitting process more reliable. Thus, we utilize the observations from W2 band but we also take reflected sunlight into consideration in our fitting. Thermal-infrared fitting Heat conduction into and outwards the asteroids' subsurface material can lead to a certain thermal memory, named as thermal inertia. In particular, thermal inertia is defined as Γ= √ κρc, where κ, ρ, c have the same meaning as in Equation (1). As a matter of fact, the thermal inertia plays a very important role of governing the heat conduction process and inducing the non-zero night-side temperature of an asteroid. Moreover, this thermal parameter can result in the surface temperature to peak at the afternoon side of an asteroid, thereby causing the diurnal Yarkovsky effect (Delbo et al. 2015). In order to derive a best-fitting thermal emission with the observed fluxes, we set the initial thermal inertia in the range Γ=0 ∼ 300 Jm −2 s −1/2 K −1 at equally spaced steps of 10 Jm −2 s −1/2 K −1 in search for best-fitting value. Other parameters, such as pole orientation, absolute magnitude, rotation period are listed in Table 1. 
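The fitting strategy just described, with thermal inertia sampled over a fixed grid and a prior range of roughness fractions, amounts to a brute-force search for the parameter combination that best reproduces the observed fluxes (the χ² merit function is written out below in the text). The following Python sketch outlines that loop; the callable model_flux is a placeholder standing in for the full ATPM flux computation (thermal emission plus reflected sunlight) and is not part of the paper, while the H–p_v–D conversion is the standard one.

```python
import numpy as np

def effective_diameter(p_v, H_v):
    """Standard relation between absolute magnitude, geometric albedo and
    effective diameter: D [km] = 1329 * 10**(-H_v / 5) / sqrt(p_v)."""
    return 1329.0 * 10.0 ** (-H_v / 5.0) / np.sqrt(p_v)

def chi_square(model, observed, sigma):
    """Merit function comparing modelled and observed fluxes."""
    return np.sum(((model - observed) / sigma) ** 2)

def fit_thermal_parameters(obs_flux, sigma, model_flux,
                           gammas=np.arange(0.0, 301.0, 10.0),
                           roughness=np.arange(0.1, 0.51, 0.1),
                           albedos=np.linspace(0.05, 0.60, 56)):
    """Brute-force grid search over (Gamma, f_r, p_v). The user-supplied
    callable model_flux(gamma, f_r, p_v) must return model fluxes matching
    obs_flux; here it is only a placeholder for the ATPM computation."""
    best_chi2, best_params = np.inf, None
    for g in gammas:
        for fr in roughness:
            for pv in albedos:
                chi2 = chi_square(model_flux(g, fr, pv), obs_flux, sigma)
                if chi2 < best_chi2:
                    best_chi2, best_params = chi2, (g, fr, pv)
    return best_chi2, best_params
```

In practice the albedo (or, equivalently, the diameter) enters the model through the absolute magnitude, so the grid ranges above are purely illustrative and would be adapted to each target.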
A bolometric and spectral emissivity of 0.9 is assumed for all wavelengths in the fitting procedure. On the other hand, as shown in Table 2, for each Vesta family asteroid the solar phase angle covers only a very small range, which makes it difficult to place constraints on thermal inertia and roughness fraction at the same time. In general, the roughness fraction of main-belt asteroids is usually small, thus we assume an a priori roughness of 0.1 ∼ 0.5 for these Vesta family asteroids. Hence, for each wavelength, we obtain three free parameters, i.e., the thermal inertia Γ, the geometric albedo p v and the rotation phase φ. In fact, the effective diameter D eff is connected with the geometric albedo via D eff = 1329 × 10^(−H v /5) / √p v km, where H v is the absolute magnitude. For each asteroid, the entire theoretical flux F m can be written as F m = F thermal + F ref , where F thermal is the total theoretical thermal emission and F ref represents the reflected sunlight. Then we compare F m with the observations and adopt the minimum χ² fitting defined by Press et al. (2007), χ² = Σ_{i=1}^{n} [(F m (λ i ) − F obs (λ i ))/σ λ,i ]², where n is the number of observations and σ λ is the observational uncertainty. In the following, we report our results for the 10 Vesta family asteroids in detail. (Note to Table 2: N is the number of observations for each wavelength, α denotes the range of solar phase angle for each asteroid, and r helio and r obs represent the heliocentric distance and the distance between the asteroid and the observer, respectively.) (63) Ausonia Asteroid (63) Ausonia is the largest Vesta family member with a diameter of roughly 100 km. In the Tholen classification, (63) Ausonia is a stony S-type asteroid (Tholen 1984); in the SMASSII classification, it is classified as Sa type (Bus & Binzel 2002), while in the Bus-DeMeo taxonomy it is an Sw subtype (DeMeo et al. 2009). Tanga et al. (2003) estimated the overall shape, spin orientation and angular size of (63) Ausonia using observations from the Fine Guidance Sensors (FGS) of the Hubble Space Telescope (HST). They derived an effective diameter of 87 km for this asteroid, which was smaller than the IRAS diameter (103 km) (Tedesco et al. 2004) and that of Masiero et al. (2012) (116 km). In this work, we adopt 97 observations from IRAS (3 × 12 µm, 3 × 25 µm, 3 × 60 µm, and 1 × 100 µm), AKARI (Usui et al. 2011) (4 × 9 µm, 2 × 18 µm) and WISE/NEOWISE (47 × 4.6 µm, 17 × 12.0 µm and 17 × 22.0 µm) to explore the thermal parameters of (63) Ausonia. Fig. 2 illustrates the Γ − χ² profile of (63) Ausonia, where the minimum χ² value corresponds to a thermal inertia of 50 +12 −24 Jm −2 s −1/2 K −1 and a roughness fraction of 0.5 +0.0 −0.3 . The horizontal line represents the 3σ range of Γ. Furthermore, we derive an effective diameter of 94.595 +2.343 −2.483 km, and the geometric albedo is then evaluated to be 0.189 +0.010 −0.009 for this asteroid. To examine the best-fitting parameters for (63) Ausonia, we follow the method described in Yu et al. (2017) and Jiang et al. (2019) to plot the theoretical thermal light curves of (63) Ausonia compared with the mid-infrared observations. As shown in Fig. 3, the thermal flux from ATPM offers a good match with the data at the W2 band for each of four separate epochs. In addition, Fig. 4 shows that the thermal light curves behave similarly to the observations at the W3 and W4 bands, respectively. In order to examine the reliability of our derived results, we also calculate the ratio of the theoretical flux obtained by ATPM to the observational flux at diverse wavelengths.
each wavelength, being indicative of a reliable outcome for (63) Ausonia. (556) Phyllis is also taxonomically classified as an Stype (Tholen 1984;Bus & Binzel 2002) asteroid with a diameter 36.28 km, and a geometric albedo 0.201 (Masiero et al. 2014). The first photometric observations and optical light curves of (556) Phyllis were performed by Zappalà et al. (1983), and they derived a rotation period 4.28 ± 0.002 h. Marciniak et al. (2007) observed this asteroid for five distinct observation epochs in 1998, 2000, 2002, 2004 and 2005-2006, respectively. They updated a rotation period 4.293 ± 0.001 h and provided two resolved pole orientations (18 • , 54 • ) and (209 • , 41 • ). In the present study, we adopt the pole orientation of the former from that of Marciniak et al. (2007). For the observations, we include the thermal data from IRAS (5 × 12µm, 5 × 25µm and 5 × 60µm), AKARI (5 × 9µm, 4 × 18µm) and WISE/NEOWISE (75 × 4.6µm, 30 × 12.0µm and 29 × 22.0µm), where the number of entire observations is 130. We derive Jm −2 s −1/2 K −1 and 0.40 +0.10 −0.20 , respectively, with respect to a minimum χ 2 value 2.715. WISE/NEOWISE observed this asteroid on eight different epochs. However, for several epochs, the number of data points are too small to fully reflect the asteroid's varied flux with rotational phases. Moreover, Fig. 7 gives the results of thermal light curves for 4 different epochs at W2, while 8 reveals the outcomes of W3 and W4. Figs. 7 and 8 both demonstrate that the theoretical model fits the ob- servations well. Further evidence can be provided from Fig. 9, which shows the Observation/ATPM ratio of (556) Phyllis for each wavelength. (1906) Neaf Asteroid (1906) Neaf, provisional designation 1972 RC, is a V-type asteroid (Xu et al. 1995) in the inner main-belt region. It orbits the Sun at a heliocentric distance 2.1 ∼ 2.7 AU every 3.66 yr. Masiero et al. (2014) derived its diameter 7.923 ± 0.09 km and a geometric albedo 0.234 ± 0.052, based on the observations from WISE/NEOWISE. Similarly, we utilize the observations from WISE/NEOWISE, which are combined with ATPM to derive its thermal properties. Here we entirely use 57 WISE observations at 3 separate epochs (30 in W2, 16 in W3 and 16 in W4). We obtain the effective diameter of this asteroid to be 7.561 +0.449 −0.443 km and geometric albedo 0.257 +0.033 −0.028 , and these results agree with the findings of Masiero et al. (2014). According to Γ−χ 2 profile in Fig. 10, the thermal inertia and roughness fraction are confined to be 70 +19 −16 Jm −2 s −1/2 K −1 and 0.5 +0.0 −0.2 , respectively. From the result of Γ and f r , we plot the 3-bands thermal light curves in Figs. 11 and 12. Our computed thermal fluxes reasonably fit the WISE observation with χ 2 min = 7.095. This may be the reason that the W4 theoretical flux do not agree well with the observations according to Fig. 12, but the fluctuation trends of the thermal light curves are consistent with the observations. In the lower panel of Fig. 12, we again provide an additional thermal curve at W4 with a high roughness f r = 0.5 (marked by the black dashed line) in comparison with the case of low roughness, which leads to a better-fitting solution at W4 band. With the help of all data, the best-fitting thermal inertia is evaluated to be approximately 70 Jm −2 s −1/2 K −1 with respect to a low roughness fraction. The ratio of observed flux and theoretical flux are plotted in Fig. 13. 
Asteroid (2511) Patterson is a V-type asteroid (Bus & Binzel 2002) that was discovered at the Palomar Observatory in 1980. It has a semi-major axis of 2.298 AU and an eccentricity of 0.104. We use the ATPM combined with 24 WISE/NEOWISE observations (11 × 12.0 µm and 13 × 22.0 µm) to determine the thermal characteristics of (2511) Patterson. As shown in Fig. 14, the minimum χ² corresponds to a thermal inertia of 90 +58 −43 Jm −2 s −1/2 K −1 , and the roughness fraction can be constrained to be 0.0 +0.50 −0.0 . In addition, the geometric albedo is estimated to be 0.180 +0.055 −0.034 , which is smaller than that of Masiero et al. (2011), and thus the effective diameter is 9.034 +0.997 −1.128 km. To examine our results, we plot the observation/ATPM ratio and the thermal light curves for each waveband in Fig. 16 and Fig. 15, respectively. The solid curves in Fig. 15 are modeled with Γ = 90 Jm −2 s −1/2 K −1 and f r = 0.0. The model seems to slightly overestimate the W4 data, but the fit to the WISE light curves seems to be reliable for all wavelengths. (3281) Maupertuis Asteroid (3281) Maupertuis is a Vesta family member that orbits the Sun at a distance of 2.121 ∼ 2.579 AU every 3.6 years. For (3281) Maupertuis, we find that its diameter is constrained to be 5.509 +0.447 −0.270 km with a geometric albedo of 0.484 +0.051 −0.074 . The value of the geometric albedo is a bit high for a main-belt asteroid, but, as a fragment of asteroid (4) Vesta, it is consistent with the wide range of p v on (4) Vesta's surface. Fig. 17 exhibits the best-fitting values of the thermal inertia, 60 +58 −31 Jm −2 s −1/2 K −1 , and the roughness, 0.50 +0.00 −0.30 . As can be seen in Fig. 18, the model seems to overestimate the W4 data. Fig. 19 shows the Observation/ATPM ratio for (3281) Maupertuis with respect to Γ = 60 Jm −2 s −1/2 K −1 and f r = 0.5; these values of Γ and f r correspond to the minimum value of χ². (5111) Jacliff Asteroid (5111) Jacliff, with provisional designation 1987 SE24, orbits the Sun once every 3.61 yr. In the SMASSII classification, (5111) Jacliff is an R-type asteroid (Bus & Binzel 2002), while in the Bus-DeMeo taxonomy the asteroid is classified as a V-type asteroid (DeMeo et al. 2009). Moreover, Moskovitz et al. (2010) compared the near-infrared (0.7−2.5 µm) spectra of this asteroid with the laboratory spectra of HED meteorites, and showed that it is expected to be a V-type asteroid. By using all available disk-integrated optical data as input for the convex inversion method, Hanuš et al. (2016) derived the 3D shape model of (5111) Jacliff, and the pole orientation and rotation period were derived to be (259 • , −45 • ) and 2.840 h. In our study, we employ 47 WISE/NEOWISE observations (28 × 4.6 µm, 11 × 12.0 µm and 8 × 22.0 µm) to derive the thermal parameters of (5111) Jacliff. In the fitting process, with the step width of thermal inertia set to 10 Jm −2 s −1/2 K −1 , we obtain a best-fitting value of Γ of 0 Jm −2 s −1/2 K −1 . Thus, to derive a more accurate value of Γ, we again set the step width of thermal inertia to 0.1 Jm −2 s −1/2 K −1 and perform additional fittings with the observations. However, as shown in Fig. 20, the derived thermal inertia still remains 0 +15 −0 Jm −2 s −1/2 K −1 , with a corresponding χ² min of 3.583 and a roughness fraction of 0.00 +0.40 −0.00 . Here it should be emphasized that the derived thermal inertia is given at the 3σ confidence level.
Although the χ² min is related to a thermal inertia of 0 Jm −2 s −1/2 K −1 , this does not mean that the value of Γ should be zero, but rather suggests that the probability of the thermal inertia lying between 0 ∼ 15 Jm −2 s −1/2 K −1 is about 99.7%. We derive the effective diameter 5.302 +0.237 −0.397 km, which produces a geometric albedo of 0.523 +0.088 . (7001) Neother Using 40 WISE observations (18 × 4.6 µm, 11 × 12.0 µm and 11 × 22.0 µm) and ATPM, we derive the thermal properties of (7001) Neother, i.e., the thermal inertia Γ = 20 +21 −20 Jm −2 s −1/2 K −1 , roughness fraction f r = 0.00 +0.40 −0.00 , geometric albedo p v = 0.241 +0.034 −0.013 and effective diameter D eff = 5.923 +0.167 −0.378 km. The outcomes of p v and D eff are close to those of Masiero et al. (2011), where p v = 0.216 ± 0.022 and D eff = 6.122 ± 0.073, respectively. The f r − Γ profile and the Observation/ATPM ratio are plotted in Figs. 24 and 26, with a minimum χ² value of 3.851. With the aid of the outcomes of Γ and f r , we present the thermal light curves of (7001) Neother at the W2, W3 and W4 wavelengths (see Fig. 25) for the two epochs 2010.05.03 and 2016.08.31, respectively, indicating that the theoretical results from the fitting accord with the observations. (9158) Plate Asteroid (9158) Plate was discovered in 1984 and has an orbital period of 3.49 yr. According to the SDSS-based taxonomic classification developed by Carvano et al. (2010), it is an SQ p asteroid. In this study, we use 54 WISE/NEOWISE observations (16 × 4.6 µm, 16 × 12.0 µm and 22 × 22.0 µm) to investigate the thermal parameters of this asteroid. As shown in Fig. 27, a low thermal inertia of 10 +18 −10 Jm −2 s −1/2 K −1 as well as a low roughness fraction of 0.30 +0.20 −0.30 are obtained, with respect to χ² min = 4.902. The geometric albedo is found to be 0.379 +0.026 −0.024 , and the diameter is 4.113 +0.137 −0.134 km. Our result for the effective diameter is close to that of Masiero et al. (2011). Thermal light curves for (9158) Plate are displayed in Fig. 28. As can be seen, our results provide a formally acceptable fit, although the modeled fluxes seem to match the observations at the W2 and W4 wavebands better than at W3. Similarly, Fig. 29 shows that the Observation/ATPM ratios at the various wavelengths lie around 1. (12088) Macalintal (12088) Macalintal was discovered by the Lincoln Observatory in 1998. In the SDSS-based taxonomic classification, it is a V-type asteroid (Carvano et al. 2010). Durech et al. (2018) presented the rotation period of this asteroid to be 3.342 hr. Besides, using the WISE observations and NEATM, the geometric albedo and diameter of this asteroid were derived to be 0.385 ± 0.097 and 3.724 ± 0.250 km, respectively. In this work, we first collect the observations for our fitting procedure, but find that only few WISE data are available for this asteroid (10 × 12.0 µm and 5 × 22.0 µm). By performing the fitting, we derive that the geometric albedo is 0.344 +0.120 −0.050 , together with the corresponding effective diameter; in total, we adopt 15 observations during the fitting process. Using the derived Γ and f r , the thermal light curves and the observation/ATPM ratios for the W3 and W4 wavebands are shown in Figs. 31 and 32. We can notice that the observed fluxes are generally larger than the theoretical results. (15032) Alexlevin Asteroid (15032) Alexlevin was discovered in 1998. In the Moving Objects VISTA (MOVIS) catalogue and the SDSS-based taxonomic classification, it is recognized as a V-type asteroid (Carvano et al. 2010; Licandro et al. 2017). This asteroid has an orbital period of 3.66 yr.
The derived diameter is smaller than that of Masiero et al. (2011). As shown in Fig. 33, we can place constraints on the thermal inertia of asteroid (15032) Alexlevin to be 20 +15 −20 Jm −2 s −1/2 K −1 and a roughness fraction of 0.50 +0.00 −0.40 . The ratio between observed and theoretical fluxes for 3 wavebands are plotted in Fig. 35. In addition, the 3-bands thermal light curves of (15032) Alexlevin are exhibited in Fig. 34. As can be seen, our theoretical flux can fit well with the observations. In this work, we present the first attempt to determine the thermal parameters of 10 Vesta family asteroids by using ATPM and combined with the thermal infrared observations from IRAS, AKARI and WISE/NEOWISE. Our results are summarized in Table.3. All of the Vesta family asteroids have thermal inertia less than 100 Jm −2 s −1/2 K −1 as well as relatively low roughness fractions, which may suggest that they have undergone a long time surface evolution process. It should be noticed that, among the 10 Vesta family members, (1906) Neaf, (2511) Patterson, (5111) Jacliff, (12088) Macalintal, and (15032) Alexlevin are V-type asteroids, they are also called "Vestoids", and the other 5 members are non-Vestoids. We obtain the mean value of p v for the 5 Vestoids to be 0.328 and is very close to the median value (0.362 ± 0.100) of V-type (Bus-Demeo taxonomy) asteroids in Mainzer et al. (2011). While for non-Vestoids, this value is 0.300. As mentioned above, the derived p v for (63) Ausonia (Sa/Sw) and (556) Phyllis (S) are 0.180 +0.010 −0.009 and 0.209 +0.010 −0.010 , respectively, which is similar to the median p v of these two spectral types obtained by Mainzer et al. (2011). As for the SQp type asteroid (9158) Plate, we have derived the value of p v to be 0.379 +0.026 −0.024 , which is inside the geometric albedo range (0.062 ∼ 0.617) of SQp type asteroid obtained by Mainzer et al. (2012). Additionally, (3281) Maupertuis has a geometric albedo of 0.484 +0.051 −0.074 and this value is within the range of Vtype asteroids's p v in Mainzer et al. (2011) but is larger than their median value. While for (7001) Neother, the derived p v is 0.241 +0.034 0.013 , which is comparable with the geometric albedo of S-type asteroids obtained by Mainzer et al. (2011Mainzer et al. ( , 2012. Except for asteroid (63) Ausonia and (556) Phyllis, other Vesta family asteroids we studied have effective diameters smaller than 10 km. In the following, we will make a brief discussion on thermal nature for the Vesta family asteroids based on our derived results. 4.1. Thermal inertia, effective diameter and geometric albedo Delbo et al. (2007) investigated the relationship between thermal inertia and effective diameter and provided the power law formula Γ = d 0 D −ξ , where a linear regression gives the best-fitting parameters of d 0 and ξ to be 300±47 and 0.48±0.04. Furthermore, Delbo & Tanga (2008) showed the thermal inertia of main-belt asteroids by using IRAS data, and they obtained the values of ξ to be 1.4 ± 0.2 for MBAs and 0.32 ± 0.09 for NEAs, respectively. In addition, Hanuš et al. (2018) presented thermal parameters of ∼ 300 main-belt asteroids. Here, we combine our results of Γ and D eff with those of Delbo et al. (2015) and Hanuš et al. (2018) to further explore the relationship between thermal inertia and effective diameter. In Fig. 
36, we present our results given by red (Vestoids) and cyan (non-Vestoids) dots with error bars, Note: All the results of thermal properties are in SI units, where Γ is the thermal inertia, fr is the roughness fraction, D eff is the effective diameter and pv is the geometric albedo. p * v and D * eff represent the geometric albedo and effective diameter outcomes of Mainzer et al. (2011 and Masiero et al. (2014). where the values of thermal inertia and effective diameter from Delbo et al. (2015) and Hanuš et al. (2018) are shown in gray (for MBAs) and green (for NEAs) dots (in order to make the diagram more clearer, we do not plot their error bars for each value of Γ and D eff ). The green dashed line is fitted by using the value of ξ = 0.32 for NEAs of Delbo & Tanga (2008). However, according to our results, it should be noteworthy that the small-sized MBAs can have low thermal inertia, thus the gap in kmsized and low thermal inertia area are filled. By fitting the results of Γ and D eff for the main-belt asteroids from Delbo & Tanga (2008) and Hanuš et al. (2018) as well as the Vesta family asteroids we have investigated in this work, then we obtain the value of d 0 and ξ to be 51.68 and 0.023, respectively. Moreover, we further explore the relationship of Γ − D eff by means of the data from NEAs (green dots) and binned MBAs (blue squares), which is denoted by black dashed line with respect to d 0 and ξ to be 344 and 0.441, respectively. To obtain the binned data, we divide the main-belt asteroids into 12 intervals according to the size of diameter, then the average effective diameter and thermal inertia in each interval are calculated. As shown in Fig. 36, the gray dashed line is almost horizontal because of the very low value of ξ for MBAs when compared with that of Delbo & Tanga (2008) for NEAs. Note that the slope of the black dashed line is a bit higher than that of the green dashed line by fitting the NEA results. As shown in Eq. 8, the effective diameter D eff and geometric albedo p v is correlated with each other. Thus, Fig. 37 shows thermal inertia as a function of geometric albedo for MBAs, NEAs and Vesta family asteroids (including Vestoids and non-Vestoids), represented by blue, green, red, and cyan dots, respectively. In Fig. 37, we further show the mean value of thermal inertia and geometric albedo for MBAs, NEAs, Vestoids and non-Vestoids by the dashed lines in the relevant colors. From Fig. 37, we can see that the p v of Vesta family members is relatively larger than that of other main-belt asteroids. Again, the average Γ of the Vesta family asteroids here is very close to the average thermal inertia of all the MBAs in Fig. 37. The average p v of the 5 Vestoids bears resemblance to that of (4) Vesta (see Fig. 37), which may indicate a close relationship between Vesta family asteroids and (4) Vesta. Fig. 38 shows the profile of D eff and p v for 10 Vesta family asteroids (the left panel), where red and cyan circles with error bars represent our results whereas the blue dots with error bars indicate those of the literature (Mainzer et al. 2011;Masiero et al. 2011Masiero et al. , 2014. To compare our results with the previous work, we again plot the mean value of effective diameter and geometric albedo, shown by dashed lines in different colors. As can be seen from the left panel of Fig. 38, we observe that the p v and D eff of most Vesta family asteroids agree well with the earlier results (Mainzer et al. 2011;Masiero et al. 2011Masiero et al. , 2014 from NEATM model. 
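The Γ–D relation discussed above is a power law, Γ = d0 D^−ξ, which is conventionally fitted by linear regression in log-log space. A minimal sketch of that fit follows; the input arrays are illustrative placeholders rather than the actual MBA/NEA compilation used in the text, and asteroids with Γ = 0 have to be excluded (or treated separately) before taking logarithms.

```python
import numpy as np

def fit_power_law(diameter_km, gamma):
    """Fit Gamma = d0 * D**(-xi) by least squares in log-log space and
    return (d0, xi)."""
    mask = (gamma > 0) & (diameter_km > 0)          # logarithms need positive values
    slope, intercept = np.polyfit(np.log10(diameter_km[mask]),
                                  np.log10(gamma[mask]), 1)
    return 10.0 ** intercept, -slope

# Illustrative values only, not the compilation fitted in the text.
D = np.array([0.3, 1.0, 4.0, 10.0, 50.0, 200.0, 900.0])      # km
G = np.array([400.0, 250.0, 90.0, 60.0, 30.0, 15.0, 10.0])   # J m^-2 s^-1/2 K^-1
d0, xi = fit_power_law(D, G)
print(f"d0 = {d0:.1f}, xi = {xi:.3f}")
```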
Moreover, we show the D eff − p v profile for over 1000 main-belt asteroids using the data from Masiero et al. (2011) (the right panel of Fig. 38). From Fig. 38, we infer that the geometric albedo may show a downward trend with increasing effective diameter. Thermal inertia and rotation period Harris & Drube (2016) developed an NEATM-based thermal inertia estimator and calculated the thermal inertia for roughly 50 asteroids provided by Delbo et al. (2015). Based on their results, Harris & Drube (2016) investigated the dependence of thermal inertia on asteroid rotation period and showed that, for both MBAs and NEOs, Γ has an increasing trend with decreasing spin rate. This is probably because, for slowly rotating asteroids, the thermal wave penetrates much deeper into the subsurface than for fast rotators. However, Marciniak et al. (2019) investigated 16 slow rotators with sizes ranging from 30 ∼ 150 km and found that, for slowly rotating asteroids, there exists no obvious correlation between thermal inertia and rotation period. In this work, we obtain thermal inertia for 10 Vesta family members, which may have suffered similar dynamical and thermal histories ever since their formation. Therefore, it is more reliable to investigate the relationship between spin rate and thermal inertia within an asteroid family. For the 10 Vesta family asteroids, the rotation period ranges from 2.839 ∼ 11.010 hours, while the thermal inertia varies from 0 ∼ 90 Jm −2 s −1/2 K −1 . However, as shown in Fig. 39, we do not find any obvious correlation between thermal inertia and rotation period, probably because of the limited population of asteroids. This does not mean that no relationship between Γ and P rot exists; future studies with a larger sample of Vesta family asteroids may reveal a clear correlation between Γ and rotation period. Regolith Grain Size According to the method described by Gundlach et al. (2013), the thermal conductivity can be expressed in terms of the thermal inertia as κ = Γ²/(φρc), where φ is the volume filling factor, ρ is the bulk density and c the specific heat capacity. Besides, κ can also be regarded as a function of regolith grain size, according to the theoretical model developed for granular materials in vacuum (Gundlach et al. 2012). Based on our thermal inertia results, we derive the regolith grain sizes of the 10 Vesta family asteroids, and the results are shown in Fig. 40. We use various colors to denote the volume filling factors, which range from 0.1 ∼ 0.6, and the temperature is set to be 200 K; the derived grain sizes show an obvious increasing trend with increasing thermal inertia (Gundlach et al. 2013). As can be seen, we obtain eight values of thermal inertia for the Vesta family asteroids. When an asteroid has a thermal inertia Γ < 50 Jm −2 s −1/2 K −1 , there is no good agreement between the model and the thermal conductivities derived from the thermal inertia measurements for some volume filling factors. Taking (9158) Plate (with a thermal inertia of 10 Jm −2 s −1/2 K −1 ) as an example, we simply obtain a regolith grain size of 0.006 mm with a volume filling factor φ = 0.1. For the asteroids with Γ < 10 Jm −2 s −1/2 K −1 , the regolith grain sizes cannot be well constrained, but may be smaller than 0.006 mm. Table 4 summarizes the major outcomes for the Vesta family asteroids. We also evaluate the lower and upper limits of the regolith grain sizes of these Vesta family members according to the uncertainties of the thermal inertia we obtained.
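The first step of the grain-size estimate above, converting thermal inertia into thermal conductivity for an assumed volume filling factor, is straightforward; a small sketch is shown below. The density and heat-capacity values are assumed for illustration, and the second step, inverting κ for a grain radius with the granular-conductivity model of Gundlach et al., is not reproduced here.

```python
def conductivity_from_inertia(gamma, phi, rho=3300.0, c=560.0):
    """kappa = Gamma**2 / (phi * rho * c); gamma in J m^-2 s^-1/2 K^-1,
    rho in kg m^-3 and c in J kg^-1 K^-1 (assumed, illustrative values)."""
    return gamma ** 2 / (phi * rho * c)

# Conductivities implied by the family-average thermal inertia of 42 SI units
# for several volume filling factors.
for phi in (0.1, 0.3, 0.6):
    kappa = conductivity_from_inertia(42.0, phi)
    print(f"phi = {phi:.1f}: kappa = {kappa:.2e} W m^-1 K^-1")
```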
As can be seen in the table, the lower limits of grain radius of (7001) Neother, (9158) Plate, (15032) Alexlevin are not constrained, because the lower limits of Γ for these 3 asteroids are smaller than 10 Jm −2 s −1/2 K −1 . CONCLUSIONS In conclusion, we investigate thermal properties of 10 Vesta family asteroids, including their thermal inertia, geometric albedo, effective diameter and roughness fraction. The average thermal inertia of these Vesta family asteroids is 42 Jm −2 s −1/2 K −1 . For the V-type family members (Vestoids) we derive the average geometric albedos to be 0.328 while for the non-Vestoids family member, the average geometric albedo is 0.300. The mean value of Γ is a bit larger than that of (4) Vesta (Delbo et al. 2015), and the average p v for both Vestoids and non-Vestoids are smaller than that of Vesta, but is similar to the mean value obtained by Masiero et al. (2013) and Mainzer et al. (2011Mainzer et al. ( , 2012. Moreover, we study the relationship between thermal inertia and effective diameter, as well as the relation between thermal inertia and rotation period. Considering both NEAs and MBAs, we place constraints on a new set of coefficients in the Γ − D equation, which is slightly different from the result of Delbo et al. (2015). In addition, taking the published physical data for known Vesta family asteroids into consideration, we do not find the expected increasing trend between thermal inertia and rotation period. Moreover, since Vesta family asteroids are deemed as the fragments of the severe impact event, the wide range of geometric albedo of 10 Vesta family asteroids is in line with that of the surface of (4)Vesta. In the future work, we will address the thermophysical characteristics of more Vesta family asteroids and other families, which can help us better comprehend the formation, evolution and classification of diverse sorts of asteroid population in the main-belt region. This work is financially supported by the National Natural Science Foundation of China (Grant Nos. We list the color-corrected WISE observational fluxes and uncertainties for 10 Vesta family members in this work.
2020-04-22T01:01:34.117Z
2020-04-21T00:00:00.000
{ "year": 2020, "sha1": "21f7956455821c1fa664f2e570f565b3130d8721", "oa_license": null, "oa_url": "https://iopscience.iop.org/article/10.3847/1538-3881/ab8af5/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "21f7956455821c1fa664f2e570f565b3130d8721", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
2697248
pes2o/s2orc
v3-fos-license
Design of Oscillator Networks with Enhanced Synchronization Tolerance against Noise Can synchronization properties of a network of identical oscillators in the presence of noise be improved through appropriate rewiring of its connections? What are the optimal network architectures for a given total number of connections? We address these questions by running the optimization process, using the stochastic Markov Chain Monte Carlo method with replica exchange, to design the networks of phase oscillators with the increased tolerance against noise. As we find, the synchronization of a network, characterized by the Kuramoto order parameter, can be increased up to 40 %, as compared to that of the randomly generated networks, when the optimization is applied. Large ensembles of optimized networks are obtained and their statistical properties are investigated. I. INTRODUCTION Synchronization phenomena are ubiquous in various fields of science and play an important role in functioning of living systems [1]. In the last decade, much interest has been attracted to studies of complex networks consisting of dynamical elements involved in a set of interactions [2,3]. Particular attention has been paid to problems of synchronization in network-organized oscillator systems [4,5]. Investigations focused on understanding the relationship between the topological structure of a network and its collective synchronous behavior [3]. Recently, synchronization properties of systems formed by phase oscillators on static complex networks, such as smallworld networks [6] and scale-free networks [7,8], have been considered. It has also been shown that the ability of a network to give rise to synchronous behavior can be greatly enhanced by exploiting the topological structure emerging from the growth processes [9,10]. However, full understanding of how the network topology affects synchronization of specific dynamical units is still an open problem. One possible approach is to use evolutionary learning mechanisms in order to construct networks with prescribed dynamical properties. Several models have been explored, where dynamical parameters were modified in response to the selection pressure via learning algorithms, in such a way that the system evolved towards a specified goal [11][12][13]. This approach can also be employed to design phase oscillator networks with desired synchronization properties. Using heterogeneous oscillators with a dispersion of natural frequencies, we have previously shown how these elements can be optimally connected, by using a given number of links, so that the best syn- * Electronic address: yanagita@isc.osakac.ac.jp chronization level is achieved [13]. Here, our attention is focused on synchronization enhancement in networks of identical phase oscillators in the presence of noise. In such systems, noise acting on the oscillators competes with the coupling which favors the emergence of coherent dynamics [4,14]. The question is how to connect a set of phase oscillators, so that the resulting network exhibits the strongest possible synchronization despite the presence of noise, under the constraint that the total number of available links and, thus, the mean connectivity are fixed. To design optimal networks, stochastic Markov Chain Monte Carlo (MCMC) method with replica exchange [13] is used by us. Large ensembles of optimal networks are constructed and their common statistical properties are analyzed. 
As we observe, the typical structure of a synchronization-optimized network is strongly dependent on its connectivity. Sparse optimal networks, with a small number of links, tend to display a star-like structure. As the connectivity is increased, synchronizationoptimized networks show a transition to the architectures with interlaced cores. The paper is organized as follows. In Sec. II, we introduce a model of identical phase oscillators occupying nodes of a directionally coupled network and define the synchronization measure for this system. The optimization method is also introduced in this section. Construction of optimized networks and their statistical analysis are performed in Sec. III. The results are finally discussed in Sec. IV II. THE MODEL AND THE OPTIMIZATION METHOD For identical oscillators, it is known that, in absence of noise, even very weak coupling can lead to complete synchronization [14,15]. Below, we consider the effects of noise acting on a network of coupled identical phase oscillators, so that the model equations are where ξ i (t) are independent white noises, such that ξ i (t) = 0 and ξ i (t)ξ j (t ′ ) = S 2 δ i,j δ(t − t ′ ). Interactions between the oscillators are specified by the matrix w with the elements w i,j = 1, if there is a connection, and w i,j = 0 otherwise. Generally, the connection matrix is asymmetric. Note that since the rotation frequencies of all oscillators are the same, we can always go into the rotational frame θ i → θ i − ω 0 t and thus eliminate the term with ω 0 . Hence, without any loss of generality one can set ω 0 = 0 in Eqs. (1). It is known that, for global coupling, this model shows a transition to synchronization as the ratio of the coupling strength to the noise intensity is increased (see, e.g., [16]). To quantify synchronization of the oscillators, global phase will be employed. To measure the degree of synchronization, we numerically integrate Eq. (1) with the initial conditions θ i (t = 0) = 0 and calculate the average of |r(t)| over a long time T, Our aim is to determine the network w = {w i,j } which would exhibit the highest degree of synchronization, provided that the total number K of links is fixed and the noise intensity S is given. The network construction can be seen as an optimization problem. The optimization task is to maximize the order parameter and, possibly, bring it to unity by changing the network w. We sample networks from the ensemble with the Gibbs distribution P (w) ∼ exp(βR(w)) by the MCMC method. To improve the sampling efficiency, we use the Replica Exchange Monte Carlo (REMC) algorithm, and the details of the algorithm can be seen in [13]. We mainly consider the canonical ensemble average of a network function f (·),i.e., where Z(β) = w exp(βR(w)) is the partition function and the parameter β plays the role of the inverse temperature. III. NUMERICAL INVESTIGATIONS To determine the synchronization degree of a given network at each iteration step of the optimization procedure, equations (1) were numerically integrated with the time increment ∆t = 0.01. Due to limited computational resources, only relatively small oscillator ensembles of sizes N = 15 are considered in this study. The noise intensity is always S = 0.3. Initial phases are θ i (0) = 0. Hence, the order parameter at t = 0 is always equal to unity. 
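Since the extracted text does not reproduce Eq. (1), the definition of r(t), or the exact acceptance rule, the following Python sketch assembles a plausible version of the whole procedure under stated assumptions: sine coupling with a 1/N normalisation, the usual Kuramoto order parameter, a link-conserving rewiring move, Metropolis acceptance for the Gibbs weight exp(βR), and swaps between neighbouring replicas (the paper exchanges a randomly chosen pair). The integration time is also shortened relative to the T = 10000 used in the paper so that the demo runs quickly.

```python
import numpy as np

def order_parameter(w, coupling=1.0, noise=0.3, dt=0.01, t_total=200.0, rng=None):
    """Euler-Maruyama integration of identical phase oscillators on a directed
    network; returns the time average of |r(t)| with r = (1/N) sum_j exp(i theta_j).
    The sine coupling and 1/N normalisation are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    n = w.shape[0]
    theta = np.zeros(n)                      # theta_i(0) = 0, as in the text
    steps = int(t_total / dt)
    r_accum = 0.0
    for _ in range(steps):
        # sum_j w_ij * sin(theta_j - theta_i) for every oscillator i
        drift = coupling / n * (w * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n)
        r_accum += np.abs(np.mean(np.exp(1j * theta)))
    return r_accum / steps

def random_network(n, n_links, rng):
    """Random directed network with exactly n_links off-diagonal links."""
    w = np.zeros((n, n))
    off_diag = [(i, j) for i in range(n) for j in range(n) if i != j]
    for idx in rng.choice(len(off_diag), size=n_links, replace=False):
        w[off_diag[idx]] = 1.0
    return w

def rewire(w, rng):
    """Move one randomly chosen link into a randomly chosen empty slot,
    keeping the total number of links fixed."""
    w_new = w.copy()
    ones = np.argwhere(w_new == 1)
    empty = np.argwhere((w_new == 0) & ~np.eye(w_new.shape[0], dtype=bool))
    w_new[tuple(ones[rng.integers(len(ones))])] = 0
    w_new[tuple(empty[rng.integers(len(empty))])] = 1
    return w_new

def metropolis_step(w, R, beta, rng):
    """Accept a rewiring with probability min(1, exp(beta * (R_new - R)))."""
    w_new = rewire(w, rng)
    R_new = order_parameter(w_new, rng=rng)
    if np.log(rng.random()) < beta * (R_new - R):
        return w_new, R_new
    return w, R

def exchange(replicas, betas, rng):
    """Replica-exchange move: swap a random neighbouring pair (m, m+1) with
    probability min(1, exp((beta_m - beta_{m+1}) * (R_{m+1} - R_m)))."""
    m = rng.integers(len(betas) - 1)
    (w1, R1), (w2, R2) = replicas[m], replicas[m + 1]
    if np.log(rng.random()) < (betas[m] - betas[m + 1]) * (R2 - R1):
        replicas[m], replicas[m + 1] = (w2, R2), (w1, R1)
    return replicas

rng = np.random.default_rng(0)
w0 = random_network(15, 21, rng)     # N = 15, K = 21 links -> connectivity p = 0.1
print("R of a random network:", order_parameter(w0, rng=rng))
```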
To construct an initial random network with a given number K of connections and, thus, with given connectivity p = K/N (N −1), K off-diagonal elements of the matrix w are randomly and independently selected and set equal to unity. For time averaging, relatively long intervals T = 10000 were typically used, since the convergence of the order parameter is slow. The results did not significantly depend on T when sufficiently large lengths T were taken. In parallel, evolution of M + 1 replicas with different inverse temperatures β m = δβ × m, m = 0, 1, . . . , M has been performed (M = 63 and δβ = 5). The statistical results did not significantly depend on the particular choice of inverse temperatures. At every five Monte Carlo steps (mcs), the performances of a randomly chosen pair of replicas were compared and exchanged, as described above. For display and statistical analysis, sampling at each every 50 mcs after a transient of 5000 mcs has been undertaken. A. Optimization at different temperatures Synchronization-optimized networks were obtained by running evolutionary optimization. In this process, the order parameter was progressively increasing until a saturation state has been reached. Figure 1 gives examples of the optimization processes at different temperatures. As clearly seen, when using replicas with the larger inverse temperature β, larger values of the order parameter could be reached, although the optimization process was then slower. This suggests that, for the considered problem, the replicas do not actually get trapped in the local minima even at large β and that already such lowtemperature replicas can be efficiently used to sample the optimized networks. After the transients, statistical averaging of the order parameter over the ensemble with the Gibbs distribution has been performed, according to Eq. (4). In Fig. 2(a), the averaged order parameter R β is displayed as a function of the connectivity p for several different inverse temperatures β. The blue solid circle symbols show the averaged order parameter corresponding to the replica with β 0 = 0, i.e. for an infinitely high temperature. We see that the averaged order parameter increases with the network connectivity p even if the networks are produced by only random rewiring. The red open circles show the average order parameters for the ensemble corresponding to the replicas with the lowest inverse temperature β M . Generally, greater order parameters can be obtained by running evolution at higher inverse temperatures β. At each connectivity p, the order parameter is gradually increased with increasing β and is approximately saturated at β M . This means that, even if one further increases β, only slight improvements of the averaged order parameter can be expected. Thus, the networks sampled by the replica with the largest inverse temperature β M are already yielding a representative optimal ensemble. Figure 2(b) shows the ratio R βM / R β0 of the order parameters averaged over network ensemble with the highest inverse temperature β M and with the zero inverse temperature (i.e. the ensemble with purely random rewiring) for different connectivities p. Since there is no room for the improvement of the order parameter when the number of links is small, the ratio tends to unity as the connectivity p is decreased. On the other hand, when p = 1, global coupling is realized, for which, under the chosen coupling strength, full synchronization occurs. 
As evidenced by this Figure, the difference between the synchronization capacities of the optimized and random networks is most pronounced at the intermediate connectivities, for p around 0.1. The noise intensity dependence for the synchronization capacities is shown in Fig 2(c). When the noise intensity is small, the ratio becomes larger and the maximum is shifted to the smaller connectivities p. B. Collective dynamics To analyze differences in the collective dynamics of phases oscillators in random and synchronizationoptimized networks, we have calculated the winding num-ber of each oscillator, Ω i = 1 T (θ i (T ) − θ i (0)) for many realizations of random (sampled by replica with β 0 ) and synchronization-optimized (sampled by replica with β M ) networks, and determined the probability distributions of winding numbers for both ensembles. As shown in Fig. 3, there is a significant difference between these two distributions . The probability peak at Ω i = 0 for the synchronization-optimized ensemble is higher and more narrow than that for the random-rewiring ensemble. This means that synchronization-optimized networks tend to have more elements oscillating with the common frequency in the presence of the external noises, as compared with random rewired networks. Thus, elements in the synchronization-optimized network behave more coherently than those in a random network. C. Architectures of synchronization-optimized networks Several typical synchronization-optimized networks are shown in Fig. 4. Their structures strongly depend on the number of available connections (the number of links is always conserved during an optimization process). When connectivity p is small [ Fig. 4 (a)], designed networks usually have star structures. The central element acts on a group of periphery elements which have no connections among them. Additionally, a number of disconnected elements are present. If a larger number of links is available [ Fig. 4 (b)], a core, formed by a group of interconnected elements, becomes formed. There are also periphery elements, which are affected by the core, but do not influence its dynamics. As the mean connectivity of the network is increased, the core grows at the expense of the periphery elements. Thus, the network starts to include [ Fig. 4 (c)] a relatively large group of highly connected elements, with only a few elements which are loosely connected and belong to the periphery. For a synchronization-optimized network, we have integrated the equations (1) for a long time, and calculated the correlations η i between the phase of a local oscillator θ i and that of the global order variable r(t) defined as These quantities show how strongly the dynamics of an oscillator i is synchronized with the global signal r(t). The nodes in Fig. 4 are colored according to the rescaled values η i , i.e., The darker color indicates an oscillator having the stronger phase correlation with the global signal. Figure 4 suggests that the phases of central oscillators are strongly correlated with the phase of the global signal. In order to check this more clearly, we have divided all oscillators into the groups with equal degrees and separately determined average correlations with the global signal for each group. Thus, quantities η k have been calculated, where k i denotes the total degree of a node i, i.e., with k + and k − being the ingoing and outgoing degrees, respectively. 
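The winding numbers and node-wise phase correlations just introduced are simple post-processing quantities of the simulated phase trajectories. Because the extracted text does not reproduce the definition of η_i or of the rescaling, the sketch below assumes a time-averaged cosine of the phase difference with the global phase and a min-max rescaling; both choices are assumptions, not the paper's stated formulas.

```python
import numpy as np

def winding_numbers(theta, dt):
    """Omega_i = (theta_i(T) - theta_i(0)) / T for an (unwrapped) phase
    trajectory array theta of shape (n_steps, N)."""
    T = (theta.shape[0] - 1) * dt
    return (theta[-1] - theta[0]) / T

def phase_correlations(theta):
    """Assumed correlation measure eta_i: time average of cos(theta_i - psi),
    where psi(t) is the phase of the global order variable r(t)."""
    r = np.mean(np.exp(1j * theta), axis=1)
    psi = np.angle(r)
    return np.mean(np.cos(theta - psi[:, None]), axis=0)

def rescale(eta):
    """Assumed min-max rescaling of eta to [0, 1] used for colouring nodes."""
    return (eta - eta.min()) / (eta.max() - eta.min())
```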
In Figure 5, phase correlations η k , averaged over an ensemble of synchronization-optimized networks, are plotted as a function of the degree k for different network connectivities p. We see that, on the average, nodes with higher degrees are stronger correlated with the global signal. Thus, the oscillators having many connections act as organizing centers of the synchronization. Furthermore, as seen in Fig. 5, phase correlations for the nodes with the same degree become larger as the connectivity is increased. This tendency can be understood if we take into account that the synchronization-optimized networks usually have shallow tree-like structures for the smaller connectivities p. As p increases, the network becomes interlaced and has many loops [ Fig. 4(c)]. Since the feedback in a loop enhances the correlation, the averaged phase correlation of nodes with the same degree becomes larger as p increases. Note that in a star structure, the central node does not receive any signal from other oscillators; thus, the phase of the oscillator in the center is only affected by the applied noise. On the other hand, when outgoing connections from the center to the periphery elements are present, the central oscillator effectively acts as a source of common noise applied to the peripheral nodes. Recently, it has been shown that common noise can induce synchronization in an ensemble of identical oscillators [28,29]. This phenomenon may be responsible for the development of correlations between the peripheral elements and the central oscillator. Similar behavior may take place when, instead of a single central node, a core of highly connected oscillators is present in a network. D. Degree distributions To statistically investigate architectures of designed networks, ingoing and outgoing degrees of their nodes have been considered. By sampling over 200 realizations from synchronization-optimized ensemble, we have obtained the ingoing and outgoing degree distributions at p = 0.10, as shown in Fig. 6. For the ensemble of random rewiring networks, both ingoing and outgoing degrees obey the same Poisson distribution (red bro- ken lines in the figure represent the in-and out-degree distributions of networks sampled by the replica with β 0 ). As clearly seen in Fig. 6, most of nodes in the synchronization-optimized networks have only one ingoing connection and no outgoing connections. This indicates that many periphery nodes exist, consistent with a typical realization of synchronization-optimized network shown in Fig. 4(a). Moreover, the outgoing degrees of synchronization-optimized networks are distributed more broadly than those of random rewiring networks, i.e., a long tail in the outgoing connection distribution has emerged. This reflects the development of core nodes. Hence, there are two principal types of nodes, i.e., core and periphery nodes, in the synchronization-optimized networks. The core nodes have many outgoing connections and a smaller number of ingoing connections, whereas the periphery nodes tend to have small numbers of ingoing connections. In order to further investigate the statistics of network structures as a function of the network connectivity, we have calculated the maximum of ingoing and outgoing degrees of each synchronization-optimized network, k + max = max i (k + i ) and k − max = max i (k − i ), respectively, and averaged them over many realizations. In Fig. 
7, the ratios of the averaged maximum ingoing and outgoing degrees of the synchronization-optimized networks to those of the random-rewiring networks, γ + = k + max βM / k + max β0 and γ − = k − max βM / k − max β0 , are shown. As p increases, the ratio of the averaged maximum outgoing degree of the synchronization-optimized networks to that of the random-rewiring networks increases steeply and takes its maximum in the vicinity of p c = 0.075, while that of the ingoing degree (shown by red square symbols) decreases and takes its minimum at approximately the same p c . In the vicinity of p c , nodes with a small number of ingoing connections and a large number of outgoing connections (corresponding to the cores) are found in the synchronization-optimized networks. E. Eigenvalues of the Laplacian matrix The Laplacian matrix L is constructed from the connection matrix w. Since the considered networks are directed, the eigenvalues of their Laplacian matrices are complex. We can order the eigenvalues according to the magnitudes of their real parts, i.e. as 0 = Re(λ 1 ) > Re(λ 2 ) > · · · > Re(λ N ). The eigenvalues of the Laplacian matrix are known to play an important role for the synchronizability of oscillator networks [5]. Therefore, we have computed Re λ 2 and Re λ N / Re λ 2 for many realizations of synchronization-optimized networks. In Fig. 8 (a), Re λ 2 β as a function of the connectivity is shown for different inverse temperatures β. It is clearly seen that Re λ 2 β decreases with β. Since Re λ 2 determines the inverse relaxation time to the synchronized state in oscillator networks [5], this indicates that the time needed to achieve the synchronized state decreases with β. The ratio Re λ N / Re λ 2 β averaged over the Gibbs ensemble with β is shown in Fig. 8(b). The displayed dependencies reveal that this ratio, which specifies the synchronizability, decreases as the optimization level, i.e., β, is increased. Recently, fluctuations in the collective signal and the oscillation precision were also linked to the eigenvalues of the Laplacian matrix [30,31]. The mean intensity of the fluctuations of the collective signal in an ensemble of components subject to independent Gaussian noises can be estimated by the norm of the left eigenvector v corresponding to the zero eigenvalue, vL=0, normalized as Σ N i=1 v i = 1 (see details in [31]). When all independent Gaussian noises have the same strength, the mean-square dispersion σ of the collective signal can be estimated from the norm of this eigenvector. We have computed this property for the ensembles of our designed networks. In Fig. 8(c), we show σ as a function of the network connectivity for different optimization levels, i.e., different β. As we see, σ decreases with β, implying that collective fluctuations are suppressed through the optimization. We have also estimated the oscillation precision [30] for our synchronization-optimized networks. The results indicate a tendency towards increased precision at higher optimization levels (the corresponding figure is not shown). While our networks have been optimized only with respect to their synchronization ability in the presence of noise, the above analysis clearly shows that the designed networks turn out to be optimized with respect to a number of other properties as well. For the designed networks, the time of relaxation to the synchronized state in the absence of noise is shorter. In the presence of weak noise, such networks have lower intensity of fluctuations in the collective signal and higher oscillation precision. IV.
IV. CONCLUSIONS

We have designed synchronization-optimized networks with a fixed number of links for a population of identical oscillators under the action of independent external noises. This has been done by using the Markov Chain Monte Carlo method complemented by the Replica Exchange algorithm. Large ensembles of networks with improved synchronization properties have been constructed at different mean connectivities, and their statistical properties have been analyzed by using various characterization tools. Our analysis reveals that the architectures leading to the improved synchronization of identical oscillators in the presence of noise are essentially different from the optimal synchronization architectures for heterogeneous oscillator populations without noise, which have previously been studied [13]. When the number of available links is small, synchronization-optimized networks are typically star-shaped structures. As the number of links grows, the designed networks are seen to develop dense cores, which replace the single central element of the star networks. The core expands as the number of available links is increased, and eventually the network becomes strongly interlaced. The star and core-periphery structures of the designed networks can be qualitatively understood if one takes into account that the central elements in such networks effectively operate as a source of common noise for the periphery elements. It is known [28,29] that common noise can induce synchronization in populations of disconnected oscillators or, in our case, in the group of periphery elements all connected to the same central element or to a central core. Thus, we have shown that efficient design of oscillator networks with improved synchronization properties is possible. The architectures of such optimal networks strongly depend on the constraints, such as the total number of links available. Through appropriate rewiring of a network, a strong gain in the synchronization signal can be achieved. Although our study has been performed for a simple system of phase oscillators, similar evolutionary optimization methods can be applied to construct networks of different origins, where the dynamics of individual oscillators may be significantly more complex.
2012-03-23T02:06:27.000Z
2011-09-30T00:00:00.000
{ "year": 2012, "sha1": "f583f9223b9cdafd6966225c5556768fa1d81b40", "oa_license": null, "oa_url": "https://pure.mpg.de/pubman/item/item_1646620_7/component/file_1652165/e056206.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f583f9223b9cdafd6966225c5556768fa1d81b40", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Physics", "Medicine" ] }
51626576
pes2o/s2orc
v3-fos-license
Epilepsy monitoring units in Saudi Arabia: Objectives: To descriptively assess Epilepsy Monitoring Units (EMUs) and the provided services in Saudi Arabia and compare them based on the geographic region. Methods: In this cross-sectional study, an electronic questionnaire was emailed to all directors of EMUs in Saudi Arabia from July 2013 to January 2016, with constant updates being made by all respondents throughout the period of data collection. Results: All EMU directors participated. There were 11 EMUs in KSA operating in 8 hospitals; 8 (54.5%) EMUs in Riyadh, 2 (18.2%) in Dammam, 2 (18.2%) in Makkah and 1 (9.1%) in Jeddah. Five (54.5%) EMUs were shared for adults and pediatrics, 3 (27.3%) were devoted to adult patients, and 3 (27.3%) to pediatric patients. The average waiting time was 11 weeks (range: 2-52 weeks). The mean percentage of patients coming from an outside region was 30.6%. The average length of stay was 7 days. Less than 100 patients were monitored annually in 54.5% of the EMUs. Seven EMUs (63.6%) admitted less than 100 patients for seizure characterization. Intracranial monitoring was available in all EMUs. Most EMUs (54.5%) admitted less than 100 patients for pre-surgical workup, while 36.4% admitted 100-199, and 9.1% admitted more than 300 patients per year. Epilepsy surgeries were performed for less than 50 patients annually in 81.8% of the hospitals. Conclusion: There are 11 EMUs in Saudi Arabia fully equipped to serve epileptic patients. However, they are underutilized considering the number of admitted patients and the number of epilepsy surgeries per year. Also, they are unequally distributed throughout the kingdom. Epilepsy is a common neurological disorder in the Kingdom of Saudi Arabia (KSA), affecting 6.54 per 1,000 Saudis. 1 It affects patients of different age groups, and is known to negatively influence patients' quality of life. 2,3 Epilepsy represents a burden to the community and economy, which can be aggravated if the disorder is not properly managed. [4][5][6] Multiple studies have been conducted around the world investigating the services provided to epileptic patients. [7][8][9] Such services include the Epilepsy Monitoring Units (EMUs), which provide long-term video-electroencephalography monitoring (LTM). 10 The LTM is an important investigational tool used for improving the accuracy of the diagnosis of different spells, for seizure classification, and for completing the pre-surgical workup in patients with drug resistant epilepsy. 11 Numerous reports from various countries have discussed the characteristics of their EMUs; 10,[12][13][14][15][16] however, similar studies have not been conducted in Saudi Arabia, which has a population of approximately 32.3 million, despite the presence of EMUs in several regions of the country. Thus, our objectives are to descriptively assess the EMUs and the provided services in Saudi Arabia and to perform a region-based comparison. Methods. In this cross-sectional study, an electronic questionnaire was designed to collect the following information regarding EMUs: the region, unit type (adult, pediatric, or shared), percentage of patients admitted from outside regions, waiting time, average stay time, the number of adult and pediatric epileptologists, neurologists, neurosurgeons, epilepsy neurosurgeons, neuropsychologists, and technicians, and the number of beds.
The survey also included questions about the date of establishment and the number of cases monitored per year, along with questions about the available diagnostic modalities and the workup performed for the admitted patients in the EMU. The questions on diagnostic modalities included the availability and number of cases monitored using electroencephalograms (EEG), magnetic resonance imaging (MRI), functional MRI (fMRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), magnetoencephalography (MEG) and intracranial monitoring (ICM). Additionally, questions related to epilepsy surgery and stimulation treatment for intractable epilepsy were added. The study was approved by the Institutional Review Board (IRB) of King Saud University. After the IRB approval, a list of all the tertiary hospitals and EMUs in Saudi Arabia was obtained through officials in the Saudi Ministry of Health and the Saudi Epilepsy Society. The inclusion criteria involved all hospitals with EMUs in Saudi Arabia. Other tertiary hospitals without EMUs were excluded from the study. The questionnaire was emailed to the directors of all the EMUs in Saudi Arabia from July 2013 to January 2016, with constant updates being made by all the respondents throughout the period. Statistical analysis. The data were entered into an Excel spreadsheet and then converted into an IBM Statistical Package for the Social Sciences (SPSS) dataset. The frequencies and percentages for all variables were obtained, and the means and ranges of all numeric variables were calculated using SPSS for Windows, Version 21.0 (Armonk, NY: IBM Corp.). The files were split by region and unit type and then analyzed for comparison purposes. Results. All EMU directors participated. There were 11 EMUs in Saudi Arabia operating in 8 hospitals; 8 (54.5%) EMUs in Riyadh, 2 (18.2%) in Dammam, 2 (18.2%) in Makkah and 1 (9.1%) in Jeddah. Five (54.5%) EMUs were shared for adults and pediatrics, 3 (27.3%) were devoted only to adult patients, and 3 (27.3%) were devoted only to pediatric patients. The number of in-patient beds allocated to neurology departments in all hospitals was 161, with a mean of 20 beds per hospital (range: 13-30). The total number of adult EMU beds was 31, with a mean of 3.88 beds per hospital (range: 2-7). The total number of pediatric EMU beds was 12, with a mean of 1.5 beds per hospital (range: 1-3). Table 1 shows the manpower in the 8 hospitals with EMUs. The average waiting time in the 11 EMUs was 11 weeks (range: 2-52). The mean percentage of patients coming from an outside region was 30.6% (range: 5-80%). The average length of EMU stay was 7 days (range: 4-14 days). Patients admitted to EMUs for seizure characterization were less than 100 in 7 (63.6%), 100-199 in 2 (18.2%), 200-299 in 1 (9.1%), and more than 300 in 1 (9.1%). Patients admitted to EMUs for pre-surgical evaluation were less than 100 in 6 (54.5%), 100-199 in 4 (36.4%), and more than 300 in 1 (9.1%). Table 2 shows the available diagnostic utilities (outpatient EEG, inpatient EEG, ambulatory EEG, MRI, fMRI, SPECT, PET and MEG) for all the EMUs and how many times per year they were utilized. Intracranial monitoring (ICM) was available in all EMUs. All but one EMU provided ICM for less than 50 patients annually. Only one EMU provided ICM for 50-99 patients annually.
The number of epilepsy surgeries performed per year was less than 50 in 81.8% of the hospitals, and 50-99 in the remaining hospitals. Figure 1 shows the variation between cities based on the number of epilepsy surgeries per year. Surgeries for lesional epilepsy were performed for 10-49 patients in 63.6% of the hospitals, while less than 10 patients had such surgeries performed in 36.4% of the hospitals. Non-lesional epilepsy surgeries were performed for less than 50 patients in 54.5% of the hospitals, while the remaining hospitals did not provide such surgeries. Eight (72.7%) EMUs, located in Riyadh, Jeddah and Dammam, provide vagal nerve stimulation (VNS) as a treatment modality for intractable epilepsy. The VNS was performed for 352 patients annually, with a mean of 44 (range: 12-70) per EMU. Deep brain stimulation was not utilized by any of the participating hospitals at the time of the study. Neuropsychologists were available at 81.8% of the EMUs. The total number of neuropsychologists was 14, with an average of 1.56 (range: 1-3) per EMU. Nine (81.8%) EMUs had psychologists to treat patients with pseudoseizures, while the remainder had psychiatrists for this purpose. The EMUs are unevenly distributed among the regions. Located in Riyadh are 6 EMUs with 18 beds for adult patients and 7 for pediatric patients, serving an approximate population of 7 million. On the other hand, there are no EMUs in the Asir region, which has a population of around 2 million. 17 There are 2 main healthcare sectors in Saudi Arabia. The first belongs to the government and the other is the private sector. In this regard, it is important to mention that all the EMUs are governmental facilities. Thus, access to EMU services is generally limited to Saudi nationals. Based on our results, VNS has been implemented widely in Saudi Arabia, unlike deep brain stimulation (DBS). The VNS has proven to be effective in managing epilepsy along with other disorders like depression. [18][19][20] However, VNS is not within the scope of this article as it has already been discussed thoroughly in the local and international literature. [21][22][23] Rubboli et al 13 had previously published a study about safety issues in 48 EMUs in 18 European countries. Data similar to those provided in the present study were found in their article. Despite the difference in the response rates between our study (100%) and Rubboli et al 13 (32%), comparisons were conducted and are shown in Table 4. The EMUs in Saudi Arabia are underutilized, as only 1 of the 11 EMUs admitted more than 250 patients per year compared with 9 of the 48 EMUs in Europe; this is despite the long waiting lists in Saudi Arabia. 13 Most of the European EMUs (74%) were shared for adult and pediatric patients, which may allow the provision of a higher quality of training for residents, fellows, technicians and nurses. The length of stay is long in Saudi EMUs. Spritzer et al 14 conducted a study investigating the EMU services at the Mayo Clinic and Banner Good Samaritan Hospital; the mean length of stay at the two sites was 4.5 and 3.3 days, respectively. This is shorter than that in our study, and might be related to the fact that the majority of patients who travel from distant regions to seek treatment in Saudi hospitals might stay longer because they are unable to complete outpatient management ahead of admission, and additional time is usually required to arrange for return trips. Further comparisons could not be performed due to methodological differences.
A major limitation of this study is the long period of data collection, which was mitigated by the constant updating of information by the respondents. This study did not investigate the epilepsy services provided by hospitals with no EMUs, as these have already been discussed previously. 24 In conclusion, there are 11 EMUs in Saudi Arabia fully equipped to serve epileptic patients. There also appears to be potential to serve more patients per year at most centers. It seems that the unbalanced regional distribution of EMUs probably impedes timely patient assessments, and further development of EMUs in highly populated yet underserved regions would improve overall epilepsy care throughout the Kingdom. Thus, further studies are needed to address the obstacles and suggest solutions. Disclosure. The authors have no conflict of interest, and the work was not supported or funded by any drug company.
2018-08-01T19:56:49.127Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "92e921fb06b84378e0a84620057435b0cef29964", "oa_license": "CCBYNCSA", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8015578", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "6f4ae0c28077c03832cde86565e2d3fe2af51ab2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257449696
pes2o/s2orc
v3-fos-license
Sustainability assessment of virtual water flows through cereal and milled grain trade among US counties Transference of the embedded water, so-called virtual water, in the trade of crops among regions within a country is often neglected, leading to no information about the impacts on the water resources of exporting regions, especially if those regions are water-stressed or, worse, water-scarce. Virtual water trade, if not considered through the lens of sustainability, could lead to adverse effects on the water resources of an exporting region. Previous related studies have quantified virtual water trade among the states in the United States providing valuable insights; however, information for specific crop trade among counties, its water footprint (WF) at the county scale, the resultant virtual water flow among counties, and the sustainability assessment of those virtual water flows are lacking. In this study, we calculate the green and blue WF of cereal and milled grain products at the county level and then, using trade data, calculate the virtual water flows among the counties. Then, we assess the sustainability of the import by introducing unsustainable import fraction (UIF), which is the ratio of virtual water imported from water-scarce counties to that of total virtual water imported in the form of cereal and milled grains. Finally, we quantify the change in UIF from the 2007–2017 period. A few of the significant insights discovered through this analysis include: (i) most of the cereal and milled grains trade is occurring among neighboring counties; ii) one-third of US counties import 75% or more virtual water from water scarce regions; (iii) in 2017, Texas and Missouri were the largest importer and exporter, respectively; and (iv) the number of counties importing cereals and milled grains from water-scarce counties increased from 2007 to 2017. Recommendations on alleviating the negative effects of the unsustainable import of cereal and milled grain are provided toward the end of the discussion. Introduction Population growth and the consequent increased food demand exert pressure on the limited freshwater resources [1][2][3]. In addition to the increase in food demand due to the rising population, the changing consumption patterns also add to the declining water availability problem [4][5][6]. Amid the growing population, sectoral competition for water is also a significant concern as the domestic, industrial, and agricultural sectors require more water [7][8][9]. Moreover, climate change is worsening the problem, especially in the already arid to semi-arid regions where the models show an increase in evapotranspiration, hence a reduction in the available green water [10][11][12]. On top of the external stressors impacting water availability, systemic issues such as the unsustainable usage of water or the economic water scarcity in water-abundant regions can also hinder the achievable production of food and affect other uses of water [13][14][15][16]. Economic water scarcity is critical in regions where sufficient water is hydrologically available, but its access is limited due to the lack of infrastructure. Economic water scarcity and its adverse effects are not confined to the agriculture sector alone. Drinking water access can also be limited if people's socio-economic conditions prevent them from accessing the water [17]. Globally, the extent of water-scarce regions is expanding [18,19], and the already scarce areas are experiencing increased water scarcity [20,21]. 
To offset the effects of increasing demand on available water resources, efforts are now being undertaken to incorporate and implement sustainable food production systems and responsible consumption [22][23][24][25]. Many studies have looked at the world's growing water scarcity problem [26][27][28][29][30]. Even in developed countries such as the US, unsustainable water resource exploitation has been a concern for quite some time. Efforts are being made to mitigate the impacts of low water availability on the agricultural sector through efficient irrigation, crop rotation, and other measures [31][32][33]. However, unsustainable food production and its impact on local water resources is not the only concern. As regions have become more interconnected, food trade has been increasing, especially imports by rapidly growing countries and economies such as China, the Middle East, and North African nations [34,35]. Thus, the scale of the problem of sustainability of food production and the resulting water scarcity has become more prominent. In addition to food trade, other agriculture raw products grown in developing countries for industries like textiles, which are ultimately exported elsewhere are also affecting the water resources in the exporting regions [36]. The effects are more pronounced in regions where there is overuse relative to the available quantities of water [37]. There is a growing concern about addressing the water sustainability issues from the perspective of the water movement across different regions [22,38,39] through the embedded water in a product called virtual water [40]. Virtual water and water footprint (WF) are the most widely adopted and acceptable approaches to quantify and assess the sustainability of food production and consumption [22,41], the trade of food commodities [42], and their relationship to the adverse effects on the water resources of the origin regions [43][44][45][46]. The virtual water converts the agricultural products into the amount of embedded water in a product. Thus, it quantifies and assesses the sustainability in terms of the amount of water used versus that available. Ultimately any required sustainability or scarcity-based indexes can be derived after the calculation of virtual water volumes. At a national scale, in the Continental United States (CONUS), studies have employed the WF and virtual water concept to calculate the virtual water flow among the states [47,48]. In this study, we move beyond the macro-level analysis at the state level to a much finer scale at the county level. We first employ the concept of WF to calculate the green and blue WF of cereal and milled grain products at the county level and then, using trade data, calculate the virtual water flows embedded in the grains among the counties. Finally, we assess the sustainability of the import based on the ratio of virtual water imported from water-scarce counties to that of total virtual water imported in the form of cereal and milled grains. We also look at the change in import sustainability from the 2007-2017 period. Finally, we discuss the sustainability of cereal and milled grains trade at the county level and provide recommendations for reducing the unsustainable trade of these agricultural commodities. Methodology In this study, we estimated the virtual water flows between counties as a result of cereal grains and milled grain products trade and assessed its sustainability for 2007 and 2017. 
We collected data on the trade of cereals and milled grain products between counties from Karakoc et al [49], who quantified the food flow of seven food commodity groups among US counties for the years 2007, 2012, and 2017. A weighted average WF of cereals and milled grains was calculated by combining the crop water use data [51] with the harvested area and production data obtained from the U.S. Department of Agriculture (USDA) [52]. The calculated WF and the traded mass of cereals and milled grains were used to compute the virtual water inflow and outflow between the counties, which were then used to identify the net virtual water exporting and importing counties. The steps followed in the study, the data, and the data sources are summarized in figure 1 and described in detail in the following sections.

Cereal WF

Gridded data on crop water use (m 3 ha −1 ) and harvested area (ha) for the cereal crops were obtained from Mekonnen and Hoekstra [53] and aggregated to the county-level average cereal crop water use (CWU_cereal in m 3 ha −1 ). The ten cereal crops considered in this study are wheat, rice, barley, maize, rye, oats, millet, sorghum, buckwheat, and triticale. To convert the cereal crop water use to WF per unit production, we used harvested area and crop production data from the USDA-National Agricultural Statistics Service for 2007 and 2017. For each county, the cereal WF was calculated by multiplying the cereal crop water use by the total cereal harvested area (ha) and dividing it by the total cereal production (kg):

WF_cereal = (CWU_cereal × Area) / Production (1)

where WF_cereal is the average cereal WF (m 3 kg −1 ) and CWU_cereal is the average cereal crop water use (m 3 ha −1 ) in the counties.

Milled grains WF

In addition to cereals, the second category of food commodity considered is standard classification of transported goods (SCTG) category 6 (milled grain products and preparation and bakery products). We followed a step-wise accumulation approach to calculate the WF of milled grain products [54]:

WF_prod[p] = (WF_proc[p] + Σ_{i=1}^{y} WF_prod[i] / f_p[p,i]) × f_v[p] (2)

where WF_prod[p] is the water footprint of output product p in volume per unit mass, WF_prod[i] is the water footprint of input product i, WF_proc[p] is the water footprint of the processing step in volume per unit mass of product p, f_p[p,i] is the product fraction of output p processed from input i, f_v[p] is the value fraction of output p, and y is the total number of input products. While the group includes milled grains (flours), bakery products, cereals, starch or milk, and baked products, we used the milled grain WF to represent the entire group due to the lack of data on the share of each product in the group. As a result, the WF of the traded products may be slightly underestimated. The product fraction and value fraction of the milled grain products of the cereal crops were obtained from Mekonnen and Hoekstra [53]. The county-level weighted average value fraction to product fraction ratio was calculated based on the production and the value fraction to product fraction ratio of the milled grain products of each cereal crop (equation (3)):

(V_f/P_f)_mg = Σ_c [Production_c × (V_f/P_f)_c] / Σ_c Production_c (3)

where (V_f/P_f)_mg is the average value fraction to product fraction ratio of the milled grain category, and (V_f/P_f)_c is the value fraction to product fraction ratio of milled grains produced from each cereal crop c.
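The county-level bookkeeping in equations (1) and (3) can be sketched in a few lines of Python; the crop names, production figures, and value/product fractions below are invented placeholders rather than values from the USDA or water-footprint datasets.

```python
# Hypothetical illustration of equations (1) and (3); all figures are invented.

def cereal_wf(cwu_m3_per_ha, harvested_area_ha, production_kg):
    """Eq. (1): WF_cereal = CWU_cereal * Area / Production  [m3/kg]."""
    return cwu_m3_per_ha * harvested_area_ha / production_kg

def weighted_vf_pf_ratio(crops):
    """Eq. (3): production-weighted average of (Vf/Pf) over the cereal crops."""
    total_production = sum(c["production_kg"] for c in crops)
    return sum(c["production_kg"] * c["vf_pf"] for c in crops) / total_production

# Invented example county
crops = [
    {"name": "maize", "production_kg": 5.0e8, "vf_pf": 1.05},
    {"name": "wheat", "production_kg": 2.0e8, "vf_pf": 1.10},
]
wf = cereal_wf(cwu_m3_per_ha=4500.0, harvested_area_ha=1.2e5, production_kg=7.0e8)
vf_pf_mg = weighted_vf_pf_ratio(crops)
print(f"county cereal WF : {wf:.4f} m3/kg")
print(f"(Vf/Pf)_mg ratio : {vf_pf_mg:.3f}")
```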
After calculating the average cereal value fraction to product fraction ratio, assuming that the cereals are processed into one output product, namely milled grain, and assuming the process WF is zero, the step-wise accumulation equation (equation (2)) was simplified to equation (4) and used to calculate the WF of milled grains for all the counties under study:

WF_mg = WF_cereal × (V_f/P_f)_mg (4)

where WF_mg is the average water footprint of milled grains in m 3 kg −1 .

Flow of virtual water embedded in cereals and milled grains trade

The WF of the two SCTG categories (cereals and milled grains), calculated using the methods described above, was used to convert the food flow from Karakoc et al [49] to virtual water flow. This was done by multiplying the WF of the food categories by their corresponding flow in mass (equation (5)):

VWF = WF × MF (5)

where VWF is the volume of water virtually transported with the traded goods, WF is the water footprint of the good, and MF is the mass of cereal or milled grain traded.

Sustainability assessment of the trade

The sustainability of the cereal and milled grain import was assessed based on the ratio of virtual water imported from water-scarce counties to the total virtual water imported by a county (equation (6)). It is important to note that only blue virtual water is used while calculating the UIF. The fraction of virtual water coming from water-scarce counties is considered the unsustainable import fraction (UIF):

UIF = VWT_ws / VWT_tot (6)

where VWT_ws and VWT_tot are the amount of blue virtual water imported from water-scarce counties and the total blue virtual water import, respectively. The blue water scarcity data were obtained from Mekonnen and Hoekstra [29] at a spatial resolution of 30 arc min and aggregated to the county level. Export sustainability is assessed when a water-scarce county has a net negative virtual water import, i.e., it exports more virtual water than it imports. The net import of a county is calculated by subtracting the virtual water export from the import. A county is considered a net importer if the value of net import is more than zero; otherwise, it is considered a net exporter. We assume that the export from a county is not sustainable if the county is a net exporter and faces blue water scarcity.

WF of cereal and milled grain production in the CONUS counties

Cereal production accounts for 685 km 3 of water use in the CONUS in 2017 (figure 2), which consists of 69 km 3 of blue water and 616 km 3 of green water. Most of the water use is concentrated in the Great Plains and Midwestern states, with Kansas being the largest water consumer. Most of the counties with crop water use larger than 0.5 km 3 are situated in Kansas, Iowa, Nebraska, Illinois, and Minnesota. These five states account for about 44% of the total cereal crop water use in the CONUS. These five states are also the largest cereal producer states, contributing about 46% of the cereal production in the CONUS (figure S1). A large fraction of the crop water use for cereal production in the eastern US comes from green water, in contrast to the water use in the western US, which mostly relies on blue water (figure S2). Overall, about 90% of the water used for cereal production in the CONUS is green water. The average WF of cereal production is 1500 m 3 /ton in the CONUS counties, which is around 91% of the global cereal WF [50] (throughout this study, "ton" refers to the short ton, 2000 lbs). The average milled grain production WF in the CONUS is 1549 m 3 /ton.
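As an aside, the virtual-water and UIF bookkeeping of equations (5) and (6) above can be illustrated with a short, self-contained sketch; all trade volumes, water footprints, and scarcity flags below are invented for illustration and are not values from the study's datasets.

```python
# Hypothetical illustration of equations (5) and (6); all numbers are invented.

def virtual_water_flow(wf_m3_per_kg, mass_kg):
    """Eq. (5): VWF = WF * MF  [m3]."""
    return wf_m3_per_kg * mass_kg

def unsustainable_import_fraction(imports):
    """Eq. (6): UIF = blue virtual water from water-scarce origins / total blue virtual water imported."""
    total = sum(f["blue_vwf_m3"] for f in imports)
    scarce = sum(f["blue_vwf_m3"] for f in imports if f["origin_water_scarce"])
    return scarce / total if total > 0 else 0.0

# Invented inbound shipments for one destination county
imports = [
    {"origin": "county A", "blue_vwf_m3": virtual_water_flow(0.15, 2.0e6), "origin_water_scarce": True},
    {"origin": "county B", "blue_vwf_m3": virtual_water_flow(0.10, 1.0e6), "origin_water_scarce": False},
]
uif = unsustainable_import_fraction(imports)
print(f"UIF = {uif:.2f}")   # a value of 0.75 or more would flag the county as a severely unsustainable importer
```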
Oklahoma has the largest WF of both cereal and milled grain production followed by Montana and Texas, while Midwestern states have a comparatively small WF ( figure 3). The difference in the WF of cereal crops among states is due to difference in the climate (evaporative demand) and cereal yield. Virtual water flows through cereal and milled grain trade among CONUS counties We find that around 70% of the virtual water trade in cereal and milled grain products occurs within the states ( figure 4(a)). In fact, most of the trade transactions occur when the origin and destination counties are close ( figure S3). Only around 6% of the virtual water flows between the counties that are more than 1000 km apart. On the other hand, trade in cereal and milled grain products between counties within 500 km of each other accounts for around 86% of the virtual water flow in the CONUS. For long-distance trade (more than 2000 km), California and Minnesota are the largest virtual water importer and exporter states, respectively. The largest virtual water transfer due to trade in cereal and milled grain outside the state occurs from Missouri to Texas (62 km 3 ), followed by Oklahoma to Kansas (22 km 3 ) and Kansas to Texas (20 km 3 ) ( figure 4(b)). However, the top three states with the largest intrastate virtual water transfer are Kansas (181 km 3 ), Texas (151 km 3 ), and Nebraska (122 km 3 ). Texas is the largest virtual water importer, followed by Louisiana, importing 95.6 km 3 and 48.3 km 3 from other states, respectively, while Missouri is the largest virtual water exporter, followed by Kansas, exporting 74.8 km 3 and 48.5 km 3 to other states through cereal and milled grain trades, respectively. Figure S4 shows the spatial distribution of the top 500 virtual water importing and exporting counties in the CONUS. Kanas and Texas have the highest number of top 500 exporting and importing counties, respectively. Great Plains states have a high number of importing and exporting counties due to high within-state trade while the Western and Eastern US states have high water importing counties. About 90% of the Kansas counties are net exporters while 100% of the counties in New Hampshire are net importers (figure S5). The majority of counties in Midwestern and Great Plains states are net exporters while counties of western and eastern coastal states are largely net importers. In terms of total net water imported or exported by the state, Texas is the largest net importer followed by Louisiana while Missouri is the largest net exporter of virtual water followed by Montana (figure 5). Import and export sustainability of cereal and milled grain trade Virtual water import from a water-scarce region may exert pressure on the exporter's resources. The importer may also face supply and demand imbalance due to unsustainable imports from water shortage in the exporter region. We find an eastern and western split in the unsustainable virtual water import fraction ( figure 6). This is because most of the water-scarce counties in the CONUS are situated in the western part of the country. The Western US counties are largely net importers and most of their import comes from water-scarce regions while Eastern counties import a low percentage of their total virtual water import from water-scarce counties. Around 54% of the CONUS counties have a UIF less than 0.25 and about half of the CONUS counties have UIF < 0.10. These counties are mostly situated in the eastern US. 
Around 36% of the total counties have a UIF of 0.50 or more; moreover, 31% of the total counties in the US have a UIF of 0.75 or more, or, in other words, 31% of the counties import 75% or more of their total virtual water from water-scarce regions. Further, 7% of the CONUS counties have a UIF equal to 1, indicating that all of their cereal and milled grain products are imported from water-scarce regions. One-third of the counties having a UIF of 0.75 or more are situated in only three states: Texas, Nebraska, and Kansas. Moreover, Nebraska, Arizona, Utah, Kansas, California, Nevada, Idaho, and New Mexico have more than two-thirds of their counties with a UIF of 0.75 or more (figure S6). The states facing high import unsustainability have their major originating regions in the Great Plains or in arid water-scarce states (figure 7). We define the export unsustainability condition as occurring when a county faces blue water scarcity and still exports virtual blue water. Our results show that 26% of the cereal and milled grain exporting counties export unsustainably and 19% of the counties face severe blue water scarcity (figure 8). Most of these counties are situated in the Great Plains region, and one-third of them are in Texas, Nebraska, and Kansas. With 90% of its counties facing blue water scarcity while exporting virtual blue water, Nebraska has the highest fraction of unsustainable exporter counties. This is followed by Utah and Wyoming, with 72% and 70% of their counties being unsustainable exporters, respectively.

Change in virtual water flow and import-export sustainability

The flow of cereal and milled grain products between the US counties was reduced by around 10% from 2007 to 2017 due to a 36% reduction in trade instances. This results in a reduction of the virtual water flow to 1750 km 3 in 2017 from 2396 km 3 in 2007. The green virtual water reduces by 31% while the blue virtual water reduces by 36%; however, the contribution of green water to the total virtual water remains the same, i.e., 92%, in both years. The WF of cereal and milled grain has substantially decreased over the last ten years. The county-level average WF of cereal grain reduces to 1797 m 3 /ton in 2017 from 3694 m 3 /ton in 2007. The reduction in WF can be attributed to the increase in the state average cereal crop yield, which increased by 64% in 2017 relative to 2007.

Discussion

We calculated the virtual water embedded in the trade of cereal and milled grain products in the US using previously developed datasets of food flow between the US counties [56] and the WF of crops and derived crop products [53]. Cereal and milled grain products account for around 42% of the total food flow between US counties. Our analysis shows that the cereal and milled grain trade resulted in 1756 km 3 of virtual water flow within the US in 2017. One-third of US counties import a large fraction of their cereal and milled grain products from water-scarce counties and are categorized as severely unsustainable importers. About 46% of the total virtual water of cereal and milled grain products is imported by counties that import more than 75% of their products from water-scarce regions. Further, one-fourth of the US counties export virtual water through cereal and milled grain transport, despite facing blue water scarcity. The majority of unsustainably trading counties are located in Great Plains states and Midwestern states. We find that counties in the Corn Belt are large virtual water exporters, while counties in eastern and western coastal states are importers.
Midwestern states also include counties that are highly unsustainable in terms of exporting and importing. The number of counties with unsustainable imports increased from 2007 to 2017. Our study uses UIF to identify unsustainable importing and exporting counties based on the fraction of virtual water imported from water-scarce counties to the total water imported by a county. Export unsustainability refers to the trade that originates from an already water-scarce county. In this study, we focus on cereal and milled grain products, hence the sustainability in trade pertains to only these two categories. Therefore, UIF indicating import unsustainability in a county does not imply unsustainability in overall food import. One of the main uses of the UIF is to identify counties that are heavily dependent on imports from water-scarce counties. These counties may be at a greater risk of unsustainability in their food systems, and the UIF can help to highlight this issue. Additionally, the UIF can be used to track changes in the sustainability of imports over time, which can be useful for policy makers and other stakeholders. Moreover, a county may trade cereal and milled grains in an unsustainable manner being a sustainable importer of total food products. A comprehensive study incorporating all major food categories may determine the overall sustainability of the trade between counties. Additionally, UIF may results in high values and its conclusion may not be rational if a county imports little virtual water. In conclusion, while the UIF has limitations, it is still a useful metric for assessing the sustainability of food product imports in a specific context. We use the WF of cereal crops and annual blue water scarcity data [29] from circa 2000 and the county food flow data for 2017. This study assumes that county-level crop water use (actual evapotranspiration) and blue water scarcity do not change significantly since 2000. Moreover, because we assume that milled grain products constitute the majority of the SCTG category 6 food products, we did not include the bakery products such as pasta, baked snacks food, and other baked products in the WF calculation, resulting in a slight underestimation of the WF. Oklahoma, Montana, Texas, Missouri, and North Dakota are the top five states with the largest WF (m 3 /ton) of cereal, with Oklahoma having more than twice the cereal WF in Kansas, the state with the largest total water use (m 3 /year) for cereal production. The large WF makes the virtual water content high for exporting states. The largest out-of-state virtual water transfer occurs from Missouri to Texas, but the food amount traded is in the 72nd percentile. Kansas exports about half the cereal amount that Illinois exports but the water used for cereal production in Illinois is around 60% of that in Kansas. The import unsustainability in the counties can be reduced by either alleviating water scarcity in the origin counties or lessening the dependency on food imports from Midwestern counties. Blue water scarcity in those regions can be reduced by improving water productivity, precision agriculture, building water infrastructure, avoiding water-intensive crops, etc. Conclusion Cereal and milled grains are important sources of nutrition and a major contributor to the calorie requirement for people all over the world. Around 60% of the calories are derived from cereals in developing countries [57]. 
Trade of food commodities such as cereals not only involve the flow of the foods but also the water embedded in them. This study assessed the sustainability of virtual water flow among US states and counties due to the cereal and milled grains trade. The study revealed that the majority of virtual water flows in the US occur through the trade of cereal and milled grains within neighboring counties. In 2017, Texas was the largest virtual water importer, while Missouri was the largest exporter. Results of this study indicate that about one-third of the US counties import cereal and milled grains unsustainably. The majority of counties in the Great Plain states and the Western US import unsustainably. In addition, counties in the Great Plains states face export unsustainability too due to high blue water scarcity in the region. WF and crop water use distribution changes in the ten-year span from 2007 to 2017. The number of counties importing in an unsustainable manner increases in 2017. This calls for measures regarding changing the direction of trade of goods. Water scarcity level, available water resources and water productivity of the states and counties should be considered as one factor for export and import activities. Data availability statement The data cannot be made publicly available upon publication because they are not available in a format that is sufficiently accessible or reusable by other researchers. The data that support the findings of this study are available upon reasonable request from the authors.
2023-03-12T15:09:39.565Z
2023-03-10T00:00:00.000
{ "year": 2023, "sha1": "4dcd2bf9cc79ffa26d40bca224df017f1fa43fc5", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/2634-4505/acc353/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "0f1e54bd5cd9d687f4766908609284edfed0dd46", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
245267768
pes2o/s2orc
v3-fos-license
Methylphenidate Fast Dissolving Films: Development, Optimization Using Simplex Centroid Design and In Vitro Characterization

Objectives: The focus of this study was to design and optimize a methylphenidate hydrochloride mouth dissolving film (MDF) that can be beneficial in acute episodes of attention deficit hyperactivity disorder (ADHD) and narcolepsy. Materials and Methods: The solvent casting method was used for the preparation of the film. The effects of the independent variables, namely the amounts of the film-forming polymers [hydroxypropyl methyl cellulose (HPMC) E5, HPMC E15, and maltodextrin], on the % drug release, disintegration time, and tensile strength of the film were optimized using a simplex centroid design. Complex formation in the film was examined using Fourier-transform infrared spectroscopy and differential scanning calorimetry studies. Multiple regression analysis yielded equations that adequately describe the influence of the independent variables on the selected responses. Polynomial regression analysis, contour plots, and 3-D surface plots were used to relate the dependent and independent variables. Results: Experimental results indicated that different polymer amounts had complex effects on the % drug release from the film, the disintegration time, and the tensile strength of the film. The observed responses were in close agreement with the expected values calculated from the developed regression equations, as shown by the percentage relative error. The final formulation showed more than 95% drug release within 2 min, disintegrated within a minute, and had good tensile strength. Conclusion: These findings suggest that an MDF containing methylphenidate hydrochloride is likely to become a preparation of choice for the treatment of ADHD and narcolepsy.

INTRODUCTION

Oral drug administration has been one of the most convenient and widely accepted routes of delivery of medicinal agents since the dawn of time. Oral drug formulations are solid and liquid preparations that are taken orally, chewed or swallowed, and travel into the gastrointestinal tract for post-buccal absorption. 1 Nowadays, the most common solid oral dosage forms are tablets and capsules, which include traditional tablets and controlled-release tablets, along with hard and soft gelatin capsules. 2,3 One of the major problems associated with the use of these oral dosage forms is the time required for onset of action, which is at least half an hour in the case of conventional dosage forms and even longer for controlled and sustained release dosage forms. Dysphagia (difficulty in swallowing) is a chronic problem in people of all ages, but it is more prevalent in elderly and pediatric patients due to physiological differences. Uncooperative or mentally ill patients and patients suffering from fatigue, vomiting, motion sickness, an allergic attack or coughing are some of the other groups who have such issues. This issue affects 35-50% of the population according to reports.
4,5 These concerns led to the development of mouth-dissolving films (MDF), a new kind of solid oral dosage form. These delivery systems degrade or disintegrate quickly in the mouth, without requiring water to facilitate swallowing. Such technologies make it easier for those with swallowing problems, as well as the general public, to take their drugs. Upon ingestion, saliva serves to rapidly disperse/dissolve the MDF. The saliva containing the dissolved medicament is absorbed from the mouth, pharynx, and esophagus. Because of the above-mentioned advantages, the bioavailability of drugs is significantly higher than that observed with conventional dosage forms such as tablets and capsules. 2,3 Methylphenidate hydrochloride is a psychostimulant drug. The drug is useful in attention deficit hyperactivity disorder (ADHD), a condition that requires immediate medication. By blocking dopamine transport or carrier proteins, this drug prevents dopamine uptake in central adrenergic neurons. It also induces heightened sympathomimetic activity in the central nervous system by acting on the brain stem arousal system and the cerebral cortex. Methylphenidate hydrochloride is a biopharmaceutics classification system class-I (high permeability and solubility) drug, and its bioavailability is only 11-52% due to hepatic metabolism. Therefore, the main objective of this work was to provide immediate release of the psychostimulant drug methylphenidate HCl for immediate action in ADHD, in order to improve patient compliance and to avoid hepatic first-pass metabolism of the drug. 4,5 Therefore, the current study was conducted to develop MDFs of methylphenidate hydrochloride to provide a quicker onset of action in ADHD. 4

MATERIALS AND METHODS

Methylphenidate hydrochloride was received as a gift sample from Ipca Laboratories Ltd., Mumbai, India. Different hydroxypropyl methyl cellulose (HPMC) grades were gifted by Colorcon Asia Pvt. Ltd., Goa, India. Maltodextrin was purchased from Himedia Laboratories Pvt. Ltd., Mumbai, India.

Calibration curve of methylphenidate HCl

Preparation of standard stock solutions: Methylphenidate HCl (100 mg) was weighed accurately into a 100 mL volumetric flask and dissolved in phosphate buffer pH 6.8. The volume was made up to 100 mL with the same solution to get a concentration of 1000 µg/mL (1 mg/mL). 6

Scanning of the drug: An ultraviolet (UV) spectrum of the stock solution was taken between wavelengths of 200-400 nm. It gave a peak at 257.2 nm, which was selected as the λ_max. The absorption maxima of methylphenidate hydrochloride in pH 6.8 buffer are shown in Figure 1. 7

Preparation of the calibration curve: The stock solution was diluted with pH 6.8 buffer to obtain a concentration range of 100 to 1000 µg/mL. The absorbance of these solutions was measured against a blank at 257.2 nm using a UV-visible spectrophotometer (Shimadzu Corporation, Japan), and the absorbance values are summarized in Table 1. The calibration curve, plotted as absorbance versus drug concentration, is given in Figure 2. 8,9
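The calibration step can be reproduced with an ordinary least-squares fit of absorbance against concentration (Beer-Lambert behaviour); the absorbance values in the sketch below are invented placeholders, not the data reported in Table 1.

```python
import numpy as np

# Invented calibration data: concentrations (ug/mL) vs. absorbance at 257.2 nm.
conc = np.array([100, 200, 300, 400, 500, 600, 700, 800, 900, 1000], dtype=float)
absorbance = np.array([0.11, 0.21, 0.32, 0.41, 0.52, 0.63, 0.72, 0.83, 0.92, 1.03])

# Linear fit  A = m*C + c  and coefficient of determination R^2
m, c = np.polyfit(conc, absorbance, 1)
pred = m * conc + c
r2 = 1.0 - np.sum((absorbance - pred) ** 2) / np.sum((absorbance - absorbance.mean()) ** 2)
print(f"slope = {m:.5f} AU per ug/mL, intercept = {c:.4f}, R^2 = {r2:.4f}")

# The fitted line can then convert a measured sample absorbance back to a concentration,
# e.g. when estimating the drug content of a dissolved film.
sample_abs = 0.47
print(f"estimated concentration = {(sample_abs - c) / m:.1f} ug/mL")
```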
Preparation of mouth dissolving film of methylphenidate HCl

Calculation of the dose of methylphenidate HCl: Methylphenidate is an effective drug for ADHD treatment with a good safety profile; evidence shows that dose optimization can improve the safety and effectiveness of treatment. Dose optimization is used widely in general medicine and psychiatry to achieve optimum therapeutic impact while minimizing the likelihood of adverse effects. Dose optimization is typical with virtually all psychotropic drugs and may be critical, particularly for therapeutic dose-response relationships with high individual heterogeneity, such as the use of stimulants to manage ADHD. Genetic diversity, the patient's weight, age, sex, drug-induced resistance, and interactions with other drugs or medical conditions are all considerations that can affect the need for dose optimization. The dose of methylphenidate HCl is 7.17 mg. Therefore, a 7.17 mg dose of methylphenidate HCl was required in a film of 4 cm 2 area. The total area of the 9.4 cm diameter petri dish was 69.43 cm 2 . So, the amount of drug present in the 69.43 cm 2 petri dish was 124.42 mg for all formulations. Therefore, the amount of methylphenidate HCl in each film (4 cm 2 ) was 7.17 mg. 12,13

Preparation of film by the solvent casting method: Various methods have been used for film preparation. Among them, the solvent casting method is the most widely used method to obtain a good and smooth film. The MDF of methylphenidate HCl was made by the solvent casting method. The aqueous solution was prepared by dissolving the chosen polymers in 25 mL of purified water and allowing it to rest for 1 hour to eliminate any trapped air bubbles. Then, the active pharmaceutical ingredient and plasticizer were dissolved in this polymeric solution. After that, the mixture was poured into a silicone petri dish and dried in a 50°C oven for 24 hours. The film was then gently withdrawn from the petri dish and examined for flaws. [16][17]

Preformulation study

Melting point: The melting point of methylphenidate HCl was measured with a digital melting point apparatus. The drug sample was filled into a capillary tube and placed, together with a mercury thermometer, in the aluminum block of the apparatus. The block was heated by two elements clamped to its sides, and the sample tube was viewed through the magnifying lens against a dark or bright background. The temperatures at which the sample started to melt and at which it was completely melted were recorded. 18,19

Partition coefficient: Methylphenidate is soluble in alcohol, ethyl acetate, and ether. Hence, ether was chosen for the determination of the partition coefficient. For this purpose, ether and water were saturated with each other for a period of 24 h in a 500 mL volumetric flask. In a 100 mL volumetric flask, 10% (w/v) of the drug was transferred to a mixture of the above saturated solutions and stirred for 24 hours at room temperature on a rotary shaker. After 24 hours of equilibration, the system was centrifuged at 3000 rpm for 15 minutes. The concentration of methylphenidate HCl in the ether and water phases was analyzed with a UV-visible spectrophotometer at 257.2 nm after appropriate dilution with methanol. The partition coefficient was determined as the ratio of the drug concentration in the ether phase to that in the water phase. The experiment was replicated thrice. 19

Optimization of mouth dissolving film components

The placebo films were made using polymers such as maltodextrin, HPMC E3, HPMC E5, and HPMC E15 by the solvent-casting method.
Polymers were selected from the above-mentioned placebo films based on appearance (visual inspection) and disintegration time. An identical approach was used to optimize the plasticizers (glycerin, propylene glycol) using the previously optimized concentrations of the respective components. The plasticizer was optimized based on the film tensile strength, folding endurance, and disintegration time. 20,21

Statistical analysis

Statistical analysis was performed using a simplex centroid design.

Simplex centroid design: The use of simplex centroid experimental designs in pharmaceutical research is well known. They are especially useful in formulation optimization procedures, where the overall amount of the ingredients being considered must remain constant. In the films, changing the total amount of polymer can change the mechanical properties of the film to a large extent, so the simplex centroid is the appropriate design to be applied to the film formulation. The values of the dependent and independent variables can be used to develop a first-order polynomial model with interaction terms, Y = b1X1 + b2X2 + b3X3 + b12X1X2 + b13X1X3 + b23X2X3, where Y is the response parameter and the bi are the estimated coefficients for the factors Xi. The main effects (X1, X2, and X3) represent the average results of changing one factor from its low to its high value at a time. [24]

Other common ingredients used for each formulation: Other ingredients used include propylene glycol (0.5 mL) as a plasticizer and brilliant blue as a colorant. Glycerin was used to lubricate the petri dish to facilitate smoother peeling of the film.

Scanning of methylphenidate HCl in a UV spectrophotometer: Scanning of methylphenidate HCl was performed. 25 A UV spectrum was run between the wavelengths of 200-400 nm and is shown in Figure 1.

Calibration curve of methylphenidate HCl: Methylphenidate HCl (100 mg) was weighed accurately into a 100 mL volumetric flask and dissolved in phosphate buffer pH 6.8. The volume was made up to 100 mL with the same solution to get a concentration of 1000 µg/mL. From this, solutions of concentrations ranging from 100 µg/mL to 1000 µg/mL were prepared and their absorbance was measured at a wavelength of 257.2 nm in a UV spectrophotometer. 25,26

Thickness measurement: A screw gauge was used to measure the thickness of the MDF (2 × 2 cm 2 ). Each film's thickness was measured at three locations and the standard deviation (SD) was estimated. 27

Drug content uniformity: A 4 cm 2 MDF was cut into small pieces and placed in a graduated glass-stoppered flask with 10 mL of pH 6.8 phosphate buffer. The flask was kept for 24 hrs. The solution from the flask was filtered through Whatman filter paper and the amount of drug present was determined by the UV spectrophotometric method at a wavelength of 257.2 nm. 28

Weight variation: Three films of size 2 × 2 cm 2 from every batch of MDF were weighed on an electronic balance (Citizen CY 220C, Mumbai, India) and the average weight with SD was calculated. 29,30

Tensile strength: Tensile strength was used to characterize the mechanical properties of the polymeric MDF. The tensile strength of the MDF was measured using a handcrafted tensile strength instrument. The MDF was fixed to the assembly, and the weight needed to break the film was measured. The following formula was used to calculate the tensile strength (formula 1): 31,32

T.S. = break force / A (1)

where A = cross-sectional area of the film.
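As a worked illustration of formula 1, the snippet below converts a hypothetical break force and film geometry into a tensile strength; the numerical values are assumptions made for illustration, not measurements from this study.

```python
# Hypothetical illustration of formula 1; the break force and film geometry are invented.

def tensile_strength(break_force_n, width_mm, thickness_mm):
    """T.S. = break force / cross-sectional area (width x thickness)."""
    area_mm2 = width_mm * thickness_mm
    return break_force_n / area_mm2          # N/mm^2 (equivalently MPa)

ts = tensile_strength(break_force_n=5.6, width_mm=20.0, thickness_mm=0.08)
print(f"tensile strength = {ts:.2f} N/mm^2")
```

The percentage elongation described next (formula 2) follows analogously from the initial and final lengths of the strip.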
Percentage elongation: After calculating the tensile strength of the film, the percentage elongation was determined using formula 2: 32

% elongation = [(L_F − L_O) / L_O] × 100 (2)

Here, L_F = final length and L_O = initial length.

Moisture content (%): This measure was also used to assess the film's integrity in dry conditions. A film with a surface area of 4 cm 2 was cut out, weighed, and placed in a desiccator containing fused anhydrous calcium chloride. The films were removed and reweighed after 24 hours. Formula 3 was used to calculate the percentage moisture content of the film: 33,34

% moisture content = [(initial weight − final weight) / initial weight] × 100 (3)

% Moisture uptake: The formulation was exposed to an atmosphere of 84% RH at 28°C for three days using a saturated solution of NaCl. After three days the films were removed and weighed, and the percentage moisture absorbed was calculated. The average percentage moisture absorption of each film was calculated using formula 4: 34

% moisture uptake = [(final weight − initial weight) / initial weight] × 100 (4)

In vitro disintegration time: The test was carried out using a slightly modified version of the procedure described by Mishra and Amin 20 . A glass petri dish containing 10 mL of distilled water was used to hold the film size needed for dose administration (2 × 2 cm). The time taken for the film to break was recorded as the in vitro disintegration time. 20,35

Solubility study: The solubility of methylphenidate hydrochloride was determined in different solvents, namely water, methanol, ethanol, 0.1 N HCl, chloroform, ethyl acetate, acetone, and pH 6.8 phosphate buffer, at room temperature. Saturated solutions were prepared by adding excess drug to the solvents to form a suspension, with stirring continued for 24 h in the presence of drug particles. The saturated suspensions were filtered (using 0.2 µm PTFE filters) to remove drug particles, and the clear solutions were diluted to measure the drug concentration (Table 4).

In vitro dissolution study: The test was performed, with slight modification, using the method described by Dinge and Nagarsenker. 38 A film of 4 cm 2 was placed in a glass petri dish and 25 mL of dissolution medium (phosphate-buffered saline pH 6.8) was added. A stirring speed of 100 rpm was selected for the dissolution of the batches. Aliquots of 2.5 mL were withdrawn and replaced with equal volumes of pH 6.8 buffer at regular intervals of 1, 2, 3, 4, 5, 7.5, and 10 minutes to maintain sink conditions. [38]

Folding endurance: Folding endurance was determined by repeatedly folding the strip at the same place until the strip broke. The number of times the film could be folded without breaking was taken as the folding endurance value. 39,40

Stability study: The goal of stability testing was to show how the quality of a drug substance or drug product changes over time when exposed to a range of environmental factors including temperature, humidity, and light, allowing recommended storage conditions, retest periods, and shelf-life to be established. The International Conference on Harmonization (ICH) specifies the length of study and storage conditions. [41][42][43]

Method: The samples were wrapped in aluminum foil and subjected to stability studies as per the ICH guidelines. They were held in a stability chamber at 40°C/75% RH for 3 months and tested for physical appearance, drug content, in vitro disintegration time, and drug release at 1-month intervals, and the findings were recorded. 41,43,44

Release kinetics and mechanisms

RESULTS AND DISCUSSION

The λ_max of the drug was determined by scanning a 1000 µg/mL solution prepared with pH 6.8 buffer in the range 200-400 nm using a double-beam UV-visible spectrophotometer. The λ_max was found to be 257.257 nm (Figure 1). Therefore, further studies were conducted at a wavelength of 257.2 nm.
Fourier-transform infrared spectroscopy (FTIR) and differential scanning calorimetry (DSC) studies
An FTIR spectrophotometer was used to conduct the compatibility tests. A KBr disc was used to record the IR spectrum of the pure substance and of a physical mixture of drug and polymer. 45,46 In the different samples, the distinctive peaks of methylphenidate hydrochloride were obtained at their characteristic wavenumbers (Figure 3, Table 5); the spectra for all formulations, with the characteristic (principal) peaks of methylphenidate hydrochloride, are shown there. In the spectrum of the drug-polymer mixture, and in the formulation, all of the drug's peaks are present. This indicates that there is no interaction between the drug and the formulation components.

DSC
The DSC thermogram of methylphenidate hydrochloride showed an endothermic peak at 229.41°C corresponding to its melting point.

Preliminary studies on the selection of polymers
Preliminary research was conducted to identify appropriate polymers and a suitable plasticizer capable of producing films with favorable mechanical properties and disintegration times. 48 The films were prepared from the casting solution by the solvent casting process. The composition of the various batches, the amounts of polymer used, and their appearance and disintegration times are given in Table 6.

Optimization of polymer
Placebo films were prepared using maltodextrin, HPMC E3, HPMC E5, and HPMC E15 as film-forming agents in various amounts. The placebo films prepared using maltodextrin as a film former in amounts of 750, 1000, 1250, and 1500 mg did not have acceptable physical characteristics. The lowest amount of maltodextrin (PB1), when cast in a plastic petri dish with an area of 70 cm2, was insufficient to form a film. In the other maltodextrin batches (PB2 to PB4) the amounts were sufficient to form a film, but the films were sticky. Thus, maltodextrin alone was not selected as the film-forming polymer.

HPMC is a hydrophilic polymer that is suitable for the MDF. The various grades of HPMC could make films that were very transparent and had excellent mechanical properties. Placebo films of different grades (HPMC E3, HPMC E5, and HPMC E15) were prepared to verify their film-forming capacity and suitability for MDF. Of all the HPMC batches, PB7 (HPMC E3), PB9 (HPMC E5), and PB11 (HPMC E15) were easily removed from the petri dish and had good, acceptable physical characteristics and low disintegration times compared with the other batches (Table 6).

Films prepared from single polymers (PB7, PB9, PB11) gave good results for disintegration time, but the other properties were not as good, so combinations of different grades of HPMC were taken, which exerted better results in terms of disintegration time, folding endurance, and tensile strength. A combination of different grades of HPMC and maltodextrin was also tried and, as a result, films with a much smoother texture were obtained. The combination yielded smoother films with shorter disintegration times and, finally, among the preliminary batches, PB22 gave the best results (Table 7). Therefore, a combination of HPMC E5, HPMC E15, and maltodextrin was selected as the film-forming combination for the current work. 49,50
Optimization of plasticizer
The films were prepared using propylene glycol and glycerol as plasticizers in amounts ranging from 0.25 to 1.25 mL (Table 8). The results indicated that with the least amount of plasticizer the films were very brittle, and with the highest amount of plasticizer the films could not be dried properly and a peeling-off problem was observed. Among the prepared films, PB24, PB25, PB30, and PB31 were good, but their disintegration times were much higher than that of PB29 because of the larger amount of plasticizer. Based on folding endurance, tensile strength, and disintegration time, 0.5 mL of propylene glycol was selected as the optimum amount of plasticizer. 50,51

Statistical analysis
The simplex centroid design is a type of mixture design that is often used to adjust formulation variables with the simple prerequisite of knowing how the independent variables interact. Preliminary investigation of the process parameters revealed that factors such as the amount of HPMC E5 (X1), the amount of HPMC E15 (X2), and the amount of maltodextrin (X3) had a significant influence on the amount of drug dissolved in 2 min (CPR Q2; R1), the disintegration time (R2), and the tensile strength (R3) of the drug-loaded fast dissolving film. As a result, this design was used in the further research. All three independent variables (X1, X2, and X3) produced large variation in disintegration time, amount of drug released in 2 minutes, and tensile strength across the 7 batches (Table 9). The data showed that X1, X2, and X3 had a major effect on these responses (R1, R2, and R3). Considering the magnitude of the coefficients and their statistical signs, the polynomial equations can be used to determine whether the effect on a response is positive or negative. The statistical analysis (ANOVA) results for the design batches are shown below. 46,52

Response 1: Cumulative percentage release in 2 min, Q2 (R1)
The magnitude of the coefficients and their mathematical signs can be used to determine whether the polynomial equations express positive or negative information. Statistical analysis was carried out in Design-Expert software (7.1.5), which suggested that a special cubic model (SCM) was followed for the drug release % in 2 minutes, with a p value of 0.0385. This indicated that the model was significant. The statistical analysis (ANOVA) results (Table 10), contour plot, and 3D surface plot for cumulative percentage release (CPR) Q2 (Figure 10) show a strong effect of the three factors (amounts of HPMC E5, HPMC E15, and maltodextrin). The polynomial equation for Q2 indicates that the three polymer amounts have a positive effect on Q2. In vitro dissolution of the films increased with increasing amount of polymer. It was noted that, when the amounts of polymer were selected within the limits of the design, the in vitro dissolution rate increased to the greatest extent with the amount of HPMC E5 and to a lesser extent with maltodextrin, followed by HPMC E15. According to the equation, better release can be achieved with the combination of the three polymers rather than with a combination of any two of them. 53

Response 2: Disintegration time (R2)
Statistical analysis was carried out in Design-Expert software (7.1.5), which recommended that an SCM was followed for this response, with a p value of 0.0385. This indicated that the model was significant. 53 To find the contribution of each component and their interactions, an ANOVA for the SCM was carried out.
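To make the special cubic model concrete, the sketch below fits the Scheffé special cubic polynomial to the seven simplex-centroid runs. The design points are the standard simplex centroid layout; the response values are hypothetical placeholders, not the measured Q2 data.

```python
# Minimal sketch: fit a special cubic (Scheffe) mixture model to the seven
# simplex-centroid runs. X1 = HPMC E5, X2 = HPMC E15, X3 = maltodextrin,
# expressed as mixture fractions. The responses are hypothetical.
import numpy as np

X = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],               # pure components
    [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],   # binary blends
    [1/3, 1/3, 1/3],                                # overall centroid
])
y = np.array([92.0, 85.0, 88.0, 95.0, 97.0, 90.0, 99.0])  # hypothetical % release at 2 min

x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
design = np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3])

coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
for name, b in zip(["b1", "b2", "b3", "b12", "b13", "b23", "b123"], coeffs):
    print(f"{name:>4} = {b:8.2f}")
```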
The polynomial equation and ANOVA results (Table 11), contour plot, and 3D surface plot for the disintegration time (Figure 11) indicate the strong effect of the three factors (amounts of HPMC E5, HPMC E15, and maltodextrin). The polynomial equation for disintegration time indicates that the three polymer amounts have a positive effect on the disintegration time. The in vitro disintegration time of the films was observed to increase as the amount of polymer was increased. It was noticed that, when the amounts of polymer were selected within the limits of the design, the disintegration time decreased the most when larger amounts of maltodextrin were used in the formulation, and it increased gradually with HPMC E5 followed by HPMC E15. According to the equation, a shorter disintegration time can be achieved with the combination of the three polymers rather than with a single polymer or a combination of any two of them.

Response 3: Tensile strength (R3)
Statistical analysis was carried out in Design-Expert software (7.1.5), which suggested that an SCM was followed for this response, with a p value of 0.0385. This revealed that the model was significant. To determine the impact of each component and their interactions, an ANOVA for the SCM was carried out. The ANOVA results (Table 12), 3D surface plot, and contour plot for the tensile strength (Figure 12) indicated the strong effect of the three factors (amounts of HPMC E5, HPMC E15, and maltodextrin). The polynomial equation for tensile strength indicates that all three polymer amounts have a positive effect on the tensile strength. It was observed that, when the amounts of polymer were selected within the limits of the design, the tensile strength increased most when larger amounts of HPMC E15 were used in the formulation, and it increased to a lesser extent with HPMC E5 followed by maltodextrin. As per the equation, values of tensile strength decreased with the combination of all three polymers. 53,54

Evaluation parameters for the film formulations

Weight variation test
Table 13 summarizes the weight variation (%) for all formulations. The values were within the pharmacopeial limit of 7.5%, so all of the films passed the weight variation test. The weights were found to be in the range of 37 ± 2.081 to 81.67 ± 2.081 mg. Films containing a larger amount of maltodextrin exhibited higher weights, whereas films containing HPMC E5 were lighter. The weight of the films was uniform. 55

Thickness
The formulated films had thicknesses ranging from 0.103 ± 0.015 to 0.207 ± 0.02 mm. Table 13 lists the mean values; the values are almost identical across the formulations. Films containing maltodextrin showed increased thickness, which was required for comfortable handling of the film. 56

Folding endurance
The folding endurance of the films was measured by repeatedly folding a small strip of film at the same location until it broke, and the average folding endurance of all films is shown in Table 13. All the batches had folding endurance values of 101 ± 2.645 to 177.67 ± 3.51. The folding endurance increased as the concentration of the polymer increased. 57,58
Drug content
Drug content and uniformity tests were carried out to ensure that the drug was distributed uniformly and accurately. The content uniformity of all nine formulations was determined, and the results are listed in Table 13. A spectrophotometer was used to examine three trials for each formulation. The mean values and SDs of all the formulations were calculated. The findings showed that the formulations had comparable drug content. In the in vitro release trials, the total % of drug released from each film was calculated using the mean quantity of drug contained in the film. Drug content in the formulations ranged from 95.218% to 98.00%. 58

In vitro dissolution study
In vitro release studies of the methylphenidate hydrochloride films were performed in phosphate buffer (pH 6.8). Cumulative drug release was calculated based on the drug content of methylphenidate hydrochloride. Rapid drug dissolution was observed in F1 and F5, which released 104.44% and 101.41%, respectively, at the end of 2 min. Comparatively slow drug dissolution was observed in F6 and F7, with release of 96.45% and 99.73%, respectively. At the end of 2 min, the remaining formulations had slower drug release than the above-mentioned formulations. As the concentration of the polymer HPMC E15 increased, the time for drug release was found to increase. This might be due to the higher viscosity of the polymer, which results in the formation of a strong matrix layer that decreases the mobility of drug particles in the swollen matrix, leading to a delay in drug release. 36 Table 14 shows the dissolution data of the prepared design batches. Figure 13 shows the graph of CPR versus time in minutes. The data are shown up to 2 min only, so that the dissolution and % drug release can easily be compared within the desired time limit. From Figure 13 we may conclude that in the first minute the drug release is almost the same for every batch, but in the subsequent minutes the amount of drug released differs. In other words, polymers with a lower viscosity release the drug more quickly than polymers with a higher viscosity. Thus, to obtain a quicker release, lower viscosity-grade polymers are desirable. 47

Optimized batch analysis by statistical analysis
The optimized formulation was chosen based on the criteria of a higher amount of drug release in 2 minutes, the shortest disintegration time, and a medium value of tensile strength. An overlay plot was drawn to obtain an optimized batch using Design-Expert (7.1.5) (Figure 14). An optimized batch of the film was prepared experimentally using the same procedure, and the results for the stated parameters were compared with the values computed from the regression equations. When the experimental and theoretical values were compared, the error % was found to be less than 8% for the responses (Table 15).

Stability studies
A stability study was conducted according to the ICH guidelines for a short period. The developed formulations were tested for stability at 40°C and 75% relative humidity for 6 months and were evaluated for tensile strength, disintegration time, and in vitro drug release at 1, 3, and 6 month intervals. The results for the formulations were within acceptable limits, as seen in Table 16. The measured parameters showed no major differences, so the formulation was found to be stable. 47
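The optimized-batch check above compares experimental results against the values predicted by the regression equations. A minimal sketch of that error calculation is shown below; all numbers are hypothetical placeholders, not the reported data.

```python
# Minimal sketch: percentage error between experimental responses and the
# values predicted by the regression equations for the optimized batch.
predicted = {"CPR Q2 (%)": 98.5, "Disintegration time (s)": 22.0, "Tensile strength": 1.45}
experimental = {"CPR Q2 (%)": 96.2, "Disintegration time (s)": 23.5, "Tensile strength": 1.38}

for response, pred in predicted.items():
    exp = experimental[response]
    error_pct = abs(exp - pred) / pred * 100.0
    print(f"{response:<24} predicted={pred:7.2f}  experimental={exp:7.2f}  error={error_pct:5.2f} %")
```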
Release kinetics and mechanisms
The in vitro release data were fitted to different equations and kinetic models to explain the release kinetics of methylphenidate from these films. The release kinetics of methylphenidate from the films followed zero order (Table 17). A better fit (highest R2 values) was observed for Higuchi's model than for the Hixson-Crowell model, except for film I. Hence, the mechanism of drug release from the remaining films is diffusion controlled, while drug release from film I is dissolution controlled (Table 18).

Application of the Hixson-Crowell cube root law, M0^(1/3) − M^(1/3) = kt, provides information about a release mechanism that is dissolution-rate limited. Application of Higuchi's equation, M = K·t^(1/2), provides information about a release mechanism that is diffusion-rate limited. The Korsmeyer-Peppas model indicates that the release mechanism is not well known, or that more than one type of release phenomenon may be involved; the "n" value can be used to characterize the different release mechanisms (Table 19).

The R2 values are higher for Higuchi's model than for the Hixson-Crowell model for all films except film I. Hence, drug release from film I followed a dissolution rate-controlled mechanism, and drug release from the remaining films followed a diffusion rate-controlled mechanism.

According to the Korsmeyer-Peppas model, a slope value between 0.5 and 1 indicates anomalous (non-Fickian) behavior, so the release mechanism from the films follows non-Fickian diffusion (anomalous behavior); however, film I follows case II transport. All designed batches were therefore prepared and evaluated and showed acceptable results. Based on these results, we may conclude that the aim of the current work was successfully fulfilled.

Figure 1. Absorption maxima of methylphenidate HCl in pH 6.8 phosphate buffer.
Figure 13. In vitro release of methylphenidate hydrochloride in phosphate buffer (pH 6.8) from the film formulations.
Table 1. Calibration data of the drug in pH 6.8 phosphate buffer at 257.2 nm.
Table 6. Characteristics of placebo films prepared using different polymers.
Table 7. Optimization of the mixture of polymers.
Table 10. ANOVA for the special cubic model (% release at 2 min).
Table 11. ANOVA for the special cubic model (disintegration time).
Table 12. ANOVA for the special cubic model (tensile strength).
Table 14. Cumulative % drug release from the film formulations.
Table 15. Evaluation of the optimized batch.
Table 16. Results of accelerated stability studies.
(Results are shown as mean ± SD, n = 3. SD: standard deviation; DF: degree of freedom; CPR: cumulative percentage release; HPMC: hydroxypropyl methylcellulose.)
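As a concrete illustration of the model comparison discussed above, the sketch below fits the zero order, Higuchi, Hixson-Crowell, and Korsmeyer-Peppas treatments to a release profile and compares R² values. The release data, and the choice of fitting only the early points for the Peppas exponent, are hypothetical assumptions, not the study's data.

```python
# Minimal sketch: compare release-kinetics models by R^2 on a hypothetical profile.
import numpy as np
from scipy import stats

t = np.array([1, 2, 3, 4, 5, 7.5, 10.0])               # minutes
release_pct = np.array([45, 70, 82, 90, 94, 97, 99])    # hypothetical cumulative % released

def r2(x, y):
    return stats.linregress(x, y).rvalue ** 2

M0 = 100.0
remaining = M0 - release_pct

models = {
    "Zero order        (Q vs t)":          r2(t, release_pct),
    "Higuchi           (Q vs t^1/2)":      r2(np.sqrt(t), release_pct),
    "Hixson-Crowell    (M0^1/3 - M^1/3)":  r2(t, M0**(1/3) - remaining**(1/3)),
    # Korsmeyer-Peppas: log(Mt/Minf) = log k + n*log t, fitted over the early points
    "Korsmeyer-Peppas  (log-log)":         r2(np.log10(t[:4]), np.log10(release_pct[:4] / 100.0)),
}
for name, value in models.items():
    print(f"{name:<38} R^2 = {value:.4f}")

n = stats.linregress(np.log10(t[:4]), np.log10(release_pct[:4] / 100.0)).slope
print(f"Korsmeyer-Peppas release exponent n = {n:.3f}")
```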
CELEBRITY ENDORSEMENT ROLE, BRAND IMAGE, AND BRAND CREDIBILITY INFLUENCE PURCHASE INTENTION
This study aims to analyze the mediating effect of brand image on the role of celebrity endorsement and brand credibility on the purchase intention of the Samsung Z Flip 5 smartphone. This quantitative research uses structural equation modeling (SEM) with SmartPLS. The results of this study show that brand credibility has a positive effect on brand image, and that brand image mediates the effect of brand credibility on purchase intention. Meanwhile, brand credibility has no significant direct effect on purchase intention, celebrity endorsement has no significant effect on brand image, and brand image does not mediate the relationship between celebrity endorsement and purchase intention. The implications of the research include the necessity of collaborating with stakeholders and fostering trust in order to raise interest in purchasing the Samsung Z Flip 5 smartphone.

INTRODUCTION
Since entering Industry 4.0 and preparing fully for Industry 5.0, the use of the Internet and social media has increased rapidly from year to year, and Indonesia is no exception. The presence of MediaTek Dimensity chipsets that support modern 5G technology, allowing mid-to-low-end smartphones to enjoy the speed of 5G networks, adds to the reasons for intense competition in the smartphone industry. Although Samsung dominates the global market, the company is no longer No. 1 in Indonesia. Various innovations have been implemented by Samsung to maintain the top position; the Z Flip series, a pioneer of folding smartphones supporting the 5G network, released version 5 in July 2023. Apart from these innovations, Samsung also relies on marketing strategies built around celebrity endorsement, with brand credibility and brand image acting as influencing factors to increase purchase intention and win the competition. This study aims to analyze the influence of celebrity endorsement and brand credibility on Samsung Z Flip 5 smartphone products in Indonesia. The mediating role of brand image is also explored to provide a more comprehensive analysis. This research explores and validates the results of different studies, complements their shortcomings, and develops previous research that has been done. The research is particularly urgent given Samsung's changing position in the global and national smartphone markets. Although Samsung topped the list as the best smartphone brand in the world in Q1 2023, it saw a decrease of around 18.9% compared to Q1 of the previous year (IDC, 2023). In Indonesia, Samsung is no longer the market leader, losing to Oppo in the January-March 2023 period. This decline was triggered by the renewal of various smartphone brands with diverse price ranges, including innovations from competitors such as Oppo. As a smartphone manufacturer that wants to maintain the top spot, Samsung needs to understand the factors that influence consumer purchase intention, especially in the context of using celebrity endorsers such as the K-pop idol group BTS.
This study is used to validate the results of previous research, that celebrity endorsement has a significant influence on purchase intention (Alessandro et al., 2023;Herjanto et al., 2020;Jannah et al., 2023;Rahmah & Arafah, 2023).However, different research results were found in (Rayining & Agung, 2019;Vidyanata et al., 2022) which states that celebrity endorsements do not have a positive influence on purchase intention.In previous research conducted by (Herjanto et al., 2020), it was suggested that brand image variables affect purchase intention.While different results were stated in the (Jannah et al., 2023) study, the brand image did not have a positive influence on purchase intention. Praditia Andryani, Lina Salim Celebrity Endorsement Role, Brand Image, and Brand Credibility Influence Purchase Intention 4820 Most of the flaws in previous studies used female celebrities with fan populations with research objects on fashion and beauty products.Researchers also have not found research conducted on Samsung smartphone products with variable brand credibility, and mediation with brand image.This study took Samsung smartphone products with male K-pop idol groups as celebrity endorsement variables. The research gap in this study is by combining celebrity endorsement variables and brand credibility variables as well as brand image mediation variables on purchase intention in Samsung smartphone product sales in Indonesia.Celebrity endorsement, brand credibility, and brand image mediation variables in purchase intention will be factors that will be examined in influencing purchase intention.The Samsung Z Flip 5 will be the subject of this study as it is the latest version of Samsung's smartphone. Through this research, the author is interested in exploring and validating the results of different studies as well as completing shortcomings and developing on previous research that has been done.In addition, the study aimed to assess how brand image and credibility affect purchase intent as well as how brand image mediates the relationship between purchase intent and celebrity endorsements as well as the relationship between brand credibility and purchase intent.The benefits of this research include assisting researchers in applying knowledge gained from industry, especially Samsung Indonesia.For businesses, the purpose of this study is to provide objective information about brand image, trustworthiness, and marketing strategies that leverage celebrity endorsements to increase purchase intent.In addition, research findings can also be valuable information for outsiders and researchers who are interested in starting or running a side business. 
Many companies spend considerable funds to engage celebrities to endorse various company products or services (Rahmah & Arafah, 2023).Meanwhile, purchase intention is another important stage that marketers must pay attention to (Nevilia et al., 2023).In previous studies it was stated that celebrity endorsement has a direct effect on purchase intention (Alessandro et al., 2023;Cuong, 2020;Jannah et al., 2023;Rahmah & Arafah, 2023) research gaps were found in several studies that had different results, that celebrity endorsement did not affect purchase intention (Vidyanata et al., 2022) Clara, 2022), so based on these studies, the researchers proposed the following hypothesis: H1: celebrity endorsement affects purchase intention Brand credibility is considered an antecedent to customer satisfaction, and the effect of brand credibility on purchase intention is relatively high.Research results from journals that have been studied support this (Cuong, 2020;Hasbi, 2020;Vidyanata et al., 2022) so based on these studies, the researchers propose the following hypothesis: H2: brand credibility affects purchase intention Brand image is a barometer to evaluate the suitability of a brand, if the brand is involved in a negative event or incident, customers tend to perceive the brand negatively as well, so consumers have the potential to leave or avoid products with that brand (Herjanto et al., 2020).The results of this study also found that brand http://eduvest.greenvest.co.id image influences purchase intention.So based on these studies, the researcher proposes the following hypothesis: H3: brand image affects purchase intention Celebrity endorsement is defined as a marketing communication strategy involving famous figures who provide product or brand reviews in the form of promotional content to their followers.From previous research, it was found that celebrity endorsement has a positive influence on brand credibility (Rayining & Agung, 2019;Vidyanata et al., 2022) so based on these studies, the researchers proposed the following hypothesis: H4: celebrity endorsement affects brand credibility Prior studies' findings indicate that celebrity endorsement enhances a brand's reputation.Possessing things supported by celebrities helps consumers feel good about themselves and find purpose in life (Herjanto et al., 2020).So the researchers put out the following theory in light of these investigations: H5: celebrity endorsement affects brand image The more carefully a celebrity endorses a product, the better the impact on the brand's reputation.According to (Herjanto et al., 2020;Mao et al., 2020) the trustworthiness of celebrity endorsers affects brand perception and may boost purchase intent.The researchers put out the following theory in light of these investigations. H6: brand image mediates the influence in the relationship between celebrity endorsement and purchase intention. 
Brand credibility or brand credibility is believed to trigger consumer buying interest.Meanwhile, brand image can influence customer purchasing and consumption behavior.So the relationship between the two is expected to be an important factor in increasing purchase intention.Research results from journals that have been studied support this (Hasbi, 2020).so based on these studies, the researchers propose the following hypothesis: H7: brand credibility affects brand image Brand credibility is expected to be able to change consumers' views on products or services into a positive meaning so that in the end consumers or target markets are interested in making purchases.if consumers have a positive image of a brand, then consumers have the potential to purchase products or services again.Previous research shows that there is a relationship between brand credibility and purchase intention through brand image (Hasbi, 2020).Based on these studies, the researchers propose the following hypothesis: H8: brand image mediates the influence in the relationship between brand credibility and purchase intention. Referring to the basis of these hypotheses, a research model is compiled which is contained in Figure 1. Figure 2. Research Model The hypothesis that can be compiled in this study is as follows: H1: celebrity endorsement affects purchase intention H2: brand credibility affects purchase intention H3: brand image influences on purchase intention H4: celebrity endorsement affects brand credibility H5: celebrity endorsement affects brand image H6: brand image mediates the influence in the relationship between celebrity endorsement and purchase intention.H7: brand credibility affects brand image H8: brand image mediates the influence in the relationship between brand credibility and purchase intention. RESEARCH METHOD This research is quantitative.This research was conducted from October to November 2023 by distributing an online questionnaire (g-form).The population in this study are fans of the KPop idol group BTS who are members of the Indonesian fan group (ARMY BTS) for the DKI Jakarta area, aged 17 to 40 years, and do not yet own a Samsung Z Flip 5 smartphone.The sampling technique used is purposive sampling.The total respondents in this study were 161 out of 189 respondents who passed the filter test. To confirm the validity and reliability of the questionnaire indicators, research instrument testing (pre-test) was done on thirty respondents.According to the test results, every indication is credible and suitable for usage in this research.The primary data gathering period in October to November 2023.Partial Least Square (PLS), a variant-based structural equation modeling (SEM) method, is used in the data analysis procedure.SmartPLS 4.0 software is used for processing the data.The methods of analysis consist of hypothesis testing, inner model analysis, and outer model analysis (which includes convergent and discriminant validity). 
RESULTS AND DISCUSSION
The variable indicators for celebrity endorsement, brand credibility, brand image, and purchase intention were pre-tested first to analyse whether the indicators used were reliable and valid. The testing process went through two tests, namely the reliability test and the validity test. The researchers had previously distributed questionnaires to 30 respondents, and the test was carried out using SPSS 25 software. After the data were declared valid and reliable, the questionnaire was distributed again to 161 of the 189 respondents who met the criteria of the filter question. Most of the respondents were female, totalling 157 people (97.5%). Most respondents were aged 17-25 years (43.5%). Respondents who worked as private employees amounted to 60 people (37.3%). The majority of respondents' most recent education was high school or equivalent, 88 people (54.7%) (see Table 1). Most of the respondents (114 people, 70.8%) were not married. Most had a monthly income of < Rp 5,000,000 (63.1%), and monthly expenses were < Rp 5,000,000 for 132 people (82.5%).

Outer model analysis in SEM (structural equation modelling) mainly focuses on the measurement validity of variables or constructs measured by their indicators. This involves two main aspects: convergent validity and discriminant validity (Ghozali, 2016). For convergent validity, each indicator must achieve the reliability criteria, where the CA and CR values must be more than 0.700. The testing further requires that both the AVE value and the outer loading value exceed 0.500 (Ghozali, 2016). Meanwhile, cross-loadings and the Fornell-Larcker test can be used to evaluate discriminant validity (Ghozali, 2016). In the Fornell-Larcker test, the results are considered valid or excellent when the square root of the AVE for a construct (the value on the diagonal) is greater than that construct's correlations with the other latent variables; for brand credibility, the square root of the AVE is 0.860. In cross-loading testing, the indicator value must be higher on its own construct than on the other constructs (Sekaran & Bougie, 2016) (see Tables 3 and 4). The HTMT (heterotrait-monotrait ratio) test is used to evaluate the extent to which the measured constructs can be distinguished from one another, and the recommended value is less than 0.90 (Hair Jr et al., 2021).

Inner model analysis is the stage where the relationships between latent variables or constructs are tested. The steps include VIF value analysis, R-squared analysis, and model fit analysis with SRMR (standardized root mean square residual). An R-squared value of 0.67 is considered strong, a value of 0.33 moderate, and a value of 0.19 weak (Chin, 1998) (see Table 6). SRMR is a model evaluation parameter that measures the extent to which the PLS-SEM model fits the data; if the SRMR value is < 0.080, the model is interpreted as fit (Hair Jr et al., 2021). The CA (Cronbach's alpha) and CR (composite reliability) values for all variables are ≥ 0.7, so the measuring instrument (questionnaire) can be interpreted as having a good level of consistency (see Table 2). The outer loading value must exceed 0.400 and the AVE value must exceed 0.500.
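The reliability and discriminant-validity criteria just described can be illustrated with a short sketch. The loadings and the correlation value below are hypothetical placeholders, not the study's data; only the thresholds (CA and CR > 0.70, AVE > 0.50, the Fornell-Larcker diagonal rule) come from the text.

```python
# Minimal sketch: composite reliability (CR) and AVE from standardized outer
# loadings, plus a Fornell-Larcker comparison of sqrt(AVE) against an
# inter-construct correlation. All values are hypothetical placeholders.
import numpy as np

loadings = {
    "celebrity_endorsement": [0.81, 0.84, 0.79, 0.86],
    "brand_credibility":     [0.88, 0.85, 0.84],
    "brand_image":           [0.87, 0.86, 0.86],
    "purchase_intention":    [0.83, 0.85, 0.87],
}

def composite_reliability(l):
    l = np.asarray(l)
    return l.sum() ** 2 / (l.sum() ** 2 + (1 - l ** 2).sum())

def ave(l):
    l = np.asarray(l)
    return float(np.mean(l ** 2))

for construct, l in loadings.items():
    print(f"{construct:<22} CR = {composite_reliability(l):.3f}  AVE = {ave(l):.3f}")

# Fornell-Larcker: sqrt(AVE) of a construct should exceed its correlations
sqrt_ave_bc = np.sqrt(ave(loadings["brand_credibility"]))
corr_bc_bi = 0.71  # hypothetical correlation between brand credibility and brand image
print(f"sqrt(AVE) brand_credibility = {sqrt_ave_bc:.3f} > r(BC, BI) = {corr_bc_bi:.2f} "
      f"-> {sqrt_ave_bc > corr_bc_bi}")
```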
In the data, the CA, CR, outer loading, and AVE values all meet these criteria, meaning that the constructs used are valid and reliable. If the VIF value is > 5.00, multicollinearity is said to occur (Ghozali, 2016); the empirical analysis shows that the VIF test results for all variables are < 5.00, so they are accepted and there is no multicollinearity. According to the Fornell-Larcker test results, brand credibility has a root AVE of 0.860, which is higher than its associations with the other factors (see Table 3), so discriminant validity for this construct is satisfied. The same is true for purchase intention (0.852), celebrity endorsement (0.839), and brand image (0.865) (Hair Jr et al., 2021). The results of the cross-loading test show that each indicator loads more highly on the construct it is intended to measure than on the other constructs (see Table 4); in other words, each indicator has a significant correlation with the construct being measured. The results in Table 5 show that discriminant validity is achieved because the HTMT value of each variable is < 0.90.

Brand credibility has an R-squared value of 0.607, which is in the moderate category (see Table 6); therefore, it can be said that brand credibility is moderately explained by celebrity endorsement. The R-squared value for brand image is 0.578, also in the moderate category. Purchase intention likewise falls into the moderate group, with an R-squared score of 0.537; according to these findings, purchase intention is moderately explained by celebrity endorsement and brand image. The SRMR value (0.071) is less than 0.080, so a satisfactory fit is certified for the model (see Table 7).
Furthermore, hypothesis testing was carried out. In hypothesis testing, there are several criteria for assessing the statistical test results: if the t-statistic value is > 1.96 (t-table) or the p-value is < 0.05, there is a significant relationship between the exogenous and endogenous variables (Hair Jr et al., 2021).

H1: Celebrity Endorsement Has a Direct Effect on Purchase Intention
The findings of the hypothesis test indicate that, with an original sample value of 0.339, the t-statistic for celebrity endorsement (3.255) is > 1.96 and the p-value (0.001) is < 0.05. Consequently, it can be said that H1 is accepted: celebrity endorsement has a direct impact on purchase intention (t-statistic = 3.255 > 1.96 and p-value = 0.001 < 0.05). The study's findings are consistent with other research findings, although the effect is lower than the researchers' estimates suggested (Herjanto et al., 2020; Alessandro et al., 2023; Jannah et al., 2023; Rahmah & Arafah, 2023), and they contrast with studies from Hasbi (2020), Vidyanata et al. (2022), and Clara (2023), which found that celebrity endorsement has little direct impact on consumers' intention to buy. Celebrities often have a strong appeal in the eyes of consumers. When celebrities who are liked or respected by their target market are involved in brand promotion, this can increase consumer identification with the brand. Likewise, research by Khan et al. (2022) states that celebrity attractiveness, credibility, and product fit drive purchase intentions. It is important for marketing managers to choose the right celebrity for endorsement; not all celebrities are effective for every product category and target audience. Therefore, marketing managers, while selecting a celebrity, should map the product attributes, the celebrity's personality, and the characteristics of the target audience. Also, the chosen celebrity should not endorse too many competing products or brands.

H2: Brand Credibility Has No Direct Effect on Purchase Intention
Brand credibility does not have a direct influence on purchase intention for the Samsung Z Flip 5 smartphone; the results of the empirical data test show that the p-value = 0.131 > 0.05 and the t-statistic = 1.512 < 1.96. This can be attributed to several factors that can be inferred from the answers to open questions put to fans of the K-pop idol group BTS (ARMY): purchase intention for a product is also influenced by economic factors, so even though Samsung smartphones have high brand credibility, low purchasing power can make consumers rethink buying them. This is in contrast to the results of research conducted by Ulfa & Utami (2017), which states that brand credibility has a significant effect on purchase intention with a regression value (β) of 0.639 at a probability of < 0.05.
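A small sketch of the decision rule just described, applied to both direct and mediated paths, is shown below. The t- and p-values for H1 and H2 are those reported above; the H8 figures are taken from the mediation result reported later in this section. The decision helper itself is an illustration, not SmartPLS output.

```python
# Minimal sketch: apply the significance rule (|t| > 1.96 and p < 0.05) to
# direct and indirect (mediated) paths reported in the results.
def significant(t_stat, p_value, t_crit=1.96, alpha=0.05):
    return abs(t_stat) > t_crit and p_value < alpha

paths = {
    "CE -> PI (H1, direct)":              (3.255, 0.001),
    "BC -> PI (H2, direct)":              (1.512, 0.131),
    "BC -> BI -> PI (H8, indirect)":      (2.682, 0.007),
}
for name, (t_stat, p_value) in paths.items():
    verdict = "supported" if significant(t_stat, p_value) else "not supported"
    print(f"{name:<34} t = {t_stat:.3f}  p = {p_value:.3f}  -> {verdict}")
```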
H3: Brand Image Has a Direct Effect on Purchase Intention
This study found that purchase intention is influenced by brand image. The accepted hypothesis is represented by the t-statistic = 2.715 > 1.96 and the p-value = 0.007 < 0.05. This is consistent with the findings of earlier studies by Hasbi (2020), Herjanto et al. (2020), Mao et al. (2020), Rayining & Agung (2019), and Alessandro et al. (2023), whereas the findings of Jannah et al. (2023) differ. A strong brand image is often related to the values or identity desired by consumers, and a positive brand image can build customer confidence in the quality, reliability, and consistency of the product or service offered. ARMY sees the strong brand image of the Samsung Z Flip 5 smartphone as a factor of consideration in buying it.

H4: Celebrity Endorsement Has a Direct Effect on Brand Credibility
Celebrity endorsement directly affects brand credibility, as shown by the t-statistic = 11.743 with a p-value of 0.000 < 0.05. Thus, it can be concluded that brand credibility is greatly impacted by celebrity endorsement. Celebrities are often perceived as having high status, charisma, or certain characteristics that are idolized by their fans. When they associate themselves with a brand, this can give the brand a positive image, enhancing its image and credibility in the eyes of consumers. This contrasts with research conducted by Dewi (2017), which states that the use of celebrity endorsements in marketing activities cannot create brand credibility.

H5: Celebrity Endorsement Has No Direct Effect on Brand Image
Celebrity endorsement does not have a direct influence on brand image because other factors are considered more dominant in shaping consumer perceptions of a brand; this statement is based on the results of the hypothesis test, which found a p-value = 0.060 > 0.05 and a t-statistic = 1.881 < 1.96. Consumers are more likely to form a brand image based on their own experience with the product or service than on celebrity endorsement. Product quality, customer service, and direct interaction with the brand can also significantly affect brand image. Brands therefore need to consider the appropriateness, credibility, and overall context of celebrity endorsements in order to positively influence brand image. The results of this study support the results of previous research conducted by Rayining & Agung (2019).
H6: Brand Image Does Not Mediate the Effect in the Relationship between Celebrity Endorsement and Purchase Intention
The empirical test results show that brand image does not mediate a significant effect in the relationship between celebrity endorsement and purchase intention, as indicated by a p-value of 0.164 > 0.05 and a t-statistic < 1.96. These results are in line with previous research (Jannah et al., 2023) and in contrast to the results of research conducted by Rayining & Agung (2019), Alessandro et al. (2023), and Hasbi (2020). In this study, the influence of the K-pop idol group BTS in creating a desire to buy depends more on their appeal than on the brand image of the Samsung smartphone itself.

H7: Brand Credibility Has a Direct Effect on Brand Image
Brand credibility has a direct effect on brand image based on the hypothesis test results (p-value = 0.000 < 0.05 and t-statistic = 6.943 > 1.96). This is similar to previous research from Hasbi (2020), which states that brand credibility has a significant direct effect on brand image. The fans of the K-pop idol group BTS (ARMY) in this study assessed that brand credibility affects the brand image of the Samsung Z Flip 5 smartphone. This statement is corroborated by responses from the ARMY group stating that BTS is selective in choosing the brands that work with them, so when the K-pop idol group BTS decides to collaborate with Samsung, they believe that the brand credibility of the product is unquestionable. They believe that good brand credibility affects the image and reputation built and attached to Samsung smartphones. This contrasts with previous research conducted by Windyastari et al. (2018), which states that brand image mediates the influence of credibility on purchase intention; the results of that study indicate that the strength of brand image can influence and determine the effect of celebrity endorser credibility on consumer purchase intentions, and that brand credibility does not directly affect brand image but instead works through brand image to influence consumer purchase intentions.
H8: Brand Image Mediates the Direct Effect on the Relationship between Brand Credibility and Purchase Intention
This study found that brand image mediates the effect in the relationship between brand credibility and purchase intention, as indicated by the p-value = 0.007 < 0.05 and the t-statistic = 2.682 > 1.96. The results of this study support the results of previous research conducted by Hasbi (2020). Brand image provides a comprehensive picture of the brand, including the quality, reliability, and image perceived by consumers. If the brand is perceived as credible, this can form a positive image that helps increase consumers' desire to buy products or services from that brand. From the results of this study, it is known that ARMY believes brand image is an important mediator that connects brand credibility to their decision to buy Samsung Z Flip 5 smartphone products. The findings further demonstrate that the connection between brand credibility and purchase intention is fully mediated by brand image, given the substantial indirect link created by brand image mediation alongside the direct association between brand credibility and purchase intention. This contrasts with research conducted by Fitria & Oetarjo (2024), which states that brand image has no positive and significant effect on buying intentions: the path coefficient of 0.054 indicates a relationship between the variables that is relatively small, the t-statistic of 0.981 is smaller than the t-table value of 1.96, and the p-value of 0.327 is greater than 0.05, so in that study brand image could not affect buying intentions.

CONCLUSION
This study validates that celebrity endorsement and brand credibility have an important role in increasing the purchase intention of Samsung Z Flip 5 smartphone products. Furthermore, it shows that brand image fully mediates the relationship between brand credibility and purchase intention, whereas the indirect relationship between celebrity endorsement and purchase intention is not mediated by brand image. Samsung needs to maintain and improve its cooperation with the K-pop idol group BTS to optimize the influence of endorsements. To stay aligned, Samsung can focus on the positive issues emphasized by the K-pop idol group BTS and integrate that message into Samsung's marketing strategy. Samsung also needs to consider targeting the lower-middle economic segment. If it is not possible to use the K-pop idol group BTS as an endorsement for Samsung smartphone products at pocket-friendly prices for Generation Z and millennials, the company can design exclusive content or concerts that bridge fans and their idol group, on the condition that attendees are users of any Samsung smartphone series. Future research can examine other factors that can influence purchase intention, such as brand reputation, consumer experience, prices and offers, and trends and lifestyles. In addition, the scope of the research can be expanded by examining respondents who live in small cities with lower-middle purchasing power, by sampling both fans and non-fans, and by using products other than smartphones. Future research can also use the same research subjects with different classes of Samsung Galaxy series smartphones.
Figure 1. Order of best-selling smartphones in Indonesia, Q1 2023. Source: Counterpoint Monthly Indonesia Tracker (2023).
Table 1. Profile of respondents (see Table 7).
The results of the validity and reliability tests of this study are shown in Figure 2.
Connexin Hemichannel Blockade Is Neuroprotective after Asphyxia in Preterm Fetal Sheep Asphyxia around the time of preterm birth is associated with neurodevelopmental disability. In this study, we tested the hypothesis that blockade of connexin hemichannels would improve recovery of brain activity and reduce cell loss after asphyxia in preterm fetal sheep. Asphyxia was induced by 25 min of complete umbilical cord occlusion in preterm fetal sheep (103–104 d gestational age). Connexin hemichannels were blocked by intracerebroventricular infusion of mimetic peptide starting 90 min after asphyxia at a concentration of 50 µM/h for one hour followed by 50 µM/24 hour for 24 hours (occlusion-peptide group, n = 6) or vehicle infusion for controls (occlusion-vehicle group, n = 7). Peptide infusion was associated with earlier recovery of electroencephalographic power after asphyxia compared to occlusion-vehicle (p<0.05), with reduced neuronal loss in the caudate and putamen (p<0.05), but not in the hippocampus. In the intragyral and periventricular white matter, peptide administration was associated with an increase in total oligodendrocyte numbers (p<0.05) and immature/mature oligodendrocytes compared to occlusion-vehicle (p<0.05), with a significant increase in proliferation (p<0.05). Connexin hemichannel blockade was neuroprotective and reduced oligodendrocyte death and improved recovery of oligodendrocyte maturation in preterm fetuses after asphyxia. Introduction Preterm birth occurs in around 7 to 12% of all live births and is associated with a high level of neurodevelopmental disability and cerebral palsy [1]. The predominant injury seen in these infants involves diffuse, non-destructive white-matter lesions in the periventricular and surrounding white matter that is characterized by acute oligodendrocyte cell loss and prolonged arrest of oligodendrocyte lineage maturation [2]. However, there is increasing evidence from post-mortem and imaging studies that acute subcortical neuronal injury also contributes to long-term neurodevelopmental disability [1,3]. There are currently no clinically proven therapeutic interventions to reduce this brain damage, highlighting the need to better understand the mechanisms underlying the spread of ischemic brain injury in the preterm fetus/neonate. Hemichannels, or connexons, are half of a gap junction channel that sits in the unopposed membrane of a cell, before the formation of new channels. Opening of connexin hemichannels has been associated with ischemia, as well as oxygen glucose deprivation, metabolic inhibition or low extracellular calcium ion (Ca 2+ ) levels [4][5][6][7][8]. This may cause disruption of the resting membrane potential, release of cytotoxic levels of ATP [9] and glutamate [10] and uptake of water, leading to cell swelling and death [11,12]. We have previously shown that blockade of astrocytic connexin 43 hemichannels reduced oligodendrocyte cell loss and seizure activity and improved recovery of brain activity following global cerebral ischemia in the near-term fetal sheep [13]. However, the distribution of injury and particular vulnerability of specific cell types to ischemia varies considerably between the full-term and preterm neonate. Therefore, it is unclear whether connexin hemichannels contribute to the spread of injury following asphyxia in the preterm fetus, when white matter is predominantly populated by oligodendrocyte progenitor cells at a stage when they are most vulnerable to injury [14]. 
In the present study, we tested the hypothesis that blockade of connexin hemichannels with a specific mimetic peptide after severe asphyxia induced by complete umbilical cord occlusion would reduce loss of oligodendrocytes and neurons and improve recovery of brain activity in 0.7 gestation preterm fetal sheep. At this age, brain development is broadly consistent with 28 to 32 weeks in humans, before the development of cortical myelination [15,16]. Ethics Statement All procedures were approved by the Animal Ethics Committee of The University of Auckland following the New Zealand Animal Welfare Act, and the Code of Ethical Conduct for animals in research established by the Ministry of Primary Industries, Government of New Zealand. Mean arterial pressure and fetal heart rate were transiently elevated after asphyxia in both groups ( Figure 3). Nuchal EMG activity was transiently reduced after asphyxia followed by an increase to above baseline levels in both groups, and was significantly higher in the occlusion-peptide group from 62 to 106 hours (p,0.05). There were no significant changes in extradural temperature in either group. Fetal Surgery In brief, 20 time-mated Romney/Suffolk fetal sheep were instrumented using sterile technique at 97-98 days gestation (term is 145). Food, but not water was withdrawn 18 hour before surgery. Ewes were given 5 mL of Streptocin (procaine penicillin (250,000 IU/mL) and dihydrostreptomycin (250 mg/ml, Stockguard Labs Ltd, Hamilton, New Zealand)) intramuscularly for prophylaxis 30 minutes prior to the start of surgery. Anesthesia was induced by i.v. injection of propofol (5 mg/kg; AstraZeneca Limited, Auckland, New Zealand), and general anesthesia maintained using 2-3% isoflurane (Medsource, Ashburton, N.Z.) in O 2 . The depth of anesthesia, maternal heart rate and respiration were constantly monitored by trained anesthetic staff. Ewes received a constant infusion isotonic saline drip (at an infusion rate of approximately 250 mL/h) to maintain fluid balance. Following a maternal midline abdominal incision and exteriorization of the fetus, both fetal brachial arteries were catheterized with polyvinyl catheters to measure mean arterial blood pressure. An amniotic catheter was secured to the fetal shoulder. ECG electrodes (Cooner Wire Co., Chatsworth, California, USA) were sewn across the fetal chest to record fetal heart rate. And inflatable silicon occluder was placed around the umbilical cord (in vivo Metric, Healdsburg, Ca, USA). Using 7 stranded stainless steel wire (AS633-5SSF; Cooner Wire Co.), two pairs of EEG electrodes (AS633-5SSF; Cooner Wire Co.) were placed on the dura over the parasagittal parietal cortex (5 mm and 10 mm anterior to bregma and 5 mm lateral) and secured with cyanoacrylate glue. A reference electrode was sewn over the occiput. A further two electrodes were sewn in the nuchal muscle to record electromyographic activity as a measure of fetal movement and a reference electrode was sewn over the occiput. A thermistor was placed over the parasagittal dura 30 mm anterior to bregma. An intracerebroventricular catheter was placed into the left lateral ventricle (6 mm anterior and 4 mm lateral to bregma). The uterus was then closed and antibiotics (80 mg Gentamicin, Pharmacia and Upjohn, Rydalmere, New South Wales, Australia) were administered into the amniotic sac. The maternal laparotomy skin incision was infiltrated with a local analgesic, 10 ml 0.5% bupivacaine plus adrenaline (AstraZeneca Ltd., Auckland, New Zealand). 
All fetal catheters and leads were exteriorized through the maternal flank. The maternal long saphenous vein was catheterized to provide access for postoperative maternal care and euthanasia.

(Figure 1 legend) Time point zero denotes the start of occlusion in the occlusion-vehicle and occlusion-peptide groups. EEG activity was suppressed in both groups after asphyxia. EEG power was reduced below baseline until approximately 72 hours after occlusion in the occlusion-vehicle group but was significantly higher between 4-42 hours in the occlusion-peptide group (p<0.05). Continuity at 25 µV in the occlusion-peptide group was significantly higher between 4-36 hours compared to the occlusion-vehicle group (p<0.05). Data are mean ± SEM.

Post-operative Care
Sheep were housed together in separate metabolic cages with access to food and water ad libitum. They were kept in a temperature-controlled room (16 ± 1°C, humidity 50 ± 10%) on a 12 hour light/dark cycle. Antibiotics were administered i.v. daily for four days to the ewe (600 mg benzylpenicillin sodium, Novartis Ltd, Auckland, New Zealand, and 80 mg gentamicin, Pharmacia and Upjohn). Fetal catheters were maintained patent by continuous infusion of heparinized saline (20 U/mL at 0.15 mL/h) and the maternal catheter was maintained by daily flushing.

Data Recording
Data recordings began 24 hours before the start of the experiment and continued for the remainder of the experiment. Data were recorded and saved continuously to disk for off-line analysis using custom data acquisition programs (LabVIEW for Windows, National Instruments, Austin, Texas, USA). Arterial blood samples were taken for pre-ductal pH, blood gas, and base excess (Ciba-Corning Diagnostics 845 blood gas analyzer and co-oximeter, Massachusetts, USA), and for glucose and lactate measurements (YSI model 2300, Yellow Springs, Ohio, USA). All fetuses had normal biochemical variables for their gestational ages [17,18]. Fetal mean arterial blood pressure (MAP, Novatrans II, MX860; Medex Inc., Hilliard, OH, USA), corrected for maternal movement by subtraction of amniotic fluid pressure, fetal heart rate (FHR) derived from the ECG, and the EEG and EMG were recorded continuously from −24 to 168 hours after umbilical cord occlusion. The blood pressure signal was collected at 64 Hz and low-pass filtered at 30 Hz. The nuchal EMG signal was band-pass filtered between 100 Hz and 1 kHz and then integrated using a time constant of 1 s. The analogue fetal EEG signal was low-pass filtered with the cut-off frequency set with the −3 dB point at 30 Hz, and digitized at 256 Hz (using analogue-to-digital cards, National Instruments Corp., Austin, TX, USA). The intensity and frequency were derived from the intensity spectrum of the signal between 0.5 and 20 Hz. For data presentation, the total EEG intensity (power) was normalized by log transformation (dB, 20 × log(intensity)), and data from the left and right EEG electrodes were averaged. Power in the delta (0-3.9 Hz), theta (4-7.9 Hz), alpha (8-12.9 Hz), and beta (13-22 Hz) spectral bands was calculated as described [19].
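The EEG band-power calculation described in the Data Recording section can be sketched as follows. The 256 Hz sampling rate, the 0.5-20 Hz spectrum, the band limits, and the 20 × log10 dB normalization come from the text; the synthetic signal and the use of Welch's method are assumptions for illustration.

```python
# Minimal sketch: spectral band power and dB-normalized total EEG power.
import numpy as np
from scipy.signal import welch

fs = 256.0                                  # Hz, digitization rate from the text
t = np.arange(0, 60, 1 / fs)
eeg = np.random.randn(t.size)               # placeholder for the fetal EEG trace

freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 4))

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

bands = {"delta": (0.5, 3.9), "theta": (4, 7.9), "alpha": (8, 12.9), "beta": (13, 22)}
for name, (lo, hi) in bands.items():
    print(f"{name:<6} power: {band_power(lo, hi):.4f}")

total = band_power(0.5, 20.0)
print(f"total EEG power: {20 * np.log10(total):.2f} dB")
```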
Experimental Protocols
Experiments were performed at 103-104 d gestation. Fetal asphyxia was induced by rapid inflation of the umbilical cord occluder for 25 minutes with sterile saline of a defined volume known to completely inflate the occluder and totally compress the umbilical cord, as determined in pilot experiments with a Transonic flow probe placed around an umbilical vein [20]. Successful occlusion was confirmed by observation of a rapid onset of bradycardia with a rise in MAP, and by pH and blood gas measurements. If fetal blood pressure fell below 8 mmHg, the occlusion was stopped immediately. For Cx43 hemichannel blockade, a peptide (H-Val-Asp-Cys-Phe-Leu-Ser-Arg-Pro-Thr-Glu-Lys-Thr-OH; Auspep, Vic, AU) that mimics the second extracellular loop of Cx43 ('Peptide 5', reported in [21]) was infused into the lateral ventricle via the intracerebroventricular catheter attached to an external pump (SS-2222, Harvard Apparatus, Holliston, MA, USA). Vehicle control fetuses received asphyxia followed by infusion of the vehicle (asphyxia-vehicle, n = 7). The asphyxia-peptide group (n = 6) received 50 µmol/kg/h for one hour followed by 50 µmol/kg/24 h for 24 hours, dissolved in artificial cerebrospinal fluid (aCSF), at a rate of 1 mL/hour for 25 hours starting 90 min after the end of the occlusion. The sham control group received a sham umbilical cord occlusion plus infusion of the vehicle (n = 7). The mimetic peptide was not tested in sham occlusion animals. All animals were killed at seven days with an overdose of sodium pentobarbitone (9 g i.v. to the ewe; Pentobarb 300, Chemstock International, Christchurch, N.Z.).

Immunocytochemistry
The fetal brains were perfusion fixed with 10% phosphate-buffered formalin at day 7. Slices (10 µm thick) were cut using a microtome (Leica Jung RM2035, Wetzlar, Germany). Slides were dewaxed in xylene and rehydrated in decreasing concentrations of ethanol, then washed in 0.1 mol/L phosphate-buffered saline (PBS). Antigen retrieval was performed using the citrate buffer boil method, followed by incubation in 1% H2O2 in methanol for NeuN, Iba1, CNPase and Ki-67, and in PBS for Olig2. Blocking was performed in 3% normal horse serum (NHS) for NeuN and Iba1, and in normal goat serum (NGS) for Olig2, CNPase and Ki-67, for 1 hour at room temperature. Sections were labelled with 1:400 mouse anti-neuronal nuclei monoclonal antibody (NeuN, Chemicon International, Temecula, CA, USA).

(Figure 3 legend) The time sequence of changes in fetal blood pressure, fetal heart rate, nuchal EMG and extradural temperature before and after 25 min of complete umbilical cord occlusion. BP was significantly elevated in both groups after occlusion but returned to baseline by 48 hours. A transient tachycardia was seen in both groups after occlusion. A transient suppression of nuchal EMG was seen after occlusion in both groups, followed by an increase to above baseline levels for the remainder of the experiment that was significantly greater between 62-106 hours in the occlusion-peptide group (p<0.05). No significant differences were seen in extradural temperature between groups.

For all antibodies, two slides per animal were used. To quantify neuronal number, four images in the cortex of the first parasagittal gyrus and one image in each of the CA1 and CA3 regions of the hippocampus were obtained using light microscopy (Nikon Eclipse 80i, Tokyo, Japan). Neuronal counts were obtained using automated counting software (NIS Elements version 4.0, Nikon). To quantify oligodendrocyte number, one image was obtained in the intragyral white matter of both the first and second parasagittal gyri and one in the periventricular white matter, and these were quantified in the same way by an investigator who was masked to the treatment group. Confocal microscopy was performed on an Olympus Fluoview FV1000 (image capture: FV10-ASW software, Olympus Corp., Tokyo, Japan).
Brain regions of the forebrain used for analysis included the mid-striatum (comprising the caudate nucleus and putamen), and the frontal subcortical white matter (comprising the intragyral and periventricular regions). The cornu ammonis (CA) of the dorsal horn of the anterior hippocampus (divided into CA1/2, CA3, CA4, and dentate gyrus (DG)) was assessed on sections taken 17 mm anterior to stereotaxic zero. Neuronal (NeuN), oligodendrocyte (Olig-2, CNPase) and microglial (Iba-1) changes, and proliferation (Ki-67), were scored on stained sections by light microscopy at ×40 magnification on a Nikon 80i microscope with a motorized stage and Stereo Investigator software V.8 (Microbrightfield Inc; Williston, VT, USA) using seven fields in the striatum (four in the caudate nucleus, three in the putamen), two fields in the white matter (one intragyral, one periventricular) and one field in each of the hippocampal divisions. For each animal, average scores from one section across both hemispheres were calculated for each region. Data Analysis Data were analyzed using ANOVA or repeated measures ANOVA, followed by the Tukey post-hoc test when a significant difference was found. (Figure 6 caption: Cell counts of oligodendrocytes, proliferating cells and microglia in the white matter seven days after 25 min of complete umbilical cord occlusion. Cell counts include total oligodendrocytes (Olig-2), immature/mature oligodendrocytes (CNPase), proliferating cells (Ki-67) and total microglial number (Iba-1) in the intragyral (panel A) and periventricular white matter (panel B). A significant increase in total numbers of oligodendrocytes was seen after occlusion-peptide compared to occlusion-vehicle in both the intragyral and periventricular white matter (p<0.05). A significant reduction in numbers of immature/mature oligodendrocytes was seen in the occlusion-vehicle group compared to sham control, with an intermediate number in the occlusion-peptide group (p<0.05). A significant increase in proliferation was seen in the occlusion-vehicle and occlusion-peptide groups compared to sham control, with a further significant increase in the occlusion-peptide group compared to occlusion-vehicle (p<0.05). A significant increase in total microglial number was seen in the occlusion-vehicle and occlusion-peptide groups compared to sham control (p<0.05). *p<0.05 compared to sham control. #p<0.05 compared to the occlusion-vehicle group. Data are mean ± SEM.) Statistical significance was accepted when p<0.05. Results There were no significant differences in baseline blood gas, pH, glucose or lactate values between the occlusion-vehicle and occlusion-peptide groups (Table 1). Occlusion was associated with severe metabolic and respiratory acidosis in both groups (p<0.05 compared to baseline). There was no significant difference in any parameter during asphyxia or the recovery period between groups. EEG activity was suppressed below baseline after the end of asphyxia (Figure 1). EEG power gradually returned to baseline in the occlusion-vehicle group. Occlusion-peptide was associated with earlier recovery, with greater EEG power than occlusion-vehicle from 4 to 42 hours (p<0.05). Continuity of the EEG at 25 μV was reduced after the occlusion and returned to baseline significantly earlier in the occlusion-peptide group compared to occlusion-vehicle, with greater continuity from 4 to 36 hours (p<0.05). There was no significant difference in seizure burden between groups (Figure 2). 
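As context for the EEG power and continuity results above, a minimal sketch of the band-power and dB computation described in Data Recording is given below. The authors used custom LabView programs; the signal, epoch length and Welch parameters here are hypothetical placeholders, not the original implementation.

import numpy as np
from scipy.signal import welch

fs = 256                          # Hz, digitization rate given in the Methods
eeg = np.random.randn(fs * 60)    # hypothetical 60 s EEG epoch

# Power spectral density, then integrate over the reported spectral bands
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)
bands = {"delta": (0.0, 3.9), "theta": (4.0, 7.9),
         "alpha": (8.0, 12.9), "beta": (13.0, 22.0)}
band_power = {name: np.trapz(psd[(freqs >= lo) & (freqs <= hi)],
                             freqs[(freqs >= lo) & (freqs <= hi)])
              for name, (lo, hi) in bands.items()}

# Total intensity over 0.5-20 Hz, normalized as dB = 20*log10(intensity)
sel = (freqs >= 0.5) & (freqs <= 20)
total_db = 20 * np.log10(np.trapz(psd[sel], freqs[sel]))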
Asphyxia was associated with a significant reduction in neuronal number after 7 days recovery in the CA1/2 and CA3 regions of the hippocampus as well as in the caudate and putamen of the striatum, but not in the CA4 or dentate gyrus (data not shown), in the occlusion-vehicle group compared to sham control (p<0.05, Figure 4 and Figure 5). In the occlusion-peptide group, neuronal cell number was also significantly reduced in the CA1/2 and CA3 compared to sham control but was significantly increased in the caudate and putamen compared to occlusion-vehicle (p<0.05). Asphyxia was associated with a borderline reduction in Olig2-positive oligodendrocytes in both the intragyral and periventricular white matter (Figures 6 and 7). The occlusion-peptide group showed a significant increase in Olig2-positive oligodendrocytes compared to the occlusion-vehicle group (p<0.05). Immature/mature (CNPase-positive) oligodendrocytes were significantly reduced in both occlusion groups compared to sham controls in both the intragyral and periventricular white matter (p<0.05). Numbers of CNPase-positive oligodendrocytes in the occlusion-peptide group were significantly increased compared to the occlusion-vehicle group (p<0.05) in the intragyral and periventricular white matter. The percentage of CNPase-positive oligodendrocytes was significantly reduced in the occlusion-vehicle group in both the intragyral and periventricular white matter compared to sham controls (p<0.05, Figure 8). The occlusion-peptide group showed an intermediate percentage of CNPase-positive oligodendrocytes, and was not significantly different from either sham controls or occlusion-vehicle in both areas. Asphyxia was associated with a significant increase in proliferating (Ki-67-positive) cells in both the occlusion-vehicle and occlusion-peptide groups compared to sham control in both the intragyral and periventricular white matter (p<0.05, Figure 6 and Figure 7). Ki-67-positive cell numbers were further increased in the occlusion-peptide group compared to occlusion-vehicle in the intragyral white matter (p<0.05). Confocal microscopy of fluorescent double labeling showed many of the proliferating cells colocalized with oligodendrocytes (Figure 9). Asphyxia was also associated with a significant increase in Iba-1-positive cells (p<0.05), with no effect of peptide infusion. Discussion The present study shows for the first time that blockade of connexin 43 hemichannels with a specific mimetic peptide resulted in earlier recovery of brain activity after asphyxia in preterm fetal sheep, associated with a corresponding reduction in neuronal loss in the striatum, enhanced proliferation and improved numbers of immature/mature oligodendrocytes in the white matter tracts. This finding supports the hypothesis that hemichannel opening contributes to spreading hypoxic-ischemic injury even in the very immature brain, at an age that corresponds clinically with the age of greatest vulnerability to neural injury [23]. Acute severe asphyxia was associated with profound suppression of EEG activity, consistent with previous studies [19]. Blockade of connexin hemichannels resulted in earlier recovery of brain activity as seen both by earlier recovery of EEG power to baseline and earlier recovery of EEG continuity compared to the occlusion-vehicle group. Clinically, infants with increased background EEG activity within the first 24 hours after ischemia have better outcomes than infants whose background activity remains suppressed [24,25]. 
Further, this is consistent with our previous finding in near-term fetal sheep that connexin hemichannel blockade after global cerebral ischemia was associated with both an earlier increase in EEG power and better final recovery of EEG power [13], despite the considerable differences in distribution of injury and particular vulnerability of specific cell types to ischemia between the full-term and preterm neonate [26][27][28]. Subcortical neuronal loss is a known consequence of asphyxia in premature infants and is associated with neurodevelopmental handicap, including cerebral palsy [1,29]. In the present study, blockade of connexin hemichannels after asphyxia reduced neuronal loss in both the caudate and putamen nuclei of the striatum. However, there was no improvement in neuronal loss in the hippocampus. We can only speculate on potential reasons underlying this lack of efficacy in the hippocampus. The hippocampus is particularly vulnerable to ischemic injury across a range of experimental settings in the fetus [30][31][32][33]. Injury evolves more quickly with severe injury, and thus the window of opportunity for treatment is less after more severe injury [34]. Alternatively, different mechanisms may contribute to the spread of ischemic brain injury in the hippocampus, or, given that the mimetic peptide was infused directly into the lateral ventricles, it may be that the cells in the hippocampal region were exposed to relatively higher concentrations of mimetic peptide. Higher dose of mimetic peptide are associated with a worse outcome [35], likely mediated by reduced coupling of gap junctions and hence reduced ability of the astrocytic syncytium to maintain homeostasis [21]. We have previously shown intra-cerebroventricular infusion of fluorescent-tagged mimetic peptide in the near-term fetal sheep after global cerebral ischemia is associated with high levels of fluorescence surrounding the ventricles with a graded reduction towards the cortex [35]. Against this hypothesis, in the present study we found improved outcome in the caudate nucleus that is also adjacent to the ventricle and so equally exposed to high peptide concentrations. In the present study we found no difference in the total number of Olig2 labelled cells in the oligodendrocyte lineage seven days after asphyxia, compared to sham controls. However, there was a significant reduction in CNPase labeled immature/mature oligodendrocytes, as well as a reduction in percentage of CNPase positive cells. Given that the Olig2 labeled cells include all immature/mature oligodendrocytes [22], these data are consistent with the hypothesis that the reduction in immature/mature oligodendrocytes corresponds with increased numbers of preoligodendrocytes. We have previously shown that in the same model as the present study, there is significant loss of oligodendrocyte progenitor cells three days after asphyxia [33,36] and we now show that after 7 days there was a significant increase in proliferation in the occlusion-vehicle group compared to sham control. The exuberant proliferation response to injury is almost entirely mediated by oligodendrocyte progenitor cells [37], as shown here by co-localization of Olig2 and Ki-67. Taken as a whole, these data suggest early loss of preoligodendrocytes [36] followed by significant proliferation of new preoligodendrocytes with either impaired maturation, or at least failure to replace immature/mature oligodendrocytes by day 7. 
This is consistent with data in the neonatal rat and the preterm fetal sheep that showed degeneration of preoligodendrocytes offset by dramatic proliferation, but impaired maturation after hypoxia ischemia [27,28] and with the critical finding of maturational arrest of preoligodendrocytes in human neonatal white matter injury at post-mortem [2]. Strikingly, blockade of connexin hemichannels significantly increased both the total number of oligodendrocytes as well the number of mature oligodendrocytes in the white matter, with a further significant increase in proliferation in the intragyral white matter. The percentage of CNPase positive cells was intermediate between sham controls and occlusion-vehicle, and not significantly different from either group. This suggests that blockade of connexin hemichannels may have reduced oligodendrocyte cell loss, enhanced proliferation of new oligodendrocytes and/or help partially restore maturation of oligodendrocyte lineage development. Intriguingly, blockade of connexin hemichannels has no known direct effect on proliferation. Therefore the increased oligodendrocyte numbers is presumptively mediated indirectly through reduced acute cell death and restoration of a more favorable extracellular environment conducive to cell survival, proliferation and maturation. Blockade of connexin hemichannels after asphyxia in preterm fetal sheep did not have any significant effect on seizure activity in the present study, in contrast with marked reduction in status epilepticus in the near-term fetal sheep [13]. A key difference between these studies is that in contrast with the common development of status epilepticus after ischemia in the termequivalent fetal sheep, discrete seizures predominate in the preterm fetus [33,38]. There is considerable evidence implicating gap junctions and/or connexin hemichannels in the initiation, propagation and particularly in the continuity of seizure activity [39][40][41]. We speculate that connexin hemichannel blockade attenuates the propagation of abnormal electrical activity rather than the generation of seizures, and so had no effect on the discrete seizures seen in this study. There are also considerable differences in the characteristic patterns of neural injury and the cell types affected in the preterm compared to the term neonate. At term, injury is characterized by profound cortical and subcortical neuronal loss with some white matter injury, whereas preterm brain injury is associated with severe white matter injury, with particular vulnerability of the premyelinating oligodendrocytes and some subcortical neuronal loss, but sparing of cortical neurons [26][27][28]. Despite these differences in the pattern, pathogenesis and specific cell vulnerability to global hypoxic-ischemic brain injury between the nearterm and preterm neonate, blockade of connexin hemichannels significantly reduced cell loss and improved recovery of EEG activity at both gestational ages [13]. Further, opening of connexin hemichannels has been implicated in the spread of injury in models of adult ischemic stroke and of retinal ischemia [8,42,43]. This suggests that connexin hemichannels are a common mechanism in the spread of ischemic brain injury across a wide range of brain maturity and types of ischemic insults. 
Importantly for clinical translation, we have shown in the nearterm fetal sheep that connexin hemichannels play a role in the spread of brain injury after ischemia but do not appear to contribute significantly during the period of ischemia itself [44]. Consistent with this, we found that connexin hemichannel mRNA expression is significantly upregulated four hours after the end of ischemia in the near-term fetal sheep [13]. This delay allows time for identification of infants that may potentially benefit from mimetic peptide therapy. Based on this evidence, in the present study peptide infusion was begun after 90 min recovery from asphyxia, in order to model a clinically realistic treatment protocol. Reassuringly, blockade of connexin hemichannels after asphyxia had no effect on mean arterial pressure, fetal heart rate or extradural temperature. A greater increase in body movements, as measured by nuchal EMG activity, was seen between 62-106 hours in the occlusion-peptide group compared to the occlusionvehicle group. This may reflect improved early behavioral recovery. We have previously shown that the neuroprotective effects of peptide infusion are specific to the particular mimetic peptide administered in this study as an alternate peptide targeting another region of Cx43 did not affect neural injury after ischemia in near-term fetal sheep [13]. A limitation of the present study is that we did not examine the effect of mimetic peptide infusion in healthy preterm fetal sheep. Reassuringly, in healthy 0.85 gestation fetal sheep, at an age when the fetal sheep neural maturation is consistent with that of the full term human infant [16], we found no effect of mimetic peptide infusion at the same dose per kg as the present study on normal brain activity [13]. Despite continuous long-term monitoring of these animals, we did not observe any off target effects, however, it is not possible to wholly exclude this possibility. The present study showed for the first time that blockade of connexin hemichannels improved recovery of brain activity as well as subcortical neuronal and white matter cell survival and maturation in the preterm fetal sheep. These data suggest that blockade of connexin hemichannels may be a useful therapeutic intervention for the treatment of preterm infants following asphyxia.
Proteomic Responses of the Cyanobacterium Nostoc Muscorum under Salt and Osmotic Stresses In this paper, we examined the effect of salt stress (NaCl) and osmotic stress (sucrose) at the proteomic level in the diazotrophic cyanobacterium Nostoc muscorum. The aim of this study is to compare proteins appearing in control vs. salt treated, control vs. sucrose treated and salt treated vs. sucrose treated cultures. In the salt treated cultures about 37 proteins were expressed differentially; of these, only 5 proteins showed a fold regulation of 1.5 or more. About 141 proteins were found to be expressed independently in the control and about 554 proteins were expressed independently in the salt treated culture. When we compared proteins in control and sucrose treated cells, about 37 protein spots were expressed differentially; of these, only 7 proteins had a fold regulation of 1.5 or more. The independently expressed proteins that appeared on the gel were 141 and 186, respectively. Similarly, when we compared proteins appearing in salt and sucrose treated cells, about 54 proteins were expressed differentially; of these, 10 proteins had a fold regulation of 1.5 or more. About 537 protein spots were independently present in salt treated cells and about 186 proteins were independently present in sucrose treated cells. In addition, the differentially expressed proteins and their identification with their functional group have also been discussed. Introduction Cyanobacteria are Gram negative eubacteria; their evolutionary history dates back to 2.7 billion years ago [1]. The origin of cyanobacteria and the evolution of oxygenic photosynthesis have been considered the most important events in the evolution of the aerobic atmosphere. Cyanobacteria are known to be found in almost all ecological niches with diverse environmental conditions. The native cyanobacterial species present in such habitats are confronted with cation toxicity and water loss. The microorganisms, including cyanobacteria, that grow and multiply in such stressful habitats have the ability to change their morphological and physiological parameters to cope with such stressful conditions [2]. The ionic component of the stress factor is usually overcome by the efflux mechanism driven by Na+/H+ antiporter activity or by the Mrp system [3,4,2]. On the other hand, the osmotic component of the stress factor is overcome by the synthesis/accumulation of low molecular weight organic compounds collectively known as compatible solutes [5,6]. The nature and the biosynthesis of compatible solutes depend upon the habitat in which cyanobacteria grow. The fresh water cyanobacterial strains are known to synthesize sucrose, trehalose and proline as osmotic balancers [7,2,8]. Glucosyl-glycerol is a major compatible solute synthesized by moderately halotolerant strains [9,10]. On the other hand, hypersaline strains produce glycine-betaine or glutamate-betaine as compatible solutes [11,12]. The modern molecular biology techniques such as genomics and proteomics have provided valuable databases for the better understanding of many physiological and biochemical processes, including cyanobacterial adaptation to salt and osmotic stresses. It is known that during such stresses cellular proteins are either denatured or inactivated, which in turn alters other metabolic activities. During such stresses molecular chaperones play a vital role in maintaining cellular homeostasis [13,14,15,16]. 
The initial signal of environmental change is perceived by the cell surface and ultimately transferred to the cells. In the cyanobacterium Anabaena sp. PCC 7120 it has been reported that about 18 cell surface associated proteins were over-expressed under stress conditions. These over-expressed proteins are involved in nucleic acid binding, protein synthesis, proteolytic activity, electron transfer and other functions [17]. Salinity and osmotic stresses triggered distinct protein synthesis in the Anabaena species [18]. In this strain, synthesis of several proteins was repressed by salinity stress. Similarly, some proteins were induced only under salinity stress. However, there are certain proteins which were induced by both salinity and osmotic stresses. In addition, salinity and osmotic stress have been known to induce some independently expressed proteins. In cyanobacteria, gene expression under salt and osmotic stresses has been studied by Kanesaki et al. [19]. Their findings indicate that about 28 genes were expressed only under salt stress, while 11 genes were expressed only in response to osmotic stress. In addition, 34 genes are expressed both under salinity and osmotic stresses. The products of some of these genes are hypothetical proteins whose functions have not been characterized so far. In this study, the protein profile of the cyanobacterium Nostoc muscorum under salinity (NaCl) and osmotic (sucrose) stress was compared in terms of commonly and differentially expressed proteins (control vs. treated and salt vs. sucrose). Organism and Growth Conditions The cyanobacterium used in the present study, Nostoc muscorum, is a fresh water, filamentous and diazotrophic cyanobacterium capable of oxygenic photosynthesis. This species was grown in modified Chu No. 10 medium [20] for routine as well as for experimental purposes. The cultures were routinely grown in 250 ml Erlenmeyer flasks containing 100 ml of liquid medium and incubated in a culture room set at a temperature of 24 ± 1 °C and illuminated for 16 hrs per day with cool daylight fluorescent tubes (intensity approximately 10-50 W/m²). The culture medium was maintained at pH 7.5 with the help of 10 mM HEPES-NaOH. The survival studies revealed that NaCl at a concentration of 100 mM was lethal to the cyanobacterium N. muscorum. Osmotic stress was generated by sucrose. Sucrose at a concentration of 250 mM was found lethal to N. muscorum. The diazotrophically grown cultures were exposed to the lethal doses of NaCl and sucrose for 12 hrs and then inoculated into fresh diazotrophic growth medium for further use. Total Protein Extraction Exponentially grown cultures of the cyanobacterium were harvested by centrifugation (Remi C-24BL, India) and the cell suspension was washed thrice with culture medium. The cell pellets thus obtained were weighed and then mixed in five times their volume of extraction buffer (B1). The mixture was then ground with a mortar and pestle in liquid nitrogen three times, followed by sonication (Sonic Vibra-cell, USA) 10 times (70% intensity) for 20 s each in an ice bath, with 40 s cooling breaks. The homogenate was centrifuged for 45 min at 16000 g at 4 °C [21]. The supernatant thus obtained was designated as the total soluble protein fraction. The precipitation of protein was done with the help of trichloroacetic acid (TCA). Protein quantification of the extracted protein was carried out with the help of a standard curve (BSA). 
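The BSA standard-curve quantification mentioned above is typically a linear fit of absorbance against known BSA concentrations; a minimal Python sketch with hypothetical absorbance readings (not the values used in this study) could look like this:

import numpy as np

# Hypothetical BSA standards (mg/mL) and their absorbance readings
bsa_conc = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])
bsa_abs = np.array([0.02, 0.14, 0.27, 0.52, 0.78, 1.01])

# Fit a straight line: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(bsa_conc, bsa_abs, 1)

def protein_concentration(sample_abs, dilution=1.0):
    """Back-calculate the protein concentration of an unknown sample."""
    return dilution * (sample_abs - intercept) / slope

print(protein_concentration(0.45))  # roughly 0.85 mg/mL for this toy curve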
TCA Precipitation The TCA precipitated protein was free of various non-protein contaminants which can interfere with isoelectric focusing and electrophoresis, such as lipids and salts. Extracted impure protein was precipitated by a mixture of TCA and chilled acetone in the ratio of 1:1:8 (impure protein: TCA: acetone) for more than 2 hours. Precipitated proteins were washed thrice, the first wash with 70% chilled acetone containing 0.07% DTT and the remaining two washes with 70% chilled acetone only [22]. 2-Dimensional Gel Electrophoresis (2DE) Two-dimensional polyacrylamide gel electrophoresis (2-D PAGE) (O'Farrell, 1975) is a method in which protein molecules are separated according to charge (pI) by isoelectric focusing (IEF) in the first dimension and according to size (Mw) by SDS-PAGE in the second dimension. 2-DE has a unique capacity for the resolution of complex mixtures of proteins, permitting the simultaneous analysis of hundreds or even thousands of gene products. The protein sample was solubilized in an appropriate amount of rehydration buffer, and rehydration of the immobilized pH gradient dry strip gel, IEF, equilibration of the IPG strip for proper protein transfer and SDS-PAGE were performed as described previously by Gupta et al. [23]. Image Scanning and Image Acquisition Gel imaging was performed on an Image Scanner III (GE Healthcare Bio-Sciences Ltd, India) and the image was saved in .tif and .mel formats. Image acquisition was done using Image Master 2D Platinum 7 (IMP7, GE Healthcare, Freiburg, Germany) software. Protein spots of the gel were further analyzed using the 2DE images, followed by calculation with Image Master 2D Platinum version 7.0 (GE Healthcare) software. The theoretical pI and molecular weight and the overall functional annotation of the data were obtained from Expasy (http://web.expasy.org/compute_pi/Mw). Results and Discussion In this study the proteomics of the cyanobacterium N. muscorum under salt and osmotic stresses have been analyzed. This analysis has paved the way to compare protein spots in terms of differentially expressed and independently expressed proteins. The protein spots and multiple protein spots that showed a fold regulation of 1.5 or more [24] were further categorized into various functional groups according to their role in salt and osmotic stresses. The 2-DE images showed that most of the protein spots were detected in a pH range of 4-7 and their molecular mass lies in the range of 10-90 kDa. 2D Analysis of Proteins under Salt Stress The protein spots appearing in the control as well as in the salt treated cells were compared; as shown in Table 1, about 37 proteins were expressed differentially. Out of these, only 5 protein spots showed a fold regulation of 1.5 or more. The differentially expressed proteins and their identifications on the basis of their functional group are summarized in Table 2. The spots which are marked by the sign + in Fig. 1 (G & H) are independently present in control (141 spots) and salt treated cells (554 spots). Out of these protein spots, some proteins were found to occur in two or more spots. These multiple spots have similar molecular masses, but different pI values. The variation in pI value reflects post-translational modification of the concerned protein molecule. On the contrary, some multiple spots of the same protein showed differences in their molecular masses. The various functional categories of differentially expressed proteins are discussed below. 
The expression of genes involved in energy metabolism under stress condition is the key factors involved in cyanobacterial adaptation to stress factors [42]. Central Intermediary Metabolism The expression level of alr0692 was higher in the nitrogen depletion condition. This ORF identified as a NifU like protein, it harbors NifU like domain partially over lapping a thioredoxine like domain. Thioredoxine catalyzing the reduction of intermolecular disulphide bonds by this means it plays a major role in the formation of Fe-S clusters [43]. The differentially expression of this protein may be related to the assembly of a functional uptake hydrogenase. The gene involved in assembly of hydrogenase should be regulated differentially depending on strains, environment and type of hydrogenase [44]. The differential expressions of this protein in the present investigation are inconsistent with the above hypothesis. Another enzyme of this group i,e. inorganic pyrophosphatase catalyses the conversions of diphosphate to phosphate, induced differentially. Its role in metabolism is thought to be the removal of inorganic pyrophosphate, which is a byproduct of many anabolic reactions. It is also believed that pyrophosphate also plays an important role in the bioenergetics under various biotic and abiotic stresses [45,46,47]. Unknown & Hypothetical Phototrophs like cyanobacteria might use gas vesicle to expose them into appropriate light intensity. These gas vesicles are basically protein bodies and in prokaryotes they evolutionary most conserved bodies. In the cyanobacterium Anabaena sp. five additional proteins were identified (Gbp-F, Gbp-G, Gbp-j, Gbp-l and Gbp-M). These proteins are involved in the initiations of vesicle formation. In cyanobacteria buoyancy is regulated either by the formation of gas vesicle or synthesis/breakdown of carbohydrate molecules [48]. Our findings regarding the over expression of various proteins are inconsistent with the above finding. The ATP binding protein i. e. alr2300 has identified as conserved hypothetical proteins in the present study. The over expression of this protein (HetY) suppresses the heterocyst formation [49]. In the sucrose treated cells heterocyst differentiations delayed as compared to the control. This delay in heterocyst differentiation correlated with the expression of alr2300 gene. In addition, to the above mentioned differentially expressed protein, there are a number of proteins that were identified in the control as well as sucrose treated cells, which were expressed independently. This observation suggested that sucrose stress caused over expression of certain genes and simultaneous repression of certain genes. This up regulation and down regulation of certain genes helps in surviving cells under the given stresses. 2D Analysis of Protein under Salt and Sucrose Stress In the next series of analysis we compared salt treated and osmotic treated samples in terms of commonly expressed proteins ( Table 5). The protein spots with fold regulation 1.5 or more and their identification with functional group are given in table 6. The spots which are marked by sign + are independently present in salt (537 spots) and sucrose treated cells (186 spots), Fig. 3 (K and L). 
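As a rough illustration of the fold-regulation criterion used throughout these comparisons, the sketch below assumes the normalized spot volumes (%Vol) exported from the image-analysis software; the spot identifiers and numbers are hypothetical, not data from this study.

# Hypothetical normalized spot volumes (%Vol) for matched spots
# in the salt-treated and sucrose-treated gels.
spots = {
    "spot_101": (0.42, 0.25),
    "spot_102": (0.10, 0.31),
    "spot_103": (0.55, 0.50),
}

for spot_id, (salt_vol, sucrose_vol) in spots.items():
    ratio = salt_vol / sucrose_vol
    # Express as fold regulation: >1 means higher in salt, <1 higher in sucrose
    fold = ratio if ratio >= 1 else 1 / ratio
    direction = "up in salt" if ratio >= 1 else "up in sucrose"
    if fold >= 1.5:
        print(f"{spot_id}: {fold:.2f}-fold ({direction})")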
Photosynthesis and Respiration Cyanobacterial nitrogen fixation is an energy requiring process; it requires ATP and a reductant for efficient nitrogen fixation. The over-expression of NADH dehydrogenase under stress conditions produces more ATP and reductant to support nitrogen fixation and other metabolic activities. The proteins involved in energy metabolism (photosynthesis and respiration), e.g. NADPH quinone oxidoreductase and NADH-plastoquinone oxidoreductase, were highly abundant in the present analysis. This suggested that more ATP and reductant are available to the organism for nitrogen fixation. Similar findings have also been reported by many workers [35,36]. Unknown & Hypothetical Arginyl-tRNA synthetase (ArgRS) is known to be responsible for aminoacylating its cognate tRNA(s) with a unique amino acid in a two-step catalytic reaction. In the first step, the amino acid tRNA ligase binds the amino acid and ATP to activate the amino acid through the formation of an aminoacyl-adenylate. The second step involves the transfer of the aminoacyl group to the tRNA. Phycobilisomes are the major light harvesting complexes of cyanobacteria; under nitrogen fixing and salt stress conditions, the major components of the phycobilisomes are strongly expressed [36,59]. The above findings are in agreement with our interpretations. Phosphoglycerate kinase (PGK) is an enzyme that catalyzes the reversible transfer of a phosphate group from 1,3-bisphosphoglycerate (1,3-BPG) to ADP, producing 3-phosphoglycerate (3-PG) and ATP during carbohydrate metabolism. The differential expression of this protein suggested that metabolic proteins are associated with the survival of the organism under stress conditions. A similar role of carbohydrate metabolism in stress has also been reported in Anabaena sp. [60]. The enzyme 1,4-dihydroxy-2-naphthoyl-CoA hydrolase is known to be involved in the formation of the naphthoquinone ring of phylloquinone. In higher plants the cleavage by this enzyme leads to the formation of phylloquinone; the cognate thioesterase of the same enzyme has recently been characterized in the cyanobacterium Synechocystis sp. [61]. In photoautotrophic organisms, including certain species of cyanobacteria, phylloquinone is a vital redox cofactor required for electron transfer in PSI and the formation of protein disulphide bonds [62,63,64]. Consistent with the above findings, in the cyanobacterium Synechocystis sp. PCC 6803 salt stress enhances the expression of genes for ribosomal proteins (rpl2, rpl3, rpl4 and rpl23), whereas hyperosmotic stress enhances the expression of genes for the synthesis of lipids and lipoproteins (fabG and rlpA) and for other functions. The over-expression of these genes clearly indicates that Synechocystis sp. PCC 6803 recognizes salt stress and hyperosmotic stress as different signals. To the best of our knowledge this is the first report from Nostoc muscorum investigating proteomic responses under salt and osmotic stress. Conclusion The over-expression of commonly induced proteins under salt and osmotic stress suggested that some factors might perceive and transduce such signals to the specific pathways that control the expression of a number of genes. 
Therefore, the role of the various differentially expressed proteins is to overcome the given stress and maintain the normal functioning of the cell. This metabolic adaptability of cyanobacteria could be useful in the production of biofertilizers for stressful ecosystems and in the isolation of commercially important bioactive compounds.
Regular black holes in three dimensions and the zero point length In this paper, by means of a regularisation procedure via $r\to \sqrt{r^2+l_0^2}$ (where $l_0$ can play the role of zero point length), we first modify the gravitational and electromagnetic potentials in two dimensions and then we solve the Einstein field equations to end up with an exact and regular black hole solution in three dimensions with a negative cosmological constant. We show that the black hole solution is asymptotically AdS, non-singular at the origin and, under specific conditions, it has a flat de Sitter core at the origin. As a special case, we obtain the charged Banados-Teitelboim-Zanelli (BTZ) solution. Finally, using a dimensional continuation and the NJ algorithm, we end up with a legitimate rotating black hole solution in three dimensions. I. INTRODUCTION Black hole solutions were long believed not to exist in three spacetime dimensions. This line of thought was based on the absence of local gravitational attraction and, therefore, of a mechanism to produce black holes. It came as a surprise when Banados, Teitelboim, and Zanelli (BTZ) [1,2] reported a vacuum black hole solution in three spacetime dimensions with Anti-de Sitter (AdS3) space. The BTZ black hole solution is an exact solution of the Einstein field equations; besides the mass parameter, it was shown that such black holes can have electric charge and can rotate. Another aspect of the BTZ black hole is the presence of a singularity at the origin. The BTZ solution gained a lot of interest in the community and the interested reader is referred to some interesting works in this direction [3][4][5][6][7][8] and references therein. The solutions in AdS are of interest due to the AdS/CFT correspondence proposed by Maldacena [9]. This correspondence was studied for the case of BTZ spacetime in AdS3 [10]. In this work, we are interested in constructing regular black holes in three dimensions using a regularisation procedure via r → √(r² + l₀²), where l₀ can play the role of the zero point length. Such a regularisation procedure is closely related to ideas from T-duality. T-duality is an equivalence between two different string theories in two contexts. Precisely speaking, by T-duality, with the identification of winding and momentum modes, one can relate geometries having large and small compact directions. Recently, a four dimensional quantum corrected black hole in T-duality was found [11]. Such a black hole was shown to be regular and to coincide with the Bardeen solution. In addition, for an exact black hole solution with charge in T-duality see [12]. In the present work, we would like to extend this idea to study regular black holes in three dimensions. The paper is structured as follows. In Section II, we modify the gravitational and electromagnetic potentials and we solve the Einstein field equations in three dimensions. In Section III, we use dimensional continuation and the NJ algorithm in order to generate rotating black hole solutions. In Section IV, we comment on our findings. * kimet.jusufi@unite.edu.mk II. REGULAR AND CHARGED BLACK HOLES IN THREE DIMENSIONS It was recently shown (see for details [11,12]) that by using ideas from T-duality one can modify the standard Newtonian potential [and similarly the electric potential] as follows: Φ(r) → −M/√(r² + l₀²). The gravitational potential in two dimensions, on the other hand, is given by Φ(r) ∼ kM ln(r), with k being some constant and M the mass per unit surface. 
We now impose the regularization factor r → √(r² + l₀²) on this potential. Solving the Poisson equation in polar coordinates we can then obtain the energy density, which, when l₀ → 0, reduces to zero, i.e., a point mass. In other words, due to the zero point length we get a smeared matter distribution arising from quantum gravity effects, described by the quantum modified energy-momentum tensor (T^µ_ν)_corr = diag(−ρ, p_r, p_φ). Here, p_r and p_φ are the radial and transverse pressures, respectively. For the mass function of the black hole we obtain a profile which, if we further set k = 1 and l₀² → 2ML, has a similar form to the Hayward model used in four dimensions, although not exactly the same, as here it is used in the context of the three dimensional case. Furthermore, such a profile has been conjectured recently in [8]. It is thus very interesting to see that we obtain this profile using the regularisation procedure. We are now interested in finding a three dimensional black hole solution derived from the Einstein-Maxwell action and quantum effects, using an action in which the cosmological constant is Λ = 1/l² > 0 in our notation. The line element of the black hole is written in cylindrical coordinates. Solving the field equations one can find the Einstein-Maxwell equations, with the energy-momentum tensor for the electromagnetic field. In the spirit of Eq. (1) we now regularise the electromagnetic potential in the same way. Using the spacetime metric (6) we get the only nonvanishing components of the Faraday tensor, along with the corresponding scalar quantity. For the energy-momentum tensor of the electromagnetic field we find the components; from the Einstein field equations and the energy-momentum tensor we obtain the t−t component, where for simplicity we have set k = 1/8. One can then easily obtain an exact solution for the metric function, where the integration constant can be fixed by taking the limit l₀ → 0, which should yield the standard BTZ result [1,2] for f(r); hence, we can get C = −M. Thus we arrive at the general form of our solution. To the best of our knowledge this metric is new and has not been obtained before in the literature. The black hole solution is regular for any r, including the limit r → 0. To see this, one can compute the Ricci scalar, which remains finite at the origin; this means that under specific conditions the quantum gravity effects can produce a de Sitter core at the origin, where we have set M − C = 1 and Λ_eff = M/l₀² − 1/l² > 0, which requires M > l₀²/l². A similar result was shown in a recent work [8]. III. GENERATING ROTATING SOLUTIONS In this section we would like to generate a rotating black hole solution in three dimensions. In doing so, we shall apply the complex coordinate transformation scheme used in Ref. [13]; however, here we shall generalize it for any function f(r). The main idea behind this derivation is to assume that our solution in three dimensions is a slice of a static and spherically symmetric four dimensional geometry, with f(r) given by Eq. (18) and h(r) = r². This "dimensional continuation" basically allows us to introduce the null tetrad system of vectors and therefore to make use of Newman's complex coordinate transformation method. As was nicely explained in [13], and as we note here also, we do not demand that this four dimensional geometry be a solution of the Einstein equations, but we expect its slice [for θ = π/2] to be a solution of the Einstein equations in three dimensions. 
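To make the above more concrete, the regularised potential, smeared density and mass profile described in this section can be written explicitly; the expressions below are one reconstruction consistent with the surrounding prose, with the overall constants and sign conventions taken as assumptions rather than the paper's exact normalisation:

\begin{align}
  \Phi(r) &= k\,M \ln\!\sqrt{r^{2}+l_{0}^{2}} , \\
  \rho(r) &\propto \frac{M\,l_{0}^{2}}{\left(r^{2}+l_{0}^{2}\right)^{2}} ,
  \qquad \nabla^{2}\Phi = \frac{1}{r}\frac{d}{dr}\!\left(r\,\frac{d\Phi}{dr}\right), \\
  m(r) &= \frac{k\,M\,r^{2}}{r^{2}+l_{0}^{2}} .
\end{align}

With $k=1$ and $l_{0}^{2}\to 2ML$ the mass profile becomes $m(r)=M r^{2}/(r^{2}+2ML)$, vanishing at the origin and approaching $M$ at large $r$, in line with the Hayward-type behaviour described above.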
Let us now apply the NJ method; however, first we have to rewrite metric (22) in Eddington-Finkelstein-type retarded null coordinates (u, r, φ). This metric can then be decomposed in terms of null tetrads, along with the appropriately defined null vectors. In the above notation, m̄^µ is the complex conjugate of m^µ. In particular, these vectors further satisfy the conditions for normalization, orthogonality and isotropy. Following the NJ prescription and its modified version [14], we perform the complex shift in which a stands for the rotation parameter. Next, let the null tetrad vectors Z^a = (l^a, n^a, m^a, m̄^a) undergo a transformation given by Z'^µ = (∂x'^µ/∂x^ν) Z^ν, where we assume that the functions f(r) and h(r) = r² transform to F = F(r, a, θ) and Σ = Σ(r, a, θ), respectively. This leads to a new metric, where Σ ≡ (r² + a² cos²θ). Since the geometry in three dimensions can be thought of as a slice of the four dimensional geometry, at this point we simply set θ = π/2 in the metric above to arrive at the rotating black hole metric in three dimensions. It is more suitable to further rewrite this metric in terms of Boyer-Lindquist-type coordinates (t, r, φ̂), which are a generalization of the Schwarzschild coordinates. This can be achieved via a transformation involving ∆ ≡ r² f(r) + a². Finally, the rotating AdS3 black hole solution is given in Boyer-Lindquist-type coordinates [henceforth we drop the hat on the φ coordinate], with an expression for F given in [14], where k(r) = h(r). Remarkably, as was shown in [13], one can find a coordinate transformation relating a new time coordinate t̂ to t and obtain the rotating BTZ geometry, where t̂ is the BTZ time coordinate. Furthermore, we can also compare it with the general expression of the rotating BTZ black hole in three dimensions. Introducing a = J/2, it is not difficult to show the corresponding relations between the two forms. Note that we interpret the above solution only as an effective geometry derived by the dimensional continuation and the NJ algorithm. Nonetheless, it is quite interesting that from the dimensional reduction we can end up with a legitimate rotating black hole solution in three dimensions. IV. CONCLUSIONS In this work, we used a regularisation procedure given by the replacement r → √(r² + l₀²), with l₀ being the zero point length, which encodes the quantum gravity effects. We first modified the gravitational and electromagnetic potentials, then found the energy density of the smeared matter distribution and the energy-momentum components of the electromagnetic field. The energy density of the smeared matter distribution is regular and finite at the origin, while the mass function vanishes at the origin and correctly reduces to the mass parameter at large distances. With these expressions in hand, from the Einstein field equations we then obtained an exact charged black hole solution in three dimensional AdS space which reduces to the BTZ solution in the limit l₀ → 0. Moreover, we have shown that the black hole solution is regular at the origin, is asymptotically AdS at large radial coordinate and, interestingly, under specific conditions, due to the quantum gravity effects, has a flat de Sitter core at the origin when r → 0.
Finally, using a dimensional continuation, i.e., by assuming that the three dimensional black hole geometry is a θ = π/2 slice of a static and spherically symmetric four dimensional geometry, we applied the complex coordinate transformation via the NJ algorithm to obtain the rotating black hole solution in three dimensions. In the near future we plan to explore in more detail the thermodynamics, stability, and other properties of the black hole solution reported in this work.
Effects of the fat-tailed ewes' body condition scores at lambing on their metabolic profile and offspring growth Abstract This experiment aimed to evaluate the effect of body condition score (BCS) of fat-tailed Barbarine ewes at lambing on their metabolic profile around parturition and lamb's growth. The experiment was carried out on 69 Barbarine ewes, divided into three groups according to BCS, which were inferior to 2, between 2 and 2.5 and superior to 2.5 for the thin, middle and fat group, respectively. Along the trial, all groups received the same dietary treatment based on hay, pasture and concentrate. Birth weight (Bi-W), weights at 30 and 70 d (W30 and W70) and average daily gains (ADGs) of lambs were recorded. Metabolites were determined at late pregnancy and at the beginning of lactation. Ewes' BCS at lambing had no effect on lambs' Bi-W (P>0.05), which was 3.8, 3.8 and 3.9 kg, respectively, for thin, middle and fat groups. However, W30, W70 and ADG increased with a mother's BCS. A positive correlation between lamb growth parameters and ewe body weight and BCS at weaning was recorded. Energetic metabolites (glucose and triglycerides) and proteic metabolites (creatinine, total protein and urea) were similar among groups according to BCS but significantly different between pregnancy and lactation stages except triglycerides and urea. In conclusion, BCS may be used as dietary management tool during ewe lactation. With the transition from pregnancy to lactation, the content of some metabolites has changed irrespective of BCS; this aspect needs more investigations. Introduction The concept of body condition reflects the amount of body reserves, particularly fat, in the living animal (Kenyon et al., 2014). The body condition score (BCS) better indicates these reserves than live weight alone (Russel et al., 1969;Sanson et al., 1993;Atti and Bocquier, 2007). It has the potential to be a useful management tool for producers to increase animal performance, leading to decisions on when and how to practice nutrition supply to the whole flock or only a part, allowing assessment of animal nutrition level. Farmers may accept and use BCS as a management tool when they understand the benefits that it will provide to their production system. Therefore, it might be expected that ewes of lower BCS will display reduced reproductive performance in comparison with those of greater BCS (Atti et al., 2001;Kenyon et al., 2014). There is an optimum BCS for the flock at each stage of the production cycle. It was shown that females of different mammalian species such as sheep and goats mobilize their reserves in some critical physiological stages (pregnancy and lactation) in order to cover foetus needs and maintain their milk production (Chilliard et al., 1998). They also resort to mobilizing their reserves in the case of feed shortage, especially in dry areas, to meet their energy requirement and survive (Chilliard et al., 1998;Atti et al., 2004;Caldeira et al., 2007). This phenomenon reflects the capacity of ewes to adapt to different conditions while maintaining their vital functions. The fat-tailed sheep breeds like Barbarine are rustic and well adapted to the harsh conditions by using their body reserves (Atti et al., 2004). The amount of the tail fat presents a visible part of the body reserves; for the Barbarine breed, its weight varied from 1 to 4 kg (Atti et al., 2004). For this, a body condition score proper to fat-tailed sheep breeds has been developed (Atti and Bocquier, 2007). 
There are many studies showing the relationship between BCS at mating and reproductive performance for thin-tailed (Griffiths et al., 2016) and fat-tailed sheep breeds (Atti et al., 2001). For the impact of ewe BCS at lambing on lamb growth, the research is abundant but with conflicting conclusions for thin-tailed breeds (Caldeira et al., 2007; Kenyon et al., 2014; Corner-Thomas et al., 2015). However, for fat-tailed breeds, which have an additional body reserve site, results are scarce. From the bibliographic synthesis of Kenyon et al. (2014), ewe BCS could have either no effect on lamb growth from birth to weaning and on weaning weight, or positive effects on these parameters. Given these variations between studies undertaken on thin-tailed ewes over a large spectrum of BCS values, the purpose of the current investigation was to study the effect of the fat-tailed Barbarine ewes' BCS at lambing on their lambs' growth, and we undertook this experiment over a limited spectrum of BCS. The effect of BCS on metabolic status around parturition was also determined. Ewes, diet and experimental design The study was carried out at the experimental farm (Bourebiaa) of the National Institute of Agronomic Researches of Tunisia (INRAT) on 69 head of fat-tailed Barbarine ewes. They were 3-4 years old, averaging 36.7 ± 4.98 kg of body weight (BW), and judged healthy when submitted to mating. They were managed under semi-intensive conditions and naturally mated with fertile Barbarine rams. All animals received the same diet based on pasture, hay and concentrate during pregnancy and lactation (Table 1). The concentrate contained barley, soybean meal and a vitamin-mineral supplement (calcium carbonate, sodium chloride and phosphate) with 14% crude protein. Fresh water was offered ad libitum at all times. The breeding season of sheep extended from the beginning of July until the end of August, so lambing continued from late November to January. The BCS was regularly recorded for ewes every 15 d; it was taken at the lumbar (LS) level and at the caudal (CS) level according to Russel et al. (1969) and Atti and Bocquier (2007), respectively. Both LS and CS were determined by careful palpation; they were performed by two trained technicians and the adopted score value was determined in common agreement. Both BCS values were assessed on a five-point scale, with divisions of 0.25 points at each score. For each ewe, the calculated mean of both scores (LS and CS) was considered to characterize the groups, and then ewes were divided into three groups according to the mean BCS at lambing: thin (BCS lower than 2), middle (BCS between 2 and 2.5) and fat (BCS higher than 2.5). Ewe body weight (BW) and lamb growth control Ewes were weighed every 15 d. The BW and BCS were recorded from mating to lamb weaning. The lamb's number and birth weight (Bi-W) were recorded at birth. Then, lambs were weighed every 3 weeks until weaning at 4 months old. Weights at 30 and 70 d of age (W30 and W70) and average daily gains (ADG) were calculated by extrapolation as follows: ADG Bi-30 = ADG between birth and 30 d; ADG 30-70 = ADG between 30 and 70 d (a computational sketch is given after this paragraph). Blood sampling and haematological analyses From all animals blood samples were collected through jugular venipuncture using heparinized Vacutainer tubes with no additive at two stages, 1 week before lambing (late pregnancy) and at the beginning of lactation (1 week after lambing). 
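A minimal sketch of the weight extrapolation and ADG computation described above; the weighing ages and weights below are hypothetical, and the study's three-weekly weighings are simply interpolated linearly to 30 and 70 d:

import numpy as np

# Hypothetical lamb weighings: age in days and weight in kg
ages = np.array([0, 21, 42, 63, 84])
weights = np.array([3.8, 8.1, 11.9, 15.2, 18.0])

# Linear interpolation of weight to 30 and 70 d of age
w30 = np.interp(30, ages, weights)
w70 = np.interp(70, ages, weights)

adg_bi_30 = (w30 - weights[0]) / 30 * 1000   # g/d between birth and 30 d
adg_30_70 = (w70 - w30) / (70 - 30) * 1000   # g/d between 30 and 70 d

print(round(w30, 2), round(w70, 2), round(adg_bi_30), round(adg_30_70))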
In order to keep the serum samples as fresh as possible, blood samples were centrifuged immediately after collection at 3000 g for 20 min; plasma samples were transferred into plastic tubes of 2 mL and frozen at −20 • C for subsequent metabolite analysis. All haematological analyses were carried out using commercially available kits. For nonesterified fatty acids (NEFAs), the kits were not available. The dosage of glucose and triglyceride levels was carried out using phase kit supplied by Bio-Systems S.A. The dosage of total protein, urea and creatinine was conducted using the kit supplied by Phase Biomaghreb (Tunisia). The absorbance reading was performed by an ultra-visible spectrophotometer (Milton Roy, France). Glucose and triglycerides absorbance was read at 500 nm; however, creatinine, total protein and urea absorbance were read at 492, 546 and 590 nm, respectively. Statistical analysis A one-way ANOVA was used to test the effect of a ewe's body condition score (thin, middle or fat) on a lamb's growth parameters using GLM (general linear model procedure of S.A.S. Institute, 1989). The differences between groups was compared by Duncan's test. In addition, the correlation between the different parameters was determined using the correlation procedure of a statistical analysis system (SAS). Data of ewe metabolites during the two physiological stages of measurements (pregnancy and lactation) were analysed using the MIXED procedure for repeated measures of SAS. The analyses were performed with a ewe's body condition score as a between-subject fixed effect, a physiological stage as a within-subject effect and a random animal effect as subject (experimental unit). For all the tests, the level of significance was 0.05. Ewe body weight, body condition score and body reserves mobilization The descriptive statistics for ewe BCS (LS and CS) and BW (kg) are reported in Table 2. The BW of ewes at lambing was significantly different (P = 0.001) among groups. The mean BW of ewes was 32.8, 36.7 and 41 kg for thin, middle and fat groups, respectively. Ewes with the higher BCS (fat) were heavier than ewes from thin and middle groups by 8.2 and 4.3 kg, respectively. This result confirmed that BW increased with improving BCS; this positive relationship between BW and BCS was previously shown (Atti et al., 1995;Kharrat and Bocquier, 2010;Sejian et al., 2015). Then, ewes' BW and BCS decreased between lambing and weaning (Figs. 1, 2 and 3) until reaching low body scores, being a reflection of body reserve mobilization to cover the lamb's needs along the suckling period (Chilliard et al., 1998). During this period between lambing and weaning, the body weight lost was 2.8 kg in both middle and fat groups, being significantly (p = 0.01) higher for them than thin ones (Table 3), which decreased their body weight slightly (1.1 kg), seeing that they are meagre and they have no reserves to mobilize. This significant decrease in BW, LS and CS during the suckling stage is the consequence of the physiological state, which is common for all females of different mammalian species and breeds who lose body reserves in the beginning of lactation (Chilliard et al., 1998;Beker et al., 2010). Furthermore, for the current study the ewes were undernourished; they were forced to mobilize body reserves to ensure higher milk production for their lambs (Atti et al., 2004;Beker et al., 2009). These results confirmed other studies on goats and sheep. 
It was shown that fat goats lost weight (1.84 kg) and BCS during lactation, while the thin ones increased (+0.43 kg) their BW (Kharrat and Bocquier, 2010). Also, fat ewes at lambing lost body reserves during suckling (Atti et al., 1995). After lamb weaning, all ewes began to increase their BW, LS and CS (Figs. 1, 2 and 3) by using their diets to replenish the reserves lost during pregnancy and lactation. Lamb growth The lamb growth parameters are shown in Table 4. Irrespective of a ewe's BCS, the lamb's Bi-W was similar for all groups, averaging 3.8 kg (P > 0.05). Similar results, where ewes' BW at lambing did not affect lamb growth parameters, were previously reported for the same breed (Atti et al., 2004) and other breeds (Kenyon et al., 2012; Karakus and Atmaca, 2016). However, other works reported that ewes from other breeds with higher BW or BCS at lambing had lambs with higher birth weights (Clarke et al., 1997). Then, the result of the current study can be explained by the fact that thin ewes, even with low BCS, have drawn on their body reserves during pregnancy to support the requirements of conception. The ability of the Barbarine ewes to achieve pregnancy with acceptable performance even in underfeeding conditions has been shown (Atti et al., 2004). For rustic breeds, the birth weight of the lamb may be considered as a breed-defining characteristic, which is less determined by the body condition score of the ewe mother. The relationship between a ewe's BCS at different physiological stages and a lamb's birth weight has been widely examined. Some studies outlined a significant effect (Corner-Thomas et al., 2015; Sejian et al., 2015), while others reported no effect (Aliyari et al., 2012). It is probable that this difference is due to differences in the timing of BCS measurement, ewe nutrition and particularly breed characteristics and ability to mobilize body reserves. For W30, W70 and both ADGs (ADG Bi-30, ADG 30-70), lambs from the fat group had significantly higher values than those of the thin one, while lambs of the middle group had intermediate values. These differences may be explained by the higher milk production, which is the result of a higher reserve mobilization of fat ewes compared to thin and middle ones (Atti et al., 1995). This phenomenon was reported for thin-tailed (Caldeira et al., 2007) and fat-tailed breeds (Atti et al., 2004), where the ewes in higher body condition used their body reserves to cover their energy requirement, even in undernutrition, to maintain a high level of milk production to suckle their lambs. In the same context, Kharrat and Bocquier (2010) showed that thin goats increased their energy intake to maintain milk yield near to that of fat ones. Karakuş and Atmaca (2016) recorded similar results, although not statistically significant; they showed that lambs issued from ewes with the highest BCS (3.5) had higher live weights, between 30 and 120 d of age, than lambs from BCS 2.5 and BCS 3.0 ewes. For lambs of all groups in the current study, the ADG 30-70 was significantly lower than the ADG Bi-30. This phenomenon may be the result of the low nutrient availability in the second stage, which did not provide enough energy to support the same growth as the mother's milk. The ADG Bi-30 demonstrated the maternal capacity to rear the offspring, while the ADG 30-70 reflects the lambs' own growth potential, since the ewes at this stage (30-70 d) no longer had reserves, as in the first period (Bi-30 d), to produce milk for their offspring. 
This tendency was reported for the same breed for which the lamb's ADG 30−70 was frequently lower than the ADG 10−30 in correct and undernutrition conditions (Atti et al., 2004). Positive and significant correlations were recorded between W30, W70, ADG Bi-30 and ADG 30−70 on the one hand and BW, LS and CS at lambing on the other ( Table 5).The lamb W30, W70 and both ADGs were significantly correlated with ewes' BW, LS and CS at weaning ( Table 6). The correlation coefficients varied between 0.414 and 0.645. Significant correlations were recorded between the BW variation ( BW) between lambing and weaning and lamb W30 and W70. This result should be taken into account in the operation of culling ewes where they lost more BW and BCS during suckling to permit more growth for their offspring and should be maintained in the flock even if they are have poor BW or BCS at lamb weaning. In the other situation, it was reported that BCS had no significant effect on a lamb's weaning weight (Kenyon et al., 2011). The different conclusions among studies could result from differences in nutrition during pregnancy and lactation (Karakus and Atmaca, 2016) or from the feeding level and feed quality offered (Kenyon et al., 2014). Also the breed characteristics could affect behaviour; in fact, genetic and maternal factors influenced foetal development and account for over 30 % of the variation in birth weight (Johnston et al., 2002). Ewe metabolic profile The results of metabolic profile according to a ewe's BCS around parturition were shown in Table 7. Concentrations of blood metabolites for ewes in this study were consistent with the normal range for healthy sheep. In the current study, all metabolites were not affected by the ewes' BCS. Similar results were found, where ewes with a different BCS did not affect glucose level (Jalilian and Moeini, 2013). However, Caldeira et al. (2007) recorded different metabolic status for ewes with a different BCS with lower glycaemia for thin (BCS between 1 and 2) than fat animals (BCS between 3 and 4). In addition, Mazur et al. (2009) showed lower values for Caldeira et al. (2007) where it increased in case of undernutrition and with low body condition scores (1 and 2). Similarly, total protein and urea were unaffected by a ewe's BCS. However, in previous studies, BCS appears to influence protein metabolism after lambing or calving where sheep and dairy cows with higher BCS showed an increase in urea level than thinner ones (Karapehlivan et al., 2007). In the current study, the similarity of metabolite concentrations irrespective of the BCS may be explained by the same ewe management conditions and/or the high rusticity, resilience and adaptation of Barbarine ewes and generally the fat-tailed breeds to harsh conditions. Especially for creatinine, which is an indicator of protein or muscle catabolism, similar concentrations means that even with low BCS ewes are able to cover the foetus requirement and do not need to use their muscles. Irrespective of the ewe's BCS, for the energetic metabolites, the physiological stage (late pregnancy and lactation) significantly affected (p = 0.001) the content of glucose; however, triglycerides level was unaffected. The results found for glucose level are in agreement with those reported in other works (Caldeira et al., 2007) for pregnant ewes (0.42-0.76 mmol L −1 ) and for lactating ones (0.41-0.65 mmol L −1 ; Dubreuil et al., 2005). 
The decrease in glucose concentration for all groups in lactation compared to pregnancy could be explained by the higher demand, related to needs, of glucose in postpartum than that during preg- nancy (Block et al., 2001). These results corroborate other studies for goats suggesting that glucose is critical molecule for meeting a goat's nutritional requirement during lactation (Cepeda-Palacios et al., 2018). In fact, this phenomenon is related to the increase in milk production, which involves mobilization of glucose for the synthesis of milk lactose (McNeill et al., 1998), which was also confirmed for cattle (Bach, 2012). The triglycerides concentration between pregnancy and lactation (0.51 and 0.42 mmol L −1 , respectively) was comparable to usual values reported by Mollereau et al. (1995). There was a slight decrease in triglycerides level, but it was not significant. This phenomenon may be explained by the transition of triglycerides in the milk of lactating ewes because the milk fat is composed essentially of triglycerides (Nazifi et al., 2002). Then, triglycerides in the blood are fuel sources that are consumed when energy requirements increase during pregnancy and lactation (Nazifi et al., 2002;Caton and Hess, 2010;Pesántez-Pacheco et al., 2019). These responses of the energetic metabolites through suckling are the results of lipid mobilization (Mazur et al., 2009) to cover the high-energy needs during this physiological stage (Chilliard et al., 1998). Concerning the proteic metabolites, creatinine and total protein were significantly affected by the ewe's physiological stage, while urea level was unaffected. The lactating ewes had a higher creatinine concentration than pregnant ones (9.08 vs. 5.27 mmol L −1 , respectively). Yokus et al. (2006) recorded the same tendency but without significant difference. Moreover, Roubies et al. (2006) reported a significant influence of the reproductive stage on creatinine concentration and attributed this difference to the development of the foetus musculature. The total protein concentration during lactation was significantly higher than that dur-ing pregnancy. Jelinek et al. (1985) recorded the same tendency with same value during lactation progress (from 58.7 to 64.5 mmol L −1 ). However, Celi et al. (2008) found that total protein level was significantly lower after parturition than in pregnant goats. It was shown that the decrease in the blood protein for goats is due to its removal from the blood stream in order to support mammary secretion after parturition (Chen et al., 1998).The urea concentrations are within the norms of Ndoutamia and Ganda (2005) for pregnancy (0.20-0.30 mmol L −1 ) and for lactation (0.32+0.17 mmol L −1 ) but were not affected by the physiological stage. This parameter is related to the importance of protein intake in the diet and especially the protein efficiency for small ruminants (Friot and Calvet, 1973). Indeed, for the current study the diet level and its protein content were similar in late pregnancy and the beginning of lactation. The proteic metabolites of ewes were in the normal range for healthy sheep; then, these results indicate that the nutritional management of the ewes was appropriate regardless of the normal changes related to the physiological stages. The correlation test between metabolic profile and lamb growth parameters showed a positive and significant correlation between ADG Bi-30 and glucose (r = 0.266; P = 0.036). 
However, a negative correlation between ADG Bi-30 and triglycerides approached significance (r = −0.337; P = 0.063, Table 8). Zywicki et al. (2016) showed that foetal plasma glucose and triglyceride levels were directly related to foetal weight (P < 0.0001), while Hu et al. (1990) found that the total weight of lambs born was negatively related to plasma glucose concentration (r = −0.22; P < 0.01). The ADG 30−70 tended to be positively related to glucose and triglyceride levels during lactation. Total protein level during pregnancy was inversely correlated with lamb birth weight (r = −0.276; P = 0.054). In contrast, in previous studies (Addah and Karikari, 2008), the relationship between total protein and birth weight was nearly linear (r = 0.93; P < 0.05). It has been reported that maternal body protein can serve as a major source of protein supporting visceral organ metabolism and foetal growth, without a significant effect on the maternal body, under moderate levels of undernutrition but not under chronic undernutrition (Robinson et al., 1999). Creatinine concentration during lactation tended to be positively correlated with W30, W70 and ADG Bi-30 , but no relationship between lamb growth parameters and urea level was observed, as previously shown (Hu et al., 1990). Lamb growth parameters were not related to creatinine level during pregnancy, whereas Zywicki et al. (2016) reported that foetal plasma creatinine levels were inversely related to foetal weight (P < 0.0001).
Conclusions
This study showed that fat-tailed ewes mobilize their reserves during pregnancy to cover the requirements of conception and reach lambing even with a low BCS, producing lambs with similar birth weights. However, lamb growth rate and weight at 30 and 70 d were higher for the offspring of ewes with middle and high BCS. Since BCS can be used as a dietary management tool after lambing, a high feeding level is required at this stage to meet suckling needs in general, but especially for ewes thinner than the middle- and high-BCS ones. These results did not show any relationship between a ewe's BCS at lambing and her metabolic profile; this aspect should be studied further with more frequent blood sampling. Data availability. The original data of the paper are available upon request to the corresponding author. Author contributions. NA designed the experiment and revised the article, and YY carried out the experiment and wrote the first draft of the article.
2020-06-25T09:10:01.673Z
2020-06-24T00:00:00.000
{ "year": 2020, "sha1": "1a5669ecdd6a63dd9a1650bc8196dc1bfd9f0324", "oa_license": "CCBY", "oa_url": "https://aab.copernicus.org/articles/63/183/2020/aab-63-183-2020.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6bb7ec951f5137c184aadcb9c648eb02ae1176ae", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
19106674
pes2o/s2orc
v3-fos-license
Discrete light localization in one dimensional nonlinear lattices with arbitrary non locality We model discrete spatial solitons in a periodic nonlinear medium encompassing any degree of transverse non locality. Making a convenient reference to a widely used material -nematic liquid crystals-, we derive a new form of the discrete nonlinear Schrodinger equation and find a novel family of discrete solitons. Such self-localized solutions in optical lattices can exist with an arbitrary degree of imprinted chirp and a have breathing character. We verify numerically that both local and non local discrete light propagation and solitons can be observed in liquid crystalline arrays. I. INTRODUCTION Optical energy localization in lattices has become an important branch of contemporary nonlinear science, due to a wealth of basic physics and potentials for light switching and logics [1,2,3] (and references therein). Attention has been recently devoted to light propagation in the frame of tunable discreteness -i.e. in lattices with an adjustable period, index contrast, nonlinearity. Examples to this extent are waveguide arrays in photorefractives [4], and droplet arrays of a Bose Einstein condensate [5,6,7,8]. In nematic liquid crystals (NLC), materials encompassing a large non resonant reorientational response, spectrally extended transparency, strong birefringence and mature technology [9], excitation as well as switching/steering of discrete solitons has been reported in voltage-tunable geometries [10,11,12]. Discrete solitons in NLC result from the interplay between evanescent coupling (owing to discreteness), molecular nonlinearity (leading to progressive mismatch as the extraordinary index increases) and non locality (owing to intermolecular elastic forces). The latter aspect, despite its role in several systems [13,14,15,16,17,18,19,20] including NLC [21,22], has only been discussed in the framework of 1D discrete lattices with reference to first order contributions [18] and long range dispersive interactions [17,19,20], a general description of discrete solitons in the presence of a transverse nonlinear non locality being still lacking. In this paper, for the first time and with explicit reference to a physical system of interest -i. e., nematic liquid crystals-we model discrete light localization in media with an arbitrary degree of non locality, elucidating the interplay between non locality and nonlinearity on soliton dynamics. Using coupled mode theory to derive the governing equations (CMT), [23] we demonstrate the existence of a new family of chirp-imprinted discrete * Electronic address: frataloc@uniroma3.it † Electronic address: assanto@uniroma3.it Ex is applied trough an electrode-array across x and alters the mean molecular angular orientation θ(x, y), inducing an index modulation with the same period Λ. The top graphs sketch the distribution Ex versus y (left) and versus propagation z (right), respectively. breathers which could not be sustained by a purely local response. Finally, we verify the theoretical predictions by numerical experiments with a standard NLC. The paper is organized as follows. Section II introduces a model of liquid crystals and carries out an original reduction to a non integrable discrete nonlinear non local Schrödinger equation, outlining the novelties with respect to previous studies dealing with non locality. In Sec. 
III we apply a variational approach using a convenient soliton ansatz and derive the differential equations for the evolution of soliton parameters in propagation. We highlight the impact of non locality on soliton generation and demonstrate a novel family of discrete chirped solitary waves, never reported before. Finally, in Sec. IV, we perform a full numerical simulation of the actual liquid crystalline system and demonstrate the excellent agreement with our analytical predictions. We conclude by emphasizing how the examined NLC-lattice offers the rare possibility to observe both local and non local light propagation in one and the same system. II. THEORETICAL APPROACH We consider light propagation in a thin film planar waveguide of nematic liquid crystals, subject to a periodic transverse modulation along y and across x (Fig. 1). NLC consist of rod-like molecules which, electrically polarized across x, react to and reorient towards the field vector in order to minimize the free energy [9]. Under "planar" anchoring conditions at top and bottom interfaces of the cell, the mean angular orientation of the NLC molecules (i.e., the molecular director) is conveniently described by their angle θ with the axis z in the plane (x, z), as sketched in Fig. 1. This identifies the NLC optic axis with respect to the propagation wavevector of a light beam injected in the cell. If n 2 a = n 2 − n 2 ⊥ is the NLC optical birefringence (with n and n ⊥ along or orthogonal to the director, respectively), an electric field (static or low frequency) applied across x, constant in z and periodic along y ( Fig. 1), can reorient the director and determine a one-dimensional optical lattice with index modulation n 2 (x, y) = n 2 ⊥ + n 2 a sin 2 θ(x, y) for e-polarized light. We assume an applied electric field E x (x, y) = E 0 · 1 + ǫF (y), with a zero mean-value F (y) = F (y + Λ) and an arbitrary ǫ < 1. In actual experiments, E x is determined by the bias V (x, y) ∝ x(1 + V (y)), with V (y) = V (y + Λ) applied through an array of parallel finger-electrodes [10]. In the framework of the elastic continuum theory [9], the director distribution θ 0 (x, y) at rest -i.e., with no injected light -can be obtained by minimizing the NLC energy functional K being the NLC elastic constant (single constant approximation [9]) and ∆ǫ RF the (low frequency) anisotropy. When an e-polarized optical beam of slowlyvarying envelope A propagates in the medium, Eq. (1) modifies into: The overall director distribution can be written as θ(x, y) = θ 0 (x, y)(1 + ψ(x, y)), with θ 0 (x, y)ψ(x, y) the nonlinear optical contribution. In usual experiments θ 0 is small (≤ 0.4), hence first order approximations are justified [10]. The non locality (linked to the ∇ operator in Eqs. (1)-(2)) has a different impact along x and y, respectively, due to the strong asymmetry of the problem (Fig. 1). A bell-shaped beam with x-waist comparable with the cell thickness d does not experience a non local response along x, owing to the planar anchoring with θ 0 ≈ 0 in x = ±d/2. Conversely, along y no anchoring is present and the index perturbation is free to widen. After substituting θ into Eqs. (1)-(2), assuming the response to be weakly nonlinear (ψ ≪ 1) and local in x (∂ 2 x (θ 0 ψ) ≈ 0), we obtain: Eq. (3) models the all-optical response of the NLC lattice. It can be cast in the integral form ψ = G(ζ − x, η − y)|A(ζ, η)| 2 dζdη, with G(x, y) the Green function. 
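As a rough numerical illustration of this integral (Green-function) form of the response, the sketch below computes the reorientation profile as a one-dimensional convolution of the beam intensity with a kernel of adjustable width along y. The exponential kernel shape and all numerical values are assumptions introduced purely to visualize how a wider kernel yields an index perturbation broader than the exciting beam; they are not derived from the equations above.

```python
# Illustrative sketch: nonlinear response as a convolution of |A|^2 with a
# Green-function-like kernel along y. Kernel shape and parameters are assumed.
import numpy as np

y = np.linspace(-50e-6, 50e-6, 2001)          # transverse coordinate (m)
dy = y[1] - y[0]
intensity = np.exp(-(y / 3e-6) ** 2)          # narrow Gaussian beam, |A|^2

def response(intensity, sigma_y):
    """psi(y) ~ sum_k G(y - y_k) |A(y_k)|^2 dy with an exponential kernel."""
    kernel = np.exp(-np.abs(y - y.mean()) / sigma_y)
    kernel /= kernel.sum() * dy               # normalize the kernel area
    return np.convolve(intensity, kernel, mode="same") * dy

psi_local = response(intensity, sigma_y=0.5e-6)    # nearly local regime
psi_nonlocal = response(intensity, sigma_y=10e-6)  # highly non local regime
print(psi_local.max(), psi_nonlocal.max())         # wider kernel -> flatter, broader psi
```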
For a sufficient bias to induce an array of singlemode channel waveguides with nearest-neighbor coupling, CMT (in the tight-binding approximation) yields the z evolution of each eigenmode: , f Q and β the modal eigenfunction and eigenvalue, respectively, |Q n | 2 the mode-power in the nth-channel, C the coupling strength and: . Factorizing the Green function as G(x, y) = G(x)G p (y)G e (y), with G p (y) = G p (y + Λ) and the envelope G e (y) wider than the guided mode, the integral (5) becomes Γ m+n = G e [(m + n)Λ]Υ, with: To proceed with the analysis, we need to calculate the Green function from Eqs. (1) and (3). By set- (3) with (1) can be cast in the dimensionless form: In order to solve Eq. (7) above, we separate the variables X and Y by letting 3α 4θ 2 r = υ x + υ y and obtain a simple form of Hill's equation in Y: with ϑ(X, Y ) = ϑ(X)ϑ(Y ) and ϑ(X) ∝ sin(πX + π 2 ) corresponding to a harmonic oscillator across X. According to Floquet theory, the periodic solutions ϑ(Y ) = ϑ(Y +1) are located on a transition curve [24]. We adopt the perturbative method of strained parameters [24,25], performing the following expansion: with ϑ m (Y ) = ϑ m (Y + 1). By collecting and equating terms of the same order in ǫ, we find: Substituting Eq. (14) into Eq. (8), for ϑ(X) ≈ constant within the effective modal width, we get: and a periodic h m (Y ) = h m (Y + 1) the expression of which stems from Eqs. (11)- (14). For u 0 = 0 Eq. (16) is Hill's equation; otherwise for u 0 = 0 its solutions can be found through the expansion: Substituting and equating terms of the same order in ǫ, we have: At each order the solution can be factorized in the form g(Y ) = g e (Y )g pn (Y ), i.e. an envelope g e (Y ) modulated by the periodic function g pn (Y ) = g pn (Y + 1), with: Therefore, the generic solution g(Y ) can be written in the form g n (Y ) = g e (Y )g p (Y ), with a peaked envelope g p (Y ) and a periodic modulation g p (Y ) = n ǫ n g pn . Now, after introducing the (dimensionless) fields q n = locality in the nonlinear response) and, thereby, it is expected to possess a radically distinct dynamics with respect to its linear counterpart (see Sec. III) and to non local models in the frame of nonlinear photonic crystals ( [13] and references therein). The lack of propagation terms in Eqs. (8)-(9) of [13], for instance, does not allow to study the system evolution (as investigated hereby). Hence, equation (22) can be regarded as a novel general model of discrete, dispersive, nonlinear non local media. III. ANALYSIS OF THE DISCRETE MODEL Equation (22) can be also derived using the variational principle and the following Lagrangian L: We adopt the peaked ansatz q n (ξ) = A q (ξ) exp[iϕ(ξ) + ib(ξ)|n − n 0 | − µ(ξ)|n − n 0 |] with n 0 = 0, obtaining the effective Lagrangian L ef f : with non local contribution N (µ, κ): as calculated with a bilateral Z-transform. Setting to zero each variational derivative of soliton parameters (A q , ϕ, b, µ), we obtain the evolution of both soliton chirp b and soliton width µ. After some algebra: with the soliton power W 0 = n |q n | 2 = A 2 q coth µ. Equations (26)-(27) define a two-dimensional phasespace with the conserved Hamiltonian H ef f : tanh(κ/2) tanh 2 µ(2 sinh 2µ + sinh κ + sinh(4µ + κ)) 4 sinh 2µ sin 2 (µ + κ/2) Solitons correspond to stationary points of Eqs. (26)- (27) with b = 0 and: Since ∂W0(µ,κ) ∂µ > 0, all fixed points representing solitons are stable. [26] As shown in Fig. 
2a, the existence curve (29) of discrete solitons rapidly approaches the local Kerr-case for diminishing non locality (i.e., increasing κ > 3). As non locality is enhanced and κ reduces towards and below 1, however, the refractive perturbation becomes broader and broader (in y) and W 0 larger and larger. Substantial changes are visible near the soliton solution, as displayed in the phase-plane of Eqs. (26)- (27) in Fig. 2(b)-(d). In a Kerr regime (κ → ∞) the phase-plane consists of a series of periodic orbits near the localized state (µ = 1.5, b = 0) and µ tends to zero for higher chirps b (Fig. 2b). Therefore, the addition of an initial chirp above a certain value -i.e., enough chirp imprinting-destroys the soliton [7,27,28]. The situation keeps unchanged as κ ≥ 3 (Fig. 2c). In the non local regime (κ = 1.0), conversely, the trajectories evolve from a closed-loop to a limit-cycle, hence no chirp-imprinting can break the soliton (Fig 2d). This remarkable finding is confirmed by numerical simulations, as visible in Fig. 3. While a local system cannot sustain discrete light localization with an input spatial chirp above a threshold (Fig. 3a), non locality allows for the propagation of chirped discrete solitary waves (Fig. 3b) with periodically varying width, as predicted by our model. Clearly, the soliton amplitude oscillates as well in order to conserve the total power W 0 . These novel solutions belong to the class of discrete breathers and are the lattice counterparts of the continuously breathing solitons reported in highly non local bulk NLC [22]. IV. NUMERICAL RESULTS WITH THE NLC LATTICE In order to link the analysis to an actual NLC lattice, we need to estimate the range of available κ. As it stems from the model above, the nonlinear index change has a peaked envelope of transverse size κ = Λ Rc ∝ ΛV / √ Kd (being E x (x) ≈ V /d and θ r ≈ const) and, therefore, non locality can be tuned by acting on either one of the form-factor Λ/d, the bias V , the elastic constant K, the temperature [29]. With reference to a standard NLC (nematic 5CB, with K = 3.8 × 10 −12 N), to evaluate the Green function along Y we employed an optical (λ = 1.064µm) excitation A(x, y) consisting of a Dirac distribution across y and a Gaussian beam of waist ≈ d across x. We numerically integrated Eqs. (1) and (2) using the relevant potential distribution (see Ref. [11]), and finally derived the size κ versus σ ≡ ΛV / √ Kd by fitting the calculated reorientation profile. As the material is tuned from local ( In the latter analysis we employ both geometric (d/Λ) and material (V ) tuning, slightly adjusting the bias to keep θ r ≈ 0.35. This results in a quasi-linear transition (see Fig. 4c) from a local to a non local response as σ varies. The corresponding transverse size of the non local response, represented by dots in Fig. 4c, shows that theory and numerics are in excellent agreement in the range covering local (κ = 3.0) to non local (κ = 1.0) responses. Moreover, the all-optical reorientation across x, vi sible in Fig. 4d, is nearly sinusoidal (dashed line) and does not widen significantly when the nonlinearity intervenes (solid line), supporting the validity of the local approximation previously adopted. V. CONCLUSIONS In conclusion, for the first time to the best of our knowledge and with specific reference to nematic liquid crystals and their reorientational response, we have modeled discrete light localization in a nonlinear medium with an arbitrary degree of transverse non locality. 
Starting from the governing equation of the liquid crystalline system, we performed an original reduction to a novel general form of discrete nonlinear non local Schrödinger equation. Remarkably, the latter result was not achieved by introducing a specific a priori form of non locality [17,18,19,20], but rather one derived from the molecular response of NLC. We employed a variational procedure and investigated the role of non locality in supporting chirp-imprinted discrete spatial solitons. Such novel solutions are periodic breathers and cannot exist in purely local systems. Since the degree of non locality in NLC arrays can be adjusted by acting on geometric, material or external parameters [29], we anticipate that our findings will trigger the observation of discrete light propagation in both local and non local regimes in one and the same system. Our numerical experiments, in excellent agreement with the theoretical predictions, fully support this possibility. We acknowledge enlightening discussions with C. Conti and D. Levi.
2018-04-03T02:07:31.541Z
2005-09-05T00:00:00.000
{ "year": 2005, "sha1": "84469e226bfc6acf124859dc4b9b6ad04b92c993", "oa_license": null, "oa_url": "http://arxiv.org/pdf/nlin/0509011", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7028d8707f3d7f83ecdf0da9d6c9628a2e2c245f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
11695541
pes2o/s2orc
v3-fos-license
A Combined Independent Source Separation and Quality Index Optimization Method for Fetal ECG Extraction from Abdominal Maternal Leads The non-invasive fetal electrocardiogram (fECG) technique has recently received considerable interest in monitoring fetal health. The aim of our paper is to propose a novel fECG algorithm based on the combination of the criteria of independent source separation and of a quality index optimization (ICAQIO-based). The algorithm was compared with two methods applying the two different criteria independently—the ICA-based and the QIO-based methods—which were previously developed by our group. All three methods were tested on the recently implemented Fetal ECG Synthetic Database (FECGSYNDB). Moreover, the performance of the algorithm was tested on real data from the PhysioNet fetal ECG Challenge 2013 Database. The proposed combined method outperformed the other two algorithms on the FECGSYNDB (ICAQIO-based: 98.78%, QIO-based: 97.77%, ICA-based: 97.61%). Significant differences were obtained in particular in the conditions when uterine contractions and maternal and fetal ectopic beats occurred. On the real data, all three methods obtained very high performances, with the QIO-based method proving slightly better than the other two (ICAQIO-based: 99.38%, QIO-based: 99.76%, ICA-based: 99.37%). The findings from this study suggest that the proposed method could potentially be applied as a novel algorithm for accurate extraction of fECG, especially in critical recording conditions. Introduction Fetal heart rate (FHR) monitoring during pregnancy is clinically relevant and can be obtained with several invasive or non-invasive techniques, including Doppler ultrasound fetal magnetocardiography (FMCG) and fetal electrocardiography (FECG). Doppler ultrasound is the traditionally applied technique for monitoring the fetus during pregnancy. It can usually identify and measure embryonic heartbeat by six weeks [1], while exams performed between 18 and 22 week of gestation allow screening for most fetal cardiac anomalies [2]. In particular, congenital heart disease (CHD) is the most common congenital anomaly worldwide [3], with an incidence estimated at 6-12 cases per 1000 live births [4]. Early diagnosis of CHD during pregnancy can increase survival rates of the fetus and decrease long-term morbidity in both ductus-dependent and foramen ovale-dependent CHD [5]. However, the probability of ultrasound to accurately detect CHD ranges from 65% to 81%, with a significant part of events missed. Inaccuracies in ultrasound detection can be due to the complex anatomy of the fetal heart, its movement, small size and mixing maternal and fetal heart rate. Higher detection rates can be achieved using three-and four-dimensional ultrasonography [6]. The disadvantages of these techniques are that they require expert personal and that they are extremely expensive. FMCG can be recorded reliably from the 20th week onward. Compared to ultrasound it has a higher resolution and higher signal quality, allowing an assessment of PQRST complex alterations, and detecting fetal arrhythmia. Early diagnosis of fetal arrhythmia permits an appropriate therapeutic intervention and the reduction of unexplained fetal death in late gestation [7]. Compared to ultrasound and FMCG, fetal electrocardiography (fECG) is more cost-effective and provides additional useful information for an accurate evaluation of the fetal status, having the potential to provide FHR data with beat-to-beat accuracy. 
fECG can be performed invasively using intra-uterine electrodes, which have a direct contact with the scalp of the fetus. Despite providing a high quality fECG, this technique has consistent drawbacks due to its invasive nature and its limited applicability during labor. Recently, non-invasive fECG recording has received considerable interest in monitoring fetal health. With this modality, signals are recorded by multiple electrodes placed on the abdomen of the mother. Non-invasive fECG has numerous advantages including the safeness, the possibility of long-term continuous monitoring, the wide time-range of applicability (from 18 weeks of gestation) and the relative low cost. On the other hand, several technical challenges prevent a direct usability of the acquired signals. In particular, the fetal signal has low amplitude and is mixed with several sources of noise and interference [8] such as maternal ECG (mECG), baseline drifts, power line, muscle electrical activity (EMG), maternal respiration, motion artefacts and electrode contact noise. These sources of noise often have higher amplitude than the fECG so the signal to noise ratio (SNR) is low. Fetal ECG amplitude can vary depending on several factors of the recording setup. For example, skeletal muscle artifacts introduce high frequency components between 10 Hz and 500 Hz during skeletal muscle activity, in particular during a contraction, masking fECG. In addition, fetal movements can result in a different orientation of the fetal heart vector with respect to the electrode grid, changing the amplitude and morphology of the measured signals. In addition, fECG often overlaps in time and in frequency with mECG and the other noise components so the extraction of fECG from abdominal leads results in a very challenging task. Importantly, it should be noted that between the 28th and 32th weeks of gestation the recording of surface fECG is unfeasible due to the formation of the vernix caseosa, a thin fatty layer which almost electrically shields the fetus [9]. However, for normal pregnancies (non-premature deliveries), the layer slowly dissolves in the 37th to 38th weeks of pregnancy [10]. For these reasons, different signal-processing methods have been implemented to extract fECG from abdominal mixtures (for a review see [11]). Methods for fECG extraction can be broadly divided into two groups: mECG canceling and blind source separation (BSS). The first group includes the regression-based methods which use the mECG as reference input to estimate and cancel its contribution on abdominal signals. This task can be performed in a continuous way by adaptive filtering (AF) or in a beat-by-beat mode by template subtraction (TS). AF [12,13] uses maternal reference channels to estimate their projection onto each abdominal signal [13][14][15][16]. TS can be considered as a special case of AF that uses as reference input signal an impulse sequence synchronized with maternal beats [12,17]. However, TS is usually implemented by first estimating the mean contribution of the maternal cycle (template) and then subtracting it from the mixture of abdominal signals [18,19]. A powerful method to estimate the contribution of each single maternal heart beat to the abdominal signal is to use a reduced space approximation by Principal Component Analysis (PCA) which can be implemented by Singular Value Decomposition (SVD) [20][21][22][23]. 
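To illustrate the adaptive-filtering (AF) family of mECG-canceling methods mentioned above, the sketch below gives a generic least-mean-squares (LMS) canceller that uses a maternal reference channel to estimate and subtract the maternal projection from one abdominal lead. The filter length and step size are arbitrary illustrative choices, and the snippet is a textbook formulation rather than the implementation of any cited work.

```python
# Minimal LMS adaptive noise canceller: estimates the maternal ECG component
# of an abdominal lead from a maternal reference channel and subtracts it.
import numpy as np

def lms_cancel(abdominal, maternal_ref, n_taps=32, mu=1e-3):
    w = np.zeros(n_taps)                       # adaptive filter weights
    residual = np.zeros_like(abdominal)
    for n in range(n_taps, len(abdominal)):
        x = maternal_ref[n - n_taps:n][::-1]   # most recent reference samples
        y_hat = w @ x                          # estimated maternal contribution
        e = abdominal[n] - y_hat               # residual (fetal ECG + noise)
        w += 2 * mu * e * x                    # LMS weight update
        residual[n] = e
    return residual
```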
BSS aims at separating the different components of the abdominal mixture without a priori knowledge of the signal, but according to the statistical properties of the data. Commonly used approaches are PCA or SVD [24,25] and Independent Component Analysis (ICA) [26][27][28]. Hybrid methods consisting in a combination of the previously described methods have also been developed. Some approaches have been implemented, for example combining TS and BSS [29,30]. Recently our research group has contributed to the challenging task of extracting fECG from abdominal maternal signals by developing two different signal-processing methods. Our first algorithm was developed for the PhysioNet/Computing in Cardiology Challenge 2013 (CinC 2013), which promoted the development of accurate and robust algorithms for estimating fHR, fetal interbeat intervals, and fetal QT intervals from multichannel maternal ECG recordings. Further details about the CinC 2013 can be found in [31], which discussed the background issues, the design of the Challenge, the key achievements, and the follow-up research. This first algorithm that we developed, called the ICA-based method, belongs to the hybrid group of methods, being based on the sequence of ICA, mECG canceling (in particular TS) and a second ICA [23] and obtained the top official scores during event 1 and event 2 concerning fetal heart rate and fetal interbeat intervals estimation section [32]. However, there was one non-official higher score, as reported in [31], obtained by Behar and colleagues [33], which also implemented a hybrid method based on the fusion of several different techniques of source separation (including PCA, template subtraction, and ICA). Despite the high performance of our first algorithm, for a few records it failed in extracting a sufficiently clean fECG. This drawback could be ascribed to the model order selection problem. In theory, the number of independent sources is undetermined and always higher than the number of acquired signals, thus ICA method works in a sub-optimal contest and it will be able to separate the independent sources of interest only if their components in the acquired mixed signals have higher power greater than the others. To face this problem, we implemented a second algorithm, the quality index optimization (QIO)-based method, which a priori takes into account specific characteristics of the fECG rather than only the unspecific independence of sources [34]. In this second method, two quality indexes (fQI and mQI) were devised, which discriminate the two components of interest (fECG and mECG, respectively) from noise sources. These indexes were built exploiting the morphological and temporal characteristics of the fECG/mECG signals. An optimization procedure based on the Nelder-Mead algorithm was then applied in order to find a linear combination of abdominal signals, which maximizes these indexes. This method can be considered a novel approach for fECG extraction, which attempt to find the fetal QRS (fQRS) in the abdominal mixture by exploiting its characteristics. In our recent paper [34] this QIO-based method was compared with the ICA-based one on the same dataset extracted from the PhysioNet Challenge 2013 Database and it outperformed the ICA-based approach for most of the records. However, when comparing the record-by-record performance of the ICA-based and QIO-based methods we observed that, although generally the QIO-based method outperformed the ICA-based one, for some records, the opposite was true [34]. 
Thus, we hypothesized that a combination of the two approaches could improve the performance in fQRS detection and reduce the number of records with low performance. Indeed, as the QIO-based method and the ICA-based approach use different information in separating fECG, an integration of the two criteria should lead to an improvement in the performance in fECG extraction. In this paper, we propose a novel method, which combines both the criteria: that based on the independence of sources and that based on the a-priori specific characteristics of sources. In the following section the algorithm proposed in this paper will be referred as "ICAQIO-based" method. Recently a Fetal ECG Synthetic Database (FECGSYNDB) for benchmarking of fetal ECG extraction and detection algorithms has been developed [35]. Data were generated using the fetal ECG synthetic simulator (fecgsyn, [36]), which can generate maternal-fetal ECG mixtures with realistic amplitudes, morphology, beat-to-beat variability, heart rate changes and noise. In [35] the authors also evaluated their methodology by testing some common fECG extraction methods on the developed database. Successively another research group has used the FECGSYNDB to evaluate the performance of its fECG extraction algorithm based on sequential total variation denoising [37]. The aim of our paper is to propose the novel "ICAQIO-based" method and evaluate its performance on the recently developed FECGSYNDB as well as on the PhysioNet CinC 2013 Database, on which we already tested our previously developed algorithms. In order to test the improvement achieved by the combination of the independence of sources and the quality index optimization with the performances obtained applying the two criteria separately, we compare the performances of the "ICAQIO-based" method with that of the "ICA-based" and "QIO-based" approaches. Data The FECGSYNDB on which algorithms were tested consists of 1750 synthetic signals. The database is structured in seven cases resembling different physiological events (see Table 1) and for each case five different levels of additive noise are included (0, 3, 6, 9, and 12 dB). In addition, for each case ten different heart dipole models were generated by randomly selecting one of the nine vectorcardiograms available in the fecgsyn toolbox. Finally, simulations were repeated five times to obtain a more representative database. Each simulation is 5 min long for a total of 145.8 h, it is sampled at 250 Hz with a 16-bit resolution and it is projected onto 34 channels (32 abdominal and two mECG reference channels). For details about the parameters used for the simulation see [35,36]. In order to compare the performance of our algorithms with that of the ones already tested on the same database, we selected the same combination of channels of the previous study [35]. In particular, among the different combinations used, we selected the one with 4 channels (1, 11, 22 and 32). This choice was motivated by the fact that it allows ICA to achieve the separation of sources without loading excessively the mother with abdominal channels. In addition, the performance of the Nelder-Mead algorithm improves in low dimension [38]. We also selected channels 33 and 34, which are the mECG reference channels. It should be noticed that, compared with the PhysioNet CinC 2013 Database, in the FECGSYNDB there are reference mECG channels, so the algorithms do not include the mECG extraction step as our previous implementations [23,34]. 
Among the seven cases of the database we discarded case 5, since the QIO-based approach is not suitable for detecting more than one fECG, as explained in the following paragraphs. Moreover, we selected only the two highest levels of noise (i.e., SNR = 0 and SNR = 3 dB) because, according to previous results [35,37], they bring out the most significant differences among the algorithms. To test the performance of the proposed method on real data, we also used the annotated open set of recordings "set-a" of the PhysioNet CinC 2013 Database. The dataset consists of 75 records (length: 60 s) from five abdominal signal collections. Each record includes four channels of maternal abdominal ECG sampled at 1 kHz. Further details about the PhysioNet CinC 2013 Database can be found in [31,39]. Records a33, a38, a52, a54, a71 and a74 were excluded because they had partial or inaccurate reference annotations. Moreover, the first and the last annotated beat of each record were ruled out from the evaluation because their reference annotations were often inaccurate.
Tested Methods
In this paper, we tested the novel proposed "ICAQIO-based" method and compared it with our previously developed "ICA-based" and "QIO-based" approaches. The three methods are schematized in Figure 1. The three extraction algorithms that are differently combined in these methods are ICA, mECG canceling and a QIO algorithm. ICA, a form of BSS that assumes the independence of the sources, has become a widely applied method in fECG extraction. Maternal ECG canceling is one of the most commonly applied techniques for fECG extraction and in this study it is implemented as TS. The third algorithm was first introduced by our group in [34]. In Section 2.3 we fully describe the "ICAQIO-based" method proposed in this paper (Figure 1); in Section 2.4 we briefly describe the "ICA-based" and "QIO-based" approaches (Figure 1), for which a full description can be found in [23,34], respectively.
Proposed Combined Method: ICAQIO-Based
The proposed method attempts to extract the fECG from the abdominal mixture by combining our previous approaches, based respectively on ICA [23] and on the optimization of a quality index built on the morphological and temporal characteristics of the signal [34]. As the two methods rely on two different criteria, we expected that their combination could outperform each single method. The proposed ICAQIO-based method includes five steps: pre-processing, separation of sources based on ICA, maternal ECG canceling, enhancement of the fQRS based on fQI optimization, and fQRS detection (Figure 1). The main steps of the algorithm are summarized hereafter; for a detailed description see [23,34].
Pre-Processing
Pre-processing aims at removing the most undesired noisy components before separating the mECG and the fECG. The same pre-processing steps were applied for all three methods tested in this study and include impulsive-artefact canceling, baseline-wandering removal and power-line interference canceling. A detailed description of the procedures used in this study can be found in our previous papers [23,34]. Figure 2 shows a 5 s interval of the four selected channels of the FECGSYNDB after pre-processing.
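As an illustration of this pre-processing stage, the following minimal sketch removes baseline wandering with a zero-phase high-pass filter and cancels power-line interference with a notch filter; impulsive-artefact canceling is omitted. The cut-off frequency, filter order and notch quality factor are assumptions chosen for illustration, not the exact settings of [23,34].

```python
# Minimal pre-processing sketch: baseline-wander removal + power-line notch.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess(abdominal, fs=250.0, powerline_hz=50.0):
    """abdominal: (n_channels, n_samples) array of abdominal leads."""
    # Zero-phase high-pass filter to remove baseline wandering (cut-off assumed).
    b_hp, a_hp = butter(4, 1.0 / (fs / 2.0), btype="highpass")
    cleaned = filtfilt(b_hp, a_hp, abdominal, axis=-1)
    # Narrow notch filter to cancel power-line interference.
    b_n, a_n = iirnotch(powerline_hz, Q=30.0, fs=fs)
    cleaned = filtfilt(b_n, a_n, cleaned, axis=-1)
    return cleaned
```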
It attempts to decompose the multichannel abdominal mixture into the different components i.e., mECG, fECG and noise. BSS can be performed using PCA, which assumes that the signals are a linear combination of the sources, that large variance represents interesting structures and that the principal components are orthogonal. However, the second assumption could not be satisfied, which means the maximization of variance criterion does not comply with fECG, mECG and noise source separation. Conversely ICA, beyond the linear mixing, assumes that the sources are statistically independent, non-Gaussian and/or autocorrelated; assumptions that are generally satisfied for fECG, mECG and noise sources. Several algorithms have been implemented, which realize ICA, including second order blind identification (SOBI) [41], joint approximate diagonalization (JADE) [42] and FastICA [43]. In our approach, the FastICA [43] with deflationary orthogonalization was selected as ICA algorithm as it gave the most reliable results respect to the other tested algorithms (SOBI and JADE) [23]. The ICA algorithm was applied using the all registration length as block size. The hyperbolic cosine was the preferred contrast function; in the few cases when convergence failed, kurtosis was automatically selected. Figure 3 shows the results of ICA applied to the pre-processed signals shown in Figure 2. Independent Component Analysis The application of ICA after pre-processing aims at separating the fECG from the mECG and the other components. These components include the electromyographic signal, residual noise and artefacts. Among the different approaches proposed for fECG extraction, BSS is one of the most commonly applied [24][25][26][27][28][29]40]. It attempts to decompose the multichannel abdominal mixture into the different components i.e., mECG, fECG and noise. BSS can be performed using PCA, which assumes that the signals are a linear combination of the sources, that large variance represents interesting structures and that the principal components are orthogonal. However, the second assumption could not be satisfied, which means the maximization of variance criterion does not comply with fECG, mECG and noise source separation. Conversely ICA, beyond the linear mixing, assumes that the sources are statistically independent, non-Gaussian and/or autocorrelated; assumptions that are generally satisfied for fECG, mECG and noise sources. Several algorithms have been implemented, which realize ICA, including second order blind identification (SOBI) [41], joint approximate diagonalization (JADE) [42] and FastICA [43]. In our approach, the FastICA [43] with deflationary orthogonalization was selected as ICA algorithm as it gave the most reliable results respect to the other tested algorithms (SOBI and JADE) [23]. The ICA algorithm was applied using the all registration length as block size. The hyperbolic cosine was the preferred contrast function; in the few cases when convergence failed, kurtosis was automatically selected. Figure 3 shows the results of ICA applied to the pre-processed signals shown in Figure 2. Maternal ECG Cancelling After the separation of the components constituting the abdominal mixture, mECG canceling was applied estimating and subtracting the component due to the maternal ECG from the signals. Indeed, the maternal component is the main interference in abdominal fetal ECG recordings. 
Maternal ECG Cancelling
After the separation of the components constituting the abdominal mixture, mECG canceling was applied by estimating and subtracting the maternal ECG component from the signals. Indeed, the maternal component is the main interference in abdominal fetal ECG recordings; mECG canceling is therefore the most common step in fECG extraction, and its robustness makes it a cornerstone of any non-invasive fECG analysis system. Maternal ECG canceling consists in constructing an estimate of the mECG component and subtracting it from the abdominal signals. Specifically, TS refers to a technique that estimates the maternal PQRST pattern on the abdominal signals using its synchronization with the maternal QRS. Several TS techniques have been implemented in the literature: some are based on the construction of an average PQRST complex, which is then subtracted from each subsequent mECG beat after scaling and shifting operations [18,19], while others apply PCA for dimensionality reduction [20-22]. In our methods, the mECG canceling procedure was performed by estimating an approximation of each mECG beat through PCA implemented via SVD. First, to allow an accurate canceling of the mECG, all signals obtained from the previous step were upsampled to 4 kHz with the Fourier transform method. Then, a trapezoidal window (whose length depends on the mean RR interval computed over the whole record) is used to select and weight the signal around each detected mQRS. This operation provides weighted PQRST segments, which form the columns of X, an nd × nq matrix, where nd is the length of the PQRST segments and nq is the number of mQRSs. This matrix is then decomposed using the "thin" form of SVD [44], which is valid for nd > nq, as X = U S V^T, where S is an nq × nq diagonal matrix of the singular values, and U (nd × nq) and V (nq × nq) are the matrices of the left and right singular vectors, respectively. The first columns of the matrix U, corresponding to the singular vectors giving the largest contribution to the covariance, likely represent the maternal PQRST waves. The matrix X is then rebuilt (i.e., X_r) using a reduced number of singular vectors, X_r = U_r S_r V_r^T, where the matrices S_r (ne × ne), U_r (nd × ne) and V_r (nq × ne) contain the first "ne" singular values and singular vectors, respectively. The final step consists in the subtraction of the estimated PQRST segments: they are first unweighted by the trapezoidal window, then the end of each segment is connected to the beginning of the following one with a straight line, yielding an estimated mECG, which is finally subtracted from the original signal. For the FECGSYNDB, the two reference mECG channels were used to achieve a robust detection of the maternal QRS complexes. For the PhysioNet CinC 2013 Database, the component with the best mECG was identified by taking into account a priori knowledge of the QRS derivative, width and pseudo-periodicity; the maternal QRS detection was then performed on the selected ICA component.
For a detailed description of the maternal QRS detection see [23]. In both cases, maternal ECG canceling was performed independently on each of the four ICA-separated channels. Figure 4 shows the application of mECG canceling to the independent component 3 (ic3) of Figure 3. In this example, the ICA step separated the maternal and fetal components well, so that the estimated maternal contribution is very small (Figure 4, middle) and the canceling has little added value: it removes small maternal residual spikes, such as the one occurring at time 164.5 s. This step removes the mECG while leaving the noise.
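The rank reduction at the heart of this canceling step can be written in a few lines of NumPy, as in the sketch below; the windowing of the PQRST segments, the unweighting and the stitching of the estimated beats back into a continuous mECG are omitted, and the number of retained singular vectors is an assumed value.

```python
# Sketch of the SVD-based estimation of the maternal PQRST contribution.
import numpy as np

def maternal_template_estimate(X, ne=3):
    """X: (nd, nq) matrix whose columns are weighted PQRST segments.
    Returns the rank-ne approximation X_r = U_r S_r V_r^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)      # "thin" SVD
    return U[:, :ne] @ np.diag(s[:ne]) @ Vt[:ne, :]
```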
Fetal Quality Index Optimization
Maternal ECG canceling provides four residual signals (Figure 5); however, the fECG amplitude may still be low compared with the other components, or even not visible, so enhancing the fQRS can increase the performance of the algorithm. In this step, an fQI that characterizes the morphological and temporal features of the fECG is devised. The fQI is then computed on a generic signal obtained as a linear combination of the abdominal signals, thus resulting in a multivariate function of the coefficients of that linear combination. The algorithm finally attempts to maximize the fQI by searching for the maximum of this multivariate function. The fQI is a value between 0 and 1 that represents the quality of the fetal component estimated as a linear combination of the abdominal signals. It should be noted, however, that the fQI is specific to each record; in particular, it is affected by the fECG inter-beat interval. Therefore, the value of the maximum fQI is not an absolute index of quality of the enhanced fQRS signal extracted from the different abdominal ECGs, but is relative to each record. In order to describe the characteristics of both the fECG and the noise, specific features were built based on signal derivatives and on specific time windows (see Table 2). For further details about the definition of these features see [34]. An fQI was then devised based on these features through an empirical formulation in which ε is a very small constant introduced to avoid division by zero. Once the fQI is devised, it is assumed that a linear combination of abdominal signals enhancing the fQRS exists. The fQI is computed on a generic signal z obtained by linear combination of the abdominal signals, thus resulting in a multivariate function fQI(a) of the coefficients of such a linear combination. The aim of finding the signal z with maximum fQI can therefore be achieved by searching for the coefficient vector a that maximizes the function fQI(a).
An analytic expression for the derivatives of this quality function does not exist; therefore, direct search algorithms must be adopted in searching for the maxima. However, this function is scale independent (assuming the constant ε negligible), and an unconstrained optimization algorithm can be used. The Nelder-Mead algorithm [45] was selected as the optimization method, as it gives good performance in low-dimensional optimization problems (the number of abdominal signals is four) and represents a nice compromise between performance and convergence speed [38]. Figure 6 shows the fQRS-enhanced signal extracted from the residual signals by the application of the fQI optimization algorithm. It can be observed how the application of the fQI optimization step improves the estimation of the fECG compared to that obtained after ICA and canceling (Figure 4, bottom).
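A minimal sketch of the maximization step, using SciPy's Nelder-Mead implementation on the placeholder quality index defined in the previous sketch; the initial coefficient vector and the stopping tolerances are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def enhance_fqrs(residuals, a0=None):
    """Search for the coefficient vector a that maximises fQI(a).
    combine() and fqi_placeholder() are the helpers from the previous sketch."""
    if a0 is None:
        a0 = np.ones(residuals.shape[0])   # four residual channels -> 4 coefficients
    objective = lambda a: -fqi_placeholder(combine(residuals, a))
    res = minimize(objective, a0, method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-8, "maxiter": 2000})
    z_best = combine(residuals, res.x)
    return z_best, res.x, -res.fun
```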
Fetal QRS Detection
The procedure for fQRS detection was based on two passes. In the first one, the absolute derivative of the enhanced fQRS was filtered by a forward-backward Butterworth bandpass filter (6.3-16 Hz). The QRS was then detected with an adaptive threshold on the derivative amplitude, automatically initialized and recursively updated depending on the temporal distance from the previous QRS detection [46]. The fiducial point of each detected QRS was selected as the time of the maximum or minimum (according to the sign assigned in the initialization phase) of the derivative signal. The second pass was based on a QRS detector which, starting from the best fetal RR interval identified in the previous step, proceeded in the forward/backward direction. Figure 7 shows an example of the estimated fetal RR series compared to the reference one. It is evident how the estimated series is almost superimposed on the reference one. It should be noted that the core of the QIO approach stands in finding the linear combination of signals that maximizes the fQI; for this reason, only one single fQRS component is extracted. Thus, the QIO method is unsuitable to manage the twin pregnancy condition, and for this reason we decided to exclude case 5 in the evaluation of the performances, as stated before.

Figure 7. Fetal RR series estimation after the application of the ICAQIO-based method. mRR: maternal RR series obtained by the reference maternal channels (magenta); rfRR: reference fetal RR series (blue); efRR: estimated (red) fetal RR series obtained by the application of the two-pass fetal QRS detection procedure.

Figure 8 summarizes the main steps of the ICAQIO-based method in processing a real signal. In particular, we selected the record "a75" of the PhysioNet CinC 2013 Database.
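The first detection pass described above can be sketched as follows; the filter order, the threshold initialization and the simplified recursive threshold update are assumptions, and the forward/backward second pass is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_fqrs_first_pass(z, fs, refractory=0.2):
    """Sketch of the first pass: band-pass the absolute derivative (6.3-16 Hz,
    forward-backward Butterworth) and pick peaks with a simple decaying threshold."""
    d = np.abs(np.diff(z, prepend=z[0]))
    b, a = butter(4, [6.3, 16.0], btype="bandpass", fs=fs)  # order 4 is an assumption
    f = filtfilt(b, a, d)                                   # forward-backward filtering

    thr = 3.0 * np.std(f)          # assumed initialisation
    last = -np.inf
    peaks = []
    for i in range(1, len(f) - 1):
        if f[i] > thr and f[i] >= f[i - 1] and f[i] > f[i + 1]:
            if (i - last) / fs > refractory:
                peaks.append(i)
                last = i
                thr = 0.5 * thr + 0.5 * f[i]   # simple recursive update
        thr *= 0.9995                          # slow decay when nothing is found
    return np.array(peaks)
```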
QIO-Based Method
The QIO-based method depends uniquely on the capacity to enhance the fECG, after mECG canceling, on the basis of its temporal and morphological characteristics. As regards the FECGSYNDB, the method applied corresponds to that published in [34], with the exception of the mQRS enhancement and detection steps, as this database provides a maternal reference. The method includes three steps: mECG canceling, enhancement of the fQRS based on fQI optimization, and fetal QRS detection (Figure 1). These modules are the same as described for the ICAQIO-based method. As discussed above for the ICAQIO-based approach, case 5 was not included in the evaluation of the performance of this algorithm. As regards the PhysioNet CinC 2013 Database, which does not include the reference maternal channels, the algorithm included an additional step of mQRS enhancement, based on devising a mQI, and a maternal QRS detection step, as previously described in [34].

ICA-Based Method
As regards the FECGSYNDB, the ICA-based method applied was a combination of mECG canceling and ICA for fECG separation. This method is a simplified version of the one presented by our team at the PhysioNet CinC 2013 [23,32]. Indeed, our CinC 2013 method applied a second step of ICA after mECG canceling, which on the FECGSYNDB did not significantly improve the performance (data not shown). The approach used in this study includes the following three steps: separation of sources based on ICA, mECG canceling and fetal QRS detection (Figure 1). Also in this case, these modules are the same as described for the ICAQIO-based method. In particular, in the maternal ECG canceling step the mECG channels were used for mQRS detection. For the application on the PhysioNet CinC 2013 Database, the original version of the approach, proposed in [23,32], was used. The ICA-based approach applied the fQRS detection procedure to all the channels, obtaining four hypothetical QRS annotations and the relative RR series. Then, the best estimated fQRS annotations (and so the best fECG channel) were automatically selected, without considering the fQRS reference annotations. The selection was based both on the knowledge of the typical fetal RR values and on the minimization of a criterion based on the following features: the mean of the absolute RR first derivative, the mean of the absolute RR second derivative, and the number of detected fQRSs matching maternal QRSs. The minimization of the mean of the absolute RR first/second derivatives is based on the hypothesis that the fetal RR series is more regular with respect to the one resulting from the application of the QRS detection algorithm to a noisy signal. Figure 9 shows the fetal RR series estimated by the ICA-based method for the same record of Figure 7. In the interval 130-170 s such estimate is poor, as the ICA fails to sufficiently separate the fetal component from noise, and the performance is lower compared to the ICAQIO-based approach. For the application on the FECGSYNDB, we implemented two different versions of the ICA-based method. The first one (ICA-based) was a complete algorithm applying an automatic channel selection for providing an estimated fetal RR series and an estimated fetal ECG. This method was compared with the ICAQIO-based and QIO-based methods on all the cases of the FECGSYNDB, excluding case 5 (twin pregnancy). The second version of the algorithm (ICA-based_post) was aimed at comparing the performance of our algorithm with the ICA methods tested in [35], which were applied on the same FECGSYNDB. These methods all selected the channel whose estimated fQRS annotations best fitted the reference annotations, so the ICA-based_post algorithm was also implemented in this way, without the automatic choice of the channel. To compare our algorithm with those tested in [35], we applied the ICA-based_post on all the cases of the FECGSYNDB, including case 5 and considering all the noise levels.

Figure 9. Fetal RR series estimation after the application of the ICA-based method. mRR: maternal RR series obtained by the reference maternal channels (magenta); rfRR: reference fetal RR series (blue); efRR: estimated (red) fetal RR series obtained by the application of the two-pass fetal QRS detection procedure.

Evaluation of fQRS Detection
The performance of the tested methods was evaluated on the total length of the fQRS signal, using sensitivity (SE), positive predictive accuracy (PPA) [47] and their harmonic mean (F1) [14]:

$$SE = \frac{TP}{TP + FN}, \qquad PPA = \frac{TP}{TP + FP}, \qquad F1 = \frac{2 \cdot SE \cdot PPA}{SE + PPA},$$

where TP indicates the number of true positives (correctly detected fQRS), FN the number of false negatives (missed fQRS detections) and FP the number of false positives (falsely detected non-existent fQRS). For the calculation of SE and PPA, each fQRS detection was considered correct if it differed by less than 50 ms from the reference annotation. Since the first and last fQRSs could sometimes be mis-annotated, they were excluded from the evaluation. To test the significant differences within the different cases, we reported F1 gross values for the different cases and applied the McNemar test on paired proportions for evaluating paired differences between methods.
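The evaluation metrics with the 50 ms matching tolerance can be computed as in the following sketch; the greedy nearest-neighbour matching of detections to annotations is an implementation choice not specified in the text.

```python
import numpy as np

def fqrs_scores(detected, reference, fs, tol=0.05):
    """SE, PPA and F1 with a 50 ms matching tolerance.
    detected / reference: sample indices of estimated and annotated fQRS; fs in Hz."""
    reference = np.asarray(reference)
    tol_samples = tol * fs
    ref_used = np.zeros(len(reference), dtype=bool)
    tp = 0
    for d in detected:
        if len(reference) == 0:
            break
        j = int(np.argmin(np.abs(reference - d)))      # nearest annotation
        if not ref_used[j] and abs(reference[j] - d) <= tol_samples:
            ref_used[j] = True
            tp += 1
    fp = len(detected) - tp
    fn = len(reference) - tp
    se = tp / (tp + fn) if tp + fn else 0.0
    ppa = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * se * ppa / (se + ppa) if se + ppa else 0.0
    return se, ppa, f1
```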
Simulated Data: FECGSYNDB
The overall gross statistics, obtained by computing the F1 for all the records with the two lowest levels of SNR (excluding case 5), showed that the proposed combined ICAQIO-based algorithm obtained the best performance (98.78%) over both the single methods (QIO-based: 97.77%; ICA-based: 97.61%). As regards the two single methods, the QIO-based method outperformed the ICA-based one. Moreover, the ICA-based_post algorithm, with a posteriori selection of the channel, for comparison with the ICA algorithms previously tested on the FECGSYNDB in [35], obtained a performance of 97.51%. In Table 3 the gross values of F1 for the ICAQIO-based method, the QIO-based method and the ICA-based method for each case are reported.
Comparing paired differences within each case between the performances of the ICAQIO-based, the QIO-based and the ICA-based algorithms, we obtained that in case 3 the F1 gross value of the ICAQIO-based method was significantly higher than that of the QIO-based method (p = 0.0003) and the F1 gross value of the ICA-based method was significantly higher than that of the QIO-based method (p = 0.0003). No significant differences between the ICAQIO-based method and the ICA-based method were obtained in case 3. In case 4 the F1 gross value of the ICAQIO-based method was significantly higher than that of the ICA-based method (p = 0.0002) and the F1 gross value of the QIO-based method was significantly higher than that of the ICA-based method (p = 0.0002). No significant differences between the ICAQIO-based method and the QIO-based method were obtained in case 4. No significant differences among the three methods were obtained in the other cases. Figure 10 shows the F1 of the three methods for all the cases. Figures 11 and 12 show the histograms of F1 for each method for the cases in which statistically significant differences were found among the different methods, i.e., case 3 and case 4. It can be observed that the distributions are skewed and there are records for which the methods fail in giving acceptable performance. In particular, the ICA-based method has more records for which it gives a very low performance. For this reason, the gross values of F1, along with the min and max values, are reported in Table 3 as a representative measure of the overall performance of the algorithms.
Figure 12. F1 histograms for the three tested algorithms, ICAQIO-based, QIO-based and ICA-based, for case 4 (maternal and fetal ectopic beats + noise). Samples tested: 1750 synthetic signals; 0 and 3 dB SNR levels; 10 different heart dipole models; 5 simulations.

Real Data: PhysioNet CinC 2013 Database
The gross F1 score of the proposed ICAQIO-based algorithm, obtained by computing the value for all the selected records of the PhysioNet CinC 2013 Database, was 99.38%. This performance was slightly higher than that obtained for the ICA-based method (99.37%), which was the method implemented for the PhysioNet CinC 2013 with the few changes explained in [34]. On this Database, the QIO-based method was the one that obtained the highest performance, with a gross F1-value of 99.76% [34]. The performances obtained by the three different algorithms are summarized in Table 4.

Discussion
In this paper, we propose a novel hybrid fetal extraction algorithm, which combines the classical mECG canceling procedure with two other different approaches for fetal QRS enhancement: independent component analysis and a quality index optimization criterion. This algorithm integrates two methods that we previously developed and tested on the PhysioNet CinC 2013 Database. The first one is a method based on ICA, which is a simplification of the method that we presented at the PhysioNet CinC 2013 [23,32]. Indeed, in the implementation on the FECGSYNDB, with respect to the previous algorithm, we eliminated the second step of ICA, as we observed that, on this database, it did not improve the performance (data not shown). The second method is an algorithm based on devising a quality index that is built exploiting the morphological and temporal characteristics of the signal of interest (in this case the fECG) and then finding a linear combination of signals which maximizes this index [34]. It should be noted that all the methods integrate a classical step performing mECG canceling, aimed at removing this undesired component from the abdominal mixture before obtaining the final fECG. Indeed, in our previous paper [34] we have shown that the mECG canceling is fundamental and that its integration with ICA provides better results than ICA alone. Moreover, QIO optimization alone is not able to separate the fECG from the mECG. Considering the overall gross statistics, according to our expectations, we found that on the FECGSYNDB the combined ICAQIO-based method outperformed the two methods applied singly (F1 ICAQIO-based: 98.78%; F1 QIO-based: 97.77%; F1 ICA-based: 97.61%). This can be due to the fact that the two methods use different criteria to enhance the fQRS, so that one can succeed when the other fails. Indeed, in our previous paper [34], we compared the performance of the ICA-based and QIO-based methods record-by-record and we observed that, although generally the QIO-based method outperformed the ICA-based one, for some records the opposite was found. Thus, as we had hypothesized, a combination of the two approaches improves the overall performance in fQRS detection. The improvement obtained by adding a successive step of fQI optimization after ICA and mECG canceling can be observed from the comparison of the fECG signals in Figure 4 (bottom trace) and Figure 6, and even more from the comparison of the estimated fetal RR series shown in Figures 7 and 9.
Indeed, in the represented record, while the application of ICA + mECG canceling (ICA-based method) presents some errors in the estimation of the fetal RR series, the fetal RR series estimated with the ICAQIO-based approach is exactly superimposed on the reference series. On the real data of the PhysioNet CinC 2013, all three methods performed very well, above 99%, without a significant difference among them. The performance of the ICAQIO-based approach was slightly higher than that of the ICA-based method, while the performance of the QIO-based method slightly exceeded that of the ICAQIO-based approach. It should be highlighted that both the ICA-based and QIO-based methods were developed using the "set-a" of the PhysioNet CinC 2013 as a learning set, with some tuning of the algorithms on these data to maximize the performance. For example, the optimal type of ICA algorithm, contrast function and number of eigenvalues were selected for the ICA-based method, while the coefficients and window lengths were chosen for the QIO-based method. Despite this tuning, the two methods have proved to be highly generalizable, still giving high performance on other datasets, although a bit lower than that obtained on the "set-a" of the Challenge. Indeed, the ICA-based method obtained the top official score on the "hidden" set of the Challenge, and both methods performed overall above 97% on the FECGSYNDB. This tuning could, however, have led to an over-estimation of the performances of the ICA-based and the QIO-based methods on this specific dataset, making the improvement of the combination of the two criteria less evident, where the performance of the combined method was overall comparable to the ICA-based approach and the QIO-based method. When applied to a larger database including several non-stationary and critical conditions for fECG extraction, the ICA-based and QIO-based approaches, while still giving an overall high performance, failed for some records and gave many extreme values. The ICAQIO-based method, on the contrary, limited the number of extreme values, highlighting the advantage of combining the criteria of independence of sources and quality index optimization. Considering the ICA-based and the QIO-based method separately, we observed that the QIO-based approach obtained a higher score than the ICA-based method with an a priori selection of the channel (ICA-based), both on the FECGSYNDB and on the PhysioNet CinC 2013 Database. This result confirmed our previous findings [34]. The better performance obtained by the proposed QIO-based approach could be due to several reasons. First, the ICA-based approaches can fail in separating the fECG if the number of underlying sources is higher than the number of measured signals and if the fECG power is small compared to the noise. In addition, the QIO-based method eliminates the problem of the ICA-based approach of automatically selecting the mECG (or fECG) among the estimated independent sources. Comparing the performances separately for each case on the FECGSYNDB, we observed that the most significant differences were obtained for case 3 and case 4, i.e., uterine contractions and maternal and fetal ectopic beats + noise, respectively. Thus, in these cases the combination of the two criteria, ICA and fQI optimization, can be particularly helpful in the accurate extraction of the fECG. We presented gross F1 values for each case to have a more general evaluation of the performance of the algorithms.
Indeed, the distributions of the F1 values are skewed, with several records giving high performance but other records for which the performance is low, i.e., extreme values. In particular, in case 3 the QIO-based method presents more low-performance records, while in case 4 the ICA-based method has more extreme values. Indeed, in case 3 the F1 value of the ICAQIO-based method was significantly higher than that of the QIO-based method, and in case 4 higher than that of the ICA-based method. In this sense, the ICAQIO-based approach, which combines the two criteria, is overall more robust to extreme values compared to both the ICA-based and QIO-based algorithms, in particular in critical conditions such as uterine contractions and ectopic beats. In general, it should be observed that for all three compared methods (ICAQIO-based, QIO-based and ICA-based) we obtained high gross overall F1 scores, which were all above 97% on the FECGSYNDB and above 99% on the PhysioNet CinC 2013 Database. Notably, these results were obtained considering the lowest SNR levels, thus showing that the proposed methods are robust to high noise levels. Importantly, the proposed methods work in a fully unsupervised way, with the same parameter setting for all the selected records of the database. In this study, we also tested the ICA-based method with a posteriori selection (ICA-based_post), in order to compare the performance of our algorithm on the FECGSYNDB to that of the methods tested in [35]. We obtained a performance of 97.51%, which was slightly higher than that of the best-performing method in [35], based on JADE (97.46%). These results confirm that our hybrid approach based on the combination of ICA and mECG cancelling outperformed the results of the most common BSS-based methods for fECG extraction. Some limitations of the proposed method should be considered. First, the algorithms were only tested on simulated data and on the PhysioNet Challenge 2013 Database, with a possible over-estimation of the results. Further tests on realistic data are needed to better evaluate the performance of the different algorithms and to evaluate their robustness, as well as their ability to extract the fECG in different recording conditions such as ectopic beats and uterine contractions. In particular, systematic tests of the performance of the technique as a function of changes in realistic noise, signal quality of the mECG, relative amplitude and signal quality of the fECG, number of channels of mECG and fECG, nonstationarity, nonstationary mixing and arrhythmia are needed. Another drawback is the computation time of the proposed algorithm, which was about one minute for each record of 5-minute duration with Matlab R2014a on a Samsung NP730U3E Notebook (i7-3537U, 2 GHz x4; DDR3 6 GB-1600 MHz; SSD 256 GB; Linux Ubuntu 16.04; Samsung, Seoul, South Korea). A decrease of the computation time could be achieved both by using a more efficient or more suitable optimization algorithm and/or by tuning/changing the fQI function. However, this optimization could be possible, without truly incurring over-fitting, only if a larger annotated database were available. Finally, the QIO approach is based on finding the linear combination of signals that maximizes the fQI, thus only one single fQRS component is extracted. For this reason, the QIO method, and so the combined ICAQIO-based method, is unsuitable to manage a twin pregnancy condition.
In the future, the algorithm could be modified in order to also manage this condition, thus allowing the extraction of two fECG components. One possible solution could be to use multiple QIO algorithms with an ICA module at each iteration, thus integrating the search for the maximum of the fetal QI with the search for maximum independence.

Conclusions
In this paper, we have introduced a novel combined independent source separation and quality index optimization method (ICAQIO-based) for fECG extraction from abdominal maternal leads. The method was tested and compared with the two single methods on the recently developed FECGSYNDB for benchmarking of fECG extraction and detection algorithms and on the PhysioNet CinC 2013 Database. The comparison of the three methods showed that the combination of the criterion of independence of sources and the optimization of a fetal quality index outperformed the two methods applied alone, in particular in critical conditions like uterine contractions and maternal and fetal ectopic beats, where the two criteria applied independently give more low-performance records. The algorithm can be applied in a fully unsupervised way and also works in the presence of low-amplitude fECG signals and noise. Future studies will be needed to test the performances of the algorithm in a real-world scenario with different conditions of noise, recording setup and fetal configurations.
An Investigation of Singular Lagrangians as Field Systems
The link between the treatment of singular Lagrangians as field systems and the general approach is studied. It is shown that singular Lagrangians as field systems are always in exact agreement with the general approach. Two examples and the singular Lagrangian with zero-rank Hessian matrix are studied. The equations of motion in the field systems are equivalent to the equations which contain acceleration, and the constraints are equivalent to the equations which do not contain acceleration in the general-approach treatment. In Ref. [6] the singular Lagrangians are treated as continuous systems. The Euler-Lagrange equations of constrained systems are proposed in the form With the aid of Eq.(1.9), Eq.(1.10) can be identically satisfied, i.e. 0 = 0, or they lead to equations free from acceleration. These equations are divided into two types: type-A, which contains coordinates only, and type-B, which contains coordinates and velocities [9]. The total time derivative of the above two types of constraints should be considered in order to have a consistent theory. In this paper we would like to study the link between the treatment of singular Lagrangians as field systems [6] and the well-known Lagrangian formalism. In Section 2 the relation between the two approaches is discussed, and in Section 3 two examples of singular Lagrangians are constructed and solved using the two approaches. In Section 4 the treatment of a singular Lagrangian with Hessian matrix of zero rank is discussed.
THE RELATION BETWEEN THE TWO APPROACHES
One should notice that Eqs.(1.4) are equivalent to Eqs.(1.9). In other words, the corresponding expressions in Eqs.(1.4) can be replaced by $\dot{q}_a$ and $\ddot{q}_a$, respectively, in order to obtain Eqs.(1.9).
EXAMPLES
The procedure described in Section 2 will be demonstrated by the following examples. A. Let us consider a Lagrangian of the form The Euler-Lagrange equations then read as For a consistent theory, the time derivative of Eq.(3.4) should be equal to zero. This leads to the new B-type constraint Taking the time derivative of the new constraints, we get a second-order differential equation which has the following solution Now, let us look at this Lagrangian as a field system. Since the rank of the Hessian matrix is one, the above Lagrangian can be treated as a field system in the form Thus, this expression can be replaced in Eq.(3.1) to obtain the following modified Lagrangian L ′ : Making use of Eqs.(1.4), we have Note that we have made the substitution α = 0, 2 and a = 1 in order to get the above equation. Making use of Eq.(3.8) and the corresponding expression for $\ddot{q}$, Eq.(3.10) will be the same as Eq.(3.2). According to Refs. [1][2][3][4] the quantity H 2 can be calculated as Hence, taking the total differential of Eq.(3.13), one gets Replacing the expression in the first parenthesis from Eq.(3.10), one gets For a valid theory, the variation of F 1 should be zero; thus one gets This is the B 2 constraint defined in Eq.(3.5). Again taking the total differential of the new constraint F 2 , we have This is a second-order differential equation for q 2 and is the same as Eq.(3.6). In addition, the function G 0 can be evaluated, and this does not lead to any further constraints. B. Consider the Lagrangian of the form Then, the Euler-Lagrange equations are given as Expressing Eq.(3.25) as and substituting in Eq.(3.26), one gets an A-type constraint There are no further constraints.
Thus Eq.(3.26) takes the form As in the previous example, this system can be treated as a field system, and the modified Lagrangian L ′ reads as The Euler-Lagrange equation for this field system is obtained as Again replacing $\ddot{q}_1$ by the expression (3.11), Eq.(3.31) will be the same as Eq.(3.25). Besides, the function G 2 can be calculated as and the total differential of G 2 can be written as Using Eq.(3.31) and Eq.(1.6), we have which leads to the following constraint This is an A-type constraint of the form (3.28). Taking the total differential of F 1 , we have and this leads to a new constraint which is equivalent to the total time derivative of the constraint (3.28). Again calculating the total differential of F 2 , one gets and making use of Eqs.(3.31) and (3.35), we get which is the same as Eq.(3.29). Besides, the function G 0 is calculated as Thus, the total differential of G 0 is obtained as and, with the aid of Eq.(3.35), it is identically satisfied.
A SINGULAR LAGRANGIAN WITH ZERO RANK HESSIAN MATRIX
According to the treatment of singular Lagrangians as field systems, if the Hessian matrix has rank equal to zero, the Lagrangian cannot be treated as a field system. However, the equations of motion which do not contain acceleration can be obtained using the constraints (1.6).
CONCLUSIONS
As mentioned in the introduction, if the rank of the Hessian matrix for discrete systems is (n − r); 0 < r < n, the systems can be treated as field systems. It can be observed that the treatment of Lagrangians as field systems is always in exact agreement with the general approach. The equations of motion (1.4) are equivalent to the equations of motion (1.9). Besides, the constraints (1.6) are equivalent to the equations (1.10). The consistent theory in the treatment of Lagrangians as field systems also leads to two types of constraints: a B-type, which contains at least one member of the set $q_\mu$, $\partial q_a/\partial t$, $\partial q_a/\partial q_\mu$, and an A-type, which contains coordinates only. As we have seen, in the first example F 1 and F 2 are B-types, while the constraint F 1 in the second example is an A-type. In the general approach the constraints can be obtained from the Euler-Lagrange equations, whereas, in the treatment of Lagrangians as field systems, the constraints can be determined from the relations (1.5) and (1.6), and the new constraints can be obtained using the variations of these relations.
PROTECTION OF RIGHTS OF SECURED CREDITORS IN THE BANKRUPTCY DEBTOR ASSETS SALES PROCEDURE Securing claims by way of real assets such as mortgage or chattel mortgage has great significance for the operation of banks and other economic entities. Opening bankruptcy proceedings over the owner of the real estate under mortgage or movable property under chattel mortgage has a significant impact on the process of exercising rights and the position of secured creditors. Bankruptcy framework in the Republic of Serbia limits their rights on the one hand, and provides extensive guarantees, on the other, by prescribing several specific institutes that additionally protect the rights of secured creditors in the procedures of bankruptcy debtor asset sales, which is the topic of this paper. Provisions of the Law have been analyzed, positions of the judicial practice as well as opinions of the jurisprudence on secured creditors as a special category. Special attention was paid to the impact of the legal prohibition of individual enforcement for the settlement of claims from the assets that are under any burdens as well as the cancellation of moratorium. Significance of the right of the creditor to offset its secured claim against purchase price has been explained in detail in case of the best bidder (credit bidding) as well as the legal preemptive right on the subject of secured right or lien, in case of sales method by direct agreement. Also, rules were considered that condition the possibility of leasing assets under burden of the bankruptcy debtor with the consent of secured creditors. Introduction Bankruptcy proceedings in the Republic of Serbia have been prescribed by the Law on Bankruptcy [50] and are initiated by the petition of the creditor, debtor or liquidator as authorized petitioners (Article 55, paragraph 1). Adopting positive decision on such petition, in case the court determines the existence of one of prescribed legal bankruptcy conditions represents the opening of the bankruptcy proceedings [10, pp. 87-88]. Bankruptcy judge shall open bankruptcy proceedings by adopting a decree on opening bankruptcy proceedings that adopts the petition for initiating bankruptcy proceedings (Article 69, paragraph 1). All legal consequences of bankruptcy, including the prohibition of enforcement and settlement -moratorium shall come into effect by way of opening bankruptcy proceedings, and not by the initiation of proceedings by submitting the initial act -petition of authorized persons [11, p. 84]. The moratorium shall entail the prohibition of implementation of individual enforcement over the property of the bankruptcy debtor for the purpose of the settlement of claims of individual creditors and shall commence as a process and legal consequence on the date of opening bankruptcy proceedings or as an option -in case the court, in the prior bankruptcy proceedings, determines the security measure that contains the same prohibition that refers to the exercise of rights of secured creditors [12, p. 920]. Security measures (Article 62, paragraph 2, item 4) and moratorium (Article 93, paragraph 1) may be cancelled under the same conditions regulated by the provision of Article 93a and 93b [36, pp. 108-109]. The moratorium delays the deadline for debt payment by law [47, p. 585] to a certain phase of bankruptcy proceedings -the validity of the decision on the main distribution of bankruptcy estate (Article 143, paragraph 1), that is, until the sales of assets under burden in case of secured creditors. 
Satisfaction of secured creditors must be performed within five days from the date the bankruptcy administrator received the funds from the sale of property, that is, the collection of claims (Article 133, paragraph 12), where the bankruptcy administrator shall be obligated to offer for sale each item of property that is subject to secured right or lien within six months from the validity of the decision on bankruptcy (Article 133a). The share of settlement of secured creditors is approximately 70% of the assets remaining after the settlement of costs and liabilities of the bankruptcy estate [15, p. 105], and they do not have the right to claim default interest due to the delay of the bankruptcy administrator in settlement [36, p. 196]. Legal consequences of opening bankruptcy proceedings over the owner of real estate under mortgage or movables under chattel mortgage, and primarily the moratorium, have a significant impact on the procedure of exercise of rights and the position of secured creditors, regardless of the fact that they do not lead to the cessation of real securities, since, as a rule, they disable the implementation of proceedings of individual enforcement and settlement outside bankruptcy proceedings, thus limiting their rights. The mortgage loses the property of adequate collateral if the owner of the real estate under mortgage is in bankruptcy, which leads to the classification of claims of the bank from debtors into the least favorable category D [29] and to the increase of the amount of required provisioning for estimated losses as per assets from balance sheet and off-balance items representing a deduction from the basic share capital [30], with an adverse effect on the bank's capital adequacy. Therefore, it is necessary to provide additional protection of rights to this category of creditors in the procedure of bankruptcy debtor asset sales. For this purpose, having in mind the significance of real securities, such as mortgage or chattel mortgage, in the operations of banks and other economic entities, the bankruptcy framework of the Republic of Serbia prescribes several specific institutes that establish a special mechanism for protection and exercise of secured creditors' rights. The value of a lien lies in the fact that it provides the secured creditor with the option of settlement even when other creditors cannot be satisfied in full or at all, since the debtor does not hold sufficient assets to meet all the obligations [42, p. 35]. After the clarification of the legal position of secured creditors and the effect of the moratorium, the most significant institutes and procedures have been reviewed that enable secured creditors to implement the lien in bankruptcy proceedings from the value of assets under burden.
Separate creditors and pledge creditors as two categories of secured creditors
The Law on Bankruptcy distinguishes between separate and pledge creditors as two categories of secured creditors. The criterion for differentiation is whether the creditor has or does not have claims that are secured by mortgage or pledge over the assets of the bankruptcy debtor, that is, whether the bankruptcy debtor is simultaneously the debtor of the secured claim or it is a third party. Our bankruptcy law recognizes specially regulated situations when the owner of the movable under pledge or real estate under mortgage (pledge or mortgage debtor) [49, p.
20] and the debtor from the original transaction are not the same person, thus, the bankruptcy is initiated over the owner of assets, towards which the creditor has no claim from the original transaction. For example, from loan agreements, since this person established pledge or mortgage over its own property for securing creditor's claim towards a third party -loan beneficiary. Such situations caused a number of issues and different interpretations in earlier court practice that were mostly removed by legal novelty from 2004 [51], by prescribing certain rules based on positions and solutions reached by court practice. Separate creditor has claims towards the bankruptcy debtor that are secured by mortgage or pledge over the property of the bankruptcy debtor (lien) or legal right of retention or right to settle over items and rights registered in public books or registries and shall be entitled to a priority settlement from funds received from the sales of assets, that is, the collection of claims that form the basis of such right [2, pp. 205-221]. "One should keep in mind that the separate creditor shall be entitled to priority collection only from certain items owned by the bankruptcy debtor, subject to lien or settlement right. There is no general lien over the entire property of the debtor and all income of the debtor that would weaken the position of the debtor" [22, p. 81]. When the litigation for the purpose of determination of the amount of secured right is ongoing, and the assets under burden are sold before the valid finalization of the litigation, it would be prudent for the bankruptcy administrator to pay to the separate creditor "the undisputed portion of claims secured by the right to priority in settlement" [36, p. 164]. "Existence of dispute regarding the order of settlement of separate creditors shall not affect the right of the buyer to register ownership right and erase the burden. The amount of available funds for the settlement of separate creditors shall remain the same in case of dispute on the order of settlement of separate creditors and in situations where there is no dispute on the order of settlement of separate creditors" [36, pp. 195-196]. On the other hand, pledge creditor has real estate collateral over the assets of the bankruptcy debtor (lien over items or rights of the bankruptcy debtor that are registered in public books or registries) but has no monetary claim towards the bankruptcy debtor that is secured by such lien. In legal theory such persons are named "pledge creditors with claims towards third parties" [38, p. 249]. Pledge creditors are not bankruptcy creditors and are not separate creditors, and they shall be settled in the maximum amount received from cashing in assets being subject to lien. Therefore, pledge creditors have the main claim towards third parties [43, p. 205], with the pledge over the property of the bankruptcy debtor as their own collateral. "If a third party disputes the status of the pledge creditor and the pledge is registered in public books or registries, the third party may dispute the validity of the pledge instrument only in litigation. The bankruptcy judge may not decide on the nullity of the pledge statement. If the bankruptcy administrator or a third party disputes the validity of the pledge statement, the civil court shall adopt a decision on such matter" [37, pp. 128-129]. 
"If the bankruptcy administrator considers the lien non-existent, and it is a right registered in public books or registries, litigation shall be initiated seeking to determine the nonexistence of such lien including the litigation for rebuttal of transactions" [36, p. 160]. It is a negative determination suit. Establishment of mortgage over the real estate of bankruptcy debtor for the benefit of pledge creditors, for securing obligations of other persons, for example, claims of the bank towards third parties, has been qualified in court practice as unencumbered disposal, because the "pledger did not receive adequate counter-value" and "pledger may not request any counter-act by a person benefiting from such disposal", "even though there was no legal obligation for such disposal" [36, p. 171], which represents the act of causing intentional damage to the creditor that may be rebutted if taken in the last five years prior to the submission of petition for bankruptcy, in which case there is a rebuttable legal presumption that the pledge creditor had knowledge about the intention to damage other creditors (Article 123, paragraph 1). Pledge creditors are recognized as parties in bankruptcy proceedings which was disputed in earlier court practice in a certain number of cases, with the argumentation that they have no claims towards the bankruptcy debtor. But the pledge creditors are not entitled to vote in the creditor's assembly that is, "they may not vote or be elected in the creditor's assembly and committee" (Article 49, paragraph 7) while separate creditors may participate in the creditor's assembly only to the extent of their claims for which they are likely to appear as bankruptcy creditors (Article 35, paragraph 3), where at the first creditor's hearing separate creditors chose one member of the creditor's committee from their ranks (Article 38a, paragraph 1). If the transfer of secured claim is executed during the bankruptcy proceedings (Article 117a), upon the submission of the request for correction of the final list of determined claims, the recipient shall be enabled to exercise the right of assignee -prior separate creditor, as the party in bankruptcy proceedings. Even though the novation from 2017 [52] cancelled the limitation related to the stage of bankruptcy proceedings in which transfer of claims is possible [15, p. 336], jurisprudence mainly implements interpretation that "submission of this request during bankruptcy proceedings for the acquirer is enabled until the validity of the decision on main distribution, after which time the transfer of claims in bankruptcy is not possible" [43, p. 208]. Other property rights at disposal of their owner may be subjected to lien. Provisions on the pledge on items may be applied to pledge on claims and other rights, unless prescribed otherwise [34, pp. 491-507]. Bankruptcy debtor has procedural standing to seek the collection of claims and litigate against the debtor of claim under pledge, after which, from the funds received, separate creditors shall be paid out that have collaterals over the claim of the bankruptcy debtor towards his debtor. Secured right should be recognized conditionally since the settlement of the separate creditor depends on the fact whether the bankruptcy debtor will succeed in collecting his claims [36, pp. 126-127]. 
Also, the subject of lien may be the right of claim of the pledger towards the debtor in the case where the pledge creditor is the debtor of pledger, except for claims whose transfer is prohibited by law and those related to an individual person that may not be assigned to others [53]. In this way, through the implementation of the pledged claim, in case the pledge creditor is the debtor of the pledger at the same time, a compensation is possible -offsetting of mutual, similar and due claims [48, p. 472] as one of the legally prescribed methods to cease the obligation. Moratorium -legal prohibition of individual enforcement over the assets of the bankruptcy debtor Initiating bankruptcy proceedings over the owner of real estate under mortgage or movables under pledge leads to important changes in the position and rights of secured creditors, regardless of the fact that it will not lead to the cessation of real estate collaterals. Because, by initiating bankruptcy proceedings, significant substantive legal consequences shall occur for the bankruptcy debtor and its assets, claims of creditors and transactions. Also, there are procedural legal consequences in proceedings the debtor is part of [39, p. 603] that lead to the mandatory cancellation of all court and administrative proceedings as well as the establishment of legal prohibition of enforcement and settlement against the bankruptcy debtor, that is, over its assets. Monetary claim shall be collected in the procedure of individual or general enforcement [35, p. 436]. Individual enforcement shall be executed in the enforcement proceedings, while general enforcement shall be executed in the bankruptcy proceedings. The principle first in time, greater in right (prior tempore potior iure) is valid in the enforcement proceedings, while in the bankruptcy proceedings the creditors are settled at the same time and concurrently [40, p. 404], implementing one of the main principles of bankruptcy -equal treatment of creditors (par conditio creditorum) [41]. Bankruptcy is an institute of simultaneous collective and proportional settlement of all creditors through general enforcement on the entire assets of the bankruptcy debtor, by which such debtor ceases to exist as a legal entity [46, p. 325]. In a situation where the assets of the debtor are so depreciated that the liabilities are higher than assets, conditions for settlement in bankruptcy proceedings arise, thus the principle of collective enforcement over the entire assets of the insolvent economic entity for joint and proportional settlement of creditors derogates the principle of priority of collection that is valid for enforcement proceedings, as the process for individual settlement. Bankruptcy proceedings enable joint and proportional settlement of creditors [3, p. 3]. This means that these two proceedings are mutually exclusive. This is why one of the procedural legal consequences of initiating bankruptcy proceedings is the established prohibition of individual enforcement and settlement of creditors that leads to the inability of enforcement over the assets of the bankruptcy debtor and mandatory interruption of enforcement (Article 93), thus making court decisions and other enforcement documents lose their property of enforceability, but not the property of validity [16, p. 74]. The term "moratorium" in jurisprudence [28, p. 36] as well as court practice [36, pp. 
108-109] is used to signify the prohibition of settlement and enforcement as legal consequences of initiating bankruptcy proceedings. The moratorium protects the bankruptcy debtor by providing it with the option to consolidate before the creditors start collecting their claims and by allowing the bankruptcy administrator to prepare the sales of debtor's assets when the proceedings are forwarding in the direction of bankruptcy [4, p. 66]. Thus, the losses arising from bankruptcy for the creditors are evenly distributed among them if collected in the same payment lines [4, p. 64]. Prohibition to initiate, that is, the cancellation of enforcement proceedings has been established since the enforcement would favor only those creditors with an enforcement document [9, pp. 70-71]. Moratorium shall not be valid for enforcement that refers to the obligations of the bankruptcy estate and costs of the bankruptcy proceedings, that is, obligations incurred during the bankruptcy proceedings. Obligations arising during the proceedings shall be considered costs of bankruptcy proceedings, which are settled regularly and as priority, prior to the claims of creditors classified into payment lines, thus, their enforcement is possible [21, p. 151]. Hence, the bankruptcy proceedings have priority in execution over the enforcement if the debtor is subjected to both at the same time. Therefore, the enforcement, which is ongoing at the moment of initiating bankruptcy proceedings, shall be cancelled ex officio except in special cases when it entails a timely acquired right for separate settlement [44, p. 112]. Procedure legal consequence of the prohibition of enforcement and settlement against the bankruptcy debtor, that is, its assets, has been established with the purpose of not interfering with the even settlement of all creditors [23, p. 148], accomplishing the basic principle of protection of bankruptcy creditors enabling collective and proportional settlement of bankruptcy creditors (Article 3). Prohibition of enforcement and settlement that occurs ex lege, as a consequence of initiating bankruptcy proceedings, shall primarily refer to ordinary -bankruptcy creditors, that is, persons that have unsecured claims towards the bankruptcy debtor on the day of initiating bankruptcy proceedings (Article 48) and to the exercise of rights of secured -separate and pledge, creditors, as two categories of secured creditors. By initiating bankruptcy proceedings, the secured right is exercised only in bankruptcy proceedings, except in case of the adoption of a decision on cancellation of the prohibition of enforcement and settlement in line with the Law on Bankruptcy (Article 80, paragraph 2) [12, pp. 919-942]. Possibility of cancellation of the legal prohibition of enforcement and settlement at the proposal of secured creditor Secured creditors may propose cancellation of the prohibition of enforcement and settlement for the purpose of collecting secured claim from the pledged assets of the bankruptcy debtor, which is subject to a court decision. In case the conditions for moratorium cancellation are met, the secured creditors shall implement the settlement procedure individually and outside the bankruptcy proceedings, in line with general rules on settlement out of court or in court, therefore, in the same manner as if the bankruptcy debtor was not bankrupt [13, pp. 515-529]. Non-performing loan market is incentivized by enabling the secured creditors to individually implement claim collection procedures. 
The novelty from 2017 [52] modified three former reasons for moratorium cancellation, and provisions that regulate them are distributed into new Articles 93a and 93b, while Article 93c contains mutual provisions for cancellation of security measures, that is, the prohibition of enforcement and settlement, and Article 93d regulates the consequences of failure to cash in property by secured creditors in a legally prescribed deadline. The Law prescribes the duty of securing an adequate protection of assets and, as the reasons to cancel moratorium, prescribes the failure to adequately protect the assets or the depreciation of assets that are being secured (Article 93a). The possibility of cancelling moratorium related to the assets being subject of collateral has also been regulated, which is not of key importance for reorganization or the sale of bankruptcy debtor as a legal entity [24, p. 35] for the period of nine months, provided that the claim of the secured creditor is due in part or in whole and if the value of the asset in question is lower than the amount of secured claim (Article 93b). "Creditors prove the acquisition of status of bankruptcy, i.e. separate creditors by adopting the final list of claims by the bankruptcy judge, in case their claims are determined, and in case they are disputed, by the adoption of the valid court decision based on which they can seek correction of the final list… Moratorium cancellation may be requested only after the adoption of the final list of claims, that is, conclusion on the list of determined and disputed claims." [36, pp. 136-137] A new model of secured creditors settlement was introduced (Article 93a-93e), improving the mechanism of the bankruptcy debtor's assets cashing in. Secured creditors have an option to individually implement the procedure of individual settlement of their own claims from the assets in their pledge. Considering the procedure of cashing in assets prescribed by special laws and the actions that need to be taken in this procedure and court practice, nine-month period was set during which, after the cancellation of moratorium, individual settlement of secured creditors is allowed. In case secured creditors do not execute settlement in this period, this right shall be denounced from them by the reestablishment of moratorium, except in cases of submission of the petition to extend such deadline [12, pp. 919-942]. Discretional authority of the bankruptcy judge to assess whether the assets are of key importance for reorganization or for the sale of the bankruptcy debtor as a legal entity has been cancelled, which is a condition of the decision on moratorium cancellation. It has been regulated that the judge shall not adopt any decisions on security measures cancellation, that is, the prohibition of settlement and enforcement, in case the bankruptcy administrator proves that the assets in question are of key importance for reorganization, or the sale of the bankruptcy debtor as a legal entity (Article 93b, paragraph 2). 
This introduced the obligation to prove the significance of the property for the reorganization or the sale of the bankruptcy debtor as a legal entity, and the burden of proof was transferred to the bankruptcy administrator: the law presumes that the asset subject to a secured right or lien is not of key importance for the reorganization or sale of the bankruptcy debtor, but allows the bankruptcy administrator to prove otherwise (a rebuttable legal presumption).

Credit bidding by separate or pledge creditor

The rules entitled "Credit Bidding by Separate or Pledge Creditor" (Article 136b) regulate the right of the secured creditor to offset its secured claim against the purchase price where such creditor is the best bidder (credit bidding). Special rules are prescribed for two possible situations: the first, where the secured claim is higher than the purchase price, that is, than the portion of the price over which the secured creditor has the right of priority (for example, if part of the assets or the bankruptcy debtor as a legal entity is sold, or if there are creditors with higher priority over the same real estate); and the second, where the secured claim is lower than the purchase price, that is, than the portion of the price over which the secured creditor has the right of priority. In both cases the secured creditor is obligated to pay all expenses that must be settled from the purchase price (appraisal, notices, legal obligations, etc., including the bankruptcy administrator's fee) in order to secure their due collection. However, the "bankruptcy debtor shall bear all the costs of property tax over the subject of lien" [36, pp. 197-199]. Additionally, in the first case, the secured creditor, as the buyer of the property or of the bankruptcy debtor as a legal entity, is obligated to pay the remaining portion of the purchase price "from which there is no right of priority settlement" (that is, the difference between the portion covered by its right of priority settlement and the total price), so as to secure the settlement of secured creditors with a higher order of priority, or the collection of the part of the price that belongs to the bankruptcy estate. Such a situation arises if, for example, on the same real estate the separate creditor-buyer holds a lower-priority, second-order mortgage securing its claim of EUR 100,000, another creditor holds a first-order mortgage securing a claim of EUR 20,000, and the real estate is sold for EUR 90,000. In this situation the creditor holding the first-order mortgage is settled first from the purchase price, for the entire amount of its claim of EUR 20,000, which means that the separate creditor-buyer, in addition to the expenses, must pay a further EUR 20,000 (the "remaining portion of the purchase price from which there is no right of priority settlement") and is considered settled, by way of offsetting, in the amount of approximately EUR 70,000 less the sales costs, while it settles the difference up to the full amount of its claim (somewhat over EUR 30,000) as an ordinary creditor in the third payment order [5, p. 490]. Owing to the principle of indivisibility of the subject of pledge [6, p. 139], the mortgage covers the real estate as a whole, even in the case of its division [32, p. 463], and includes all improvements in the value of the mortgaged property, which is a consequence of the principle of extensivity [45, p. 31].
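To make the arithmetic of this worked example explicit, a minimal illustrative sketch follows. It is not part of the Law or of the cited commentary; the function name, its structure, and the assumed sales costs of EUR 5,000 are ours, and the figures merely reproduce the example above.

```python
def credit_bidding_settlement(price, buyer_secured_claim, senior_claims, sale_costs):
    """Illustrative split of cash payment, offset, and residual claim for a
    secured creditor buying the collateral via credit bidding (Article 136b)."""
    # Cash the buyer must actually pay: the sale costs plus the part of the
    # price over which it has no priority (here, the senior first-order mortgage).
    cash_due = sale_costs + senior_claims
    # Part of the price the buyer would have received with priority,
    # settled by offsetting against its own secured claim.
    offset = max(0, price - senior_claims - sale_costs)
    # Whatever remains unsettled is claimed as an ordinary claim
    # in the third payment order.
    residual_claim = max(0, buyer_secured_claim - offset)
    return cash_due, offset, residual_claim

# Figures from the example in the text; the EUR 5,000 sale costs are assumed.
cash_due, offset, residual = credit_bidding_settlement(
    price=90_000, buyer_secured_claim=100_000,
    senior_claims=20_000, sale_costs=5_000)
print(cash_due, offset, residual)  # 25000 65000 35000
```

On these assumed figures the buyer pays EUR 25,000 in cash, is treated as settled by offsetting for EUR 65,000 (approximately EUR 70,000 less costs), and retains a residual ordinary claim of EUR 35,000, consistent with the prose example.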
In our opinion, an error occurred in the official clarification of the amendments with respect to this institute, since the obligation to pay the price difference when the secured claim exceeds the amount of the purchase price was explained by the need to settle creditors of lower priority, that is, "to secure the settlement of secured creditors of lower priority" [24, p. 41], which does not make sense. If the buyer of the property (the separate creditor) holds a claim higher than the purchase price, that is, one that "exceeds" the price, such creditor is settled through compensation up to the amount of the purchase price (reduced by any expenses). Legally, a price that is insufficient to settle the secured claim of the buyer of the property in full cannot be used to settle other secured creditors "of lower priority"; it can be used only for creditors with higher priority than the secured separate creditor buying the property [17, p. 81]. The sales costs could perhaps have been distributed more fairly, by dividing them proportionally among all mortgage creditors according to the value of the claims to be collected from the purchase price. In the second case, if the secured claim is lower ("will not reach the amount of purchase price, that is, the portion over which the priority of settlement exists"), the secured creditor is obligated to pay the difference up to the full amount of the purchase price (that is, the "difference between the secured claim and the full amount of the purchase price"). Under the provisions regulating the sales procedure, the separate creditor is not "released from deposit payment", which must be distinguished from the costs of sale; the latter are not paid by the creditor but are collected primarily from the purchase price. Payment of the deposit is a condition for the secured creditor's participation in the sales procedure, and if its bid is the most favorable one, it is declared the buyer and exercises the rights under Article 136b [37, pp. 144-145]. The deposit is retained if the separate creditor with the most favorable bid withdraws from the purchase. "Bankruptcy administrator shall, in case of sales of property to the secured creditor by price bidding, prepare the settlement calculation as well as the calculation of sales costs. Creditors may object to the settlement calculation which is subject to the decision of the bankruptcy judge. Bankruptcy judge shall decide on costs, by way of special conclusion which may be subject to an appeal" [37, pp. 147-148]. Based on these rules, the "bankruptcy administrator shall call upon the separate or pledge creditor to execute the payment, otherwise it shall not be considered that the separate or pledge creditor met the foreseen sales terms" [37, pp. 147-148], which means that such a creditor will not be declared the buyer; instead, the next most favorable participant in the sale will be.

Application of the credit bidding institute when the buyer of the encumbered property is the pledge creditor

The wording of Article 136b does not mention the pledge creditor, even though the pledge creditor is named in its heading: "Credit Bidding by Separate or Pledge Creditor" [5, p. 491].
Despite this omission, the credit bidding institute may be applied where the buyer of the property is the pledge creditor, that is, a creditor holding a lien over objects or rights of the bankruptcy debtor registered in public books or registries who has no monetary claim secured by that lien against the bankruptcy debtor, but against third parties (Article 49, paras 5 and 6). The one basic difference between separate and pledge creditors is whether the monetary claim secured by the mortgage or lien over the objects being sold is held against the bankruptcy debtor or against third parties; this difference, however, does not exclude the application of the credit bidding institute to pledge creditors [15, pp. 448-449]. If the buyer of the property is an excluding creditor with priority in settlement from the funds obtained through the sale, such a buyer would be entitled to set off its secured monetary claim (against a third party, not against the bankruptcy debtor) against the purchase price owed. Unlike the general conditions for compensation under contract law, in the application of the credit bidding institute the mutuality of claims between the secured creditor and the bankruptcy debtor, as the owner of the mortgaged real estate, is not a condition for compensation. The absence of mutuality therefore does not prevent the secured claim against a third party from being set off against the price that the pledge creditor is to pay for the encumbered object owned by the bankruptcy debtor. Credit bidding gives the separate creditor the right to compete in the sale of assets over which it holds a secured right and to use the amount of its claim, instead of money, to pay the price. In this manner separate creditors may control the sale of assets over which they hold secured rights [26] and react if they consider that the price achieved for the collateral at public bidding, and hence their settlement, is not adequate. In other words, if the separate creditor believes the price obtained is too low, it may offer a higher price and, after the transfer of ownership, attempt to resell the assets at a higher price or retain them [25, pp. 111-112].

Offsetting (compensation) as the basis of the credit bidding institute

As a rule, the outcome is the same whether the price is paid with claims or with money. Let us presume that the separate creditor has a claim of one million dinars and that the assets over which its secured right exists are intended for sale. If, for the simplicity of the example, the price obtained is also one million dinars and is paid in cash, the entire amount, after deduction of costs, would be used to settle that same separate creditor. If the price is "paid" using claims instead of cash, the outcome is essentially the same. The basis of this institute is therefore offsetting (compensation), since the separate creditor offsets its obligation to pay the purchase price against its monetary claim secured by the mortgage over the real estate in question. The only difference is that in the second case the separate creditor bears the costs of sale. If the price obtained is lower than the amount of the claim, the separate creditor, after paying the sales costs, acquires the right to settle the difference between those amounts as a bankruptcy creditor.
If the price obtained is higher than the amount of the claim, the separate creditor pays only the difference between its claim and the price obtained. Without the special provisions (Article 136b), offsetting such claims and obligations under the general rules (Article 82) would be impossible, since they do not satisfy the prescribed conditions, even though they are not explicitly listed among the cases in which offsetting is not allowed (Article 83). As a counterargument, it may be pointed out that the general rules on the right to set off claims in bankruptcy proceedings (Article 82) do not apply to separate and excluding creditors, who are entitled to priority in settlement from the funds obtained from the sale of assets, that is, to priority and separate settlement from the price obtained from the sale of mortgaged real estate or other assets of the bankruptcy debtor under lien. One could therefore conclude that the application of the credit bidding institute was possible even before the 2017 amendments, on the basis of the general rules and principles of bankruptcy proceedings. The institute of credit bidding itself is not entirely new to Serbian legislation. The Law on Enforcement and Security (LES) of 2015 (Article 192, paragraph 4) allows the buyer to be the enforcement creditor, who may participate in public bidding by paying only the difference between its claim and the price obtained, taking into account its priority [18, p. 464]. A similar rule was contained in the earlier Law on Enforcement and Security of 2011 (Article 130, paragraph 2): "if the buyer is the enforcement creditor whose claim does not reach the amount of received price on public bidding and if, considering its priority, such creditor could settle from the price, such creditor shall pay only the difference between the claim and the price received", as well as in the Law on Enforcement Procedure of 2004 (Article 128, paragraph 2). The important difference compared with bankruptcy is that in enforcement proceedings the buyer may be not only the mortgage creditor but also an ordinary creditor, that is, any enforcement creditor, although the priority of that creditor relative to others, primarily pledge creditors, must be taken into account, because this is not a collective settlement, as in bankruptcy, but individual enforcement and settlement of the enforcement creditor [17, p. 85].

Preemptive right of the separate or pledge creditor in case of sale through direct agreement

One of the consequences of initiating bankruptcy proceedings for the bankruptcy debtor is the termination of previously acquired preemptive rights (Article 75), both contractual and legal (for example, the preemptive right of the co-owner of real estate or of the owner of neighboring agricultural land) [55], and the simultaneous establishment of a legal preemptive right, for the benefit of secured creditors and their related persons, over the subject of the secured right or lien where the method of sale is a direct agreement (Article 136d). "Preemptive right may be defined as the right whose holder is authorized, in case of sales of items to which the preemptive right refers to, to acquire such items prior to anyone else, through purchase in case conditions of sale are met that are determined by the owner of the item (seller)" [1, pp. 147-148].
The cancellation of previously acquired preemptive rights avoids a collision with the legal preemptive right of secured creditors (separate and pledge creditors) over the subject of the secured right or lien, a collision that would arise had this consequence of initiating bankruptcy proceedings not been prescribed. Thus, in addition to a transaction (for example, a contract or a last will and testament), the source of a preemptive right may be the law [33, p. 573], in which case the legal preemptive right operates erga omnes. The contractual preemptive right, by contrast, operates inter partes, that is, only between the contracting parties (for example, the seller and buyer under a contract of sale with a preemptive right), and can be asserted against third parties only where there was negligence in the particular case [33, p. 573]. When assets subject to a secured right or lien are sold through direct agreement, the secured creditor may, within five days of receiving the bankruptcy administrator's notice of the proposed sale, which must state all the proposed terms of sale, including the price and method of payment, notify the court and the bankruptcy administrator that it accepts to purchase the subject of sale under the conditions set out in the notice (or under conditions more favorable for the bankruptcy debtor) (preemptive right); the notification must state whether the creditor will exercise its right (Article 136b) to set off its secured claim against the amount of the purchase price (credit bidding) [17, pp. 69-90]. This additionally protects its position in situations where the sale is not publicly announced (when the method of sale is neither public bidding nor a public invitation for bids), without harming the bankruptcy estate, since the creditor, if it wishes to use this right, must offer at least the same terms as the best bidder. The establishment of a preemptive right for the separate creditor in the case of sale through direct agreement, similarly to credit bidding, enables the separate creditor, if it considers that an adequate price has not been obtained, to purchase the subject of sale under the same terms (or terms more favorable for the bankruptcy debtor) set out in the bankruptcy administrator's notice of the proposed sale. If the right to credit bidding is not exercised, the secured creditor must, simultaneously with its statement on the purchase, pay the price agreed with the third party, or deposit it with the court, in accordance with the rules on the price payment deadline in the Law on Contracts and Torts (LCT) (Article 528, paragraph 2) [54]. The LCT provides that the rules on sale with a preemptive right apply accordingly to the legal preemptive right (Article 533, paragraph 4).

Exercise of the preemptive right through related parties

A separate or pledge creditor may exercise the preemptive right through related persons within the meaning of the Law on Companies [56], provided evidence is submitted that the person is indeed related. Given the widespread practice of banks (the most common secured creditors) of establishing, owing to regulatory limitations, special companies for the purchase of claims or of assets serving as collateral in enforcement or bankruptcy cases, banks are thereby also enabled to exercise the preemptive right through related persons.
The Law on Banks [57] (Article 34, paragraph 2) prescribes a collective limitation, namely a limit of 60% of the bank's capital for investments in entities in the non-financial sector together with the bank's fixed assets and investment real estate. The same regulatory limitation is prescribed by the Decision on Bank's Risk Management [31] (Item 60), which defines the bank's investment risks as including the risks of investments in other legal entities, fixed assets and investment real estate. Under that Decision, a bank's investment in a single entity outside the financial sector may not exceed 10% of its capital, where such an investment means an investment resulting in the acquisition of shares or stock of the non-financial entity, while the bank's total investment in entities outside the financial sector together with its fixed assets and investment real estate may not exceed 60% of the bank's capital; this limit does not apply to the acquisition of shares intended for sale within six months of their acquisition. Hence, in assessing the investment limit, the bank's investments in non-financial entities (for example, where the bank has founded a limited liability company, or has acquired a share or stock in a company during reorganization through the conversion of the claims of banks and other creditors into capital, i.e., shares or stock in the bankruptcy debtor) are added to the bank's investments in fixed assets and investment real estate [19, p. 74].

Cancellation of the sale as a consequence of the violation of the secured creditor's preemptive right

The Law on Bankruptcy does not prescribe sanctions, that is, legal consequences, for the violation of the preemptive rights of secured creditors [20, pp. 34-38]. It follows that the general rules of the LCT governing sale with a preemptive right apply (Articles 527-532) [15, p. 458]. Persons holding a preemptive right by law must be notified in writing of the intended sale and its terms; otherwise they are entitled to request the cancellation of the sale (LCT, Article 533, paragraph 2). Therefore, a secured creditor holding the legal preemptive right over the subject of the secured right or lien is entitled to demand the cancellation of a sale concluded through direct agreement if it was not duly notified of the intended sale and its terms [19, p. 76]. At the same time, the secured creditor is entitled to, indeed must, demand that the asset be sold to it under identical terms, by joining that request to the claim for cancellation of the sale. Otherwise, if the plaintiff (the secured creditor) does not request the transfer under the same terms, there is no legal interest in a suit for cancellation of the sale, which constitutes a procedural obstacle and a ground for dismissal [35, p. 194]. According to the legal opinion of the Civil Department of the Court of Appeals in Novi Sad of 26 May 2014, the probable cause of the claim of the holder of the preemptive right depends on depositing funds in the amount of the market value of the real estate: "Depositing cash in the amount of market value of the real estate simultaneously with the suit is the basis for probable cause of the claim of the holder of preemptive right for the cancellation of the real estate sales agreement and the request for selling the property to such holder under the same terms" [7].
Since the essence of the preemptive right is priority in the acquisition of rights, its violation activates the priority purchase right, which is also comprised in that right. The priority purchase right arises only if the preemptive right has been violated by concluding a contract with a third person [1, p. 148]. "Preemptive right occurs where there is still no contract, and the priority purchase right occurs only after the conclusion of the valid sales agreement between the owner and the third party" [27, p. 1114]. The exercise of the powers arising from the preemptive right is tied to strict, preclusive statutory deadlines, whose expiry leads to the loss of the preemptive right [1, p. 148]. Accordingly, the plaintiff's knowledge of the transfer of ownership, that is, of the precise contract terms, acquired after the expiry of the objective five-year period from the transfer of ownership to a third party is legally irrelevant: it has no bearing on compliance with that deadline, nor can it extend the objective period. Thus, even though the duration of the legal preemptive right is not limited (LCT, Article 533, paragraph 2), unlike the contractual preemptive right, which ceases five years after the conclusion of the contract (LCT, Article 531, paragraph 2), the protection of the legal preemptive right of the separate or pledge creditor, that is, the claim for cancellation of the sale, is subject to a preclusive subjective six-month deadline running from the day the creditor learns of the transfer, that is, of the precise contracted terms, and the preemptive right ceases in any case upon the expiry of the objective five-year deadline from the transfer of ownership to a third party (LCT, Article 532). The judgment of the Supreme Court of Cassation, Rev. 1788/2017 of 13 September 2018, rendered in the application of the Law on Real Estate Trade (Article 10, paragraph 2), which likewise prescribes a subjective-objective deadline, took a position on the preclusive nature of the subjective deadline: "With the expiration of the subjective deadline of 30 days starting from the day of receiving knowledge about the conclusion of the real estate sales agreement, the owner of the neighboring plot shall lose the right and possibility to exercise the protection of preemptive right" [8]. According to case law, the legal preemptive right operates erga omnes, while the contractual preemptive right operates inter partes, that is, it can be asserted against a third party only where there was negligence in the specific case [33, p. 576]. "Right of priority purchase can always be exercised in case of violation of the legal preemptive right, and in case of violation of the contractual preemptive right only if the person to which the asset was sold was negligent, that is, if such person knew or should have known that preemptive right has been violated" [1, pp. 147-148]. One could therefore accept the position that negligence of the third party (the buyer) is not a precondition for granting the claim of the separate or pledge creditor, as the holder of the legal preemptive right, for cancellation of the sale and transfer of the asset to it under the same terms. In that case the third party would have a right to compensation for damages against the bankruptcy administrator and/or the bankruptcy debtor, which would be treated as an obligation of the bankruptcy estate [19, p. 79].
Consent of secured creditors for the lease of assets burdened by a secured right or lien

Leasing assets of the bankruptcy debtor burdened by a secured right or lien is considered a matter of utmost importance (Article 28, paragraph 1) and is conditional on the consent of the secured creditors (Article 28, paras 2-4), regardless of the value of those assets relative to the total value of the bankruptcy estate [36, p. 120]. The bankruptcy administrator must deliver a notice of the intended lease to the secured creditors, and the lease may proceed only upon receiving the approval of a creditor who, applying the rules for assessing the probability of settlement used for voting at the creditors' assembly (Article 35, paragraph 3), shows it to be probable that its secured claims can be settled, in whole or in part, from the encumbered assets (Article 28, paragraph 2). The probability of settlement of the secured claim may be demonstrated by the secured creditor by submitting an appraisal of the value of the assets subject to the secured right, prepared by an authorized professional (appraiser) and not older than 12 months. To prevent abuse of this right by creditors who have no basis to expect any settlement from the value of the assets (because they hold a right of lower priority), consent is required only from creditors who demonstrate the probability of settling their secured claims, in whole or in part, from the encumbered assets. The precondition for exercising this right is therefore proof of probability (a lower standard of proof than certainty) of settlement from the property to be leased. The interests of secured creditors may differ depending on their position: some may have an interest in a sale, others in leasing [25, pp. 111-112]. Bearing in mind that secured creditors have a legitimate interest in preserving the collateral securing their claim and in its earliest possible cashing in, their consent is required because these transactions involve handing the subject of pledge over to a third person for use, which may over time depreciate its value, for example through regular use. Moreover, leasing will de facto delay the cashing in of such assets (since it finances the fixed monthly costs of the bankruptcy proceedings), which runs counter to the urgency principle of bankruptcy proceedings and to the legitimate interests of secured creditors, who derive no benefit from the lease; the benefit accrues to the bankruptcy estate. The opposing interests of secured creditors on the one side and the bankruptcy administrator and ordinary creditors on the other are balanced by not denying the bankruptcy administrator the right to lease property burdened by a secured right or lien, but by making that right conditional on the consent of the secured creditors [24, pp. 28-29]. For cases where a creditor makes no declaration for any reason, that is, fails to submit to the court an explicit written refusal of consent, a fiction of consent of the secured creditors to the lease of the encumbered assets has been prescribed. Consent is deemed given if the secured creditors fail to submit a statement on the matter within eight days of receiving the bankruptcy administrator's written request (LB, Article 28, paragraph 3).
The law thus prescribes a fiction that consent has been given tacitly, which is a deviation from the basic legal principle that silence does not constitute approval (LCT, Article 42, paragraph 1). The explanatory note to the draft law wrongly characterized this institute as a "non-rebuttable presumption" [24, p. 29], even though it is a fiction, since the law treats the consent as given although in reality it was not. A legal presumption has a different role: it presumes a fact that actually exists in reality, dispensing with the requirement of proof or, in the case of rebuttable presumptions, shifting the burden of proof to the other side. This solution makes the consent process efficient by imposing the obligation to act on non-consenting secured creditors, while the bankruptcy administrator is not obliged to actively pursue such consent, which may be time-consuming, especially where the secured creditors are companies with complex decision-making structures, such as banks. The consent of secured creditors was introduced because of problems that arose under the previous bankruptcy framework in cases of leasing assets burdened by secured rights. On the one hand, bankruptcy administrators were motivated to lease the assets of the bankruptcy debtor and thereby cover the costs of the bankruptcy proceedings; on the other hand, leasing prevents the prompt cashing in of the debtor's assets. The rules in force before the 2017 amendments required the consent of the board of creditors but, since the members of the board were exclusively bankruptcy creditors (except where the board included creditors who were simultaneously secured and bankruptcy creditors), the interest of such a board was, as a rule, to lease the assets. That solution significantly harmed the interests of secured creditors. The Law on Bankruptcy does not prescribe the legal consequence of leasing property without duly notifying the secured creditors, or where they explicitly refuse consent. Consent to the conclusion of a contract is an institute of contract law. Where the consent of a third party is required for the conclusion of a contract, it may be given before the conclusion, as a permission, or afterwards, as an approval, unless the law prescribes otherwise (LCT, Article 29). It can be concluded that the consent under the Law on Bankruptcy (Article 28, paras 2-4) is in fact a permission, since it is given before the lease agreement is concluded. A lease agreement concluded without the consent of the secured creditors is therefore invalid. It is an absolutely null transaction (LCT, Article 103), since it contravenes the cited provisions. More precisely, such a contract is deemed never to have been concluded, because the law requires the prior consent (permission) of the secured creditors "for contract conclusion", that is, for undertaking the "action" of leasing the encumbered assets.

Conclusion

The Law on Bankruptcy distinguishes separate and pledge creditors as two categories of secured creditors. The distinguishing criterion is whether the creditor's claim, secured by a mortgage or pledge over the assets of the bankruptcy debtor, is held against the bankruptcy debtor itself, that is, whether the bankruptcy debtor is at the same time the debtor of the secured claim or whether that debtor is a third party.
The legal consequences of initiating bankruptcy proceedings against the owner of mortgaged property or of movables under pledge, above all the moratorium, have a significant impact on the exercise of rights and on the position of secured creditors: although they do not extinguish the real security, they as a rule prevent individual enforcement and settlement outside bankruptcy and thereby limit those creditors' rights. It was therefore necessary to provide additional protection for this category of creditors in the procedure of selling the bankruptcy debtor's assets. To this end, and bearing in mind the importance of real securities such as the mortgage and the chattel mortgage in the operations of banks and other economic entities, the bankruptcy framework of the Republic of Serbia prescribes several specific institutes that establish a distinct mechanism for the protection and exercise of the rights of secured creditors. The 2017 amendments to the law introduced a new model of settlement for secured creditors, enhancing the mechanism for cashing in the assets of the bankruptcy debtor. Secured creditors are now able to independently conduct the procedure of individual settlement of their claims from the assets over which they hold a lien. A nine-month period has been set during which, after the cancellation of the moratorium, individual settlement of secured creditors is allowed. If secured creditors fail to complete the settlement within this period, the moratorium is re-established. The bankruptcy judge will not adopt a decision cancelling the security measures, that is, the prohibition of enforcement and settlement, if the bankruptcy administrator proves that the assets in question are of key importance for the reorganization or the sale of the bankruptcy debtor as a legal entity. This introduces the obligation of proving the significance of the assets for the reorganization or sale of the bankruptcy debtor as a legal entity, with the burden of proof transferred to the bankruptcy administrator. Credit bidding gives the secured creditor the right, in the sale of the encumbered assets, to bid and to use the amount of its claim instead of cash to pay the price. In this way secured creditors can control the sale of the assets over which they hold a lien and react if they consider that the price of the collateral obtained at public bidding, and hence their settlement, is not adequate. The basis of the credit bidding institute is compensation. One of the consequences of initiating bankruptcy proceedings is the establishment of a legal preemptive right for the benefit of secured creditors and their related parties over the subject of the secured right or lien where the method of sale is a direct agreement. This additionally protects their position in situations where the sale is not publicly announced, without reducing the bankruptcy estate, since a creditor wishing to exercise this right must offer at least the same terms as the best bidder. The Law on Bankruptcy does not prescribe sanctions for the violation of the preemptive rights of secured creditors, which means that the general provisions of contract law apply: persons holding a preemptive right by law must be notified in writing of the intended sale and its terms, and otherwise they are entitled to demand cancellation of the sale.
Leasing assets of the bankruptcy debtor burdened by a secured right or lien is considered an action of utmost importance and is conditional on the consent of the secured creditors, regardless of the value of those assets relative to the value of the total bankruptcy estate. A lease agreement concluded without the permission of the secured creditors is not valid.

Vladimir Kozar

He is a special advisor in the law office Aleksić and Associates in Novi Sad. He is also a full-time professor at the Faculty of Law for Commerce and Judiciary, University Business Academy in Novi Sad, where he teaches Civil Procedure Law and Introduction to Civil Law. As an external associate, he participates in the scientific project "Serbian and European Law - Comparison and Harmonization" at the Institute for Comparative Law, University of Belgrade. For more than 12 years he was a judge of the Commercial Court in Belgrade. He is the author of a large number of scientific papers in the field of corporate and civil law, published and presented in domestic and foreign journals and at conferences. He is the vice president of the Association for Compensation Law and a member of the Assembly of the Business Lawyers Association of the Republic of Serbia.

Ivana Maraš

She is the head of the banking and finance department and a member of the Collegium in the law office Aleksić and Associates. Ivana has extensive experience in advising clients in the banking sector in the fields of non-performing loan collection, restructuring, bankruptcy, and enforcement and litigation proceedings. The international legal directory Legal 500 has recognized Ivana as one of the leading legal experts in the field of banking and finance. Her clients are international and domestic banks, as well as financial institutions operating on the Serbian market. Ivana also has extensive experience in the field of corporate law. She is a doctoral student at the Faculty of Law for Commerce and Judiciary, University Business Academy in Novi Sad.
Safety, immunogenicity and protective effect of sequential vaccination with inactivated and recombinant protein COVID-19 vaccine in the elderly: a prospective longitudinal study

The safety and efficacy of COVID-19 vaccines in the elderly, a high-risk group for severe COVID-19 infection, have not been fully understood. To clarify these issues, this prospective study followed up 157 elderly and 73 young participants for 16 months and compared the safety, immunogenicity, and efficacy of two doses of the inactivated vaccine BBIBP-CorV followed by a booster dose of the recombinant protein vaccine ZF2001. The results showed that this vaccination protocol was safe and tolerable in the elderly. After administering two doses of BBIBP-CorV, the positivity rates and titers of neutralizing and anti-RBD antibodies in the elderly were significantly lower than those in the young individuals. After the ZF2001 booster dose, the antibody-positive rates in the elderly were comparable to those in the young; however, the antibody titers remained lower. Gender, age, and underlying diseases were independently associated with vaccine immunogenicity in elderly individuals. The pseudovirus neutralization assay showed that, compared with those after receiving two doses of BBIBP-CorV priming, some participants obtained immunological protection against BA.5 and BF.7 after receiving the ZF2001 booster. Breakthrough infection symptoms lasted longer in the infected elderly, and pre-infection antibody titers were negatively associated with the severity of post-infection symptoms. The antibody levels in the elderly increased significantly after breakthrough infection but were still lower than those in the young. Our data suggest that multiple booster vaccinations at short intervals to maintain high antibody levels may be an effective strategy for protecting the elderly against COVID-19.

INTRODUCTION

The coronavirus disease 2019 (COVID-19) pandemic, originating in late 2019, has affected over 670 million individuals globally, resulting in more than 6 million fatalities. These developments have had a significant influence on the world economy and healthcare infrastructure. Although the World Health Organization (WHO) declared on May 5, 2023 that the COVID-19 outbreak no longer meets the criteria of a public health emergency of international concern, it emphasized that this does not mean that COVID-19 no longer poses a global health risk. 1 With the emergence of new variants and frequent instances of breakthrough infections, the repercussions of COVID-19 might persist longer than anticipated, leading countries to maintain their alertness in combating it. Since the elderly have lower resistance to the virus, the COVID-19 pandemic has posed unparalleled health risks to this demographic group, especially those with underlying diseases such as hypertension, diabetes, cardiovascular disease, and chronic respiratory disease. 2,3 Research indicates that elderly individuals have higher levels of angiotensin-converting enzyme 2 (ACE2) compared to younger adults, 4 which elevates their vulnerability to SARS-CoV-2 infection. Outbreak data from countries such as China, Italy, Japan, Singapore, Canada, and South Korea illustrate that the elderly are more susceptible to COVID-19. 5 A meta-analysis of 59 studies involving 36,470 participants revealed that people over the age of 70 had a 65% higher risk of contracting COVID-19 than those under the age of 70. 6
Similarly, a study on the epidemiology of SARS-CoV-2 in China reported a notable susceptibility to COVID-19 among individuals aged over 60 compared to younger and middle-aged adults. 7 Additionally, the immune system of elderly individuals is often in a chronic, prolonged, pro-inflammatory state associated with the aging process. Continuously low levels of innate immune response and heightened levels of pro-inflammatory cytokines could worsen infection-induced tissue damage and advance the progression of COVID-19. 8 Singhal et al. reported that COVID-19 patients of advanced age exhibited a severity rate of ~50% and a mortality rate of 10%. 9 Vaccines are a key means of epidemic prevention and control. Since the onset of COVID-19, global vaccine research has entered an arms race. Thus far, of the 199 vaccines undergoing preclinical research, 176 have progressed to the clinical research stage. As of March 2023, China has approved the large-scale use of 14 vaccines across five categories: inactivated vaccines, recombinant protein vaccines, live attenuated influenza virus vector vaccines, mRNA vaccines, and adenovirus vector vaccines. The protective effect of the COVID-19 vaccine is closely related to the production of neutralizing antibodies, the establishment of immune memory, and the production of virus-specific T cells. Vaccination to prevent illness and reduce the number of severe cases and deaths is an important measure to ensure the health of the elderly population. Related data indicate that unvaccinated individuals face a fivefold higher risk of SARS-CoV-2 infection compared to those who have received established vaccines. 10 Additionally, the risk of hospitalization and mortality increases by more than 10 times for the unvaccinated group. 10 The incidence rate ratio is directly related to vaccine efficacy. 10 Overall, the consistent efficacy of the vaccine against severe COVID-19 remains exceptionally high. 10 Currently, the evolutionary trajectory of SARS-CoV-2 is becoming more focused, with the majority of recent variants stemming from the subvariants of the Omicron lineage. 11 This trend indicates positive prospects for the development of novel vaccines. However, there is growing concern about the spread of these subvariants. While maintaining a binding affinity to ACE2 similar to that of the original strain, the additional mutations of the spike protein in these subvariants make them prone to evading antibodies. 11 This diminishes the efficacy of neutralizing antibodies, potentially leading to the uncontrolled spread of recent variants among vulnerable populations. The provision of optimal immunization strategies to maximize the immunogenicity of existing vaccines in the context of the ongoing variability of COVID-19 remains to be further explored. While investigations into the safety and effectiveness of COVID-19 vaccination in populations requiring additional safeguarding, such as cancer and AIDS patients, 12,13 have commenced, research relevant to the elderly remains in its nascent stages. According to a recent study conducted in China, the COVID-19 vaccine inoculation rate among the elderly, especially those aged over 70 years, is low. 14 This vaccination hesitancy is believed to be driven by concerns about contraindications and side effects and may have contributed to excessive mortality during the COVID-19 pandemic. 14
Although several clinical studies have focused on the immune responses of the elderly to COVID-19 vaccines, 15,16 there remains a dearth of convincing evidence regarding the safety and immunogenicity of the vaccines in this population, especially due to a lack of prospective, real-world, long-term follow-up clinical studies. It is also unclear whether the humoral immune response and protective effects induced by the COVID-19 vaccine in the elderly differ from those in the young population. To answer these questions, we conducted a prospective clinical observational study over 16 months to assess the safety and immunogenicity of two doses of an inactivated vaccine, BBIBP-CorV, followed by a recombinant protein vaccine booster, ZF2001, in elderly individuals.

Fig. 1 Study design. The first two doses of the inactivated vaccine BBIBP-CorV were inoculated intramuscularly at 1-month intervals (actually 21-28 days) and the third dose, the heterologous recombinant protein booster ZF2001, was administered 6 months after the second dose. The COVID-19 pandemic occurred between the 13th and 16th month. Blood samples were collected at baseline and at the 1st, 2nd, 4th, 7th, 8th, 10th, 13th, and 16th months.

Baseline characteristics of participants

As shown in Fig. 1 and Supplementary Table 1, from July 29, 2021, to August 3, 2021, 230 participants were enrolled in the vaccination and follow-up visit protocol. The baseline characteristics of all participants are listed in Table 1. Sex and body mass index (BMI) status did not differ significantly between the two groups. The elderly group's blood glucose (6.05 vs 5.33, P < 0.0001), blood urea nitrogen (6.12 vs 5.27, P < 0.0001), and creatinine (70.53 vs 65.49, P = 0.001) levels were marginally higher than those of the young group. Elderly individuals were more likely to have underlying diseases (68.15% vs 19.18%, P < 0.0001). The prevalence of type 2 diabetes and hypertension (15.92% vs 4.11%, P = 0.011 and 20.38% vs 4.11%, P = 0.001, respectively) was significantly higher in the elderly.

Safety of the vaccination protocol

Within 1 month after priming with the inactivated vaccine (BBIBP-CorV), the incidence of adverse events in the elderly was significantly lower than that in the young (9.55% vs 19.18%, P = 0.041); after the second dose of BBIBP-CorV, the incidence of adverse events was similar in the two groups (17.18% vs 21.92%, P = 0.393) (Fig. 2a). In total, only four cases of grade 3 adverse events were reported: two cases in the elderly group after priming (one case of fatigue and one case of diarrhea) (Fig. 2b) and two cases in the young group after the second dose (both cases of fatigue) (Fig. 2c). After administering the booster dose (ZF2001), the incidence of adverse events decreased significantly to <3% in both groups (2.74% vs 1.91%, P = 0.375), and no grade 2 or 3 adverse events were reported (Fig. 2a, d). Local pain and fatigue were common in young participants, whereas dizziness was prevalent in elderly participants (Fig. 2b, c). All adverse events resolved in the short term (nausea, fever, and diarrhea resolved within 24 h; fatigue, dizziness, and local pain resolved in ~72 h). During the entire vaccination protocol and follow-up period, the parameters of liver function (Fig. 2e-i), blood glucose (Fig. 2j), blood lipid (Fig. 2k, l), and kidney function (Fig. 2m, n) of the participants in both groups were within the normal range and remained stable.
Immunogenicity of COVID-19 vaccines is weaker in the elderly

In the intention-to-treat (ITT) analysis, the positive rates of neutralizing antibodies in the elderly were lower than those in the young at the 1st month (12.74% vs 38.36%, P < 0.0001) and at the 2nd-month peak (73.47% vs 92.42%, P = 0.002), and they also decreased rapidly by the 4th month (38.57% vs 76.56%, P < 0.0001) (Fig. 3a). After the booster, the positivity rate of neutralizing antibodies in the elderly group increased to 100%, comparable to that in the young group (Fig. 3a). However, the positive rate of neutralizing antibodies in the elderly population declined rapidly again, leading to a significant difference at the 13th month (55.93% vs 82.69%, P = 0.001) (Fig. 3a). The overall trend of anti-receptor binding domain (RBD) antibody positivity was similar to that of neutralizing antibodies, except that the positive rates in both groups remained high after administering the booster, with no significant differences between the two (Fig. 3b). Titers of neutralizing antibodies in the elderly were lower than those in the young at the 1st month (4.6 IU/mL vs 9.96 IU/mL, P < 0.0001), at the 2nd-month peak (22.00 IU/mL vs 37.66 IU/mL, P < 0.0001), and at the 4th month (9.49 IU/mL vs 19.34 IU/mL, P < 0.0001) (Fig. 3c). A remarkable increase in neutralizing antibody titers was observed 1 month after administering the booster in both groups, but titers in the elderly were still 12.38 times lower than those in the young (223.42 IU/mL vs 2764.86 IU/mL, P < 0.0001) (Fig. 3c). Six months after booster administration, the decline in neutralizing antibodies was essentially the same in elderly and young people, with a relative disparity of 8.51 times (17.61 IU/mL vs 149.87 IU/mL, P < 0.0001) (Fig. 3c). The titers of anti-RBD antibodies showed a similar trend (Fig. 3d). The per-protocol (PP) analysis showed results consistent with the ITT analysis (Fig. 3e-h), indicating that the elderly had a relatively poor ability to maintain antibody levels even after receiving booster doses.

Gender, age, and underlying diseases are negatively associated with antibody production in the elderly

Univariate and multivariate logistic regression analyses were performed to evaluate the factors affecting antibody production in elderly individuals (Table 2 and Supplementary Tables 2-7). Specifically, in the 2nd month, underlying diseases were an independent risk factor for the production of neutralizing antibodies, reducing the rate of neutralizing antibody production in elderly people by 77.2% (odds ratio [OR] 0.228, 95% confidence interval [CI] 0.094-0.550, P = 0.001) (Table 2). In the 7th month, age and underlying diseases were identified as risk factors for the production of neutralizing antibodies, reducing the neutralizing antibody production rate by 11.2% and 62.3%, respectively (OR 0.888, 95% CI 0.815-0.967, P = 0.007 and OR 0.377, 95% CI 0.167-0.849, P = 0.019, respectively) (Table 2). In the 2nd and 7th months, male gender and underlying diseases were identified as risk factors for the production of anti-RBD antibodies (Table 2). Our results suggest that elderly individuals of male gender, advanced age, or with underlying diseases exhibited lower levels of antibody production after vaccination.
Antibody levels before infection are associated with symptom severity after infection in the elderly

During the COVID-19 pandemic, at the end of 2022, 50 young and 98 elderly participants were followed up. The baseline characteristics did not differ significantly between the lost-to-follow-up and follow-up groups (data not shown). Forty-five young (90.0%) and 84 elderly (85.7%) participants were infected, showing no statistically significant difference (P = 0.461) (Fig. 4a). Among them, three young (6.0%) and eight elderly (8.2%) participants were asymptomatic, and no severe cases were reported (Fig. 4a). Young participants typically experienced fever, whereas elderly participants had both fever and cough (Fig. 4b). There were significantly more elderly participants with symptoms lasting longer than 1 week than young participants (43.42% vs 16.67%, P = 0.001) (Fig. 4c). There was no significant difference in the pre-infection antibody titers between the infected and uninfected participants in either group (Fig. 4d, e). Elderly individuals with lower antibody levels tended to have longer symptom duration, accompanying fever, and multisystemic symptoms, although the differences were not statistically significant (Fig. 4f-h). The demographic data of the two groups of participants are presented in Supplementary Table 8. Antibody levels before infection may therefore play an important role in the severity of breakthrough infection symptoms in the elderly.

Fig. 4 The relationship between breakthrough infection rates or symptoms and pre-infection antibody levels. The information on the breakthrough infection (a), the symptom types (b), and the symptom durations (c) was collected from 50 young and 98 elderly participants in the cohort. The association between pre-infection antibodies and infection status is presented in (d) and (e), whereas the relationship between pre-infection antibodies and symptoms is depicted by duration (f), accompanying fever (g), and symptom complexity (h) (monosystemic symptoms: only systemic, respiratory, or gastrointestinal symptoms; multisystemic symptoms: at least two of those three above). Data were analyzed using the chi-square test and Mann-Whitney U test. **P < 0.01. ns, no significance. Dotted lines represent the antibody thresholds.

Changes in antibodies before and after infection

The levels of neutralizing antibodies were significantly higher in both young and elderly patients with breakthrough infections than before infection (P < 0.0001) (Fig. 5a). Although the level of neutralizing antibodies in the elderly after infection was lower than that in the young, the difference was not statistically significant (Fig. 5a). Similarly, the post-infection anti-RBD antibody levels in elderly and young people with breakthrough infection were significantly higher (P < 0.0001 and P = 0.0002, respectively) (Fig. 5b). However, the anti-RBD antibody levels after breakthrough infection were still significantly lower in the elderly than in the young patients (P = 0.0012) (Fig. 5b). Interestingly, the level of neutralizing antibodies in the elderly after infection was 28.00-fold higher than that before infection, a much larger increase than the 3.32-fold rise in young individuals. The increase in anti-RBD antibodies was similar in the elderly and young groups (4.98- and 5.76-fold, respectively). In the uninfected participants, there was no significant difference between the pre- and post-pandemic antibody levels (Fig. 5c, d).

The production and protection of virus-specific neutralizing antibodies

Thirty young and 30 elderly individuals were randomly selected from the breakthrough infection cohort for the pseudovirus neutralization assay. The results showed that 1 month after the 2nd dose, the positivity rate for wild-type SARS-CoV-2 D614G (WT)-specific neutralizing antibodies in the young group was significantly higher than that in the elderly group (77% vs 47%, P = 0.033). However, after receiving the booster dose and after the breakthrough infection, no significant differences were observed between the two groups. One month after the booster dose, the positivity rates for BA.5- and BF.7-specific neutralizing antibodies increased. After the breakthrough infection, all types of variant-specific neutralizing antibodies showed a certain degree of positivity. However, compared with those for BA.5 and BF.7, the positivity rate for XBB.1.5-specific neutralizing antibodies was low, and the positivity rates for EG.5- and BA.2.86-specific neutralizing antibodies were lower still (Fig. 6). The geometric mean of pseudovirus 50% neutralization titers (pVNT50) differed significantly only in the WT-specific neutralizing antibody assay, at 1 month after the 2nd dose (25.48 vs 9.53, P = 0.003), 1 month after the booster (105.80 vs 22.14, P < 0.001), and pre-breakthrough infection (14.16 vs 9.20, P = 0.016). Notably, after the booster dose, the BA.5- and BF.7-specific neutralizing antibody titers of some participants in both groups reached high levels; after the breakthrough infection, these titers were comparable to those against WT. For XBB.1.5, EG.5, and BA.2.86, very few positive neutralizing responses were observed during the vaccination process, and the titers did not reach the WT level even after breakthrough infection (Fig. 6). In addition, analysis of symptomatic patients revealed that, although most individuals had negative neutralizing responses to BA.5 and BF.7 before breakthrough infection, higher pVNT50 levels were more often observed in patients of both groups with short symptom duration and monosystemic symptoms, and in elderly patients without fever (Supplementary Fig. 1). Meanwhile, except at baseline, titers of WT-specific neutralizing antibodies in the entire cohort showed strong, significant positive correlations with pVNT50 levels throughout the follow-up (Supplementary Fig. 2). Correlations in the young group were similar to those in the entire cohort; however, the elderly group showed a strong correlation only after the booster (Supplementary Fig. 2). Additionally, within the corresponding groups, only the pVNT50s of BA.5- and BF.7-specific neutralizing antibodies showed weak but significant correlations with the pVNT50s of WT-specific neutralizing antibodies after the booster and before the pandemic (Supplementary Figs. 3 and 4). After the pandemic, the pVNT50s of all variant-specific neutralizing antibodies, including those against XBB.1.5, EG.5, and BA.2.86, showed strong, significant positive correlations with the pVNT50s of WT-specific neutralizing antibodies (Supplementary Fig. 5).

DISCUSSION

The fragile immune system and the threat of SARS-CoV-2 variants make the elderly prone to a high risk of infection, hospitalization, and even death in the post-COVID era. 16,17
16,17Studies have shown that vaccines can effectively prevent severe COVID-19 cases due to variant infection. 18The WHO recommended a third booster dose for people who received the initial two doses 5 months ago or the adenovirus vaccine 2 months ago, [19][20][21] for maintaining protective effects.Unfortunately, the coverage of primary series and booster vaccination for individuals aged 70 years and even older remains insufficient in China. 141][32] A follow-up study conducted by Parry et al. for 8 months confirmed that both mRNA (BNT162b2) and adenovirus vaccines (ChAdOx1) not only have strong immunogenicity in elderly individuals but also can induce differential humoral and cellular immunity. 33Despite several other vaccine evaluation trials also focusing on the elderly population; [34][35][36] currently, no prospective, real-world study data on the long-term effects of COVID-19 vaccination in this demographic are available. Our study showed that two doses of BBIBP-CorV followed by booster ZF2001 were well tolerated, and the incidence of adverse events was low in the elderly.Very few cases of grade 3 adverse events have been reported.All adverse events resulted in rapid recovery without medical intervention.Moreover, multiple laboratory test indices remained stable during a follow-up period of up to 13 months.Our findings are consistent with those of several studies suggesting that the elderly may experience fewer adverse events following COVID-19 vaccination than younger people. 37,38accine-related systemic adverse events are related to immune responses, and the incidence of early adverse events is low if the immune response is weak. 39Lower antibody levels in the elderly after vaccination also confirm this finding. In our immunogenicity investigation, by confirming the consistency between ITT and PP analyses, two results were noteworthy.The first result we obtained was that at all follow-up time points, the antibody levels were significantly lower and decreased rapidly in the elderly.Based on WHO's "5-month interval" principle for booster injection, a shorter interval of 3-4 months may help the elderly maintain adequate humoral immune response to alleviate symptoms of the COVID-19 breakthrough infection.Second, our results showed that the neutralizing antibody titers of elderly participants in the 1st month after the booster were 10 times higher than those in the 1st month after the second dose, whereas, in another study of a 3-dose COVID-19 inactivated vaccine protocol, there was only a 3.69-fold increase after the third dose. 40It is suggested that the sequential heterologous recombinant COVID-19 vaccine booster was more effective in reactivating the weakened immunological memory than that of the sequential homologous vaccine in the elderly.In addition, since the ZF2001 booster led to fewer adverse events than BBIBP-CorV, the recombinant protein vaccine may be a viable candidate for the second dose.A weak anti-RBD antibody production ability for male participants, which is compatible with the previous reports, 41 was observed in our elderly cohort, and gender-specific behaviors, genetic and hormonal factors, and sex differences in biological pathways related to SARS-CoV-2 infection may lead to this result. 
42At present, the severe infection of COVID-19 in the elderly is mainly due to the gradual aging of the immune system with increasing age, that is, immunosenescence, which can be caused by the weakened response to inflammatory stimuli, reduced migration and phagocytic ability of dendritic cells, the low reactivity of late differentiated immune cells, and atrophy of lymph nodes. 43Increased incidence of combined underlying diseases, especially type 2 diabetes, 44 also negatively impacts the immune system in the elderly.An appropriate immunopotentiator dosage is recommended to address the adverse effects of these risk factors. In our breakthrough infection cohort, there were no severe cases or deaths in the elderly population.In addition to the reduced virulence due to viral mutations, the protective efficacy of our vaccination strategy may also play an important role.Nevertheless, symptom duration in the elderly was significantly longer, and the range of respiratory system symptoms was wider.This may be related to the lower antibody levels before breakthrough infection in the elderly than in the young population.Further analysis demonstrated that elderly individuals with lower antibody titers were more prone to have longer Fig. 6 Positive rates and titers of different virus-specific neutralizing antibodies.Pseudovirus 50% neutralization titers (pVNT 50 ) were detected in the two groups at baseline and 2, 7, 8, 13, and 16 months and are represented as geometric means in the column chart; the antibodypositive rates are indicated by the red number at the bottom of the figure.Only significant differences are marked with P values.The red dot line represents the antibody-positive threshold.Data were analyzed using the chi-square test and Mann-Whitney U test.*P < 0.05, **P < 0.01, ***P < 0.001.WT wild-type SARS-CoV-2 D614G; pos.rates positive rates symptom duration, greater likelihood of fever, and higher complexity of symptoms.This trend was not obvious in young people, which may be due to the high titers of neutralizing and anti-RBD antibodies before the breakthrough infection.However, a larger cohort will be required for confirmation. When using vaccination strategies to prevent COVID-19 breakthrough infection, it is important to ensure the maintenance of its SARS-CoV-2-specific memory repertoire, so that the body can mobilize rapid and strong anamnestic response to fight against the virus when it invades again. 45Studies that have started to concentrate on the elderly population's hybrid immunity with series booster vaccinations have so far reported encouraging outcomes. 46To gather further testing and analysis evidence, more research is still required.Based on some rare evidence, elderly people may have long-term immunological memory of SARS-CoV-2, 47 and according to the most recent research, the elderly who survived the COVID-19 outbreak have memory B cells that can facilitate the maintenance of anti-RBD antibodies. 
48eutralizing antibody levels in elderly individuals recover quickly and strongly after re-exposure to antigen epitopes, suggesting that immunological memory in the elderly can be strengthened.As there is a difference in immunological memory between booster and breakthrough infection, more evidence is needed.At the end of 2022, Omicron lineages BA.5 and BF.7 dominated the pandemic in China.The pseudovirus neutralization assay showed that different types of variant-specific neutralizing antibodies, especially BA.5 and BF.7, can be induced by two doses of BBIBP-CorV followed by the booster ZF2001.For WT-specific neutralizing antibodies, higher production was highly correlated with stronger neutralizing ability, indicating that our vaccination protocol can simultaneously improve the quantity and quality of antibody production.In the elderly group, compared with inactivated vaccines, the booster dose greatly enhances the neutralizing ability of virus-specific antibodies rather than simply increasing antibody production.Although boosters may ultimately only activate existing immunological memory cells rather than create new memory cells, which are necessary to resist new variants, 49 our data confirmed that boosters are indispensable for training the immune system in the elderly.After exposure to WT vaccines, the immune system of elderly people may also produce effective immune responses to BA.5 and BF.7 variants.Similarly, after experiencing breakthrough infection, some elderly patients can develop effective and specific responses to XBB.1.5,EG.5, and BA.2.86 that they have not been exposed to.Overall, the protective efficacy of vaccines can be expanded, and the closer the kinship of the variant, the greater its protective effect.These neutralizing antibodies may serve as crucial factors in the prevention of severe illnesses or even fatalities during future outbreaks.Vaccination with the WT SARS-CoV-2 may not induce immunological protection against SARS-CoV-2 variants in a large proportion of the population, especially the elderly.Receiving vaccines designed on the basis of variants may provide better protection; however, sequential boosters at shorter time intervals are still necessary.In view of a recent report, the overall incidence of COVID-19 reinfection in China has already reached 28.3%, and the longer the time from the first infection, the higher the incidence of reinfection, 50 suggesting that the elderly still need multiple booster vaccinations after the global COVID-19 epidemic, even if some of them have a full vaccination history. Our study had some limitations.First, the weak physical status of the elderly, the interference of the COVID-19 pandemic, and the long-term follow-up itself objectively led to the loss of follow-up and dropouts, which gradually decreased the cohort size.Second, owing to the sample consumption caused by clinical testing, our project did not include an in-depth, comprehensive analysis of virus-specific T-cell immunity, which weakens the impact to some extent.Lastly, without an inactivated vaccine booster control cohort, we could not directly demonstrate the advantages of a heterologous recombinant COVID-19 vaccine booster. 
In conclusion, with a long-term follow-up study in this prospective observational clinical cohort, we confirmed that the heterologous vaccination protocol not only enhanced immunogenicity while ensuring safety but also elicited promising protective efficacy.Due to the weakened immune response in elderly individuals, it is pivotal for them to receive prompt and appropriate SARS-CoV-2 variant vaccines, depending on the prevalence of the variant types.More importantly, according to our findings, administering multiple COVID-19 booster shots at short intervals in the elderly population may be a protective strategy.Further studies on better vaccine combinations and immune mechanisms are required. Study design This was a prospective observational clinical cohort study conducted from July 29, 2021, to June 30, 2023, in Hunyuan County, Shanxi Province.This study was conducted according to the principles of the Declaration of Helsinki.The study protocol was approved by the Ethics Committee of the Fifth Medical Center of the PLA General Hospital and clinical registration was completed (NCT05012800).Signed informed consent was obtained from each participant prior to screening. As shown in Fig. 1, all participants received two doses of inactivated vaccine at 21-28 days intervals (annotated as the 1-month interval for narrative convenience) and then received the third recombinant protein vaccine 6 months after the second dose.The first two doses were China's Sinopharm COVID-19 inactivated vaccine BBIBP-CorV (Vero cells) containing 4 μg/0.5 mL in a vial.The third booster dose was Zhifei Longcom recombinant COVID-19 vaccine ZF2001 (CHO cells) containing 25 μg/0.5 mL in a vial.Both vaccines were adjuvanted by aluminum hydroxide.All the participants received the vaccine intramuscularly through a deltoid. Follow-up visits were conducted at baseline and 1st, 2nd, 4th, 7th, 8th, 10th, 13th, and 16th months.Participant diary cards were established to record short-term adverse events within 1 month of receiving each dose of the vaccine, and long-term adverse events at the 4th, 7th, 10th, and 13th months.Blood samples were collected for laboratory testing and antibody titer determination.The COVID-19 outbreak and breakthrough infection occurred between the 13th and 16th months of follow-up.SARS-CoV-2 infection was diagnosed by nucleic acid or antigen testing. The main endpoints of this study were safety profiles within 1 month after each vaccination dose, including the incidence of adverse events, liver and kidney function, blood glucose and lipid levels, and routine blood indices.The definition and grade of adverse events were evaluated according to the Common Terminology Criteria for Adverse Events, version 5.0.Briefly, events were rated as grade 1 (mild), asymptomatic or mild symptoms, clinical or diagnostic observations only, intervention not indicated; grade 2 (moderate), minimal, local, or non-invasive intervention indicated, limiting age-appropriate instrumental activities of daily living (ADL); and grade 3 (severe or medically significant but not immediately life-threatening), indication for hospitalization or prolongation of hospitalization, disabling and limiting self-care ADL.The secondary endpoints were immune protection of the participants, specifically the titers of neutralizing and anti-RBD antibodies at 1, 3, and 6 months after each vaccination dose. 
Enrollment criteria for participants
Eligible participants were those who (1) provided signed informed consent; (2) in females, had a negative urine pregnancy test; (3) completed at least 6 months of follow-up; (4) screened negative for HBsAg, anti-HCV, HIV, and TPHA; (5) had an axillary temperature ≤37.0 °C; and (6) were aged 18-59 years (young group) or 60-80 years (elderly group).
Exclusion criteria were as follows: (1) a positive urine pregnancy test in females; (2) pregnancy or lactation; (3) known allergy to any component of the two vaccines; (4) serious chronic or advanced diseases, such as hypertension, diabetes, asthma, or thyroid disease, that could not be controlled by drugs; (5) thrombocytopenia, hemorrhagic disease, or thrombotic disease; (6) congenital or acquired angioedema/neuroedema; (7) a personal or family history of convulsions, epilepsy, encephalopathy, other progressive neurological diseases, or psychiatric disorders; (8) lymphadenopathy; (9) lymphoma, leukemia, or other systemic malignancies; (10) autoimmune disease; and (11) acute exacerbation of chronic diseases, or acute infectious diseases and fever.
Antibody detection
The chemiluminescent microparticle immunoassay was used to detect SARS-CoV-2 spike protein-neutralizing antibody titers (detection range: 4.6-4600 IU/mL, positive threshold: 11.5 IU/mL, performed by Wan Tai Kairui Biotechnology Co., Ltd. with the protocol of the 2019-nCoV Neutralizing Antibody Detection Kit) and anti-RBD antibody titers (positive threshold: 1.0 cut-off index, performed by Wan Tai Kairui Biotechnology Co., Ltd. with the protocol of the 2019-nCoV Antibody Detection Kit) in human plasma samples. The sample, 2019-nCoV recombinant antigen-coated magnetic particles, and reaction diluent were mixed. After washing, acridine ester labeled with 2019-nCoV recombinant antigen (or anti-human IgG antibody) was added to the reaction system to form a complex. Pre-excitation and excitation solutions were added after washing the samples again. Chemiluminescence reaction signals were measured and expressed in relative luminescent units (RLUs), and the antibody titers in the samples were proportional to the RLUs.
According to the protocols of JOINN Beijing Technology Testing Co., Ltd., a pseudovirus neutralization assay was used to detect the neutralizing activity of WT-, BA.5-, BF.7-, XBB.1.5-, EG.5-, and BA.2.86-specific neutralizing antibodies against the SARS-CoV-2 spike protein. An HIV lentiviral vector expressing the SARS-CoV-2 spike protein on its surface was incubated with HEK293T-ACE2 cells in DMEM. Neutralization of the pseudovirus was measured after serial dilution of the sample, and the results were expressed as the reciprocal of the sample dilution required to reduce the RLUs by 50% compared with the control group. The positive threshold was set to 10.
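To make the pVNT50 readout concrete, the following is a minimal sketch of how a 50% pseudovirus neutralization titer can be estimated from serial-dilution RLU measurements. It is not the assay provider's analysis code: the dilution series, the control-well normalization, the log-dilution interpolation, and the handling of negative samples are illustrative assumptions consistent with common practice.

```python
import numpy as np

def pvnt50(dilutions, sample_rlu, virus_ctrl_rlu, cell_ctrl_rlu, threshold=10.0):
    """Estimate the 50% pseudovirus neutralization titer (pVNT50).

    dilutions      : reciprocal serum dilutions, e.g. [10, 30, 90, 270, 810]
    sample_rlu     : mean RLU of serum+pseudovirus wells at each dilution
    virus_ctrl_rlu : mean RLU of virus-only wells (0% neutralization)
    cell_ctrl_rlu  : mean RLU of cell-only wells (100% neutralization)
    threshold      : positivity cut-off for the titer (10 in this study)
    """
    dilutions = np.asarray(dilutions, dtype=float)
    sample_rlu = np.asarray(sample_rlu, dtype=float)

    # Percent neutralization relative to the virus and cell controls.
    neut = 100.0 * (virus_ctrl_rlu - sample_rlu) / (virus_ctrl_rlu - cell_ctrl_rlu)

    # Sort by dilution; neutralization is expected to decrease as dilution increases.
    order = np.argsort(dilutions)
    logd, neut = np.log10(dilutions[order]), neut[order]

    if neut.max() < 50.0:
        # Never reaches 50%: report half the lowest dilution (a common convention).
        return float(dilutions.min()) / 2.0, False

    # Interpolate the log10(dilution) at which neutralization crosses 50%.
    titer = 10 ** np.interp(50.0, neut[::-1], logd[::-1])
    return titer, titer >= threshold

# Illustrative example with made-up numbers:
titer, positive = pvnt50([10, 30, 90, 270, 810],
                         [1200, 5400, 15800, 30100, 38500],
                         virus_ctrl_rlu=40000, cell_ctrl_rlu=500)
# Cohort-level geometric means, as reported above, would be
# np.exp(np.mean(np.log(titers))) over the per-participant titers.
```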
Statistical analysis All statistical analyses were performed using IBM SPSS (version 26).Participants in both groups who had received all three doses of the COVID-19 vaccine were included in the ITT analysis, and those who completed more than six follow-up visits were included in the PP analysis.The Kolmogorov-Smirnov test was used to evaluate the distribution types.Non-normally distributed and ordered data are represented by the median (range).Measurement of pVNT 50 s is represented by geometric means.Pearson's chi-square test, continuity correction, and Fisher's exact test were used to check the proportion of the count data.The Mann-Whitney U test and paired t-test were used to compare differences between groups, the Wilcoxon rank-sum test was used to compare intragroup differences, and the nest t-test was used to compare unmatched groups.We displayed the median values of each group in the figures using GraphPad Prism 8 and estimated the differences based on the 95% CI.Baseline characteristics were screened using univariate logistic regression, and the significantly different factors (P_value < 0.1) were further analyzed using multivariate logistic regression.The analysis results are presented using an OR combined with a 95% CI (lower-upper) in the table.In correlation analysis, the Pearson correlation test was used to process normally distributed datasets, while the Spearman rank correlation test was used to process nonnormally distributed datasets.An absolute R value > 0.5 indicated strong correlation, whereas an absolute R value ≤ 0.5 indicated weak correlation.The hypothesis test was bilateral, and a P_value < 0.05 was considered statistically significant. Fig. 2 Fig. 3 Fig.2Safety evaluation.The incidence of adverse events within 1 month after each dose of COVID-19 vaccine (a) and the proportion of each adverse event to the overall in two groups within 1 month after the first dose (b), the second dose (c), and the booster (d).Changes in the biochemical indices in two groups at baseline and the 1st, 2nd, 4th, 7th, 8th, 10th, and 13th months (data of the 16th month was not detected due to the pandemic interference) shown in e-n.*P < 0.05; ns no significance; GGT γ-glutamyl transpeptidase; ALP alkaline phosphatase.Other adverse events included rash, cough, pharyngeal malaise, tinnitus, and insomnia Fig. 5 Fig. 5 Analysis of changes in antibodies before and after infection.Changes in antibody levels were depicted in (a) and (b) for infected participants, and in (c) and (d) for uninfected participants.Data were analyzed using the Mann-Whitney U test.**P < 0.01, ***P < 0.001, ****P < 0.0001.ns no significance.Bold black horizontal lines represented the median values of antibody titers.Dot lines represented the threshold of antibodies Table 1 . Baseline characteristics of the participants Data are median (range) or n (%).Bold p values represent significant differences a Laboratory test indices are presented in abbreviations: ALT alanine aminotransferase, AST aspartate aminotransferase, TBIL total bilirubin, TC total cholesterol, TG triglyceride, GLU blood glucose, BUN blood urea nitrogen, CRE creatinine Table 2 . Factors influencing neutralizing and anti-RBD antibody production in the elderly
On the importance of metrics in practical applications
Students' motivation for learning mathematical concepts can be increased by showing the usefulness of these concepts in practical problems. One important mathematical concept is that of a metric space and, closer to the applications, that of a metric function. In this work we aim to illustrate how important it is to appropriately choose the metric when dealing with a practical problem. In particular, we focus on the problem of detecting noisy pixels in colour images. In this context, it is very important to appropriately measure the distances and similarities between the image pixels, which is done by means of an appropriate metric. We study the performance of different metrics, including recent fuzzy metrics, within a specific filter to show that the choice of metric is indeed critical to appropriately solve the task.
Introduction
Nowadays, the processing of digital signals and images, and particularly colour image processing, is an extensively studied practical problem. A problem that appears during the acquisition and transmission of digital images is impulsive noise, which affects some pixels of the image, and its reduction has been extensively studied in recent years. Vector median-based filters [1]-[3] are widely used methods for impulse noise reduction in colour and multichannel images because they are based on the theory of robust statistics and, consequently, perform robustly. However, these methods apply the filtering operation to all the pixels of the image, and they tend to blur details and edges.
To overcome this drawback, a series of switching filters, which combine noise detection with noise reduction applied only to the detected noise, have been studied in [4]-[9]. Techniques using fuzzy logic have also been applied to this problem [10]-[11], and fuzzy metrics have been shown to perform appropriately for this task [6,7,12,13,14,15]. These works have proved that fuzzy logic and fuzzy metrics are appropriate for image denoising because they can deal with the nonlinear nature of digital images and with the inherent uncertainty in distinguishing between noise and image structures.
In this paper, we aim to point out that, apart from the particular filtering method, it is very important to appropriately choose the metric used within the filter. To do so, using the same filtering procedure, we present a study of the performance of different metrics, including recent fuzzy metrics and a novel fuzzy metric specifically designed to detect impulses.
The paper is structured as follows. Section 2 introduces the metrics used in the detection process. The proposed study and experimental results are described in Section 3, together with a performance comparison and discussion. Finally, some conclusions are drawn in Section 4.
Metrics to Diagnose Noise
In mathematics, a metric is a function that defines a distance between the elements of a set. In colour image filtering, every pixel of the image is an RGB vector with integer components between 0 and 255, so metrics provide a way to assess the degree of closeness between two pixels. The L1 and L2 metrics were the first to be used for this purpose, followed by the angular distance between pixels and a set of combinations of several metrics. In this work we use four metrics (two classical and two fuzzy). The classical metrics are the city-block and Euclidean distances,
$$L_1(x_i,x_j)=\sum_{l=1}^{3}\lvert x_i(l)-x_j(l)\rvert, \qquad L_2(x_i,x_j)=\Big(\sum_{l=1}^{3}\big(x_i(l)-x_j(l)\big)^2\Big)^{1/2}.$$
Fuzzy logic is a theory that has grown considerably in recent years, due to its use in control systems, expert systems, sensors in electronic devices, etc. At the same time, fuzzy topology and fuzzy metrics have been developed. For this reason, fuzzy metrics have entered the image denoising area with very good results, and recent works have shown that their use can improve the filtering method.
A stationary fuzzy metric [17]-[19], M, on a set X is a fuzzy set of X × X satisfying the following conditions for all x, y, z ∈ X: (FM1) M(x, y) > 0; (FM2) M(x, y) = 1 if and only if x = y; (FM3) M(x, y) = M(y, x); (FM4) M(x, z) ≥ M(x, y) ∗ M(y, z), where ∗ is a continuous t-norm. M(x, y) represents the degree of nearness of x and y and, according to (FM2), M(x, y) is close to 0 when x is far from y. Let (x_i(1), x_i(2), x_i(3)) be the colour image vector x_i in the RGB colour space, let X be the set {0, 1, ..., 255}^3, and fix K > 0. Then, according to [12,16], the function M* : X × X → (0,1] given by
$$M^{*}(x_i,x_j)=\prod_{l=1}^{3}\frac{\min\{x_i(l),x_j(l)\}+K}{\max\{x_i(l),x_j(l)\}+K}$$
is a stationary fuzzy metric, for the usual product, on X in the sense of George and Veeramani [18]. From now on, M*(x_i, x_j) will denote the fuzzy distance between the colour image vectors x_i and x_j. Obviously M* is bounded, since it satisfies
$$\Big(\frac{K}{255+K}\Big)^{3}\le M^{*}(x_i,x_j)\le 1$$
for all x_i, x_j ∈ X. We define the fuzzy set M∞ on X × X by
$$M^{\infty}(x_i,x_j)=\min_{l\in\{1,2,3\}}\frac{\min\{x_i(l),x_j(l)\}+K}{\max\{x_i(l),x_j(l)\}+K}.$$
M∞ is a (stationary) fuzzy metric in the sense of George and Veeramani [18]. From the mathematical point of view, the stationary fuzzy metric M∞, introduced in [8], can be seen as a fuzzy version of the classical L∞ metric and, as we will show, it is especially sensitive to impulse noise.
These fuzzy metrics are non-uniform in the sense that the value given for two different pairs of consecutive numbers (or vectors) may not be the same. Increasing the value of K reduces this non-uniformity. According to our experiments, we have set K = 1024, which is an appropriate value for RGB colour vectors [12,13].
Experimental Study and Results
In recent works on image filtering, one of the most studied problems is impulse noise detection.
The key issue is to distinguish between edges, fine details, and noise. One switching method that provides good results is the Peer Group Filter (PGF), presented by Smolka [5]. This method provides a fast scheme of noise detection followed by a noise replacement operation. In the first phase, the algorithm studies the neighbourhood of every pixel in a filtering window (of usual size 3 × 3): if the pixel under study has at least m pixels close to it (we have chosen m = 2 as in [5]), the method diagnoses the pixel as noise-free, and as noisy otherwise. In the second phase, only the noisy pixels are replaced with the output of the Arithmetic Mean Filter of the colour pixels in the neighbourhood.
To show the importance of the choice of the metric used to measure the distance or similarity between colour image pixels, we have implemented different versions of the PGF using four different metrics: the classical city-block (L1) and Euclidean (L2) metrics and the M* and M∞ fuzzy metrics introduced in Section 2. Two images (figure 1) have been corrupted with impulsive noise according to the model proposed by Plataniotis [2], and then filtered with the four variants of the filter. To assess the performance, the Mean Absolute Error (MAE), Peak Signal to Noise Ratio (PSNR), and Normalized Colour Difference (NCD) have been used. Notice that for MAE and NCD lower values denote better performance, whereas for PSNR higher values are better.
Tables 9.1-9.2 show the performance results of the metrics, whereas figure 4 shows a graphical analysis of the NCD, a reference measure that reflects the visual quality of the filtered image.
From Tables 9.1 and 9.2 we may conclude that the L2 metric and the M∞ fuzzy metric exhibit a much better performance than the rest, especially in terms of PSNR. A curious student should ask why. The reason is that the squaring in L2 and the min operation in M∞ make these metrics especially sensitive to the presence of impulse noise. In particular, the best results of all, obtained with the M∞ fuzzy metric, provide improvements of about 40% in MAE with respect to L1 and M*, and notable improvements with respect to L2, especially when the noise intensity grows. When impulse noise affects at least one component of either x_i or x_j, that component is associated with the lowest nearness value between the components. In such a case, the M∞ fuzzy metric takes the nearness value associated with the presence of the impulse and ignores any possible similarity between the rest of the components. Moreover, as the difference between the components becomes larger, the value of M∞ drops rapidly.
Figure 2 shows a visual comparison of the output of every implementation, whereas figure 4 shows a visual analysis of the behaviour in terms of NCD of every image with every implementation.
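As a companion to the comparison above, the following Python sketch shows how the four metrics of Section 2 and the PGF detection step can be implemented. It is an illustrative reconstruction, not the authors' code: in particular, the closeness threshold used to declare two pixels "close" (here `close_thr`) is not given in the text and is an assumption that would have to be tuned separately for each metric, as in [5].

```python
import numpy as np

K = 1024  # fuzzy metric parameter used in the paper

def l1(x, y):
    return np.abs(x - y).sum()

def l2(x, y):
    return np.sqrt(((x - y) ** 2).sum())

def m_star(x, y):
    # Product-type stationary fuzzy metric M*: values in (0, 1], 1 means identical pixels.
    return np.prod((np.minimum(x, y) + K) / (np.maximum(x, y) + K))

def m_inf(x, y):
    # Fuzzy analogue of L_inf: keeps only the least similar channel, which is
    # why it reacts strongly to an impulse affecting a single channel.
    return np.min((np.minimum(x, y) + K) / (np.maximum(x, y) + K))

def peer_group_noise_mask(img, metric=m_inf, close_thr=0.95, m=2, fuzzy=True):
    """PGF detection step on an (H, W, 3) float RGB image.

    A pixel is declared noise-free if at least `m` of its 3x3 neighbours are
    'close' to it: similarity >= close_thr for fuzzy metrics, or distance
    <= close_thr for classical ones. Returns True where the pixel is noisy.
    """
    H, W, _ = img.shape
    noisy = np.zeros((H, W), dtype=bool)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            centre, peers = img[i, j], 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    v = metric(centre, img[i + di, j + dj])
                    if (fuzzy and v >= close_thr) or (not fuzzy and v <= close_thr):
                        peers += 1
            noisy[i, j] = peers < m
    return noisy

# Detected pixels would then be replaced by the arithmetic mean of their
# 3x3 neighbourhood, as in the second phase of the PGF.
```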
Conclusions
In this paper we have tried to show students how important it is to choose an accurate metric to measure distances, in this case, the colorimetric distance between the pixels of digital colour images. We have shown that a set of recent fuzzy metrics behaves better than the classical metrics. This fact should encourage students and novice researchers to test new mathematical tools instead of using only the classical ones. By only varying the metric and filtering the images, the results obtained show that an appropriate choice of the metric is of paramount importance in the design of a filtering method, and this choice can lead to significant performance benefits.
It is therefore interesting to keep looking for new metrics and measures that improve the detection of noisy pixels, distinguishing them from the edges and fine details contained in the images.
Figure 2: Visual comparison of the filter output using the Pills image and several metrics: (a) corrupted with p = 10% of impulsive noise, (b) L1, (c) L2, (d) M*, and (e) M∞.
Table 9.1: Experimental results for the PGF filter in the comparison with diverse metrics when filtering the Pills detail image corrupted with different densities p of fixed-value impulse noise.
Table 9.2: Experimental results for the PGF filter in the comparison with diverse metrics when filtering the Statue image corrupted with different densities p of fixed-value impulse noise.
LEAP: Learnable Pruning for Transformer-based Models Pruning is an effective method to reduce the memory footprint and computational cost associated with large natural language processing models. However, current pruning algorithms either only focus on one pruning category, e.g., structured pruning and unstructured, or need extensive hyperparameter tuning in order to get reasonable accuracy performance. To address these challenges, we propose LEArnable Pruning (LEAP), an effective method to gradually prune the model based on thresholds learned by gradient descent. Different than previous learnable pruning methods, which utilize $L_0$ or $L_1$ penalty to indirectly affect the final pruning ratio, LEAP introduces a novel regularization function, that directly interacts with the preset target pruning ratio. Moreover, in order to reduce hyperparameter tuning, a novel adaptive regularization coefficient is deployed to control the regularization penalty adaptively. With the new regularization term and its associated adaptive regularization coefficient, LEAP is able to be applied for different pruning granularity, including unstructured pruning, structured pruning, and hybrid pruning, with minimal hyperparameter tuning. We apply LEAP for BERT models on QQP/MNLI/SQuAD for different pruning settings. Our result shows that for all datasets, pruning granularity, and pruning ratios, LEAP achieves on-par or better results as compared to previous heavily hand-tuned methods. One of the promising approaches for addressing the inference time and power consumption issues of these large models is pruning (Sanh et al., 2020;Michel et al., 2019;Wang et al., 2020a). As the nature of neural networks (NNs), different pruning granularity exists, e.g., structured pruning (head pruning for transformers and block-wise pruning for weight matrices) and unstructured pruning (purely sparsebased pruning). Different pruning methods are proposed, but they generally only target one set of pruning granularity. As such, when a new scenario comes, e.g., hybrid pruning, a combination of structured pruning and unstructured pruning, it is unclear how to choose the proper method. Meanwhile, existing work sometimes sets the same pruning ratio for all layers. However, it is challenging to prune the same amount of parameters of all weights of a general NNs to ultra-low density without significant accuracy loss. This is because not all the layers of an NN allow the same pruning level. A possible approach to address this is to use different pruning ratios. A higher density ratio is needed for certain "sensitive" layers of the network, and a lower density ratio for "non-sensitive" layers. However, manually setting such multi-level pruning ratios is infeasible. Regularization method, e.g., (Sanh et al., 2020), is proposed to address multi-level pruning ratio issue. However, it introduces two drawbacks: (i) a careful hand-tuned threshold schedule is needed to improves the performance, especially in high sparsity regimes; and (ii) due to the regularization term is not directly applied to the final pruning ratio, the regularization magnitude also needs heavily tuning to get desired density ratio. Motivated by these issues, we propose an effective LEArnable Pruning (LEAP) method to gradually prune the weight matrices based on corresponding thresholds that are learned by gradient descent. 
We summarize our contributions below, • LEAP sets a group of learnable pruning ratio parameters, arXiv:2105.14636v2 [cs.CL] 23 May 2022 which can be learned by the stochastic gradient descent, for the weight matrices, with a purpose to set a high pruning ratio for insensitive layers and vice versa. As the NN prefers a high-density ratio for higher accuracy and low loss, we introduce a novel regularization function that can directly control the preset target pruning ratio. As such, LEAP can easily achieve the desired compression ratio unlike those L 0 or L 1 penalty-based regularization methods, whose target pruning ratio needs careful hyperparameter tuning. • To ease hyperparameter search, we design an adaptive regularization magnitude λ reg to adaptively control the contribution to the final loss from the regularization penalty. The coefficient λ reg is automatically adjusted to be large (small) when the current pruning ratio is far away (close to) the target ratio. • We apply LEAP for BERT base on three datasets, i.e., QQP/MNLI/SQuAD, under different pruning granularity, including structured, hybrid, and unstructured pruning, with various pruning ratios. Our results demonstrate that LEAP can consistently achieve on-par or better performance as compared to previous heavily tuned methods, with minimal hyperparameter tuning. • We show that LEAP is less sensitive to the hyperparameters introduced by our learnable pruning thresholds, and demonstrate the advance of our adaptive regularization magnitude over constant magnitude. Also, by analyzing the final pruned models, two clear observations can be made for BERT pruning: (1) early layers are more sensitive to pruning, which results in a higher density ratio at the end; and (2) fully connected layers are more insensitive to pruning, which results in a much higher pruning ratio than multi-head attention layers. Here, we briefly discuss the related pruning work in NLP. For unstructured pruning, (Yu et al., 2019;Chen et al., 2020;Prasanna et al., 2020;Shen et al., 2021) explore the lotteryticket hypothesis (Frankle & Carbin, 2018) for transformerbased models; (Zhao et al., 2020) shows that pruning is an alternative effective way to fine-tune pre-trained language models on downstream tasks; and (Sanh et al., 2020) proposes the so-called movement pruning, which considers the changes in weights during fine-tuning for a better pruning strategy, and which achieves significant accuracy improvements in high sparsity regimes. However, as an extension of (Narang et al., 2017), (Sanh et al., 2020) requires nontrivial hyperparameter tuning to achieve better performance as well as desired pruning ratio. For structured pruning, (Fan et al., 2019;Sajjad et al., 2020) uses LayerDrop to train the model and observes that small/efficient models can be extracted from the pre-trained model; uses a low-rank factorization of the weight matrix and adaptively removes rank-1 components during training; and (Michel et al., 2019) tests head drop for multi-head attention and concludes that a large percentage of attention heads can be removed during inference without significantly affecting the performance. More recently, (Lagunas et al., 2021) extends (Sanh et al., 2020) from unstructured pruning to block-wise structured pruning. As a continuing work of (Sanh et al., 2020;Narang et al., 2017), hyperparameter tuning is also critical for (Lagunas et al., 2021). 
Although fruitful pruning algorithms are proposed, most methods generally only work for specific pruning scenarios, e.g., unstructured or structured pruning. Also, a lot of algorithms either (i) need a hand-tuned threshold (aka pruning ratio) to achieve good performances; or (ii) require careful regularization magnitude/schedule to control the final pruning ratio and retain the model quality. Our LEAP is a general pruning algorithm that achieves on-par or even better performance under similar pruning ratio across various pruning scenarios as compared to previous methods, and LEAP achieves this with very minimal hyperparameter tuning by introducing a new regularization term and a self-adaptive regularization magnitude. Methodology Regardless of pruning granularity, in order to prune a neural network (NN) there are two approaches: (i) one-time pruning (Yu et al., 2021;Michel et al., 2019) and (ii) multistage pruning (Han et al., 2016;Lagunas et al., 2021). The main difference between the two is that one-time pruning directly prunes the NN to a target ratio within one pruning cycle. However, one-time pruning oftentimes requires a pre-trained model on downstream tasks and leads to worse performance as compared to multi-stage pruning. For multistage pruning, two main categories are used: (i) one needs multiple rounds for pruning and finetuing (Han et al., 2016); and (ii) another gradually increases pruning ratio within one run (Sanh et al., 2020;Lagunas et al., 2021). Here, we focus on the latter case, where the pruning ratio gradually increases until it reaches the preset target. Background and Problems of Existing Pruning Methods Assume the NN consists of n weight matrices, W = {W 1 , . . . , W n }. To compress W, gradual pruning consists of the following two stages: • (S1) For each W i , we initialize a corresponding all-one mask M i and denote M = {M 1 , . . . , M n } as the whole set of masks. • (S2) We train the network with the objective function, where M W means W i M i for all i = 1, . . . n, and L pure is the standard training objective function of the associated task, e.g., the finite sum problem with crossentropy loss. As the training proceeds, the mask M i is gradually updated with more zero, i.e., the cardinality, |M i | = s i t at iteration t, becomes smaller. Here s i t in (S2) could be a simple linear decaying function or more generally a polynomial function based on the user's requirement. Such method is called hard/soft-threshold pruning. In (Zhu & Gupta, 2017;Sanh et al., 2020;Lagunas et al., 2021), s i t is set to be the same across all the weight matrices, i.e., s i t := s t and they use a cubic sparsity scheduling for the target sparsity s f given a total iterations of t f : Although threshold methods achieve reasonably advanced pruning ratios along with high model qualities, they also exhibit various issues. Here we dive deep into those problems. Common issues Both hard-and soft-threshold pruning introduce three hyperparameters: the initial sparsity value s 0 , the warmup step t 0 , and the cool-down steps t c . As a common practical issue, more hyperparameters need more tuning efforts, and the question, how to choose the hyperparameters v 0 , t 0 and t f , is by no means resolved. Issues of hard-threshold pruning It is natural for weight matrices to have different tolerances/sensitivities to pruning, which means that a high pruning ratio needs to be applied for insensitive layers, and vice versa. 
However, for hardthreshold pruning, which sorts the weight in one layer by absolute values and masks the smaller portion (i.e., s i t ) to zero, it uses the same pruning ratio across all layers. That oftentimes leads to a sub-optimal solution for hard-threshold pruning. As such instead of using a single s t schedule, a more suitable way is to use different s i t , i = 1, . . . n for each weight matrix W i . However, this leads the number of tuning hyperparameters to be a linear function as the number of weight matrices, i.e., 3n. For instance, there are 3 × 6 × 12 = 216 hyperparameters for the popular NLP model-BERT base , a 12-layer encoder-only Transformer of which each layer consists of 6 weight matrices (Devlin et al., 2019). Extensively searching for these many hyperparameters over a large space is impractical. Except for the single threshold issue, the hard-threshold method is hard to extend to different pruning scenarios, e.g., block-pruning, head pruning for attention heads, and filter pruning for fully connected layers. The reason is that the importance of those structured patterns cannot be simply determined by their sum of absolute values or other norms such as the Euclidean norm. Issues of soft-threshold pruning One way to resolve the above issues is through soft-threshold methods. Instead of using the magnitude (aka absolute value) of the weight matrix to generate the mask, soft-threshold methods introduce a regularization (penalty) function L reg (S) to control the sparsity of the weight parameters (for instance, L p -norm, L reg = · p , with p = 0 or p = 1). Here, S : and each S i refers to the associated importance score matrix of W i , which is learnable during training. Particularly, (i) this S i can be adopted to different pruning granularity, e.g., structured and unstructured pruning, and (ii) the final pruning ratio of each weight matrix can be varied thanks to the learnable nature of S i . For soft-threshold pruning, the mask, M i , is generated by the learnable importance score S i and s t 1 using the comparison function, is any function that maps real values to [0, 1]. As f (S i ) will prefer larger values as smaller loss will be introduced to training procedure, a regularization term is added to the training objective, The coefficient λ reg is used to adjust the magnitude of the penalty (the larger λ reg , the sparser the W). Although softthreshold pruning methods achieve better performance as compared to hard-threshold pruning methods, it introduces another hyperparameter λ reg . More importantly, as the final sparsity is controlled indirectly by the regularization term, it requires sizable laborious experiments to achieve the desired compression ratio. Short summary For both hard-and soft-threshold pruning, users have to design sparsity scheduling which raises hyperparameters search issues. Hard-threshold pruning can hardly extend to different pruning granularity, which likely leads to sub-optimal solutions by setting the same pruning ratio for all layers. While soft threshold methods could be a possible solution to resolve part of the problems, it introduces another extra hyperparameter, λ reg , and there are critical concerns on how to obtain the target sparse ratio. We address the above challenges in the coming section by designing learnable thresholds with (i) a simple yet effective regularization function that can help the users to achieve their target sparse ratio, and (ii) an adaptive regularization magnitude, λ reg to alleviate the hyperparameter tuning. 
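Before introducing LEAP, the baseline it improves on can be made concrete. The sketch below shows hard-threshold (magnitude) pruning driven by a cubic sparsity schedule in the style of Eq. 1 from Zhu & Gupta (2017); the exact parameterization of the schedule, the step counts, and the tensor names are illustrative assumptions rather than the paper's verbatim setup.

```python
import torch

def cubic_sparsity(t, s0, sf, t0, tf):
    """Cubic sparsity schedule: the pruned fraction ramps from s0 at step t0
    to sf at step tf and then stays at sf (one common parameterization)."""
    if t < t0:
        return s0
    if t >= tf:
        return sf
    frac = (t - t0) / float(tf - t0)
    return sf + (s0 - sf) * (1.0 - frac) ** 3

def hard_threshold_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep the largest-|w| entries of a single matrix. Note that the same
    ratio is applied to every weight matrix, which is exactly the limitation
    discussed above."""
    k = int(weight.numel() * (1.0 - sparsity))  # number of weights to keep
    if k <= 0:
        return torch.zeros_like(weight)
    thresh = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= thresh).to(weight.dtype)

# At training step t, every weight matrix W_i receives the same schedule value:
#   s_t   = cubic_sparsity(t, s0=0.0, sf=0.9, t0=1000, tf=20000)   # illustrative numbers
#   mask  = hard_threshold_mask(W_i, s_t)
# and the forward pass uses W_i * mask.
```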
LEAP with A New Regularization In order to address the previously mentioned challenges in Section 3.1, we propose our LEArnable Pruning (LEAP) with a new regularization. We denote the learnable threshold vector σ = [σ i , . . . , σ n ] and each σ i associates with the tuple (W i , M i , S i ). With a general importance score S and learnable threshold vector σ, LEAP can be smoothly incorporated to Top-k pruning method (Zhu & Gupta, 2017;Sanh et al., 2020). 3 Recall the Top-K pruning uses the score matrix set S to compute M, i.e., M i = Top-K(S i ) with K ∈ [0, 100] in a unit of percentage. By sorting the elements of the matrix S i , Top-K set the mask M i for the top K% to be 1, and the bottom (100 − K)% to 0. Mathematically, it expresses as where sort(S i , K%) contains the Top K% of the sorted matrix S i . Here K is determined by the users, and thus follows various kinds of schedules such as the cubic sparsity scheduling, Eq. 1. As described in Section 3.1, such a schedule usually requires extensive engineering tuning in order to achieve state-of-the-art performance. Moreover, in (Zhu & Gupta, 2017), the Top-K(·) threshold is fixed for all weight matrices. However, different weight matrices have different tolerances/sensitivities to pruning, meaning that a low pruning ratio needs to be applied for sensitive layers, and vice versa. In order to resolve those issues, we propose an algorithm to automatically adjust their thresholds 3 Our methods can thus be easily applied to magnitude-based pruning methods by setting S to be identical to W (Han et al., 2015). for all weight matrices. More specifically, we define K as where the Sigmoid function is used to map σ to be in the range of (0, 1). T is a temperature value which critically controls the speed of k transitioning from 1 to 0 as σ decreases. We remark that Sigmoid could be replaced with any continuous function that maps any positive or negative values to [0, 1]. Investigating for various such functions could be an interesting future direction. is uniquely determined by σ i . However, directly applying this for our objective function will tend to make k(σ i ) always close to 1, since the model prefers no pruning to achieve lower training loss. Therefore, we introduce a novel regularization term to compensate for this. Denote R(σ) the remaining ratio of weight parameter, which is a function of σ (more details of how to calculate R(σ) are given later). Suppose that our target pruning ratio is R target . We propose the following simple yet effective regularization loss, Equipped with Eq. 3, 4, and 5, we then rewrite the training objective as where the masks M σ is written in an abstract manner, meaning that each mask M i is determined by Top-K (defined in Eq. 3). As the Top-k operator is not a smooth operator, we use the so-called Straight-through Estimator (Bengio et al., 2013) to compute the gradient with respect to both σ and S. That is to say, the gradient through Top-K operator is artificially set to be 1. With such a regularization defined in Eq. 6, there exits "competition" between σ i in L pure and σ i in L reg . Particularly, σ i in L pure tends to make k(σ i ) close to 1 as the dense model generally gives better accuracy performance, while σ i in L reg makes k(σ i ) close to the target ratio R target . Notably, our regularization method is fundamentally different from those soft-threshold methods by using L 0 or L 1 regularization. 
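The following PyTorch-style sketch illustrates the mechanics just described: a learnable threshold σ_i mapped through a sigmoid with temperature T to a keep-ratio k(σ_i) (a fraction here, a percentage K in the text), a Top-K mask over the importance scores with a straight-through gradient, and a penalty that pulls the overall remaining ratio R(σ) toward R_target. Because the exact algebraic forms of Eq. 5 and the adaptive coefficient of Eq. 7 are not reproduced above, the |R − R_target| penalty and the quadratic interpolation between λ_max and λ_min used here are plausible reconstructions consistent with the description, not the paper's verbatim formulas; in this sketch σ is trained mainly through the regularization term, whereas the text also routes a straight-through gradient to σ.

```python
import torch

class LearnableTopK(torch.nn.Module):
    """One (W_i, S_i, sigma_i) triple: S_i are learnable importance scores and
    sigma_i is the learnable threshold controlling the per-matrix keep-ratio."""

    def __init__(self, weight: torch.Tensor, T: float = 32.0):
        super().__init__()
        self.T = T
        self.scores = torch.nn.Parameter(torch.zeros_like(weight))
        self.sigma = torch.nn.Parameter(torch.tensor(5.0 * T))  # init so k(sigma) ~ 1

    def keep_ratio(self):
        return torch.sigmoid(self.sigma / self.T)  # k(sigma) in (0, 1)

    def mask(self):
        k = self.keep_ratio()
        n_keep = int(torch.clamp((k * self.scores.numel()).round().long(), min=1))
        thresh = torch.topk(self.scores.flatten(), n_keep).values[-1]
        hard = (self.scores >= thresh).float()
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # identity gradient back to the scores.
        return hard.detach() + self.scores - self.scores.detach()

def leap_regularization(modules, R_target, lam_max, lam_min):
    """Penalty pulling the global remaining ratio R(sigma) toward R_target,
    with a coefficient that shrinks as R(sigma) approaches the target."""
    total = sum(m.scores.numel() for m in modules)
    R = sum(m.keep_ratio() * m.scores.numel() for m in modules) / total
    gap = (R - R_target) / (1.0 - R_target)
    lam = lam_min + (lam_max - lam_min) * gap.clamp(min=0.0) ** 2  # assumed quadratic decay
    return lam.detach() * torch.abs(R - R_target)                  # assumed |R - R_target| form

# Illustrative usage inside the training loop:
#   loss = task_loss + leap_regularization(leap_modules, R_target=0.10,
#                                          lam_max=320.0, lam_min=10.0)
```

With this sketch in mind, the contrast with L0/L1-penalty (soft-threshold) approaches is discussed next.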
While they apply a penalty to the score matrices with indirect control on final sparsity, our method focus on learnable sparsity thresholds σ i . Thus, we could easily achieve our target compression ratios. On the other hand, one may add L 0 or L 1 regularization to Eq. 6 as the two are complementary. Critical term R(σ) We now delve into the calculation of R(σ). For simplicity, we consider that all three matrices M i , W i , and S i follow the same dimensions d i in ×d i out . Then where the total number of weight parameters , and the number of remaining parame- Adaptive regularization coefficient λ reg Generally, for regularization-based (e.g., L 1 or L 0 regularization) pruning methods, λ reg needs to be carefully tuned (Sanh et al., 2020). To resolve this tuning issue, we propose an adaptive formula to choose the value of λ reg , where λ max and λ min are pre-chosen hyperparameters. 4 We have found that our results are not sensitive to the choice of these hyper-parameters. The idea is that when R(σ) is far away from the R target , the new coefficient λ reg in Eq. 7 is close to λ max (when R(σ) = 1, it is indeed λ max ) so that we can have a strong regularization effect; and when R is close to R target , the penalty can be less heavy in Eq. 6. Detailed comparison between constant and our proposed adaptive regularization are referred to Section 5. In order to do a fair comparison with Soft MvP (Lagunas et al., 2021), for all tasks, we perform logit distillation to boost the performance (Hinton et al., 2014). That is, Here, L ds is the KL-divergence between the predictions of the student and the teacher, L ce is the original cross entropy loss function between the student and the true label, and α is the hyperparameter that balances the cross-entropy loss and the distillation loss. We let α = 0.9 by default for fair comparison (One might be able to improve the results further with more careful hyperparameter tuning and more sophisticated distillation methods). In our experiments, structured pruning refers to applying block-wise pruning to both sets, i.e., W att and W fc . In addition, we make the square block size the same in both MHA and FC sub-layers and we choose d = 32. Unstructured pruning is using d = 1 for MHA and FC. Hybrid pruning means using structured pruning for MHA (setting the block size to be d = 32) and using unstructured one for FC (d = 1). As such, there are three different sets of experiments and we summarize them in Table 1. We test our methods with the scores S described in movement pruning (Sanh et al., 2020;Lagunas et al., 2021) over three datasets across unstructured, hybrid, and structured pruning setups. Moreover, we follow strictly (Lagunas et al., 2021) (referred to as Soft Pruning in later text) on the learning rate including warm-up and decaying schedules as well as the total training epochs to make sure the comparison is fair. Let LEAP-l and soft MvP-1 denote a double-epoch training setup compared to LEAP and Soft Pruning (Sanh et al., 2020). For more training details, see Appendix A. Results In this section, we present our results for unstructured pruning and structured block-wise pruning, and compare with (Sanh et al., 2020;Lagunas et al., 2021) and not include other methods for the two reasons: (1) our training setup Table 2. Different density ratios for unstructured pruning. Here Soft MvP is referred to (Lagunas et al., 2021). Here LEAP uses exactly training strategies as (Lagunas et al., 2021) and LEAP-l doubles the training epochs. 
For QQP, we report accuracy and F1 socre; for MNLI, we report the accuracy of match and mis-match sets; for SQuAD, we report exact match and F1 score. are close to them and they are the current stat-of-the-art methods for BERT models. Unstructured pruning Unstructured pruning is one of the most effective ways to reduce the memory footprint of NNs with minimal impact on model quality. Here, we show the performance of LEAP under different density ratios of BERT base on QQP/MNLI/SQuAD. As can be seen, compared to Soft MvP, LEAP achieves better performances on 4 out of 6 direct comparisons (as Soft MvP only provides two pruning ratios per task). Particularly, for QQP, LEAP is able to reduce the density ratio to 1∼2% while achieving similar performance as Soft MvP with 3∼4% density ratio; for MNLI, although LEAP is slightly worse than Soft MvP, the performance gap is within 0.6 for all cases. Also, recall that in order to achieve different level of pruning ratios as well as good model quality, Soft MvP needs to be carefully tuned s t in Eq. 1 and λ reg in Eq. 2. However, for LEAP, the tuning is much friendly and stable (see Section 6). We also list the results of LEAP-l, which utilizes more training epochs to boost the performance, in Table 2. Note that for all 9 scenarios, the performance of LEAP-l is much better than LEAP, particularly for extreme compression. For instance, for MNLI (SQuAD) with 2∼3% (3∼4%) density ratio, longer training brings 1.3/0.9 (2.8/2.0) extra perfor- Table 3. Hybrid and structured pruning comparison between LEAP and Soft MvP (Lagunas et al., 2021). For QQP, we report accuracy and F1 socre; for MNLI, we report the accuracy of match and mis-match sets; for SQuAD, we report exact match and F1 score. LEAP-1 (MvP-1) means the training iterations is twice larger than LEAP (MvP). Methods Density 1 One hypothesis to explain why longer training can significantly boost the performance of LEAP is that LEAP introduces both more learnable hyperparameters and the adaptive regularization magnitude. As such, those extra parameters need more iterations to reach the "optimal" values (which is also illustrated in Section 6). Hybrid and structure pruning We start with hybrid pruning and compare LEAP-1 with Soft MvP-1. The results are shown in Table 3. Again, as can be seen, for different tasks with various pruning ratio, the overall performance of LEAP-1 is similar to Soft MvP-1, which demonstrates the easy adoption feature of LEAP. We also present structured pruning results in Table 3. The first noticeable finding as expected here is that the accuracy drop of structured pruning is much higher than hybrid mode, especially for a low density ratio. Compared to Soft MvP-1, LEAP-1 achieves slightly better performance on SQuAD. For more results, see Appendix B. We reiterate that Table 2 and Table 3 are not about beating the state-of-the-art results but emphasizing that LEAP requires much less hyper-parameter tuning but achieves similar performance as Soft MvP that involved a large set of the hyper-parameter sweep. For details about hyper-parameter tuning of Soft MvP-1 and LEAP, see Section B. Analysis As mentioned, LEAP is a learnable pruning method with a minimal requirement of hyperparameter tuning. In order to demonstrate this, we analyze LEAP by delving into the key components of LEAP: the initialization of our thresholds σ, the temperature T , and the regularization term λ reg . Figure 1. Effect of temperature T for unstructured pruning on QQP. The density ratio is set to be 1%. 
Temperature T As T defined in Eq. 4, it plays a critical role in determining the rate at which the threshold curve k(σ i ) falls. In addition, T also directly links to the initialization of σ i which is set to be 5T for all i such that Sigmoid(σ i /T ) ≈ 1. This allows the model to have sufficient time to identify the layers which are insensitive for aggressive pruning and vice versa. To understand how T influences the performances of the Bert model, we conduct an unstructured pruning on the QQP dataset by varying T ∈ {64, 48, 32, 16} and keeping all other hyperparameters to be the same. We plot the objective loss L obj (loss), the regularization loss L reg (regu_loss), the density ratio R(σ), and F1 accuracy, with respect to the iterations in Figure 1. Among the four curves in Figure 1, T = 48 gives the best Figure 2. Effect of adaptive regularization T for Hybrid pruning (H32) on MNLI with a target dense ratio of 30%. Note that in the plot of density ratio with respect to epochs (left bottom), the purple (blue) and orange (green) curves are overlapped. Also in the right top bottom, blue and green curves are overlapped. F1 accuracy while achieving ∼1% density, which clearly demonstrates the significance of T for LEAP. Meanwhile, we see that the gaps between the performance for all T except 64 are close, thus it shows that LEAP is not sensitive to T . A possible explanation why T = 64 gives the worse performance is that the density ratio of T = 64 decays relatively slower compared to rest curves. As such, when it is close to the desired pruning regime, the learning rate is relatively small and so it cannot be able to recover the accuracy. On the other hand, it is interesting to note that using the temperature T = 16 (orange curve), the density ratio increases after around five epochs and keeps increasing to the end 5 , which results in a much better performance even though it experiences the most accuracy drop in the beginning. This in some scenes illustrates the "competition" between σ i in L pure and σ i in L reg mentioned in Section 3.2: the accuracy increases at epoch 5 meaning that L pure is decreasing effectively and the L reg increases (compromises). Compared to those manual scheduling thresholds, this increasing phenomena of σ i also shows the advantage of learnable thresholds verifying that the model can figure out automatically when to prune and when not. Robustness of hyper-parameter tuning T and λ reg We see in the previous section that given the same λ reg , various values of the temperature T lead to similar results although Figure 3. The density ratio k(σi) to all the weight matrices for structured, hybrid and unstructured pruning on SQuAD, of which the total density ratios are respectively 20% , 16%, and 8%. tuning is necessary to achieve the best one. Here we study how robust the coefficient of λ reg in our proposed regularization L reg . We prune BERT base on the SQuAD task with a target ratio 10% with a combination of λ reg ∈ {50, 160, 320} and T ∈ {16, 32}, for which the results is in Table 6. For a given T , it indicates that the results are not highly sensitive to different λ reg s as there is only about 0.1 variation for accuracy. It is worth noticing that a smaller λ reg (here λ reg = 50) can indeed affect achieving our target sparse ratio. However, the most off pruning ratio is 11.46%, which is reasonably close to the desired target of 10%. For a given λ reg , larger T leads both the accuracy and the density ratio higher as expected. 
The reason is that the density ratio function, i.e., Sigmoid(σ_i/T), becomes flatter for larger T, which leads to a higher density ratio for the same value of σ (generally, σ is negative in order to achieve a < 50% density ratio), and a higher density ratio results in higher accuracy. Overall, we can see that LEAP is robust to both T and λ_reg.

The regularization coefficient λ_reg

To better understand the effect of the adaptive λ_reg (Eq. 7), we set λ_max ∈ {160, 320} and fix λ_min = 10 (same as Section 5) to prune BERT base on the MNLI task with a target ratio of 30%. In addition, we also compare this adaptive coefficient with its constant counterparts λ_reg ∈ {160, 320}. We plot λ_reg (lambda_reg), the regularization loss L_reg (regu_loss), the density ratio R(σ), and accuracy with respect to the iterations in Figure 2. First of all, we see that our adaptive coefficient λ_reg decreases in a quadratic manner, reaching λ_min = 10 after 4 epochs, which slows down the pruning activity after 4 epochs. Also, note that the curves for different λ_max actually overlap with each other, which also indicates that LEAP is not sensitive to λ_reg. Meanwhile, as λ_reg quickly reaches λ_min, the importance score S has more time to figure out the pruning parameters for the last small portion. As such, this slowness can in turn decrease the drop in accuracy and thus eventually recover much better accuracy than that of the constant regularization.

The effect of learnable pruning for different weight matrices

As mentioned, the sensitivities of different weight matrices are different. Therefore, a high pruning ratio should be set for insensitive layers, and a low pruning ratio needs to be used for sensitive layers. To demonstrate that LEAP can automatically achieve this, we plot the remaining parameters per layer for different pruning granularities on SQuAD in Figure 3. As can be seen, different layers receive different pruning ratios. In particular, (i) compared to MHA layers, FC layers are generally pruned more, which results in a lower density ratio. This might indicate that FC layers are less sensitive than MHA layers; (ii) there is a clear trend that shallow layers (close to the inputs) have higher density ratios than deep layers (close to the outputs). This finding is very intuitive. If the pruning ratio is too high for shallow layers, the information loss might be too high, and it is hard for the model to propagate the information to the output layer successfully. Therefore, the pruning ratio of shallow layers is smaller.

Conclusions

In this work, we present LEAP, a learnable pruning framework for transformer-based models. To alleviate the hyperparameter tuning effort, LEAP (i) introduces a novel regularization function to achieve the desired pruning ratio with learnable pruning ratios for different weight matrices, and (ii) designs an adaptive regularization magnitude coefficient to control the regularization loss adaptively. By combining these two techniques, LEAP achieves on-par or even better performance for various pruning scenarios as compared to previous methods. Also, we demonstrate that LEAP is less sensitive to the newly introduced hyperparameters and show the advantage of the proposed adaptive regularization coefficient. Finally, we also show that there is clear pruning sensitivity associated with the depth and the component of the network.
A. Training Details

For all three tasks, the temperature parameter T for k(σ_i) is chosen from {16, 32, 48, 64} and λ_max varies over {40, 80, 160, 320} with λ_min = 10. For the initialization of σ_i, we set it to 5T. We use a batch size of 32 and a sequence length of 128 for QQP/MNLI, with 11000/12000 warmup steps (about 1 epoch) for the learning rate. For SQuAD, we use a batch size of 16 and a sequence length of 384, with 5400 warmup steps (about 1 epoch). We use a learning rate of 3e-5 (1e-2) for the original weights (for the pruning-related parameters, i.e., S and σ). We set all the training to be deterministic with a random seed of 17. All the models are trained using FP32 with PyTorch on a single V100 GPU. Note that these configurations strictly follow the experimental setup in (Sanh et al., 2020; Lagunas et al., 2021); readers can check more details there. For the results in Table 1, the total number of training epochs for LEAP is 10, 6, and 10 for QQP, MNLI, and SQuAD, respectively. For the results of LEAP-l and the results in Table 2, we simply double the training epochs correspondingly (i.e., 20, 12, and 20).

B. Results Details

Smaller tasks. Note that using larger datasets (QQP/MNLI/SQuAD) to evaluate the performance of a pruning method is very common because the evaluation is more robust. However, to illustrate the generalization ability of LEAP, we also tested its performance on two smaller datasets, STS-B and MRPC, using block pruning with size 32x32. The results are shown in Table 1. As can be seen, with around a 20% density ratio, LEAP still achieves only marginal accuracy degradation compared to the baseline.

Hyper-parameter. We emphasize again that Soft MvP is a strong baseline, and our goal is not purely to beat Soft MvP from the accuracy perspective. However, their results require extensive hyperparameter tuning (see directory), while ours requires tuning only T. To show the generalization of the best hyperparameter, we include the results for various λ_max and T on multiple tasks in Table 3. Note that when T is fixed, different λ_max values give similar results over various tasks.
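To make the adaptive regularization magnitude concrete, the sketch below implements one plausible schedule consistent with the description above (λ_reg decreasing quadratically from λ_max to λ_min over the first few epochs and then staying constant). The exact form of Eq. 7 is not reproduced in the text, so the interpolation used here, and the default values, are assumptions rather than the authors' code.

```python
def adaptive_lambda(step, decay_steps, lam_max=160.0, lam_min=10.0):
    """Assumed quadratic decay of the regularization coefficient.

    Consistent with the observation that lambda_reg "decreases in a quadratic
    manner and reaches lambda_min after 4 epochs"; the true Eq. 7 may differ.
    """
    if step >= decay_steps:
        return lam_min
    frac = step / decay_steps                      # 0 at start, 1 at decay_steps
    return lam_min + (lam_max - lam_min) * (1.0 - frac) ** 2

# Example: 4 epochs of decay at 1000 steps per epoch.
for s in (0, 1000, 2000, 4000, 6000):
    print(s, round(adaptive_lambda(s, decay_steps=4000), 2))
```

With such a schedule, the regularization pressure is strongest early on, when most of the pruning decisions are made, and relaxes to λ_min once the target sparsity is approached.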
Development of Cistanche deserticola Fermented Juice and Its Repair Effect on Ethanol-Induced WRL68 Cell Damage : Cistanche deserticola is a valuable Chinese herb, but traditional dry processing causes the loss of active substances. This study developed Cistanche deserticola fermented juice (CFJ) using lactic acid bacteria and optimized the fermentation process to achieve the maximum active substance content and taste. More interestingly, superoxide dismutase (SOD) activity was increased during fermentation, and CFJ exerted a reparative effect on ethanol-induced cell damage. SOD activity reached 603.26 U/mL when the ratios in the total inoculum volume of Lactobacillus reuteri , Lactococcus pentosus , Streptococcus thermophilus , Bifidobacterium animalis , Lactobacillus casei , and Lactobacillus acidophilus were 31.74%, 15.71%, 17.45%, 11.65%, 9.56%, and 13.89%, respectively. Further, the optimal fermentation conditions for CFJ were determined using a response surface methodology. More importantly, CFJ promoted the proliferation of WRL68 cells, and CFJ exerted an obvious reparative effect on ethanol-treated cells, in which the cell survival rate increased to 120.35 ± 0.77% ( p < 0.05). The underlying mechanism might have been that CFJ reduced the MDA content in damaged cells from 1.36 nmol/mg prot to 0.88 nmol/mg prot and increased GSH-Px and SOD activities by 48% and 72%, respectively. This study provides a theoretical basis and reference data for the fermentation of C. deserticola and its hepatoprotective activity. Introduction Cistanche deserticola (C. deserticola) is known as "desert ginseng" [1] and is a traditional Chinese medicine peculiar to the desert area of northwest China. C. deserticola has excellent medicinal functions and nourishing effects, and pharmacological studies have revealed that Cistanche has antioxidant [2], antifatigue [3], and hepatoprotective [4] functions, enhancing sexual function and modulating the gut microbiome [5]. However, the traditional methods of C. deserticola processing are mainly drying, salting, or soaking in alcohol [6], which take a long time and are affected by other factors, leading to the unstable quality of the herbal medicine and incomplete utilization of its active ingredients [7]. Therefore, a new product must be developed to improve the value of the product. Many studies have discovered the value of fermenting food as a cheap preservation method that improves nutritional quality and enhances sensory characteristics [8]. Fermented plant juice is generated by various microorganisms, such as yeast, lactic acid bacteria (LAB), and acetic acid bacteria, to prepare a juice or other physical form rich in a variety of nutrient bioactive substances, such as enzymes, polyphenols, and mineral organic acids [9]. Studies have shown that fermented juices are richer in flavor [10] and more balanced nutritionally. Yang et al. [11] fermented a beverage containing apples, pears and carrots using two strains of Lactobacillus plantarum as raw material, and Srijita et al. [12] added bacteria to sea buckthorn juice for fermentation. Meanwhile, compared with single strain fermentation, mixed strain fermentation has a more complex metabolic mechanism and generates more abundant fermentation products. Due to the different characteristics and adaptabilities of individual strains, dominant bacteria must be selected for mixed bacterial fermentation. In this paper, C. deserticola was used as a raw fermentation material to develop C. 
deserticola fermented juice (CFJ) with several lactic acid bacteria and yeast. First, six strains with a high acid production capacity, SOD production capacity, and high sensory evaluation were selected by conducting a single-factor test, and the proportion of inoculum was determined by performing a uniform design test. Then, the fermentation process was optimized using SOD as an indicator based on the preliminary experiment. Furthermore, different concentrations of CFJ were used to repair cells treated with alcohol and the cell survival rate was calculated. Then, superoxide dismutase (SOD) activity, glutathione peroxidase (GSH-Px) activity, and malondialdehyde (MDA) contents were detected to study the mechanism by which Cistanche fermented juice repairs alcoholic liver injury. Preparation of C. deserticola Juice and Inoculum C. deserticola was washed with distilled water to remove dust and surface impurities. Afterward, C. deserticola was mixed with distilled water at a ratio of 1:9 (w/v) and boiled for 30 min. Then, the juice was cooled to 55 • C and ground into a homogenate with a beating machine. Next, 0.4% (w/v) pectinase, 0.2% (w/v) cellulase, and 0.2% (w/v) hemicellulase were added to induce enzymatic hydrolysis at 55 • C for 3 h. After enzymolysis, the juice was pasteurized in a water bath at 70 • C for 20 min and cooled to room temperature. A certain volume of the strain suspension was added to the homogenate. Finally, the juice was fermented at 37 • C for 24 h. Selection of Fermentation Strains and Determination of the Proportions of Strains Fifteen strains were used to ferment Cistanche homogenates individually. The monoculture inoculum was maintained at a bacterial concentration of 2 × 10 6 CFU/mL, and the total soluble solid (TSS) content was adjusted to 10 • Brix. Then, fermentation was performed in a 37 • C incubator for 20 h. After fermentation, the pH, superoxide dismutase (SOD) activity, Cistanche phenylethanoid glycoside (CPhGs) content, and sensory score were determined, and the dominant strains were selected considering the aspects of SOD and acid production capacity, high CPhGs content, and high sensory score. The optimization test used a uniform design table U16: sixteen levels of six dominant strains in the total inoculum volume were set as factors, the uniform design table is shown in Table 1, the SOD value was used as the response value, and SPSS 26.0 software was used to conduct a quadratic polynomial stepwise regression analysis to obtain the regression equation, with the maximum SOD activity as the target, then solve it using the programming solver functions in Excel. Finally, the percentages of the six strains in the inoculum were optimized. Based on the single-factor test, a Box-Behnken test was designed with the soluble solid content (A), fermentation time (B), and inoculum amount (C) as independent variables and SOD activity (Y) as the response value. The table of factors and levels of the Box-Behnken test design is shown in Table 2. Measurement of Physicochemical Indicators The pH value was measured with a pH meter. Superoxide dismutase (SOD) activity was determined according to the instructions provided with the Nanjing Jiancheng kit. The total phenylethanoid glycoside (CPhGs) content was measured using the method reported by Zhang [13], with slight modifications, and the total phenylethanoid content was calculated from the standard curve equation. The sensory evaluation was performed using the methods reported by Wei et al. 
[14], combined with liquid fermented juice sensory characteristics. Ten people who had received sensory evaluation training rated the appearance, color, smell, taste, and texture of CFJ. Assay of CFJ Cytotoxicity toward WRL68 Cells WRL68 cells were inoculated in 96-well plates (1 × 10 5 cells/mL, 100 µL per well) and incubated for 24 h. Afterward, 100 µL of PBS buffer was added to each well for the blank group, 100 µL of the medium was added to each well for the control group, medium containing different concentrations (10-250 µL/mL) of CFJ was added to the experimental group, and the culture was continued for 12 h and 24 h. Six parallel experiments were repeated. Then, cell viability was determined as described by Guo et al. [15] with slight modifications. The absorbance (A) was measured at 490 nm using a microplate reader, and cell viability was calculated using the following formula: where A sample is the absorbance of the experimental group, A control is the absorbance of the control group without the sample, and A blank is the absorbance of the culture medium without the sample and seeded cells. Effect of CFJ on Repairing Ethanol-Induced WRL68 Cell Damage WRL68 cells in the logarithmic growth phase were inoculated in 96-well plates (1 × 10 5 cells/mL, 100 µL per well) and incubated for 24 h. The experiment was divided into a blank group, a control group, a damaged group, and an experimental group. After the cells were plated, 100 µL of PBS buffer was added to each well of the blank group, 100 µL of the medium was added to each well of the control group, and 100 µL of medium containing 400 mmol/L ethanol was added to each well of the damaged group and the experimental group. After 24 h of culture, fresh medium was added to the cells in the control group and the damaged group. Medium containing different concentrations of CFJ was added to the experimental group. Six parallel experiments were repeated. After 24 h of incubation, the cell survival rate was determined using the method described in Section 2.6. Detection of Biochemical Indices Cells in the logarithmic growth stage were inoculated in 6-well plates (1 × 10 6 cells/mL, 2.5 mL per well) and incubated for 24 h. Then, the cells were treated using the method described in Section 2.7. Three parallel experiments were repeated. After the culture, the cells were collected by incubating them with 0.05% trypsin-EDTA and prepared as homogenates in cold PBS with an ultrasonic cell crusher. The supernatants of cell lysates were collected to determine the intracellular MDA, SOD, and GSH-Px activities. Statistical Analysis The data are presented as the means ± standard deviations or average values and were analyzed using SPSS 24 software. The statistical significance of differences between groups was analyzed using ANOVA (* p < 0.05, ** p < 0.01). All figures were drawn using Prism 9 software. Selection of Fermentation Strains and Determination of the Proportions of Strains The products obtained after mixed fermentation contained more metabolites due to the mutualistic symbiotic relationship between microbes. Studies have shown that the flavor [16] and nutritional [17] and storage qualities [18] of products generated by compound strains are better than those generated by single strains. Therefore, this study selected the dominant strains from 15 strains for subsequent mixed fermentation. The results are shown in Table 3. After fermentation, SOD activity increased in all groups, and pH and CPhGs content decreased. 
The production of organic acids during fermentation reduced the pH value. CFJ fermented by P. pentosaceus and L. reuteri displayed the greatest decreases in pH value of 3.78 ± 0.01 and 3.79 ± 0.03, respectively, and the SOD activity reached 495.48 ± 4.79 U/mL and 503.63 ± 2.48 U/mL, respectively. The explanations for these differences may include the growth rate of the strain, differences in the optimal growth environment for different strains and differences in the utilization of carbon sources, which affects the ability of bacteria to produce organic acids. Meanwhile, SOD is produced during the fermentation of fruit juices by lactic acid bacteria [19]. Many studies have been conducted on breeding LAB with high SOD production, which may be related to the properties of lactic acid bacteria and their ability to adapt. In addition to pH, the contents of active substances were measured as an index to evaluate CFJ, and the CPhGs content of CFJ fermented by L. casei and S. thermophilus reached 3.47 ± 0.03 mg/mL and 3.45 ± 0.02 mg/mL, respectively. FeiZhou et al. [20] showed that phenylethanol glycoside components are more susceptible to degradation in high pH environments. However, from the data in Table 3, the CPhGs content did not correlate negatively with pH value, which may be attributed to the production of phenylalanine during fermentation, while CPhGs are synthesized via the metabolic regulatory pathway of phenylethanoic acid [21]. The interaction between these two processes may be responsible for the difference in CPhGs content and they decreased to different degrees after fermentation. Meanwhile, sensory evaluation is also a key factor affecting consumers' decisions to purchase products. The sensory evaluation scores of the CFJ generated by B. animalis and L. acidophilus fermentation were 94.00 ± 0.77 and 92.20 ± 0.98, respectively. The potential explanations for these differences are as follows: juices fermented by strains have a characteristic sour taste, while a different organic acid composition renders the taste of each juice unique, and the volatile compound compositions of CFJ fermented by different strains are also different. Therefore, from the perspective of better SOD and acid production abilities, high CPhGs contents and high sensory quality, P. pentosaceus, L. reuteri, L. casei, S. thermophilus, B. animalis, and L. acidophilus were selected as the dominant strains. Mixed fermentation may compensate for defects in the other strains for cooperative fermentation. Li et al. [22] found that mixed fermentation may increase the activity of SOD and that the SOD activities produced by different proportions of mixed bacteria are different; therefore, the SOD activity of CFJ might be improved by determining the ratio of strains in the mixed fermentation. In this study, a uniform design test was conducted to determine the ratios between different strains. The uniform design results are shown in Table 4. The SOD activity of CFJ was significantly increased after fermentation with mixed bacteria; therefore, the quadratic polynomial stepwise regression equation was established with SOD activity as the response value, as follows: p = 0.0097 < 0.01 in the regression equation, proving that the equation predicted the optimal conditions more accurately. In summary, the optimal ratios in the total inoculum volume for CFJ were predicted to be 31.74% for L. reuteri, 15.71% for L. pentosus, 17.45% for S. thermophilus, 11.65% for B. animalis, 9.56% for L. casei, and 13.89% for L. 
acidophilus, as the SOD activity reached 606.52 U/mL. Based on the predicted optimal conditions, the measured SOD activity of CFJ was 603.26 U/mL.

Response Surface Experimental Results and Analysis of Variance

Fermentation conditions must be optimized to obtain the best activity of the effective substance. As shown in Figure 1, no significant difference in pH or CPhGs content was observed under the different fermentation conditions. Thus, SOD activity was used as the main indicator for the single-factor experiment. As shown in Figure 1A, when the TSS content was less than 10 °Brix, fermentation was incomplete and SOD activity was not high; in contrast, when it was higher, the fermentation of lactic acid bacteria was not sufficient and the SOD activity decreased. A potential explanation for this finding is that sugars in the fermentation broth promote the growth and reproduction of lactic acid bacteria. When the soluble solid content was 10 °Brix, the maximum SOD activity was 602.71 U/mL, the CPhGs content was 3.34 mg/mL, and the pH was 3.76. Therefore, the optimum TSS content is 10 °Brix. As shown in Figure 1B, when the inoculation amount was less than 3 × 10^6 CFU/mL, the CFJ was easily contaminated by miscellaneous bacteria, forming an environment that was not conducive to the multiplication and growth of the inoculated bacteria. Subsequently, the bacteria could not make full use of the carbon source and other nutrients, and SOD activity did not reach the maximum value. If the amount of inoculum exceeds 5 × 10^6 CFU/mL, this may lead to excessive fermentation and change the pH of CFJ, thus affecting SOD activity. Therefore, the optimal range of the inoculum amount was 3 × 10^6 − 5 × 10^6 CFU/mL.
As shown in Figure 1C, when the fermentation time was too short, the growth and reproduction of lactic acid bacteria were not sufficient, the accumulation of metabolites was insufficient, and SOD activity was low. If the time was too long, the nutrients required for lactic acid bacteria growth and reproduction were insufficient, the number of dead bacteria increased, and metabolic waste accumulated in the fermentation liquid, resulting in decreased SOD activity. Therefore, after a comprehensive consideration of the results, the optimal fermentation time was 24 h. As shown in Figure 1D, a fermentation temperature that was too high or too low was not conducive to the growth of lactic acid bacteria, resulting in less acid production and decreased SOD activity. Therefore, the optimum fermentation temperature was 36 °C.

Based on the preliminary experiments, the fermentation temperature was set to 36 °C, and three factors (soluble solid content (A), fermentation time (B), and inoculum amount (C)) were identified as the factors responsible for SOD activity. The Box-Behnken test design and results are shown in Table 5. The values of the regression coefficients were calculated, and the response variable and the test variables were related by a second-order polynomial equation. The statistical significance of the regression model was checked based on the F-test and p-value, and the analysis of variance (ANOVA) for the response surface quadratic model is shown in Table 6. The regression model selected here was highly significant (p < 0.01), and the lack of fit was not significant (p > 0.05), indicating that the unknown factors interfered only slightly with the experimental results and that the model was appropriately chosen. Meanwhile, the model regression coefficient R² was 0.9256, and the adjusted coefficient of determination R²_adj was 0.9707, indicating that this equation fit well with the actual situation; thus, this model can be used to predict and analyze the process of CFJ fermentation. In this table, the linear coefficients (A, B, and C) and the quadratic term coefficients (A², B², and C²) were significant (p < 0.01) (Figure 2). The coefficients for the other terms were not significant (p > 0.05). After the analysis, the optimal fermentation conditions were determined to be 10.71 °Brix, 26.30 h, and a 4.42 × 10^6 CFU/mL inoculum. The maximum SOD activity predicted by the model was 630.43 U/mL. The model was validated with the following modified optimal conditions: the total soluble solid content was 10.7 °Brix, the fermentation time was 26 h, and the inoculum amount was 4.42 × 10^6 CFU/mL. The mean SOD activity of 629.31 ± 1.57 U/mL (n = 3) obtained in the real experiments validated the RSM model, which indicated that the model was adequate for the fermentation process. The optimized conditions were used in the subsequent experiments.
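As an illustration of how such a second-order response-surface model can be fitted and checked, the following Python sketch uses a coded three-factor Box-Behnken layout with made-up SOD responses; the actual data of Table 5 and the fitted coefficients of the paper are not reproduced here, so all numbers are purely hypothetical.

```python
import numpy as np

# Hypothetical coded Box-Behnken design (A, B, C in {-1, 0, +1}) with 12 edge
# runs and 3 center points, plus illustrative SOD responses (U/mL).
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([560, 575, 570, 590, 555, 580, 565, 600,
              558, 572, 568, 585, 628, 630, 629], dtype=float)

A, B, C = X.T
# Full second-order (quadratic) response-surface model with interactions.
design = np.column_stack([np.ones_like(A), A, B, C, A*B, A*C, B*C, A**2, B**2, C**2])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("fitted coefficients:", np.round(coef, 2), " R^2:", round(r2, 3))
```

In practice the fitted coefficients would then be examined with ANOVA, and the stationary point of the quadratic surface would give the predicted optimum, as was done for the conditions reported above.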
Cytotoxicity of CFJ and the Reparative Effect of CFJ on Alcohol-Induced WRL68 Cell Damage

The purpose of this paper was to study the reparative effect of CFJ on cell damage. Before the experiment, the cytotoxicity of CFJ was investigated to determine whether this preparation could be applied in the experiment. Concentrations of 10-250 µL/mL CFJ were selected to treat the cells for 12 h and 24 h, respectively. As shown in Figure 3, different concentrations of CFJ had no obvious toxic effect on the cells; in contrast, they all exerted a certain effect on cell proliferation. When the concentration of CFJ was 100 µL/mL, the survival rate of cells increased 1.6 times after 24 h. In the subsequent repair experiment, CFJ at 50-150 µL/mL was selected as the experimental concentration and 24 h was selected as the culture time.

The hepatoprotective activity of CFJ was studied in vitro by assessing the effect on WRL68 cell survival rates. The results of the cell-based experiment indicated that CFJ has potential hepatoprotective activity. The results are shown in Figure 4. The survival rate of the model group was 58.82%. When the CFJ concentration was 50-150 µL/mL, WRL68 cell survival rates were noticeably improved compared with the model group (p < 0.01). However, the cell survival rates did not show a concentration dependence. In contrast, as the concentration of CFJ reached 150 µL/mL, the WRL68 cell survival rate decreased from 120.35% to 108.86%. The potential explanation is that a low concentration of CFJ promotes the survival of WRL68 cells, but when the concentration is sufficiently high to influence the microenvironment of the cell, cell survival rates are reduced. In conclusion, when the concentration of CFJ was 50-150 µL/mL, significant differences were observed compared with the model group, and CFJ exerted a certain repair effect.
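For reference, the survival-rate computation behind these results can be sketched as below. The expression used is the standard MTT viability ratio implied by the variable definitions given earlier (A_sample, A_control, A_blank); the exact formula was lost in extraction, and the absorbance readings shown are hypothetical.

```python
import numpy as np

def cell_viability(a_sample, a_control, a_blank):
    """Viability (%) from 490 nm absorbances:
    (A_sample - A_blank) / (A_control - A_blank) * 100, as implied by the text."""
    return (np.asarray(a_sample) - a_blank) / (a_control - a_blank) * 100.0

# Hypothetical readings for six parallel wells of one CFJ concentration.
a_blank, a_control = 0.08, 0.95
a_sample = [1.10, 1.08, 1.12, 1.05, 1.11, 1.09]
viab = cell_viability(a_sample, a_control, a_blank)
print(f"mean viability: {viab.mean():.1f} %  (std {viab.std():.1f})")
```

A value above 100% corresponds to the proliferation-promoting effect observed for the CFJ-treated wells, while values below 100% would indicate cytotoxicity relative to the untreated control.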
Effects of CFJ on SOD and GSH-Px Activities and MDA Contents

The levels of liver superoxide dismutase (SOD), glutathione peroxidase (GSH-Px), and malondialdehyde (MDA) were measured to quantify oxidative liver injury. MDA is the main product of the intracellular lipid oxidation reaction, and the MDA content in cells reflects the degree of intracellular lipid peroxidation. GSH-Px is an intracellular peroxidase that may clear lipids that undergo oxidative reactions in cells [23]. SOD is the primary line of defense of the body against oxidative reactions in cells, which quickly eliminates oxygen free radicals and protects the body from damage [24]. As shown in Table 7, SOD and GSH-Px activities were significantly reduced, and the MDA content was significantly increased in the model control group compared to the control group. Compared to the model control group, cells treated with 50-150 µL/mL CFJ exhibited increased SOD and GSH-Px activities and reduced MDA contents. When the concentration of CFJ reached 100 µL/mL, SOD and GSH-Px activities increased from 8.37 ± 0.45 U/mg prot and 23.34 ± 0.38 U/mg prot in the model group to 14.37 ± 0.45 U/mg prot and 34.57 ± 0.61 U/mg prot, while MDA contents decreased from 1.36 ± 0.36 nmol/mg prot to 0.88 ± 0.04 nmol/mg prot. When cells are damaged, GSH-Px and SOD in the cells maintain redox homeostasis by clearing oxidized lipids and oxygen free radicals, thus achieving cell repair. After ethanol-induced damage, GSH-Px and SOD activities in cells were significantly decreased, while MDA contents were significantly increased, indicating that ethanol altered redox homeostasis in cells, resulting in cell damage. After treatment with CFJ, GSH-Px and SOD activities in WRL68 cells were significantly restored, and MDA contents were significantly decreased, indicating that CFJ repaired the damage caused by ethanol. Based on these results, CFJ repaired ethanol-induced cell damage by increasing SOD and GSH-Px activities and decreasing MDA contents.

Discussion and Conclusions

In this paper, Cistanche deserticola, a special raw material from Xinjiang, China, was used as the main raw material.
After pretreatment, 15 strains were used for separate fermentation, and six strains were selected as the dominant strains for subsequent mixed fermentation, resulting in high SOD activity, rich nutrients, and good antioxidant performance. Then, a uniform design experiment was used to determine the proportions of the six strains in the mixed fermentation: 31.74% for Lactobacillus reuteri, 15.71% for Lactococcus pentosus, 17.45% for Streptococcus thermophilus, 11.65% for Bifidobacterium animalis, 9.56% for Lactobacillus casei, and 13.89% for Lactobacillus acidophilus. A response surface methodology (RSM) not only depicts the relationship between the response and the independent variables but also considers the interaction effects of the variables [25]. In this paper, the optimal fermentation conditions for CFJ determined using response surface methodology were as follows: the TSS content was 10.71 • Brix, fermentation time was 26.30 h, and the inoculum amount was 4.42 × 10 6 CFU/mL. Many studies have assessed the hepatoprotective effect of Cistanche, but few studies have determined the hepatoprotective effect after processing Cistanche into products. In this paper, the MTT method was used to detect the effects of CFJ on the survival rates of ethanoldamaged WRL68 cells. CFJ promoted the proliferation of WRL68 cells, and CFJ exerted an obvious effect in repairing ethanol-treated cells, for which the cell survival rate increased to 120.35 ± 0.77% (p < 0.05). The potential underlying mechanism was that CFJ reduced the MDA content in damaged cells from 1.36 nmol/mg prot to 0.88 nmol/mg prot and increased GSH-Px and SOD activities by 48% and 72%, respectively. This study provides a theoretical basis and reference data for future clinical applications and developments in the fields of liver injury and prevention of liver cancer. In this paper, a preliminary study was conducted on the development of Cistanche fermented juice and its activity in repairing alcohol-injured hepatocytes in vitro. Due to the complexity of the fermentation process and the limitations of in vitro experiments, the following issues still require further study: (1) The potential microbial flora and the structure-activity relationship of metabolites in the fermentation process; (2) Research and development of a new compound, CFJ, to make it available as a prominently placed fermented juice product and so better serve the local economic situation; (3) Investigate the reparative effect of CFJ on alcoholic liver injury by performing animal experiments. Conflicts of Interest: The authors declare no conflict of interest.
Adaptive Interval Type-2 Fuzzy Logic Control of a Three Degree-of-Freedom Helicopter This paper combines interval type-2 fuzzy logic with adaptive control theory for the control of a three degree-of-freedom (DOF) helicopter. This strategy yields robustness to various kinds of uncertainties and guaranteed stability of the closed-loop control system. Thus, precise trajectory tracking is maintained under various operational conditions with the presence of various types of uncertainties. Unlike other controllers, the proposed controller approximates the helicopter’s inverse dynamic model and assumes no a priori knowledge of the helicopter’s dynamics or parameters. The proposed controller is applied to a 3-DOF helicopter model and compared against three other controllers, i.e., PID control, adaptive control, and adaptive sliding-mode control. Numerical results show its high performance and robustness under the presence of uncertainties. To better assess the performance of the control system, two quantitative tracking performance metrics are introduced, i.e., the integral of the tracking errors and the integral of the control signals. Comparative numerical results reveal the superiority of the proposed method by achieving the highest tracking accuracy with the lowest control effort. Introduction Helicopters are able to levitate and navigate in tight and hazardous locations. This requires a robust controller to deal with numerous uncertainties such as, changes in mass and inertia, along with other unpredictable factors like external disturbances. The motion of helicopters depends on three independent axis controls; pitch, yaw and roll, which are nonlinear in nature and strongly coupled together (Figure 1). These strong couplings make controlling helicopters a non-trivial task [1]. The 3-DOF helicopter's motion along with the pitch, roll, and yaw axis is achieved by controlling two rotors which makes it more fault-tolerant with respect to the classical helicopter that uses a single main rotor. However, similar to other kinds of Unmanned Aerial Vehicles (UAVs) [2][3][4][5], the 3-DOF helicopter has unstable open-loop dynamics. Its complex nonlinear dynamics, changing operating conditions, high nonlinearities, and unpredictable disturbances are amidst the distinctive issues to be faced. Among many approaches that are proposed for controlling helicopters (i.e., classical, adaptive and robust), two of the well-known controllers that are famous for their simplicity are, backstepping and input-output linearization. Though, under the presence of high uncertainties, and sensitivity to parameters variation, these methods do not guarantee stability and good performance. Therefore, an alternative approach that can deal with different degrees of nonlinearities is required, specifically when the number of design requirements is very high and there is no accurate mathematical model that can effectively describe the motion of the helicopter because of unpredictable factors [6][7][8][9][10][11]. There are different researches that have been done in the area of UAVs in general, including strategies and approaches for designing helicopters' controllers. Some of these researches are as follows: a trajectory control problem of hovercraft with drift angle constraint and external disturbance is addressed by combining finite time observer with adaptive sliding mode control [6]. The resultant controller prevented the angle drift in real-time. 
Then, altitude control of the quadrotor is developed using a simplified fuzzy controller [12]. A simulation model is used to demonstrate the effectiveness of the proposed controller along with several performance indices such as rise time, settling time, percentage overshoot, integral absolute error, central processing unit time, and energy consumption. In [13], a proportional-derivative (PD) and a proportional-integral-derivative (PID) controller is used to control a hovering small-scale helicopter. Moreover, a Lyapunov-based nonlinear controller is designed and developed in [14] for a quadrotor to attain robust tracking. Along the same idea, a fuzzy sliding mode controller based on sliding-mode control is combined with fuzzy logic to obliterate the chattering is designed for regulation and trajectory control of quadrotor in [15]. The efficacy of the controller is discussed and compared with the nonlinear controller based on the backstepping technique. In [7], a robust controller is designed based on a linear quadratic regulator (LQR) with a control feedback that approximates the altitude of a 3-DOF helicopter while coping with disturbances and uncertainties. The LQR controller requires angular position measurement only. A robust reference model-based adaptive controller is combined with LQR based on Kalman filtering is proposed in [16] to deal with unmodelled dynamics, uncertainties, and disturbances. In [17], an optimal state feedback controller based on model linearization along the desired trajectories is designed for a 3-DOF helicopter. The effectiveness of the controller is demonstrated by experimental results. Additionally, a robust nonlinear tracking is proposed for controlling a helicopter based on a second-order auxiliary system that estimates the uncertainties and filters the errors with compensation to eliminate disturbances also in proposed in [8], experimental results are presented to show the efficacy of the proposed controller. A robust controller is presented in [18] to track the trajectory of small-scaled unmanned helicopters in the presence of the external disturbances in transient and steady-state without explicit knowledge of the model's parameters. An adaptive feedback controller that adapts to parametric uncertainties, unmodeled dynamics, and known actuator characteristics is laid out. In [19], a controller and estimator for a helicopter that estimates and adapts to transmit a hybrid continuous-discrete observed data over a limited bandwidth of a communication channel developed in a Laboratory for analysis and architecture of the system (LAAS). Adaptive backstepping tracking with online updated parameters is also proposed in [9] and a backstepping control strategy is combined with artificial intelligence and machine learning to approximate the online uncertainties to improve robustness of the model [20]. An approximating method with radial basis function of neural networks (RBFNNs) that approximate unmodeled systems is detailed in [21]. In [22], an adaptive robust controller is also based on RBFNNs and a nonlinear observer copes with uncertainties and unknown disturbances. The time-invariant neural network tracking controller of the 3-DOF helicopter is introduced with input saturation [23] and a control compensator based on genetic algorithm and frequency-domain of inputs and outputs is proposed in [24]. Robust second-order consensus tracking controller that achieves tracking without calculating the velocity of the target is established in [25]. 
In [26], a linear time-invariant controller is designed to improve tracking and validation is done with experimentally. A model-based adaptive controller with Riccati equations is investigated in [27]. A mathematically ill-defined designed controller that is subjected to various disturbances and uncertainties can be approximated with computational intelligence tool, such as artificial neural networks and fuzzy logic systems, since these intelligent tools with high accuracy can uniformly approximate any real continuous function [28][29][30][31]. Hence, such an advancement in neural network, can lead to modeling many complex models [11,12,32,33]. The conventional adaptive control strategies perform well with structured (parametric) uncertainties, but fail to achieve robustness in the presence of unstructured (non-parametric) uncertainties like external disturbances. An attempt on coping with both types of uncertainties was carried out using adaptive control and a reference model [34]. In this work, disturbance rejection is achieved using the derivative of the error which is noisy and limits the performance in practical applications. This shortcoming can be addressed by using an advanced control method meant to cope with the uncertainties of higher magnitudes (like disturbance). In [35], the adaptive type-2 fuzzy logic control approach is used for a motor drive application. Similarly, this paper purposes a controller based on type-2 fuzzy logic and adaptive control theory to track the 3-DOF helicopter's motion in the presence of both structured and unstructured uncertainties. The type-2 fuzzy logic consequent part adaptation is performed by Lyapunov based adaptation law which guarantees the closed-loop system's stability. To the best authors' knowledge, this work is one of the first attempts, if any, to cope with both structured and unstructured uncertainties for the 3-DOF helicopter using adaptive type-2 fuzzy logic. The remaining parts of the paper are organized as follow: Section 1 introduces the dynamic model of the helicopter and present the problem statement, Section 2 outlines the adaptive control methodology, Section 3 deals with numeric results and discussion, and finally, Section 4 states the conclusion with comments and recommendations for future work. System Dynamics The helicopter is a highly unstable 3-DOF rotary motion system with nonlinear and coupled dynamics [1]. The operation of the 3-DOF helicopter is based on the rotation of the two rotors in which the propellers coupled to the motors generate a force called lift, as well as direction control. As mentioned, various linear/nonlinear control techniques have been studied for the control of a 3-DOF helicopter. In order to achieve a 3-dimensional movement, the helicopter performs three main angular movements as it is shown in Figure 1. The variables of the mathematical model are the roll angle ( ), the pitch angle (θ), the displacement angle (ψ), and the thrust forces of each of the motors, i.e., front motor force (F f ) and back motor force (F b ). There is a linear dependence between its three main angles: • The roll angle ( ): The helicopter turns relative to the perpendicular axis (elevation angle). The total force exerted on the system is the aggregation of the thrusts caused by the propellers powered by the two engines and the moments exerted by the counterweight and the weight of the main beam. 
• The pitch angle (θ): This is a movement which is due to the imbalance of forces between the pair of motors, or an inclination of one relative to the other, taking as a pivot the end of the arm which contains them. • The yaw angle (ψ): The forces acting on the axis of movement are due to the difference between the thrusts of the two motors. As it is illustrated in Figure 1, all axes intersect at the same point (origin of the global coordinate frame) and the yaw angle (ψ) and roll angle ( ) are perpendicular. The behavior of movements of the 3-DOF helicopter system can be described, using Euler-Lagrange formulation, by the following dynamic mathematical model [1]: It is important to note that this model has been used and validated experimentally in [1]. Table 1 shows the physical values of the mathematical model used to validate the proposed control strategy. Table 1. Physical values of the mathematical model of a 3-degree of freedom (DOF) helicopter [1]. Parameter Helicopter Unit Helicopter Remark 1. The studied 3-DOF helicopter in this paper is an underactuated system. In other words, the control of the three states (the pitch angle θ, the roll angle and the yaw angle ψ) is achieved with only two input forces. Adaptive Interval Type-2 Fuzzy Logic Control The fact that a single input u 1 can control simultaneously the system's motion along the roll and yaw axes, i.e., and ψ, makes the control of the 3-DOF helicopter challenging. To achieve decoupling, the autopilot control structure, depicted in Figure 2, is considered. For that, define the virtual inputs v 1 , and v 2 as [6], By replacing the virtual inputs v 1 , and v 2 in Equation (1), the dynamic Equation (1) become: Using (2), the input u 1 can be written as [1], In this study, the autopilot is based on the fact that the decoupling can be carried out providing that the virtual inputs v 1 and v 2 given in Equation (2) are satisfied. From this, the equation of control u 1 can be expressed as [1], where To realize the decoupling of system, it's necessary to force the angle θ, by the control input u 2 , to track the following desired trajectory [1], The objective is to design a control law v 1 , v 2 , and u 2 to drive the 3-DOF helicopter's states , ψ, and θ to their time-dependent pre-defined respective reference trajectories * , ψ * , and θ * under the presumption of unknown system's dynamics. All system's parameters, L a , L w , L h , M h , M w , g, J ψ , J θ , and J are listed in Table 1; but they are assumed to be unknown to the proposed controller. The control objective in this paper is to track the errors e = − * , e ψ = ψ − ψ * , and e θ = θ − θ * to zero. Thus, each adaptive type-2 fuzzy controller must drive its corresponding error e • to zero by adjusting its weights to approximate the 3-DOF helicopter's system inverse dynamics and thus, achieve a precise tracking. The symbol • can be ψ, θ or . The control scheme is depicted in Figure 2. Type-2 fuzzy sets are very useful in situations where it is difficult to determine an exact function due to uncertainties since they are particularly suitable for time-variant systems with unknown time-varying dynamics. Unlike the type-1 fuzzy system which is incapable of directly modeling higher types of uncertainties, type-2 fuzzy system is able to model and minimize the effects of such uncertainties. In fact, the footprint of uncertainty (FOU) provides the type-2 fuzzy system with additional degrees of freedom, making its membership functions three-dimensional. 
Hence, type-2 fuzzy sets can handle more types of uncertainties with higher magnitudes than their type-1 counterparts. For this reason, type-2 fuzzy sets are adopted in this paper. Similar to a type-1 fuzzy system, a typical type-2 consists of a fuzzifier which must transform the crisp input values into type-2 fuzzy sets. After this procedure, the inference rules are combined and the inputs of the type-2 fuzzy set are mapped into output sets, through the inference engine. The rule base of type-2 is the same as that of type-1. A fuzzy inference engine results in a type-2 fuzzy set which is the combination of several output sets and each type-2 fuzzy output set is the result of activating a rule. The inference engine is the central block of the fuzzy logic controller. A type reducer is necessary to transform the inference block output set into a type-1 fuzzy set, by producing left most and right most points, y lk and y rk , respectively. Using the centroid method, the center-of-sets type reduction reduces the resulting type-2 fuzzy sets to an interval type-1 fuzzy set [y i lk , y i rk ] for each rule i. The inferred interval type-1 fuzzy set is then defined by [y lk , y rk ], such as: where f i l , f i r are the firing strengths corresponding to y i lk and y i rk of rule i and n is the total number of rules. In the last stage, the defuzification process is carried out, where the fuzzy set is transformed into real information (crisp) by calculating the center, which is equivalent to finding the weighted average of the outputs of all type-1 fuzzy sets that make up the type-2 fuzzy set. The defuzzified output for each output k is formulated as [36]: Considering the error and its derivative (e • ,ė • ) as inputs, the fuzzy logic controller (FLC) applies a suitable control action to drive both errors to zero following a set of pre-defined rules: (i) the FLC applies a large control input when both errors are far from zero; (ii) when these errors start decreasing in their way to approaching zero, the control input is reduced gradually for a smoother approach; (iii) once errors reach zero, then the control input is also set to zero. Inputs signals are quantized into seven levels represented by a set of linguistic variables: Negative Large (NL), Negative Medium (NM), Negative Small (NS), Zero (Z), Positive Small (PS), Positive Medium (PM), and Positive Large (PL). Details about the fuzzy rule base can be found in [35]. The choice of the fuzzy rules and membership functions are chosen and refined further to ameliorate the tracking performance. This choice can reduce the magnitude of the abrupt variations in the system's response. However, unlike in type-1 FLC where good performance is achieved at the cost of a heavy empirical tuning procedure of the rules and the input membership functions, type-2 FLC does not require such extensive tuning thanks to its third dimension that captures better the membership functions uncertainties and knowledge base imprecision. In this study, defuzzification is achieved using the center-of-area technique. As shown in [35], an adaptive fuzzy logic system consists of the antecedent part of the fuzzy rules (fuzzification) and the consequent part (defuzzification) liking the fuzzy rules with the output. 
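As a minimal sketch of the type-reduction and defuzzification steps just described, the snippet below computes the interval end-points as firing-strength-weighted averages of the rule consequents and takes their midpoint as the crisp output. It uses the simplified (non-iterative) center-of-sets form, so it is an illustration consistent with the text rather than the exact Eqs. (7)-(8) or a full Karnik-Mendel routine, and all numbers are hypothetical.

```python
import numpy as np

def center_of_sets(y_l, y_r, f_l, f_r):
    """Simplified center-of-sets type reduction: weighted averages of the rule
    consequent end-points using lower/upper firing strengths (the iterative
    Karnik-Mendel switching step is omitted)."""
    yl = np.sum(f_l * y_l) / np.sum(f_l)
    yr = np.sum(f_r * y_r) / np.sum(f_r)
    return yl, yr

def defuzzify(y_l, y_r, f_l, f_r):
    """Crisp output as the midpoint of the type-reduced interval [y_l, y_r]."""
    yl, yr = center_of_sets(y_l, y_r, f_l, f_r)
    return 0.5 * (yl + yr)

# Hypothetical consequent end-points and firing strengths for a 3-rule controller.
y_l = np.array([-1.0, 0.0, 1.0]); y_r = np.array([-0.8, 0.2, 1.2])
f_l = np.array([0.2, 0.6, 0.1]);  f_r = np.array([0.4, 0.8, 0.3])
print(defuzzify(y_l, y_r, f_l, f_r))
```

In the adaptive scheme, the consequent parameters of such a structure are the quantities that are updated online by the adaptation law.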
As such, the adaptive type-2 FLC's output is expressed by, where,Φ ∈ R n =Φ l +Φ r 2 is the n-dimensional vector of known functions (regressor) of the interval type-2 fuzzy logic antecedent part defined as, and W ∈ R n is the weight vector of the fuzzy logic consequent part, with n as the number of fuzzy logic rules. As such, σ =Φ TŴ − Φ T W is the fuzzy logic output error. The symbol• denotes the parameter estimate. Formulation (3) can be expressed as: Thus, the desired dynamics of the helicopter's inverse dynamics can be expressed using a regression model: With W • and Φ • representing the vector of unknown parameters and known functions (regressor), respectively, which are defined as follows: where,• r =• * + K dė• + K p e • , with K d and K p being positive constant gains, which correspond to the desired time constant of the error dynamics. Setting the control law as Φ TŴ leads to, Using the linear regression model in (13) leads to, where,W = W −Ŵ,η • is the estimate of η • which is defined as, The adaptive FLC uses only an approximation of the regression vector Φ • since this vector is assumed to unknown, thus, the error dynamics equation can be written as: where (9). Since the system's dynamics can be written in the form of a regression model Φ T It is noteworthy that the solution is not unique and there exists several combination ofΦ T •Ŵ• that leads to an accurate approximation of the nonlinear system's dynamics Φ T • W • . This can be written in a state space formĖ = AE + Bσ, where E ∈ R 2 = [e • ,ė • ] T is the state vector. A ∈ R 2×2 is a stable matrix, and B ∈ R 2 , are given by: The control law is defined as: Theorem 1. Consider a nonlinear system in the form (1)-(3) with the control law (19). The closed-loop system's stability is achieved with the following adaptation law:Ŵ where Γ = diag(γ 1 , γ 2 , . . . , γ j ) and γ l is a positive constant, l = 1, . . . , j. P is a symmetric positive definite matrix chosen to satisfy the following Lyapunov equation: with Q > 0. Proof. Choose the following Lyapunov candidate: Taking the derivative of V:V where E 1 = B T PE. Add and subtractΦ T W from σ =Φ TŴ − Φ T W, Therefore, Substitute σ in (24) Setting the adaptation law as˙Ŵ = −ΓΦE 1 (28) implies thatV Setting K p > 0 and K d > 0 makesV ≤ 0 possibly omitting the region of E = 0 [35]. Consequently, the system is stable in the sense of Lyapunov. The region of E = 0 is defined by the fuzzy logic approximation errorΦ and gets smaller asΦ → 0. Remark 2. Due to the iterative nature of adaptation mechanisms and because of the high complexity of the 3-DOF helicopter's model, the controller may take a relatively long time to converge which may lead to an unstable behavior. To overcome this issue, the control gains K p and K d should be large enough to achieve stability at start-up. Setup This section is dedicated to the analysis of the performance of the 3-DOF helicopter whose physical parameters and control gains are defined in Tables 1 and 2, respectively. All controllers have the same singleton fuzzy logic input membership functions for e • andė • that are set to [−1 · 10 −2 , −5 · 10 −3 , 0, 5 · 10 −3 , 1 · 10 −2 ] and [−2, 0.5, 0, 0.5, 2], respectively. In order to assess the performance and robustness of the closed-loop control system, the 3-DOF helicopter model is implemented in MATLAB/Simulink R by MathWorks Inc. Numerous tests are essential to validate the proposed controller's ability to cope with various uncertainties. 
(Table excerpt: sampling frequency 100 Hz; natural frequency $w_{n}$ = 5 rad/s.) Figure 3 shows the desired trajectory for all angles, which is taken as the step response of a critically damped second-order system with a natural frequency $w_{n} = 5$ rad/s. In each test, the tracking angle errors $e_{\bullet}$ and the control signals $v_{1}$, $v_{2}$, and $u_{2}$ are considered in the study of the helicopter system's response under various operating conditions. Results The helicopter's tracking errors $e_{\bullet}$ and the control signals $v_{1}$, $v_{2}$, and $u_{2}$ were used to study the system's response under various operating conditions. First, the adaptive interval type-2 fuzzy logic control strategy was validated on the helicopter in the nominal case. Results are reported in Figure 4. As shown, the motion tracking errors converge gradually to zero after approximately 2 s. The ability of the proposed controller to achieve high tracking accuracy is clearly demonstrated by the low magnitude of the tracking errors. A saturation of the control signals at ±20 is observed in Figure 4b due to a fast change in the system's trajectories; it is important to note that the controller remains stable under such conditions. As shown in Figure 5, the fuzzy logic weights converge to finite limits after the transient. Next, the controller was tested under parametric uncertainties. For that, the mass of the system was doubled in one case and halved in another. The tracking errors, reported in Figure 6, were kept within a magnitude similar to the nominal case and converge to zero around the same time. This shows the controller's resilience in the presence of parametric uncertainties. The control efforts (especially $u_{2}$) increased when the mass of the system was doubled and decreased when it was halved, as expected to compensate for the mass change. To further demonstrate the robustness of the control scheme in handling uncertainties, unexpected sudden step disturbances of 1 rad were applied to the yaw, roll, and pitch at times 2 s, 3 s, and 4 s, respectively. The results depicted in Figure 7 reveal the controller's ability to drive the tracking errors to zero without oscillations or unstable behavior. This test illustrates the benefit of using intelligent control. The adaptive interval type-2 fuzzy logic controller's performance was then contrasted against classical PID control, the adaptive control presented in [34], and the adaptive sliding-mode control (SMC) method suggested in [37]. The tracking errors and the control efforts are depicted in Figure 8. Both the intelligent and the PID controllers, unlike sliding-mode control and adaptive control, are able to drive the tracking errors to zero over time. However, the proposed controller decays the errors faster than PID control, with a lower error magnitude and control effort. Adaptive control needs persistent excitation for parameter convergence, and hence high tracking accuracy may take a relatively long time to achieve. In theory, sliding mode is reached by discontinuous control at infinite switching frequency. However, the switching frequency is finite in real-world applications, which yields the discretization chattering phenomenon shown in Figure 8. To deal with this problem, a boundary-layer solution uses a saturation function to approximate the sign function within a boundary layer around the sliding manifold. This alternative partly preserves the sliding-mode invariance property, with the states confined to a small vicinity of the manifold.
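The boundary-layer idea just described replaces the discontinuous sign function with a saturation function inside a thin layer around the sliding manifold. A minimal sketch follows; the boundary-layer thickness value is illustrative and not taken from the paper.

```python
import numpy as np

def sat(s, phi=0.05):
    """Saturation approximation of sign(s) within a boundary layer of thickness phi:
    linear inside |s| <= phi, +/-1 outside. Reduces chattering at finite switching rates."""
    return np.clip(s / phi, -1.0, 1.0)

s = np.linspace(-0.2, 0.2, 5)
print(np.sign(s))   # discontinuous switching term
print(sat(s))       # smoothed boundary-layer alternative
```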
Thus, convergence to zero is not guaranteed, since robustness is achieved only when sliding mode truly occurs. To quantitatively assess the trajectory tracking performance, two performance metrics are introduced. The first, ζ e , is the integral of the tracking error calculated over a single run; the second, ζ c , is the integral of the control signals. Both metrics are computed over the interval [t 0 , t f ], where t 0 and t f are the initial and final time instants, respectively. The obtained numerical values for both performance metrics are displayed in Table 3. The proposed FLC method yielded the lowest tracking performance index and control effort. Adaptive control and PID control show similar performance, mainly because adaptive control needs time for parameter adaptation and convergence. On the other hand, the adaptive SMC achieves the highest tracking performance index and control effort due to its inherent chattering phenomenon. Conclusions In this paper, an adaptive interval type-2 fuzzy logic controller is designed for a 3-DOF helicopter. The control design uses an adaptation law to approximate the system's inverse model; consequently, no a priori knowledge of the parameters is required. The control scheme therefore achieves accurate tracking through a Lyapunov-based adaptation law in the presence of both structured and unstructured uncertainties. Unlike other control techniques, the system's closed-loop stability is guaranteed by the Lyapunov direct method. The controller is tested under various operating conditions to assess its robustness against numerous uncertainties. The results illustrate that the 3-DOF helicopter's motion can be tracked with high precision. A comparison is carried out against three other controllers, i.e., PID control, adaptive control, and adaptive sliding-mode control. To better assess the performance of these controllers, two quantitative tracking performance metrics are introduced, i.e., the integral of the tracking errors and the integral of the control signals. The comparative results reveal the superiority of the proposed control method in achieving high tracking performance with low control effort. Future work may envision conducting the comparative study under various operating conditions on the physical Quanser 3-DOF helicopter. Conflicts of Interest: The authors declare no conflict of interest.
High-throughput predictions of metal–organic framework electronic properties: theoretical challenges, graph neural networks, and data exploration With the goal of accelerating the design and discovery of metal–organic frameworks (MOFs) for electronic, optoelectronic, and energy storage applications, we present a dataset of predicted electronic structure properties for thousands of MOFs carried out using multiple density functional approximations. Compared to more accurate hybrid functionals, we find that the widely used PBE generalized gradient approximation (GGA) functional severely underpredicts MOF band gaps in a largely systematic manner for semi-conductors and insulators without magnetic character. However, an even larger and less predictable disparity in the band gap prediction is present for MOFs with open-shell 3d transition metal cations. With regards to partial atomic charges, we find that different density functional approximations predict similar charges overall, although hybrid functionals tend to shift electron density away from the metal centers and onto the ligand environments compared to the GGA point of reference. Much more significant differences in partial atomic charges are observed when comparing different charge partitioning schemes. We conclude by using the dataset of computed MOF properties to train machine-learning models that can rapidly predict MOF band gaps for all four density functional approximations considered in this work, paving the way for future high-throughput screening studies. To encourage exploration and reuse of the theoretical calculations presented in this work, the curated data is made publicly available via an interactive and user-friendly web application on the Materials Project. INTRODUCTION Metal-organic frameworks (MOFs) have been extensively studied over the last two decades due to their high degree of synthetic tunability, which makes it possible to tailor their physical and chemical properties for a given application 1,2 . While much attention has been focused on the use of MOFs for industrial gas storage and separations 3,4 , the design of MOFs with targeted electronic properties has become a topic of recent interest as well [5][6][7][8] . Through a judicious selection of inorganic nodes and organic linkers, MOFs have been proposed for novel electronic and optoelectronic devices, electrocatalysts, photocatalysts, sensors, and energy storage devices, among many other applications 6,[9][10][11] . However, with tens of thousands of MOFs that have been experimentally synthesized 12 and virtually unlimited more that can be proposed 13 , it is often difficult to identify promising MOF candidates with the optimal set of electronic properties. The advent of machine learning (ML) and related big data approaches has made it possible to more efficiently search through MOF chemical space, and high-throughput computational screening can often provide insight into previously unknown structure-function relationships [14][15][16][17][18][19][20][21][22] . With this goal in mind, a high-throughput density functional theory (DFT) workflow 23 was recently used to construct a publicly accessible dataset of quantum-chemical properties for thousands of MOFs (and coordination polymers), known as the Quantum MOF (QMOF) Database 24 . 
Like many databases of material properties generated from high-throughput periodic DFT calculations 25,26 , the electronic structure properties within the QMOF Database were computed with the relatively inexpensive Perdew-Burke-Ernzerhof (PBE) 27 exchange-correlation functional. While PBE is useful for generating large quantities of material property data that are often needed for ML, the electron self-interaction error 28 of generalized gradient approximation (GGA) functionals like PBE can greatly influence the predicted electronic properties 28,29 . Perhaps most notably, PBE is known to severely underpredict band gaps [30][31][32] , but the degree to which there may be qualitative (as opposed to merely quantitative) errors is not well-established. This inherently limits the practical utility of data-driven, computational screening approaches based on such a functional. For inorganic solids, several approaches have been taken to increase the accuracy of ML-predicted band gaps trained on high-throughput DFT calculations in a computationally tractable manner. The most straightforward option is to train ML models on experimental band gap data 33 or an ensemble of both theoretical and experimental band gap data 34 . Unfortunately, this approach is challenging to apply to MOFs because there are relatively few reports of experimentally measured MOF band gaps 8 . Furthermore, the reported band gaps of MOFs can vary by several tenths of an eV depending on the synthesis conditions and crystallinity of the material 6 . Another approach is to carry out higher-accuracy DFT calculations on a subset of materials and use them to train an ML model that can make more reliable predictions. Recently, large datasets of band gaps computed with meta-GGA and hybrid functionals have been published for inorganic solids [35][36][37] , although no such resource currently exists for MOFs. In the present work, we complement the existing dataset of PBE electronic structure properties in the QMOF Database with analogous data computed using three other functionals: HLE17 38 (a high local-exchange meta-GGA), HSE06 39,40 (a screened-exchange hybrid GGA), and a functional we refer to here as HSE06 * in which the amount of screened Hartree-Fock (HF) exchange of HSE06 has been changed from 25% at short interelectronic distances to 10%. By analyzing the electronic structure properties calculated at these levels of theory, we uncover severe theoretical limitations associated with the more computationally efficient (meta-)GGA density functionals that prevent them from achieving quantitatively, and sometimes qualitatively, accurate band gap predictions for MOFs and coordination polymers with respect to hybrid functionals. Since it is known that different density functional approximations (DFAs) can alter the underlying charge density, we also investigated trends related to the computed partial atomic charges. In general, we find that the different levels of theory predict similar partial atomic charges; however, as compared to PBE, the meta-GGA and screened hybrids tend to shift electron density away from the metal centers and onto the ligand environments. We conclude by using the electronic structure data to train multi-task and multi-fidelity convolutional neural network models that can predict PBE, HLE17, HSE06, and HSE06 * band gaps given a graph-based representation of a MOF crystal structure.
We anticipate that the computational data, trends, and subsequent deep learning models presented in this work will make it possible to achieve both rapid and accurate predictions of MOF band gaps that can greatly accelerate the materials design and discovery process. To help realize this vision, all the data underlying the QMOF Database is now also made available as a dedicated, interactive application on the widely used Materials Project 41 . Band gap comparison To develop ML models that can directly guide future experimental efforts, it is essential to first understand the behavior and potential limitations of various levels of theory when predicting MOF electronic structure properties. As such, we begin by comparing the DFT-predicted band gaps for 10,720 structures in the QMOF Database with the PBE (GGA: 0% HF exchange), HLE17 (meta-GGA: 0% HF exchange), HSE06 * (screened hybrid: 10% HF exchange at small interelectronic distances decreasing to zero at large distance), and HSE06 (screened hybrid: 25% HF exchange at small interelectronic distances decreasing to zero at large distance) functionals. As shown in Fig. 1, we observe pronounced differences amongst the predictions of the various DFAs. Starting with the box plots, we find that of the four functionals tested in this work, PBE generally predicts the lowest band gaps. Including HF exchange-as with HSE06 * and HSE06-tends to increase the predicted band gap values (as expected 42 ), with the relative increase depending on the fraction of HF exchange in the selected functional. Qualitatively, the HSE06 * and HSE06 results are more reflective of prior experimental studies 6 , which suggest that the majority of MOFs are electronically insulating and that comparatively few exhibit semi-conducting or metallic character. Switching focus to the HLE17 meta-GGA, we find that the median band gap value is within 0.09 eV of the HSE06 * calculations, suggesting that the parameterization of this functional can partially improve upon the band gap underprediction problem of PBE despite not incorporating HF exchange. When comparing the violin plots in Fig. 1, it is immediately clear that the shape of the band gap distribution can vary significantly depending on the DFA. The PBE-computed band gap data exhibits two distinct distributions with peaks around 0.90 eV and 2.93 eV (Fig. 1), which is observed for the full QMOF Database of 20,000 structures as well ( Supplementary Fig. 6). A qualitatively similar distribution of band gaps is obtained when using the HLE17 functional, which has peaks around 0.86 eV and 3.21 eV. However, the two distributions in the band gap data exhibit much more significant overlap for the HSE06 * functional, and for the HSE06 functional there is almost complete overlap such that the overall distribution is virtually unimodal. The two underlying distributions in the band gap data can be better understood by separating the computed values based on whether the material has closed-shell or open-shell character, the latter of which is associated with lower band gaps on average (Fig. 2a). When including 10% HF exchange with HSE06 * , the degree of overlap between the closed-shell and open-shell band gap distributions is partway between that of PBE and HSE06 (Fig. 2a), which illustrates the strong dependence of the trends on the fraction of HF exchange. 
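Given band gaps computed at several levels of theory together with an open-shell flag, the split described above is straightforward to reproduce. Below is a minimal pandas sketch; the column names are illustrative placeholders rather than the actual QMOF Database field names.

```python
import pandas as pd

def median_gaps_by_shell(df: pd.DataFrame) -> pd.DataFrame:
    """Median band gap per functional, split by closed- vs open-shell character.

    Expects one row per MOF with an 'open_shell' boolean column and one band gap
    column per functional (column names here are placeholders).
    """
    gap_columns = ["E_g_PBE", "E_g_HLE17", "E_g_HSE06_star", "E_g_HSE06"]
    return df.groupby("open_shell")[gap_columns].median()

# Example with a toy DataFrame (values are illustrative, in eV).
toy = pd.DataFrame({
    "open_shell": [False, False, True, True],
    "E_g_PBE": [2.9, 3.1, 0.8, 1.0],
    "E_g_HLE17": [3.2, 3.3, 0.9, 1.1],
    "E_g_HSE06_star": [3.6, 3.8, 1.7, 1.9],
    "E_g_HSE06": [4.1, 4.3, 2.6, 2.8],
})
print(median_gaps_by_shell(toy))
```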
Taking the hybrid-quality calculations as the more accurate reference point 43 , these findings suggest that the PBE functional exhibits severe quantitative and qualitative shortcomings when applied to a wide range of MOF structures and that these shortcomings go beyond a simple underprediction of the band gap. Although HLE17 increases the median band gap of the dataset compared to PBE and decreases the number of structures with a predicted band gap in the low-energy subset, it retains the bimodal nature of the band gap distribution. Nonetheless, HLE17 does significantly increase the band gaps of the closed-shell frameworks, and the distribution of band gaps for the closed-shell MOFs is similar to that of HSE06 * . By directly comparing the predicted band gaps for the PBE, HSE06 * , and HSE06 calculations, we find that there is a correlation between the median band gap and the fraction of HF exchange (Fig. 2b), at least within the range of 0-25% HF exchange considered in this work. Assuming linear behavior in this region, it can be concluded that the median band gap across the dataset changes by ~0.05 eV per percent of HF exchange for the closed-shell frameworks and ~0.10 eV per percent of HF exchange for the open-shell frameworks, although we emphasize that these statistics are specific to the QMOF Database and may differ for other datasets of MOFs. Fig. 1 Raincloud plots (i.e., combined violin plot, box plot, and strip plot) for the DFT-computed band gaps, E g , of 10,720 structures in the QMOF Database at the PBE, HLE17, HSE06 * , and HSE06 levels of theory. The strip plots show all the data at that level of theory (jittered horizontally for ease-of-visualization). The box plots show the extrema (whisker tails), interquartile range (box boundaries), and median (horizontal line). The violin plots show the probability density of the data. Collectively, these results have significant implications for computational screening studies of MOFs and coordination polymers, as the use of GGA functionals like PBE may lead to incorrect qualitative comparisons between the band gaps of different materials if some have closed-shell character and others have open-shell character. While Figs. 1 and 2 show how the entire dataset changes with different density functionals, it is also important to investigate the degree of correlation between the various functionals. As shown in Fig. 3, nearly every MOF has a larger predicted band gap with the HSE06 * (Fig. 3b) and HSE06 (Fig. 3c) functionals than with PBE. This is also the case for most of the closed-shell MOFs with the HLE17 functional, especially when E g,PBE is above ~1.5 eV (Fig. 3a). For the closed-shell frameworks (Supplementary Fig. 7), there is a linear correlation between the computationally inexpensive PBE-quality band gaps and those calculated with the more accurate HSE06 * and HSE06 functionals as well as the HLE17 functional. As shown in Supplementary Fig. 7c, a simple linear equation of the form 1.09E g,PBE + 1.04 eV can predict HSE06 band gaps with an R 2 value of 0.92, provided the frameworks are closed-shell systems and have HSE06 band gaps above ~1.0 eV. Similar linear equations can be obtained for HLE17 and HSE06 * for the closed-shell structures (Supplementary Fig. 7a and Supplementary Fig. 7b). The correlation between PBE and the hybrid functionals is weaker for MOFs with open-shell character, hence the larger degree of scatter in the low E g,PBE range of Fig. 3b and c.
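The empirical correction quoted above is simple enough to apply directly when screening closed-shell frameworks. The sketch below encodes it; the coefficients are those reported for this dataset, and the relation is not valid for open-shell MOFs or for materials with HSE06 gaps below roughly 1 eV.

```python
def hse06_gap_from_pbe(e_g_pbe_ev: float, slope: float = 1.09, intercept: float = 1.04) -> float:
    """Empirical estimate E_g(HSE06) ~= 1.09 * E_g(PBE) + 1.04 eV for closed-shell
    frameworks; not applicable to open-shell (e.g., many 3d transition metal) MOFs."""
    return slope * e_g_pbe_ev + intercept

# A PBE gap near the closed-shell peak of the distribution (illustrative input).
print(hse06_gap_from_pbe(2.93))
```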
As might be anticipated based on trends in crystal-field splitting parameters and spin-pairing energies 44 , most open-shell materials in the QMOF Database contain 3d transition metal cations (particularly Cu, Co, Mn, Ni, Fe, V, and Cr in decreasing frequency of occurrence) (Supplementary Fig. 8). Previous theoretical work on transition metal complexes and gas-phase molecules containing transition metal cations has implicated large self-interaction errors (a consequence of each electron interacting with the total electron density, including its own 28 ) as a major source of errors in systems with 3d transition metal cations that have open-shell character 45,46 . More generally, self-interaction error is usually considered to be responsible for many of the deficiencies of DFT across virtually all properties and material classes, often due to the associated delocalization error 47,48 . Since self-interaction error is partially decreased by the inclusion of HF exchange, this is a major reason that the hybrid functionals give different results than the local functionals for the band gap predictions in this work. Partial charge comparison Beyond band gaps, it is well-established that different DFAs can change how the charge density is distributed in a given material [49][50][51][52][53] . Furthermore, partial atomic charges (which can be computed directly from the underlying charge density) are commonly used in molecular simulations of MOFs and can be used to interpret trends when modeling redox processes and chemical reactions 54,55 . One such method to compute partial atomic charges, the sixth-generation Density Derived Electrostatic and Chemical (DDEC6) partitioning scheme [56][57][58] , has found widespread use in molecular simulations of MOFs 54 (e.g., for gas storage and separations) and has performed well in tests of reproducing the electrostatic potential 59 . To explore the sensitivity of partial atomic charges to different DFAs, we compared over 900,000 partial charges calculated from the DDEC6 method using charge densities at the PBE, HLE17, HSE06 * , and HSE06 levels of theory. As shown in Fig. 4a, the DDEC6 partial atomic charges calculated by PBE and HLE17 are highly correlated across the entire dataset, with most points falling within 0.04 charge units of the y = x line. When investigating the partial charges computed by HSE06 * , we find that the HSE06 * partial charges are even closer to the PBE reference than the HLE17 partial charges are (Fig. 4b), indicating that 10% HF exchange at small interelectronic distances does not substantially change the first moment of the charge density. However, when increasing the HF exchange at small interelectronic distances to 25% with HSE06, a slightly larger difference can be observed (Fig. 4c). By focusing solely on the metal elements and the ligand atoms within their first coordination spheres (as determined using the CrystalNN near-neighbor finding algorithm 60,61 ), we find that, compared to the PBE reference, there is often a loss of electron density (i.e., increased partial atomic charge) at the metal and a corresponding gain of electron density (i.e., decreased partial atomic charge) on the surrounding ligands when using the HSE06 functional (Fig. 4d). These trends are consistent with previous partial charge analyses carried out on transition metal complexes and open-framework solids 46,52,62 .
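The first-coordination-sphere bookkeeping used for this kind of analysis can be reproduced with pymatgen's CrystalNN implementation. The minimal sketch below only identifies the ligand atoms bonded to a chosen metal site; the way the per-atom charges are then compared (a simple difference of HSE06 and PBE charges) is an illustrative assumption rather than the exact analysis script used in the paper.

```python
from pymatgen.core import Structure
from pymatgen.analysis.local_env import CrystalNN

def metal_and_ligand_charge_shift(structure: Structure, metal_index: int,
                                  q_pbe, q_hse06):
    """Charge shift (HSE06 minus PBE) on a metal site and on its first
    coordination sphere, with neighbors found by the CrystalNN algorithm."""
    cnn = CrystalNN()
    ligand_indices = [nn["site_index"] for nn in cnn.get_nn_info(structure, metal_index)]
    metal_shift = q_hse06[metal_index] - q_pbe[metal_index]
    ligand_shifts = [q_hse06[i] - q_pbe[i] for i in ligand_indices]
    return metal_shift, ligand_shifts

# structure = Structure.from_file("qmof_example.cif")  # placeholder structure file
```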
Given the large partial charge dataset in the present work, we can conclude that this shifting of electron density occurs for an enormously diverse range of metal-ligand environments and can be taken as a rule-of-thumb in most cases. While there are differences in the partial atomic charges between the various levels of theory, they are generally relatively minor. The overall strong agreement suggests that the less expensive PBE-quality charges, which are available for thousands of MOFs 24,54 , are likely suitable when carrying out high-throughput computational screening studies. Since no single charge partitioning scheme is expected to be ideal for all applications, we also compared the effect of different charge partitioning schemes for a given DFA. As shown in Fig. 5, the differences between Bader 63,64 , DDEC6 56,57,65 , and Charge Model 5 (CM5) 66 partial atomic charges (as computed with the PBE functional) tend to be far larger than any differences observed when changing the DFA, similar to what has been observed for several inorganic solids 67 . This is especially the case when directly comparing the Bader and DDEC6 methods. As one example of many, large deviations are often observed for the S and P atoms of SO 4 2− and PO 4 2− groups, which have partial atomic charges upwards of~2.4 charge units higher with the Bader method than the DDEC6 method. In addition, there can be qualitative differences between Bader and DDEC6 charges, such as atoms that have a partial positive charge with the Bader method but a partial negative charge with the DDEC6 method. While there are also clear differences between the DDEC6 and CM5 methods (Fig. 5b), the agreement between these two charge partitioning approaches is generally greater than that between DDEC6 and Bader. For applications involving systems quite different from those in available benchmarks 55,56,66 , it might be advisable to compare multiple partial charge schemes and further investigate any substantial differences 68 . Machine learning With the goal of reducing the number of DFT calculations needed in future high-throughput computational screening studies, we have evaluated the performance of several ML models that can predict MOF band gaps from graph representations of their threedimensional structures (for the prediction of partial atomic charges, we refer the reader to several ML models 69-71 that have been shown to accurately predict PBE-quality DDEC6 and CM5 charges for MOFs). Using MatDeepLearn 72 , we first trained individual graph neural networks for each DFA and found that they performed well at predicting DFT-computed band gaps compared to a baseline model that simply predicts the mean of the dataset for each entry (Table 1). Prior work 24,72 on the QMOF Database showed that a crystal graph convolutional neural network model 73 could predict PBE band gaps with a comparable accuracy, and it is reassuring that relatively low testing-set MAEs on the order of 0.24-0.29 eV can be obtained for the more accurate DFAs (i.e., HLE17, HSE06 * , HSE06). Overall, the graph neural network trained on PBE band gap data performs better than the graph neural networks trained on the HLE17, HSE06 * , or HSE06 datasets, which can likely be attributed to the greater number of data points available for training with PBE. 
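For context, the mean-predicting baseline mentioned above is trivial to reproduce and serves as a quick sanity check when comparing against the reported MAEs. The arrays below are placeholders.

```python
import numpy as np

def mae(y_true, y_pred) -> float:
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def baseline_mae(y_train, y_test) -> float:
    """MAE of a model that always predicts the training-set mean band gap."""
    return mae(y_test, np.full(len(y_test), np.mean(y_train)))

# Illustrative usage with placeholder band gap arrays (eV).
y_train = np.array([0.9, 2.9, 3.1, 1.2, 2.5])
y_test = np.array([1.0, 3.0])
print(baseline_mae(y_train, y_test))
```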
Despite similar training set sizes for the HLE17, HSE06 * , and HSE06 levels of theory, the model based on HSE06 data has the largest testing set MAE of 0.29 eV, which may be attributed in part to a wider range of possible band gap values and a greater overlap in the band gap distributions for the closed- and open-shell frameworks. Next, we considered various approaches that could make more efficient use of the available band gap data obtained with different functionals. Starting with a multi-task learning approach that predicts band gaps for all four DFAs simultaneously using a single model architecture, perceptible but minor improvements to the model performance are obtained (Table 1). While more convenient to use than multiple individual models if multiple band gap estimates are desired, an inherent drawback of the multi-task learning method is that the training process requires structures that have band gaps computed for all DFAs of interest, which limits the amount of data that can be used. An alternate way to efficiently leverage data at multiple levels of theory is to construct a multi-fidelity model, which treats each level of theory as a unique sample 74,75 . With a substantially expanded dataset size of up to 52,806 samples, we find that the multi-fidelity MEGNet model architecture of Chen et al. 75 achieves significantly lower MAEs than the individual and multi-task models for the 3-fi (i.e., PBE, HLE17, and HSE06 * ) and 4-fi (i.e., PBE, HLE17, HSE06 * , and HSE06) models (Table 1). These results demonstrate that data at multiple levels of theory can be used to improve the overall model performance, which is especially important for the prediction of band gaps from hybrid functionals that are more computationally demanding to calculate. However, we note that the 2-fi model (i.e., PBE + HSE06) does not outperform the multi-task model. In future studies, it may be worthwhile to consider additional approaches (e.g., Δ-learning) 76 if only two fidelities are available, especially given the correlation between the PBE and HSE06 functionals (Fig. 4c). Fig. 5 Correlation between partial atomic charges with different charge partitioning schemes. a Comparison of the partial atomic charges, q, for 1,429,082 atoms computed using the Bader and DDEC6 charge partitioning schemes at the PBE level of theory. b Comparison of the partial atomic charges, q, for 2,321,435 atoms computed using the CM5 and DDEC6 charge partitioning schemes at the PBE level of theory. Given the large dataset size, the data is shown as 2D histograms with the logarithmic color bar reflecting the frequency of points in each bin. The y = x line is shown for reference. The testing set parity plots for each model are presented in Supplementary Figs. S12-S16, which show that the predictive accuracy generally holds over the range of band gaps, albeit with an increase in scatter toward the low band gap region (e.g., E g,DFT < 0.5 eV). The increased error in the low band gap region can likely be traced back to several factors, such as a smaller number of MOFs to train on in this range and a higher fraction of open-shell MOFs whose properties are likely more difficult to predict with ML models.
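The multi-fidelity setup described above treats each (structure, level of theory) pair as its own training sample, with the fidelity encoded as an integer state attribute (see the Methods). Below is a minimal sketch of how such a sample list might be assembled; the dictionary keys and fidelity indices are illustrative, not the exact MatDeepLearn/MEGNet data format.

```python
# Map each level of theory to an integer fidelity label (ordering is illustrative).
FIDELITY = {"PBE": 0, "HLE17": 1, "HSE06*": 2, "HSE06": 3}

def build_multifidelity_samples(records):
    """Expand per-MOF records into one sample per available (structure, fidelity) pair.

    `records` is an iterable of dicts like
    {"qmof_id": ..., "structure": ..., "band_gaps": {"PBE": 2.1, "HSE06": 3.3}}.
    """
    samples = []
    for rec in records:
        for functional, gap in rec["band_gaps"].items():
            samples.append({
                "structure": rec["structure"],   # graph is built from this later
                "state": FIDELITY[functional],   # fidelity fed in as a state attribute
                "target": gap,                   # band gap at that level of theory
            })
    return samples
```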
Collectively, we anticipate that the multi-task and multi-fidelity ML models will be a valuable resource for future high-throughput screening studies by minimizing the need to carry out computationally demanding hybrid DFT calculations, particularly if low-fidelity PBE band gap data is readily available (as is the case with the QMOF Database). Given the promising nature of the multi-fidelity ML models, incorporating experimentally determined band gaps 6,8 during the training process would likely be worth pursuing in future work. QMOF database on the materials project With DFT-computed properties at multiple levels of theory, we aimed to make the QMOF Database align with the findable, accessible, interoperable, and reusable (FAIR) guiding principles 77,78 . Therefore, we conclude by showcasing an interactive web application hosted on the Materials Project 41,79 , which can be accessed at the following webpage: https://materialsproject.org/ mofs. Known as the Materials Project MOF Explorer, the web application makes it possible to investigate the computed properties in the QMOF Database through a user-friendly, search-based interface. The data driving the MOF Explorer is made available to the public through the Material Project's contribution platform MPContribs 80,81 . The MPContribs application programming interface and its accompanying Python client 82 provide a unified mechanism for contributors to submit a dataset and for the community at large to programmatically retrieve, download, and query the contributed materials data. Here, contributions containing materials data are linked to a given MOF via a dedicated, unique identifier (QMOF ID) and are organized in components of queryable dictionary data, Pymatgen 83 structure objects, and binary data files. As shown in Fig. 6, the Materials Project-hosted MOF Explorer allows users to sort and filter materials in the QMOF Database by numerous geometric, compositional, textural, topological, magnetic, and electronic properties. Selecting a single material on the MOF Explorer leads to a detailed calculation summary page, which lists various tabulated properties for that material and an interactive visualization of the DFT-optimized crystal structure. In addition to DFT-computed properties, each material has an associated MOFid/MOFkey 84 (where computable) to support substructure searches as well as cross-referencing with other MOF databases. As the QMOF Database continues to evolve, we plan to incorporate additional computed properties and visualizations on the Materials Project to enable further data exploration. DISCUSSION With a generated dataset of electronic structure properties for a subset of~10,700 MOFs (and coordination polymers) in the QMOF Database 24 , we compare the performance of different DFAs for the prediction of band gaps and partial atomic charges. When comparing DFT-computed band gaps with the commonly used PBE functional against those that incorporate some fraction of HF exchange, we observe that PBE almost universally results in a lower band gap prediction, as might be expected from prior work. Notably, this difference is largely systematic for MOFs with closed-shell electronic configurations and can be empirically corrected through a simple linear relationship for structures that are semi-conductors or insulators. 
For MOFs with open-shell electronic configurations (in particular, those containing 3d transition metals), an even larger, and less predictable, disparity between band gap predictions is observed as a function of the fraction of HF exchange. As compared to the PBE results, the meta-GGA HLE17 is found to increase the computed band gaps for the closed-shell MOFs such that they are similar to values predicted using the HSE06 screened hybrid functional with 10% HF exchange at small interelectronic distances (denoted here as HSE06 * ). However, compared to the hybrid functionals, HLE17 does not as significantly increase the band gaps of the open-shell MOFs. When investigating partial atomic charges, which are reflective of the underlying charge density for a given density functional approximation, we find that there are slight systematic differences amongst the predictions of the different functionals. For both the HLE17 meta-GGA and the screened hybrid functionals, electron density localized on the metals is lower than with PBE, and the opposite is true for the coordinating ligand atoms. Nonetheless, these changes in the partial atomic charges are relatively minor compared to the differences that arise from using different charge partitioning schemes. Finally, we used the electronic structure data generated in this work to train multiple ML models that can predict MOF band gaps at various levels of theory from graphs of the underlying crystal structures. We find that individual graph neural network models can predict PBE, HLE17, HSE06 * , or HSE06 band gaps from the QMOF Database with a testing-set MAE of 0.23-0.29 eV. A multi-task graph neural network model capable of simultaneously predicting MOF band gaps for all four functionals performs slightly better than the individual models, but with three or more functionals to train on, a multi-fidelity model achieves the best performance of the models tested in this work. High-throughput computational screening approaches have historically been devoted to the discovery of MOFs tailored for gas storage and separations. With the dataset and ML models presented in this work, coupled with an increased understanding of the behavior of common DFAs for predicting electronic properties, we anticipate that a computational materials design perspective can be brought to countless application areas for MOFs. Now hosted on the widely used Materials Project platform (https://materialsproject.org/mofs), theorists and experimentalists alike can leverage the data from tens of thousands of quantum-mechanical calculations to accelerate the discovery of promising MOFs for electronic and optoelectronic applications. Table 1 Individual, multi-task, and multi-fidelity model performance. The individual models represent four separate models that are each trained on band gaps at a single level of theory. The multi-task model is a single model that is trained on and predicts band gaps at all four levels of theory simultaneously. The multi-fidelity models combine data from different levels of theory without all samples needing to have band gaps at each level of theory. The 2-fi, 3-fi, and 4-fi models are trained/tested on PBE + HSE06, PBE + HLE17 + HSE06, and PBE + HLE17 + HSE06 * + HSE06 data, respectively. A baseline model that simply predicts the mean value of the dataset is shown for reference. The dataset sizes refer to the entire available dataset, which is split 80:5:15 train:validation:test. The mean absolute errors (MAEs) are shown for the testing set.
Density functional theory calculations Plane-wave, periodic DFT calculations were carried out using the Vienna ab initio Simulation Package (VASP) 85,86 version 5.4.4 and the Atomic Simulation Environment (ASE) 87 version 3.20.0b1. All structures were adopted from the QMOF Database 24 . We consider properties calculated with four exchange-correlation functionals: PBE-D3(BJ) 27,88,89 , HLE17 38 , HSE06 39,40 , and HSE06 * (i.e., HSE06 with reduced HF exchange). The PBE-D3 (BJ) calculations were obtained from the QMOF Database, as previously reported 24 . The HLE17, HSE06, and HSE06 * calculations are carried out in this work using structures from the QMOF Database 24 that were previously optimized with the PBE-D3(BJ) exchange-correlation functional. In commonly accepted notation, these levels of theory would generally be referred to as PBE-D3(BJ), HLE17//PBE-D3(BJ), HSE06//PBE-D3(BJ), and HSE06 * //PBE-D3(BJ), indicating that the functional to the left of the doubleslash is a single-point (i.e., static) calculation carried out on the geometry obtained using the functional to the right of the double-slash. For brevity, we will simply refer to these levels of theory as PBE, HLE17, HSE06, and HSE06 * , respectively. Of the 20,000+ structures in the QMOF Database with properties computed using PBE,~10,700 have computed properties at the HLE17, HSE06, and HSE06 * levels of theory based on the calculations in this work. The HSE06 functional is a screened-exchange functional built upon PBE and replaces a portion of PBE's local exchange with 25% HF exchange at small interelectronic distances, decreasing continuously to zero at large interelectronic distances 39,40 . HSE06 was selected in this work because it is currently the most widely used functional for predicting the band gaps of solid-state materials when high accuracy is required, including for MOFs 43,90 . Other functionals may have comparable or slightly better performance for certain systems 37,91-93 but are less widely used and tested. In addition to HSE06, we considered the hybrid functional defined here as HSE06 * , which has 10% HF exchange at small interelectronic distances and decreases to zero at large interelectronic distances. HSE06 * was considered because the standard HSE06 functional can overcorrect the band gap underprediction problem of PBE for some materials 94 , as is the case with MOF-5 95,96 . Considering a functional with an intermediate fraction of HF exchange between that of PBE and HSE06 also makes it easier to discern the impact of HF exchange. The HSE06 and HSE06 * calculations are considerably more expensive than the PBE calculations because of the nonzero fraction of HF exchange. With this in mind, we included the HLE17 meta-GGA functional as well because prior benchmarking studies 38,43 suggest that it can greatly improve the prediction of semiconductor band gaps without the need for computationally expensive HF exchange. While one could also consider the GGA+U approach 97 , relatively little is currently known about selecting empirically ideal U values for MOFs 90,98,99 despite its widespread use in correcting the predicted energetic and electronic properties of inorganic solids in high-throughput DFT databases [100][101][102] . For materials that are closed-shell (i.e., without magnetic character), the band gap is defined as the energy difference between the conduction band minimum (CBM) and valence band maximum (VBM). 
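For a closed-shell material, the band gap defined above can be read directly from a converged VASP calculation; pymatgen exposes it as a convenience property. The sketch below is a minimal example with a placeholder file path; the property returns the gap together with the CBM and VBM eigenvalues and a flag indicating whether the gap is direct.

```python
from pymatgen.io.vasp.outputs import Vasprun

# Parse a completed static calculation (the file path is a placeholder).
vasprun = Vasprun("vasprun.xml", parse_potcar_file=False)

# (band gap in eV, CBM energy, VBM energy, is the gap direct?)
band_gap, cbm, vbm, is_direct = vasprun.eigenvalue_band_properties
print(f"E_g = {band_gap:.2f} eV (CBM = {cbm:.2f} eV, VBM = {vbm:.2f} eV, direct: {is_direct})")
```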
For materials with open-shell character, there can be more than one way to characterize the band gap 103 . Except where otherwise stated, we define the band gap for spin-polarized systems as $\min(\mathrm{CBM}_{\uparrow}, \mathrm{CBM}_{\downarrow}) - \max(\mathrm{VBM}_{\uparrow}, \mathrm{VBM}_{\downarrow})$, where ↑ and ↓ refer to the spin-up and spin-down spin-orbital manifolds, respectively. Nonetheless, we note that this definition can occasionally result in a band gap that is associated with a formally spin-forbidden electronic excitation, as depicted in Supplementary Fig. 4. Using the band gap instead defined as $\min(\mathrm{CBM}_{\uparrow} - \mathrm{VBM}_{\uparrow}, \mathrm{CBM}_{\downarrow} - \mathrm{VBM}_{\downarrow})$ does not involve a spin-flip. Regardless of which band gap definition is employed, the trends and conclusions reported throughout this work remain unchanged (Supplementary Fig. 5). We also note that the computed band gaps refer to electronic band gaps and are not directly comparable to experimentally measured optical gaps (e.g., via UV-Vis spectroscopy) 104,105 , particularly when the exciton binding energies are non-negligible, as has been observed for some MOFs 106 . Machine learning Graph neural network architectures, which take graphs representing the crystal structures as inputs, were used for the ML models. The graph representations contain atoms as nodes and interatomic distances as edges. Here, the atoms are represented with a one-hot encoding of the element with a vector length of 100 within the node attributes. The edge attributes contain interatomic distances within a cutoff of 8 Å and up to 12 neighbors per node, where the distances were then expanded by a Gaussian basis 114 to a length of 50. In this work, an additional state attribute is included, representing the level of theory used (i.e., fidelity) as an integer. The graph neural network itself adopts the MatErials Graph Network (MEGNet) architecture 115 , where the node, edge, and state attributes are propagated sequentially in the stated order during the graph convolutional steps. The overall model contains one pre-processing layer, four graph convolutional layers, one pooling layer using the Set2Set function, and finally two post-processing layers. The pre-processing, post-processing, and graph convolutional update functions are all fully-connected layers with Rectified Linear Unit activation functions and with dimensions of 128, 128, and (128, 128), respectively. The models were trained with the AdamW optimizer 116,117 using an initial learning rate of 0.0005 and a batch size of 128 for a total of 250 epochs. The model state with the lowest validation MAE is saved and used for testing. The training:validation:testing ratio used is 80:5:15, and the samples were randomly split across the training, validation, and testing sets. For all cases in this work, the same hyperparameters were used in the models. For the individual models, the models were trained separately. In multi-task learning, the output dimension was expanded to four, and the predictions were performed simultaneously with a single model for all fidelities (i.e., levels of theory). For multi-fidelity learning, we adopt the approach used by Chen et al. 75 , where each fidelity is considered a unique data sample and structures with different fidelities can appear in both training and testing data splits. The model training and testing were set up and performed using the MatDeepLearn framework 72 , which is implemented using the PyTorch 118 and PyTorch Geometric 119 libraries.
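The Gaussian edge featurization mentioned above can be written compactly in PyTorch. This is a minimal sketch under the stated settings (8 Å cutoff, 50 basis functions); the width parameter is an illustrative choice, since the exact value used by MatDeepLearn is not given in the text.

```python
import torch

def gaussian_expand(distances, d_min=0.0, d_max=8.0, n_basis=50, gamma=None):
    """Expand interatomic distances into a length-50 Gaussian-basis edge feature."""
    centers = torch.linspace(d_min, d_max, n_basis)
    if gamma is None:
        # Width tied to the spacing between Gaussian centers (illustrative choice).
        gamma = 1.0 / float(centers[1] - centers[0]) ** 2
    d = torch.as_tensor(distances, dtype=torch.float32).unsqueeze(-1)  # (n_edges, 1)
    return torch.exp(-gamma * (d - centers) ** 2)                      # (n_edges, n_basis)

edge_features = gaussian_expand(torch.tensor([1.2, 2.5, 3.9]))
print(edge_features.shape)  # torch.Size([3, 50])
```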
The training and evaluation were conducted on four NVIDIA Tesla V100 ('Volta') graphics processing units. DATA AVAILABILITY With the release of the Materials Project-hosted MOF Explorer interface to the QMOF Database, all data in this work can be accessed at the following webpage: https://materialsproject.org/mofs. Each version of the QMOF Database made available on the Materials Project is permanently archived on Figshare at the following DOI: 10.6084/m9.figshare.13147324. The VASP input and output files are made available via the Novel Materials Discovery (NOMAD) platform 120 .