Skin Lesions Classification Using Convolutional Neural Networks in Clinical Images
Skin lesions are conditions that appear on a patient's skin for many different reasons. One of these can be an abnormal growth in skin tissue, defined as cancer. This disease plagues more than 14.1 million patients and has caused more than 8.2 million deaths worldwide. Therefore, the construction of a classification model for 12 lesions, including Malignant Melanoma and Basal Cell Carcinoma, is proposed. Furthermore, in this work a ResNet-152 architecture is used, trained over 3,797 images that were later augmented by a factor of 29 using positional, scale, and lighting transformations. Finally, the network was tested with 956 images and achieved an area under the curve (AUC) of 0.96 for Melanoma and 0.91 for Basal Cell Carcinoma.
I. INTRODUCTION
Today, skin cancer is a public health and economic issue that for many years has been approached with the same methodology by the dermatology field [1]. This is troublesome considering that over the last 30 years the number of cases diagnosed with skin cancer has increased significantly [2]. It is even more troublesome when money comes into the equation, seeing that millions of dollars are being spent in the public sector [3]. A major part of this is spent on the individual analysis of the patient, where the doctor analyzes the lesion and takes action based on the evidence seen. If any of these steps were optimized, it could mean a decrease in expenditure for the whole dermatology sector.
Dermatology is one of the most important fields of medicine, with cases of skin disease outpacing hypertension, obesity and cancer summed together. This is because skin diseases are among the most common human illnesses, affecting every age and gender, pervading many cultures, and afflicting between 30% and 70% of people in the United States. This means that at any given time at least 1 person out of 3 will have a skin disease [4]. Therefore, skin diseases are an issue on a global scale, ranking 18th in health burden worldwide [5].
Furthermore, medical imaging can show itself as a resource of high value, as dermatology has an extensive list of illnesses to treat. In addition, the field has developed its own vocabulary to describe these lesions. However, verbal descriptions have their limitations, and a good picture can successfully replace many sentences of description and is not susceptible to the bias of the message carrier.

*The author is a member of the "Machine Learning Research Group" of the University of Brasília. Access www.gpam.unb.br for more information.
In addition, the recommended way to detect skin diseases early is to be aware of new or changing skin growths [6]. Naked-eye analysis is still the first resource used by specialists, along with techniques such as ABCDE, which consists of scanning the skin area of interest for asymmetry, border irregularity, color variation, large diameters and evolving patches of skin over time [7]. In this way, the analysis of medical images is analogous to analysis with the naked eye, and the same techniques and implications can thus be applied. This supports the idea that skin cancer is often detectable through the naked eye and medical photography.
Worldwide, the most common cancer is skin cancer, with melanoma, basal cell carcinoma and squamous cell carcinoma (BCC and SCC) being the most frequent types of the disease [8]. The disease is most frequent in countries whose population has predominantly white skin, such as Australia or New Zealand [9].
In Brazil, it is estimated that for the 2018-2019 biennium there will be 165,580 new cases of non-melanoma skin cancer (mostly BCC and SCC) [10]. Moreover, the incidence of these types of skin cancer has visibly risen for many years. This increase can be due to a combination of factors, such as greater longevity of the population, more people being exposed to the sun and better cancer detection [8].
In the United States, the numbers add up to 9,730 deaths estimated for 2017 [6]. Skin cancer accounts for more than 1,688,780 cases (not including carcinoma in situ nor non-melanoma cancers) in the US alone in 2017 [6].
Despite skin cancer being the most common type of cancer in society, it does not present a high death rate in its first stages, since the patient has a survival rate of 97%. However, if patients are diagnosed in the later stages, the 5-year survival rate decreases to 15%.
In Brazil, 114,000 new cases of non-melanoma skin cancer were expected to occur in 2010, of which 95% were expected to be diagnosed in early stages. However, even with early diagnosis, this number of cases costs around R$37 million (Reais) per year to the public health system and R$26 million to the private system [3].
Moreover, we can divide skin lesions into two major groups: malignant lesions and benign lesions. The first is composed mostly of skin cancers, and the latter of any lesion that does not pose a major threat. One counterexample of this division is actinic keratosis, which presents itself as a potential SCC, as it has the potential to develop into one. Thus, actinic keratosis is classified as a precancerous lesion [11]. Furthermore, this work analyzed and chose 12 lesions in total, 4 malignant and 8 benign (1 of which is precancerous), as seen in Figure 1. The lesions were chosen mainly based on the public data available online, described in subsection III-A.
Seeing the problems involved in diagnosing skin lesions, this work aims to create a learning model to classify skin lesions into one of 12 conditions of interest. With this purpose, the classifier aims to correctly distinguish lesions by analyzing clinical images of the condition. Furthermore, this can prove to be a useful tool to aid patients and doctors in daily operations.
Furthermore, this work is done with the vision of being a stepping stone for newer approaches that democratize and distribute access to health care. A good lesion classification model may be the motor that accelerates the construction of tools that put the possibility of early diagnosis and alerts in patients' hands, even for isolated patients whom few doctors can reach. These tools may save many lives and reduce the considerable costs of treating late-stage diseases.
Related work in this field has proven that many algorithms are capable of tackling this problem, but there is an astonishing difference between shallow and deep methods in machine learning. With that in view, this work will guide its efforts toward using deep neural networks to achieve its main objective. For this to happen, a gathering of the good practices and techniques used to approach the classification of clinical images is needed.
II. RELATED WORK
Many algorithms and tools have been created to aid professionals in their task of detecting diseases in many fields [12,13,14,15,16]. This has proven to add more reliability and confidence to doctors in their practices, as they have more information with which to diagnose patients. Dermatology and skin lesion detection have been no different. History shows that many approaches have been made over the years; applications with shallow algorithms such as K-Nearest Neighbors (KNN) [17] and Support Vector Machines (SVM) [18] have been proven to accomplish good results, but building applications that involve such approaches is also tiresome.
Seeing this, some researchers have applied deep learning to classifying skin lesions with success. One common thing in this domain is the scarcity and lack of quality of open data; it is common to see works with only a couple hundred examples. That is a characteristic of the medical field: many hospitals and clinics hold huge amounts of data and do not make them public, mainly because of privacy issues with patients. However, many authors still push the technology forward in such fields, overcoming these barriers. For the purposes of this work, we list some related research that uses deep learning in dermatology, applying neural networks to skin lesions.

Matsunaga et al. (2017) proposed an approach to classify melanoma, seborrheic keratosis, and nevocellular nevus using dermoscopic images. In their work, they proposed an ensemble solution with two binary classifiers that also leveraged age and sex information of the patients, when available. Furthermore, they utilized data augmentation, using a combination of 4 transformations (rotation, translation, scaling and flipping). For the architecture, they chose the ResNet-50 implementation in the Keras framework, with personal modifications. This model was pre-trained with the weights of a generic object recognition model, and two optimizers, AdaGrad and RMSProp, were used. This work was submitted to the ISBI Challenge 2017 and won first place, ahead of 22 other competitors.
Nasr-Esfahani et al. (2016) showed a technique that uses image processing as a step prior to training. This results in normalization and noise reduction on the dataset, since non-dermoscopic images are prone to non-homogeneous lighting and thus present noise. Moreover, this work utilizes a pre-processing step using the k-means algorithm to identify the borders of a lesion and extract a binary mask in which the lesion is present. This is done to minimize the interference of healthy skin in the classification. Furthermore, Nasr-Esfahani et al. (2016) used data augmentation to increase the dataset, using three transformations (cropping, scaling and rotation), multiplying the dataset by a factor of 36. Finally, a pre-trained convolutional neural network (CNN) is used to classify between melanoma and melanocytic nevus for 200 epochs (20,000 iterations, using a batch size of 64 and a dataset with 6,120 examples).

Menegola et al. (2017) presented a thorough study for the 2017 ISIC Challenge in skin-lesion classification. This work presents experiments with pre-trained deep-learning models on ImageNet for a three-class model classifying melanoma, seborrheic keratosis, and other lesions. Models such as ResNet-101 and Inception-v4 were extensively tested with several configurations of the dataset, utilizing 6 data sources for the composition of the final dataset. The use of data augmentation with at least 3 different transformations (cropping, flipping, and zooming) was also reported. The points critical to the success of the project were mainly the volume of data gathered, the normalization of the input images and the use of meta-learning. The latter is implemented as an SVM layer on the final output of the deep-learning models, mapping the outputs to the three classes proposed in the challenge.
Another related work tackled the inherent problem of this domain, data scarcity, by applying transfer learning with two different models, VGG-19 and ResNet-50, both pre-trained on the 1,000-class ImageNet dataset. These were used to classify between malignant and benign lesions, using 10,000 dermoscopic images. For a correct learning process, up-sampling of the underrepresented class was also used. This was done with a random number of transformations chosen among rotation, shifting, zooming, and flipping. Furthermore, the paper presented 3 experiments: first, the VGG-19 architecture with the addition of two extra convolutional layers, two fully connected layers, and one neuron with a sigmoid function; second, the ResNet-50 model; and finally an implementation of VGG-19 with an SVM classifier as the fully connected layer. In the end, the modified implementation of VGG-19 had the best results. The main reason for the poor results of the ResNet-50 model was the small amount of training data; with more data, it might be possible to train such a model and produce better results.
Esteva et al. (2017) presented a major breakthrough in the classification of skin lesions. This research compared the results of the learning model with 21 board-certified dermatologists and proved to be more accurate in this task. The model classified clinical images, indicating whether a lesion is benign or malignant. For this result, 129,450 images were used, covering 2,032 different diseases and including 3,372 dermoscopic images. Furthermore, a data augmentation approach was used to mitigate problems such as variability in zoom, angle, and lighting present in the context of clinical images; the augmentation factor was 720, using rotation, cropping, and flipping. An Inception-v3 pre-trained model was utilized as the main classifier, fine-tuning every layer and training the final fully connected layer. Moreover, the training was done for over 30 epochs using a learning rate of 0.001, with a decay of 16 after every 30 epochs. The model was trained to classify among 757 fine-grained classes, and the predicted probabilities were then fed into an algorithm that selected between the two classes (malignant or benign). Using this approach, this work achieved a new state-of-the-art result.
Seog Han et al. (2018) proposed to classify skin lesions as unique classes, without composing meta-classes such as benign and malignant. They used a ResNet-152 pre-trained on ImageNet to classify 12 lesions. However, 248 additional classes were used for training, added to decrease false positives and improve the analysis of the middle layers of the model. This was done in such a way that the training sample for the 248 diseases did not outgrow the main 12; thus, when used for inference, the model predicted one of the 12 illnesses, even when the lesion did not belong to any of them. For training, 855,370 images were used, augmented approximately 20 to 40 times using zooming and rotation. These images were gathered from two Korean hospitals, two publicly available and biopsy-proven datasets, and one dataset constructed from 8 dermatologic atlas websites. Furthermore, the training lasted 2 epochs using a batch size of 6 and a learning rate of 0.0001 without decay. This early stopping was done to avoid overfitting on the dataset. Finally, it was reported that ethnic differences present in the context were responsible for poor results on different datasets; thus, it was necessary to gather data from different ethnicities and ages to correctly mold the solution to reflect the real-world problem of skin lesion classification.
Finally, we can observe that all of these works have one aspect in common: data scarcity. This is a characteristic of the medical domain, where very few annotated examples are publicly available. The works that proved to have more impact had to collect data from other sources, mainly private hospitals or clinics. Furthermore, this step of data collection did not fully mitigate the problem; it was still necessary to use techniques such as transfer learning [24,25] and data augmentation [26,27,28].
III. METHODOLOGY
A. Datasets
Due to the scarcity of data in the medical field, the datasets chosen were not a selection of the best among a collection of options. The choice mainly took into account the criterion of public availability. Aside from that, the only prerequisite was that the dataset be composed only of clinical images (photos taken with cameras, without other tools or distorting lenses).
From these criteria, only two datasets fit the description. The datasets contained 10 (ten) distinct lesions, with at most 4 malignant illnesses. An additional dataset was gathered from dermatologic websites, using a script for scraping pages. This latter dataset was acquired from the work of Seog Han et al. (2018) and is not publicly available due to copyrights owned by the websites. These datasets are discussed below.
MED-NODE: The first dataset used is provided by the Department of Dermatology at the University Medical Center Groningen (UMCG) [29]. This dataset contains 170 images that are divided between 70 melanoma and 100 nevus cases. Furthermore, these images were processed with an algorithm for hair removal.
Edinburgh: The second dataset is provided by the Edinburgh Dermofit Image Library [17] and is available for purchase under a license agreement. This is the most complete dataset found on the web. It contains 1,300 images, divided into 10 lesions, including melanoma, BCC, and SCC. All images are diagnosed based on expert opinion. In addition, a binary segmentation of the lesion is provided for each image. It is worth noting that the images are not all the same size.
Atlas: The lesions and their respective numbers are listed in table I. The difference from the Edinburgh dataset is that this one contains two lesions not present in the first, Wart and Lentigo (both benign lesions), as can be seen in table II. This, alongside the Edinburgh and MED-NODE datasets, sums up to the 12 lesions that are the interest of this work. One difference between Atlas and the first two datasets is image quality: since the dataset was collected from web pages, not all images present the same quality, nor the common viewpoints observed in the Edinburgh dataset. Therefore, this dataset is the most heterogeneous in terms of image quality, viewpoints, and the age and ethnicity of patients. Moreover, this dataset is not officially diagnosed by specialists in its entirety; on the other hand, these photos were displayed on websites that are reliable and used by students, so there is a heuristic that these images were reviewed before being displayed and can be trusted.
B. Handling data scarcity
As noted previously, for correct generalization of the weights and biases of a network, a huge amount of data is needed. However, the medical field lacks this amount of images, and if only the publicly provided data were used, a good generalization of the problem could not be met when training a deep neural network.
1) Transfer Learning: In practice, the domains faced in industry, rather than academia, usually have low numbers of labeled examples. This poses a major obstacle to training a deep convolutional neural network from scratch, since the data may not be a true representation of the real world. Thus, it is common to see works that utilize the pre-trained weights of a previously trained architecture, which leads to 2 major approaches.
The approaches are: using a CNN as a fixed feature extractor or fine-tuning the trained model. The first is mostly used to collect features of images and then use them to train a linear classifier on a new dataset. The second strategy is to continue the training of the network, completely replacing the final layer but updating the remaining parameters through backpropagation.
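The two strategies above can be contrasted with a toy sketch (this is not the Caffe setup used in this work; the two-"layer" model, its values and the gradients are illustrative assumptions):

```python
# Toy illustration of the two transfer-learning strategies on a
# model with an early "features" layer and a final "classifier".

def sgd_step(weights, grads, lr, frozen=()):
    """Apply one SGD update to every layer not listed in `frozen`."""
    return {
        name: w if name in frozen else w - lr * grads[name]
        for name, w in weights.items()
    }

# Hypothetical pre-trained weights and gradients on the new dataset.
pretrained = {"features": 0.80, "classifier": 0.30}
grads = {"features": 0.10, "classifier": 0.50}

# Strategy 1: fixed feature extractor -- freeze the early layer and
# train only the (replaced) final classifier.
extractor = sgd_step(pretrained, grads, lr=0.01, frozen=("features",))

# Strategy 2: fine-tuning -- update all layers, typically with a small
# learning rate so the pre-trained weights are not distorted too much.
finetuned = sgd_step(pretrained, grads, lr=0.001)

print(extractor)  # "features" unchanged, "classifier" updated
print(finetuned)  # both layers nudged slightly
```

In a real framework the same effect is achieved by zeroing the learning rate of frozen layers (or marking their parameters as non-trainable) rather than copying weight dictionaries.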
A common use of transfer learning in computer vision, more specifically object classification, is to use models pre-trained on the ImageNet dataset. Recent work by Kornblith et al. (2018) shows that ResNets take the lead in performance when treated as feature extractors. By fine-tuning some models to other datasets, they achieved new state-of-the-art results. All these tests used pre-trained weights, fine-tuned with Nesterov momentum for 19,531 steps, which sometimes corresponded to more than 1,000 epochs with a batch size of 256. Finally, it was empirically shown that the Inception-v4 architecture achieves better overall results for this task than the other 12 pre-trained classification models.
Therefore, transfer learning optimizes and cuts short most of the training time of new applications. However, it can add some constraints to the work. For example, when using a pre-trained network it is not possible to arbitrarily extract and change the layers of the network. Another point is that small learning rates are normally applied to CNN weights being fine-tuned, because we expect that the pre-trained weights are already good and we do not want to distort them too much [25].
2) Data augmentation: Data augmentation is a technique used when we do not have an infinite amount of data to train our models. It introduces random transformations to the data; in image classification, this translates to rotating, flipping and cropping the image. These perturbations add more variability to the input, which can reduce overfitting in the model by teaching it about invariances in the data domain [28,31,32]. These transformations do not change the meaning of the input, thus the label originally attributed to it still holds.
Although some transformations can be applied agnostically to the field of application (e.g. translation), others are tied to domain-specific characteristics. For this work, we used an additional transformation that randomizes the natural light effect in the picture; this was done to mimic the variations seen in indoor clinics due to different light sources. Furthermore, to increase the variability added by augmentation, a probability of application and a magnitude variability are added to each transformation.
Having seen the need for augmentation, the Augmentor Python library [33] was used to implement the augmentation process. The library has predefined transformations and a hot-spot for new implementations of transformations, which was quite useful when implementing the method to add light variance to the augmentations.
Each transformation chosen was based either on general guidelines of data augmentation [31,32] or on the nature of the data. The transformations were arranged in a pipeline fashion, where each had a probability that defined the likelihood of being applied to the image; at the end, the new image was saved to the destination. The operations used in this work are listed in table III.
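The pipeline described above can be sketched in pure Python. This is a minimal sketch, not the Augmentor library's actual API: the operations, their probabilities and the tiny integer "image" are illustrative assumptions, and the brightness shift stands in for the custom lighting transformation.

```python
import random

def flip_horizontal(img):
    return [row[::-1] for row in img]

def rotate_90(img):
    # Transpose the reversed rows: a 90-degree rotation.
    return [list(row) for row in zip(*img[::-1])]

def brighten(img, delta=10):
    # Stand-in for the lighting transformation: shift pixel intensities.
    return [[min(255, px + delta) for px in row] for row in img]

# Each operation carries its probability of being applied.
PIPELINE = [(0.5, flip_horizontal), (0.5, rotate_90), (0.3, brighten)]

def augment(img, rng):
    for prob, op in PIPELINE:
        if rng.random() < prob:
            img = op(img)
    return img

def augment_dataset(images, factor, seed=0):
    rng = random.Random(seed)
    # Keep the originals and add `factor` augmented copies of each;
    # a factor of 29 yields 30x the original data, as in this work.
    return images + [augment(img, rng) for img in images for _ in range(factor)]

tiny = [[0, 50], [100, 255]]
out = augment_dataset([tiny], factor=29)
print(len(out))  # 30
```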
C. Datasets Preparation
The first step, before applying transformations to the dataset, was to separate a test set, usually 10% to 20% of each lesion, depending on the experiment. Following this, if needed for the experiment, the data augmentation process was performed. The remaining sample was analyzed to determine how much each class needed to be augmented. The process usually augmented the remaining dataset by a factor of 29; summed with the original dataset, this accounts for 30 times the original amount.
After processing the images composing the training and test datasets, the training images were packed into an LMDB file [34] for fast data access at training time. In this process, the training dataset is divided into a training set and a validation set, split so that 80% of the data is used for training and 20% for validation. This split is done in a stratified way, so that each split has a fair amount of each class.
Finally, these slices of the dataset are kept separated and are used as such for the experiment.
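The stratified 80/20 split described above can be sketched as follows; the class names and sample counts are made-up illustrative data.

```python
import random
from collections import defaultdict

def stratified_split(samples, labels, val_frac=0.2, seed=0):
    """Split (sample, label) pairs so each class keeps the same ratio."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_class[label].append(sample)
    train, val = [], []
    for label, items in by_class.items():
        rng.shuffle(items)
        n_val = int(len(items) * val_frac)  # per-class validation share
        val += [(s, label) for s in items[:n_val]]
        train += [(s, label) for s in items[n_val:]]
    return train, val

# 100 samples of one class and 50 of another (dummy data).
samples = list(range(150))
labels = ["melanoma"] * 100 + ["nevus"] * 50
train, val = stratified_split(samples, labels)
print(len(train), len(val))  # 120 30
```

Because the split is done per class, the minority class contributes proportionally to both sets instead of being drowned out by a single global shuffle.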
D. Architecture
The architecture used in this work was the ResNet-152, with pre-trained weights trained on the ImageNet database. This architecture was chosen mainly for the results its family (ResNet-50, ResNet-101, and ResNet-152) achieved in related works.
E. Metrics
The metrics used in the experiments were consistent throughout this work. This decision was made to build the ground necessary to compare results between different experiments. Therefore, two metrics were used at training time and three for the testing step.
Training Time: At training time, the main metric used was accuracy. As the model classifies 12 classes, the accuracy reported has two variants: top-1 accuracy and top-5 accuracy (or accuracy@5).

Testing Step: For the testing step, a process was created to generate predictions for both the validation and test datasets. With these predictions, as well as the true labels of the examples, it was possible to create a confusion matrix for the model. With the confusion matrix at hand, it was simple to compute other metrics, such as precision, recall (or sensitivity), and accuracy.
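Deriving those metrics from a confusion matrix can be sketched as below; the 3-class matrix is made-up illustrative data, not results from this work.

```python
def metrics_from_confusion(cm):
    """cm[i][j] = number of examples of true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))  # diagonal = correct predictions
    accuracy = correct / total
    precision, recall = [], []
    for k in range(n):
        pred_k = sum(cm[i][k] for i in range(n))  # column sum: predicted as k
        true_k = sum(cm[k])                       # row sum: truly class k
        precision.append(cm[k][k] / pred_k if pred_k else 0.0)
        recall.append(cm[k][k] / true_k if true_k else 0.0)
    return accuracy, precision, recall

cm = [
    [8, 1, 1],
    [2, 7, 1],
    [0, 2, 8],
]
acc, prec, rec = metrics_from_confusion(cm)
print(round(acc, 2))                  # 0.77
print([round(p, 2) for p in prec])    # per-class precision
print([round(r, 2) for r in rec])     # per-class recall (sensitivity)
```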
Another metric used to evaluate the models was the AUC (Area Under the Curve), along with the ROC (Receiver Operating Characteristic) curve. The ROC curve is a mapping of the sensitivity (probability of detection) versus 1−specificity (probability of false alarm), over various threshold points. Typically, this metric is used to analyze how accurate the diagnosis of a patient's state (diseased or healthy) is [35]. Furthermore, the AUC summarizes the ROC curve, effectively combining the specificity and the sensitivity that describe the validity of the diagnosis [36].
Alongside the ROC curve analysis, it is common to calculate the optimal cut-off point. This is used to further separate the test results, so that a diagnosis of diseased or not can be provided. The best possible result is achieved at the point closest to where the sensitivity equals one and 1−specificity equals zero [37,38].
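A minimal sketch of these three computations (ROC points by threshold sweep, trapezoidal AUC, and the cut-off closest to the ideal corner) follows; the scores and labels are made-up illustrative data, with positives expected to receive higher scores.

```python
def roc_points(scores, labels):
    """One (1-specificity, sensitivity) point per candidate threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for thr in sorted(set(scores)) + [float("inf")]:
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        pts.append((fp / neg, tp / pos))
    return sorted(pts)

def auc(points):
    """Trapezoidal rule over the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

def optimal_cutoff(points):
    """ROC point closest to (0, 1): sensitivity 1, false-alarm rate 0."""
    return min(points, key=lambda p: p[0] ** 2 + (1 - p[1]) ** 2)

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   1,    0,   0,   0]
pts = roc_points(scores, labels)
print(round(auc(pts), 3))   # 0.875
print(optimal_cutoff(pts))  # (0.25, 1.0)
```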
F. Best Experiment
The best results achieved in this work used the ResNet-152 architecture, trained over an augmented dataset mixing the MED-NODE, Edinburgh and Atlas datasets. The augmentation was 29 times for each class, leaving the classes unbalanced.
Furthermore, the ResNet architecture had to be modified to accommodate the needs of the problem at hand: the last layer of the architecture was changed from 1,000 classes to 12 classes. The final architecture follows the schema seen in Figure 2.
Moreover, transfer learning was applied to reach the best results more rapidly. For that same purpose, the hyperparameters of the network had to be carefully tuned.
1) Dataset: The dataset used for this experiment was a derivative of the previous experiment. The original dataset consisted of a mixture of MED-NODE, Edinburgh and Atlas images; it did not undergo augmentation and was divided into three separate directories, following a division of 20%, 10%, and 70% for testing, validation, and training, respectively.
Moreover, the training and validation datasets were augmented by a factor of 29, using the transformations listed in table III. The testing dataset did not suffer any transformations. The final numbers for the datasets can be seen in table IV.

2) Training: For the training phase, transfer learning techniques were used. Thus, it was necessary to first gather the ResNet-152 weights pre-trained on the ImageNet dataset and then modify the network for the purpose of this work. The learning rate was chosen to be higher than those used in related works, for two major reasons. First, one of the early experiments showed that, with a low learning rate, a plateau was found at the very start of the training; the network did not have the power to learn the features of the skin lesions. Second, it was found that increasing the learning rate often helps reduce underfitting [39].
Additionally, the final dense layer has a learning rate multiplier of 10 compared to the other layers of the network. However, differently from the process of freezing the early layers used in that same research, this work is closer to the approach implemented in [13], which fine-tuned all the layers of the network.
This was done with the premise in mind that, although the ImageNet dataset is very diverse and comprehends many different objects, it does not have classes that approximate the characteristics and problems encountered in this dataset of skin lesions. Furthermore, the weights in the early layers may not be properly trained to extract fine features such as the ones found in the problem faced in this work. Therefore, it was necessary to fine-tune the learnable parameters from the early layers onward and learn the final classifier from scratch.
3) Hyperparameters: This work used the Caffe framework [40], since it allowed and simplified the changes that needed to be done at the layer level. All the hyperparameters were defined in a separate configuration file called the "Solver", a .prototxt file with the free parameters used in training. These hyperparameters can be seen in table V.
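For illustration, a Caffe solver file has roughly the following shape. This is a sketch only: the field names are standard Caffe solver parameters, but the numeric values below are placeholders, not the values used in this work (those are listed in table V), and the net file name is hypothetical.

```
net: "resnet152_skin.prototxt"   # hypothetical network definition file
base_lr: 0.001                   # placeholder learning rate
lr_policy: "step"
gamma: 0.1
stepsize: 10000
momentum: 0.9
weight_decay: 0.0005
iter_size: 10                    # accumulate gradients over 10 iterations
max_iter: 38000                  # iteration of the model used for testing
snapshot: 2000
snapshot_prefix: "snapshots/resnet152"
solver_mode: GPU
```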
Due to infrastructure limitations, it was only possible to set the batch size to 5. However, the Caffe framework provides a hyperparameter that holds the update of the gradients: iter_size defines how many iterations the gradients accumulate before an update. Although using this hyperparameter may affect the batch normalization layers used in the architecture, the final results did not show this effect. Furthermore, the maximum iteration parameter was chosen so that the number of epochs equaled 10.
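The iter_size mechanism can be sketched in pure Python: gradients from several small batches are accumulated and applied in one averaged update, emulating a batch larger than memory allows. The toy objective, batches and hyperparameter values below are illustrative assumptions, not this work's configuration.

```python
def accumulated_sgd(weight, batches, grad_fn, lr, iter_size):
    """Update `weight` once per `iter_size` mini-batches (Caffe-style)."""
    acc, seen = 0.0, 0
    for batch in batches:
        acc += grad_fn(weight, batch)  # accumulate, do not update yet
        seen += 1
        if seen == iter_size:
            weight -= lr * acc / iter_size  # averaged gradient update
            acc, seen = 0.0, 0
    return weight

# Toy objective: mean squared error of a constant predictor `w`,
# so the per-batch gradient is 2 * mean(w - y).
def grad_fn(w, batch):
    return 2 * sum(w - y for y in batch) / len(batch)

batches = [[1.0, 2.0], [3.0, 4.0]]  # two mini-batches of size 2
w = accumulated_sgd(0.0, batches, grad_fn, lr=0.1, iter_size=2)
print(w)  # a single update using the averaged gradient of both batches
```

With batch size 5 and an iter_size of n, the effective batch per update is 5n, which is why the hyperparameter compensates for the memory-limited batch size.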
4) Infrastructure:
All the experiments were conducted under the same environment, that consisted of Antergos 18.3 (Linux kernel 4.16) running BVLC Caffe [40] with support for an NVIDIA GTX 1070 GPU (Cuda 9.1 and cuDNN 7.1).
IV. RESULTS
Finally, the model used in the testing phase was the product of iteration number 38,000. The training phase took an uninterrupted total of 35 hours (approximately 167 seconds for every 50 iterations).
A. Metric Results
With the confusion matrix generated for the predictions on the testing dataset, it was found that 11 of the 12 lesions, with the exception of Actinic Keratosis, achieved an accuracy higher than 80%, accounting for a total accuracy of 78% for the model. However, this metric has a bias attached to it: since the distribution of the classes is not even, its analysis can be misleading.
Finally, the AUC and cut-off values for each ROC curve were calculated (Figure 3). Table VI shows a comparison between the results of these three works.
B. Interpretability
For this work, it was judged important to provide input on the model's interpretability, since the application is sensitive, dealing with human life. Moreover, this work wants to raise awareness of the importance of interpretability techniques for machine learning in medical applications. In addition, a model with explainability may be better received by medical practitioners, since it demystifies the decisions taken by the model.
Moreover, model interpretability is important at several points. For example, in the training phase, if a model is behaving unexpectedly, an engineer must know what is happening in order to reverse the situation. If the engineer has the interpretability features of the data at hand, they may find out that the reason for the behavior is, for instance, a hindsight bias [41] present in the dataset. So, when debugging an application that has a machine learning model, it is important to have the tools to debug properly.
Another important point is trust. When people use systems, especially life-critical systems such as medical software, they want to trust that nothing unexpected will happen. Trust in machine learning models can be gained in two ways: through evidence from daily use, for example by consistently achieving high accuracy; or by explaining how the model reached its decision, so that the end-user has not only proof that the decision was correct, but also an account of how the model knew it was the correct decision to make. The second approach makes the "black box" less opaque and less intimidating, leaving the user more inclined to act on the prediction [42]. According to Gunning and DARPA (2016), a new approach to how machine learning models present their predictions is necessary.
1) Preliminary Results: An analysis of the model's predictions was made to discover which examples in the validation dataset the model got most right and most wrong. This serves as an interpretability artifact, since it can bring new insights into the heuristics driving the model's decisions. It is also commonly used even in simpler applications, owing to the simplicity of the technique.
Furthermore, the Grad-CAM technique [44] was applied in this work as a method to obtain visual feedback on where the network's activations, prior to the softmax layer, were most predominant. This provides additional artifacts for explaining the model's decisions. It can also be used to identify problems where the network learns adjacent features that may be present in lesion samples but do not necessarily hold meaning for the lesion (e.g., nails, or hair on the scalp near the lesion); cf. the "Husky vs. Wolf" experiment in [45], where background snow was a predominant feature for classifying a wolf.
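The core of the Grad-CAM computation referenced above is small enough to sketch. The following pure-Python toy (hypothetical 2x2 feature maps, no deep-learning framework) shows only the weighting-plus-ReLU step from Selvaraju et al., not the paper's actual pipeline:

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap: weight each feature map by the spatial mean of its
    gradient (alpha_k), sum the weighted maps, apply ReLU, then normalize."""
    K, H, W = len(activations), len(activations[0]), len(activations[0][0])
    weights = [sum(sum(row) for row in g) / (H * W) for g in gradients]  # alpha_k
    cam = [[max(0.0, sum(weights[k] * activations[k][i][j] for k in range(K)))
            for j in range(W)] for i in range(H)]
    peak = max(max(row) for row in cam)
    return [[v / peak for v in row] for row in cam] if peak > 0 else cam

# Two 2x2 feature maps: the gradients favor map 0 and penalize map 1,
# so only map 0's active region survives the ReLU.
acts = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(acts, grads))  # [[1.0, 0.0], [0.0, 0.0]]
```

In practice the resulting heatmap is upsampled and overlaid on the input image, as in the figures discussed below.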
For the most wrong predictions, the causes were found to fall under two factors: the lesion analyzed indeed caused confusion between classes (Figure 4a); or the model did not generalize well and struggled to extract predominant features in some of the images, giving more importance to areas that were not relevant from a practical perspective (Figures 4b and 4c).
[Figure: (a) Predicted as Haemangioma with 100% confidence. (b) Predicted as Basal Cell Carcinoma with 100% confidence. Source: MED-NODE dataset.]
The most correct predictions brought more information, such as the lesion in Figure 5c, which is more emblematic and needs an expert's eye to shed light on it. Moreover, we can speculate that the model took advantage of the geometric and color asymmetry in the lesion to make an accurate decision. Furthermore, it was found that the model generalizes well for the examples it was shown, correctly activating the regions containing the lesion, even in images whose pose made localizing the lesion challenging.
Nonetheless, this kind of interpretation is important not only from a developer's perspective but also for the doctors who would otherwise receive a bare prediction and have to act on another human life based on it. With these additional tools, the doctor has much more support for the next decision to be taken for the patient. Therefore, making models as interpretable as they are accurate can transform a tool into a good counselor.
V. DISCUSSIONS
In this paper, we discussed the importance of automatic classification methods to support the diagnosis of skin lesions. Furthermore, we listed a group of studies and their achieved results for the same problem. However, it is still a problem with several difficulties, even more so when we study clinical images, which may present immense diversity due to variables such as cameras and environments.
Seeing this, this work presented a model capable of classifying 12 skin lesions that reached results comparable with the state of the art. Additionally, studies of the model's decision-making process using interpretability techniques were presented. However, regardless of the good results obtained in this work, it is necessary to further test the model with more data and more diversity (different ethnicities and ages), and then investigate the results for improvements.
Preliminary analysis of early and late radiation responses in 3D image-guided brachytherapy for cervical cancer
Rationale – The use of 3D image-guided brachytherapy (3D-IGBT) allows adequately optimizing the dose distribution to bring a target therapeutic dose to the clinical target volume (CTV), thereby minimizing the impact on critical organs, while ensuring a decrease in the incidence and severity of radiation-caused complications. Use of 3D-IGBT also allows improving the quality of life in patients with cervical cancer. Objective – To conduct a preliminary analysis of the incidence of early and late radiation responses in 3D-IGBT of locally advanced cervical cancer (LACC). Material and Methods – The study included female patients with stages IIB and IIIB of cervical squamous cell carcinoma, without confirmed metastases and without preceding chemotherapy (CHT), radiation therapy (RT), or surgical interventions for this localization, who underwent combined chemoradiation therapy. Results – In the main group, early grade 3 responses were not diagnosed, while in the control group they were observed in 4 (9.1%) women. For instance, the manifestation of grade 3 delayed radiation injuries in the rectum was diagnosed in 3 (6.8%) women in the control group, while in the main group they were not detected. Grade 2 cystitis was observed in a smaller number of women in the group with 3D-IGBT, compared with the control group (9.1% vs. 13.6%, p<0.05). Grade 3 delayed radiation responses in the bladder were diagnosed in 4 (9.1%) women in the control group, whereas among the patients of the main group with 3D-IGBT they were not recorded at all (p<0.05). Grade 2 reactions in the vaginal mucosa and cervix were diagnosed more often in the control group (16.7% vs. 13.6%, p<0.05). Conclusion – Hence, the method we have used to optimize the treatment of LACC by means of 3D planning, in accordance with toxicity criteria, exhibited a definite advantage compared with RT with 2D planning.
Introduction
Cervical cancer (CC) continues to occupy one of the top places among malignancies in the female population, and radiation therapy (RT) remains the main method in the arsenal of specialized CC treatment [1]. Consequently, further improvement in the results of therapy depends largely on the development and mastering of new RT techniques, since the possibilities for effective RT for CC have not yet been exhausted [2]. Fundamentally new equipment for external beam radiation therapy (EBRT) and contact radiotherapy is being developed and put into practice. A whole range of devices for high-precision imaging of tissues and organs is emerging, and computer technologies are being developed for processing and visualizing data. All of these provide the technical possibility of accomplishing stable therapy for patients with locally advanced forms of cervical cancer (LACC) [3,4].
Recent studies demonstrated that RT in combination with chemotherapy (CHT) offers a large survival advantage compared with RT alone [5]. Increasing the dose, combined with insufficient use of contemporary technologies of combined RT and an inadequate approach to the choice of radiation technique, increases the likelihood of irreversible damage to the normal tissues of the bladder and rectum that are inevitably affected by combined radiation therapy. Nevertheless, combined RT remains the gold standard in the treatment of patients with LACC, and its effectiveness is increasing due to the brachytherapy (BT) stage, which allows delivering high doses of ionizing radiation directly to the tumor with minimal impact on surrounding healthy organs and tissues. In this regard, the dose fractionation regimen and the choice of total doses are decisive factors for reducing the incidence of complications during RT without worsening the treatment results [6,7].
Despite the fact that a large number of different BT regimens are currently known and used in practice, the choice of the optimal regimen is still a matter of debate. When developing methods of combined RT for CC, aimed primarily at improving the results of treatment, the issue of achieving an antitumor effect with a simultaneous decline in the likelihood of radiation complications remains relevant [8,9]. Novel dose planning technologies would allow optimizing radiation programs, taking into account the following factors: individual parameters of the tumor process, spatial relationships of the tumor and organs at risk, and constitutional features of the patient. Such technologies could also ensure that adequate levels of absorbed radiation doses are achieved to accomplish an antitumor effect while reducing the level of radiation exposure on the part of surrounding tissues [10,11,12].
Previously, 2D planning was used during BT. However, this approach does not allow choosing the exact beam width in the tumor section and empirically takes its cylindrical geometry as a basis; it does not take into account individual peculiarities of the tumor location in each patient, thereby causing a risk of excessive radiation to adjacent organs. 3D planning takes into account the individual features of the tumor in each section, which makes it possible to model an optimal irradiated tumor volume and to include three-dimensional dose distribution calculations based on tomography data [13,14]. Radiation complications of moderate and severe degree, manifested by ulcerative changes in risk organs, the formation of rectal and vaginal fistulas, along with intrapelvic fibrosis, significantly impair the quality of life in patients, and could lead to disability and even death [15]. In this regard, the fractionation regimen and the choice of total doses are decisive factors in reducing the incidence of complications in RT while not worsening the outcomes of treatment [16,17]. The use of 3D image-guided brachytherapy (3D-IGBT) allows adequately optimizing the dose distribution to deliver a specified therapeutic dose to clinical target volume (CTV), thereby minimizing an impact on critical organs, while ensuring a decrease in the incidence and severity of radiation-caused complications and improving the quality of life in patients with CC [18,19].
BT is an important part of RT for CC when there is an extensive tumor process in the pelvic cavity: combined chemoradiotherapy is the main choice of therapy and, accordingly, requires ongoing improvement. All directions of BT development stem from dissatisfaction with its long-term results; the currently available arsenal of means and options for RT methods does not protect women from the development of relapses and metastases of this disease. In this regard, a unified approach to RT requires continual search and simultaneous implementation in practice [20].
Increasing the effectiveness of RT in patients with CC and reducing the frequency of radiation complications is largely associated with the modernization of technical means for intracavitary gamma therapy via widely used radioactive sources (cobalt, cesium, iridium) [21]. An integral part of the patient's preparation for intracavitary irradiation is dosimetric planning based on information about the dosimetric characteristics of the selected radiation sources and the patient's topometric data. Three-dimensional planning of intracavitary irradiation makes it possible to obtain a more accurate distribution of a specified dose over the target volume, depending on the geometry of the applicator location, which is very important for large volumes of the tumor process [22].
The objective of our study was to analyze an incidence of early and late radiation responses during 3D-IGBT of LACC.
Material and Methods
The study was conducted at the Center for Nuclear Medicine and Oncology in Semey, Republic of Kazakhstan. The study objects were female patients with stages IIB and IIIB of squamous cell carcinoma of the cervix without confirmed metastases, preceding CHT, RT, or surgical interventions for this localization, who underwent complex chemoradiation therapy in the period from 2018 to 2020 as part of the study. (Abbreviations: 3D-IGBT, 3D image-guided brachytherapy; 2D RT, conventional (2D) radiation therapy; TFD, total focal dose; EBRT, external beam radiation therapy; RT, radiation therapy.) The inclusion criteria comprised:
6. Life expectancy of over 6 months;
7. Normal functioning of the bone marrow, liver and kidneys (laboratory parameter values: leukocytes ≥3.0×10⁹/L, hemoglobin ≥100 g/L, platelets ≥100×10⁹/L, total bilirubin ≤25.65 µmol/L, AST/ALT ≤80 units/L, serum creatinine ≤132.6 μmol/L);
8. Written informed consent;
9. Diagnostic imaging prior to the onset of RT: computed tomography (CT) of the abdominal cavity organs, magnetic resonance imaging (MRI) of the pelvic cavity;
10. No metastases in the para-aortic lymph nodes (PALN) (over 1 cm in minimum diameter, as shown by CT).
Study groups
According to the above criteria, the present study included 66 patients who formed the following groups (Table 1): (1) The main group included 22 patients who underwent RT within the framework of this study using 3D-IGBT. BT planning was carried out in the 3D volumetric mode (required dose distribution over CTV, with a maximum in the tumor zone and minimum impact on adjacent healthy tissues). The number of BT sessions was 4, with a single dose of 6.0-7.0 Gy. The total focal dose (TFD) during BT was 24-28 Gy. TFD for EBRT was 50 Gy. TFD for combined RT was 74-78 Gy. The dose of the CHT preparation Cisplatin was 40 mg/m²; (2) The control group (retrospective group) included 44 patients. They were subjected to RT in the mode of two-dimensional (2D) planning of BT sessions, with the distribution of doses over single-plane sections of the body in the middle of the target. The number of BT sessions was 5, with a single dose of 6.0-7.0 Gy. The TFD during BT was 30-35 Gy. TFD for EBRT was 50 Gy. TFD for combined RT was 80-85 Gy. The dose of the CHT preparation Cisplatin was 40 mg/m².
The socio-demographic and clinical data of patients included in the study are presented in Table 2.
The distribution of patients by CC stages according to FIGO was as follows: stage IIB was found in 10 (45.5%) patients in the main group and 17 (38.6%) in the control group; stage IIIB, in 12 (54.5%) and 27 (61.4%) patients, respectively. The tumor size according to MRI was less than 5 cm in 9 (40.9%) women of the main group and in 19 (43.2%) women of the control group, whereas it was greater than 5 cm in 13 (59.1%) and 25 (56.8%) women, respectively. By degree of differentiation, the patients of both groups were distributed as follows: a low degree in 6 (27.3%) cases in the main group and 14 (31.8%) in the control group; a moderate degree in 13 (59.1%) women of the main group and 25 (56.8%) of the control group; and a high degree in 3 (13.6%) patients in the 3D group and 5 (11.4%) women in the 2D group. By the nature of tumor growth, formations with exophytic growth were registered in 12 (54.5%) women of the main group and in 25 (56.8%) of the control group; endophytic growth in 7 (31.8%) and 12 (27.3%) cases, respectively; and mixed growth in 3 (13.6%) cases in the group with 3D planning and 7 (15.9%) cases in the group with 2D planning.
Study subject
Study subjects included early and late radiation responses in the course of using the technique of 3D-IGBT in the program of RT for CC.
External beam radiation therapy with 3D planning
EBRT was delivered with a fractionation regimen of 2.0 Gy per fraction, 5 fractions per week; the total focal dose (TFD) to the pelvic cavity was 50 Gy (Figure 1. Chemoradiotherapy with 3D-IGBT).
From the first day of EBRT, both groups of patients underwent concurrent CHT using intravenous infusions of Cisplatin. The dose was calculated individually for each patient: 40 mg/m², administered weekly before the intracavitary RT session, with at least 5 injections.
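The per-patient dose arithmetic described above (40 mg per m² of body surface area) can be sketched as follows. The Mosteller BSA formula is my assumption here, since the paper does not state which BSA formula was used:

```python
import math

def cisplatin_dose_mg(height_cm, weight_kg, dose_per_m2=40.0):
    """Dose scaled by body surface area (BSA via the Mosteller formula,
    an assumption -- the paper does not name the formula used)."""
    bsa_m2 = math.sqrt(height_cm * weight_kg / 3600.0)
    return dose_per_m2 * bsa_m2

# e.g. a 160 cm, 60 kg patient: BSA of about 1.63 m^2
print(round(cisplatin_dose_mg(160, 60), 1))  # 65.3 (mg per weekly infusion)
```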
Starting from week 5, when the radiation dose delivered via EBRT was at least 40 Gy, 3D-IGBT sessions were commenced with a single radiation dose of 6.0-7.0 Gy per week (while maintaining the limitation of the dose impact on risk organs), while TFD was 28 Gy.
When choosing the volume and distribution of radiation doses, the recommendations of the International Commission on Radiation Units and Measurement (ICRU) were applied to determine the gradations of volumes.
The 3D planning process was started by generating a 3D model of each patient using a series of parallel CT scans. Anatomical structures and planned target volume were determined on each of the scans using an automatic procedure based on the knowledge of the range of Hounsfield units for each of the critical organs and other anatomical structures. The construction of contours corresponding to the volume of the tumor, along with the clinical and planned volume of the target, was performed taking into account not only CT data, but also all clinical data about the patient.
To assess the quality of the treatment plan, dose-volume histogram (DVH) monitoring was performed. A DVH is a plot of the dose distribution in the irradiated volume; for the most effective distribution of the dose relative to the planned target volume, the dose-volume histogram should approach the shape of a rectangle. Using histograms, the following characteristics of dose distributions could be determined: standard deviations of the dose, minimum and maximum doses, mean doses, and median doses for critical organs (Figure 2. Scheme of 3D planning with a dose-volume histogram).
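The cumulative DVH described above can be sketched in a few lines. Given per-voxel doses for one contoured structure (toy numbers below, not clinical data), each point of the curve is the fraction of the structure receiving at least that dose level; an ideal target curve stays near 1.0 up to the prescription dose and then drops sharply, hence the "rectangle":

```python
def cumulative_dvh(voxel_doses, dose_levels):
    """Fraction of the structure's voxels receiving at least each dose level."""
    n = len(voxel_doses)
    return [sum(d >= level for d in voxel_doses) / n for level in dose_levels]

# Toy voxel doses (Gy) sampled inside one contoured structure:
doses = [10, 20, 30, 40, 50, 50, 50, 50]
print(cumulative_dvh(doses, [0, 25, 45, 55]))  # [1.0, 0.75, 0.5, 0.0]
```

Plan-quality constraints such as "at least 95% of the dose should fall on the PTV" are read directly off curves like this one.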
A medical physicist calculated several treatment plans for each case and plotted dose-volume histograms for each plan, covering the planned target volume (PTV) and each critical organ. Based on the DVH analysis, an optimal plan was chosen, for which the dose to the tumor was maximal (at least 95% of the dose should fall on the PTV) and the doses to critical organs were minimal (Table 3). There were four 3D-IGBT sessions; the TFD of combined RT was 74.0-78.0 Gy. Doses were normalized according to the Manchester method. BT was performed using the tandem-ovoid applicator.
Methodology for assessing early and late toxicity of radiation therapy
Within the framework of this study, indicators of early toxicity were prospectively determined in patients receiving RT with 3D planning (main group) and with 2D planning (control group), including laboratory parameters (complete blood count, complete urinalysis, biochemical blood test, physical examination), during treatment and after discharge, within 90 days of the onset of therapy. Hematological and non-hematological toxicity indicators were evaluated on the international scale, NCI/CTCAE (National Cancer Institute's Common Terminology Criteria for Adverse Events), according to which early and delayed radiation injuries were identified. The former included radiation damage developing during RT or in the next three months after its completion. The latter included radiation damage developing after three months.
Staging of early and late radiation responses of all CC patients included in the study was carried out in accordance with the international scale, RTOG/EORTC (Radiation Therapy Oncology Group/European Organization for Research and Treatment of Cancer) (Tables 4 and 5) [23,24].
Statistical data processing
To describe and search for differences between the groups, we applied the methods of descriptive statistics. The database was generated in Excel and transferred to SPSS 20 for further procedures. Categorical variables are expressed as absolute numbers and their percentages, while quantitative variables are expressed as mean values and their standard deviations. To search for differences in categorical variables between groups, the chi-squared test (χ²) was employed, whereas for quantitative variables, the Mann-Whitney test was used, taking into account the asymmetry of the distribution. Differences were considered statistically significant at p<0.05.
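The two tests named above can be sketched in pure Python. For the 2x2 case, the chi-squared p-value has a closed form via the complementary error function, and the Mann-Whitney U statistic is a pairwise rank count; the worked numbers below are illustrative, not the study's data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared for a 2x2 table [[a, b], [c, d]] (df = 1)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, chi2 is a squared standard normal,
    # so the survival function reduces to erfc(sqrt(stat / 2)).
    return stat, math.erfc(math.sqrt(stat / 2))

def mann_whitney_u(x, y):
    """U statistic: count of (x_i, y_j) pairs with x_i > y_j; ties count 1/2."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

stat, p = chi2_2x2(10, 20, 30, 40)  # illustrative counts only
print(round(stat, 3), round(p, 3))  # 0.794 0.373
print(mann_whitney_u([3, 4], [1, 2, 3]))  # 5.5
```

In practice SPSS (as used in the study) also applies continuity corrections and computes exact Mann-Whitney p-values for small samples, which this sketch omits.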
Results
The follow-up period ranged from 6 to 28 months. To determine the effectiveness of the developed and implemented RT method, two groups were identified, as indicated in the Material and Methods section.
When assessing hematological toxicity, the content of hemoglobin, leukocytes and platelets in the blood prior to the treatment, as well as the lowest levels during the treatment period, were examined. Table 6 presents data on the incidence of anemia, leukopenia and thrombopenia (sensu Common Terminology Criteria of Radiation Therapy Oncology Group, CTC/RTOG) depending on the RT method of treatment.
Among early non-hematological complications, dyspeptic disorders in the form of decreased appetite, nausea, and vomiting of grade 0-1, an expected response to the weekly CHT sessions, were equally common in the study groups. Other early non-hematological complications were radiation responses from organs at risk, which demonstrated that the 3D-IGBT method had an advantage over the standard 2D planning scheme (Table 7). For example, grade 1 proctitis developed in 59.1% of cases in the main group, while in the control group in 65.9% and 61.4% of cases. Grade 2 proctitis and grade 2 cystitis were diagnosed in 5 (22.7%) and 6 (27.3%) cases in the main group, respectively, while in the control group they were recorded more frequently: in 13 (29.6%) and 14 (31.8%) cases, respectively. Grade 3 complications were not registered in the main group, owing to the optimization of the RT regimen. There were no cases of grade 4 proctitis in either the main or the control group.
Grade 1 radiation responses of the vaginal mucosa and cervix developed in 12 (54.6%) women of the main group and in 25 (56.8%) women of the control group. Grade 2 radiation responses were recorded in 6 (27.3%) women of the main group vs. 14 (31.8%) cases in the control group. It is important to point out that in the main group with 3D imaging, grade 3 responses were not diagnosed, while in the control group with 2D planning, grade 3 responses were noted in 4 (9.1%) women.
After 90 days from the onset of treatment, delayed toxicity of the large intestine, bladder, vaginal mucosa and cervix was identified in 22 women of the main group and 44 women of the control group (Table 8).
In the main group, grade 1 radiation responses of the intestine were diagnosed in 15 (68.2%) cases, and in 32 (72.7%) women in the control group. Grade 2 responses of the large intestine were diagnosed in 2 (9.1%) women in the group with 3D planning, while in the group with 2D planning they were detected in 7 (15.9%) cases. It is important to note that late grade 3 radiation injuries of the rectum were diagnosed in 3 (6.8%) women in the control group, while they were not registered at all in the main group.
Grade 1 radiation responses of the bladder were detected in 13 (59.1%) women of the main group, while in the control group they were registered in 33 (75.0%) cases. Grade 2 cystitis was reported in fewer women in the 3D planning group than in the 2D imaging group (9.1% vs. 13.6%, p<0.05). Grade 3 radiation responses of the bladder in the late period were diagnosed in 4 (9.1%) women in the control group versus none among the patients of the main group subjected to 3D planning (p<0.05).
Grade 1 radiation responses of the vaginal mucosa and cervix among women of the main group were noted in 13 (59.1%) cases, while in the control group, grade 1 radiation responses of the irradiated volume were observed in 33 (75.0%) patients. Grade 2 responses of the mucous membrane of the vagina and cervix were diagnosed more often in the control group (16.7% vs. 13.6%, p<0.05). Grade 3 responses of the vaginal mucosa and cervix were detected in 4 (9.1%) women in the control group, while a combined problem in the late period occurred in 2 (4.5%) women. Overall, grade 3 radiation injuries in the control group amounted to 9 (25.0%) cases, while in the main group, in the case of the implemented 3D technology, no grade 3 responses were registered (p<0.05).
Discussion
In their study, Sadiq S. et al. noted that despite the frequent use of CT and MRI technologies, there was a lack of data regarding treatment outcomes, particularly acute non-hematological toxicity, in CT-based 3D BT of CC; previous studies reported solely chronic side effects [26]. Our study assessed early and late hematological and non-hematological toxicity in both groups. When measuring early hematological toxicity, correctable anemia, leukopenia, and thrombocytopenia were observed in 8 (36.4%), 9 (40.9%) and 6 (27.3%) women of the main group, respectively, at weeks 4-5 of EBRT treatment; in the control group, these were diagnosed in 17 (38.6%), 19 (43.2%) and 11 (25.0%) cases, correspondingly, both at the EBRT stage and during BT sessions.
Among the early non-hematological complications, dyspeptic disorders were noted in 5 (22.7%) cases in the 3D-IGBT group and 10 (22.7%) women in the control group. Other early non-hematological complications were radiation responses of organs at risk: grade 3 complications were registered in 5 (11.4%) patients in the control group, while in the main group they were not detected at all, owing to optimization of the RT regimen. Early grade 3 radiation responses of the vaginal mucosa and cervix were not diagnosed in the main group with 3D visualization, while in the control group with 2D planning, grade 3 responses were noted in 4 (9.1%) women, in the form of membranous epitheliitis and severe pain requiring stronger anesthesia. The assessment of late toxicity was carried out after 90 days from the onset of treatment. In their study, Dang Y. et al. presented late toxicity data using CT for BT: 37% of patients had grade 2 radiation responses of the rectum, whereas 7% had grade 3 radiation responses [26]. In this study, the evaluation of late toxicity disclosed that late grade 3 radiation damage of the rectum was diagnosed in 3 (6.8%) women in the control group, while such damage was not detected in the main group. Grade 2 cystitis was reported in fewer women in the 3D planning group than in the 2D imaging group (9.1% vs. 13.6%, p<0.05). Grade 3 radiation responses of the bladder in the late period were diagnosed in 4 (9.1%) women in the control group, while among the patients of the main group with 3D planning, they were not observed (p<0.05). Grade 2 responses of the vaginal mucosa and cervix were diagnosed more often in the control group (16.7% vs. 13.6%, p<0.05).
Kusada T. et al. indicated in their study that the rate of severe late complications (grade ≥3) over two years was 3% for the bladder and rectum and 0% for the sigmoid colon and small intestine [27]. In this study, we found that in the main group, using the introduced 3D technology, grade 3 responses were not observed (p<0.05). At the same time, grade 3 responses of the vaginal mucosa and cervix were registered in 4 (9.1%) women in the control group, while a combined problem occurred in the late period in 2 (4.5%) women. Overall, grade 3 radiation injuries in the control group amounted to 9 (25.0%) cases.
Study advantages and limitations
It should be noted that the region in which the study was conducted is one of the territorial units adjacent to the Semipalatinsk nuclear test site. Most of the population are descendants of individuals affected by the effects of the tests. A number of epidemiological studies on the health of the population and the risks of developing various neoplasms have been carried out in the region [28,29]. Data on contamination of the territory with trace elements also create prerequisites for aggravating the negative impact on the health of residents living in the region [30]. Hence, our study possesses an important practical scope, which has a scientific basis and provides prerequisites for further research.
The advantages of this study encompass the establishment of a cause-effect relationship, and the evaluation of the applied treatment method effectiveness through the analysis of toxicity indicators.
The limitations of our study include the presence and/or absence of required equipment and the high cost of the method. The disadvantages of this study comprise limited applicability of proposed treatment for patients requiring emergency care, along with a large number of factors influencing the outcome of the therapy.
Conclusion
Hence, the method we used to optimize the treatment of LACC via 3D planning, based on toxicity criteria, exhibited a definite advantage in comparison with RT employing 2D planning. The patients in the main group had a lower percentage of early hematological responses to irradiation, and there were no grade 3 responses of organs at risk in the early and late periods. The percentage of overall survival in the main group was higher than in the control group, which confirmed the clinical effectiveness of the proposed method for the optimal choice of therapy for patients with LACC.
Based on the results of our study, we conclude that optimization of RT for LACC via 3D-IGBT creates clinically beneficial conditions for effective therapy: it reduces the risk of applicator displacement and diminishes an impact on the patient via reducing total radiation doses, the frequency of severe early and late toxic effects, thereby providing adequate local control, regardless of tumor size and its clinical stage.
The study demonstrated that the choice of rational fractionation schemes and the methodology for taking into account the dose impact allowed optimizing radiation programs considering individual parameters of the tumor process, spatial relationship of the tumor and organs at risk, as well as the constitutional characteristics of the patient. Such choice also ensured a higher level of quality of life via reducing both early and late radiation responses and complications. We established that the introduction of 3D-IGBT in clinical practice for the treatment of CC, compared with traditional methods, provided higher social and economic efficacies, both by reducing the number of RT courses and by reducing the degree and risk of early and late postradiation responses.
Conflict of Interest
None declared. All authors participated equally in the manuscript preparation. This material has not been previously submitted for publication in other periodicals, and it is published for the first time.
Funding
This study was partially funded by the Science Committee of the Kazakhstan Ministry of Education and Science (Registration No. AR05130969). This study is part of the multicenter project, Forum for Nuclear Cooperation in Asia (FNCA), carried out on a grant basis.
Ethical approval
All procedures performed in studies involving humans were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Interaction of cold radiofrequency plasma with seeds of beans (Phaseolus vulgaris)
Highlight The impact of cold plasma on the wetting, water absorption, and germination of beans (Phaseolus vulgaris) is reported. Plasma treatment accelerated the water absorption and germination of seeds.
Introduction
The interaction of various kinds of radiation with plants has been the subject of intensive research (Teramura, 1983; Cen and Bornman, 1990; Jansen et al., 1998; Friesen et al., 2014; Guajardo-Flores et al., 2014). Generally, researchers have focused on the interaction of UV radiation with plants (Cen and Bornman, 1990; Guajardo-Flores et al., 2014). One kind of interaction remains practically uninvestigated, namely the interaction of cold radiofrequency plasma with plants. The first reports in this field appeared only in the last two decades. It seems that the first systematic study of the influence exerted by plasma on seeds was carried out by Volin et al. (2000), who exposed seeds of radish (Raphanus sativus) and two pea cultivars (Pisum sativum cv. 'Little Marvel', P. sativum cv. 'Alaska') to CF4 and octadecafluorodecalin plasma, and reported a significant delay in germination compared with the untreated controls. Since then, researchers have concentrated on the following main trends of investigation: (i) decontamination of seeds by plasma, (ii) breaking of dormancy with plasma, (iii) the impact of plasma treatment (PT) on germination, and (iv) the impact of PT on root generation (sprouting).
Decontamination and inactivation of pathogenic microorganisms of seeds have been communicated recently by various groups (Selcuk et al., 2008; Filatova et al., 2009; Schnabel et al., 2012). Several groups reported the impact of PT on germination, sprouting, and dormancy breaking of seeds. The experimental data in these fields are scanty and controversial. Sera et al. (2010) investigated the influence of PT on wheat and oat germination. The authors reported that PT did not affect the germination of oat seeds, but they did note accelerated root generation in plants grown from plasma-treated seeds (Sera et al., 2010). The same group also showed that PT did change seed germination in Lamb's Quarters seeds (Sera et al., 2009). A stimulating effect of cold plasma on both the germination and sprouting of tomato seeds (Lycopersicon esculentum L. Mill. cv. 'Zhongshu No. 6') has been reported by Meiqiang et al. (2005). Similar results were reported for Paulownia tomentosa seeds (Živković et al., 2004). Kitazaki et al. (2012) studied the growth enhancement of radish sprouts (Raphanus sativus L.) induced by low-pressure O2 radiofrequency plasma irradiation. The experimental results revealed that oxygen-related radicals strongly enhance growth, whereas ions and photons do not (Kitazaki et al., 2012). The positive effect of cold helium plasma treatment on seed germination, growth, and yield was reported recently for wheat. Treatment of spinach seeds by a magnetized arc plasma increased the germination rate by 137% (Shao et al., 2013). It has been demonstrated that cold atmospheric plasma treatment had little effect on the final germination percentage of radish seeds, but it influenced their early growth (Mihai et al., 2014). Generally, researchers agree that cold plasma treatment is an economical and pollution-free method to improve seed performance and crop yield.
Cold plasma treatment plays essential roles in a broad spectrum of developmental and physiological processes in plants, including reducing the bacterial bearing rate of seeds, changing seed coat structures, increasing the wettability and permeability of seed coats, and stimulating seed germination and seedling growth (Selcuk et al., 2008; Ling et al., 2014).
Our group recently studied the influence of cold air plasma on lentils (Lens culinaris), beans (Phaseolus vulgaris), and wheat grains (Triticum sp. C9) (Bormashenko et al., 2012). It was established that plasma treatment dramatically influenced the wettability of the seeds, resulting in a significant decrease of the apparent contact angle (de Gennes et al., 2003; Erbil, 2006). The change in wettability is followed by a consequent change in the water imbibition of the seeds. It was also reported that cold PT markedly modifies the wettability of various biological tissues, including lycopodium particles and keratin (Bormashenko and Grynyov, 2012a, b). The impact of cold plasma on the wettability of keratin has also been reported by Molina et al. (2003).
Hydrophilization of biological tissues by cold plasma resembles the similar effect observed and deeply researched in synthetic polymers (Yasuda, 1976; Strobel et al., 1994; France and Short, 1998; Stoffels et al., 2008). The plasma treatment of synthetic polymers creates a complex mixture of surface functionalities which influence surface physical and chemical properties and results in a dramatic change in the wetting behaviour of the surface (France and Short, 1997; Kaminska et al., 2002). Not only the chemical structure but also the roughness of the surface is affected by the plasma treatment; this also could change the wettability of the surface (Lommatzsch et al., 2007). PT usually strengthens the hydrophilicity of treated synthetic polymer surfaces. However, the surface hydrophilicity created by plasma treatment is often lost over time (Morra et al., 1990). This effect of decreasing hydrophilicity is called 'hydrophobic recovery' (Morra et al., 1990; Occhiello et al., 1992; Kaminska et al., 2002; Pascual et al., 2008; Mortazavi et al., 2012). By contrast, in our recent research, hydrophobic recovery was not observed in plasma-treated seeds (Bormashenko et al., 2012). It should be emphasized that cold plasma treatment only influences the nano-scaled external layer of a tissue (France and Short, 1997; Kaminska et al., 2002; Mortazavi and Nosonovsky, 2012; Bormashenko et al., 2013). This fact may be crucial for the biological applications of cold plasma treatment.
There are still many open issues with regard to the mechanisms of plasma action on synthetic polymers. Obviously, the processes of interaction with biological objects such as seeds are even more complicated. In this paper, the focus is on the modification of wetting properties by plasma, studied for a variety of interfaces constituting beans (exotesta, mesotesta, and cotyledon), on the change in water imbibition due to the plasma treatment, and on the effect of plasma treatment on the germination of beans. It is noteworthy that, owing to their large surface area, beans are amenable to the goniometric study of the effect of plasma on their wettability, represented by an apparent contact angle. Changes in the water imbibition pathways due to the plasma treatment were also tracked, as well as the effect of plasma treatment on the germination kinetics of beans. Our research demonstrates that the speed of germination may be affected by the cold plasma treatment.
Plasma treatment of beans
PT was carried out with the plasma unit EQ-PDC-326, manufactured by MTI Co., USA. Beans (Phaseolus vulgaris) were exposed to an inductive air plasma discharge under the following parameters: the plasma frequency was on the order of 10 MHz, the power was 20 W, and the pressure was 6.7 × 10^-2 Pa; the volume of the discharge chamber was 840 cm^3. The time span of irradiation was 2 min. The scheme of the experimental unit used for plasma treatment of the seeds is depicted in Fig. 1. Under plasma treatment, the beans were exposed to low vacuum; hence it was necessary to study separately the impact of a vacuum on the properties of beans.
Study of the impact of vacuum pumping on the properties of beans
Beans were exposed to a low vacuum at a pressure of 6.7 × 10^-2 Pa for 3 min. Ten beans were weighed before and immediately after vacuum pumping. The weight was taken with an MRC ASB-220-C2 five-place analytical balance. Thus, any weight loss of beans occurring under vacuum pumping was established. Afterwards, the vacuumed beans were exposed to ambient conditions: the average temperature was 22.3 °C; the relative humidity was 31-33%. The weight of the beans was measured continuously over 5 min. Thus, the kinetics of water adsorption by the beans was established.
SEM imaging of beans
Irradiated and non-irradiated beans and tissues (a cotyledon and seed coat) were imaged by high resolution SEM (JSM-6510 LV).
Trinocular imaging of beans
Irradiated and non-irradiated beans and tissues constituting the seed were imaged by the trinocular microscope (Model SZM-2) with a USB digital camera (OPTIKAM PRO3) supplied by Optika Microscopes Italy.
Study of the wetting properties of tissues
The wetting properties of tissues were established using a Ramé-Hart goniometer (model 500). Ten measurements were taken to calculate the mean apparent contact angles (de Gennes et al., 2003;Erbil, 2006) for all kinds of studied tissues.
Study of water imbibition
For the study of the time dependence of water absorption (imbibition) by irradiated and non-irradiated beans, 10 seeds were placed on humid blotting paper at ambient conditions; the temperature was 24 °C. Beans were weighed every hour with an MRC ASB-220-C2 analytical balance. The relative water imbibition (absorption) was defined as w(t) = [m(t) − m0]/m0, where m0 is the total initial mass of seeds, and m(t) is the running total mass of seeds. A series of six experiments was performed for untreated, vacuumed (pumped), and plasma-treated beans.
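The relative-imbibition calculation above amounts to simple arithmetic on the balance readings; a minimal sketch in Python, using hypothetical masses rather than the measured data:

```python
# Relative water imbibition, w(t) = (m(t) - m0) / m0, per the definition above.
def relative_imbibition(m_t, m0):
    """Fractional water uptake relative to the initial total seed mass."""
    return (m_t - m0) / m0

m0 = 5.00                          # total initial mass of 10 seeds, g (hypothetical)
masses = [5.00, 5.40, 5.95, 6.50]  # running total mass at t = 0, 1, 2, 3 h (hypothetical)
uptake = [relative_imbibition(m, m0) for m in masses]
print([round(w, 3) for w in uptake])  # -> [0.0, 0.08, 0.19, 0.3]
```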
Study of the role of a micropyle in water imbibition
For the study of the role of the micropyle in water imbibition, experiments with open and sealed micropyles were performed. The micropyle was sealed with thermosetting glue (UHU 41686 Instant Super Glue).
Study of impact of the PT on beans' germination
For the study of the impact of PT on germination, 30 seeds of irradiated, vacuum-pumped, and non-irradiated beans were placed on humid blotting paper for about 80 h at constant conditions provided by a growth chamber (Model PGI-500H, supplied by MRC Ltd, Israel); the temperature was kept at 21 °C; the process included 12 h of darkness and 12 h of light per day. Every 4-8 h the numbers of emerged seedlings were counted. The percentage of germination was calculated after counting seedlings on the fourth day. Two sets of 180 seeds were taken for vacuum and plasma treatments, respectively; the control group also included 180 seeds. All experiments were repeated three times. The results of the experiments were processed with JMP software (Statistical Discovery, Version 7.0.2).
Impact of vacuum pumping on the properties of beans
When beans are treated with cold plasma, they are necessarily exposed to a vacuum. They also lose weight when pumped and it is possible that their germination rate would also be changed (the germination of beans will be discussed later). The weight loss and water adsorption of pumped beans were established (see Materials and methods: study of the impact of vacuum pumping on the properties of beans). The average weight loss of pumped beans was established as 0.11% compared with the starting weight. Pumped beans partially restored their weight when exposed to a humid atmosphere (for details see Materials and methods: study of the impact of vacuum pumping on the properties of beans). It is plausible to suggest that this restoration is governed by surface events, i.e. water adsorption to the beans' surface. The dynamics of water adsorption by pumped beans is illustrated in Fig. 2. First of all, it should be stressed that pumped beans did not restore their initial weight. This means that pumping removes water not only from the surface of beans but also from the beans' bulk. The time dependence of water adsorption of pumped beans is well fitted by Equation 1.
m(t) = m0 + Δm_ad[1 − exp(−t/τ)]   (1)

where m(t) is the running mass, m0 is the initial mass established immediately after pumping, t is time, τ is the characteristic time of adsorption, and Δm_ad is the resulting mass of adsorbed water. The average characteristic time of adsorption τ was established by fitting the experimental data with Equation 1 as 0.38 ± 0.07 min.
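Fitting Equation 1 to the balance readings is a standard nonlinear least-squares problem; a sketch with SciPy, using synthetic data generated from the model with the reported τ ≈ 0.38 min rather than the actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Equation 1: m(t) = m0 + dm_ad * (1 - exp(-t / tau))
def adsorption(t, m0, dm_ad, tau):
    return m0 + dm_ad * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 5.0, 30)                     # minutes
m = adsorption(t, m0=5.0, dm_ad=0.004, tau=0.38)  # synthetic 'measurements'

popt, _ = curve_fit(adsorption, t, m, p0=[5.0, 0.01, 1.0])
m0_fit, dm_ad_fit, tau_fit = popt
print(round(tau_fit, 2))   # recovers the characteristic adsorption time, min
```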
Impact of the plasma treatment on the wettability of different biological tissues constituting seeds
It has already been reported that cold radiofrequency PT dramatically changed the wettability of the beans' seed coat (Bormashenko et al., 2012). A similar change in the wetting of soybeans exposed to cold plasma was reported recently. PT markedly hydrophilized and irreversibly changed the wetting regime of the external side of the seed coat. It was also important to establish the impact exerted by PT on the wettability of the other tissues constituting the seed: the testa, including the exotesta and mesotesta, and the cotyledon epidermis (Aniszewski et al., 2006). For this purpose, two series of experiments were carried out. In the first series, intact beans were exposed to the plasma treatment, as described in the Materials and methods: Plasma treatment of beans, and, afterwards, the testa and exotesta were separated from the cotyledon using a scalpel and tweezers. The SEM and trinocular images of the inner (mesotesta)/outer (exotesta) sides of the seed coat and the cotyledon surface are presented in Fig. 3A-F (for the experimental details see the Materials and methods: SEM and trinocular imaging of beans).
In the second series, the testa was separated from the cotyledon, and they were individually exposed to the PT, as described in the Materials and methods: Plasma treatment of beans. The wettability of irradiated and non-irradiated tissues was established by the measurement of apparent contact angles (APCAs), as described in the Materials and methods: Study of the wetting properties of tissues. The results of the measurements are summarized in Tables 1 and 2, where θ_EXO^0 is the APCA of the exotesta before PT, θ_EXO^PT is the APCA of the exotesta after PT, θ_MESO^0 is the APCA of the mesotesta before PT, θ_MESO^PT is the APCA of the mesotesta after PT, θ_COT^0 is the APCA of the cotyledon before PT, and θ_COT^PT is the APCA of the cotyledon after PT.
First of all, it should be stressed that the mesotesta demonstrated an apparent contact angle as high as 132°, as depicted in Fig. 4A, which is close to the APCA inherent to superhydrophobic surfaces (Barthlott et al., 1997; Nosonovsky, 2007; Bhushan et al., 2010; Bormashenko et al., 2013). The high values of APCA are due to the multiscale roughness observed on the mesotesta, shown in Fig. 3B (Nosonovsky, 2007). However, the mesotesta does not demonstrate true superhydrophobicity, but rather the so-called 'rose petal effect', in which high values of APCA are attended by a high-adhesion wetting regime, illustrated in Fig. 4B-D. Indeed, a water droplet remained attached to the mesotesta even in the 'pendant' position, as shown in Fig. 4B.
It should be emphasized that the mesotesta kept its pronounced hydrophobicity when the entire bean was exposed to PT. The APCA did not change, as seen from the data supplied in Table 1. The cold plasma comprises a variety of species, including atoms, ions, electrons, and photons (Lieberman et al., 2005). However, the changes in wettability of organic tissues produced by cold plasma are mainly ascribed to the collisions of ions with moieties constituting a tissue. Indeed, the energy transfer related to collisions of electrons with a tissue is negligible, and the energy of photons is deficient for the essential changes in the wettability of a tissue (Lieberman et al., 2005). Thus, the absence of changes in the wettability of the mesotesta leads to the suggestion that ions lost their energy when penetrating through the coat, and these ions are incapable of modifying the wetting regime of the surface.
It is also seen from the data presented in Table 1 that the wetting properties of the cotyledon were similarly not changed by plasma when entire beans were treated. Thus, the absence of changes in the wettability of the cotyledon strengthens the hypothesis that the plasma species 'attenuated' by the coat do not have sufficient energy for the modification of the wettability of the internal sides of the biological surfaces studied in our research. By contrast, cold plasma markedly hydrophilized all the biological surfaces studied when they were separately exposed to PT, as seen from the data presented in Table 2. A jump from pronounced hydrophobicity to complete wetting was observed for the strongly hydrophobic mesotesta. Changes in APCA as high as 69° and 28° were observed for the exotesta and cotyledon. These findings coincide with those reported in our previous work (Bormashenko et al., 2012). It is concluded that PT modifies the wetting of various tissues constituting beans when the access of ions to the tissues is not restricted.

Fig. 2. Weight loss of beans (n=10) due to pumping, followed by the increase due to water adsorption. The size of the diamonds denoting the experimental points indicates the experimental scatter of the results.
Study of the water absorption by untreated and plasma-treated beans
As shown in the previous section, PT modifies the wetting regime of tissues and promotes their hydrophilization. This effect leads to increased water absorption (imbibition) of plasma-treated seeds, as demonstrated in our recent paper (Bormashenko et al., 2012). We now report a more detailed study of water imbibition by plasma-treated beans, in which the pathways of water imbibition were investigated. The water imbibition occurring in plasma-treated beans is a complicated process, influenced by a number of factors. (i) It was demonstrated in 'Impact of vacuum pumping on the properties of beans' that vacuum pumping removes water not only from the surface of beans but also from their bulk. Thus, it is reasonable to suggest that vacuum pumping will also influence the process of water imbibition. (ii) The seed coat of beans is not intact; it contains a small opening called a 'micropyle' (Bewley and Black, 1994). The crucial role of the micropyle in water imbibition is widely acknowledged (Hagon, 1970; Rolston, 1978; Gama-Arachchige et al., 2013).
Table 1. Apparent contact angles (in degrees) established for different tissues when entire beans were exposed to PT (subscripts: EXO, exotesta; MESO, mesotesta; COT, cotyledon; superscripts: 0, untreated beans; PT, plasma-treated beans): θ_EXO^PT = 40 ± 1; θ_MESO^0 = 132 ± 1; θ_MESO^PT = 131 ± 2; θ_COT^0 = 101 ± 2; θ_COT^PT = 100 ± 2.

Table 2. Apparent contact angles (in degrees) established for different tissues when seed coatings and cotyledons were exposed separately to plasma treatment: θ_EXO^0 = 109 ± 1; θ_EXO^PT = 40 ± 1; θ_MESO^0 = 132 ± 1; θ_MESO^PT = 0; θ_COT^0 = 101 ± 2; θ_COT^PT = 72 ± 1.

Fig. 4. The 'rose petal effect' observed on the mesotesta of bean seeds. High apparent contact angles are attended by a high-adhesion wetting state. The droplet is attached to the surface even in the pendant position.

In order to separate the impact of the factors influencing water imbibition, two series of experiments were performed. In the first series, water imbibition of untreated, vacuum-pumped (without plasma), and plasma-treated beans was studied, as described in 'Study of water imbibition'. In these experiments
the micropyle was open. The results of these experiments are depicted in Fig. 5A. It was seen that PT markedly promotes water imbibition. It is also recognized from the data supplied in Fig. 5A that vacuum pumping retards water absorption. This observation calls for future investigation. The second series of experiments was intended to reveal the precise role of the micropyle in water imbibition. For this purpose, the micropyle was sealed precisely with a small droplet of glue (for details, see the Materials and methods: study of the role of a micropyle in water imbibition). Thus water imbibition by four kinds of beans was studied, namely: untreated beans with an open micropyle, plasma-treated beans with an open micropyle, untreated beans with a sealed micropyle, and plasma-treated beans with a sealed micropyle. Their water imbibition was studied as described in the Materials and methods: study of water imbibition; the results are depicted in Fig. 5B. It is distinctly seen that the fastest water imbibition was observed for plasma-treated beans with an open micropyle. The slowest water absorption was registered for untreated beans with a sealed micropyle. Plasma-treated beans with a sealed micropyle demonstrated a speed of water imbibition higher than that of untreated beans with a sealed micropyle, but remarkably slower than that of plasma-treated beans with open (unsealed) micropyles. Thus, the strengthening impact of plasma treatment on water imbibition through the beans' coat, as well as the essential role of the micropyle, were both established.
Qualitative experiments were also performed which shed light on the kinetics of water imbibition. Untreated and plasma-treated beans were immersed in water coloured with Bromophenol Blue (see the Materials and methods: direct visualization of water penetration), which visualized the process of water penetration into the beans. Immersed beans were cut every hour, and a distinct boundary was observed, separating areas wet by coloured water and non-wet ones, as shown in Fig. 6A-H, where: (A, B) is the untreated bean after 2 h in coloured water; (C, D) is the untreated bean after 5 h in coloured water; (E, F) is the plasma-treated bean after 2 h in coloured water; (G, H) is the plasma-treated bean after 5 h in coloured water. It was clearly seen that more rapid penetration occurs in plasma-treated beans. The micropyles in these experiments were open. It can also be recognized from Fig. 6A-H that water penetrates into beans from the side of a micropyle. This further demonstrates the key role of a micropyle in the process of water imbibition (Rolston, 1978;Gama-Arachchige et al., 2013).
The impact of PT on beans' germination
As shown in the previous sections, PT leads to increased water absorption (imbibition) of seeds. It was also important to verify the impact of increased water absorption on the germination process of seeds. For this purpose, three groups of bean seeds were used: one group of seeds was treated by cold plasma, as described in the Materials and methods: Plasma treatment of beans, the second group was treated by vacuum pumping, and the third group of seeds was untreated. No significant difference was observed in viability (the percentage of germination); all three groups of seeds demonstrated the same viability rates, which were about 92-94%, as depicted in Fig. 7.
In order to derive the data describing the kinetics of germination, Richards' curves were fitted to a number of experiments (Richards, 1959; Hara, 1999; Sera et al., 2009). Fitting of the experimental data by Richards' curves is depicted in Fig. 8A-C. The Richards' differential equation, originally developed for growth modelling, gives rise to the Richards' curve, which is an extension of the logistic or sigmoid functions, resulting in S-shaped curves describing the kinetics of germination. The Richards' function Y_t, possessing a variable point of inflection, was calculated according to Equation 2:

Y_t = a[1 + b exp(−ct)]^(−1/d)   (2)

where Y_t is the germination percentage, a, b, c, and d are the fitting parameters, and t is the time. Fitting of the experimental data by Equation 2 supplied the best values of the fitting parameters, summarized in Table 3, in which Me (median) denotes the time of 50% germination and characterizes the rate of this process. The quartile deviation of germination time, Qu, describes the deviation range of the Richards' curve relative to Me, and Sk (skewness) represents the asymmetry of the Richards' curve relative to the inflection point (mode). For the calculation of these quantities, the formulae developed by Hara (1999) were implemented.

Fig. 7. Comparative study of the germination process observed in untreated, vacuum-pumped, and plasma-treated beans.
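A Richards-curve fit and the extraction of the median germination time Me can be sketched with SciPy; the parameter values and germination data below are synthetic stand-ins generated from an assumed parameter set (not the measured counts), and the four-parameter form used is one common Richards parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# A common four-parameter Richards function: Y(t) = a * (1 + b*exp(-c*t))**(-1/d)
def richards(t, a, b, c, d):
    return a * (1.0 + b * np.exp(-c * t)) ** (-1.0 / d)

t = np.linspace(0.0, 80.0, 41)                   # hours
y = richards(t, a=93.0, b=50.0, c=0.15, d=1.0)   # synthetic germination %

popt, _ = curve_fit(richards, t, y, p0=[90.0, 10.0, 0.1, 1.0], maxfev=20000)
a_fit = popt[0]

# Median germination time Me: the time at which the fitted curve reaches
# half the final (asymptotic) germination percentage a.
Me = brentq(lambda x: richards(x, *popt) - a_fit / 2.0, 0.0, 80.0)
print(round(Me, 1))   # hours to 50% germination
```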
As seen from Table 3, the final percentage of germination is almost the same in the cases of plasma-treated, untreated, and vacuum pumped samples. The value of Me, however, is distinctly lower (by c. 4 h) in the case of plasma-treated samples. Germination is accelerated as Me decreases (Hara, 1999). This means that, although the final germination percentage is the same, the speed of germination is higher for the plasma-treated samples. The value of Me was the same for untreated and vacuum-pumped groups. Thus we deduce that vacuum pumping does not influence the rate of germination.
No difference in the parameter Qu was established among all groups of seeds (untreated, plasma-treated, and vacuum-pumped). The value of Sk=0.17 was the same in the plasma-treated and control groups, whereas it was much higher (Sk=0.40) for the vacuum-pumped samples.
Conclusions
The influence exerted by cold radiofrequency plasma treatment on the wetting properties and germination of beans (Phaseolus vulgaris) was investigated. A comparative study of wettability and germination was performed for plasma-treated, vacuum pumped, and untreated seeds. It was established that the cold plasma treatment markedly hydrophilized the external (exotesta) surface of the seed coat, whereas the mesotesta and cotyledon kept their pronounced hydrophobicity when the entire bean was exposed to plasma treatment. It is concluded that plasma ions lose their energy when penetrating through the coat, and these ions are incapable of modifying the wetting regime of the biological surface. Water imbibition into vacuum-pumped and plasma-treated beans was studied. It has been demonstrated that plasma treatment markedly increased the water imbibition of seeds through the testa, independent of the micropyle effect. The key role of a micropyle in the process of imbibition was shown.
The impact of vacuum pumping on the properties of beans, which is inevitable under cold PT, was studied. The kinetics of water adsorption of pumped beans is well fitted by an exponential function. The average characteristic time of adsorption τ was established as 0.38 ± 0.07 min.
The impact of PT on the germination of beans turned out to be ambiguous. The kinetics of germination was fitted by the Richards curve. It was established that the final percentage of germination is almost the same in the cases of plasma-treated, untreated, and vacuum-pumped samples. However, the speed of germination was markedly higher for the plasma-treated samples. The mechanisms of the interaction of cold plasma with the biological tissues constituting seeds are not known in detail; thus, detailed microscopic research clarifying this interaction is required.
Vitamin D3 Supplementation Ameliorates Typical Clinical Symptoms in Children with Autism Spectrum Disorder in Japan: A Case Study
Prior to the study, approval was obtained from the ethics committee of Bukkyo University (project registration number in 2018: 7). We enrolled six Japanese children with ASD aged 3 years (5 males, 1 female). The researchers were present at the child welfare institution (Mukunokien, Kyoto, Japan) where the study was conducted to assure the proper management of safety and confidentiality in the study. The manager of the institution invited parents to participate in the study, and all the children whose participation was requested from January to September 2019 were enrolled. All subjects took oral vitamin D supplements (Baby D®200: 5.0 μg/day of vitamin D3 oil, purchased from Morishita Jintan Co., Ltd., Osaka) for 9 months.
Vitamin D is a secosteroid associated with peripheral calcium homeostasis and nervous system function [3]. Vitamin D exists in two major forms: vitamin D2 from plants and D3 from animals. Both vitamin D2 and D3 are biologically inert and require activation through two hydroxylation processes involving 25-hydroxylase (CYP2R1) and 1α-hydroxylase (CYP27B1), located in the liver and kidney, respectively [4]. 1,25-dihydroxyvitamin D (1,25OHD) is the biologically active metabolite produced by these two hydroxylation steps and acts in the nervous system [5]. Vitamin D modulates the central nervous system through its receptors, which are expressed in neuronal and glial cells in almost all regions of the central nervous system [6].
It has been suggested that vitamin D reduces the risk of ASD. Cohort studies have shown low neonatal vitamin D to be a possible risk factor for ASD [7,8]. Studies have also investigated the association between vitamin D concentrations during pregnancy and total behavioral and neurodevelopmental problems [9][10][11]. However, prospective studies have not found an association between 25-hydroxyvitamin D (25OHD) concentration and ASD. ASD usually appears during the first three years of life [12]. In one case report, vitamin D supplementation led to a significant reduction in the core symptoms of autism in a child with ASD [13].
Adaptive function test
All children were assessed for autism behavior using the Childhood Autism Rating Scale (CARS) [14]. An adaptive behavior scale (Vineland-II) [15] was used to identify communication, daily life, social and motor skills through a semi-structured interview conducted with parent and caregiver. The Short Sensory Profile (SSP) was used to quickly identify children with sensory processing problems from parent reports of sensory behaviors in their child [16].
The tests were performed by skilled occupational therapists.
Statistical analysis
The differences between before and after intervention with vitamin D3 supplements were evaluated using the Wilcoxon test. A p-value of < 0.05 was considered to be statistically significant. Analyses were carried out using SPSS 21 for Windows (IBM, Japan).
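The authors ran this comparison in SPSS; the equivalent paired Wilcoxon signed-rank test is available in SciPy, sketched here with hypothetical before/after scores rather than the study's data:

```python
from scipy.stats import wilcoxon

# Hypothetical pre/post scores for 6 children, paired by subject
# (illustrative values only, not the study's measurements).
before = [38.0, 41.5, 35.0, 44.0, 39.5, 36.5]
after = [33.0, 37.0, 34.5, 40.0, 36.0, 35.0]

stat, p = wilcoxon(before, after)  # paired, two-sided by default
print(p < 0.05)                    # significant at the paper's 5% threshold
```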
Study subjects
Characteristics of the study subjects are shown in Table 1. Age was 3 years for all the children (males, n=5; female, n=1). Ca and iPTH were within the normal range, but 25OHD was below the sufficient value (>=30 ng/mL) (Table 1).
Change in serum 25OHD
Serum 25OHD was classified as normal (>=30 ng/mL), insufficient (>20 to 29.9 ng/mL), or deficient (<=20 ng/mL). In this study, the level was deficient (n=2) or insufficient (n=4). Nine-month intake of vitamin D3 supplements increased serum Ca and 25OHD concentrations significantly (Figure 1). Serum Ca and iPTH remained within the normal range throughout the 9 months. The vitamin D-deficient group decreased from 2 children to 1 (one child became insufficient), and the insufficient group decreased from 4 children to 3 (one became sufficient). These results suggest that vitamin D3 supplement intake increased serum 25OHD levels.
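The three-way cut-off used above is a simple threshold rule; a small helper (the function name is ours, for illustration) makes the boundaries explicit:

```python
def classify_25ohd(ng_per_ml):
    """Classify serum 25OHD using the cut-offs stated in the text (ng/mL)."""
    if ng_per_ml >= 30.0:
        return "sufficient"    # 'normal' per the study's thresholds
    if ng_per_ml > 20.0:
        return "insufficient"  # >20 to 29.9 ng/mL
    return "deficient"         # <=20 ng/mL

print([classify_25ohd(v) for v in (15.0, 25.0, 31.0)])
# -> ['deficient', 'insufficient', 'sufficient']
```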
Adaptive function
The CARS and SSP scores decreased in all except one child, who originally had a high 25-hydroxyvitamin D level (28 ng/mL) (Figure 2). Among the ASD children in this study, vitamin D administration had a positive effect on "communication skills", "ADL skills", "social skills" or "motor skills" in two children, based on the Vineland-II (Figure 3).
Discussion
Many epidemiological studies have assessed the relationship between vitamin D and ASD [17]. The majority have examined the relationship between the vitamin D status of pregnant women and the risk of ASD in their children, and reported the vitamin D status of ASD children. Some studies have found no relationship between vitamin D status and ASD [18,19].
A few studies have reported the effect of vitamin D supplementation in women during pregnancy and in children with ASD. Vitamin D supplementation at an adequate dose during pregnancy reduces the incidence of autism [20]. Administration of vitamin D to a vitamin D deficient 32-month-old boy with ASD improved his core symptoms of autism [13]. Among the children in this study, vitamin D supplementation had an ameliorative effect on "communication skills", "ADL skills", "social skills" or "motor skills" associated with the disorder.
Vitamin D is regarded as a hormone that is active not only in regulating blood calcium but also in brain development. Vitamin D may have a positive effect on serotonin, which affects brain development [21]. The developmental disruption of serotonin signaling may be involved in autism [22]. These results suggest that vitamin D supplementation may ameliorate clinical symptoms in ASD children. This is a preliminary study with a very small number of subjects, and it was uncontrolled. Further study with larger numbers of subjects is warranted, and could reveal optimal 25OHD levels for ameliorating ASD symptoms and preventing falls in ASD children.
Conclusion
These findings indicate that vitamin D supplementation might improve adaptive function.
Structural analysis of SARS-CoV-2 genome and predictions of the human interactome
Abstract

Specific elements of viral genomes regulate interactions within host cells. Here, we calculated the secondary structure content of >2000 coronaviruses and computed >100 000 human protein interactions with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The genomic regions display different degrees of conservation. SARS-CoV-2 domain encompassing nucleotides 22 500–23 000 is conserved both at the sequence and structural level. The regions upstream and downstream, however, vary significantly. This part of the viral sequence codes for the Spike S protein that interacts with the human receptor angiotensin-converting enzyme 2 (ACE2). Thus, variability of Spike S is connected to different levels of viral entry in human cells within the population. Our predictions indicate that the 5′ end of SARS-CoV-2 is highly structured and interacts with several human proteins. The binding proteins are involved in viral RNA processing, include double-stranded RNA specific editases and ATP-dependent RNA-helicases and have strong propensity to form stress granules and phase-separated assemblies. We propose that these proteins, also implicated in viral infections such as HIV, are selectively recruited by SARS-CoV-2 genome to alter transcriptional and post-transcriptional regulation of host cells and to promote viral replication.
INTRODUCTION
A disease named Covid-19 by the World Health Organization and caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been recognized as responsible for the pneumonia outbreak that started in December 2019 in Wuhan City, Hubei, China (1) and spread in February 2020 to Milan, Lombardy, Italy (2), becoming pandemic.
SARS-CoV-2 is a positive-sense single-stranded RNA virus that shares similarities with other beta-coronavirus such as severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV) (3). Bats have been identified as the primary host for SARS-CoV and SARS-CoV-2 (4,5) but the intermediate host linking SARS-CoV-2 to humans is still unknown, although a recent report indicates that pangolins could be involved (6).
Coronaviruses use species-specific proteins to mediate the entry in the host cell and the spike S protein activates the infection in human respiratory epithelial cells in SARS-CoV, MERS-CoV and SARS-CoV-2 (7). Spike S is assembled as a trimer and contains around 1300 amino acids within each unit (8,9). The receptor binding domain (RBD) of Spike S, which contains around 300 amino acids, mediates the binding with angiotensin-converting enzyme (ACE2), attacking respiratory cells. A region upstream of the RBD, present in
Structure prediction
We computed the secondary structure of transcripts using CROSS (Computational Recognition of Secondary Structure) (12,13). The algorithm predicts the structural profile (single- and double-stranded state) at single-nucleotide resolution using sequence information only and without sequence length restrictions (scores > 0 indicate double-stranded regions). We used the Vienna RNA Package (25) to further investigate the RNA secondary structure of minima and maxima identified with CROSS (13).
CROSSalive was employed to predict SARS-CoV-2 secondary structure in vivo (26). CROSSalive (m6A+ fast option) predicts long-range interactions and can identify pseudoknots of 50-100 nucleotides. The RF-Fold algorithm of the RNAFramework suite (26) was used to identify pseudoknots in SARS-CoV-2. In this analysis, the partition function was calculated using CROSS calculations as soft-constraints. RNA was then folded employing the Vienna RNA Package (25) and pseudo-knotted bases were hard-constrained to be single-stranded.
Structural conservation
We used CROSSalign (12,13), an algorithm based on Dynamic Time Warping (DTW), to evaluate the structural conservation between different viral genomes (13). CROSSalign was previously employed to study the structural conservation of ∼5000 HIV genomes. SARS-CoV-2 fragments (1000 nt, non-overlapping) were searched inside other complete genomes using the OBE (open begin and end) module, in order to search a small profile inside a larger one. The lower the structural distance, the higher the structural similarity (with a minimum of 0 for almost identical secondary structure profiles). The significance is assessed as in the original publication (13).
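As a rough illustration of the distance CROSSalign computes, a minimal global dynamic-time-warping sketch is shown below. This is not CROSSalign itself: the actual OBE (open begin and end) variant and the significance estimate are more involved, and the profiles here are toy values.

```python
# Global DTW between two structural profiles (lists of per-nucleotide scores).
# Identical profiles give distance 0, in line with CROSSalign's convention.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # local mismatch between scores
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw_distance([0.1, 0.8, -0.5], [0.1, 0.8, -0.5]))  # identical profiles → 0.0
print(dtw_distance([0.1, 0.8, -0.5], [0.2, 0.7, -0.4]))  # similar profiles → small distance
```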
The Infernal package (version 1.1.3) was employed to build covariance models (CMs) for fragments 22, 23 and 24 (27). The package was then used to search for sequence and structural similarities among RNAs in our database (267 representative sequences), which allows the identification of matches below a specific E-value threshold (0.1, 1 and 10). The analysis shows agreement with CROSSalign (12,13) results. The minimum and maximum numbers of identified motifs were 224 and 4878 (E-value of 10), 136 and 3093 (E-value of 1) and 94 and 1060 (E-value of 0.1). The motifs in the Spike S region were counted for annotated coronaviruses (239 genomes out of 246, of which 161 within an E-value of 0.1).
Sequence collection
The FASTA sequences of the complete genomes of SARS-CoV-2 were downloaded in March 2020 from Virus Pathogen Resource (VIPR; www.viprbrc.org), for a total of 62 strains. An additional non-redundant set was downloaded in August 2020 for further analyses (462 sequences). Regarding the other coronaviruses, the sequences were downloaded in March 2020 from NCBI selecting only complete genomes, for a total of 2040 genomes. The reference Wuhan sequence with available annotation (EPI ISL 402119) was downloaded from Global Initiative on Sharing All Influenza Data in March 2020 (GISAID https://www.gisaid.org/).
Protein-RNA interaction prediction
Interactions between each fragment of the target sequence and the human proteome were predicted using catRAPID omics (18,19), an algorithm that estimates the binding propensity of protein-RNA pairs by combining secondary structure, hydrogen bonding and van der Waals contributions. As reported in a recent analysis of about half a million experimentally validated interactions (21), the algorithm is able to separate interacting vs non-interacting pairs with an area under the ROC curve of 0.78. The complete list of interactions between the 30 fragments and the human proteome is available at http://crg-webservice.s3.amazonaws.com/submissions/2020-03/252523/output/index.html?unlock=f6ca306af0. The output is then filtered according to the Z-score column, which is the interaction propensity normalised by the mean and standard deviation calculated over the reference RBP set (http://s.tartaglialab.com/static_files/shared/faqs.html#4). We used three thresholds in ascending order of stringency, Z ≥ 1.50, 1.75 and 2.00, and for each threshold we then selected the proteins that were unique to each fragment. omiXcore calculations of ADAR and ADARB1 interactions are available, respectively, at http://crg-webservice.s3.amazonaws.com/submissions/2020-04/263420/output/index.html?unlock=f9375fdbf9 and http://crg-webservice.s3.amazonaws.com/submissions/2020-04/263140/output/index.html?unlock=bb28d715ea.
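The Z-score filtering step can be sketched as follows. The reference distribution and the per-fragment propensities below are invented numbers, not actual catRAPID output:

```python
# Normalize raw interaction propensities into Z-scores against a reference set,
# then keep interactions above increasingly stringent thresholds.
from statistics import mean, stdev

reference = [1.0, 2.0, 1.5, 0.5, 2.5, 1.2, 0.8, 1.8]  # reference RBP propensities (made up)
mu, sigma = mean(reference), stdev(reference)

scores = {"PTBP1": 2.9, "HNRNPQ": 2.6, "PROT_X": 1.4}  # per-fragment propensities (made up)
zscores = {name: (s - mu) / sigma for name, s in scores.items()}

for threshold in (1.50, 1.75, 2.00):  # ascending stringency, as in the text
    kept = [name for name, z in zscores.items() if z >= threshold]
    print(f"Z >= {threshold:.2f}: {kept}")
```

Raising the threshold shrinks the kept list, which is why the high-confidence interactomes discussed below are the smallest.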
GO terms analysis
cleverGO (28), an algorithm for the analysis of Gene Ontology annotations, was used to determine which fragments present enrichment in GO terms related to viral processes. Analysis of functional annotations was performed in parallel with GeneMania (29). The link to the cleverGO analysis for fragment 1 is http://www.tartaglialab.com/GO_analyser/render_GO_universal/3073/0d66e887c3/ (Z ≥ 2).
RNA and protein alignments
We used Clustal W (30) for alignments of the 62 SARS-CoV-2 strains and T-Coffee (31) for spike S protein alignments. The variability in the Spike S region was measured by computing the Shannon entropy on translated RNA sequences. The Shannon entropy at position i is computed as

S(i) = -Σ_a P(a,i) log P(a,i)

where a runs over the amino acids and P(a,i) is the frequency of amino acid a at position i of the sequence. Low entropy indicates poor variability: if P(a,i) = 1 for one a and 0 for the rest, then S(i) = 0. By contrast, if the frequencies of all amino acids are equally distributed, the entropy reaches its maximum possible value.
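A minimal sketch of this per-position entropy calculation (log base 2 chosen here, since the base only rescales values; the sequences are toy examples, not the Spike S alignment):

```python
# Per-column Shannon entropy of an alignment: S(i) = -sum_a P(a,i) * log2 P(a,i).
from collections import Counter
from math import log2

alignment = ["MKVA", "MKVA", "MRVA", "MKLA"]  # toy aligned protein sequences

def column_entropy(column):
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * log2(c / total) for c in counts.values())

entropies = [column_entropy([seq[i] for seq in alignment])
             for i in range(len(alignment[0]))]
print([round(e, 3) for e in entropies])  # invariant columns score 0, variable columns score higher
```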
Predictions of phase separation
catGRANULE (32) was employed to identify proteins assembling into biological condensates. Scores >0 indicate that a protein is prone to phase separate. Structural disorder, nucleic acid binding propensity and amino acid patterns such as arginine-glycine and phenylalanine-glycine are key features combined in this computational approach (32).
SARS-CoV-2 contains highly structured elements
Structured elements within RNA molecules attract proteins (14) and reveal regions important for interactions with the host (33). Indeed, each gene expressed from SARS-CoV-2 is preceded by conserved transcription-regulating sequences that act as signal for the transcription complex during the synthesis of the RNA minus strand to promote a strand transfer to the leader region to resume the synthesis. This process is named discontinuous extension of the minus strand and is a variant of similarity-assisted template switching that operates during viral RNA recombination (17).
To analyze SARS-CoV-2 structure (reference Wuhan strain MN908947.3), we employed CROSS (12), which was previously developed to predict the double- and single-stranded content of RNA genomes such as HIV-1 (13). We found the highest density of double-stranded regions in the 5′ end (nucleotides 1-253), membrane M protein (nucleotides 26 523-27 191), spike S protein (nucleotides 21 563-25 384), and nucleocapsid N protein (nucleotides 28 274-29 533; Figure 1A) (34). The lowest density of double-stranded regions was observed at nucleotides 6 000-6 250 and 20 000-21 500, corresponding to the regions between the non-structural proteins nsp14 and nsp15 and the region upstream of the spike surface protein S (Figure 1) (34). In addition to the maximum corresponding to nucleotides 22 500-23 000, the structural content of the Spike S protein shows minima at around nucleotides 21 500-22 000 and 23 500-24 000 (Figure 1). We used the Vienna method (25) to further investigate the RNA secondary structure of specific regions identified with CROSS (13). Employing a 100-nucleotide window centered around CROSS maxima and minima, we found good agreement between CROSS scores and Vienna free energies (Figure 1).
RNA structure in vitro and in vivo could be significantly different due to interactions with proteins and other molecules (26). Using CROSSalive to predict the double- and single-stranded content of SARS-CoV-2 in the cellular context, we found that both the 5′ and 3′ ends are the most structured regions, followed by nucleotides 22 500-23 000 in the Spike S region, while nucleotides 6 000-6 250 and 20 000-21 500 have the lowest density of double-stranded regions (Figure 1B). The region corresponding to nucleotides 13 400-13 600 shows a high density of contacts. This part of the SARS-CoV-2 sequence has been proposed to form a pseudoknot (35) that is also visible in the CROSS profile (Figure 1A), but CROSSalive is able to identify long-range interactions and better identifies the region. Additionally, we used the RF-Fold algorithm of the RNAFramework suite (36) (Materials and Methods) to search for pseudoknots, employing CROSS predictions as soft-constraints for RF-Fold.

[Figure 1 caption, partial: (B) Using CROSSalive (26), we studied the structural content of SARS-CoV-2 in vivo; the 5′ and 3′ ends (red boxes) are predicted to be highly structured, and nucleotides 22 500-23 000 in the Spike S region and nucleotides 13 400-13 600 (red box), forming a pseudoknot (35), show a high density of contacts. (C) Comparison of CROSS predictions with the secondary structure landscape of SARS-CoV-2 revealed by SHAPE-MaP (38): from low (10%) to high (0.1%) confidence scores, the predictive power, measured as the Area Under the Curve (AUC) of Receiver Operating Characteristics (ROC), increases monotonically (HC corresponds to the 10 nucleotides with highest/lowest scores). (D) CROSS performances on betacoronavirus 5′ and 3′ ends (39-42): using different confidence scores, CROSS identifies double- and single-stranded regions with great predictive power.]
To validate our results, we compared CROSS predictions of double- and single-stranded content (as released in March 2020) with the secondary structure landscape of SARS-CoV-2 revealed by SHAPE mutational profiling (SHAPE-MaP) (38). In their experimental work, Manfredonia et al. carried out in vitro refolding of RNA followed by probing with 2-methylnicotinic acid imidazolide. In our comparison, balanced lists of single- and double-stranded regions were used for the calculations: a confidence score of 10% indicates that we compared the SHAPE reactivity values of the 3000 nucleotides associated with the highest CROSS scores (i.e. double-stranded) and the 3000 nucleotides associated with the lowest CROSS scores (i.e. single-stranded). From low (10%) to high (0.1%) confidence scores, we observed that the predictive power, measured as the Area Under the Curve (AUC) of Receiver Operating Characteristics (ROC), increases monotonically, reaching the value of 0.73 (the AUC is 0.74 for the 10 highest/lowest scores; Figure 1C), which indicates that CROSS reproduces SHAPE-MaP data in great detail.
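The AUC behind this kind of comparison can be sketched with a rank-based (Mann-Whitney) estimator. The labels and scores below are illustrative stand-ins for SHAPE-derived classes and CROSS-like scores, not actual data:

```python
# Rank-based AUC: probability that a randomly chosen double-stranded nucleotide
# receives a higher structure score than a randomly chosen single-stranded one.
def auc(labels, scores):
    ranked = sorted(zip(scores, labels))  # ascending by score; assumes no ties
    pos_rank_sum = sum(rank for rank, (_, lab) in enumerate(ranked, start=1) if lab == 1)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    return (pos_rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = [1, 1, 0, 1, 0, 0]              # 1 = double-stranded, 0 = single-stranded (toy)
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.5]  # predicted structure scores (toy)
print(auc(labels, scores))  # → 0.7777...
```

An AUC of 0.5 would mean the scores carry no information about strandedness; 1.0 would mean perfect separation.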
We also assessed CROSS performances on structures of betacoronavirus 5′ and 3′ ends (39-42) (Figure 1D). In this analysis, we used RFAM multiple sequence alignments of betacoronavirus 5′ and 3′ ends and the relative consensus structures (RF03117 and RF03122) (39-42). We generated the 2D representation of nucleotide chains of consensus structures. We extracted the 'secondary structure occupancy', as defined in a previous work (20), and counted the contacts present around each nucleotide. Following the procedure used for the comparison with SHAPE-MaP, different progressive cut-offs were used for ranking all the structures using balanced lists of single- and double-stranded regions: 10% indicates that we compared the 600 nucleotides associated with the highest number of contacts and the 600 nucleotides associated with the lowest number of contacts. From low (10%) to high (0.1%) confidence scores, we observed that the AUC of ROC increases monotonically, reaching the value of 0.75 (the 10 highest/lowest scores have an AUC of 0.78; Figure 1D), which indicates that CROSS identifies known double- and single-stranded regions in great detail. We also tested the ability of CROSS to recognize specific secondary structures in representative cases for which we studied both the 3′ and 5′ ends: NC_006213 (Human coronavirus OC43 strain ATCC VR-759), NC_019843 (Middle East respiratory syndrome coronavirus), NC_026011 (Betacoronavirus HKU24 strain HKU24-R05005I), NC_001846 (Mouse hepatitis virus strain MHV-A59 C12) and NC_012936 (Rat coronavirus Parker) (Supplementary Figure S1).
In summary, our analysis identifies several structural elements in SARS-CoV-2 genome (11). Different lines of experimental and computational evidence indicate that transcripts containing a large amount of double-stranded regions have a strong propensity to recruit proteins (14,43) and can act as scaffolds for protein assembly (15,16). We therefore expected that the 5 end attracts several host proteins because of the enrichment in secondary structure elements. The binding would not just involve proteins interacting with double-stranded regions. If a specific protein contact occurs in a loop at the end of a long RNA stem, the overall region is enriched in double-stranded nucleotides but the specific interaction takes place in a single-stranded element.
Structural comparisons reveal that a spike S region of SARS-CoV-2 is conserved among coronaviruses
We employed CROSSalign (13) to study the structural conservation of SARS-CoV-2 in different strains (Materials and Methods).
In our analysis, we compared the Wuhan strain MN908947.3 with 2040 coronaviruses (reduced to 267 sequences upon redundancy removal at 95% sequence similarity (44); Figure 2; full data shown in Supplementary Figure S2).
We note that the regulatory regions located at the 3 end are slightly longer (about 250-500 nts containing a bulged stem loop, a pseudoknot plus a poly-A tail) than the ones at the 5 end (the 1-4 stem loops are within the first 200 nucleotides) and their structural elements are therefore better recognized within the 1000 nucleotides window that we use for our analysis (45). Although the 5 end is variable, it is more structured in SARS-CoV-2 than other coronaviruses (average structural content of 0.56, indicating that 56% of the CROSS signal is >0). The 3 end is less variable and slightly less structured (average structural content of 0.49). By contrast, the other coronaviruses have lower average structural content of 0.49 in the 5 end and 0.42 in the 3 end.
One conserved region falls inside the Spike S genomic locus between nucleotides 22 000 and 23 000 and exhibits an intricate and stable secondary structure (RNAfold minimum free energy = -285 kcal/mol) (25). High conservation of a structured region suggests a functional activity that is relevant for host infection. To demonstrate the conservation of nucleotides 22 000-23 000 (fragment 23), we divided this region and the adjacent ones (nucleotides 21 000-22 000 and 23 000-24 000) into sub-fragments. We then used the RF-Fold algorithm of the RNAFramework suite (36) to fold the different subregions using CROSS predictions as soft-constraints. The structural motifs identified with this procedure were employed to build covariance models (CMs) that were then searched in our set of coronaviruses using the Infernal package (27). We found that nucleotides 501-750 within fragment 23 have the highest number of matches for different confidence thresholds, implying a higher chance of sequence and structure conservation across coronaviruses (E-values of 10, 1 and 0.1; Figure 2B). We specifically counted the matches falling in the Spike S region (±1000 nucleotides to take into account the division of the genome into fragments; Supplementary Table S1). For the large majority of annotated sequences, we found a match falling in the Spike S region (239 genomes out of 246, of which 161 with E-value below 0.1). This further emphasizes the conservation of the region in exam.
Sequence and structural comparisons among SARS-CoV-2 strains
To better investigate the sequence conservation of SARS-CoV-2, we compared 62 strains isolated from different countries during the pandemic (including China, USA, Japan, Taiwan, India, Brazil, Sweden and Australia; data from NCBI and in VIPR www.viprbrc.org; Materials and Methods). Our analysis aims to determine the relationship between structural content and sequence conservation.
Using ClustalW for multiple sequence alignments (30), we observed general conservation of the coding regions (Figure 3A). The 5′ and 3′ ends show high variability due to practical aspects of RNA sequencing and are discarded in this analysis (46). Indeed, their sequences are less well characterized (47), and their variation is higher than in other parts of the viral sequence. One highly conserved region lies between nucleotides 22 000 and 23 000 in the Spike S genomic locus, while sequences up- and downstream are variable (purple bars in Figure 3A). We then used CROSSalign (13) to compare the structural content (Materials and Methods). High variability of structure is observed for both the 5′ and 3′ ends and for nucleotides 21 000-22 000 as well as 24 000-25 000, associated with the Spike S region (purple bars in Figure 3A). The rest of the regions are significantly conserved at a structural level (P-value < 0.0001; Fisher's test).
We note that sequence conservation ( Figure 3A) and secondary structure profiles ( Figure 1A) are statistically related. Following the analysis to compare CROSS and SHAPE scores, we selected balanced groups of nucleotides with the highest and lowest sequence conservation and measured their single and double stranded content: a conservation score of 1% indicates that we compared 300 nucleotides with the highest sequence similarity and 300 nucleotides with the lowest sequence similarity. At conservation score of 1% (or less stringent threshold of 10%), the match between similarity and structure, measured as the AUC of ROC is 0.76 (or 0.60, respectively). The association is statistically significant: shuffling the sequence conservation profiles, the empirical P-values are <0.02 (at both 10% and 1% conservation scores).
We also compared protein sequences coded by the Spike S genomic locus (NCBI reference QHD43416) and found that both the sequence (Figure 3A) and structure (Figure 2) of nucleotides 22 000-23 000 are highly conserved. The region corresponds to amino acids 460-520, which contact the host receptor angiotensin-converting enzyme 2 (ACE2) (48), promoting infection and provoking lung injury (24,49). By contrast, the region upstream of the ACE2 binding site, located in correspondence to the minimum of the structural profile at around nucleotides 22 500-23 000 (Figure 1), is highly variable, as indicated by T-Coffee multiple sequence alignments (31) (Figure 3A). This part of the Spike S region corresponds to amino acids 243-302, which in MERS-CoV bind to sialic acids, regulating infection through cell-cell membrane fusion (Figure 3B). Our analysis suggests that the structural region between nucleotides 22 000 and 23 000 of the Spike S region is conserved among coronaviruses (Figure 2) and that the binding site for ACE2 has poor variation in human SARS-CoV-2 strains (Figure 3B). By contrast, the region upstream of the ACE2 binding site, which also has a propensity to bind sialic acids (10,50,51), shows poor structural content and high variability (Figure 3B). The region downstream of the ACE2 binding site, located at the beginning of the S2 domain, shows high variability (Figure 3B). The results are confirmed by analysing a pool of 462 genomes having a ±5 nucleotides length difference with respect to MN908947.3 (August 2020; Supplementary Figure S3).
Analysis of human interactions with SARS-CoV-2 identifies proteins involved in viral replication
In order to obtain insights into how the virus replicates in human cells, we predicted SARS-CoV-2 interactions with the whole RNA-binding human proteome. Following a protocol to study structural conservation in viruses (13), we first divided the Wuhan sequence into 30 fragments of 1000 nucleotides each, moving from the 5′ to the 3′ end, and then calculated the protein-RNA interactions of each fragment with catRAPID omics (3 340 canonical and putative RNA-binding proteins, or RBPs, for a total of 102 000 interactions) (18). Proteins such as Polypyrimidine tract-binding protein 1 PTBP1 (Uniprot P26599) showed the highest interaction propensity (or Z-score; Materials and Methods) at the 5′ end, while others such as heterogeneous nuclear ribonucleoprotein Q HNRNPQ (O60506) showed the highest interaction propensity at the 3′ end, in agreement with previous studies on coronaviruses (Figure 4A) (52).
For each fragment, we predicted the most significant interactions by filtering according to the Z-score. We used three thresholds in ascending order of stringency (Z ≥ 1.50, 1.75 and 2.00) and we removed from the list the proteins that were predicted to interact promiscuously with more than one fragment. Fragment 1 corresponds to the 5′ end and is the most contacted by RBPs (∼120 high-confidence interactions with Z ≥ 2; Figure 4B), which is in agreement with the observation that highly structured regions attract a large number of proteins (14). Indeed, the 5′ end contains multiple stem loop structures that control RNA replication and transcription (53,54). By contrast, the 3′ end and fragment 23 (Spike S), which are still structured but to a lesser extent, attract fewer proteins (10 and 5, respectively), and fragment 20 (between Orf1ab and Spike S), which is predicted to be unstructured, does not have predicted binding partners. Fragments 1 and 29, together with the adjacent regions, are also predicted to be the most structured in vivo and show the highest number of contacts at different Z-scores (Figure 1B).
The interactome of each fragment was analysed using cleverGO, a tool for Gene Ontology (GO) enrichment analysis (28). Proteins interacting with fragments 1, 2 and 29 were associated with annotations related to viral processes (Figure 4C; Supplementary Table S2). Considering the three thresholds applied (Materials and Methods), we found 23 viral proteins (including 2 pseudogenes) for fragment 1, 2 proteins for fragment 2 and 11 proteins for fragment 29 (Figure 4D). Among the high-confidence interactors of fragment 1, we discovered RBPs involved in positive regulation of viral processes and viral genome replication, such as double-stranded RNA-specific editase 1 ADARB1 (Uniprot P78563), 2-5A-dependent ribonuclease RNASEL (Q05823) and 2′-5′-oligoadenylate synthase 2 OAS2 (P29728; Figure 5A). Interestingly, 2′-5′-oligoadenylate synthase 2 OAS2 has been reported to be upregulated in human alveolar adenocarcinoma (A549) cells infected with SARS-CoV-2 (log fold change of 4.2; P-value of 10^-9 and q-value of 10^-6) (55). While double-stranded RNA-specific adenosine deaminase ADAR (P55265) is absent in our library due to its length, which does not meet catRAPID omics requirements (18), the omiXcore extension of the algorithm specifically developed for large molecules (56) attributes the same binding propensity to both ADARB1 and ADAR, thus indicating that the interactions with SARS-CoV-2 are likely to occur (Materials and Methods).

[Figure 3 caption, partial: T-Coffee multiple sequence alignments (31) of the spike S protein indicate conservation between amino acids 460 and 520 (blue box), which bind the host receptor angiotensin-converting enzyme 2 ACE2; the region encompassing amino acids 243-302 is highly variable and is implicated in sialic acid binding in MERS-CoV (red box); the S1 and S2 domains of Spike S protein are displayed.]
Moreover, experimental works indicate that the family of ADAR deaminases is active in bronchoalveolar lavage fluids derived from SARS-CoV-2 patients (57) and is upregulated in A549 cells infected with SARS-CoV-2 (log fold change of 0.58; P-value of 10 −8 and q-value of 10 −5 ) (55).
We also identified proteins related to the establishment of integrated proviral latency, including X-ray repair cross-complementing protein 5 XRCC5 (P13010) and X-ray repair cross-complementing protein 6 XRCC6 (P12956; Figure 5A). In accordance with our calculations, comparison of A549 cell responses to SARS-CoV-2 and respiratory syncytial virus indicates upregulation of XRCC6 in SARS-CoV-2 (log fold-change of 0.92; P-value of 0.006 and q-value of 0.23) (55). Moreover, previous evidence suggests that the binding of XRCC6 takes place at the 5′ end of SARS-CoV-2, thus giving further support to our predictions (58). Nucleolin NCL (P19338), a protein known to be involved in coronavirus processing, was also predicted to bind tightly to the 5′ end (Supplementary Table S2) (59). Importantly, we found proteins related to defence response to viruses, such as ATP-dependent RNA helicase DDX1 (Q92499), that are involved in negative regulation of viral genome replication. Some DNA-binding proteins, such as Cyclin-T1 CCNT1 (O60563), Zinc finger protein 175 ZNF175 (Q9Y473) and Prospero homeobox protein 1 PROX1 (Q92786), were included because they could have potential RNA-binding ability (Figure 5A) (60).

[Figure 4 caption, partial: catRAPID predictions (18,19) at confidence levels Z = 1.5 (low), Z = 1.75 (medium) and Z = 2.0 (high); regions with scores lower than Z = 1.5 are omitted. (C) Enrichment of viral processes in the 5′ end of SARS-CoV-2 (precision = term precision calculated from the GO graph structure; lvl = depth of the term; go term = GO term identifier, with link to the term description at the AmiGO website; description = label for the term; e/d = enrichment/depletion compared to the population; % set = coverage on the provided set; % pop = coverage of the same term on the population; p bonf = P-value of the enrichment, Bonferroni-corrected for multiple testing bias) (28). (D) Viral processes are the third largest cluster identified in our analysis.]
As for fragment 2, we found two canonical RBPs: E3 ubiquitin-protein ligase TRIM32 (Q13049) and E3 ubiquitin-protein ligase TRIM21 (P19474), which are listed as negative regulators of viral release from host cells, negative regulators of viral transcription and positive regulators of viral entry into host cells. Among these genes, DDX1 (log fold change of 0.36; P-value of 0.007 and q-value of 0.23) and TRIM21 (log fold change of 0.44; P-value of 0.003 and q-value of 0.18) are also upregulated in A549 cells infected with SARS-CoV-2 (55). Ten of the 11 viral proteins detected for fragment 29 are members of the Gag polyprotein family, which performs different tasks during HIV assembly, budding and maturation. More than just scaffold elements, these proteins accompany viral and host proteins as they traffic to the cell membrane (Supplementary Table S2) (61). Finally, among the RBPs with the highest interaction propensity for fragment 23, we found nucleosome assembly protein 1-like 1 NAP1L1 and E3 ubiquitin-protein ligase makorin-1 MKRN1, which could have an effect on the regulation of cell proliferation.
Analysis of functional annotations carried out with GeneMania (29) revealed that proteins interacting with the 5′ end of SARS-CoV-2 RNA are associated with regulatory pathways involving NOTCH2, MYC and MAX, which have been previously connected to viral infection processes (Figure 5A) (62,63). Interestingly, some proteins, including DDX1, CCNT1 and ZNF175 for fragment 1 and TRIM32 for fragment 2, have been shown to be necessary for HIV functions and replication inside the cell, as well as for SARS-CoV-1. DDX1 has been shown to enable the switch from discontinuous to continuous transcription in SARS-CoV-1 infection, and its knockdown reduced the number of longer sub-genomic mRNAs (sgmRNAs) through interaction with the SARS-CoV-1 nucleocapsid protein N (64) and Nsp14 (65). It functions as a bidirectional helicase, which distinguishes it from the coronavirus helicases that can only unwind RNA in the 5′ to 3′ direction (66), a very important function in a highly structured RNA such as SARS-CoV-2. DDX1 is also required for HIV-1 Rev as well as for avian coronavirus IBV replication, and it binds to the RRE sequence of HIV-1 RNAs (67,68), while CCNT1 binds to 7SK snRNA and regulates the transactivation domain of the viral nuclear transcriptional activator, Tat (69,70).
Analyses of SARS-CoV-2 protein interactomes reveal common protein targets
Recently, Gordon et al. reported a list of human proteins binding to Open Reading Frames (ORFs) translated from SARS-CoV-2 (71). Identified through affinity purification followed by mass spectrometry quantification, 332 proteins from HEK-293T cells interact with viral ORF peptides. By selecting 274 proteins binding at the 5′ end with Z score ≥1.5 (Supplementary Table S2), of which 140 exclusively interact with fragment 1 (Figure 4B), we found that 8 are also reported in the list by Gordon et al. (71), which indicates significant enrichment (representation factor of 2.5; P-value of 0.02; hypergeometric test with the human proteome as background). The fact that our list of protein-RNA binding partners contains elements identified also in the protein-protein network analysis is not surprising, as ribonucleoprotein complexes evolve together (14) and their components sustain each other through different types of interactions (16).
We note that out of 332 interactions, 60 are RBPs (as reported in Uniprot), which represents a considerable fraction (i.e. 20%), considering that there are around 1500 RBPs in the human proteome (i.e. 6%). Comparing the RBPs present in Gordon et al. (71) and those present in our list (79 RBPs annotated in Uniprot), we found an overlap of six proteins (representation factor = 26.5; P-value < 10^−8; hypergeometric test), including: Janus kinase and microtubule-interacting protein 1 JAKMIP1 (Q96N16), A-kinase anchor protein 8 AKAP8 (O43823) and A-kinase anchor protein 8-like AKAP8L (Q9ULX6), which in the case of HIV-1 infection is involved in DEAD/H-box RNA helicase binding (72), signal recognition particle subunit SRP72 (O76094), which binds to the 7S RNA in the presence of SRP68, La-related protein 7 LARP7 (Q4G0J3) and La-related protein 4B LARP4B (Q92615), which are part of a system for transcriptional regulation acting by means of the 7SK RNP system (73) (Figure 5B; Supplementary Table S3). We speculate that sequestration of these elements is orchestrated by a viral program aiming to recruit host genes (74). LARP7 is also upregulated in A549 cells infected with SARS-CoV-2 (log fold change of 0.48; P-value of 0.006 and q-value of 0.23) (55).
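The representation factor and hypergeometric P-value used throughout this section can be computed as below. The human proteome size (20 000) is an assumed round number, which is why the factor comes out near, but not exactly at, the 26.5 quoted above.

```python
from scipy.stats import hypergeom

def overlap_enrichment(n_universe, n_a, n_b, n_overlap):
    """Representation factor (observed / expected overlap) and one-sided
    hypergeometric P-value P(X >= n_overlap) for the overlap of two sets
    of sizes n_a and n_b drawn from a universe of n_universe items."""
    expected = n_a * n_b / n_universe
    p_value = hypergeom.sf(n_overlap - 1, n_universe, n_a, n_b)
    return n_overlap / expected, p_value

# Counts quoted above: 6 shared RBPs between our 79 and the 60 of Gordon et al.,
# against a human proteome background (size 20 000 assumed here).
rf, p = overlap_enrichment(20000, 79, 60, 6)
print(f"representation factor = {rf:.1f}, P = {p:.1e}")
```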
Using the liver cell line HuH7, a recent experimental study by Schmidt et al. identified SARS-CoV-2 RNA associations within the human host (76). Through the RAP-MS approach, 571 interactions were detected, of which 250 are RBPs (as reported in Uniprot) (76).
In common with our library, we found an overlap of 148 proteins. We compared predicted (as released in March 2020) and experimentally validated interactions employing balanced lists of high-affinity (high fold change with respect to the RNA Mitochondrial RNA Processing Endoribonuclease RMRP) and low-affinity (low fold change with respect to RMRP) associations: a confidence score of 25% indicates that we compared the interaction scores of the 35 proteins with the highest fold-change values against those of the 35 interactions with the lowest fold-change values. From low (25%) to high (5%) confidence scores, we observed that the predictive power, measured as the AUC of the ROC, increases monotonically, reaching the remarkable value of 0.99 (the AUC is 0.72 for the 25% confidence score; Supplementary Figure S4), which indicates strong agreement between predictions and experiments. In addition to DDX1 and DDX3X (O00571), other interactions corresponding to catRAPID scores >1.5 and fold change >1 include Insulin-like growth factor 2 mRNA-binding protein 1 IGF2BP1 (Q9NZI8), Insulin-like growth factor 2 mRNA-binding protein 2 IGF2BP2 (Q9Y6M1) and La-related protein 4 LARP4 (Q71RC2; also in Gordon et al. (71)).
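As a sketch of this AUC comparison, the area under the ROC curve for a two-class list of interaction scores reduces to the Mann-Whitney statistic; the scores below are made up for illustration.

```python
import numpy as np

def roc_auc(pos_scores, neg_scores):
    """AUC of the ROC as the Mann-Whitney probability that a high-affinity
    (positive) interaction outscores a low-affinity (negative) one."""
    pos = np.asarray(pos_scores, float)[:, None]
    neg = np.asarray(neg_scores, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Illustrative scores: perfectly separated lists give AUC = 1.0
print(roc_auc([2.1, 1.8, 1.6], [0.4, 0.2, 0.9]))  # 1.0
```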
By directly analysing the RNA interactions of all 571 proteins from Schmidt et al. (76), catRAPID identified 18 strong RBP binders at the 5′ end (Z score ≥ 1.5; fold change >1; P-value of 0.008 computed with respect to all the interactions; Fisher's exact test; Supplementary Table S4).
Smaller enrichments were found for proteins related to Hepatitis B virus (HBV; P-value = 0.01; three hits out of seven in the whole catRAPID library; Fisher's exact test), including Nuclear receptor subfamily 5 group A member 2 NR5A2 (DNA-binding; O00482), Interferon-induced, double-stranded RNA-activated protein kinase EIF2AK2 (P19525), and SRSF protein kinase 1 SRPK1 (Q96SB4), as well as Influenza A (P-value = 0.03; two hits out of four; Fisher's exact test), including Synaptic functional regulator FMR1 (Q06787) and RNA polymerase-associated protein RTF1 homologue (Q92541; Supplementary Table S5). By contrast, no significant enrichments were found for other viruses such as Ebola.
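These viral-annotation enrichments can be reproduced schematically with Fisher's exact test. The 2x2 table uses the counts quoted for HBV (3 of the 18 strong binders, 7 in total); the size of the background pool is our assumption (we use the 571 proteins of the Schmidt et al. list), so the P-value is indicative only.

```python
from scipy.stats import fisher_exact

# HBV annotations: 3 of the 18 strong 5'-end binders vs 7 in the background
# pool. Pool size (571, the Schmidt et al. list) is our assumption, not the
# paper's "whole catRAPID library".
hbv_hits, strong_binders = 3, 18
hbv_total, pool_size = 7, 571
table = [[hbv_hits, strong_binders - hbv_hits],
         [hbv_total - hbv_hits,
          pool_size - strong_binders - (hbv_total - hbv_hits)]]
odds, p = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds:.1f}, P = {p:.4f}")
```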
Phase-separating proteins are enriched in the 5′ end interactions
As SARS-CoV-2 represses host gene expression through a number of unknown mechanisms, sequestration of cell transcription machinery elements could be exploited to alter biological pathways in the host cell. A number of proteins identified in our catRAPID calculations have been previously reported to coalesce in large ribonucleoprotein assemblies similar to stress granules. Among these proteins, we found double-stranded RNA-activated protein kinase EIF2AK2 (P19525), Nucleolin NCL (P19338), ATP-dependent RNA helicase DDX1 (Q92499), Cyclin-T1 CCNT1 (O60563), signal recognition particle subunit SRP72 (O76094), LARP7 (Q4G0J3) and La-related protein 4B LARP4B (Q92615), as well as Polypyrimidine tract-binding protein 1 PTBP1 (P26599) and Heterogeneous nuclear ribonucleoprotein Q HNRNPQ (O60506) (78). To further investigate the propensity of these proteins to phase separate, we used the catGRANULE algorithm (Materials and Methods) (32). Differently from other methods that predict solid-like aggregation (79,80), catGRANULE estimates the propensity of proteins to form liquid-like assemblies such as stress granules (81). We found that the 274 proteins binding to the 5′ end (fragment 1) with Z score ≥1.5 are highly prone to accumulate in assemblies similar to stress granules (the 274 proteins with the lowest Z scores are used in the comparison; P-value <0.0001; Kolmogorov-Smirnov test; Figure 5C; Supplementary Table S6). We note that there is not a direct correlation between RNA-binding scores (catRAPID) and phase-separation propensities (catGRANULE; Supplementary Figure S5).
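A sketch of this kind of Kolmogorov-Smirnov comparison, with synthetic stand-ins for the catGRANULE propensity scores of the two 274-protein sets (the real values are in Supplementary Table S6):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Synthetic propensities: the 5'-end binders (Z >= 1.5) are assumed shifted
# upward relative to the 274 lowest-Z proteins, purely for illustration.
binders_5p = rng.normal(0.5, 1.0, 274)
lowest_z = rng.normal(0.0, 1.0, 274)
stat, p = ks_2samp(binders_5p, lowest_z)
print(f"KS statistic = {stat:.2f}, P = {p:.1e}")
```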
Supporting this hypothesis, DDX1 and CCNT1 have been shown to condense in membrane-less organelles such as stress granules (82)(83)(84), which are the direct target of RNA viruses (85). DDX1 is also the primary component of distinct nuclear foci (86), together with factors associated with pre-mRNA processing and polyadenylation. Similarly, the SRP72, LARP7 and LARP4B proteins have been found to assemble in stress granules (78,87,88). A recent work also suggests that the binding of LARP4 and XRCC6 takes place at the 5′ end of SARS-CoV-2 and contributes to SARS-CoV-2 phase separation (58). Moreover, emerging evidence indicates that the SARS-CoV-2 nucleocapsid protein N has a strong phase separation propensity that is modulated by the viral genome (58,89,90) and can enter into host cell protein condensates (89), suggesting a possible mechanism of cell protein sequestration. Notably, catGRANULE does predict that nucleocapsid protein N is the viral protein with the highest propensity to phase separate (91).
As is the case with molecular chaperones (92), RNAs can influence the liquid-like or solid-like state of proteins (93). This observation is particularly relevant because RNA viruses are known to antagonize stress granules formation (85). Stress granules and other phase-separated assemblies such as processing bodies regulate translation suppression and RNA decay, which could have a strong impact on virus replication (94).
DISCUSSION
Our study is motivated by the need to identify molecular mechanisms involved in Covid-19 spreading. Using advanced computational approaches, we investigated the structural content of SARS-CoV-2 genome and predicted human proteins that bind to it.
We employed CROSS (12,13) to compare the structural properties of ∼2000 coronaviruses and identified elements conserved in SARS-CoV-2 strains. The regions containing the highest amount of structure are the 5′ end as well as the regions encoding glycoproteins spike S and membrane M.
We found that the Spike S protein domain encompassing amino acids 460-520 is conserved across SARS-CoV-2 strains. This result suggests that Spike S must have evolved to specifically interact with its host partner ACE2 (48) and that mutations increasing the binding affinity should be highly infrequent. As the nucleic acids encoding this region are enriched in double-stranded content, we speculate that the structure might attract host regulatory elements, such as nucleosome assembly protein 1-like 1 NAP1L1 and E3 ubiquitin-protein ligase makorin-1 MKRN1, further constraining its variability. The fact that this region of Spike S is highly conserved among all the analysed SARS-CoV-2 strains suggests that a specific drug could be designed to prevent interactions within the host.
The highly variable region at amino acids 243-302 in the spike S protein corresponds to the binding site of sialic acids in MERS-CoV (7,10,51) and could play a role in infection (50). The fact that the binding region is highly variable suggests different affinities for sialic acid-containing oligosaccharides and polysaccharides such as heparan sulfate, which provides clues on the specific responses in the human population. At present, glycan microarray experiments indicate that SARS-CoV-2 Spike S binds more tightly to heparan sulfate than to sialic acids (95).
Using catRAPID (18,19) we computed >100 000 protein interactions with SARS-CoV-2 and found previously reported interactions such as Heterogeneous nuclear ribonucleoprotein Q HNRNPQ and Nucleolin NCL (59), among others. We discovered that the highly structured region at the 5′ end has the largest number of protein partners, including ATP-dependent RNA helicase DDX1, which was previously reported to be essential for HIV-1 and coronavirus IBV replication (96,97), and the double-stranded RNA-specific editases ADAR and ADARB1, which catalyse the hydrolytic deamination of adenosine to inosine. Other predicted interactions are XRCC5 and XRCC6, members of the HDP-RNP complex associating with ATP-dependent RNA helicase DHX9 (98), as well as 2′-5′A-dependent ribonuclease RNASEL and 2′-5′-oligoadenylate synthase 2 OAS2, which control viral RNA degradation (99,100). Interestingly, DDX1, XRCC6 and OAS2 were found upregulated in human alveolar adenocarcinoma cells infected with SARS-CoV-2 (55), and DDX1 knockdown has been shown to reduce the number of sgmRNAs in SARS-CoV-1 infected cells (64). In agreement with our predictions, recent experimental work indicates that the family of ADAR deaminases is active in bronchoalveolar lavage fluids derived from SARS-CoV-2 patients (57).
Comparison with protein-RNA interactions detected in the liver cell line HuH7 (76) shows agreement with our predictions. We note that the experiments have been carried out 24 h after infection (76) and the protein interaction landscape might have changed with respect to the early events of replication. Yet, the accordance with our calculations indicates participation of elements involved in controlling RNA processing and editing (DDX1, DDX3X) and translation (IGF2BP1 and IGF2BP2), although proteins such as ADAR and XRCC5 were reported to have poorer binding capacity (76).
A significant overlap exists with the list of protein interactions reported by Gordon et al. (71), and among the candidate partners we identified AKAP8L, a DEAD/H-box RNA helicase binding protein involved in HIV infection (72). In general, proteins associated with retroviral replication are expected to play different roles in SARS-CoV-2. As SARS-CoV-2 massively represses host gene expression (74), we hypothesize that the virus hijacks host pathways by recruiting transcriptional and post-transcriptional elements interacting with polymerase II genes and splicing factors, such as A-kinase anchor protein 8-like AKAP8L and La-related protein 7 LARP7. In concordance with our predictions, LARP7 has been reported to be upregulated in human alveolar adenocarcinoma cells infected with SARS-CoV-2 (55). The link to proteins previously studied in the context of HIV and other viruses, if further confirmed, is particularly relevant for the repurposing of existing drugs (77).
The idea that SARS-CoV-2 sequesters different elements of the transcriptional machinery is particularly intriguing and is supported by the fact that a large number of proteins identified in our screening are found in stress granules (78). Indeed, stress granules protect the host innate immunity and are hijacked by viruses to favour their own replication (94). As coronaviruses transcription uses discontinuous RNA synthesis that involves high-frequency recombination (59), it is possible that pieces of the viruses resulting from a mechanism called defective interfering RNAs (101) could act as scaffold to attract host proteins (14,15). In agreement with our hypothesis, it has been very recently shown that the coronavirus nucleocapsid protein N can form protein condensates based on viral RNA scaffold and can merge with the human cell protein condensates (89), which provides a potential mechanism of host protein sequestration.
Atomic force microscope nanolithography of graphene: cuts, pseudo-cuts and tip current measurements
We investigate atomic force microscope nanolithography of single and bilayer graphene. In situ tip current measurements show that cutting of graphene is not current driven. Using a combination of transport measurements and scanning electron microscopy we show that, while indentations accompanied by tip current appear in the graphene lattice for a range of tip voltages, real cuts are characterized by a strong reduction of the tip current above a threshold voltage. The reliability and flexibility of the technique is demonstrated by the fabrication, measurement, modification and re-measurement of graphene nanodevices with resolution down to 15 nm.
Scanning probe microscopy, as well as being a powerful tool for imaging and spectroscopy, has also shown great potential for the manipulation and patterning of materials on the nanometer scale [1,2]. Atomic force microscope (AFM) nanolithography, in particular, is now routinely used for the fabrication of quantum dots and quantum wires in materials such as Si and GaAs [3,4]. AFM nanolithography also has significant potential for device fabrication in graphene, a material of intense current interest due to its exceptional mechanical, electronic and optical properties [5,6]. Most commonly, graphene nanoscale devices are fabricated using conventional electron-beam lithography and subsequent plasma etching [7][8][9]. AFM lithography offers several advantages over electron-beam lithography: it has higher ultimate resolution, can be performed under ambient conditions and allows in situ device measurement and modification.
Usually AFM nanolithography is performed in air at room temperature. Under these conditions a water meniscus forms between the AFM tip and the substrate. The presence of an electric field, resulting from the voltage between the tip and substrate, dissociates water into hydrogen (H + ) and hydroxyl (OH − ) ions. When the voltage on the tip is negative with respect to the substrate the hydroxyl ions oxidize the graphene surface, creating the desired nanostructure. Several important factors determine the reliability and resolution of AFM lithography such as the applied tip voltage (or electric field strength), the humidity, tip velocity, applied force, and the conductivity of the substrate [1]. This process is now well understood for a variety of semiconductors and metals. However, in the case of graphene many of the key parameters have not been well established and device fabrication by AFM lithography is not yet routine. For example, the necessary threshold tip voltage for graphene oxidation reported in the literature varies in magnitude between ∼ -5 V [10] and -35 V [11,12] and in one report oxidation could only be initiated from a graphene edge [13]. Moreover, there has been no systematic study of the tip current during AFM lithography of graphene.
Here, we investigate in detail the cutting of the graphene lattice with an AFM tip. In particular, we measure the tip current, I tip , during the cutting process and find that we cut graphene only when I tip drops below our noise floor. We also find that pseudo-cuts appear when I tip is non-zero. These pseudo-cuts, in which the electron system of graphene remains intact, cannot reliably be distinguished from real cuts by AFM height imaging. However, the differences between real and pseudo-cuts become apparent using transport experiments and scanning electron microscopy (SEM). This ability to distinguish between real and pseudo-cuts is crucial for device fabrication in graphene.
To investigate the voltage and current dependence of AFM nanolithography on graphene we use a Veeco Dimensions 3100 AFM system with non-coated, doped silicon tips for both imaging and lithography [14]. Imaging is performed in tapping mode and lithography carried out in contact mode. Single (SLG) and bilayer (BLG) graphene flakes are produced by micro-mechanical exfoliation on 300 nm SiO 2 with a highly doped Si back gate. Optical [15] and Raman spectroscopy [16] are used to assess the layer number and quality. All flakes are electrically contacted and characterized at room temperature. Contacts are defined in polymethylmethacrylate (PMMA) resist by electron beam lithography and metalized with Ti/Au (5/50 nm) by evaporation and lift off.
Samples are annealed for ∼10 minutes at 300 °C in forming gas prior to AFM lithography, which we find to be a crucial step for reliable cutting of our flakes as it removes contamination that can prevent oxidation. During AFM nanolithography, we use a 50 nm/s scanning speed and a relative humidity of around 50%. The lithographically defined trenches vary in width from 15 nm up to 100 nm, with typical values of 30 nm. We find that the widths depend only weakly on scanning speed and humidity, with cuts slightly wider for decreasing scanning speed and increasing relative humidity. Individual tip characteristics, most likely the tip radius, appear to be more important.
We first investigate the current through the biased AFM tip during lithography. Figure 1 shows a series of cuts performed on a SLG at various tip voltages. The AFM tip is negatively biased with respect to the flake, which is grounded via a Stanford SR570 current preamplifier. The lower panels of Fig. 1 show the current through the tip to ground, I tip, as a function of time, where t = 0 is the time at which the tip bias V tip is applied and the tip starts along its predefined path. Above this is the corresponding AFM image of the scanned area, and the upper panels show the averaged height cross sections across the cuts. Indentations begin to appear on the graphene surface, accompanied by a finite I tip, at around V tip = -2 V. The tip current then drops to ∼0 above a threshold, V thresh. Threshold voltages for our tips vary from ∼ -3.5 V to ∼ -5 V. Trenches created at the smallest voltages occasionally disappear over the course of hours or days, the surface regaining its original shape. Trenches created at larger voltages remain unchanged after several weeks. We also note that, in both regimes, ridges are frequently formed along the trench edges where the electric field is lower. This may be due to the formation of stable oxides similar to those reported in Refs. [10,11].
In order to investigate the nature of the marks created in the two regimes we cut triangles into a SLG with V tip both above and below V thresh and imaged them using both AFM and SEM. Figures 2(a) and 2(b) show two triangles, cut with |V tip | > |V thresh | and thus I tip ∼ 0, imaged using AFM (left image) and SEM (right image). The central regions of the triangles, clearly visible in the AFM image, are significantly darker in the SEM images as compared to the bulk. Figures 2(c) and 2(d) show two triangles, cut with |V tip | < |V thresh | and I tip ∼ 100 µA, again imaged using AFM (left) and SEM (right). Though the AFM images are qualitatively similar to those in Figs. 2(a) and 2(b), the triangles are barely visible in the SEM images with the central regions showing no contrast with the bulk. In all cases, SEM imaging is carried out using an accelerating voltage of 500 V. Note that for these low acceleration voltages (i.e. below ∼ 1 kV), the contacted graphene is easily visible on the SiO 2 substrate. This is illustrated in Figs. 2(e) and 2(f) which show SEM images of the areas of the flake on which the triangles were cut. The arrows indicate the locations of the triangles in Figs. 2(a)-2(d). The strong contrast of the graphene flakes on the SiO 2 substrate is attributed to differences in the surface electrostatic potential between the bare SiO 2 substrate and the regions covered by the (electrically contacted) graphene, similar to that observed for carbon nanotubes [17]. This immediately allows us to conclude that the triangles shown in Figs. 2(a) and 2(b), which appear dark in the SEM images, are electrically isolating, while those of Figs. 2(c) and 2(d) are electrically connected to the bulk. This behaviour is consistent over all 10 pairs of triangles measured.
For further confirmation, the tip is placed inside the triangles and voltages below threshold are applied. We find that we never measure current above our noise floor with triangles cut with |V tip | > |V thresh | while we always measure current with triangles cut with |V tip | < |V thresh |. Furthermore, the current measured from within these triangles is no smaller than the current measured from outside the triangles. The resulting indentations can be seen as the short lines within Figs. 2(c) and 2(d) whereas no such marks are seen in Figs. 2(a) and 2(b). We conclude that electrically isolating cuts are made only for negative tip voltages larger than |V thresh | at which point the applied electric field is sufficiently strong to initiate the oxidation process. We believe that for tip voltages below the threshold, the SLG is merely pressed into contact with the SiO 2 surface by displacing water trapped between graphene and the substrate. The graphene in this case remains unbroken and electrically conducting.
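The operational rule that emerges from these experiments (real cuts are accompanied by a collapse of the tip current, pseudo-cuts by a finite current) can be written as a simple classifier. This is an illustrative heuristic, not the authors' procedure, and the 1 nA noise floor is an assumed placeholder:

```python
import numpy as np

def classify_pass(i_tip, noise_floor=1e-9):
    """Label an AFM lithography pass from its tip-current trace (in amperes):
    a real cut shows I_tip collapsing below the noise floor, a pseudo-cut
    carries a finite current throughout. The 1 nA floor is an assumed value."""
    return "cut" if np.median(np.abs(i_tip)) < noise_floor else "pseudo-cut"

print(classify_pass(np.full(500, 100e-6)))  # pseudo-cut (~100 uA flows)
print(classify_pass(np.zeros(500)))         # cut (no measurable current)
```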
Our findings may help to clarify previous measurements on highly oriented pyrolytic graphite (HOPG) in which an AFM was used for local oxidation [18][19][20], the interpretations of which are conflicting. While some reports identify a strong Fowler-Nordheim field emission dependence of the tip current on the applied voltage, attributing the oxidation to a field emission enhanced chemical reaction [18], other work on HOPG finds that holes are only observed in the absence of a tip current [19,20]. Our observations are consistent with the latter and indicate that the presence of a large current points to a failure of graphene oxidation. This excludes cutting mechanisms such as heating of the carbon lattice. The strong reduction of the tip current during cutting can be understood if graphene oxidation occurs once the local electric field strength exceeds the reaction activation energy. As graphene is oxidized below the tip, the tip-graphene contact is disrupted and only the very small electron flow of the reaction electrons remains [21].
Finally, we show that the technique described is well suited for device fabrication. Figures 3(a) and 3(c) show AFM images of two graphene nanodevices, designed as a quantum dot and a quantum wire, respectively, formed in a BLG by AFM nanolithography. For each device, the tip current is monitored during lithography to ensure that the graphene is properly cut. Figures 3(b) and 3(d) show the conductance as a function of back gate voltage V bg of the devices at a temperature of 4.2 K. The flexibility of AFM lithography is illustrated by Fig. 3(b), which shows measurements of the conductance as a function of V bg at 4.2 K both for the quantum dot as shown in Fig. 3(a) (blue dashed line) and for the same device with the entrance barriers of the quantum dot narrowed from ∼150 nm to about 50 nm in a subsequent AFM lithography step at room temperature (black solid line). As expected, the conductance is significantly lower in the post-modification device, with an increase in the observed gap [7].
In conclusion, we have studied the local oxidation of graphene by an AFM tip. We demonstrate that at low tip voltages the graphene is typically not cut, even though clear indentations are observed in AFM height images. The lattice is only cut when the local electric field exceeds a threshold, at which point the tip current vanishes (within our noise floor). These conclusions are supported by scanning electron microscopy and transport experiments. The ability to distinguish between pseudo-cuts and cuts as demonstrated here is important for reliable graphene device fabrication by AFM nanolithography.
We acknowledge Cinzia Casiraghi for technical support. ACF and AL acknowledge funding from EU grants NANOPOTS and RODIN, EPSRC EP/G042357/1 and a Royal Society Wolfson Research Merit Award. MRB acknowledges support from the Royal Society.
Structure and dynamics of liquid AsSe4 from ab initio molecular dynamics simulation
Structural and dynamical properties of AsSe4 liquids have been studied by ab initio molecular dynamics simulations as a function of temperature. Calculated neutron structure factors are in good agreement with experimental data. Results show the existence of a significant amount of As-As homopolar bonds and of SeI and AsIV units which are not part of the picture of the cross-linking As(Se1/2)3 pyramids usually proposed for the glassy state.
Introduction
As a typical amorphous semiconductor, As x Se 1−x glasses and liquids have been extensively studied for more than 40 years since the pioneering work of Meyers and Felty [1]. For x<2/5, their structure is usually described as a random network of Se q chains cross-linked by pyramidal As(Se 1/2 ) 3 units [1]. This model is supported by the observed local maximum of T g (x) near x=2/5, when the cross-linking process saturates, pyramidal units being connected to each other by only one Se atom.
Unlike silicate systems, which have been widely simulated via classical molecular dynamics, the electronic structure of chalcogenide systems has to be treated explicitly to account for charge transfers between atoms. The use of ab initio molecular dynamics is therefore required, restricting simulations to systems of only hundreds of atoms over a few ps. Several studies have been reported, most of them focusing on the stoichiometric As 2 Se 3 system [2,3] or on the glassy state [4,5]. Recently, several compositions in the liquid state were simulated by Zhu [6]. It should also be stressed that many chalcogenides contain homopolar defects [7] that can only be reproduced by careful electronic modelling using density functional theory [8,9], whereas such features are usually not reproduced by classical molecular dynamics [10].
In this paper, we focus on the low-As composition x = 0.20 and use a larger system than previous simulations [6] to obtain enough statistics to study the environment of the As atoms. After describing the simulation methodology in Sec. 2, results are presented in Sec. 3 and discussed in Sec. 4. Finally, conclusions are given in Sec. 5.
Methodology
First-principles simulations [11] were performed for AsSe 4 liquids of 200 atoms at different temperatures (600, 800, 1200, 1600 and 2000 K). A cubic box with periodic boundary conditions was used, its length being fixed in order to recover the ambient density of the glass [12]. This methodology results in a finite pressure at high temperature (0.32 GPa at 2000 K), but the latter remains small compared to the pressure fluctuations in the box (0.5 GPa at 2000 K). The electronic structure was described within density functional theory. Valence electrons were treated explicitly, in conjunction with norm-conserving pseudopotentials to account for core-valence interactions [13]. The wave functions were expanded at the Γ point of the supercell and the energy cutoff was set at 20 Ry. Other parameters of the simulation (fictitious mass, time step, exchange-correlation scheme, GGA) are identical to those used in previous simulations on GeSe 2 and Ge-Se liquids and glasses [8,14,15]. All temperature points were accumulated over 20 ps.
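As a consistency check on the cell size, the edge of a cubic box holding 40 As and 160 Se atoms at a given mass density follows directly from the total mass. The density used below (4.3 g/cm^3) is an assumed placeholder for the experimental value of Ref. [12]:

```python
# Edge length of a cubic cell reproducing a target mass density for 200 atoms
# of AsSe4 (40 As + 160 Se). The density is an assumed placeholder.
N_A = 6.02214076e23                      # Avogadro's number, 1/mol
m_as, m_se = 74.9216, 78.971             # atomic masses, g/mol
mass_g = (40 * m_as + 160 * m_se) / N_A  # mass of the 200-atom cell, g
density = 4.3                            # g/cm^3, assumed glass density
edge_cm = (mass_g / density) ** (1 / 3)
edge_angstrom = edge_cm * 1e8
print(f"box edge ~ {edge_angstrom:.1f} Angstrom")
```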
Structure factor
To compare the structure of the simulated liquids with experimental and previous simulation results, we computed neutron structure factors. The partial structure factors have first been calculated from the pair distribution functions (PDFs) g_ij(r):

S_ij(Q) = 1 + (4πρ_0/Q) ∫_0^R r [g_ij(r) − 1] sin(Qr) F_L(r) dr,   (1)

where Q is the scattering vector, ρ_0 is the average atom number density and R is the maximum value of the integration in real space (here R = 8 Å). The F_L(r) = sin(πr/R)/(πr/R) term is a Lorch-type window function used to reduce the effect of the finite cutoff of r in the integration [16]. As discussed in [17], the use of this function reduces the ripples at low Q but induces a broadening of the structure factor peaks. The total neutron structure factor can then be evaluated from the partial structure factors following:

S_N(Q) = Σ_{i,j} c_i c_j b_i b_j S_ij(Q) / (Σ_i c_i b_i)²,   (2)

where c_i is the fraction of i atoms (As, Se) and b_i is the neutron scattering length of the species (6.58 and 7.97 fm for arsenic and selenium atoms, respectively [6]).
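The partial and total neutron structure factor expressions above can be evaluated numerically, e.g. as in the sketch below; the grids, density and scattering lengths are illustrative only.

```python
import numpy as np
from scipy.integrate import trapezoid

def partial_sf(r, g_r, q, rho0, R=8.0):
    """S_ij(Q) = 1 + (4*pi*rho0/Q) * int_0^R r (g_ij(r)-1) sin(Qr) F_L(r) dr,
    with the Lorch window F_L(r) = sin(pi*r/R)/(pi*r/R)."""
    window = np.sinc(r / R)  # np.sinc(x) = sin(pi*x)/(pi*x)
    integrand = r * (g_r - 1.0) * np.sin(np.outer(q, r)) * window
    return 1.0 + (4.0 * np.pi * rho0 / q) * trapezoid(integrand, r, axis=1)

def total_neutron_sf(c, b, s_partials):
    """Concentration- and scattering-length-weighted sum of the partials."""
    c, b = np.asarray(c), np.asarray(b)
    num = sum(c[i] * c[j] * b[i] * b[j] * s_partials[i][j]
              for i in range(len(c)) for j in range(len(c)))
    return num / np.dot(c, b) ** 2

# Sanity check: an ideal gas (g(r) = 1 everywhere) must give S(Q) = 1.
r = np.linspace(1e-3, 8.0, 4000)
q = np.array([1.0, 2.3, 5.0])  # scattering vectors, 1/Angstrom
print(partial_sf(r, np.ones_like(r), q, rho0=0.035))  # [1. 1. 1.]
```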
Fig. 1: Calculated neutron structure factors S N (Q) of liquid AsSe 4 (top, black solid lines) at different selected temperatures, compared with a previous simulation from Zhu (bottom, blue broken line) [6]. Calculated structure factors at 800 K are compared with neutron diffraction data from Usuki (red squares) [18]. (color online)

Neutron structure factors of liquid AsSe 4 are represented in Fig. 1 for different selected temperatures and compared with a previous simulation from Zhu [6] and neutron diffraction data from Usuki [18]. We observe that the agreement with experimental data for AsSe 4 is improved in the present simulation as compared to that of Zhu [6]. This may be due to the fact that larger systems are required to obtain reasonable statistical averages at low wave vectors since, for this composition, the system of Zhu [6] contains only 20 As atoms, as compared to 40 in the present simulation.
The good agreement with experiment of the simulated AsSe4 liquid at 800 K allows us to study temperature effects. As observed in experiments [19,20,21,22], the intensity of the first peak of S_N(Q) at 2.3 Å−1 increases and its width broadens slightly with temperature, while the first two peaks progressively merge to form a single broad peak over 2-4 Å−1.
Pair distribution functions
The local structural order can be examined in more detail by considering the partial PDFs g_ij(r). The As-As, As-Se and Se-Se partials of AsSe4 liquids are represented in Fig. 2 at different temperatures. As temperature increases, the usual trends are observed: a decrease in intensity and a broadening of the peaks. We find that large changes in structure take place between 800 K and 1200 K, which manifest as a rapid decrease of the main peaks (at 3.7, 2.5 and 3.8 Å for As-As, As-Se and Se-Se, respectively). For As-Se and Se-Se, these typical length scales correspond to the distances defining the As(Se_1/2)_3 pyramid. The variation shows that the latter must be substantially modified in this temperature range, as discussed below.
Coordination numbers
As presented earlier, the average coordination numbers (CNs) of As and Se atoms can be calculated via the integration of the first peak of the partial PDFs. After integration up to the first minimum of each PDF, we obtain N_AsAs = 0.075, N_AsSe = 2.95, N_SeAs = 0.74 (note that, as implied by the stoichiometry, N_AsSe = 4 N_SeAs) and N_SeSe = 0.94 at T = 800 K, so that N_As = 3.03 and N_Se = 1.68. At 1600 K, these values change to N_AsAs = 0.46, N_AsSe = 2.77, N_SeAs = 0.69 and N_SeSe = 0.72. However, to obtain the full distribution of the differently coordinated species, it is useful to enumerate the number of neighbors present in the first coordination shell of each atom at each step. The populations of the different CNs in liquid AsSe4 at 800 K are shown in Fig. 3. As expected, the dominant species are the base units of the cross-linking pyramid model, Se II and As III. However, a significant amount of defect Se I and As IV units is found (32.7% and 20.5%, respectively), as well as a small amount of Se III (0.9%) and As V atoms (1.6%). Other contributions (Se 0, Se IV, As I and As VI) are insignificant (<1%).
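The first-shell integration used above can be sketched as follows; the synthetic PDF peak and the partial density in the example are illustrative placeholders, not the simulation data:

```python
import numpy as np

def coordination_number(r, g, rho_j, r_min):
    # N_ij = 4 pi rho_j * int_0^{r_min} r^2 g_ij(r) dr,
    # where rho_j is the partial number density (atoms/A^3) of species j
    # and r_min is the first minimum of the partial PDF
    m = r <= r_min
    y = 4.0 * np.pi * rho_j * r[m] ** 2 * g[m]
    return float(np.sum((y[1:] + y[:-1]) * np.diff(r[m])) / 2.0)

# Illustrative check: a narrow Gaussian first peak carrying exactly 3 neighbours
r = np.linspace(0.01, 4.0, 4000)
rho_se = 0.026                       # assumed Se partial density, atoms/A^3
r0, sigma, n_target = 2.42, 0.05, 3.0
g = n_target * np.exp(-(r - r0) ** 2 / (2 * sigma ** 2)) / (
    4 * np.pi * rho_se * r ** 2 * sigma * np.sqrt(2 * np.pi))
n_coord = coordination_number(r, g, rho_se, r_min=3.0)
```

By construction the synthetic peak integrates back to its target of 3 neighbours, which verifies the quadrature.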
The inset in Fig. 3 shows the evolution of the main species in AsSe4 with respect to temperature. We notice that only minor changes in CNs are observed until the temperature falls below 1200 K. As temperature decreases further, the fractions of Se I and As IV units start to decrease while those of Se II and As III increase.
Diffusion
In order to analyze the effect of temperature on the dynamics of AsSe4, the mean square displacements (MSD) <r^2(t)>_α (α = As, Se) of Se and As atoms have been computed according to:

<r^2(t)>_α = (1/N_α) Sum_{i=1}^{N_α} <|r_i(t) - r_i(0)|^2> (3)

where N_α is the number of α atoms and r_i(t) the position of atom i. The diffusion constants D_α have then been calculated from the long-time limit of the MSD:

D_α = lim_{t→∞} <r^2(t)>_α / (6t) (4)

The inset in Fig. 4 shows the MSD of Se atoms at different selected temperatures. At high temperature (1600 K), the diffusive regime, which manifests as a slope of 1 in a log-log plot of the MSD versus time, sets in at about 0.3 ps. This regime can be clearly observed at all temperatures except at 600 K since, due to the slowing down of the dynamics as the temperature decreases, the diffusive regime is not reached over the simulation time (20 ps). The associated diffusion constants, computed from Eq. 4, are shown in Fig. 4. To the best of our knowledge, no experimental or simulation data are available for As-Se liquids. A non-Arrhenius behavior is obtained at high temperature, as also found in oxide glasses [23]. Although the number of data points is limited, the obtained diffusion constants have been fitted by an Arrhenius law:

D_α = D_0 exp(-E_A / k_B T) (5)

in order to extract the activation energies E_A(Se) and E_A(As), which are equal to 0.37 and 0.40 eV, respectively. These values are close to those determined in Ge-Se liquids (0.43 eV for the diffusion of Se atoms in GeSe4 but 1.1 eV for GeSe2 [24]), but certainly much smaller than those found in oxides (for example, 1.12 eV for the diffusion of O atoms in GeO2 [23]). The reason for this difference may be that one has a weakly connected system in which homopolar Se chains dominate, which should favor diffusion across the liquid. This seems in line with recent studies showing that flexible (i.e., weakly connected) networks display increased diffusion or ionic conductivity [25,26].
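The MSD, Einstein-relation diffusion constant, and Arrhenius fit described above can be sketched as follows (a minimal illustration using a single time origin; production analyses typically average over many origins):

```python
import numpy as np

def msd(positions):
    # <r^2(t)> from an (n_frames, n_atoms, 3) array of unwrapped coordinates,
    # with t = 0 as the single time origin, averaged over atoms
    disp = positions - positions[0]
    return (disp ** 2).sum(axis=2).mean(axis=1)

def diffusion_constant(t, msd_values):
    # Einstein relation: D = slope of the MSD in the diffusive
    # (linear) regime divided by 6
    slope, _ = np.polyfit(t, msd_values, 1)
    return slope / 6.0

def arrhenius_fit(temperatures, diffusivities, kB=8.617333e-5):
    # Fit D = D0 exp(-EA / kB T); returns (D0, EA) with EA in eV
    slope, intercept = np.polyfit(1.0 / np.asarray(temperatures),
                                  np.log(diffusivities), 1)
    return np.exp(intercept), -slope * kB
```

Fitting ln D against 1/T turns the Arrhenius law into a straight line whose slope is -E_A/k_B, which is how the activation energies are extracted.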
Discussion
As can be seen in Fig. 2, the existence of a first peak in the As-As partial shows that homopolar As-As bonds exist at all temperatures, even if they become more prevalent at high T, so that the usual cross-linking pyramid model is not realistic in the liquid state. Note that such As-As bonds have been observed by EXAFS measurements [27] in As2Se3. The behavior of the As-As partial coordination number is shown in the inset of Fig. 2. Its value at 800 K (0.075) is found to be lower than that of Zhu (0.12 [6]).
The increase of the fraction of As-As bonds with the temperature has been observed experimentally by Hosokawa et al. [27].
Moreover, the significant amount of As IV and Se I species also contradicts the latter model and may be the signature of the existence of a Se=As(Se_1/2)_3 basic unit. The existence of the latter has been suggested by Georgiev et al. [12] using an approach based on topological constraints in As-Se glasses, and it has been observed experimentally in the corresponding As-S sulfide glass [28,29]. However, we note that, for As-Se systems, no clear experimental evidence of the existence of this unit from direct structural analysis has been reported so far. We also note that no As IV were found in Shimojo's simulations [3] of As2Se3. The average CN of As atoms found by Usuki by neutron diffraction in AsSe4 (3.01 ± 0.02 [18]) is in very good agreement with our value (3.03, as compared to 2.67 in Shimojo's simulation) and suggests a minor fraction of As IV atoms in the liquid phase.
Conclusions
We have carried out ab initio molecular dynamics simulations of AsSe4 liquids in order to study the effect of temperature on the structure and dynamics. Thanks to the use of a larger system, the calculated neutron structure factors show a better overall agreement with experimental data as compared to previous simulations. The results show that the cross-linking pyramid model is no longer suitable in the liquid state and that exotic Se I and As IV units exist, as well as As-As bonds. In future work, we plan to check whether these anomalies can still be found in the glassy state and in other compositions.
Crop Establishment and Weed Control Options for Sustaining Dry Direct Seeded Rice Production in Eastern India
Dry direct seeded rice (DSR) has emerged as an economically viable alternative to puddled transplanted rice to address emerging constraints of labor and water scarcity and the rising cost of cultivation. However, wide adoption of DSR is seriously constrained by the weed management trade-off. Therefore, the availability of effective weed control options is critical for the success and wide-scale adoption of DSR. A field study was conducted at ICAR-National Rice Research Institute, Cuttack, India, in the dry seasons of 2015 and 2016 to evaluate the performance of three crop establishment methods and five weed control practices on weed management, productivity, profitability and energetics of dry DSR. The results demonstrated that weed density and weed dry weight were lower in drill seeding than in broadcast seeding by 26-36% and in manual line-seeding by 16-24%, respectively, at 30 and 60 days after crop emergence (DAE). Among herbicides, post-emergence application (17 DAE) of azimsulfuron was most effective in controlling weeds compared to early post-emergence application of bispyribac-sodium and bensulfuron-methyl + pretilachlor. Weed competition in the weedy check treatment resulted in a 58% reduction in rice yield. Among establishment methods, drill seeding was most profitable, with US$ 685 ha−1 higher net income than broadcast seeding, primarily due to higher yield. Among weed control treatments, azimsulfuron was most profitable, resulting in US$ 160 and 736 ha−1 higher net income than the weed free and weedy checks, respectively. The specific energy was lowest for drill seeding among establishment methods and for azimsulfuron among weed control practices, indicating the lowest energy consumed in producing a unit of grain yield.
Introduction
Rice is the staple food for over half of the world's population and is hence called the "Global Grain". India contributes about 20% of total global rice production [1]; therefore, the stability of rice production in India plays a key role in the world's food security. The coastal plain zone of eastern India is the major rice-growing belt of the country, but the flood-prone lowlands of the east coast plains are highly diverse, complex and fragile in nature [2]. During the wet season, the crop experiences several abiotic stresses including drought, submergence, waterlogging, and flash floods, along with the additional problems of salinity (in certain pockets) and cyclonic disturbances [3]. Rice cultivation during the dry season (summer rice) offers great potential for boosting and stabilizing yield in the region [4]. The conventional method of rice crop establishment during the dry season in the region is manual transplanting of rice seedlings into puddled soil, known as puddled transplanted rice (PTR), which requires large amounts of water, labour, and energy; these are gradually becoming scarce and more expensive, making PTR more costly and less profitable.
Dry direct-seeded rice (DSR) has shown promise under the scenario of labour and water scarcity and is considered a potential alternative to PTR [5-7]. Based on previous studies, DSR saved 20-33% irrigation water compared to PTR [5]. It reduces the total labour requirement by 11-66% compared to PTR, depending on the season, location, and type of DSR [5,8]. The increased availability of short-duration rice varieties has further encouraged farmers to explore this new method of establishing dry season rice in the coastal plain zone of eastern India [4].
Despite these benefits, however, the economic benefit from DSR is often not realized by farmers due to poor crop establishment and severe infestation of weeds. The risk of higher weed infestation, and consequently higher yield losses, is one of the major constraints to wider-scale adoption of DSR and to the realization of its full yield potential [9,10]. Weeds are a major problem in DSR because they emerge concurrently with rice seedlings, so rice does not get the head start it has in transplanted rice. Also, early flooding to suppress the first flushes of weeds cannot be used in DSR, as rice is sensitive to flooding at the germination and early establishment stages. Therefore, high expenditure on labour for weeding, where hand weeding is relied upon, may further reduce any profit [11]. In DSR, weed competition is so severe that yield losses may sometimes reach 90% [12,13], resulting in concurrent economic loss.
Several herbicides are standardised and recommended for DSR all over the world. However, in India, there are legal restrictions on pesticide use, and many of the popular herbicides, viz., glyphosate, paraquat, butachlor, 2,4-D, oxyfluorfen, quinalphos and pendimethalin, are under restricted use or recommended for ban [14-16]. Over the years, farmers have used oxadiazon, oxadiargyl, pretilachlor and pendimethalin as pre-emergence applications in rice, with reported weed control efficiencies of 55% [17], 65-85% [18], 58-82% [19,20] and 30.4% [21], respectively. There are two main problems associated with these pre-emergence (PRE) herbicides. Firstly, these herbicides suppress the weeds only for about three weeks, and the subsequent flushes of weeds cannot be controlled in dry DSR. Secondly, much of the area under rice cultivation is rainfed and vulnerable to weather extremes; sudden rains after sowing result in damage to emerging rice seedlings [22]. So, farmers have only limited options of herbicide use to control weeds throughout the critical period. Additionally, sole use of herbicides does not guarantee complete weed control; farmers need to modify cultivation practices to achieve desirable weed control. Manipulation of establishment methods holds high potential for reducing weed pressure. Weed flora composition changes significantly with alteration of rice establishment methods [5]. Earlier, Bhurer et al. [23] reported variation in yield reduction under different establishment methods, which varied by 40-76% in broadcast seeding and 20% in drill seeding. Therefore, an integrated approach involving the manipulation of crop husbandry combined with direct weed control using herbicides is expected to address the issue of weed infestation in dry DSR.
In DSR, weed competition and weed control cost depend greatly on how the crop is established and how weeds are managed [24]. In DSR, the rice can be established by line sowing (manually or using a drill) or by broadcasting, which can have differential effects on weed occurrence, crop growth, and rice yield. Information on weed dynamics and weed management under different DSR establishment methods in the coastal region of eastern India is limited. Hand weeding 2-3 times during the crop growing season has traditionally been the common practice for weed control in this region. However, recently, because of the rising scarcity of labor, particularly its non-availability at critical times, hand weeding is either delayed or not done optimally. Moreover, labor wages are rising, making hand weeding uneconomical. Integration of herbicides offers an alternative option to achieve timely and cost-effective weed control in DSR. There are limited studies, especially in relation to systematic comparison of weed infestation, weed control efficiency of herbicides applied at early or late post-emergence, and rice yield under different DSR establishment methods. Therefore, a field study was conducted with the objective of evaluating the effect of DSR establishment methods and weed management practices on weed control, rice yields, energetics and economics in the coastal belt of Odisha, India.
Site Description
The field experiment was undertaken at the Research Farm of the ICAR-National Rice Research Institute, Cuttack (20.5° N, 86° E, 23.5 m above mean sea level), India, during the two consecutive dry seasons of 2015 and 2016. The soil of the experimental field was an Aeric Endoaquept, sandy clay loam in texture, slightly acidic to neutral with pH (1:2.5 soil:water suspension) 6.79, total carbon 0.71%, available nitrogen 209 kg ha−1, available P 17.8 kg ha−1, and available K 121 kg ha−1. The soil test was based on samples taken from the upper 20 cm just prior to the start of the experiment.
Experimental Design
The experiment was laid out in a split-plot design with three replications. Three establishment methods, viz., drill seeding using a seed drill, manual line-seeding with 15 cm row spacing, and broadcast seeding, were assigned to the main plots, and five weed control treatments to the subplots. The weed control treatments included bispyribac-sodium as an early post-emergence (POST) herbicide at 30 g a.i. ha−1, azimsulfuron as a late POST herbicide at 35 g a.i. ha−1, and the currently recommended early POST ready-mix herbicide bensulfuron-methyl plus pretilachlor at 70 + 700 g a.i. ha−1, along with weedy and weed free checks. Azimsulfuron is a broad-spectrum sulfonylurea herbicide recommended to suppress major grasses along with broadleaved weeds and sedges. Bispyribac-sodium is the most widely used pyrimidinyl thiobenzoate herbicide in the Indian subcontinent to suppress key grasses (for example, Echinochloa species and Ischaemum rugosum), broadleaved weeds and sedges, but it is not effective on grasses such as Leptochloa chinensis. Bensulfuron-methyl plus pretilachlor is the recommended herbicide mixture for broad-spectrum weed control in both wet and dry DSR.
In an earlier study, a late POST herbicide suppressed weeds effectively in dry DSR, and azimsulfuron applied at 15 DAE showed very good efficacy (91% weed control efficiency) against complex weed flora, particularly the late-emerging grass weed Leptochloa chinensis [25]. Moreover, in recent years, L. chinensis has become a major weed in the vegetative stage of the rice crop. To compare the efficacy of early and late POST herbicides in the present study, azimsulfuron was applied at the 3-4 leaf stage of weeds (17 DAE), bispyribac-sodium was sprayed at the 2-3 leaf stage of weeds (10 DAE), and the ready-mix herbicide bensulfuron-methyl plus pretilachlor was applied at 3 DAE. In the weed free plots, weeds were removed manually at 15, 30, 45, and 60 DAE to keep the treatment free from weed competition.
Crop Management and Herbicide Application Details
The field was prepared by ploughing thoroughly with a disc plough followed by harrowing with a rotavator to obtain a fine tilth, ensuring easy movement of the seed drill on dry soil. The experimental field was divided into three replications, each consisting of three main plots (each 35 m × 25 m). Each main plot was divided into five subplots with a gross size of 7 m × 5 m and a net plot size of 6 m × 4 m used for harvesting. The rice variety 'Naveen' (115 days duration, Indica type) was sown using a seed rate of 40 kg ha−1 on January 14 and 15 in 2015 and 2016, respectively. For drill seeding, seeds were sown using a 9-row seed-cum-fertilizer drill developed at the ICAR-National Rice Research Institute (formerly CRRI). For manual line-seeding, seeds were placed in continuous furrows, made with a furrow opener, followed by a light harrowing to cover the seeds. For the broadcast seeding treatment, seeds were broadcast on the well-pulverized soil followed by a light harrowing. A light irrigation was given immediately after seeding and the field was kept saturated during the first 10 days. Thereafter, a thin layer of standing water (1-2 cm) was maintained for the next 21 days after rice emergence. Afterwards, irrigation water was applied to a 2-3 cm depth after disappearance of water from the field, until 15 days prior to maturity.
In India, grasses and sedges are the predominant weeds in DSR [26,27], for which bispyribac-sodium has been profusely used to date [26,28]. Therefore, bispyribac-sodium was taken as the check to compare the efficacy of the new herbicide, azimsulfuron. The efficacy of the herbicide mixture, bensulfuron-methyl plus pretilachlor, was compared with bispyribac-sodium, as a herbicide mixture is expected to have the least (or most delayed) chance of developing herbicide resistance in weeds [29].
Bispyribac-sodium and azimsulfuron were applied by spraying at 10 DAE and 17 DAE, respectively, on saturated soil (after draining out the water) using a knapsack sprayer fitted with a flat fan nozzle at a spray volume of 300 L ha−1 and a spray pressure of 200 kPa. The field was irrigated again 48 h after spraying. The ready-mix herbicide bensulfuron-methyl plus pretilachlor (in granular form) was applied at 3 DAE after mixing with fine sand at 12 kg ha−1 under saturated soil conditions. The full dose of P2O5 (50 kg ha−1) and 2/3rd of the K2O (33 kg ha−1) were applied before sowing at the time of final land preparation, and N (100 kg ha−1) was applied in three equal splits at 15, 35, and 55 DAE. The remaining 1/3rd of the K2O was applied along with the third dose of N. All other recommended agronomic and plant protection measures were adopted to raise the crop.
Field Measurements
Observations on weed species were recorded at 30 and 60 DAE. At each sampling date, weed density was recorded species-wise by placing quadrats of size 0.5 m × 0.5 m at two random locations in each subplot. Weeds were cut at ground level, washed with tap water, and oven dried at 70 °C for 48 h before weighing. The dominant weed species were determined based on the summed dominance ratio (SDR) values, expressed as percentages, computed following [30]. Weed control efficiency (%) at 30 and 60 DAE was computed using the formula given below:

WCE (%) = [(x − y)/x] × 100

where x = weed dry weight in the weedy check and y = weed dry weight in the treated plot. The weed index was computed using the formula given below:

WI (%) = [(a − b)/a] × 100

where a = yield in the weed free plot and b = yield under the treatment for which the weed index is to be calculated. Grain yield of rice along with other yield components was recorded at harvest at 14% seed moisture content. Sampling was done from an area of 1 m2 in each plot to determine above-ground total dry weight (total biomass) and yield components. Panicles m−2 were counted manually. Filled grains of 10 randomly selected panicles were counted to determine the number of grains per panicle. Biomass (sum of straw dry weight and grain dry weight) was calculated using the grain and total dry weight of each treatment.
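The two formulas above translate directly into code; a minimal sketch:

```python
def weed_control_efficiency(weedy_dry_wt, treated_dry_wt):
    # WCE (%) = (x - y) / x * 100, with x the weed dry weight in the
    # weedy check and y the weed dry weight in the treated plot
    return (weedy_dry_wt - treated_dry_wt) / weedy_dry_wt * 100.0

def weed_index(weed_free_yield, treatment_yield):
    # WI (%) = (a - b) / a * 100, with a the yield in the weed free plot
    # and b the yield under the treatment being evaluated
    return (weed_free_yield - treatment_yield) / weed_free_yield * 100.0
```

For example, a treated plot carrying 15 g m−2 of weed dry weight against 100 g m−2 in the weedy check gives WCE = 85%.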
Economics
All the costs incurred for different field operations (tillage, seeding, irrigation, application of fertilizers and chemicals, harvesting and post-harvest operations) along with input costs (seeds, fertilizers and chemicals) were computed and summed to obtain the total variable cost of cultivation. Sale prices of grain and straw, based on prevalent market prices, were summed for each treatment to calculate the total revenue from the sale of produce as gross returns. Net returns for each treatment were calculated by deducting the variable cost of cultivation from gross returns. The ratio of gross returns to the total variable cost of cultivation was taken as the benefit-cost ratio (B:C ratio), i.e., the return per US$ of investment.
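The bookkeeping described above can be sketched as follows; the prices and costs in the example are hypothetical placeholders, not the study's actual figures:

```python
def economics(grain_yield_t_ha, straw_yield_t_ha, grain_price, straw_price,
              variable_cost):
    # Gross return = value of grain + value of straw (per ha);
    # net return = gross - variable cost; B:C ratio = gross / variable cost
    gross = grain_yield_t_ha * grain_price + straw_yield_t_ha * straw_price
    net = gross - variable_cost
    return {"gross": gross, "net": net, "bc_ratio": gross / variable_cost}

# Hypothetical example: 4.9 t/ha grain at US$ 220/t, 5.5 t/ha straw at
# US$ 25/t, against a variable cost of US$ 599/ha
res = economics(4.9, 5.5, 220.0, 25.0, 599.0)
```

The B:C ratio reads directly as return per US$ invested, matching the definition used in the text.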
Energy
The energy consumption was calculated by multiplying the amount of each input by its unit energy equivalent, as in Ziaei et al. [31]. From the energy input and output, the net energy, energy use efficiency, specific energy and energy productivity were computed following [32,33]:

Energy input (MJ ha−1) = Ehl + Epr + Emt
Energy output (MJ ha−1) = Emp + Ebp
Net energy (MJ ha−1) = Energy output − Energy input
Energy use efficiency = Energy output / Energy input
Specific energy (MJ kg−1) = Energy input / Grain yield
Energy productivity (kg MJ−1) = Grain yield / Energy input

where Ehl, Epr and Emt refer to energy from human labour, energy from power, and energy from materials such as seed, fertilizer, chemicals and irrigation, respectively, and Emp and Ebp refer to energy from the main product and the by-product, respectively. Energy input and output are expressed in MJ ha−1 and rice yield in kg ha−1.
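The four indicators reduce to simple ratios of the energy totals. A minimal sketch, using as an illustration the drill-seeding totals quoted in the Results (Table 6) together with the drill-seeding mean yield:

```python
def energy_indices(energy_input, energy_output, grain_yield):
    # energy_input, energy_output in MJ/ha; grain_yield in kg/ha
    return {
        "net_energy": energy_output - energy_input,             # MJ/ha
        "energy_use_efficiency": energy_output / energy_input,  # dimensionless
        "specific_energy": energy_input / grain_yield,          # MJ/kg
        "energy_productivity": grain_yield / energy_input,      # kg/MJ
    }

# Drill seeding (two-year mean): ~10,355 MJ/ha in, ~137,306 MJ/ha out,
# ~4,900 kg/ha grain yield
idx = energy_indices(10355.0, 137306.0, 4900.0)
```

With these inputs the energy use efficiency comes out near 13 (output roughly 13 times the input) and the specific energy near 2.1 MJ per kg of grain.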
Statistical Analyses
Treatment × year interactions were non-significant for almost all parameters; therefore, data from both years were pooled for analysis and averages are presented. Data were analysed using analysis of variance (SAS EG 4.3) and treatment means were compared based on the least significant difference (LSD) test at p ≤ 0.05. Weed density and dry weight data were subjected to square-root transformation and the transformed values were used in the analysis. Correlations of weed dry weight, panicle number m−2, number of grains per panicle, grain yield, B:C ratio and energy productivity of rice were determined using SAS EG 4.3.
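The transformation and mean-separation steps can be sketched as follows; the additive constant 0.5 and the example numbers are common conventions assumed here, not values stated in the paper:

```python
import math

def sqrt_transform(count, c=0.5):
    # sqrt(x + c) transformation commonly applied to count data (such as
    # weed density) before ANOVA to stabilize the variance
    return math.sqrt(count + c)

def lsd(t_critical, mse, replications):
    # Least significant difference between two treatment means:
    # LSD = t * sqrt(2 * MSE / r), with MSE from the appropriate error term
    return t_critical * math.sqrt(2.0 * mse / replications)
```

Two treatment means differing by more than the LSD value are declared significantly different at the chosen probability level; in a split-plot design the MSE and its degrees of freedom differ between main-plot and subplot comparisons.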
Weed Composition and Weed Species Dominance Pattern
The weed flora in the experimental plots comprised a mixed population of grasses, sedges and broadleaved weeds (Table 1). Among grasses, the dominant weeds were Echinochloa colona (L.) Link, Leptochloa chinensis (L.) Nees and Digitaria sanguinalis (L.) Scop. Cyperus difformis L. was the only sedge species present in the experimental plots. Among broadleaved weeds, the following species were present: Sphenoclea zeylanica Gaertn., Eclipta prostrata L., Alternanthera philoxeroides Griseb., Phyllanthus niruri L., and Ammannia baccifera L. The weed species dominance pattern was similar at 30 and 60 DAE except for Cyperus difformis L., which showed greater dominance during 2016 over L. chinensis and D. sanguinalis.
Weed Density
Rice establishment methods and weed control treatments significantly influenced weed density (Table 2). Among the establishment methods, at both 30 and 60 DAE, weed density was highest in broadcast seeding plots, followed by manual line-seeding plots, and lowest in drill seeding plots. At 30 DAE, weed density was 16% and 36% lower in drill seeding than in manual line-seeding and broadcast seeding, respectively. At 60 DAE, a similar trend was observed, with 14% and 26% lower weed density in drill seeding plots relative to the manual line-seeding and broadcast seeding methods, respectively. Among weed control treatments, irrespective of rice establishment method, weed density at 30 and 60 DAE varied in the following order (averaged across years and stages): weedy (69 plants m−2) > bensulfuron-methyl plus pretilachlor (42 plants m−2) > bispyribac-sodium (35 plants m−2) > azimsulfuron (24 plants m−2). All the herbicides reduced weed density compared to the weedy check, by 39% for bensulfuron-methyl plus pretilachlor, 49% for bispyribac-sodium, and 65% for azimsulfuron. There was no interaction effect of establishment methods and weed control treatments on total weed density at 30 DAE, but the interaction was significant at 60 DAE (Table 2; p = 0.049), suggesting that herbicide effects varied with rice establishment method. For example, under the weedy treatment, weed density decreased in the following order: broadcast seeding > manual line-seeding > drill seeding. However, in the herbicide-based treatments (bispyribac-sodium, azimsulfuron and bensulfuron plus pretilachlor), weed density did not differ between the broadcast and manual line-seeding treatments, but it was 28-31% lower in drill seeding plots relative to broadcast seeding.
Weed Dry Weight
Similar to weed density, weed dry weight was also influenced by rice establishment methods and weed control treatments (Table 3). At 30 DAE, weed dry weight in drill seeding and manual line-seeding plots was 24% and 16% lower than in broadcast seeding, respectively. At 60 DAE, weed dry weight was 16% and 9% lower in drill seeding and manual line-seeding, respectively, compared to broadcast seeding. Among weed control treatments, at both 30 and 60 DAE, weed dry weight was lowest in plots treated with azimsulfuron and highest in the weedy check, decreasing in the following order: weedy check > bensulfuron-methyl plus pretilachlor > bispyribac-sodium > azimsulfuron. Compared to the weedy check, the reduction in weed dry weight at 30 DAE was 63, 66 and 82% in bensulfuron plus pretilachlor, bispyribac-sodium, and azimsulfuron-treated plots, respectively. A similar pattern was observed at 60 DAE, with reductions of 71, 75, and 85% in bensulfuron plus pretilachlor, bispyribac-sodium, and azimsulfuron-treated plots, respectively. The establishment method × weed control interaction was significant (Table 3; p = 0.029 at 30 DAE and 0.034 at 60 DAE). For example, in weedy plots, weed dry weight was highest in broadcast seeding, intermediate in manual line-seeding and lowest in drill seeding plots at both 30 and 60 DAE. At 30 DAE, the effect of azimsulfuron was consistent among the establishment methods, but the effects of bispyribac-sodium and bensulfuron plus pretilachlor varied. However, at 60 DAE, in the herbicide-based treatments, weed biomass did not differ among rice establishment methods within each herbicide treatment.
Weed control efficiency (WCE)
Weed control efficiency (WCE) of the different weed management treatments under different rice establishment methods at 30 and 60 DAE is shown in Figure 1. The results showed that WCE was highest in azimsulfuron-treated plots irrespective of the DSR establishment method. These results indicate that late POST application of azimsulfuron at 17 DAE performed better (15-19% higher WCE at 30 DAE and 10-14% higher WCE at 60 DAE) than early POST application of bispyribac-sodium and bensulfuron plus pretilachlor at 10 DAE and 3 DAE, respectively.
Rice Grain Yield
Rice grain yield was significantly influenced by both establishment methods and weed control treatments (Table 4). Based on the two-year average, rice grain yield was highest in drill seeding (4.9 t ha−1), followed by manual line-seeding (4.5 t ha−1), and lowest in broadcast seeding (3.9 t ha−1). Among herbicide treatments, azimsulfuron-treated plots recorded the highest grain yield (5.2 t ha−1) irrespective of rice establishment method, followed by bispyribac-sodium (4.9 t ha−1), with the lowest in bensulfuron plus pretilachlor (4.4 t ha−1). There was a drastic reduction in grain yield (58%) in weedy plots relative to the weed-free check. Bensulfuron plus pretilachlor was found to be the least effective herbicide option in dry DSR, with a 15% yield reduction compared to azimsulfuron and a 20% yield reduction compared to weed free plots. In azimsulfuron and bispyribac-sodium treated plots, yield reductions relative to weed free plots were only 5 and 11%, respectively.
A significant interaction effect of rice establishment methods and weed control treatments was recorded on grain yield (Table 4). In weed free and azimsulfuron-treated plots, yields of drill seeding and manual line-seeding were similar but higher than those of broadcast seeding. However, in the other treatments (bispyribac-sodium, bensulfuron plus pretilachlor, and weedy), yield varied with rice establishment method in the following order: drill seeding > manual line-seeding > broadcast seeding.
Yield Components
The differences in rice grain yield were reflected in the yield components. Yield components such as panicle number m−2 and number of grains panicle−1 were significantly influenced by rice establishment methods and weed control treatments (Table 4). However, the interaction of establishment methods × weed control treatments was not significant for these yield components. The highest number of panicles m−2 was recorded in drill seeding (251), and it was reduced by 10% in manual line-seeding and 17% in broadcast seeding as compared to drill seeding. Among the weed control treatments, the highest panicle number m−2 was recorded in weed free plots (270); it was similar to that of azimsulfuron-treated plots but higher than the other treatments. Among the herbicide-treated plots, panicles m−2 were similar in azimsulfuron and bispyribac-sodium treated plots, but in bensulfuron-methyl plus pretilachlor treated plots the number was 14% lower than with azimsulfuron. Herbicide treated and weed free plots had 24-56% more panicles m−2 than the weedy check.
Grains panicle−1 did not differ between drill seeding and manual line-seeding but were 18% higher in drill seeding than in broadcast seeding (Table 4). Among weed control treatments, grains panicle−1 were similar in weed free, bispyribac-sodium and azimsulfuron-treated plots, but in bensulfuron-methyl plus pretilachlor treated plots grains were lower than in weed free and azimsulfuron-treated plots. Grains panicle−1 were 32-38% lower in weedy plots compared to the other weed control treatments.
Economic Analysis
Based on the two-year average, the economic analysis showed that the cost of cultivation was higher in manual line-seeding (US$ 631 ha−1) than in drill seeding (US$ 599 ha−1) and broadcast seeding (US$ 577 ha−1) among the rice establishment methods (Table 5). Cost of cultivation did not differ among the herbicide-based treatments, but in the weedy check the cost of cultivation was US$ 49-54 ha−1 lower than in the herbicide-based treatments and US$ 249 ha−1 lower than in the weed free check.
BPS: bispyribac-sodium (30 g a.i. ha−1); AZM: azimsulfuron (35 g a.i. ha−1); BSM + Pretl.: bensulfuron-methyl + pretilachlor (70 + 700 g a.i. ha−1); NS: no significant difference; * 1 US$ = 66.90 INR (15.12.2015). Data with the same superscripted lower case letters in a column are not significantly different by LSD at p ≤ 0.05.
Among the establishment methods, significantly higher net and gross returns were found in drill seeding than in manual line-seeding and broadcast seeding (Table 5). Drill seeding resulted in US$ 138 and US$ 87 ha−1 higher net return relative to manual line-seeding and broadcast seeding, respectively. Among weed control treatments, net return was lowest in the weedy check and highest in azimsulfuron-treated plots, varying in the following order: azimsulfuron (US$ 801 ha−1) > bispyribac-sodium (US$ 709 ha−1) > weed free (US$ 649 ha−1) > bensulfuron + pretilachlor (US$ 586 ha−1) > weedy check (US$ 73 ha−1).
Energy Balance
Energy requirements differed among rice establishment methods as well as among herbicide treatments. Energy input ranged from 10,292 MJ ha−1 in broadcast seeding to 10,355 MJ ha−1 in drill-seeded plots, and from 10,169 MJ ha−1 in weedy plots to 11,032 MJ ha−1 in the weed-free check (Table 6). Significantly higher energy output was recorded in drill seeding (137,306 MJ ha−1) than in the manual line-seeding and broadcast seeding methods. Among the herbicide-treated plots, energy output was significantly higher in azimsulfuron-treated plots than in bispyribac-sodium and bensulfuron-methyl plus pretilachlor treated plots. The energy balance (net energy) showed the same trend as energy output under the different establishment methods and weed control treatments. The highest energy use efficiency (EUE) was recorded in drill seeding (13%) and it was significantly lower in the other two establishment methods (11-12%). Similarly, azimsulfuron-treated plots showed the significantly highest EUE (14%) compared with the other weed control treatments (6-13%). Energy productivity showed a similar trend to EUE. Among the establishment methods, specific energy was significantly higher in drill seeding than in manual line or broadcast seeding. Among weed control treatments, it was significantly higher in azimsulfuron, bispyribac-sodium, and weed-free treatments compared to bensulfuron-methyl plus pretilachlor and weedy check treatments.
Table 6. Energy balance and productivity as influenced by establishment methods and weed control treatments.
Discussion
Heavy weed pressure, the occurrence of several flushes of weeds and the lack of an available weed control strategy are some of the critical factors behind the low adoption of DSR in India over the decades. This demands the development and deployment of a proper weed control strategy for DSR. The availability of new POST herbicides with broad-spectrum weed control ability during the critical period of crop-weed competition has opened opportunities for DSR in recent times. However, the crop establishment method plays an important role in weed emergence, its subsequent growth and the choice of herbicides for its control; economics and energy requirements are also influenced by establishment methods and the type of weed control options.
Effect on Weeds
The highest total weed density and weed dry weight were observed in broadcast seeding compared to the manual line-seeding or drill seeding methods. The possible reason for the higher weed incidence in broadcast seeding relative to drill seeding may be the uneven/non-uniform crop stand in broadcast seeding compared to line-seeding manually or by drill. Ichikawa [34] also found severe weed pressure at the early stage in broadcast seeding due to uneven and poor crop establishment, which resulted in higher crop-weed competition in comparison to spot seeding and row seeding. Uniform crop establishment, resulting from the congenial micro-environment of the rhizosphere in the drill-seeded crop [22], and fast initial growth favoured the rice crop in competing with weeds and helped in smothering the weed flora.
Among the tested herbicides, the significantly higher weed density and dry weight in the bensulfuron-methyl plus pretilachlor treated plots might be due to poor control of grasses, particularly late-emerging ones. The density of grasses was very high in these plots at 60 DAE (data not shown). Bensulfuron-methyl and pretilachlor are reported to be very effective in transplanted rice where a mixed population of weeds occurs [35]. Higher efficacy of bensulfuron-methyl plus pretilachlor was found in an earlier study in wet DSR during the dry season [36]. Bensulfuron-methyl alone is recommended in rice against many annual and perennial broadleaved weeds and sedges [37]. Luo et al. [38] reported that a carbon source such as sodium lactate is responsible for rapid degradation of bensulfuron-methyl, making it less effective on weeds at later stages. Another report showed rapid degradation of bensulfuron-methyl after repeated application owing to adaptation of soil bacteria that can utilize bensulfuron-methyl as a source of carbon and energy [33].
Regardless of rice establishment method, azimsulfuron provided better weed suppression than the other two tested herbicide combinations (Table 3). Suppression of grasses (weed control efficiency 98.5%) along with complete control of sedges and broadleaved weeds by azimsulfuron was also reported by Saha et al. [4]. In this study, azimsulfuron performance was not affected by rice establishment method, but the performance of the other herbicide treatments varied with establishment method. This differential response could be attributed to the differential performance of herbicides at different levels of weed density, as reported for different dry DSR establishment methods. The result suggests that azimsulfuron was less affected by the differential density in different establishment methods, whereas the performance of bispyribac-sodium and bensulfuron-methyl plus pretilachlor was reduced under the higher density in broadcast seeding. Mahajan and Chauhan [39] reported higher efficacy of azimsulfuron in row-seeded rice over other herbicides (pendimethalin, bispyribac-sodium and fenoxaprop-p-ethyl). Bispyribac-sodium, the most widely used herbicide for control of grasses in rice, was found less effective against late-emerging L. chinensis. Many studies have reported poor control of L. chinensis by bispyribac-sodium [5,40]. Bispyribac-sodium shows minimal translocation and a large amount is retained in the treated plant leaves [41], which indicates that the residue left in the soil is absorbed by the roots of weed species only if they have extensive roots. This may be the reason for the relatively lower efficacy of bispyribac-sodium compared to azimsulfuron.
The higher efficacy of azimsulfuron compared to bispyribac-sodium and bensulfuron-methyl plus pretilachlor across the rice establishment methods confirmed the effectiveness of azimsulfuron in suppressing weeds even at late vegetative stages in DSR. Gradual and persistent degradation of azimsulfuron in soil might have helped in suppressing the weeds for a longer period of time. The slow degradation of azimsulfuron was aided by the neutral pH (6.8) of the experimental soil [42]. Pinna et al. [43] reported faster degradation of azimsulfuron in acid soils compared to neutral and slightly alkaline soils. Moreover, in unflooded soils, azimsulfuron was characterized as exhibiting moderate to high persistence [44], which is generally associated with a higher residual effect of azimsulfuron on weed species. The persistence of bispyribac-sodium is low to moderate in un-puddled fields (DSR), whereas it shows moderately higher persistence in flooded paddy soils [45]. This indicates the capability of bispyribac-sodium to control weeds for longer periods in transplanted rice than in dry direct seeded rice. The high weed control efficiency of azimsulfuron in DSR indicated that herbicide efficacy was further influenced by crop establishment technique. Bispyribac-sodium was applied as early as 10 DAE, when the crop was too small to cover the space between the plants, which led to its rapid photo-transformation and photo-degradation, enabling the weeds to emerge in a second flush. Suppression of late flushes of weeds led to the higher efficacy of azimsulfuron. The reverse was true for bensulfuron-methyl plus pretilachlor, which was applied as an early post-emergent herbicide and completely failed to control the weeds in dry DSR. The main reason could be poor control of late flushes of weeds after an early degradation of this herbicide [24,38].
Manual seeding combined with azimsulfuron was found effective; however, it needs to be used with caution. Azimsulfuron belongs to the sulfonyl-urea class of herbicides (ALS inhibitors), which is associated with a very high potential to develop herbicide resistance in weeds. Knezevic et al. [46] and Palou et al. [47] have reported development of herbicide resistance in as little as 3-4 years. Therefore, herbicide rotation is highly recommended. The use of azimsulfuron is not intended to replace the existing herbicides but simply to add another option for farmers to choose from on a need basis.
Rice grain Yield, Economics and Energetics
The higher rice grain yields in drill seeding and manual line-seeding compared to the broadcast seeding method were mainly because of a higher rice panicle number m−2 and, to a lesser extent, more grains panicle−1. The lower yield in broadcast seeding could be attributed partly to a poor and uneven crop stand and, indirectly, to the higher weed incidence that an uneven crop stand allows. Similarly, grain yield under the weed control treatments was linked with weed control efficiency: yield was highest in the weed-free plots, where there was no crop-weed competition, followed by azimsulfuron, where weeds were effectively controlled, and yields were lower in the weedy check and bensulfuron-methyl plus pretilachlor, where weeds competed with the crop because of poor weed control in these treatments.
The lowest weed index (WI) in drill seeding indicated relatively less competition offered by weeds in drill seeding than in manual line and broadcast seeding. This might be due to better rice establishment at the early vegetative stage when sown by the seed-cum-fertilizer drill. The drill ensures placement of rice seeds and fertilizer, which favoured young rice seedlings in establishing better than in manual and broadcast seeding, where basal fertilizer was applied by broadcasting during final land preparation. The higher WI in bispyribac-sodium and bensulfuron-methyl plus pretilachlor treated plots compared with azimsulfuron indicated comparatively poor efficacy of early POST herbicides in dry DSR, which resulted in a higher percentage of yield reduction owing to the presence of weeds in the field during the critical period of competition.
The higher net return and B:C ratio in drill-seeded rice compared to the broadcast method, despite a slightly higher cost of cultivation, were attributed to higher grain yields. The higher net income in the drill seeding method than in the manual line-seeding method was attributed to the combination of higher yields and lower cost of cultivation. Although rice yield was significantly higher in the weed-free check than in all the herbicide treatment plots, the net return was lower than in azimsulfuron- and bispyribac-sodium-treated plots due to the higher cost of production from engaging more labour for weed control in the weed-free treatment. The B:C ratio of the weed-free check was the lowest in comparison to all the herbicide-treated plots. This indicates that herbicides were the most economical way to control weeds in DSR fields. There were negligible differences in the cost of cultivation among the herbicide treatments, but azimsulfuron-treated plots showed relatively higher net returns, 11% and 27% higher than bispyribac-sodium and bensulfuron-methyl plus pretilachlor treatment plots, respectively. This was mainly because of the higher yield resulting from better weed control in azimsulfuron-treated plots. Thus, selection of a suitable herbicide is one of the most important criteria for weed control in dry DSR. This information could be very useful to farmers.
A similar trend was observed for energy use efficiency and energy productivity. The higher EUE under drill seeding indicated that sowing with a seed-cum-fertilizer drill is the best way of establishing rice under dry direct seeding, ahead of manual line and broadcast seeding. The higher EUE in azimsulfuron treatments showed it to be the best way of controlling weeds under dry direct seeding. The energy productivity obtained under the different establishment methods indicated that for each unit of energy consumed in the field, 0.47, 0.42 and 0.38 yield units were achieved in drill seeding, manual line-seeding and broadcast seeding, respectively. Thus drill seeding showed its advantage over the other two establishment methods. The same was true for azimsulfuron-treated plots, which achieved much higher yield units than the other weed control treatments. Specific energy is the inverse of energy productivity; hence its lower values show that less energy was used for the production of each yield unit. So establishment of the crop by drill seeding and weed control by azimsulfuron were superior for rice production under dry direct seeding in terms of both energy productivity and specific energy.
Conclusions
Selection of a proper crop establishment method and application of an appropriate herbicide have a remarkable influence on weed control and crop yield. Establishing the rice crop using drill seeding was effective in attaining higher rice grain yield, B:C ratio and energy use efficiency in dry DSR when weeds were kept under control by late POST application of azimsulfuron (35 g ha−1) at 17 DAE. Manual seeding combined with azimsulfuron was effective in achieving higher rice yield compared to the normal practice of broadcast seeding and weed control by bispyribac-sodium during the dry season in the coastal plain areas of eastern India. However, the economic advantage and energy productivity were much higher under drill seeding. The information generated from this study should encourage farmers to grow DSR and realize higher profitability in the coastal plain zone of India.
SDR of a weed species = [Relative density (RD) + Relative dry weight (RDW)] / 2
where RD = (Density of a given species / Total density) × 100
RDW = (Dry weight of a given species / Total dry weight) × 100
Energy input = … + Epr + Emt
Energy output = Emp + Ebp
Net energy = Energy output − Energy input
Energy use efficiency = Energy output / Energy input
Specific energy = Energy input / Yield of rice
Energy productivity = Yield of rice / Energy input
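As a quick check of these formulas, a minimal Python sketch is given below. The energy input and output figures are taken from the drill-seeding column of Table 6, but the 4,800 kg ha−1 grain yield is an assumed, illustrative number (not reported in this excerpt), so the derived values are only indicative.

```python
def summed_dominance_ratio(density, dry_weight, total_density, total_dry_weight):
    """SDR of a weed species = (relative density + relative dry weight) / 2."""
    rd = density / total_density * 100          # relative density, %
    rdw = dry_weight / total_dry_weight * 100   # relative dry weight, %
    return (rd + rdw) / 2

def energy_metrics(energy_input, energy_output, grain_yield):
    """Energy indicators; inputs/outputs in MJ/ha, grain yield in kg/ha."""
    return {
        "net_energy": energy_output - energy_input,             # MJ/ha
        "energy_use_efficiency": energy_output / energy_input,  # dimensionless
        "specific_energy": energy_input / grain_yield,          # MJ per kg grain
        "energy_productivity": grain_yield / energy_input,      # kg grain per MJ
    }

# Drill-seeding energy figures from Table 6; the yield is an assumed value
m = energy_metrics(energy_input=10355, energy_output=137306, grain_yield=4800)
```

With these inputs the sketch reproduces the order of magnitude of the reported indicators (EUE around 13, energy productivity around 0.46 kg MJ−1).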
Table 2 .
Effect of establishment methods and weed control treatments on weed density (plants m −2 ) at 30 and 60 days after emergence (DAE) at Cuttack, Odisha (Average of 2015 and 2016) § .
BPS-Bispyribac-sodium (30 g a.i. ha−1); AZM-Azimsulfuron (35 g a.i. ha−1); BSM + Pretl.-Bensulfuron-methyl + Pretilachlor (70 + 700 g a.i. ha−1); NS: not significant difference; † Weed free-weed density was not recorded since weeds were removed manually at 15, 30, 45 and 60 DAE; § Means are separated by least significant difference (LSD). * Within each timing, means with the same lower-case letter in a column are not significantly different using LSD0.05; ** Within each timing, means with the same upper-case letter in a row are not significantly different using LSD0.05. Data in bold are mean values of main plot and sub-plot treatments.
Table 3 .
Effect of establishment methods and weed control treatments on weed dry weight (g m−2) at 30 and 60 days after emergence (DAE) at Cuttack, Odisha (average of 2015 and 2016) §.
BPS-Bispyribac-sodium (30 g a.i. ha−1); AZM-Azimsulfuron (35 g a.i. ha−1); BSM + Pretl.-Bensulfuron-methyl + Pretilachlor (70 + 700 g a.i. ha−1); † Weed free-dry weight was not recorded since weeds were removed manually at 15, 30, 45 and 60 DAE; § Means are separated by least significant difference (LSD). * Within each timing, means with the same lower-case letter in a column are not significantly different using LSD0.05; ** Within each timing, means with the same upper-case letter in a row are not significantly different using LSD0.05. Data in bold are mean values of main plot and sub-plot treatments.
Table 4 .
Effect of establishment methods and weed control treatments on yield components and grain yield of rice (Average of 2015 and 2016) § .
Table 5 .
Economics of dry direct seeded rice as influenced by different establishment methods and weed control treatments.
"year": 2021,
"sha1": "c6eb499bb70873ac4d3c72616b60805e68f37adb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/11/2/389/pdf?version=1614255484",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "54e62b5300e37d35f5cb004ab0c7b0c23e19f220",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
The impact of body mass index on treatment outcomes for patients with low-intermediate risk prostate cancer
Background Little is known about the relationship between preoperative body mass index and the need for adjuvant radiation therapy (RT) following radical prostatectomy. The goal of this study was to evaluate the utility of body mass index in predicting adverse clinical outcomes that require adjuvant RT among men with organ-confined prostate cancer (PCa). Methods We used a prospective cohort of 1,170 men with low-intermediate risk PCa who underwent radical prostatectomy and evaluated the effect of body mass index on adverse pathologic features and freedom from biochemical failure (FFbF). Clinical and pathologic variables were compared across the body mass index groups using an analysis of variance model for continuous variables or χ2 for categorical variables. Factors related to adverse pathologic features were examined using logistic regression models. Time to biochemical recurrence was compared across the groups using a log-rank survivorship analysis. Multivariable analysis predicting biochemical recurrence was conducted with a Cox proportional hazards model. Results Patients with elevated body mass index (defined as body mass index ≥25 kg/m2) had higher rates of extraprostatic extension (p = 0.004) and positive surgical margins (p = 0.01). Elevated body mass index did not correlate with preoperative risk groupings (p = 0.94). However, when compared with non-obese patients (body mass index <30 kg/m2), obese patients (body mass index ≥30 kg/m2) were much more likely to have adverse pathologic features (p = 0.006). In patients with low- and intermediate-risk disease, obesity was strongly associated with the rate of pathologic upgrading of tumors (p = 0.01 and p = 0.02, respectively). After controlling for known preoperative risk factors, body mass index was independently associated with ≥2 adverse pathologic features (p = 0.002), an indicator for adjuvant RT, as well as with FFbF (p = 0.001).
Conclusions Body mass index of ≥30 kg/m2 is independently associated with adverse pathologic features, which is an indicator for additional RT, particularly in patients with low-intermediate risk disease. Future studies may determine if this select group of patients may be best treated with definitive RT to reduce toxicity from additional RT following radical prostatectomy. We propose including body mass index in clinical decision-making for appropriate treatment recommendation for patients with low-intermediate risk PCa.
Background
Obesity, one of the most pressing issues facing the U.S. healthcare system, is a potentially modifiable risk factor for disease progression and poor outcomes in numerous diseases, including prostate cancer (PCa) [1]. Specifically, associations between increased body mass index (BMI) and advanced prostate tumor stage and grade at diagnosis, younger age at diagnosis, and biochemical failure (disease recurrence) after treatment have been observed [2-4]. While investigators study the underlying mechanisms that link obesity to poor PCa outcomes [5-8], understanding how BMI may influence treatment recommendations is a critical aspect of ongoing PCa care.
The current guidelines for patients with organ-confined PCa include definitive modalities such as radical prostatectomy (RP) or radiation therapy (RT) [9-12]. RP is a standard surgical management for clinically localized PCa in patients free of surgical contraindications. This procedure confers excellent 10-year long-term disease control of >90% in patients who are confirmed pathologically to have localized (pT2) disease. Retrospective studies reported that the long-term outcomes of patients with localized and low-risk PCa were equally favorable with RP or external beam radiation therapy [13,14]. For intermediate- and high-risk disease, however, monotherapy with either RP or RT did not achieve the excellent long-term outcomes seen in patients with low-risk disease [15,16]. For pT3 cancer (defined as disease with extraprostatic extension or seminal vesicle involvement), the risk of 5-year local failure and biochemical progression varies from 20% to 70% [17,18]. Several randomized studies for patients with pT3 (with or without positive margin) or pT2 (with positive surgical margin) disease have been reported, demonstrating that adjuvant RT reduces the risk of local relapse and biochemical progression and improves disease-specific survival [19-22]. Despite earlier cancer detection with serum PSA screening, approximately 50% of patients who undergo RP are found to have at least one adverse pathologic feature (APF), including advanced tumor grade/stage, positive margins/lymph nodes, extraprostatic extension and seminal vesicle invasion [23]. These patients may require adjuvant RT.
Several studies have shown increased genitourinary and gastrointestinal toxicity from additional RT after RP [22,24-26]. In the Southwest Oncology Group trial, adverse events were more likely to occur in the RP + RT arm than in the RP arm (23.8% vs 11.9%), including urethral strictures (17.8% vs 9.5%), total urinary incontinence (6.5% vs 2.8%), and rectal complications (3.3% vs 0%), respectively [25]. A study on the health-related quality of life (HRQOL) of PCa patients compared short- and long-term effects of adjuvant treatment versus observation after RP [26]. The investigators reported that the addition of RT to RP resulted in more frequent urination, as well as early reports of more bowel dysfunction. Another HRQOL study in patients treated with multimodality therapy for PCa reported a decline in HRQOL, particularly in urinary function, urinary bother and sexual function [24]. Therefore, the ability to preoperatively identify the subset of patients who are at risk of requiring additional RT after RP would be of clinical utility. These patients may benefit from upfront definitive RT to improve quality of life and minimize additional toxicity from a combination of RP followed by RT. To date, the most widely utilized predictors of clinical outcomes, including PSA, Gleason score (GS) and clinical stage, are sub-optimal in predicting adverse pathologic outcomes and adjuvant RT use following RP. Over the last decade, a large body of evidence has emerged associating obesity with the incidence of PCa [27-29] as well as adverse outcomes following treatment. Recent studies found increased BMI to be associated with aggressive PCa and biochemical failure [30-34]. However, no studies have examined the relationship between preoperative BMI and the need for adjuvant RT following RP in patients with adverse pathologic outcomes.
We sought to determine whether BMI provides a clinically useful prediction of adverse pathologic outcomes that will guide physicians in recommending RT for select patients with organ-confined PCa. Obesity, in particular, has been related to a number of factors and molecular pathways that may advance cancer progression [35]. We hypothesize that obesity status modifies the relationship between preclinical risk and PCa outcomes among low-intermediate risk patients. The study aims were to use a cohort of radical prostatectomy patients to 1) examine the relationship between obesity and adverse pathology, and 2) examine the relationship between obesity and FFbF.
Patient population
This study utilizes a cohort of 1970 men with PCa treated with RP and bilateral pelvic lymph node dissection at the Hospital of the University of Pennsylvania Health System (UPHS; Philadelphia, PA.) Patients were consented in person and recruited at UPHS to participate in a PCa study, the Study of Clinical Outcomes, Risk and Ethnicity (SCORE) between 1990 and 2012 as previously described [36,37]. This study was approved by the Institutional Review Board at the University of Pennsylvania.
The SCORE study includes information on patient age, race, height, weight, clinical stage, clinical Gleason score on diagnostic biopsy, preoperative PSA levels, and surgical pathologic information (tumor grade, stage, surgical margin status, extraprostatic extension, seminal vesicle involvement, and lymph node status). Prospective follow-up was conducted with PSA levels obtained at each visit. For the purpose of this study, patients without height and weight data for BMI calculation were excluded from the analysis (N = 506). Patients without adequate preclinical data, including initial PSA (N = 30) or biopsy Gleason score (N = 264) at diagnosis, were excluded from the analysis. Patients who received androgen deprivation therapy (ADT) or adjuvant RT and/or ADT were included. The remaining 1,170 patients were analyzed in this study.
Data collection
The standard protocol for men in the SCORE study was as follows: Patients were evaluated at time of diagnosis by a thorough history and physical examination (including digital rectal examination [DRE]) followed by routine laboratory studies, including serum PSA levels, and GS determined by needle biopsy and reviewed at the UPHS. The patients were examined 1 month postoperatively and then at 3 month intervals for 1 year, every 6 months for 5 years, and then annually. At each follow up visit a complete evaluation, including DRE and serial PSA values, were determined and recorded. Biochemical recurrence (PSA failure) was defined as a single PSA ≥0.2 ng/ml or when two consecutive PSA values of 0.2 ng/ml were obtained after an undetectable value. Time zero (the starting point for follow-up) was defined at the date of surgery for all patients. If PSA was never undetectable postoperatively, then PSA failure was assigned at time zero. Patients with no follow up data were included for the evaluation of differences in preoperative and pathologic characteristics, but not biochemical recurrence.
Data related to patient and clinical characteristics, tumor pathology, and PCa outcomes were collected via medical record abstraction. All patients were staged according to the 1992 American Joint Committee on Cancer staging system [38].
Treatment
Surgical treatment consisted of a radical retropubic prostatectomy and bilateral pelvic lymph node sampling or robotic-assisted laparoscopic prostatectomy. Adverse pathologic features (APF), such as extraprostatic extension (EPE), seminal vesicle invasion (SVI), and surgical margin status (SM), were noted and recorded. At the discretion of the treating physician, patients with APF including EPE, SVI or positive surgical margins were treated with adjuvant RT and/or ADT. ADT consisted of a gonadotropin-releasing hormone agonist (leuprolide acetate or goserelin acetate) with or without antiandrogens (flutamide). The SCORE study is a prospectively maintained database with patients treated from the 1990s until 2012. For this reason the year of prostatectomy was recorded and introduced into our modeling to account for difference in pre-PSA era of diagnosis and improvements in surgical treatment techniques that may impact APFs.
Risk classification
Preoperatively, patients were stratified into low-, intermediate- and high-risk groups according to the recent National Comprehensive Cancer Network (NCCN) guidelines [39]. Patients who had T1 to T2a tumors, and a Gleason score <7, and a PSA level <10 ng/mL were classified as low risk (N = 777); patients who had T2b to T2c tumors, and/or a Gleason score of 7, and/or a PSA level between 10 ng/mL and 20 ng/mL were classified as intermediate risk (N = 270); and patients who had >T3 tumors, or a Gleason score between 8 and 10, or a PSA level >20 ng/mL were classified as high risk (N = 117) [38]. Following RP, patients were further stratified by the number of APFs into low, intermediate and high RP-risk groups. Patients with no APFs were in the low RP-risk group (N = 818); patients with only 1 APF were in the intermediate RP-risk group (N = 177); and patients with ≥2 APFs were in the high RP-risk group (N = 175).
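The preoperative grouping rules above can be expressed as a small function. This is only a sketch of my reading of the stated criteria (the stage ordering, the function name and the tie-breaking between "and/or" conditions are assumptions, not code from the study):

```python
def nccn_risk_group(t_stage, gleason, psa):
    """Classify a patient by the preoperative rules described above.

    t_stage: clinical stage string, e.g. "T1c", "T2a", "T2b", "T2c", "T3a".
    gleason: biopsy Gleason score (int). psa: ng/mL (float).
    Returns "low", "intermediate", or "high".
    """
    stage_order = ["T1a", "T1b", "T1c", "T2a", "T2b", "T2c", "T3a", "T3b", "T4"]
    idx = stage_order.index(t_stage)
    # High risk: T3 or beyond, or Gleason 8-10, or PSA > 20 ng/mL
    if idx >= stage_order.index("T3a") or gleason >= 8 or psa > 20:
        return "high"
    # Intermediate risk: T2b-T2c, and/or Gleason 7, and/or PSA 10-20 ng/mL
    if idx >= stage_order.index("T2b") or gleason == 7 or psa >= 10:
        return "intermediate"
    # Low risk: T1-T2a, Gleason < 7, PSA < 10 ng/mL
    return "low"
```

For example, a T1c tumor with Gleason 6 and PSA 5 ng/mL falls in the low-risk group, while the same tumor with a PSA of 12 ng/mL moves to intermediate risk.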
Statistical analysis BMI
For the purpose of this study, BMI (weight in kilograms divided by height in meters squared) was categorized as follows: normal weight (<25 kg/m2), overweight (≥25 kg/m2 to <30 kg/m2), and obese (≥30 kg/m2). BMI was examined both as a continuous variable and as a categorical variable, in which case BMI was stratified into non-obese (<30 kg/m2) or obese (≥30 kg/m2).
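A minimal sketch of this BMI computation and categorization (the function names are my own; the cut-points are those stated above):

```python
def bmi(weight_kg, height_m):
    """BMI = weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def bmi_category(bmi_value):
    """Three-level categorization used in this study."""
    if bmi_value < 25:
        return "normal"      # < 25 kg/m2
    if bmi_value < 30:
        return "overweight"  # 25 to < 30 kg/m2
    return "obese"           # >= 30 kg/m2

def is_obese(bmi_value):
    """Binary non-obese / obese split used in the categorical analyses."""
    return bmi_value >= 30
```

For example, a 70 kg patient 1.75 m tall has a BMI of about 22.9 kg/m2 and is classified as normal weight.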
Other patient/clinical variables
Age, PSA, year of surgery, and biopsy Gleason score were examined as continuous variables. Clinical T-stage (T1c, T2a, T2b, and T2c) and race (white, African-American/black, and other) were examined as categorical variables.
Clinical and pathologic variables were compared across the BMI groups using an analysis of variance model for continuous variables or χ2 for categorical variables. Factors associated with the presence of APF were examined using logistic regression models. The predictability of BMI was evaluated using the more stringent criterion of ≥2 APFs in order to rigorously select the patients most likely to be offered additional RT. For Cox proportional hazards models predicting FFbF, patients without biochemical recurrence, including those lost to follow-up or deceased, were censored. Because treatment outcomes often correlate with biochemical control rates, a PSA rise to 0.2 ng/ml was used to define biochemical disease recurrence, and time to biochemical recurrence was used as a surrogate for biochemical disease-free survival. Time to biochemical recurrence was compared across the groups using a log-rank survivorship analysis. For both univariate and multivariate analyses, BMI, race, clinical stage, and clinical Gleason score were evaluated as categorical variables as follows: categorical BMI used BMI <25 kg/m2 as the reference category; race used White as the reference category ("other" race was dropped from the model due to small numbers); categorical clinical stage (T1; T2a, T2b; >T2c) used T1c as the reference category; categorical clinical Gleason score (6, 7, ≥8) used Gleason 6 as the reference category. The analyses were conducted using STATA statistical software version 13.0 (STATA Corporation). A p-value <0.05 was considered statistically significant.
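For illustration, the two-group log-rank statistic used in the survivorship comparison can be computed from first principles. This is a simplified, stdlib-only sketch (the study itself used STATA), assuming right-censored data and the standard chi-square form of the test:

```python
def logrank_statistic(times_a, events_a, times_b, events_b):
    """Two-group log-rank chi-square statistic (1 degree of freedom).

    times_*: follow-up times; events_*: 1 = biochemical recurrence, 0 = censored.
    """
    data = [(t, e, 0) for t, e in zip(times_a, events_a)] + \
           [(t, e, 1) for t, e in zip(times_b, events_b)]
    event_times = sorted({t for t, e, _ in data if e})
    observed_minus_expected = 0.0
    variance = 0.0
    for t in event_times:
        # Numbers at risk just before time t, and events at time t, per group
        n_a = sum(1 for ti, _, g in data if ti >= t and g == 0)
        n_b = sum(1 for ti, _, g in data if ti >= t and g == 1)
        d_a = sum(1 for ti, e, g in data if ti == t and e and g == 0)
        d_b = sum(1 for ti, e, g in data if ti == t and e and g == 1)
        n, d = n_a + n_b, d_a + d_b
        if n < 2:
            continue
        observed_minus_expected += d_a - d * n_a / n
        variance += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return observed_minus_expected ** 2 / variance
```

A large statistic (compared against a chi-square distribution with 1 df) indicates that the two groups' recurrence experiences differ; for two identical samples the statistic is exactly zero.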
Results
The patient clinical and pathologic characteristics are listed in Table 1. Preoperative factors such as age, PSA at diagnosis, biopsy Gleason score, clinical T-stage and year of RP were similar across BMI categories, except for race: African American/Black race was associated with elevated BMI (p < 0.001). There were statistically significant differences between postoperative pathologic features and BMI, specifically extraprostatic extension (p = 0.001), positive surgical margins (p = 0.01), and higher pathologic Gleason score (p = 0.001).
As shown in Table 2, BMI was not associated with preoperative risk groupings (p = 0.94). However, obesity (BMI ≥30 kg/m2) directly correlated with an increased risk of APFs (p = 0.006). The effect of BMI on outcomes by preoperative risk grouping was evaluated as per the National Comprehensive Cancer Network classification. Obesity (BMI ≥30 kg/m2) was strongly associated with a higher rate of pathologic Gleason score upgrading of tumors, particularly for low risk (Fig. 1a-b, 30% vs. …) (Table 3). Although African American/Black race was associated with elevated BMI, race was not associated with adverse pathologic outcomes in the logistic regression models.
Using multivariate Cox proportional hazards modeling, the significant predictors of risk for FFbF following RP were determined (Table 4). After adjusting for the other preclinical factors, BMI ≥30 kg/m² (HR 2.56; 95 % CI 1.24 to 5.29; p = 0.01) remained a predictor of biochemical recurrence.
Discussion
In the current study, further evidence was provided to suggest that BMI is a strong predictor of APF and biochemical recurrence following RP as monotherapy, particularly in patients with low- and intermediate-risk PCa. Specific groups of PCa patients with localized disease at presentation may be at increased risk for disease progression and related PCa-specific mortality due to pre-treatment patient phenotype (e.g., obesity) or post-treatment adverse pathologic features (e.g., positive surgical margins, seminal vesicle invasion, extracapsular extension). Multimodal treatment techniques have been employed to increase recurrence-free survival among localized high-risk patients [40], and may be useful to treat a subset of patients with lower-risk disease characteristics but elevated phenotypic risk factors. Although low- and intermediate-risk patients often are treated with monotherapy, obese men are a patient population with unique disease features and medical needs that may require a more aggressive treatment approach, including adjuvant RT.

Fig. 1 Rate of Gleason score upgrade in (a) non-obese and (b) obese men undergoing radical prostatectomy at the University of Pennsylvania, 1990-2012. Abbreviation: GS, Gleason score. Gleason score upgrade represents upgrading from score 6 or 7 to 8-10. Obese: BMI ≥30 kg/m²; non-obese: BMI <30 kg/m².
Obesity and PCa outcomes
Previous studies on the association between BMI and the risk of developing PCa have provided mixed results. Obesity has been shown to increase the risk of poor PCa outcomes in several studies [2,[41][42][43]. Recent studies have analyzed the relation between BMI and PCa risk stratified by clinical stage and Gleason score at diagnosis. These studies consistently showed that elevated BMI positively correlated with increased risk of higher Gleason grade or higher stage disease and negatively correlated with low Gleason grade and stage of disease [27,28,44]. Previous results suggest that obesity is associated with higher grade tumors, increased risk of positive surgical margins, higher rates of biochemical failure, and risk for PCa-specific mortality [1,2,33,[45][46][47]. In multivariate analyses, obesity also has been associated with significant tumor upgrading and upstaging among preoperative low-risk patients, which increases risk for biochemical failure among this patient population [48][49][50][51][52].
However, not all studies support a relationship between poor PCa outcomes and obesity [53][54][55]. Studies often differ in the number of obese men, sample population demographics and study methodology, making it difficult to compare across studies. Further complicating the relationship between obesity and PCa are diagnostic and treatment obstacles associated with obesity that make it more likely that cancer will progress and that treatments will fail in obese patients due to technical difficulty rather than biological processes [41,[56][57][58]. Obese men are less likely than non-obese men to have abnormal PSA results and undergo biopsy, potentially affecting timely diagnosis. At the time of biopsy, larger prostate glands may make it more difficult to detect and accurately stage cancer [41,59,60]. It is also not clear whether the relationship between obesity and treatment failure is due to aggressive disease biology or to technical limitations. Potency and continence rates after treatment are similar among weight groups, so technically inferior operations do not account fully for differences in treatment failure [1]. Pelvic surgery in general is more technically challenging in obese patients. Obesity has been associated with 30 % higher odds of capsular incision, a surrogate for a poor technical operation. However, some patients who received poor technique do well after surgery, and others who experienced apparently fine surgical technique still experience biochemical failure [61].
Treatment guidelines for low and intermediate risk patients
Treatment outcomes for patients with low- and intermediate-risk disease have been inconsistent, in part due to tumor heterogeneity and inaccuracies in staging [62,63]. For this reason, low-risk patients have recently been reclassified into a very low-risk group (active surveillance eligible) and a low-risk group, and there are ongoing discussions to re-classify intermediate-risk patients into low- and high-intermediate risk groups [64]. Therefore, the ability to preoperatively identify low- or intermediate-risk patients with elevated BMI who are at highest risk of biochemical failure after RP as monotherapy would be very useful in guiding upfront treatment recommendations. Perhaps these patients may be best treated with combination therapy (surgery and RT), or other approaches such as definitive RT with or without hormonal therapy to improve disease control [40]. The current treatment guidelines recommend that patients with ≥1 APF be offered RT adjuvantly, or as part of a salvage regimen upon a detectable rise in PSA above 0.2 ng/ml following RP. Adjuvant RT has been shown in randomized trials to improve PSA relapse-free survival [19,22,65], distant metastasis-free survival and overall survival [66], compared to observation. Despite these results, referral patterns for additional RT for these patients remain very low. In fact, less than 20 % of qualifying patients in the United States actually receive adjuvant RT [67][68][69][70], suggesting that many clinicians are reluctant to deliver RT after RP. In our study, only 2 % of the entire cohort had documented treatment with additional RT. However, this result may be an underestimation and should be interpreted with caution, since a good number of patients could undergo RP at UPHS and then receive RT locally; RT information for these patients may not be accurately captured in our database. The primary reasons for withholding post-RP RT include increased treatment-related toxicity and potentially overtreating patients with RT who may never have recurred after RP [71].
It is estimated that approximately 50 % of patients with APF will never experience biochemical failure. Therefore, patients are offered "active surveillance" post-RP, and RT is only recommended at the earliest signs of PSA failure, termed "early salvage". However, whether early salvage RT is equivalent to adjuvant RT is a topic of current investigation [29]. Unlike RT post-RP, the use of ADT for patients in this setting is even less standardized, since physicians often recommend ADT for a number of reasons, including attempting to reduce the prostate size prior to surgery or at the earliest signs of PSA failure after surgery.
Currently, the decision to recommend definitive radiation treatment for patients with low- to intermediate-risk prostate disease is often based on many factors, including patient preference and/or preexisting comorbid conditions that preclude surgery. However, in patients with no contraindications for surgery, the decision for RT or RP is largely driven by age, genitourinary toxicity, and the desire to preserve sexual function [71]. The ability to identify patients with low- to intermediate-risk disease yet at increased risk for APF as well as biochemical failure will enable clinicians to better counsel patients on the treatment option that provides the best disease control with minimal side effects. The existing preclinical factors used to predict APF and biochemical outcomes are suboptimal. In this report, elevated BMI was identified as a preclinical factor that is independently associated with adverse pathologic outcomes as well as biochemical recurrence, particularly in patients with low- to intermediate-risk disease or ≤1 adverse pathologic feature. Therefore, incorporating BMI into the current predictive models may show promise in identifying the group of patients with low- to intermediate-risk disease who are likely to experience biochemical recurrence following RP as monotherapy. These patients could be best treated with definitive RT with or without hormones, thus sparing them the added toxicity of requiring additional RT after RP. Further studies are required to develop and validate the predictive performance of BMI using an independent patient cohort.
Study limitations
It is important to emphasize that results from this study cannot be extrapolated to imply that RP is suboptimal for obese men, since not all patients with elevated BMI experience adverse pathologic outcomes or biochemical recurrence after surgery. Although elevated BMI was associated with increased positive surgical margins, BMI was still associated with adverse pathologic outcomes after adjusting for margin status. This suggests that poorer surgical outcomes did not account for worse pathologic outcomes in obese men. Therefore, patients with elevated BMI may harbor a biologically more aggressive PCa. Limitations of this study were that important measures of obesity, such as waist-to-hip ratio and percent lean body fat, were unavailable. Information on the biologic factors that may contribute to the effect of elevated BMI on disease aggressiveness and treatment outcomes could not be evaluated, since blood biosamples were not obtained at the time of surgery. Furthermore, the median follow-up for the study cohort was relatively short.

Availability of data and materials
The SCORE database is housed at the University of Pennsylvania and is available via collaborative agreement or request through a Materials Transfer Agreement between institutions. Currently, the database is not available in any publicly accessible repository.
Authors' contributions
KY developed the manuscript concept and study design, analyzed data and drafted the manuscript. CMZJ helped draft the manuscript and helped with data collection. AJ revised the manuscript. BM revised the manuscript. ES helped with data collection and data management. JYP revised the manuscript. AW revised the manuscript. TRR helped with data collection and interpreted results. All authors read and approved the final manuscript.
Massive open online course for Brazilian healthcare providers working with substance use disorders: curriculum design
Background: Interpersonal and technical skills are required for the care of people living with substance use disorders. Considering the applicability and usability of online courses as continuing professional education initiatives, this study aimed to describe the content design process of an introductory-level healthcare-centered Massive Open Online Course (MOOC).

Methods: The content of the course was informed through needs assessment, using three sources: (a) a narrative literature review, (b) a Delphi health experts panel consensus, and (c) focus groups conducted with people living with substance use disorders. The data from the empirical research phases were analyzed through qualitative Thematic Analysis.

Results: The product of this research project is the introductory-level Massive Open Online Course "Healthcare: Developing Relational Skills for the Assistance of People Living with Substance Use Disorders", which approaches health communication and empathetic relational professional skills as a means of reducing stigmatization of people living with substance use disorders.

Conclusions: Diverse strategies for designing distance education initiatives have to consider different views on the subject being approached in such courses. The product presented in this paper has the potential to be an educational tool for topics traditionally not addressed in Brazilian continuing education and can be used as a model for the design of online courses directed to the development of work-related skills for the healthcare professions.
competencies for working with patients with substance use disorders. To build rapport, the literature suggests that relational abilities like empathy and communication skills are essential for promoting behavior change [9,10].
Since practitioners from different health care settings work with people with substance use disorders, it is important to understand the role of training programs in approaching the development of professional competencies needed for providing mental health services [11]. Training is critical to the development of a diverse and well-trained workforce with the skills needed to care for people living with substance use disorders [12]. Therefore, contextually and empiricallybased training programs designed to develop such skills are required [13,14].
Courses targeting work-related skills have the potential for higher enrollment and completion rate [15,16]. Skill training can be defined as a proposal for the development of a series of discrete elements that build complex behavior and that leads or helps build a relationship between the health professional and patients by addressing when and how to use certain communication strategies [17]. Considering this definition, it becomes necessary to define training strategies, using different applied approaches that allow the development of target skills [18,19].
Needs assessment is essential for planning continuing educational programs and can contribute to the practical application of content to professional activities [20]. Educational needs are generalized to the target audience considering standards of practice [20] and should be drawn from diverse perspectives [21]. The approach chosen for the purposes of the present study was the development of an introductory-level Massive Open Online Course (MOOC) as an accessible tool for health professionals and social workers with little or no previous experience in the course content.
The course content was developed using evidence-based information on stigmatization processes related to substance use disorders and their negative effects, and on the role of health communication strategies and empathy in the assistance of people living with substance use disorders. The content was adapted to the structure of the healthcare system in Brazil, considering the low availability of continuing education courses targeting variables such as communication skills and empathy for the development of therapeutic relationships with people living with substance use disorders.
This study describes the content design process of a MOOC by combining current scientific literature and inputs from experts on the alcohol and other drugs and people living with substance use disorders undergoing treatment in a community-based facility.
Research design
The course design was based on guidelines for intervention development proposed by Kok and collaborators [14] and participatory research [22] as a needs assessment approach.
The ADDIE instructional design approach (i.e., Analysis, Design, Development, Implementation, and Evaluation) [23] was also used as a guideline for course design and multimedia content creation. The ADDIE process is commonly used in the planning of instructional courses having a student-centered approach. The phases are: (a) Analysis, a needs assessment phase, where a field or topics worth pursuing as a training program are identified, (b) Design, a research phase in which the strategies that will be employed to achieve the learning objectives are defined, (c) Development, instructional course content construction characterized as drafting, production and evaluation, (d) Implementation, course dissemination to the target audience and (e) Evaluation, operationalized during the implementation phase as a process evaluation. This paper focuses on curriculum development applying the Analysis, Design and Development phases described above.
Considering the need for the feasibility of the content produced to the care context of people living with substance use disorders, consultation from various sources is essential to include different views of what accomplishes for quality care [22]. We divided the creation of the course syllabus into three studies (phases) carried out separately and described in Table 1.
To conduct the studies, we used as reference the guidelines for qualitative research published by the American Psychological Association (APA) [27] and the directions provided by Salmon [28]. The content of the MOOC was organized as courseware. Four modules were developed, namely: (1) Drug Policies and Substance-related Care in the Brazilian Context, (2) Attitudes related to People living with Substance Use Disorders, (3) Health Communication Skills for Professionals in the Substance Use Field and (4) Empathy as a Therapeutic Tool. Visuals were added to the materials, which were proofread for spelling and references.
Study 1
A narrative review about the stigmatization processes, healthcare providers communication and empathyrelated skills for the assistance of people living with substance use disorders.
Study design
The literature review was conducted in order to summarize the scientific evidence and contextual content used for theoretical and practical decisions related to the course.
We performed a narrative review on epidemiology, policies and the Brazilian healthcare system for substance use disorders, the stigmatization process of substance use as a health condition and the professional skills needed to work with people living with substance use disorders [5]. This study was conducted as the Analysis and Design phases of the ADDIE instructional design approach [23].
Procedures
Searches for academic literature published in English or Portuguese were performed in the PubMed, PsychINFO and Virtual Health Library Brazil databases, as well as on official pages of the Brazilian National Drug Policy Secretariat, Brazilian National Health Department and the Brazilian Federal Council of Psychology.
Medical Subjects Headings (MeSH), Thesaurus from APA and Health Sciences Descriptors (DeCS, Brazilian acronym) were consulted to refine the search terms used in the review. The search words are described in Table 2.
In conducting the literature research, no date limits were established, but considering the relevance of potential references, recent publications were favored in an attempt to use relevant current information for course content development.
Research articles and theoretical material were included, which were: a) applicable information for clinical work regarding people living with substance use disorders, b) published by peer-reviewed journals, organizational publishers or that included theoretical frameworks systematically used in the substance use disorders field and c) written in Portuguese or English.
The review was extended by consulting the reference section of documents. Since the theoretical approach used in the course was defined as Patient-Centered Care, theoretical and empirical articles specific to this theme were searched and evaluated.
Study 2
Delphi expert consensus about treating people living with substance use disorders.
Study design
This study was characterized as a mixed-method Delphi study to survey experts' input on practical content to be included in the course. The Delphi method is a way of eliciting and refining information from a group [24,25].
First, qualitative data analysis was performed on experts' opinions and ideas collected by open-ended interviews. Then, the same experts evaluated the analyzed data by answering questions on a Likert scale.
Participants
Twenty healthcare professionals with experience in substance use disorders were invited by e-mail. E-mail addresses were obtained from the literature analyzed in Study 1. Invitations were made to achieve diversity among participants in terms of profession and location. Eight responses were collected in the first round and six in the second.
Procedures: Round 1
An online survey was made available in SurveyMonkey® (SurveyMonkey Inc., San Mateo, California, USA, https://surveymonkey.com/), containing an electronic informed consent form, open-ended questions, and a sociodemographic questionnaire. The questions included: (4) When you look at your colleagues working in the substance use disorders field, which attitudes do they exhibit? (5) In your opinion, which professional attitudes involved in caring for people living with substance use disorders should be displayed? (6) What is the best way to communicate with people living with substance use disorders?
Data analysis: Round 1
Once the professionals' answers were received, they were qualitatively analyzed through Thematic Analysis procedures by the author EPM and a research assistant. For the purposes of this study, rich thematic description strategies were performed, through which a broader variety of themes is extracted from the data [29]. The NVivo® 12 software (QSR International Pty Ltd., Victoria, Australia) was used to assist in the data analysis.
Procedures: Round 2
Closed-questions were produced for the themes and subthemes generated by the qualitative data analysis.
Participants were asked to evaluate the proposed subthemes in terms of relevance and suitability for inclusion in online training for health and social care professionals with an interest in the substance use disorders field. The response options, regarding the suitability criterion, were: very inadequate, inadequate, neither inadequate nor adequate, adequate and very adequate. Regarding the relevance criterion, the following options were available: very irrelevant, irrelevant, neither irrelevant nor relevant, relevant and very relevant.
This procedure was performed to reach consensus among the consulted professionals [24,25] and also used as a means to validate the qualitative analysis process performed in round one. In other words, the second round of procedures for the Delphi method facilitates consensus and is classified as a means of rigor and trustworthiness for the qualitative analytic process, since the themes extracted from the data collected in the first round are evaluated by the same participants who answered the open-ended questions in the first place.
Data analysis: Round 2
We performed a descriptive data analysis on participants' agreement on the relevance and suitability of the extracted themes. For reporting purposes, the mean concordance for each theme was calculated based on the information given for the subthemes composing a theme.
There were five response options for each criterion; for presentation purposes, they were collapsed into binary categories. For the suitability criterion, "very inadequate", "inadequate" and "neither inadequate nor adequate" were considered inadequate, while "adequate" and "very adequate" were considered adequate. For the relevance criterion, "very irrelevant", "irrelevant" and "neither irrelevant nor relevant" were considered irrelevant, while "relevant" and "very relevant" were considered relevant. The subthemes with an agreement percentage of 60% or more on both criteria were included in the course syllabus.
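As a minimal sketch, the binarization and inclusion rule described above can be expressed in code. This is an illustrative reconstruction, not the authors' analysis script; all function names and the example ratings are invented for the demonstration.

```python
# Hypothetical sketch of the Round 2 consensus rule: five-point Likert
# responses are collapsed to binary categories, and a subtheme is retained
# when at least 60% of experts rate it positively on BOTH criteria.

POSITIVE = {
    "suitability": {"adequate", "very adequate"},
    "relevance": {"relevant", "very relevant"},
}

def agreement(ratings, criterion):
    """Fraction of experts whose rating falls in the positive binary category."""
    positive = POSITIVE[criterion]
    return sum(r in positive for r in ratings) / len(ratings)

def include_subtheme(suitability_ratings, relevance_ratings, threshold=0.60):
    """Apply the >=60% agreement rule on both criteria."""
    return (agreement(suitability_ratings, "suitability") >= threshold
            and agreement(relevance_ratings, "relevance") >= threshold)

# Example with six experts (the size of the second Delphi round):
suit = ["adequate", "very adequate", "adequate", "inadequate",
        "adequate", "neither inadequate nor adequate"]
rel = ["relevant", "very relevant", "relevant", "relevant",
       "irrelevant", "very relevant"]
print(include_subtheme(suit, rel))  # True: 4/6 and 5/6 both meet the 60% cut
```

The same helper could also be used to compute the per-theme mean concordance reported above, by averaging agreement over the subthemes that compose a theme.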
Study 3
Focus Group: Service users' perceptions about their treatment.
Study design
A focus group was designed to identify service users' perceptions of how they were welcomed and treated by the various professionals they met during their treatment history, in order to include these views in the course discussions. Research procedures regarding this study followed the recommendations of Krueger and Casey [26]. This study is the last of the three research strategies for the Analysis and Design phases proposed in the course instructional framework [23].
Participants
Patients undergoing treatment in a Psychosocial Care Center for people living with substance use disorders (CAPS AD, Brazilian acronym; a government public service) in a metropolitan region in the south of Brazil were invited to participate in the research activity by researchers in a face-to-face assembly held as part of the routine work of the data collection facility.
Seven participants accepted the research invitation and composed the focus group. A convenience sampling process was used, and the criterion for selection was interest in sharing information on how relationships with health professionals and social workers were formed during their treatment history. The sample was evaluated as sufficient considering the focus group research procedures described by Krueger and Casey [26].
Procedures
Because of their personal experience with the substance use disorder treatment process, people in treatment in psychosocial care network services are considered critical informants who can, therefore, contribute to the understanding of important aspects related to the healthcare of people living with substance use disorders.
The research objectives and procedures were presented to coordinators of the Psychosocial Care Center (CAPS AD) after a meeting with the multi-professional team. After consent, the focus group was conducted by the author -EPM -and a research assistant. The research assistant was responsible for the generation of the field diary, based on the observations made during the focus group. The discussion was recorded and later transcribed for qualitative data analysis purposes.
A semi-structured script was used and the following steps were followed: (1) presentation of the researchers and research objectives, (2) presentation of the informed consent form, requesting the signature of those who agreed to participate in the study, (3) self-presentation of the participants, including their treatment history, (4) questions that aimed to investigate the relationship established between participants and the various health and social care professionals with whom they have interacted during their treatments for substance use disorders, and (5) acknowledgment and group closure.
Data analysis
Thematic Analysis [29] of the focus group data was conducted using NVivo Pro 12 software (QSR International Pty Ltd., Victoria, Australia). As a strategy for analytic rigor and trustworthiness, the themes extracted from the focus group data were evaluated from a triangulation point of view: only the themes consistent with the patient-centered care approach dimensions [6] and in agreement with the themes extracted from the Delphi expert panel consensus were added as results of this study.
Study 1
Considering the research articles, the official manuals published in Brazil concerning the care of people living with substance use disorders, and materials such as books and theses, four modules were developed: (1) Drug Policies and Substance-related Care in the Brazilian Context, (2) Attitudes related to People living with Substance Use Disorders, (3) Health Communication Skills for Professionals in the Substance Use Field and (4) Empathy as a Therapeutic Tool. Table 3 contains the module contents, the number of articles included as references in the MOOC and their range of publication dates.
Study 2
Participant description.
The sociodemographic characteristics of the experts recruited for the Delphi panel are described in Table 4.
Rounds 1 and 2
The results of Study 2 are the themes and subthemes extracted through Thematic Analysis from the information provided by the Delphi panel experts in round 1 and evaluated for suitability and relevance for inclusion in the course content. These results are presented in Table 5.
Study 3
The results that emerged from the focus group with people living with substance use disorders highlight the core role that community health services, and the relationship established between health professionals and service users, play in achieving better health outcomes. Themes extracted from this research method are described in Table 6 and were used to tailor the theoretical and practical contents included in the MOOC.
Curriculum as a main result
The main product of this research paper is the introductory-level MOOC "Healthcare: Developing Relational Skills for the Assistance of People Living with Substance Use Disorders", built as a continuing education initiative. The MOOC is composed of videos, modules and assignments, and Table 7 contains the course syllabus.
Discussion
The overall aim of the MOOC is to improve healthcare quality through the use of evidence-based care strategies [30] that are also endorsed by specialists acting in the Brazilian alcohol and other drugs field. The content of the introductory-level MOOC was identified using three strategies: (a) narrative review, (b) Delphi consensus, and (c) focus groups. These strategies were used as complementary approaches, tapping the diverse perspectives of researchers, health professionals and people living with substance use disorders undergoing treatment. The results obtained from them are consistent with each other, enabling the development of scientifically sound and contextually applicable content. There is a need for evidence-based recommendations to meet the practical challenges of everyday healthcare practice, and online continuing education initiatives have to be built on relevant current information [31]. The literature on the development of continuing education initiatives emphasizes that online courses should be practical, assisting in enhancing work-related skills [32].
The literature on the topics included in the MOOC described in the present paper (stigma, health communication skills, and empathy) corroborates the themes and subthemes extracted from the qualitative research strategies implemented through the Delphi consensus and the focus group. From the Delphi consensus regarding the stigmatization process associated with substance use disorders, two themes can be emphasized: negative beliefs and positive attitudes. Stigma is characterized as labeling, stereotyping, and separation of others from oneself, leading to status loss and discrimination [33,34]. Subthemes extracted from the Delphi consensus were related to attitudes such as the view that the patient's personal characteristics are determinant of treatment, that abstinence is the only option, that substance use disorders are the cause of vulnerabilities, and a strictly medical view of the disorder. These are examples of the labeling and stereotyping components of the health condition-related stigmatization process [35]. These negative views are held not only by the general public but also by healthcare providers, and they have potentially harmful consequences for the assistance of people living with substance use disorders, undermining access to diagnosis, treatment and prevention practices and better health-related outcomes [36]. Stigma reduction strategies were associated with the positive attitudes cited by the consulted experts.

[Table 6. Themes from the focus group: technical professional knowledge about substance use disorders as an essential skill; attentive listening; professional participation in the treatment process, proving to be involved in the work process; healthcare as a safe space, where risk factors for substance use are minimal; building rapport with professionals, a collaborative relationship between health professional and patient; emotional support as a necessary aspect of treatment, with professional competency for approaching patient emotional distress.]
These strategies were also linked to the themes extracted from the focus groups. In a review of stigma reduction strategies conducted by Nyblade and coauthors [35], a large number of interventions (37 out of 42) targeted topics such as knowledge of the health condition, knowledge of stigma and the ability to manage the health condition. Professional competencies and communication skills extracted from our data were related to treatment customization, the importance of Occupational Therapy, the reception offered by service professionals, the clarity of service rules and attentive listening. These competencies are necessary to build rapport and to demonstrate the emotional support required for working in the substance use disorders field.
Synnot and coauthors [37] aimed to define priorities for the conduct of Cochrane Systematic Reviews, convening a targeted audience of stakeholders to discuss currently needed research initiatives. The results indicate that among the five topics extracted from the qualitative data is promoting patient-centered care, the theoretical approach underlying the online educational initiative described in this paper. In the same research, 12 priority problems were the starting point of the discussion, including the need for healthcare providers to better understand patient-centered concepts and practices, better ways of sharing information with patients, and the involvement of service users in the decision-making process, participating actively according to their priorities. We emphasize that all of these topics were covered in the MOOC presented in this paper.
The results presented here relate to the curriculum development process, in line with previous papers published in the literature (for example, [38]). A previous study comparing online and in-person educational strategies concluded that both initiatives were effective in generating readiness to practice in the field of alcohol and other drugs [39]. Thus, the hypothesis to be tested in a future study concerns the probable effectiveness of the course [40] in reducing stigmatizing attitudes towards people living with substance use disorders through the development of greater professional communication skills and an empathetic posture for planning and conducting care under a patient-centered approach.
At the time of this manuscript's submission, the MOOC was in the production phase by a university distance-education team. Once completed, the course will be disseminated among public and private Brazilian healthcare and social assistance facilities.
MOOCs have the potential to reach a broad audience, as they are self-paced and can be accessed at any time. These characteristics might improve the retention of students - especially health professionals, who often work long hours. Other educational initiatives in the health field have been operationalized as massive open online strategies in Latin America [41]. Educational tools influence the learning process and need to be adapted to the context of the target course audience. In that matter, various sources of information can contribute to the development of applicable content with evidence of effectiveness. The use of booklets, videos, and interactive tasks seeks to engage the audience, providing spaces for connected learning. Considering that work in the field of alcohol and other drugs should be done by multi-professional teams, these activities have the potential to enable the development of integrated work skills [32]. The learning exercises, such as discussion forums, were based on connectivism - a pedagogical approach in which interactions between learners contribute to knowledge acquisition [42].
Limitations
Regarding strategy 1, a narrative review without a systematic evaluation of the research literature was conducted. This type of review can be influenced by the researchers' subjective views when selecting references for inclusion in the MOOC content. For strategy 2, the Delphi consensus method, there was a significant sample loss between recruitment and inclusion: 20 participants were recruited and 8 agreed to participate, composing a non-representative convenience sample. As for the sociodemographic characteristics of the sample, no social workers were recruited, so participation was restricted to psychologists, doctors and occupational therapists, some of them working in social assistance facilities. The data collected from strategy 3, the focus group, were analyzed by one author and a research assistant, so there was no comparison basis for the results extracted from the qualitative corpus. Still, an attempt to reduce biases was made by using three research strategies drawing on diverse information sources.
Conclusions
Diverse strategies for designing distance-education initiatives have to consider different views on the subject approached in such courses. In the present paper, topics such as healthcare providers' stigmatizing views, communication skills and empathy in the field of alcohol and other drugs were researched through complementary perceptions, enabling the construction of scientifically based and practically applicable educational multimedia content, which will be evaluated in terms of its efficacy in future research.
The product presented in the present paper, the MOOC "Healthcare: Developing Relational Skills for the Assistance of People Living with Substance Use Disorders", has the potential to be an educational tool for topics traditionally not addressed in continued-education strategies in the Brazilian healthcare system, and can be used as a model for the design of online courses directed to the development of work-related skills for the healthcare professions.
Ciprofloxacin is an inhibitor of the Mcm2-7 replicative helicase
Most currently available small molecule inhibitors of DNA replication lack enzymatic specificity, resulting in deleterious side effects during use in cancer chemotherapy and limited experimental usefulness as mechanistic tools to study DNA replication. Towards development of targeted replication inhibitors, we have focused on Mcm2-7 (minichromosome maintenance protein 2–7), a highly conserved helicase and key regulatory component of eukaryotic DNA replication. Unexpectedly we found that the fluoroquinolone antibiotic ciprofloxacin preferentially inhibits Mcm2-7. Ciprofloxacin blocks the DNA helicase activity of Mcm2-7 at concentrations that have little effect on other tested helicases and prevents the proliferation of both yeast and human cells at concentrations similar to those that inhibit DNA unwinding. Moreover, a previously characterized mcm mutant (mcm4chaos3) exhibits increased ciprofloxacin resistance. To identify more potent Mcm2-7 inhibitors, we screened molecules that are structurally related to ciprofloxacin and identified several that compromise the Mcm2-7 helicase activity at lower concentrations. Our results indicate that ciprofloxacin targets Mcm2-7 in vitro, and support the feasibility of developing specific quinolone-based inhibitors of Mcm2-7 for therapeutic and experimental applications.
INTRODUCTION
As cancer cells demonstrate uncontrolled proliferation relative to most non-cancer cells, DNA replication has traditionally been an important target for cancer chemotherapy. Such therapeutics are frequently nonspecific and mutagenic, as they either chemically modify the DNA to block replication fork progression or trap deleterious Topo II (topoisomerase II)/DNA double-strand break intermediates [1]. Not surprisingly, these therapies have multiple toxic side effects (reviewed in [2]). Newer topoisomerase inhibitors, which inhibit the catalytic activity of the enzyme rather than trapping the toxic protein-DNA intermediate, show therapeutic promise [3], suggesting that compounds that specifically inhibit DNA replication enzymatic activity may be better suited as therapeutic agents. Moreover, enzyme inhibitors have had a long and important history in biochemical research, and their use has been an essential avenue to obtain critical mechanistic insight (e.g., the F1-ATPase [4]). As eukaryotic DNA replication is a complex process that is poorly understood at a mechanistic level, the development of targeted small molecule inhibitors of specific replication factors would be of significant research utility.

Abbreviations: TAg, T-antigen; Topo I, topoisomerase I; Topo II, topoisomerase II. 1 These authors contributed equally to this work. 2 To whom correspondence should be addressed (email schwacha@pitt.edu).
One potential therapeutic target is the Mcm2-7 (minichromosome maintenance protein 2-7) eukaryotic replicative helicase, a molecular motor that unwinds duplex DNA to generate ssDNA templates for replication. Unlike other replicative helicases, the toroidal Mcm2-7 complex is formed from six distinct and essential subunits, numbered Mcm2 through Mcm7 [5]. Each subunit is an AAA + ATPase, and the unique heterohexameric composition of this helicase is conserved throughout eukaryotic evolution (reviewed in [5]). Consistent with its vital function during DNA replication, Mcm2-7 is a key target of regulation, as its loading is a carefully controlled and limiting feature of replication initiation, whereas its cell cycle-dependent activation is a limiting feature of elongation [6]. The importance of its regulation is demonstrated by the observations that both specific mutations in Mcm2-7 [7] and overexpression of its subunits [8] cause cancer or contribute to tumorigenesis. Despite the potential of helicases as disease targets, only a few specific small molecule inhibitors of these enzymes have been identified [9-12]. To date, one compound, heliquinomycin, has been identified that inhibits a non-physiological Mcm subcomplex (Mcm467) [13] and decreases the proliferation of cancer cells in vitro [14], further suggesting that Mcm inhibitors may have therapeutic value.
Following examination of amino acid modifiers and small molecule ATPase inhibitors [4,10,11], we found that the commercially available fluoroquinolone antibiotic ciprofloxacin preferentially inhibits the in vitro helicase activity of the Saccharomyces cerevisiae Mcm2-7 complex. Ciprofloxacin also appears to target Mcm2-7 in cell culture, as it blocks proliferation of both yeast and human cells at concentrations that inhibit the purified enzyme, and a previously studied cancer-causing mutation in Mcm4 confers ciprofloxacin resistance [15]. Additional inhibitors of greater potency were identified among compounds structurally related to ciprofloxacin. Several of these agents exhibited increased selectivity towards Mcm2-7, whereas others had varying specificities against a range of unrelated helicases. These data suggest that (fluoro)quinolone-based compounds may provide a general scaffold for future development of helicase inhibitors with targeted specificity.
For initial small molecule inhibitor screening, a collection of 144 compounds was obtained from the DDC (Drug Discovery Center, University of Cincinnati, Cincinnati, OH) (Supplementary Table S1 available at http://www.bioscirep. org/bsr/033/bsr033e072add.htm). For follow-up experiments on selected inhibitors (Table 1), neat samples of each inhibitor were obtained from DDC or ChemBridge (compounds 924384 and 271327 correspond to ChemBridge 7473736 and 5281925, respectively) and stored as 100 mM stock solutions in DMSO. The purity of these compounds was either established by the manufacturer or was determined by the DDC using mass spectrometry and HPLC analysis and found to be >90-100 % in all cases (Table 1).
Biochemical assays
Helicase assays were performed essentially as described [16,19]. Synthetic replication forks were prepared by annealing oligos 233 and 235 [IDT (Coralville); oligo 233, 5′-(T)40 …]; annealing reactions were incubated for 30 min at 37 °C, and all other reactions were incubated at 37 °C for 1 h. The products were separated by 10 % (w/v) native PAGE, the resulting gels dried and the radioactivity quantified using a Fuji FLA-5100 phosphoimager. Irrespective of the protein used, all helicase assays contained equimolar helicase concentrations (100 nM, assuming in all cases that the active helicase form is hexameric). Steady-state ATP hydrolysis was assayed as published [17]. In short, reactions were set up essentially as in the helicase assay, with minor exceptions: a non-radiolabelled DNA fork was used, the helicase concentration was 100 nM (hexamer), the total ATP concentration was 500 μM and included ∼0.5 μCi of [α-32P]ATP, and the ATP regeneration system was omitted. Reactions were incubated for 1 h at 37 °C and stopped by the addition of SDS. ATP was separated from ADP by PEI (polyethyleneimine) thin-layer chromatography, and the ratio of ATP:ADP was quantified using a Fuji FLA-5100 phosphoimager. Based on our prior work [17], conditions were established to ensure that the results shown are within the linear range of the assay. Protein-ssDNA binding was determined with a double filter-binding assay using an ssDNA probe (oligo 826, 5′-TGTCTAATCCCGAAAGGCCCTGCCACTGAAATCAACACCTAAAGCATTGA) that was 5′-radiolabelled using T4 polynucleotide kinase and [γ-32P]ATP [16]. For the double filter-binding assay, the helicase concentration was 150 nM (hexamer) and the ssDNA concentration was 4 nM. For all biochemical assays, helicases were preincubated with inhibitors for 20 min at 37 °C unless otherwise indicated. Topo I (topoisomerase I) assays were performed as described [21]. Reactions (10 μl) contained 50 mM Tris/HCl (pH 8), 1 mM EDTA, 1 mM DTT, 20 % (v/v) glycerol and 50 mM NaCl.
pUC19 (50 ng; NEB) was incubated at 37 °C for 2.5 h with 4 units of Wheat Germ Topo I (Promega). Inhibitors were added at the indicated concentrations at either t = 0 or 90 min as described in the figure legends. Following incubation, topoisomers were separated via gel electrophoresis on a 1.0 % (w/v) agarose gel for 2 h at 8 V/cm in TAE (Tris/acetate/EDTA) buffer. After electrophoresis, the gel was stained with ethidium bromide and imaged with a Fuji LAS-3000. In all of the above assays, dilutions of the test compound were made with Milli-Q H2O and DMSO such that the final concentration of DMSO in the biochemical assays was 1 % (v/v), and the reported activity was normalized to solvent controls.
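The endpoint quantification described above reduces to simple arithmetic on phosphorimager counts. A minimal sketch (with hypothetical count values - the raw counts are not reported here) of how fraction unwound and solvent-normalized activity are typically computed:

```python
# Hypothetical example of endpoint helicase-assay quantification: fraction of
# substrate unwound from phosphorimager counts, normalized to the DMSO control.

def fraction_unwound(unwound_counts: float, duplex_counts: float) -> float:
    """Fraction unwound = unwound counts / (unwound + remaining duplex counts)."""
    return unwound_counts / (unwound_counts + duplex_counts)

def normalized_activity(sample_fraction: float, control_fraction: float) -> float:
    """Activity expressed as a percentage of the solvent (1 % DMSO) control."""
    return 100.0 * sample_fraction / control_fraction

control = fraction_unwound(8000.0, 2000.0)    # solvent control: 0.8 unwound
treated = fraction_unwound(2000.0, 8000.0)    # plus inhibitor: 0.2 unwound
print(normalized_activity(treated, control))  # prints 25.0 (% of control)
```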
Data analysis
Inhibition and the corresponding 95 % CIs (confidence intervals) from both the helicase assays and growth inhibition assays were plotted using GraphPad Prism Version 5.0f for Macintosh. The inhibitor concentrations were converted to Log10, and then non-linear regression was used to fit the data points with a sigmoidal dose-response curve [eqn (1)]:

y = ymin + (ymax − ymin) / (1 + 10^((LogIC50 − x) × Hill Slope))    (1)

where ymin is the minimum helicase activity, ymax is the maximum helicase activity, IC50 is the effective concentration of inhibitor that decreased helicase activity by 50 %, and the Hill Slope describes the steepness of the curve. In all cases, eqn (1) was constrained by subtracting the baseline from the data and normalizing all values to helicase activity in the absence of inhibitor; thus, ymin and ymax were 0 and 100 %, respectively. The software was also used to calculate the 95 % CIs and the quality of the fit (i.e., R2), and to perform the extra sum-of-squares F test to calculate P values for comparing LogIC50 values between curves. Differences were considered statistically significant when P < 0.05.
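The variable-slope model of eqn (1), with ymin = 0 and ymax = 100, can be sketched and fitted as follows (hypothetical data points; a crude grid search stands in for Prism's non-linear regression):

```python
# Fit eqn (1) - the variable-slope dose-response model constrained to 0-100 % -
# to hypothetical inhibition data. A grid search replaces Prism's regression.

def dose_response(x, log_ic50, hill):
    # y = 100 / (1 + 10^((LogIC50 - x) * HillSlope)); hill < 0 for inhibition
    return 100.0 / (1.0 + 10.0 ** ((log_ic50 - x) * hill))

# (log10 [inhibitor, M], % helicase activity) - illustrative values only
data = [(-5.0, 98.0), (-4.0, 86.0), (-3.5, 67.0),
        (-3.0, 39.0), (-2.5, 17.0), (-2.0, 6.0)]

def sse(log_ic50, hill):
    """Sum of squared residuals between data and the model."""
    return sum((y - dose_response(x, log_ic50, hill)) ** 2 for x, y in data)

best = min(((l / 100.0, h / 100.0)
            for l in range(-500, -100)       # LogIC50 from -5.00 to -1.01
            for h in range(-300, -10, 5)),   # Hill slope from -3.00 to -0.15
           key=lambda p: sse(*p))
log_ic50, hill = best
print(f"IC50 = {10 ** log_ic50 * 1e3:.2f} mM, Hill slope = {hill:.2f}")
```

The negative Hill slope encodes the downward inhibition curve; the fitted LogIC50 is the log concentration at 50 % residual activity.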
Experimental rationale
The goal of this study was to identify compounds that preferentially inhibit Mcm2-7. Prior work has demonstrated that the six S. cerevisiae Mcm2-7 ATPase active sites contribute unequally to ATP hydrolysis: three are particularly important for DNA unwinding and contribute the most to ATP turnover, whereas the other three contribute little to bulk ATP hydrolysis and appear to play a regulatory role [17-19]. To identify inhibitors that preferentially target one of these two sets of active sites, each inhibitor was tested on both the Mcm467 complex (an S. cerevisiae Mcm subcomplex that demonstrates helicase activity but lacks all of the regulatory sites) and Mcm2-7 (containing both types of active sites) [19].
Chemical modifiers and small molecule inhibitors that preferentially inhibit Mcm or TAg helicase activity
Initially, we tested the effects of both chemical modifiers and previously studied small molecule inhibitors on the helicase activities of Mcm2-7, Mcm467 and TAg by using an established, gel-based, endpoint DNA unwinding assay [19]. The incubation time of our standard assay (30 min) was doubled to eliminate or reduce the identification of weak inhibitors in the screen but remained in the linear range of the assay.
A variety of amino acid modifiers were initially tested. These chemical probes covalently modify carboxyl groups (carbodiimide derivatives EEDQ and DCCD), guanidyl groups (PG), amino groups (PP), phenol groups (Nbf) and thiol groups (NEM) and have previously been used to study the ATPase active sites in the F1-ATPase (reviewed in [4]).

Because TAg subunits oligomerize only in the presence of ATP [23], and ATP preincubation probably causes a conformational change in Mcm2-7 [16,19], we also tested the effects of the potential inhibitors after the proteins were preincubated with ATP (Figure 1B). Although this treatment had essentially no effect on either Mcm complex, it completely or partially protected TAg from all modifiers except Nbf (Figure 1B, treatment 6) and MAL2-11b (Figure 1B, treatment 8), suggesting that at least one effect of the other inhibitors may be to block TAg oligomerization.
Because helicase activity depends on ATP hydrolysis and ssDNA binding, the effects of the chemical modifiers and small molecules on both activities were examined. Using previously established steady-state ATP hydrolysis [17] and ssDNA filter-binding [16] assays, the effect of the same panel of small molecules on each of the three helicases was determined. With the exception of DCCD and ofloxacin, which failed to inhibit helicase activity, most of the remaining treatments severely inhibited the ATPase activities of all three helicases (Figure 1C). These data suggest that the inhibition of DNA unwinding is mediated by compromised function of one or several ATPase active sites.
However, these small molecules caused a less severe and variable decrease in TAg ssDNA binding regardless of the order of ATP addition. Conversely, Nbf, NEM and MAL2-11b did inhibit Mcm2-7 and Mcm467 ssDNA binding (Figure 1D, treatments 6-8). Ciprofloxacin stands in sharp contrast: even though it completely inhibited Mcm helicase activity, it had only modest effects on ATP hydrolysis and ssDNA binding of the three helicases (Figures 1C and 1D, treatment 10). Together, these results suggest that ciprofloxacin inhibits a step or steps specifically required for DNA unwinding, possibly through selective inhibition of the Mcm regulatory subunits. This possibility is explored further below.

Ciprofloxacin demonstrates selectivity towards the Mcm complexes

Interestingly, ciprofloxacin and ofloxacin inhibited Mcm2-7 and Mcm467 at millimolar concentrations, whereas the apparent IC50 of ofloxacin for TAg was much higher (>20 mM; Figure 2A). In contrast, nalidixic acid, the parent quinolone compound for both ciprofloxacin and ofloxacin, had essentially no effect on the activities of the three helicases at any concentration tested (results not shown).
A small molecule library screen for helicase inhibitors
We reasoned that other (fluoro)quinolone derivatives might show enhanced Mcm2-7 specificity at potentially lower inhibitor concentrations. As the fluoroquinolones are used as antibiotics (reviewed in [24]), prior drug discovery efforts have resulted in the synthesis of chemically diverse libraries modeled on key elements found in the basic fluoroquinolone scaffold. Therefore we investigated a 144-compound chemical library that contained either (fluoro)quinolone derivatives or molecules with various substructures found in ciprofloxacin and other marketed quinolones.
This library of 144 compounds was initially screened for inhibition of Mcm2-7, Mcm467 and TAg helicase activity at a final concentration of 1 mM (see Supplementary Table S1, available at http://www.bioscirep.org/bsr/033/bsr033e072add.htm, for chemical structures and a complete list of results). Of the compounds tested, 27 reproducibly inhibited at least one of the three helicases by at least 90 %. Both (fluoro)quinolone and triaminotriazine-like inhibitors were identified. Although a wide range of results were obtained, two general conclusions emerged from the data (Supplementary Table S1).
Select library compounds display greater potency and selectivity than ciprofloxacin
In addition to ciprofloxacin, seven representative compounds from among those described above were chosen for additional study, based upon potency, selectivity, reproducibility, dose-dependent effects and/or availability. Supplementary Figure S1 (available at http://www.bioscirep.org/bsr/033/bsr033e072add.htm) summarizes their effects on the DNA unwinding activity of TAg, Mcm2-7 and Mcm467, again at a final concentration of 1 mM. To provide a quantitative measure of inhibitor affinity and selectivity, fresh samples of known purity (>90 %) were obtained for each of the seven inhibitors, and the IC50 values for DNA unwinding were determined for all three helicases. In most cases, these compounds were either more potent or more selective than ciprofloxacin (Supplementary Figure S2 and Table S1 available at http://www.bioscirep.org/bsr/033/bsr033e072add.htm). Based on their differential inhibition of the three helicases, the inhibitors were classified into one of two groups:
General inhibitors
Inhibitors that had approximately equal effects on all three helicases include MAL2-11b (Figure 1A) and compounds 125248, 924384, 268973 and 388612 (Table 1). Interestingly, unlike any of the (fluoro)quinolones characterized, the triazole 924384 and the structurally related compound 388612 were more effective at inhibiting TAg than either Mcm complex (Table 1). The IC50 values for each of these compounds are similar to one another and ranged from ∼50 to 400 μM.
Mcm-selective inhibitors
Two inhibitors (271327 and 314850) fall into this category. The fluoroquinolone 271327 inhibited both Mcm complexes with an IC50 of ∼300-450 μM but had a negligible effect on TAg within the concentration range tested (Table 1). Although the limited solubility of 271327 prevented us from testing higher concentrations, we can conclude that its IC50 against TAg is at least an order of magnitude greater than that against the Mcm complexes.
In contrast, 314850 preferentially inhibited Mcm2-7 relative to Mcm467 but had little effect on TAg.
Mechanism of inhibition
As noted above, DNA unwinding is the culmination of a variety of simpler biochemical activities. Thus, the seven representative inhibitors and ciprofloxacin may function by physically interacting with the helicase, the DNA substrate, or the ATP. To understand how all eight inhibitors block helicase activity, their effects on steady-state ATP hydrolysis were measured (Figure 3A); most compounds had only modest effects on bulk ATP hydrolysis. These results suggest one of three possible scenarios: First, the inhibitors (with the possible exception of MAL2-11b) might not target the ATPase active sites. Secondly, the inhibitors may deregulate or uncouple the activity of the enzyme rather than block ATP hydrolysis. Thirdly, at least in the case of the Mcm2-7 complex, the inhibitors could preferentially target the ATPase active sites but be selective for the low-turnover regulatory sites. Although the second and third possibilities are difficult to distinguish, the first explanation can be tested. Although we cannot rigorously test for competitive inhibition using our helicase endpoint assay, we can test whether increased ATP concentration overcomes the inhibitory effects of these compounds (Figure 3B). Although doubling the ATP concentration in the absence of inhibitor caused a slight increase in helicase activity (1.5- to 2-fold; Figure 3B, treatment 0), in most cases doubling the ATP concentration in the presence of the inhibitors caused a much larger increase in activity (3- to 20-fold). These results suggest that the inhibitors disrupt ATPase active sites in the Mcm2-7 complex in some manner. In contrast, the inhibitory effects of 924384, MAL2-11b and 268973 could not be rescued by an increase in ATP concentration (Figure 3B, treatments 2-4), suggesting that these inhibitors operate independently of the ATPase active sites.
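The logic of the ATP-rescue experiment can be illustrated with textbook competitive-inhibition kinetics (a sketch with hypothetical parameter values, not an analysis from this study): a competitive inhibitor inflates the apparent Km, so doubling the substrate recovers disproportionately more activity than it does in the uninhibited reaction.

```python
# Michaelis-Menten rate with a competitive inhibitor (all values hypothetical):
# v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S])

def rate(s_mM, i_mM=0.0, vmax=100.0, km_mM=2.0, ki_mM=1.0):
    return vmax * s_mM / (km_mM * (1.0 + i_mM / ki_mM) + s_mM)

gain_no_inhibitor = rate(2.0) / rate(1.0)                    # doubling [S] alone
gain_with_inhibitor = rate(2.0, i_mM=5.0) / rate(1.0, i_mM=5.0)
print(round(gain_no_inhibitor, 2), round(gain_with_inhibitor, 2))  # prints: 1.5 1.86
```

Note that with simple hyperbolic kinetics, doubling [S] can at most double the rate; the 3- to 20-fold rescues reported above are therefore consistent with a more cooperative ATP dependence than this minimal model assumes.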
Because these compounds are also planar double-ring molecules, they could conceivably inhibit helicase activity via DNA intercalation. To examine this model, we tested our inhibitors in a standard topoisomerase assay [21]. The rationale of this assay is that intercalating compounds will introduce supercoils into a fully relaxed plasmid. Topo I will remove these introduced supercoils, but after quenching and gel electrophoresis the intercalator will diffuse away, producing a detectable compensatory increase in supercoiling.
Following plasmid relaxation, each inhibitor was added to 1 mM final concentration in the topoisomerase assay (Figure 3C, treatments 1-8). The general inhibitors 125248 (treatment 1), 924384 (treatment 2), 268973 (treatment 4) and 388612 (treatment 5) caused extensive DNA intercalation, whereas MAL2-11b (treatment 3) and the more Mcm-selective inhibitors (314850, 271327 and ciprofloxacin; treatments 6-8) demonstrated little or no intercalation (Figure 3C). However, a lack of apparent intercalation could also be caused by Topo I inhibition. To test this possibility, the assay was repeated under conditions in which Topo I and each inhibitor were added to the reaction at the same time. Under these conditions, Topo I inhibition will only yield supercoiled plasmids (Figure 3D). By this criterion, and comparing the results to Figure 3C, only MAL2-11b (Figure 3D, treatment 3) is a Topo I inhibitor. Although the general inhibitors can intercalate into dsDNA at 1 mM concentration (Figure 3C), in vitro helicase inhibition occurs at much lower inhibitor concentrations. When the intercalation assay was repeated at more modest inhibitor concentrations (2- to 3-fold over the calculated IC50 for helicase inhibition), only 125248 and 268973 continued to demonstrate significant DNA intercalation (Figure 3E, treatments 1 and 4). Thus, most of the tested inhibitors, including ciprofloxacin, do not appear to function through intercalation, suggesting that they more directly affect the helicase activity. To further define inhibitor selectivity, we examined their in vitro effects on representative helicases at 1 mM concentration (Supplementary Figure S3 available at http://www.bioscirep.org/bsr/033/bsr033e072add.htm). Inhibitors 125248, 924384 and 268973 (treatments 1-3) were the least specific, causing nearly complete inhibition of DnaB and T4 gp41. Interestingly, only one additional inhibitor (314850, treatment 6) effectively inhibited the SsoMcm complex.
This discrepancy may be due to the high assay temperature (65 °C) required to assess SsoMcm helicase activity [25]. Inhibitor 271327 (treatment 7) caused substantially less inhibition among the helicases tested than either 125248 or 924384. In contrast, none of the tested helicases were substantially inhibited by ciprofloxacin (Supplementary Figure S3, treatment 8). Combined with the IC50 data summarized in Table 1, these results indicate that Mcm2-7 is the only helicase tested that is preferentially inhibited by ciprofloxacin.
Ciprofloxacin preferentially inhibits Mcm2-7 in vitro and in yeast and cell culture
Secondly, to examine the general cellular toxicity of these inhibitors, growth inhibition of micro-cultures by serial dilution of inhibitors was tested in a 96-well format in yeast [22]. Wild-type yeast is resistant to ciprofloxacin ( Figure 4A). However, resistance to many compounds in yeast reflects an inability to accumulate sufficient concentrations of such compounds due to the prevalence of multidrug transporters (reviewed in [26]).
To circumvent this potential problem, we used a yeast mutant (Δerg6) [27] previously shown to non-specifically decrease drug resistance. As anticipated, this strain had demonstrable growth sensitivity to both ofloxacin and ciprofloxacin ( Figure 4A).
Using the Δerg6 strain, the remaining compounds were tested for growth inhibition over a range of concentrations (Supplementary Figure S4 available at http://www.bioscirep.org/bsr/033/bsr033e072add.htm and Table 1). Several compounds inhibited growth at lower concentrations than they inhibited in vitro helicase activity (388612, 268973 and 924384), suggesting that proteins other than Mcm2-7 are more sensitive to inhibition. These data are consistent with their poor helicase selectivity as demonstrated above. In contrast, several compounds were less efficient at inhibiting yeast growth than helicase activity (125248 and 314850). However, two inhibitors (ciprofloxacin and, to a lesser extent, 271327) have growth-inhibition IC50 curves that closely match the IC50 curves for Mcm2-7 helicase activity (Figure 4B, Table 1), consistent with the possibility that the primary cellular target is Mcm2-7.
Inhibitor cytotoxicity was next examined in a non-tumour human cell line (RPE-TERT; Supplementary Figure S4). In general, these cells were demonstrably more sensitive to the tested inhibitors than yeast. RPE-TERT cells were ∼10-fold more sensitive to 125248 and 924384 (IC50s of ∼10 μM) than to 271327 and 314850 (IC50s of ∼500-700 μM). The extreme sensitivity of human cells to both 125248 and 924384 suggests that Mcm2-7 is not a major cellular target. In contrast, ciprofloxacin kills human cells and inhibits yeast growth at roughly similar concentrations (i.e., human cells are only ∼2.5-fold more sensitive than yeast).
DISCUSSION
We provide evidence that ciprofloxacin (and to a lesser extent compound 271327) inhibits the activity of the budding yeast Mcm2-7 helicase both biochemically and in cell culture. Although our experiments largely focus on yeast, we also demonstrated that ciprofloxacin inhibits the viability of human cells at roughly similar concentrations. As fluoroquinolones have been extensively used in human medicine and their pharmacological properties are established [24], the fluoroquinolone scaffold might well serve as a useful platform in the development of Mcm2-7 inhibitors with enhanced therapeutic potential. Although inhibition of Mcm2-7 occurs at ciprofloxacin concentrations higher than its normal therapeutic range (also see below), our results suggest that some of the side effects seen with this and other fluoroquinolones may be due to inhibition of DNA replication.
Relationship to prior studies
Fluoroquinolones serve as potent antibiotics due to their strong inhibition of the prokaryotic DNA gyrase. Although eukaryotes are relatively resistant to ciprofloxacin at normal therapeutic levels, cytotoxicity is noted at high drug concentrations (reviewed in [24]). The eukaryotic Topo II enzyme is a target for fluoroquinolones such as ciprofloxacin, as the drug inhibits Topo II in vitro [28], and Topo II mutants with increased in vitro fluoroquinolone resistance have been isolated [29]. Moreover, cells exposed to cytotoxic levels of fluoroquinolones arrest in G2 and demonstrate chromosomal breaks, consistent with the known role of topoisomerase II in mitosis [30]. However, it should be noted that these are also relatively common phenotypes of various known DNA replication mutants (e.g., [31]). Both our in vitro and cell-based studies strongly support Mcm2-7 as a new eukaryotic target for fluoroquinolones. Our finding that the mcm4chaos3 mutant has significantly increased ciprofloxacin resistance provides evidence that at least part of fluoroquinolone cytotoxicity is likely due to defects in DNA replication.
Inhibitory effects of amino acid modifiers
Although chemically reactive amino acid modifying agents are too unstable, non-specific and irreversible to assist in studies of Mcm2-7 in vivo, there is considerable precedence for using modifying reagents in vitro to determine a mode of action in complex systems [4]. For example, DNA replication requires a large number of nucleotide hydrolases (e.g., ORC, Cdc6, Mcm2-7, RFC, primase and DNA polymerases [6]), and knowledge of the inhibitory spectrum of modifiers on individual replication factors will aid future studies that examine functional interactions between these proteins. Because preincubation of TAg with ATP relieved much of the inhibitory effects of these modifiers ( Figure 1B), they most probably affect ATP binding and oligomerization of TAg, which is ATP-dependent. One interesting difference between inhibition of the Mcms and TAg is with the guanidyl modifier PG, which inhibits both Mcm2-7 and Mcm467 without affecting TAg. This property could make PG an experimentally useful reagent in vitro if Mcm2-7 activity needs to be specifically ablated.
Mode of (fluoro)quinolone inhibition
Our results suggest that most of the studied inhibitors likely interfere with the ATPase active sites of the helicases. Although these molecules have only a modest effect on bulk ATP hydrolysis by Mcm2-7 (Figure 3A), helicase inhibition is largely suppressed by increased ATP concentration (Figure 3B). The relatively high observed IC50 concentrations are consistent with this possibility, as the K½ for ATP in helicase activity of the yeast Mcm2-7 is ∼2 mM [19]. However, if (fluoro)quinolones act as inhibitors of ATPase active sites, how can the relatively minor inhibition of ATP hydrolysis be explained?
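The suppression of helicase inhibition by excess ATP is the behavior expected of a competitive inhibitor at an ATPase site. As a purely illustrative numerical sketch (not from this study), the standard Michaelis-Menten competitive-inhibition equation can be evaluated directly; the Vmax and Ki values below are arbitrary assumptions, while Km is set near the ∼2 mM cited above for yeast Mcm2-7.

```python
# Illustrative competitive-inhibition model of an ATPase active site.
# Km is taken from the ~2 mM value cited in the text for yeast Mcm2-7;
# Vmax and Ki are arbitrary assumptions for the sketch.

def rate(atp_mM, inhibitor_mM, vmax=1.0, km_mM=2.0, ki_mM=0.5):
    """Michaelis-Menten rate with a competitive inhibitor:
    v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S])."""
    return vmax * atp_mM / (km_mM * (1.0 + inhibitor_mM / ki_mM) + atp_mM)

# Fraction of control activity remaining at the same inhibitor dose,
# at low vs. high ATP:
low_atp = rate(1.0, 1.0) / rate(1.0, 0.0)     # ~0.43 of control activity
high_atp = rate(10.0, 1.0) / rate(10.0, 0.0)  # 0.75 of control activity
```

As in Figure 3B, the fraction of activity remaining rises with ATP concentration, because substrate and inhibitor compete for the same site.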
For Mcm2-7, bulk ATP hydrolysis correlates poorly with DNA unwinding. There are mutations that cause substantial reductions in ATP hydrolysis but have only minor effects on in vitro DNA unwinding (e.g., mcm3KA [19]), whereas other mutations retain robust steady-state ATP hydrolysis but reduce in vitro DNA binding or unwinding (e.g., mcm6DENQ [19,32]). Only two of the Mcm2-7 ATPase active sites are responsible for most of the observed steady-state ATP hydrolysis (i.e., the Mcm3/7 and 7/4 active sites [18,33]). The remaining active sites, although clearly essential, hydrolyse ATP poorly. These data suggest that occupancy and turnover at these sites correspond predominantly to a regulatory role rather than a direct contribution to helicase function. If the (fluoro)quinolone inhibitors preferentially target the regulatory rather than catalytic sites, only a modest change in ATP hydrolysis might be observed. Alternatively, the inhibitors may function to poison the helicase. By binding to a single active site, the inhibitor might uncouple ATP hydrolysis from DNA unwinding by altering the ability of adjacent active sites to communicate. This model also explains the effect of these inhibitors on TAg, a homohexameric helicase that contains identical ATPase active sites that coordinately unwind DNA during SV40 replication [23]. Finally, the fluoroquinolones could inhibit helicase activity by blocking ssDNA binding; however, this interpretation is difficult to reconcile with our observations that elevated levels of ATP restore Mcm2-7 helicase activity in the presence of most of the examined fluoroquinolones (Figure 3B).
Prospects for tailoring fluoroquinolones as effective helicase inhibitors for Mcm2-7
Helicases are abundant in eukaryotes. For example, in yeast, ∼2 % of open reading frames contain known helicase structural motifs [34]. In addition to Mcm2-7, many human helicases (e.g., the RecQ family members such as the Werner, Bloom and RecQ4 helicases, [35]) are also potential therapeutic targets. Given the paucity of available helicase inhibitors and our observations that different fluoroquinolones differentially inhibit a variety of helicases (Supplementary Figure S3), fluoroquinolones may provide a general and malleable molecular scaffold for the development of efficient helicase inhibitors with tailored specificities.
Further development of fluoroquinolones provides a useful route to develop Mcm2-7-specific inhibitors of therapeutic value, as Mcm overexpression correlates with cancer, and multiple studies indicate that the Mcm2-7 subunits are potential targets [14,36]. Several of the inhibitors that we examined (ciprofloxacin, 271327 and 314850) demonstrate at least partial selectivity for Mcm2-7 over a host of other helicases tested, and ciprofloxacin appears to target Mcm2-7 in yeast. As ciprofloxacin and related fluoroquinolones are common and approved human antibiotics [37], this molecular scaffold has proven pharmaceutical utility. Although our inhibitors only act at concentrations that exceed typical therapeutic use, this situation has precedence. For example, high doses of sodium phenylbutyrate are used in the treatment of malignant tumours, in which plasma concentrations of the compound are well over 1 mM [38]. Given that ciprofloxacin is an off-the-shelf pharmaceutical designed for an entirely different application, our limited screen of ciprofloxacin-related compounds has already identified several chemicals with improved properties, suggesting that additional structural refinement using ciprofloxacin as a starting point will yield molecules with enhanced potency and specificity.
Our discovery of Mcm2-7 inhibitors has utility in other areas. First, they may function as useful research tools both in vitro and in vivo. As each of the six Mcm subunits is individually essential, analysis of the role of the replicative helicase has largely focused on model systems such as S. cerevisiae that have especially well-developed genetic tools. Such inhibitors also have potential utility for biochemical studies, especially in systems (e.g., Xenopus egg extracts [39]) that have highly tractable biochemical advantages but are poorly amenable to genetic manipulation. Secondly, the discovery that fluoroquinolones can inhibit the eukaryotic helicase may explain some of the cytotoxic effects observed with ciprofloxacin and other fluoroquinolones [40]. Our finding that the mcm4chaos3 allele confers resistance to ciprofloxacin supports our hypothesis that the Mcm2-7 complex is a ciprofloxacin target in cells and suggests that it could also be contributing to the deleterious side effects seen with this class of compounds.
MATERIALS AND METHODS
The viability of human cells was assayed using the MTS method [1]. Briefly, 1 × 10^5 cells of the human non-tumour cell line RPE-hTERT were plated into each well of a 96-well plate and grown in DMEM:F12 containing 10% (v/v) FBS in 5% (v/v) CO2 at 37 °C. The next day, the indicated compounds were titrated into media such that the final concentration contained 1% (v/v) DMSO. As a negative control, media were also prepared that contained 1% DMSO but lacked compound. After 48 h, the media were removed and replaced with DMEM lacking phenol red but containing CellTiter 96 AQueous One Solution Cell Proliferation Assay reagent (Promega). After 1 h, the A490 was measured using a BioRad iMark Microplate Reader (Hercules). Final data reflect the average and standard deviation (S.D.) of three replicates at each compound concentration.

1 These authors contributed equally to this work. 2 To whom correspondence should be addressed (email schwacha@pitt.edu).

Figure 1. The inhibitors were preincubated with helicase before ATP addition, and the final helicase concentration in all experiments was 100 nM (hexamer). The values below the gels indicate the percent of DNA unwinding by the indicated helicase normalized to the solvent control (treatment 0).
Figure S2
The identified inhibitors exhibit diverse specificities against different helicases. Representative helicase activity assays in the presence of the indicated inhibitor were tested, quantified and standardized as described in the legend to Figures 2 (A) and (B). All helicases were assayed at 100 nM final concentration (hexamer) with inhibitor preincubation prior to ATP addition.
BBOX1‐AS1 contributes to colorectal cancer progression by sponging hsa‐miR‐361‐3p and targeting SH2B1
Colorectal cancer (CRC) is the third main cause of cancer-relevant deaths worldwide, and its incidence has increased in recent decades. Previous studies have indicated that certain long noncoding RNAs (lncRNAs) have regulatory roles in tumor occurrence and progression. Often, lncRNAs are competitive endogenous RNAs that sponge microRNAs to up-regulate mRNAs. Here, we examined the role of a novel lncRNA gamma-butyrobetaine hydroxylase 1 antisense RNA 1 (BBOX1-AS1) in CRC. We observed that BBOX1-AS1 is overexpressed in CRC cell lines, and BBOX1-AS1 knockdown inhibits cell proliferation, migration and invasion while promoting cell apoptosis. miR-361-3p is present at a low level in CRC and is negatively modified by BBOX1-AS1. Moreover, miR-361-3p was validated to be targeted by BBOX1-AS1. Src homology 2 B adaptor protein 1 (SH2B1) was notably upregulated in CRC cell lines and was identified as a downstream gene of miR-361-3p. In addition, we found that miR-361-3p amplification can suppress the expression of SH2B1. Finally, data from rescue assays suggested that overexpression of SH2B1 counteracted BBOX1-AS1 silencing-mediated inhibition of CRC progression. In conclusion, BBOX1-AS1 promotes CRC progression by sponging hsa-miR-361-3p and up-regulating SH2B1.
Colorectal cancer (CRC) is known as the third main cause of cancer-relevant deaths worldwide, and its occurrence has continued to increase in recent decades [1]. Although great progress has been made in medical strategy, the prognosis of patients with CRC remains largely disappointing because of distant tumor metastasis at advanced stages [2]. Consequently, it is urgent to elucidate the molecular mechanisms underlying CRC progression and to explore potential therapies for patients with CRC.
Long noncoding RNAs (lncRNAs) are a group of RNAs longer than 200 nucleotides with limited protein-coding capacity [3]. They are reported to exert key roles in the regulation of genetic transcription and in the pathogenesis of various tumors [4,5]. For example, lncRNA HOXD-AS1 was confirmed to drive the metastasis of melanoma cells by restraining the expression of RUNX3 [6]. In addition, high expression of HULC resulted in a poor prognosis and facilitated prostate cancer progression via regulation of the epithelial-mesenchymal transition (EMT) process [7]. It is widely recognized that lncRNAs can interact with microRNAs (miRNAs) by functioning as competing endogenous RNAs (ceRNAs) that sequester miRNAs, suppressing their modulatory role on target mRNAs [8]. Several reports have shown that lncRNAs acting as ceRNAs affect tumor initiation and progression, reflecting a new regulatory mechanism at the posttranscriptional level. For instance, lncRNA TUG1 was identified as an oncogene in osteosarcoma and enhanced osteosarcoma cell growth by modulating the expression of miR-212-3p and FOXA1 [9]. lncRNA PVT1-5 was discovered to participate in the regulation of lung cancer cell proliferation by sponging miR-126 and targeting SLC7A5 [10]. In addition, numerous lncRNAs have been validated to serve as ceRNAs in CRC progression. SOX21-AS1 functioned as a sponge for miR-145 to boost CRC tumorigenesis through targeting MYO6 [11]. HIF1A-AS2 exerted a promotive effect on CRC progression and EMT formation by regulating the miR-129-5p/DNMT3A axis [12]. Up-regulated in colorectal cancer liver metastasis (UICLM) enhanced liver metastasis of CRC by functioning as a ceRNA for miRNA-215 to modify ZEB2 expression [13]. gamma-Butyrobetaine hydroxylase 1 antisense RNA 1 (BBOX1-AS1), a novel lncRNA, has not previously been investigated in cancers.
In this study, we investigated the biological role and mechanism of BBOX1-AS1 in CRC. We found that BBOX1-AS1 drives CRC progression by sponging hsa-miR-361-3p and up-regulating Src homology 2 B adaptor protein 1 (SH2B1). This discovery provides a promising theoretical basis for the exploration of CRC therapeutic strategies.
Materials and methods
Cell culture

CRC cell lines (GEO, SW480, HCT116 and LOVO) and the human colorectal mucosal cell line (FHC) were obtained from the Chinese Academy of Sciences (Beijing, China). Dulbecco's modified Eagle's medium (Corning Life Sciences, Tewksbury, MA, USA) supplemented with 10% FBS (Warbison Technology, Beijing, China) and 100 mg·mL−1 penicillin and streptomycin (Invitrogen, Karlsruhe, Germany) was used for culturing these cells in a humidified atmosphere at 37 °C with 5% CO2.
Quantitative real-time PCR
Total RNA was separated from cultured cells with TRIzol reagent (Thermo Fisher Scientific, Waltham, MA, USA) based on the manufacturer's guides. A Reverse Transcription Kit (Takara, Tokyo, Japan) was applied for reverse transcription. Quantitative real-time PCR analysis was conducted with SYBR Green Premix PCR Master Mix (Roche, Mannheim, Germany) on an ABI HT9600 (Applied Biosystems, Foster City, CA, USA). The relative RNA level was calculated using the 2^-ΔΔCt method. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) or U6 served as the internal control.
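For illustration, the 2^-ΔΔCt relative-quantification arithmetic referenced above can be sketched as follows; the Ct values are invented, and GAPDH is taken as the internal control as in the protocol.

```python
# Sketch of the 2^-ΔΔCt relative-quantification method (invented Ct values).

def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    dct_sample = ct_target - ct_ref              # ΔCt in the sample of interest
    dct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt in the calibrator (e.g. FHC cells)
    return 2.0 ** -(dct_sample - dct_control)    # fold change = 2^-ΔΔCt

# A target amplifying 2 cycles earlier (relative to GAPDH) in a CRC line
# than in the FHC calibrator corresponds to a 4-fold up-regulation:
fold = ddct_fold_change(22.0, 18.0, 24.0, 18.0)  # -> 4.0
```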
Western blot
Total protein was extracted from cell lysates supplemented with protease inhibitors. Extracted proteins were separated by SDS/PAGE (Boster Biological Technology, Los Angeles, CA, USA) and then transferred onto poly(vinylidene fluoride) membranes (East Fluorine Chemical Technology, Shanghai, China). After being blocked with skim milk, the membranes were incubated with primary antibodies at 4 °C overnight. The primary antibodies were anti-SH2B1 (ab228828; Abcam, Cambridge, UK) and anti-GAPDH (ab8245; Abcam), with GAPDH serving as the loading control. Protein blots were visualized with an enhanced chemiluminescence system (GE Healthcare, Chicago, IL, USA).
Colony formation assay
First, about 800 transfected cells were seeded into six-well plates. After incubation under 5% CO2 at 37 °C for 2 weeks, the colonies were fixed with paraformaldehyde (Solarbio, Beijing, China) for 10 min and stained with crystal violet (Beyotime, Nantong, Jiangsu, China) for 5 min. Colonies were counted manually.
Transwell invasion assay
Transwell assays were used to test the invasion ability of GEO and HCT116 cells. Cells were seeded in the upper chamber, whose membrane was precoated with Matrigel matrix gel (BD Biosciences, San Jose, CA, USA). The upper chamber contained serum-free medium, and the lower chamber medium was supplemented with 10% FBS. After the scheduled time, invaded cells were fixed with methanol (Ceran Technology, Chengdu, China) and stained with crystal violet. Five random fields were counted, and invaded cells were observed under a microscope.
Cell Counting Kit-8 assay
GEO or HCT116 cells (1 × 10^3 cells/well) were seeded in six-well plates in complete medium and cultured for 0, 24, 48, 72 and 96 h. Then Cell Counting Kit-8 (CCK-8; Dojindo Molecular Technologies, Tokyo, Japan) solution was added and incubated for an additional 2 h. Relative cell viability was determined using an ELx800 micro-immuno analyzer (Bio-Tek, Winooski, VT, USA).
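The relative-viability readout implied by this assay is a simple absorbance normalization against a control well, with a blank correction. The sketch below is a hypothetical illustration; the absorbance values and the blank reading are assumptions, not data from this study.

```python
# Hypothetical normalization step for a CCK-8 (or MTS) viability readout:
# relative viability = (A_sample - A_blank) / (A_control - A_blank).
# All absorbance values below are invented for illustration.

def relative_viability(a_sample, a_control, a_blank=0.05):
    return (a_sample - a_blank) / (a_control - a_blank)

# e.g. an sh-BBOX1-AS1 well vs. an sh-NC control well at 72 h:
v = relative_viability(0.65, 1.25)  # -> 0.5 (50% of control viability)
```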
TUNEL staining assay
TUNEL assays were conducted to analyze the level of cell apoptosis in GEO and HCT116 cells. After TUNEL staining, the cells were counterstained with 4′,6-diamidino-2-phenylindole (DAPI; Sigma-Aldrich, St. Louis, MO, USA). Relative fluorescence intensity was then observed using a laser scanning confocal microscope (Olympus, Tokyo, Japan).
Flow cytometry analysis
An Annexin V-FITC/PI Apoptosis kit (Invitrogen) was used to measure the apoptosis of GEO and HCT116 cells. In brief, the cells were washed with PBS (Solarbio) and resuspended, then fixed with ice-cold 70% ethanol. Finally, the apoptosis rate was evaluated on a FACSCalibur flow cytometer (BD Biosciences).
Cell scratch test
GEO or HCT116 cells were centrifuged, and the cell suspension was cultured in six-well plates for 24 h. After the degree of cell fusion reached 80-90%, a pipette tip was used to draw scratches on each plate with the same force. The plates were then washed thrice with PBS (Thermo Fisher Scientific). Images were photographed at 0 and 24 h with MOTIC IMAGES ADVANCED 3.2 software (Motic Asia, Hong Kong, China).
Subcellular fractionation assay
In line with the manufacturer's protocol, a PARIS Kit (Invitrogen) was used to isolate nuclear and cytoplasmic fractions. Relative expression of BBOX1-AS1, GAPDH (cytoplasmic control) and U6 (nuclear control) in the cytoplasm or nucleus of GEO and HCT116 cells was assessed by quantitative real-time PCR.
RNA immunoprecipitation assay
An Imprint RNA immunoprecipitation (RIP) kit (Millipore, Bedford, MA, USA) was used for RIP assays according to the manufacturer's specifications. In brief, GEO or HCT116 cells lysed in RIP lysis buffer were incubated with anti-Ago2 or anti-IgG antibody preattached to magnetic beads. Quantitative real-time PCR was applied for detection of the immunoprecipitated RNA.
RNA pull-down assay

RNA pull-down assays were performed to confirm the specific binding between BBOX1-AS1 and hsa-miR-361-3p. The acquired cell lysates were treated with Bio-miR-361-3p-WT/Mut or Bio-miR-NC together with streptavidin-labeled magnetic beads. The pulled-down complex was analyzed by quantitative real-time PCR.
Statistical analysis

GRAPHPAD PRISM 7.0 software (La Jolla, CA, USA) was applied for statistical analysis, and experimental data from at least three independent experiments are shown as mean ± standard deviation (SD). Student's t-test or one-way/two-way ANOVA was used for comparisons between groups. Statistical significance was defined as P < 0.05.
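As a sketch of the summary statistics described above, the mean ± SD over three replicates and an equal-variance (Student's) two-sample t statistic can be computed as follows. The two example groups are invented values, not data from this study.

```python
# Mean ± SD over replicates, and a pooled-variance Student's t statistic.
# The two groups below are invented example measurements (three replicates each).
import math
import statistics

def mean_sd(xs):
    return statistics.mean(xs), statistics.stdev(xs)

def students_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance Student's t)."""
    na, nb = len(a), len(b)
    ma, mb = statistics.mean(a), statistics.mean(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

sh_nc = [1.00, 0.95, 1.05]       # control group, normalized readout
sh_bbox1 = [0.52, 0.48, 0.50]    # knockdown group, normalized readout
m, sd = mean_sd(sh_bbox1)        # 0.50 ± 0.02
t = students_t(sh_nc, sh_bbox1)  # large t -> clear group difference
```

The t statistic would then be compared against the t distribution with n_a + n_b - 2 degrees of freedom to obtain P, which is what GraphPad Prism does internally.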
BBOX1-AS1 is overexpressed in CRC cell lines and facilitates CRC cell proliferation, migration and invasion while inhibiting apoptosis
First, quantitative real-time PCR analysis was used to detect BBOX1-AS1 expression in CRC cell lines (GEO, SW480, HCT116, LOVO), with the colorectal mucosal cell line FHC used as a reference. A high level of BBOX1-AS1 was found in the CRC cell lines (Fig. 1A, P < 0.05, P < 0.01). To investigate the function of BBOX1-AS1 in CRC progression, we transfected GEO and HCT116 cells with sh-BBOX1-AS1 to knock down BBOX1-AS1 expression (Fig. 1B, P < 0.01); sh-BBOX1-AS1#1 was chosen for the subsequent assays because of its better interfering efficiency. Results from CCK-8 and colony formation assays disclosed that silencing BBOX1-AS1 significantly restrained proliferation in GEO and HCT116 cells (Fig. 1C,D, P < 0.01). Meanwhile, apoptosis of GEO and HCT116 cells was evidently hastened by transfection of sh-BBOX1-AS1, as shown by TUNEL assay and flow cytometry analysis (Fig. 1E,F, P < 0.01). Later, a wound healing assay delineated that BBOX1-AS1 deficiency dramatically suppressed migration in GEO and HCT116 cells (Fig. 1G, P < 0.01). Lastly, a transwell assay was conducted to assess cell invasion; as expected, sh-BBOX1-AS1 transfection effectively inhibited invasion in GEO and HCT116 cells (Fig. 1H, P < 0.01). Taken together, BBOX1-AS1 was overexpressed in CRC cell lines and facilitated CRC cell proliferation, migration and invasion while inhibiting apoptosis.
Discussion
Previous studies have confirmed the pivotal status of lncRNAs in the process of tumor lesions [14-16]. An increasing number of lncRNAs have been found to be involved in human tumor progression, including in CRC [17,18]. For example, increased expression of GHET1 was detected in CRC samples, and CRC cell proliferation and invasion were inhibited upon GHET1 knockdown [18]. In addition, down-regulation of PCAT1 induced cell apoptosis, arrested the cell cycle, repressed proliferation, and hampered cyclin and c-Myc expression in CRC [19]. In this study, the biological function and mechanism of BBOX1-AS1 in CRC were investigated, and the results implied that BBOX1-AS1 displayed a considerably up-regulated expression level in CRC cells. Moreover, with transfection of sh-BBOX1-AS1, CRC cell proliferation, migration and invasion were dramatically restrained, whereas apoptosis was observably promoted. These discoveries verified the oncogenic property of BBOX1-AS1 in CRC.
Based on the earlier findings, the mechanism mediated by this lncRNA needed to be further researched. miRNAs, 22-24 nucleotides in length, are known as critical regulators in biological processes, being sponged by lncRNAs and in turn targeting mRNAs [20]. For instance, lncRNA XIST acted as an oncogene to boost CRC cell proliferation and reduce cell apoptosis through sponging miR-132-3p, with MAPK1 functioning as a target gene of miR-132-3p [21]. lncRNA MIAT, a sponge for miR-29a-3p, regulated the biological behaviors of gastric cancer cells by up-regulating HDAC4 expression [22]. lncRNA CCAT1 activated cisplatin resistance via a mechanism involving the miR-130a-3p/SOX4 axis in non-small cell lung cancer [23]. On further exploration of the molecular mechanism in this study, we found that BBOX1-AS1 was mainly distributed in the cytoplasm of CRC cells and functioned as a sponge for miR-361-3p. Meanwhile, miR-361-3p was expressed at a low level in CRC cells and was negatively modified by BBOX1-AS1. These data suggested that BBOX1-AS1 exerts its tumor-facilitating role by sponging miR-361-3p. SH2B1 is commonly recognized as an oncogene in multiple cancers. For example, SH2B1 was identified as a risk factor in gastric cancer and stimulated its progression [24]. SH2B1 enhanced the EMT process in lung adenocarcinoma through the IRS1/β-catenin axis [25]. SH2B1 promoted cell proliferation in non-small cell lung cancer via the phosphoinositide 3-kinase/Akt/mechanistic target of rapamycin signaling cascade [26]. In this study, SH2B1 was found to be highly expressed in CRC cell lines and targeted by miR-361-3p. Furthermore, miR-361-3p negatively regulated the expression of SH2B1.
In conclusion, this research revealed that BBOX1-AS1 promoted CRC progression by sponging miR-361-3p and up-regulating SH2B1, which suggested a BBOX1-AS1/miR-361-3p/SH2B1 axis in CRC and provided a promising insight for CRC treatment.
Using Songs to Reduce Language Anxiety in Speaking English in ESL Classroom
English proficiency is very important. The Ministry of Education has been emphasizing the importance of the English language and has introduced various methods to help young learners, who love activities that are fun and involve action. Among the four skills in learning English, speaking is one of the most crucial. Anxiety about speaking English is the major reason pupils are reluctant to participate in oral activities: they are scared of making mistakes and eventually lose the confidence to speak in English. This anxiety prevents them from excelling in English and leaves them afraid of using the language to communicate, and the fear of making mistakes demotivates them from using it at all. Young learners with low motivation are usually reluctant to speak in the target language. The growing need for English has increased the language's role in society today. Recognizing this need, the education system in Malaysia is built to equip young Malaysians to compete in the 21st century. In Malaysia, English is taught as a second language, and students receive eleven years of formal education in both primary and secondary school. Undoubtedly, identifying the most effective method for teaching English in the ESL classroom is a major issue in the Malaysian education system. Songs help to motivate learners and boost their confidence. Songs have been part of every level of society and culture; they can be fun and engaging, support language focus and increase learners' attention span. Young learners are usually attracted to the use of songs in lessons since they respond strongly to activities that involve total physical response.
Introduction
Malaysia is a unique country because of the diversity of its cultures, religions and ethnicities. The linguistic situation in Malaysia is very much a result of its multiracial status. The main ethnic groups in Malaysia are Malay, Chinese, Indian and the Bumiputera of Sabah and Sarawak. Different dialects are spoken in every state, which contributes to the diverse linguistic situation; this exquisite language phenomenon arises from the many communities in the country. Most Malaysians are capable of speaking Malay and English, and these two languages are used widely and regularly in daily conversation.
Unfortunately, English is not a common second language for most Malaysians: their first language is usually their mother tongue, and their second language is the national language, Bahasa Malaysia. Many first-time learners face difficulties in learning a second language because they are not confident in using it, and learning English causes them anxiety. Anxiety is a feeling that makes someone worry excessively and decreases their motivation. Second or foreign language speaking anxiety has attracted considerable attention in recent years, and it is very important to identify its sources among primary school pupils. Anxiety is one of the causes of difficulty in language learning, as it reduces learners' motivation to use the language; this lack of motivation in turn leads to a lack of proficiency in speaking. Pupils become too shy to acquire the knowledge or language skill because of their high anxiety and low motivation. In light of this, many studies have been carried out to identify the factors that contribute to the feeling of anxiety among pupils learning English. Unfortunately, most studies focus on learners at the secondary and tertiary levels, and fewer researchers have examined the primary school level.
The Research Problem
This paper elaborates on the factors that affect the motivation of second language learners in primary school to speak English and on the methods used to assist them. It discusses the anxiety of ESL (English as a Second Language) learners about speaking in English. In Malaysia, English is one of the crucial subjects: students have to acquire all four skills of listening, speaking, reading and writing, and speaking is among the most important. Speaking skills are especially crucial because they also affect students' other skills. Students need to be able to interact in English to keep up with the rapid development of professions in the country, which requires them to communicate using English.
Speaking in English is very important for communicative ability and for interacting with other English users. Unfortunately, some students develop anxiety and lose motivation whenever they are required to speak in English. Language anxiety decreases their motivation, and their interest in studying the language declines as a result. Therefore, it is appropriate to investigate and examine students' perceptions of anxiety in learning English.
Purpose of the Study
In order to understand language learning in the Malaysian school context, we should explore the factors that shape the way English education is conducted in Malaysia. Students' motivation and interest in the language class can determine their level of anxiety, as well as promote or hinder their language learning performance. This paper tries to identify the factors behind the anxiety and motivation of primary school students towards learning the English language. Language anxiety is the major factor affecting students' speaking skill, resulting in their weakness and lack of motivation to use English as a medium of communication in both formal and informal situations. The serious effects of this problem on students' learning led to the need to conduct this research. This research also specifically examines the use of songs to help reduce pupils' anxiety in speaking the English language.
Research Questions
In this study the following questions guided my research: a) What are the factors causing students anxiety to speak in English? b) Why students feel anxiety to speak in English? c) How songs can facilitate in reducing language anxiety among students? d) Why song can be used as a tool to reduce language anxiety among students?
Literature Review
Various researchers have examined the use of songs to motivate pupils to speak English. Malaysia is one of many countries that have adopted English as a second language. A vast amount of research has also been conducted to identify the factors contributing to language anxiety among learners. In Malaysian primary schools, many pupils are simply demotivated to speak English.
They are scared of making mistakes and feel anxiety towards using the language. According to Walker (2006), the use of songs in a classroom will increase the use of English; it is not only a powerful technique for teaching English but also motivates pupils to love the language. Using songs may assist in teaching young learners, so songs and games are effective tools for teaching English to young learners, especially when they are unaware that they are learning a language. The use of songs creates an interesting and enjoyable environment for learning and lessens the burden of anxiety.
The English language is known as a lingua franca, a medium of communication among foreign countries and a common means for speakers of different first languages. English has removed language barriers between countries, supporting the development of many of them. Unfortunately, in Malaysia most English as a Second Language learners feel anxiety about speaking English.
Language anxiety is a feeling of being afraid to make mistakes and of being judged by others. Choy and Troudi (2006) stated in their research that half of their respondents reported feeling afraid and reluctant to speak English. The fear arises from their anxiety about making mistakes while using the spoken language, and the respondents also felt very distressed about being corrected by others. Suleimenova (2013) noted that concern over communication competence among second or foreign language learners in recent years may trigger a high level of speaking anxiety. In his study, foreign language learners stated that they felt stressed, nervous and anxious while learning to speak the target language and were said to have a 'mental block' against language learning. This shows that language anxiety affects the motivation of English as a Second Language learners to communicate in the language: the stress makes them unable to speak clearly, and their word choices become limited. According to Pourhosein Gilakjani, Leong, and Saburi (2012), desire, or the extent to which a person strives for a goal, determines success; if people are not motivated, they will not put effort into learning the language. The same applies here, where ESL learners are not motivated to speak in English. In general, people refer to this psychological factor as motivation, a motive force that arouses, incites, or stimulates action. Motivation is an important factor in determining learners' readiness to communicate, since communication requires the communicator to be ready and confident.
Other researchers agree that motivation affects learners' desire to learn the language. Al-Otaibi (2004) stated that a motivated learner spends much time pursuing their aims in learning a second or foreign language, and that a motivated learner learns the language more effectively than an unmotivated one. Al-Hazemi (2000) supported this by noting that learners with a strong desire to learn a language can reach a high level of competence in the target language. Lucas (2010) found that learners are intrinsically motivated to learn speaking and reading skills, and are also intrinsically motivated through knowledge and achievement. Learners with higher motivation will be more willing to speak in English. According to Murphey (1990), songs are particularly beneficial to language acquisition because they engage learners more effectively. Songs are repetitive and their melodies catchy, so learners can draw on the lyrics and apply them in real-life situations. Some lyrics also increase motivation because learners can relate them to their own lives.
Songs in English Language Learning
In most Asian countries, English is not the citizens' first language; most English speakers learn it as a second language in school. Many are first exposed to English through songs and television broadcasts. Otillie (2010) stated that these people's first exposure to the English language is probably through listening to popular songs. Lynch (2005) argued that English language teachers should use songs as a component of their teaching. Songs are easily obtainable, fun to listen to and repetitive. Using songs in language teaching can also add variety and differentiation, since songs can serve as an instrument for introducing diverse new vocabulary. Moreover, songs can expose learners to cultural aspects and familiarize them with different English accents. Lynch (2005) also noted that songs can be selected to suit the varied needs and interests of students. The lyrics themselves can be related to real-life situations, struggles and issues happening around the world. In general, songs can sustain pleasurable speaking, vocabulary and language practice while learners enjoy the rhythm and melody. Orlova (2003) claims that students are put at ease when songs are used in the classroom. Drawing on ten years of experience with songs in language teaching, she reports that her students are more interested in lessons and that their desire to learn the language increases. Music offers a versatile way to look at language, so students do not feel the burden of learning it, and it can be a very effective tool to reinforce and improve speaking, listening comprehension, vocabulary and language in general. In 2010, Beare wrote an article on using music in the ESL (English as a Second Language) classroom that supported Orlova's statements.
According to him, using music at the beginning of a lesson is a great way to introduce new vocabulary to students. It can also steer their thinking in the right direction and give them the gist of what the lesson is about.
Lo and Li (1998) also suggested that songs can provide a break from the normal classroom routine and that learning English through songs develops a non-threatening classroom environment. When students are comfortable with the atmosphere in the classroom, all four language skills can be enhanced.
Language Anxiety
Language anxiety is one of the main factors to consider in learning a language. It can greatly affect the learning and teaching of a second language, because learners are required to speak the language. Brown (2001) defined language anxiety as the feelings of uneasiness, worry, nervousness, self-doubt, frustration and apprehension that non-native speakers experience when learning or using a second language. Such speakers feel anxious, even panicked, when they need to use English. Each learner is believed to respond differently to language anxiety, which may affect a student positively or negatively. Students who are affected positively tend to find ways to overcome their anxiety, whereas those affected negatively tend to avoid anxiety-provoking situations in learning the language, which leads to poor performance and refusal to communicate in English.
Ellis (2008) lists indications of language anxiety such as lack of confidence, unwillingness to speak in the target language and sometimes even insomnia when learners need to communicate in public; the most common case is having to give an oral presentation. Woodrow (2006) distinguishes two types of anxiety: in-class anxiety and out-of-class anxiety. Extrinsic factors such as the social and cultural surroundings can also kindle anxiety. For example, non-native speakers are usually self-conscious and insecure about other people's opinions whenever they make mistakes.
Language anxiety affects pupils' motivation to speak the language. Their fear of making mistakes and of being judged by others causes them to stop communicating in English. Sometimes no one is actually judging them, but their lack of confidence breeds insecurity.
Using songs is one significant tool that can reduce learners' anxiety and increase their motivation. Songs are fun and catchy; many have meaningful lyrics, and because they are repetitive, listeners are usually able to mouth the lyrics and sometimes sing along. Some songs are themselves about motivation, and listeners can relate to the lyrics. As learners' motivation increases and their anxiety decreases, speaking the language stops being a major concern. Speaking English is largely a matter of self-assurance, since the purpose of speaking is simply to make the message, and its main content, comprehensible to others.
According to Setia et al. (2012), songs are effective in creating positive attitudes and motivation among primary school ESL learners in Malaysia. The results indicated that "the use of song not only helps the understanding, it also stimulates and increases the students' interest to learn, enjoy and engage in the learning process" (p. 270). When learners are interested in learning, their motivation rises, which in turn lowers their anxiety and their reluctance to use the language. The results also suggest that songs have a positive effect on learners' self-confidence and academic success by providing a more relaxed and favourable learning environment. Peacock (1997) also mentions that songs can benefit language teaching by improving learners' motivation, which is fundamental to a successful learning process. Learners' motivation can be increased significantly through the use of authentic materials such as songs of various genres.
The use of songs boosts learners' motivation to speak in the target language, and when motivation is higher, learners are no longer anxious about speaking English. Because song lyrics are usually repetitive, learners also pick up new vocabulary that they can use when speaking the language in real-life situations and in their language learning class.
Methodology

Research Design
This research was conducted in a school situated in a rural area of Mukah. It is a mixed-abilities school, and the media of instruction are Bahasa Malaysia and English. Most of the pupils' first language is Iban; they mostly communicate in Iban with their friends and use the target language only in class. The main purpose of this research is to investigate the pupils' level of anxiety when using English as the medium of instruction and the factors behind their reluctance to communicate in English.
This is a qualitative study. Qualitative research was chosen because this study aims to identify the factors that cause pupils' anxiety in speaking English and to find out how effective songs are in reducing that anxiety. Ary et al. (2010) state that the aim of qualitative research is to understand a phenomenon in general and in depth through data analysis. In qualitative research, the researcher can ask participants specific questions to explore the phenomenon further and gather more information. One variety of qualitative research is the basic interpretative study, in which researchers typically use interviews and observation to collect data and understand participants' experiences and points of view. The main aim of this research is to understand, both broadly and in depth, the phenomenon of language anxiety among pupils and their inability and anxiety about using English in and outside of class.
Research Sample
In this research, 30 pupils were chosen as the sample. They are Year 4 pupils at a government school in a rural area of Mukah, Sarawak, a location with limited facilities such as internet connection and electricity. The pupils are of mixed abilities; their native language is mostly Iban, while Malay is their medium of instruction in school.
Data Collection (Instruments)
Three instruments were utilized for data collection in this qualitative research: journal writing, a questionnaire and a semi-structured interview. The questionnaire data were analyzed using simple frequency counts, and the interview answers were arranged and analyzed to identify the pupils' language anxiety in using English as a medium of instruction and to determine whether the use of songs helps reduce their anxiety about speaking English in an ESL classroom.
Questionnaire
The first instrument used was the questionnaire. Its purpose in this research was to find out when the pupils feel most anxious about using the target language and what causes their anxiety. The researcher also observed the participants before and after the use of songs to see the difference in their attitude towards speaking the target language.
Questionnaires help researchers collect extensive data in a very limited time and can be used to research almost any aspect of teaching and learning. Key (1997) stated that a questionnaire is a means of eliciting the feelings, beliefs, experiences, perceptions or attitudes of a sample of individuals.
The questionnaire was designed to find out the main causes of the pupils' anxiety about communicating in English and whether the use of songs helps reduce that anxiety.
The questionnaire items were developed to identify the reasons and factors that cause anxiety in speaking English. The questionnaire was administered to all participants before the lesson to find out the causes of their language anxiety. Each item was first explained to the pupils to ensure they understood the questionnaire. The time allocated for answering was ample, and the items were simple and easy to comprehend.
Interview
The interview was semi-structured. The interview is a crucial data collection tool in qualitative research and a good way of approaching people's perceptions, meanings, definitions and constructions of reality (Punch, 2009). It is one of the most popular instruments in qualitative research: a process in which the researcher asks the participant open-ended questions and records the answers.
According to McNamara (1999), interviews are particularly useful for probing deeply into learners' experiences. They help pursue a topic in depth and in detail, and they are useful as a follow-up to a questionnaire, allowing the researcher to investigate the reasoning behind participants' responses.
Journal Writing
Journal writing is an important tool for reflecting on and improving one's learning process, and it is another instrument implemented in this research.
The participants write their reflections and points of view on how they feel before and after the lessons using songs. They also reflect on the reasons why they feel anxious about communicating in English.
Wallace (1999) stated that journals are shared accounts of a person's actions, thoughts and feelings, written by the person himself or herself, usually on a daily basis. Journal writing can therefore help the researcher understand more about the participants' feelings and thoughts. The journal entries are useful for getting to know in depth how the participants feel and why anxiety occurs whenever they need to speak in the target language.
Data Collection

Interview
In this research, two interview sessions were conducted. The first interview was conducted to find out the participants' level of anxiety and the main cause of that anxiety. The second was conducted after the use of songs, to observe how effective songs are at reducing language anxiety about communicating in the target language. Together, the interviews were meant to reveal differences in the pupils' reactions and in their motivation to speak the target language after the use of songs.
Examples of the first interview questions are: 1. How do you feel when you have to communicate in English? 2. Why are you afraid to speak in English? 3. Is English important to you? Why? 4. What makes you afraid to speak English with others? The responses were then analysed to find out why the participants are reluctant and anxious to speak in English. The following are some of the responses to the questions. They show that the pupils are aware of the importance of using English in everyday life: they know that English is a crucial subject they need to master, and that using it will enable them to communicate with foreigners and socialise with others.
Question: What makes you afraid to speak English with others? Pupil A: I am afraid that my friend will laugh at me. From these responses, we can see that most of the pupils are willing to communicate in English but are afraid of making mistakes and becoming the laughing stock of their friends. Some pupils also feel anxious whenever they have to speak in front of their friends, especially in English, because they are not used to using English in everyday life; some become so afraid that they are unable to speak at all. Most of the interview responses were similar: the pupils become demotivated when they make mistakes, for example by mispronouncing certain words. The questionnaire data were then collected and the frequencies counted; the numbers represent the total participants who chose each item.
From the results, we can see that the most frequently chosen items are items 3 and 4. Most of the pupils are scared to speak in English and become very nervous whenever they have to speak in public. They are also very shy, because their friends usually laugh at them, which causes their motivation to decrease. Peer pressure thus leaves the pupils demotivated and reluctant to speak in English.
The second most frequently chosen item is item 2: the pupils are not confident speaking in English. They know the language and are able to communicate in it, but their lack of confidence gives rise to language anxiety.
The third most frequently chosen items are items 1, 9 and 10. Pupils were scared to speak in English even among their friends, probably because of their mindset; they are reluctant to communicate in the language and to apply it in everyday life.
The least chosen item in the questionnaire is item 8. This shows that most of the pupils do not dislike English. They like learning English; they simply lack the confidence to speak it.
Discussions
The widespread use of English demands that its speakers have good communication skills. It is a common belief among learners that speaking is harder, and perhaps more important, than the other skills of learning English. Students tend to be more anxious when they have to communicate with native speakers or speak without preparation; their anxiety and insecurities leave them unable to utter their words in English.
Songs play an important role in early language development. Most people believe that to learn a language we must listen to and speak it, and young learners are usually more attracted to lessons that let them move actively and sing along.
Songs are repetitive, and by repeating the lyrics pupils can learn new vocabulary and new language structures. It is important to choose songs that children can enjoy; otherwise they may feel forced to learn. Once they enjoy themselves, they will not feel demotivated. Indeed, the interview responses show that the pupils enjoyed themselves, found the lessons amusing and felt they were learning something entertaining. For example, after learning how to pronounce new vocabulary, they became more confident about using it when speaking in English.
Their confidence during the singing sessions also lowered their anxiety. The pupils participated more actively and were more responsive when discussing and sharing opinions with their friends; even passive students gradually began to respond actively. Cheng (1984) identified games, sketches and songs as three key language teaching techniques.
Conclusion
In conclusion, the use of songs in the classroom not only teaches students the language but also exposes them to the real-life issues found in the lyrics of a variety of songs. Different genres of songs also prolong pupils' attention span and make them more attentive in English lessons. Songs are easily memorized thanks to their fun melodies and repetitive lyrics. Students tend to learn better with songs, which reduces their anxiety about speaking English because they become more confident in the language. The use of songs therefore increases pupils' motivation to speak English and reduces their fear of making mistakes while doing so.
Language anxiety is reduced when songs are used, because songs effectively increase students' motivation and interest in learning languages. Students become more willing to learn and to participate in activities that encourage them to communicate. This study concludes that the participants became more active and showed great interest in language learning after songs were introduced; low-proficiency students also showed interest and increased motivation, indicating that they were learning in a more fun and stress-free way. Further studies should examine the use of songs in teaching other skills such as writing, listening and reading.
"year": 2020,
"sha1": "d594827ed547437ead6446fa295267d3d0f8640c",
"oa_license": "CCBY",
"oa_url": "https://hrmars.com/papers_submitted/6917/Using_Songs_to_Reduce_Language_Anxiety_in_Speaking_English_in_ESL_Classroom.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d594827ed547437ead6446fa295267d3d0f8640c",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Biomarkers of Myocardial Fibrosis are Associated with Diabetes but not with Coronary Microvascular Dysfunction in Women with Angina and No Obstructive Coronary Artery Disease
Background Coronary microvascular dysfunction (CMD) is highly prevalent in women with no obstructive coronary artery disease and possibly related to myocardial fibrosis caused by excessive extracellular matrix (ECM) remodeling. ECM turnover can be measured in blood, indicating fibrotic activity. We hypothesized that women with DM, angina and no obstructive coronary artery disease have increased ECM turnover and that this is associated with CMD. Methods We included 344 women with angina pectoris and no obstructive coronary artery disease (187 with DM, predominantly type II) and 76 asymptomatic women without DM as controls. Biomarkers reflecting formation of type IV and VI collagen (PRO-C4 and PRO-C6) and degradation of type IV, V and VI collagen (C4M, C5M, C6M), mimecan (MIM) and titin (TIM) were measured in all participants. CMD was defined as coronary flow velocity reserve (CFVR) <2.0 assessed by transthoracic Doppler echocardiography.
Background
Coronary microvascular dysfunction (CMD) is highly prevalent in women with angina pectoris and no obstructive coronary artery disease. CMD is a strong prognostic marker of cardiovascular morbidity and mortality, 1-3 is associated with cardiovascular risk factors, particularly diabetes mellitus (DM) type II and hypertension, 4,5 and is frequent in heart failure. 6 In CMD, transient ischemia occurs because the microvasculature cannot dilate in response to increased oxygen demand. 7 This condition may cause chronic low-grade ischemia which promotes cardiac extracellular matrix (ECM) remodeling. 8,9 The cardiac ECM consists primarily of collagens and proteoglycans preserving ventricular function and structure. 10,11 Imbalanced ECM remodeling induces accumulation of collagens, expansion of the extracellular volume and myocardial fibrosis. 12,13 Thus, CMD might be a precursor of myocardial fibrosis.
Myocardial fibrosis is associated with impaired ventricular function, remodeling and stiffness of the myocardium, 8,14 and is a characteristic of diabetic cardiomyopathy, clinically presenting as heart failure with preserved ejection fraction (HFpEF).
Collagen and proteoglycan formation and degradation fragments can be quantified in blood and may be indicative of early fibrotic disease activity. 15,16 In a small study we have previously found that women with angina pectoris and no obstructive coronary artery disease have an imbalanced collagen turnover compared with asymptomatic controls. In this study, we investigated whether ECM turnover is associated with DM and CMD by examining seven biomarkers that have recently shown promise as markers of fibrotic activity.
Study population
The study population was derived from the iPOWER (ImProve diagnOsis and treatment of Women with angina pEctoris and micRovessel disease) cohort conducted from May 2012 to December 2017. 17 Inclusion and exclusion criteria for iPOWER have been published previously. 4 Briefly, inclusion criteria for the iPOWER study were angina pectoris and no obstructive coronary artery disease (defined as less than 50% stenosis assessed by coronary angiography), left ventricular ejection fraction (LVEF) > 45% and no significant valvopathy. From 1830 patients included in iPOWER we selected all women with DM (n = 187) and a random sample of women without DM (n = 157).
A control group of 76 asymptomatic women without DM or previous cardiovascular disease were recruited from the Copenhagen City Heart Study 18 between June and September 2015. Blood biomarkers of collagen and proteoglycan turnover were successfully measured in all 420 participants.
Study assessments
We obtained demographic data from interviews. Clinical data included weight, abdominal circumference, blood pressure and heart rate measured at rest. Blood samples were analyzed for cholesterol levels (total, low-density lipoprotein [LDL] and high-density lipoprotein [HDL] cholesterol), triglycerides and HbA1c.
Serum was collected for biomarker analysis and immediately stored at -80 ºC until analysis according to pre-defined standard operating procedures.
Coronary microvascular function
Coronary microvascular function was assessed non-invasively by transthoracic Doppler stress echocardiography (TTDE) measuring the coronary flow velocity reserve (CFVR) of the left anterior descending artery. CFVR is the ratio of the peak diastolic flow velocity at hyperemia to that at rest. We used high-dose dipyridamole (0.84 mg/kg) over 6 minutes to induce maximal hyperemia (29). All examinations were performed by the same 3 experienced echocardiographers in an unchanged setting using a GE Healthcare Vivid E9 cardiovascular ultrasound system (GE Healthcare, Horten, Norway) with a 2.7-8 MHz transducer (GE Vivid 6S probe). A standard echocardiographic examination was conducted before the CFVR examination. Measurements of myocardial function at hyperemia were obtained immediately after termination of the dipyridamole infusion. After the examination, intravenous theophylline (maximum dose 220 mg) was administered. A detailed description of the standard echocardiography methods is given in the Online Appendix.
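The ratio and cut-off described above are simple to compute; the sketch below is illustrative only (function names and velocity values are hypothetical, not study data), using the abstract's definition of CMD as CFVR below 2.0.

```python
def cfvr(peak_flow_hyperemia, peak_flow_rest):
    """Coronary flow velocity reserve: ratio of the peak diastolic flow
    velocity at hyperemia to the resting value."""
    return peak_flow_hyperemia / peak_flow_rest

def has_cmd(cfvr_value, cutoff=2.0):
    """CMD per the definition used here: CFVR below the cut-off."""
    return cfvr_value < cutoff

# Hypothetical Doppler velocities (cm/s):
ratio = cfvr(54.0, 30.0)          # 1.8, i.e. impaired vasodilator reserve
print(round(ratio, 2), has_cmd(ratio))
```

A patient whose hyperemic velocity fails to reach twice the resting velocity would thus be classified as having CMD.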
Biomarkers of fibrotic activity
Tissue turnover was determined by neo-epitope biomarkers to assess the formation of type IV and VI collagen (PRO-C4 and PRO-C6) and degradation of type IV, V and VI collagen, mimecan and titin (C4M, C5M, C6M, MIM and TIM), respectively (Table 1).
PRO-C4 and PRO-C6 are building blocks of collagen type IV and VI and have been shown to correlate with DM and to be associated with a poor outcome in patients with HFpEF. [19][20][21] C4-6M are degradation fragments of collagen type IV-VI and have been linked to liver fibrosis, carotid atherosclerosis and major cardiovascular events. 22,23 Mimecan is a small proteoglycan, upregulated and released in heart diseases. 14 Titin, also known as connectin, is a large sarcomeric protein responsible for elastic recoil and compliance. 24,25 Decreased titin content in the sarcomere leads to deposition of fibrotic tissue. 24,26 In the ECM remodeling process, mimecan and the cardiac isoform of titin are broken down to the neo-epitopes MIM and TIM, respectively. Both have shown promising results as blood biomarkers of fibrotic activity. 10,12,27

Biomarker analysis

The biomarkers were quantified by ELISA assays produced at Nordic Bioscience, Herlev, Denmark.
Biomarkers were measured in serum samples from the symptomatic women with and without diabetes and from the asymptomatic controls. Briefly, the ELISA assays were performed as follows: a streptavidin-coated microtiter plate was incubated with a biotinylated peptide for 30 min at 20 °C. Unbound biotinylated peptide was washed off five times with washing buffer (20 nM TRIS, 50 mM NaCl, pH 7.2). Subsequently, a selection peptide, five kit control samples and patient serum samples were added to the plate, and peroxidase-labelled monoclonal antibody was added and incubated for 1 h at 20 °C (PRO-C4, C4M, C6M, MIM and TIM), 3 h at 4 °C (C5M), or 20 h at 4 °C (PRO-C6). After incubation with the peroxidase-labelled antibody, all ELISA plates were washed five times with washing buffer and incubated with 3,3',5,5'-tetramethylbenzidine (TMB) for 15 min at 20 °C in the dark. The reaction was stopped with stopping solution (1% H2SO4) and measured on an ELISA plate reader at 450 nm absorbance with 650 nm as reference. Standard curves were generated from the selection peptide and plotted using a 4-parametric mathematical fit model. Samples below the lower limit of the measurement range (LLMR) were reported as the value of LLMR.
Statistics
Continuous variables with approximately normal distributions are expressed as mean ± standard deviation (SD) and continuous variables with non-normal distributions as median ± interquartile range (IQR). Pairwise comparisons of demographic, anamnestic and clinical parameters between the three groups were performed with one-way analysis of variance for continuous variables and with χ2-tests for categorical variables. Age-adjusted p-values for trend across groups were calculated using linear or logistic regression analysis with the categorical variable treated as continuous. Pairwise comparisons of biomarker distributions in the three groups were adjusted for age and performed by linear regression analyses with the logarithmically transformed biomarker as the dependent variable. All tests were Bonferroni corrected for family-wise error rate. A two-sided p-value below 0.05 was considered statistically significant. Pairwise correlations between biomarkers and covariates of interest were calculated using Spearman's rho and reported both raw and Bonferroni corrected. Statistical analyses were performed using STATA/IC 13.1 (StataCorp LP, College Station, Texas, USA).
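Two of the procedures named above, Spearman's rank correlation and the Bonferroni family-wise correction, can be sketched in a few lines. This is a minimal, standard-library illustration (the analyses themselves were run in STATA; function names and inputs here are hypothetical):

```python
def _ranks(values):
    # Assign 1-based ranks, averaging over ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Spearman's rho is the Pearson correlation of the rank vectors.
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def bonferroni(p, n_tests):
    # Family-wise correction: multiply the raw p-value by the number
    # of comparisons in the family, capping at 1.
    return min(1.0, p * n_tests)
```

With, say, seven biomarkers each tested against one covariate, `n_tests` would be 7, mirroring the family-wise logic described above.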
Demographics and risk factor distribution
Baseline information is presented in Table 2. As expected, symptomatic women had a significantly higher risk factor burden than asymptomatic controls. Conversely, they received more aggressive preventive medication, leading to lower LDL and total cholesterol. Women with DM were older and had higher BMI and a higher prevalence of hypertension, dyslipidemia and atheromatosis on invasive coronary angiogram (Table 2). Sixteen women (8.6%) had DM type I.
Coronary microvascular dysfunction and echocardiographic parameters
Coronary microvascular function was successfully assessed in 409 (98%) of the participants.
Symptomatic women with DM had lower CFVR, indicating poorer microvascular function compared with women without DM (age-adjusted p for trend < 0.001). Using a cut-off of CFVR ≤ 2 to define CMD, the prevalence of CMD was 33.7%, 28.7% and 17.1% in symptomatic women with DM, symptomatic women without DM and controls, respectively (age-adjusted p for trend 0.016) (Table 3).
LVEF under stress was higher in the two groups of symptomatic women than in the control group.
Parameters of diastolic dysfunction (E/e', e' and elevated filling pressure) indicated poorer function in women with diabetes than in the other groups (all p < 0.01). However, only few participants qualified for manifest diastolic dysfunction according to international guidelines 28 (Table 3).
Alterations of the extracellular matrix quantified by biomarkers

ECM turnover biomarker levels across the three groups are shown in Fig. 1. For all biomarkers, the highest values were seen among symptomatic women with DM. Five biomarkers (MIM, PRO-C4, PRO-C6, C4M and C6M) were significantly higher in symptomatic women with DM than in women without DM (age- and Bonferroni-adjusted p = 0.001-0.03, Fig. 1b-e), and four (TIM, MIM, PRO-C6 and C6M) were significantly higher than in controls (age- and Bonferroni-adjusted p = 0.001-0.009, Fig. 1a-e). None of the biomarkers differed between non-diabetes patients and controls.
Association between biomarkers, clinical parameters and CFVR
HbA1c, BMI, HDL-cholesterol and serum triglyceride levels were significantly correlated with all seven biomarkers, although all correlations were weak (r = 0.06-0.32, p = 0.03-0.0001) (results not shown). In addition, the biomarkers TIM, MIM, PRO-C6, C4M and C6M were significantly correlated with a history of hypertension. After Bonferroni correction, all pairwise Spearman correlations became less significant (Fig. 2). There was no correlation between any of the biomarkers and CFVR, CFV at rest or CFV under hyperemia. No systolic or diastolic measurements were correlated with the biomarkers after Bonferroni correction.
Discussion
An increased turnover of ECM fragments in blood may reflect remodeling and early fibrotic disease. 29 In a smaller sub-sample of the iPOWER cohort we have previously demonstrated imbalanced turnover of certain collagens compared with healthy controls 30 but failed to find a relation between fibrosis on cardiac magnetic resonance imaging and CFVR assessed non-invasively, perhaps due to lack of statistical power. 31 In this larger study we aimed to verify the increased ECM activity and to determine whether DM patients, who are particularly prone to developing myocardial fibrosis, had elevated ECM turnover as a marker of myocardial fibrosis and whether this was related to impaired coronary microvascular function.
We found that women with angina pectoris and DM had significantly higher levels of ECM biomarkers, although CMD did not seem to be associated with these biomarkers. Furthermore, high levels of ECM biomarkers were associated with metabolic disturbances as reflected in higher BMI, HbA1c and triglycerides and lower HDL.
Cardiovascular risk factors other than diabetes were highly prevalent in symptomatic women with DM.
Ageing, hypertension and metabolic disturbances such as obesity and DM have previously been associated with myocardial fibrosis. In DM, glycation end-product deposition 25 and metabolic dysregulation have been described as triggers of fibroblast activation, cardiac ECM remodeling and fibrosis. Obesity and hypertension may also activate fibroblasts and thereby induce collagen accumulation and deposition. 8 Biomarkers of collagen type IV and VI turnover, TIM and MIM, have previously been associated with fibrotic disease or fibrosis-related conditions. Collagen type IV is primarily found in the basement membrane 30 and has a stabilizing function in microvessels during angiogenesis. 32 Increased levels have been correlated with endocardial hypertrophy and liver fibrosis. [33][34][35] Collagen types V and VI are important components of the interstitial connective tissue and contribute to the quality of the ECM by regulating the fibril size of collagen types I and III. 30 PRO-C6 has been associated with diabetes 16,19 and, together with PRO-C4, with poor prognosis in HFpEF patients. 21 C4M and C6M have been linked to severe liver fibrosis. 22 Also, C4M has recently been found to predict major cardiovascular events and to be associated with carotid atherosclerosis. 23 Decreased titin in the sarcomere is thought to cause fibrosis, 24,26 and circulating levels of MMP-cleaved mimecan (MIM) have previously been identified as a marker of extracellular matrix remodeling in mice. 10 Although elevated in symptomatic women with DM compared with asymptomatic women, TIM was no longer correlated with DM and HbA1c after Bonferroni adjustment, whereas a strong correlation remained with BMI and blood cholesterol levels. Mimecan is a small proteoglycan with important functions in myofibril formation and angiogenesis.
It is upregulated and released in heart disease, such as after myocardial infarction, and in conditions with pressure overload such as hypertension, but it is also released in inflammatory disease such as vasculitis. 10,14 In our previous study of collagen turnover in the iPOWER cohort, including 71 symptomatic patients, 30 PRO-C6, C4M and C6M were increased compared with asymptomatic controls, whereas no significant difference between groups was observed for PRO-C4, possibly due to lack of statistical power. C5M and C6M were found to be lower in iPOWER women than in controls. However, the previous study did not include patients with DM and is thus not directly comparable with the current results, where the high values are related to the presence of DM.
To our knowledge, this is the first study to demonstrate a consistent and significant overexpression of multiple biomarkers of fibrosis in women with angina pectoris, DM and risk factors for myocardial fibrosis. Although many correlations were weak, most were highly significant even after conservative Bonferroni adjustment. Further, all biomarkers were consistently associated with DM and with the metabolic risk factors BMI, HbA1c, HDL-cholesterol and triglycerides. Also, we performed Bonferroni-corrected pairwise correlations, and consequently the association with DM and HbA1c disappeared for TIM and C5M.
We found no relation between CMD and ECM biomarkers. This would indicate that non-endothelial-dependent CMD, as assessed in this study by dipyridamole stress, is not causally related to the development of myocardial fibrosis. Another explanation, as discussed below, is that the ECM biomarker levels reflect not only cardiac remodeling but general fibrotic activity, making direct comparisons difficult. Also, a relation between increased ECM turnover and CMD caused by endothelial inflammation may have been missed in this study, as we only assessed non-endothelial-dependent CMD. 8,12,36 Obesity, arterial hypertension and DM may induce chronic, systemic inflammation and consequently endothelial dysfunction, ECM remodeling, cardiac fibrosis and finally HFpEF. 37 Another explanation for the lack of relation between CMD and biomarkers is that the measured biomarker activity may reflect early stages of fibrotic disease that later develop into manifest myocardial fibrosis, CMD and/or HFpEF. 38 Risk factors for HFpEF such as female sex, ageing, hypertension, obesity and DM 38 are all well represented in our population, but our population of women at risk did not have HFpEF. However, they did show signs of ventricular hypercontractility on echocardiography (higher LVEF), and of poorer diastolic function and higher left ventricular filling pressure compared with controls.
Strengths and limitations
In the iPOWER study, participants were consecutively included and systematically examined. All participants except the asymptomatic controls had a clinical invasive coronary angiography performed, ruling out obstructive coronary artery disease (defined by > 50% stenosis of coronary arteries). The prevalence of cardiovascular risk factors was high. However, had we been able to include participants with more impaired ventricular function or more pronounced CMD, the population might have had more myocardial fibrosis, which we might have been able to measure by increased levels of circulating biomarkers. We did not measure the endothelial-dependent component of coronary microvascular function and have therefore not examined the relationship between endothelial-dependent microvascular function and biomarker turnover. Further research is needed with the aim of detecting cardio-specific biomarkers of fibrosis. Until then, it is possible to misinterpret fibrosis as myocardial when biomarkers may be increased due to fibrosis in organs other than the heart.
Conclusion
Women with angina pectoris, DM and a high cardiovascular risk factor burden have increased turnover of biomarkers reflecting early fibrotic disease compared with women without DM and few risk factors. Biomarkers were associated with BMI, HbA1c and cholesterol levels but not with non-endothelial-dependent CMD. To better evaluate the relation between CMD and myocardial fibrosis, future studies would benefit from longitudinal follow-up and should include measurements of endothelial-dependent CMD.
Ethics approval and consent to participate
This study was performed in accordance with the Declaration of Helsinki 39 and was approved by the Danish Regional Committee on Biomedical Research Ethics (H-3-2012-005). All participants gave written informed consent after receiving oral and written information.
Consent for publication
Not applicable.
Availability of data and materials
The dataset analyzed during the current study is available from the corresponding author on reasonable request.
Competing interests
SHN is employed at Nordic Bioscience. The other authors of this manuscript have no conflicts of interest.
Funding
This work was supported by The Danish Heart Foundation (grant number: 11-10-R87-B-A3628-22678).
Greedy de novo motif discovery to construct motif repositories for bacterial proteomes
Background Bacterial surfaces are complex systems, constructed from membranes, peptidoglycan and, importantly, proteins. The proteins play crucial roles as critical regulators of how the bacterium interacts with and survives in its environment. A full catalog of the motifs in protein families and their relative conservation grade is a prerequisite for targeting the protein-protein interactions that bacterial surface proteins make to host proteins. Results In this paper, we propose a greedy approach to iteratively identify conserved motifs in large sequence families. Each iteration discovers a motif de novo and masks all occurrences of that motif. Remaining unmasked sequences are subjected to the next round of motif detection until no more significant motifs can be found. We demonstrate the utility of the method through the construction of a proteome-wide motif repository for Group A Streptococcus (GAS), a significant human pathogen. GAS produces numerous surface proteins that interact with over 100 human plasma proteins, helping the bacteria to evade the host immune response. We used the repository to find that proteins that are part of the bacterial surface have motif architectures that differ from those of intracellular proteins. Conclusions We elucidate that the M protein, a coiled-coil homodimer that extends over 500 Å from the cell wall, has a motif architecture that differs between various GAS strains. As the M protein is known to bind a variety of different plasma proteins, the results indicate that the different motif architectures are responsible for the quantitative differences in the plasma proteins that various strains bind. The speed and applicability of the method enable its application to all major human pathogens. Electronic supplementary material The online version of this article (10.1186/s12859-019-2686-8) contains supplementary material, which is available to authorized users.
Background
The rise of antibiotic-resistant bacteria poses a major global health issue predicted to cause 10 million deaths per year by 2050, more than heart disease and cancer combined [1]. The increasing resistance to antibiotics necessitates the development of alternative treatment strategies. One promising alternative treatment strategy is the disruption of protein binding interfaces between bacterial and human proteins to disarm bacterial defense systems [2]. Such strategies require high-confidence identification of sequence motifs that correspond to structural units necessary for protein folding or for binding of ligands and other proteins.
Motifs are short segments of a protein sequence that show a level of conservation throughout a protein family and beyond. Conserved motifs can be extracted from multiple sequence alignments of proteins with similar functions in different species. While finding such motifs can provide insights for the prediction of functional residues, identifying and understanding them is fundamental to discovering binding interfaces in protein complexes [3]. It is generally believed that binding interfaces forming interactions that help bacteria evade the immune system or obtain nutrients are more conserved than interactions that benefit the host, such as surface-exposed epitopes. Over time, this results in segments of exposed proteins that are significantly more conserved for functional reasons.
Disrupting these protein-protein interactions by targeting the conserved segments would potentially facilitate the host immune response [4][5][6]. However, the high variability of bacterial surface proteins makes it challenging to study them with traditional sequence analysis methods. InterPro, for example [7], contains motifs for the anchor and the signal peptide, whereas the rest of the protein sequence remains largely unannotated. Multiple-sequence alignment algorithms typically run into problems with the variable number of repeats and tend to produce highly gapped alignments. The rapid growth of known bacterial protein sequences presents an opportunity to identify protein-family-specific motifs (in contrast to InterPro, which attempts to find motifs common to multiple families).
Group A streptococcus (GAS) is one of the most important bacterial pathogens, causing over 700 million mild infections such as tonsillitis, impetigo and erysipelas and, occasionally, severe invasive infections including sepsis, meningitis or necrotizing fasciitis with mortality rates up to 25% [8]. Surface proteins play important roles in the interaction with host proteins [9]. Several bacterial surface proteins interact with numerous host proteins, forming complex protein-protein interaction networks.
One of the key surface proteins of S. pyogenes is the M protein, a coiled-coil homodimer that extends over 500 Å from the cell wall. The M protein is capable of binding several plasma proteins such as fibrinogen [6] and albumin [10,11]. A crystal structure of M and fibrinogen published in 2011 demonstrated that M and fibrinogen form a cross-like complex structure. Further, the M protein is composed of several repeats that are present a variable number of times; some of these repeats overlap with protein-protein interaction binding interfaces [12][13][14][15]. Accordingly, a comprehensive repository of the motifs in coiled-coil proteins and their relative conservation grade is a prerequisite to target the protein-protein interactions that bacterial surface proteins make to host proteins [16].
Here, we present a strategy to iteratively identify protein-family specific motifs from large genome resources, then mask all occurrences of these motifs until no more significant motifs can be found. We applied this strategy to a GAS strain as a model system. We constructed a compendium of almost 60000 motifs for GAS. Further, we demonstrate the power of the approach using the M protein and describe the motif resource in general terms.
Outline of the algorithm
The algorithm starts with a database of protein sequence families and sub-selects a user-defined number (here 100) of sequences for each family containing more than 100 sequences, as shown in the pseudocode in Fig. 1. The main part of the algorithm finds the first motif in the family and masks all identified occurrences, then removes them from the sequences, producing one new sub-sequence for each unmasked segment in the next iteration. After finishing this loop, all identified motifs are stored in the main repository, followed by an architectural analysis that considers the occurrences of each motif in the entire genome and computes the internal overlaps of that motif inside the family. In the last step, the results are compared with InterPro to report overlapping motifs and to flag new ones for further analysis. All results, as well as the main repository of all discovered motifs, are stored in an SQLite table. SQLite is an embedded, transactional SQL database engine; its code is in the public domain and free to use.
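The discover-and-mask iteration described above can be sketched as follows. `find_motif` and `find_occurrences` are placeholders standing in for the MEME and FIMO calls, and the ten-residue minimum fragment length comes from the text; everything else is an illustrative simplification:

```python
def greedy_motif_discovery(sequences, find_motif, find_occurrences, min_len=10):
    """Greedy loop: discover one motif de novo, mask all its occurrences,
    and recurse on the unmasked fragments until no motif remains."""
    repository = []
    pool = list(sequences)
    while pool:
        motif = find_motif(pool)  # de novo discovery (MEME's role)
        if motif is None:
            break
        repository.append(motif)
        next_pool = []
        for seq in pool:
            cursor = 0
            # all occurrences of the motif in this sequence (FIMO's role)
            for start, stop in sorted(find_occurrences(motif, seq)):
                if start - cursor >= min_len:
                    next_pool.append(seq[cursor:start])
                cursor = stop
            if len(seq) - cursor >= min_len:
                next_pool.append(seq[cursor:])
        pool = next_pool
    return repository

# Toy stand-ins for MEME/FIMO, for illustration only.
def toy_find(pool):
    return "AAA" if any("AAA" in s for s in pool) else None

def toy_occurrences(motif, seq):
    hits, i = [], seq.find(motif)
    while i != -1:
        hits.append((i, i + len(motif)))
        i = seq.find(motif, i + 1)
    return hits

greedy_motif_discovery(["AAAXXXXXXXXXXXX"], toy_find, toy_occurrences)  # ["AAA"]
```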
Construction of protein families
We selected a representative genome from an invasive M1 S. pyogenes isolated in Ontario, Canada. This sample is available with id 293653.4 from PatricBRC, the bacterial bioinformatics resource database [17]. This genome has 1931 coding sequences (CDS). We downloaded an additional 70,459 genomes from PatricBRC; this is the number of genomes available at PatricBRC that had both a protein fasta (.ffa) file and a .cds file containing a table that links the PatricBRC sequence accession number to the FigFam ID [18]. We used this resource to build one protein fasta file per FigFam ID, filtering out duplicate entries. We constructed 1564 FIGfams families containing a total of 9,041,083 protein sequences, of which 3,817,065 were unique at the amino acid level. This sequence resource was used as input to the workflow outlined in Fig. 2.

Figure 2 shows the general workflow of our approach, where we use MEME [19,20] and FIMO [21] in the core of the system to handle motif discovery and masking of the multiple occurrences of each motif on the sequence. MEME is an open-source application that has been widely used for sequence motif discovery and analysis in both DNA and proteins. It is based on the GLAM2 algorithm [22] and can cover motifs containing gaps. While MEME finds a single occurrence of a motif in a sequence, FIMO considers MEME's output and defines multiple occurrences for any individual gapped or un-gapped motif. FIMO assigns different scores to each matched sequence according to a dynamic programming approach [23], and motif-specific q-values are then computed based on a bootstrap procedure [24]. FIMO's outputs are ranked by their p-values, and the q-values make it possible to set a user-defined threshold to retain only specific motif occurrences.

Fig. 2 The workflow of the de novo motif discovery approach. All FIGfams families are downloaded for a user-specified organism and passed through the core of the processing (MEME and FIMO), where all motifs are discovered and masked in an iterative process. An SQLite repository stores all motifs and motif architectures with the required information, such as organism name, FIGfams family name, and the start and stop points on the sequence.
InterPro
InterProScan [7] is a reference resource that provides a functional analysis of protein sequences by classifying them into families and predicting the presence of domains and important sites. In order to achieve a general view of the coverage of our approach, we compared the generated de novo based motif repository of GAS with all GAS-related motifs in InterProScan.
Assigning proteins to cellular compartments
All proteins were assigned to one of seven compartments using information from mass spectrometry experiments, annotations from several databases, and manual curation. In short, we identified exposed, cell wall-associated and secreted proteins using data from Karlsson et al. [9]. Transmembrane proteins were identified using TMHMM [25]. DNA-associated proteins and transcription factors were identified using InterPro [7] and RegPrecise [26]. All other proteins were assigned to the intracellular compartment.

Fig. 3 Sub-selection test on two sample families. Two sample families were selected and analyzed by the sub-selection test. The bubble graph indicates that a selection of 100 sequences gives good coverage while saving computational resources more than 20-fold.
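The assignment can be read as a rule cascade in which the first matching annotation source wins; the sketch below is an illustrative simplification (the precedence order and category names are assumptions, not stated in the text):

```python
def assign_compartment(protein_id, surface_ids, transmembrane_ids, dna_ids, tf_ids):
    """First matching rule wins; everything else falls through to intracellular."""
    if protein_id in surface_ids:        # mass-spectrometry evidence [9]
        return "surface/secreted"
    if protein_id in transmembrane_ids:  # TMHMM prediction [25]
        return "transmembrane"
    if protein_id in dna_ids:            # InterPro annotation [7]
        return "DNA-associated"
    if protein_id in tf_ids:             # RegPrecise annotation [26]
        return "transcription factor"
    return "intracellular"
```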
Software availability
MEME and FIMO 4.11.1 were used throughout the project. The workflow is implemented in GC3pie [27], which makes it possible to parallelize over all available computational cores. All parts of the workflow are written in Python 2.7 and wrapped by applicake, an open-source and free framework useful for designing workflows. The workflow is available through a Singularity container [28]; the container, together with the data and an IPython notebook containing instructions and examples for parsing the data, is provided online with this DOI: 10.5281/zenodo.1403142.
Sub-selection
We analyzed a large sequence database of all GAS proteins containing 1564 FIGfams sequence families, as outlined in the Methods section. The FIGfams contain different numbers of sequences, which raises the question of whether a subset of them would be sufficient to cover most of the motifs. To reduce the number of sequences for computational-resource reasons, we designed a general sub-selection test: for two different families, we randomly selected sets of 2, 10, 20, 50, 100, 250, 500, and 1000 sequences and repeated the whole analysis 10 times. In each sub-selection test, we ran the workflow to find all motifs and averaged the motif coverage over all 10 repeats. Figure 3 demonstrates that a sub-selection of 100 sequences is sufficient to cover the majority of all motifs while reducing time and computational resources more than 20-fold (see Table 1).

Table 1 shows the number of sequences, the motif coverage in percentage (averaged over the 10 repeated tests), and the computational time on 1 CPU.

Fig. 4 Two rounds of the algorithm (first to the left, second to the right). The algorithm starts with a collection of sequences and discovers motifs in this collection using MEME. It uses FIMO to find additional occurrences of the motif within the sequence collection. In the second round, the motifs are masked (gray bars) before MEME is applied once more. The algorithm iterates through rounds 3 to N until no more motifs are found or the sequence collection is fully annotated.
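The sub-selection experiment can be sketched as below; `motif_coverage` is a placeholder for running the full workflow on a sample and measuring coverage, and the toy example only counts sampled sequences:

```python
import random

def subselection_test(family, motif_coverage,
                      sizes=(2, 10, 20, 50, 100, 250, 500, 1000),
                      repeats=10, seed=0):
    """Average motif coverage over `repeats` random samples of each size."""
    rng = random.Random(seed)
    averages = {}
    for n in sizes:
        n = min(n, len(family))  # cannot sample more sequences than exist
        vals = [motif_coverage(rng.sample(family, n)) for _ in range(repeats)]
        averages[n] = sum(vals) / len(vals)
    return averages

# Toy family and a coverage stand-in that just counts sampled sequences.
family = list(range(300))
averages = subselection_test(family, lambda sample: float(len(sample)))
averages[100]  # 100.0
```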
MEME/FIMO
The workflow starts with the name of the desired organism and an optional q-value cut-off as the only inputs (Fig. 2). In the second phase, all FIGfams protein families related to the input organism are downloaded and stored in a database. Then, subject to the availability of computational resources, de novo motif discovery on the protein families starts. Figure 4 shows two sample rounds of the algorithm, where MEME is applied to the sequence collection with the number of identified motifs restricted to one. Motif occurrences were discovered in the sequence collection using FIMO, and only occurrences with e-values of 1e-6 or lower were considered. The proteins were split at the motif occurrences, and remaining parts longer than ten amino acids were carried forward to create a new merged sequence collection mixed with full-length and partial proteins.

Fig. 5 The auto-generated result of our approach on the M1 protein. a: Binding interfaces of fibrinogen according to the reference crystal structure (PDB id 2XNX) [6]. b: M1 domains proposed in [10]. c: The output results of our approach.
The new sequence collection is used as the input for each iterative round of MEME, FIMO and splitting, until no more significant motifs can be discovered or all remaining sub-sequences are below ten amino acids. All motif occurrences with their corresponding features are stored in an SQLite database. To give further information to the user, known motifs are also integrated from [10] and the InterPro database and visualized using pViz.js [29].
Protein M1 Motif discovery
As an example of application to a specific protein family, we collected a large sequence collection of M proteins from four sources: PatricBRC, genomes we have previously sequenced and assembled [30], the M database from the CDC (Centers for Disease Control and Prevention), and the UniProtKB/TrEMBL database. Any M protein sequence without motifs representing an anchor or a signal peptide was discarded, and the remaining sequences were reduced to 98% sequence identity using CD-HIT [31]. In total, the algorithm ended after 18 rounds, resulting in 20 motifs from the M protein sequence collection. The SF370 M1 protein reference [32] contained motifs m01-m03, m05-m08, m11-m14 and m16-m17 but not m04, m09-m10, m15 or m18-m20. Additional file 1 contains the logos of all discovered motifs. Figure 5 shows the general motif architecture as the output of the algorithm. Note that an architecture (motif pattern) shows the distribution of motifs over the entire protein family. Such a representation makes it possible to show the general motif pattern that most proteins in the family follow; the architectural motif view therefore helps to find potential protein-protein interaction binding sites, as the majority of family members tend to follow this pattern. Accordingly, we found a total of 123 motif architectures, and of these, 85% (104) are associated with a single serotype.
Analysis of conserved motifs in the GAS genome
We evaluated the identified motifs separately for protein families in different cellular compartments (Table 3). The main idea is to provide a general comparison between protein families in different cellular compartments in terms of motif-based conservation grade, which helps to reveal the general evolutionary pressure on cellular compartments and to distinguish potential drug targets inside and outside the cell. To do that, one should consider that the number of motifs in each compartment is a function of sequence length, and the average is dependent on sequence variability. Such dependency affects the comparison between different protein families, leading to results that are biased by sequence length. To address this, we represent the motif architecture per sequence and, most importantly, per family. Accordingly, each protein, or its related family, can have one or several architectures depending on the motif variability in that family. Consequently, protein families with few architectures indicate higher sequence conservation inside the family, and generally such a family has more conserved motifs performing specialized cellular functions. In this way, by comparing the number of architectures in two different protein families, it is possible to state which family is more conserved. As shown in Table 4, the most variable proteins in GAS are transmembrane and secreted proteins, which are less conserved and have more diversified interactions with host proteins. The most conserved proteins are DNA-related proteins and transcription factors, together with intracellular proteins that have specialized machinery roles inside the cell. Transmembrane proteins, which play a crucial role as the transportation system on the bacterial surface, have also evolved under evolutionary pressure. In general, as Fig. 6 indicates, we can conclude that the evolutionary pressure is lower on intracellular proteins than on surface and secreted proteins.
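The per-family measure described above can be sketched as follows: an architecture is the ordered tuple of motif identifiers along a protein, and a family with fewer distinct architectures (more sequences per architecture) counts as more conserved. All identifiers and coordinates here are made up:

```python
def architectures(family_hits):
    """family_hits: {protein_id: [(start, motif_id), ...]}.
    Returns the set of distinct motif architectures in the family."""
    return {tuple(m for _, m in sorted(hits)) for hits in family_hits.values()}

def sequences_per_architecture(family_hits):
    """Higher values indicate a more conserved family."""
    return len(family_hits) / len(architectures(family_hits))

# Toy family: three proteins, two of which share the same motif order.
family = {
    "p1": [(0, "m01"), (40, "m02")],
    "p2": [(5, "m01"), (50, "m02")],
    "p3": [(0, "m02"), (40, "m01")],
}
sequences_per_architecture(family)  # 1.5
```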
Comparison with InterPro
To compare our results to InterPro, we analyzed and filtered motifs based on their signatures from InterProScan, which revealed that 11,996 distinct motifs related to GAS (71.15% of all discovered motifs) are not recognized by InterProScan, while many important motifs are held in common (28.85%). Table 5 lists the most commonly overlapping motifs discovered by our approach, together with their InterPro descriptions.
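At its core, this comparison reduces to interval intersection between de novo motif coordinates and InterPro annotations on the same protein; a minimal sketch with illustrative coordinates:

```python
def intervals_overlap(a, b):
    """Half-open (start, stop) intervals on the same protein sequence."""
    return max(a[0], b[0]) < min(a[1], b[1])

def fraction_known(denovo_motifs, interpro_motifs):
    """Fraction of de novo motifs overlapping at least one InterPro entry."""
    hits = sum(1 for m in denovo_motifs
               if any(intervals_overlap(m, k) for k in interpro_motifs))
    return hits / len(denovo_motifs)

# One of two de novo motifs overlaps the single InterPro annotation.
fraction_known([(0, 10), (20, 30)], [(5, 8)])  # 0.5
```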
Discussion
Conserved protein sequence domains, also referred to as motifs, play an important role in protein function, protein structure and protein-protein interactions. Motifs are the result of several evolutionary processes where, for example, a part of a protein evolves at a different rate than other parts of the same protein. Identifying motifs is fundamental to understanding protein function and to discovering putative binding interfaces. Motifs can be used to shed light on the evolutionary process underpinning the development of a protein family with respect to the protein's function over time; they can also be used to produce a simplified view of the protein as a series of conserved motifs that together specify the protein's motif architecture. Although several approaches have been developed for motif discovery on protein sequences, most are focused either on a given motif or on finding motifs, such as signal peptides, that occur across a general population of protein sequences.
Here, we developed a de novo motif discovery approach and applied it to protein families that share a common ancestral protein; this resulted in a repository of motifs over an entire organism. The approach is focused on understanding the evolutionary processes that have acted on each protein family in a comparatively short evolutionary time. We implemented the approach as a software package written in Python and distributed via Singularity containers [28], making it easy to install and use. We demonstrated the approach on GAS, an important human pathogen with a mortality rate of 25% for invasive infections. We also characterized the proteome-wide motif repository by comparing it to InterPro; furthermore, we analyzed the motif architectures of these proteins and discovered that the number of sequences per architecture differs between cellular compartments.
Given the speed and flexibility of our approach, we believe it will be useful for analyzing surface proteins of pathogens, as these proteins are under high selective pressure and therefore cannot be analyzed using more traditional approaches such as multiple-sequence alignments (MSAs). Our attempts to use various MSA algorithms failed due to the high sequence variability in regions between motifs and the varying number of motifs. Motif searching approaches also failed, identifying only a small subset of the motifs that our approach discovered.
Conclusion
In this paper, we demonstrate a proof-of-principle approach to parsing large sequence families into motifs using a de novo-based greedy approach. This simple approach can easily handle situations where parts of proteins are repeated or re-arranged, which can be time-consuming with other approaches. While this general approach can be applied to any bacteria, we used GAS as a model system to make a comprehensive motif repository of its proteins. We further analyzed the M1 protein, one of the most important virulence factors of S. pyogenes, to show the motif-based architectural analysis. We observe that we over-parse some domains, but also that many of these large domains are only partly conserved over the sequence collection. The results indicate that many of the newly discovered motifs are not always present together with adjacent motifs, indicating that they might have different and independent functions. Interestingly, many of our newly discovered motifs are not found in any of the emm1 strains, and some of these might be responsible for binding other ligands.
International trends in rates of hypospadias and cryptorchidism.
Researchers from seven European nations and the United States have published reports of increasing rates of hypospadias during the 1960s, 1970s, and 1980s. Reports of increasing rates of cryptorchidism have come primarily from England. In recent years, these reports have become one focus of the debate over endocrine disruption. This study examines more recent data from a larger number of countries participating in the International Clearinghouse for Birth Defects Monitoring Systems (ICBDMS) to address the questions of whether such increases are worldwide and continuing and whether there are geographic patterns to any observed increases. The ICBDMS headquarters and individual systems provided the data. Systems were categorized into five groups based on gross domestic product in 1984. Hypospadias increases were most marked in two American systems and in Scandinavia and Japan. The increases leveled off in many systems after 1985. Increases were not seen in less affluent nations. Cryptorchidism rates were available for 10 systems. Clear increases in this anomaly were seen in two U.S. systems and in the South American system, but not elsewhere. Since 1985, rates declined in most systems. Numerous artifacts may contribute to or cause upward trends in hypospadias. Possible "real" causes include demographic changes and endocrine disruption, among others.
Key words: abnormality, cryptorchidism, endocrine, genital, hypospadias, testis. Within the past 5 years, researchers have hypothesized that some natural or manufactured agents are disrupting normal endocrine function in humans and animals, with particular emphasis on male reproductive effects (1)(2)(3)(4)(5)(6)(7)(8). This hypothesis attempts to unify and explain worrisome trends in measures of male reproductive health as the effects of estrogenic or antiandrogenic chemicals. Among the most frequently cited trends, along with trends in sperm count and testicular cancer, are increases in the male genital birth defects of hypospadias and cryptorchidism (3)(4)(5).
These defects represent mild degrees of feminization. Hypospadias occurs when the urethral opening is displaced toward the scrotum. Cryptorchidism is a condition in which one or both testicles do not descend into the scrotum.
Increasing rates of these two anomalies have been reported within the past 25 years by a number of authors (9)(10)(11)(12)(13)(14)(15)(16)(17)(18)(19) (see Table 1). The increases that have been cited in support of the endocrine disruption hypothesis occurred for the most part in the 1960s and 1970s. They also derive from a small number of countries in North America and Europe. I have compiled information on worldwide trends in these anomalies to provide more contemporary and complete information for the ongoing debate on endocrine disruption.
Methods
The International Clearinghouse for Birth Defects Monitoring Systems (ICBDMS), a nongovernmental organization of the World Health Organization, collects rates of selected birth defects from member programs. To be a part of the ICBDMS, member programs must be actively engaged in the systematic and continuous collection of birth defects cases. The ICBDMS does not accept data from programs that only passively receive and report health statistics data from administrative sources. The Appendix lists the size and base (population or hospital) of each program as of 1990 (15). Hospital-based systems calculated their rates based on all births occurring in participating hospitals. Population-based systems used all births in a given geographic area.
In late 1997, the ICBDMS headquarters in Rome provided the latest hypospadias and cryptorchidism birth prevalence rates per 10,000 total births from 29 birth defects registries in 21 countries for all years available. In many cases, I received information for additional years via personal communication with the registrars. Registrars also provided information useful in the interpretation of changes in their rates over time.
The ICBDMS has defined both hypospadias and cryptorchidism, but the extent of adherence to those definitions by participating programs is not known. Almost all countries included defects among stillbirths, but the definition of a stillbirth varied. Prenatal diagnoses that were followed by pregnancy termination were not counted in the rates. Both isolated defects and defects found as part of syndromes are included. The number of cases per year of hypospadias ranged from 12 in the Northern Netherlands system to nearly a thousand in England and Wales. Three ICBDMS members, the California Birth Defects Monitoring Program, the Italy-Northeast registry, and the English registry (after 1989), collected data only on the more severe types of hypospadias (i.e., penile, scrotal, or perineal), sometimes known as second- or third-degree hypospadias, wherein the meatal opening is proximal to the glans of the penis. These types of hypospadias will be referred to as "severe" hypospadias in this paper.
To determine whether trends depended on the degree of industrialization and to facilitate presentation, registries were grouped into categories loosely based on their country's gross domestic product (GDP) in 1984, chosen as the middle year of the bulk of the data. GDP was obtained from a University of Toronto online database (20). The U.S. group consisted of three American registries and had the highest GDP ($15,900 U.S.) in 1984. The Commonwealth group consisted of the most affluent nations from English-speaking countries (Australia, Canada, New Zealand), all of which had GDPs in excess of $12,000 U.S. The Scandinavian group included the four Scandinavian countries, whose GDPs were similar to those in the Commonwealth nations. The Northern Europe and Japan group (England, France, Japan, The Netherlands) had GDPs slightly lower than Scandinavia. The Mediterranean and Ireland group (Ireland, Israel, Italy, Spain) had yet lower GDPs, in the range of $5,500-8,000 U.S. The least affluent nations group (GDP less than $5,500 U.S.) included systems from two Eastern European nations, Hungary and Czechoslovakia, and the Chinese and Latin American systems.
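The grouping above mixes GDP bands with geography, but the dollar cutoffs themselves can be sketched as a simple classifier. A minimal Python sketch, assuming pure thresholding; the cutoffs come from the text, while the function name and the sample registry values (other than the U.S. figure of $15,900) are illustrative assumptions:

```python
def gdp_group(gdp_usd_1984):
    """Assign an affluence band from 1984 GDP (U.S. dollars).

    Thresholds follow the text; the real grouping also used geography,
    which is why Commonwealth and Scandinavia share one band here.
    """
    if gdp_usd_1984 >= 15900:
        return "U.S."
    if gdp_usd_1984 >= 12000:
        return "Commonwealth / Scandinavia"
    if gdp_usd_1984 >= 8000:
        return "Northern Europe and Japan"
    if gdp_usd_1984 >= 5500:
        return "Mediterranean and Ireland"
    return "Least affluent"

# Sample values are illustrative, not taken from the GDP database cited.
registries = {"United States": 15900, "Australia": 12500, "Hungary": 4000}
bands = {name: gdp_group(gdp) for name, gdp in registries.items()}
```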
I calculated 3-year moving averages to smooth the trends. An "increase" in a trend in this report is based only on visual inspection, and not on statistical testing.
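The two computations behind the Results — birth prevalence per 10,000 total births and a 3-year moving average — can be sketched in a few lines of Python. This assumes a centered window (the text does not specify centered vs. trailing); function names and the sample rates are illustrative, not registry data:

```python
def rate_per_10000(cases, total_births):
    # Birth prevalence as reported to the ICBDMS: cases per 10,000 total births.
    return 10000.0 * cases / total_births

def moving_average_3yr(rates):
    # Centered 3-year moving average used to smooth year-to-year noise;
    # the first and last years lack a full window and are dropped.
    return [sum(rates[i - 1:i + 2]) / 3.0 for i in range(1, len(rates) - 1)]

# Hypothetical annual rates (per 10,000 births) for illustration.
annual = [30.0, 33.0, 36.0, 30.0, 36.0]
smoothed = moving_average_3yr(annual)  # [33.0, 33.0, 34.0]
```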
Results
The highest and lowest rates reported by individual systems during the period under study sometimes varied by a factor of three or more for both hypospadias and cryptorchidism. However, few systems showed monotonic, unbroken upward or downward trends; trends typically reversed direction at least once during the period of years under study.
Hypospadias. Nationwide data from the Birth Defects Monitoring Program of the Centers for Disease Control and Prevention (CDC) showed an upward trend in hypospadias beginning in 1970 (Fig. 1). A more discontinuous upward trend began in 1968 in the CDC's Atlanta, Georgia, surveillance system. Severe hypospadias in the Atlanta system increased from 1982 to 1985 and then leveled off. Rates from the California Birth Defects Monitoring Program for severe hypospadias showed no upward trend.
In the Commonwealth group (Fig. 2), each system showed both short upward and short downward excursions and very little net change. Increases in the Australian and Canadian rates and rates in the Canadian province of Alberta were restricted to the late 1980s, whereas rates in the province of Ontario and in New Zealand were down or unchanged through the 1980s and early 1990s.
Scandinavian countries, with the exception of Sweden, show overall upward trends, with rates approximately doubling in Norway and Denmark during the 1970s and 1980s (Fig. 3). Norwegian rates declined somewhat in the 1990s. The Finnish registry reports that their increase was restricted to the mild form of hypospadias (first degree). In the northern Europe and Japan group (Fig. 4), all but the northern Netherlands registry showed some net increase. Rates dropped sharply in the northern Netherlands system during the 1980s. The Mediterranean and Ireland systems (Fig. 5) include only one system with an increase, the Italian Multicentric Register of Congenital Malformations (IPIMC). The IPIMC suggests that its increase may have been secondary to a special case-control study of hypospadias launched during this interval. The Italy-northeast system, which records only severe hypospadias, showed a decrease. Rates in the Israeli system made wide upward and downward excursions.
Among the least affluent nations (Fig. 6), rates were generally stable since 1980, with the exception of the Czechoslovakian system, which registered an increase.
Cryptorchidism. Few countries in the ICBDMS had data on cryptorchidism. Among U.S. and Commonwealth systems combined (Fig. 7), the U.S. national rates increased during the 1970s and 1980s, whereas the U.S.-Atlanta system began to increase in 1970, rose sharply in 1985, and declined equally sharply by 1994. This peak corresponds to a 10-year period during which a more inclusive case definition was in effect in Atlanta. Rates in the Canadian national system increased until about 1980 and then stabilized. The Canadian provincial systems of Alberta and Ontario reported declines, at least since 1985.
The Norwegian system, the only Scandinavian system collecting cryptorchidism rates, shows no consistent trend between 1974 and 1996 (Fig. 8). The same can be said of the data from the France-Paris system (Fig. 9), whereas English rates dropped sharply around 1990, contemporaneous with the introduction of an "exclusion list." The Hungarian system rates have declined from an early peak, while the South American rates increased overall and since 1985 (Fig. 10).
Discussion
Review of data from 29 registries that monitor a total of 4 million births per year around the world reveals wide intercountry variation in rates of hypospadias and cryptorchidism. Given differences in registry methods, genetic variation, and other factors, the rates themselves are not directly comparable. The primary value of this data is what it shows about changes within systems in recent decades.
The data suggests an increase in reported rates of hypospadias during the 1970s and 1980s in two United States systems and in Scandinavia and Japan. Rates from other nations increased only in one Italian system (IPIMC), where an artifact is suspected, and in the Israeli system, which is the smallest system and the one showing the most unstable rates of hypospadias over time. The absence of an increase is perhaps most notable in Canada, whose society is similar to that of the United States. Among all systems showing an increase, rates tended to level off after 1985.
There is no indication of a generalized increase in cryptorchidism rates over time since 1970, although data on this defect is much more limited. Two U.S. systems show marked increases, but the data from the Atlanta system is difficult to interpret because of coding changes. Since 1985, rates in most systems have actually declined.
A number of factors may account for reported changes in these rates. Chief among them are artifacts. One possible explanation is that the definition of hypospadias may have changed over time to include more minor degrees of deviation from the normal position of the urethral opening on the tip of the penis. There is no anatomical marker that defines when normal variation stops and first-degree hypospadias begins. Slight degrees of deviation are much more common than more proximal meatal positions (21), and a subtle change in the case definition could have produced a large change in overall rates. There is conflicting evidence on whether the case definition of hypospadias has indeed loosened to include more of the milder, first-degree cases. Previously published data from the Atlanta registry indicated that the percent of first-degree cases did not increase over time (16). In contrast, the Finnish registry communicated that the percent of more serious degrees of hypospadias declined as overall rates increased. Moreover, the California and the northeast Italy programs have shown no increase in rates of severe hypospadias.
Severe hypospadias is much less likely to be affected by changes in definition because it has clearer anatomical boundaries.
Another possible explanation for the increase is gradual improvement over time in physician documentation of hypospadias. Because the foreskin is used in some surgical procedures to repair hypospadias and circumcision must be deferred if hypospadias is present, medicolegal considerations may increasingly cause physicians who perform circumcision to examine the penis carefully. They may therefore be referring more boys to urologists.
Increasing numbers of such referrals may increase the number and/or prominence of diagnoses of hypospadias in medical records, thereby improving the chances of detection by a surveillance program.
The same artifacts could explain the increases in cryptorchidism rates noted in some of the systems. In particular, cryptorchidism may be sought more aggressively now because of the strong evidence accumulated over the past 20 years that undescended testicles are likely to become cancerous (22) and because of the standard practice of removing them early in life in hopes of reducing this risk. A second hospitalization for orchidopexy in infancy may double the chances of the anomaly being registered in a surveillance system.
For both hypospadias and cryptorchidism, it is also conceivable that rates after 1985 were affected by literature published during the 1980s describing increases in these anomalies. Perhaps criticism of the reports led to a tightening of case definitions in some systems. It is noteworthy that of the five registries publishing hypospadias increases by 1986 (Table 1), only Denmark reported any further increase in subsequent birth years.
Other, nonartifactual explanations have been proposed to explain the increasing hypospadias rates in Europe reported earlier. Initially, it was hypothesized that increasing use of steroid-containing medications by pregnant women might be responsible (23). However, the consensus now seems to be that the risk from such preparations is exaggerated, and the prevalence of their use is not great enough to account for the increase (12,24,25).
Alternatively, evidence of increased risk of hypospadias among couples of reduced fertility has produced speculation that an increasing proportion of such couples among all parents could account for an increasing trend in this anomaly (25,26). However, the magnitude of the risk for relatively infertile couples (26), combined with their low prevalence among the population of all parents, does not seem sufficient to account for the large observed increases in some registries.
Conclusion
There is some evidence for an increase in hypospadias rates concentrated in more affluent nations. That increase may have ended in the mid-1980s. More registries that experienced increasing trends in hypospadias should report how their percentage of severe cases has changed over time. If an increase in all degrees of hypospadias is reported in more surveillance systems, more in-depth investigation will be warranted.
Assuming these upward trends are real and assuming exogenous agents are responsible, the relevant exposures may be more common in highly industrialized countries. Those exposures (or their body burdens) that may have stabilized since 1985 might also be the most logical ones to pursue among all potential environmental exposures.
Although it is important to examine these trends broadly, it is unlikely that further inspection of international trends alone will shed additional light on the question of endocrine disruption as a cause of birth defects. Such descriptive analysis is provocative, but more sophisticated study designs should be sought. | 2014-10-01T00:00:00.000Z | 1999-04-01T00:00:00.000 | {
"year": 1999,
"sha1": "fe4b03ef54b37effd6bba45f73631b50a400a8f7",
"oa_license": "pd",
"oa_url": "https://ehp.niehs.nih.gov/doi/pdf/10.1289/ehp.99107297",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fe4b03ef54b37effd6bba45f73631b50a400a8f7",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Geography"
]
} |
253631680 | pes2o/s2orc | v3-fos-license | A case report of persistent drug-sensitive pulmonary tuberculosis after treatment completion
Background Mycobacterium tuberculosis (Mtb) has been found to persist within cavities in patients who have completed their anti-tuberculosis therapy. The clinical implications of Mtb persistence after therapy include recurrence of disease and destructive changes within the lungs. Data on residual changes in patients who completed anti-tuberculosis therapy are scarce. This case highlights the radiological and pathological changes that persist after completion of anti-tuberculosis therapy and the importance of achieving sterilization of cavities in order to prevent these changes. Case presentation This is a case report of a 33-year-old female with drug-sensitive pulmonary tuberculosis who, despite successfully completing standard 6-month treatment, had persistent changes in her lungs on radiological imaging. The patient underwent multiple adjunctive surgeries to resect cavitary lesions, which were culture-positive for Mtb. After surgical treatment, the patient's chest radiographs improved, her symptoms subsided, and she was given a definition of cure. Conclusions Medical therapy alone, in the presence of severe cavitary lung lesions, may not be able to achieve sterilizing cure in all cases. Cavities can not only cause reactivation but also drive inflammatory changes and subsequent lung damage leading to airflow obstruction, bronchiectasis, and fibrosis. Surgical removal of these foci of bacilli can be an effective adjunctive treatment necessary for a sterilizing cure and improved long-term lung health.
Background
Mycobacterium tuberculosis treatment has been evolving over the years, especially with the introduction of newer drugs and shorter regimens [1,2]. Given the cavitary nature of tuberculous disease, patients who have been treated with current regimens are often given the designation of cure without achieving proper sterilization. Patients who complete the tuberculosis regimen are given the definition of cure after they achieve sputum negativity, but many of these patients harbor bacilli within cavities that continue to exert their effects on the respiratory system [3]. The residual changes that occur in patients who have completed medical therapy have been poorly attended to in the literature. Patients who underwent surgical and medical sterilization have been reported to have better pulmonary health in the long term, especially after the removal of cavities [4].
Here, we report a patient who underwent a complete regimen of medical therapy for pulmonary tuberculosis and later required surgical resection of her cavities, which grew tuberculous bacilli even after she had achieved sputum negativity.
Case presentation
A 33-year-old female from the country of Georgia presented to a tuberculosis dispensary on July 10, 2020, with a temperature of 38°C and symptoms of malaise, productive cough, and night sweats. The patient had no known medical problems. She reported smoking ~ 10 cigarettes daily and denied alcohol or illicit drug use. She had 3 children and her husband was a prisoner being treated for pulmonary tuberculosis. Upon physical examination there were decreased breath sounds in the upper lobes of the lungs with dullness to percussion. The patient had a body mass index (BMI) of 16.3 kg/m². A complete blood count revealed a moderate leukocytosis of 10.2 × 10⁹/L and an erythrocyte sedimentation rate (ESR) of 42 mm/h. Biochemical blood parameters were normal. Sputum testing found a negative acid-fast bacilli (AFB) microscopy, positive Xpert MTB/RIF test (no RIF resistance), and positive culture for Mycobacterium tuberculosis (Mtb). Additionally, drug susceptibility testing (DST) revealed sensitivity to rifampin, isoniazid, and ethambutol. Chest radiography revealed multiple small foci in the upper lobes of both lungs and a cavity in the right lung (Fig. 1A). The patient was initiated on daily outpatient treatment with three pills of a fixed-dose combination pill containing isoniazid 75 mg, rifampin 150 mg, ethambutol 275 mg and pyrazinamide 400 mg. Treatment was given through directly observed therapy (DOT).
She converted her sputum cultures to negative at 2 months and continued rifampin and isoniazid to finish 6 months of treatment. An end of treatment chest x-ray revealed fibrosis and honeycombing in the right upper lung, and fibrosis and dense focal shadows in the 1st and 2nd intercostal spaces of the left lung (Fig. 1B). The complete treatment timeline is summarized in Fig. 2.
A follow up chest computed tomography (CT) scan demonstrated a cavity in the right upper lobe measuring 12 × 10 mm in size with a thick and heterogeneous wall and nodules and bronchiectasis in the left lung ( Fig. 3A-D). Based on CT findings and in accordance with National tuberculosis guidelines, the patient was offered surgical resection of the affected portion of the lung. It should be noted that the patient reported no symptoms, complaints, or functional disability before the surgery. Preoperative workup including pulmonary function testing, an echocardiogram, bronchoscopy, and blood chemistries were normal. The patient consented to surgery and underwent a surgical resection of the S1 and S2 segments of the right lung 2 weeks later. Intraoperatively, moderate adhesions were visualized in the S1 and S2 area with a palpable dense formation ~ 3.0 cm in diameter, in addition to a dense nodule. Gross pathology of the resected lesion showed a thick-walled fibrous cavity filled with caseous necrosis (Fig. 4A) corresponding to the right preoperative CT lesion seen on Fig. 3A, C.
Microbiological analysis on the resected tissue revealed acid-fast bacilli on microscopy, and positive Xpert MTB/ RIF and culture results. Mtb grew from the caseous center, inner and outer walls of the cavity and a resected foci located ~ 3 cm from the cavity. DST revealed sensitivity to isoniazid, rifampin, and ethambutol.
Pathological examination of the resected lesion showed findings consistent with fibrocavernous tuberculosis. No postoperative complications were experienced, and the patient reinitiated first-line therapy via DOT on the 2nd postoperative day. A follow up CT scan performed after 3 months showed postoperative changes in the right upper lobe and an unchanged left lung (Fig. 5A-C). Based on the persistent conglomerate of tuberculomas and multiple small tuberculous foci, growth of Mtb from the previous surgical specimen, and the patient's social situation (mother of three young children), a second surgery to optimize the chance of cure was recommended. The patient reported no symptoms, complaints, or functional disability before the surgery. Preoperative sputum testing found negative AFB smear microscopy and culture. The patient underwent the second operation on May 18, 2021, in which the S1, S2 and part of the S6 segment of the left lung were resected. Intraoperatively, moderate adhesions were seen along with a dense palpable ~ 3 cm mass in the S1 and S2 region and a dense focus in S6.
Microbiological examinations performed on resected tissue revealed positive AFB smear microscopy and Xpert MTB/RIF results and a negative AFB culture. The pathological examination of the surgical samples indicated a variety of destructive changes in addition to ongoing inflammation. The gross specimen of the S1 and S2 segments of the left lung showed fibrocavernous tuberculosis (Fig. 4B), which corresponds to the left lung lesion seen in Fig. 3B on the first preoperative CT and in Fig. 5A on the second preoperative CT; the gross specimen of the S6 segment showed progressive tuberculoma (Fig. 4C), which corresponds to the left lung lesion seen in Fig. 3D on the first preoperative CT and in Fig. 5C on the second preoperative CT.
Fig. 4 (A) Gross pathological image of a resected cavity with caseous material from the first surgery (S1 & S2 segments of the right lung). (B) Gross pathology from the second surgery showing a blocked cavity measuring up to 2 cm in diameter, filled with caseous material, in S1 and S2. (C) Tuberculoma in the S6 segment.
There were no postoperative complications, and tuberculosis (TB) treatment was reinitiated. The patient successfully completed treatment with normalization of clinical and laboratory parameters and a clinical outcome of cure in September 2021, ~ 14 months after beginning treatment. The patient had reported near complete resolution of her symptoms, having a much better ability to perform her daily activities. The patient appreciated the effects surgery had on her recovery and was happy to have gone through that treatment route. A post treatment CT scan demonstrated postoperative changes in the upper segments of both lungs (Fig. 5D). Results from post treatment lung function testing were all within normal range.
Discussion and conclusions
We present this case to highlight the heterogeneous nature of pulmonary tuberculosis and the need for an individualized treatment approach, especially for patients with cavitary disease. Over the last decade, novel diagnostics, drugs, and treatment regimens have revolutionized TB management, including a recent landmark clinical trial demonstrating an effective 4-month regimen for drug-susceptible TB [1]. The move towards shorter regimens is critical to improve treatment completion rates and help meet TB elimination goals. However, during a transition to shorter treatment durations it is imperative that clinicians remain aware of complex and severe pulmonary TB cases that may require longer durations of treatment and adjunctive therapies such as surgery. Supporting evidence comes from a recent landmark study that found persistent inflammation on imaging to be associated with detection of Mtb mRNA in sputum after successful treatment, and from a meta-analysis demonstrating a hard-to-treat TB phenotype not cured with the standard 6 months of treatment [2,5]. However, regarding recommendations for prolonging treatment beyond 6 months for drug-susceptible pulmonary tuberculosis, ATS/CDC/IDSA recommends (expert opinion) extended treatment for persons with cavitary disease and a positive 2-month culture (our patient would not have met this criterion); the World Health Organization (WHO) does not recommend extended treatment for any persons with drug-susceptible TB [6,7]. Accumulating evidence demonstrates surgical resection may be an effective adjunctive treatment in cases with cavitary disease [8][9][10][11][12]. Ultimately, a precision medicine approach towards TB will be able to identify patients who would benefit from short-course therapy and those who would benefit from longer therapy and adjunctive treatment including surgery [13].
Mtb has a unique ability and propensity to induce cavities in humans, with various studies showing cavitary lesions in ~ 30 to 85% of patients with pulmonary tuberculosis [14]. Lung cavities are more common in certain groups, including patients with diabetes mellitus and undernutrition, such as our patient, who had a baseline BMI of 16.3 kg/m² [15,16]. Their presence indicates more advanced and severe pulmonary disease as evidenced by their association with worse clinical outcomes. Cavitary disease has been associated with higher rates of treatment failure, disease relapse, acquired drug resistance, and long-term pulmonary morbidity [2,[17][18][19]. The impact of cavitary disease may be more pronounced in drug-resistant disease, as shown in an observational study from our group which found a five times higher rate of acquired drug resistance and an eight times higher rate of treatment failure among patients with multidrug- or extensively drug-resistant cavitary disease compared to those without [20].
Mtb cavities are characterized by a fibrotic surface with variable vascularization, a lymphocytic cuff at the periphery followed by a cellular layer consisting of primarily macrophages and a necrotic center with foamy apoptotic macrophages and high concentrations of bacteria. Historically, each portion of the TB cavity has been conceptualized as concentric layers of a spherical structure due to its appearance on histologic cross-sections. However, recent studies using more detailed imaging techniques have shown most TB cavities exhibit complex structures with diverse, branching morphologies [21]. A dysregulated host immune response to Mtb is thought to contribute to the development of lung cavities, which may explain why cavitary lesions are seen less frequently among immunosuppressed patients including people living with Human Immunodeficiency Virus (HIV) [14]. The center of the TB cavity (caseum) is characterized by accumulation of pro-inflammatory lipid signaling molecules (eicosanoids) and reactive oxygen species, which result in ongoing tissue destruction, but do little to control Mtb replication [22]. Conversely, the cellular rim and lymphocytic cuff are characterized by a lower abundance of proinflammatory lipids and increases in immunosuppressive signals including elevated expression of TGF-beta and indoleamine-2,3-dioxygenase-1 [22]. The anti-inflammatory milieu within these TB cavity microenvironments impairs effector T cell responses, further limiting control of bacterial replication [23][24][25].
The combination of impaired cell-mediated immune responses with accumulation of inflammatory mediators at the rim of the caseum leads to ongoing tissue destruction with the potential for long-term pulmonary sequelae. Many patients with cavitary tuberculosis suffer chronic obstructive pulmonary disease after successful treatment, and the risk may be greater in those with multidrug-resistant disease [3,4]. This has led to research into adjunctive treatment with immune modulator therapies with a goal of mitigating the over-exuberant inflammatory response at the interior edge of the cavity to limit tissue damage. In a recent randomized clinical trial, patients with radiographically severe pulmonary tuberculosis treated with adjunctive everolimus or CC-11050 (phosphodiesterase inhibitor with anti-inflammatory properties) achieved better long-term pulmonary outcomes versus those who received placebo [26]. Such results suggest the inflammatory response can be modified with appropriate host-directed therapies to improve pulmonary outcomes, particularly in those with cavitary tuberculosis.
Tuberculosis cavities not only hinder an effective immune response, but also prevent anti-tuberculosis drugs from achieving sterilizing concentrations throughout the lesion and especially in necrotic regions. The necrotic center of cavitary lesions is associated with extremely high bacillary burdens (up to 10⁹ per milliliter), many of which enter a dormant state with reduced metabolic activity. Bacilli in this dormant state may be less responsive to the host immune response and exhibit phenotypic resistance to some anti-tuberculosis drugs, thereby preventing sterilization and increasing chances of relapse [14,27,28]. The fact that the specimens from our patient's second surgery were Xpert- and AFB-positive but culture-negative may indicate the presence of either dead bacilli or metabolically altered (dormant) bacilli that may be alive but not culturable by standard techniques. Further, genomic sequencing studies have also found distinct strains of Mtb within different areas of the cavity that have varying drug susceptibilities, demonstrating cavities as a potential incubator for drug resistance [27,29].
Emerging literature has started to elucidate the varying abilities of drugs to penetrate into cavitary lesions and the importance of adequate target site concentrations. One notable study found that decreasing tissue concentrations within resected cavitary TB lesions were associated with increasing drug phenotypic MIC values [30]. Innovative studies using MALDI mass spectrometry imaging have further demonstrated varied spatiotemporal penetration of anti-TB drugs in human TB cavities [31]. This study found rifampin accumulated within caseum, moxifloxacin preferentially at the cellular rim, and pyrazinamide throughout the lesion, demonstrating the need to consider drug penetration when designing drug regimens in patients with cavitary TB. Computational modeling studies have further demonstrated the importance of complete lesion drug coverage to ensure relapse-free cure [32]. Furthermore, clinical trials are now incorporating these principles into study design by (1) using radiological characteristics to determine treatment length and (2) incorporating tissue penetration into drug selection and regimen design [33,34]. Beyond tissue penetration, varying drug levels and rapid INH acetylation status can also lead to suboptimal pharmacokinetics and poor clinical outcomes [35,36]. As highlighted in a recent expert document, clinical standards to optimize and individualize dosing need to be developed to improve outcomes [37].
Available literature points to a benefit of adjunctive surgical resection, particularly among patients with drug-resistant tuberculosis. A meta-analysis of 24 comparative studies found surgical intervention was associated with favorable treatment outcomes among patients with drug-resistant TB (odds ratio 2.24, 95% CI 1.68-2.97) [38]. Additionally, an individual patient data meta-analysis found that partial lung resection (adjusted OR 3.9, 95% CI 1.5-5.9), but not pneumonectomy, was associated with treatment success [39]. In two observational studies, we have also found that adjunctive surgical resection was associated with high and improved outcomes compared to patients with cavitary disease not undergoing surgery, as well as less reentry into TB care. It should be noted that all studies of surgical resection for pulmonary TB were observational, and may therefore be subject to selection bias; no clinical trials (very difficult to implement in practice) have been conducted to provide more conclusive evidence. Based on the available evidence, the WHO has provided guidance to consider surgery in certain hard-to-treat cases of both drug-susceptible and drug-resistant cavitary disease [40]. Criteria for surgical intervention include (1) failure of medical therapy (persistent sputum culture positive for M. tuberculosis), (2) a high likelihood of treatment failure or disease relapse, (3) complications from the disease, (4) a localized cavitary lesion, and (5) sufficient pulmonary function to tolerate surgery. For our patient, the severity of disease, lack of improvement on radiological imaging despite appropriate treatment, and high risk of relapse were the main indications for surgery. Contraindications to surgery include a forced expiratory volume (FEV1) < 1000 mL, severe malnutrition, or high risk of perioperative cardiovascular complications.
With strict adherence to the indications and contraindications for surgery, an acceptable level of postoperative complications is noted (5-17%) [4,38]. Our results also demonstrate the safety of adjunctive surgery, as our postoperative complication rate (8%) was low, with the majority being minor complications [41].
As our case highlights, patients with persistent cavitary disease at the end of treatment require close clinical follow-up and a tailored, individualized plan to determine the best approach for disease elimination and cure. In certain cases, including those with persistent cavitary disease at the end of treatment, and where available, surgical resection is an effective adjunctive treatment option that can reduce disease burden and aid anti-tuberculosis agents in providing a sterilizing cure. As we enter an era of welcome new shorter treatment options for tuberculosis, it is imperative for clinicians to be able to identify and recognize complicated TB cases that require prolonged treatment and potentially adjunctive surgery. | 2022-11-19T15:04:54.420Z | 2022-11-19T00:00:00.000 | {
"year": 2022,
"sha1": "287fb26d8cd77714f3931ace4bbc1a93fd42ce29",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "287fb26d8cd77714f3931ace4bbc1a93fd42ce29",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12565821 | pes2o/s2orc | v3-fos-license | Wormholes in spacetimes with cosmological horizons
A generalisation of the asymptotic wormhole boundary condition for the case of spacetimes with a cosmological horizon is proposed. In particular, we consider de Sitter spacetime with small cosmological constant. The wave functions selected by this proposal are exponentially damped in WKB approximation when the scale factor is large but still much smaller than the horizon size. In addition, they only include outgoing gravitational modes in the region beyond the horizon. We argue that these wave functions represent quantum wormholes and compute the local effective interactions induced by them in low-energy field theory. These effective interactions differ from those for flat spacetime in terms that explicitly depend on the cosmological constant.
Introduction
Wormholes are spacetime fluctuations that involve baby universes branching off and joining onto different regions of spacetime. In the dilute wormhole approximation, each wormhole end is considered to be connected to a different asymptotically large region of spacetime [1,2]. In this situation, wormholes can be represented quantum mechanically by wave functions that satisfy the Wheeler-De Witt (WDW) equation. In order to recover the semiclassical behaviour expected for a wormhole, Hawking and Page [3] proposed that the wormhole wave functions should be exponentially damped for large three-geometries. Besides, these wave functions should be regular when the three-geometry degenerates to zero [3]. These boundary conditions are usually employed to select the wormhole wave functions among the solutions to the WDW equation.
It has been claimed that the existence of wormhole insertions in spacetime introduces local effective interactions in low-energy field theory and may modify the constants of nature [1,4]. The analysis of the wormhole effects in spacetimes with cosmological horizons is particularly relevant. Spacetimes that describe solutions of interest in cosmology usually possess this kind of horizon. In addition, these horizons are generally present when the cosmological constant, Λ, is positive. Actually, the existence of quantum wormholes in spacetimes with positive Λ was already assumed by Coleman when putting forward his mechanism for the vanishing of the observed cosmological constant [5]. However, when one tries to study wormhole processes in spacetimes with a cosmological horizon, one soon realises that the asymptotic boundary condition, as proposed by Hawking and Page, is no longer applicable. All the solutions to the WDW equation turn out to exhibit an oscillatory behaviour when the scale factor of the three-geometry becomes greater than the horizon size. In fact, the presence of a cosmological horizon in a Lorentzian spacetime implies that its Euclidean counterpart is compact in Euclidean time and, as a consequence, there does not exist an asymptotically large Euclidean region around the wormhole end.
In this work, we will propose a generalisation of the asymptotic wormhole boundary condition for spacetimes with a large cosmological horizon. According to this proposal, quantum wormholes can be represented by wave functions that have an exponentially damped WKB behaviour in the region of large three-geometries well inside the horizon and include only outgoing gravitational modes when the three-geometry is asymptotically large. Note, on the other hand, that the condition that the wormhole wave functions are regular when the three-geometry degenerates needs in principle no modification, because the presence of a cosmological horizon affects only the large scale behaviour.
We will particularise our discussion to the simplest gravitational system that presents a cosmological horizon, namely de Sitter spacetime. Together with the asymptotically flat and anti-de Sitter cases (which have already been considered in the literature [6,7]), our analysis exhausts the study of wormholes in maximally symmetric spacetimes. We will see that our generalised wormhole boundary condition requires the wormhole throat to be much smaller than the existing horizon. Otherwise, the baby universe fluctuation could not be distinguished from the background spacetime. In this sense, we will understand that the expression "quantum wormhole" refers only to tunnelling processes that occur in regions well inside the cosmological horizon. In addition, we will assume that the cosmological horizon is large.
The local effective interactions produced by wormholes have been explicitly computed for asymptotically flat [6] and asymptotically anti-de Sitter wormholes [7] with a variety of matter field contents. Using our generalised wormhole boundary condition, we will calculate the effective interactions induced by wormholes in de Sitter spacetime. We will show that the existence of a cosmological horizon modifies these interactions with respect to the flat case by introducing terms that are proportional to even powers of the inverse horizon size.
In section 2, we generalise the asymptotic wormhole boundary condition and obtain the de Sitter wormhole wave functions. Sec. 3 deals with the effective interactions induced by these wormholes in low-energy field theory. In both sections, we work with a conformal scalar field as the matter content. Finally, we discuss our results and their generalisation to other matter fields in Sec. 4.
De Sitter wave functions
Let us analyse quantum mechanically a gravitational system with positive cosmological constant and a conformally coupled scalar field. In the following, a and χ_1 denote the scale factor of the sections of constant time and the homogeneous mode of the conformal scalar field on these sections, respectively. These configuration variables will be treated exactly. We will also consider the deviations from the homogeneous and isotropic configuration described by a and χ_1, but only up to first order of perturbation theory. These deviations will be described by the coefficients of the expansion of the conformal field in hyperspherical harmonics on the three-sphere (in this process, gravitational waves are neglected and the gravitational harmonics are gauged away [7,8]). Explicitly, we decompose the scalar field in terms of the scalar harmonics Q_{nσ_n}, eigenfunctions of the Laplace-Beltrami operator on the three-sphere with eigenvalues −(n² − 1); the index σ_n runs over a basis of the corresponding degenerate eigenspace [7,8], and the coefficients χ_{nσ_n} depend only on the time coordinate. In conformal time, the action for the system can then be written as in Eq. (2.2), where π_a and π_n are the momenta canonically conjugate to a and χ_n, N is the lapse function, and the prime denotes derivative with respect to η. From Eq. (2.2), we obtain the WDW equation, in which we have chosen an operator ordering that removes the ground-state energy of each of the harmonic oscillators. We can solve this equation by separation of variables.
By imposing standard boundary conditions for the quantum harmonic oscillators, and restricting all considerations to rotationally invariant states of the scalar field [7,9] (i.e., states that depend on the inhomogeneous configuration variables only through the rotationally invariant combinations χ²_n = Σ_{σ_n} χ²_{nσ_n}), we arrive at wave functions of the form (2.4), where E = N_1 + Σ_{n>1} 2nN_n is a sum of harmonic oscillator energies, H_{N_1} is the Hermite polynomial of degree N_1, and L^{(n²−3)/2}_{N_n} is the generalised Laguerre polynomial of degree N_n [10]. The gravitational part of the wave function, ψ_E(a), must satisfy Eq. (2.5). If we now imposed the usual wormhole boundary condition that requires the wave function to be damped when a → +∞, we would obtain ψ_E = 0. Actually, every nonvanishing solution of Eq. (2.5) is asymptotically oscillatory.
The potential term (a² − λa⁴ − 2E) in Eq. (2.5) has two positive roots when 8Eλ < 1, namely, a_± = (2λ)^{−1/2}[1 ± (1 − 8Eλ)^{1/2}]^{1/2}. These turning points divide the sector of positive scale factors into three regions: two oscillatory, Lorentzian domains, 0 < a < a_− and a > a_+, and an exponential, Euclidean domain, a_− < a < a_+ (for 8Eλ ≥ 1 we have an oscillatory behaviour for all a > 0). When 8Eλ ≪ 1, we can perform a WKB analysis in the exponential domain, far from the turning points. This analysis reveals that there are two possible behaviours for the wave function in this region, namely, the leading term in the WKB approximation can either increase or decrease exponentially for increasing a. For a_− < a < a_+, the leading-order WKB approximation is given by Eq. (2.6), where A_± are constants. When 8Eλ ≪ 1 and λ ≪ 1, this approximation is valid [12] at least for scale factors near the bottom of the potential, a_m = 1/√(4λ). In particular, note that (for fixed E) the approximation is valid in a region that overlaps with that of large scale factors if λ is sufficiently small. Demanding that the leading-order WKB approximation is exponentially damped in the region of large scale factors well inside the horizon does not totally remove a subdominant contribution from the increasing exponential (δ = 1) in Eq. (2.6). In the flat case (λ = 0), the increasing exponential would dominate the wave function when a becomes unrestrictedly large, even if A_+ is considerably small. Hence, the condition that ψ_E(a) is exponentially damped for asymptotically large scale factors actually implies A_+ = 0 if the cosmological constant vanishes. In our case, however, the requirement of exponentially damped behaviour only implies that the quotient |A_+|/|A_−| (which can depend on λ, E and the Planck length) has to be small enough to suppress the contribution of the increasing exponential in ψ_E(a) when the scale factor approaches the horizon size.
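As a check on the quoted roots (the intermediate algebra below is supplied here as a sketch; only the final expression for a_± appears in the text), the turning points follow from setting the potential to zero and solving a quadratic in a²:

```latex
% Turning points of V(a) = a^2 - \lambda a^4 - 2E.
% Setting V(a) = 0 and substituting u = a^2 gives a quadratic in u:
\lambda u^2 - u + 2E = 0
\quad\Longrightarrow\quad
u_\pm = \frac{1 \pm \sqrt{1 - 8E\lambda}}{2\lambda},
% and, with a_\pm = \sqrt{u_\pm},
a_\pm = (2\lambda)^{-1/2}\left[1 \pm (1 - 8E\lambda)^{1/2}\right]^{1/2}.
% Real positive roots require 8E\lambda < 1.  For 8E\lambda \ll 1,
% expanding the square root gives
a_- \simeq \sqrt{2E}\ \ (\text{throat}),
\qquad
a_+ \simeq 1/\sqrt{\lambda}\ \ (\text{horizon}).
```

The limiting values reproduce the throat size √(2E) and horizon size 1/√λ used later in the text.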
On the other hand, the de Sitter wave functions display an oscillatory behaviour for scale factors larger than the horizon size, a > a_+ ≃ 1/√λ, owing to the presence of the cosmological term in the potential. In this region, the WKB approximation gives an oscillatory expression which is always valid for sufficiently large scale factors. When A′_+ = 0, the gravitational wave function ψ_E(a) is completely characterised in the region beyond the horizon by the fact that it represents a purely outgoing wave. Moreover, integrating that solution backwards in a, one obtains a unique wave function among those which exhibit an exponentially damped WKB behaviour in the Euclidean region.
Based on the above discussion, we now introduce the following proposal. For sufficiently small cosmological constant, one can interpret as quantum wormholes in de Sitter spacetime the wave functions (2.4) whose gravitational part: i) admits a WKB approximation, which is exponentially damped, in the interval of large scale factors well inside the Euclidean domain, and ii) corresponds to an outgoing mode in the region beyond the horizon. Any linear combination of such wave functions also represents a quantum wormhole state. Our proposal restricts the existence of wormhole wave functions to the sector of matter energies with 8Eλ ≪ 1, since it is only then that a Euclidean region with the required properties exists. As we have seen, this proposal picks out a unique wave function for each value of E. Finally, notice that the condition that the wave function includes only outgoing gravitational modes for very large scale factors is similar in spirit to the tunnelling proposal of Vilenkin [13], although in our case the tunnelling to the large Lorentzian region does not occur from "nothing", but from another Lorentzian domain, namely that with small scale factors.
A motivation for this proposal comes from the following considerations. For sufficiently small λ, let us choose a matter energy such that 8Eλ ≪ 1. The turning point a_−, which corresponds to the wormhole throat, is then approximately equal to √(2E), which is the throat size of an asymptotically flat wormhole with the same matter energy [11]. In the interval (0, a_−) the de Sitter wormhole wave function has an oscillatory behaviour. This behaviour is similar to that displayed by a flat wormhole [3] and describes a Lorentzian closed Friedmann-Robertson-Walker spacetime, i.e. a baby universe. Furthermore, in the region where the potential is dominated by the term a², one can parallel the line of reasoning discussed by Hawking and Page in the flat case to conclude that the main contribution to the wormhole wave function in the saddle point approximation must be given by the exponential of the surface term −(1/2)∫√h |K| d³x, evaluated on the section of constant time with scale factor a. Here, K is the trace of the extrinsic curvature. It is then easy to see that one arrives precisely at the exponentially damped WKB behaviour that we have proposed for the wave function ψ_E(a) in the region of scale factors under consideration. In order to select a wave function among those that possess this exponentially damped behaviour, one needs to generalise the arguments given by Hawking and Page to the region of scale factors beyond the horizon. In this region, the saddle points are Lorentzian and describe asymptotically de Sitter geometries that either expand or contract from a scale factor a larger than the horizon size. When the dominant saddle points are expanding (contracting) geometries, the wave function includes only outgoing (ingoing) gravitational modes beyond the horizon.
In the flat case, the condition that the wormhole wave functions are asymptotically damped reflects the fact that the tunnelling occurs from the baby universe to the region of asymptotically large three-geometries, but not in the opposite direction. It then seems natural to generalise this condition to the de Sitter case by imposing that, beyond the cosmological horizon, there are only outgoing gravitational modes, so that the wave function describes in fact a tunnelling from small to asymptotically large scale factors. In this sense, our proposal provides a natural generalisation of the asymptotic wormhole boundary condition once it is assumed that the size of the cosmological horizon (1/√λ) is sufficiently large. The de Sitter wormhole wave functions selected by this proposal have a behaviour that is similar to that of the asymptotically flat wormholes both in the baby universe sector and in the region of large scale factors much smaller than the horizon size.
Since Eλ is the square of the ratio between the wormhole throat (of order √E) and the horizon size of de Sitter space (1/√λ), the condition 8Eλ ≪ 1, which we have used in our discussion, allows us to regain the picture of wormhole connections in a background spacetime. This picture would break down for quantum states with 8Eλ ≳ 1, because these states describe quantum fluctuations whose characteristic size is of the order of or greater than the cosmological horizon of the de Sitter background. Hence, we will not regard the states with 8Eλ ≳ 1 as quantum wormholes.
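The scaling claim above can be made explicit with a one-line check, using the throat and horizon sizes quoted in the text:

```latex
% With throat a_- \simeq \sqrt{2E} and horizon a_+ \simeq 1/\sqrt{\lambda},
\left(\frac{a_-}{a_+}\right)^2
\simeq \frac{2E}{1/\lambda}
= 2E\lambda ,
% so the wormhole condition 8E\lambda \ll 1 is equivalent to demanding
% that the throat be much smaller than the cosmological horizon.
```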
Finally, we will see in the next section that the wave functions (2.4) with the above choice for ψ E (a) can be interpreted as de Sitter wormholes with a definite number of particles in the Euclidean vacuum for de Sitter space [14,15], that is, the vacuum which is conformally related to the natural vacuum for flat spacetimes [7].
Effective interactions.
Following a procedure developed by Hawking [2], and used in Ref. [7] for the anti-de Sitter case, we will now deduce the explicit form of the effective interactions produced by wormholes in de Sitter spacetime. In order to do this, we must first calculate the matrix elements of products of matter fields between a vacuum in de Sitter spacetime and an arbitrary wormhole state, |Ψ_α⟩, as in Eq. (3.1). Here, we have considered only two matter fields for simplicity. In this section, we will choose the vacuum |0⟩ to be the Euclidean vacuum. In Sec. 4, we will comment on the consequences of different choices of vacuum. Since our aim is to determine the effects of wormholes on scales greater than the wormhole scale, x_1 and x_2 represent points in regions of the Euclidean de Sitter spacetime far from the tiny wormhole end. For scale factors a in the Euclidean region, the state φ(x_1)φ(x_2)|0⟩ is then given, in a (a, χ)-representation, by a Euclidean path integral over geometries and matter field configurations with initial values a and χ for the scale factor and the matter field, respectively, and compatible with the condition E_χ = E_a = 0 at an arbitrary final time. Here, E_χ = (1/2)Σ_n (n²χ²_n − π²_{χ_n}) is the energy of the matter field, and E_a = (1/2)(a² − λa⁴ − π²_a) is the energy associated with the scale factor. Fixing the variables E_χ and E_a in this way at a final time τ_f selects the Euclidean vacuum for the matter field (the classical solutions for the scalar field with E_χ = 0 correspond to the Euclidean mode decomposition of the field) and makes the action invariant under time reparametrizations that coincide with the identity at the initial time, but are arbitrary at τ_f [16]. Therefore, the particular value chosen for the final time becomes irrelevant.
An estimate of the path integral can be obtained by means of a saddle point approximation. As far as the low-energy regime is concerned, the geometrical saddle point can be taken to be pure de Sitter space outside a subtracted sphere of radius a that surrounds the wormhole insertion. Then, the saddle point solution for the conformal scalar field must satisfy the equation (✷ − 2λ)φ = 0, where ✷ is the Laplacian for the de Sitter four-sphere. If φ(x′) is a saddle point solution, φ_f(x′), the transform of φ under an element f^{−1} of the group of isometries SO(5), is also a solution. As a consequence, one has to average over the group SO(5). Recalling that the four-sphere S⁴ (i.e., Euclidean de Sitter spacetime) and the coset space SO(5)/SO(4) are isomorphic, the integral over SO(5) can be performed as an integral over the four-sphere combined with one over the isotropy group, with h being a generic element of the isotropy group SO(4) and g the determinant of the metric on the four-sphere. This integral can be interpreted as an average over the positions and orientations in which a wormhole end can be connected. In Ref. [7] it was shown, by treating each mode separately, that the different saddle points can be expressed in terms of the propagator of the matter field as in Eq. (3.3), where G is the propagator associated with the Euclidean vacuum, and M^n_h is a constant tensor of rank n − 1 that is completely symmetric and vanishes under contractions of any pair of indices. This tensor contains all the dependence on the rotation group, as well as on a and χ_n (i.e., the values of the scale factor and the matter field coefficients on the sphere that has been subtracted from de Sitter space). The function Θ_n G is constructed by completely symmetrising a product of n − 1 covariant derivatives acting on the x-dependence of the propagator, ∇_{µ_1}···∇_{µ_{n−1}}G, and subtracting all its traces [7]. Finally, the dot in (3.3) denotes the scalar product in the linear space of tensors with the explained symmetries.
Taking into account that M^n_h = R(h)M^n, where R(h) is the appropriate irreducible representation of the rotation group, which satisfies an orthogonality relation under the tensor product ⊗, the average over orientations of the product of the two matter field solutions leads to the complete expression for the quantum state φ(x_1)φ(x_2)|0⟩ in terms of a function F(a, χ_n). In this function, the dependence on χ_n is of the form (χ_n)² e^{−(1/2)n²(χ_n)²} [7], where the factor of (χ_n)² comes from the product (M^n · M^n) and the exponential factor from the evaluation of the action on the classical solution.
The orthogonality of the Hermite and Laguerre polynomials that appear in (2.4) implies then that the state φ(x_1)φ(x_2)|0⟩ has non-vanishing projections only on the vacuum and either on the N_n = 1 state for n > 1 or the N_1 = 2 state for the homogeneous case. We can therefore interpret Ψ_{N_n=1} with n > 1 and Ψ_{N_1=2} as quantum wormholes that, in the Euclidean vacuum, contain a two-particle rotationally invariant state and two homogeneous particles, respectively. A similar analysis can be applied as well to the three-point function and higher functions, extending the above interpretation to any wave function Ψ_{N_1,...,N_n,...} of the form (2.4) such that 8λ(N_1 + Σ_{n>1} 2nN_n) = 8Eλ ≪ 1.
One can now deduce the expression of the interaction Lagrangian that, via the matrix element ⟨0|φ(x_1)φ(x_2) ∫d⁴x √g(x) L^n_I(φ(x))|0⟩, reproduces the matrix element (3.1) up to a constant factor. This Lagrangian must be of the form L^n_I = Θ_n φ · Θ_n φ, as can be seen by making use of Wick's theorem and noting that the operator Θ_n is linear [7]. For the lowest modes, the interaction Lagrangians can be written down explicitly. It is worth noting that, for n ≥ 3, the form of the interactions differs from that obtained for flat space in terms that depend on λ, the square of the inverse horizon size. Therefore, the local interactions introduced by wormholes seem to depend on the large-scale structure of spacetime. It then might happen that the constants of nature could be affected by contributions of cosmological origin owing to the existence of quantum wormholes.
Discussion and conclusions.
In this work, we have analysed the possible effects of the existence of wormholes in cosmological spacetimes with matter content. In this kind of spacetime, it is necessary to generalise the standard, asymptotic wormhole boundary condition, because, when the spacetime possesses cosmological horizons, no asymptotic Euclidean region exists. We have considered in detail the case of de Sitter spacetime, and extended the line of reasoning discussed by Hawking and Page for the case of flat wormholes. In this way, we have arrived at the following proposal. In a large de Sitter spacetime (i.e., when the cosmological constant is sufficiently small), it is possible to interpret as quantum wormhole states the wave functions that i) admit a WKB approximation with exponentially damped leading term in the region of large scale factors much smaller than the horizon size, and ii) contain only outgoing gravitational modes beyond the horizon. Unlike the situation found in the flat and anti-de Sitter cases, the existence of a cosmological horizon in de Sitter space poses an obstruction to the interpretation of a quantum state as a wormhole: the interpretation is feasible only in the sector of states with small matter energy. This restriction is necessary to guarantee that there exists a large Euclidean region in which the wave functions can have an exponentially damped behaviour. Without this restriction on the matter energy, the entire observable universe could be contained in the considered quantum fluctuation, so that it would be impossible to distinguish the interior of the wormhole from the background universe.
We have analysed the case of de Sitter wormholes with a conformal scalar field and discussed the effects of these quantum fluctuations in low-energy field theory. We have shown that the effective interactions produced by these wormholes differ from those induced by flat wormholes for n = 3 and higher harmonics. These differences are proportional to even powers of the inverse horizon size, i.e., to positive powers of the cosmological constant.
For other matter fields one would obtain similar results. As long as there exist quantum wormholes that admit the interpretation of small connections in a background spacetime, one can calculate their effective interactions in the following way. Given a field with spin s, each of the hyperspherical harmonics on the three-sphere in which we can decompose its true degrees of freedom carries an irreducible representation of the group SU(2) ⊗ SU(2), the universal covering of the isotropy group SO(4). This irreducible representation is of type (m/2 + s, m/2) or (m/2, m/2 + s), where m = n − s − 1 is a non-negative integer, n is the mode of the considered harmonic, and the lowest mode is given by n = s + 1. Each harmonic gives rise to a different interaction Lagrangian. The explicit form of these Lagrangians would be Θ_n Φ · Θ_n Φ, with Φ representing a matter field of spin s. Finally, the operator Θ_n can be constructed with the help of the symmetries that the corresponding hyperspherical harmonic possesses when we write it in a Cartesian basis [7,8], as was proved in Ref. [7] for integer spins.
Let us illustrate this point with two examples. We will first consider the case of de Sitter wormholes with a minimally coupled massless scalar field φ. In the process of subtracting all the traces of the completely symmetric product of covariant derivatives that act on the propagator (as was explained above for the conformal scalar field), one must take into account that the approximate saddle point equation is now g^{µν}∇_µ∇_ν φ = 0, instead of the equation that applies in the conformally coupled case, (g^{µν}∇_µ∇_ν − 2λ)φ = 0. This results in changing the interaction Lagrangians for the third (n = 3) and higher modes with respect to those obtained for the conformal field. For instance, the Lagrangian for the mode n = 3 takes the form L³_I = (∇_µ∇_ν φ)². In contrast with the conformal field case, the interaction Lagrangians for the minimally coupled scalar field in flat and de Sitter spaces differ then just in that partial derivatives in flat space are replaced with covariant derivatives in de Sitter space. Let us discuss now the case of an electromagnetic field. In this case, Φ represents the four-potential A_µ. The interaction Lagrangian for the lowest mode n = 2 is given in Ref. [7]. One can then recursively construct Θ_n Φ for higher harmonics by symmetrising ∇Θ_{n−1}Φ and subtracting all its traces. This subtraction is easily performed by noticing that ✷F_{µν} = 4λF_{µν}, an equation that follows from ∇^µ F_{µν} = 0. Finally, the tensor (Θ_n Φ)_{µ_1µ_2,µ_3···µ_n} must be antisymmetric in its first two indices, symmetric with respect to all other indices, and vanishing when contracted in any pair of indices or when taking a cyclic sum over µ_1, µ_2, and any other index. These are the symmetries corresponding to the n-th mode of the transverse vector harmonics on the three-sphere, which are the true degrees of freedom of the electromagnetic field [7].
On the other hand, when dealing with the electromagnetic field (as well as with other fields of higher spin), another physical quantity comes into play: the helicity. The helicity distinguishes the (p, q) and (q, p) irreducible representations. This can be done by introducing operators Θ_{n±} that are the self-dual and anti-self-dual parts of Θ_n. Nonetheless, the interaction Lagrangians for positive and negative helicities turn out to coincide, because the cross product Θ_n Φ · *Θ_n Φ is a topological invariant (as can be checked by direct calculation).
In general, we find that the wormhole effective interactions have contributions owing to the presence of a cosmological horizon. The explicit form of such contributions depends on the specific matter content.
To conclude, we will make some comments on our choice of vacuum in Eq. (3.1). Throughout our calculations, the state |0⟩ has been chosen as the Euclidean vacuum in de Sitter space. In Ref. [7], it was shown that, for anti-de Sitter wormholes, the choice of a particular vacuum among the family of maximally symmetric vacua is a matter of convenience. Once a vacuum is chosen, one can always construct a Fock space of quantum wormholes labelled by the number of particles that they contain, as referred to that vacuum. In the de Sitter case, however, the existence of the horizon scale implies that the Euclidean vacuum plays a special role. The total number of particles associated with this vacuum that a de Sitter wormhole can contain is restricted by the condition 8Nλ ≪ 1. A quantum state Ψ_N̄ with a definite number of particles N̄ in another vacuum has non-vanishing projections on states with 8Nλ ≥ 1 regardless of the value of N̄, so that it cannot be interpreted as a quantum wormhole. Therefore, the Euclidean vacuum turns out to be special in the sense that only observers associated with it can interpret certain wormhole states as containing a definite number of particles. | 2014-10-01T00:00:00.000Z | 1998-03-08T00:00:00.000 | {
"year": 1998,
"sha1": "d1f0721ba15fffcf84e0c34605db838d5ef80262",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/gr-qc/9803029",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d1f0721ba15fffcf84e0c34605db838d5ef80262",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255025406 | pes2o/s2orc | v3-fos-license | Nutritional and bioactive characteristics of buckwheat, and its potential for developing gluten‐free products: An updated overview
Abstract In the present era, food scientists are concerned with exploiting functional crops with nutraceutical properties. Buckwheat is a functional pseudocereal with nutraceutical components used in the treatment of health-related diseases, malnutrition, and celiac disease. As a preferred gluten-free diet for celiac patients, buckwheat is a good source of nutrients, bioactive components, phytochemicals, and antioxidants. Previous investigations have highlighted the general characteristics of buckwheat and its better nutritional profile compared with crops of the cereal family. In buckwheat, bioactive components such as peptides, flavonoids, phenolic acids, d-fagomine, fagopyritols, and fagopyrins pose significant health benefits. This study highlights the current knowledge about buckwheat and its characteristics, nutritional constituents, and bioactive components, and their potential for developing gluten-free products to target celiac people (1.4% of the world population) and other health-related diseases.
| INTRODUCTION
In recent years, the consumption of functional foods with bioactive ingredients has increased in consumers' diets. These functional foods provide nutritional as well as health benefits to end-use consumers. Among health-related foods, pseudocereals are functional foods with numerous health benefits (Astrini et al., 2020; Mir et al., 2018; Xu et al., 2022). Although pseudocereals are taxonomically distinct, they show characteristics similar to the Poaceae family (wheat, rice, and barley) owing to their starch-rich endosperm. The main pseudocereals with health-related benefits are buckwheat, amaranth, and quinoa (Ferreira et al., 2022). Buckwheat is a pseudocereal belonging to the family Polygonaceae and is commonly used in the cold regions of the world. Buckwheat cultivars are mainly found in mountain regions, especially in Russia and China (Begemann et al., 2021; Yilmaz et al., 2020; Zou et al., 2021). World production of buckwheat is about 3.8 million tons, with Russia ranked in the top position at 1.5 million tons, followed by China with 0.9 million tons (FAOSTAT data, 2019). Buckwheat is also cultivated in France (8.3%), the USA (5.7%), Poland (5.4%), Brazil (3.5%), and Japan (1.0%). Buckwheat seeds are mainly used as breakfast cereals in the form of groats, as flour for bakery products, and in other enriched products such as bread, tea, honey, and sprouts (Giménez-Bastida et al., 2015; Małgorzata et al., 2018). Various health-related benefits (hypocholesterolemic, hypoglycemic, anticancer, and anti-inflammatory) are associated with buckwheat and its byproducts, which enhance its potential for functional food formulation (Mondal et al., 2021) and increase its agricultural, industrial, and pharmaceutical uses (Fotschki et al., 2020).
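As a quick arithmetic check on the production figures above, the per-country tonnages implied by the quoted percentage shares can be sketched from the 3.8-million-ton world total (assuming the country percentages are shares of world production, which the text does not state explicitly):

```python
# Rough arithmetic on the buckwheat production figures quoted above.
# Assumption: the per-country percentages are shares of the 3.8 Mt world total.
WORLD_TOTAL_MT = 3.8  # million tons

# Tonnages given directly in the text (million tons).
absolute = {"Russia": 1.5, "China": 0.9}

# Shares given as percentages in the text.
shares_pct = {"France": 8.3, "USA": 5.7, "Poland": 5.4, "Brazil": 3.5, "Japan": 1.0}

# Convert percentage shares to implied tonnages.
implied_mt = {c: round(WORLD_TOTAL_MT * p / 100, 2) for c, p in shares_pct.items()}

for country, mt in {**absolute, **implied_mt}.items():
    print(f"{country}: {mt} Mt ({mt / WORLD_TOTAL_MT * 100:.1f}% of world production)")
```

Under this reading, France's 8.3% share corresponds to roughly 0.32 million tons, while Russia's 1.5 Mt amounts to about 39% of world production.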
The biological value of buckwheat proteins is outstanding, but antinutritional factors (tannins and protease inhibitors) associated with buckwheat proteins lower their digestibility (Mattila et al., 2018).
Polyphenolic compounds (flavonoids and phenolic acids) are bioactive ingredients in buckwheat and increase the nutraceutical potential of buckwheat. Buckwheat is a rich source of flavonoids such as rutin, isoorientin, quercetin, isovitexin, vitexin, and orientin (Raguindin et al., 2021). Among all pseudocereals, rutin is only present in buckwheat, with higher antioxidant, anti-inflammation, and anticancer properties (Zhu, 2016). The flavonoid compounds in buckwheat impart pharmaceutical and other health-related benefits (Lee et al., 2016). Buckwheat is also a good source of resistant starch, tannins, plant sterols, and fagopyrins (Ahmed et al., 2014).
There are two well-known disorders linked to gluten exposure: celiac disease and IgE-mediated wheat allergy. Genetic susceptibility factors play a significant role in the development of celiac disease, an autoimmune condition that may cause significant intestinal damage. In the general community, celiac disease affects 0.5% to 1% of people (Giménez-Bastida et al., 2015; Manikantan et al., 2022; Rafiq et al., 2021). Contrarily, allergy to gluten and/or other wheat proteins is caused by IgE antibodies that recognize epitopes from certain proteins, known as allergens, setting off a chain of events that results in allergic inflammation. Wheat allergy is present in between 0.33% and 1.17% of the population (Ballini et al., 2021; Brand et al., 2022; Srisuwatchari et al., 2020). A rigorous, lifelong gluten-free diet is recommended only for those diagnosed with celiac disease.
Similar restrictions apply to those who have been diagnosed with IgE-mediated wheat allergy. In that context, nonceliac gluten sensitivity, a third illness marked by discomfort after ingestion of gluten and in which neither celiac disease nor IgE-mediated allergy plays a role, has received more attention in recent years (Aksoy et al., 2021;Srisuwatchari et al., 2020;Vassilopoulou et al., 2021). Various products without gluten protein are available on the market, but most products lack acceptance by celiac patients (Mir et al., 2014).
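The prevalence ranges above translate into large absolute numbers. A rough sketch, assuming a world population of 8 billion (an illustrative assumption, not a figure from the text):

```python
# Head-counts implied by the prevalence ranges quoted above.
# Assumption: world population of 8 billion (not a figure from the text).
WORLD_POP = 8_000_000_000

conditions = {  # condition -> (low %, high %) prevalence
    "celiac disease": (0.5, 1.0),
    "IgE-mediated wheat allergy": (0.33, 1.17),
}

for name, (lo, hi) in conditions.items():
    lo_n = WORLD_POP * lo / 100
    hi_n = WORLD_POP * hi / 100
    print(f"{name}: {lo_n / 1e6:.0f}-{hi_n / 1e6:.0f} million people")
```

Even the low ends of these ranges imply tens of millions of people who would depend on gluten-free formulations such as those discussed below.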
Gluten-free products available on the market are low in protein, dietary fiber, and vitamins compared with gluten-containing food products. Thus, fortification or supplementation with nutrient-dense ingredients is a novel approach for enhancing the nutritional profile of gluten-free products. Buckwheat is a nutrient-dense pseudocereal, free from gluten protein, and a preferred diet for celiac patients (Morales et al., 2021). This study highlights the current knowledge about buckwheat and its characteristics, nutritional constituents, bioactive components, and potential for developing gluten-free products.
| DIVERSITY AND GROWING ASPECTS OF BUCKWHEAT
Buckwheat is an annual herbaceous plant adaptable to all environmental conditions, including infertile land unsuitable for other crops (Rodríguez et al., 2020). Buckwheat originated in China and Central Asia. The name "buckwheat" derives from "beech" and "wheat", owing to its similarity to beechnut and wheat. Buckwheat species are grouped into annual and multiannual types. The annual species include Fagopyrum tataricum L., Fagopyrum giganteum Krotov, and Fagopyrum esculentum Moench, whereas Fagopyrum suffruticosum Fr. Schmidt, Fagopyrum ciliatum Jaegt, and Fagopyrum cymosum Meissn are multiannual types (Jing et al., 2016). Buckwheat has a diversified crop ecology depending upon season and climate. It is mostly adapted to the Himalayas at different altitudes, such as low, mid, and high hills. The most commonly grown buckwheat is adapted to low hills as a winter crop, while tartary buckwheat is suitable for mid and high hills as a spring crop (Koirala, 2020). Buckwheat is a dicot with irregular, triangular seeds (Ahmed et al., 2014). The seed has brown-to-black hulls covering the whole kernel, which is colored white to light green. The color and hardness of the buckwheat hull differ among cultivars (Roy et al., 2019).

FIGURE 1 Different lines/selections of buckwheat (a), buckwheat at harvesting stage (b), and seeds from different lines of buckwheat (c).
The buckwheat seed germinates in soil within 3-4 days, flowers with white petals about 3 weeks after planting, and, after pollination by wind, begins seed formation within 10 days. The seed needs more than a week to attain full maturity after seed formation. Buckwheat grows on low-fertility soil with moderate nitrogen content (Gairhe et al., 2015). It has a short growing period of 70-90 days, and its chemical constituents give it good storability. Harvesting buckwheat seeds poses technical problems owing to their uneven ripening pattern, so modern techniques and harvesting equipment need to be adopted to tackle the problems and issues related to the harvesting of buckwheat.
| NUTRITIONAL CONSTITUENTS OF BUCKWHEAT
Buckwheat is a nutrient-dense grain with an excellent nutrient profile to target malnutrition and the celiac population. As a pseudocereal, buckwheat is preferred for formulating food products, either through supplementation with cereal crops to enhance nutritive value or as a replacement for cereal grains in gluten-free product formulations.
The nutritional composition of buckwheat is presented in Table 1.
| Proteins
Buckwheat is an important source of protein (8.5%-18.8%), depending on cultivar, source, and climate conditions (Dziadek et al., 2016). The protein concentration in buckwheat grains is higher than in cereal grains (Bobkov, 2016). Buckwheat proteins are composed of globulins (43.3%-64.5%), albumins (12.5%-18.2%), prolamins (0.8%-2.9%), glutelins (8.0%-22.7%), and 15% residual proteins (Chrungoo et al., 2016). The prolamin content of buckwheat proteins is very low, and the proteins responsible for celiac disease (30 kDa prolamins) are absent from buckwheat, as observed by gel electrophoresis and enzyme-linked immunosorbent methods by Petr et al. (2003). The 2S albumins and the 8S and 13S globulins in buckwheat proteins are similar to legumin-type storage proteins (Bobkov, 2016). Among the globulins, the 13S globulins are the main storage proteins, with a hexameric structure of acidic (32-43 kDa) and basic (23-25 kDa) polypeptide subunits bonded by disulfide bonds (Taylor et al., 2016). The amino acid composition of buckwheat proteins is well balanced and rich in arginine, lysine, and aspartic acid (Bhinder et al., 2020). The presence of tannins and protease inhibitors decreases the protein digestibility of buckwheat. However, the lysine in buckwheat protein gives it a higher protein digestibility-corrected amino acid score than cereals.
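The protein-fraction ranges quoted above can be checked for internal consistency. Taking the midpoint of each range (an illustrative assumption, not the source's method of calculation) shows that the four fractions plus residual proteins account for roughly the whole protein complement:

```python
# Midpoint sanity check on the buckwheat protein fractions quoted above.
# Using range midpoints is an illustrative assumption, not the source's method.
fractions = {  # fraction -> (low %, high %) of total protein
    "globulins": (43.3, 64.5),
    "albumins": (12.5, 18.2),
    "prolamins": (0.8, 2.9),
    "glutelins": (8.0, 22.7),
}
midpoints = {name: (lo + hi) / 2 for name, (lo, hi) in fractions.items()}
residual = 15.0  # residual proteins, % (given in the text)

total = sum(midpoints.values()) + residual
print(f"midpoint total: {total:.2f}%")  # close to 100%, as expected
```

The midpoints sum to about 101%, i.e. the quoted ranges are mutually consistent to within their spread.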
| Carbohydrates
Starch is the main available carbohydrate in buckwheat grains, varying from 60% to 70% (Vojtiskova et al., 2012). Amylose and amylopectin are present in buckwheat starch in a ratio of about 25:75. Buckwheat starch granules are small (3-10 μm), smooth surfaced, and polygonal, similar to tuber and cereal starches (Yang et al., 2019). Buckwheat is also a good source of resistant starch (33.5%), as Yang et al. (2019) reported in buckwheat groats. Processes such as autoclaving, cooking, or boiling affect the resistant starch content of buckwheat (Bobkov, 2016). Buckwheat contains more starch than the other pseudocereals, and its calorie content (343 cal/100 g) is similar to that of cereals and legumes (Mir et al., 2014). The buckwheat embryo is a storehouse of soluble carbohydrates in the form of d-chiro-inositol, besides sucrose. The d-chiro-inositol in the buckwheat embryo is stored as fagopyritols, galactosyl derivatives of d-chiro-inositol, whose concentration varies from 20.7 to 41.7 mg/100 g, of which 71% is concentrated in the embryo (Zieliński et al., 2019).

TABLE 1 Nutritional composition of buckwheat
Higher concentrations of fagopyritols are reported in Tartary buckwheat than in the common cultivars. Another soluble carbohydrate, rhamnosyl glucoside (31%), has been identified in tartary buckwheat (Dębski et al., 2016).
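The starch figures quoted above (60%-70% starch, amylose:amylopectin about 25:75) imply a simple per-100-g breakdown, sketched here as a rough back-of-the-envelope calculation (the exact split varies by cultivar):

```python
# Implied amylose/amylopectin per 100 g of buckwheat grain, from the figures
# above: 60-70% starch, of which ~25% amylose and ~75% amylopectin.
GRAIN_G = 100.0
starch_range = (0.60, 0.70)   # fraction of grain that is starch
AMYLOSE_FRAC = 0.25
AMYLOPECTIN_FRAC = 0.75

for frac in starch_range:
    starch_g = GRAIN_G * frac
    print(f"{starch_g:.0f} g starch -> "
          f"{starch_g * AMYLOSE_FRAC:.1f} g amylose, "
          f"{starch_g * AMYLOPECTIN_FRAC:.1f} g amylopectin")
```

So 100 g of grain carries roughly 15-17.5 g amylose and 45-52.5 g amylopectin under these assumptions.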
| Dietary fiber
Dietary fiber components of buckwheat can also induce adverse effects such as mineral and protein unavailability (Zhu, 2020).
| Lipids
The lipid content of buckwheat is low, ranging from 1.5% to 3.7%, but lipids have shown importance in various physiological activities (Ruan et al., 2020). Buckwheat lipids are classified into neutral lipids (81%-85%), phospholipids (8%-11%), and glycolipids (3%-5%) (Bobkov, 2016). Buckwheat is a rich source of unsaturated fatty acids (74.5%-79.3%), which have health benefits against heart disease, cancer, inflammation, and diabetes (Ruan et al., 2020). The unsaturated fatty acids are concentrated in the embryo of the buckwheat seed; palmitic, oleic, and linoleic acids are the most common fatty acids, representing 87.3%-88% of the fatty acids in buckwheat seeds (Gulpinar et al., 2012).
| Vitamins and minerals
Vitamins and minerals are central to physiological processes in the human body. Buckwheat grain is a rich source of vitamin A, vitamin B complexes, and vitamins C and E (Zhu, 2016; Zhou, Hao, et al., 2015; Zhou, Wen, et al., 2015). Vitamin B1 in buckwheat is associated with thiamine-binding proteins, which increase its bioavailability and stability during storage (Wronkowska, Zielinska, et al., 2010; Wronkowska, Soral-Śmietana, et al., 2010). Zhou, Hao et al. (2015) and Zhou, Wen et al. (2015) reported a vitamin C content of 5 mg/100 g in buckwheat; after germination, the vitamin C level rose to 25 mg/100 g. Buckwheat grains are mineral sources of both macro- and micronutrients.
Macronutrients like phosphorus, potassium, magnesium, and calcium are at reasonable levels, whereas iron, manganese, and zinc are at lower concentrations in buckwheat (Zhu, 2016). The micronutrients in buckwheat are higher than in cereal grains, and their concentration is mainly confined to the seed coat, hull, and aleurone layers (Orožen et al., 2012). The minerals like zinc, copper, and potassium become readily available for absorption after enzymatic digestion into a soluble form (Klepacka et al., 2020). The diet available for gluten-free consumers is low in vitamins and minerals, and the incorporation of buckwheat in gluten-free diets increases the concentration of vitamins and essential minerals in their diet (Mir et al., 2014).
| BIOACTIVE COMPOUNDS IN BUCKWHEAT
Buckwheat is a nutrient-dense pseudocereal with various bioactive compounds, such as bioactive peptides, flavonoids, fagopyrins, fagopyritols, d-fagomine, and phenolic acids (Table 2); their chemical structures are presented in Figure 2. The bioactive compounds of buckwheat grains enhance its healing effects against health-related diseases.
| Bioactive peptides
Buckwheat grains are a good source of protein, higher than cereal crops (Bobkov, 2016). Buckwheat protein has well-balanced amino acids, with a nutritive value similar to milk and egg solids (Giménez-Bastida et al., 2015; Zhou, Hao, et al., 2015; Zhou, Wen, et al., 2015). According to Clare and Swaisgood (2000), bioactive peptides have pharmacological activity. The pharmacological activities associated with bioactive peptides are antioxidant and antimicrobial properties, cholesterol-lowering ability, hypoglycemic effects, and antitumor activity (Nasri, 2016). A peptide generally contains 3 to 50 amino acid residues, with its activity depending on sequence, composition, structure, charge, and hydrophobicity (Saadi et al., 2015). Bioactive peptides are either naturally available or prepared through enzymatic hydrolysis (Aiello et al., 2017).
Buckwheat-based protein extracts have also been effective against dimethylhydrazine-induced mammary and colon cancer in rats (Liu et al., 2001).
Buckwheat hypotensive peptides contain 2-5 amino acid residues (Zhou, Hao, et al., 2015; Zhou, Wen, et al., 2015). The hypotensive peptides inhibit the angiotensin I-converting enzyme and lower blood pressure (Zieliński et al., 2020). Hypotensive peptides with proline, tyrosine, tryptophan, or phenylalanine at the carboxyl-terminal end have a great affinity for inhibiting the activity of the angiotensin I-converting enzyme (Ma et al., 2006). The angiotensin I-converting enzyme inhibitors isolated from buckwheat protein hydrolysates were FY, YQ, AY, LF, YQY, YV, VK, PSY, LGI, ITF, and INSQ (Li et al., 2002). Inhibitors such as DVWY, FDART, FQ, VAE, VVG, and WTFR, with hypotensive activity, were also reported by Koyam et al. (2013) in fermented buckwheat sprouts. Buckwheat protein extracts have also been reported to have antidiabetic properties (Zhou, Hao, et al., 2015; Zhou, Wen, et al., 2015). Reactive oxygen species generated during metabolic processes damage the cell membranes of the pancreatic islets, which is one cause of diabetes (Fakhruddin et al., 2017). Damage to the pancreatic islets can be reduced by the intake of buckwheat proteins, which contain antioxidant enzymes and reactive oxygen species scavenging activity (Liu et al., 2009). Buckwheat proteins help maintain balanced glucose levels in diabetic people (Zhou, Hao, et al., 2015; Zhou, Wen, et al., 2015). Digested buckwheat proteins produce antioxidant peptides (WPL, VPW, VFPW, and PW) with a strong capacity to scavenge reactive oxygen species (Zhou, Hao, et al., 2015; Zhou, Wen, et al., 2015). Buckwheat proteins are an effective supplement for people suffering from high cholesterol levels and obesity. Buckwheat protein extracts can bind bile acids in the gastrointestinal tract, which increases the excretion of neutral sterols through feces (El-Sayed et al., 2020).
The binding affinity of buckwheat protein extract with bile acids increases the secretion of liver bile acids from cholesterol and lowers the liver cholesterol level (Zhou, Hao, et al., 2015; Zhou, Wen, et al., 2015). The maximum level of rutin is present in leaves (0.08-0.10 mg/g), while in buckwheat seeds it varies from 0.12 to 0.36 mg/g (Brunori et al., 2010; Park et al., 2008).
| Fagopyrins
Fagopyrin is a photosensitive polyphenol present in buckwheat seeds. Fagopyrins have a naphthodianthrone skeleton, structurally similar to hypericin compounds (Kim & Hwang, 2020). Present in the seeds and leaves of buckwheat, fagopyrins are considered toxic polyphenols owing to the photosensitization and allergic reactions they can cause. They are difficult to isolate because of their low content in buckwheat (Ahmed et al., 2014). Fagopyrins have laxative, antibiotic, and antiviral effects and have been used in the treatment of diabetes (Hagels, 2007).
| Fagopyritols
Fagopyritols are galactosyl derivatives of d-chiro-inositol, present in the bran fraction and at higher concentrations in buckwheat embryos.
| Phenolic acids
Buckwheat is a rich source of phenolic acids, available in free or bound form. The phenolic acids in buckwheat are mostly derivatives of benzoic and cinnamic acids (Mir et al., 2018).
Phenolic acids in buckwheat are associated with antioxidant activity (Giménez-Bastida et al., 2015) and prevent the buckwheat seed from chemical degradation during prolonged storage (Antoniewska et al., 2018). Phenolic acids are natural antioxidants, effective against reactive oxygen species, reducing cardiovascular diseases, cancer, and age-related processes (Forni et al., 2019; Wani et al., 2022). Phenolic acids from buckwheat bran have biological activity against liver cancer cells, as tested in vitro by Li et al. (2016).
| HEALTH-PROMOTING ATTRIBUTES OF BUCKWHEAT
Buckwheat is a potentially promising herb, used since ancient times to treat health-related diseases (Cai et al., 2016). It is a pseudocereal rich in protein and starch, as well as a rich source of minerals and vitamins (Khalid et al., 2020). The unique nature, structure, and bioactive compounds of buckwheat give it vast nutraceutical potential for human health (Zhu, 2016).
Buckwheat provides a good source of nutrients and bioactive components, resulting in the enhancement of the therapeutic potential of buckwheat. The therapeutic potential of buckwheat is as follows:
| Antioxidant activity
The antioxidant activity of buckwheat is due to its polyphenolic compounds, particularly rutin. Free radicals, generated as reactive oxygen species (ROS) during human metabolism, are responsible for cancer, cardiovascular disease, aging, cerebrovascular disease, and degenerative diseases. Additionally, naturally occurring polyphenolic antioxidants have been shown to decrease ROS in antigen-IgE-activated mast cells and concurrently suppress the release of histamine by these activated mast cells. Therefore, it may be hypothesized that buckwheat's antiallergic effects are partially a result of the antioxidant properties of its polyphenolic components (Ahmed et al., 2014; Papadopoulou et al., 2021; Ünsal et al., 2021). However, more research is needed to determine how buckwheat and its components affect allergic responses.
Buckwheat flavonoids can act as scavengers against free radicals due to their ease of oxidation (Zhou et al., 2018). Buckwheat flavonoids have a molecular structure that supports the concept of an active phenolic hydroxyl group capable of scavenging free radicals and preventing cancer, cardiovascular disease, aging, and cerebrovascular and degenerative diseases (Li et al., 2017;Shahbaz et al., 2022).
| Anticancer activity
Owing largely to lifestyle factors, cancer is the leading cause of death in developed and developing countries (Jemal et al., 2011). One of the century's major global challenges is the formulation of functional foods to prevent chronic diseases, including cancer. Diets including a variety of foods, among them buckwheat, were reported to be associated with a lower risk of lung cancer (Shen et al., 2008). Flavonoids and polysaccharides (Zhu, 2020), lectins (Bai et al., 2015), and phenylpropanoids (Zheng et al., 2012) are among the buckwheat constituents reported to contribute to its anticancer activity.
| Hepatoprotective activity
Reactive oxygen and nitrogen species, as well as a variety of chemicals, damage the liver in mice and rats. Lee et al. (2017) reported that flavonoids and extracts from the tartary buckwheat cultivar protect the liver against carbon tetrachloride- and ethanol-induced damage. On a molecular level, buckwheat flavonoids reduced serum aspartate transaminase activity, increased superoxide dismutase enzyme activity, decreased liver dysfunction and hepatic inflammation, and improved antioxidative and anti-inflammatory functions for hepatoprotection. The hepatoprotective effects of buckwheat are related to polyphenols such as rutin (Ruan et al., 2020).
| Antidiabetic activity
Diabetes is a severe public health problem today, affecting over 300 million people (Zhang et al., 2011). Buckwheat helps prevent diabetes and its complications by reducing fasting blood sugar, increasing insulin levels, lowering glycosylated hemoglobin and glycosylated serum protein, and suppressing blood sugar levels.
Buckwheat-based products have significantly reduced blood sugar concentrations and are associated with a lower risk of diabetes mellitus. Buckwheat could therefore be cultivated more extensively as a grain crop, given its ability to reduce the rate of diabetes mellitus in people prone to it. Both digested and undigested buckwheat flavonoids have improved glucose consumption and glycogen amount (Ruan et al., 2020). Rutin inhibits glucosidase and amylase enzymes and thus reduces glucose uptake in the small intestine through a decrease in carbohydrate digestion (Jadhav & Puchchakayala, 2012).
Buckwheat starch has a greater sensitivity to digestive enzymes because of the structure and compactness of its granules (Zhu, 2016).
Polyphenolic compounds, dietary fiber, and other nonstarch elements lead to food matrix effects and lower the glycemic index of buckwheat-based products (Singh et al., 2010). Fagopyritols, a special active ingredient among buckwheat's soluble carbohydrates, are used in the treatment of noninsulin-dependent diabetes and polycystic ovarian syndrome.
| Anti-inflammatory and antifatigue effects
Inflammation is a natural biological reaction to tissue damage, microbial pathogens, and chemical irritants (Pan et al., 2010), and its chronic stage is related to cancer development (Chen et al., 2018). An extract of buckwheat can lower inflammatory mediators, including interleukin-6, monocyte chemoattractant protein-1, tumor necrosis factor, and inducible nitric oxide synthase, as well as nitric oxide, resulting in the inhibition of the inflammatory reaction (Li et al., 2017; Nam et al., 2017; Zhang et al., 2018). Different phenolic acids in buckwheat, such as ferulic and p-coumaric acid, decreased lipopolysaccharide-induced inflammation (Hole et al., 2009). Fatigue is a term that encompasses a wide range of medical disorders related to pathology, general health, and physical activity. Exercising for a long time at high intensity induces fatigue. Buckwheat protein significantly increased climbing time, swimming time, and liver glycogen level, effectively reducing blood lactate and serum urea contents (Jin & Wei, 2011). Bran extract from buckwheat can also lower lipids in the blood and liver, boost antioxidants, and prevent peroxides in the blood (Ruan et al., 2020).
| Antihypertensive activity
Hypertension affects approximately 1 billion people in the global population and is expected to affect 1.56 billion by 2025 (Devos & Menard, 2019). The renin-angiotensin system maintains blood pressure: angiotensinogen is converted into angiotensin I, which is then converted into angiotensin II by the angiotensin I-converting enzyme (ACE), resulting in hypertension (Jao et al., 2012). ACE inhibitors block the conversion of angiotensin I to angiotensin II and hence lower the pressure in blood vessels. The hypotensive effect of buckwheat, via ACE inhibition, has been supported under both in vitro and in vivo conditions (Jin et al., 2022). Buckwheat-based flavonoids, particularly rutin, prevent blood vessel hardening, boost microcirculation, detoxify the blood, improve blood circulation, remove toxins, and lower blood and urine sugar levels (Hou et al., 2017). By assisting in regulating vasoconstriction and diastole, buckwheat extract helps lower blood pressure, with quercetin as the key active component in minimizing oxidative stress in blood vessels and restoring vasodilation in clinical trials (Giménez-Bastida et al., 2015).
| Antineurodegenerative effect
Excessive aggregation of proteins such as amyloid-β contributes to oxidative stress in the central nervous system, leading to neurodegenerative disorders (Citron, 2004; Gulpinar et al., 2012). Recent research suggests that the neuroprotective effects of buckwheat parts and their extracts are attributable to rutin (Choi et al., 2015).
| Antigenotoxicity
Genotoxicity is a term used to describe a negative impact on the integrity of a cell's genetic material (DNA and RNA). Buckwheat extracts showed good protection against DNA damage induced by hydroxyl radicals in in vitro chemical assays. Buckwheat extract was found to repair over 50% of the DNA damage, owing to its high levels of phytochemicals and their ability to scavenge hydroxyl radicals and chelate iron (Cao et al., 2008). A study of the DNA-protective properties of buckwheat using a human hepatoma cell line reported that the inhibition of DNA damage was related to antioxidants in buckwheat extract, namely rutin and quercetin (Vogrincic et al., 2013). The rutin and quercetin components in buckwheat extracts work synergistically to protect against DNA damage (Wang & Zhu, 2015).
| DEVELOPMENT OF GLUTEN-FREE PRODUCTS FROM BUCKWHEAT
Gluten intolerance is an autoimmune disease caused by the consumption of cereal-based gluten proteins, leading to mucosal damage in the small intestine through interaction among the celiac patient, the gluten diet, and the immunological response (Mir et al., 2018). Celiac disease involves loss of intestinal villi and incomplete digestion and absorption of nutrients, affecting the overall function of the human body (Kreutz et al., 2020).
| Challenges and achievements in the technology of gluten-free products
Buckwheat's utilization in developing gluten-free products enhances its valorization in the gluten-free market, mitigates various diet-related diseases, and provides alternative gluten-free diets to feed 1.4% of the world population. The promising health ingredients and absence of gluten proteins in buckwheat have focused its use in food processing for various gluten-free buckwheat products worldwide (Małgorzata et al., 2018). Demand for gluten-free products has increased with the incidence of celiac disease and other gluten intolerances or allergies. People with celiac disease must restrict gluten proteins in their diet and shift to gluten-free products (Rosida et al., 2022). Cutting gluten out of the diet while ensuring customer-acceptable quality presents technological obstacles. Gluten is the abundant structural protein complex found in wheat and underlies the techno-functional properties of wheat-based products (Allai et al., 2022; Sapone et al., 2012). The elimination of gluten from food products results in defects in quality attributes, nutritional characteristics, and consumer acceptance. Hence, it is unrealistic for gluten-free products developed for celiac patients to fully mimic the overall qualities of gluten-containing products. Different approaches and technologies have been adopted to overcome the defects of gluten-free products and make them acceptable to celiac patients, including the incorporation of nutritional ingredients, hydrocolloids, and enzymes to modify and mitigate these defects (Alvarez-Jubete et al., 2009; Hamada et al., 2013; Ronda et al., 2015). Beyond altering formulations, technologies such as high pressure, extrusion, and sourdough fermentation, acting directly on the product's base material, also bring promising results, mimicking the qualities of gluten-containing products.

TABLE 3 Buckwheat-based gluten-free products and their biologically active compounds and chemical structure
| Use of buckwheat in the technology of gluten-free products
Buckwheat has been used to prepare different gluten-free products, such as bakery products (bread, biscuits, and cookies), noodles, tea, and extruded products, with good organoleptic quality and consumer acceptability.
| Gluten-free bakery products
In the bakery industry, there is great scope to create innovative, health-promising products by utilizing bioactive-rich ingredients to produce functional bakery products. Buckwheat flour, with its bioactive-rich components, positively impacts human health. Subsequently, the exploration of bakery products (bread and biscuits) from different functional ingredients, modification, advancement in final product functionality, and composite flour-based bakery products (bread, biscuits, and snacks) has increased consumer demand for these products. Buckwheat-based ingredients for enriching bakery products constitute a new gluten-free food with nutraceutical attention. The incorporation of buckwheat flour in bread development enhances the nutritional quality of bread, with an increase in nutrients such as protein and mineral content (Wronkowska et al., 2013). The addition of buckwheat flour in product development also poses technological problems in bread development owing to its viscoelastic properties, lower baking quality, and acceptability by consumers (Hager et al., 2012; Saturni et al., 2010). The characteristics of gluten-free bread, such as loaf volume and crumb texture, improved significantly with the incorporation of buckwheat compared to the gluten-free control (Alvarez-Jubete et al., 2010). Besides nutritional quality, the incorporation of buckwheat in bread enhances the antioxidant activity, lowers the glycemic index, and improves the functional properties of bread (Wolter et al., 2013; Wronkowska, Zielinska, et al., 2010; Wronkowska, Soral-Śmietana, et al., 2010). The sensory and quality attributes of buckwheat-based bread were improved by incorporating non-buckwheat ingredients such as starch, corn flour, and rice (Torbica et al., 2010; Wronkowska et al., 2013). The development of bread from buckwheat needs more effort in technological and formulation aspects to enhance overall quality as an alternative targeting celiac disease with high consumer acceptability.
Buckwheat flour has been incorporated in the development of gluten-free products (Suzuki et al., 2020). Increasing the proportion of buckwheat flour in bread development increased its rutin content and antioxidant activity (Vogrincic et al., 2013). Gluten-free buckwheat bread showed higher levels of protein, phenolic content, and antioxidant activity than amaranth- and quinoa-enriched gluten-free bread (Chlopicka et al., 2012). Buckwheat, along with other pseudocereals, is considered an alternative to gluten-protein-based bread, with an improved nutritional profile and phenolic content (Schoenlechner et al., 2010). Wronkowska, Soral-Śmietana et al. (2010) and Wronkowska, Zielinska et al. (2010) formulated a buckwheat-based gluten-free bread with corn starch as the base material; it showed increased antioxidant activity and higher mineral, protein, and vitamin content compared to a control bread without buckwheat flour. Furthermore, bread containing whole-grain buckwheat flour showed higher levels of antioxidant and phenolic compounds. However, despite the gains in nutritional content and functionality, the addition of buckwheat in bread development is restricted by its low baking quality and consumer acceptability (Saturni et al., 2010).
Cookies and biscuits from buckwheat are bakery products with health-promising functional ingredients for gluten-intolerant people. Cookies made with buckwheat flour at levels up to 20% proved to have excellent product quality and consumer acceptability (Torbica et al., 2010). Gluten-free cookies from buckwheat with added chickpea flour showed enhanced nutritional value and organoleptic properties compared to control wheat-based cookies (Yamsaengsung et al., 2012). The area of gluten-free cookies needs many more studies to optimize processing and enhance quality and sensory properties, so that such cookies can serve celiac patients as a dietary means of controlling the impact of celiac disease.
Buckwheat-enriched snacks containing not less than 30% buckwheat flour blended with corn flour showed good acceptability as an attractive appetizer with high nutritional value (Wojtowicz et al., 2013). Biscuits prepared by incorporating buckwheat showed changed physicochemical and organoleptic properties, with increased spread, hardness, and fracturability (Filipcev et al., 2011).
The incorporation of buckwheat at 20% to 50% enhances the sensory attributes, biofunctional properties, protein, fiber, micronutrients, polyphenolic content, and antioxidant activity (Baljeet et al., 2010; Filipcev et al., 2011). Cookies from buckwheat are also gluten-free products with broad consumer acceptability and nutraceutical properties, and cookies made from buckwheat flour were richer in protein and fiber than those from wheat flour. The cookies formulated from buckwheat flour had a protein content in the range 4.34%-5.45%, fat of 18.81%-20.04%, and fiber of 0.39%-0.68%, with better cookie qualities (Altındag et al., 2015).
| Gluten-free noodles and pasta
Noodles are convenient, easy to prepare, acceptable to consumers, and nutritionally rich products (Sofi et al., 2019). These properties make noodles and pasta suitable bases for gluten-free products for celiac patients. Buckwheat noodles, commonly called soba noodles, are prepared by partially substituting buckwheat flour for cereal flour or from buckwheat flour alone (Hatcher et al., 2011). Buckwheat noodles have been prepared with other ingredients, such as green tea powder, mushroom, or seaweed, to enhance sensory and texture quality (Yoon et al., 2007). The textural properties of noodles are an important quality parameter for judging the sensory score of a noodle and depend upon the amounts of starch, protein, and fiber present (Hatcher et al., 2011). Demand for buckwheat noodles has increased because of their nutraceutical potential, but their sensory quality is only partially acceptable to consumers owing to the lack of a viscoelastic network (Han et al., 2012). Recent studies on noodles prepared from buckwheat have aimed to improve texture, sensory attributes, and noodle quality to make acceptable noodles for gluten-intolerant people (Bouasla & Wójtowicz, 2019).
Buckwheat-based noodles are known as soba-type noodles; formulations with 35% buckwheat flour have good texture and cooking qualities (Hatcher et al., 2011), and soba noodles containing 60%-100% buckwheat flour have also achieved good consumer acceptability (Sun et al., 2019). Buckwheat noodles have been supplemented with functional ingredients such as green tea, shiitake mushroom, or seaweed powder to enhance their nutraceutical potential (Yoon et al., 2007). The increasing demand for buckwheat-based noodles reflects their role as an alternative product for gluten-intolerant people and a good source of nutrients. However, the non-gluten protein in noodle dough does not produce a cohesive structure, which affects the textural qualities (Hatcher et al., 2011). Incorporating buckwheat flour in gluten-free noodle development increased the mineral and nutritional composition, although it reduced the cooking quality and color parameters. Noodles produced from fermented buckwheat have increased amino acid and mineral contents, with reduced allergenic proteins and phytic acid (Bilgicli, 2009).
| Other gluten-free products
Buckwheat, rich in bioactive compounds, can also be processed into tea, beer, and extruded products to extend the valorization of buckwheat beyond bakery products. The process of tea production from buckwheat involves many steps designed to minimize degradation of the bioactive compounds: the buckwheat seeds are soaked, steamed, and dried, and the dehulled seeds are then roasted and powdered for tea development. The effects of this thermal processing on the nutrient composition and polyphenolic content depend upon the cultivar and the processing time. The thermal stability of chemical constituents such as proteins in buckwheat tea is related to the proportion of lipid content in the buckwheat seed (Jin et al., 2022). An alternative method of retaining the nutrients, polyphenols, and antioxidants is to use microwave heating for buckwheat tea preparation. With its nutraceutical potential, buckwheat tea is used in most Asian and European countries (Zielinska et al., 2013). The rutin present in buckwheat plant parts such as flowers and leaves has been used for tea preparation, and processing these rutin-rich plant parts into tea showed only a slight change in rutin content during boiling (Xu et al., 2019). Moreover, buckwheat byproducts such as hulls, rich in flavonoids, have been used to prepare infusions or teas (Zielinska et al., 2013). Buckwheat has also been used to produce malt as the basis of a mash for the development of beer suitable for celiac sufferers or others sensitive to specific glycoproteins (Agu et al., 2012).
Buckwheat has been shown to be an alternative source of gluten-free beer due to its dense nutritional profile (Brasil et al., 2020).
In recent years, investigation into the use of buckwheat in beer production has increased as a way of avoiding gluten-based grains.

Extrusion technology in food processing is used to produce food products with broad diversification, high product quality, and consumer acceptability. Extruded products include pasta, modified flours, textured vegetable protein, meat analogs, snacks, and starch-based foods (Leonard et al., 2020). Extruded products have good digestibility and consumer acceptability and are available in various shapes and sizes thanks to the extruder's high pressure, mixing, and shear operations (Alam et al., 2016). End consumers' preference for healthy foods is pushing the extrusion industry toward functional extruded products with added fiber, resistant starch, antioxidants, and vitamins (Chillo et al., 2010; Leonard et al., 2020). The nutrient-dense buckwheat pseudocereal has prompted researchers to formulate extruded buckwheat products, which have been reported to act as prebiotics, maintain gut microbiota, and reduce cholesterol levels (Petrova & Petrov, 2020). Extruded buckwheat products retain nutrients during extrusion processing and have higher antioxidant activity than roasted buckwheat. Protein digestibility, dietary fiber, and polyphenols are retained to a high degree in extruded buckwheat products, suggesting buckwheat flour as a functional supplement for the production of functional extruded products (Klepacka & Najda, 2021). Pasta produced by extrusion technology is an easy-to-prepare food product, widely available in the market, with high consumer acceptability. Buckwheat flour used in pasta maintains good dough quality and texture without compromising the cooking qualities of gluten-free pasta (Schoenlechner et al., 2010).
Optimization of the development of gluten-free pasta from buckwheat flour has been used to produce better firmness, a good structural network, and improved cooking and sensory quality (Verardo et al., 2011).
| CONCLUSION
Buckwheat is the main pseudocereal, with an excellent nutritional profile, rich in phytochemicals, vitamins, and minerals. Buckwheat is a cheap source of protein; with a protein content higher than that of cereals, it could be an approach to mitigating protein-related malnutrition in developing countries. The bioactive components associated with buckwheat have health and nutraceutical significance, and the bioactive components isolated from buckwheat can be used in the pharmaceutical industry to treat various health-related diseases. Nowadays, the most attractive trend in the food industry is the formulation of functional foods with health benefits. In recent years, buckwheat-based food products with good sensory and technofunctional qualities have attracted the food market with their health benefits and their suitability for people with gluten intolerance. However, more research and development are needed to improve the organoleptic qualities of gluten-free buckwheat products.
ACKNOWLEDGEMENT
Not applicable.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
Not applicable.
ETHICAL APPROVAL
The study involved no experimentation with human subjects.
CONSENT TO PARTICIPATE
The authors declare their consent to participate in this article.
CONSENT TO PUBLISH
The authors declare their consent to publish this article.
Molecular diagnosis of generalized arterial calcification of infancy (GACI)
Generalized arterial calcification of infancy (GACI) is a life-threatening disorder in young infants. Cardiovascular symptoms are usually apparent within the first month of life. The symptoms are caused by calcification of large and medium-sized arteries, including the aorta, coronary arteries, and renal arteries. Most of the patients die by 6 months of age because of heart failure. Recently, homozygous or compound heterozygous mutations in the ectonucleotide pyrophosphatase/phosphodiesterase 1 (ENPP1) gene were reported as causative for the disorder. ENPP1 regulates extracellular inorganic pyrophosphate (PPi), a major inhibitor of extracellular matrix calcification. A newborn was diagnosed with GACI. The infant died at the age of 7 weeks of cardiac failure, and the parents were referred to the Molecular Biology and Cytogenetics lab for further workup. Cytogenetic analysis performed on the parents showed normal karyotypes, and mutational analysis of the ectonucleotide pyrophosphatase/phosphodiesterase 1 (ENPP1) gene was also performed. The mutational analysis showed that both the father and the mother of the deceased infant were heterozygous carriers of the mutation c.749C>T (p.P250L) in exon 7 of ENPP1, and it was likely that the deceased child had carried the same mutation homozygously on both alleles and died of GACI resulting from this ENPP1 mutation. The couple was counseled and monitored for the second pregnancy. Amniocentesis was performed at 15 weeks of gestation for mutational analysis of the same gene in the second pregnancy. The analysis was negative for the parental mutations. One month after the birth of a healthy infant, peripheral blood was collected from the baby and sent for reconfirmation. The results were again negative for the mutation; the baby was followed up for 6 months, and no major symptoms were seen. The parents of the child benefited enormously by learning about the disease much in advance and also about its risk of recurrence.
The main aim of this study is to emphasize two aspects: (i) the importance of modern molecular techniques in diagnosing such a syndrome and (ii) the difficulties faced by the physician in providing an appropriate diagnosis and adequate genetic counseling to the family in the absence of molecular facilities.
INTRODUCTION
Generalized arterial calcification of infancy (GACI) is a rare autosomal recessive disorder, reported to date in about 180 individuals. [1,2] Calcification of large and medium-sized arteries and marked myointimal proliferation leading to arterial stenoses are characteristic vascular features of the GACI phenotype. An extravascular feature, foci of periarticular calcification, occurs in many of the affected subjects. [3] Initial signs of the disease may occur prenatally, and most affected children die in early infancy from sequelae of vascular occlusion, typically myocardial infarction or congestive heart failure due to hypertension. [3,4] The diagnosis of GACI is often not considered until arterial calcification is incidentally detected in the affected patients. Also, arterial calcification might be too subtle to be detected by conventional radiography. In such cases, ultrasonography or a computed tomography scan might demonstrate evidence of arterial calcification and establish the diagnosis. [5,6] Other potential causes of arterial calcium deposition, such as hypervitaminosis D, advanced renal disease and hyperparathyroidism, can be excluded after basic laboratory investigations. [6] GACI is most frequently caused by mutations in ENPP1, a gene encoding ecto-nucleotide pyrophosphatase/phosphodiesterase 1 (NPP1). [7] Prenatal diagnostic testing may not yield accurate results, as the manifestations are usually seen only in the last trimester of pregnancy. However, early diagnosis by mutation analysis from amniocentesis is possible if disease-causing mutations in ENPP1 have been identified previously in an index case in the same family.
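As background, the relation between the cDNA-level notation c.749C>T and the protein-level notation p.P250L reported for this family follows from simple codon arithmetic under the standard genetic code. The sketch below is illustrative only: the actual ENPP1 reference sequence is not reproduced here, so all four proline codons are checked rather than the specific one.

```python
# Map a 1-based cDNA coordinate (c.1 = first base of the ATG) to its codon
# number and its position within that codon, then verify that a C>T change
# at that position turns proline into leucine.

# Minimal codon table restricted to the codons needed here (standard code).
CODON_TABLE = {
    "CCT": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
}

def codon_number(cdna_pos):
    """1-based codon index containing a 1-based cDNA position."""
    return (cdna_pos - 1) // 3 + 1

def offset_in_codon(cdna_pos):
    """0-based position of the base inside its codon."""
    return (cdna_pos - 1) % 3

pos = 749
assert codon_number(pos) == 250       # c.749 lies in codon 250
assert offset_in_codon(pos) == 1      # ...at the second base of the codon

# The third base of codon 250 is not given here, so check every proline
# codon: a C>T at the second position always yields a leucine codon.
for codon in ("CCT", "CCC", "CCA", "CCG"):
    mutated = codon[:1] + "T" + codon[2:]
    print(codon, "->", mutated, ":", CODON_TABLE[codon], "->", CODON_TABLE[mutated])
```

This is consistent with c.749C>T being annotated as p.P250L regardless of which proline codon the reference sequence uses.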
CASE REPORT
The parents of the index case were non-consanguineous. Antenatal ultrasonography during the pregnancy at 13 and 30 weeks of gestation did not reveal any abnormalities. The 36-week ultrasonography revealed polyhydramnios and accumulation of mild pericardial effusion in the fetus [Figure 1a and 1b]. Amniotic fluid was drawn for chromosomal analysis at 15 weeks of gestation; the karyotype was normal and unremarkable for numerical and structural anomalies. The female infant was born at term via lower-segment cesarean section. She developed respiratory distress immediately after birth and required supplemental oxygen. The baby was pale looking, hyperglycemic and euthermic. Vesicular breath sounds were audible. The abdomen was soft. A pansystolic murmur was heard at the 3rd and 4th left intercostal space. Central nervous system examination was unremarkable, and there were no external congenital anomalies. Her femoral pulses were present; however, brachial, radial, posterior tibial and dorsalis pedis pulses were absent. Due to severe respiratory distress, she was initially started on continuous positive airway pressure (CPAP) ventilation; however, she was later shifted to mechanical ventilation in view of increasing dyspnea and severe metabolic acidosis. In view of severe anemia [Table 1], she was given a packed red blood cell transfusion.
She developed extreme tachycardia. Electrocardiography (ECG) on the monitor was suggestive of ventricular fibrillation. Despite severe hypertension, her oxygen saturation was 95% in room air, and antihypertensive therapy was started. Echocardiography showed prominent calcification of the aortic arch, right pulmonary artery (RPA), innominate artery (IA), left carotid artery (LCA), and left subclavian artery [Figure 2]. She was started on milrinone infusion, and the dose was gradually increased to 0.9 micrograms per kg per minute with minimal improvement. A repeat echocardiogram on the 49th day of life revealed severe biventricular dysfunction; the ascending aortic arch and the coronary arteries were calcified [Figure 2]. She was started on inotropic support (dobutamine and milrinone) and diuretic therapy. An ultrasound scan of the abdomen revealed diffuse calcification of the abdominal wall with right renal artery stenosis. A CT scan of the abdomen revealed diffuse rim calcification of the entire aorta and its branches in the chest and abdomen, which pointed towards a rare genetic form of diffuse arterial calcification. The clinical diagnosis of generalized arterial calcification of infancy (GACI) was proposed, and therapy with intravenous pamidronate (three doses) was started at 18 days of life. Despite therapy, rapid evolution to multiorgan failure occurred, and the patient died on day 51 of life. Autopsy was not performed as the parents did not give consent, and mutational analysis of ENPP1 was not performed as no tissue was available.
The family was planning a second conception but wanted to have an objective and measurable understanding of the risk of recurrence in a subsequent pregnancy. For this purpose, the family was referred for genetic counseling to the Molecular Biology and Cytogenetics lab, Apollo Hospitals, Hyderabad, India.
One year after the first baby's demise, the mother conceived for the second time, and the pregnancy was closely monitored. A detailed genetic analysis was carried out after obtaining informed consent from the couple. About 3 ml of peripheral blood was collected from each partner in heparinized vacutainers and processed for cytogenetic analysis. Stimulated lymphocyte cultures were set up as described by Moorehead et al., [8] and Giemsa banding (GTG banding) was performed according to Seabright et al. [9] Fifty metaphases were scored under a light transmission microscope, and karyotyping software (Leica CW 4000 Total solutions for cytogenetics imaging V1.3) was used for cytogenetic analysis. Metaphases were karyotyped according to International System for Human Cytogenetic Nomenclature (ISCN) criteria. [10] The cytogenetic analysis showed normal karyotypes for both husband and wife, without any numerical or structural anomalies. In the ENPP1 mutational analysis, all mutations were detected on one allele. The mutation c.517A> p.K173Q in exon 4 is known to be a non-pathogenic polymorphism. The mutation c.1273-17delT is an intronic deletion in intron 12, 17 bases upstream of exon 13, which probably will not cause any transcriptional error (it has not been described previously) and is most likely not disease causing [Table 2 and Figure 3]. During the second pregnancy, at 15 gestational weeks, amniocentesis was performed. The amniotic fluid was collected in two vials; the first vial was cultured for cytogenetic analysis, and the results showed a normal karyotype. From the second vial, DNA was extracted and mutational analysis of exons 4 and 7 of ENPP1 was performed on the fetal DNA, after exclusion of maternal contamination of the sample. Mutational analysis of the amniotic sample was negative for the parental (paternal) mutation.
After the baby was born, mutational analysis was repeated on a blood sample and was again confirmed to be negative for the parental (paternal) mutation [Figure 3].
DISCUSSION
GACI is a rare, fatal, autosomal-recessive disorder that results in arterial stenosis and decreased elasticity of the blood-vessel walls, secondary to unregulated hydroxyapatite deposition. GACI is associated with inactivating mutations in the ENPP1 gene, which lead to decreased levels of inorganic pyrophosphate, a potent physiological inhibitor of hydroxyapatite-crystal deposition in the blood-vessel walls. [7] To date, only about 180 cases of this disease have been published, and many cases go undiagnosed or unreported. GACI is associated with a high mortality rate owing to the development of severe hypertension and cardiovascular complications in early infancy. Several case studies have described patients who survived into adulthood with persistent hypertension and cardiovascular sequelae; however, approximately 85% of affected infants do not survive beyond 6 months of age. [6] Infants often present with nonspecific signs, including poor feeding, cyanosis and respiratory distress. The diagnosis of GACI is often not considered until arterial calcification is incidentally detected, as in the index case presented here. Arterial calcification might be too subtle to be detected by conventional radiography. In such cases, ultrasonography or CT might demonstrate evidence of arterial calcification and establish the diagnosis. [5,6,11] GACI should also be considered when hydrops fetalis is detected antenatally by ultrasonography, as calcification is very rarely detected in the early stages of pregnancy. During pregnancy there might be polyhydramnios, and mild pericardial effusion may be seen in the fetus compared with normal, both of which were present in our index case [Figure 1a and 1b]. The clinical presentation is variable, and most cases are diagnosed postmortem. Affected patients usually present with respiratory distress, feeding difficulties, hypertension and progressive heart failure.
Electrocardiography can reveal evidence of myocardial ischemia, which suggests coronary-artery involvement, and might also show ventricular hypertrophy or impaired ventricular function consistent with infarction. [11] Coronary artery involvement can be lethal within the first 6 months. [12] The syndrome is histologically characterized by intimal proliferation and diffuse deposition of hydroxyapatite at the internal elastic lamina of medium-sized arteries and is correlated with loss-of-function mutations in the ENPP1 gene (chromosome 6q22-q23, OMIM 173335), encoding ectonucleotide pyrophosphatase/phosphodiesterase 1 (NPP1). This cell-surface enzyme is a type II membrane glycoprotein generating inorganic pyrophosphate (PPi) in vascular smooth muscle cells, chondrocytes and osteoblasts. PPi is a strong inhibitor of hydroxyapatite deposition. The pathogenesis of GACI is related to reduced NPP1 enzymatic activity leading to low extracellular PPi levels. [13][14][15] Rutsch and colleagues demonstrated that homozygous or compound heterozygous loss-of-function mutations in ENPP1 result in GACI in about 80% of the cases. [16] Accordingly, in the family presented here, ENPP1 mutational analysis was performed in the parents of an index case with clinically proven GACI. The parents were found to be heterozygous carriers of two different ENPP1 mutations. This makes it very likely that the deceased child was compound heterozygous for the same mutations. However, we were not able to prove this hypothesis, because no DNA was available from the deceased child. The couple was counseled accordingly, since they wanted another child.
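The recurrence-risk figure underlying such counseling follows directly from Mendelian segregation: when both parents are heterozygous carriers of a (compound-heterozygous-acting) ENPP1 mutation, each conceptus independently has a 1-in-4 risk of inheriting both mutant alleles. A minimal enumeration, with 'N' and 'm' as placeholder labels for the normal and mutant alleles:

```python
from itertools import product
from collections import Counter

# Each parent is a heterozygous carrier: one normal ('N') and one mutant
# ('m') allele, transmitted with equal probability.
father = ("N", "m")
mother = ("N", "m")

# Enumerate the four equally likely allele combinations in the offspring;
# sorting makes heterozygous genotypes order-independent ('Nm' == 'mN').
genotypes = Counter("".join(sorted(pair)) for pair in product(father, mother))
total = sum(genotypes.values())

for g, n in sorted(genotypes.items()):
    print(g, n / total)
# NN 0.25  (unaffected non-carrier)
# Nm 0.5   (unaffected carrier, like the parents)
# mm 0.25  (affected, as presumed for the index case)
```

The same 25% recurrence risk applies to every subsequent pregnancy, which is why prenatal mutation analysis from amniocentesis is informative in such families.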
It was reported earlier, in 2009, that GACI is a rare autosomal recessive disorder characterized by vascular calcification and myointimal proliferation in infancy, resulting in stenosis and reduced vascular elasticity. There is an urgent need to use molecular tests to confirm the diagnosis and to understand the risk-benefit ratio as well as the potential risks involved. Genetic counseling for GACI is very important, as the disease is transmitted in an autosomal recessive manner.
Considering the objective evidence of risk of recurrence of ENPP1 mutations in a subsequent conceptus, the parents made a conscious decision to try to have another child. Although the clinical diagnosis of GACI is based on clinical features and typical radiographic signs, molecular analysis of ENPP1 is mandatory for genetic counseling in the affected family. Hence, it is recommended that along with investigations like CT, more advanced investigations such as fetal echo scan and molecular analysis of ENPP1 should be included as this would help the clinicians and the prospective anxious parents to plan their families in a better way. In cases such as the present one, where a family has significant emotional and social issues to grapple with, the conflict as to whether to go for another pregnancy or not, can be answered with molecular diagnosis methods and the anxiety of the parents can be reduced. Given these concerns, a thorough explanation of the risk-benefit ratio should be considered whenever diagnostic tests are considered in patients with GACI, along with a detailed discussion with the parents about the potential risks involved.
The first protocol of stable isotope ratio assessment in tumor tissues based on original research
Thanks to proteomics and metabolomics, for the past several years there has been a real explosion of information on the biology of cancer, achieved by spectroscopic methods, including mass spectrometry. These modern techniques can provide answers to key questions about tissue structure and the mechanisms of its pathological changes. However, despite the thousands of spectroscopic studies in medicine, there is no consensus on issues ranging from the choice of research tools, and the acquisition and preparation of test material, to the interpretation and validation of the results, which greatly reduces the possibility of transforming the knowledge achieved into progress in the treatment of individual patients. The aim of this study was to verify the utility of isotope ratio mass spectrometry in the evaluation of tumor tissues. Based on experimentation on animal tissues and human neoplasms, the first protocol of stable isotope ratio assessment of carbon and nitrogen isotopes in tumor tissues was established.
Introduction
Over the past centuries and still today, light microscopy has been the most useful and versatile method of assessing cells and tissues, with multiple clinical implications in oncology. However, it is expected that mass spectrometry will play a lead role in biomedical research, including cancer diagnostics. A report of Strategic Directions International (SDI, Los Angeles) stated that mass spectrometers would become the most dynamically developing analytical instruments worldwide. This adamant prognosis may change our contemporary look at the concept of routine examination in medicine, especially as mass spectrometry (MS) techniques already prevail in many areas of research. Mass spectrometry was created as an analytical method at the end of the 19th century thanks to the work of Joseph John Thomson and Francis William Aston. That discovery has already yielded four Nobel Prizes. The first two were for Thomson in physics (1906, for electricity research) and for Aston in chemistry (1922, for the construction of a mass spectrometer). The invention of the ion trap, a new type of analyzer, earned its originators, Wolfgang Paul and Hans Georg Dehmelt, the third Nobel Prize, in physics. In 2002, John Fenn and Koichi Tanaka were awarded the fourth Nobel Prize, in chemistry, for the use of electrospray ionization in the analysis of biopolymers and the development of a new test method, namely, matrix-assisted laser desorption ionization (MALDI). Today spectrometry is a highly developed mass measurement technique. The devices in current usage, e.g. Fourier transform mass spectrometry (FTMS), are deployed to determine the composition of the sample at a resolution of one electron [1,2]. Depending on the application area, there are a great number of mass spectrometry method variants, differing in the way of preparation and sample introduction (gas, liquid, solid) as well as in ionizing the sample and analyzing the resulting ions. The application areas of contemporary MS related to medicine
are: clinical diagnosis (analysis of biomarkers), proteomics, metabolomics and genetics. By using so-called combination techniques, featuring mass spectrometry coupled with other methods, it is possible to conduct the identification and detailed analysis of selected peptides and proteins as well as the analysis of endogenous compounds present at very low concentrations. For example, the most commonly used are mass spectrometry coupled with liquid chromatography (liquid chromatography-mass spectrometry, LC-MS), tandem mass spectrometry (MS/MS) and mass spectrometry coupled with matrix-assisted laser desorption ionization (MALDI-MS) [3,4,5]. By using mass spectrometry we can also identify post-translational modifications, which in the light of current proteomics are considered by some researchers as crucial in the development of cancer. Similarly, the investigation of membrane phospholipids, which play such an important role in differentiation, signal transduction and cell proliferation, may be used for the evaluation of tumors. Current studies indicate the possibility of employing mass spectrometry (MALDI) as a source of cancer biomarkers.
There are reports in the available literature of its potential use in the detection of colon cancer in mice and of prostate cancer and lung cancer in humans. In the latter case, differences were observed in test results between primary and metastatic tumors [6,7,8].
Mass spectrometry reveals new factors that can potentially be used as diagnostic, prognostic, and predictive markers, so it has been hailed as the technology of the future in oncology [9]. However, in the area of proteomics alone, despite more than 100,000 studies [10], methodological issues still require standardization and research methodology remains an unresolved problem, which is emphasized by many researchers and most aptly put by one of them as 'running before we can walk' [11,12,13,14].
One specific technique is isotope ratio mass spectrometry (IRMS), which estimates the ratio of stable isotopes. This method allows one to determine the relative ratio of the heavier isotope to the lighter isotope and detects the enrichment or depletion of the sample in the heavier isotope, which, depending on the nature of the sample, may be linked to different physical processes (e.g. crystallization, diffusion), chemical reactions (chemical and enzymatic) or biological processes (biochemical reactions or changes in diet). Therefore, this technique is currently being used in such diverse fields as environmental sciences, ecology, archeology, geology, climatology, food authentication, criminology and others, but extremely rarely in clinical medicine. The major difference in comparison to other methods of mass spectrometry is that IRMS, rather than determining the mass of individual chemical compounds present in the sample, converts them into simple chemical compounds (e.g. CO2, H2O), and only then are the total isotopic ratios measured and reported as delta values (in parts per thousand). The main steps of IRMS analysis of a sample are: combustion or thermal conversion of the sample, ionization of the gas molecules, separation and detection of the ions, and evaluation of the raw data. A low number of studies and a lack of methodology standardization as regards the selection of tools and materials, their acquisition and evaluation may also be observed in connection with isotope ratio mass spectrometry. At the beginning of the twenty-first century, calls for new isotope studies appeared, but they again barely covered the area of medicine [15]. A vast majority of studies focused on archeology and the environment, and were carried out on animal models [16,17,18]. To date, the isotopic composition of pathologically altered human tissue has remained virtually unknown. In the literature there are only a few studies selectively showing random isotope elements in pathological
studies based on several cases [19,20,21,22]. The only material studied on a large scale in humans to date is blood drawn from randomly selected patients [23]. To our knowledge, neither the test methodology nor the size of the isotopic peaks in tumor tissue is known. In the study herein, the usefulness of isotope ratio mass spectrometry for the evaluation of cancerous tissues was verified and the first stable isotope evaluation protocol for some of the elements was established, most importantly for those involved in the processes of cell formation and growth. These are shown and discussed in terms of the methodology and pathology of cancer spectrometry.
Experimental part 1: animal tissue (13 samples of commercially available porcine muscle tissue)
The meat was EU RFN type, class I pork loin, with a protein content of 16.8% in fresh meat (67.2% of dry matter), an overall fat content not exceeding 30%, and connective tissue not exceeding 20%.
The factors examined in the analyses of animal tissue are shown in Table I.
Experimental part 2: human tumor tissue (53 samples from 20 cases)
The examined group was assembled so as to achieve the greatest possible heterogeneity of the samples and to cover all the isotope signals. The material included samples from males (14 samples) and females (6 samples) aged from 3 days to 9 years, benign tumors (4 samples) and malignancies (16 samples) at stages 1 to 4 (stage 1: 6 cases, stage 2: 2 cases, stage 3: 1 case, stage 4: 7 cases). There were cases with divergent clinical courses, including metastases (8 samples), recurrences (3 cases) and death (1 case), and with different histology across the tumors. The details of the histological types are presented in Table II.
Methods
For each sample of both animal and human tissue, in compliance with the agreement of the Bioethics Committee of the Medical University of Lodz (RNN/99/13/KE), three measurements were performed: 1) the isotope ratio measurement of nitrogen 15N/14N (Delta Air), 2) the isotope ratio measurement of carbon 13C/12C (Delta PDB), and 3) the isotope ratio measurement of sulfur 34S/32S (Delta CDT).
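The delta notation used for these three measurements can be illustrated with a short Python sketch. This is not part of the study's workflow; the reference ratio below (15N/14N of atmospheric N2) is an approximate, commonly cited value used here purely for illustration.

```python
# Delta notation sketch: a positive delta means the sample is enriched in
# the heavy isotope relative to the standard; a negative delta means it is
# depleted. The reference ratio below is an approximate literature value
# for atmospheric N2 and is used for illustration only.

R_AIR_15N_14N = 0.003676  # approximate 15N/14N ratio of atmospheric N2

def delta_permil(r_sample: float, r_standard: float) -> float:
    """Delta value in parts per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample 0.5% richer in 15N than air reports as about +5 permil:
enriched = delta_permil(R_AIR_15N_14N * 1.005, R_AIR_15N_14N)
# A sample 1% poorer in 15N reports as about -10 permil:
depleted = delta_permil(R_AIR_15N_14N * 0.990, R_AIR_15N_14N)
print(round(enriched, 3), round(depleted, 3))
```

The same formula applies to the carbon (PDB) and sulfur (CDT) scales; only the international standard ratio changes.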
Experimental part 1
Samples weighing from 3 mg to 11.74 mg were prepared from frozen tissue (-80°C). For the first two analyses, the samples were weighed and placed in 12.5 × 5 mm or 8 × 5 mm tin capsules, approximately 1 mg of vanadium pentoxide was added to each sample as a sulfur oxidation catalyst, and the capsules were folded carefully and kept at -80°C until analysis. For the third analysis, frozen samples were kept in 12.5 × 5 mm tin capsules at -80°C; on the day of analysis, they were dried in a vacuum for 5 hours at room temperature, around 1 mg of vanadium pentoxide was added to each sample as a sulfur oxidation catalyst, and the capsules were folded carefully.
Experimental part 2
Based on preliminary experiments, the optimal sample size was determined to be 5 ±1 mg. Frozen samples were weighed and placed in 12.5 × 5 mm tin capsules and kept at -80°C until analysis. Three samples were prepared from each tumor unless there was not enough material.
On the day of analysis, capsules with samples were dried in a vacuum for 5 hours at room temperature, around 1 mg of vanadium pentoxide was added to each sample as a sulfur oxidation catalyst, and capsules were folded carefully.
Isotope ratio measurement (experimental parts 1 and 2)
Isotope ratio measurements were performed using a Sercon 20-22 Continuous Flow Isotope Ratio Mass Spectrometer (CF-IRMS) coupled with a Sercon SL elemental analyzer for simultaneous carbon-nitrogen-sulfur (CNS) analysis. The system setup was the same as described in the article by Fry [24]. Each analysis started with three blank samples, followed by a primary reference material, followed by 5 samples, secondary reference material, 5 samples, and ended with another primary reference. Batches of 10 samples were measured daily so that samples from one tumor were analyzed in three consecutive measurements. In cases where there were fewer than three samples from one tumor, the analysis was shortened accordingly. An in-house standard, thiobarbituric acid (δ15N = -0.23 (Air), δ13C = -28.35 (PDB)), was used as the primary reference. The secondary standard material was glutamic acid (δ15N = 4.8 (Air), δ13C = -27.3 (PDB)) obtained from the CEISAM laboratory, University of Nantes. The primary standard material was used to determine the isotopic ratios of samples while the secondary standard was used as a control. Additionally, the ratio of total carbon to total nitrogen was calculated to check the homogeneity of samples, assuming that all tissue samples from one tumor should have the same elemental composition. Preliminary data analysis was performed using the Data Reprocessor software supplied with the spectrometer. The program was used to correct signal drifts and calculate delta values and the elemental composition of samples. Isotopic ratios were reported as delta values (in parts per mil, ‰), which are ratios of heavier to lighter isotopes relative to international standards for nitrogen (atmospheric, Air) and carbon (Pee Dee Belemnite, PDB) according to the formula:

δ (‰) = (R_sample / R_standard - 1) × 1000

where R_sample and R_standard are the heavier/lighter isotope ratios for the sample and the international standard, respectively.
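A minimal Python sketch of the bookkeeping involved (a single-point offset correction against the primary reference, and the C/N homogeneity check) may help make this concrete. The function names and example numbers are illustrative only; the actual processing was done with the vendor's Data Reprocessor software, which additionally corrects signal drift between references.

```python
# Illustrative sketch only: the actual study used the Data Reprocessor
# software, which also corrects signal drift across a measurement batch.

def offset_correct(raw_delta_sample: float,
                   measured_delta_ref: float,
                   true_delta_ref: float) -> float:
    """Shift a raw instrument delta value by the offset observed on a
    primary reference of known delta (e.g. the in-house thiobarbituric
    acid standard, d13C = -28.35 permil vs. PDB)."""
    return raw_delta_sample + (true_delta_ref - measured_delta_ref)

def cn_ratio(total_carbon: float, total_nitrogen: float) -> float:
    """Total C / total N ratio; replicate samples from one tumor are
    expected to give similar values if the tissue is homogeneous."""
    return total_carbon / total_nitrogen

# If the reference reads -28.00 permil instead of its true -28.35,
# every sample delta in that batch is shifted down by 0.35 permil:
corrected = offset_correct(-27.00, -28.00, -28.35)
print(corrected)

# Homogeneity check across three replicates of one tumor (made-up masses):
ratios = [cn_ratio(c, n) for c, n in [(40.2, 10.1), (39.8, 10.0), (40.5, 10.2)]]
print(max(ratios) - min(ratios))  # a small spread suggests homogeneous tissue
```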
Results
A total of 251 signals were obtained from the 66 examined tissue samples.
Experimental part 1
The signals of all three elements were observed. Tissue samples of approximately 10 milligrams produced unnecessarily high signals. The obtained results are shown in Tables III and IV.
Experimental part 2
The signals of all three elements were observed. The sulfur signals, even for the sample with the highest mass, were too low for reliable measurements (values ranged from -34.42 to 33.75).
Obtained results are shown in Tables V and VI.
Discussion
Numerous proteins and other biomolecules that differ in quantity or quality between normal and cancer cells have been identified so far. Thus, proteomics and metabolomics may be considered the basic spectrometric methods at the current early stage of cancer research. It should be noted, however, that both methods require isolation of the abnormal fraction, and therefore only well-known aspects of the biology of tumor cells may be subject to study.
Isotope ratio mass spectrometry, a likely source of new information, was chosen for our study because it is nowadays used in many areas of modern knowledge. In oncology, it can potentially reveal abnormalities in cancer cells at the lowest, atomic level, which remains unexplored.
In spectrometric studies, the optimal research protocol largely depends on the choice of material for research [25]. The study of stable isotopes has long been a familiar method in archaeology, because the most commonly examined materials are those that are the hardest and the last to disintegrate after the death of living organisms, namely bones, teeth, hair and nails. Their assessment provides a lot of information on environmental conditions, habits and migration processes, a little about certain pathologies, and none about neoplastic processes ongoing in the organism. However, IRMS allows one to study other materials obtained either invasively (body fluids or tissue) or non-invasively (exhaled air) [19,20,23,24].
Tumor tissue was chosen for the study from all the potential materials, and a protocol of stable isotope estimation was established. Tumor tissue presents a significant advantage over all the others: there is evidence that it is the most representative of cancer. In the field of proteomics, a wide variety of materials have been tested: cell lines, tumor tissue, plasma, urine and saliva of patients, and fluid transudates and exudates appearing in the course of cancer disease [26]. These studies showed that many factors affect the final results, and they not only interfere with proper interpretation but may even undermine the versatility of the obtained results. It has been proven that age, gender, ethnicity, body weight, food preferences, and menopause can generate artifacts in the study of body fluids, affecting the final outcome of the research, and that the results depend on the concentration of protein in the fluid [13,14,26,27]. In addition, an essential fact was disclosed: the results of proteomic analysis of cell lines in culture differ from those obtained in the primary tumor tissue [28]. This indicates that real biological activity can be assessed only by examination of tissue taken directly from a tumor. Obviously, the tumor tissue sample must be obtained invasively, but not in an additional procedure aggravating the patient. In the present research it was found that a tumor tissue sample of 0.5 mg was adequate for evaluation of the isotopic ratio of carbon and nitrogen. This minimal mass may be collected in the course of the necessary diagnostic procedures and can be simultaneously used for isotope studies without detriment to pathological diagnosis. Typically, the postoperative mass, as well as material from biopsy, weighs from a few to a few hundred grams, and therefore the use of 0.5 mg of tumor tissue for isotopic analysis does not limit the necessary diagnosis performed in a routine way. However, two general methodological aspects should be
emphasized. Firstly, material from fine needle aspiration biopsy (FNAB) is not appropriate due to the insufficient tumor tissue mass collected in this way. The actual impact of this limitation should be determined by investigating the usefulness of FNAB in the diagnosis of particular types of tumor. Secondly, there is another methodological aspect of study performance that seems of much greater significance for practice. It was revealed that the mass of tumor tissue necessary to obtain valuable carbon and nitrogen isotope signals differs several-fold from that required for sulfur. This must be taken into consideration when selecting the elements for analysis and collecting the material for the planned research. For methodological reasons, the main limitations of isotope ratio assessment are decomposition processes and contamination of samples. The initial isotope ratio can quickly change as a result of decay or changes in environmental conditions. Contamination is sometimes difficult to avoid, even in the laboratory: the use of water from different sources or exposure to ambient external conditions could undermine the credibility of the measurements. Procedures for securing the material against contamination, decay and changes in the isotopic ratio due to environmental conditions are known and substantially described [29]. It should be noted, however, that tumor tissues decompose relatively quickly and must be protected without the use of any chemicals. Freezing to -80°C makes tissues available for isotope assessment, but the time from collection of the examined material to freezing, and the time of preparation for the study after thawing, may be crucial.
The procedure used during sample preparation for isotopic studies (drying in vacuum and the addition of vanadium pentoxide) does not require particularly specialized equipment or skilled staff, and appears to be a relatively inexpensive, quick and simple procedure in comparison with the preparation of material for routine microscopic examination and, in particular, with proteomics, which requires a number of initial methods for the selection of proteins present in low concentration, for example: separation on reversed phase [30], filtration based on measurement of molecular weight [31], the use of biotinylated reagents [32], the use of chromatographic methods [33], microseparation [34], or fractionation by isoelectric focusing [35]. These procedures are not only very complicated and expensive, but also time-consuming, which may extend the time to result to several weeks. The duration of the study appears to be another advantage of IRMS. The preparation of the material and vacuum drying takes approximately 5 hours, and the routine stable isotope evaluation of one sample in the spectrometer takes about 20 minutes, a time comparable to intraoperative pathological examination. Notwithstanding its many advantages, isotope study shows some unfavorable features. The biggest limitation of the IRMS method is probably the high price and low availability of mass spectrometers for medical studies. The cost of the equipment amounts to approximately $200,000 and, in addition, its application requires qualified personnel and technical staff. There are only a few research centers in Poland where such spectrometers are available. So far, however, they have been mainly used by archaeologists, geologists, physicists and environmental researchers. Moreover, cancer research requires the cooperation of many professionals, from the collection of material, its microscopic evaluation, clinical examination of patients, and imaging and laboratory tests, to isotope ratio measurements. These studies
involve both specialists in oncology and specialists in spectrometry, who do not routinely collaborate with each other. All in all, the use of mass spectrometry for medical purposes is a relatively new phenomenon, and it is essential to develop good practices during studies as well as a comparable way of interpreting results by specialists whose areas of science have not converged until now.
The development of standardized procedures for the acquisition, preparation and evaluation of research materials in IRMS studies would allow us to obtain reliable results and to create open databases of stable isotope values. That aim seems most important in the pioneering stage of human tumor tissue research, and it is necessary to achieve the overarching objective of oncology: the transformation of knowledge into benefits for the treatment of patients with cancer.
Table I .
Summary of the examination of animal tissues
Table II .
Histology of examined tumors
Table III .
Summary of results of examination of animal tissues
Table IV .
Details of examination of carbon, nitrogen and sulfur isotope ratios in animal tissue
Table V .
Summary of results of delta Air and delta PDB examination of tumor tissues
Table VI .
Details of tumor tissue examination of carbon and nitrogen isotope ratios
Experiences of Japanese women simultaneously caring for children and older people: An ethnographic study
Background Japan has experienced a rapid decline in birth rate and an aging population, coupled with women choosing to delay having children. Family carers are therefore increasingly expected to accept simultaneous responsibilities for both children and parents. This responsibility often falls on women in Japan, but little is known about their views. This study aimed to understand how Japanese women who are simultaneously responsible for caring for children and older people perceive their experiences. Methods This was an ethnographic study conducted in central Japan. Over a period of 3 years and 5 months, we observed 19 people active in a peer support group for people with both childcare and caregiving responsibilities. We also carried out individual interviews with 14 Japanese women who were raising children and caring for parents or parents-in-law. Results Five key themes emerged. These were “Accepting both childcare and caregiving as my role,” “Inability to fulfill the role of mother,” “Being supported by children and grandparents,” “Unable to talk to anyone about the pressures of caregiving,” and “Realizing that caregiving is not the only way to live.” Conclusions Japanese women who provided care to both children and older people were influenced by traditional Japanese values. However, they had a sense of mission and accepted the role of providing for their families. They felt guilty about not being able to fulfill their role as mothers, and were lonely, with no one to understand or advise them. If the burden of caregiving is concentrated on women, there is an increased risk that their children will become involved in providing some of the care for older people. It may therefore be necessary to develop a support system for female carers, and to increase understanding of the potential harm of placing caregiving responsibility solely on women.
Introduction
Japan has rapidly transitioned to a super-aged society, and currently has the highest proportion of older people in the world. It is expected that the percentage of the population aged 65 years and over will rise to 35.6% by 2065, and further aging is predicted [1]. There is also a trend toward delaying having children, because more women are entering the workforce [2]. This suggests that women are more likely to be responsible for caring for both children and older people at the same time [3].
Individuals who experience the conflicting demands of caring for children and caring for aging parents are known as the "sandwich generation" [9,10] and tend to be between 40 and 65 years old [9,11].
Previous studies on the sandwich generation have shown that it is associated with health-related factors in both men and women. In Japan, for example, middle-aged men in the sandwich generation smoke less and experience fewer health problems. However, middle-aged women in the sandwich generation engage in less exercise and have fewer health checkups, and women with multiple roles generally have worse health [12]. International studies have reported that people with dual caring responsibilities experience imbalances in caregiving roles within the family [13] and feel stressed about balancing caring for their parents with their other roles [14]. Women who care for both children and older people report a higher rate of subjective feelings of ill health [15]. Those with children under 15 years old are at higher risk of developing depression [7]. The sandwich generation is therefore a group at high risk of poor physical and mental health.
Studies on the experiences of adult daughters with children who also provide care for their parents also report worrying results. Caregivers report having too little time in the day [16] and limited time for work and leisure activities [17]. Caregiving also changes the roles of parents and children [16]. The burden of caring for older people affects carers' relationships with their children and spouses [17].
Studies from several countries have demonstrated that the sociocultural background of carers is strongly related to gender issues. A 1992 meta-analysis of gender differences in caregiving found that women tended to provide more personal care, carried out more household tasks, and experienced higher levels of subjective burden than men [18]. A 2021 Canadian study also reported that women are more likely to be the primary caregivers for multiple family members [19]. In Japan, until 1947, there was a formal family-based system of inheritance, which reinforced strong gender role divisions. Husbands were traditionally expected to work outside the home, and wives to do the housework [20]. The 2023 Global Gender Gap Report [21] states that Japan still has a lower proportion of women in managerial positions and a lower labor force participation rate for women than many other countries. In addition, the concept of filial piety (respect for parents), which is a tenet of Confucianism, is an important and deeply rooted value in Japan and emphasizes family responsibility to care for aging parents. Children in Japan recognize that they have a duty to reciprocate the parental care they have received by caring for their parents in older age [22]. Furthermore, it is commonly expected that daughters-in-law (usually the wives of eldest sons) will undertake caregiving roles [23].
Sociocultural influences are also apparent in the experiences of Japanese women when they have their first child. A recent qualitative study found that Japanese primiparas experienced pressure from not being able to escape child-rearing, clung to the image of the ideal mother, and felt conflicted when comparing themselves with other mothers [24].
Japanese women who simultaneously care for children and older family members therefore experience two caregiving roles: one as a daughter or daughter-in-law and one as a mother. The previous studies mentioned above have investigated the experiences of Japanese women in one or other of these roles, but the experiences of Japanese women who simultaneously care for children and older people remain to be clarified. Understanding the values and beliefs of women who are simultaneously responsible for raising children and caring for their parents could help to deepen our understanding of their experiences. The purpose of this study was to understand how Japanese women who simultaneously care for children and older parents perceive their experiences.
Research approach
To better understand the experiences of Japanese women caring for both children and older parents, we conducted a qualitative study based on ethnographic principles [25,26]. We chose this research method because ethnography permits deeper insights into a phenomenon through a process of participatory observation. Throughout this process, the researcher shifts back and forth between emic (insider) and etic (outsider) perspectives. This approach incorporates a range of methods to study individuals in a real-world setting, including direct observation, video recordings, and analysis of documents and artifacts. The aim of an ethnographic approach is to describe the patterns of behavior of individuals or groups living within a particular social and cultural setting. Culture encompasses assumptions about the nature of reality and specific information related to that reality [27]. By applying ethnography, we hoped to identify common attitudes and behavioral patterns among women in Japan who are simultaneously responsible for providing care to children and parents. These insights should help us to understand how such women live within multiple relationships.
In Japan, a long-term care insurance system was implemented in 2000.The system ensures that people in all parts of the country receive the same level of long-term nursing care services as part of universal health coverage.
After World War II, a large proportion of the population in Japan moved from rural to urban areas, and the number of nuclear family households increased. Despite these patterns, in some regions of the country it is still common for three generations to live in the same household or nearby, and to support each other in childcare and nursing care. The process of urbanization in Japan has meant that many cities contain a mix of nuclear families and three-generation households. To ensure that participants were drawn from typical Japanese cities, central Japan was chosen as the study area.
In recent years, peer support activities for women who are simultaneously raising children and caring for older people have been initiated in several areas of Japan. In 2018, a peer support organization was established in the Chubu region to bring together people who have experience caring for both children and parents at the same time. The organization that conducts these peer support activities was founded by people caring for both children and older people in central Japan. Its main activity is the Carers' Café, where people share their concerns with their peers. The organization also disseminates information to the community to promote understanding of the challenges faced by carers with multiple responsibilities. The Carers' Café is held once every 2-3 months. It moves between venues in central Japan and was also held online after the start of the COVID-19 pandemic. The group usually includes approximately 10 caregivers in their 30s-50s.
It is helpful for researchers to attend meetings and social events to help them to understand research subjects more fully [28]. The first author started the research by engaging in fieldwork on peer support activities, to help them to understand the research subject. The fieldwork was initiated with the permission of the "gatekeeper," the group representative. The first author participated in Carers' Cafés and activity meetings (face-to-face or online chat) as an observer, and observed 19 different individuals over the study period.
We also interviewed women in central Japan who were simultaneously raising children and providing care for their parents or parents-in-law. The first author conducted semi-structured interviews with five female carers whom we met through peer support activities and nine female carers who did not participate in the Carers' Café. Three of the fourteen interviewees were also included in the observation sessions. The nine additional participants were recruited through snowball sampling, and were asked to participate in the study by members of the peer support organization or staff at the Community Comprehensive Support Center.
The criteria for participation in this study were having experience of caring for children under 18 years of age and providing care for parents or parents-in-law at the same time. Providing care for older people was defined as caring for someone who had been certified as requiring nursing care under Japan's Long-Term Care Insurance Law. We excluded individuals who provided care for a disabled child or spouse, and single mothers.
Data collection
We conducted this ethnographic research in central Japan over a 3-year, 5-month period from 2019 to 2022. Data were collected from observations of participants in peer support activities (such as Carers' Cafés), interviews, and handbooks and training materials prepared by peer support group members. The fieldwork data collection was not videotaped or audio-recorded, but after each participant observation session the first author immediately transcribed the collected data into field notes. Over the study period, the first author conducted 51 fieldwork sessions, amounting to a total of 108 non-consecutive hours. The main focus of the fieldwork observations was the experiences and backgrounds of female caregivers, how they perceived their experiences, and patterns of repeated experiences. In addition, the first author carefully observed which issues were mentioned by people who had the same experiences, and their reactions to these issues.
After 10 months of fieldwork, we interviewed women in central Japan who were simultaneously raising children and providing care for their parents or parents-in-law. Interviews were conducted in Japanese using an interview guide that had previously been pilot-tested. The interviews focused on the experiences of mothers, daughters (daughters-in-law), and wives. Interviewees recalled episodes that occurred while simultaneously raising children and providing nursing care, and freely talked about what they felt from their respective perspectives. The interviews were held in a private meeting room where participants' privacy was protected. The conference room where the interviews were conducted was chosen to be outside the areas where the study participants lived, but easily accessible. The interviews were audio-recorded and transcribed verbatim afterwards. Each interview included just two people, the study participant and the first author. Each participant was interviewed once, and the length of interviews ranged from 69 to 178 min, with an average of 115.9 min. Three participants asked to conduct their interviews online because of the risk of COVID-19. The online interviews were also audio-recorded. The first author explained that, to ensure participants' privacy during online interviews, the interview locations and times should be chosen so that no family members were present. None of the peer support group members or community support center staff declined the interview request. However, one woman changed her mind before the interview because of the risk of infection. Table 1 summarizes the demographic characteristics of the interviewed participants and their involvement in childcare and nursing care.
Data analysis
Hammersley and Atkinson [29] stated that data analysis in ethnography should be carried out throughout the research process and requires a flexible approach to the phenomenon. We therefore transcribed verbatim the words and phrases that showed the typical patterns, rules, and behaviors seen during participant observations, or that were obtained from informal conversations, materials, and interview data. During the study, the first author read the data repeatedly, and repeatedly discussed possible meanings with the other authors. This enabled us to gain a better understanding of the experiences, values, and beliefs of the participants. All data were analyzed using a thematic analysis procedure [30]. The analysis procedure consisted of reading the field notes and interview transcripts multiple times, identifying frequently recurring themes and patterns, and performing initial coding of phrases that described the caregiver experiences (by the first author). The coding was performed manually using NVivo 12 (QSR International Pty Ltd.)
and the data were also managed using this software. The codes were grouped into meaningful units by all authors, integrated into subthemes, and assigned overarching themes. In the analytical process, we used the constant comparative method [31], which is widely used in grounded theory. The constant comparative method is an effective way to obtain deeper analytical insight from observations of actions and other data. The data associated with each code are repeatedly compared to help generate a more reflexive understanding of the observations. Using this method, variations, similarities, and differences in the data were examined repeatedly by all authors and themes were identified. To ensure the accuracy of data interpretation, we also used triangulation to examine the similarities between different types of data from multiple sources (participant observations, interviews, and documents, i.e., handbooks and training materials) and to assess the validity of the analysis. The themes and subthemes were translated from Japanese to English by a translator familiar with this field. The accuracy of the translated content was checked by all authors.
Reflexivity
This study was conducted by female Japanese researchers with experience in childcare and in caring for older people, including parents. The investigators' influence on data collection and analysis was checked using reflection diaries kept by the first author and discussions with collaborators. The first author continually and critically checked for empirical bias throughout the research process through these reflective journals and ongoing discussions with co-authors.
Ethical considerations
This study was approved by the Kanazawa University Medical Ethics Review Board (919-1, 919-2). Permission for the researchers' participant observation was discussed among the representatives, vice representatives, and members of the peer support group, and was obtained with their consent. The first author orally explained the observation to the Carers' Café participants, including the reason for conducting the research, and orally confirmed their consent. The first author also obtained written consent for the interviews from the study participants.
Results
The researchers collected data through participant observation and interviews with women who were simultaneously caring for children and older people. Some of the women cared for parents who lived with them, whereas the care recipients of others lived separately. The care provided varied, but included physical care, support for daily living, and helping the other parent to provide care.
The peer support organization did not limit its activities to women. However, only women attended the Carers' Café, and all the carers active in the peer support group were women. Most of the women who participated in the Carers' Café were in their 30s and 40s, with a small number in their 50s. Most were housewives or worked part-time. These women attended the Carers' Café to talk to others who could relate to their experiences. Several said they had difficulty finding people with similar experiences on social media because they could not find relevant search terms. Most of them reported that it had taken them a considerable time to find information about the Carers' Café. In contrast, the study participants who did not attend the Carers' Café were all either part-time or full-time workers.

Table 2
Themes and subthemes of the experiences of Japanese women simultaneously raising children and caring for parents.

Theme: Accepting both childcare and caregiving as my role
Subthemes: Being expected to follow "unspoken rules" at home; Being concerned about their reputation with neighbors

Theme: Inability to fulfill the role of mother
Subthemes: Children and parents needing care at the same time; Being unable to prioritize children; Worrying about children's psychological well-being

Theme: Being supported by children and grandparents
Subthemes: Being helped by their children to care for their parents; A synergistic effect is created between children and grandparents

Theme: Unable to talk to anyone about the pressures of caregiving
Subthemes: Having a limited number of people to talk to about experiences; Being unable to talk easily to their husbands about caring

Theme: Realizing that caregiving is not the only way to live
Subthemes: Feeling uncomfortable with the "wear and tear" of the responsibilities of caring for older people; Working provides a release from the responsibilities of caring for parents
Interviews were conducted with 14 women (mean age 47.3 years, range 35-59 years). Some of these women were not currently caring for children and older people, but had previously done so.

The ideas that emerged from the research were grouped into 11 subthemes, and then into five themes. Table 2 shows the themes and subthemes, which were as follows: "Accepting both childcare and caregiving as my role," "Inability to fulfill the role of mother," "Being supported by children and grandparents," "Unable to talk to anyone about the pressures of caregiving," and "Realizing that caregiving is not the only way to live."
Accepting both childcare and caregiving as my role
The theme of "Accepting both childcare and caregiving as my role" was linked to two subthemes: women being expected to follow "unspoken rules" at home and being concerned about their reputation with neighbors. Many of the women saw caring for both children and older people as their role. Some had expected to do this since childhood, when they had seen their mothers taking care of their grandparents.

"My mother took care of my bedridden grandmother at home for about 10 years. From my earliest childhood, I had always seen my mother taking care of my grandmother. Because of that background, I felt like I also had to take care of my mother. Without seeing that, I might have given up on taking care of my mother." (Ms. K)

The women who cared for their own parents reported wanting to do so as part of their role as daughters. They almost had a sense of mission to support their parents' lives.
"I don't want to feel regret when my mother dies. That's why I want to do what I can for her." (Ms. C)
One woman who only had sons was considering relying on her son's wife for her own care in the future. She did not expect her son to take care of her.
"If I had a daughter, I could depend on her. I only have a son, so if I need care in the future, I will have to rely on my son's wife …" (Ms. N)
The relationship between the women and their care recipients influenced their acceptance of the caregiving role. In particular, daughters-in-law felt they could not refuse to provide care for their parents-in-law.

"Both my mother and mother-in-law needed care at the same time. There were no women in either household. At that time, I was not feeling well. I had to take care of my children and my parents. … My sister-in-law, who lives a long way away, called me and asked me to take care of her mother-in-law. I wanted to tell my sister-in-law, 'I'm having a hard time, too,' but I was worried that would hurt the relationship.
I didn't know what to do." (Ms. O)
When older people living with the family needed care, the women had naturally assumed the caregiving role without any family discussion. This was particularly true if the woman was already a housewife.
"We never discussed dividing caregiving responsibilities within the family, but I had a feeling that I would be the one responsible for all of it." (Ms. F)
Both the women and their relatives were aware of how provision of care for older people was viewed by their neighbors. In Japan, even when a family member provides the majority of the care, using specialized care services may be interpreted by neighbors as abandoning or neglecting the parent. One woman wanted to use nursing care services to reduce the burden of nursing her parents. However, she was opposed by her relatives, who were concerned about her reputation in the neighborhood.
Inability to fulfill the role of mother
The theme of "Inability to fulfill the role of mother" included subthemes such as children and parents needing care at the same time, being unable to prioritize children, and worrying about children's psychological well-being. However, this was almost never discussed in depth at the Carers' Café. One problem was that children and older people tend to have the same mealtimes, toilet times, bath times, and sleep times, which inevitably makes caring for both more difficult. In particular, women found it hard to prioritize either group at any given time.
"I think it comes down to prioritizing what to do when needs overlap. Mealtimes, bath times, and bedtimes are usually the same for both children and parents. It is difficult to manage when both need care." (Ms. B)
The women reported that their children were affected by the cognitive symptoms and psychological instability of their grandparents. The women felt the need to be involved with both their children and their parents, because the situation created negative interactions between the two groups. This was a difficult problem to solve.
"It was difficult for me to improve the relationship between my son and his grandmother. My son now has complicated emotions about his grandmother, and when she sees his attitude, she becomes aggressive, leading to an emotional discussion. It was difficult because both sides had their own position." (Ms. K)
As a result of their belief that they could not prioritize their children, many of the women were troubled by what they saw as their inability to fulfill the role of mother. When they prioritized care of their parents over childcare, some also experienced worry about their children's psychological well-being.
Being supported by children and grandparents
The women talked about their experiences of being helped by their children to care for their parents. Some noted that a synergistic effect was created between children and grandparents. In a participant observation, one woman said, "Caring for a child and an older parent at the same time is hard, but there are happy moments." The women described situations where they had seen their children accepting their grandparents without resistance. They also described times where the children had acted to dissolve a tense atmosphere in the household.
"When my eldest son and grandmother were arguing, my second son came in and said, 'Grandma, let's go to your room and sleep' and took her out of the room. My second son really helped me out in that situation." (Ms. K)
The synergistic effect created between children and grandparents was a positive influence produced as a result of interaction between the child and the grandparent. This experience was an emotional support for the women.
Unable to talk to anyone about the pressures of caregiving
The experiences women described under the theme "Unable to talk to anyone about the pressures of caregiving" included having a limited number of people to talk to about their lives. Many of the women mentioned being unable to talk easily to their husbands about caring. One subtheme was "Having a limited number of people to talk to about experiences." The women reported having many family members and friends around them, but not being able to talk to anyone about the pressures of providing care to their older parents. This was because many had become estranged from people with whom they had previously been involved because of their caregiving responsibilities. Therefore, they were no longer able to consult with these people. In Japan, separate support is generally provided for people caring for children and for older people, so women found it more difficult to obtain support that took both requirements into account. Carers who were providing both types of care did not receive adequate support.
"The hardest part of balancing childcare and housework was that even when seeking advice on caring for older people, the counselor would say, 'You need to spend more time listening to your mother-in-law.' But I have a child to take care of … When I mentioned this, the counselor didn't know what to say. When caring for both a child and a parent at the same time, I couldn't get any appropriate advice. There was no real help available. Despite my best efforts, it felt like there was no reward for my hard work." (Ms. B)
Few of the women had friends with any experience of caring for older people. The women felt that friends who had not experienced providing care could not understand the emotional pain they felt and their circumstances, and did not expect them to be able to do so.
"If you can't get sympathy from your friends, there's no point in talking about it. Empathy can reduce stress and make you feel like your story has been received by the other person. I am happy to have a friend who has had a similar experience of caring for a parent. Those who have never cared for a parent cannot fully understand the feelings of those who have." (Ms. D)
At the Carers' Café, first-time visitors were looking for peers who could empathize with them. One woman said that talking about her experiences with her peers "made me feel better" and it was good to "know I'm not the only one suffering." Several women reported being unable to consult even their husbands on caring for their parents or parents-in-law. One woman thought that caring for her parents was "an issue within the family I was born and raised in and has nothing to do with my husband." The women were afraid that if they talked to their husbands about their concerns about caring for their parents-in-law, their husbands would perceive it as criticizing their in-laws. This made them unwilling to consult their husbands. Several women commented that their husbands often did not cooperate with their requests about housework and childcare support. The women therefore did not expect the men to understand the pressures on them.

"My husband doesn't do chores right away when I ask him to, so I end up doing all the chores. I don't ask him to do anything. I did all the preparations for the camp the other day by myself because I knew my husband was sleeping in the other room. But when my child said, 'Daddy, help Mom,' my husband helped me. I usually don't ask him to help me with anything." (Ms. C)
Realizing that caregiving is not the only way to live
Under the theme of "Realizing that caregiving is not the only way to live," the women discussed experiences of feeling uncomfortable with the "wear and tear" of the responsibilities of caring for older people. Several also said that working provides a release from their caring responsibilities. This experience was seen in carers who were full-time housewives. It was a way of coping for women who were overwhelmed by their caring role and at risk of losing sight of themselves.
"What am I? Am I only a wife to my husband, a parent to my children, a daughter to my parents? Sometimes I don't know who I am. But I try my best to focus on what is in front of me. The only time I feel like I can catch my breath is when I am alone in the car. Because there I am not disturbed by anyone." (Ms. O)
The women tended to feel that they had no time for themselves because of their caring responsibilities. However, having a job outside the home temporarily freed them from these responsibilities and allowed them to connect with society. For these women, the workplace became the only place where they could be their true selves.
Discussion
This study explored the experiences of Japanese women who simultaneously care for both their children and their parents. We identified five themes. The originality of this study is that it elucidates the experiences of women who are simultaneously responsible for caring for children and older parents. A strength of this study is that we used multiple sources of information (participant observation, interviews, and document analysis) and analyzed the data using triangulation to assess the validity of the data.
The influence of traditional Japanese values
Underlying the fact that female caregivers take for granted that it is their role to provide care is a deep-rooted division of gender roles in Japan [32]. The women in this study did not feel uncomfortable with the idea that it was their role to take care of their parents. Instead, the gender roles of men as the workers and women as devoting themselves to housework and childcare were taken for granted.

Even today, although the traditional Japanese family system has been abolished, its remnants remain. The Japanese government reported in 2019 that 63.4% of women in their 40s who are likely to be caring for both children and older people do not believe roles should be dictated by gender [33]. However, even in this age group, in most households, women are responsible for housework and childcare. This shows that there is an unconscious assumption in Japanese society that it is normal for women to manage childcare and housework. Japan ranks 125th out of 146 countries in the 2023 Gender Gap Index from the World Economic Forum [21], and is especially low in terms of women's economic and political participation [21]. Donath [34] stated that women put themselves aside in a variety of situations and serve others. Women in this study who were full-time housewives experienced feeling uncomfortable with the "wear and tear" of caring responsibilities. We conclude that Japan has a male-dominated social structure, and that married women are still bound by caregiving roles and unable to make active choices about their lives and responsibilities.
Providing care for both children and older people
In Japan, in addition to the rapid aging of the population, the trend toward having children later [2] is likely to increase the risk of women being required to provide care for both children and older people at the same time. Participants in this study ranged in age from their mid-30s to late 50s and cared for both children and older parents. Therefore, they can be considered part of the sandwich generation [9][10][11]. Participants accepted their roles of caring for children and older relatives and tended to feel that they had no time for themselves because of their responsibilities. In addition, they worried that they would not be able to satisfactorily fulfill their role as mothers, and experienced difficulties talking to others about the pressures of caregiving. These reported problems, and the fact that they are part of the sandwich generation, make it likely that participants are at risk for physical and mental health problems. Previous studies have shown that female carers' role overload in caregiving affects their relationships with their children [17]. In this study, women described the dilemma of being unable to fulfill their role as a mother. They were also unable to talk to anyone about the pressures of caregiving. It is possible that these women feel guilty that they cannot fulfill all their roles. Guilt is defined as "the dysphoric feeling associated with the recognition that one has violated a personally relevant moral or social standard" [35, p. 318]. In other words, women experienced being unable to fulfill the role of mother as a sense of guilt toward their children. However, we suggest that this sense of guilt arose from traditional societal perceptions of women's roles.

The women also experienced being unable to talk to anyone about the pressures of caregiving. This may be because they were worried that expressing concerns would suggest that they could not fulfill the roles assigned to them by social norms and that their husbands and friends would not understand their situation. The women seemed to experience microaggressions, or negative expressions with hostile intent that made them feel undervalued, through words and actions in everyday situations. The women coped by not consulting anyone to avoid these microaggressions. However, this coping strategy could further isolate them. One woman said, "Only someone who has been through the same thing can relate." This narrative means that most women in this study felt that they had nobody who understood their position, and they were therefore more likely to be isolated.

A survey in Canada found that husbands of women providing care for children and older people can greatly reduce the burden on their wives by empathizing with their wives' efforts and helping with nursing care [17]. We found that the husbands of the women in our study rarely showed empathy or participated in caregiving for children or parents. The women felt that they did not receive adequate support from their husbands and were left to provide all the care on their own. Some of the women reported being supported by children and grandparents. This was a good experience for these women, because their children were involved in caring for their grandparents. However, it is possible that part of the caregiving role may shift to the child if their participation in caregiving becomes permanent. In other words, if the caregiving role becomes too much for their mothers, there is a risk that children will become young carers to support them.
Effective support measures
Policies such as flexible working systems and the development of childcare and older care services accessible to all are important to address the challenges experienced by Japanese women who simultaneously care for children and older parents. However, these policies alone are not sufficient. This is because Japanese society has an underlying value system in which it is taken for granted that women are responsible for housework, even within the family, and there is a lack of family understanding of women's use of care support services.

In Japan, the Long-Term Care Insurance Law was enacted to provide an alternative to family-dependent nursing care, and to socialize the process of providing this care. However, there are few services to promote the welfare of caregivers, and caregivers do not have the same rights as those receiving care [36]. We found that women who are balancing care for children and older people in Japan do not receive sufficient support. To ensure that support services are more accessible to Japanese women who are simultaneously responsible for childcare and parent care, there needs to be greater understanding in society of the negative effects on women of accepting all caregiving responsibilities. Greater social understanding would make it easier for women who care for both children and older parents to access services.

In addition to providing social services, creating opportunities for caregivers with similar experiences to talk to each other could help them to express their feelings of guilt and feel less isolated. The establishment of nationwide Carers' Cafés, which provide an opportunity for caregivers who simultaneously care for children and older people to meet, is increasing in Japan, but these are not always permanently located in places accessible to carers. Alternatives such as online cafés, social networking chat rooms, and other resources could help caregivers who cannot find in-person support to connect with other carers more frequently. Creating opportunities for caregivers who have had similar experiences to establish contact could help to reduce the emotional burden of this population.

Those providing support to people caring for either children or older people need to carefully gather information to ensure that these carers do not have overlapping caregiving roles within their family. Women providing care to older people can be difficult to identify because some do not live with their care recipients and others help their other parent to provide care. In addition, most consultation centers for childcare and older people in Japan are held in separate locations, and such support systems do not provide adequate support to carers. To make it easier for women to obtain help with care for both their parents and their children, it may be useful to set up "one-stop shops" to provide childcare and nursing care support in the community. This may require new staff roles to coordinate services to meet the needs of the target audience.

Japan faces the challenge of a shrinking labor force, with a further decline expected in birth rates and an aging population. Women need to participate in the workforce to maintain the social security system. However, in a society where it is taken for granted that women are responsible for caring for children and older people, women may find it harder to participate actively in society because of caring responsibilities. To shift to a society in which everyone can live as they wish, policymakers need to foster the momentum to enable both men and women to work while raising children and caring for older people.
Limitations
In this study, a qualitative approach was used to explore the experiences of women who are simultaneously caring for children and parents. The research was conducted in central Japan, but may be applicable to other regions or countries where women tend to be the main caregivers.

One potential limitation of the study was that only the first author collected the data. This may have affected the validity of the data.

The women interviewed included those who no longer provided care for both children and parents. One limitation of this study is that we cannot fully understand the trajectory of female caregivers until they have completed caregiving. In addition, the caregiving situation may change in future because the number of dual-earner households has been increasing among the younger generation in recent years. Future generations may have different values, and the gender divide in caregiving may also change.
In future, it is necessary to quantitatively investigate the real-world situation of women who are responsible for caring for children and older people at the same time, including the support available to them and issues within the support system.
Conclusions
This study explored the experiences of women in Japan who are simultaneously responsible for providing care for children and older people. Our results show that these women were influenced by traditional Japanese values, but also accepted the role of caregiver with a sense of mission. The women felt guilty about not being able to fulfill their role as mothers, and were often lonely with no one to understand or advise them. If the burden of caregiving is concentrated on women, there is an increased risk that some of the responsibility for caring for older people will be shifted to children. There is a need to develop a support system for female carers.
Table 1
Participant demographic characteristics and background in childcare and caregiving.

"Being at my mother's house means I can't spend time with my children. My daughter says to me, 'Why are you always taking care of my grandmother?' My mother complains all the time and says she wants to die. Seeing her like that, my daughter said, 'I don't want to go to Grandma's house anymore.' I wish I could explain it to my kids, but I don't know how. It is difficult." (Ms. H)

"When my daughter was younger, she had a severe injury to her arm because I wasn't paying attention to her while I was cleaning up my step-grandmother's fecal incontinence. I still want to do something about my daughter's wound. I asked her, 'Don't you care about the scars on your arms?' She replied, 'It doesn't bother me that much.' But I believe she is bothered about it." (Ms. F)
The ANTARES experiment is currently the largest underwater neutrino telescope in the Northern Hemisphere. It has been taking high-quality data since 2007. Its main scientific goal is to search for the high energy neutrinos that are expected from the acceleration of cosmic rays in astrophysical sources. This contribution reviews the status of the detector and presents several analyses carried out on atmospheric muons and neutrinos. For example, it shows the results from the measurement of the atmospheric muon neutrino spectrum and of the atmospheric neutrino oscillation parameters, as well as searches for neutrinos from steady cosmic point-like sources, for neutrinos from gamma-ray bursts and for relativistic magnetic monopoles.
NEUTRINO ASTRONOMY
Neutrino astronomy has a unique opportunity to observe processes that are inaccessible to optical telescopes or cosmic ray observatories. The advantage of neutrinos over other cosmic messengers such as photons and protons is that they can travel over cosmological distances without being absorbed, as photons are, or deflected by magnetic fields, as charged particles are.
The existence of high energy cosmic rays has been known for over 100 years, but their astrophysical origin and the mechanism of their acceleration to such high energies are still unclear. The observation of cosmic rays is a strong argument for the existence of high energy neutrinos from the cosmos. Cosmic neutrinos are expected to be emitted along with gamma-rays by astrophysical sources in processes involving the interaction of accelerated hadrons with ambient matter or dense photon fields. The subsequent production and decay of pions produce high-energy neutrinos and photons.
The weak interaction of neutrinos with matter makes their detection challenging. A cost-effective way to detect high energy neutrinos is to use target material found in nature, like water and ice. The detector material has to be equipped with a three-dimensional array of light sensors, so that muon neutrinos are identified by the muons that are produced in charged current interactions. These muons are detected by measuring the Cherenkov light that they emit when moving faster than the speed of light in the detector material. The timing of the Cherenkov light recorded by the light sensors allows the trajectory of the muon to be reconstructed, and hence the arrival direction of the incident neutrino to be inferred. This technique is used in large-scale Cherenkov detectors like IceCube [1] and ANTARES [2], which are currently looking for high-energy (>TeV) cosmic neutrinos.
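As a back-of-the-envelope check of the detection principle described above, the sketch below computes the Cherenkov emission angle and the muon energy threshold in water. The refractive index n = 1.35 is an assumed round value for deep-sea water, not an ANTARES-specific calibration number.

```python
import math

N_SEAWATER = 1.35   # assumed refractive index of deep-sea water
M_MU_GEV = 0.10566  # muon mass in GeV

def cherenkov_angle_deg(n=N_SEAWATER):
    """Cherenkov emission angle for an ultra-relativistic (beta ~ 1) particle."""
    return math.degrees(math.acos(1.0 / n))

def cherenkov_threshold_gev(mass_gev, n=N_SEAWATER):
    """Minimum total energy for Cherenkov emission: requires beta > 1/n."""
    beta_min = 1.0 / n
    gamma_min = 1.0 / math.sqrt(1.0 - beta_min ** 2)
    return gamma_min * mass_gev

print(f"Cherenkov angle in water: {cherenkov_angle_deg():.1f} deg")          # ~42 deg
print(f"Muon Cherenkov threshold: {cherenkov_threshold_gev(M_MU_GEV):.3f} GeV")
```

Any muon energetic enough to travel a detectable distance in the instrumented volume is far above this threshold, so the emission angle is effectively fixed, which is what makes timing-based track reconstruction possible.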
ANTARES NEUTRINO TELESCOPE
The ANTARES (Astronomy with a Neutrino Telescope and Abyss environmental RESearch) detector has been taking data since the first lines were deployed in 2006. It is located in the Mediterranean Sea, 40 km off the French coast at 42°50'N, 6°10'E. The detector consists of twelve vertical lines equipped with 885 photomultipliers (PMTs) in total, installed at a depth of about 2.5 km. The distance between adjacent lines is of the order of 70 m. Each line is equipped with up to 25 triplets of PMTs spaced vertically by 14.5 m. The PMTs are oriented with their axes pointing downwards at 45° from the vertical. The instrumented detector volume is about 0.02 km³. The design of ANTARES is optimized for the detection of upward going muons produced by neutrinos which have traversed the Earth, in order to limit the background from downward going atmospheric muons. The instantaneous field of view is half of the sky for neutrino energies between 10 GeV and 100 TeV, due to the selection of upgoing events and the size of the detector. Further details on the detector can be found elsewhere [2].
In this proceeding there is not enough room to discuss all topics which were presented at the Particle Physics and Cosmology workshop, such as atmospheric muons [3], diffuse neutrino fluxes [4], neutrinos from dark matter in the Sun [5], the time calibration system [6] and the acoustic neutrino detection system [7], for which the reader is referred to elsewhere. A short description of some interesting measurements and searches using ANTARES data is presented in the following.

FIGURE 1. Left: The reconstructed zenith angle data distribution of selected events compared to the Monte Carlo distribution for the atmospheric neutrino and muon background. Right: The zenith-averaged atmospheric neutrino energy spectrum. The ANTARES result is shown together with the results from AMANDA-II [8] and IceCube40 [9]. The solid black line represents the conventional flux prediction from the Bartol group and the shaded area represents its uncertainty [10]. The dashed red and dotted blue lines include two prompt neutrino production models from [11] and [12], respectively.
Measurement of Atmospheric Muon Neutrino Spectrum
Even if the primary aim of ANTARES is the detection of high energy cosmic neutrinos, the detector measures mainly downward going atmospheric muons and upward going atmospheric muon neutrinos. The atmospheric muons are produced in the upper atmosphere by the interactions of cosmic rays and can reach the apparatus despite the shielding provided by 2 km of water. Atmospheric neutrinos, produced in the same atmospheric cascades as the muons mentioned above, can travel through the Earth and interact in the vicinity of the detector, producing the upward going events. Figure 1 left shows a comparison of the zenith angle distribution between data and Monte Carlo simulation. It can be seen that the flux of atmospheric muons is several orders of magnitude larger than that of atmospheric neutrinos and that there is good agreement between data and the Monte Carlo simulation.

The measurement of the atmospheric muon neutrino spectrum has been performed using 2008-2011 data, for a total equivalent live time of 855 days. A determination of the neutrino energy is needed for such a measurement. In the analysis two energy estimators were used. The first one is based on the muon energy loss along its trajectory, and the second one relies on a maximum likelihood method attempting to maximize the agreement between the observed and expected amount of light. To reconstruct the energy spectrum an unfolding procedure is used for both methods, and the results are shown in Figure 1 right. Within the errors the ANTARES results are compatible with the spectra measured by the Antarctic neutrino telescopes. Also shown is the distinction between neutrinos produced by the decay of pions and kaons up to about 100 TeV, the so-called conventional neutrinos, and neutrinos produced by the decay of charmed mesons, the so-called prompt neutrinos [11,12]. The energy dependence of the prompt neutrino flux is poorly constrained: its precise features are sensitive to hadronic interaction models, and its spectrum is less steep than that of the conventional flux. The highest energy region of the atmospheric neutrino spectrum has been used to put a constraint on the diffuse flux of cosmic neutrinos [4]. Such a diffuse flux would reflect the existence of a cumulative neutrino flux from a bulk of unresolved astrophysical sources.
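The unfolding step can be illustrated with a deliberately simplified toy: a "true" spectrum is folded with an assumed bin-to-bin migration (response) matrix, and then recovered by inverting that matrix. This is only a schematic of the idea; the actual ANTARES analysis uses a dedicated unfolding procedure with regularization and statistical uncertainties, and every number below is made up for illustration.

```python
import numpy as np

# Toy "true" power-law-like spectrum in 5 energy bins (arbitrary units)
true_spectrum = np.array([1000.0, 320.0, 100.0, 32.0, 10.0])

# Toy detector response: each true-energy bin (column) leaks ~10-20%
# of its events into neighbouring reconstructed-energy bins (rows)
R = np.array([
    [0.8, 0.1, 0.0, 0.0, 0.0],
    [0.2, 0.8, 0.1, 0.0, 0.0],
    [0.0, 0.1, 0.8, 0.1, 0.0],
    [0.0, 0.0, 0.1, 0.8, 0.2],
    [0.0, 0.0, 0.0, 0.1, 0.8],
])

observed = R @ true_spectrum             # folded (reconstructed-energy) counts
unfolded = np.linalg.solve(R, observed)  # invert the response to recover truth

print(observed)
print(unfolded)  # matches true_spectrum in this noiseless toy
```

With real, statistically fluctuating data a plain matrix inversion amplifies noise, which is why unfolding methods in practice add regularization or iterate (e.g. Bayesian unfolding); the toy only shows the forward-folding relation the procedure inverts.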
Measurement of Atmospheric Muon Neutrino Oscillations
At the lowest neutrino energies ANTARES is sensitive to the neutrino oscillation parameters through the disappearance of atmospheric muon neutrinos [13]. Neutrino oscillations are commonly described in terms of L/E, where L is the oscillation path length and E is the neutrino energy. For upward going neutrinos crossing the Earth the travel distance L translates to D cos θ, where D is the Earth diameter and θ the zenith angle. Within the two-flavor approximation, the ν_μ survival probability can be written as

P(\nu_\mu \to \nu_\mu) = 1 - \sin^2(2\theta_{23})\, \sin^2\!\left(\frac{1.27\, \Delta m^2_{23}\, L}{E_\nu}\right),

where θ_23 is the mixing angle and Δm²_23 is the squared mass difference of the mass eigenstates (with L in km, E_ν in GeV and Δm²_23 in eV²). The survival probability P depends only on the two oscillation parameters, sin² 2θ_23 and Δm²_23, which determine the behavior of the atmospheric neutrino oscillations. Taking the recent results from the MINOS experiment [14], the first minimum in the muon neutrino survival probability (P(ν_μ → ν_μ) = 0) occurs for vertical upward going neutrinos at about 24 GeV. Muons induced by a 24 GeV neutrino travel on average around 120 m in sea water. The detector has PMTs spaced vertically by 14.5 m, so that this energy range can be reached for events detected on a single line.
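As a numerical cross-check of the quoted 24 GeV first minimum (not part of the paper), the two-flavor survival probability can be evaluated in a few lines; the Earth diameter and the MINOS-like Δm²_23 value used below are assumptions.

```python
import math

def survival_probability(e_nu_gev, l_km, dm2_ev2, sin2_2theta=1.0):
    """Two-flavor nu_mu survival probability (L in km, E in GeV, dm2 in eV^2)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * l_km / e_nu_gev) ** 2

D_EARTH_KM = 12742.0  # assumed mean Earth diameter: vertical upgoing path length
DM2 = 2.32e-3         # eV^2, MINOS-like squared mass difference (assumption)

# First minimum of P: the oscillation phase 1.27 * dm2 * L / E equals pi/2.
e_first_min = 1.27 * DM2 * D_EARTH_KM / (math.pi / 2.0)
print(f"first survival minimum near {e_first_min:.0f} GeV")  # about 24 GeV
print(f"P at that energy: {survival_probability(e_first_min, D_EARTH_KM, DM2):.3f}")
```

With these inputs the first minimum lands near the 24 GeV figure quoted in the text.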
The flight path through the Earth is reconstructed via the zenith angle θ_R, which is estimated from a muon track fit [15], whereas the neutrino energy E_R is estimated from the observed muon range in the detector. Figure 2 left shows the event rate of the measured variable E_R/cos θ_R for a data sample from 2007 to 2010 with a total live time of 863 days. Neutrino oscillations cause a clear event suppression for E_R/cos θ_R < 60 GeV, with a clean sample of atmospheric neutrinos with energies as low as 20 GeV. The parameters of the atmospheric neutrino oscillations are extracted by fitting the event rate as a function of E_R/cos θ_R; the fit is plotted as the red curve in Figure 2 left, with values Δm²_23 = 3.1 × 10⁻³ eV² and sin² 2θ_23 = 1. This measurement is converted into limits on the oscillation parameters, shown in Figure 2 right. If maximum mixing is imposed (sin² 2θ_23 = 1) the result is Δm²_23 = (3.1 ± 0.9) × 10⁻³ eV², in good agreement with the world average value. Although the results are not competitive with dedicated experiments, the ANTARES detector demonstrates the capability to measure the atmospheric neutrino oscillation parameters and to detect and measure neutrinos with energies as low as 20 GeV. It was the first time that a high energy neutrino telescope was used to measure the atmospheric neutrino oscillation parameters.
Measurement of Velocity of Light in Water
The correct understanding of the velocity of light in the water at the detector site is crucial to reach the optimal performance of the detector. It is well known that charged particles crossing sea water induce the emission of Cherenkov light whenever the condition β > 1/n_p is fulfilled, where β is the speed of the particle relative to the speed of light in vacuum and n_p is the phase refractive index. The Cherenkov photons are emitted at an angle with respect to the particle track given by cos θ_c = 1/(β n_p). The individual photons travel in the medium at the group velocity. Both the phase and group refractive indices depend on the wavelength of the photons, which makes the emission angle and the speed of light wavelength dependent. This velocity of light has been measured using a set of pulsed light sources (LEDs emitting at different wavelengths) distributed throughout the detector, illuminating the PMTs through the water [16]. In special calibration runs the emission time and position of the isotropic light flash, as well as the arrival time and position of the light at the PMTs, are used to measure the velocity of light. The refractive index has been measured at eight different wavelengths between 385 nm and 532 nm. This refractive index with its systematic errors is shown in Figure 3 left, together with a parametric formula of the refractive index. The measurements are in agreement with the parametrization of the group refractive index.
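As a small illustration of the relations above (not from the paper), the Cherenkov threshold and emission angle follow directly from cos θ_c = 1/(β n_p); the phase index value below is an assumed, typical sea-water number.

```python
import math

def cherenkov_angle_deg(beta, n_phase):
    """Cherenkov emission angle from cos(theta_c) = 1 / (beta * n_phase)."""
    cos_tc = 1.0 / (beta * n_phase)
    if cos_tc > 1.0:
        raise ValueError("below Cherenkov threshold: beta <= 1/n_phase")
    return math.degrees(math.acos(cos_tc))

N_PHASE = 1.35  # assumed phase refractive index of sea water near 470 nm

beta_threshold = 1.0 / N_PHASE  # emission requires beta > 1/n_p
print(f"threshold beta = {beta_threshold:.3f}")
print(f"emission angle for beta ~ 1: {cherenkov_angle_deg(1.0, N_PHASE):.1f} deg")
```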
Measurement of Electromagnetic Showers along Muon Tracks
The ANTARES detector measures mainly downward going muons, which are the decay products of cosmic ray collisions in the Earth's atmosphere. Atmospheric muon data have been used for several analyses [3,17,18]. In particular, the collaboration investigated the sensitivity to the composition of cosmic rays through the downward going muon flux [19]. Several observational parameters are combined to estimate the relative contribution of light and heavy cosmic rays; one of these parameters is the number of electromagnetic showers along muon tracks.
Catastrophic energy losses occur occasionally when a high energy muon (∼ 1 TeV) traverses the water. These energy losses are characterized by discrete bursts of Cherenkov light originating mostly from pair production and bremsstrahlung (electromagnetic showers). A shower identification algorithm [20,21] is used to identify the excess of photons above the continuous baseline of photons emitted by a minimum-ionizing muon. With this method downward going muons with energies up to 100 TeV have been analyzed.
The muon event rate as a function of the number of identified showers is plotted in Figure 3 right. The distribution shows the results for data and a Corsika based simulation. As can be seen, about 5% of the selected muon tracks have at least one well identified shower. Also shown is the systematic uncertainty for the simulation, where the largest systematic errors arise from uncertainties on the PMT angular acceptance and the absorption length.
Search for Sources of Cosmic Neutrinos
The collaboration has developed several strategies to search in its data for point-like cosmic neutrino sources [22,23], possibly in association with other cosmic messengers such as gamma-rays [24,25], gravitational waves [26] or gravitational lenses. Clustering of neutrino arrival directions can provide hints of their astrophysical origin. In the search for cosmic neutrino point sources, upward going events have been selected in order to reject atmospheric muons. Most of the remaining events are atmospheric muon neutrinos, which constitute an irreducible diffuse background for cosmic neutrino searches. The 2007-2010 data contain around 3000 neutrino candidates with a predicted atmospheric muon neutrino purity of around 85%. The estimated angular resolution is 0.46 ± 0.10 degrees. The selection criteria are optimized for an E⁻² neutrino flux from point-like astrophysical sources, following two different strategies: a full sky search and a search in the direction of particularly interesting candidate sources. The selection of these sources is based either on the intensity of their gamma-ray emission as observed by Fermi [27] and HESS [28], or on strongly gravitationally lensed sources with large magnification. The motivation to select lensed sources is that neutrino fluxes, like photon fluxes, can be enhanced by the gravitational lensing effect, which could allow the observation of sources otherwise below the detection threshold.
The cosmic point source search has been performed using an unbinned maximum likelihood method [23]. This method uses the information on the event direction and, since cosmic sources are expected to have a much harder spectrum than atmospheric neutrinos, the number of hits produced by the track. For each source, the position of the cluster is fixed at the direction of the source and the likelihood function is maximized with respect to the number of signal events. In the absence of a significant excess of neutrinos above the expected background, an upper limit on the neutrino flux is calculated. A full sky point source search based on this algorithm has not revealed a significant excess in any direction. The most significant cluster of events in the full sky search, with a post-trial p-value of 2.6% (equivalent to 2.2σ), corresponds to the location (α, δ) = (−46.5°, −65.0°). No significant excess has been found either in the dedicated search over the list of 11 lensed and 51 gamma-ray selected neutrino source candidates. The obtained neutrino flux limits for these selected directions are plotted as a function of declination in Figure 4 left, where the limits set by other neutrino experiments are also shown for comparison.
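The fit described above can be illustrated with a deliberately simplified toy (all numbers here are invented): events on a one-dimensional angular coordinate, a Gaussian point-spread function for the signal, a flat background, and a grid-search maximization of the unbinned likelihood over the number of signal events. The real analysis also uses the hit-count (energy) information, which is omitted here.

```python
import math
import random

random.seed(1)

SIGMA = 0.46       # deg, assumed angular resolution (Gaussian PSF width)
HALF_WIDTH = 5.0   # deg, half-size of the toy declination band (invented)

def signal_pdf(x):
    """Gaussian PSF centered on the candidate source at x = 0."""
    return math.exp(-0.5 * (x / SIGMA) ** 2) / (SIGMA * math.sqrt(2.0 * math.pi))

def background_pdf(x):
    """Flat atmospheric-neutrino background over the band."""
    return 1.0 / (2.0 * HALF_WIDTH)

def log_likelihood(events, n_s):
    """Unbinned log-likelihood for n_s signal events out of N total."""
    n_tot = len(events)
    return sum(math.log(n_s / n_tot * signal_pdf(x)
                        + (1.0 - n_s / n_tot) * background_pdf(x))
               for x in events)

# Toy data set: 95 background events plus 5 signal events near the source.
events = [random.uniform(-HALF_WIDTH, HALF_WIDTH) for _ in range(95)]
events += [random.gauss(0.0, SIGMA) for _ in range(5)]

# Maximize the likelihood over n_s with a coarse grid search.
best_ns = max((0.1 * k for k in range(0, 301)),
              key=lambda n: log_likelihood(events, n))
print(f"fitted number of signal events: {best_ns:.1f}")
```

In the real analysis the test statistic built from this likelihood ratio is calibrated with pseudo-experiments to obtain post-trial p-values.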
Search for Coincidence of Neutrinos and Gravitational Waves
Both neutrinos and gravitational waves are cosmic messengers that can escape from the core of the sources and travel over large distances through magnetic fields and matter without being altered. They could give important information about the processes taking place in the core of the production sites, and they could also reveal the existence of sources opaque to hadrons and photons, such as failed GRBs. A first joint search for gravitational waves and neutrinos was performed using 2007 data taken with ANTARES and the gravitational wave detectors VIRGO and LIGO [26]. The strategy consists of an event-per-event search for a gravitational wave signal correlated in space and time with a given high-energy neutrino event, considered as an external trigger. No coincident event was observed, which allowed upper limits to be placed on the volume and density of joint gravitational wave and neutrino emitters. The gravitational wave horizon has been estimated to be ∼ 10 Mpc for mergers and ∼ 20 Mpc for collapses. The density limit ranges from 10⁻² Mpc⁻³ yr⁻¹ for short GRB-like signals to 10⁻³ Mpc⁻³ yr⁻¹ for long GRB-like emission. These density limits are presented in Figure 4 right and are compared to other objects of interest.
Search for Neutrinos from Gamma Ray Bursts
Another possible way to discover cosmic neutrinos is to observe neutrino events in coincidence in direction and time with gamma ray bursts (GRBs). Gamma ray bursts are intense flashes of gamma rays resulting from a highly relativistic jet formed during the collapse of a massive star, as in supernova events. Two searches for a neutrino flux in coincidence with GRBs have been made. The first selected 40 GRBs that occurred in 2007 [31]. The second is based on 2008-2011 data with 296 GRBs, representing a total equivalent live time of 6.55 hours. In both cases, zero events were found in correlation with the photon emission of the GRBs. Figure 5 left shows the upper limits on the total flux for a fully numerical neutrino model including Monte Carlo simulation [32] and for an analytical model [33].
Search for Neutrinos from Fermi Bubbles
The Fermi satellite has revealed an excess of gamma-rays in an extended pair of bubbles above and below the Galactic Center. These so-called Fermi Bubbles (FBs) cover about 0.8 sr of the sky, have sharp edges, are relatively constant in intensity and have a flat E⁻² spectrum between 1 and 100 GeV. It has been proposed that the FBs originate from cosmic ray interactions with the interstellar medium, which produce pions [34]. In this scenario, gamma rays and high-energy neutrinos with similar fluxes are expected from the pion decays.
ANTARES has an excellent visibility of the FBs, and a dedicated search for an excess of neutrinos from the FB region has therefore been performed [35]. The analysis compares the rate of observed neutrino events in the FB region to the average rate observed in three equivalent off-zone regions. Each off-zone is equivalent in size and has on average the same detector efficiency as the FB region. The analyzed 2008-2011 data reveal 16 neutrino events inside the FB region, while estimations from the off-zones predict 11 neutrino events. These results are compatible with no signal, and limits are placed on the neutrino flux for various assumptions on the energy cutoff at the source. Figure 5 right shows the upper limits and compares them to the expected signal for optimistic models [34]. The calculated upper limits lie within a factor of 3 above the expected signal.

FIGURE 6. Left: The ANTARES 90% C.L. upper limit on the upgoing magnetic monopole flux as a function of the monopole velocity β. Also shown are the theoretical Parker bound [38], the published upper limits obtained by MACRO [39] for an isotropic flux of monopoles, as well as the upper limits from Baikal [40] and AMANDA [41] for upgoing monopoles. Right: The ANTARES 90% C.L. upper limits on a downgoing flux of nuclearites as a function of the nuclearite mass, compared to the limits reported by MACRO [44] and SLIM [45].
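The simplest way to gauge the 16-observed versus 11-expected comparison is a one-sided Poisson p-value. This is a back-of-the-envelope estimate, not the collaboration's statistical treatment, which also accounts for the uncertainty on the off-zone expectation:

```python
import math

def poisson_sf(n_obs, mu):
    """P(X >= n_obs) for X ~ Poisson(mu), by direct summation."""
    return 1.0 - sum(math.exp(-mu) * mu ** k / math.factorial(k)
                     for k in range(n_obs))

n_on = 16             # events observed inside the FB region
mu_background = 11.0  # expectation estimated from the off-zones

p_value = poisson_sf(n_on, mu_background)
print(f"one-sided Poisson p-value: {p_value:.3f}")
```

A p-value of order 0.1 is consistent with the "compatible with no signal" conclusion quoted above.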
Search for Relativistic Magnetic Monopoles
The existence of magnetic monopoles is a generic prediction of grand unification theories, and such monopoles are expected to have been produced in the early Universe. The mass predicted for magnetic monopoles ranges from 10⁴ GeV to 10²⁰ GeV depending on the specific model [36]. Magnetic monopoles are one of the few predictions of grand unification theories that can be studied in the present environment. Their detection relies on the large amount of light they emit compared to muons: the Cherenkov light emission of a monopole exceeds that of a singly charged minimum ionizing particle by a factor of order 10⁴. The search for upgoing relativistic magnetic monopoles was performed with 116 days live time of ANTARES 2007-2008 data [37]. One event was observed, consistent with the expected atmospheric neutrino and muon background. The derived limits on the upgoing magnetic monopole flux for monopoles with velocity β > 0.625 are shown in Figure 6 left.
Search for Slowly Moving Nuclearites
Nuclearites are hypothetical massive particles assumed to be stable lumps of up, down and strange quarks in nearly equal proportions. They could be present in cosmic rays as relics of the early Universe. Nuclearite detection in neutrino telescopes is possible through the blackbody radiation emitted by the expanding thermal shock wave along their path [42]. A search was performed for downgoing slowly moving nuclearites (β ∼ 10⁻³) with data collected in 2007 and 2008 [43]. Only nuclearites with masses larger than a few × 10¹³ GeV produce enough light to be detected within the detector. A dedicated search strategy found no significant excess of nuclearite events. The upper limits on the flux of downgoing nuclearites are shown in Figure 6 right for the mass range ∼ 10¹³-10¹⁷ GeV.
CONCLUSION
ANTARES has been taking data since the first lines were deployed in 2006 and is foreseen to take data at least until the end of 2016. With these data a broad physics program is underway, producing competitive results.

Birth Intervals Among Multiparous Women in Indonesia
Maternal and infant mortality rates in Indonesia are currently high. One of the factors causing the high risk of maternal and infant mortality is too short birth intervals. This study aimed to identify determinants of birth intervals among multiparous women in Indonesia. The study used data from the Indonesia Demographic and Health Survey 2012 covering 9,945 multiparous women. The data were analyzed using Mann-Whitney, Kruskal-Wallis and logistic regression tests. The median birth interval was 62 months, and 22.8% of women had a birth interval of less than three years. Results showed that the determinants of birth intervals included maternal education, age at the last childbirth, ideal family size, contraceptive use, infant mortality records and survival of the preceding child (p value < 0.05). Age at childbirth was the major risk factor for too short birth intervals. Improved communication, information and education regarding maturation of age for marriage and the ideal number of children, as well as increased contraceptive use, are needed to promote optimum birth intervals.
Introduction
The infant mortality rate (IMR) and maternal mortality rate (MMR) are indicators of national health standards. MMR and IMR are also targets set in the Millennium Development Goals (MDGs). 1 According to the World Health Organization (WHO), the IMR is the number of infants who die before the age of one year per 1,000 live births. Meanwhile, MMR counts maternal deaths occurring during pregnancy, childbirth, or within 42 days of the postnatal period, with causes related directly or indirectly to pregnancy, per 100,000 live births. Maternal mortality in Indonesia is among the highest in Asia. The Indonesia Demographic and Health Survey (IDHS) 2012 recorded an MMR of 359 per 100,000 live births, an increase compared to the IDHS 2007, which recorded an MMR of 228 per 100,000 live births. 2 Worldwide, 19,000 children died every day in 2011.
According to The United Nations Children's Fund (UNICEF)'s 2012 report, the IMR in Asia was 34 per 1,000 live births. 3 In Indonesia, the IMR declined steadily, though insignificantly, from 35 per 1,000 live births in 2004 to 34 per 1,000 live births in 2007, reaching 32 per 1,000 live births in 2012. 2 Compared to other Asian countries with similar economic conditions, Indonesia still lags behind: in 2012, UNICEF reported an IMR of 20 per 1,000 live births in the Philippines and 6 per 1,000 live births in Malaysia. 3 Indonesia still needs to reduce infant mortality by 40% to achieve the MDGs 2015 target of 23 per 1,000 live births. Efforts to reduce maternal and infant mortality have been made through family planning programs run by The National Family Planning Coordinating Board (BKKBN), which include various birth spacing programs. 4 Birth spacing helps reduce high-risk pregnancies: by spacing births, the mother has a chance to regain her health before the next conception, and a healthy pregnancy in turn supports healthy fetal development.
Birth interval control in Indonesia has not been optimal. The IDHS 2012 showed that 4.4% of births had intervals of less than 18 months, 10.5% less than 24 months, and 25% less than 36 months after the previous birth. The median birth interval was 60.2 months in 2012, an increase compared to 54.6 months in 2007. The IDHS 2012 also showed that the child mortality risk is three times higher for children born after birth intervals of less than two years compared to children born after intervals of four years or more. Neonatal, post-neonatal, infant and under-five mortality are twice as high for children born after birth intervals of less than two years compared to children born after intervals of four years or more. 2 A birth interval is the period between two live births of a woman. 5 According to the United States Agency for International Development (USAID), the optimum birth interval is the time between births that produces the best health outcomes for the pregnancy, mother, newborn, and the whole family. 6 A study conducted by Rasheed and Dabal indicated that the optimum birth interval was between three and five years. 7 According to Conde-Agudelo and Belizan, too short birth intervals may increase the risk of toxemia, anemia, malnutrition, bleeding and maternal death. 8 Too short intervals are also associated with an increased risk of abortion, and with a higher incidence of low birthweight (LBW) and infant death. [9][10][11] Waiting 36 months or more for the next pregnancy can reduce the risk of infant morbidity and mortality. 12 The still high prevalence of short birth intervals in Indonesia is influenced by various factors. According to Davis and Blake, social and behavioral factors influence birth intervals through intermediate variables.
There were 11 intermediate variables, classified into three broad categories: intercourse variables, conception variables, and gestation variables. 13 A multiparous woman is a woman who has given birth more than once. 14 Among multiparous women, the interval between each child and the preceding birth can be observed. Based on the data above, it is necessary to study the factors related to birth intervals among multiparous women in Indonesia.
Method
This study used secondary data from the 2012 IDHS with a cross-sectional study design. The IDHS 2012 was conducted by the Central Statistics Agency (BPS) in collaboration with BKKBN, the Ministry of Health and USAID.
The study population comprised all women of childbearing age in Indonesia with records of live births, excluding firstborn children. Samples were respondents recorded in the 2012 IDHS who met the inclusion and exclusion criteria. The inclusion criteria were women of childbearing age with records of the last live births within the five-year period prior to the survey, excluding firstborn births. Twin births were excluded because they have a birth interval of zero months.
The minimum sample size was calculated using a two-proportion difference formula, giving 5,360 respondents; the sample drawn from the IDHS data comprised 9,945 respondents, which exceeds the minimum. The dependent variable was the birth interval. The independent variables were maternal education, economic status, place of residence, age at the last childbirth, number of living children, sex of the preceding child, ideal family size, contraception knowledge, husbands' attitudes toward contraception, contraceptive use, exclusive breastfeeding, infant mortality records, and survival of the preceding child. The data were analyzed using Mann-Whitney, Kruskal-Wallis, and logistic regression tests.
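For readers unfamiliar with the sample-size step, a standard two-proportion formula (normal approximation) looks like the sketch below. The proportions, significance level and power used by the authors are not stated, so the inputs here are purely illustrative and are not meant to reproduce the 5,360 figure.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Minimum sample size per group to detect p1 vs p2
    (normal approximation, pooled variance under the null)."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2.0
    numerator = (z_a * math.sqrt(2.0 * p_bar * (1.0 - p_bar))
                 + z_b * math.sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Illustrative proportions only (assumed, not taken from the paper).
print(sample_size_two_proportions(0.25, 0.20))
```

As expected, larger differences between the two proportions require smaller samples.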
Results
The average birth interval was 69.02 months with a standard deviation of 40.7 months. The median birth interval in this study was 62 months, with the most common birth interval being 25 months. The 95% confidence interval for the median was between 61 and 63 months. It could also be concluded that the birth interval distribution was asymmetrical, i.e., it did not follow a normal distribution. After categorization, 22.8% of respondents had short birth intervals (less than 36 months).
Results showed that the median birth interval was lower among respondents with higher education (54 months). By economic level, the lowest median birth interval was found among respondents in the low economic level, while the highest was among respondents in the fourth economic level (70 months). The median birth intervals of respondents living in rural and urban areas were similar, 61 months and 64 months, respectively. Moreover, Table 1 showed that the median birth interval increased with age: 25 months among respondents aged less than 20 years, 59 months among those aged 20 to 35 years, and 78 months among those aged over 35 years. The median birth interval was lower among respondents with more than two living children (60 months).
The results showed that the median birth intervals for male and female preceding births were similar, 62 months and 63 months, respectively. The median birth interval was lower among respondents who preferred an ideal family size of more than two children (58 months) (Table 2).
By knowledge level, the median birth interval was the same, 62 months, for respondents with high and low knowledge of contraception. The median birth intervals for respondents whose husbands agreed and disagreed with contraception were similar, 65 months and 62 months, respectively. The lowest median birth interval was found among respondents who used traditional contraception (50 months) (Table 3).
Results in Table 4 showed that the median birth interval was lower among respondents who breastfed exclusively (57 months). The median birth interval was lower among respondents with an infant mortality record (47 months), and among respondents whose preceding child had died (33 months). Determinants of birth intervals included maternal education, age at the last childbirth, ideal family size, contraceptive use, infant mortality records and survival of the preceding child. Women whose last childbirth occurred at age < 20 years had 11.1 (1/0.09) times the risk of a short birth interval of less than three years (OR = 11.1; 95% CI = 7.14-16.67) compared to respondents aged 20-35 years, and 20.0 (1/0.05) times the risk (OR = 20.0; 95% CI = 14.29-33.33) compared to respondents aged > 35 years. Respondents with high education had 1.51 (1/0.66) times the risk of a short birth interval (OR = 1.51; 95% CI = 1.36-1.67) compared to respondents with low education. Respondents with an ideal family size of more than two children had 1.34 times the risk of a short birth interval (OR = 1.34; 95% CI = 1.21-1.48) compared to respondents whose ideal family size was two or fewer children.
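The "1/0.09 = 11.1"-style conversions in this paragraph invert a protective odds ratio so that the risk is expressed relative to the other category; when the OR is reciprocated, the confidence bounds are reciprocated and swapped. A small check follows: the underlying OR of 0.09 with CI 0.06-0.14 is inferred here from the reported reciprocals, not stated explicitly in the paper.

```python
def invert_odds_ratio(odds_ratio, ci_low, ci_high):
    """Invert an odds ratio to swap the reference category.
    The confidence bounds are reciprocated and swapped."""
    return 1.0 / odds_ratio, 1.0 / ci_high, 1.0 / ci_low

# Inferred inputs reproducing the reported OR = 11.1 (95% CI 7.14-16.67).
orr, lo, hi = invert_odds_ratio(0.09, 0.06, 0.14)
print(f"OR = {orr:.1f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 11.1 (95% CI 7.14-16.67)
```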
Discussion
Highly educated respondents had a higher risk of short birth intervals: 1.51 times higher than respondents with low education. This result contrasts with studies in Ethiopia, where short birth intervals were more common among women with low education. 15 Women with higher education are more likely to use contraception to space births and have better access to information and health awareness. [15][16][17][18][19] The difference arises because, in this study, the more educated women were mostly young. This is in line with the 2012 IDHS report, which showed that the median birth interval of women who graduated from high school and college was lower than that of women who did not complete primary school or never graduated. 2 Results of cross tabulation also showed that the proportion of highly educated women was greater at ages below 35 years than above 35 years. There was a positive relation between age and birth intervals; similar results were found in studies in Spain and Iran. The older the age, the longer the median birth interval. This effect is related to the age-related decline in female fertility. 16,[20][21] Moreover, older age lengthens birth intervals through increasing experience and knowledge. 21 Increasing age is also associated with contraceptive use as an effort to reduce the risk of pregnancy and to lengthen birth intervals.
Results showed that the ideal number of children was related to birth intervals. Respondents who wanted more than two children had 1.34 times the risk of a short birth interval compared to respondents who wanted two or fewer children. Studies in Iran and Tanzania showed that the number of living children was significantly related to birth spacing. 21,22 The desired number of children is not a direct variable affecting fertility, but it is related to variables affecting birth control. 23 There was a significant relation between contraceptive use and birth intervals. Respondents who used traditional contraceptives were 1.47 times more likely to have short birth intervals than respondents who used modern contraceptives, and respondents who did not use contraceptives were 1.50 times more likely. This is in line with prior studies in which contraceptive users had longer birth intervals than non-users, 15,24,25 which can be explained by the effect of contraception in delaying the time to the next conception. 24 Bongaarts' theory also states that contraception directly affects fertility. 26 Among respondents with infant mortality records, the percentage of short birth intervals was higher than among respondents without such records: respondents with infant mortality records were 1.68 times more likely to have shorter birth intervals. This is in line with a study in Spain reporting shorter birth intervals in families with infant mortality records. 20 Lucas' theory states that if parents experience the death of a child, they will try to have another child; this is known as the substitution or replacement effect.
13 The results also showed a significant relation between the survival of the preceding child and birth intervals. The median birth interval was lower among respondents whose preceding child had died (33 months). Respondents whose preceding child had died were 2.15 times more likely to have shorter birth intervals than respondents whose preceding child was alive. This was due to parents' desire to replace the deceased child within a short time. 7,21

Conclusion

Median birth interval is 62 months. The percentage of respondents who have short birth intervals is 22.8%. Determinants of short birth intervals include maternal education, age at the last childbirth, ideal family size, contraceptive use, infant mortality records and survival of the preceding child. Women who have their last childbirth at age < 20 years have 11.1 times the risk of a short birth interval of less than three years (OR = 11.1; 95% CI = 7.14-16.67). Higher education carries a risk of 1.51 (1/0.66) times (OR = 1.51; 95% CI = 1.36-1.67), an ideal family size of more than two children a risk of 1.34 times (OR = 1.34; 95% CI = 1.21-1.48), traditional contraception a risk of 1.47 times (OR = 1.47; 95% CI = 1.18-1.83), non-use of contraceptives a risk of 1.50 times (OR = 1.50; 95% CI = 1.35-1.69), an infant mortality record a risk of 1.68 times (OR = 1.68; 95% CI = 1.70-2.73), and death of the preceding child a risk of 2.15 times (OR = 2.15; 95% CI = 1.43-1.97). Highly educated women should be encouraged to reduce short birth intervals. In addition, optimum birth intervals can be promoted by improving communication, information and education concerning maturation of age for marriage and the ideal number of children, and by increasing contraceptive use.

Partially Penetrated Well Solution of Fractal Single-Porosity Naturally Fractured Reservoirs
In the oil industry, many reservoirs produce from partially penetrated wells, either to postpone the arrival of undesirable fluids or to avoid problems during drilling operations. The majority of these reservoirs are heterogeneous and anisotropic, such as naturally fractured reservoirs. The analysis of pressure-transient tests is a very useful method to dynamically characterize both the heterogeneity and anisotropy existing in the reservoir. In this paper, a new analytical solution for a partially penetrated well based on a fractal approach to capture the distribution and connectivity of the fracture network is presented. This solution represents the complexity of the flow lines better than the traditional Euclidean flow models for single-porosity fractured reservoirs, i.e., for a tight matrix. The proposed solution takes into consideration the variations in fracture density throughout the reservoir, which have a direct influence on the porosity, permeability, and the size distribution of the matrix blocks as a result of the fracturing process. This solution generalizes previous solutions to model the pressure-transient behavior of partially penetrated wells as proposed in the technical literature for the classical Euclidean formulation, which considers a uniform distribution of fractures that are fully connected. Several synthetic cases obtained with the proposed solution are shown to illustrate the influence of different variables, including fractal parameters.
Introduction
In the literature, several analytical solutions for modeling the behavior of pressure-transient tests of partially penetrated wells have been proposed [1][2][3][4][5][6][7][8][9][10]. Some of these works have proposed the use of point and line source solutions derived in the Laplace space, considering finite and infinite systems, with homogeneous and naturally fractured reservoirs [2,5,6,8,10]. Other studies considered gas anisotropic reservoirs using a uniform flow solution [9]. All of these works assumed reservoirs with Euclidean geometry, that is, they used traditional mass conservation and flow equations.
Starting from mass conservation and flow equations with fractal characteristics, the authors of [11][12][13][14] analyzed the behavior of the pressure-transient tests of single and double porosity reservoirs with fractal geometry. These studies established the existence of a power-law behavior during the transient period instead of the classical semi-logarithmic behavior that exists in reservoirs with Euclidean geometry. It has been demonstrated that the radial flow regime is a special case of more general fractal behavior. All these studies considered vertical fully penetrated wells. To date, no study has been presented that considers the pressure-transient behavior of partially penetrated wells produced from anisotropic heterogeneous reservoirs with fractal properties.
In this study, a single-porosity system was considered, which can be represented by a naturally fractured reservoir with a tight matrix, where the porosity and permeability of the system are due to the fracture network. Additionally, it was considered that there was a folding where the density of fractures was greater at the top of the anticline and decreased toward the flanks. Thus, there was a heterogeneous and anisotropic reservoir where the radial and vertical permeabilities were functions of the radial and vertical position, respectively. Due to the complexity of this fracture network, it was convenient to consider fractal geometry, instead of assuming a uniform distribution of fractures, and all fractures as being interconnected, as is considered in the traditional formulation with Euclidean geometry.
The purpose of this work was to obtain an analytical solution that represented the behavior of pressure-transient tests in vertical wells partially penetrating heterogeneous and anisotropic reservoirs with fractal geometry. The heterogeneity and anisotropy were due to a fracture network caused by the thrust of a salt dome.
Problem Statement
The solution proposed in this study considered a closed cylindrical reservoir with a single porosity, i.e., a network of fractures may exist, but the matrix is compact and does not contribute to the reservoir response. The well was produced from a restricted interval of the formation. In the reservoir, there were fractal distributions of permeability and porosity in the radial and vertical directions, that is, it was a heterogeneous and anisotropic reservoir. Using the continuity equation in cylindrical coordinates: considering a distribution of permeability in the fracture network like that existing in an anticline, where the radial permeability decreases as the radial distance from the center of the anticline increases, and the vertical permeability also decreases with the increment of vertical depth from the top of the anticline. Thus, the fractal distribution of permeability in the radial and vertical directions are given as follows: where k rw and k zw represent the radial permeability at the center and the vertical permeability at the top of the anticline, respectively. D r = 2 and D z = 1 are the Euclidean dimensions in the horizontal and vertical directions, respectively. The fracture density is represented by the fractal dimensions d f r and d f z , in the radial and vertical directions, respectively. θ r and θ z represent the connectivity indexes of the fracture network in the radial and vertical directions, respectively. The definition of radial permeability is similar to that used in References [11][12][13][14].
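The explicit permeability equations did not survive this extraction, but the power-law profiles they describe can be sketched numerically. The Chang–Yortsos-type exponent d_f − θ − D used below is an assumption consistent with the symbols named in the text, and all numeric parameter values are purely illustrative:

```python
def radial_permeability(r, k_rw=100.0, r_w=1.0, d_fr=1.8, theta_r=0.2, D_r=2):
    # assumed Chang-Yortsos-type power law:
    # k_r(r) = k_rw * (r / r_w) ** (d_fr - theta_r - D_r)
    return k_rw * (r / r_w) ** (d_fr - theta_r - D_r)

def vertical_permeability(z, k_zw=10.0, z_top=1.0, d_fz=0.9, theta_z=0.1, D_z=1):
    # same assumed form in the vertical direction, decreasing with depth
    # below the top of the anticline
    return k_zw * (z / z_top) ** (d_fz - theta_z - D_z)

# permeability falls off away from the center of the anticline...
assert radial_permeability(1.0) == 100.0
assert radial_permeability(100.0) < radial_permeability(1.0)
# ...and the Euclidean case (d_fr = 2, theta_r = 0) recovers uniform permeability
assert radial_permeability(50.0, d_fr=2.0, theta_r=0.0) == 100.0
```

In the Euclidean limit (d_fr = D_r = 2, θ_r = 0) the exponent vanishes and the profile reduces to a uniform permeability, consistent with the paper's statement that the Euclidean model is a special case of the fractal one.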
The porosity of the fracture network is also a function of the radial distance from the center of the anticline and the vertical position from the top of the anticline. Thus, using the fractal definition of porosity proposed by Cossio et al. [15] in 2D (r and z), the fracture porosity is given by: where φ 0 represents the average porosity in the near wellbore region at the top of the reservoir. In the following, we use φ 0 = φ 0 /2.
Assuming Darcy's Law for the velocities in the radial and vertical directions and considering Equations (2)-(4) into Equation (1), the following equation can be obtained: It can be noted that instead of fractal derivatives, fractal definitions of the petrophysical properties are used in the derivation of this equation, following a similar path to that proposed in References [11][12][13][14][15][16][17]. Some applications of the use of fractional derivatives on the fluid flow in porous media are presented elsewhere [18][19][20][21]. Using the values of the Euclidean dimensions in the horizontal and vertical directions, D r = 2 and D z = 1, we obtained the following equation: Using the following definitions of dimensionless variables: Considering a slightly compressible fluid of constant viscosity (µ), and small pressure gradients, we obtained: In Figure 1, a diagram of the problem to be solved in cylindrical coordinates is shown. Applying Newman's method according to Razminia et al. [10], "the instantaneous Green function is equal to the product of the instantaneous Green functions in one and/or two directions", in our case: With the above, Equation (19) will be solved for the two directions independently.
Analytical Solution of the Problem
The solution was deduced by applying the methods of the Laplace transform, separation of variables, and Newman's product using instantaneous source functions. In Appendix A, the procedure for obtaining the solution in the radial direction for total penetration, Equation (A7), can be found. This solution was used together with the solution in the vertical direction, Equation (A25), obtained in Appendix B, to acquire the solution for a partially penetrated well through the use of the Newman's product. Thus, Equation (A33) is written as follows: where: If this expression is evaluated for h pD = 1, z wD = 0, and h wD = 1, we obtain the fully-penetrated well solution: where λ n are the characteristic values given by the roots of Equation (A17), and: The second term of Equation (23) represents the pseudo-skin due to partial penetration considering fractal behavior in both radial and vertical directions.
To include wellbore storage and mechanical skin effects, the following expression, given by Van Everdingen and Hurst [22], is applied: where p D (s) is given by Equation (23).
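The expression itself did not survive extraction. The Van Everdingen–Hurst relation that combines a Laplace-space solution p̄_D(s) with wellbore storage C_D and mechanical skin S is conventionally written as follows; this is the standard textbook form, stated here as an assumption rather than a transcription of the paper's Equation (25):

```latex
\bar{p}_{wD}(s) \,=\, \frac{s\,\bar{p}_D(s) + S}{s\left[\,1 + C_D\, s\left(s\,\bar{p}_D(s) + S\right)\right]}
```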
Results
In this section, some results are presented with the proposed analytical solution given by Equations (23) and (25) in the case of wellbore storage and skin effects using Stehfest's algorithm [23].
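For readers unfamiliar with the inversion step, a generic sketch of Stehfest's algorithm for numerically inverting a Laplace-space function is given below. This is not the authors' code, and the test pair F(s) = 1/(s+1) ↔ f(t) = e^(−t) is only a sanity check, not the reservoir solution:

```python
from math import exp, factorial, log

def stehfest_coefficients(N):
    # Stehfest weights V_k for an even number of terms N
    V = []
    for k in range(1, N + 1):
        acc = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            acc += (j ** (N // 2) * factorial(2 * j)
                    / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                       * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (N // 2 + k) * acc)
    return V

def stehfest_invert(F, t, N=12):
    # f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)
    ln2 = log(2.0)
    V = stehfest_coefficients(N)
    return ln2 / t * sum(Vk * F(k * ln2 / t) for k, Vk in enumerate(V, start=1))

# sanity check on a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
assert abs(approx - exp(-1.0)) < 1e-4
```

Stehfest's method works well for smooth, non-oscillatory functions such as pressure-transient responses, which is why it is the customary choice in well-test analysis.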
Figures 2-5 show the solution for a Euclidean isotropic case (d fr = 2.0, θ r = 0, d fz = 1.0, θ z = 0), where the upper part of the formation is open to production. Figure 2 shows results without mechanical skin damage, S = 0, where only the thickness of the formation varies. The dashed lines in Figures 2-5 correspond to the pressure and pressure derivative given by Razminia et al. [10] for some Euclidean cases. In all cases, the agreement is excellent, so the proposed solution, Equation (23), is able to reproduce the Euclidean results as particular cases.
Fractal Fract. 2019, 3, 23
In Figure 3, the magnitude of the open interval varies, including the case of the fully penetrated well, keeping the thickness of the formation constant. In Figures 4 and 5, the mechanical skin damage and wellbore storage vary, respectively, keeping the thickness of the formation and the open interval constant. All these cases are Euclidean and serve to evaluate the accuracy of the fractal analytic solution proposed for these cases.
The cases with fractal geometry are shown below. Figure 6 shows a case where the fractal dimension in the radial direction is varying, d fr ≤ 2, where the value of 2 represents the Euclidean case (θ r = 0). Thus, the traditional Euclidean case is a special case of the fractal case. In the Euclidean case, the classical spherical flow with a slope of −0.5, before the radial period, is present. It can be observed that this period of flow is not present for the fractal cases, where instead of the semi-logarithmic period, a power-law behavior can be observed in both the pressure drop and its derivative at late times during the transient period.
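The flow-regime diagnostics discussed here (a constant semi-log derivative for radial flow, a −0.5 log-log slope for spherical flow, a power law for the fractal cases) can be read off a discrete pressure record with a simple derivative estimate. A minimal sketch with invented data, not the code used to produce the figures:

```python
from math import log

def semilog_derivative(times, pressures):
    # t * dp/dt = dp/d(ln t), approximated by a centered two-point slope
    out = []
    for i in range(1, len(times) - 1):
        out.append((pressures[i + 1] - pressures[i - 1])
                   / (log(times[i + 1]) - log(times[i - 1])))
    return out

times = [2.0 ** k for k in range(12)]

# Euclidean radial flow: p ~ ln t gives a constant semi-log derivative
radial = semilog_derivative(times, [log(t) for t in times])
assert all(abs(d - 1.0) < 1e-9 for d in radial)

# fractal power-law response: p ~ t**nu gives a derivative ~ nu * t**nu
power = semilog_derivative(times, [t ** 0.25 for t in times])
assert all(abs(d / (0.25 * t ** 0.25) - 1.0) < 0.02
           for d, t in zip(power, times[1:-1]))
```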
Figure 7 shows fractal cases where the connectivity index in the radial direction (θ r ) varies, and now all other parameters are kept constant, including the fractal dimension d fr = 2. Again, the Euclidean case occurs when θ r = 0, i.e., radial flow exists at late times during the transient period, and when θ r > 0, the power-law response is present at these times.
In Figures 8 and 9 the fractal dimension, d fz , and the connectivity index, θ z , are varied in the vertical direction, respectively, keeping the other parameters constant, including d fr = 2, and θ r = 0. The influence of d fz and θ z is observed only in the period before the radial flow. In these cases, when d fz = 1.0 and θ z = 0, the traditional Euclidean case is obtained again, with the presence of spherical flow before the radial period. In Figures 10 and 11, the influence of h pD and the mechanical skin is shown, respectively, keeping the other parameters constant, including the fractal parameters. At large times within the transient period, the power-law behavior can be detected. In fact, in Figure 11, the presence of two power-law periods is observed.
Figures 12 and 13 show the influence of the fractal parameters in the vertical direction, considering a fractal condition in the radial direction. In Figure 12, it is observed that the effect of the fractal dimension, d fz , is not very strong; however, it can be expected that with the arrival of undesirable fluids to the producing well, this parameter could play an important role. In both figures, the presence of two power-law periods is observed. Figure 13 shows that when the connectivity of fractures or pores in the vertical direction decreases, or even becomes null (i.e., θ z = 1), the late power-law period is delayed, which is an expected behavior.
Considering the above results, it can be deduced that the new proposed analytical solution may provide useful information for the proper development of a reservoir. However, it can be intuited that to determine all the parameters involved in the proposed analytical solution, it is necessary to use a robust optimizer, since a visual adjustment is expected to be very difficult to apply for a complex model such as the one proposed in this work.
Discussion
Taking into account the above results, and those presented by Posadas and Camacho [14], and the fact that there are many unknown parameters (S, C D , ε, d f r , θ r , d f z , θ z , k r ) to fully characterize this system, it is necessary to use robust optimization software in the type-curve matching process of both the pressure and its semi-logarithmic pressure derivative in order to obtain all of these parameters from well test data.
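As a toy illustration of one ingredient of such a type-curve matching process, the late-time power-law exponent can be estimated from (invented, noise-free) pressure data by a least-squares fit in log-log coordinates; a full match of all eight parameters would, as noted above, require a robust optimizer:

```python
from math import exp, log

def fit_power_law(times, pressures):
    # least-squares straight line in log-log space: log p = log C + nu * log t
    xs = [log(t) for t in times]
    ys = [log(p) for p in pressures]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    nu = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    return exp(ybar - nu * xbar), nu

# synthetic late-time data with prefactor C = 1.7 and exponent nu = 0.3
times = [10.0 * 2 ** i for i in range(8)]
pressures = [1.7 * t ** 0.3 for t in times]
C, nu = fit_power_law(times, pressures)
assert abs(nu - 0.3) < 1e-6 and abs(C - 1.7) < 1e-6
```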
Conclusions
The novel analytical solution presented in this paper considers for the first time the application of fractal geometry to the problem of partial penetration. This is relevant because it allows the consideration of the variation of petrophysical properties with the scale, or it takes into account the tortuosity of the flow lines in a cylindrical system. The solution was deduced by applying the methods of the Laplace transform, separation of variables, and Newman's product using instantaneous source functions. Considering the results presented in this article, we can conclude the following:
1. The new fractal analytical solution for a constant rate describes the pressure-transient behavior for partially penetrating wells in a single-porosity naturally fractured reservoir and includes the traditional Euclidean solution as a special case.
2. The proposed fractal solution generates a power-law response at late times during the transient period after the wellbore storage, mechanical skin, and partial penetration effects have ended. This behavior occurs when the radial fractal parameters are different from the Euclidean values, i.e., d fr < 2 and θ r > 0.
3. A different behavior to the power-law response occurs when d fz < 1 and θ z > 0. The effect of these parameters is shown only during the partial penetration period, and after this period, the traditional radial behavior (if d fr = 2 and θ r = 0) or a power-law behavior (when d fr < 2 and/or θ r > 0) can be present.
4. The typical spherical flow regime due to partial penetration is only present when the fractal parameters in the radial direction have the Euclidean values, i.e., d fr = 2 and θ r = 0.
5. An expression is provided to evaluate the pseudo-skin due to the partial penetration effects that considers fractal behavior in both the radial and vertical directions.
6. To determine the pseudo-damage due to restricted penetration, horizontal permeability, vertical to horizontal permeability ratio, mechanical skin, and the four fractal parameters, it is necessary to resort to a type-curve matching process of the pressure data and its semi-logarithmic derivative using a robust optimizer that minimizes the difference between the real data and the analytical solution.
Nomenclature (fragment)
θ r  Connectivity index in the radial direction (0 ≤ θ r ≤ 1)
θ z  Connectivity index in the vertical direction (0 ≤ θ z ≤ 1)
ξ(s)  Pseudo-skin due to partial penetration considering fractal behavior
Appendix A. Solution in the Radial Direction
The flow in the radial direction is obtained from Equation (19) as follows: where Using the Laplace transform: Applying the Levedev [24] technique, we are able to obtain the solution in terms of the modified Bessel functions: where: Using the following boundary conditions in the radial direction: It is found that the solution in the radial direction for total penetration in the Laplace space is given by:
Appendix B. Solution in the Vertical Direction
From Equation (19), the continuity equation in the vertical direction is given by: where Applying separation of variables in Equation (A8): Thus, the solution for u(t D ) is given by: and the solution for w(z D ) is obtained from: Applying the Levedev [24] technique, we are able to obtain the solution for w (z D ), in terms of the Bessel functions: where: Substituting Equations (A10) and (A12) into Equation (A9), the general solution for the problem in the vertical direction is given by: Considering the following boundary conditions in the vertical direction: We obtain from Equation (A15), D = 0. From Equation (A16) we obtain: From the roots of Equation (A17) the characteristic values λ are obtained.
Applying the superposition principle with Equation (A17) we obtain the following expression: Considering an instantaneous source plate with its center in z wmD , which agrees with the midpoint of the producing interval, the following expression is obtained: Multiplying Equation (A19) by x = z D ^((2+θ z )/2)
and then applying the orthogonality property, we obtain: To evaluate the term of the integral on the right-hand side of Equation (A20), Abramowitz and Stegun [25] is used, obtaining the following expression: where . To evaluate the term of the integral on the left-hand side of Equation (A20), we use Gradshteyn and Ryzhik [26], which is expressed as: Obtaining the following: Substituting Equations (A21) and (A23) into Equation (A20), we obtain: Substituting Equation (A24) into Equation (A18), the following solution is obtained: According to Razminia et al. [10] the instantaneous source function for a partial penetration is defined as a function of the instantaneous source function for total penetration, such as: Using the method of Newman's product, the instantaneous source function for partial penetration can be obtained as follows: S(r D , z D , t D ) = S r (r D , t D ) • S z (z D , t D ). the solution in the Laplace space is given by: Substituting Equation (A7) into Equation (A30), the final solution is given as follows: √ s+λn]+Kv r +1 [( 2 2+θr ) √ s+λn] a n J vz −1 (an)− vz an J vz (an) (A32) Finally, the following expression is obtained for the wellbore pressure drop: If Equation (A33) is evaluated for h pD = 1, z wD = 0 and h wD = 1, we obtain the fully penetrated well solution, given by: Thus, the pseudo-skin due to partial penetration considering fractal behavior is given by: (A36)
2 .
Equations (A25) and (A26) into Equation (A27):S(r D , z D , t D ) = 2 h pD ∂p D f (r D , t D ) v z −1 (a n ) − v z a n J v z (a n ) (r D , z D , t D ) = t D 0 S(r D , z D , τ)dτ,(A29) λ n are the roots of Equation (A17).The evaluation at the wellbore is obtained by evaluating Equation (A31) for r D = 1 in the producing interval, using: (r D = 1, z D , s)dz D . | 2019-04-30T13:40:28.080Z | 2019-04-24T00:00:00.000 | {
"year": 2019,
"sha1": "7bfb56c11cc53d8e17648f0bf78329fa7dddb30b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-3110/3/2/23/pdf?version=1557399406",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7bfb56c11cc53d8e17648f0bf78329fa7dddb30b",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
14697286 | pes2o/s2orc | v3-fos-license | Spin Models on Thin Graphs
We discuss the utility of analytical and numerical investigation of spin models, in particular spin glasses, on ordinary ``thin'' random graphs (in effect Feynman diagrams) using methods borrowed from the ``fat'' graphs of two dimensional gravity. We highlight the similarity with Bethe lattice calculations and the advantages of the thin graph approach both analytically and numerically for investigating mean field results.
INTRODUCTION
The analytical investigation of spin glasses on random graphs of various sorts has a long and honourable history [1,2], though there has been little in the way of numerical simulations. Random graphs with a fixed or fixed average connectivity have a locally tree like structure, which means that loops in the graph are predominantly large, so Bethe-lattice-like [3] (ie mean field) critical behaviour is expected for spin models on such lattices. Given this, the analytical solution for a spin model or, in particular, a spin glass on a Bethe lattice [4,5] can be translated across to the appropriate fixed connectivity random lattice. Alternatively, a replica calculation can be carried out directly in some cases for spin glasses on various sorts of random lattices.
A rather different way of looking at the problem of spin models on random graphs was put forward in [6], where it was observed that the requisite ensemble of random graphs could be generated by considering the Feynman diagram expansion for the partition function of the model. For an Ising ferromagnet with Hamiltonian where the sum is over nearest neighbours on three-regular random graphs (ie φ 3 Feynman diagrams), the partition function is given by where N n is the number of undecorated graphs with 2n points, K is defined by and the action itself is where the sum runs over ± indices. The coupling in the above is g = exp(2βJ) where J = 1 for the ferromagnet and the φ + field can be thought of as representing "up" spins with the φ − field representing "down" spins. An ensemble of z-regular random graphs would simply require replacing the φ 3 terms with φ z and a fixed average connectivity could also be implemented with the appropriate choice of potential. This approach was inspired by the considerable amount of work that has been done in recent years on N ×N matrix versions of such integrals which generate "fat" or ribbon graphs, graphs with sufficient structure to carry out a topological expansion [7] because of the matrix index structure.
The natural interpretation of such fat graphs as the duals of triangulations, quadrangulations etc. of surfaces has led to much interesting work in string theory and particle physics [8]. The partition function here is a poor, "thin" (no indices, so no ribbons), scalar cousin of these, lacking the structure to give a surface interpretation to the graph. Such scalar integrals have been used in the past to extract the large n behaviour of various field theories [9] again essentially as a means of generating the appropriate Feynman diagrams, so a lot is known about handling their quirks.
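For illustration, the ensemble of φ^z diagrams described above can be sampled numerically with a configuration-model pairing of half-edges (z stubs per vertex, matched uniformly at random). This sketch is not the authors' simulation code; like the Feynman diagrams themselves, it permits self-loops and multiple edges:

```python
import random

def random_phi_z_graph(n_vertices, z=3, seed=1):
    # configuration-model pairing: z half-edges per vertex, matched uniformly
    # at random; self-loops and multi-edges are allowed, as in the Feynman
    # diagram ensemble
    assert (n_vertices * z) % 2 == 0
    rng = random.Random(seed)
    half_edges = [v for v in range(n_vertices) for _ in range(z)]
    rng.shuffle(half_edges)
    return [(half_edges[i], half_edges[i + 1])
            for i in range(0, len(half_edges), 2)]

edges = random_phi_z_graph(100, z=3)
degree = [0] * 100
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
assert all(d == 3 for d in degree)  # every site is a phi^3 vertex
```

Because loops in such graphs are predominantly large, a spin model placed on the resulting edge list is expected to show the Bethe-lattice-like (mean field) behaviour discussed in the text.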
(ANTI)FERROMAGNETS AND SPIN GLASSES
For the Ising ferromagnet on three-regular (φ 3 ) graphs, solving the saddle point equations at large n shows that the critical behaviour appears as an exchange of dominant saddle point solutions to the saddle point equations. The high and low temperature solutions respectively are which give a low temperature magnetized phase. The critical exponents for the transition can also be calculated in this formalism and, as expected, are mean field. In general a mean field transition appears at g = z/(z − 2) on φ z graphs, which is the value predicted by the standard approaches. Simulations nicely confirm this mean field picture for the ferromagnet [10]. Analysis of the Binder's cumulant for the magnetization also shows that the critical temperatures are identical to the corresponding Bethe lattices (ie g = 3 for φ 3 graphs). The specific heat is shown in Figure.1 for various sizes of φ 3 graphs. There are various possibilities for addressing spin glass order in the Feynman diagram approach. In [6] the entropy per spin was calculated for the Ising anti-ferromagnet on φ 3 graphs and it was found to become negative for sufficiently negative β, which is often indicative of a spin glass transition. Simulations again confirm the picture. Taking a quenched distribution of couplings of the form which gives the antiferromagnet for p = 0, produces results for the spin glass order parameter, the overlap, that are very similar to the infinite range mean field (Sherrington-Kirkpatrick) model. Defining the overlap as with two Ising replicas on each graph, σ i , τ i and histogramming where [ ] denotes the quenched disorder average, we get the distribution shown in Figure.2 [10,11] in the putative spin glass phase at low temperature. The long tail stretching down to q = 0 is characteristic of the mean-field spin glass picture of many inequivalent states.
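As a small illustration of the overlap just defined, q = (1/n) Σ_i σ_i τ_i for two ±1 replica configurations σ, τ on the same graph (the configurations below are invented):

```python
def overlap(sigma, tau):
    # q = (1/n) * sum_i sigma_i * tau_i for two +/-1 replica configurations
    assert len(sigma) == len(tau)
    return sum(s * t for s, t in zip(sigma, tau)) / len(sigma)

assert overlap([1, -1, 1, -1], [1, -1, 1, -1]) == 1.0   # identical replicas
assert overlap([1, 1], [-1, -1]) == -1.0                # globally flipped
assert overlap([1, -1], [1, 1]) == 0.0                  # half agree, half disagree
```

Histogramming this quantity over many graphs and disorder realizations yields the P(q) distribution whose long tail toward q = 0 signals the mean-field spin glass picture.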
It is possible to make some analytical inroads as well by looking at the solutions to the saddle point equations for k Ising replicas [11] where we have denoted the 2 k fields that now appear as φ. The Hessian for these equations is analytically calculable for any k and its zeroes show that the k = 0 transition temperature observed in simulations or calculated by using the analogy with the Bethe lattice is identical to the k = 2 transition temperature. This also occurs in the finite replica version of the Sherrington-Kirkpatrick model [12], so yet again the thin graph results are resolutely mean field.
For three or more replicas one does not see the continuous transition because a first order transition occurs at higher temperature to a replicasymmetric state. The situation appears to be rather similar for Q > 2 state Potts models where the saddle point calculation finds a continuous transition at one of the spinodal points and misses a first order transition occurring at higher temperature. The 3-state Potts model, for instance, with action gives high and low temperature solutions where c = 1/(g + 1). Calculations and simulations for Potts glasses are just as easy as for the Ising spin glass. In Figure.3 overleaf we have plotted the distribution of overlaps for the three state Potts model at low temperature. The overlap for a Q state Potts model is now defined as and the lack of an inversion symmetry in the spins gives a different pattern of replica symmetry breaking in mean-field theory to the Ising model. The numerical results are still consistent with a mean field picture.
Conclusions
In summary, spin models on thin graphs offer a promising arena for the application of ideas from matrix models, large-n calculations in field theory and bifurcation theory. In the spin glass case the tensor (or near-tensor) product structure of the inverse propagator allows some quite general expressions to be derived for the Hessian in the saddle point equations and offers a powerful line of attack on questions such as replica symmetry breaking. As a subject for numerical simulations they offer the great advantage of mean field results with no infinite range interactions and no boundary problems.
The bulk of the simulations were carried out on the Front Range Consortium's 208-node Intel Paragon located at NOAA/FSL in Boulder. CFB is supported by DOE under contract DE-FG02-91ER40672, by NSF Grand Challenge Applications Group Grant ASC-9217394 and by NASA HPCC Group Grant NAG5-2218. CFB and DAJ were partially supported by NATO grant CRG910091.
Evaluation of healing and anti-plaque efficacy of antioxidant mouthrinse in patients with gingival inflammation – A single blind clinical study
Objective: To evaluate the clinical efficacy of healing as well as the anti-plaque activity of an antioxidant mouthrinse containing sodium hyaluronate, coenzyme Q10, tea tree oil and aloe vera as active ingredients in a subject population with established clinical gingivitis. Materials and Methods: A single blind study was conducted among 45 participants, all of whom were in the age group of 18-60 years. The study subjects were randomly divided into three groups of 15 each: the control group, in which only oral hygiene instructions were given; group B1, in which the participants underwent oral prophylaxis at baseline, day 7 and day 14; and group B2, in which the participants were given the antioxidant mouthrinse to use thrice daily in addition to undergoing oral prophylaxis at baseline, day 7 and day 14.
Introduction
Periodontal diseases are chronic inflammatory conditions characterized by the loss of connective tissue, alveolar bone resorption and formation of periodontal pockets as a result of the complex interaction that occurs between pathogenic bacteria and the host's immune response. Periodontitis starts with inflammatory lesions of the gingiva, which, when left untreated, progress and eventually involve and compromise the entire periodontal apparatus of the affected teeth. Dental plaque is the primary etiologic factor in periodontal diseases. 1 Mechanical plaque control is the most dependable oral hygiene measure, but mechanical oral hygiene methods of plaque removal also require time, motivation and manual dexterity.
Oral hygiene routines including daily toothbrushing and flossing are neither practiced consistently nor are they done for an adequate amount of time to thoroughly remove plaque. Also they are not 100% successful because of various anatomic features such as crowding and tooth alignment in the arch. These limitations of home oral care practices suggest the need for other better strategies. Many of the chemical antiplaque agents in various formulations have been tried as an adjunct to mechanical measures for improving the oral health. Many of the reviews have supported the feasibility of chemical approach in the control of plaque formation, thereby aiding individuals in achieving an acceptable gingival status. 2 These antiplaque agents can be delivered in the form of mouthwashes, dentifrices, chewing gums, gels and chips. Mouthwashes are a safe and effective delivery system for antimicrobials and can play an important role in plaque reduction.
Out of all the antiplaque agents, chlorhexidine is considered as the gold standard agent for its clinical efficacy in chemical plaque control. 3 It has broad antibacterial activity, with very low toxicity and strong affinity for epithelial tissues and mucous membranes. Besides its antiplaque effect, chlorhexidine is also substantive, thus reducing levels of microorganisms in saliva up to 90% for several hours. The use of chlorhexidine is burdened by some side-effects that could affect its patient compliance. The most notable of these is the staining that it produces, 4 others being the alteration in taste and mucosal erosions, but these are less common. 5 Essential oil rinses have also been evaluated and shown to be of value as an adjunct to mechanical oral procedures. However, the alcohol content of essential oil rinses and their unpleasant taste is unacceptable to some patients. Thus none of these chemical agents are without shortcomings. Therefore, the search for an ideal and safe antiplaque agent continues.
An increasing number of people all around the world are turning to the nature by using the natural herbal products in both prophylaxis and treatment of different diseases.
Plants are the source of more than 25% of prescription and over-the-counter preparations and the potential of natural agents for oral prophylaxis should therefore be considered.
Hyaluronic Acid (HA) is a naturally occurring linear polysaccharide of the extracellular matrix of connective tissue, synovial fluid and other tissues. It possesses various physiological and structural functions, which include cellular and extracellular interactions, interactions with growth factors and regulation of the osmotic pressure, and tissue lubrication. 6 HA has shown anti-inflammatory and anti-bacterial effects with regard to the treatment of periodontal disease, which is mainly caused by the microorganisms present in subgingival plaque. It has been found that the equilibrium between the free radicals/reactive oxygen species (ROS) and antioxidants is the major prerequisite for healthy periodontal tissue. 7
Coenzyme Q10 is a naturally occurring coenzyme formed from the conjugation of a benzoquinone ring with a hydrophobic isoprenoid chain of varying chain length, depending on the species. 8 Because of its ubiquitous presence in nature and its quinone structure (which is similar to that of Vitamin K), Coenzyme Q10 is also known as ubiquinone. 9 Functions of CoQ10 include the following:
1. Needed for energy conversion (ATP production)
2. An essential antioxidant
3. Regenerates other antioxidants
4. Stimulates cell growth and inhibits cell death
5. Decreased biosynthesis may cause deficiency
Aloe Vera: Its dental uses are multiple in nature. 10
a) It is extremely helpful in the treatment of gum diseases like gingivitis and periodontitis. 11
b) It reduces bleeding, inflammation and swelling of the gums. It is a powerful antiseptic in pockets where normal cleaning is difficult, and its antifungal properties help greatly in the problem of denture stomatitis. 12,13
c) It is a powerful healing promoter and can be used following extractions. 14
d) It has been used in root canal treatment as a sedative dressing and for file lubrication during biomechanical preparation. 10
Tea Tree Oil is derived from the paper bark tea tree. 1 It is a widely studied product with a broad antimicrobial spectrum, which includes antifungal and antiviral activity; it also has antioxidant and anti-inflammatory effects. [15][16][17][18][19][20] The effect of the local application of tea tree oil on diseased periodontal tissues has been shown to be useful in a few studies. 21 Elgendy et al. reported significant improvement in clinical parameters like plaque index (PI), gingival bleeding index (GBI), probing pocket depth (PPD), clinical attachment level (CAL), and pentraxin-3 level in the gingival crevicular fluid in the scaling and root planing plus tea tree oil gel group as compared to scaling and root planing alone at the end of 1, 3, 6, and 9 months. 22
Myrrh is an oleo-gum resin extracted from the tree Commiphora molmol and consists of volatile oil (Myrrhol), resin (Myrrhin), gum and impurities. Myrrh contains many active ingredients with strong anti-inflammatory effects, such as 1(10)4-furanodien-6-one (78), which significantly reduces the levels of the pro-inflammatory cytokines IL-6, IL-23, IL-17, TGF-B, and INF-gamma induced by lipopolysaccharide. 23 In addition, Myrrh has an antimicrobial effect against Streptococcus mutans, 24 Staphylococcus aureus and Candida albicans, which are common oral pathogens. It was found to be as effective as Chlorhexidine in decreasing microbial load after one week of use as a mouthwash. 25,26 Myrrh was also found to promote oral wound healing 30 and was an effective over-the-counter remedy for the treatment of aphthous ulcers. 27
Materials and Methods
This was a single centre, single blind clinical case study comprising 45 subjects. The study duration was 14 days. The subjects were taken from the outpatient section of the Department of Periodontics, Subharti Dental College, Meerut.
The patients were randomly allocated into three groups of 15 subjects each. Control group: only oral hygiene instructions; no treatment or intervention was given to the subjects. Clinical parameters assessment was done at baseline, 7 and 14 days.
Test group B1 -Clinical parameters assessment was done at baseline, oral prophylaxis was done. Parameters further assessed at day 7 and 14.
Test group B2 -Clinical parameters assessment was done at baseline, oral prophylaxis was done. Subjects were given antioxidant mouthrinse. The mouthrinse was to be used undiluted, thrice daily for 14 days. Parameters further assessed at day 7 and 14.
Inclusion criteria
1. Patients who were in good health in the range of 18-60 years of age.
2. A minimum of 20 teeth present in the dentition.
3. Patients classified as stage II, stage III or stage IV gingivitis.
4. Patients who were willing to participate in the study by duly signing an informed consent form.
Exclusion criteria
1. Deep periodontal pockets (of depth greater than 4 mm).
2. Subjects with any orthodontic appliances or prostheses that would interfere with the evaluation.
3. Subjects found to be allergic to any ingredients used in the study, or exhibiting any gross oral pathology, eating disorders, chronic disease, pregnancy and lactation, acute myocardial infarction within the past six months, use of a pacemaker, uncontrolled metabolic disease, major psychiatric disorder, heavy smoking or alcohol abuse, or any systemic disease including any disease requiring repeated or regular analgesia, anti-inflammatory drugs or antihistamines.
Parameters studied for clinical evaluation
1. PD (probing depth): reduction in periodontal pocket depth (for this parameter, a PPD of > 4 mm would need to be specified in the inclusion criteria). Normal PPD is 2-3 mm, so the amount of reduction achieved on usage of the mouthwash is an indicator of healing in the tissues.
Statistical methods
Statistical analysis was performed using SPSS software version 21.0.
One-way ANOVA, a parametric test, was used to evaluate the efficacy of the mouthwash in the reduction of plaque, gingival inflammation and periodontal depth at baseline, day 7 and day 14. Subjects of Group B2 (oral prophylaxis and mouthwash use) showed a better reduction in plaque, gingival inflammation and periodontal depth than subjects in the other two groups. The results were statistically significant for plaque and gingival inflammation reduction at the 14-day interval. However, no significant result was found for the reduction of periodontal depth among the three subject groups.
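The one-way ANOVA above compares the three group means at each time point. As a minimal pure-Python sketch of the F-statistic computation (the index values below are hypothetical illustrations, not the study's data):

```python
def one_way_anova_f(*groups):
    """Compute the one-way ANOVA F statistic across several groups."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group variability (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group variability (n - k degrees of freedom)
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical day-14 plaque index scores for the three groups (illustration only)
control = [2.1, 2.3, 2.0, 2.4, 2.2]
group_b1 = [1.6, 1.8, 1.5, 1.7, 1.9]
group_b2 = [1.1, 1.2, 1.0, 1.3, 1.1]
print(round(one_way_anova_f(control, group_b1, group_b2), 2))
```

A large F statistic (compared against the F distribution with (2, 12) degrees of freedom for these sample sizes) corresponds to a significant between-group difference, as the study reports for plaque and gingival inflammation at day 14.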
Discussion
Biofilm development in the marginal gingiva and periodontal pockets is an important change in the pathogenesis of periodontal disease. Scaling and root planing are effective in reducing the microflora. Herbal derivatives play an important role in altering the microflora and acting as an adjunct to scaling and root planing. Oxidative stress plays a vital role in the pathogenesis of periodontal disease, as well as many other disorders. It is believed that antioxidants can defend against inflammatory diseases. Numerous health benefits of botanicals like aloe vera, myrrh and tea tree oil have been reported. They have antioxidant and anti-inflammatory properties which contribute to caries prevention and gingival health enhancement.
HA is an essential component of the periodontal ligament matrix and plays various important roles in cell adhesion, migration and differentiation mediated by the various HA-binding proteins and cell-surface receptors such as CD44. 28 HA has been studied as a metabolite or diagnostic marker of inflammation in the gingival crevicular fluid (GCF), as well as a significant factor in the growth, development and repair of tissues. 29 A deficiency of coenzyme Q10 at its enzyme sites in gingival tissue may exist independently of or due to periodontal disease. If a deficiency of coenzyme Q10 exists in gingival tissue for nutritional causes and independently of periodontal disease, then the advent of periodontal disease can enhance the gingival deficiency of coenzyme Q10. In such patients, dental treatment including removal of plaque and calculus will improve the oral hygiene, but not that part of the deficiency of coenzyme Q10 due to a systemic cause; therapy with coenzyme Q10 can be included with the oral hygiene for an improved treatment of the existing periodontal disease. 30 The concept of reactive oxygen species-induced destruction has led to the search for an appropriate complementary antioxidant therapy in the treatment of inflammatory periodontal diseases. The pharmacology of coenzyme Q10 indicates that it may be an agent for the treatment of periodontitis. On the basis of new concepts of synergism with nutritional supplements and host response, coenzyme Q10 may be effective in a topical and/or systemic or adjunctive role in treating periodontitis, either as a stand-alone biological or in combination with other synergistic antioxidants (i.e., vitamins C and E).
Davis 31 stated that wound healing with aloe vera is due to increased blood supply and increased oxygenation, which stimulates fibroblast activity as well as collagen proliferation in tissues. Davis 32, in his in vitro and in vivo studies, showed healing with fibroblast proliferation. Wound healing mediated by growth factors such as gibberellins, auxins and mannose phosphate, which bind to insulin-like growth factor receptors to improve healing, is also seen. Yagi et al. 33 stated that the presence of a glycoprotein with cell-proliferating activity improves healing. Aloe vera also contains vitamins A, C, E, B12 and folic acid. Vitamin C, which is involved in collagen synthesis, increases the concentration of oxygen at the wound site because of dilatation of blood vessels. Aloe vera penetrates and dilates capillaries going to an injured site, which improves healing.
The components of tea tree oil have lipophilic properties which facilitate its diffusion through the epithelium; it is readily absorbed into the gingival connective tissues, where its anti-inflammatory property makes it a unique, non-toxic agent that could be as effective as the current range of chemotherapeutic periodontal treatment options. 34 In addition, tea tree oil suppresses the monocyte production of inflammatory mediators and superoxide, and thereby may prevent the oxidative tissue damage seen in more chronic inflammatory states. The anti-inflammatory activity of topically applied tea tree oil could thus control inflammatory responses to foreign antigens while enabling neutrophils to remain fully active in an acute inflammatory response and eliminate those antigens. The antimicrobial activity of tea tree oil is already well established. 35 Myrrh, in addition to its anti-inflammatory, antiulcer 36 and astringent effects, also exhibits antibacterial effects on different species, including the oral microflora. 37 The antibacterial and anti-inflammatory properties of myrrh explain its ability to reduce dental plaque and gingival inflammation. Myrrh extract has the potential to be an alternative remedy in daily oral hygiene practices as an adjunct to mechanical plaque control.
Conclusion
Mouthwashes containing herbal and antioxidant-rich constituents were found to be effective in reducing plaque and gingival inflammation. Most commercially available mouthrinses are chemically based, expensive, and have considerable side effects, which restricts their use. India has a rich source of herbal plant products with medicinal value. Products based on herbal derivatives such as hyaluronan, myrrh and aloe vera can be used as adjuvants to oral hygiene maintenance, with the goal of prevention of periodontal diseases, owing to their antibacterial and antioxidant properties.
Due to the presence of natural ingredients, herbal mouthwashes have a more palatable taste and almost no known significant side effects. Thus they can be used on a daily basis as an alternative to chemically based mouthwashes as antiplaque agents, in addition to mechanical means of plaque control. Their role as antiplaque agents with prophylactic benefits should be further explored and evaluated on a long-term basis.
Source of Funding
Purexa Global Pvt Ltd.
Conflict of Interest
None.
A case report of carcinoma of the papilla of Vater associated with a hyperplasia–dysplasia–carcinoma sequence by pancreaticobiliary maljunction
Background: Pancreaticobiliary maljunction (PBM) is a known risk factor for biliary tract cancer. However, its association with carcinoma of the papilla of Vater (PVca) remains unknown. We report a case with PVca that was thought to be caused by the hyperplasia–dysplasia–carcinoma sequence, which is considered a mechanism underlying PBM-induced biliary tract cancer.
Case presentation: A 70-year-old woman presented with white stool and had a history of cholecystectomy for the diagnosis of a non-dilated biliary tract with PBM. Esophagogastroduodenoscopy revealed a tumor in the papilla of Vater, and PVca was histologically proven by biopsy. We finally diagnosed her with PVca concurrent with non-biliary dilated PBM (cT1aN0M0, cStage IA, according to the Union for International Cancer Control, 8th edition), and subsequently performed subtotal stomach-preserving pancreaticoduodenectomy. Pathological findings of the resected specimen revealed no adenomas, and dysplastic and hyperplastic mucosae in the common channel slightly upstream of the main tumor, suggesting a PBM-related carcinogenic pathway with a hyperplasia–dysplasia–carcinoma sequence. Immunostaining revealed positivity for CEA. CK7 positivity, CK20 negativity, and MUC2 negativity indicated that this PVca was of the pancreatobiliary type. Genetic mutations exclusively detected in tumors, and not in normal tissues and bile ducts, from formalin-fixed paraffin-embedded samples included mutated ERBB2 (mutant allele frequency, 81.95%). Moreover, of the cell-free deoxyribonucleic acid (cfDNA) extracted from liquid biopsy, mutated ERBB2 was considered the circulating-tumor deoxyribonucleic acid (ctDNA) of this tumor.
Conclusions: Herein, we report the first case of PVca with PBM potentially caused by a "hyperplasia–dysplasia–carcinoma sequence" detected using immunostaining and next-generation sequencing. Careful follow-up is required if pancreaticobiliary reflux persists, considering the possible development of PVca.
Supplementary Information The online version contains supplementary material available at 10.1186/s12957-024-03347-z.
Background
Pancreaticobiliary maljunction (PBM) is a congenital anomaly defined as the union of the pancreatic and biliary ducts outside the duodenal wall, thus causing pancreaticobiliary reflux [1]. The incidence of biliary tract cancer in patients with a non-dilated biliary tract with concomitant PBM was 42.4%, and cancer localization was 88% for gallbladder cancer and 7% for cholangiocarcinoma [2]. Therefore, cholecystectomy is recommended in many cases of non-dilated biliary tract with PBM; however, there is no consensus regarding extrahepatic bile duct resection. Residual bile duct cancer has been reported to be detectable during long-term follow-up, even after termination of pancreaticobiliary reflux [3,4], and 23 (1.8%) of 1291 patients developed residual bile duct cancer after cyst excision [4]. In addition to residual bile duct cancer, carcinoma of the papilla of Vater (PVca) has been reported; however, it is considerably rare [5,6]. Although the risk factors for PVca have not been clarified [7], we report our experience with PVca that was thought to be caused by the hyperplasia-dysplasia-carcinoma sequence, which is considered a mechanism underlying PBM-related biliary tract cancer. This is the first report of such a case, which was confirmed not only by immunohistochemical examination but also by genetic analysis using next-generation sequencing (NGS) of formalin-fixed paraffin-embedded (FFPE) samples and liquid biopsy (LB) specimens.
Case
A 70-year-old woman presented with white stool and was referred to our hospital for further investigation of jaundice. The patient had undergone cholecystectomy for the diagnosis of a non-dilated biliary tract with PBM (P-C type) approximately 30 years prior at another hospital. On physical examination, the patient's abdomen was soft, and no mass was palpated. Laboratory data on admission revealed high levels of carcinoembryonic antigen (6.1 ng/mL), while carbohydrate antigen 19-9 was within normal ranges. Esophagogastroduodenoscopy revealed a tumor in the papilla of Vater, and the histological examination of the biopsy specimens revealed adenocarcinoma (Fig. 1A). Endoscopic ultrasound and intraductal ultrasonography showed that the tumor was located in the common channel with no invasion to the sphincter of Oddi, duodenal muscular layer, or pancreas (Fig. 1B and C). Abdominal enhanced computed tomography (CT) revealed a 14 × 14 mm tumor in the duodenum. No enlarged lymph nodes or distant metastases were observed (Fig. 2). Magnetic resonance cholangiopancreatography demonstrated dilatation of the extra/intrahepatic bile duct and main pancreatic duct; the length of the common channel was 23 mm (Fig. 3). We finally diagnosed her with PVca with a non-biliary dilated PBM (cT1aN0M0, cStage IA, according to the Union for International Cancer Control [UICC], 8th edition), and subtotal stomach-preserving pancreaticoduodenectomy was performed.
Surgical procedures
A median incision was placed in the upper abdomen.The pancreas was dissected at the anterior surface of the superior mesenteric vein.The modified Child's reconstruction procedures were performed, and the remnant pancreas was anastomosed with the jejunal limb using the modified Blumgart method.Neither peritoneal dissemination nor lymph node metastases were detected during surgery.The regional lymph nodes of the papillary carcinoma were dissected.The operation lasted 370 min, and the estimated blood loss was 15 mL.No intraoperative blood transfusions were required.
Postoperative course
The postoperative course was uneventful, and the patient was discharged on the 24th postoperative day.The patient was recurrence-free for 4 years after surgery.
Macroscopic and pathological findings of the resected specimen
This was a PBM case without biliary dilation (P-C type); the tumor was diagnosed as PVca developing from the epithelium of the common channel, and the tumor diameter was 9 × 8 mm. The tumor invaded the Oddi sphincter and submucosa but did not invade the muscularis propria of the duodenum (No. 1, Fig. 4). The pathological diagnosis was pT1bN1M0 pStage IIIA, according to the UICC, because of the presence of a positive lymph node (2/37 lymph nodes). A front was observed between hyperplasia and dysplasia areas within the mucosal epithelium of the common channel, slightly upstream of the main tumor (No. 2, Fig. 4). In the hyperplastic area, there was no evidence of an increased nuclear-to-cytoplasmic ratio, increased nuclear chromatin, loss of nuclear polarity, or cell overlap (No. 2 and No. 3, Fig. 4, Supplement Fig. 1). In addition, in the dysplastic area, the findings of disturbed polarity, increased nuclear chromatin, and an increased nuclear-to-cytoplasmic ratio suggested that the lesion was equivalent to BilIN-3 (high grade dysplasia) (No. 2, Fig. 4). A hyperplastic mucosa was found throughout the common bile duct (No. 3, Fig. 4). Immunostaining revealed positivity for CEA, COX-2, HER2, and IL-33 in the carcinoma (Fig. 5). CK7 and MUC1 positivity, MUC5 partial positivity, CDX2 and MUC2 negativity, and mostly CK20 negativity indicated that this PVca was of the pancreatobiliary type, not the gastric type. In addition to MUC6 negativity in the carcinoma area, CDX2 was also negative, and thereby we did not determine the lesion to be of the intestinal type [8]. p53 showed a wild-type immunostaining pattern (Supplement Fig. 2).
Genetic analysis using FFPE and LB 1-1 Cell-free total nucleic acid (cfTNA) and genomic DNA extraction
Thirteen plasma samples were collected between July 2019 and September 2020.Cell-free total nucleic acid (cfTNA) was extracted using the MagMAX™ Cell-Free Total Nucleic Acid Isolation Kit (Thermo Fisher Scientific) or the NextPrep-Mag™ cfDNA Automated Isolation Kit (PerkinElmer), according to the manufacturers' Ten 5-µm slices of FFPE slides were used to extract genomic deoxyribonucleic acid (DNA).Genomic DNA from the tumor tissue, common bile duct, and normal tissue was extracted using a GeneRead™ DNA FFPE Kit (Qiagen), according to the manufacturer's protocol.The extracted cfTNA and genomic DNA were quantified using the Qubit™ DNA (High Sensitivity) Assay Kit and Qubit DNA (Broad Range) Assay Kit (Thermo Fisher Scientific), respectively.The quality and size of the extracted cfTNA were evaluated with the High-Sensitivity D5000 ScreenTape Assay (Agilent) and the quality of genomic DNA was evaluated with the Genomic DNA ScreenTape Assay (Agilent), using TapeStation (Agilent).
1-2 Library construction
The NGS library was constructed using the Oncomine™ Pan-Cancer Cell-Free Assay (Thermo Fisher Scientific), according to the manufacturer's protocol. Libraries were constructed using 12.3-20 ng of cfTNA, and 30 ng of genomic DNA from buffy coat.
Regarding tumor tissue, libraries were prepared using 40 ng of genomic DNA extracted from FFPE, using the Ion AmpliSeq™ Comprehensive Cancer Panel (Thermo Fisher Scientific) with Ion Xpress™ Barcode Adapters (Thermo Fisher Scientific), according to the manufacturer's protocol. The quality of all constructed libraries was evaluated with a High-Sensitivity D1000 ScreenTape (Agilent), using TapeStation (Agilent).
1-3 Targeted NGS
The constructed libraries were subjected to template preparation using the Ion Chef™ System (Thermo Fisher Scientific) with either the Ion 540 Chef Kit (Thermo Fisher Scientific) or the Ion 550 Chef Kit (Thermo Fisher Scientific). Thereafter, sequencing was performed using the Ion GeneStudio™ S5 Prime System (Thermo Fisher Scientific).
1-4 Sequencing data analysis
Sequence alignment with hg19 as the reference genome and variant calling were performed using the Torrent Suite Software v5.16 and Ion Reporter v5.16 and v5.18. The workflows used for analyses included Oncomine Tag-Seq Pan-Cancer Liquid Biopsy w2.5 for cfDNA as well as buffy coat, and AmpliSeq CCP w1.2 Tumor-Normal pair for tumor genomic DNA, with default parameters. The cutoff for variant calling in cfDNA was 0.065%. Regarding tumor-tissue alterations, mutations with a mutant allele frequency ≥ 5% were considered positive after excluding variants (single-nucleotide polymorphisms) detected in normal tissue. Mutations detected in the buffy coat that were also detected in the plasma cfTNA were evaluated as clonal hematopoiesis-associated mutations [9,10].
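The filtering rules above (tumor calls kept at MAF ≥ 5% after excluding variants also present in matched normal tissue; a 0.065% calling cutoff for cfDNA; buffy-coat mutations reappearing in plasma flagged as clonal hematopoiesis rather than tumor-derived) can be sketched as follows. The function names and record layout are illustrative assumptions, not the actual Ion Reporter workflow's API:

```python
def filter_tumor_variants(tumor_calls, normal_variant_ids, maf_cutoff=0.05):
    """Keep tumor variants at or above the MAF cutoff that are absent from matched normal tissue."""
    return [v for v in tumor_calls
            if v["maf"] >= maf_cutoff and v["gene"] not in normal_variant_ids]

def classify_cfdna_calls(cfdna_calls, buffy_coat_variant_ids, cutoff=0.00065):
    """Split cfDNA calls into putative ctDNA and clonal-hematopoiesis-associated mutations."""
    ctdna, clonal_hematopoiesis = [], []
    for v in cfdna_calls:
        if v["maf"] < cutoff:
            continue  # below the 0.065% variant-calling cutoff
        bucket = clonal_hematopoiesis if v["gene"] in buffy_coat_variant_ids else ctdna
        bucket.append(v)
    return ctdna, clonal_hematopoiesis

# Illustrative records echoing values reported in this case (SNP_X and DNMT3A are invented examples)
tumor = [{"gene": "ERBB2", "maf": 0.8195}, {"gene": "KMT2D", "maf": 0.0714},
         {"gene": "SNP_X", "maf": 0.50}]           # SNP_X: also seen in normal tissue
kept = filter_tumor_variants(tumor, normal_variant_ids={"SNP_X"})

cfdna = [{"gene": "ERBB2", "maf": 0.0024}, {"gene": "DNMT3A", "maf": 0.01}]
ctdna, ch = classify_cfdna_calls(cfdna, buffy_coat_variant_ids={"DNMT3A"})
print([v["gene"] for v in kept], [v["gene"] for v in ctdna])
# → ['ERBB2', 'KMT2D'] ['ERBB2']
```

Under these rules ERBB2 survives both filters, matching its interpretation in this case as the tumor's ctDNA marker.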
1-5 Results of genetic analysis
Genetic mutations with single nucleotide variants exclusively detected in tumors, and not in normal tissue and bile ducts, from FFPE specimens included ERBB2 (mutant allele frequency; MAF, 81.95%), POU5F1 (MAF, 12.43%), FLT1 (MAF, 9.91%), NCOA2 (MAF, 8.00%), and KMT2D (MAF, 7.14%). ERBB2 was also detected as a copy number variant present exclusively in tumors compared to normal tissue, with a copy number of nine (Supplement table). The genetic mutations with single nucleotide variants detected in bile ducts included KIT (MAF, 11.03%) (Table 1). NGS identified no genetic abnormalities in p53. Considering the immunostaining results, this case was considered wild-type for p53. The cfDNA mutation detected at baseline in preoperative plasma was ERBB2 (MAF, 0.24%), and ERBB2 was never detected after surgery (Supplement Fig. 3).
Fig. 5 Immunohistological findings of the resected specimen. CEA was highly expressed at all sites of the hyperplasia, dysplasia, and carcinoma. COX-2, HER2, and IL-33 expression were positive in the carcinoma tissues.
Discussion and conclusions
We reported the first case of PVca after cholecystectomy for PBM with a non-dilated biliary tract, potentially caused by a hyperplasia-dysplasia-carcinoma sequence detected using detailed immunostaining and NGS. Careful follow-up is needed after cholecystectomy for patients with PBM with a non-dilated biliary tract due to the possibility of carcinogenesis from the duodenal papillary region as well as conventional biliary carcinogenesis.
The incidence of biliary tract cancer in patients with PBM with a non-dilated biliary tract has been reported as 42.4%, with 88% of these cancers localized in the gallbladder and 7% classified as cholangiocarcinoma [2]. Therefore, cholecystectomy is often recommended for patients with PBM with a non-dilated biliary tract; however, there is no consensus regarding extrahepatic bile duct resection [11,12]. The estimated incidence of cancer development after a diversion operation for congenital biliary dilatation is 0.7-5.4%, and the interval between the operation and cancer detection ranges from one to 19 years [3,4]. However, as there are no reports regarding the incidence of biliary tract cancer in residual bile ducts after bile duct resection or after cholecystectomy in patients with PBM with a non-dilated biliary tract, it is unclear whether bile duct resection is a good treatment option. It is presumed that the reflux of pancreatic juice into the bile duct persists, as the common bile duct and papilla are preserved after cholecystectomy. Therefore, the mucosal damage in the common duct is considered to be persistent. Previous studies have focused on PVca with PBM, though it remains unclear whether PBM is the cause of carcinogenesis in any of the previous studies [5,6,13-17]. PVca is classified as cholangiocarcinoma, and the usual carcinogenic process underlying PVca is the adenoma-carcinoma sequence [18]. While the hyperplasia-dysplasia-carcinoma sequence has been proposed as a carcinogenic process in the context of PBM [19,20], there are no reports of an association between PBM and PVca. Based on the morphological, pathological, and genetic analyses presented in this study, this is the first report of PVca thought to be caused by the hyperplasia-dysplasia-carcinoma sequence.
No previous reports have discussed the risk factors for PVca or the relationship between PBM and PVca. The pathological findings in this study suggest morphological carcinogenesis by the hyperplasia-dysplasia-carcinoma sequence. Although the usual carcinogenic process underlying PVca is the adenoma-carcinoma sequence [18], no adenoma was observed in this patient. Furthermore, hyperplasia was observed throughout the bile duct, and dysplasia was observed in the vicinity of the carcinoma. We could not locate a "clear" front on any of the intercepts available. However, we successfully identified a transition from hyperplasia to dysplasia within the mucosal surface epithelium, albeit not within the same glandular duct. If the sections had been longitudinally oriented along the bile duct, it might have facilitated a clearer delineation of the specific boundaries. However, we did not discern any evident transition from hyperplasia to dysplasia, or from dysplasia to carcinoma, within the same glandular duct. Our experience with these cases has prompted us to reconsider our methodology for specimen preparation in cases of cholangiocarcinoma. It is also very interesting that the carcinoma was positive for HER2 expression. The copy number of ERBB2 in this particular case was found to be nine. The genetic analysis revealed that ERBB2 was amplified, and thus the resulting abnormal production of the HER2 protein may have contributed to the growth of this tumor. HER2 protein overexpression caused by ERBB2 amplification detected by next-generation sequencing has been confirmed in previous reports, and we believe the same to be true in this case [21]. Immunostaining was also positive for CEA in the carcinoma, and COX-2 expression was positive in one region of the carcinoma, as previously reported [22,23]. Although CK20 partial positivity is not typical, the tumor was determined to be of the pancreatobiliary type based on an overall judgment. This was considered a finding suggestive of intra-tumor heterogeneity. Since IL-33 overexpression has been reported in gallbladder carcinoma associated with PBM [24], the IL-33 positivity of the cancerous area in this patient further suggests that the carcinoma is associated with PBM.
The significance of NGS and liquid biopsy in this case is highlighted by the fact that, to date, no studies have reported genetic analysis in PVca associated with PBM. Large-scale genome sequencing of PVca arising in an adenoma-carcinoma sequence was conducted in 2016, and these results are already available [25]. Moreover, although there are scattered reports of genetic analysis of gallbladder tissue and cholangiocarcinoma associated with PBM, there are no reports of PVca associated with PBM. In light of the above, the results of the genetic analysis in this case are highly suggestive, and we hope that they will serve as a bridge supporting the accumulation of future cases. SMAD4 and TP53 have been reported as genetic abnormalities in cholangiocarcinoma associated with PBM [26,27]. Large-scale genome sequencing of PVca identified KRAS (48%) and TP53 (56%) as the most frequent mutations, followed by CTNNB1, SMAD4, APC, ELF3, GNAS, ERBB2, ERBB3, and LOXHD1, with frequencies ranging from 10 to 30% [25]. The pancreatobiliary type, as in this patient, resembles pancreatic cancer, involving KRAS (68%), TP53 (67%), and SMAD4 (20%) [25]. Genetic analysis of the FFPE samples obtained in this study revealed mutations of ERBB2, but no mutations of KRAS, TP53, or SMAD4, which are frequently mutated in patients with pancreatobiliary-type PVca. Of the cfDNA extracted from LB, excluding variants detected in the buffy coat or at only one time point during the disease course (which may have been in error), ERBB2 was considered the circulating-tumor DNA (ctDNA) of this tumor. ERBB2 has been reported as a genetic abnormality in gallbladder cancer with PBM in 17.6% of patients, suggesting that the current patient's disease may have been associated with PBM [27]. In addition, this tumor demonstrated genetic abnormalities, such as POU5F1, FLT1, NCOA2, and KMT2D, which have not been addressed in the large-scale genome sequencing of Vater papillary carcinoma. According to the Catalogue of Somatic
Mutations In Cancer (COSMIC) database [28], KIT (p.H263Q) is unlikely to be a pathogenic mutation. No common pathogenic genetic variants between the bile ducts and the carcinoma were found in this case; the presence of such a common pathogenic variant would have provided evidence that the PVca arose through a hyperplasia-dysplasia-carcinoma sequence. A limitation of the present study related to the immunohistological examination and genetic analysis is that the amount of dysplastic tissue was so small that a direct comparison of carcinoma and dysplasia was not possible. In this case, immunohistological examination and genetic analysis were performed as evidence to support the hyperplasia-dysplasia-carcinoma sequence and were compared with previously reported data; however, this comparison was limited to hyperplasia and carcinoma. Considering the amount of dysplastic tissue and the expected success rate, we did not perform microdissection in this case. Furthermore, the fact that the bile ducts and duodenal papillae were left annular and unincised in the resection specimen adversely affected the gross morphological and histological evaluation. In other words, it was not possible to present sections with a front between the hyperplasia, dysplasia, and carcinoma areas.
No previous study has reported genetic abnormalities in PVca with PBM; therefore, further studies are required to validate these findings.
Fig. 1 Endoscopy and echography. (A) Esophagogastroduodenoscopy revealed a tumor of the papilla of Vater. (B, C) Endoscopic ultrasound and intraductal ultrasonography revealed that the tumor was located in the common channel and demonstrated no invasion of the duodenal muscular layer or pancreas. CBD, common bile duct; MPD, main pancreatic duct
Fig. 3 Magnetic resonance cholangiopancreatography (MRCP). MRCP revealed dilatation of the extra- and intrahepatic bile ducts and the main pancreatic duct; the length of the common channel was 23 mm
Fig. 4 Morphological evaluation derived from pathological findings. No adenomas were observed, suggesting a hyperplasia-dysplasia-carcinoma carcinogenic sequence. No. 1: Main part of the carcinoma (C) in the common channel. No. 2: The front between the hyperplasia (H), dysplasia (D), and carcinoma (C) areas is observed in the common channel. No. 3: Hyperplasia (H) in the common bile duct. There was no evidence of an increased nuclear-to-cytoplasmic ratio, increased nuclear chromatin, loss of nuclear polarity, or cell overlap. MP, muscularis propria; Panc., pancreas
Table 1
Genetic mutations (single-nucleotide variants with a mutant allele frequency > 5%) detected only in tumors and bile ducts compared with normal tissue. SNV, single-nucleotide variant; MAF, mutant allele frequency
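The Table 1 filtering criterion can be sketched in a few lines of Python. The variant records, field names, and example data below are illustrative assumptions, not the authors' actual pipeline; only the MAF > 5% cutoff and the tumor-versus-normal comparison come from the table caption.

```python
# Keep SNVs seen in tumor (or bile-duct) tissue with mutant allele
# frequency (MAF) above 5% and absent from the matched normal sample.
MAF_CUTOFF = 0.05

def somatic_snvs(tumor_calls, normal_calls, cutoff=MAF_CUTOFF):
    """Return tumor SNVs with MAF > cutoff that are not seen in normal tissue."""
    normal_sites = {(v["gene"], v["change"]) for v in normal_calls}
    return [v for v in tumor_calls
            if v["maf"] > cutoff and (v["gene"], v["change"]) not in normal_sites]

# Hypothetical example calls (the MAF values are invented):
tumor = [
    {"gene": "ERBB2", "change": "amplification", "maf": 0.32},
    {"gene": "KIT",   "change": "p.H263Q",       "maf": 0.08},
    {"gene": "TP53",  "change": "p.R175H",       "maf": 0.03},  # below cutoff
]
normal = [{"gene": "KIT", "change": "p.H263Q", "maf": 0.45}]  # germline -> excluded

kept = somatic_snvs(tumor, normal)
print([v["gene"] for v in kept])  # → ['ERBB2']
```

In this toy example only the ERBB2 call survives both filters: the TP53 call falls below the cutoff and the KIT call is also present in the normal sample.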
Optimization of UAV's Landing Longitudinal Control under Wind Disturbance
Environmental factors have a great influence on the autonomous landing process of a UAV. In order to enhance the environmental adaptability of UAV landing control, this paper takes a certain high-speed UAV as the research object, analyzes the main causes of error in the UAV's landing under wind interference, and puts forward improvement measures. The object model was built on the Matlab/Simulink platform to simulate the closed-loop control system of the UAV's autonomous landing, and the Monte Carlo method was used to verify the robust performance of the control system in the presence of wind interference. The simulation results show that the improved landing plan can effectively reduce the landing error of the UAV under wind interference and improve the landing accuracy and safety of the UAV.
Introduction
Autonomous landing is one of the important methods of UAV recovery and also a key technology of UAV control. A high-speed UAV has a high touchdown speed, so its landing control is more difficult. In order to ensure the safety of a UAV's autonomous landing, tracking of the preset height and speed trajectory is required longitudinally, and aiming at the center line of the runway is required laterally [1], so as to ensure that the UAV touches down at a certain speed, subsidence rate and pitch angle. For a safe landing, it is necessary to design a reasonable landing trajectory [2] combining the aerodynamic characteristics, flight environment, landing indicators and other parameters of the UAV, and to design a landing control law with good stability and control accuracy [3], in order to ensure the UAV can fly in strict accordance with the designed trajectory.
The certain high-speed UAV studied in this paper is shown in Figure 1. The take-off weight of the UAV is 500 kg, the length of the fuselage is 5.23 m, the wingspan is 3.24 m, the wing area is 4.84 m², and the take-off speed is 60 m/s. The UAV has completed test flight verification. During the test flights, if there was a large wind interference during the landing period, the control of lifting speed in the pull-up section would be affected, thus affecting the precision and safety of landing. In this context, this paper studies the longitudinal control during the landing process of the UAV and optimizes the landing control plan, so as to enhance the adaptability of the UAV's landing control to wind interference conditions and improve the precision and safety of UAV landing.
Introduction To Original Landing Control
The landing process of the UAV is divided into four stages: approach, descending, pull-up and slipping, as shown in Figure 2. After the UAV reaches the predetermined height and speed, it enters level flight and switches to the descending stage by "impacting the descending trajectory extension line". It tracks the descending path line in the descending stage, enters the pull-up stage after reaching the pull-up height, decelerates and reduces height according to the pull-up trajectory, and touches the ground with a certain speed, subsidence rate and pitch angle. After the UAV touches the ground, it enters the slipping stage, turns off the engine, connects the deviation-correction control, and then decelerates to taxi until it stops on the runway. The design of the landing trajectory is a reverse process: the range of the touchdown attitude angle is determined according to the requirements of touchdown speed and lifting speed, and then the pull-up trajectory is determined.
In the descending stage, the controller tracks the altitude profile, establishes and stabilizes the equivalent airspeed of the UAV, and reduces or eliminates the altitude and velocity errors [4].
The pull-up stage is the most important stage of autonomous landing, which determines whether the UAV can land safely. It is designed based on the exponential pull-up (flare) method used for manned aircraft. The lifting speed instruction is shown in Formula (1), and the speed trajectory is designed by combining the Gaussian pseudo-spectral trajectory optimization method.
In Formula (1), the first parameter is the time constant of the exponential flattening curve, and the second is the touchdown lifting speed allowed by the UAV.
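The flare command described around Formula (1) can be made concrete with a small sketch. The functional form below (commanded lifting speed decaying exponentially from its pull-up value toward the allowed touchdown value) and all numbers except the pull-up height H0 = 20.3 m are illustrative assumptions, not the paper's exact Formula (1) or parameter set.

```python
import math

def flare_command(t, hdot0, hdot_td, tau):
    """Commanded lifting speed: exponential decay from hdot0 toward hdot_td."""
    return hdot_td + (hdot0 - hdot_td) * math.exp(-t / tau)

tau = 2.0          # flare time constant [s] (assumed)
hdot0 = -4.0       # sink rate at the pull-up point [m/s] (assumed)
hdot_td = -0.5     # allowed touchdown sink rate [m/s] (assumed)

# Integrate altitude down from the pull-up height quoted in the text.
h, t, dt = 20.3, 0.0, 0.01
while h > 0.0:
    h += flare_command(t, hdot0, hdot_td, tau) * dt
    t += dt

print(round(flare_command(t, hdot0, hdot_td, tau), 2))  # → -0.5
```

By touchdown the command has essentially converged to the allowed touchdown sink rate, which is the point of the exponential flare: the trajectory asymptotically flattens so that the UAV grounds at a small, bounded subsidence rate.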
In the original landing plan, the aerodynamic characteristics and engine characteristics of the aircraft were considered comprehensively; the trajectory angle of the steep descent was selected, the level-flight approach speed was V0 = 80 m/s, the speed at the pull-up point and the touchdown speed were specified, and the height of the pull-up point was H0 = 20.3 m. The control structure of the UAV in the landing stage is composed of an inner loop and an outer loop. The inner loop is the pitch angle control, which increases the damping of the system, thus increasing the longitudinal stability of the system, and controls the attitude at the same time. The outer loop is height control or lifting-speed control. In the descending stage, total energy control is adopted to realize height trajectory tracking; the control structure is shown in Figure 3. Lifting-speed control is used in the pull-up stage to ensure that the touchdown speed is in a safe range; the control structure is shown in Figure 4. In the above, Hc and H are the altitude instruction and altitude respectively; Vc and V are the speed instruction and speed; the lifting speed instruction and lifting speed, and the pitch angle instruction and pitch angle, are defined likewise; the rate of pitch angle, the elevator and the engine throttle complete the notation.
The lifting speed control [5] is designed based on LADRC, and the control structure is shown in Figure 5. Pitch angle control is designed based on the cascade active disturbance rejection method and is divided into a pitch angle control circuit and a pitch angle rate control circuit [6] according to the principle of time-scale separation. The control structure is shown in Figure 6.
Influence of Wind Disturbance On Landing Simulation
The UAV uses total energy control to track the altitude command and airspeed command in the descending stage, while in the pull-up stage the height control is disconnected and the lifting-speed control, tracking the lifting speed command, is connected. According to the original landing plan, the lifting speed track of the pull-up stage is fixed and changes with the height. However, because the descending stage tracks the airspeed instruction, when there is wind interference the airspeed and ground speed differ. When the wind speed is 0 m/s, 10 m/s and -15 m/s respectively, the UAV landing simulation results are shown in Table 1 and Figure 7. It can be seen that with the original landing plan: 1) With a following wind, the ground speed is greater than the airspeed, and the initial lifting speed at the pull-up point is less than the design value. The lifting speed instruction makes the UAV pull up directly; the flare distance is relatively long, and the UAV's landing point moves backward. 2) Against the wind, the ground speed is less than the airspeed, and the initial lifting speed at the pull-up point is greater than the design value, so the lifting speed first decreases and then increases when entering the pull-up stage. The UAV first lowers its head and then raises it. The ground speed is relatively small, the flying distance of the pull-up stage is relatively short, and the landing point moves forward.
If the pull-up height H is changed directly, the distance of the descending stage will be affected, the velocity at the pull-up point will be changed, and the subsidence rate will also be affected; therefore, the strategy of changing the time constant is adopted instead.
When the pull-up height is reached, the time constant is recalculated according to Formula (2), and the new time constant is then used to track the lifting speed trajectory, while the other trajectories remain unchanged. Similarly, when the wind speed is 0 m/s, 10 m/s and -15 m/s respectively, the UAV landing simulation results are shown in Table 2 and Figure 8. 2) With a following wind, the initial lifting speed is less than the design value, the time constant H1 decreases, and the time of the pull-up stage decreases; 3) Against the wind, the initial lifting speed is greater than the design value, the time constant H1 increases, and the time of the pull-up stage increases. The optimized plan can effectively improve the tracking of the pull-up trajectory and reduce the deviation of the UAV landing point.
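Since Formula (2) itself is not reproduced above, the sketch below substitutes a simple, hypothetical placeholder rule for the same idea: recompute the flare time constant from the lifting speed actually measured at the pull-up point, tau = H0/|hdot0|, so that a pure exponential command hdot_c(t) = hdot0*exp(-t/tau) always consumes the full pull-up height H0 whatever the wind-induced change in initial sink rate. The numbers and the rule are illustrative, not the paper's Formula (2).

```python
import math

H0 = 20.3  # pull-up height [m], from the text

def adapted_flare(hdot0, t_end_factor=5.0, dt=0.01):
    """Recompute tau from the measured initial sink rate, then integrate
    the exponential flare command for t_end_factor time constants."""
    tau = H0 / abs(hdot0)
    h, t = H0, 0.0
    while t < t_end_factor * tau:
        h += hdot0 * math.exp(-t / tau) * dt
        t += dt
    return tau, h

# Different winds change the initial sink rate at the pull-up point;
# the recomputed tau absorbs the change and the residual height after
# five time constants stays small in every case.
for hdot0 in (-3.0, -4.0, -5.0):
    tau, h_left = adapted_flare(hdot0)
    print(round(tau, 2), round(h_left, 2))
```

The residual height is the same small fraction of H0 in each case, illustrating why adapting the time constant (rather than the pull-up height) keeps the flare consistent across wind conditions.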
Monte Carlo Simulation
The Monte Carlo method is used to verify the landing simulation under wind disturbance, and the performance of the optimized landing plan under different wind speeds is investigated. The uncertainty range of the wind disturbance is ±10 m/s. Monte Carlo simulation was carried out 200 times before and after optimization respectively. The simulation results are shown in Figure 9 and Figure 10, and the touchdown state is shown in Table 3. It can be seen that the optimized landing plan has a more concentrated distribution of touchdown airspeed, pitch angle, lifting speed and forward distance, a smaller standard deviation, a stronger anti-wind-interference ability and stronger robustness.
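The Monte Carlo procedure can be sketched as follows. The point-mass "landing model" here is a deliberately crude stand-in for the paper's Simulink simulation (everything except the ±10 m/s wind range and the 200 runs is invented); it only illustrates sampling the wind uncertainty and collecting touchdown statistics such as the mean and standard deviation of the forward distance.

```python
import random
import statistics

random.seed(0)

def touchdown_distance(wind, v_air=70.0, h0=20.3, hdot=-1.5):
    """Ground distance covered while descending h0 at a constant sink rate.
    Ground speed = airspeed + tailwind component (all values assumed)."""
    t = h0 / abs(hdot)          # time to touch down
    return (v_air + wind) * t

# Draw 200 wind speeds uniformly from the +/-10 m/s uncertainty range.
samples = [touchdown_distance(random.uniform(-10.0, 10.0)) for _ in range(200)]

mean_d = statistics.mean(samples)
std_d = statistics.pstdev(samples)
print(round(mean_d, 1), round(std_d, 1))
```

A smaller standard deviation of the touchdown quantities over such an ensemble is exactly the criterion by which Table 3 judges the optimized plan to be more robust.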
Results
In this paper, the influence of wind disturbance on the landing simulation is analyzed, and the landing plan is modified through the time-constant modification method. Simulation results show that the improved landing plan can effectively reduce the landing error of the UAV, meet the design requirements, and enhance the robustness of the control system.
Stability and Asymptotic Behavior of a Regime-Switching SIRS Model with Beddington–DeAngelis Incidence Rate
A regime-switching SIRS model with Beddington–DeAngelis incidence rate is studied in this paper. First of all, it is proved that the model has a unique positive solution, and the invariant set is presented. Secondly, by constructing appropriate Lyapunov functionals, global stochastic asymptotic stability of the model under certain conditions is proved. Then, we study the asymptotic behavior of the model by presenting threshold values and some other conditions for determining disease extinction and persistence. The results show that stochastic noise can inhibit the disease and that the behavior can exhibit different phenomena owing to the role of regime-switching. Finally, some examples are given and numerical simulations are presented to confirm our conclusions.
Introduction
Infectious diseases are one of the greatest enemies of human beings. Whenever they break out, they bring great disasters. Therefore, it is of great significance for disease control to model and study the transmission mechanism of infectious diseases. The SIR model, using S(t), I(t), and R(t) to express the fractions of the susceptible, infected, and removed at time t, is one of the classical infectious disease models and has been studied and extended by many scholars.
Owing to the richness and importance of the research content of epidemic models, different scholars study them from different perspectives. Some authors used Lyapunov functions to study the stability of the models [1-3]. The authors in [2] proposed a new technique to study the stability of an SIR model with a nonlinear incidence rate by establishing a transformation of variables. For more about the stability of stochastic differential equations, we refer to [4,5]. Some scholars have studied the dynamic behavior of epidemic models and gave threshold values of disease extinction and persistence so as to derive control strategies for diseases [6-10]. The authors in [6] proved that the number R_0^S can govern the dynamics of the model under intervention strategies by using Markov semigroup theory. In addition, scholars have studied the ergodicity and stationary distribution of the models by making use of different methods in [11-14]. The authors in [12] generalized the method for analyzing the ergodic property of epidemic models; all of this further enriches and improves the theory and application of epidemiology. A Markov semigroup approach was used in [14] to obtain the existence of the stationary distribution density of a stochastic plant disease system.
Parameters involved in models are more or less disturbed by environmental noise. Mao et al. [15] proved that the presence of noise can suppress a potential population explosion, which shows that environmental noise has a great influence on the behavior of a model. In order to describe this perturbation, stochastic noise driven by continuous Brownian motion has been widely studied in epidemic models and other systems with various incidence functions [7-9,12,13,16,17]. There are several kinds of stochastic noise, one of which assumes that some parameters in the model are disturbed, such as the contact rate or death rate.
The Beddington-DeAngelis function is an important incidence rate, with the form f(S, I) = SI/(m1 + m2·S + m3·I), which has been studied by some scholars [17-19]. It can be considered as a generalization of many incidence functions, for example: (1) the case studied in [7]; (2) m1 = 1, m2 = a, m3 = 0, f(S, I) = SI/(1 + aS) [20]; (3) m1 = 1, m2 = 0, m3 = a, f(S, I) = SI/(1 + aI) [13]; (4) m1 = 0, m2 = 1, m3 = 1, f(S, I) = SI/(S + I) [21]. Hence, a certain SIRS model containing constant population size and stochastic perturbation takes the form of model (1), where μ represents the birth and death rate, β denotes the valid contact coefficient, c means the rate at which the infected are cured and return to the removed, d is the death rate due to disease, θ expresses the rate of losing immunity and returning to the susceptible, σ represents the intensity of stochastic perturbation, and B(t) is a standard Brownian motion. Moreover, the environment in our life often changes; for example, the season, temperature and humidity always change, and the mechanism and infectious ability of diseases change accordingly. Therefore, the parameters in the model may change suddenly and discontinuously, which cannot be depicted by continuous Brownian motion but can be described by a continuous-time Markov chain in a finite state space. Many scholars have studied epidemic models with Markovian regime-switching; see [8-10,12]. Due to the rationality and significance of multiple environments in the model, regime-switching is also applied in population models and other fields; see [18,22,23]. We refer the readers to [5,24,25] for the theory and more knowledge of Markovian switching.
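The special cases listed above can be checked against the general incidence with a few lines of Python. This assumes the Beddington-DeAngelis form f(S, I) = SI/(m1 + m2·S + m3·I) implied by those cases; the numerical values of S, I and a are arbitrary.

```python
def f(S, I, m1, m2, m3):
    """Beddington-DeAngelis-type incidence SI/(m1 + m2*S + m3*I)."""
    return S * I / (m1 + m2 * S + m3 * I)

S, I, a = 0.6, 0.3, 2.0
assert abs(f(S, I, 1, 0, 0) - S * I) < 1e-12                 # bilinear SI
assert abs(f(S, I, 1, a, 0) - S * I / (1 + a * S)) < 1e-12   # saturated in S
assert abs(f(S, I, 1, 0, a) - S * I / (1 + a * I)) < 1e-12   # saturated in I
assert abs(f(S, I, 0, 1, 1) - S * I / (S + I)) < 1e-12       # ratio-dependent
print("all special cases match")
```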
As far as we know, although there are a great many research studies on epidemic models with Markovian switching, there is little work on the properties of the regime-switching SIRS model with the Beddington-DeAngelis incidence rate. In this paper, we discuss the properties of such a model, model (2), where α_t is a continuous-time finite-state Markov chain taking values in the space E = {1, 2, ..., N} with transition rate matrix Q = (q_ij)_{N×N}, i.e., P(α_{t+Δt} = j | α_t = i) = q_ij Δt + o(Δt) for i ≠ j, for a sufficiently small Δt > 0. We assume in this paper that the matrix Q is conservative and irreducible, which implies that the unique stationary distribution π = (π_i) for the Markov chain exists and satisfies πQ = 0 together with Σ_i π_i = 1. The outline of this paper is organized as follows. Section 2 proves that the model has a unique positive solution and presents the invariant set; meanwhile, some important lemmas which will be used later are given. In Section 3, conditions of stochastic asymptotic stability in the large are established by constructing suitable Lyapunov functionals with regime-switching. In Section 4, conditions for disease extinction are discussed and a condition for persistence in the mean is studied by applying some useful inequality techniques. Section 5 presents some examples and their simulations to confirm our theoretical results.
Preliminaries
In this section, some background knowledge about differential equations with Markovian switching and several important lemmas are presented, all of which will be used later in the paper. Let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions. Consider the SDE with Markovian switching dX(t) = f(X(t), α_t)dt + g(X(t), α_t)dB(t), where f: R^n × E → R^n, g: R^n × E → R^{n×m}, and B(t) = (B_t^1, ..., B_t^m)^T is an m-dimensional standard Brownian motion. For i ∈ E and any function V(X, i) ∈ C^2(R^n × E; R^n), the operator L is defined in the usual way. For a differential system, one is concerned with the existence, uniqueness and form of solutions, and so on. In this paper, we are concerned with whether the model has a unique solution and whether the range of the solution can be estimated more precisely. The following lemma answers these questions.
Lemma 1.
For any initial value (S(0), I(0), R(0), α_0) ∈ R_+^3 × E, there exists a unique positive solution (S(t), I(t), R(t), α_t) of model (2). In addition, let Δ denote the corresponding invariant set. Proof. We take a piecemeal approach. Let 0 = τ_0 < τ_1 < ... < τ_n < ... be the jump times of the Markov chain α_t. When t ∈ [0, τ_1), let α_t = c_0; we can prove that the model has a positive solution almost surely by constructing an appropriate Lyapunov function. This procedure is standard, and we omit it here. When the chain jumps to another state α_t = c_1 for t ∈ [τ_1, τ_2), the corresponding parameters in the model change to another set of values, and the positive solution can be verified by the same method. Repeating this process on the intervals [τ_2, τ_3), [τ_3, τ_4), ..., the positive solution is obtained for all t > 0. Then we prove the range of the solution. Adding the three equations in model (2) yields an equation for N(t) = S(t) + I(t) + R(t), from which N(t) ≤ 1 holds until the first jump time τ_1. When the chain jumps into the next state, the initial value still satisfies this bound, so the argument can be repeated. Because of this property, we assume that the initial value satisfies N(0) ≤ 1. Stability is one of the important research topics, which attracts a lot of attention from researchers. In this paper, we discuss the stochastic stability of the model. In [5,24], conditions for stochastic asymptotic stability in the large are given.
Then, the equilibrium P_0 of the model is stochastically asymptotically stable in the large.
where c̄ = max_{i∈E} r_i, μ = min_{i∈E} μ_i, and θ is defined in the same way.
Lemma 4.
For the solution of model (2), the following formulas hold true: Proof. We only prove equation (11); equation (12) can be verified in the same way. Letting M(t) denote the corresponding stochastic integral, we get limsup_{t→∞} (M(t)/t) = 0.
Stability of Disease-Free Equilibrium
In this section, the global stochastic asymptotic stability of the disease-free equilibrium under some conditions is studied by making use of Lyapunov functionals with regime-switching. For simplicity, let us define f(S, I, i) = m_1(i) + m_2(i)S + m_3(i)I, and we also define some auxiliary functions. Obviously, if the stated conditions are satisfied, then the disease-free equilibrium is stochastically asymptotically stable in the large. Proof. For S(0), I(0), R(0) ∈ Δ, we define a Lyapunov functional with regime-switching, where κ_1, κ_2, κ_3, and ω_i are positive constants which will be specifically determined later. Using the generalized Ito formula to calculate directly, we can obtain that
Mathematical Problems in Engineering
Again, we choose a sufficiently small number κ_1 to make the following inequality hold: We can see from (32) that the coefficients are all negative constants. Hence, we arrive at the conclusion by taking advantage of Lemma 2.
Asymptotic Behavior of Disease
In this section, we shall study the asymptotic behavior of the disease in model (2).
Extinction of Disease.
First of all, we study the conditions for disease extinction. Under these conditions, we can take measures to adjust the parameters in the model so that the disease goes extinct in the long run.
Proof. For case (1), applying the Ito formula to the function ln I(t), we obtain an expression for d ln I(t). Define g(S, I, α_t) = S/f(S, I, α_t); then 0 < g(S, I, α_t) ≤ 1/(m_1(α_t) + m_2(α_t)). Integrating both sides of (30) from 0 to t and then dividing both sides by t yields an expression involving the stochastic integral of σ_{α_s} S(s)/f(S, I, α_s) dB(s).
From the ergodic properties of the Markov chain α_t and Lemma 4, we can obtain the required limits. With the help of (26), we get that lim_{t→∞} (ln I(t)/t) < 0 a.s., which means the disease goes extinct exponentially almost surely.
Thus, for any constant ε > 0 and ω ∈ A, there exists a positive constant T = T(ω, ε) such that the bound holds. Therefore, according to (7), for t ≥ T, one has (38). No matter how large t is, (38) reduces to one of the formulas in (39). Applying the variation-of-constants formula to (39) yields (40). For any i ∈ E, by the arbitrariness of ε, the right side of (40) tends to 1 as t goes to infinity; then lim inf_{t→∞} N(t) ≥ 1 for all ω ∈ A, (41) which implies that lim inf_{t→∞} N(t) ≥ 1, a.s.
Recalling the assertion N(t) ≤ 1 in Lemma 1 and the fact that N(t) = S(t) + I(t) + R(t), the conclusion that lim_{t→∞} S(t) = 1 a.s. can be obtained. (1) From the first condition in Theorem 2, we can see that if the intensity of the stochastic perturbation is sufficiently large, R_0^S < 1 is sure to hold; then the disease goes extinct, which shows that the stochastic perturbation has an important influence on the model. When the intensity of the stochastic perturbation is small, the disease still goes extinct if the second condition is satisfied. (2) We can see from the expressions of (16) and (27) that R_1^S ≤ Δ; thus, if Δ < 1, then R_1^S < 1, which means that stochastic asymptotic stability in the large also makes the disease go extinct.
Permanence of Disease.
Next, we analyze the conditions for disease persistence. First of all, a definition of persistence is given: the disease in model (2) is called persistent in the mean if there exists a constant η > 0 such that
Proof. We prove this theorem in two steps. First, let us prove that the following inequality holds for a certain positive constant C: (45), where H(S, I, i) is defined accordingly. We can easily see that, for a sufficiently large constant C, ∂H/∂I > 0; then, from the monotone increasing property of H(S, I, i), and for a sufficiently large constant C, H(S, I, i) > 0 for I ∈ [ε, 1]. Inequality (45) is now proved.
In what follows, we demonstrate the conclusion of the theorem. From the first equation of model (2), we obtain (48). According to the result of (31), we obtain (49). Making use of (49), integrating both sides of d(λ ln I(t)) = L(λ ln I(t))dt + λ(σ(α_t)S(t)I(t)/f(S, I, α_t))dB(t) from 0 to t and dividing by t, one has (50). Using the boundedness of S(t) and I(t) and Lemma 4, along with the ergodic property of the Markov chain α_t, we take the limit inferior of both sides. Therefore, if (44) is satisfied, then lim inf_{t→∞} (1/t)∫_0^t I(s)ds > 0, which means the disease is persistent in the mean. The proof is completed.
As can be seen from the above proof, R_1^S = R_2^S holds true if and only if max_{i∈E} (μ(i)(m_1(i) + m_2(i))/β(i)) = μ(i)(m_1(i) + m_2(i))/β(i) for every i ∈ E. (1) R_1^S ≥ R_2^S means that Theorems 2 and 3 are not in conflict. If R_1^S < 1, then R_2^S < 1, and the disease dies out. If R_2^S > 1, then R_1^S > 1, and the disease is persistent. (2) If there is no regime-switching in model (2), i.e., there is only one environment, then the quantity R given in (53) can be considered the threshold of disease persistence and extinction in the model. We can see from formula (53) that R increases with the contact coefficient β; when R > 1, the disease will be persistent in the long run. Therefore, reducing the transmission coefficient β by isolating the infected and limiting people's movement is one of the important ways to control infectious diseases, and it was widely used when the SARS virus and novel coronavirus pneumonia spread in China. Due to the existence of regime-switching, the behavior of the disease can exhibit different phenomena; Examples 2 and 3 reveal some interesting cases.
Examples and Simulations
In this section, some examples are proposed and their numerical simulations are presented to verify the theoretical results above.
Conclusions
In this paper, we have studied the regime-switching SIRS model with the Beddington-DeAngelis incidence rate. We first proved that the model has a unique positive solution. Secondly, we gave the conditions for global stochastic asymptotic stability by the Lyapunov method. Then, the thresholds governing the disease behavior were derived using some useful inequality techniques. Finally, some examples were given and numerical simulations were presented to confirm our conclusions.
In addition, some further topics are worth studying. More complex models could be considered to better reflect the actual situation, for example models with a more general incidence rate, with additional perturbations such as jump noise, or with the effect of time delay. We leave these for our future research.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Improved treatment of fermion-boson vertices and Bethe-Salpeter equations in non-local extensions of dynamical mean field theory
We reconsider the procedure of calculating fermion-boson vertices and numerically solving Bethe-Salpeter equations, as used in non-local extensions of dynamical mean-field theory. Because of the frequency dependence of the vertices, a finite frequency box is typically used for matrix inversions, which often requires some treatment of the asymptotic behaviour of the vertices. Recently [Phys. Rev. B 83, 085102 (2011); 97, 235140 (2018)] it was proposed to split the considered frequency box into a smaller and a larger one; in the smaller frequency box the numerically exact vertices are used, while beyond this box the asymptotics of the vertices are applied. Yet this method requires numerical treatment of the vertex asymptotics (including the corresponding matrix manipulations) in the larger frequency box and/or knowledge of the fermion-boson vertices, which may be inconvenient for numerical calculations. In the present paper we derive formulae which treat the contribution of vertices beyond the chosen frequency box analytically, such that only numerical operations with vertices in the chosen small frequency box are required. The method is tested on the Hubbard model and can be used in a broad range of applications of non-local extensions of dynamical mean-field theory.
A key ingredient of many of these methods is the relation between given two-particle irreducible vertices (which are often assumed to be local) and the two-particle reducible vertices, expressed by the corresponding Bethe-Salpeter equations, as well as the calculation of the fermion-boson vertices [13,26,27,30,31]. Due to the use of a finite frequency box, the corresponding treatment is, however, often approximate, and a large frequency box is required to get reasonable results, which makes the numerical calculation of vertices within this frequency box difficult. Recently it was proposed [32,33] to split the frequency box into a "small" one, where the numerically exact vertices are used, and a larger one, where the vertex asymptotics are used.
The proposed approach requires, however, numerical treatment of the vertices in the large frequency box (although with their asymptotic values) and/or knowledge of the fermion-boson vertices, which makes it not very convenient for applications. In the present paper we propose a way of treating the vertex asymptotics analytically, such that only numerical calculations within the small frequency box are required.
The plan of the paper is the following. In Sect. II we introduce the model. In Sect. III we consider the procedure of calculating fermion-boson vertices and susceptibilities using the interaction vertex obtained in a given frequency box. In Sect. IV we discuss the solution of the Bethe-Salpeter equation. In Sect. V we present a numerical example of the application of the obtained formulae to the standard Hubbard model. In Sect. VI we present Conclusions.
II. THE MODEL AND ASYMPTOTICS OF VERTICES
We consider an extended Hubbard model described by the action (1), where G_{0k} and V^c_q are some (arbitrary) single-particle Green function and two-particle vertex, c^+_{kσ}, c_{kσ} are Grassmann variables, n_q = Σ_σ n_{qσ} = Σ_{k,σ} c^+_{kσ} c_{k+q,σ}, and we use the momentum-frequency variables k = (k, iν_n), q = (q, iω_n), where iν_n and iω_n are fermionic and bosonic Matsubara frequencies. The action (1) can describe both the (E)DMFT solution of the Hubbard model (in which case G_{0k} and V^c_q are only frequency dependent) and the more general case of a non-local theory, for which G_{0k} and/or V^c_q acquire some momentum dependence.
Let us denote the full two-particle vertex in the charge (c) and spin (s) channels (which we consider below for definiteness), corresponding to the action (1), by F^{c(s)}_{νν′q}, where ν, ν′ are the incoming and outgoing fermionic Matsubara frequencies and q is the momentum-frequency transfer. We assume for simplicity that the vertex depends only on one of the momenta (i.e., the momentum transfer q), as happens in the ladder versions of the DΓA [11-14,16], DF [20,21], DB [24,25], and (E)DMFT+2PI-fRG [30] approaches; a more general case can be treated in a similar way. The vertex F^{c(s)}_{νν′q} is related to the two-particle irreducible vertex Φ^{c(s)}_{νν′q} by the Bethe-Salpeter equation (2), where G_k = (G_{0k}^{-1} − Σ_k)^{-1} is the full Green function and Σ_k is the electronic self-energy (for DMFT and ladder DΓA the latter depends on the fermionic frequency ν only). For the considered cases of (E)DMFT and its non-local ladder extensions, the vertex Φ^{c(s)}_{νν′q} has at ν → ∞ or ν′ → ∞ the asymptotic form (3) [33], where U^c_q = −(U + 2V^c_q), U^s_q = U, and Φ^{c(s)}_{νν′ω} is given by Eq. (4); χ^{c,s,pp}(ω) are the charge, spin, and particle-particle susceptibilities, accounting for the contribution of the respective bubbles in the transverse channel (these contributions are assumed to be local in the considered ladder approximation), and v^c(ω) is the local retarded Coulomb interaction, corresponding to the original non-local interaction V^c_q and obtained, e.g., in EDMFT [7,8] (non-local corrections to this interaction w.r.t. ν and ν′ are neglected in the considered case of EDMFT and its non-local ladder extensions). Note that Φ^{c(s)}_{νν′ω} can be calculated for arbitrarily large ν, ν′, since χ^{c,s,pp}(ω) and v^c(ω) decay as 1/ω²; outside the bosonic frequency box they can therefore be approximated by zero or replaced by the respective asymptotic behavior.
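Numerically, in a finite frequency box the ladder Bethe-Salpeter equation becomes a linear matrix equation in the fermionic frequencies ν, ν′. The following sketch is illustrative only: it solves a generic ladder form F = Φ + TΦχ⁰F with a schematic diagonal particle-hole bubble χ⁰, not the paper's specific momentum structure.

```python
import numpy as np

def solve_bse(phi, chi0, T):
    """Solve F = phi + T * phi @ diag(chi0) @ F for the reducible vertex F.

    phi  : (N, N) irreducible vertex in a box of N fermionic Matsubara frequencies
    chi0 : (N,) particle-hole bubble, diagonal in the fermionic frequency
    T    : temperature (one factor of T per internal frequency summation)
    """
    n = phi.shape[0]
    kernel = np.eye(n) - T * phi @ np.diag(chi0)
    return np.linalg.solve(kernel, phi)
```

For a constant Φ and a positive bubble this enhances F relative to Φ, as expected for a ladder resummation.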
The corresponding asymptotics of the reducible vertices F^{c(s)} are given by Eq. (5), where the three-leg (fermion-boson) vertex Γ^{c(s)}_{νq} is defined by Eq. (6); here and below we assume a factor of temperature T for every frequency summation. Note that for completeness we account for the last terms in the right-hand sides of Eqs. (5), which were omitted in Ref. [33]. We note also that our definition of the vertices F^{c(s)} has the opposite sign in comparison to that used in Ref. [33], and the vertex Γ^{c(s)}_{νq} corresponds to the vertices 1 ± λ^{c(s)}_{νq} of that paper.
III. THREE-LEG VERTICES AND SUSCEPTIBILITIES
Our first task is to obtain a closed expression for Γ^{c(s)}_{νq} containing only summations of the vertices (except their asymptotic parts Φ^{c(s)}_{νν′ω}) within a given frequency box ν′ ∈ B. For that we split the summation in Eq. (6) into ν′ ∈ B and ν′ ∉ B and use the asymptotic form of Γ^{c(s)}_{ν′q} in the right-hand side of Eq. (5) to the accuracy O(1/ν³_max), where ν_max is the size of the frequency box. Substituting this into Eq. (6) and splitting also the summation over ν′′ in Eq. (5) into parts inside and outside the frequency box, we obtain the result (8), in which the quantities X^{c(s)}_q and Z^{c(s)}_{νq} appear. The expression (8) makes it possible to calculate Γ^{c(s)}_{νq} using summations of the vertices (except their asymptotic parts) within the selected frequency box only. Using that, we find that the first and second terms in X^{c(s)}_q are of the order 1/ν_max and 1/ν²_max, respectively, and the difference Z^{c(s)}_{νq} − 1 is expected to give only a very small contribution for large ν_max (the smallness of these contributions is also verified numerically for the DMFT solution of the single-band Hubbard model with some exemplary parameters, e.g., fillings close to half filling, in Sec. V).
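The spirit of treating the out-of-box contribution analytically can be illustrated on a scalar toy example: a fermionic Matsubara sum whose summand decays as 1/ν² is truncated to a box of N positive frequencies, and the remainder is estimated in closed form from the known asymptotics. This is a generic illustration under assumed forms, not the paper's Eq. (8); the midpoint-rule tail estimate is an assumption of the sketch.

```python
import numpy as np

def fermi(eps, T):
    # exact result of the full sum: T * sum_n e^{i nu_n 0+}/(i nu_n - eps) = f(eps)
    return 1.0 / (np.exp(eps / T) + 1.0)

def density_boxed(eps, T, N, tail=True):
    """Evaluate T * sum_n 1/(i nu_n - eps) with |n| < N, pairing +/- frequencies.

    Each pair (nu_n, -nu_n) contributes -2*eps/(nu_n^2 + eps^2), so the full
    sum equals f(eps) - 1/2; the truncated summand decays only as 1/nu^2.
    """
    nu = (2 * np.arange(N) + 1) * np.pi * T
    s = -2.0 * eps * T * np.sum(1.0 / (nu**2 + eps**2))
    if tail:
        # beyond the box the summand behaves as -2*eps/nu^2; estimate the
        # remainder sum_{n>=N} 1/(2n+1)^2 by the midpoint rule, ~ 1/(4N)
        s += -2.0 * eps * T / (np.pi * T) ** 2 / (4.0 * N)
    return s + 0.5
```

With a box of only 20 positive frequencies the analytic tail reduces the truncation error of this toy sum by well over an order of magnitude.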
Using the obtained fermion-boson vertex, we can similarly find the non-local susceptibilities (which in general should not be confused with the local susceptibilities χ^{c(s)}(ω) entering Eq. (4)) by again splitting the summation into parts inside and outside the frequency box, which yields Eq. (9). Performing a similar decomposition for Γ^{c(s)}_{ν∉B,q} in Eq. (6), and again using the asymptotic form (5), we obtain Eq. (10).
Combining Eq. (9) with the first line of Eq. (10) yields Eq. (11), which again uses the summation only within the given frequency box. From Eq. (11) we find Eq. (12). We have verified that the result in the first line of Eq. (10), with account of Eq. (12), is identical to the large-ν limit of Eq. (8).
Let us also consider the "reduced" fermion-boson vertex γ^{c(s)}_{νq} [34], defined in Eq. (13), which contains the sum of contributions from 2PI vertices with the U^{c(s)}_q interaction excluded.
This vertex is often used in the DΓA [13], TRILEX [27], some versions of the DB approach [26], the (E)DMFT+2PI-fRG method [30], etc. For this vertex we obtain the result (14). According to Eq. (10), the susceptibility can be expressed through this vertex. For the irreducible susceptibility φ^{c(s)}_q, which is related to the non-local susceptibility χ^{c(s)}_q, we find Eq. (17). It can be verified by direct algebraic transformations that the obtained quantities fulfill the result (18) for the irreducible susceptibility, which follows from Eqs. (9), (13), and (17), cf. Ref. [34]. For the following it is convenient to represent the vertex F^{c(s)}_{νν′q} via a Bethe-Salpeter equation similar to (2), but with the inversion performed for ν, ν′ ∈ B only (which provides the difference between Φ^{c(s),box}_{νν′q} and Φ^{c(s)}_{νν′q}, see Sect. IV). Using this equation and performing algebraic manipulations similar to those described in Appendix C of Ref. [30], the result (14) can be represented in the simpler form (21), where again the inversion is performed for ν, ν′ ∈ B. This result allows us to obtain the fermion-boson vertices γ^{c(s)}_{νq} by performing summations over frequencies within the chosen frequency box. The size of the frequency box should be such that the asymptotics (5) are reached close to the boundary of the frequency box. We also note that a different way of efficiently calculating fermion-boson vertices and irreducible susceptibilities in a non-local theory from the known local ones was suggested in Ref. [35].
IV. BETHE-SALPETER EQUATION
Now we consider the solution of the Bethe-Salpeter equation (2), which we write in the form (20). Splitting again the summation into the part restricted to the frequency box and the part outside the box, and using the asymptotic forms (3) and (5), we find the corresponding equation for F^{c(s)}. From this equation we can express Φ^{c(s)}, where we have used the result (8) for the fermion-boson vertex and neglected terms of higher order than 1/ν³_max. Finally, using again the Bethe-Salpeter equation (20) and performing algebraic transformations, we obtain the result (25) for Φ^{c(s)}, where we have again neglected terms of the order o(1/ν³_max). The result (25) can also be derived from "Method 2" of Ref. [33], which uses the asymptotics of F (Eq. (19)); the obtained vertex γ_{νq}, expressed in terms of the physical vertex Φ^{c(s)}, has a rather standard form (cf. Ref. [12]), which is due to the smallness of the difference between Φ^{c(s),box} and Φ^{c(s)}. The obtained results allow us to compare the accuracy of the vertex calculation with and without the obtained corrections for the size of the frequency box. Without the obtained corrections, the main neglected contribution to the considered vertices arises from the terms containing X^{c(s)}_q ∼ 1/ν_max; in this case the error of estimating the two-particle irreducible and fermion-boson vertices also scales as 1/ν_max. At the same time, when the obtained corrections are accounted for, the main source of error in estimating the considered vertices is the deviation of the irreducible vertices from the asymptotic behavior (3), which is expected to scale as 1/max(|ν|, |ν′|)³ and yield O(1/ν⁴_max) corrections to the vertex. Considering that the contribution of the terms of the order 1/ν²_max (the second term in X^{c(s)}_q) and 1/ν³_max (i.e., Z^{c(s)}_{νq} − 1) to the vertices is small (see also the numerical verification in Sect. V), higher-order terms are expected to provide a small contribution as well, and therefore the suggested method provides fast convergence with increasing size of the box, as verified numerically in the next Section.
V. NUMERICAL EXAMPLE
As an example of the application of the developed approach we calculate the spin vertex γ^s_{ν,0} within the DMFT approach for the two-dimensional Hubbard model with the dispersion ε_k = −2t(cos k_x + cos k_y) + 4t′ cos k_x cos k_y. We choose the parameters t′ = 0.15t and U = 10t, which were suggested previously to describe the physical properties of the high-T_c compound La_{2−x}Sr_xCuO_4. For the numerical implementation of DMFT we use the hybridization-expansion continuous-time QMC method within the iQIST package of Refs. [36,37], choosing for the frequency box N_f = 120 fermionic Matsubara frequencies.
In the left part of Fig. 1 we show the result of the calculation of the fermion-boson vertex for a not too low temperature T = 0.2t and n = 1. In this case the chosen frequency box is sufficiently large (the maximal fermionic frequency ν_max ∼ 75t), and the results calculated with and without account of finite frequency box effects (we put X^s_q = Z^s_{νq} − 1 = Φ^s_{νν′ω} = 0 in the latter case) are close to each other, with slightly better agreement of the result calculated with account of the finite frequency box with the required asymptotic value. With decreasing temperature to T = 0.08t the maximal fermionic frequency is ν_max ∼ 30t, and we observe a stronger difference between the fermion-boson vertices calculated with and without account of the finite frequency box effect (right part of Fig. 1; in this case we also change the filling to n = 0.96). The vertex evaluated with account of finite frequency box effects approaches the correct limiting value (equal to one). In both cases we find that the terms related to Φ^s_{νν′ω} (i.e., the second term in X^s_q and the difference Z^s_{νq} − 1) provide a very small contribution (< 2 · 10^{−6} for T = 0.2t and < 10^{−4} for T = 0.08t). We have also verified that the obtained vertices γ^s_{νq} yield the irreducible local susceptibility φ^s_ω, obtained by Eq. (19), and the respective local susceptibility χ^s_ω, obtained by Eq. (16), which agree with those obtained directly from the CT-QMC solver (for the static local susceptibility at T = 0.2t we find χ^s_0 = 2.2074 vs. the QMC result 2.2071, while for T = 0.08t we find χ^s_0 = 3.77 vs. 3.78, respectively). In Figs. 2a,b we show the frequency dependence of the 2PI vertex Φ^s_{ν′ν0} at fixed frequency ν′ = ν_1 = πT (left part) and at two equal frequencies ν = ν′ (right part). For ν′ = ν_1 one can see that the obtained correction improves the high-frequency behavior, which is close to U in that case (the contribution Φ^s_{νν′0} is small).
At the same time, for ν = ν′ the obtained correction due to the finite frequency box effect is sufficiently small, and both vertices, with and without account of finite frequency box effects, are close to each other. The dependence on the size of the frequency box is shown in Fig. 3. One can see that, as discussed at the end of the previous Section, the first term in X^s_0 scales as 1/ν_max at large ν_max. At the same time, the second term in X^s_0 scales as 1/ν²_max and therefore becomes negligibly small at sufficiently large ν_max. Although Z^s_{ν,0} − 1 ∝ 1/ν³_max decays faster than the second term in X^s_0, at intermediate ν_max and ν ∼ ν_max it becomes somewhat enhanced (cf. also Figs. 2c,d). As also discussed at the end of the previous Section, the deviation of the vertices calculated without account of finite frequency box effects from their values extrapolated to ν_max → ∞ (obtained using a quadratic polynomial with respect to 1/ν_max) scales as 1/ν_max. At the same time, the vertices calculated using the obtained formulae change very weakly with 1/ν_max (we have verified that this holds for all |ν| < ν_max). Using a + b/ν⁴_max + c/ν⁵_max fits for the vertices obtained with account of finite frequency box effects, we find the results of the extrapolation consistent with those for the vertices obtained without account of finite frequency box effects, which shows the applicability of the obtained formulae. From these results it follows that for practical calculations without account of finite frequency box effects, because of the strong dependence of the vertices on 1/ν_max, at least three different, sufficiently large sizes of the frequency box should be considered to determine the coefficients of the quadratic polynomial and, therefore, the extrapolated values of the vertices. At the same time, since the results obtained with account of finite frequency box effects change very weakly with the frequency box size, only one such calculation is sufficient for reasonable accuracy in that case.
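The a + b/ν⁴_max + c/ν⁵_max extrapolation mentioned above is an ordinary linear least-squares fit in the variables 1/ν⁴_max and 1/ν⁵_max; a minimal sketch (the sample data in the usage below are synthetic):

```python
import numpy as np

def extrapolate(nu_max, values):
    """Fit v(nu_max) = a + b/nu_max^4 + c/nu_max^5 and return a,
    the estimated nu_max -> infinity limit of the vertex."""
    x = np.asarray(nu_max, dtype=float)
    A = np.column_stack([np.ones_like(x), x**-4, x**-5])
    coef, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)
    return coef[0]
```

With at least three box sizes the three coefficients are determined; additional points overdetermine the fit and serve as a consistency check.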
VI. CONCLUSION
In conclusion, we have derived explicit formulae for the full (Eq. (8)) and reduced (Eqs. (14), (21)) fermion-boson vertices, the full (Eq. (11)) and irreducible (Eq. (18)) susceptibilities, and the 2PI vertex (Eq. (25)), which contain summations only within a given frequency box. These formulae account for the contribution of the frequencies outside the frequency box via the terms containing X^{c(s)}_q and Z^{c(s)}_{νq}. In contrast to the approach that neglects the corrections due to the finiteness of the frequency box, whose error scales as 1/ν_max, the considered approach is expected to show a 1/ν⁴_max scaling of the error, and therefore requires rather small sizes of the frequency box. We have verified numerically the applicability of the obtained results on the two-dimensional Hubbard model with next-nearest-neighbor hopping and strong Coulomb repulsion.
The obtained results can be used in a broad range of applications of diagrammatic extensions of dynamical mean field theory. | 2019-08-08T17:59:34.000Z | 2019-08-08T00:00:00.000 | {
"year": 2019,
"sha1": "4504a8834b454bc001573367b743ddb91c562d8d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1908.03198",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e7909d72f7df81eb43203796e86bf15405b2680a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244756995 | pes2o/s2orc | v3-fos-license | Main Controlling Factors and Models of Organic Matter Accumulation in Lower Carboniferous Dawuba Formation Shale in Southern Guizhou, China
A set of high-quality marine organic-rich shales developed in the Lower Carboniferous Dawuba Formation, which is considered to be the main target of shale gas exploration and development in Guizhou Province. In this paper, 53 samples from Well ZY1 are selected, and the core observation data, field-emission scanning electron microscopy (FE-SEM) images, and geochemical data of these samples are analyzed. On the basis of these data, the main influencing factors of organic matter enrichment in the Dawuba Formation shale were identified and an organic matter accumulation model was established. The results show that total organic carbon (TOC) values of the Dawuba Formation in the ZY1 well vary between 1.97 and 4.11%, with high values appearing at the depths of 2796–2814 m (3.00–4.11%) and 2877–2894 m (1.97–3.49%). The redox-sensitive element enrichments are generally low, indicating that these samples were deposited under oxic–suboxic conditions. The micronutrients (Zn, Cu, and Ni), biological Ba (BaXS), and P/Al also show low values, indicating low primary productivity. The chemical index of alteration (CIA) and the terrigenous clastic input index (Ti/Al) show two obvious high-value zones, indicating that shale in the study area was affected by terrigenous inputs. Similarly, the calculation results show that the Fe/Mn and Rb/K values have two anomalous data segments at the same depths. The anomaly of these data in the same depth intervals further suggests that the shale was affected by terrigenous input during deposition. Moreover, the terrigenous input reaches its maximum in the above TOC high-value intervals, and it is inferred, in combination with the core observation results, that gravity flows occurred at these depths. The carbon isotope composition of kerogen (δ13Corg) ranges from −26.84 to −24.36‰, indicating that the source of organic matter is likely to be terrestrial plants.
This is further supported by the widespread presence of filamentous organic matter observed using FE-SEM. Despite the low productivity and poor preservation conditions during deposition of the Dawuba Formation, the enhanced terrigenous input may have provided additional sources of organic matter for the Dawuba shale.
INTRODUCTION
As an important source rock and the main producing layer of shale gas, black organic-rich shale plays a dominant role in oil and gas exploration and development. With the deepening of research, the main controlling factors and modes of organic matter accumulation in shale have also attracted much attention. 1−10 Based on this, researchers established a productivity model for upwelling zones at continental margins, such as the modern Arabian Sea. These models take high marine primary productivity and the sinking flux of organic carbon as the key points to illustrate the accumulation modes of organic matter in the strata of high-productivity areas. 11,12 As the study progressed, researchers found that marine primary productivity alone could not fully explain the characteristics of organic matter accumulation in some areas. 13 As a result, geologists developed preservation models suitable for low-productivity and stagnant waters, such as the modern Black Sea. In this model, more attention is paid to the effects of reducing conditions and the suppression of oxidative decomposition of sinking organic particles on the enrichment of organic matter. 14,15 However, with the further development of oil and gas exploration, these two organic matter accumulation models proved not fully applicable in some areas, which led researchers to pay attention to the influence of terrigenous input on organic matter accumulation. 16 There is no consensus as to whether terrigenous input is beneficial to the accumulation of organic matter in the strata, but it has been widely recognized by researchers that it affects the accumulation of organic matter. 3,17,18 The Qiannan Depression is located in the Upper Yangtze Plate, where a set of organic-rich shales of the Lower Carboniferous Dawuba Formation was deposited in the platform basin facies, 19,20 and it is the main hydrocarbon source rock in Southwest China.
21,22 Previous studies show that the shale of the Dawuba Formation is characterized by a low content of brittle minerals, a high content of clay minerals, high maturity (between 2.0 and 3.0), and relatively high organic matter abundance (generally greater than 2.0), thus providing favorable conditions for shale oil and gas enrichment. 23−25 Wells CY-1, DY-1, and ZY1, all of which have been completed at present, found good shale gas shows in the Dawuba Formation. 24,26,27 This indicates that some achievements have been made in the Lower Carboniferous Dawuba Formation in the Qiannan Depression, but they cannot be regarded as a major breakthrough in shale gas exploration in the Upper Yangtze area. 28−30 At present, research on the shale of the Lower Carboniferous Dawuba Formation mainly focuses on reservoir characteristics, hydrocarbon source rock evaluation, shale gas resource potential prediction, etc. 31−34 There are few studies on the main controlling factors and models of organic matter accumulation in this geological setting. At present, Ding (2018) and a few others have discussed the main controlling factors and modes of organic matter accumulation in the Lower Carboniferous Datang Formation shale in the southern Guizhou depression from the aspects of paleoproductivity and redox conditions and believe that paleoproductivity is the main controlling factor of organic matter enrichment. However, this model still cannot fully explain all of the characteristics of organic matter enrichment in the black shale of the Lower Carboniferous Dawuba Formation in southern Guizhou. In addition, organic matter plays an important role in controlling the hydrocarbon generation potential, pore structure, and adsorption capacity of organic-rich shale.
35−37 Therefore, to better understand the characteristics of shale gas accumulation in the Dawuba Formation, it is necessary to clarify the main controlling factors and modes of organic matter accumulation in this geological setting.
In this paper, the total organic carbon (TOC), major element (ME), and trace element (TE) contents were measured for the samples from the Dawuba Formation of Well ZY1. In addition, combined with core observation data, field-emission scanning electron microscopy (FE-SEM) images, and logging data, the characteristics of paleoproductivity, the paleoredox environment, and terrigenous debris input during the deposition of the Dawuba Formation were analyzed, and an organic matter enrichment model suitable for the study area was established.
GEOLOGICAL BACKGROUND
The study area is located in the south of Guizhou Province, Southwest China (Figure 1A). Tectonically, it belongs to the Qiannan Depression on the southwestern margin of the Upper Yangtze Plate and has undergone multiple complex tectonic movements, forming a series of NW-trending axial folds and NW- or NE-trending faults. 21,38 The main outcrop strata in this area are Devonian, Carboniferous, and Permian. A large set of black shale developed in the Upper Yangtze region during the Early Carboniferous due to a large-scale sea-level rise in this region and its surrounding areas, 20,21 which resulted in the formation of an intraplatform basin with a relatively shallow water depth in the front depression of southern Guizhou. 20 In the study area, the Lower Carboniferous is widely distributed and is in unconformable contact with the underlying Devonian system. The whole Carboniferous strata can be divided into three stratigraphic units. The upper Nandan Formation mainly develops a set of limestone with a thickness of 800−900 m; the lower Muhua Formation also develops a set of limestone with a thickness of 0−300 m; and the middle Dawuba Formation, which is the horizon of this study, mainly develops shale, siliceous rock, calcareous shale, and argillaceous shale with a thickness between 90 and 300 m, and it belongs to the shallow shelf sedimentary facies.
The ZY1 well is located on the western margin of the southern region of Guizhou Province. The main outcropping strata in the drilling area are the Devonian, Carboniferous, Permian, and Triassic strata (Figure 1B,C). The depth of the Dawuba Formation in the drilled well is 2770−2983 m, and the layer thickness is 231 m. According to the lithology encountered during drilling, the whole Dawuba Formation in Well ZY1 can be divided into an upper mudstone, shale, and thin-bedded limestone section (section S) and a lower clastic limestone and thin-bedded mudstone section (section X) (Figure 1A).
SAMPLES AND METHODS
3.1. Samples. All 53 samples were collected from Well ZY1 at depths of 2688−2976 m. The samples include mudstone, shale, marl, and dolomite. The sample numbers, depths, and lithologies are listed in the attached table. Each sample collected was rinsed with deionized water to remove mud contamination from drilling. The selected samples are mainly mudstone and shale, and, under the premise of maintaining a certain sampling interval, all depths of the whole Dawuba Formation were covered as far as possible. Samples were analyzed in terms of TOC, major and trace elements, and FE-SEM. At the same time, 26 samples were selected from top to bottom for carbon isotope analysis of kerogen (δ13Corg), and these samples were also selected to cover all depths of the Dawuba Formation as far as possible.
3.2. Experimental Methods. After removing the potential weathering layer, sedimentary veins, and visible pyrite nodules on the surfaces of the 53 samples, some samples were selected for scanning electron microscopy observation, and the rest were crushed to about 200 mesh for geochemical analysis. The geochemical experiments were all completed at China University of Geosciences (Beijing), and the scanning electron microscopy observation was performed at China University of Petroleum (Beijing).
All 53 rock samples were crushed to a size of less than 200 mesh. About 2 g of powder was placed in a porous crucible and treated with hydrochloric acid (50%) to remove inorganic carbon. The remaining residue was rinsed with deionized water to a neutral pH, centrifuged, and dried. Finally, a 902T C-S analyzer (China University of Geosciences, Beijing, China) was used to analyze the TOC of the samples.
For the analysis of major elements (MEs), the sample powder was heated to 105 °C and then baked at 920 °C to completely remove the organic components from the sample. The heated sample powder was then mixed with Li2B4O7 and LiBO2 and fused into glass disks at 1150 °C. Then, an X-ray fluorescence spectrometer (Analymate V8C instrument) was used to measure the content of major elements in the samples on the fused glass. The precision of the whole experiment was within ±5%. For detailed major element analysis steps, refer to Cao et al. 39 For the analysis of trace elements (TEs), a PE NexION 350X inductively coupled plasma-mass spectrometry (ICP-MS) instrument was used. The sample powder was digested in mixed acid (HF/HNO3/HClO4 = 1:1:3) in a pressurized sample tank at 200 °C for 12−24 h. Then, the liquid was measured by ICP-MS to obtain the trace element content of the sample. Detailed experimental steps were described by Liu et al. 40 For scanning electron microscopy, the sample was first cut into 2 cm² slices. Then, the sample was ground with fine sandpaper and polished with an argon ion beam to produce a flat surface. Finally, the sample was observed with a Zeiss FE-SEM electron microscope and photographs were obtained.
For thin-section observation, the sample was first cut into 2 cm² slices. Then, the sample was ground with fine sandpaper to produce a flat surface, and the weathered layer on the sample surface was removed. Finally, the sample was observed with a Zeiss polarizing microscope and photographs were obtained.
3.3. Analytical Methods. To more accurately analyze the paleoredox conditions, paleoproductivity, and terrigenous input in the study area, the geochemical data were calculated and processed, including the element enrichment coefficient (X_EF), the chemical index of alteration (CIA), biological barium (Ba_XS), Ti/Al, Fe/Mn, Rb/K, and P/Al.
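The indices defined by explicit formulae in this section (X_EF, CIA with the CaO* correction, and Ba_XS) are simple elementwise computations on the measured data; a minimal sketch follows. The upper-crust (AUCC) reference values below are placeholders to be replaced by the averages of Ref. 41 actually used in the paper, and the helper names are illustrative.

```python
# Upper-crust (AUCC) reference values -- placeholders; substitute the
# averages of Ref. 41 used in the paper.
AL_AUCC = 8.04    # Al, wt %
BA_AUCC = 550.0   # Ba, ppm

def enrichment_factor(x_sample, al_sample, x_aucc, al_aucc=AL_AUCC):
    # X_EF = (X/Al)_sample / (X/Al)_AUCC
    return (x_sample / al_sample) / (x_aucc / al_aucc)

def cia(al2o3, cao, na2o, k2o, p2o5):
    """CIA = 100 * Al2O3 / (Al2O3 + CaO* + Na2O + K2O), all in moles;
    CaO* is corrected for apatite (CaO - 10/3 * P2O5) and capped at Na2O."""
    M = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98,
         "K2O": 94.20, "P2O5": 141.94}          # molar masses, g/mol
    n_al, n_ca = al2o3 / M["Al2O3"], cao / M["CaO"]
    n_na, n_k = na2o / M["Na2O"], k2o / M["K2O"]
    ca_res = n_ca - (10.0 / 3.0) * (p2o5 / M["P2O5"])
    ca_star = min(max(ca_res, 0.0), n_na)       # cap CaO* at the Na2O moles
    return 100.0 * n_al / (n_al + ca_star + n_na + n_k)

def ba_xs(ba_total, al_sample, ba_aucc=BA_AUCC, al_aucc=AL_AUCC):
    # excess (biogenic) Ba: Ba_xs = Ba_total - Al * (Ba/Al)_AUCC
    return ba_total - al_sample * (ba_aucc / al_aucc)
```

The sample and reference concentrations must be given in the same units (oxides in wt %, trace elements in ppm) for the ratios to be meaningful.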
The element enrichment coefficient is calculated as X_EF = (X/Al)_sample/(X/Al)_AUCC, where (X/Al)_sample represents the ratio of element X to Al in the sample and (X/Al)_AUCC represents the ratio of the average content of element X to the average content of Al in the upper crust. 41 The chemical index of alteration (CIA) 42,43 is calculated as CIA = [Al2O3/(Al2O3 + CaO* + Na2O + K2O)] × 100. It needs to be emphasized that the major element (ME) oxides in the above expression are in moles, and CaO* represents only the CaO originating from silicate minerals. However, because part of the CaO content in the rock is hosted by carbonate and phosphate minerals, the CaO content must be corrected, using n(CaO_residual) = n(CaO) − (10/3) × n(P2O5). When the remaining number of moles was more than that of Na2O, the moles of CaO* were taken to be equal to those of Na2O; otherwise, the residual moles of CaO were regarded as those of CaO*. 44−46 Biological Ba (Ba_XS) 47 is calculated as Ba_XS = Ba_total − Al × (Ba/Al)_AUCC. In this formula, Ba_total represents the total barium content in the sample, Al represents the total Al content in the sample, and Ba_AUCC and Al_AUCC represent the average Ba and Al contents in the upper crust, respectively. 41

4. RESULTS

4.1. Field Section and Microscopic Section Observation. Based on the data collected from previous wells and outcrops, three outcrops were observed and described in this study.

4.1.1. Ziyun Sidazhai Section. The strata of the Dawuba Formation are completely exposed, with both top and bottom visible, in gradual conformable contact with the overlying and underlying layers. The overlying Nandan Formation is dark gray micritic bioclastic limestone, and the underlying Muhua Formation is gray-white massive limestone with calcite bands. The Dawuba Formation is mainly composed of black and gray-black carbonaceous argillaceous and siliceous rocks, intercalated with a small amount of gray and gray-yellow mudstone, silt-fine sandstone, argillaceous siltstone, and silty mudstone.
The silt-fine sandstone shows normal grading, and its abrupt contact with the surrounding rock has the characteristics of gravity flow (Figure 2).

4.1.2. Ziyun Manchang Section. The Dawuba Formation in the Ziyun Manchang section is completely exposed, with the clastic limestone of the Nandan Formation overlying and the gray-black medium- to thin-bedded marl of the Muhua Formation underlying. In the Dawuba Formation, black carbonaceous shales, siliceous rocks, and siltstones are mainly developed, and a small amount of limestone interbeds is also developed. The sandstone layers show normal grading, and sandy admixtures can be seen at the abrupt contact with the surrounding rock. The shale has wavy bedding in which sandy nodules are developed and shows the characteristics of gravity flow deposition (Figure 3).
4.1.3. Getuhe Section. The Getuhe section has a complete outcrop with a visible top and bottom. The overlying Nandan Formation is gray massive limestone, and the underlying Muhua Formation is gray micritic limestone. In the Dawuba Formation, black carbonaceous shales and siliceous rocks are mainly developed, while small amounts of yellow sandstone interbeds and argillaceous limestones are developed. The mud shale has wavy bedding in which sandy nodules are developed, indicating gravity flow deposition (Figure 4).
4.2. FE-SEM Images.
The FE-SEM photos show that the organic matter in the samples of the Dawuba Formation is mainly filamentous, as shown in Figure 5A,C,I. However, lump-like organic matter can be seen in a few samples, as shown in Figure 5B. Pyrite is often developed around the filamentous organic matter, and organic pores are almost undeveloped in the filamentous organic matter. According to the FE-SEM images, both framboidal pyrite and euhedral pyrite are developed in the shale samples, and the particle size of the framboidal pyrite is between 2 and 10 μm, which reflects an oxygen-poor environment in the upper part of the water column. 48 Bulk organic matter is observed in Figure 5B, along with organic pores. However, in all FE-SEM images, clumps of organic matter are rarely developed, which explains why organic pores are poorly developed in the shale. Finally, a large number of mineral dissolution pores and cracks between clay minerals were observed in the scanning electron microscopy photos, as shown in Figure 5D,E. 49,50

4.3. Organic Geochemistry. In this experiment, a total of 53 samples were tested for TOC, and the TOC values are distributed between 0.19 and 4.11%, with an average value of 1.73%. In general, the TOC value of marl is the lowest, ranging from 0.19 to 0.68%, followed by that of shale, ranging from 0.95 to 4.11%. The δ13Corg values range from −26.84 to −24.36‰, showing relative enrichment in the heavy carbon isotope. The results of TOC and δ13Corg are shown in Tables 1 and 2, respectively.

4.4. Major and Trace Elements. In this study, 53 samples from well ZY1 were tested for major and trace elements. The test results are shown in Table S1. To more accurately analyze the paleoredox conditions and paleoproductivity in the study area, this paper calculates the enrichment coefficients (X_EF) of eight elements used in the reconstruction of the paleoenvironment. The calculation results for the element enrichment coefficients are shown in Figure 10 and Table S2.
5.1. Sedimentary Model. The sedimentation model often plays a very important role in the process of organic matter enrichment: only by establishing a sedimentation model consistent with the study area can an organic matter enrichment model suitable for the area be established. Based on the interpretation results of the wide-area electromagnetic method and analysis of the drilling and outcrop profiles, the sedimentary facies zones and environment in the study area were redefined, as shown in Figure 6. 51,52 However, in actual research, geophysical data alone cannot truly reflect the sedimentary characteristics of an area; drilling and outcrop data are generally needed for support. To establish a more accurate sedimentary model of the study area, a total of eight wells and outcrop sections across the entire platform basin were selected for well-tie analysis. The data for the Huishui Xicheng section and the CY-1, DY-1, and ZY1-1 wells were obtained from previous studies. 24,25,53,54 The sedimentary profile of the study area (Figure 7) shows that the thickness of the Dawuba Formation first decreases and then increases from southwest to northeast, reaching its maximum at Well ZY1, consistent with the sedimentary facies characteristics reflected in Figure 6. At the same time, the lithology of the Dawuba Formation changes from quartz sandstone platform facies to carbonaceous shale and carbonaceous mudstone platform-basin facies and then, toward Well ZY1-1, back to carbonate platform facies. It can also be seen from the profile (Figure 7) that the paleogeographic pattern of the Dawuba Formation was high at both ends and low in the middle from southwest to northeast.
On the basis of previous geophysical exploration studies, the Huishui-to-Zhenye sedimentary model map (Figure 8) was established by analyzing field profiles and drilled wells in the study area. In general, during the early Carboniferous Datang period, the study area experienced a transgression from the southwest, which largely submerged the southern and southwestern Guizhou areas, thus establishing the basic pattern of the subsequent sedimentation. 55 The sedimentary water across the study area first deepens and then shallows from SW to NE, forming platform facies deposits in the Huishui area and then passing northward into a continental denudation area dominated by clastic deposits. The water body gradually deepens
toward the southwest of the study area. Therefore, the main body of the southern facies area is a platform-basin sedimentary area; the lithology gradually changes to limestone and shale, and the sedimentary thickness is large.
In the south, the sedimentary water body gradually becomes shallow and once again transitions to carbonate platform facies in the Zhenye area.
5.2. Paleoredox Conditions.
A series of ratios of major and trace elements, such as U/Th, V/Cr, Ni/Co, and V/(V + Ni), have been used to distinguish the redox state of water. 56−60 However, as later studies deepened, many geologists found that raw element ratios reflect the redox environment only under particular sedimentary conditions and are not widely applicable, and that fixed thresholds applied without regard to the sedimentary background are likewise limited. 61,62 More importantly, Algeo and Liu concluded that element enrichment coefficients reflect the oxygen content of water bodies more accurately. 62 On the basis of previous studies, five redox-sensitive elements, Mo, U, V, Cr, and Co, whose chemical behavior differs under different oxygen conditions, were selected to reconstruct the paleoredox environment of the Dawuba Formation during shale deposition. 63−70 To exclude the influence of terrigenous input on the results, the average element content of the upper continental crust (AUCC) was used as the standard to calculate the enrichment coefficients of these five elements. The calculation results are shown in Figure 10, and the detailed data and value ranges of the corresponding parameters are shown in Tables S2 and 2.
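As a sketch of the Al- and crust-normalized enrichment coefficient described above, X EF = (X/Al)_sample / (X/Al)_AUCC can be computed as follows; the AUCC reference values below are illustrative placeholders, not the authors' exact reference dataset.

```python
# Enrichment factor of a trace element X, normalized to Al and to an
# average upper continental crust (AUCC) reference.
# NOTE: the AUCC values below are illustrative placeholders only.

AUCC = {"Al": 8.04, "Mo": 1.5, "U": 2.8, "V": 107.0}  # Al in wt %, traces in ppm

def enrichment_factor(x_sample, al_sample, element):
    """X_EF = (X/Al)_sample / (X/Al)_AUCC; values < 1 indicate depletion."""
    return (x_sample / al_sample) / (AUCC[element] / AUCC["Al"])

# A shale with 7.0 wt % Al and 1.0 ppm Mo is Mo-depleted (X_EF < 1),
# consistent with an oxic/suboxic water column:
print(round(enrichment_factor(1.0, 7.0, "Mo"), 2))
```

Values greater than 1 flag enrichment relative to average crust; the RSTE profiles in Figure 10 are read the same way.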
The results indicate that the enrichment coefficients of the RSTEs fluctuate throughout the Dawuba Formation. This feature is due to the presence of carbonate rocks in the selected samples, so the X EF of the X section, which mainly develops limestone, is higher than that of the shale section; this characteristic of the carbonate rock is related to the depth of its sedimentary water. However, the enrichment coefficients of the RSTEs are all less than 1 in the S section of shale, and only a few data points are slightly higher than 1. On this basis, the enrichment coefficients of the RSTEs show two obvious low-value regions, indicating that the sedimentary water body of the Dawuba Formation should have experienced relatively oxygen-rich periods in these two intervals. In general, the enrichment coefficients of the RSTEs in the mud shale of the Dawuba Formation are all less than 1, indicating that the mud shale was deposited in an oxidized/suboxidized environment, and the occurrence of the low-value areas indicates that it may have experienced two oxygen-rich periods. In addition to using the enrichment coefficients of the RSTEs to distinguish the depositional environment, this paper also uses the Mo−U covariation diagram to further distinguish the oxidation environment of the Dawuba Formation during the depositional period. According to the Mo−U covariation diagram (Figure 9), all of the data points of the whole Dawuba Formation fall in the range of oxidation/suboxidation, which is consistent with the characteristics of the enrichment coefficients of the RSTEs.
The above characteristics indicate that there was sufficient oxygen in the water during the shale deposition period of the Dawuba Formation, which may be conducive to the growth of marine microorganisms but is not conducive to the preservation and accumulation of organic matter in the long term (Figure 10).

5.3. Paleoproductivity. The accumulation of organic matter in sediments is not only controlled by water redox conditions but is also closely related to primary productivity. 5 Therefore, evaluating the paleoproductivity of an area is of great significance for understanding the enrichment degree and mode of organic matter in that area. To objectively and accurately evaluate paleoproductivity, paleoproductivity indices are generally used. According to previous studies, TOC, P/Al, Ni EF , Ba XS , Cu EF , and Zn EF are generally selected as indicators of paleoproductivity. 63,72 TOC, which is directly related to the organic matter during deposition, is the most direct and important index for studying paleoproductivity. As nutrient elements, Zn, Cu, Ni, and P are directly involved in the biological cycle of algae and plankton; therefore, the enrichment degree of these elements in the water can also reflect the growth of algae and plankton during the same period. 63,73,74 Although Ba does not directly participate in the biological cycle of algae and plankton, Ba can only be preserved during deposition by combining with the SO 4 2− generated by algal metabolism; therefore, Ba can also be used to reflect algal prosperity in the water body during the deposition period. 75,76 However, due to the complexity of the deposition process, normalized parameters such as P/Al, Ni EF , Ba XS , Cu EF , and Zn EF are usually used to correct the content of each element and eliminate other influences.
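The excess-barium correction mentioned above (Ba XS : total Ba minus the detrital Ba predicted from Al) can be sketched as follows; the detrital Ba/Al ratio used here is a commonly cited placeholder, not necessarily the value the authors adopted, and negative results (as for some Dawuba samples) simply mean that total Ba falls below the assumed detrital background.

```python
# Excess ("biogenic") barium: Ba_xs = Ba_total - Al * (Ba/Al)_detrital.
# The detrital Ba/Al ratio of 0.0075 (ppm/ppm) is an illustrative value.

def ba_excess(ba_ppm, al_wtpct, ba_al_detrital=0.0075):
    al_ppm = al_wtpct * 1e4  # convert wt % Al to ppm so units match Ba
    return ba_ppm - al_ppm * ba_al_detrital

print(ba_excess(600.0, 7.0))   # 600 - 525 ppm detrital Ba -> 75.0 ppm excess
print(ba_excess(400.0, 7.0))   # negative: total Ba below the detrital background
```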
The details of the paleoproductivity indicators are shown in Table S2 and Figure 11, and the range of parameter values is shown in Table 2. In terms of variation, both the paleoproductivity and paleoredox indices of the Dawuba Formation fluctuate up and down. The paleoproductivity indices of section X are relatively higher than those of section S, and almost all of the samples with high paleoproductivity indices are carbonate rocks. According to previous studies, the lower limestone of the Dawuba Formation is mainly bioclastic limestone. 23,25 This indicates that the limestone samples in this study generally show the characteristics of high paleoproductivity. However, the enrichment coefficients of Zn, Cu, and Ni are all less than 1 in the S section of the shale, and only a few samples have values greater than 1, indicating that there were not enough nutrient elements in the seawater to supply the growth and development of organisms during this period. In addition, the average P/Al of all samples in the study area is only 0.008, and that of the shale samples is only 0.006, while the ratio is 0.011 in the AUCC. P is an element that constitutes biological skeletons, and its low content in the mud shale of the study area indicates that there was not enough P to supply plankton during deposition. Finally, although Ba XS , which is closely related to biological activity, is lower in the upper shale section than in the lower limestone section, its value is relatively low overall, and a considerable number of samples have negative values. All of these characteristics indicate that biological activity was not frequent during deposition of the Dawuba Formation. However, TOC, which directly reflects the enrichment degree of organic matter, contradicts the above parameters.
The TOC values of the upper shale are higher than those of the lower limestone, and the two high-TOC intervals occur where the productivity indices are low. It can be seen from Figure 12 that TOC is negatively correlated with the productivity indices Ba and P/Al, and the correlations are weak. These characteristics contradict the conclusion of low paleoproductivity in the Dawuba Formation, indicating that other factors may have dominated the enrichment of organic matter during the development of the mud shale.
5.4. Terrestrial Input. Terrigenous detrital input always exists during sedimentation, and terrigenous detrital material entering the sedimentary strata plays a direct role in the accumulation of organic matter. Previous studies indicate that the Lower Carboniferous Dawuba Formation is located in a slope-facies sedimentary environment with frequent terrigenous input. 71 In addition, previous studies on profiles of the Dawuba Formation have found that the interbedded sandstone beds show normal grading and that abrupt contacts between different lithologies can be seen at some locations. All of these pieces of evidence indicate that there may be a typical phenomenon of detrital input in the Dawuba Formation, namely, the existence of gravity flows. 73 In this study, the characteristics of terrigenous detrital input in the Dawuba Formation were analyzed using logging data, core observation, field-profile depositional characteristics, microscopic thin-section observation, and major- and trace-element characteristics.
First, argillaceous nodules are prevalent in the field section, and abrupt contacts between sandstone and the surrounding rock are observed. The presence of quartz clots and terrigenous debris in the mudstone observed in microscopic thin sections is also evidence of strong terrigenous input in the Dawuba Formation (Figures 2−4). In addition, previous studies have pointed out that gravity flows tend to develop in areas with large terrigenous input. 77 Thus, the development of gravity flows can be regarded as an indication that a region experienced terrigenous input during a certain sedimentary period. Whether gravity flows developed in the formation must be judged from the profile and drilling-core characteristics of the area. A series of sedimentary features, such as wavy bedding, massive bedding, clastic bedding, and convolute bedding, can be observed in cores from strata in which gravity flows developed. 77 In addition, some studies have shown that natural gamma-ray (GR) logs exhibit distinctive box-shaped characteristics, which can also be used to judge whether gravity flows developed in the formation. 77,78 Analysis of the Well ZY1 logging curve shows that, in the depth range of the anomalously low paleoredox and paleoproductivity data mentioned above, the corresponding GR curve also shows box-shaped characteristics. Meanwhile, in the same depth range, wavy bedding, sand/limestone masses, convolute bedding, and clastic deposits can be observed in the cores, reflecting the development of gravity flows. Core and logging-curve characteristics are shown in Figure 13.

Figure 9. Cross plot of U EF vs Mo EF shows the majority of samples from the entire ZY1 well in the oxidized/suboxidized range. The data from the Xiaobaiyan profile are from Ding's study, 71 which was also conducted in the same area.
In addition, elemental geochemical data are used to further analyze the terrigenous input characteristics of the Dawuba Formation. In this study, the chemical index of alteration (CIA), the terrigenous detrital index (Ti/Al), and Fe/Mn and Rb/K, which reflect changes in water depth, were selected for analysis. Previous studies have pointed out that the Fe/Mn ratio tends to increase as the water becomes shallower, whereas Rb, being easily adsorbed by clay minerals, can migrate long distances, so the Rb/K value decreases as the water becomes shallower. 79,80 By combining these two parameters, which respond oppositely to changes in water depth, changes in water depth during different sedimentary periods can be obtained more accurately. The CIA reflects the degree of weathering of rocks: when the CIA value becomes higher, more detrital material is generated, providing sufficient material for terrigenous input. Finally, Ti and Al, as main elements of the crust, tend to migrate into seawater during geological processes; Al participates in the formation of feldspar and clay minerals, while Ti forms a series of stable terrigenous heavy minerals, such as pyroxene. Therefore, the terrigenous detrital input to the ocean can be expressed as Ti/Al: the higher the Ti/Al ratio, the greater the terrigenous input during that period. 81,82 Meanwhile, the average CIA of the upper S section is 79.24, while that of the lower X section is 75.83; the value of the upper S section is higher. In addition, it can be seen from Figure 16 that the samples in the study area fall in the moderately to strongly weathered zone.

Figure 10. Chemostratigraphic profiles of redox proxies for the ZY1 well. In the dark areas, the element enrichment coefficients decrease significantly.
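For reference, the CIA used above is conventionally computed from molar proportions of the oxides, CIA = [Al2O3/(Al2O3 + CaO* + Na2O + K2O)] × 100. A minimal sketch, assuming the input CaO has already been corrected to silicate-bound CaO*:

```python
# Chemical index of alteration (CIA) from oxide wt %, using molar
# proportions. CaO* must be the silicate-bound CaO (carbonate- and
# phosphate-corrected); that correction is assumed to be done upstream.

MOLAR = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}  # g/mol

def cia(al2o3, cao_star, na2o, k2o):
    mol = lambda wt, oxide: wt / MOLAR[oxide]
    a = mol(al2o3, "Al2O3")
    return 100.0 * a / (a + mol(cao_star, "CaO") + mol(na2o, "Na2O") + mol(k2o, "K2O"))

# Unweathered granitic compositions plot near CIA = 50; the S-section
# average of ~79 sits in the moderate-to-strong weathering range.
print(round(cia(15.0, 2.0, 1.0, 3.0), 1))
```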
This indicates that the degree of weathering was higher during the mudstone deposition period, which provided conditions for the input of terrigenous debris. At the same time, there are two high CIA values in the S section, indicating that the degree of chemical weathering reached its maximum and the clastic provenance was most abundant during these periods. Coincidentally, the depths of the relatively high CIA values correspond to the locations where gravity flows are presumed to have developed; a relatively high CIA can provide a material basis for the development of gravity flows. The Ti/Al values fluctuate less than the CIA values but are at a relatively high level overall; they increase in the depth intervals where the CIA values are high, although this upward trend is not obvious.
The two parameters CIA and Ti/Al (their value ranges are shown in Table 3) indicate that the Dawuba Formation shale had sufficient detrital material during deposition and that the formation was affected by terrigenous detrital input to varying degrees; both parameters have relatively high values in the two depth ranges of presumed gravity flow development. In addition, to more accurately characterize the terrigenous input of the study area, this paper collected major-element data from the Lower Carboniferous shale in Canada and compared them with the study area. 83−85 Figure 14 shows that the sedimentation rate of the study area is almost the same as that of Canada's Big Marsh area and that the terrigenous input of the shale is similar to that of Canada's Albert area. Because the Big Marsh and Albert areas experienced high sedimentation rates and terrigenous input, the similar element characteristics suggest that the shales in the study area may also have formed in an environment of high sedimentation rate and terrigenous input (Figures 15 and 16).
The Fe/Mn and Rb/K parameters reflecting the variation of water depth show trends similar to the other geochemical parameters. At the same depths, there are two high-value segments of Fe/Mn and two obvious low-value segments of Rb/K, which indicates that the depth of the sedimentary water body of the Dawuba Formation changed significantly at these two depths. However, because the sedimentary environment of the whole Dawuba Formation was relatively stable, large changes of water depth would not occur over a short time. Therefore, the changes in the Fe/Mn and Rb/K values are due to the influence of terrigenous debris. All of these characteristics indicate that the shale of the Dawuba Formation continuously received terrigenous material during deposition and that terrigenous debris input reached its maximum in the two depth segments where gravity flows are presumed to have developed.

Figure 12. Chemostratigraphic profiles of paleoproductivity for the ZY1 well. In the dark areas are the water-depth change sections and TOC high-value sections. The data from the Xiaobaiyan profile are from Ding's study, 71 which was also conducted in the same area.

5.5. Models of Organic Matter Accumulation. Previous studies pointed out that the accumulation of organic matter in strata usually goes through three processes: generation, destruction (preservation), and dilution. 87 Moreover, only the organic matter particles that have experienced all three processes can remain in the formation and form oil and gas during later diagenesis. 87 These three processes correspond to paleoproductivity conditions, paleoredox conditions, and terrigenous detrital input, respectively. 20 Based on the sedimentary model established above, it is clear that the shale of the Dawuba Formation belongs to platform-basin facies and that the NE direction of the study area is close to the ancient land of the Upper Yangtze.
In addition, the paleoredox environment, paleoproductivity conditions, and terrigenous debris input of the Dawuba Formation shale have been analyzed above. On this basis, it is clear that the mud shale of the Dawuba Formation developed in an oxidized/suboxidized, low-productivity sedimentary environment and was, at the same time, diluted by terrigenous detrital input throughout its deposition. Under these conditions, the whole Dawuba Formation is not conducive to the enrichment of organic matter in the strata. However, the TOC test data show high values in the upper S segment of the Dawuba shale: there are two obvious high-value segments at depths of 2796−2814 and 2877−2894 m, where TOC is about 3.0% and reaches around 4.0% in some samples. This indicates that a large amount of organic matter is enriched in the Dawuba Formation. At the same time, the depths of the high TOC values coincide with the period when the oxygen content was highest and the paleoproductivity was lowest in the sedimentary water of the Dawuba Formation. These contradictory results indicate that the enrichment of organic matter in the mud shale is not completely controlled by productivity and preservation conditions. Coincidentally, however, the high TOC values coincide with the period when terrigenous input was most intense, so it is speculated that terrigenous input may have increased the organic matter content of the Dawuba Formation shale. To show this, it is necessary to show that the terrigenous detrital material carried sufficient organic matter during transport.
Combined with the filamentous organic matter observed in the SEM images of the Dawuba Formation (Figure 5A,C,I), it can be concluded that higher-plant debris is present in the Dawuba mud shale. Additionally, the kerogen carbon isotope values (δ 13 C org ) range from −26.84 to −24.36‰, which is characteristic of terrestrial C3 plants, indicating that there are indeed abundant terrestrial higher-plant components in the Dawuba shale. Therefore, the terrigenous input that persisted during deposition of the Dawuba Formation not only brought inorganic detritus from shallower areas but also carried a large amount of organic detritus. This sedimentary feature compensated for the shale's poor preservation and productivity conditions and increased the organic matter content of the Dawuba Formation shale.
Overall, the mud shale of the Dawuba Formation was deposited during a relatively oxygen-rich period. Although the oxygen in the water provided conditions for algae and microorganisms to thrive, the lack of nutrient elements to support microbial proliferation limited their numbers, resulting in the low paleoproductivity of the water during this period. Under low paleoproductivity, the high oxygen content of the water further increased the difficulty of preserving organic matter. However, during deposition of the Dawuba Formation, higher-plant debris brought in with the terrigenous input compensated for the defects in preservation conditions and productivity and increased the organic matter content of the shale. Therefore, the Dawuba Formation formed an organic matter accumulation mode dominated by terrigenous organic matter under conditions of high oxygen content and low water productivity (Figure 17).
CONCLUSIONS
(1) Both the enrichment coefficients of redox-sensitive elements and the Mo EF −U EF covariation indicate that the shale of the Dawuba Formation was deposited in an oxygen-rich water body. Oxygen-rich conditions favor the development of plankton in the ocean; however, under such conditions microbial remains in the water are difficult to preserve in the sediments, which makes the preservation of organic matter in the strata of the Dawuba Formation difficult.
(2) The paleoproductivity indices of the Dawuba Formation indicate that primary productivity was at a low level during the sedimentary period. Although the oxygen-rich water provided favorable conditions for the growth of plankton, the lack of nutrient elements in the water restricted microbial reproduction to some extent. However, the abnormally high TOC values in the Dawuba shale indicate that, although the shale of the Dawuba Formation was deposited in a period of low paleoproductivity, it still maintains high TOC values under the influence of other factors.

Figure 15. Chemostratigraphic profiles of terrigenous input and water depth for the ZY1 well. In the dark areas are the high-value area of terrigenous detrital input and the abnormal area of water depth change.

(3) The parameters reflecting terrigenous debris input indicate that the shale of the Dawuba Formation was continuously affected by terrigenous debris input during the deposition period. Meanwhile, according to core observations and logging characteristics, two strong terrigenous input events occurred at depths of 2796−2814 and 2877−2894 m, which also represent gravity flow deposition. (4) From the perspective of paleoproductivity level and water redox, the Dawuba Formation is not conducive to the generation and preservation of organic matter. Paradoxically, the high-TOC interval of the Dawuba Formation shale happens to lie at the depths where the water was most oxygen-rich and productivity was lowest.
This evidence suggests that the high TOC of the Dawuba Formation shale may not be completely controlled by production and preservation conditions. At the same time, the depths at which the gravity flows developed also correspond to the high-value region of TOC. According to the field-emission scanning electron microscopy (FE-SEM) images and the kerogen carbon isotope (δ 13 C org ) results, a large amount of terrigenous organic matter entered the sedimentary strata of the Dawuba Formation with the terrigenous detritus. This terrigenous organic matter compensated for the poor productivity and preservation conditions and is also evidence that the accumulation of organic matter in the Dawuba Formation was dominated by terrigenous detrital input. Therefore, the shale of the Dawuba Formation formed an organic matter accumulation mode dominated by terrigenous organic matter in a water body with high oxygen content and low productivity.
■ ASSOCIATED CONTENT
Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsomega.1c03809. Elemental geochemical data and TOC data (PDF).
Bone Healing Properties of Autoclaved Autogenous Bone Grafts Incorporating Recombinant Human Bone Morphogenetic Protein-2 and Comparison of Two Delivery Systems in a Segmental Rabbit Radius Defect
Purpose: This study aims to validate the effect of autoclaved autogenous bone (AAB), incorporating Escherichia coli-derived recombinant human bone morphogenetic protein-2 (ErhBMP-2), on critical-sized, segmental radius defects in rabbits. Delivery systems using absorbable collagen sponge (ACS) and fibrin glue (FG) were also evaluated. Methods: Radius defects were made in 12 New Zealand white rabbits. After autoclaving, the resected bone was reinserted and fixed. The animals were classified into three groups: only AAB reinserted (group 1, control), and AAB and ErhBMP-2 inserted using an ACS (group 2) or FG (group 3) as a carrier. Animals were sacrificed six or 12 weeks after surgery. Specimens were evaluated using radiology and histology. Results: Micro-computed tomography images showed the best bony union in group 2 at six and 12 weeks after operation. Quantitative analysis showed all indices except trabecular thickness were the highest in group 2 and the lowest in group 1 at twelve weeks. Histologic results showed the greatest bony union between AAB and radial bone at twelve weeks, indicating the highest degree of engraftment. Conclusion: ErhBMP-2 increases bony healing when applied on AAB graft sites. In addition, the ACS was reconfirmed as a useful delivery system for ErhBMP-2.
Introduction
The most widely used method for reconstruction of large bone defects is a vascularized free bone graft, although it requires a longer and more involved operation. Vascularized free bone grafts can cause donor site morbidity and carry a high risk of failure to find vessels for microanastomosis in patients who have received radiation therapy. In addition, osseous flaps have esthetic and functional disadvantages due to morphological differences from the recipient bone. To overcome these limitations, reconstruction using the removed autogenous bone after autoclaving is employed as an alternative method for segmental defects [1].
This method is advantageous because antigenicity is no longer a problem and the anatomical form of the bone defect is reproducible. However, autoclaved autogenous bone (AAB) has some limitations. It is only osteoconductive, not osteoinductive [2]. Segmental bone defects have been reconstructed with AAB in orthopedics for nearly a century; however, AAB acts as a tolerated foreign body, gradually getting resorbed and sometimes causing infection or nonunion [3][4][5][6]. There have been attempts in recent years to increase the engraftment rate of AAB by incorporating proteins such as recombinant human fibroblast growth factor-2 [7]. However, incorporation of recombinant human bone morphogenetic protein (rhBMP)-2 has not been attempted.
BMP-2 is involved in the process of differentiation of osteoblasts and chondroblasts from mesenchymal stem cells, suggesting that BMP-2 induces adult bone formation and chondrogenesis, consequently increasing the recovery rate from skeletal defects and reducing infection rates [7][8][9][10].
However, the high production cost of rhBMPs from Chinese hamster ovary cells has limited commercial use. After several attempts using prokaryotic expression systems, researchers found rhBMPs isolated from Escherichia coli (ErhBMPs) were 98% pure, adequate for bone regeneration in vivo, and producible in commercial quantities at low prices. Furthermore, their effects on bone formation have been validated in rat calvarial and fibular defect models [11][12][13].
Evaluation of the results
Animals were sacrificed at weeks six and 12 after surgery. Plain x-ray images of both radius injury sites were compared. Three-dimensional images were obtained using micro-computed tomography (micro-CT). Quantitative data were reported as the mean±standard deviation. Statistical analysis was performed using a one-way analysis of variance (one-way ANOVA), with statistical significance evaluated at P<0.05, using PASW Statistics ver. 18.0 (IBM Co., Armonk, NY, USA).
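The group comparison above can be illustrated with the one-way ANOVA F statistic; a minimal pure-Python sketch follows, where the numbers are made-up placeholders rather than the study's measurements (which were analyzed in PASW Statistics).

```python
# One-way ANOVA F statistic: between-group mean square over
# within-group mean square, for k groups and n total observations.

def f_oneway(*groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three placeholder groups, e.g. a bone-quality index in groups 1-3:
print(round(f_oneway([1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [5.0, 6.0, 7.0]), 2))
```

The F value is then compared against an F distribution with (k−1, n−k) degrees of freedom to obtain the P value used for the P<0.05 criterion.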
Radiographic evaluation and findings
In plain radiography, AABs in the control group united with the radius at only one site, either the proximal or distal cut end, six weeks after operation. Both graft sites united with the radius in group 2, whereas neither site did in group 3. Twelve weeks after surgery, new bone formation was observed at both AAB sites in the control group and group 3, but complete bony fusion at both ends of the AAB was observed only in group 2 (Fig. 4). At 6 weeks, the BT/TV, TbTh, and TbSp were highest in group 1, lower in group 2, and lowest in group 3; TbN was highest in group 2, lower in group 1, and lowest in group 3. All twelfth-week indices were highest in group 2, followed by groups 3 and 1, except for TbTh (Table 2). Coronal micro-CT images showed newly formed trabecular bone united with the AAB and radius in group 2, whereas at week 6 the AAB was surrounded by little osteoid without a trabecular pattern in groups 1 and 3. At 12 weeks, in groups 1 and 3 one of the two graft ends regained fusion between the AAB and the cortex of the radius, whereas in group 2 both cut ends regained fusion with the trabecular bone (Fig. 5).

Fig. 4. Plain radiographic findings after graft of AAB. Six weeks after operation, discontinuity was seen in groups 1 and 3, whereas 12 weeks after operation continuity between the AAB and radius was seen in all groups. Only AAB reinserted (group 1), and AAB and ErhBMP-2 inserted using an ACS (group 2) or FG (group 3) as a carrier. AAB, autoclaved autogenous bone; ErhBMP-2, Escherichia coli-derived recombinant human bone morphogenetic protein-2; ACS, absorbable collagen sponge; FG, fibrin glue.

Table note: Only subjects whose entire graft shapes were clear on the slide were measured. One subject was excluded from group 1 and group 2 at the 6th week, and from group 2 and group 3 at the 12th week. Two subjects were excluded from group 3 at the 6th week, and from group 1 at the 12th week. Engraftment degree: proportion of the united perimeter within the entire AAB perimeter. Perimeters of the entire graft were measured at low magnification, with the presence or absence of fusion judged by observing osteoid, including the osteocyte and reversal line, at high magnification. Only AAB reinserted (group 1), and AAB and ErhBMP-2 inserted using an ACS (group 2) or FG (group 3) as a carrier. ND, no data; AAB, autoclaved autogenous bone; ErhBMP-2, Escherichia coli-derived recombinant human bone morphogenetic protein-2; ACS, absorbable collagen sponge; FG, fibrin glue.
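The engraftment degree used in the histomorphometric tables is defined as the proportion of the united perimeter within the entire AAB perimeter. A minimal sketch of that computation; the perimeter values below are invented for illustration.

```python
def engraftment_degree(united_perimeter_mm: float, total_perimeter_mm: float) -> float:
    """Percentage of the graft perimeter united with host bone."""
    if total_perimeter_mm <= 0:
        raise ValueError("total perimeter must be positive")
    return 100.0 * united_perimeter_mm / total_perimeter_mm

# Hypothetical example: 18 mm of a 24 mm graft perimeter shows bony union.
degree = engraftment_degree(18.0, 24.0)
print(f"engraftment degree = {degree:.1f}%")  # -> 75.0%
```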
Histological evaluation and findings
Woven bone originating from the adjacent ulnar bone, and a reversal line suggesting bony remodeling, were observed six weeks after surgery in groups 1 and 3. However, only two specimens could be measured for engraftment degree in group 3 at the sixth week and in group 1 at the 12th week. The AAB engraftment was highest in group 2 at both six and 12 weeks and lowest in group 3 at 12 weeks (Table 3).
Discussion
The results of this study suggest that ErhBMP-2 is remarkably effective for engraftment of AAB when used in conjunction with an ACS as the carrier material, but rarely effective with FG. We used the radius as a segmental defect model.
The radius was selected as a rabbit long bone model in this experiment for the following reasons: it is smaller than the ulnar bone, bears less load, and is easily accessible.
A radial bone defect of critical size (20 mm) was formed in the rabbit, as in previous studies [14][15][16]. We followed the surgical protocols of Huber et al. [17]. Autoclaving is a useful method for preventing local recurrence of malignant bone tumors [19].
Zellin [7] reported on the revitalization of replanted autogenous bone. As a carrier, FG prevents bone ingrowth by controlling BMP diffusion and BMP-stimulated bone growth [33]. In this experiment, the ErhBMP-2 may have been partially lost when the fibrin block was transferred to the defect site. Conversely, injection of a mixture of FG and rhBMP-2 has been reported to promote healing in a bone-tendon injury [34]. FG could serve as a carrier material in future systems in which ErhBMP-2 dissolves properly and elutes without loss.
AAB incorporating ErhBMP-2 was expected to recover mechanical strength soon after reconstruction.
Conclusion
The results of this study suggest that ErhBMP-2 is remarkably effective for engraftment of AAB when used in conjunction with the ACS as a carrier material, but is rarely effective with FG. When comparing bone formation in the control and experimental groups, more new bone was observed in the control group six weeks after surgery, while by the twelfth week it became most noticeable in group 2. In terms of the replanted and adjacent bone, group 2 showed the best continuity compared with groups 1 and 3 at the end of the observation period, suggesting that ErhBMP-2 increases bony healing when applied to an AAB graft site.
In addition, we reconfirmed the effectiveness of the ACS as a delivery system for ErhBMP-2.
An Experimental Study of Overlap Ratio Effect to Savonius Water Current Turbine by Using Myring Equation for n=1
This study experimentally investigates a Savonius water turbine designed using the Myring equation for n=1. The overlap ratio is varied over 0, 0.05, 0.1, 0.15, 0.2, 0.25 and 0.3; an overlap ratio of 0 corresponds to the conventional half-circle turbine. The experiments were performed in a flume tank 1.1 m wide and 0.8 m high at a water velocity of 0.22 m/s. The turbine has a height of 0.4 m and a diameter of 0.4 m. The torque coefficient and the power coefficient were calculated; the power coefficient serves as the performance coefficient. The results indicate that the best performance coefficient improved by about 62.83% at an overlap ratio of 0.2.
Introduction
The Savonius turbine is a simple turbine whose buckets have a half-circle shape. The Savonius turbine can be applied in both water and air flows, and its shaft can be oriented on a vertical or horizontal axis. A horizontal application has been studied experimentally by Nakajima et al. [1]. Marine current energy has been investigated by Yakkob [2]. The vertical-axis application has been studied by varying the bucket number, with two buckets giving the best performance [3]. The Myring equation has been examined numerically, with the best performance obtained at n of one [4].
The effect of adding a cylinder in front of and beside the turbine to improve performance has been studied numerically. The purpose of this study is to investigate experimentally the effect of the overlap ratio on the performance of the Myring n=1 turbine. The overlap ratio is varied over 0, 0.05, 0.1, 0.15, 0.2 and 0.25. The flume tank is the facility used to test the turbine and record torque and power data. The power coefficient is referred to as the performance coefficient.
Experimental Setup
The Myring equation is used to determine the shape of the turbine, as shown in equation (1). The Myring model uses an overlap ratio of 0, i.e., without overlap. The flume tank is the facility used to obtain data on the Myring turbine, as seen in Figure 2. The Myring shape uses equation (1), the tip speed ratio (TSR) uses equation (2), the torque coefficient uses equation (3), the power coefficient uses equation (4), and the torque uses equation (5).
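Equations (1)–(5) are referenced but not reproduced in this text. Under the standard definitions commonly used for Savonius rotors and the usual Myring hull-profile formula (these forms are an assumption here, since the paper's own equations are not shown), the blade profile and the performance coefficients can be sketched as:

```python
# Standard Savonius-rotor definitions (assumed; the paper's equations (1)-(5)
# are not reproduced in this text). All quantities in SI units.
RHO = 998.0        # water density, kg/m^3
U = 0.22           # free-stream water velocity, m/s (from the experiment)
D = 0.4            # turbine diameter, m
H = 0.4            # turbine height, m
R = D / 2.0        # turbine radius, m
A = D * H          # swept area, m^2

def myring_nose_radius(x: float, a: float, d: float, n: float) -> float:
    """Myring nose profile r(x) = (d/2) * [1 - ((x - a)/a)^2]^(1/n).
    For n = 1 this reduces to a parabolic profile."""
    return (d / 2.0) * (1.0 - ((x - a) / a) ** 2) ** (1.0 / n)

def tip_speed_ratio(omega: float) -> float:
    """TSR = omega * R / U, with omega in rad/s."""
    return omega * R / U

def torque_coefficient(torque: float) -> float:
    """Ct = T / (0.5 * rho * A * R * U^2)."""
    return torque / (0.5 * RHO * A * R * U ** 2)

def power_coefficient(torque: float, omega: float) -> float:
    """Cp = Ct * TSR = T * omega / (0.5 * rho * A * U^3)."""
    return torque_coefficient(torque) * tip_speed_ratio(omega)

# Hypothetical measured point: torque 0.05 N*m at 0.6 rad/s.
omega, torque = 0.6, 0.05
print(f"TSR = {tip_speed_ratio(omega):.3f}")
print(f"Ct  = {torque_coefficient(torque):.3f}")
print(f"Cp  = {power_coefficient(torque, omega):.3f}")
```

The identity Cp = Ct · TSR ties equations (2)–(4) together: the power coefficient follows directly from the measured torque and rotational speed.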
The torque coefficient as the function of TSR
The graphical results of the torque coefficient can be seen in the corresponding figure. The torque coefficient decreases as the tip speed ratio (TSR) increases. The increase of the torque coefficient is due to the decrease of the weight in the weight pan during the experiment in the flume tank. Blades with an overlap ratio have a higher torque coefficient than the conventional blade shape.
The power coefficient as the function of TSR
The experimental results for the power coefficient can be seen in Figure 4, where the overlap ratio is varied over 0, 0.05, 0.1, 0.15, 0.2 and 0.25. This study indicates that the overlap ratio can increase the performance above that of the conventional blade shape. The performance coefficient is the power coefficient shown in Figure 4. The best performance occurred at an overlap ratio of 0.2.
Improvement of Savonius turbine performance
The improvement can be calculated from the maximum power coefficient, which represents the performance coefficient. The performance improvement is computed relative to the conventional shape. The results of the performance improvement (in %) are displayed in Table 1.
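The improvement described here is a relative change of the maximum power coefficient with respect to the conventional (overlap ratio 0) blade. A sketch with invented Cp values, chosen so that the best case reproduces the reported 62.83% figure; Table 1's actual numbers are not shown in this text.

```python
def improvement_percent(cp_variant: float, cp_conventional: float) -> float:
    """Relative improvement of the maximum power coefficient, in percent."""
    return 100.0 * (cp_variant - cp_conventional) / cp_conventional

# Hypothetical maximum power coefficients (illustrative values only; picked so
# the overlap-0.2 case matches the reported ~62.83% improvement).
cp_conventional = 0.113  # overlap ratio 0 (conventional half circle)
cp_by_overlap = {0.05: 0.120, 0.10: 0.131, 0.15: 0.142, 0.20: 0.184, 0.25: 0.150}

for ratio, cp in cp_by_overlap.items():
    imp = improvement_percent(cp, cp_conventional)
    print(f"overlap {ratio:.2f}: Cp = {cp:.3f}, improvement = {imp:+.2f}%")

best_ratio = max(cp_by_overlap, key=cp_by_overlap.get)
print("best overlap ratio:", best_ratio)
```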
Conclusion
The results showed that the turbine using the Myring equation for n = 1 obtained its best performance at an overlap ratio of 0.2. The turbine performance increased by about 62.83% compared to the other overlap ratios and the conventional blade.
The Challenges of Poverty, Types and Its Causes
Poverty is an economic state in which people experience scarcity or the lack of certain goods that are necessary for human life, such as money and material things. Poverty is thus a multifaceted concept with social, economic, and political components. Poverty has become a serious problem in our world. Although many organizations have been created to find solutions to this problem, none has been able to fully rid the world of poverty. The most common fact that emerges when studying poverty data is that poverty usually exists in developing nations. Around 25 percent of the world's population is currently estimated to live in dire need. Even the government of India has acknowledged that around 20 crore people in India live in a state of miserable poverty with no access to clean drinking water, sanitation, and two full meals. In light of these issues, this paper considers the challenges of poverty and its causes in India, as well as its types.
Introduction
Poverty is the state of one who lacks a usual or socially acceptable amount of money or material possessions. Poverty is said to exist when people lack the means to satisfy their basic needs. In this context, the identification of poor people first requires a determination of what constitutes basic needs. These may be defined as narrowly as "those necessary for survival" or as broadly as "those reflecting the prevailing standard of living in the community." The first criterion would cover only those people near the edge of starvation or death from exposure; the second would extend to people whose nutrition, housing, and clothing, though adequate to preserve life, do not match those of the population as a whole. The problem of definition is further compounded by the noneconomic connotations that the word poverty has acquired. Poverty has been associated, for example, with poor health, low levels of education or skills, an inability or an unwillingness to work, high rates of disruptive or disorderly behavior, and improvidence. While these attributes have often been found to exist with poverty, their inclusion in a definition of poverty would tend to obscure the relation between them and the inability to provide for one's basic needs. Whatever definition one uses, authorities and laypersons alike commonly assume that the effects of poverty are harmful to both individuals and society. [1-5]
2. Objectives:
To study poverty and its causes in India.
To study the types of poverty.
Methodology:
A clear methodology is essential to structure the research work; both qualitative and quantitative methods were used in the study. This article relies on secondary data, collected from written sources such as various periodicals, articles, reports, books, journals, and literature on the subject. For the purpose of gathering the most recent updated information on the topic, e-sources were also consulted.
Factors of Poverty:
Access to good schools, health care, electricity, safe water, and other critical services remains elusive for many people and is often determined by socioeconomic status, gender, ethnicity, and geography. For those able to move out of poverty, progress is often temporary: economic shocks, food insecurity, and climate change threaten their gains and may force them back into poverty.

Poverty is a difficult cycle to break and is often passed from one generation to the next. Typical consequences of poverty include alcohol and substance abuse, less access to education, poor housing and living conditions, and increased levels of disease. High levels of poverty are also likely to cause increased social tensions as inequality rises. These problems often lead to rising crime rates in communities affected by poverty.
Cyclical Poverty:
Cyclical poverty refers to poverty that may be widespread throughout a population, but where the occurrence itself is of limited duration. In nonindustrial societies (present and past), this sort of inability to provide for one's basic needs rests mainly upon temporary food shortages caused by natural phenomena or poor agricultural planning. Prices would rise because of scarcities of food, which brought widespread, albeit temporary, misery.
In industrialized societies the chief cyclical cause of poverty is fluctuation in the business cycle, with mass unemployment during periods of depression or serious recession. Throughout the 19th and early 20th centuries, the industrialized nations of the world experienced business panics and recessions that temporarily enlarged the numbers of the poor. The United States' experience of the Great Depression of the 1930s, though unique in some of its features, exemplifies this sort of poverty. Until the Great Depression, poverty resulting from business fluctuations was accepted as an inevitable consequence of a natural process of market regulation. Relief was granted to the unemployed to tide them over until the business cycle again entered an upswing. The experience of the Great Depression inspired a generation of economists, such as John Maynard Keynes, who sought solutions to the problems caused by extreme swings in the business cycle. Since the Great Depression, governments in nearly all advanced industrial societies have adopted economic policies that attempt to limit the ill effects of economic fluctuation. In this sense, governments play an active role in poverty alleviation by increasing spending as a means of stimulating the economy. Part of this spending comes in the form of direct assistance to the unemployed, either through unemployment compensation, welfare, and other subsidies or by employment on public-works projects. Although business depressions affect all segments of society, the impact is most severe on people of the lowest socioeconomic strata because they have fewer marginal resources than those of higher strata. [6-9]
Collective Poverty:
In contrast to cyclical poverty, which is temporary, collective or "generalized" poverty involves a relatively permanent insufficiency of means to secure basic needs, a condition that may be so widespread as to describe the average level of life in a society or that may be concentrated in relatively large groups in an otherwise prosperous society. Both generalized and concentrated collective poverty may be transmitted from generation to generation, parents passing their poverty on to their children.
Collective poverty is relatively widespread and enduring in parts of Asia, the Middle East, much of Africa, and parts of South America and Central America. Life for the bulk of the population in these regions is at a minimal level. Nutritional deficiencies cause disease seldom seen by doctors in the highly developed countries. Low life expectancy, high levels of infant mortality, and poor health characterize life in these societies.
Collective poverty is usually related to economic underdevelopment. The total resources of many developing nations in Africa, Asia, and South and Central America would be insufficient to support the population adequately even if they were equally divided among all of the inhabitants. Proposed remedies are twofold: (1) expansion of the gross national product (GNP) through improved agriculture or industrialization, or both, and (2) population limitation. Thus far, both population control and induced economic growth in many countries have proved difficult, uncertain, and at times controversial or disappointing in their results. [10][11][12][13][14][15] An increase of the GNP does not necessarily lead to an improved standard of living for the population at large, for a number of reasons. The most important reason is that, in many developing countries, the population grows even faster than the economy does, with no net reduction in poverty as a result. This increased population growth stems primarily from lowered infant mortality rates made possible by improved sanitation and disease control. Unless such lowered rates eventually result in women bearing fewer children, the result is a sharp acceleration in population growth. To reduce birth rates, some developing countries have undertaken nationally administered family-planning programs, with varying results. Many developing nations are also characterized by a long-standing system of unequal distribution of wealth, a system likely to continue despite marked increases in the GNP. Some authorities have observed the tendency for a large portion of any increase to be siphoned off by individuals who are already wealthy, while others claim that increases in GNP will always trickle down to the part of the population living at the subsistence level.
Concentrated Collective Poverty:
In many industrialized, relatively affluent countries, particular demographic groups are vulnerable to long-term poverty. In city ghettos, in regions bypassed or abandoned by industry, and in areas where agriculture or industry is inefficient and cannot compete profitably, there are found victims of concentrated collective poverty. These people, like those afflicted with generalized poverty, have higher mortality rates, poor health, low educational levels, and so forth when compared with the more affluent segments of society. Their chief economic traits are unemployment and underemployment, unskilled occupations, and job instability. Attempts at remedy focus on ways to bring the deprived groups into the mainstream of economic life by attracting new industry, promoting small business, introducing improved agricultural methods, and raising the level of skills of the employable members of the society.
Case Poverty:
Resembling collective poverty in relative permanence but differing from it in terms of distribution, case poverty refers to the inability of an individual or family to secure basic needs even in social surroundings of general prosperity. This inability is generally related to the lack of some basic attribute that would permit the individual to maintain himself or herself. Such individuals may, for example, be blind, physically or emotionally disabled, or chronically ill. Physical and mental handicaps are usually regarded sympathetically, as being beyond the control of the people who suffer from them. Efforts to ameliorate poverty due to physical causes focus on education, sheltered employment, and, if necessary, economic maintenance.
Types of Poverty:
On the basis of social, economic, and political aspects, there are different ways to identify the types of poverty: 9.1 Absolute poverty: Also known as extreme poverty or abject poverty, it involves the scarcity of basic food, clean water, health, shelter, education, and information. Those who live in absolute poverty tend to struggle to survive and experience many child deaths from preventable diseases like malaria, cholera, and water-contamination-related diseases. Absolute poverty is rare in developed countries.
Relative Poverty:
It is defined from the social perspective as a standard of living compared to the economic standards of the people living in the surrounding area. Hence it is a measure of income inequality. For example, a family can be considered poor if it cannot afford vacations, or cannot buy presents for children at Christmas, or cannot send its young people to university.

Usually, relative poverty is measured as the percentage of the population with income less than some fixed proportion of the median income. It is a widely used measure to ascertain poverty rates in wealthy developed nations.
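The median-based measure just described can be sketched as follows; the 60%-of-median threshold and the income figures below are illustrative assumptions, since the text does not fix the fraction.

```python
def relative_poverty_rate(incomes, fraction_of_median=0.6):
    """Share of people whose income falls below fraction_of_median * median income."""
    ordered = sorted(incomes)
    n = len(ordered)
    # Median: middle value, or the mean of the two middle values for even n.
    median = ordered[n // 2] if n % 2 else (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    threshold = fraction_of_median * median
    return sum(1 for x in incomes if x < threshold) / n

# Hypothetical annual incomes (illustrative numbers only).
incomes = [8_000, 12_000, 15_000, 21_000, 25_000, 30_000, 34_000, 40_000, 55_000, 90_000]
rate = relative_poverty_rate(incomes)
print(f"relative poverty rate: {rate:.0%}")
```

Note that this measure tracks inequality rather than deprivation: doubling every income leaves the rate unchanged.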
Situational Poverty:
It is a temporary type of poverty based on the occurrence of an adverse event such as an environmental disaster, job loss, or a severe health problem. People can help themselves even with a little assistance, as the poverty results from an unfortunate event.
Generational Poverty:
It is handed down within individuals and families from one generation to the next. This is more complicated, as there is often no escape route: people are trapped in its causes and unable to access the tools required to get out of it.
Rural Poverty:
It occurs in rural areas with populations below 50,000. There are fewer job opportunities, less access to services, less support for disabilities, and fewer quality education opportunities. People tend to live mostly on farming and the other menial work available in the surroundings.
9.6 Urban Poverty: It occurs in metropolitan areas with populations over 50,000. These are some major challenges faced by the urban poor: • Limited access to health and education. • Inadequate housing and services.
• Violent and unhealthy environment because of overcrowding.
• Little or no social protection mechanism.
Conclusion:
Poverty has become a serious problem in our world. Although many organizations have been created to find solutions to this problem, none has been able to free our world completely from poverty. The most widely recognized fact that emerges when we consider data about poverty is that poverty mostly occurs in developing countries.
What are the impacts of poverty on our world? Do you know that more than 21,000 children die every day around the globe due to diseases, conflicts, and various other reasons? Most of these deaths are caused by poverty.
As a young student, I would like to propose a few measures that could be helpful in the effort to reduce poverty. Basically, we need to find ways to slow population growth in our world. Natural resources do not expand along with the population, which is growing rapidly. When we consider families in poor countries, they often have at least six or seven children, yet those children do not have proper health care, and the parents cannot provide proper education for them. Moreover, those parents cannot provide good food with adequate nutrients to their children due to lack of wealth. Because of that, the children's health declines considerably, the development of their minds becomes insufficient, and their ability to receive a proper education decreases.
So finding ways to develop the health and education sectors in these countries is a good strategy to reduce poverty. First, we need to create services for pregnant women in those countries and provide them good food with proper nutrients to keep the children healthy. Then the children will be healthy and their minds will be in a better condition to receive a proper education. Developing the education sectors of those countries with the help of charitable organizations and the governments of developed countries is likewise a good step toward building education systems in those countries.
Synesthetic Metering for Speed - Evaluation applied to young drivers' speeding
In previous work, we discussed the reduction of accident levels by improving the driver's perception of the environment through multisensory interaction [27]. Since vision is the primary sense used during driving, it suffers a large overhead and therefore leaves room for increased human error. For this reason, the use of alternative means of communication has been proposed by associating the sense of audition with the vehicular interface. The sense of audition used as a complementary interface involves several issues that have been identified, such as rhythm and intensity. The research described in this paper aims, first, at contributing to the reduction of accidents caused by speeding, and second, through the use of multisensory information, at aiding the driver to maintain a more regular and controlled speed. It is a system for conscious users who are given the choice of establishing their own limits, using their goals and needs as reference. Based on sound attributes, auditory communication, and the aim of helping the driver to maintain a more regular speed for greater safety, a prototype system has been developed. As research methodology, tests were conducted using a driving simulator to evaluate the efficiency of the system, including users' preferences and comfort. Through the data obtained from the simulator, we sought to observe the variation of the average speed under the influence of time pressure. The questionnaire indicated no discomfort in using the auditory icons, which also helped to keep greater concentration on the road compared to the use of the speedometer only. Tests indicated that the duration of the trip as well as the dynamics of the landscape are important variables.
Introduction
Italy suffers from the problem of crashes involving speeding: in 2012, the number of accidents in Italy caused by speeding was approximately 31,000, equivalent to 16.6 percent of national crashes, causing 3,653 deaths and more than 264,000 injuries [1]. These accidents are especially provoked by young drivers [2]. In the United States the situation is no different: in the same period, 10,219 fatalities caused by speeding and involving young drivers (between 15 and 20 years old) were registered, representing 37 percent of males and 24 percent of females [3].
A variety of factors influence the speeding problem, such as alcohol consumption, estimation of the driving speed, lack of experience, and the influence of other drivers through speed comparison; psychological and social factors are the main ones.
Concerns about speeding consequences are motivation to develop devices to inform the driver about his speed. This can be evidenced through the history itself, since the use of vehicular interfaces to inform the vehicle speed has been one of the earliest interfaces used, from the very first years of the creation of the automobile [4].
Nowadays, to reduce accident statistics, Intelligent Speed Adaptation (ISA) devices are used. These systems use interfaces to inform the driver of the speed through sound alerts (verbal messages, tone signals, beeps) or multisensory warnings, such as combined auditory and visual or auditory and tactile alerts. Multisensory information through auditory warnings applied to other devices has been demonstrated to be more effective in improving performance and reducing reaction times, helping the driver to perform the primary tasks [5]. However, a concern in applying the ISA system is that auditory warnings in some situations showed lower comfort when used as vocal messages or beeps combined with visual and haptic systems, annoying drivers and therefore not being appreciated, and slightly increasing the mental workload [6,7,8,9].
Vehicular interfaces making use of multisensory information are applied to different situations, demonstrating to be an efficient way to bring information to the driver. This information is responsible to orientate the driver to have control over devices and especially over his actions and decisions regarding the vehicle in the traffic.
The driver's activity could be divided into three different categories of tasks. The first one, called the primary, is responsible for activities such as maneuvering the steering wheel and pedals, the secondary tasks are those which maintain the security (for instance, turning on devices like wipers), and the tertiary are responsible for the activation all the other comfort, information and entertainment devices [10].
Such a quantity of tasks generates competing demands, resulting in mental overload due to the driver's limited cognitive resources, mainly regarding vision, which is the most heavily used sense compared with the others [11,12].
The in-vehicle system complexity, together with the presence of different devices inside the vehicle, has motivated growing research into new ways of reducing the driver's information overload in different situations. Some of these studies seek to integrate multisensory information using senses such as audition and touch, with important results for driver performance.
Using multisensory interfaces in the in-vehicle devices create a more immediate and richer means of communication. They are able to convey a more complete information to describe compound situations, showing, for instance, the direction of the event and urgency levels, that may improve Reaction Times (RTs), and reduce workload [13,14,15,16]. The perception and distinction of the urgency level is also influenced by the complexity of the task to be performed, resulting in the different RTs [17].
Many studies in the automotive field use guidelines where the urgency of an alarm is represented by fundamental frequency, intensity, duration of the pulses and inter-pulses [18].
Natural sounds can offer a more viable solution to inform a state of a product. Natural sounds can contain semantic information of the event, informing intuitively and naturally [19] and therefore creating a positive emotional experience between the product and the user. This was demonstrated in other in-vehicle interfaces, through the auditory icons concept [20,21] showing a direct relationship between the warning with the type of event and demonstrating good results.
Some variations of ISA systems are used as "mandatory", which inhibit speeding through stringent actions such as reducing fuel injection; others may be less stringent consisting of "advisory" reminders to the driver through visual or audio signal; and yet there are the "voluntary" ones that make difficult to speed up by upward pressure on the accelerator pedal, but even so being usually possible to override this limitation. Drivers, in general, prefer to be allowed to override the ISA when necessary, and are not very receptive to stringent speed control systems [22].
The acceptance in the use of "voluntary" ISA during tests determined that drivers who drove more times above the speed limit, had a lower acceptance of the system. In other words, drivers making use of "voluntary" systems are less likely to respect the warnings, violating the speed limit [23].
Low acceptance in "mandatory" devices can be explained by the fact that they often use strong actions that inhibit speeding, as reducing fuel injection or closing the throttle and applying the small brake pressure to the hydraulic system, resulting in frustration of drivers, but above all taking of the driver the freedom to choose when to activate it. Stopping this action causes a feeling of frustration in the vehicle control. The personal frustration for incapacity in the use of a device when it does not correspond to your expectations is one of the most hated feelings for user interfaces and it is necessary the use of conceptual models, mapping and affordance applied to them [24].
The low acceptance of audio warnings such as verbal messages or beeps in ISA devices can be attributed to the fact that these warnings often have no direct connection with the event. Users of devices that emit beeps or verbal sounds tend to become annoyed and uncomfortable because of this artificiality, especially when the sounds play for long periods. In addition, the novelty effect must be considered: hearing a sound for the first time usually attracts attention and triggers reactions, but as it becomes familiar and repetitive the driver grows accustomed to the alert, which then becomes annoying and induces sleepiness instead of prompting an action.
Frequency and rhythm can convey the quality of information about a vehicle's speed. One study analyzed the transmission of information through the intermittence of an auditory warning, using the perception of non-musical rhythms in front of a computer: semantic differential scales associated rhythms with the meanings "accelerating" and "decelerating" [25]. Although participants showed low familiarity with these test conditions, it is hypothesized that such sounds could be better perceived when used in a specific context or activity such as driving. Associating a sound with a visual experience can offer a different perception of an event. Moreover, auditory warnings connected with the event can become a way to convey its quality, informing how far the driver is over the speed limit.
Examples supplied by research include test driver interfaces that use simple tones to advise the driver. Auditory signals were repeated (looped) to express the level of urgency and to indicate how much to slow down or speed up [26].
Offering customization options, such as alternative means of communication and interaction, gives the driver more control over the device and can increase the usability and acceptance of the system. Another important point is matching the intermittence of the audio warning to the acceleration and deceleration of the vehicle. A further relevant aspect is the use of natural sounds, which supply information to the driver more intuitively without annoying him. These points are believed to offer new experiences to the driver, helping him accept the alarm better than the ISA interfaces in use today.
It is necessary to design more articulated, informative, and intuitive auditory alerts that take into account semantic factors associated with the act of driving. Such alerts can offer a more natural interaction, creating a more positive experience with the system's interfaces.
This paper proposes a continuation of previous work [27]. Initial tests applied the auditory icons concept with the aim of observing drivers' first impressions of driving between two speed limits (a minimum and a maximum) while guided by the natural audio warning. Driving under time pressure was also evaluated.
As initial studies of audio interfaces have demonstrated, auditory warnings can inform the driver not only of the vehicle's effective speed but also of the quality of the driving. Practical tests were conducted to identify the students' initial perception of rhythm, intensity, and acceptance. During a driving-simulator test, possible future improvements to the application as an effective speed-control aid were also analyzed. The acceptability of the audio warning used during the test was further evaluated through a questionnaire administered at the end of the test.
Participants
Twenty young students of the Polytechnic University of Turin with a good knowledge of Italian (19 male, 1 female) were selected to participate in the test. The age range was 20 to 24 years, with an average age of 21.4 years (SD = 1.5). The participants held a valid driver's license, with an average driving experience of 3.1 years (SD = 1.8), and none had ever received any type of penalty. No vision or hearing problems were detected. They participated voluntarily (with no specific reward), after receiving a questionnaire via email and being informed about the event through posters displayed in the university.
Design
The test used a driving-simulator video game, rFactor® v1255, together with software created to trigger the sound alerts when necessary and to acquire data (speed, distance, time, and the time of each audio-warning activation), generating a report for statistical purposes at the end of each test.
The rFactor game ran on a laptop with 4 gigabytes of RAM, while the data-collection software ran on a desktop computer with 1 gigabyte of memory.
Two pairs of speakers were used: one for the audio warnings output by the desktop and one for the engine sound output by the laptop. The first pair, responsible for the sound notification, was positioned on the table in front of the participant (100 cm away), with 100 cm between the two speakers. The second pair, responsible for the engine sound, was positioned under the table with the same spacing. The simulation was projected at a size of 51 inches, 120 cm away from the participant. A 17" LCD monitor, positioned at the center of the projection, showed the instantaneous speed and the maximum speed on the route. The joystick was a Thrustmaster® Universal Challenge 5-in-1 V.4, with a steering wheel (butterfly-type gear paddles) and pedals (brake and accelerator). The researcher remained on the right side of the participant, 100 cm away (see Fig. 1).
Sound design
The auditory warning was based on the auditory icon concept, mimicking the wind sound perceived when driving with the windows open, as proposed in previous work [27]. The sound is an mp3 file of about 1 second; the warning was triggered every 100 meters covered by the vehicle, so that the time between intermittent sound alerts is directly related to the speed. For this stage of the research, the sound was downloaded from a sound-effects website [28] and modified with Audacity® software to generate two different auditory icons: one to alert the driver when under the speed limit and another when over it. The sound intensity of the auditory icon during the tests was set to 85 dB(A), and the engine sound was set to 65 dB(A) at 120 km/h.
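Because the warning fires once per 100 m covered, the time between alerts shrinks as speed grows. A minimal arithmetic sketch of this relation (illustrative only, not the authors' test software):

```python
def warning_interval_s(speed_kmh: float, trigger_distance_m: float = 100.0) -> float:
    """Time between successive auditory warnings when an alert fires
    every `trigger_distance_m` metres covered by the vehicle."""
    if speed_kmh <= 0:
        raise ValueError("speed must be positive")
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return trigger_distance_m / speed_ms

# At 120 km/h a warning every 100 m means one alert every 3 seconds;
# at 40 km/h the same distance takes 9 seconds.
print(round(warning_interval_s(120), 1))  # 3.0
print(round(warning_interval_s(40), 1))   # 9.0
```

This is why the intermittence itself carries speed information: a driver far above the limit hears a noticeably denser stream of alerts.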
The driving scenario
The total route corresponds to a distance of 25 km, completely straight and without traffic, divided among 5 maximum speeds (40, 60, 80, 100, and 120 km/h). The route was split into five 5 km sections, one for each speed. To obtain a more realistic scenario, the 5 km corresponding to each speed were distributed randomly along the route, ensuring that participants did not drive a continuous 5 km at each speed. For each part of the route, maximum and minimum allowed speeds were established; the minimum speed was always 5 km/h lower than the respective maximum (35, 55, 75, 95, and 115 km/h). These two limits define a safety margin between the minimum and maximum speeds. No warning sound is heard while the participant keeps the car within those speed limits (extremes included). For example, Fig. 2 shows the two limits of 55 and 60 km/h: while the participant is outside the safety margin, below 55 km/h or above 60 km/h, the warning is heard every 100 meters covered. The time between intermittent sound alerts is therefore directly related to the speed at which the car is driven, and no warning sound is emitted while the driver stays between the two speeds. During the test, visual information was provided through road signs only for the maximum speed. The same screen also showed the instantaneous speed and the elapsed test time, which the driver had to take as references. For the minimum speed, the participants already knew the safety margin, which was explained during the presentation of the test, so this information was not provided visually.
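The trigger rule described above can be sketched as follows. This is a hypothetical simplification; the function and parameter names are ours, not those of the test software:

```python
def should_warn(speed_kmh: float, max_kmh: float,
                metres_since_last_warning: float,
                margin_kmh: float = 5.0,
                trigger_distance_m: float = 100.0) -> bool:
    """Silent while speed stays within [max - margin, max] (extremes
    included); outside that band, a warning fires each time another
    trigger_distance_m metres have been covered."""
    inside_band = (max_kmh - margin_kmh) <= speed_kmh <= max_kmh
    return (not inside_band) and metres_since_last_warning >= trigger_distance_m

print(should_warn(57, 60, 150))  # False: inside the 55-60 km/h band
print(should_warn(63, 60, 150))  # True: over the limit, 100 m covered
print(should_warn(63, 60, 50))   # False: over the limit, but not yet 100 m
```

The simulation software would call such a check continuously, resetting the distance counter each time a warning is played.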
Procedure
The participants performed two tests. In the first, called "Total travel time", they had to complete the 25 km route without exceeding either the maximum or the minimum speeds, within a time frame of 24 minutes. This was explained to the drivers at the beginning of the test. Two characteristics were measured: time, to establish which group could complete the route closest to the predetermined duration; and speed, to see which group could stay inside the speed limits for longer despite the time pressure to complete the route within the predefined period.
The second test was called "Maintenance of the speed between the minimum and the maximum (average)". Participants were instructed to complete the same route without breaking the minimum and maximum speed limits, which means the task amounted to completing the route in the average time; unlike the first test, however, no information about the time to finish was given. As in the first test, the average speed was measured to determine which group could stay inside the speed limits for longer, this time without time pressure. Participants were divided into two groups of 10 students each. The first (control group) used the speedometer to control the speed. The second used both the speedometer and the audio warning (auditory icon), which sounded each time they were above or below the speed shown to them. Arriving at the predetermined time implies traveling between the two speed limits. The goal of this division was to compare the results of the control group with those of the group that heard the audio warning.
Although both groups performed both tests, the control group first took the "Total travel time" test and then the "Maintenance of speed between the minimum and the maximum" test, while the group using the audio warnings did the opposite, starting with the "Maintenance of speed between the minimum and the maximum" test. This was done to minimize any conditioning of the participants by the time provided in one of the tests, which could affect the subsequent test. At the start of each session, the participant received an instruction sheet explaining how the first test would be conducted. If the participant had no questions, he was then free to drive the same route for 5 minutes at any speed. The goal of this phase was to make sure the test environment was properly adjusted (the steering wheel and pedals, and the information on the screen, including the audio warnings), thereby avoiding a "novelty effect" when the official test started. The second test followed the same procedure as the first.
After these tests, the participants in the group that used the auditory warning answered a questionnaire giving their impressions of the test. The questionnaire asked whether the auditory warnings were considered annoying, whether the warning helped maintain concentration on the road compared with the speedometer alone, and whether the warning sound was able to signal both falling below the minimum speed and exceeding the maximum speed.
Qualitative results
At the end of the tests, a four-question questionnaire was administered to the participants who did the test with the speedometer together with the auditory icons. According to the results, the auditory icon was not annoying during the test for nine out of ten participants. All participants (ten out of ten) also declared that the warning sound helped them keep their focus on the road, compared with the speedometer alone. They were asked whether the auditory icons were able to signal falling below the minimum speed and exceeding the maximum speed: for the minimum speed, the judgments of the auditory icons were more homogeneous across participants, while the information about exceeding the maximum speed proved even more effective.
Quantitative results
According to the Analysis of Variance (ANOVA), the statistical results showed no significant difference between the speeds driven with the speedometer alone and with the speedometer plus auditory icons (p > 0.05). No variation in speed was observed between driving with auditory icons and control driving at any of the speeds applied in the test (p = 0.922 at 40 km/h; p = 0.543 at 60 km/h; p = 0.959 at 80 km/h; p = 0.506 at 100 km/h; and p = 0.731 at 120 km/h). Time pressure in one of the tests likewise produced no variation in average speed compared with the test without time pressure (see Fig. 3 a, b, c, d), with or without the auditory icon (p = 0.733 at 40 km/h; p = 0.112 at 60 km/h; p = 0.223 at 80 km/h; p = 0.120 at 100 km/h; and p = 0.902 at 120 km/h).
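For illustration of the test used above, the one-way ANOVA F statistic can be computed by hand on synthetic speed samples. The data below are invented, not the study's measurements:

```python
import statistics

def f_statistic(*groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand = statistics.mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - statistics.mean(g)) ** 2 for v in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

control = [59.1, 60.4, 58.7, 61.0, 59.8]  # speedometer only (synthetic)
icon    = [59.5, 60.1, 59.0, 60.6, 59.9]  # speedometer + auditory icon (synthetic)

F = f_statistic(control, icon)
# With df = (1, 8) the 5% critical value is about 5.32; an F far below it,
# as here, means no significant difference in mean speed.
print(F < 5.32)  # True
```

In the actual study this comparison was repeated for each of the five speed limits, and in every case the null hypothesis of equal mean speeds could not be rejected.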
Conclusion and discussion
A former work proposed a speed control system to help the driver travel more safely, using auditory icons that mimic the wind sound heard when traveling with the vehicle windows open to inform the driver when he is under or over the speed limits. Building on that work, this research studied drivers' initial impressions of the use of auditory icons to convey speed. A virtual test in a simulator was applied to understand their behavior, and a questionnaire collected their opinions at the end of the test. The participants evaluated the experience as very positive: they declared that the auditory icon used during the test was not annoying, was efficient and especially useful for signaling speed excesses, and helped them maintain focus on the road.
The still small number of tests performed, however, was not sufficient to reveal a conclusive difference between driving with the proposed audio warning and driving with the speedometer only. In this sample, the drivers in the two groups maintained similar average speeds, and time pressure revealed no significant change in the participants' behavior.
A possibility for future tests is to add a hazard element to monitoring the speedometer, using external signals that require the participant to perform an action, thereby evaluating reactivity. Another possibility for future work is to leave the level of warning intermittence as the driver's choice, which could make the perception and control of speed more effective than a predefined intermittence level. An auditory warning that adjusts its reproduction mode according to the duration of the trip may also affect the driver's perception.
The application of this device in a real context will then be important to reveal more details than is possible in the virtual simulator.
Socioeconomic status, alcohol use disorders, and depression: a population-based study
Background: Depressive disorders (DD) and alcohol use disorders (AUD) frequently co-occur. They are key to understanding the current increases in "deaths of despair" among individuals with lower socioeconomic status (SES). The aim of this study was to assess the prospective bidirectional associations between AUD and DD, as well as the effect of SES on these two conditions.
Methods: The National Epidemiologic Survey on Alcohol and Related Conditions is a cohort study representative of the US adult population, which began in 2001-2002, with follow-up interviews conducted 3 years later. SES was primarily operationalized as educational attainment. AUD, DD, and their levels of severity were defined according to the DSM-5 criteria.
Results: The risk of developing an incident DD increased gradually with the recency and the severity of AUD at baseline, but the converse was not observed. Lower SES was an independent risk factor for incident AUD or DD. SES did not modify the prospective association between AUD and DD.
Limitations: The absence of interaction between SES and moderate or severe AUD for incident DD must be considered with caution owing to the limited number of DD cases reported in these AUD categories.
Conclusions: This result is consistent with a causal relationship between AUD and DD, and suggests that therapeutic interventions for AUD may also have beneficial effects in lowering DD rates. The independent effects of lower SES and AUD on DD may produce a vulnerable population accumulating disorders with heavy consequences for health and social well-being.
Introduction
In many societies, including the United States (US), depressive disorders (DD) and alcohol use disorders (AUD) are among the most prevalent psychiatric disorders (Substance Abuse and Mental Health Services Administration, 2019). Among US adults, 14.0 million qualified for an AUD in 2019 (SAMHSA Center for Behavioral Health Statistics and Quality, 2019) and more than 17.3 million experienced at least one depressive episode in 2017 (National Institutes of Mental Health, 2017). In addition, these disorders can co-occur; the presence of either disorder doubles the risk of the other (Boden and Fergusson, 2011). Furthermore, this co-occurrence can be characterized by greater severity and worse prognosis than either disorder alone (Greenfield et al., 1998; Hasin et al., 2002), including a heightened risk for suicidal behavior (Conner et al., 2014). Three main hypotheses have been proposed to explain this comorbidity. First, a causal relationship between AUD and DD has been suggested, according to which alcohol use could lead to depression through changes in metabolism, neurotransmitter function, or through the consequences of AUD on social life (Fergusson et al., 2009; Li et al., 2020). Second, DD might increase alcohol consumption as a coping mechanism, which can result in AUD when increased use persists over time; this is known as the self-medication hypothesis (Abraham and Fava, 1999; Turner et al., 2018). Third, an overlapping genetic vulnerability or environmental risk factors, such as adverse childhood events or other traumatic experiences, may contribute to the co-occurrence of these two conditions (Capusan et al., 2021; Caspi et al., 2014; Gilbert et al., 2015).
This comorbidity is also key to understanding the current phenomenon of "deaths of despair" in the US, where overall decreases in life expectancy, even before the COVID-19 pandemic, have been characterized by increases in poisoning, alcoholic liver cirrhosis, and suicides (Case and Deaton, 2015, 2017; Case and Deaton, 2020), especially among the two-thirds of Americans without a college degree (Case and Deaton, 2021; Sasson and Hayward, 2019). Education is a key indicator of socioeconomic status (SES), as people with low education in the US are increasingly being left behind with fewer prospects for secure, financially stable employment (Hummer and Lariscy, 2011; Walsemann et al., 2013).
Longitudinal observational studies assessing a direction in a potential causality between AUD and DD are still scarce and have shown conflicting results (Boden and Fergusson, 2011; Conner et al., 2014; Kohler et al., 2018). Moreover, AUD and DD are both known to cover heterogeneous clinical presentations along a wide continuum, ranging from moderate and brief disorders to severe disorders with devastating life consequences. In that regard, taking their respective severity into account, as suggested in the DSM-5, might be a key factor in understanding a potential causal association between AUD and DD (Graham et al., 2007; McHugh and Weiss, 2019; Paris, 2014; Rehm et al., 2017). Finally, to our knowledge, whether SES modifies this longitudinal bidirectional association has not been studied. Accordingly, using data from the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC), a large nationally representative survey with a follow-up component (Grant and Dawson, 2021; Hasin and Grant, 2015), the aims of the present study were to: 1. explore the relative strength of a prospective bidirectional association between AUD and DD, taking their respective severity into account, in order to distinguish a potential causal direction; 2. examine whether a low SES prospectively and independently increases the risks of developing an AUD or a DD; and 3. assess whether the association between AUD and DD varies according to SES (i.e., assess a potential interaction between SES and each disorder for incidence of the other disorder).
Participants
The National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) Wave 1 and Wave 2 was a longitudinal survey, designed under the National Institute on Alcohol Abuse and Alcoholism (NIAAA)'s direction to determine the magnitude of AUDs and their associated disabilities in the general adult population. The NESARC sample represents the civilian, noninstitutionalized adult (18 years old or more) population of the US, including people living in households, military personnel living off base, and people residing in group quarters. The three-stage sampling frame for the NESARC was based on the Census 2000/2001 Supplementary Survey (C2SS). For Wave 1, data were collected during 2001-2002 in face-to-face interviews (N=43 093), and for Wave 2, 34 653 respondents were re-interviewed during 2004-2005. Of the 8440 Wave 1 respondents who were not included in Wave 2, 3134 were not eligible for a Wave 2 interview because they were institutionalized, mentally/physically impaired, on active duty in the armed forces throughout the Wave 2 interview period, deceased, or had been deported. The remaining respondents (N=5306) refused to participate or could not be reached or located. The cumulative survey response rate was 70.2%.
Participants who had ever presented with a manic or hypomanic episode at Wave 1 were excluded from all analyses. For analyses regarding the incidence of DD between Wave 1 and 2, participants with a lifetime DD at Wave 1 were also excluded. Similarly, for analyses regarding the incidence of AUD between Wave 1 and 2, participants with a lifetime AUD at Wave 1 were excluded.
Measures
NESARC's diagnostic classifications were based on the Alcohol Use Disorder and Associated Disability Interview Schedule-DSM-IV Version (AUDADIS-IV), a fully structured, computer-assisted diagnostic interview (Grant et al., 2003). NESARC contained questions that operationalized the criteria set forth in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) for alcohol and drug use disorders and mood disorders (major depressive disorder, bipolar I and bipolar II disorders, dysthymia, and hypomania).
For the present study, the definition of AUD was based on the DSM-5 (American Psychiatric Association, 2013), excluding the craving criterion, which was not assessed in the Wave 1 and 2 NESARC interviews. Accordingly, alcohol use and AUD at Wave 1 were split into the following six categories: lifetime abstinence (reference), drinkers (past or current) without AUD, remitted AUD (occurred prior to the last 12 months), mild AUD (2-3 symptoms), moderate AUD (4-5 symptoms), or severe AUD (6 or more symptoms). Mild, moderate, or severe AUD referred to symptoms present during the last 12 months. We also used the DSM-5 definition of DD, which includes major depressive disorder and persistent depressive disorder (the former chronic major depressive disorder and dysthymia). DD at Wave 1 were likewise categorized according to their temporality and severity. Remitted DD had occurred prior to the last 12 months. Current DD, which had occurred in the last 12 months, were categorized according to their severity, based on the number of criterion symptoms. We used the DSM-5 specifier to determine the current severity of the DD: mild included DD with 0 to 1 symptom in excess of those required to make the diagnosis, moderate included DD with 2 to 3 symptoms in excess, and severe included DD fulfilling all the criterion symptoms. If both major depressive disorder and persistent depressive disorder were present, we chose the category with the higher severity.
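The AUD severity thresholds described above can be sketched as a simple mapping from the number of 12-month symptoms. This is a sketch of the stated cut-offs only, not the AUDADIS-IV scoring code, and it reflects the study's exclusion of the craving criterion:

```python
def aud_severity(symptom_count: int) -> str:
    """Map a 12-month DSM-5 symptom count (craving excluded, as in the
    study) to the severity categories used in the analysis."""
    if symptom_count >= 6:
        return "severe"
    if symptom_count >= 4:
        return "moderate"
    if symptom_count >= 2:
        return "mild"
    return "no AUD"

print(aud_severity(1))  # no AUD
print(aud_severity(3))  # mild
print(aud_severity(5))  # moderate
print(aud_severity(8))  # severe
```

The "remitted AUD" category is temporal rather than count-based, so it would be assigned separately for respondents whose symptoms occurred only before the last 12 months.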
SES was primarily defined according to the education attainment, and for sensitivity analyses, household income was used (Galobardes et al., 2006). Education was split into having a high school diploma or less (low), some college but no bachelor's degree (medium), and a bachelor's degree or more (high) (Case and Deaton, 2020). The 2001 poverty guidelines of the Department of Health and Human Services, which take into consideration the number of people living in the household and its combined income, were used to categorize the income into low (below the poverty threshold), medium (above the poverty threshold and under four times the poverty threshold), and high (above four times the poverty threshold) (Department of Health and Human Services).
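The income-based SES coding can be sketched as follows. The poverty-threshold figure below is illustrative: the actual 2001 HHS guideline value depends on household size, and the treatment of the exact 4x boundary is our assumption:

```python
def income_ses(household_income: float, poverty_threshold: float) -> str:
    """Code household income into low/medium/high SES: low below the
    poverty threshold, medium up to four times it (boundary included
    here by convention), high above that."""
    if household_income < poverty_threshold:
        return "low"
    if household_income <= 4 * poverty_threshold:
        return "medium"
    return "high"

THRESHOLD = 17_650  # illustrative guideline value for one household size
print(income_ses(15_000, THRESHOLD))  # low
print(income_ses(50_000, THRESHOLD))  # medium
print(income_ses(90_000, THRESHOLD))  # high
```

An analogous three-level mapping applies to education (high school or less / some college / bachelor's degree or more).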
The race and ethnicity variable was constructed from the Hispanic origin variable and the single classification race recode according to an algorithm developed by the Census Bureau (Wave 1 NESARC Data Notes. Bethesda, MD: NIAAA, 2004). Individuals who reported being of Hispanic origin were coded as having a Hispanic ethnicity, regardless of their race. Non-Hispanic individuals who reported multiple races were coded into a single category in the following order of priority: 1) Black, 2) American Indian/Alaska Native, 3) Asian/Native Hawaiian/Pacific Islander, 4) White.
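The recode priority described above can be sketched as a small function. This is an illustration of the stated rules, not the Census Bureau's algorithm as actually implemented:

```python
# Priority order for assigning a single race to non-Hispanic respondents
# who reported multiple races, as described in the text.
PRIORITY = [
    "Black",
    "American Indian/Alaska Native",
    "Asian/Native Hawaiian/Pacific Islander",
    "White",
]

def race_ethnicity(hispanic: bool, races: list) -> str:
    """Hispanic origin takes precedence regardless of reported race;
    otherwise the first matching race in PRIORITY is assigned."""
    if hispanic:
        return "Hispanic"
    for race in PRIORITY:
        if race in races:
            return race
    return "Unknown"

print(race_ethnicity(True, ["White"]))            # Hispanic
print(race_ethnicity(False, ["White", "Black"]))  # Black
print(race_ethnicity(False, ["White"]))           # White
```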
Statistical analyses
The NESARC sample was weighted to adjust for non-response at the household and person levels, the selection of one person per household, and oversampling of young adults, Hispanics, and Blacks. We used multiple logistic regression models to assess the prospective associations between SES, DD, and AUD. Effect modification by SES was investigated on multiplicative and additive scales, following the approach described by Knol and VanderWeele (2012). Model-adjusted risks, model-adjusted risk differences, and additive interaction tests were computed using the predictive margin functions PREDMARG and PRED_EFF in SUDAAN. Here, predicted marginal prevalences of each outcome were generated using the specified logistic regression models, and group comparisons were performed through contrast statements (Sarvet and Wall, 2016). We used the Stata 17 (StataCorp, 2021) and SUDAAN (Research Triangle Institute, 2018) software. The latter uses Taylor series linearisation in variance estimation to make adjustments for the sampling methodology.
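The additive-scale interaction test described above can be illustrated with a small arithmetic sketch. The risks below are hypothetical numbers chosen for illustration, not the study's model-adjusted estimates:

```python
# Additive-scale interaction contrast, following the logic of
# Knol and VanderWeele (2012): departure from additivity of the joint
# effect relative to the two separate effects.
p00 = 0.05  # model-adjusted risk: high SES, no AUD (reference)
p10 = 0.08  # low SES, no AUD
p01 = 0.09  # high SES, AUD
p11 = 0.12  # low SES, AUD

rd_ses = p10 - p00                    # risk difference for low SES alone
rd_aud = p01 - p00                    # risk difference for AUD alone
ic = (p11 - p00) - rd_ses - rd_aud    # interaction contrast

print(abs(ic) < 1e-9)  # True: joint effect equals the sum of the parts
```

An interaction contrast near zero, as in this made-up example, corresponds to the study's finding of no effect modification by SES on the additive scale; the multiplicative-scale check compares odds ratios instead of risk differences.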
Ethical statement
All participants gave informed consent to participate. The research protocol received full ethical review and approval from the U.S. Census Bureau and the U.S. Office of Management and Budget.
SES and AUD as predictors of DD during the follow-up period
After exclusion of participants who already had a manic or hypomanic episode or a DD at baseline, 27 571 participants remained in the analyses (49.5% female, mean age 45.9 years). Overall, 6.4% developed a DD in the 3 years leading up to the follow-up interview. Compared to lifetime abstainers, the risk of developing a DD during the 3 years prior to follow-up increased gradually among non-abstainer participants according to the severity and the recency of their AUD at baseline (OR, 95% CI: 1.20, 1.01-1.41; 1.28, 1.04-1.57; 1.68, 1.22-2.32; 1.92, 1.31-2.81; and 2.35, 1.07-5.14 for participants without AUD, and with remitted, mild, moderate, and severe AUD, respectively). Regarding SES, medium and low levels of education were independent risk factors for developing a DD, compared to a high level of education (OR, 95% CI: 1.35, 1.15-1.58 and 1.52, 1.27-1.81, respectively) (Table 1). Sensitivity analysis using household income as an SES indicator showed similar results (Table 1-1, submitted to Data in Brief (DIB) alongside this article). Stratified analyses by sex followed the same trends for males and females (Table 1-2, DIB). SES did not modify the association between AUD and DD: there was no interaction between SES and AUD on a multiplicative or an additive scale (Table 1-3, DIB).
SES and DD as predictors of AUD during the follow-up period
After exclusion of participants who already had a manic or hypomanic episode or an AUD at baseline, 22 066 participants remained in the analyses (59.8% female, mean age 47.2 years). Overall, 7.0% developed an AUD during the follow-up period. The risk of developing an AUD during the 3 years of follow-up did not differ according to DD diagnosis, but was increased by a medium or a low level of education at baseline (OR, 95% CI: 1.38, 1.11-1.70 and 1.27, 1.00-1.61, respectively) (Table 2). This was not confirmed in the sensitivity analysis using household income as the SES indicator.
Discussion
This prospective study, with participants representative of the US population, shows that AUD severity plays a key role in the risk of developing a DD, with a dose-response association, and that a remitted AUD remains a risk factor for developing a DD. These results provide evidence consistent with a unidirectional causal relationship between AUD and subsequent DD. Moreover, we found that a low SES was an independent risk factor for incident DD and for incident AUD, and that the strength of the association between AUD and DD did not vary between low and high SES. Our results are consistent with a meta-analysis of ten cross-sectional and longitudinal studies which supported the hypothesis of AUD as a precursor of DD (Boden and Fergusson, 2011), as well as with a recent large genomic study (Grotzinger et al., 2020) which showed a causal association between problematic alcohol use and depression using Mendelian randomization. Examining the older adults (60+) of the same NESARC sample, Chou et al. (2011) did not find any prospective association between AUD and major depressive disorder, in either direction. This might be explained by a decreasing risk of DD with age and by the manner in which the two disorders were measured, as being either present or absent, without accounting for their independent severity levels. Boschloo et al. (2012) used the number of AUD symptoms during the past 12 months as the exposure variable and also found a linear dose-response association with incident DD. We found no significant association between DD and the risk of developing an AUD, consistent with most (Boden and Fergusson, 2011; Fergusson et al., 2009), but not all, prior studies (Turner et al., 2018). The association between SES and DD is consistent with previous studies showing a social gradient in the incidence of depression (Grant et al., 2009; Melchior et al., 2013).
Consistent with our results, the association between SES and AUD is more widely debated than the one between SES and DD (Grant et al., 2009;Lund et al., 2018).
Our finding that SES did not modify the association of AUD with DD is unexpected, given the literature on the alcohol harm paradox, according to which people of low SES experience greater alcohol-related harm than those of high SES for the same or lower level of alcohol consumption (Hall, 2017;Probst et al., 2020;Probst et al., 2014). A meta-analysis of 133 million people compared socioeconomic inequality in alcohol-attributable and all-cause mortality and found a relative risk of dying from alcohol-attributable causes of 1.7-fold the relative risk of all-cause mortality (Probst et al., 2014). However, to our knowledge, no study has examined the potential influence of SES on the prospective association between AUD and DD (i.e., the interaction between SES and AUD). Two cross-sectional studies have examined whether SES influenced the association between alcohol use and depressive symptoms and found conflicting results (Assanangkornchai et al., 2020;Martinez et al., 2015). In a Norwegian sample of more than 10 000 adults, Martinez et al. (2015) found an interaction between employment and depressive symptoms on the risk of heavy episodic drinking, but no interaction between education level and depressive symptoms regarding alcohol use. In a Thai sample of more than 13 000 participants, Assanangkornchai et al. (2020) found an association between major depressive episodes and AUD, which was modified by wealth and education, but not employment status. Patterns of drinking seem to vary across low-, middle-, or high-income countries and cultures and might affect possible interactions with SES (Grittner et al., 2013;Rehm et al., 2017).
This study has several limitations. First, we could not take craving into account (the only DSM-5 criterion that differs from DSM-IV in the assessment of AUD severity) because it was not assessed in the AUDADIS-IV interview. Craving has been shown to be well correlated with the other DSM criteria, but this symptom is rarely present and does not have much impact on the descriptive epidemiology of AUD (Keyes et al., 2011; Saha et al., 2006). Thus, missing the craving criterion could have led us to an underestimation of AUD severity, and consequently of the observed effects, but is not likely to have modified the observed associations. Second, this survey was conducted in the first decade of the 21st century, prior to the decrease in life expectancy due to "deaths of despair". However, the trends in substance use poisoning, liver cirrhosis deaths, and suicide began before the effect on life expectancy was manifest (Shiels et al., 2020). Moreover, AUD and DD are often relapsing disorders persisting over long periods of time, leading to an increased mortality after many years (Carvalho et al., 2019; Gilman et al., 2017). Third, we had no information about adverse childhood events or other traumatic experiences at baseline, which have been associated with both AUD and DD (Capusan et al., 2021; Caspi et al., 2014; Gilbert et al., 2015). However, the nature of the association between adverse childhood events and mental health is still debated, and disentangling these complex interrelationships would go beyond the present study (Danese and Widom, 2021). Fourth, we did not include anxiety disorders or cigarette smoking in our models, despite their possible association with AUD and DD. Considering the high correlations between anxiety disorders and DD, and between cigarette smoking and AUD, including such variables in our models would have blurred the observed association through overadjustment.
Moreover, a previous publication on NESARC data among older adults has already shown models of prospective associations including all measured psychiatric disorders (Chou et al., 2011). And, finally, our finding of an absence of interaction between SES and moderate or severe AUD for the incidence of DD must be considered with caution due to the small numbers of DD cases reported in these AUD categories.

Several mechanisms may link low SES and AUD to DD. Low SES also impacts living conditions and might increase the risk of being confronted by violence. For example, more walkable neighborhoods with leisure opportunities are associated with a reduced prevalence of DD and AUD (Peen et al., 2010). Regarding AUD, addiction dramatically alters motivational circuits through multiple changes in neurotransmitter function, which could affect mood (Koob and Volkow, 2016). The direct neurotoxicity of alcohol might also result in an increased risk of DD. The consequences of AUD on multiple aspects of social life, such as a disruption of affective relationships, employment-related difficulties or legal problems, are causes for a deteriorated mood as well (Carvalho et al., 2019). Our results seem to exclude the self-medication hypothesis of DD as a precursor to AUD (Turner et al., 2018). Regarding the hypothesis of a shared genetic susceptibility, which could not be explored within our data, genetic studies on large samples have demonstrated that AUD and DD arise from two distinct genetic factors (Grotzinger et al., 2020; Kendler et al., 2003). Finally, our results moderately support the hypothesis of shared risk factors between AUD and DD, in particular factors related to SES.
The conjugated effect of SES and AUD on DD contributes to the development of a vulnerable population with accumulating disorders that have severe consequences on health and social wellbeing. The "deaths of despair" (Case and Deaton, 2015, 2017, 2020, 2021; Sasson and Hayward, 2019), a decrease in life expectancy in the US characterized by increases in substance use poisoning, alcohol-related liver mortality, and suicides among the lower socioeconomic stratum, is a striking example of the development of a vulnerable population and its consequences. This shift in life expectancy has been described from a social and economic perspective. However, it is worth noting that the "deaths of despair" are all closely linked to mental health, and particularly to depression. Our results show that depression might have been underestimated as an underlying cause in deaths of despair. Better capturing and measuring "despair" might help mental health specialists to study the deaths of despair as well, which could bring a more thorough understanding of this phenomenon (Shanahan and Copeland, 2021; Shanahan et al., 2019). Moreover, as AUD prevalence has increased in the last decades in the US population, especially among the socioeconomically disadvantaged subgroups of the population (Grant et al., 2017), the observed effects of low SES and AUD on the incidence of DD twenty years ago might be of even more importance today. Studies examining the long-term effects of SES and various degrees of AUD severity on mental health are urgently needed and might help us better understand the "deaths of despair" phenomenon. Adding a mental health perspective to the social and economic perspectives on deaths of despair could help in capturing the complex interconnection of environmental, social, and biological pathways to the current increase in suicide, drug- and alcohol-related mortality rates.
Targeting the lower SES groups of the population in AUD prevention strategies might help reduce these disparities in mental health and have a beneficial effect on the incidence of depression as well.
Funding
This work was supported by the Swiss National Science Foundation (grant number P2LAP3_191273) and by the National Institute on Alcohol Abuse and Alcoholism of the National Institutes of Health (grant number 1R01AA028009). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Swiss National Science Foundation and National Institutes of Health.
• Alcohol use disorder increased the risk for incident depressive disorders, but not the reverse.
• The more severe the alcohol use disorder, the higher the risk of depression.
• Lower socioeconomic status independently increased the risk of both disorders.
MEASURING AND ASSESSING THE STATE OF TECHNOLOGICAL INNOVATIONS AND THE LEVEL OF INTERACTION BETWEEN RICE PROCESSORS AND STAKEHOLDERS IN RICE PROCESSING INDUSTRY IN NIGERIA
Article History Received: 6 August 2018 Revised: 10 September 2018 Accepted: 16 October 2018 Published: 13 November 2018
The study examined the state of technological innovations in the rice processing industry and investigated the level of interaction that exists between rice processors and stakeholders involved in rice processing operations in Nigeria. Data were collected from 35 (12 integrated and 23 medium and small-scale mills) rice processing firms in four geopolitical zones of Nigeria through the use of a questionnaire. These firms were selected using the snowball sampling technique. The questionnaire elicited information on the state of technological innovations in the firms and the level of existing interactions among rice processors and relevant stakeholders. Data were analyzed using descriptive statistics. The results showed that the majority (71.4%) of the firms had technological innovations involving introduction and improvement of existing products, and 45.7% of them had technological innovations involving introduction of a new process and improvement of the existing process. Most (82.9%) of the firms carried out technological innovations mainly by themselves. Also, 65.7% of the technological innovations originated in Nigeria, 31.4% were imported, while 2.9% were sourced jointly in Nigeria and abroad. The results further showed that the firms had generally low (mean < 2.58) levels of interaction with important stakeholders such as banks, local suppliers of equipment, universities/research institutes, foreign firms and government, with means of 2.91, 2.57, 2.49, 2.09 and 1.86 respectively. The study concluded that to enhance technological innovation in the Nigerian rice processing industry, there is a need for the development of strong linkages between the industry and the stakeholders involved in rice processing operations.
INTRODUCTION
Rice is an important food crop that most households in Nigeria consume on a daily basis. The crop is the second largest consumed cereal in the world, and more than half of the world's population depend on it for about 80 percent of their food calorie requirements (Raw Materials Research and Development Council RMRDC, 2000; Seck et al., 2012). Rice is cultivated in virtually all the agro-ecological zones in Nigeria and covers both the upland and the swamps, depending on the variety (Kano State Agricultural and Rural Development Authority KNARDA). The demand for rice in the country is substantial (Hoogvelt, 2000; Longtau, 2003b; Kareem et al., 2009; Bamidele et al., 2010; Okpe, 2010; Ajala and Gana, 2015).
Several studies have discussed the economics of rice production and consumption in the country (Hussien, 2004; Bamidele et al., 2010; Terwase and Madu, 2014; Ige et al., 2016). Studies have also shown that outdated processing technologies and ill-equipped infrastructure are among the factors responsible for the increase in the demand for rice (Seck et al., 2010; Seck et al., 2013; Styker, 2013; Ajala and Gana, 2015). Akintelu (2017) also noted that rice processing technological capability can only be enhanced if adequate technologies are in place.
According to Kim (1997), the absence of joint projects with the various types of stakeholders prevents companies from accessing new sources of scientific and technical information, which are crucial, as they can significantly increase the technological capability of the company. Scientific and technological infrastructure is a vital resource for a company's competitiveness in product innovation. These observations indicate that building technological innovation capability is critical for a sustainable rice processing industry in Nigeria. The aim of this paper is to examine the state of technological innovation in the rice processing sector and to investigate the level of interaction that exists between rice processors and stakeholders in the industry in Nigeria, with a view to providing information that could assist in building robust technological innovation.
LITERATURE REVIEW
Technology is defined in many dimensions, embodying various areas, and its meaning has changed significantly over the last 200 years. Before the 20th century, the term referred to the study of the useful arts (George, 1823). Stratton and Mannix (2005) often connected technology to technical education, as in the Massachusetts Institute of Technology. Technology rose to prominence in the 20th century in connection with the Second Industrial Revolution. The term changed in the early 20th century when American social scientists translated ideas from the German concept of Technik into "technology". In German and other European languages, a distinction exists between Technik and Technologie. By the 1930s, technology referred not only to the study of the industrial arts but to the industrial arts themselves (Eric, 2006). Bain (1937) defined technology to include all tools, machines, utensils, weapons, instruments, housing, clothing, communicating and transporting devices and the skills by which we produce and use them. Bain's definition remains common among scholars today, especially among social scientists. MacKenzie and Wajcman (1999) defined technology as applied science. More recently, scholars, especially scientists and engineers, have referred to technology as technique for various instrumental reasons. It is the know-how, physical things, and procedures used to produce products and services, and an important tool used to monitor production and manufacturing processes, thus improving quality management and ensuring compliance with environmental standards.
In this study, technology refers to tools and machines that may be used to enhance rice processing operations.
The word "technology" can also be used to refer to a collection of techniques. It is a way of utilizing human knowledge and combine resources to produce desired products, solve problems, fulfil needs, and satisfy wants. It includes technical methods, skills, processes, techniques, tools and raw materials (Rhodes, 2000;Akpomi, 2003).
Consequently, capabilities are to be taken as outputs of adaptive learning processes that are sustained through a variety of external connections and sources of innovation (Von and Wang, 2003; 2007), at least partially embedded in the regional environment of the firm. Since Nelson and Winter's (1982) contributions, work on firm-specific capabilities has proliferated in and among resource-based views, evolutionary economics, the economics and history of technical change, strategic management and, more recently, evolutionary economic geography. The main extensions to conventional static notions of capabilities involve both interactive and dynamic capabilities.
The term "capabilities" has been used variously to describe a large variety of processes and functions across different levels of systems, from the individual to the global (Abramovitz, 1986). Lall (1992) and Bell and Pavitt (1995) explained that their work focused on technological capabilities as the knowledge and skills that firms need to acquire, use, adapt, improve and create technology, interacting with the external environment. One may think of a firm's endowment of adequate skills as the necessary internal competence to obtain value from R&D and innovation investments (Piva and Vivarelli, 2009).
Capabilities involve learning and the accumulation of new knowledge on the part of the firm, and also the integration of behavioural, social and economic factors into a specific set of outcomes. Consequently, capabilities are to be taken as the results of adaptive learning processes that, in their collective dimension, can be highly localized, giving rise to "system" capabilities (Iammarino and McCann, 2012). Variables related to human resources, or cooperative linkages for innovation with external actors, are to be considered as determinants of a firm's technological capabilities, rather than as the capabilities themselves (Von and Wang, 2003; 2007).
Many authors view the result of innovation as economic gains arising from technical change and firms' performance. Innovation can take several forms, ranging from product, production process, organizational structure, people and policy innovation (Olaposi, 2017). Of these forms, only product and process innovation are recognized as technological innovation. Technological product innovation is the implementation or commercialization of a product with new or improved performance, or the delivery of new or improved services to consumers (OECD, 1997). Technological process innovation is a new or significantly improved production method involving significant changes in techniques, equipment or software; it may also involve the adoption of technologically new or improved methods of product delivery (Olaposi, 2017). Process innovation, as it relates to the rice processing industry, focuses on improving rice processors' capabilities and the quality of the industry's output. This involves the deployment of modern rice processing technologies for parboiling and milling operations.
Linkages along the Rice Commodity Chain
The innovative performance of any economy depends on how the individual actors perform in isolation and how they interact with each other as elements of a collective system of knowledge creation and use, and on their interplay with social institutions. Without adequate development of these actors and institutions in the national settings, the rice processing industry remains weak. Strong linkages between various actors along the rice commodity chain can greatly improve the efficiency of the sub-sector. Linkages evolve through the relationships between various actors involved in rice processing operations. Given the huge inflow of imported rice in the markets, the linkages between the local actors along the commodity chain define and shape the competitiveness of locally produced rice.
Stakeholders of the National Rice Processing industry in Nigeria
The key stakeholders of industrial sectors include education and research institutes, suppliers of technologies, financial institutes, foreign firms and government (Lundvall, 1992; Adeoti, 2002; Jegede, 2015). Antonio and Marcelo (2016) also affirmed that interaction with these various players represents one of the most important learning and innovation efforts for industrial development and competitiveness. Their findings revealed a weak link between companies and other stakeholders, with the systematic relationship with universities being virtually non-existent.
Education and Research Institutes
The education and research institutes comprise the primary, secondary and tertiary institutions (universities, polytechnics and other specialised research institutes). Among these actors, universities play prominent roles, for which particular attention is paid to them. Universities traditionally fulfil the dual roles of manpower development and research; an additional demand of transferring technology to the business sector was later added to their roles.
Higher educational institutions, particularly universities, perform the traditional functions of teaching and talent filtering by which new generations of students are trained. They also have a social and statutory responsibility to participate in the generation of new knowledge through research and development activities, which can be channelled and diffused by new ventures. Public research and development institutes are expected to undertake different lines of research that are of commercial applicability. These institutes vary in their mandates and sizes but derive their funding mainly from government sources (Oyewale, 2003).
Financial Institutes
Financial institutions, such as the Bank of Industry, are instituted by government in order to intervene in facilitating the acquisition of technologies that enable the industry to boost its technological capability.
Finance needed for the sustainability of the industry is also provided by financial institutes through a special fund called venture capital (VC), often referred to as risk capital. VC is invested in the early stage of high-technology companies in the form of equity, quasi-equity or conditional loans. Public venture capital funds can be established by state, federal or regional governments as development agencies. Private venture capital organisations can also be established by professional or institutional investors. Individual investors, called business angels, also provide venture capital in an informal manner.
Government
Government exerts a pervasive influence in supporting industry when it is strongly connected and collaborating with the other actors; this helps to improve overall firm performance (Peng and Luo, 2000). Government as a stakeholder ranges from ministries and agencies to other government bodies at various levels that are involved in regulating industry and its applications in industrial production. The government develops innovation policies that stimulate organisational linkages and directs the flow of goods and services for industrial production. The core functions of government include general functions such as policy formulation and resource allocation, as well as implementation functions (financing, performance, human resource development and capability building). These functions (both policy-related and implementation-related) are carried out by different stakeholders, with the particular combination being unique to each country.
Government's role in facilitating industrial growth is important to industry in all forms. It includes providing basic infrastructure to enable firms to adhere to international standards, building accredited control laboratories that support firms in the agro-processing industry, and formulating policies to promote technology transfer to the domestic economy; such support has been affirmed to foster entrepreneurship in Georgia (Kuriakose, 2013).
Foreign Firms and Local Supplier of Technologies
Industry generally engages stakeholders in project execution in order to build institutional knowledge and capability. This can be achieved through the use of laboratory facilities, staff internships, licensing of patents held by local suppliers, training, workshops and conferences. It can also be achieved through joint research with trained personnel, helping industry build strong relationships and develop global and local solutions to common sustainability challenges. The presence of foreign firms and local suppliers facilitates positive outcomes in the host developing economy, with technology transfer being the most important channel through which many firms assist local suppliers in purchasing raw materials and intermediate goods and in modernising or upgrading production facilities. Multinational enterprises (MNEs) are generally found to provide technical assistance, training and other information to raise the quality of suppliers' products (OECD, 2002).
METHODOLOGY
The study was carried out in rice processing firms that engage in parboiling and processing operations in all the rice producing states across four of the six geopolitical zones in Nigeria. The zones were selected because they have good cluster of rice processing firms that can contribute to the national rice production in Nigeria (Ezedinwa, 2005). Eight (8) states were purposively selected from four geo-political zones in Nigeria. The states consist of Lagos (South West), Edo and Cross-River (South South), Benue, Kwara and Nasarawa (North Central) and Ebonyi and Enugu (South East). These states were chosen because the states formed about 75% of the share of the national rice producing areas in Nigeria (Wudiri, 1990;Ezedinwa, 2005).
The population of the study consists of all rice processing firms in the selected geopolitical zones in Nigeria.
There are forty-five (45) existing rice processing firms in the study area, comprising fourteen (14) integrated and thirty-one (31) medium and small-scale mills. However, only thirty-five (35) firms were found functional and were used for the study. The industries are mainly government- and privately-owned rice processing firms.
A structured questionnaire was administered to the thirty-five (35) rice processing firms in the study area. The questionnaire elicited information on both the level and extent of interaction that exists between rice processing firms and other stakeholders involved in rice processing operations, using variables such as interactions with universities and research institutes, government agencies, suppliers of tools and equipment, banks and foreign firms. Other variables used to investigate the extent of interactions with these stakeholders were the use of staff, students, workshops/conferences and laboratories for rice processing operations. The questions were coded on a four-point rating scale, ranging from Not at all (1) and Low (2) to Medium (3) and High (4). Data were collected using both primary and secondary sources. Primary data were obtained using the questionnaire, interview schedules and personal observations, while secondary data were collected from journals, business directories, the internet, and published and unpublished research works. Data were also retrieved from the Federal Ministry of Agriculture and Rural Development (FMARD).
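As a minimal sketch of how such four-point ratings translate into the mean scores reported later, the snippet below averages coded responses per stakeholder and applies a cut-off to label the interaction level. The firm responses and the 2.5 midpoint cut-off here are hypothetical illustrations, not the study's actual data or threshold:

```python
from statistics import mean

# Hypothetical coded responses (1 = Not at all, 2 = Low, 3 = Medium, 4 = High)
# for one interaction item across surveyed firms; values are illustrative only.
responses = {
    "banks": [3, 3, 2, 4, 3, 2, 3],
    "government": [1, 2, 2, 1, 2, 1, 2],
}

# Midpoint of the 4-point scale, used here as an assumed cut-off separating
# "low" from "fair or above" interaction.
CUTOFF = 2.5

for actor, scores in responses.items():
    m = mean(scores)
    level = "fair or above" if m >= CUTOFF else "low"
    print(f"{actor}: mean = {m:.2f} ({level})")
```

In practice the same aggregation is what packages such as SPSS produce when reporting item means for Likert-type data.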
The data collected were analysed using frequency and percentage. Mean rating was used for the analysis of the technological innovativeness of the firms as well as the existing relationships between the firms and other actors in the industry. Statistical Package for Social Sciences (SPSS) version 20 was used for the statistical analysis.

RESULTS AND DISCUSSION

The results indicate that technological innovation activities are taking place in the firms. The work of Sobanke et al. (2014) shows that innovation capability is necessary for the successful development of new or improved processes, products or equipment, and confers the ability to make minor improvements and modifications to existing technologies and to create new technologies. García-Muiña and Navas-López (2007) also recognise technological capability as the tool for creating value in any given environment, with the ability to jointly mobilise different scientific and technical resources enabling a firm to successfully develop innovative products or productive processes. Table 2 shows the level of interaction that exists between the firms and other actors in the industry. The results revealed that the firms have a low level of interaction with the majority of the actors involved in the rice processing industry. As indicated in Table 2, out of the five actors identified, the firms only had a fair (mean = 2.91) level of interaction with the banking sector. The mean ratings of the firms' interactions with local suppliers of equipment, universities/research institutes and foreign firms were 2.57, 2.49 and 2.09 respectively, while interaction between the firms and government was rated lowest (mean = 1.86).
The findings indicate that there is weak linkage capability in the industry. This can affect the contribution of all these actors and in turn hinder the technological capability output of the firms. The findings of Sobanke et al. (2012) and Oluwale et al. (2013) have likewise reported low levels of interaction between Nigerian firms and their stakeholders, especially in the metal fabricating and auto mechanic industries in Nigeria. The results reaffirm the views of Shapira et al. (1992) and Oluwale et al. (2013) that the technological capability of a firm can be improved through a variety of sources, including private vendors, public technology centres, government laboratories, universities and suppliers of technologies. Bell and Pavitt (1995) and Ogbimi (2007) support the finding that the technological capability of an industry requires knowledge and skills derived through these stakeholders to improve and create technology. Oluwale et al. (2013) also noted that the learning mechanisms available to firms determine the extent to which they augment their endowments of production and investment capabilities over time. Linkage capability can be seen as a strong determinant of the efficient creation of products through the transformation of resources. It has helped in transmitting information, skills and technology about markets, technologies, technical knowledge and other facilities (Olamade, 2001). Thus, rice processing stakeholders serve as mechanisms for the firms to enhance their production capabilities over time. The firms may therefore not be able to meet consumers' expectations if they fail to strengthen their level of linkage capability with other actors involved in rice processing operations. Table 3 further shows the extent of interactions that exist between rice processing firms and universities/research institutes (URI). Various indicators were used to measure these interactions, as shown in Table 3.
The result shows that licensing of URI-held patents was rated fair (mean = 2.37) regarding the interaction between firms and universities/research institutes. Other indicators with low ratings include attendance of training programmes, workshops and conferences (mean = 1.97), joint research between the rice milling firms and URI academics (mean = 1.89), university student internships (mean = 1.80), use of URI laboratory facilities (mean = 1.74) and engagement of academic staff in projects (mean = 1.74).
As shown in Table 4, top among the areas of interaction between firms and local suppliers of rice processing equipment were the use of their laboratory facilities (mean = 2.46), local suppliers' staff internships (mean = 2.26), licensing of patents held by local suppliers (mean = 2.14), attendance of training programmes, workshops and conferences (mean = 2.09), joint research between a firm and local suppliers of field equipment (mean = 1.97), and engagement of local suppliers' staff in projects/consultancy (mean = 1.74). Table 5 shows that the interaction between firms and Nigerian banks was mostly observed in the areas of invoice discounting to obtain products/services from third parties (mean = 2.17), project financing credit from commercial banks (mean = 2.06) and interest rate concessions for local rice processing and milling firms (mean = 2.03), while the mean rating for each of bank guarantees from commercial banks and overdraft facilities to meet daily activities was 1.74. Legend: Not at all = 1, Low = 2, Medium = 3, High = 4. Table 6 shows that government policy specifically directed to assist rice processing within Nigeria was the highest-rated (mean = 2.60) interaction between firms and the Nigerian government. Other indicators used to measure the extent of interactions include "Nigerian government's effort in training and developing people is commensurate with the laudable local rice production initiative" (mean = 2.49), "getting financial support from government in the area of technological innovation to develop rice production" (mean = 2.11) and, lastly, "level of awareness of any research and development that the Nigerian government funds in order to develop expertise in rice processing, milling and production" (mean = 1.77). In all, the extent of interaction that exists between rice processing firms and government is low. The government's role in facilitating industrial growth is very important to industry in all forms.
This helps in providing basic facilities that enable firms to conform to international standards. Peng and Luo (2000) agreed that there is a pervasive influence of government in supporting industries that are strongly connected with these stakeholders. Kuriakose (2013) also mentioned that government responsibilities include building accredited control laboratories, formulating policies, and recruiting and training manpower to support firms in the agro-processing industry. Table 7 presents the results on the extent of interaction between local firms and foreign firms. In the table, "knowledge obtained from foreign suppliers of equipment" was rated highest with a mean of 1.94; this was followed by "interaction with foreign firms to operate and continually maintain the equipment supplied" (mean = 1.86) and "interaction with foreign firms to supply and set up equipment in Nigeria" (mean = 1.77). These results indicate that the extent of interaction between local rice processing firms and foreign firms is very low, and these firms need to relate well with foreign firms to enhance rice processing operations in Nigeria. In summary, the results show weak linkage capabilities between rice processing firms and the other institutions involved in rice processing activities. This indicates weak linkages among the firms and other stakeholders such as local suppliers of technologies, universities, research institutes, banks, government and foreign firms. Oluwale et al. (2013), Oyewale (2003) and Biggs and Shah (2006) have observed that Nigerian enterprises are lacking in linkage capabilities with other institutions. Linkage capability forms the basis for interaction among these stakeholders. It can evolve through the relationships between rice processing firms and the stakeholders involved in rice processing operations.
Egbetokun (2009) submitted that interactions with local and foreign competitors, government, and domestic and international institutions are determinants of firm development. Kim (1997) believed that the absence of joint projects with various types of stakeholders will prevent firms from accessing new sources of technical information, which are crucial to the development of firms' technological innovation capability. The findings of Antonio and Marcelo (2016) also revealed a weak linkage capability between manufacturing firms and other stakeholders, and they concluded that interactions with the various stakeholders represent one of the most important learning and innovation efforts for industrial development and competitiveness. Without adequate linkage with these stakeholders, the rice processing industry may not be able to raise output above domestic consumption in Nigeria. The implication of this result is that strong linkages between the various stakeholders along the rice value-added stages can greatly improve the performance of the rice processing sector; if the firms are to achieve growth and improve the state of their technological innovation capability, the sub-sector must ensure that the necessary action is taken to collaborate with these stakeholders. Otherwise, the effort of Nigerian rice processing stakeholders to attain optimum production capacity in the country may be hindered.
Summary
The study examined the state of technological innovation in the rice processing industry in Nigeria and investigated the level of interaction that exists between rice processors and other stakeholders. The study revealed that the majority (71.4%) of the firms had technological innovations involving the introduction and improvement of existing products in the past three years, and 45.7% of the firms had technological innovations involving the introduction of a new process or the improvement of an existing rice processing method in the last three years. A large majority (82.9%) of the respondents stated that the introduction of the new technological innovation was mainly done by the firms themselves and originated mainly in Nigeria. The results revealed that the firms had low levels of interaction with the other stakeholders involved in rice processing. The firms had a relatively fair level of interaction with the banking sector, local suppliers of equipment, universities/research institutes and foreign firms, while interaction with government was very low. This indicates a weak linkage capability in the industry.
Conclusions
The state of technological capability in the rice processing industry in Nigeria is relatively innovative, which is reflected in the output (product and process innovation) of the firms. The development of external links received a poorer evaluation. Interaction and cooperation with other stakeholders are still very weak, limiting the acquisition of external knowledge, including tacit knowledge, considered essential for the creation of technological competence. The firms are performing poorly in terms of interactions with other stakeholders, which calls for urgent intervention by the institutions and government. There are weak interactions with local suppliers of technologies, universities, research institutes and foreign firms, while interaction with government is very low.
Recommendations
Based on the results of this study, the following recommendations are provided: 1. The industry should strive to collaborate with the available stakeholders (URI, government, foreign firms, etc.) in the country. This may serve as a supportive measure in acquiring the technologies needed for rice processing operations in the country.
2. The industry should endeavour to design programmes for staff training in order to manage new technologies and to achieve the objectives of the firms. This will make the firms competitive.
Significant Contribution of the Study
Studies have shown that the output of the rice processing industry in Nigeria is insufficient; as such, imports have to make up for the shortfall in rice demand. It has also been established that industry actors play prominent roles in industrial development, enhancement of technological innovation capability and improved overall firm performance. Hence, this study has contributed to the existing literature by providing information on the state of technological innovation of rice processing firms and the level of the existing interaction between the firms and actors involved in rice processing operations in Nigeria. This could help to boost the level of technological innovation through collaboration with these actors.
Funding: This study received no specific financial support.
Skin permeability is widely considered to be mechanistically implicated in chemically-induced skin sensitization. Although many chemicals have been identified as skin sensitizers, there have been very few reports analyzing the relationships between molecular structure and skin permeability of sensitizers and non-sensitizers. The goals of this study were to: (i) compile, curate, and integrate the largest publicly available dataset of chemicals studied for their skin permeability; (ii) develop and rigorously validate QSAR models to predict skin permeability; and (iii) explore the complex relationships between skin sensitization and skin permeability. Based on the largest publicly available dataset compiled in this study, we found no overall correlation between skin permeability and skin sensitization. In addition, the cross-species correlation coefficient between human and rodent permeability data was found to be as low as R² = 0.44. Human skin permeability models based on the random forest method have been developed and validated using an OECD-compliant QSAR modeling workflow. Their external accuracy was high (Q²ext = 0.73 for 63% of external compounds inside the applicability domain). The extended analysis using both experimentally-measured and QSAR-imputed data still confirmed the absence of any overall concordance between skin permeability and skin sensitization. This observation suggests that chemical modifications that affect skin permeability should not be presumed a priori to modulate the sensitization potential of chemicals. The models reported herein as well as those developed in the companion paper on skin sensitization suggest that it may be possible to rationally design compounds with the desired high skin permeability but low sensitization potential.
INTRODUCTION
Skin sensitization is a complex adverse toxicological endpoint that is influenced by several biological parameters, such as protein binding, dendritic cell activation, individual variation, and time-dose exposure (Johansen et al., 2011). Skin permeability is also often considered a potential parameter affecting chemicals' sensitization potential (MacKay et al., 2013). It refers to the ability of a molecule to pass through the skin, a characteristic that is primarily influenced by the physicochemical properties of the chemical as well as the physicochemical and biological properties of the membrane (Xia, 2011).
Despite the high importance of skin permeability for consumer product efficacy and its supposed influence on potential toxicities such as skin sensitization, the amount of experimental data available in the public domain is surprisingly limited. In the early 1990s, a compilation of skin permeability data points gathered from several sources was published (Flynn, 1990). Subsequent studies added complementary data, allowing researchers to develop Quantitative Structure-Activity Relationship (QSAR) models for predicting skin permeability. However, as shown in Table S1, most of the published studies on skin permeability modeling (Abraham et al., 1999; Barratt, 1995; Berge, 2009; Chen et al., 2007, 2010; Cronin et al., 1999; Hostýnek and Magee, 1997; Lien and Gao, 1995; Magnusson et al., 2004; Moss and Cronin, 2002; Moss et al., 2011; Patel et al., 2002; Potts and Guy, 1992, 1995) have not included certain critical elements of the QSAR model development and validation protocol, such as the definition of the applicability domain (AD) or proof of passing the Y-randomization test, which constitute best practices of QSAR modeling (OECD, 2004; Tropsha, 2010). Recently, several QSAR studies were benchmarked on a series of 11 compounds and all of them failed to predict skin permeation quantitatively; they were only able to rank permeants (Brown et al., 2012). Another recent study showed that most of the available QSAR models underestimate the skin permeability of hydrophilic solutes. Recent studies attempted to overcome the related problems with complex chemical mixtures and built a QSAR model based on several mixtures of 36 chemicals with porcine skin data. The latter model followed the best practices of QSAR modeling (Tropsha, 2010); however, we have identified 21 duplicative structures in that dataset (see Table S3), indicating a potential bias of the model and a likely over-estimation of its true performance.
Despite the underlying importance of skin permeability and its identification as a necessary step in the OECD Adverse Outcome Pathway (AOP) for skin sensitization (Karlberg et al., 2008;OECD, 2012), we could not find any study among those compiled in Tables S1 and S2, where both endpoints were analyzed concurrently and in the context of their possible inter-dependency. The prevalence of dermal exposure to diverse chemicals in consumer products and in the environment, the importance of permeability for skin sensitization, and the lack of reliable models to predict these endpoints for new chemicals have motivated us to initiate a tandem study on collecting and analyzing both skin permeability and skin sensitization data. In the companion paper (Alves et al., 2014), we have reported on new QSAR models of skin sensitization. In this study, we have compiled, curated, and integrated skin permeability coefficient (Kp) data extracted from various literature sources. Using this unique data collection, we have developed and rigorously validated QSAR models for skin permeability, and explored the relationships between the skin sensitization potential and the chemical permeability coefficient. The QSAR models developed in this and the accompanying study (Alves et al., 2014) are publicly available and can be used for evaluating chemically induced skin effects in silico as part of both research and development projects as well as in support of regulatory decisions on consumer products.
Datasets
Skin sensitization datasets (datasets A and B)-In Part I of this study (Alves et al., 2014) we described two skin sensitization datasets. Briefly, one of them (dataset A) was retrieved from the ICCVAM (Interagency Coordinating Committee on the Validation of Alternative Methods) report on the murine reduced local lymph node assay (ICCVAM, 2009). The modeling set (dataset A) consisted of 254 compounds (127 sensitizers and 127 non-sensitizers), and the external validation set (dataset B) consisted of 133 sensitizers from the ICCVAM report (ICCVAM, 2009) and 18 additional compounds taken from the study of Jaworska et al. (2011). This collection of data was used to explore the intrinsic relationship between skin sensitization and skin permeability (human data from dataset D; see below) for a subset of 20 compounds from the same dataset for which both skin sensitization and skin permeability data were known.
Human skin permeability dataset (dataset D)-In vitro human skin permeability coefficients were retrieved from the literature, comprising 211 records expressed as logKp (cm·h−1); this collection contained the well-known and frequently studied Flynn dataset (Flynn, 1990). Seventeen pairs of duplicates and two sets of triplicates were identified and curated, leaving unique compounds only. Three additional compounds and water were also removed for the following reasons: both styrene (logKp = −0.19) and ethyl benzene (logKp = 0.08) were identified as activity outliers (i.e., their logKp values were far outside the activity range of the other compounds in the dataset, −5.52 to −0.69), whereas digitoxin (a natural product) was a structural outlier. The remaining 186 compounds (dataset D) were retained for modeling.

Rodent dataset (dataset E)-The set of in vitro rodent skin permeability data consisting of 103 chemical compounds was retrieved from the literature (Moss et al., 2011). After curation, 96 compounds (dataset E) were kept for modeling. The following five activity outliers were removed from dataset E: bisphenol A diglycidyl ether (−5.26), decabromodiphenyl oxide (−5.15), 4-N-butylamine (−0.64), bufexamac (−0.57), and triclosan (0.13). The overall range of logKp for the final dataset varied from −4.85 to −0.94.
Data curation
Chemical structures were retrieved either from the PubChem (https://pubchem.ncbi.nlm.nih.gov/, accessed in March 2012) or ChemSpider (http://www.chemspider.com/, accessed in March 2012) databases using chemical names. Chemicals were removed if their structures could not be found. Each dataset was carefully curated according to previously established guidelines (Fourches et al., 2010). Briefly, counterions were removed, whereas specific chemotypes such as aromatic and nitro groups were normalized using the ChemAxon Standardizer (v.5.3, ChemAxon, Budapest, Hungary, http://www.chemaxon.com). The presence of duplicates, i.e., identical compounds reported more than once in the same dataset, is known to lead to over-optimistic estimates of the predictivity of developed QSAR models. However, the analysis of such records also gives an estimate of the dataset quality: if activity data for the same compound are consistent, the overall data quality is high; if there is a large deviation in experimental values between different records of the same compound, the quality is, obviously, low. Thus, after structural standardization, duplicates were identified using the ISIDA Duplicates (Varnek et al., 2008) and HiT QSAR (Kuz'min et al., 2008) software and carefully analyzed. If the experimental properties associated with two duplicative structures were identical, or highly similar, then one compound was chosen at random and deleted. However, if their experimental properties were significantly different, we deleted both records from the dataset. During the curation of the human skin permeability dataset, we found 17 pairs of duplicate structures and two sets of triplicates, as shown in Table S3. The permeability values for duplicative records were almost identical (logKp variation ~ 0.01 LU) except those for butanoic acid, which had a variation higher than ca. 0.4 LU.
Thus, the logKp values of each of these 17 compounds were averaged and the duplicative records removed, so that only one permeability coefficient was used per compound.
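The duplicate-handling rule described above can be sketched in a few lines of Python; `curate_duplicates` and the 0.1 log-unit tolerance are illustrative choices, not the exact logic of the ISIDA Duplicates or HiT QSAR software, and the SMILES keys below are invented examples:

```python
from collections import defaultdict
from statistics import mean

def curate_duplicates(records, tol=0.1):
    """Merge duplicate structures: average logKp when records agree
    within `tol` log units; drop the compound entirely when records
    are discordant. `records` is a list of (structure_key, logKp)
    pairs, where the key would in practice be a canonical SMILES
    produced by a structure standardizer."""
    groups = defaultdict(list)
    for struct, kp in records:
        groups[struct].append(kp)
    curated = {}
    for struct, kps in groups.items():
        if max(kps) - min(kps) <= tol:
            curated[struct] = mean(kps)  # consistent records: keep the average
        # else: discordant duplicates -> discard all records for this structure
    return curated

data = [("CCO", -2.10), ("CCO", -2.11),      # consistent pair -> averaged
        ("CCCC(=O)O", -2.00), ("CCCC(=O)O", -2.60)]  # discordant pair -> dropped
print(curate_duplicates(data))  # {'CCO': -2.105}
```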
Cheminformatics Approaches
Hierarchical Cluster Analysis-The clustering of a chemical dataset consists of merging compounds into distinct clusters of chemically similar molecules [see (Downs and Barnard, 2003; Mercier, 2003) for a review of the most popular clustering approaches used in computational chemistry]. In this study, we have employed the Sequential Agglomerative Hierarchical Non-overlapping (SAHN) method implemented in the ISIDA/Cluster program (http://infochim.u-strasbg.fr) (Varnek et al., 2007). Briefly, each compound represents one cluster at the start. Then, the m compounds are merged iteratively into clusters using their pairwise Euclidean distances stored in a square (m × m) symmetric distance matrix. The two closest objects (molecules or clusters) are iteratively identified and merged to form a new cluster, with the distance matrix updated with the re-computed distances separating the newly formed cluster from the others, according to the user-specified type of linkage (complete linkage in this study). The process is repeated until one cluster remains. The parent-child relationships between clusters result in a hierarchical data representation, or dendrogram. In particular, we used ISIDA/Cluster to obtain the heat map (see Results section) of the proximity matrix.
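A minimal sketch of this SAHN procedure, using SciPy's complete-linkage implementation in place of ISIDA/Cluster; the toy descriptor matrix is invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy descriptor matrix: 6 "compounds" x 3 descriptors, forming three
# well-separated pairs of near-identical compounds.
X = np.array([[0.0, 0, 0], [0.1, 0, 0],
              [5.0, 5, 5], [5.1, 5, 5],
              [10.0, 0, 0], [10.2, 0, 0]])

# Pairwise Euclidean distances, then complete-linkage agglomeration,
# mirroring the SAHN / complete-linkage setup described in the text.
Z = linkage(pdist(X, metric="euclidean"), method="complete")

# Cut the dendrogram into 3 clusters; each pair lands in one cluster.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```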
SiRMS Descriptors-2D Simplex Representation of Molecular Structure (SiRMS) descriptors (representing counts of unique tetratomic fragments with fixed composition and topological structure) were generated by the HiT QSAR software (Kuz'min et al., 2008). At the 2D level, the connectivity of atoms in a simplex, atom type, and bond nature (single, double, triple, or aromatic) were considered. SiRMS descriptors account not only for the atom type, but also for other atomic characteristics that may influence the biological activity of molecules, e.g., partial charge, lipophilicity, refraction, and an atom's ability to act as a donor or acceptor in hydrogen-bond (H-bond) formation. For atom characteristics with continuous values (charge, lipophilicity, and refraction), the entire value range was divided into discrete groups. The atoms were divided into four groups corresponding to their (i) partial charge A≤−0.05<B≤0<C≤0.05<D, (ii) lipophilicity A≤−0.5<B≤0<C≤0.5<D, and (iii) refraction A≤1.5<B≤3<C≤8<D. For the H-bond characteristic, the atoms were divided into three groups: A (acceptor of hydrogen in an H-bond), D (donor of hydrogen in an H-bond), and I (indifferent atom). The use of multiple variants of differentiation of simplex vertices (atoms) is the principal feature of the SiRMS approach (Kuz'min et al., 2007). A detailed description of HiT QSAR and SiRMS can be found elsewhere (Kuz'min et al., 2008; Muratov et al., 2010).
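The discretization thresholds quoted above can be expressed as a small lookup; `atom_group` is a hypothetical helper name, not part of HiT QSAR:

```python
import bisect

# Thresholds from the text: values at or below the k-th threshold fall
# into group A, B, C in turn; anything above the last threshold is D.
BINS = {
    "charge":        [-0.05, 0.0, 0.05],
    "lipophilicity": [-0.5, 0.0, 0.5],
    "refraction":    [1.5, 3.0, 8.0],
}

def atom_group(prop, value):
    """Map a continuous atomic property onto its discrete SiRMS label."""
    return "ABCD"[bisect.bisect_left(BINS[prop], value)]

print(atom_group("charge", 0.03), atom_group("lipophilicity", 0.6))  # C D
```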
QSAR modeling-The QSAR modeling workflow used in this study includes three major steps (Tropsha and Golbraikh, 2007; Tropsha, 2010): (i) data curation/preparation/analysis (selection of compounds and descriptors), (ii) model building, and (iii) model validation/selection. Here we followed a 5-fold external cross-validation procedure: the full set of compounds with known experimental activity is randomly divided into five subsets of equal size; one of these subsets (20% of all compounds) is then set aside as an external validation set and the remaining four together form the modeling set (80% of the full set). This procedure is repeated five times, allowing each of the five subsets to be used as the external validation set. Models are built using the modeling set only; it is important to emphasize that the compounds in the corresponding external set (fold) are never employed to build or select the models. Each modeling set is divided into many internal training and test sets; models are then built using the compounds of each training set and applied to the test set compounds to assess their properties. The statistical metrics used to assess different aspects of model performance are available in the Supplementary Materials.
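The 5-fold external cross-validation loop described above can be sketched as follows; the synthetic descriptor matrix and the ordinary least-squares learner are stand-ins for the real SiRMS descriptors and Random Forest models:

```python
import numpy as np

# Synthetic modeling data: 100 "compounds" x 5 descriptors with a
# known linear relationship plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.1, size=100)

# 5-fold external cross-validation: each fold is held out once and is
# predicted by a model trained on the remaining 80% only.
idx = rng.permutation(len(y))
folds = np.array_split(idx, 5)
preds = np.empty_like(y)
for k, ext in enumerate(folds):
    train = np.concatenate([f for j, f in enumerate(folds) if j != k])
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    preds[ext] = X[ext] @ coef

# External Q2 aggregated over all five held-out folds.
q2_ext = 1 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(q2_ext, 3))
```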
Best models were identified and selected according to an acceptable threshold value (> 0.6) of the predictivity statistic for the internal test sets (called the out-of-bag set in Random Forest, vide infra). The selected models were then applied to the external set compounds to predict their activities. This procedure was repeated five times to ensure that every compound was present once and only once in the corresponding external test set. Since the accuracy of each model is estimated only on external set compounds, which are never used to derive, bias, or select models, this protocol ensures an objective estimation of the true external predictivity of the models. In addition, 1,000 rounds of Y-randomization were performed for each dataset to ensure that the high accuracy of the models built with real data was not due to chance correlations.
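A Y-randomization check of the kind described (the study used 1,000 rounds; 200 are shown here for brevity) can be sketched on synthetic data as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 4))
y = X @ np.array([1.5, -2.0, 0.0, 0.0]) + rng.normal(scale=0.1, size=80)

def fit_r2(X, y):
    """Fit an ordinary least-squares model and return its R2."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

real_r2 = fit_r2(X, y)
# Y-randomization: shuffle the activities, refit, and verify that the
# apparent fit collapses; otherwise the "real" model may be a chance
# correlation rather than a genuine structure-activity relationship.
rand_r2 = [fit_r2(X, rng.permutation(y)) for _ in range(200)]
print(real_r2 > max(rand_r2))  # the real model should beat every randomized run
```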
Random Forest-Random Forest (RF) models were constructed according to the original RF algorithm (Breiman, 2001) using the CF software version 2.12 (Polishchuk et al., 2009). RF is an ensemble of single decision trees whose outputs are aggregated to obtain one final prediction. Each tree is grown as follows: (i) a bootstrap sample is produced from the whole set of N compounds to form a training set for the current tree; compounds that are not in the training set of the current tree are placed in the out-of-bag (OOB) set (size ~ N/3); (ii) in each node, the best split according to the CART algorithm (Breiman et al., 1984) is chosen among m descriptors randomly selected from the entire pool; (iii) each tree is then grown to the largest possible extent, with no pruning. The predicted classification values are defined by majority voting for one of the classes. Thus, each tree predicts values only for those compounds that are not included in its training set (the OOB set only). Since RF possesses its own reliable statistical characteristics (based on OOB set prediction) that can be used for validation and model selection, no cross-validation was performed (Breiman, 2001). The final model is chosen by the lowest prediction error on the OOB set. The local (tree) applicability domain approach (Artemenko et al., 2011) was used for all RF models developed in this study.
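On synthetic data, the OOB-based internal validation can be reproduced with scikit-learn's Random Forest (used here in place of the CF software described in the text):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(150, 6))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.2, size=150)

# Each tree is trained on a bootstrap sample; compounds left out of a
# tree's bootstrap form its out-of-bag (OOB, ~N/3) set. Aggregating
# per-tree OOB predictions gives an internal validation estimate
# without running a separate cross-validation.
rf = RandomForestRegressor(n_estimators=300, bootstrap=True,
                           oob_score=True, random_state=0).fit(X, y)
print(round(rf.oob_score_, 2))  # OOB R2 on the synthetic data
```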
DERMWIN-The DERMWIN module of the EPI Suite software estimates the skin permeability coefficient from a linear equation of the general form logKp = a + b·logK o/w + c·MW, where K o/w is the octanol/water partition coefficient and MW is the molecular weight of the chemical. The human skin permeability dataset (dataset D) was imported into DERMWIN as SMILES strings, and logKp was calculated and compared to the predictions of our models.
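For illustration, a Potts-Guy-type relationship of this form can be written as a one-line function. The coefficients below are the frequently quoted Potts and Guy (1992) values for logKp in cm/h and are illustrative only; the regression coefficients actually coded in DERMWIN may differ, and the octanoic acid inputs are approximate literature values:

```python
def log_kp_potts_guy(log_kow, mw):
    """Potts-Guy-type estimate of log skin permeability (cm/h).
    Coefficients are the widely cited Potts-Guy (1992) values, used
    here for illustration; DERMWIN's own regression may use slightly
    different constants."""
    return -2.74 + 0.71 * log_kow - 0.0061 * mw

# Octanoic acid, approximately: logKow ~ 3.05, MW ~ 144.2
print(round(log_kp_potts_guy(3.05, 144.2), 2))  # -1.45
```

The estimate (−1.45) is of the same order as the experimental octanoic acid value (logKp = −1.60) quoted elsewhere in this paper, which is the level of accuracy one expects from a two-parameter global regression.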
Relationship between human and rodent skin permeability coefficients
First, we searched for the subset of chemicals that had both human and rodent skin permeability data available to verify whether we could increase the dataset size by merging these data. We found 34 compounds that have both human and rodent experimental data. As is obvious from Figure 1, the correlation between human and rodent data was not high enough to merge these datasets.
A linear regression of the logKp values yielded R² = 0.44 only. Therefore, we concluded that the human and rodent data were not compatible and built species-specific QSAR models. Human-based and rodent-based QSAR models for skin permeability were developed using the same protocol used for skin sensitization, as described in the companion paper (Alves et al., 2014). The results (Table 1) suggest that continuous models for both datasets afforded high predictive accuracy as estimated by Q²ext values. However, the quality and coverage of human-based models 12 and 14 were higher than the same characteristics of the best rodent-based model 21. Thus, models 12 and 14 were retained for in silico screening of chemical libraries of concern to identify potential human skin sensitizers. Additional statistical characteristics of the models are presented in the Supplementary Materials (Table S2).
Relationship between skin sensitization and permeability
We have explored whether there are intrinsic relationships between skin sensitization and skin permeability of chemicals. Since rodent data did not correlate well with human data, only human skin permeability data were employed for this analysis. We identified a subset of 20 compounds for which both skin sensitization and skin permeability data were known. These experimental data were compiled from several studies (ICCVAM, 2009; Jaworska et al., 2011). As shown in Table 2, there is no direct relationship between the two endpoints. The ranges of logKp values for sensitizers (−3.62 to −1.28) vs. non-sensitizers (−3.05 to −1.60) confirm this finding. For instance, five non-sensitizers have low permeability coefficients, while the logKp values of two other non-sensitizers are high. There is a common understanding that a chemical must penetrate the skin to cause sensitization (penetration is generally regarded as the first step in the skin sensitization AOP (Karlberg et al., 2008; OECD, 2012)). However, we found examples where a relatively weak penetrant can still be a strong sensitizer (e.g., p-phenylenediamine, logKp = −3.62), whereas a strong penetrant can be a non-sensitizer (e.g., octanoic acid, logKp = −1.60). This observation suggests that skin permeability and skin sensitization are generally decoupled processes, and therefore it may be possible to modify chemicals so as to affect permeability without affecting sensitization, and vice versa.
Cluster analysis of human skin permeability dataset
We have performed a cluster analysis of the human skin permeability set (dataset D) following the same method used for the skin sensitization dataset A (part I in (Alves et al., 2014)). Due to the poor concordance between human and rodent data, and a heightened interest in human skin permeability data, this analysis was only conducted using dataset D.
As a result of the cluster analysis, we found three compounds that had different activity annotations from the rest of their cluster members and were therefore suspicious. These compounds included a barbiturate (amylobarbital) and two steroid derivatives of hydrocortisone (hydrocortisone methylsuccinate and hydrocortisone succinamate). Our procedure for evaluating suspicious compounds was based on predicting their permeability with the developed models and searching for additional experimental data to verify their human skin permeability coefficients. Unfortunately, we were unable to find any confirmatory data in the literature to prove or refute the permeability coefficients reported for these substances. According to our consensus model 5, amylobarbital had a predicted logKp = −3.56 (experimental logKp = −2.64), similar to other barbitals: barbital (logKp = −3.95), phenobarbital (logKp = −3.34), and butobarbital (logKp = −3.71), which made us less confident in the accuracy of the reported permeability of this compound. Among the steroids (cluster c in Figure 2), progesterone was the strongest penetrant (logKp = −1.89), whereas the addition of one hydroxyl group at position 17 of the steroid scaffold significantly decreased the permeability (hydroxyprogesterone, logKp = −3.22). The hydroxyl group at position 17 appears to be detrimental to steroid permeability, since testosterone (logKp = −3.40) contains the hydroxyl group but lacks the ethoxy group. We removed both the hydroxyl and ethoxy groups from position 17, and the resulting compound, a 3-oxo-delta4-steroid, had a predicted logKp = −2.28; adding a methyl group at the same position also resulted in a predicted logKp = −2.28, which is higher than the permeability of hydroxyprogesterone and testosterone, but lower than that of progesterone.
QSAR modeling of skin permeability
We have employed a dataset comprising 211 compounds (dataset D). Several structural duplicates were identified during the curation process. The complete list of 17 duplicate pairs and two sets of triplicates is shown in Table S3. Skin permeability values for all pairs of duplicates were the same or almost the same. This observation also means that the statistical performance of the models developed in an earlier study may have been over-optimistic because, as shown previously (Fourches et al., 2010), the presence of duplicates with identical activity annotations in both training and test sets generally leads to an overestimation of the model's predictivity. Here, we built species-specific QSAR models of skin permeability using the same protocol used for the skin sensitization study (Alves et al., 2014). The results shown in Table 1 suggest that continuous models with high accuracy for both datasets have been generated; however, the quality and coverage of human-based models 12 and 14 were higher than the same characteristics of rodent-based model 21. The former models have been retained for in silico screening of chemical libraries.
Comparison of developed QSAR models vs. DERMWIN permeability predictions
The DERMWIN module available as part of the EPI Suite software has been employed in several recent studies (Fong and Tong, 2012; Fong et al., 2014; Zhang et al., 2013) for evaluating skin permeability. For this reason, we decided to compare our models with the one implemented in this software. DERMWIN uses a linear equation that relates the permeability coefficient to the chemical's octanol/water partition coefficient and molecular weight. However, the statistical characteristics of this model suggested that this correlation was not adequate to predict human permeability. Although our consensus model 12 had a lower coverage of chemical space than DERMWIN (77% vs. 100%), it significantly outperformed DERMWIN in predictivity (Q²ext = 72% vs. 43%, respectively) (see Table 1). When the AD restriction was removed and the predictive accuracy of the models was evaluated using the same set of compounds (143 compounds), the comparison was still in favor of our model (Q²ext = 71% vs. 66%, respectively) (see Table 3).
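The comparison above couples two numbers: predictivity inside the applicability domain and the fraction of compounds covered. As a sketch, both can be computed with a small utility; `q2_ext_with_coverage` is a hypothetical helper and the toy values are invented:

```python
import numpy as np

def q2_ext_with_coverage(y_true, y_pred, in_domain):
    """External Q2 computed only over compounds inside the
    applicability domain, together with the coverage (the fraction of
    compounds for which a prediction is made at all)."""
    m = np.asarray(in_domain, dtype=bool)
    yt, yp = np.asarray(y_true)[m], np.asarray(y_pred)[m]
    q2 = 1 - np.sum((yt - yp) ** 2) / np.sum((yt - yt.mean()) ** 2)
    return q2, m.mean()

# Toy logKp values: one compound falls outside the AD and is excluded.
y_exp  = np.array([-3.0, -2.5, -1.8, -4.0, -2.2])
y_pred = np.array([-2.9, -2.6, -2.0, -3.5, -2.3])
ad     = [True, True, True, False, True]
q2, cov = q2_ext_with_coverage(y_exp, y_pred, ad)
print(round(q2, 2), cov)  # 0.91 0.8
```

Raising the AD threshold typically raises Q²ext while lowering coverage, which is exactly the trade-off seen between model 12 (77% coverage) and DERMWIN (100% coverage) in Table 1.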
DISCUSSION
Can rodent skin permeability data be used to predict human permeability?
As presented in the Results section, we found 34 compounds that had experimental permeability coefficient data for both humans and rodents. Although the overall correlation between skin permeability measured in these two species was not high, we wished to determine whether the model obtained on the rodent dataset E could be used to predict skin permeability in humans. Virtual screening of the human dataset D using the model developed with the rodent data resulted in reasonably high predictivity (Q²ext = 0.77, RMSE = 1.27), but only for a very small fraction (19%) of compounds. Based on these results, one can assume that the rodent model could be used for predicting human skin permeability for at least some compounds within the conservative AD, but this observation should be treated with caution since it does not hold for a larger dataset. Additional experimental results for a greater number of compounds are needed for more definitive conclusions.
Relationships between skin sensitization potential and skin permeability based on computed data
Given the small number of compounds with known experimental values of both skin sensitization and permeability, we decided to apply our selected QSAR models to cross-predict these properties. The use of QSAR-imputed permeability and sensitization data allowed us to examine the possible relationships between the two endpoints for a much larger set of compounds. Skin sensitization model 5 (see the companion paper (Alves et al., 2014)) was used for predicting skin sensitization for 186 compounds in the human skin permeability dataset (D), whereas skin permeability model 5 (Table 1) was used for calculating permeability coefficients for 387 compounds in the skin sensitization dataset retrieved from ICCVAM (254 compounds from dataset A + 133 sensitizers from dataset B) (see (Alves et al., 2014)). Then, we combined all predictions for compounds with unavailable experimental data (see Table S5). Compounds outside of the AD either for model 5 or 12 were removed from the analysis. In the end, 283 compounds that had information (experimental or predicted) for both endpoints were considered for the analysis. As one can see in Figure S1, all but one of the weakest penetrants (logKp < −5) were non-sensitizers. The only exception was 1-dodecyl glycidyl ether, which could have been mispredicted by model 5. It was hypothesized recently that compound accumulation in the skin contributes more significantly to skin sensitization than its actual permeability through the skin layer (Jaworska et al., 2013). Our results (see Figure S1) show that easily-penetrating compounds may be both sensitizers and non-sensitizers. For compounds with low permeability we made the same observation. These findings re-confirm the absence of a global correlation between skin permeability and sensitization.
The aforementioned results contradict a common view of skin permeation as the first step in the adverse outcome pathway leading to skin sensitization (OECD, 2012). To investigate whether the concordance between skin sensitization and permeability could be confined to certain chemical classes, we have performed a cluster analysis for the entire dataset of 283 compounds with imputed data. Careful analysis of the relationships between the two endpoints within clusters of structurally similar compounds showed that the observation linking high permeability and skin sensitization potential could be made only for one cluster out of 34. This specific Cluster 1 is formed by 11 compounds represented by four barbitals (barbital, amylobarbital, butobarbital, and phenobarbital) and seven other drugs (scopolamine, ibuprofen, atropine, griseofulvin, cyclamen aldehyde, p-tert-butyl-alpha-ethyl-hydrocinnamal, and sodium-3,3,5-trimethyl-benzenesulfonate). The comparison between permeability and sensitization for compounds in this cluster is shown in Figure S2 and Table S6. The analysis revealed that seven compounds are associated with logKp < −2.5 and are non-sensitizers, whereas the four other compounds with logKp > −2.5 are sensitizers.
No distinct trend could be observed for the remaining clusters. Although, as noted above, skin permeability is widely considered to be mechanistically implicated in skin sensitization, we have found no evidence that high permeability implies high skin sensitization potency and vice versa. Other authors have reached the same conclusions. For instance, an analysis of mechanism-based QSARs demonstrates no role for skin permeability in determining potency (Roberts and Aptula, 2008). Similarly, in another recent study (Jaworska et al., 2013), the authors hypothesize that the accumulation of a chemical in the epidermis layer of the skin is much more closely related to skin sensitization than the actual penetration properties. In another study (Roberts and Patlewicz, 2010), it was hypothesized that skin sensitizers reach the viable epidermis by bypassing the skin permeability route via shunt pathways, since poor penetrants have been shown to be sensitizers when applied by intradermal injection.
Another important benefit of this cheminformatics analysis is the use of structural rules, established from the interpretation of the developed QSAR models, SAR, and cluster analysis, to design new compounds with improved permeability and sensitization characteristics. We illustrated this approach using an example of putative stepwise structural optimization of pentanoic acid for permeability, considering experimental data and predictions using the developed models (Figure 3). Starting from this compound with a relatively low permeability (logKp = −2.7), several transformation steps can increase its permeability more than 10-fold and convert it to n-heptanol (logKp = −1.50). n-Heptanol is predicted to be a sensitizer (which is confirmed by its Material Safety Data Sheet (OXEA, 2013)), but it can be transformed to octanoic acid, which has a similar permeability (logKp = −1.60) to n-heptanol but lacks its sensitization potential.
CONCLUSIONS
We have compiled, curated, and integrated the largest publicly available datasets of skin permeability for diverse chemicals. The analysis of the experimental data for compounds containing both skin sensitization and permeability data indicated that, with a few exceptions, there is no overall concordance between these two endpoints, i.e., weak penetrants could be strong sensitizers and vice versa. Although sensitizers have to penetrate the skin layer, the permeability coefficient is not a determinant of the skin sensitization potential. Cluster analysis also helped us to highlight the high consistency of experimental Kp data reported in the literature.
We have built statistically significant and externally predictive QSAR models of skin permeability that can be used to predict the permeability of untested compounds through the skin. Comparison of the developed consensus model with the DERMWIN software showed that our model significantly outperformed DERMWIN in predictivity (Q²ext = 72% vs. 43%, respectively), but at the expense of some loss in coverage (77% vs. 100%); when applied to the same set of compounds as used for DERMWIN (ignoring the AD for our models), the performance of our model was still higher (Q²ext = 71% vs. 66%). The compiled datasets and all the models developed in this study have been made publicly available at the Chembench Web Portal (http://chembench.mml.unc.edu).
The use of skin permeability and sensitization values imputed by our QSAR models allowed us to examine the relationships between these two endpoints utilizing a significantly expanded set consisting of 283 compounds. The results indicated that there is still no overall concordance between these endpoints. This phenomenon could be explained by attributing higher impact to compound accumulation in the skin on sensitization potential, rather than permeability through the skin layer. Further investigation into this observation would be facilitated via building predictive models of compound accumulation in the skin, and incorporating considerations of protein binding and reactivity.
In conclusion, the skin permeability models developed in this study could be useful for risk assessment purposes in addition to the assessment of skin sensitization; for instance, knowledge of how much of an applied dose will penetrate the skin may be helpful to estimate systemic exposure to topically applied chemicals. Moreover, the lack of correlation between skin permeability and skin sensitization established in this study suggests a possibility of rational design of compounds with desired high permeability but low skin sensitization potential, which may be of value to the cosmetics industry.
Highlights
• We compiled the largest publicly-available skin permeability dataset.
• Predictive QSAR models were developed for skin permeability.
• No concordance between skin sensitization and skin permeability has been found.
• Structural rules for optimizing sensitization and penetration were established.

Figure caption: Cluster analysis of the human skin permeability dataset D: dendrogram and heat map of the distance matrix ordered based on structural similarity (blue/violet = similar; yellow/red = dissimilar). The following clusters are noted: (a) carboxylic acids, (b) glycol ethers, and (c) steroids.

Figure caption: Example of a structural transformation of the sensitizer n-octanol with low permeability to the non-sensitizer octanoic acid with improved permeability. The desired change of property is highlighted in green, the undesired in red; Δ = logKp(parent) − logKp(child).

Table caption: Statistical characteristics of QSAR models for skin permeability assessed by 5-fold external cross-validation. Notes: Models 1 to 7: human-based skin permeability models. Models 8 to 13: rodent-based skin permeability models. RMSE: root mean square error; MAE: mean absolute error. * Applicability Domain was not considered in these models.
A Spoonful of Math Helps the Medicine Go Down: An Illustration of How Healthcare can Benefit from Mathematical Modeling and Analysis
Objectives: A recent joint report from the Institute of Medicine and the National Academy of Engineering highlights the benefits of--indeed, the need for--mathematical analysis of healthcare delivery. Tools for such analysis have been developed over decades by researchers in Operations Research (OR). An OR perspective typically frames a complex problem in terms of its essential mathematical structure. This article illustrates the use and value of the tools of operations research in healthcare. It reviews one OR tool, queueing theory, and provides an illustration involving a hypothetical drug treatment facility.

Method: Queueing Theory (QT) is the study of waiting lines. The theory is useful in that it provides solutions to problems of waiting and their relationship to key characteristics of healthcare systems. More generally, it illustrates the strengths of modeling in healthcare and service delivery. Queueing theory offers insights that initially may be hidden. For example, a queueing model allows one to incorporate randomness, which is inherent in the actual system, into the mathematical analysis. As a result of this randomness, these systems often perform much worse than one might have guessed based on deterministic conditions. Poor performance is reflected in longer lines, longer waits, and lower levels of server utilization. As an illustration, we specify a queueing model of a representative drug treatment facility. The analysis of this model provides mathematical expressions for some of the key performance measures, such as the average waiting time for admission.

Results: We calculate the average occupancy in the facility and its relationship to system characteristics. For example, with 32 beds and one arrival per day, the average occupancy is 28 beds, and patients who must wait for admission wait about 4 days on average. We also explore the relationship between the arrival rate at the facility, the capacity of the facility, and waiting times.
Conclusions: One key aspect of the healthcare system is its complexity, and policy makers want to design and reform the system in a way that balances competing goals. OR methodologies, particularly queueing theory, can be very useful for gaining a deeper understanding of this complexity and exploring the potential effects of proposed changes on the system without making any actual changes.
Introduction
Over the past two decades, operations researchers increasingly have examined health care systems. One of the leading journals in the field, Operations Research, devoted an entire issue to health care research in November, 2008 [1]. This research employs the latest in operations research methodology (e.g., Ross and Jayaraman [2]). Articles published in the operations research (OR) literature examine a broad range of issues, including (but not limited to) capacity planning and management in hospitals [3,4] and multisite service systems [5]; organ donation and allocation [6,7] and dialysis [8]; workforce scheduling [9]; the occurrence of disease, including mental disorder [10]; the effect of promotional tools [11]; patient queues and delays [12,13]; the prediction of health care costs [14]; drug treatment [15]; the effects of reimbursement policy [16,17]; and breast cancer diagnosis and treatment [18].
In contrast, very little of this research appears in the standard journals in health policy and health services research (for exceptions, see [19][20][21]). The disjuncture, therefore, lies between the development of these tools and their application to real-world problems. This need is reflected in a recent joint report from the Institute of Medicine and the National Academy of Engineering. This landmark report identifies many potential benefits of OR in healthcare and recommends several measures to strengthen the link between the two. For example, the report recommends that health care become one of the standard applications taught to engineering students. Conversely, the report advocates that providers integrate system tools in the actual delivery of care. Such tools might include system-wide data standards and hand-held digital recall devices for doctors and nurses.
An OR perspective typically frames a complex problem in terms of its essential mathematical structure. Such a model has three main components: an objective function, decision variables, and constraints. The purpose of the model is to identify the relationships between alternative choices and key outcomes. For example, a common application of OR tools involves queues for services. In a typical queueing application, the objective could be to minimize staffing costs, a constraint could be that average waiting time remains below some level, and the decision variable could be the number of servers to be employed. Once the model is specified, OR offers a variety of tools for understanding the implications of alternative choices. For example, a mathematical solution may identify the optimal decision and allow one to estimate the impact of sub-optimal choices. In many instances, standard mathematical solutions for common problems exist; many rather different applications (e.g., the line at a bank teller or the waiting room at a health clinic) have a similar mathematical structure.
Like any modeling, OR simplifies the actual phenomena. A model generally cannot completely represent every detail of a complex system. A model captures the essence of a system; as a result, some details of that system are ignored. One needs to balance the level of detail with the analytic tractability of the model. As model complexity grows, the model becomes more realistic yet more difficult to solve. Standard solutions may no longer exist, requiring the analyst to develop complex simulations or new mathematical techniques to solve the problem. One can balance these concerns by calibrating the model--by assessing its ability to reproduce key features of the system(s) being modeled.
Modeling has several benefits. The model may identify unanticipated, system-wide consequences of a decision. For example, adding more lines at a fast food restaurant during lunch time may generate increasingly small reductions in waiting time by customers; or laying off one of our four cashiers may increase waiting times only by 10%. Particularly valuable is the fact that the model may reveal these consequences before the decision is actually implemented.
This article illustrates the use and value of operations research tools in health care. We employ queueing models as an illustration. Queueing models are useful in that they provide solutions to problems of waiting that are particularly relevant in health care. More generally, they illustrate the strengths of modeling in health care research and service delivery. Section 1 provides background on queueing theory. Section 2 provides some examples of how queueing theory has been used in healthcare. Section 3 develops a modeling approach for an illustrative example, drug treatment. This section also makes a broader point involving the benefits of modeling more generally.
Methods: A Brief Introduction to Queueing Theory
Queueing Theory (QT) is the study of waiting lines and is one of the oldest areas of OR. QT grew out of an article by Erlang [22], which provided some of the most widely used tools in mathematical modeling. QT focuses on systems in which "customers" arrive, wait for their turn for service, are served, and then leave. Telecommunications, inventory management, and healthcare are all areas to which these tools have been applied.
Queues develop because of the random manner in which customers arrive and the time it takes to serve them. In many systems, administrators need an understanding of the relationships between key performance measures and controllable system parameters. QT defines these relationships mathematically under certain conditions, revealing the potential effects of decisions on performance measures. For example, the theory allows one to determine a mathematical expression that relates average customer waiting time to the number of servers. The decision-maker can then examine how average waiting times change with the number of servers and determine how many servers to employ accordingly.
Queueing theory offers key insights that initially may be hidden. For example, a formal model allows one to incorporate randomness in key parameters, such as arrivals of phone calls at a system. As a result of this randomness, these systems often perform much worse than one might have guessed based on deterministic conditions (no randomness). Poor performance is reflected in longer lines, longer waits, and conversely lower levels of server utilization overall. For example, an emergency room may have sufficient staffing for the average load on a night, but the staffing level will not be sufficient on all nights because of the randomness in patient arrivals. As a result, some of the patients will end up waiting for a very long time. QT can help in determining staffing levels that will ensure that the percentage of long waits remains below a certain level.
Most queues we observe in our daily lives may seem simple from the outside, but they are in fact so complex that they cannot be characterized mathematically. For example, we can easily determine the arrival rate of cars at a drive-thru fast-food restaurant, but a complete mathematical description of the exact times at which cars will arrive is very difficult. Even in cases where such a characterization is possible, the description is typically very complex and therefore does not permit mathematical analysis. Any QT work assumes the existence of certain conditions on the system analyzed, which conveniently allow the mathematical analysis to proceed. Thus, QT analyzes "idealized" models, which typically do not exist in practice but can serve as approximations ranging from reasonable to excellent. Often, the same idealized model can be used to represent a variety of queueing systems; e.g., one model may approximate the ticket line at a movie theater, the cars lined up at the drive-thru, and the patients waiting in the emergency room.
Queueing theorists typically use Kendall's notation as a short-cut for complete descriptions of queueing models. (See Table 1.) That notation comprises five essential characteristics: the (1) Arrival Process (A), (2) Service Time Distribution (B), (3) Number of Servers (C), (4) System Capacity (K), and (5) Service Discipline (D). The first characteristic, the arrival process, generally is specified as deterministic or stochastic. Deterministic processes involve constant times between events (such as customer arrivals); stochastic processes involve random variation in these times. The last characteristic refers to the process that determines the order in which waiting customers are served. The queue is then simply described (by Kendall's notation) as the "A/B/C/K/D queue." If the system capacity K is not given, it is assumed to be infinite. If the service discipline is not given, it is assumed to be First-Come-First-Served (FCFS). For example, the notation M/D/3/20 indicates a queueing system in which the arrival process is Markovian (interarrival times are exponentially distributed); service times are deterministic; there are three servers; system capacity is 20; and the service discipline is FCFS. On the other hand, the D/G/1 queue has deterministic arrivals, general service times (i.e., no particular service time distribution is assumed), a single server, infinite system capacity, and FCFS as the service discipline.
The QT literature offers many standard queueing models. The solution to such a model reveals its key features, such as mathematical expressions for the average waiting time. The results of these analyses can be readily used in different application areas where these queueing models are good fits. For example, one of the simplest queueing models is the M/M/1 queue. For this queue, if the arrival rate is λ and the expected service time is 1/μ, the average number of customers in the system (including those receiving service) is (λ/μ)/(1 − λ/μ). Then, if one is interested in determining the average number of customers in a queueing system for which the M/M/1 queue is a good fit, all one needs to do is plug in the actual values for the arrival rate λ and the service rate μ. QT provides similar ready-to-use solutions for a number of queueing systems, although the mathematical expressions can be significantly more complex. The QT literature is not limited to models that can be described by Kendall's notation. There are many realistic features that one can add to such models, although the analysis typically gets more complex with each addition. For example, one feature relevant to healthcare is reneging customers. These customers choose to leave the system before being served. "Leaving" can describe a wide range of phenomena within the healthcare setting. In an emergency room, those leaving can be patients who get tired of waiting. On the other hand, in a mass casualty event, leaving could refer to those patients who die before being seen. Another set of models relevant within the healthcare setting are queueing networks. In these models customers move between queues and possibly leave the system at some point. Such models can describe the movements of patients, for example, between surgery, the recovery room, and regular inpatient care [23].
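To make the plug-in use of such formulas concrete, here is a minimal sketch (ours, with hypothetical rates) of the textbook M/M/1 algebra combined with Little's law:

```python
def mm1_metrics(lam, mu):
    """Standard M/M/1 performance measures (requires lam < mu for stability)."""
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # avg number in system (incl. in service)
    W = L / lam                    # avg time in system (Little's law: L = lam * W)
    Lq = L - rho                   # avg number waiting for service
    Wq = Lq / lam                  # avg wait before service starts
    return rho, L, W, Lq, Wq

rho, L, W, Lq, Wq = mm1_metrics(lam=0.8, mu=1.0)   # hypothetical rates
# rho = 0.8, L = 4.0 customers, W = 5.0 time units, Wq = 4.0 time units
```

The same few lines answer "how many people are in line?" for any system the M/M/1 model fits reasonably well; only λ and μ change.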
While vast, the QT literature does not provide answers for every possible queueing model. In general, as one changes the Markovian assumptions for the arrival and service processes or adds some of the complications discussed above, the analysis becomes more complex. For such systems, simulation may be better suited. In a simulation study, researchers first build a model of the actual system using simulation software. When simulating, the computer generates vast numbers of customers (or other entities) who travel through the model, which could consist of a single queue or a network of queues. As these entities travel through the model, the computer records the desired data, which are then used to describe system performance, such as the average waiting time per customer.
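The kind of bookkeeping a simulation performs can be sketched in a few lines (a toy single-server illustration of ours, not any commercial package). With exponential interarrival and service times, its long-run average wait approaches the M/M/1 value λ/(μ(μ − λ)):

```python
import random

def simulate_mm1_wait(lam, mu, n_customers, seed=0):
    """Crude event-driven simulation of an M/M/1 FCFS queue.
    Returns the average time a customer waits before service begins."""
    rng = random.Random(seed)
    t = 0.0                 # current arrival time
    server_free_at = 0.0    # when the single server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(lam)         # Poisson arrivals
        start = max(t, server_free_at)    # wait if the server is busy
        total_wait += start - t
        server_free_at = start + rng.expovariate(mu)
    return total_wait / n_customers

# For lam = 0.8, mu = 1.0 the theoretical mean wait is 0.8/(1.0 * 0.2) = 4.0;
# the simulated estimate converges to this as n_customers grows.
```

Where closed-form results exist, such simulations serve as a sanity check; where they do not, the simulation becomes the analysis.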
Simulation is one tool for bypassing the difficult analytical problems resulting from complex queueing situations. Another involves mathematical approximations to simplify features of the model (such as the objective function). In most interesting problems, difficult equations arise which are impossible to solve generally. However, various approximations can make the problem tractable (e.g., a Taylor Series approximation to a difficult-to-evaluate function).
Applications of Queueing Theory in Health Care
Queueing models have been used to answer a variety of questions in health care. These applications involve a range of problems that vary greatly in scale. Some examples are how to allocate hospital beds [24], schedule surgeries [25], and triage patients [26], but researchers typically have focused on waiting times and utilization. These papers have explored the trade-offs between these two goals, minimizing patients waits and maximizing resource (staff, equipment, beds, etc.) utilization. Green [27] provides a good overview of this literature. Healthcare applications tend to be very complex, and so there have been a number of extensions to the classic models to bridge the gap between the models and reality. Some extensions include reneging (i.e., leaving the queue before being served) [28], variable arrival rates [29], and blocking (i.e., customers done with one phase of service, not advancing through the system due to others being served) [30].
Some QT research has examined patient flows and considered how redistribution of resources or redesign of systems could improve flow. Often commercial software, such as QNA, is very helpful to real-world providers. Such applications may reduce or eliminate bottlenecks that reduce quality of care [31].
Queueing has also been used in the design of whole systems in healthcare. Most of this work focuses on finding proper capacity, such as the marginal cost of additional beds [32]. Some studies examine multiple elements working together such as facilities working in the same region [30,33].
The next section illustrates queueing theory and the tools of OR more generally. This section will look at how a simple model in QT can answer difficult questions in healthcare management.
An Illustration: The Management of Drug Treatment Facilities
The history of treatment for substance abuse is long, and effective drug treatment has proven elusive. Some programs have demonstrated success. Brief intervention and social skills training have both shown significant efficacy (in particular over traditional 12-step programs [34]). Simply finding an effective treatment, however, is only the beginning of service delivery. A range of problems separate potential patients from actually receiving appropriate care. These issues include funding as well as training providers to actually deliver the service. Key issues of capacity planning also are involved: a provider offering the effective treatment needs to deliver it in sufficient quantity to those who would benefit.
For these types of problems, OR in general and queueing models in particular have much to offer. Consider a residential drug treatment facility. This hypothetical facility can be modeled using an M/D/c queue. This model was chosen because it best approximates many key features of this application. First, potential customers (individuals requiring treatment) arrive at random according to a Poisson process with some constant rate (M for arrivals). If space is available for them to enter treatment, they do; otherwise, they must wait. Second, services take a deterministic amount of time: each patient spends k weeks in treatment and then is discharged (D for service). Facility capacity is determined by the number of patients the facility can house at one time, here referred to as the number of beds (c servers). We also assume that the waiting list has no maximum and that the arrival rate does not depend on the number of clients being treated at a point in time. Finally, we assume that decisions about whom to treat are made on a "First Come First Served" basis.
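This model specification translates almost directly into a simulation sketch (ours, in Python rather than the Matlab of the appendix): Poisson arrivals, a fixed length of stay, c beds, and FCFS admission. Long runs of this sketch land near the exact results reported for Question 1 below:

```python
import heapq
import random

def simulate_mdc(lam, service_time, c, n_customers, seed=1):
    """Event-driven simulation of an M/D/c FCFS queue.
    Returns (fraction of arrivals who wait, mean wait among those who wait)."""
    rng = random.Random(seed)
    free_at = [0.0] * c                 # times at which each bed next frees up
    t = 0.0
    n_waited, total_wait = 0, 0.0
    for _ in range(n_customers):
        t += rng.expovariate(lam)       # Poisson arrivals
        earliest = heapq.heappop(free_at)
        start = max(t, earliest)        # admitted when the first bed frees up
        if start > t:
            n_waited += 1
            total_wait += start - t
        heapq.heappush(free_at, start + service_time)
    mean_wait = total_wait / n_waited if n_waited else 0.0
    return n_waited / n_customers, mean_wait

# With lam = 1 per day, a 28-day stay, and 32 beds, long runs give roughly a
# third of arrivals waiting, for about 4 days each, matching the exact analysis.
```

The exact analysis below is preferable when available, but the simulation requires no queueing theory beyond the model's assumptions.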
Tijms [35] has examined this specific queue. The solution involves solving an infinite set of linear equations. The author does so by using a common approximation in OR: he reduces the number of equations to a finite number using the geometric tail approximation. This approximation assumes that the probability of having successively greater numbers of people in a queue decays exponentially as the length of the queue increases. Using this simplified system, one can solve directly for the long-run average number of people in the queue. From that solution, one can calculate other key outcome measures, such as the distribution of waiting times, the proportion of arrivals that need to wait, and measures of system utilization. (See mathematical appendix 1 for more details. The appendix also provides the Matlab code needed to produce the solutions reported below.) One can take Tijms' results and examine questions about performance that a policy maker might have regarding the facility. For example:

• What will average occupancy be? How long can patients expect to wait for a bed?
• Can the facility accommodate referrals from a local hospital? If so, how many extra beds will be required?
• If this facility is consolidated with two others (leaving overall capacity unchanged), who will benefit? Customers, the facilities, both, or neither?

To answer these questions, one needs three key parameters. Any facility can provide these figures; we selected hypothetical values for the arrival rate (λ = 1 per day), the length of treatment (28 days; μ = 1/28), and the number of beds (C = 32). We illustrate the model's solution for these hypothetical parameters, but of course, the reader with access to Matlab could calculate a solution for a different facility, perhaps better reflecting the circumstances in his or her community.
Question 1: What will average occupancy be? How long can patients expect to wait for a bed?
The long-run expected occupancy at a point in time is 28 beds. On 33.6% of all days, the facility is full; 66.4% of patients experience no wait for services. Only on 22.2% of days are fewer than 25 beds occupied. Of those who do wait, the expected wait for a bed is 4.11 days. Only 5.8% of patients have to wait more than one week. So, based on these benchmarks, our hypothetical clinic is meeting realistic operating goals.
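A closed-form plausibility check on these figures (our sketch, not the exact M/D/32 computation) is the Erlang C formula for the corresponding M/M/c queue; deterministic service shortens queueing delay relative to exponential service, so the M/M/32 numbers should sit somewhat above the exact ones:

```python
from math import factorial

def erlang_c(c, a):
    """P(an arrival must wait) in an M/M/c queue; a = lam/mu is the offered
    load and must satisfy a < c for stability."""
    term = a ** c / factorial(c) * c / (c - a)
    return term / (sum(a ** k / factorial(k) for k in range(c)) + term)

lam, mu, c = 1.0, 1.0 / 28.0, 32      # the facility's parameters
a = lam / mu                          # offered load: 28 "beds' worth" of demand
p_wait = erlang_c(c, a)               # M/M/32 delay probability
w_cond = 1.0 / (c * mu - lam)         # M/M/32 mean wait for delayed arrivals: 7 days
# The exact M/D/32 values (facility full 33.6% of the time; 4.11-day
# conditional wait) are of the same order but smaller, as expected.
```

The same function answers "what if we add a bed?" instantly: erlang_c(c + 1, a) gives the new delay probability.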
Question 2: Can the facility accommodate referrals from a local hospital? If so, how many extra beds will be required?
Suppose a local hospital is considering closing its treatment unit. If it does, it will refer an average of 2 patients per week. Based on the hospital's estimate, the arrival rate will now be 9/7 (1.2857) patients per day. We know that not all beds are in continual use now, since the expected occupancy is 28 patients, so we might try adding no beds. However, an easy check for stability shows that this will not work. From introductory QT, it is well known that if there is no limit to the number waiting and λ ≥ μ * C, the system is unstable (the length of the line will grow without bound). Put into words, if the arrival rate is at least the total service rate when all servers are working, the facility will never (in the long run) be able to keep up. The new offered load is (9/7) × 28 = 36 beds' worth of work, so the stability condition requires more than 36 beds, i.e., at least 37 (5 additional beds). How many beds to add beyond this minimum is difficult to calculate because one must decide how much of an increase in waiting is tolerable. However, by utilizing QT one can look at the various possibilities instantly; one can determine the various outcomes without having to implement them and then observe the consequences. Figures 1 and 2 report the relationship between added beds and waiting times. Figure 1 shows the relationship between the number of beds added and two key performance measures. At 7 added beds, the average percent of beds occupied is 92.3%, a slight increase over the original rate (87.5%). Adding 9 beds restores the system to this original rate. At 7 added beds, roughly half of arrivals do not wait (50.7%)--a rate substantially worse than observed originally (66.4%). Figure 2 reports the average waiting time for an arrival that is not immediately served. At 7 added beds, the wait is somewhat longer than in our original model (5.21 days vs. 4.11). At 9 added beds, the impact of the additional customers is eliminated.
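The stability check can be mechanized (our sketch; the small tolerance guards against floating-point noise). Note that an offered load of exactly 36 beds' worth of work sits on the boundary λ = μC, which by the criterion above is still unstable, so one bed more than the offered load is required:

```python
import math

def min_stable_beds(lam, mu):
    """Smallest integer c with lam < c * mu, i.e., the smallest capacity
    for which the queue does not grow without bound."""
    a = lam / mu                          # offered load
    return math.floor(a + 1e-9) + 1      # smallest integer strictly above a

beds_new = min_stable_beds(9 / 7, 1 / 28)   # offered load exactly 36 -> 37 beds
beds_orig = min_stable_beds(1.0, 1 / 28)    # 29; the 32-bed facility clears this
```

Stability is only a floor: as the figures show, capacity well above this minimum is needed to keep waits tolerable.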
From the figures it is clear that we should add more than 7 beds; otherwise we expect the performance metrics to deteriorate, at least from the patients' perspective. From the facility's perspective, some benefit accrues: the percent of beds occupied increases. Adding 9 beds keeps the performance metrics at similar (or slightly improved) values with a minimum number of additional beds.
Question 3: Suppose this facility is one of three in the area and that the owners of the facilities are considering a merger under which patients would be shared across facilities. Who will benefit? Patients, the firms, both, or neither?
Assuming that the two other facilities are identical to our initial facility (1 arrival per day, 28-day treatment, and 32 beds) and close enough that patients could be moved from one facility to another with negligible cost, the three merged facilities would have an overall capacity of 96 beds and a combined arrival rate of 3 patients per day. After the merger, the average occupancy is unchanged (87.5%). However, the patients' experiences are dramatically improved: the percent of arrivals who experience no wait rises to 87.7% (from 66.4%), and the expected wait drops to 1.55 days (from 4.11). Clearly, the merger should improve the experiences of patients.
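The pooling benefit described here is the classic economies-of-scale effect in queueing: at identical utilization, one large server pool waits less than several small ones. A sketch under the same hedged M/M/c assumption (the exact percentages in the text come from the authors' own model and need not match):

```python
# Pooling comparison for Question 3: three separate facilities
# (1 arrival/day, 32 beds each) versus one merged facility
# (3 arrivals/day, 96 beds). Utilization is 87.5% in both cases.
def erlang_c(c, a):
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    rho = a / c
    return b / (1.0 - rho * (1.0 - b))

mu = 1.0 / 28.0                   # 28-day average stay
p_wait_sep = erlang_c(32, 28.0)   # any one stand-alone facility
p_wait_pool = erlang_c(96, 84.0)  # merged: same total load, shared beds

print(f"separate: P(no wait) = {1.0 - p_wait_sep:.1%}")
print(f"pooled:   P(no wait) = {1.0 - p_wait_pool:.1%}")
```

Whatever the exact service-time distribution, the qualitative conclusion is robust: pooling the beds sharply raises the fraction of patients admitted immediately without adding any capacity.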
The owners of the firm will be interested in how they would benefit from the merger. In the post-merger system, while the average occupancy has stayed the same, the number of idle beds has increased. To raise revenue, the facility may take more transfers from other facilities or otherwise increase the number of patients without damaging the patients' experience. Table 2 reveals the effect of the additional arrivals. The first line shows the key system characteristics at the original arrival rate. We see that small increases in the arrival rate cause the occupancy to increase substantially. Increasing arrivals to 3.3 per day (a 10% increase) raises occupancy to 96.3%. Concomitantly, the patients' experience deteriorates: now only 41.4% are admitted immediately, and the waiting time (for those who do wait) increases to 4.24 days. Further increases are especially dramatic. At 3.5 arrivals per day, all of the beds are occupied all of the time, and all patients wait; the system is now unstable, as waiting times grow without bound.
An alternative option would be to cut the number of beds and so reduce overhead. This option is summarized in table 3.
One can see that two beds can be eliminated without much impact. However, cutting more than ten beds would have large negative effects on the patients' experience with little gain for the facility. Doing so would cause 73.7% of patients to wait, with an average wait exceeding seven days.
Discussion
In the last 100 years, medical care has advanced tremendously, arguably more than in the entire history of humanity prior to that point. Revolutionary treatments have been developed, and people live longer and healthier lives as a result. Improved technology, however, has little power to improve the lives of ordinary citizens unless it is disseminated efficiently and quickly.
Moving from better treatment to better health involves a series of steps; stumbles at any stage can rob patients of the benefits they might gain from better treatments. In some cases, the barrier to better care can be quite simple. For example, it is well established that patients having experienced an acute myocardial infarction should receive an aspirin within 24 hours of hospital admission. More than one in six participants in the Medicare program, however, still do not receive the aspirin [36]. More generally, only half of all patients receive "best practice" treatment for their illness [37]. The failure of providers to follow treatment guidelines is just one example of how knowledge of effective care may not translate into effective care.
Other barriers to care are further removed from the actual experience of care. One such barrier involves the management of resources. Arguably, the technology of treatment has far surpassed the technology of managing the delivery of care. The IOM/NAE report has labeled this disjuncture the "paradox of American Health Care" [37, p. 11]. This paradox reflects a variety of causes, some beyond the reach of the tools of operations research. One cause is that the financing of care fosters a "cottage industry" structure [37]. The resulting fragmentation of care further degrades the quality of care.
As this paper illustrates, one set of tools for improving the delivery of health care can be found in operations research. We illustrate one of those tools here: queueing models. As is true of the broader set of OR tools, queueing models can accommodate key features of the delivery process of interest, such as the random nature of patient arrivals. The model can then identify how key outcomes may change in response to policy changes. As we illustrate with an example involving drug treatment, a queueing model can identify the effect of such changes before they are actually implemented in the real world.
One key aspect of the health care system is its complexity, and policy makers want to design and reform the system in a way that fosters goals that can be competing. For example, policy makers have established a regulatory environment that creates barriers between different organizations within the health care system. Efforts to vertically integrate different levels of health care delivery raise antitrust concerns. For example, a hospital may purchase or merge with a provider of home health care, and such integration may improve the coordination between those providers. However, such integration may raise the potential for anticompetitive behavior, such as the steering of patients by the hospital to its provider. Such integration, therefore, may foster the improved management of resources but may have hidden costs. More generally, these types of complex decisions are the ones for which the benefits of an OR approach are likely the greatest. An OR model can reveal how different choices balance the benefits of vertical integration with the potential market distortion it creates.
Tuning the Hydration Acceleration Efficiency of Calcium Carbonate by Pre-Seeding with Calcium Silicate Hydrate
Nanomaterials are promising candidates for refined performance optimization of cementitious materials. In recent years, numerous studies about the performance improvement of nanomaterials using polymers have been conducted, but the modification of cement-oriented nanomaterials with inorganic modifiers is seldom assessed. In this study, we explored the performance tuning and optimization of nanomaterials by inorganic modification: the hydration acceleration efficiency of calcium carbonate (CaCO3, CC) was tuned via surface deposition of calcium silicate hydrate (C–S–H) nanogel through seeding. Multiple calcium carbonate–calcium silicate hydrate (CC–CSH) samples with varying degrees of surface modification were prepared via dosage control. According to characterizations, the degree of C–S–H modification on the CaCO3 surface has a maximum that is controlled by available surface space. Once the available space is depleted, excess C–S–H remains in free form and causes adhesion between CC–CSH particles. The resultant CC–CSH samples showed enhanced hydration acceleration efficiency that is tuned by the actual degree of C–S–H modification: elevated C–S–H modification shifts CC–CSH's acceleration behavior toward enhanced early-age acceleration. According to mortar strength tests, CC–CSH with 5% C–S–H modification showed the most balanced performance, while CC–CSH with higher C–S–H modification showed faster early-age strength development at the cost of lower later-age strength. The inferior later-age strength of highly C–S–H-modified CC–CSH samples may be due to the coarsening of hydration products and stiffening of their network, as well as agglomeration caused by C–S–H adhesion. This study may offer a novel route for performance tuning of cement-oriented nanomaterials.
Hydration acceleration is one of the major application aspects of nanomaterials, as hydration is the key chemical reaction of Portland cement [1]. The acceleration effect is usually achieved by seeding. In past decades, extensive studies concerning numerous nanomaterials, such as nanoSiO 2 [7,8,15], synthetic nano calcium silicate hydrate (C-S-H) [9,12], and nano CaCO 3 [10,13], have been carried out. These materials exhibited prominent hydration acceleration efficiency compared to traditional materials, with few adverse effects on later-age index [4].
Still, there are several problems in the application of nanomaterials in this regard. At present, the effect of nanomaterials is mainly adjusted by dosage and mix-compounding [5], just like conventional admixtures. However, due to their high interfacial activity, and hence, proneness to agglomerate in pore solution [25], boosting the dosage may bring about undesired agglomeration, causing efficiency loss and workability deterioration. These adverse effects are further observed as dosage sensitivity and poor performance reproducibility [2,7]. Instead of dosage adjustment, modifying or tuning the intrinsic properties of nanomaterials may offer alternative solutions. In recent years, there have been extensive studies on optimizing the effect of nanomaterials in cementitious materials with surface modification [26,27]. Still, these modifications are mainly achieved by polymers, and the use of inorganic modifiers is seldom reported.
In this work, performance tuning of calcium carbonate (CaCO3, abbrev. CC), a common nanomaterial frequently used as a hydration accelerator, with inorganic modification was investigated. The inorganic modification was conducted via surface deposition of calcium silicate hydrate (C-S-H) on calcium carbonate. This idea stems from the acceleration mechanism of CaCO3, which operates by seeding C-S-H [28]. By pre-conducting the C-S-H seeding process on CaCO3 nanoparticles, i.e., activating CaCO3 with nano C-S-H, a tunable hydration acceleration effect can be achieved. We prepared a series of CaCO3 nanoparticles with different degrees of C-S-H activation in this study. The modified CaCO3 nanoparticles, namely, CaCO3-C-S-H (CC-CSH), underwent cement-related tests and characterizations to investigate the effect of C-S-H modification and the viability of nanomaterial modification in cementitious systems.
Materials
CaCO3 nanoparticles were supplied by Changzhou CaCO3 Co. Ltd. (Changzhou, China). The particles were produced by carbonation, with no surface modification. The mean diameter of the particles was 1.6 µm (by surface area). Other reagents for CC-CSH preparation, namely, calcium nitrate tetrahydrate (Ca(NO3)2·4H2O, A.R.) and sodium silicate (Na2SiO3·9H2O, A.R.), were purchased from Sinopharm Co. Ltd., Shanghai, China. A polycarboxylate ether dispersant (PCE) was used to improve the stability of CC-CSH dispersion, supplied by Nanjing Bote Co. Ltd. (Nanjing, China) in the form of 40% solution. PCE was used as the superplasticizer in cement-related experiments. The chemical structure of PCE is shown in Scheme 1.
The cement used in this study conformed to [29]; its mineral content is listed in Table 1.

CC-CSH was prepared by depositing nano C-S-H onto the surface of CaCO3. The mechanism simulates the seeding process that occurs in CaCO3-modified cements and is based on findings by Bentz et al. [28]. CC-CSH samples with different C-S-H contents were prepared. In the preparation process, 100.0 g of CaCO3 and PCE solution containing 5.00 g of the dispersant were dosed in 1000 mL of distilled water and stirred. The dosages of the reagent solutions are listed in Table 2 and were determined based on C-S-H content. A CaCO3 reference without C-S-H deposition (but with PCE addition) was also prepared. The reaction was carried out at room temperature, and the addition of both solutions was completed simultaneously in 12 h. The final dispersion was dialyzed until becoming NO3−-negative (<10 ppm, measured by ionic chromatography, TOSOH IC-2010, TOSOH Co., Tokyo, Japan) and stored for further use.
Characterization of CC-CSH
The mean size of CC-CSH was determined by a HELOS-SUCELL Size Analyzer (Sympatec Co., Clausthal-Zellerfeld, Germany). Free nano C-S-H in CC-CSH dispersion was sampled by short-term (5 min) centrifugation at 3000 rpm, in which CC-CSH was separated while free nano C-S-H remained in the supernatant. This method was also used for free C-S-H removal in subsequent experiments. Its size was measured by dynamic light scattering (DLS, type CGS-3 ALV Co., Langen, Germany).
Shape and morphology, as well as the presence of C-S-H on CaCO3, were then observed by a Scanning Electron Microscope-Energy-Dispersive Spectrometer (SEM-EDS, type QUANTA 250, FEI Co., Hillsboro, OR, USA); the acceleration voltage was 15 kV. Free C-S-H was removed using the technique mentioned above, and CC-CSH was diluted with a proper amount of distilled water, dropped onto sample platelets, and dried at room temperature for 24 h before observation.

Mortar Preparation

Mortar samples were prepared according to GB/T 17671-1999 [30] with a water to binder (including cement and CaCO3/CC-CSH) ratio (w/b) of 0.4 and a binder (total 600 g) to sand (1350 g) ratio of 4/9 by weight. The mix design of the cement composites is demonstrated in Table 3. PCE of 0.12-0.16% total binder mass was used to regulate the flow of mortar to 180 ± 10 mm.

Table 3. Mix design of the mortar samples. The amount of sand in all the samples is 1350 ± 5 g; the amount of water in all the samples is 240 ± 0.1 g.

Mortars were mixed on a Jianyi JJ-5 mortar mixer. Three specimens were prepared from each mortar mix for each test. The strength was tested at 12 h, 1 d, 7 d, and 28 d. The prisms were demolded after 10-11 h and cured in a Dongwu HBY-40B curing case at 20 ± 0.5 °C until testing time.
Paste Preparation
The pastes with w/b of 0.4 were prepared in a high-shear mixer (Jianyi NJ-160A, Jianyi Co., Nanjing, China) at room temperature (20 °C). Superplasticizer (SP, polycarboxylates) and CC-CSH were added to the water phase before mixing with cement (296.4 g). The amount of CC-CSH added was 3.6 g (in solid, counted as binder), 1.2% by total binder mass, and the amount of PCE was 0.075-0.10% binder mass (regulating flow to 180 ± 10 mm). In the mixing process, the total 300 g of binder was mixed with the water phase at a low speed for 2 min. Then, the mix was paused for 15 s and then resumed at a high speed for another 2 min.

The paste samples that were not involved in fresh-state characterizations were cured for the desired time at 20 °C in plastic vials, then demolded. The outer surface (1 mm thickness) of samples was removed using a Buehler Phoenix 4000 plate smoothing grinder to discard carbonated parts.
Mortar Strength Tests
Flexural and compressive strength of the mortar samples were also tested according to GB/T 17671-1999 [30]. Both compressive and flexural strength were assessed using an Aelikon AEC-201 test machine.
The strength development of cement mortars with the CC-CSH samples was tested. After this test, the dosage dependence of CC-CSH was measured using CC-CSH, which has the most balanced performance. Dosages of 0.3%, 0.6%, 1.2%, and 2.0% (vs. total binder mass) were assessed.
Microscopic Observations
Hardened paste samples were crushed, and slices of 3~5 × 0.5~1.5 mm were gathered for observation. SEM observation was conducted on the FEI Quanta 250 Scanning electron microscope (FEI Co., Hillsboro, OR, USA) at a magnification of 10,000×.
Isothermal Calorimetry (IC) Tests
In IC tests, about 14.00 g of the paste (prepared via 2.4.1) was quickly and accurately weighed into a plastic vial. The vial was sealed and placed in a TAM Air isothermal calorimeter to measure heat development over 36 h at 20.0 °C. Since the mixing and weighing processes were conducted outside the calorimeter, the early hydration of all samples was not measured for about 5 min from the starting time of mixing. Due to the limited number of testing slots, the pastes were prepared here with 1.2% CaCO3 and CC-CSH (in solid) replacement.
Characteristics of CC-CSH Particles
The mean size and free C-S-H content of CC-CSH samples are shown in Table 4. As the results suggest, for CC-CSH with a relatively low degree of C-S-H modification, most C-S-H was fixed on CaCO3 particles, and only a small fraction existed as free nanogels. For CC-CSH-20, most C-S-H was in free form, due to the depletion of available CaCO3 surfaces.
Taking out free C-S-H, the actual modification degrees of CC-CSH-10 and CC-CSH-20 were 7.73% and 9.38%, respectively. As shown in Table 4, the size of free nano C-S-H was much smaller than the CaCO3 particles, so the size of CC-CSH with 5% C-S-H modification was nearly the same as unmodified CaCO3. However, CC-CSH-10 and CC-CSH-20 were notably larger, which can be attributed to the adhesion effect of C-S-H [31] (as Figure 1 shows). The effect became more prominent with higher C-S-H contents, resulting in the agglomeration of a larger fraction of CC-CSH.

The SEM images of the CC-CSH samples are shown in Figure 2. As observed, the surfaces of unmodified CaCO3 were rather smooth, with varied size ranging from less than 1 µm to 5 µm. After modification with C-S-H, the surface of the calcium carbonate particles became rougher. The surface morphology of the samples showed resemblance to the C-S-H grown in slightly hydrated cement [32]. Moreover, the degree of C-S-H modification affected the morphology of the samples. With 5% C-S-H modification, the particles maintained a loosely stacked status. The particles adhered to each other and formed sheets as C-S-H content further increased, which is observed in the images of CC-CSH-10 and CC-CSH-20. The adhesion of CC-CSH with more C-S-H modification was in agreement with the increase in apparent size, as listed in Table 4.

EDS characterization was carried out with SEM imaging to verify whether CaCO3-C-S-H composite had formed and whether C-S-H deposited onto the CaCO3 surface. As its chemical nature indicated, the spectrum of CaCO3 showed a blend of the three elements: Ca, C, and O; no other elements were spotted. In CC-CSH, the growth of C-S-H nanogel on the CaCO3 surface was indicated by the presence of a Si signal. The intensity of the Si signal increased from 8% in CC-CSH-5 to 19% in CC-CSH-10, but remained stable in CC-CSH-20, indicating that the surface of CaCO3 had already been fully covered by C-S-H in CC-CSH-10.
EDS spectra confirmed C-S-H deposition onto the CaCO3 surface.
Effect of CC-CSH on Mortar Strength
The compressive strength data of CC-CSH-modified mortars are shown in Figure 3. As can be observed, 12 h strength of the mortars showed a prominent increase that was clearly sequenced by the amount of C-S-H modified, in which CC-CSH-20 showed the highest increase of 128%. However, once free C-S-H was removed, the strengthening effect of the remaining CC-CSH notably weakened, but the corresponding mortar strength was still higher than that of CC-CSH-5, which may be attributed to the higher surface coverage. The strength increase in plain CaCO 3 was insignificant at this age, which can be attributed to its unseeded surface. For mortars at 1 d, the strengthening effect of CC-CSH with higher C-S-H content was surpassed by CC-CSH-5; mortar with CC-CSH-5 admix had the highest strength increase of 27%. The better dispersity of CC-CSH-5 (as previous results suggest) may contribute to its better performance at this time point. Moreover, the effect of unmodified CaCO 3 began to intensify. At the age of 7 d, mortars with CaCO 3 and CC-CSH-5 admix had the best strength performance, while the strength enhancement of CC-CSH-10 and CC-CSH-20 continued to weaken. At 28 d, mortars with CaCO 3 and CC-CSH-5 addition were still slightly higher in apparent value, while mortars with the other two CC-CSH admixes showed indifferent strength variation. However, the strength deviation of the 28 d samples is small and within the range of error overall. The better effect of CC-CSH and unmodified CaCO 3 at 1-7 d may be due to the milder hydration acceleration effect and more adequate pace of hydration product build up, since excessively rapid growth of hydration products may lead to coarsening of the network [33]. Overall, the trend of compressive strength development showed notable differences among samples with different degrees of C-S-H modification, indicating the feasibility of tuning hydration acceleration efficiency of CaCO 3 by C-S-H modification.
As for flexural strength, mortars with CaCO3 and CC-CSH admix at 12 h and 1 d experienced similar enhancing effects as compressive strength. However, 7 d mortars with CC-CSH-20 addition showed slightly inferior flexural strength than controls, indicating a more brittle nature of these mortars.
This brittleness may also be caused by excessively fast hydration product formation in these samples [33], driven by the high C-S-H content of the two types of CC-CSH.
Considering the strength-increasing effect of CC-CSH at different ages, CC-CSH-5 showed the most balanced performance due to its considerable strength-enhancing effect at a very early age (12 h), though slightly weaker than CC-CSH of higher C-S-H content, and its good strength-enhancing effect at 1 d and 7 d, which is similar to CaCO 3 . It can be concluded that at this degree of C-S-H modification, CaCO 3 and C-S-H exhibit synergistic effects on mortar strength.
The dosage dependence of CC-CSH-5 was then tested to determine the optimal range, with the results demonstrated in Figure 4. As can be seen, at 12 h, the strength also showed a positive correlation with dosage, but in later ages, the strength enhancement in mortars with the highest dosage (2.0%) soon weakened, just like the CC-CSH with high C-S-H contents. Mortar with 2.0% CC-CSH dosage also exhibited brittleness, with flexural strength slightly dropping. The overall performances of 0.6% and 1.2% CC-CSH dosage were the most satisfactory, indicating an optimal range roughly between 0.6% and 1.2%. The strength of CC-CSH-5-modified mortars showed the typical dosage dependence of nanomaterials [7,34], where the effectiveness was impeded at high dosages.
Effect of CC-CSH on Cement Hydration
To determine the effect of different C-S-H contents in CC-CSH on mortar strength from a more comprehensive perspective, SEM images of paste samples were taken at early (12 h) and late (28 d) ages, which are shown in Figure 5. As the images suggest, there were notably more hydration products on pastes with CC-CSH, and hydration products on CC-CSH-20 samples appeared coarser, which was probably caused by particle aggregation and fast hydration, as mentioned above. For samples at 28 d, the split surfaces of control paste and paste with CC-CSH-5 were denser, while a more cracked structure with coarse particles appeared in paste with CC-CSH-20 admix. The difference in quantity and morphology of hydration products at 12 h, as well as late age structure, confirmed our suggestions in previous sections: the excessive acceleration effect of CC-CSH with high C-S-H (10%, 20%) content promoted the formation of coarse hydration products and coarse structures, which may exert adverse effects on late-age strength.
For a more quantitative investigation of the hydration acceleration effect of CC-CSH, the hydration heat of the samples over the initial 36 h was measured, with the results shown in Figure 6. As Figure 6a illustrates, there was a clear leftward shift of the main hydration peak to earlier ages in all samples with CC-CSH, but the main hydration peak of the sample with CaCO3 showed a slight rightward trend in the first few hours, which may be due to the retarding effect of the polycarboxylate dispersant added during its preparation as a reference.
For different CC-CSH samples, the main peaks in the curves showed clear dependence on the degree of C-S-H modification, and the main peak moved further leftward as the degree increased. This confirms the tuning effect of C-S-H modification on CaCO3. Moreover, the acceleration effect of CC-CSH-10 and CC-CSH-20 may be excessively high, so as to stiffen the microstructure and induce the formation of coarse hydration products, which has an adverse effect on later-age strengths. The earlier main peak also indicates an earlier deceleration stage, which may also contribute to the inferior 1 d and 7 d strength enhancement of CC-CSH-10 and CC-CSH-20, as compared to CC-CSH-5.
As Figure 6a illustrates, there was a clear leftward shift in the main hydration peak to earlier ages in all samples with CC-CSH, but the main hydration peak of the sample with CaCO3 showed a slight rightward trend in the first few hours, which may be due to the retarding effect from the polycarboxylate dispersant added during its preparation as a reference. For different CC-CSH samples, the main peaks in curves of CC-CSH showed clear dependence on the degree of C-S-H modification, and the main peak moved more leftward as the degree increased. This confirms the tuning effect of C-S-H modification on CaCO3. Moreover, the acceleration effect of CC-CSH-10 and CC-CSH-20 may be excessively high so as to stiffen microstructure and induce the formation of coarse hydration products, which has adverse effect on later-age strengths. The earlier main peak also To quantitatively evaluate the acceleration effect of CC-CSH, an acceleration coefficient A was calculated using the following equation based on the method of Luc Nicoleau and co-workers [35,36]: where Acc is the slope of the ascending section of the main peak of a CC-CSH-modified sample, and Acc ctrl is the slope of the ascending section of the main peak of the control sample. C (%) is the dosage of CC-CSH. According to the results, A of CC-CSH (1.57-1.77) at 1.2% was considerably higher than CaCO 3 at the same dose (1.19), which is due to the pre-seeding of C-S-H nanogels. In addition, A showed dependence on the degree of C-S-H modification, indicating the effect of C-S-H on tuning the properties and the hydration acceleration effect of CaCO 3 . The higher apparent A values of CC-CSH-10 and CC-CSH-20 were largely due to the free C-S-H in the dispersion. Once the free C-S-H was removed, a notable drop was observed. CC-CSH-20 without free C-S-H had a slightly lower A, which is clearly due to adhesion of the particles caused by excessive C-S-H. 
All IC test results were in agreement with previous results and offered further confirmation of our findings.
Conclusions
From the results above, several conclusions can be drawn: (1) C-S-H nanogels can be artificially grown on CaCO 3 particles through the seeding effect of CaCO 3 . The amount of nanogels on the CaCO 3 surface is restricted by the available surface; excessive nanogels exist in free form in solution. The maximum content of C-S-H on the surface is reached at a modification degree of 5-10% m (CaCO 3 ). (2) The C-S-H nanogels on the CaCO 3 surface alter its interfacial properties and exhibit a tuning effect on cement hydration. The effect strengthens with higher surface C-S-H coverage and is halted once surface coverage reaches the maximum. (3) Pre-seeding CaCO 3 with C-S-H can considerably enhance its strengthening effect on cement at early ages (<1 d). The enhancement is due to the better seeding effect on C-S-H. However, excessive C-S-H modification weakens CC-CSH's effect on mechanical properties at later ages (>1 d), which is caused by excessively fast hydration product growth and particle agglomeration. Considering strength at all ages, CC-CSH-5 showed the most balanced performance among all the CC-CSH samples, and it can be inferred that CaCO 3 and C-S-H exhibited synergistic effects at the modification degree of CC-CSH-5. According to the strength tests on dosage dependence, the optimal dosage of CC-CSH-5 is 0.6-1.2%.
From our findings, we can conclude that tuning the performance of nanomaterials in cementitious systems by building nanocomposites is viable. The effect of a nanomaterial can be tuned at a fixed dosage, which offers a novel route for the modification and performance tuning of cement-oriented nanomaterials. Still, some aspects of nanocomposite preparation and modification are worth noting: the degree of modification should be carefully controlled to achieve the maximal synergistic effect of the components, and this degree seems to be closely related to the interfacial properties of the components; in this study, it was the available surface of CaCO 3 that set the optimal degree. In addition, further work is essential for the practical application of this technique, as well as to confirm its competitiveness against other techniques. | 2022-09-30T15:26:11.829Z | 2022-09-28T00:00:00.000 | {
"year": 2022,
"sha1": "744ed3a639d719b800636a78a7c92390d657d34a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/19/6726/pdf?version=1664342034",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fae434cbafff952d1f46c805c9282ba8523a0c45",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
264363190 | pes2o/s2orc | v3-fos-license | Plasma Properties in the Earth's Magnetosheath Near the Subsolar Magnetopause: Implications for Geocoronal Density Estimates
Combined in situ ion measurements and remote sensing of energetic neutral atoms are used to determine the geocoronal Hydrogen density at large (∼10 RE) distances from the Earth. This method for determining the geocoronal density requires global magnetospheric modeling. Observations in the Earth's subsolar magnetosheath from the Magnetospheric Multiscale mission are used to determine the accuracy of using global models to predict the geocoronal density. On average, gas dynamic and magnetohydrodynamic (MHD) models and observations are in reasonable agreement, with differences <25%. In addition, the MHD model subsolar magnetopause is about 0.5 RE sunward of the observed location. However, variations around averages are large (up to a factor of 2), indicating that global models introduce relatively large uncertainties in geocoronal density estimates. Finally, the critical ion flux in the Interstellar Boundary Explorer IBEX‐Hi energy range is often minimally affected by fluctuations of a factor of 2 in the density.
• Magnetohydrodynamic modeling tends to overestimate the subsolar magnetopause standoff distance and underestimate the magnetosheath density • The subsolar magnetopause moves constantly, even under quasi-steady solar wind pressure, with displacements at least as large as ∼1 R E • Models for magnetosheath line-of-sight ion fluxes introduce uncertainties of up to a factor of 2 in the geocoronal density estimate at 10 R E
Supporting Information:
Supporting Information may be found in the online version of this article.
Introduction to the Earth's Geocorona at Large Distances From the Planet
The Earth's hydrogen geocorona has been observed as far away as the Moon (Baliukin et al., 2019). This extensive, tenuous neutral atmosphere has important implications for magnetospheric dynamics. For example, charge exchange with the geocorona is a major contributor to ring current plasma loss in the magnetosphere (e.g., Kistler et al., 1989).
Hydrogen atoms in the geocorona scatter solar Lyman-Alpha, and line-of-sight measurements of these Far Ultraviolet (FUV) emissions are used to construct and validate geocoronal models (e.g., Bailey & Gruntman, 2013; Østgaard et al., 2003; Rairden et al., 1986; Zoennchen et al., 2011, 2013), in particular the geocoronal density out to about 8 R E from the Earth (Zoennchen et al., 2017). At radial distances less than about 8 R E , this technique provides the most accurate determination of the geocoronal density. Beyond 8 R E , Lyman-Alpha emissions are weak, signals and backgrounds are comparable, and the resulting models have large uncertainties.
In contrast, ENA imaging of this region has substantially lower backgrounds. Because ENA emission is the product of charge exchange between plasma ions and the geocorona, geocoronal density is derived from ENA imaging with knowledge (through models and/or in situ measurements) of the source ion population. Equation 1 relates the line-of-sight integrated ion flux (J Ion (E,l)) along the l direction to the ENA flux (J ENA (E)) at a given energy through the energy-dependent charge exchange cross section σ(E) and the geocoronal density n H (l):
J ENA (E) = σ(E) ∫ J Ion (E,l) n H (l) dl (1)
Determining the geocoronal density from Equation 1 requires knowledge of the ion flux along the line-of-sight as well as how the geocoronal density varies with radial distance from the Earth. Because both FUV and ENA line-of-sight emissions are optically thin, modeling must be applied for proper interpretation of the measurements. For ENA imaging, a single in situ measurement along an ENA line-of-sight provides crucial context for derivation of the geocoronal density.
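The line-of-sight relation in Equation 1 can be sketched numerically as a discretized sum. In the following minimal sketch, the path length, density profile, ion flux, and cross-section value are illustrative placeholders (assumptions for demonstration), not numbers from this study:

```python
# Minimal numerical sketch of Equation 1 (optically thin line-of-sight
# integral): J_ENA(E) = sigma(E) * integral of n_H(l) * J_Ion(E, l) dl.
# All values below are illustrative placeholders, NOT values from the paper.

def ena_flux(sigma, n_h, j_ion, dl):
    """Discretize the line-of-sight integral as a Riemann sum."""
    return sigma * sum(n * j for n, j in zip(n_h, j_ion)) * dl

dl = 1.0e8                # path element length, cm (assumed)
n_h = [12.0] * 10         # geocoronal H density along the path, cm^-3 (assumed)
j_ion = [1.0e4] * 10      # ion flux along the path (assumed units)
sigma = 1.8e-15           # ~keV H+ + H charge-exchange cross section, cm^2 (assumed)

j_ena = ena_flux(sigma, n_h, j_ion, dl)

# If n_H is roughly constant over the emitting region, Equation 1
# inverts directly for the geocoronal density:
n_h_est = j_ena / (sigma * sum(j_ion) * dl)
print(n_h_est)   # recovers 12.0
```

The inversion in the last lines mirrors the procedure the paper describes: an observed ENA flux, combined with a modeled line-of-sight ion flux, yields the geocoronal density estimate.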
A modified version of Equation 1 was solved for n H at the subsolar magnetopause (i.e., at a geocentric distance of 10 R E ) using simultaneous ENA and in situ ion observations (Fuselier et al., 2010), yielding n H (r = 10 R E ) = ∼5-10 cm −3 . Recently, Equation 1 was solved directly using simultaneous ENA and in situ observations combined with a gasdynamic model of the magnetosheath density (Fuselier et al., 2020), yielding n H (r = 10 R E ) = 11-17 ± 3 cm −3 .
This recent determination of n H was challenged (Sibeck et al., 2021). Using predictions from a global magnetohydrodynamic (MHD) simulation that incorporated the observed upstream solar wind as a boundary condition, it was argued that the neutral hydrogen geocoronal density was a factor of 4-5 times higher than that reported in Fuselier et al. (2020) (see Table S1 in Supporting Information S1). Half of this discrepancy between the two estimates of n H results from a factor of 2 discrepancy between the observed ion densities at the subsolar magnetopause reported by Fuselier et al. (2020) and the predicted ion densities at that location from an MHD model reported by Sibeck et al. (2021). Since any determination of n H (l) from Equation 1 requires a global model of the magnetosheath, it is important to determine how well these models reproduce subsolar magnetopause observations. This paper investigates some of the observed properties of the subsolar magnetopause that are important for determining n H (l) and compares these observed properties with gasdynamic and MHD model predictions.
Observations
Plasma and magnetic field observations are from the MMS spacecraft (Burch et al., 2016), and remote ENA observations are from the Interstellar Boundary Explorer (IBEX) mission (McComas et al., 2009). Plasma moments and ion fluxes are from the MMS Fast Plasma Investigation (Pollock et al., 2016) and the Hot Plasma Composition Analyzer (Young et al., 2016), and magnetic field observations are from the Magnetometers (Russell et al., 2016). Since properties of the subsolar magnetosheath on scales of 1,000s of km, that is, much larger than the ∼10s of km spacecraft separation, are considered here, data from a single spacecraft, namely MMS3, are sufficient. Upstream conditions are determined from solar wind data convected to the subsolar magnetopause.
The first ∼2 years of MMS observations (September 2015-March 2017) were surveyed for magnetopause encounters within 1 R E of the subsolar point. The MMS orbit was designed to pass near the subsolar point for these two sweeps of the dayside (see Fuselier et al., 2016). After 2017, the orbit evolved and the spacecraft rarely crossed the magnetopause within 1 R E of the subsolar point.
Nineteen events were identified, with multiple complete inbound and outbound magnetopause crossings in an event (see Table S2 in Supporting Information S1). Crossings within minutes of each other were not ruled out, and event #19 includes the magnetopause encounter in Fuselier et al. (2020) and Sibeck et al. (2021).
Figure 1 shows a pair of crossings on 18 December 2015 (event #9). Top to bottom are the ion and electron omnidirectional fluxes, ion and electron densities, ion velocities, solar wind dynamic pressures convected to the magnetopause from an upstream solar wind monitor, and spacecraft distances to the magnetopause computed from the bulk ion velocity in the magnetosphere (see below). MMS3 was in the subsolar magnetosheath at the beginning of the interval. This region is characterized by high fluxes of several hundred eV to ∼1 keV ions and several tens of eV electrons, high densities, and low bulk flow velocities. The spacecraft crossed the magnetopause at 09:32 UT at a standoff distance of 10.8 R E . It remained in the magnetosphere for almost 8 min and simultaneously observed two populations of ions and electrons at high and low energies, before crossing the magnetopause at ∼09:40 UT and returning to the subsolar magnetosheath for the remainder of the time interval.
The magnetopause crossings were likely the result of an ∼10% decrease in the solar wind dynamic pressure before 09:32 UT and a similar increase in pressure before 09:40 UT. In addition to this small change in the dynamic pressure, the IMF clock angle changed from 170° at 09:32 UT to 220° at 09:40 UT, with a fairly large change in the cone angle as well (not shown). In contrast to these relatively small changes in upstream conditions, ion densities in the subsolar magnetosheath at the inbound crossing were almost a factor of two lower than densities at the outbound crossing. This large change in magnetosheath density is not reflected in the upstream solar wind conditions convected to the magnetopause.
In the magnetosphere, changes in the energy of the ion population below 100 eV in Figure 1 are related to changes in the bulk convection velocity of the plasma, and these velocity changes are in turn related to magnetopause motion (see Sauvaud et al., 2001; Song et al., 2019). Integrating the convection velocity component normal to the magnetopause (i.e., Vx in Figure 1) over time yields changes in the distance of the magnetopause from the spacecraft, which are shown in the bottom panel of Figure 1 over the 8 min that the spacecraft was in the magnetosphere. The magnetopause remained relatively close to the spacecraft for the first few minutes, but then moved sunward from 09:35 to 09:38 UT as Vx increased to ∼100 km/s. The magnetopause was nearly 1 R E sunward of the spacecraft at 09:38 UT, before moving earthward rapidly and passing over the spacecraft at 09:40 UT. Thus, the magnetopause position changed by about 1 R E without any significant change in the solar wind dynamic pressure. This change is likely related to the rotation of the IMF because the IMF, in particular the cone angle, influences the shape of the magnetopause (see, e.g., Grygorov et al., 2017; Sibeck et al., 2000).
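The integration step described above amounts to a running time integral of the boundary-normal velocity. A minimal sketch follows; the velocity profile is invented for illustration and is not the MMS measurement:

```python
R_E_KM = 6371.0  # Earth radius, km

def displacement_re(vx_samples_km_s, dt_s):
    """Cumulative magnetopause-spacecraft distance change (in R_E)
    obtained by integrating the boundary-normal velocity component
    (Vx) over time, as in the bottom panel of Figure 1."""
    total, out = 0.0, []
    for vx in vx_samples_km_s:
        total += vx * dt_s
        out.append(total / R_E_KM)
    return out

# Illustrative profile (NOT the MMS data): Vx ramping from 0 to
# ~100 km/s sunward over ~3 min, sampled at the 4.5 s plasma cadence.
dt_s = 4.5
vx = [100.0 * i / 39 for i in range(40)]
disp = displacement_re(vx, dt_s)
print(round(disp[-1], 2))   # ~1.4 R_E of sunward magnetopause motion
```

A sustained ∼100 km/s normal velocity therefore moves the boundary by of order 1 R E in only a few minutes, consistent with the displacement inferred for event #9.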
Comparison With Gas-Dynamic and MHD Models
In the gas-dynamic model of the magnetosheath (Spreiter et al., 1966), the plasma density in the subsolar magnetosheath is 4.43 times the upstream solar wind density (with no uncertainty assumed for this model parameter).
For event #9 in Figure 1, the upstream solar wind density was 4.91 ± 0.05 cm −3 and 5.16 ± 0.07 cm −3 for the inbound and outbound magnetopause crossings, respectively.Therefore, the gas-dynamic model predicts subsolar magnetosheath densities of 21.8 cm −3 and 22.9 cm −3 for the inbound and outbound crossings, respectively.
Averaging the observed magnetosheath density in Figure 1 over 1 min from 09:31 to 09:32 UT for the inbound crossing and from 09:40 to 09:41 UT for the outbound crossing, the observed average subsolar magnetopause densities were 19.2 ± 2.6 cm −3 and 36 ± 3.8 cm −3 , respectively (the uncertainties are the standard deviation of the mean of the 4.5 s density measurements from FPI). Thus, the gas-dynamic model predicts a density approximately the same as that observed by MMS at the inbound crossing, while the observed density at the outbound crossing is about 60% higher than predicted.
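The gas-dynamic prediction and the observed/predicted comparison reduce to simple arithmetic, sketched here with the event #9 numbers quoted in the text:

```python
# Spreiter et al. (1966) gas-dynamic compression factor at the
# subsolar point (no uncertainty assumed for this model parameter).
COMPRESSION = 4.43

def gasdynamic_sheath_density(n_sw):
    """Predicted subsolar magnetosheath density from the upstream
    solar wind density (both in cm^-3)."""
    return COMPRESSION * n_sw

# Event #9 values from the text.
n_pred_in = gasdynamic_sheath_density(4.91)   # -> 21.8 cm^-3
n_pred_out = gasdynamic_sheath_density(5.16)  # -> 22.9 cm^-3
ratio_in = 19.2 / n_pred_in                   # observed/predicted, ~0.88
ratio_out = 36.0 / n_pred_out                 # observed/predicted, ~1.57
print(round(n_pred_in, 1), round(n_pred_out, 1))
```

The observed/predicted ratios computed this way for all 27 crossings are what Figure 2 (top panel) displays.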
For event #9, the subsolar magnetopause density was also predicted from the Open Geospace General Circulation Model (OpenGGCM) (e.g., Raeder et al., 1996), using the following (static) parameters as input: Vsw = −400 km/s, Nsw = 5 cm −3 , IMF(GSM) = (0, 0.5, −4) nT, dipole tilt = −25°. The density profile along the Earth-Sun line is shown in Figure S1 in Supporting Information S1. The subsolar magnetopause location is defined here as the maximum in the second derivative of this density profile. For event #9, the magnetopause location was 10.8 R E and the density was 15.8 cm −3 . Thus, the subsolar magnetopause standoff distance in the MHD simulation and in the observations are in agreement. However, the MHD density at the subsolar magnetopause is lower than the observed density at the inbound crossing and substantially lower than the observed density at the outbound crossing.
Subsolar densities from the gas-dynamic model and densities and locations from MHD simulations were also derived for the other 18 events in Table S1 in Supporting Information S1. Figure 2 compares observed and predicted subsolar densities for the 19 events using the ratio of observed to predicted values. The top panel shows observed/gasdynamic density ratios and the bottom panel shows observed/MHD density ratios. There is considerable scatter around a ratio of 1, with no apparent trend with northward versus southward IMF or with inbound versus outbound crossings. Averaging over all crossings in the 19 events, the gasdynamic model tends to over-predict the subsolar magnetosheath density, with a weighted mean of 0.77 ± 0.01 and no difference between averages of inbound-only and outbound-only ratios. The MHD model tends to under-predict the subsolar density, with a weighted mean of 1.09 ± 0.02 and weighted means of 1.09 ± 0.03 and 1.14 ± 0.03 for outbound and inbound ratios, respectively. Because of the substantial scatter of the ratios, an important metric is the median value. The median observed/gasdynamic ratio is 0.86 and the median observed/MHD ratio is 1.22. Thus, the gasdynamic model over-estimates the density by 14% and the MHD model tends to under-estimate the subsolar magnetopause density by 22%.
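A sketch of the weighted-mean and median statistics used above. The text does not specify its weighting scheme, so inverse-variance weighting is assumed here, and the input ratios and uncertainties are invented for illustration (not the paper's 27 crossings):

```python
def inverse_variance_mean(values, sigmas):
    """Weighted mean with inverse-variance weights. NOTE: the paper
    does not state its weighting scheme; inverse-variance weighting
    is a common choice and is an assumption here."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, err

def median(values):
    """Middle value of the sorted sample (mean of the middle pair
    for an even-length sample)."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Illustrative observed/predicted ratios and their uncertainties.
ratios = [0.9, 1.2, 1.4, 1.0, 1.3]
sigmas = [0.1, 0.15, 0.2, 0.1, 0.15]
m, e = inverse_variance_mean(ratios, sigmas)
med = median(ratios)   # the median is less sensitive to the scatter
print(round(m, 2), med)
```

The median's robustness to outliers is why it is the preferred metric when, as here, the ratios scatter by up to a factor of 2.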
Figure 3 shows the difference, in R E , between the observed and MHD-predicted subsolar magnetopause locations. Almost all values are less than zero, with a median of −0.5 R E . Thus, for ∼75% of the events, the MHD model over-estimates the magnetopause distance from the Earth by about 0.5 R E .
Discussion
Nineteen events, including a total of 27 subsolar magnetopause crossings, were surveyed to determine trends in the observed and modeled densities and subsolar magnetopause locations. Figures 2 and 3 and Table S1 in Supporting Information S1 show that the modeled densities and magnetopause locations are not grossly dissimilar from the observations. However, densities derived from global MHD have considerable scatter and differ by up to about a factor of 2 from observed densities. MHD magnetopause locations show similar, significant scatter and deviate from observed locations by up to ∼1 R E . Considering the medians in Figures 2 and 3, gasdynamic models tend to over-estimate the subsolar magnetopause density by ∼14%, while MHD models tend to under-estimate the density by ∼22%. MHD models tend to predict subsolar magnetopause locations that are ∼0.5 R E farther from the Earth (i.e., about 5% of the typical standoff distance of 10 R E ) than the observed locations. About half of this overestimate may be explained by the fact that MHD models are single fluid, while the actual solar wind has a higher dynamic pressure because it contains about 4% He 2+ .
The systematic over-estimate of the density using the gasdynamic model results in an under-estimate of the geocoronal density. In Equation 1, an over-estimate of the density of 14% translates into an over-estimate of the ion flux and an under-estimate of the geocoronal density of similar magnitude.
The systematic under-estimate of the density and over-estimate of the magnetopause location in the MHD model have important consequences for calculating the geocoronal density using Equation 1. Both systematic effects result in an over-estimate of the geocoronal density. The lower density translates directly into a lower ion flux, resulting in an over-estimate of about 22% using an MHD model with the upstream solar wind as the boundary condition. The over-estimate in the calculated geocoronal density that results from the larger magnetosphere in the MHD model is more difficult to quantify. At energies ∼1 keV, the ion flux in Equation 1 primarily comes from the magnetosheath, and the contribution from the magnetosphere is much lower (see, e.g., the differences in the magnetosheath and magnetosphere ion fluxes at 1 keV in Figure 1). Therefore, increasing the magnetopause standoff distance by 0.5 R E results in a lower ion flux; however, the magnitude of the reduction depends on the field-of-view and resolution of the ENA camera and the pixel resolution of the MHD simulation. For the simulation in Figure S1 in Supporting Information S1 and the data derived from the IBEX ENA imager with a FOV that covers essentially the entire magnetosheath (see, e.g., Fuselier et al., 2020; Starkey et al., 2022), the reduction in the ion flux caused by the sunward shift of the entire subsolar magnetosheath density profile by 0.5 R E is roughly ∼10%-20%. Thus, combined with the under-estimate of the density, the MHD model over-estimates the geocoronal density by ∼30%-40%.
The scatter in the data in Figures 2 and 3 plays an important role in the estimate of the uncertainty in the geocoronal density using Equation 1. This scatter is about a factor of two larger than the systematic over- and under-estimates of the parameters illustrated in these figures. Therefore, using the gasdynamic and MHD models with the upstream solar wind as a boundary condition results in an uncertainty of about a factor of 2 in the determination of the geocoronal density using Equation 1.
The systematic differences and uncertainties are reduced if the MHD or gasdynamic results are normalized by in situ observations at the subsolar magnetopause. However, ENA observations typically require integration over many minutes, and the question then becomes which in situ observations to use. For event #9 in Figure 1, the magnetosheath densities at the inbound and outbound magnetopause crossings differ by more than 50%, and for event #19 (including the outbound crossing used in Fuselier et al. (2020)) the densities differ by a factor of 2.
Understanding which observations to use requires reconsidering the terms in Equation 1, in particular J Ion (E,l). With J Ion (E,l) in mind, Figure 4 provides at least a partial answer to the question of which observations to use. Plotted are the phase space densities in the plasma rest frame as a function of velocity for the inbound and outbound crossings for events #6, #9, and #19. Identified in the panels are the center thermal velocities for energy steps 2 through 6 of the IBEX-Hi ENA camera (Funsten et al., 2009). These log-spaced energies span center energies from 0.7 to 4.3 keV. For events #9 and #19, the largest changes in the plasma distributions from inbound to outbound occur below 400 km/s, in the core of the magnetosheath distribution. At higher velocities, including those corresponding to the IBEX energy steps, there is little difference between the plasma phase space densities at the inbound and outbound crossings. Thus, based on these two events, the answer to which observations to use is that, for the IBEX-Hi camera energies, it does not matter. Event #6 shows that this is not always the best answer. For this event, it is probably better to use an average of the inbound and outbound phase space densities.
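The correspondence between the IBEX-Hi center energies and the velocity axis of Figure 4 follows from v = sqrt(2E/m) for protons. A quick check (step energies log-spaced across the 0.7-4.3 keV range quoted in the text; the intermediate values are interpolated, an assumption) confirms that energy step 2 sits just below the ∼400 km/s core boundary:

```python
import math

M_P = 1.672621e-27   # proton mass, kg
EV = 1.602177e-19    # J per eV

def proton_speed_km_s(energy_kev):
    """Speed of a proton with the given kinetic energy, v = sqrt(2E/m)."""
    return math.sqrt(2.0 * energy_kev * 1e3 * EV / M_P) / 1e3

# IBEX-Hi steps 2-6 span 0.7-4.3 keV and are log-spaced (per the text).
steps_kev = [0.7 * (4.3 / 0.7) ** (i / 4) for i in range(5)]
speeds = [proton_speed_km_s(e) for e in steps_kev]
print(round(speeds[0]))   # ~366 km/s: just below the ~400 km/s core boundary
print(round(speeds[-1]))  # ~908 km/s
```

This is why the inbound/outbound density differences, confined to the sub-400 km/s core, barely affect the ion flux sampled at the IBEX-Hi energies.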
The higher energies are not as affected because this part of the magnetosheath distribution originates at the bow shock. At the shock, a portion of the solar wind distribution reflects off the shock front, gains energy as it propagates back into the upstream region, and then returns, crosses the shock, and enters the downstream region (see Gosling & Robson, 1985). The local effects at the subsolar magnetopause are in the core of the distribution and not in the shock-generated higher-velocity parts of the distribution (see also Starkey et al., 2022). For the IBEX-Hi energies, the break between the core and the reflected/transmitted population occurs between energy steps 2 and 3.
In conclusion, in determining the neutral geocoronal density at ∼10 R E from the Earth, a global model of the bow shock, magnetosheath, and magnetosphere is necessary to link line-of-sight ENA measurements with the local in situ ion measurements. The two global models investigated here provide this link up to a point. There are some systematic differences between observations and the models; however, the overwhelmingly important takeaway from the comparison between models and observations in Figures 2 through 4 is that the use of a predictive model introduces an inherent uncertainty of at least a factor of 2 in the final determination of the geocoronal density. "The Earth's exosphere and its response to space weather." Support for this study comes from the IBEX mission as a part of NASA's Explorer program under Grant 80NSSC20K0719 and from the NASA Goddard Space Flight Center MMS prime contract NNG04EB99C. ChatGPT was used to draft the plain language summary from the abstract.
Figure 1 .
Figure 1. (top to bottom) Ion and electron energy-time spectrograms, ion and electron densities, ion velocity components (in GSE), solar wind dynamic pressures convected to the magnetopause, and distances from MMS to the magnetopause when the spacecraft was in the magnetosphere. The ion density at the inbound magnetopause crossing (09:32 UT, left blue arrow) is almost a factor of 2 lower than that at the outbound crossing (09:39:30 UT, right blue arrow). Both magnetopause crossings probably occurred because of the relatively small changes in the solar wind dynamic pressure. The bottom panel shows that, while the spacecraft was in the magnetosphere, the magnetopause moved almost 1 R E sunward by 09:38 UT, before moving rapidly earthward and crossing over the spacecraft at 09:39:30 UT. These large changes in the magnetopause location are not reflected in the changes in the solar wind dynamic pressure.
Figure 2 .
Figure 2. Observed/predicted subsolar density for the gasdynamic (top panel) and magnetohydrodynamic (MHD) (bottom panel) models. The gasdynamic model tends to predict higher densities at the subsolar magnetopause than observed, while the MHD model tends to predict lower densities than observed.
Figure 3 .
Figure 3. Observed minus predicted subsolar magnetopause standoff distance from the Earth using observations and a magnetohydrodynamic (MHD) model. For most events and magnetopause crossings, the MHD model tends to predict a magnetopause that is farther from the Earth than observed.
Figure 4 .
Figure 4. Phase space density versus velocity in the plasma rest frame for three pairs of inbound and outbound magnetopause crossings. For events #9 and #19, the total density is much higher for the outbound crossings (red circles) than for the inbound crossings (blue squares). Despite this density difference, the phase space densities for the two types of crossings are similar at energies above ∼0.7 keV (IBEX-Hi energy step 2). Event #6 shows the opposite at energies >1 keV (IBEX-Hi energy step 3). | 2023-10-21T15:20:49.257Z | 2023-10-19T00:00:00.000 | {
"year": 2023,
"sha1": "d425f628c60f07529228c3816db4a0972c1f163f",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2023GL105553",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "6d6e9a0da6d9aefdeb38223d892d56c055956630",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": []
} |
250328121 | pes2o/s2orc | v3-fos-license | Intermittent endoleak via an aneurysm–iliac venous fistula after endovascular aneurysm repair
We have reported the rare case of an intermittent endoleak via an aneurysm–venous fistula (AVF). An 89-year-old woman had experienced postoperative sac expansion 6 years after she had undergone endovascular aneurysm repair. During aneurysmorrhaphy, we detected a small AVF, which was the source of the endoleak responsible for the aneurysmal sac expansion. This AVF had a check valve-like mechanism that allowed the inflow of blood from the iliac vein to the sac when the venous pressure exceeded the endotension. Our case has demonstrated the occurrence of an AVF after endovascular aneurysm repair that had resulted in an endoleak that was invisible on imaging studies and the presence of endotension.
An endoleak (EL) after endovascular aneurysm repair (EVAR) has been considered one of the main mechanisms responsible for the increase in the pressure inside the aneurysm. 1 Depending on wall compliance, the aneurysmal sac can expand to equilibrate with the retroperitoneal pressure. Current research has focused on arterial ELs, including type II ELs. 2 However, few studies have reported the occurrence of an EL from a venous source as a cause of postoperative sac expansion. A few studies have reported the creation of a sac–caval fistula that mitigated the increase in sac pressure and postoperative expansion. 3 In the present report, we have described a rare type of intermittent EL via an aneurysm–right common iliac venous fistula, which could not be detected by any preoperative imaging modalities. The patient provided written informed consent for the report of her case details and imaging studies.
CASE REPORT
An 83-year-old woman had presented with a 50-mm solitary aneurysm of the right common iliac artery. Despite her relatively advanced age at the index procedure, her preoperative evaluation revealed that she was otherwise healthy. The patient preferred endovascular treatment over an open procedure for the aneurysm and underwent EVAR using the Endurant II stent graft system (Medtronic Endovascular, Santa Rosa, CA). The right hypogastric artery was embolized to obtain a sufficient distal sealing zone in the right external iliac artery. Neither an EL nor an aneurysm–venous fistula (AVF) was detected during pre- and periprocedural angiography. Postoperative sac expansion was observed, although contrast-enhanced computed tomography (CT) in the early and delayed phases (Fig 1) and duplex ultrasound imaging after EVAR could not detect the EL. Six years after the EVAR, when the maximal diameter of the sac had exceeded 80 mm, aneurysmorrhaphy was indicated to prevent subsequent rupture. Before open treatment, arterial angiography was performed; however, no form of EL could be identified.
With the patient under general anesthesia, aneurysmorrhaphy was performed using a midline approach. The sac was filled with fresh thrombus and unclotted blood. Exploration revealed a dime-size wall defect in the right inferoposterior aspect of the aneurysm. The anterior wall of the right common iliac vein (CIV) was partially visualized. Venous blood upwelled under the mound around the ulcer-like lesion, where a small AVF was observed. Blood egress was observed during the inspiratory phase (Fig 2). The low pressure on the mound over the AVF was sufficient for control. Because adhesion of the aneurysm wall was too severe for complete exposure of the right CIV, the fistula was closed with 4-0 Prolene suture from inside the aneurysm. The endograft was completely enfolded using the remnant sac and the peritoneum to prevent direct contact with the intestine. The postoperative course was uneventful.
Two-phase contrast-enhanced CT on the fifth postoperative day did not detect any EL. At 3 months after aneurysmorrhaphy, noncontrast-enhanced CT did not detect postoperative sac reexpansion. Duplex ultrasound could not detect an EL during any respiratory phases (Fig 3).
DISCUSSION
EL is a risk factor for postoperative sac expansion. 1 Several studies have reported type II ELs from the inferior mesenteric and lumbar arteries. 4 In the present patient, no arterial ELs were detected before or during the procedure. A rare form of EL via an AVF, with a check valve-like mechanism, had served as a source of blood accumulation within the aneurysmal sac.
Aneurysms can erode the walls of the contacting tissues and organs, resulting in enteric or caval fistulas. 5 Depending on the amount of shunt flow, caval fistulas can be recognized as symptomatic right heart failure, complicated bleeding during open surgery, or early phase caval or iliac venous contrast filling on aortography during EVAR. 6 The AVF in our patient was not observed during the initial EVAR. Before the secondary procedure, a postoperative ultrasound study was performed to identify the common arterial sources of ELs but not to investigate posture-dependent ELs. 7 The AVF demonstrated a check valve-like mechanism directing the venous blood into the sac. The peak CIV pressure occurred during the inspiratory phase of mechanical ventilation. 8 Because bleeding was observed in the inspiratory phase, the valve threshold (ie, the difference between the intrasaccular pressure and the pressure of the inferior vena cava [IVC]) was lower than the difference between the peak pressure of the IVC and the barometric pressure. Under mechanical ventilation, assuming that the IVC pressure has been equilibrated to the peritoneal pressure, the peak pressure of the IVC will be approximately equal to the peak peritoneal pressure of ~10 mm Hg. 9 Given that the sac was open to barometric pressure, the pressure threshold of the valve would have been <10 mm Hg. If the AVF had been open continuously, it might have resulted in the formation of thromboembolism after dislodgment of the aneurysmal sac debris. No evidence of pulmonary embolism was identified in our patient. This finding also suggested the existence of a check valve-like mechanism.
Our patient's EL had caused sac expansion. The average sac pressure in the absence of an arterial EL will be 20 to 30 mm Hg, 10 greater than the CIV pressure in the recumbent position. 11 Under these circumstances, this rare form of EL could not have been detected because the valve was closed. However, in a standing position, the hydrostatic CIV pressure will increase to ~31 mm Hg. 12 The CIV pressure might further increase due to certain physiologic behavior. The Valsalva maneuver can increase the CIV pressure to ~74 mm Hg. 11 A previous study showed that the sac pressure without an EL present will correlate positively with the systemic pressure. 13 At normal systemic pressure (100-150 mm Hg), the sac pressure will be 20 to 30 mm Hg; thus, the pressure conduction in the sac will not be >20% of the systemic pressure. If the Valsalva maneuver increases the systemic pressure to ≤200 mm Hg, the sac pressure will be no more than approximately 40 mm Hg. Therefore, the pressure difference between the CIV and the sac must have transiently exceeded this threshold. An acute increase in CIV pressure resulted in an EL from the vein and increased intrasaccular pressure. After valve closure, the sac expanded until the pressure had reached equilibrium with its surroundings (Fig 4). As explained, an AVF might serve, not only as a depressor, but also as a pressor. 3 Even retrospectively, the EL of our patient was difficult to visualize. First, the delicate pressure balance between the aneurysm and CIV must be reproduced. In addition, the imaging modality must be able to detect the small volume of blood inflow from the CIV to the aneurysm. Contrast-enhanced duplex ultrasound or cavography might be a reasonable option for visualization. Ultrasound has the advantage of modulating the pressure balance; however, a small amount of EL might be challenging to detect, even with contrast material.
Cavography has the advantage of detecting a small EL if the contrast agent has been injected from inside the CIV and close to the AVF. However, difficulties can be encountered when reproducing the pressure balance owing to positional limitations. If the EL could be detected preoperatively, liquid embolization of the aneurysm, using n-butyl cyanoacrylate (Histoacryl; B. Braun, AG, Melsungen, Germany) or an ethylene-vinyl alcohol copolymer liquid embolic system (Onyx; Covidien, Irvine, CA) might be able to close the valve permanently. The deployment of a covered stent inside the CIV could be another endovascular option. However, liquid embolization requires invasive and complicated procedures. In addition, the iliac venous deployment of stent grafts appears to have a non-negligible risk of obstruction and migration.
CONCLUSIONS
We have described a post-EVAR AVF that served as an intermittent EL. Such an AVF can increase the endotension via a check valve-like mechanism. In addition, an intermittent posture-dependent EL can cause postoperative sac expansion. Although difficult to detect using preoperative imaging studies, postoperative AVF formation can serve as a source of occult EL and endotension.
Polymers in Two-Dimensional Bacterial Turbulence
We experimentally investigate the effects of polymer additives on bacterial turbulence in two-dimensional (2D) films of swarming Serratia marcescens. We find that even minute amounts (≤ 20 ppm) of polymers can suppress velocity fluctuations and increase the size and lifetime of large-scale coherent flow structures. In addition, we report an upscale transfer of enstrophy and energy using the recently developed filtering techniques. Unlike in classical 2D turbulence, both enstrophy and energy fluxes move primarily towards large scales in bacterial turbulence; such fluxes are greatly modified by polymer additives.
Microorganisms often live in fluid environments where (bio)polymers are present [24]. For instance, bacteria can secrete slime to reduce friction while swarming across a solid surface [25], and produce protective exopolymeric matrix during the formation of biofilms [26]. How the presence of polymer molecules in the fluid media affects the swimming behavior of single microorganisms has been investigated for the past decade or so; both enhancement [27][28][29][30] and hindrance [31][32][33][34] in swimming speed have been found depending on the often nonlinear interaction between the swimmer kinematics, velocity fields, and fluid rheological properties.
Less explored, however, are the effects of polymers on the collective behavior of swimming microorganisms. A numerical study on the collective dynamics of rod-like swimmers shows that fluid elasticity can suppress velocity fluctuations and break down large-scale flow structures [35]. Simulations based on mean field theory suggest that fluid elasticity can mediate hydrodynamic interactions and lead to larger coordinated structures [36]. Recently, large oscillatory vortices are found in bacterial suspensions inside droplets containing viscoelastic fluids (DNA suspensions); the observed spatial-temporal order is found for fluids with sufficiently high levels of elasticity [37]. Despite recent advances, there is still a dearth of investigations on the effects of polymers on the collective motion of microswimmers, particularly in the (ultra) dilute regime where polymer molecules have a relatively minor effect on the bulk fluid properties. As a result, our understanding of the collective behaviors of living organisms in polymeric fluids remains incomplete.
In this manuscript, we experimentally investigate the effects of polymer additives on the collective dynamics of swarming bacteria in quasi-2D liquid films. Our results show that even minute amounts of polymers (≤ 20 ppm) can significantly enhance bacterial collective motion and promote large-scale coherence. Velocimetry data show that the size and lifetime of the flow structures are nearly doubled in the presence of polymers, and velocity fluctuations are suppressed. Energy spectra show a power law of −5/3, reminiscent of the inverse energy cascade scaling in 2D Newtonian turbulence. Surprisingly, our calculations show that the primary directions of both energy and enstrophy fluxes are inverse (upscale) in bacterial turbulence. The inverse enstrophy flux increases substantially with the addition of polymers, which is a potential mechanism for the increase in large-scale coherence.
Swarming experiments are performed on ATCC 274 strain of Serratia marcescens, a rod-shaped bacterium that is on average 2 µm long and 0.8 µm in diameter. When cultivated on a soft agar plate (see Supplemental Material [38] for details), the bacteria differentiate into swarmer cells with additional flagella (10 to 100) and elongated bodies of ≥ 5 µm [39]. Polymeric solutions are prepared by diluting a carboxymethyl cellulose (CMC, 7 × 10^5 MW) stock solution in phosphate-buffered saline (PBS) to final concentrations of 5, 10, and 20 ppm. Note that the highest polymer concentration is much below the overlap concentration c* (≤ 0.2% c*). A 2-µL drop of PBS or CMC solution containing swarming S. marcescens [38] is placed in a thin-film apparatus [29,34,38], and stretched into an approximately 1-cm² large and 40-µm thin film. Images are taken using bright-field microscopy and a CMOS camera (Flare 4M180) at 24 frames/s. Velocity fields of swarming bacteria are obtained using particle imaging velocimetry (PIV, see [40]), with a total number of 6400 or 80 × 80 interrogation windows, each of a size of 25 × 25 pixels or 7.0 × 7.0 µm². Figure 1 shows experimental velocity and vorticity fields for the buffer (PBS) and the 20 ppm CMC solution (see movies in [38]); instantaneous streamlines are plotted on top of the fields to better visualize structures. The flow fields show that the addition of polymers significantly increases the swimming speed of S. marcescens; the maximum velocity magnitude is nearly doubled from 10 µm/s in the buffer to 20 µm/s in the polymeric solution [Fig. 1(a) and 1(b)]. While it has been previously found that a single microbe can swim faster in polymeric solutions [27][28][29][30], the flows induced by bacterial collective motion here are not merely scaled up with a higher swimming speed. If that were the case, one would expect the flow structures to remain roughly of the same size.
Here, on the other hand, we find that the flow-structure length scale increases with the addition of polymers, as shown by the vorticity fields [Fig. 1(c) and 1(d)]. This indicates that bacterial collective motion in these ultra-dilute polymeric fluids has distinct underlying flow structures from those in Newtonian fluids.
The effects of polymers on flow structures in swarming bacteria are further quantified by calculating the probability density functions (PDFs) of the velocity magnitude fields. We find that the addition of 20 ppm of CMC (≤ 0.2% c*) more than doubles the maximum swimming speed [Fig. 2(a)] and roughly triples the mean speed ū [Fig. 2(a), inset]. We note that these PDFs are not simply rescaled; rather, they appear to follow different statistical distributions. To test this hypothesis, we compute the PDFs of the in-plane velocity components u_x and u_y for the 0 (PBS) and 20 ppm CMC cases [Fig. 2(b)]. For better contrast, the velocity components u*_x and u*_y are normalized to have a mean of zero and a standard deviation of unity. Importantly, we find no noticeable difference between the PDFs of x- and y-velocity components, suggesting the in-plane motion of bacteria is statistically isotropic. In the buffer case (0 ppm), the velocity distributions are broadened, with heavy tails at high velocities. A generalized Gaussian function fitting, N exp(−c|u*_{x,y}|^β), reveals that the PDFs are super-Gaussian with β ≈ 1.4. In contrast, such tails are absent in the polymeric case (20 ppm), and the PDFs are approximately Gaussian with β ≈ 2.0. The polymer additives seem to reduce the tails of the velocity distributions by suppressing velocity fluctuations.
The suppression of tails in the velocity PDFs can be characterized by the kurtosis of velocity components [Fig. 2(b), inset]. The kurtosis is 3 for a Gaussian distribution, greater than 3 for super-Gaussian and less than 3 for sub-Gaussian distributions. We find that as the polymer concentration increases, the kurtosis of velocity components decreases from ∼4.5 in the buffer to ∼3 in the polymeric fluid (20 ppm). The decrease in kurtosis suggests that polymers suppress outlier velocities and weaken the intermittency of fluctuations in 2D bacterial turbulence. This is likely due to polymer molecules mediating the hydrodynamic interaction between nearby bacteria, which reduces the likelihood of local swimming velocity deviating from the mean swarming velocity. The decrease in velocity fluctuation with polymers is consistent with the observation in previous numerical simulations [35]. Note that polymer additives have an opposite effect in classic 2D turbulence, where the sudden release of polymer elastic energy increases intermittency and the kurtosis of velocity distributions [41][42][43].
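To make the kurtosis criterion concrete, the short Python sketch below computes the fourth standardized moment for two stand-in distributions; the samples are synthetic, not the measured velocity data:

```python
import numpy as np

rng = np.random.default_rng(0)

def kurtosis(u):
    """Fourth standardized moment: 3 for a Gaussian, > 3 for a
    super-Gaussian (heavy-tailed) distribution, < 3 for sub-Gaussian."""
    u = (u - u.mean()) / u.std()
    return float(np.mean(u**4))

# Stand-ins for the normalized velocity components u*_x, u*_y:
gaussian = rng.normal(size=200_000)       # like the 20 ppm case (beta ~ 2.0)
heavy_tailed = rng.laplace(size=200_000)  # heavy tails, like the buffer case

print(kurtosis(gaussian))      # close to 3
print(kurtosis(heavy_tailed))  # well above 3 (6 for a Laplace distribution)
```

The same estimator applied to the measured u*_x and u*_y would reproduce the trend in the inset of Fig. 2(b).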
Polymer mediation on local bacteria interaction may result in a long-range hydrodynamic effect, which could explain the increase in structure size shown in Fig. 1. The flow structure size can be quantified by the spatial correlation functions of velocity u and vorticity ω, defined as C_u(r) = ⟨u(x) · u(x + r)⟩/⟨u²⟩ and C_ω(r) = ⟨ω(x) ω(x + r)⟩/⟨ω²⟩. Here ⟨·⟩ denotes the spatiotemporal ensemble average. As polymer is added, we find that the velocity fields are increasingly correlated over a distance of 250 µm [Fig. 3(a)]. Similar increases in spatial correlations are also found in the vorticity fields [Fig. 3(b)]. The average vortex size can be estimated by the integral length scale of vorticity L_ω, defined by the convergent integral L_ω = ∫₀^∞ C_ω(r) dr. The inset of Fig. 3(b) shows that the average vortex size increases by roughly 50%, from ∼30 µm in the buffer to ∼45 µm in the 20 ppm CMC solution. This increase in structure size with polymers is in contrast to previous numerical studies [35], where the induced polymer stress breaks down large-scale flow structures, but the cluster size of swimmers is increased. A much lower swimmer concentration was used in the simulations than in the current experiments, which may explain this discrepancy. The lifetime of flow structures is examined by the temporal correlation functions of velocity u and vorticity ω, defined as C_u(τ) = ⟨u(t) · u(t + τ)⟩/⟨u²⟩ and C_ω(τ) = ⟨ω(t) ω(t + τ)⟩/⟨ω²⟩. To compensate for the increases in flow speed and structure size with polymers, the time lag τ for velocity correlation is rescaled by the eddy turnover time, L_ω/ū. An increase in velocity temporal correlations is found [Fig. 3(c)] up to half of an eddy turnover time (∼5 s), suggesting that polymers increase the average lifetime of flow structures. The vorticity fields are also increasingly correlated in time with the addition of polymers [Fig. 3(d)].
Here, the time lag τ is normalized by the enstrophy time scale Ω^(−1/2), where the enstrophy is defined by the mean square vorticity, Ω = ⟨ω²⟩/2. The mean lifetime of flow structures can be measured by the vorticity integral time scale, defined as T_ω = ∫₀^∞ C_ω(τ) dτ. We find that the normalized mean lifetime is more than doubled in the 20 ppm case compared to the buffer [Fig. 3(d), inset]. Overall, these results imply that the interplay between polymer stresses and bacteria interaction leads to a longer flow memory in 2D bacterial turbulence.
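The correlation functions and integral scales used above can be sketched numerically. The Python snippet below estimates C(r) along one axis of a 2D field and computes the integral scale; truncating the integral at the first zero crossing is an assumption on our part, since the paper only states that the integral converges:

```python
import numpy as np

def correlation_x(field, dx):
    """Normalized spatial correlation C(r) = <f(x) f(x+r)> / <f^2>,
    estimated along the x direction of a 2D scalar field."""
    f = field - field.mean()
    var = np.mean(f**2)
    nx = f.shape[1]
    C = np.array([np.mean(f[:, : nx - s] * f[:, s:]) for s in range(nx // 2)]) / var
    return dx * np.arange(nx // 2), C

def integral_scale(r, C):
    """Integral scale L = integral of C(r) dr, truncated at the first
    zero crossing so that a finite, noisy estimate converges."""
    neg = np.where(C <= 0)[0]
    cut = neg[0] if neg.size else len(C)
    return float(C[:cut].sum() * (r[1] - r[0]))

# Check on a field with a known correlation: f = sin(x) gives C(r) = cos(r),
# whose integral up to the first zero (r = pi/2) equals 1.
x = np.arange(400) * 0.05
field = np.tile(np.sin(x), (4, 1))
r, C = correlation_x(field, dx=0.05)
print(integral_scale(r, C))  # close to 1.0
```

The vorticity integral scales L_ω and T_ω follow the same pattern, applied to ω(x) or to the time series ω(t).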
Next, we explore how the presence of polymers affects the transfer of (kinetic) energy across different length scales in bacterial turbulence. We compute the energy spectra, E(k) = 2πk ⟨|û(k)|²⟩, where û(k) denotes the Fourier transform of the velocity field. An increase in spectral power at all wavenumbers with polymers is found (Fig. 4) due to the enhancement in bacterial swimming speed. At large scales, the energy spectra are found to follow an exponential decay, exp(−kL_c), for all polymer concentrations. Exponential decay in energy spectra has been observed in flows dominated by viscous dissipation [44] and is expected here since the Reynolds number Re ≪ 1. The length scale associated with the exponential decay, L_c, increases with polymer concentration (Fig. 4, inset). This is consistent with our observations in the spatial correlations [Fig. 3(a) and 3(b)], although the value of L_c is larger (∼350 to 450 µm) than the integral length scale, since L_c is primarily associated with the exponential decay at the smallest wavenumbers.
A well-defined power law of k^(−5/3) for the energy spectra is found at small scales in all cases (Fig. 4). In classic Newtonian 2D turbulence, the power law of k^(−5/3) suggests the existence of an inverse energy cascade [45,46]. In bacterial turbulence, kinetic energy is injected by bacteria at the smallest length scale of the flow, which admits an inverse flux of energy towards large scales. The addition of polymers does not affect this inverse energy cascade scaling. Note that the power law of k^(−5/3) is different from the k^(−8/3) scaling that has been found in previous experimental studies [18]. This discrepancy is likely due to the difference in experimental geometry; while our experiments are performed in free-standing films, previous studies were conducted in microfluidic channels with no-slip boundary conditions [18,19].
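As a concrete illustration of the spectral analysis, here is a minimal radially binned energy spectrum for a doubly periodic 2D velocity field; the normalization and binning conventions are common choices and not necessarily those of the paper:

```python
import numpy as np

def energy_spectrum(u, v, L):
    """Radially binned kinetic energy spectrum of a doubly periodic 2D
    velocity field on an n x n grid of box size L. Normalized so that
    sum(E) equals the mean kinetic energy 0.5 * <u^2 + v^2>."""
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2          # normalization for Parseval's theorem
    vh = np.fft.fft2(v) / n**2
    e2d = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    kmag = np.hypot(kx, ky)
    dk = 2 * np.pi / L
    nbins = n // 2
    which = np.clip(np.rint(kmag / dk).astype(int).ravel(), 0, nbins)
    E = np.bincount(which, weights=e2d.ravel(), minlength=nbins + 1)
    k = dk * np.arange(nbins + 1)
    return k[1:nbins], E[1:nbins]       # drop the k = 0 and Nyquist bins

# Single-mode check: u = cos(4x), v = sin(4y) puts all energy at k = 4,
# with total kinetic energy 0.5 * (0.5 + 0.5) = 0.5.
n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
X, Y = np.meshgrid(x, x, indexing="ij")
k, E = energy_spectrum(np.cos(4 * X), np.sin(4 * Y), L)
print(k[np.argmax(E)])  # 4.0
```

Plotting E against k on log-log axes is then enough to read off the exponential and k^(−5/3) regimes described above.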
We further examine the spatial fluxes of energy and enstrophy using a recently developed filtering technique [47][48][49]. Central to this technique is to obtain a low-pass filtered velocity field, u^(k), containing the information only at wavenumbers smaller than k. Here, the low-pass filter is chosen to be a Gaussian function [50]. The local energy flux is defined as Π_E^(k) = −Σ_ij (∂_j u_i^(k)) τ_ij^(k), where τ_ij^(k) = (u_i u_j)^(k) − u_i^(k) u_j^(k) is a stress tensor arising from the coupling between the resolved large scales and the filtered small scales [51]. By definition, the local energy flux is positive for energy flow to smaller scales (> k) and negative for energy flow to larger scales (< k). The local enstrophy flux is defined in a similar manner, with the same sign convention: Π_Ω^(k) = −∇ω^(k) · σ^(k), where σ^(k) = (uω)^(k) − u^(k) ω^(k) is a vector describing the spatial transport of vorticity due to the elimination of smaller scale vortices [48].
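The filtering procedure can be sketched as below. The explicit subfilter-stress formula is the standard coarse-graining expression from the filtering literature cited above; treating it as identical to the paper's implementation is an assumption, and the grid and fields are illustrative:

```python
import numpy as np

def lowpass(f, kc, L):
    """Gaussian low-pass filter applied in Fourier space (cutoff kc)."""
    n = f.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    G = np.exp(-(kx**2 + ky**2) / (2 * kc**2))
    return np.fft.ifft2(np.fft.fft2(f) * G).real

def energy_flux(u, v, kc, L):
    """Local energy flux Pi_E = -sum_ij (d_j u_i^<) tau_ij, with the
    subfilter stress tau_ij = (u_i u_j)^< - u_i^< u_j^< ('<' = filtered).
    Positive values send energy to scales smaller than the cutoff."""
    dx = L / u.shape[0]
    ul, vl = lowpass(u, kc, L), lowpass(v, kc, L)
    txx = lowpass(u * u, kc, L) - ul * ul
    txy = lowpass(u * v, kc, L) - ul * vl
    tyy = lowpass(v * v, kc, L) - vl * vl
    dudx, dudy = np.gradient(ul, dx)    # axis 0 = x, axis 1 = y
    dvdx, dvdy = np.gradient(vl, dx)
    # tau is symmetric, so the off-diagonal terms combine:
    return -(dudx * txx + (dudy + dvdx) * txy + dvdy * tyy)

# Sanity check: a uniform flow has no subfilter stress, hence zero flux.
n, L = 64, 2 * np.pi
flux = energy_flux(np.ones((n, n)), np.zeros((n, n)), kc=8.0, L=L)
print(np.abs(flux).max())  # essentially zero
```

Sweeping the cutoff kc and averaging the local flux over space and time yields curves like those in Fig. 5; the enstrophy flux follows the same pattern with ω in place of the second velocity factor.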
The mean energy flux, Π_E^(k), is shown in Fig. 5(a) for the 0 (buffer) and 20 ppm CMC cases. To compensate for the increase in kinetic energy and enstrophy with polymers, Π_E^(k) is normalized by the factor ū²Ω^(1/2) to be dimensionless. Results show that in the buffer (0 ppm) case the net flow of energy is upscale (inverse) at all wavenumbers, consistent with the k^(−5/3) inverse energy cascade scaling. In the polymeric liquid (20 ppm), however, energy moves primarily towards larger scales, with a reversal of direction towards smaller scales at low k. This change is explained below by the changes in the mean enstrophy flux.
The mean enstrophy flux, Π_Ω^(k), normalized by Ω^(3/2) to be dimensionless, is shown in Fig. 5(b). We find that the transfer of enstrophy is primarily upscale, with a slight downscale flux at low k values for both the 0 and 20 ppm CMC cases. This inverse transfer of enstrophy in bacterial turbulence may have been expected (it has been suggested for sperm cells [52]) since both energy and enstrophy are injected at the smallest length scale. Remarkably, the addition of polymers to the media enhances the inverse transfer of enstrophy towards larger scales, which may explain the increase in vortex size. In other words, the origin of larger coherent structures is not the accumulation of kinetic energy at large scales; rather, it is due to an enhanced transfer of vorticity or enstrophy in the presence of polymers.
In summary, we investigate the effects of polymers on 2D bacterial turbulence. Our experiments show that polymers can significantly affect bacterial collective motion by suppressing velocity fluctuations and increasing the size and lifetime of coherent flow structures. Further analysis on the energy and enstrophy fluxes reveals that polymers enhance the inverse transfer of enstrophy, which can be responsible for the large-scale coherence. Our work extends the studies of active turbulence in Newtonian fluids to a more general case of viscoelastic fluids. These results can be helpful in understanding the collective dynamics of microswimmers in more common viscoelastic fluid environments, such as spermatozoa in cervical mucus and bacteria in biofilms.
We thank Daniel Blair and Christopher Browne for helpful discussions.
Optimal Space Interpolation Method for Continuous Marine Vertical Datum Based on WGS-84 Ellipsoid
Global Navigation Satellite System (GNSS)-based survey technology improves the accuracy and usability of location information by enabling the rapid updating of different information and the efficient use of integration through an ellipsoid. The global marine sector also uses the GNSS to produce rapid and accurate hydrographic surveys and to expand investment and introduce integrated use, such as the S-10X electronic chart. In this study, we examined converting the currently implemented point-based vertical datum information system, provided through tidal benchmarks, to an area-based information system. We analyzed the range over which area modeling using tidal benchmarks is possible by calculating the height from the ellipsoid to the mean sea level (MSL) using the tidal benchmarks. An experiment to determine the optimal spatial interpolation was conducted by comparing and analyzing the external verification and performance results. The results of this study are expected to be actively utilized in the production of a continuous marine vertical datum based on WGS-84.
Introduction
One of the most important issues today in hydrographic surveys and information production is the use of the ellipsoidal height as the vertical standard in surveys. Recently, the development of Global Navigation Satellite System (GNSS)-based satellite survey technology has enabled the determination of the ellipsoidal height through improved geodetic and survey techniques in various fields, improving the accuracy and processing time of data. Currently, the vertical datum system of each country is divided into land and sea, which makes it difficult to link, integrate, manage, and utilize national spatial information. (1) To address this problem, countries such as the U.S., Britain, and Australia (2)(3)(4)(5) are making continuous efforts to integrate and utilize a dual vertical datum system between land and sea using the ellipsoidal height and geoid. In 2014, the International Federation of Surveyors (FIG) used two types of data from land and hydrographic surveys, and the International Hydrographic Organization (IHO) announced in 2016 that they would use the ellipsoidal height as an official guideline. (6,7) As such, the global trend is to establish a vertical datum system using the ellipsoidal height. (8) For this purpose, in this study, we defined a continuous marine vertical datum as shown in Fig. 1 and investigated switching from the current point-based vertical datum information system, provided through tidal benchmarks, to an area-based information system. Using the tidal benchmarks, the height from the ellipsoid to the mean sea level (MSL) was calculated and the optimal spatial interpolation method for reflecting the characteristics of the terrain was determined. (9,10) A spatial data distribution having continuous features is expressed as area-based information by applying a range of spatial interpolation methods to the observed 3D point data.
Curtarelli and co-workers mapped the water depth of an Amazon reservoir using spatial interpolation methods and, through cross-validation, found the conventional kriging interpolation method to be the most suitable. (11,12) Georgas et al. constructed a tide level model of the Hudson River, applied interpolation to its vertical datum, and, by comparing the interpolated vertical datum with actually observed values, found spline interpolation with barriers to be the most suitable method. (13) Kim et al. constructed a tide level model showing the relationship between the approximate lowest low water (ALLW) and the regional MSL by spatial interpolation after extracting the sum of the semi-ranges of the four major tidal constituents (Z_0) from the tidal benchmark performance. They deduced, through cross-validation, that spline interpolation would be the most suitable method. (14) Jeong et al. constructed a submarine topography model through spatial interpolation using water depth data for the Jeju area. They deduced that kriging interpolation would be the most suitable method and considered that the point density features of the collected data affected the accuracy. (15)
Methodology
In this study, to construct a new ellipsoid-based marine vertical datum, the Gyeong-gi Bay area on the west coast of Korea (Fig. 2) was selected as the target region because of its large tidal range, its complicated coastline with bays and islands of various sizes, and its diverse tidal and oceanic currents. If a satisfactory spatial interpolation can be achieved for the west coast, the method used should also be satisfactory for the less complicated south and east coasts.
The WGS-84 ellipsoid was used as the height and location reference of the continuous marine vertical datum, and the grid size was set to 5′′ (approximately 160 × 120 m²). If the grid size were instead set to 10′′, the resolution would be reduced to a value only suitable for large-scale areas.
The height from the ellipsoid to the MSL was calculated using the information provided by the tidal benchmark performance table for each region, obtained by subtracting the MSL elevation from the ellipsoidal height. On this basis, information on tidal benchmark properties, such as the height of the MSL from the ellipsoid, was entered using the ArcGIS and Surfer tools. Then, basic data, such as coastlines, were extracted from an electronic navigational chart, a polygon file was created, and the range that can be modeled was analyzed through a correlation analysis by the least squares collocation (LSC) method. Each spatial interpolation method was then used to select its parameters.
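The central quantity here is a simple difference of two benchmark attributes; a minimal sketch follows, where the record values are hypothetical and not taken from the KHOA performance table:

```python
def ellipsoid_to_msl(ellipsoidal_height, msl_elevation):
    """Height from the WGS-84 ellipsoid to the regional MSL, obtained by
    subtracting the MSL elevation from the ellipsoidal height (both in m)."""
    return ellipsoidal_height - msl_elevation

# Hypothetical tidal benchmark record (illustrative values only):
sep = ellipsoid_to_msl(ellipsoidal_height=32.415, msl_elevation=5.120)
print(round(sep, 3))  # 27.295
```

Evaluating this difference at every benchmark yields the point data set that the spatial interpolation methods below turn into an area model.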
An experimental verification comprising an external verification and a performance evaluation was carried out, and the optimal spatial interpolation method for establishing an ellipsoid-based vertical datum was determined after the accuracy verification. A correlation distance analysis was performed using 133 tidal benchmark performances from 2015 for the west coast, for which the ellipsoidal height performance had been obtained by the Korea Hydrographic and Oceanographic Agency (KHOA). The height (reference MSL) from the ellipsoid to the MSL was calculated from the tidal benchmarks; to obtain the ellipsoidal height, GPS observations of more than 4 h were carried out and analyzed with the Bernese GNSS software.
It is assumed that, when determining the correlation distance using the LSC technique, the degree variance distribution of the potential depends on the maximum degree used in the spherical harmonic expansion of a particular potential model. However, the development of specific potential models does not include data on the target area, and if the actual observations and the potential models used in the LSC method do not fit well, they will change very irregularly. This is because the first-order magnitude distribution of the potential is very closely related to the error of the potential model coefficients. Therefore, it can be assumed that the degree variance distribution has a proportional relationship with the error degree variances of the potential model as follows.
σ_n² = α ε_n², where σ_n² are the degree variances of the potential, ε_n² are the error degree variances of the potential model, and α is a scale factor that must be determined by gravitational field measurement. The degree variance distribution of the potential can also be determined through this proportional relationship, but it is necessary to determine the covariance function for the potential difference. To address these problems, a degree variance model, which consists of a functional relationship between the degree and the degree variances, is used. Although various degree variance models exist, in this study, the following second-order Markov covariance model was used as the analytical covariance function between the reference MSLs for LSC interpolation.
C_εε(s) = C_0 (1 + s/α) e^(−s/α), where s is the correlation distance, C_0 is the variance at s = 0, and α is a scale factor defining the relationship between the error displacement of the geometric geoid and the error distribution of the gravimetric geoid. (16) These are determined automatically, and the correlation is determined empirically by the user. Typically, a covariance model for potential anomalies is the common Tscherning/Rapp model, which is known to provide the best results for determining the degree variance distribution of potential anomalies worldwide. However, for the covariance function analyzed in this study, the second-order Markov model was found to be more suitable than the Tscherning/Rapp model owing to the ideal distribution of the model between the two anomalies. Reference 17 provides more details of the experimental procedures. Figure 4 shows the variations in the MSL and the EGM2008 geoid height over the Korean sea area; they are highly similar regardless of the distance used. Figure 5 shows the empirical and analytical covariances of the MSL on the west coast determined through the second-order Markov covariance model. The correlation analysis by the LSC method requires the determination of the relationship between a particular plane and the reference MSL.
Our analysis shows a slightly larger error in the initial variance, which means that the performance of the tidal benchmark in coastal regions does not match that of the geoid height. It was found that the actual covariance and the covariance obtained using the analytical model were relatively consistent and within the range of approximately 0.10 to 0.16° (approximately 16 km). Thus, the scope of surface modeling using the datum level point performance of the west coast region was determined to be up to 16 km.
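A numerical sketch of the second-order Markov covariance model used above follows; the C_0 and α values are illustrative placeholders, not the fitted west-coast values:

```python
import numpy as np

def markov2(s, C0, alpha):
    """Second-order Markov covariance model C(s) = C0 (1 + s/alpha) e^(-s/alpha)."""
    return C0 * (1.0 + s / alpha) * np.exp(-s / alpha)

# Illustrative parameters (assumed): variance C0 and scale alpha in degrees.
C0, alpha = 0.016, 0.05
s = np.linspace(0.0, 0.4, 401)
C = markov2(s, C0, alpha)

# Half-value distance of this model: C(s) = C0/2 at s/alpha ~ 1.678.
s_half = s[np.argmin(np.abs(C - 0.5 * C0))]
print(s_half)  # about 0.084 degrees for these parameters
```

Fitting C_0 and α to the empirical covariances of Fig. 5 and reading off where the analytical curve decays determines the usable modeling range, here found to be approximately 16 km.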
As spatial interpolation generates continuous area data from discontinuous point data, the spatial interpolation parameters must be established differently on the basis of the distribution and conditions of the point data to obtain more accurate area data. Therefore, before starting this experiment, an experiment to select the optimal parameter of each spatial interpolation method [inverse distance weighted (IDW), spline, and kriging interpolation, and spline interpolation with barriers] was performed. For the parameters of each spatial interpolation method, the RMSE was computed, and the optimal parameter was selected by comparing the differences between forecast and observed values for every point through cross-validation. Cross-validation verifies accuracy by excluding the target points in the study area one by one, obtaining a forecast value for each excluded point from the remaining points, and comparing it with the actually observed value.
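The cross-validation loop can be sketched as a leave-one-out RMSE computation, with `predict` standing in for any of the four interpolators compared here:

```python
import numpy as np

def loo_rmse(points, values, predict):
    """Leave-one-out cross-validation RMSE: each point is excluded in turn,
    predicted from the remaining points, and compared with its observation.
    `predict(train_points, train_values, query)` is any interpolator."""
    errors = []
    for i in range(len(values)):
        keep = np.arange(len(values)) != i
        pred = predict(points[keep], values[keep], points[i])
        errors.append(pred - values[i])
    return float(np.sqrt(np.mean(np.square(errors))))

# Toy check with a trivial 'interpolator' that predicts the training mean:
pts = np.array([[0.0, 0.0], [1.0, 0.0]])
mean_predict = lambda P, V, q: V.mean()
print(loo_rmse(pts, np.array([0.0, 2.0]), mean_predict))  # 2.0
```

Running this loop once per candidate parameter value and keeping the smallest RMSE reproduces the selection procedure used in Tables 1 to 4.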
IDW interpolation
The parameter of IDW interpolation is the power (distance) index, and a parameter selection test was performed for eight power indices (0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, and 4.0). The test results are shown in Table 1, and 3.0 was selected as the final parameter because it yielded the smallest RMSE, i.e., the smallest difference between the model and actual values.
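An IDW forecast with the power (distance) index as a tunable parameter can be sketched as follows; this is an illustrative implementation, not the ArcGIS routine used in the study.

```python
import math

def idw(target, points, values, power=3.0):
    """Inverse-distance-weighted forecast at `target`; `power` is the
    distance (power) index -- 3.0 matches the value selected in Table 1."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(target[0] - x, target[1] - y)
        if d == 0.0:          # exact hit on an observation point
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

At a point equidistant from two benchmarks the forecast is their simple average, and raising `power` pulls the forecast toward the nearest observation.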
Spline interpolation
The parameter of spline interpolation is mainly the type, regularized or tension, together with a parameter value, and a selection test for parameter values of 0.1, 0.2, 0.3, and 0.4 was performed. The test results are shown in Table 2, and 0.4 of the regularized type was selected as the final parameter because it yielded the smallest RMSE, i.e., the smallest difference between the model and actual values.
Kriging interpolation
In the case of kriging interpolation, the parameter is the semivariogram model, chosen from the conventional types (spherical, circular, exponential, Gaussian, and linear) and the universal types (linear with linear drift and linear with quadratic drift). The results are shown in Table 3, and the universal-type linear-with-linear-drift model was selected because it yielded the smallest RMSE, i.e., the smallest difference between the model and actual values.
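Ordinary kriging solves a small linear system built from a semivariogram. The sketch below uses a linear semivariogram γ(d) = slope·d as a stand-in for the models compared in Table 3; it is purely illustrative, and production kriging would fit the variogram model to the data.

```python
import math

def _solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ordinary_krige(target, points, values, slope=1.0):
    """Ordinary kriging with linear semivariogram gamma(d) = slope * d.
    Solves [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma_0; 1], returns w . values."""
    n = len(points)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    gamma = lambda d: slope * d
    A = [[gamma(dist(points[i], points[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])          # unbiasedness constraint: sum(w) = 1
    b = [gamma(dist(p, target)) for p in points] + [1.0]
    w = _solve(A, b)[:n]
    return sum(wi * vi for wi, vi in zip(w, values))
```

Kriging is an exact interpolator: at a benchmark location it reproduces the observed value.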
Spline interpolation with barriers (minimum curvature)
In spline interpolation with barriers, the parameter is the smoothing factor, which ranges from 0 to 1. This factor determines how smoothly spatial interpolation is performed. We performed a parameter selection test for parameter values of 0.6, 0.7, 0.8, 0.9, and 1.0. The test results are shown in Table 4, and 1.0 was selected as the smoothing factor because the smallest RMSE means that the difference between the model value and the actual value is small.
Spatial Interpolation Experiment
In this study, we attempted to express the spatial distribution of the regional MSL through surface modeling based on spatial interpolation using the height from the ellipsoid of the tidal benchmark to the MSL.
As the test procedure, the height value (ellipsoid height − MSL) was first estimated from the ellipsoid height and the MSL provided in the tidal benchmark performance table for each point in the test target region. On the basis of this result, attribute data of the tidal benchmarks, including the latitude/longitude coordinates and the height of the MSL from the ellipsoid, were entered using the ArcGIS and Surfer tools; a polygon file was generated by extracting basic data such as the coastline from an electronic navigational chart; and a parameter selection test was performed for each spatial interpolation method (IDW, spline, kriging, and spline interpolation with barriers). Reference 18 provides more details of the experimental procedures. Subsequently, a spatial interpolation test was carried out with two kinds of experimental verification: external verification and comparison with observations. The optimal spatial interpolation method was then obtained after verifying the accuracy, and an ellipsoid-based marine vertical datum (MSL) was constructed.
Spatial interpolation experiment and comparative validation
The test carried out to determine the optimum method of spatial interpolation for constructing the ellipsoid-height-based marine vertical datum was performed with the ArcGIS tool for each spatial interpolation method (IDW, spline, kriging, and spline interpolation with barriers) by estimating the height from the ellipsoid to the MSL on the basis of the information provided by the 67 tidal benchmarks for the target region.
Using 60 of the 67 tidal benchmarks, where those of Gungpyeong port, Deokjeokdo bukri, Eoeundol port, Incheon port, Jumun port, Palmido, and Pungdo port were excluded, spatial interpolation was performed and external validation was carried out. The external validation was performed to verify the accuracy of the model using points not utilized at the time of modeling.
For validation, Gungpyeong port, Deokjeokdo bukri, Eoeundol port, Incheon port, Jumun port, Palmido, and Pungdo port were selected because these locations are strongly affected by oceanic and tidal currents: if the difference between the forecast and observed values is small even there, the reliability of the spatial interpolation can be ensured. In addition, there are enough tidal benchmarks in their surroundings to deduce sufficiently accurate forecast values while minimizing the effect of the surroundings. Tables 5 and 6 show the difference between the observed and forecast values for each location obtained by each spatial interpolation method. In the case of IDW interpolation, the difference was between 0.86 and 28.11 cm and the RMSE was 14.90 cm. In the case of spline interpolation, the difference was between 0.29 and 19.02 cm and the RMSE was 9.57 cm. In the case of kriging interpolation, the difference was between 0.89 and 17.60 cm and the RMSE was 8.22 cm. In the case of spline interpolation with barriers, the difference was between 0.83 and 19.63 cm and the RMSE was 8.21 cm.
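The RMSE figures quoted above are computed from the forecast−observed differences in the usual way; for concreteness:

```python
import math

def rmse(observed, forecast):
    """Root-mean-square error between observed and forecast values (here, cm)."""
    diffs = [o - f for o, f in zip(observed, forecast)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))
```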
Validation by comparison with observed values
Spatial interpolation was performed using 67 tidal benchmarks and verified by comparison with the observed values (Fig. 6). The validation performance was estimated through actual tide observation and GNSS surveying at locations north of Pungdo and south of Incheon Grand Bridge, where no tidal benchmarks are observed. These locations were selected on the basis of the judgment that if the difference between the forecast and observed values is minimal at locations strongly affected by oceanic and tidal currents, the reliability of spatial interpolation can be ensured; these locations were suitable for validation because tidal benchmarks are evenly distributed around them. The tidal values at these two points are annually revised using yearly data for standard ports (Incheon and Yeongheungdo) after observation for 30 days and nights. (19) Tables 7 and 8 show the difference between the observed and forecast values for each location obtained by each spatial interpolation. In the cases of IDW, spline, and kriging interpolations, and spline interpolation with barriers, the RMSE values were 13.01, 9.12, 9.36, and 9.85 cm, respectively.
Analysis of validation results
The spatial interpolation methods were applied to the test target area and two types of comparison and validation were conducted (external validation and comparison with observed values). The previously analyzed results are summarized in Table 9. As a result of analyzing the general results, we found that the RMSE values for IDW, spline, and kriging interpolations and spline interpolation with barriers are 14.503, 9.470, 8.490, and 8.599 cm, respectively.
Kriging interpolation gave an RMSE smaller than that of spline interpolation with barriers by 0.109 cm, so it may be considered more suitable. However, the problem with kriging interpolation is that a barrier-like coastline impeding physical flow, if present in the space to be interpolated, cannot be taken into account. If the coastline is not considered, the interpolation can extend onto land and thus distort the forecast values; barriers such as coastlines must therefore be considered and applied. In view of these features, it was concluded that spline interpolation with barriers using the minimum curvature technique (20) was the most suitable spatial interpolation method, because it can perform spatial interpolation while taking a barrier-like coastline into account.
In addition, Table 10 shows the minimum standards of channel surveying specified by IHO. (21) The maximum allowable total vertical uncertainty (TVU) at a confidence level of 95% must satisfy a minimum of 26 cm for the special grade, which is the highest grade, and spline interpolation with barriers using the minimum curvature technique, in which spatial interpolation considering the barrier-like coastline can be performed, was considered to be the most suitable spatial interpolation method.
Conversion to each marine vertical datum
Through the previous test, the ellipsoid-based MSL height from the ellipsoid of each tidal benchmark to the MSL was constructed by spline interpolation with barriers. In a tide model, by adding or subtracting Z 0 , the sum of the semi-ranges of the four major tidal constituents, the marine vertical datum of MSL can be converted to the marine vertical datum of approximate highest high water (AHHW) or the marine vertical datum of approximate lowest low water (ALLW). Among the various tide models, the tide model of KHOA, which is applicable to all the waters of Korea, was utilized, and by extracting a range identical to that of the test target region, the conversions to AHHW and ALLW were performed. Figure 7 shows Z 0 and the marine vertical datums of AHHW, MSL, and ALLW from the ellipsoid.
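Assuming ellipsoidal heights are measured positive upward, the conversion amounts to adding or subtracting Z 0 at each grid node. The function name and sign convention below are illustrative assumptions, not the paper's implementation:

```python
def datums_from_msl(h_msl, z0):
    """Convert an ellipsoid-based MSL height into AHHW and ALLW heights by
    adding/subtracting Z0 (all heights in the same unit, positive upward)."""
    return {"AHHW": h_msl + z0, "MSL": h_msl, "ALLW": h_msl - z0}
```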
Analysis of ellipsoid-based marine vertical datum
After constructing the ellipsoid-based MSL by spline interpolation with barriers, we compared the converted AHHW and ALLW values with actual observed values to determine whether the conversion was performed well; the results are shown in Table 11.
For AHHW and ALLW, the RMSE values were 3.235 and 3.529 cm, respectively, both of which satisfy the allowable error specified by IHO. Therefore, it was confirmed that the conversion to AHHW and ALLW was performed well.
Conclusions
The objectives of this study were to improve the precision of hydrographic surveys using GNSS and to create a continuous marine vertical datum for rapid information updates and the efficient use of integrated information through the representation of locations using ellipsoids. As a result, spatial interpolation was employed to define and produce a continuous marine vertical datum, and verification tests were used to determine and justify the optimal method of spatial interpolation. We consider that the results can be actively utilized to develop a continuous marine vertical datum based on the WGS-84 ellipsoid in the future. However, in the case of an open sea area where tidal benchmark information is insufficient, it is necessary to analyze physical oceanic information obtained through satellite altimeter data or GNSS buoys and to construct an ellipsoid-based continuous marine vertical datum for the open sea.
Continuous orbit equivalence for automorphism systems of equivalence relations
We introduce notions of continuous orbit equivalence and strong (resp., weak) continuous orbit equivalence for automorphism systems of étale equivalence relations, and characterize them in terms of the semi-direct product groupoids, as well as their reduced groupoid $C^*$-algebras and the associated $C^*$-automorphism systems of group actions or coactions on them. In particular, we study topological rigidity of expansive automorphism actions on compact (connected) metrizable groups.
Introduction
The interplay between orbit equivalence of topological dynamical systems and C*-algebras has been studied by many authors. An early celebrated result in this direction is the work on strong orbit equivalence of minimal homeomorphisms on Cantor sets given by Giordano, Putnam and Skau ([8]). Later, Tomiyama and Boyle-Tomiyama studied a generalization of the GPS result to the case of topologically free homeomorphisms on compact Hausdorff spaces ([3, 31]). In [13], Matsumoto introduced the notion of continuous orbit equivalence of one-sided topological Markov shifts and characterized it in terms of the existence of diagonal-preserving *-isomorphisms between the associated Cuntz-Krieger algebras. In [14], Matui and Matsumoto gave a classification result for two-sided irreducible topological Markov shifts in the sense of flow equivalence by means of continuous orbit equivalence of one-sided topological Markov shifts. We refer to [6, 5] for some generalizations on flow equivalence and for a study of the relation between topological conjugacy of two-sided shifts of finite type and the associated stabilized Cuntz-Krieger algebras with their canonical Cartan subalgebras and gauge actions. More recently, in [15, 16], Matsumoto introduced notions of asymptotic continuous orbit equivalence, asymptotic conjugacy and asymptotic flip conjugacy in Smale spaces and characterized them in terms of their groupoids and asymptotic Ruelle algebras with their dual actions; in [18], he also characterized topological conjugacy classes of one-sided topological Markov shifts in terms of the associated Cuntz-Krieger algebras and their gauge actions with potentials.
Our interests lie in group actions. As a topological analogue of the classification results on probability-measure-preserving actions in the sense of orbit equivalence, Li introduced the notion of continuous orbit equivalence for continuous group actions and proved that two topologically free systems are continuously orbit equivalent if and only if their associated transformation groupoids are isomorphic ([11]). By Renault's result in [24], these conditions are also equivalent to the existence of a C*-isomorphism preserving the canonical Cartan subalgebras between the corresponding crossed product algebras. In [7], Li's rigidity result is generalized to the case of group actions with torsion-free and abelian essential stabilisers.
The local conjugacy relations from expansive group action systems are generalizations of the asymptotic equivalence relations of Smale spaces ([21, 29]). In [9], we characterized continuous orbit equivalence of expansive systems up to local conjugacy relations and showed that two expansive actions are asymptotically continuously orbit equivalent if and only if the associated semi-direct product groupoids of local conjugacy relations are isomorphic.
In this paper we consider continuous orbit equivalence between automorphism systems of étale equivalence relations. Given an étale equivalence relation R on a compact metrizable space X, let G α (X, R) be a dynamical system arising from an automorphism action of a countable group G on R, in the sense that each α g is an automorphism of R as an étale groupoid. Denote by R ⋊ α G the associated semi-direct product groupoid. We say that two systems G α (X, R) and H β (Y, S) are conjugate if there exist an isomorphism ϕ : R → S of étale groupoids and a group isomorphism θ : G → H such that ϕ(gγ) = θ(g)ϕ(γ) for γ ∈ R and g ∈ G. We call the set [x] G,R = {y ∈ X : (gx, y) ∈ R for some g ∈ G} the bi-orbit of x. Motivated by the notion of usual orbit equivalence of dynamical systems, we say that G (X, R) and H (Y, S) are orbit equivalent if there exists a homeomorphism ϕ : X → Y such that ϕ([x] G,R ) = [ϕ(x)] H,S for x ∈ X. They are continuously orbit equivalent if there exist a homeomorphism ϕ : X → Y and continuous maps a : R × G → H and b : S × H → G such that both the maps ((x, y), g) ∈ R × G → (ϕ(x), a((x, y), g)ϕ(g −1 y)) ∈ S and ((x, y), g) ∈ S × H → (ϕ −1 (x), b((x, y), g)ϕ −1 (g −1 y)) ∈ R are well defined and continuous. The following are the main results of this paper. Theorem 1.1. Assume that G α (X, R) and H β (Y, S) are essentially free. Then the following statements are equivalent.
Here the notion of essential freeness for G α (X, R) is a generalization and analogue of topological freeness of dynamical systems. When R = {(x, x) : x ∈ X} is the trivial étale equivalence relation, or R is the local conjugacy relation or asymptotic equivalence relation arising from an expansive system G α X or an irreducible Smale space (X, ϕ), this result reduces to Theorem 1.2 in [11], Theorem 3.4 in [15] and Theorem 4.2 in [9], respectively. The properties of strong and weak continuous orbit equivalence for automorphism systems correspond to two special orbit equivalences with certain uniform conditions, and are also analogues of asymptotic flip conjugacy in [15] and (strong) asymptotic conjugacy in [9]. Let ρ α be the canonical cocycle from R ⋊ α G onto G. It follows from [7, Lemma 6.1] that ρ α gives us a coaction system (C * r (R ⋊ α G), G, δ α ).
Theorem 1.2. Assume that G α (X, R) and H β (Y, S) are essentially free. Then (i) G α (X, R) and H β (Y, S) are weakly continuously orbit equivalent if and only if there is an isomorphism Λ : and H β (Y, S) are strongly continuously orbit equivalent if and only if there exist an étale groupoid isomorphism Λ : Furthermore, when R and S are minimal or X and Y are connected, these two notions are consistent.
The assumption of essential freeness in the above theorems is necessary. Automorphism systems on local conjugacy relations from expansive actions are typical examples. The automorphism systems of local conjugacy relations from a full shift G A G over a finite set A and an irreducible Smale space (X, ψ) are essentially free ( [9,15]). The following result generalizes Matsumoto's result. Theorem 1.3. Let R α be the local cnjugacy relation from an expansive and transitive action G α X. Assume that X is infinite and has no isolated points and G is an abelian group such that every subgroup generated by g (g = e) has finite index in G. Then G α (X, R α ) is essentially free.
Moreover, if Z α X is generated by an expansive homeomorphism ϕ on X, then the transitivity condition on ϕ is not necessary.
In [2], Bhattacharya proved that topological conjugacy and algebraic conjugacy between two automorphism actions on compact abelian connected metrizable groups coincide. We have a rigidity result for automorphism actions on nonabelian groups. Proposition 1.4. Let G α (X, R) and H β (Y, S) be two systems on local conjugacy relations from topologically free, expansive automorphism actions on compact and connected metrizable groups X and Y , respectively. Assume that the homoclinic group ∆ α associated to G α X is dense in X. Then the following statements are equivalent: (i) G α (X, R) and H β (Y, S) are conjugate; (ii) G α (X, R) and H β (Y, S) are weakly continuously orbit equivalent; (iii) G α X and H β Y are conjugate; (iv) G α X and H β Y are algebraically conjugate.
In particular, two hyperbolic toral automorphisms on R n /Z n are flip conjugate if and only if the Z-actions they generates are continuously orbit equivalent up to the associated local conjugacy relations.
This paper is organized as follows. Section 3 characterizes conjugacy of automorphism systems of étale equivalence relations and the reduced C*-algebra of the associated semi-direct product groupoid of equivalence relations. In Section 4, we introduce the notions of continuous orbit equivalence and strong and weak continuous orbit equivalence for automorphism systems, and characterize them in terms of the semi-direct product groupoids and the corresponding C*-algebras. In Section 5, we discuss essential freeness of automorphism systems on local conjugacy equivalence relations arising from expansive actions, and in Section 6, we study topological rigidity of expansive automorphism actions on compact (connected) metrizable groups. As an example, we characterize the structure of the local conjugacy relation arising from a hyperbolic toral automorphism on the n-torus.
Preliminaries
Unless otherwise specified, all our groups are discrete and countable, their identity elements are denoted by the same symbol e, and all topological groupoids are second countable, locally compact and Hausdorff. We refer to [23,28] for more details on topological groupoids and their C * -algebras, and refer to [19,32] for C * -dynamical systems.
For a topological groupoid G, let G (0) and G (2) be the unit space and the set of composable pairs, respectively. The range and domain maps r, d from G onto G (0) are defined by r(g) = gg −1 and d(g) = g −1 g, respectively. If r and d are local homeomorphisms then G is called to be étale.
When G is étale, the fibres $G^u = r^{-1}(u)$ and $G_u = d^{-1}(u)$ ($u \in G^{(0)}$) are discrete and countable, and $G^{(0)}$ is open and closed in G.
Each equivalence relation R ⊆ X × X on a topological space X is a groupoid with multiplication (x, y)(w, z) = (x, z) if y = w and inverse (x, y) −1 = (y, x). If we identify (x, x) with x, then the unit space R (0) equals X and the range (resp. domain) map is defined by r(x, y) = x (resp. d(x, y) = y). If there exists a topology on R (not necessarily the relative product topology from X × X) for which R is an étale groupoid, then R is called an étale equivalence relation on X. In this case, if every R-equivalence class is dense in X then R is called minimal. By a dynamical system, denoted by G α X (or simply by G X), we mean an action α of a group G on a second countable, locally compact and Hausdorff space X by homeomorphisms. The action α is usually expressed as (g, x) ∈ G × X → gx ∈ X. The associated transformation groupoid X ⋊ G is given by the set X × G with the product topology, multiplication (x, g)(y, h) = (x, gh) if y = g −1 x, and inverse (x, g) −1 = (g −1 x, g −1 ). Clearly, X ⋊ G is étale, and if (x, e) is identified with x then its unit space equals X, with range map r(x, g) = x and domain map d(x, g) = g −1 x. A system G X is said to be topologically free if for every g ≠ e in G, the set {x ∈ X : gx ≠ x} is dense in X. By [11, Corollary 2.3], G X is topologically free if and only if X ⋊ G is topologically principal. Two systems G X and H Y are conjugate if there are a homeomorphism ϕ : X → Y and a group isomorphism θ : G → H such that ϕ(gx) = θ(g)ϕ(x) for x ∈ X and g ∈ G.
Given an étale groupoid G, the linear space $C_c(G)$ of continuous complex functions with compact support on G is a *-algebra under the operations $f^*(\gamma) = \overline{f(\gamma^{-1})}$ and $(f*g)(\gamma) = \sum_{\gamma' \in G_{d(\gamma)}} f(\gamma\gamma'^{-1})g(\gamma')$ for $f, g \in C_c(G)$ and $\gamma \in G$. For each $u \in G^{(0)}$, there is a *-representation $\operatorname{Ind}_u$ of $C_c(G)$ on the Hilbert space $\ell^2(G_u)$ of square-summable functions on $G_u$, given by $\operatorname{Ind}_u(f)(\xi)(\gamma) = \sum_{\gamma' \in G_u} f(\gamma\gamma'^{-1})\xi(\gamma')$ for $f \in C_c(G)$, $\xi \in \ell^2(G_u)$ and $\gamma \in G_u$. The reduced C*-algebra $C^*_r(G)$ of G is the completion of $C_c(G)$ with respect to the norm $\|f\|_{\mathrm{red}} = \sup_{u \in G^{(0)}} \|\operatorname{Ind}_u(f)\|$ for $f \in C_c(G)$. Since $G^{(0)}$ is clopen in G, $C_c(G^{(0)})$ is contained in $C_c(G)$ in the canonical way, and this extends to an injection $C_0(G^{(0)}) \hookrightarrow C^*_r(G)$. For an open subgroupoid H of G, $C_c(H)$ can be embedded into $C_c(G)$ as a *-subalgebra, so $C^*_r(H)$ is embedded into $C^*_r(G)$ as a C*-subalgebra in the canonical way. The C*-algebra $C^*_r(X \rtimes G)$ of the transformation groupoid is isomorphic to the reduced crossed product $C_0(X) \rtimes_{\alpha,r} G$ ([28]).
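The convolution product can be made concrete in a finite toy case: on the full equivalence relation R = {0, …, n−1}² the formula reduces to (f*g)(x, z) = Σ_y f(x, y)g(y, z), so C_c(R) is the n×n matrix algebra. A purely illustrative sketch:

```python
from itertools import product

def convolve(f, g, n):
    """Convolution on the full equivalence relation R = {0..n-1}^2:
    (f*g)(x, z) = sum_y f(x, y) * g(y, z), i.e. matrix multiplication."""
    return {(x, z): sum(f[(x, y)] * g[(y, z)] for y in range(n))
            for x, z in product(range(n), repeat=2)}

# Two "functions on R" for n = 2, written as dictionaries keyed by (x, y).
f = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0}
g = {(0, 0): 5.0, (0, 1): 6.0, (1, 0): 7.0, (1, 1): 8.0}
h = convolve(f, g, 2)
```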
Given two groups N, H and a homomorphism ϕ from H into the automorphism group Aut(N) of N, the semi-direct product, denoted by N ⋊ ϕ H, of N by H is defined as the set N × H with group law given by the formulas (n, h)(n 1 , h 1 ) = (nϕ h (n 1 ), hh 1 ) and (n, h) −1 = (ϕ h −1 (n −1 ), h −1 ).
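The semi-direct product group law can be checked on a small example. The sketch below realizes Z/3 ⋊ Z/2 with the inversion action (a copy of the symmetric group S 3 , written additively); it is a toy illustration of the formulas above, not part of the paper's development:

```python
from itertools import product

# phi(h, n): the automorphism of N = Z/3 attached to h in H = Z/2 (inversion).
phi = lambda h, n: n % 3 if h % 2 == 0 else (-n) % 3

# Group law (n, h)(n1, h1) = (n + phi_h(n1), h + h1) and the inverse
# (n, h)^{-1} = (phi_{h^{-1}}(n^{-1}), h^{-1}), written additively.
mul = lambda a, b: ((a[0] + phi(a[1], b[0])) % 3, (a[1] + b[1]) % 2)
inv = lambda a: (phi(-a[1] % 2, (-a[0]) % 3), (-a[1]) % 2)

elements = list(product(range(3), range(2)))  # the six elements of Z/3 x Z/2
```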
Automorphism systems of étale equivalence relations and the associated semi-direct product groupoids
Given an étale equivalence relation R on a compact metrizable space X, we call a dynamical system G α R an automorphism system if each α g is an automorphism of R as étale groupoids. Clearly, this system induces an action, also denoted by α, of G on X by homeomorphisms such that g(x, y) = (gx, gy) for g ∈ G and (x, y) ∈ R. We use the notation G α (X, R) (or G (X, R) for short) to denote such automorphism system. The semi-direct product groupoid, R × α G, attached to G α (X, R), is the set R × G with inverse ((x, y), g) −1 = ((g −1 y, g −1 x), g −1 ), and multiplication ((x, y), g)((u, v), h) = ((x, gv), gh) if u = g −1 y. The unit space identifies with X by identifying ((x, x), e) with x. Then r((x, y), g) = x and d((x, y), g) = g −1 y. Endowed with the relative product topology from R × G, the groupoid R × α G is étale ( [23]). The following is another characterization of the semi-direct product groupoid.
Remark 3.2. If we identify the unit space (R ⋊ α G) (0) with X as topological spaces by identifying (x, e, x) with x, then r(x, g, y) = x and d(x, g, y) = y. The equivalence relation R and the transformation groupoid X ⋊ G can be embedded into R ⋊ α G as étale subgroupoids through the identifications (x, y) ∈ R → (x, e, y) ∈ R ⋊ α G and (x, g) ∈ X ⋊ G → (x, g, g −1 x) ∈ R ⋊ α G. One can check that the map ρ α : R ⋊ α G → G, defined by ρ α (x, g, y) = g, is a cocycle.
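The triple picture of R ⋊ α G in Remark 3.2 can be verified on a finite toy model: take X = Z/4, let G = Z/2 act by x ↦ x + 2, and let R be the same-parity relation (which the action preserves). The groupoid axioms are then checked by brute force; this finite sketch is illustrative only, since the étale topology plays no role here:

```python
from itertools import product

X, G = range(4), range(2)                      # X = Z/4, G = Z/2
act = lambda g, x: (x + 2 * g) % 4             # Z/2 acts on Z/4 by x -> x + 2
in_R = lambda x, y: (x - y) % 2 == 0           # R: same parity (G-invariant)

# Triples (x, g, y) with (x, g.y) in R; r = first, d = last coordinate,
# (x, g, y)(y, h, z) = (x, g + h, z) and (x, g, y)^{-1} = (y, -g, x).
elements = [(x, g, y) for x, g, y in product(X, G, X) if in_R(x, act(g, y))]
mul = lambda a, b: (a[0], (a[1] + b[1]) % 2, b[2]) if a[2] == b[0] else None
inv = lambda a: (a[2], (-a[1]) % 2, a[0])
```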
We call two automorphism systems G α (X, R) and H β (Y, S) on compact metrizable spaces conjugate if there are an isomorphism ϕ : R → S and a group isomorphism θ : G → H such that ϕ(gγ) = θ(g) ϕ(γ) for γ ∈ R and g ∈ G. Clearly, this is equivalent to that there are a homeomorphism ϕ : X → Y and a group isomorphism θ : G → H such that ϕ × ϕ : (x, y) ∈ R → (ϕ(x), ϕ(y)) ∈ S is an isomorphism and ϕ(gx) = θ(g)ϕ(x) for x ∈ X and g ∈ G. In particular, two systems G α X and H β Y are conjugate. Assume that one of the following statements holds: (i) X and Y are connected. (ii) R and S are minimal.
Then the above converse holds, i.e., G α (X, R) and H β (Y, S) are conjugate if and only if there is an isomorphism, Λ : Proof. Assume that G α (X, R) and H β (Y, S) are conjugate by a homeomorphism ϕ from X onto Y and a group isomorphism θ : G → H. Conversely, let Λ be an isomorphism from R ⋊ α G onto S ⋊ β H such that Λ(R) = S and Λ(X ⋊ G) = Y ⋊ H. Let ϕ be the restriction of Λ to X, and let a = ρ β Λ and b = ρ α Λ −1 . Then ϕ is a homeomorphism from X onto Y , and a and b are continuous cocycles on R ⋊ α G and S ⋊ β H, respectively. Moreover, Λ(x, g, y) = (ϕ(x), a(x, g, y), ϕ(y)), and its inverse satisfies Λ −1 (y, h, z) = (ϕ −1 (y), b(y, h, z), ϕ −1 (z)). Also, since (x, g, g −1 x)(g −1 x, e, g −1 y)(g −1 y, g −1 , y) = (x, e, y) for (x, y) ∈ R and g ∈ G, we have a(x, g, g −1 x) = a(y, g, g −1 y). By symmetry, b has a similar property to a.
Assume that X and Y are connected. Since the restriction map a| X⋊G : X ⋊ G → H is continuous, the restriction a| X×{g} is constant for every g ∈ G, and thus a(x, g, g −1 x) = a(y, g, g −1 y) for all x, y ∈ X and g ∈ G. Similarly for b. Assume instead that R and S are minimal. For x, y ∈ X and g ∈ G, choose a sequence {x n } in X converging to y and satisfying (x n , x) ∈ R for each n. From the above proof, a(x n , g, g −1 x n ) = a(x, g, g −1 x) for each n, which implies that a(x, g, g −1 x) = a(y, g, g −1 y) by the continuity of a. Similarly for b. Consequently, under the hypothesis of (i) or (ii), there exist two maps θ : Hence G α (X, R) and H β (Y, S) are conjugate. Given an automorphism system G α (X, R), one can check that the map α g (f )(x, y) = f (g −1 x, g −1 y) for f ∈ C c (R), (x, y) ∈ R and g ∈ G gives a C * -dynamical system (C * r (R), G, α). Let C c (G, C * r (R)) be the set of all continuous complex functions from G to C * r (R) with compact support, which is a * -algebra over C under the convolution and involution (ξ * η)(g) = Σ h∈G ξ(h)α h (η(h −1 g)) and ξ * (g) = α g (ξ(g −1 ) * ) for ξ, η ∈ C c (G, C * r (R)) and g ∈ G; its closure under the reduced crossed product norm is denoted by C * r (R) ⋊ α,r G, and is called the reduced crossed product C * -algebra associated to (C * r (R), G, α) ([19, 32]). By identifying an element a ∈ C * r (R) with the element ξ a ∈ C c (G, C * r (R)) defined by ξ a (e) = a and ξ a (g) = 0 for g ≠ e, C * r (R) can be embedded into C * r (R) ⋊ α,r G. Recall that a conjugacy between two C * -dynamical systems (A, G, α) and (B, H, β) consists of a * -isomorphism φ : A → B and a group isomorphism θ : G → H such that φ ∘ α g = β θ(g) ∘ φ for g ∈ G. If such a φ exists, we call the two systems conjugate. Note that the existence of an isomorphism between two étale equivalence relations R on X and S on Y is consistent with the existence of a C * -isomorphism between their associated reduced groupoid C * -algebras C * r (R) and C * r (S) preserving the canonical subalgebras C(X) and C(Y ) ([23]). Thus one can check the following proposition by definitions.
In this case, there exists a * -isomorphism Λ : Thus, this forms a C * -dynamical system (C * r (R ⋊ α G), G, ρ α ). Moreover, if G = Z, then the fixed point algebra of ρ α is isomorphic to C * r (R) ([25, Proposition 3.3.7]). The following theorem characterizes the reduced groupoid C * -algebra of R ⋊ α G by a crossed product construction. This is perhaps a well-known fact, but as we were unable to find an explicit reference, we provide a proof.
One can check that Φ : ) are * -isomorphisms such that Φ and Ψ are inverse to each other. Given x ∈ X, let l 2 (R x ) be the Hilbert space of all square-summable complex-valued functions on the R-equivalent class R x of x. We consider two Hilbert spaces l 2 (G, Let π x and λ x be the regular representations of C c (G) on l 2 (G x ) and C c (R) on l 2 (R x ) associated to x, respectively. Then we have the direct sums of representations Then π x , λ x , π and λ can be extended to their corresponding reduced groupoid C * -algebras and we use the same symbols to denote their extensions. Moreover, π and λ are faithful representations on C * r (G) and C * r (R), respectively. The representation λ induces a faithful representation where, for each x ∈ X,λ x is the representation of C c (G, C * r (R)) on the Hilbert space is a faithful representation. We can check that π x Φ(ξ) = λ x (ξ) for each x ∈ X, thus πΦ(ξ) = λ(ξ) for all ξ ∈ C c (G, C c (R)).
In fact, for each ϕ in l 2 (G, and .
, and Φ is an isomorphism. The conjugacy of two C * -systems follows from the definitions of dual actions and the construction of Ψ.
Remark 3.6. For a countable discrete group Γ, let λ : g ∈ Γ → λ g ∈ B(l 2 (Γ)) be the left regular representation of Γ, and let C * r (Γ) be the reduced group C * -algebra of Γ. Let δ Γ : C * r (Γ) → C * r (Γ) ⊗ C * r (Γ) (where we use the minimal tensor product) be the C * -homomorphism defined by δ Γ (λ g ) = λ g ⊗ λ g for each g ∈ Γ. Given a unital C * -algebra A, we recall that a coaction of Γ on A is a nondegenerate homomorphism δ : A → A ⊗ C * r (Γ) satisfying the coaction identity (δ ⊗ id)δ = (id ⊗ δ Γ )δ. We call (A, Γ, δ) a C * -coaction system. Recall that two C * -coaction systems (A, G, δ) and (B, H, ̺) are called conjugate if there exists a conjugacy φ between the two systems. For an automorphism system G α (X, R), it follows from [7, Lemma 6.1] that the canonical cocycle ρ α : R ⋊ α G → G induces a coaction δ α of G on C * r (R ⋊ α G). Here {u g : g ∈ G} denotes the canonical generators of C * r (R) ⋊ α,r G, and C * (G) is the full group C * -algebra with generators {v g : g ∈ G} ([10]). We conjecture that the two systems (C * r (R ⋊ α G), G, δ α ) and (C * r (R) ⋊ α,r G, G, α) are conjugate when G is amenable.
Continuous orbit equivalence of automorphism systems
Given an automorphism system G α (X, R) on a compact metrizable space X, for x ∈ X, we let [x] G := {gx : g ∈ G} and [x] R := {y ∈ X : (x, y) ∈ R} be the orbits of x under the action α and the relation R, respectively. We call the set [x] G,R = {y ∈ X : (gx, y) ∈ R for some g ∈ G} the bi-orbit of x. Recall that two dynamical systems G X and H Y are said to be continuously orbit equivalent if there exist a homeomorphism ϕ : X → Y and continuous maps a : G × X → H and b : H × Y → G such that ϕ(gx) = a(g, x)ϕ(x) and ϕ −1 (hy) = b(h, y)ϕ −1 (y) for g ∈ G, x ∈ X, h ∈ H and y ∈ Y ([11]). Motivated by these notions, we introduce the following definitions.
Clearly, continuous orbit equivalence implies orbit equivalence for automorphism systems. Assume a system G α X is free in the sense that, for g ∈ G and x ∈ X, gx = x only if g = e. We consider two automorphism systems G (X, R 1 ) and G (X, R 2 ), where R 1 = {(x, x) : x ∈ X} is the trivial étale equivalence relation on X under the relative product topology and R 2 = {(x, gx) : x ∈ X, g ∈ G} is the orbit equivalence relation under α. Noticing that the map (x, g) ∈ X ⋊ G → (x, g −1 x) ∈ R 2 is a bijection, we transfer the product topology on X ⋊ G over R 2 via this map. Then R 2 is an étale equivalence relation on X.
Proposition 4.3. Assume that G X is free. Then G (X, R 1 ) and G (X, R 2 ) are continuously orbit equivalent, but not conjugate.
Proof. Let ϕ be the identity map on X, and let a((x, x), g) = g for ((x, x), g) ∈ R 1 × G. For each (x, y) ∈ R 2 , there exists a unique element of G, denoted by k(x, y), such that y = k(x, y)x. Let b((x, y), g) = k(x, y) −1 g for ((x, y), g) ∈ R 2 × G. Then ϕ, a and b satisfy the requirements in Definition 4.2, and thus G (X, R 1 ) and G (X, R 2 ) are continuously orbit equivalent.
Since R 1 and R 2 are never isomorphic, G (X, R 1 ) and G (X, R 2 ) are not conjugate.
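The proof of Proposition 4.3 can be illustrated on the simplest free action: G = Z/4 acting on X = Z/4 by translation, written additively. Here k(x, y) = y − x and b((x, y), g) = g − k(x, y), and the orbit-equivalence identity b((x, y), g) · (g −1 y) = x can be checked exhaustively (a toy verification, not part of the proof):

```python
from itertools import product

n = 4                                    # X = G = Z/4, acting on itself by translation
k = lambda x, y: (y - x) % n             # the unique k with y = k + x (freeness)
b = lambda x, y, g: (g - k(x, y)) % n    # b((x, y), g) = k(x, y)^{-1} g, additively

# The map ((x, y), g) -> (x, b((x, y), g) + (y - g)) must land in the trivial
# relation R_1, i.e. b((x, y), g) + (y - g) = x mod n for all x, y, g.
ok = all((b(x, y, g) + (y - g)) % n == x
         for x, y, g in product(range(n), repeat=3))
```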
Using the semi-direct product groupoid R ⋊ α G and the canonical homeomorphism γ 0 , one can check the following lemma, which asserts in particular that the induced maps are continuous.
Recall that an étale groupoid G is topologically principal if {u ∈ G (0) : G u u = {u}} is dense in G (0) . Since G is assumed to be second countable, it follows from [4, 24] that it is topologically principal if and only if the interior of the isotropy G ′ := {γ ∈ G : r(γ) = d(γ)} in G equals G (0) . Moreover, we have that γ 0 ((R ⋊ α G) ′ ) = (R × α G) ′ . Motivated by [9, 15], we have the following notion.
Definition 4.5. A system G (X, R) is said to be essentially free if for every e ≠ g ∈ G, {x ∈ X : (x, gx) ∉ R} is dense in X.
One can easily see that G (X, R) is essentially free if and only if the interior of {x ∈ X : g[x] R = [x] R } in X is empty for every g ≠ e.
Lemma 4.6. A system G α (X, R) is essentially free if and only if R × α G (or R ⋊ α G) is topologically principal. Moreover, one of these conditions implies that both systems G X and G R are topologically free.
Proof. It follows from the definitions that the topological principality of R × α G implies the essential freeness of G (X, R), thus implies the topological freeness of G X. To see that the essential freeness of G α (X, R) implies the topological principality of R × α G, we only need to show that ((x, gx), g) is not in the interior of (R × α G) ′ in R × α G for each e ≠ g ∈ G and x ∈ X with (x, gx) ∈ R.
In fact, for otherwise, choose e ≠ g 0 ∈ G and x 0 ∈ X such that (x 0 , g 0 x 0 ) ∈ R and ((x 0 , g 0 x 0 ), g 0 ) is an interior point of (R × α G) ′ . Then there exists an open neighbourhood U of (x 0 , g 0 x 0 ) in R such that U × {g 0 } ⊆ (R × α G) ′ . The last inclusion implies that y = g 0 x for each (x, y) ∈ U . Hence {x ∈ X : (x, g 0 x) ∈ R} contains the non-empty open subset r(U ) of X, which contradicts the essential freeness of G (X, R).

Assume G (X, R) is essentially free. Given e ≠ g ∈ G and a non-empty open subset U ⊆ R, it follows from the openness of r(U ) that there exists x ∈ r(U ) with (x, gx) ∉ R.

Remark 4.7. The topological freeness of neither G X nor G R can imply the essential freeness of G (X, R). To see this, if G X is free, then both systems G R 1 and G R 2 in Proposition 4.3 are free, and G (X, R 1 ) is essentially free, but G (X, R 2 ) is not.

If G α (X, R) and H β (Y, S) are essentially free, then the mappings a and b in Lemma 4.4 (or in Definition 4.2) are uniquely determined by (4.1) and (4.2). In fact, suppose that a ′ : R ⋊ α G → H is another continuous map such that Ψ ′ : (x, g, y) ∈ R ⋊ α G → (ϕ(x), a ′ (x, g, y), ϕ(y)) ∈ S ⋊ β H is continuous. Then (x, g, y) ∈ R ⋊ α G → (a(x, g, y)ϕ(y), a ′ (x, g, y)ϕ(y)) ∈ S is continuous. Hence, from the continuity of a, a ′ and ρ α , for (x, g, y) ∈ R ⋊ α G, there exists an open neighbourhood U of (x, g, y) such that the map d| U : U → d(U ) is a homeomorphism, ρ α (γ) = g, a(γ) = a(x, g, y), and a ′ (γ) = a ′ (x, g, y) for each γ ∈ U . For each z ∈ ϕ(d(U )), choose γ ∈ U such that z = ϕ(d(γ)). The choice of U implies that we can assume that γ = (u, g, v), thus z = ϕ(v). Note that (ϕ(u), a(γ)z) and (ϕ(u), a ′ (γ)z), thus (a(γ)z, a ′ (γ)z), are in S. Hence (a(x, g, y)z, a ′ (x, g, y)z) ∈ S for each z ∈ ϕ(d(U )). The essential freeness of H β (Y, S) implies a(x, g, y) = a ′ (x, g, y). By symmetry, b is uniquely determined by (4.2).

Proof. We only need to show that the mappings a and b in Lemma 4.4 are cocycles.
Let γ 1 = (x, g, y), γ 2 = (y, h, z) ∈ R ⋊ α G be arbitrary, and write γ ′ = γ 1 γ 2 = (x, gh, z). From the continuity of a and ρ α , choose open neighbourhoods U, V and W of γ 1 , γ 2 and γ ′ in R ⋊ α G, respectively, such that a(γ) = a(γ 1 ), ρ α (γ) = g for each γ ∈ U, a(η) = a(γ 2 ), ρ α (η) = h for each η ∈ V , and a(σ) = a(γ ′ ), ρ α (σ) = gh for each σ ∈ W . Since the multiplication on (R ⋊ α G) (2) is continuous at (γ 1 , γ 2 ), we can assume that γη ∈ W when γ ∈ U, η ∈ V and (γ, η) ∈ (R ⋊ α G) (2) . Also since the range r and domain d are local homeomorphisms and d(γ 1 ) = r(γ 2 ) = y, we can assume that the restrictions d| U and r| V are homeomorphisms onto their respective ranges and d(U) = r(V ).
By the continuity of ρ α , Ψ and a at (x, g, y), as well as that of ϕ at x and y, there is an open neighbourhood V of (x, g, y) in R ⋊ α G such that (i) ρ α (γ) = g, a(γ) = h and Ψ(γ) ∈ U for every γ ∈ V ; (ii) r| V and d| V are homeomorphisms from V onto r(V ) and d(V ), respectively; (iii) ϕ(r(V )) ⊆ r(U) and ϕ(d(V )) ⊆ d(U).
By a similar way, we can show that a(γ ′ ) = a(γ 1 )a(γ 2 ), and that b is also a cocycle.

The following definition comes from [9, Definition 4.1].

Definition 4.10. For two étale equivalence relations R and S on X and Y , let G X and H Y be two systems generating two automorphism systems G (X, R) and H (Y, S). We say that G X and H Y are continuously orbit equivalent up to R and S, if there exist a homeomorphism ϕ : X → Y and continuous cocycles a : X ⋊ G → H, b : Y ⋊ H → G, σ : R → H, and τ : S → G satisfying the following conditions: (i) σ(x, y)a(y, g) = a(x, g)σ(g −1 x, g −1 y) for (x, y) ∈ R and g ∈ G; (ii) τ (x, y)b(y, g) = b(x, g)τ (g −1 x, g −1 y) for (x, y) ∈ S and g ∈ H; (iii) the map ξ 1 : (x, g) ∈ X × G → (a(x, g) −1 ϕ(x), ϕ(g −1 x)) ∈ S is well-defined and continuous. Moreover, b(ϕ(x), a(x, g)) τ (ξ 1 (x, g)) = g for x ∈ X and g ∈ G.
Proposition 4.11. Let G α (X, R) and H β (Y, S) be two automorphism systems. Then G X and H Y are continuously orbit equivalent up to R and S if and only if R ⋊ α G and S ⋊ β H are isomorphic as étale groupoids.
The proof of this proposition is the same as that of [9,Theorem 4.2] in which the local conjugacy is not necessary. We only provide a brief proof. For details, see [9,Theorem 4.2].
Proof. Assume that Λ : R ⋊ α G → S ⋊ β H is an isomorphism. Let ϕ be the restriction of Λ to the unit space (R ⋊ G) (0) and let a(x, g) = ρ β Λ(x, g, g −1 x), σ(x, y) = ρ β Λ(x, e, y), b(u, h) = ρ α Λ −1 (u, h, h −1 u) and τ (u, v) = ρ α Λ −1 (u, e, v). Then ϕ, a, b, σ and τ satisfy the requirements in Definition 4.10, thus G X and H Y are continuously orbit equivalent up to R and S.
Theorem 4.12. Assume that G α (X, R) and H β (Y, S) are essentially free. Then the following statements are equivalent.
(ii) G α X and H β Y are continuously orbit equivalent up to R and S; (iii) R ⋊ α G and S ⋊ β H are isomorphic as étale groupoids;

Proof. The equivalence of (ii) and (iii) follows from Proposition 4.11. From Lemma 4.6, R ⋊ α G and S ⋊ β H are topologically principal, thus the equivalence of (iii) and (iv) follows from [7, 24].
Assume (i) holds. From Lemma 4.4, there exist mappings ϕ, a and b such that the mappings Ψ : (x, g, y) ∈ R ⋊ α G → (ϕ(x), a(x, g, y), ϕ(y)) ∈ S ⋊ β H and Ψ ′ : (u, h, v) ∈ S ⋊ β H → (ϕ −1 (u), b(u, h, v), ϕ −1 (v)) ∈ R ⋊ α G are continuous.

Then R 1 ⋊ α G is isomorphic to the transformation groupoid X ⋊ G, and the notions of continuous orbit equivalence for G (X, R 1 ) and G X in Li's sense are consistent. Hence Theorem 4.12 is a generalization of Theorem 1.2 in [11].
There are two special cases for orbit equivalence of two systems G α (X, R) and H β (Y, S) via a homeomorphism ϕ : X → Y . One is, for each g ∈ G, there is h ∈ H such that (hϕ(x), ϕ(y)) ∈ S for each x, y ∈ X with (gx, y) ∈ R, and by symmetry, for each h ∈ H, there is g ∈ G such that (gϕ −1 (x), ϕ −1 (y)) ∈ R for each x, y ∈ Y with (hx, y) ∈ S. The other is, for each g ∈ G and x ∈ X, there is h ∈ H such that (hϕ(x), ϕ(z)) ∈ S for each (gx, z) ∈ R, and by symmetry, for each h ∈ H and y ∈ Y , there is g ∈ G such that (gϕ −1 (y), ϕ −1 (z)) ∈ R for each (hy, z) ∈ S. Inspired by these ideas, we have the following notions, comparing with those of (strong) asymptotic conjugation in [9,Definition 4.4].
We say these two systems are weakly continuously orbit equivalent, written G (X, R) ∼ wcoe H (Y, S), if they are continuously orbit equivalent and in Definition 4.2 we can take the maps a(γ, g) = a(γ ′ , g) for γ, γ ′ ∈ R with d(γ) = d(γ ′ ), and similarly for b.

Remark 4.15. Clearly, the strong continuous orbit equivalence implies the weak one. If G α X is free, then G (X, R 1 ) and G (X, R 2 ) in Proposition 4.3 are continuously orbit equivalent, but not weakly continuously orbit equivalent, because they do not satisfy the second special case.
The following corollary is an analogy to [9, Proposition 4.5].

Corollary 4.16. Assume that G α (X, R) and H β (Y, S) are essentially free. Then Moreover, if these conditions hold, then there is a C * -isomorphism Φ : S) if and only if there exist a homeomorphism ϕ : X → Y and a group isomorphism θ : G → H such that Λ : (x, g, y) ∈ R ⋊ α G → (ϕ(x), θ(g), ϕ(y)) ∈ S ⋊ β H is an isomorphism, if and only if there exist an étale groupoid isomorphism Λ : R ⋊ α G → S ⋊ β H and a group isomorphism θ : G → H such that θρ α = ρ β Λ, if and only if the two coaction systems (C * r (R ⋊ α G), G, δ α ) and (C * r (S ⋊ β H), H, δ β ) are conjugate by a conjugacy φ with φ(C(X)) = C(Y ). Furthermore, when R and S are minimal or X and Y are connected, the two notions of strong continuous orbit equivalence and weak continuous orbit equivalence are consistent.
Proof. One can check that if the map a in Definition 4.2 is a cocycle on R × α G, then a(γ, g) = a(γ ′ , g) for γ, γ ′ ∈ R with d(γ) = d(γ ′ ) if and only if a(γ, e) = e for all γ ∈ R. By symmetry, b has a similar characterization when it is a cocycle. From Lemma 4.8, Theorem 4.12 and its proof, we can obtain (i) and (ii), where the equivalence of the last two statements in (ii) comes from [7,Theorem 6.2].
We now show that the weak continuous orbit equivalence of G α (X, R) and H β (Y, S) implies the strong one when R and S are minimal or X and Y are connected. To see this, by assumption and the first paragraph of this proof, we have a homeomorphism ϕ and two continuous cocycles a, b with a(x, e, y) = e for all (x, y) ∈ R and b(u, e, v) = e for (u, v) ∈ S, satisfying Lemma 4.4.
Assume that X and Y are connected. For each g ∈ G, the map x ∈ X → a(x, g, g −1 x) ∈ H is continuous, thus it is a constant. Hence a(x, g, g −1 x) = a(y, g, g −1 y) for all x, y ∈ X. By symmetry, b has a similar property.
Remark that a(x, g, y) = a(x, g, g −1 x)a(g −1 x, e, y) = a(x, g, g −1 x) for (x, g, y) ∈ R ⋊ α G. Consequently, if one of the above two assumptions holds, then a(x, g, y) = a(u, g, v) for (x, g, y), (u, g, v) ∈ R ⋊ α G. By a similar way, we can show that b satisfies a similar requirement. Hence G α (X, R) and H β (Y, S) are strongly continuously orbit equivalent.
Local conjugacy relations from expansive systems
The condition of essential freeness of automorphism systems in Theorem 4.12 and Corollary 4.16 is necessary. In this section, we give some examples satisfying the requirement. Recall that a system G α X is called expansive if the action α is expansive, which means for a metric d on X compatible with the topology, there exists a constant δ > 0 such that, for x, y ∈ X, if d(gx, gy) < δ for all g ∈ G then x = y. For convenience, given a real-valued function ψ on G, the notation lim g→∞ ψ(g) = 0 means that, for any ǫ > 0, there exists a finite subset F of G such that |ψ(g)| < ǫ for all g ∉ F . A triple (U, V, γ), consisting of open subsets U, V of X and a homeomorphism γ : U → V , is called a local conjugacy if lim g→∞ sup z∈U d(gz, gγ(z)) = 0. Two points x and y in X are said to be locally conjugate if there exists a local conjugacy (U, V, γ) such that x ∈ U, y ∈ V and γ(x) = y. Let R α = {(x, y) ∈ X × X : x and y are locally conjugate} be the local conjugacy relation on X. From [27] (also see [29]), R α is an étale equivalence relation on X under the topology whose base consists of the sets of the form {(z, γ(z)) : z ∈ U }, where (U, V, γ) is a local conjugacy. Moreover, G α X induces an automorphism system G α R α : g(x, y) = (gx, gy) for g ∈ G and (x, y) ∈ R α . Thus we have an automorphism system G α (X, R α ).
From [9] and [15], the automorphism systems of local conjugacy relations associated to a full shift G A G over a finite set A and an irreducible Smale space (X, ψ) are essentially free. The following result generalizes Matsumoto's result in the Smale space case to the Z-expansive system case.
Theorem 5.2. Let R ϕ be the local conjugacy relation from an expansive system Z α X generated by a homeomorphism ϕ on X. Assume that X is infinite and has no isolated points. Then Z α (X, R ϕ ) is essentially free.
Proof. For an arbitrary integer p ≥ 1, we first claim that the set X p = {x ∈ X : lim n→∞ ϕ pn (x) and lim n→∞ ϕ −pn (x) exist} is countable.
In fact, when p = 1, it follows from [1, Theorem 2.2.22] that X 1 is countable. Moreover, the expansiveness of ϕ p implies that X p is also countable for every p. For completeness, we provide a proof for the claim. Since ϕ is expansive, it follows that the p-periodic point set F p (ϕ) = {x ∈ X : ϕ p (x) = x} is finite, say F p (ϕ) = {y 1 , y 2 , · · · , y k }. For each x ∈ X p , let lim n→∞ ϕ pn (x) = y and lim n→∞ ϕ −pn (x) = z. One can see that y, z ∈ F p (ϕ).
Hence there exists an integer N ≥ 2 such that d(ϕ n (x), ϕ n (y i )) < c/2 for all n ≥ N and d(ϕ −n (x), ϕ −n (y j )) < c/2 for all n ≤ −N, where c is an expansive constant for ϕ. Set X p,N (i, j) = {x ∈ X : d(ϕ n (x), ϕ n (y i )) < c/2 for all n ≥ N and d(ϕ −n (x), ϕ −n (y j )) < c/2 for all n ≤ −N}. Thus X p (i, j) = ∪ N ≥2 X p,N (i, j) and X p = ∪ 1≤i,j≤k X p (i, j). To finish the claim, we show that, for each N ≥ 2 and 1 ≤ i, j ≤ k, the set X p,N (i, j) is finite. For otherwise, X p,N (i, j) is infinite for some i, j, N. Choose δ < c/2 such that if d(y, z) ≤ δ for y, z ∈ X then d(ϕ l (y), ϕ l (z)) < c/2 for each integer l with |l| ≤ N − 1. Since X p,N (i, j) is infinite, there are two different y, z in X p,N (i, j) such that d(y, z) < δ. Thus d(ϕ l (y), ϕ l (z)) < c for every integer l, which implies that y = z by expansiveness of ϕ, a contradiction. This establishes the claim.
Recall that the action G α X is (topologically) transitive if for all non-empty open sets U, V ⊆ X, there exists an s ∈ G such that sU ∩ V ≠ ∅. In this case, choose a countable basis {U n : n = 1, 2, · · · } for the topology on X. The transitivity of α implies that each open subset W n = ∪ g∈G gU n is dense in X. It follows from the Baire category theorem that ∩ ∞ n=1 W n is dense in X. Thus the set of points in X with dense orbit is dense.
Proposition 5.3. Let R α be the local conjugacy relation from an expansive and transitive action G α X. Assume that X is infinite and has no isolated points and G is an abelian group such that every subgroup generated by g (g ≠ e) has finite index in G. Then G α (X, R α ) is essentially free.
Proof. Given g ∈ G, g ≠ e, let H g be the subgroup generated by g in G. Then H g has finite index, thus H g α| H g X is expansive, where α| H g is the restriction of α to H g . So the set F H g (α) := {x ∈ X : hx = x for all h ∈ H g } is finite. By hypothesis, X is uncountable.
If we take an enumeration s 1 , s 2 , · · · of the elements of G, then gs 1 , gs 2 , · · · is also an enumeration of the elements of G. Let x ∈ X with (x, gx) ∈ R α . Assume that z is a limit point of the sequence {gs n x : n = 1, 2, · · · }. By a similar argument to the above theorem, one can see that gz = z, thus z ∈ F H g (α).
Assume that there exists g ∈ G, g ≠ e, such that the interior of {x ∈ X : (x, gx) ∈ R α } is non-empty. Then the transitivity of G α X implies that there exists a point x ∈ X with (x, gx) ∈ R α and having dense orbit, i.e., {gs n x : n = 1, 2, · · · } is dense in X. From the second paragraph, every limit point of {gs n x : n = 1, 2, · · · } is contained in the finite set F H g (α). Thus the closure of {gs n x : n = 1, 2, · · · } in X is countable, which contradicts the fact that X is uncountable. Consequently, G α (X, R α ) is essentially free.
Expansive automorphism actions on compact groups
Let X be a compact metrizable group with an invariant compatible metric d, i.e., d(xy, xz) = d(yx, zx) = d(y, z) for x, y, z ∈ X. Assume that G α X is an expansive automorphism system in the sense that it is expansive and each α g is a continuous automorphism on X. Let ∆ α := {a ∈ X : lim g→∞ d(α g (a), e) = 0} be the associated homoclinic group, which is an α-invariant countable subgroup of X in the sense that α g (a) ∈ ∆ α for every a ∈ ∆ α and g ∈ G ([29]). Denote by σ the left-multiplication action of ∆ α on X: σ u (x) = ux, for u ∈ ∆ α and x ∈ X, and by X ⋊ σ ∆ α the associated transformation groupoid. Let G α (X, R) be the automorphism system associated to the local conjugacy equivalence relation as in Section 5. The following facts are referred to [29, Lemma 3.7]. (2) The map Λ : (x, y) ∈ R → (x, xy −1 ) ∈ X ⋊ σ ∆ α is an étale groupoid isomorphism.
Proof. We only give a proof for (2). One can see that Λ is an algebraic isomorphism. Given (x, y) ∈ R, for S ⊆ ∆ α and an open subset U ⊆ X with x ∈ U and xy −1 ∈ S, we define γ(z) = yx −1 z for z ∈ U. Then (U, γ(U ), γ) is a local conjugacy from x to y, and Λ({(z, γ(z)) : z ∈ U }) ⊆ U × S, thus Λ is continuous at (x, y). By a similar way, we show that Λ −1 is continuous, thus Λ is a homeomorphism.
One can check that Γ α X is an expansive affine system. Remark that ∆ α and G can be contained in Γ as subgroups by identifying a ∈ ∆ α with (a, e) ∈ Γ, and g ∈ G with (e, g) ∈ Γ, thus the restrictions of α to ∆ α and G are the same as σ and α, respectively. Hence the transformation groupoid X ⋊ α Γ contains X ⋊ σ ∆ α and X ⋊ α G as open subgroupoids.
The continuity of Λ follows from Lemma 6.1 and the canonical homeomorphism γ 0 from R ⋊ α G onto R × G. Hence Λ is an étale groupoid isomorphism.

Proposition 6.4.
(i) The system G α X is topologically free, if and only if Γ α X is topologically free, if and only if R ⋊ α G is topologically principal, if and only if G α (X, R) is essentially free. (ii) If G is torsion-free and ∆ α is dense in X, then G α X is topologically free.
Proof. (i) It follows from [11, Corollary 2.3], Lemma 4.6 and Proposition 6.3 that we only need to show that the topological freeness for α and α is consistent. Since G can be embedded into Γ as a subgroup and the restriction of α to G is the same as the action α, the topological freeness of α implies that of α.
To see the contrary, it is sufficient to show that, for arbitrary (e, e) ≠ (a, g) ∈ Γ and non-empty open subset U of X, there exists x ∈ U such that aα g (x) ≠ x.
In fact, since the restriction of α to ∆ α is free, we can assume that g ≠ e and a ≠ e. Clearly, we can also assume that there exists y ∈ U such that aα g (y) = y. The topological freeness of α implies there is z ∈ y −1 U such that α g (z) ≠ z. Let z = y −1 x for x ∈ U. Then aα g (x) ≠ x.
(ii) Given g ∈ G, assume there exists an open subset U of X such that α g (z) = z for every z ∈ U. We may assume that e ∉ U. Since ∆ α is dense in X, there is x 0 ∈ U ∩ ∆ α , thus lim h→∞ d(α h (x 0 ), e) = 0. If g ≠ e, then, from the torsion-freeness of G, the set {g n : n ∈ Z} is infinite, so we have lim n→∞ d(α g n (x 0 ), e) = 0, which contradicts the facts that x 0 ≠ e and α g n (x 0 ) = x 0 for all n ∈ Z. Consequently, g = e, thus α is topologically free.
Recall that two automorphism systems G α X and H β Y on compact metrizable groups are said to be algebraically conjugate if there exist a continuous isomorphism ϕ : X → Y and an isomorphism ρ : G → H such that ϕ(α g (x)) = β ρ(g) (ϕ(x)) for g ∈ G and x ∈ X. From [2], when X and Y are abelian, the two notions of algebraic conjugacy and conjugacy for automorphism systems are consistent. In the following we have a similar result for automorphism actions on nonabelian groups.

Proposition 6.5. Let G α (X, R) and H β (Y, S) be two automorphism systems on local conjugacy relations from topologically free, expansive automorphism actions on compact and connected metrizable groups X and Y , respectively. Then the following statements are equivalent: S); (iii) G α X and H β Y are continuously orbit equivalent; (iv) G α X and H β Y are conjugate.
Moreover, if ∆ α is dense in X, then the above conditions are equivalent to the following statement.
(v) G α X and H β Y are algebraically conjugate.
Proof. Since X and Y are connected, the continuous orbit equivalence and conjugacy of G α X and H β Y are consistent. To complete the proof, we only need to prove that (ii) ⇒ (iv) and (ii) ⇒ (v) when ∆ α is dense in X. From Corollary 4.16 and Proposition 6.3, there is an étale groupoid isomorphism Λ : X ⋊ α (∆ α ⋊ G) → Y ⋊ β (∆ β ⋊ H), where σ and σ ′ are the left-multiplication actions, and α and β are as in Definition 6.2. Since X and Y are connected, there are a homeomorphism ϕ : X → Y and a group isomorphism θ : ∆ α ⋊ G → ∆ β ⋊ H such that ϕ(aα g (x)) = β θ(a,g) (ϕ(x)) for every (a, g) ∈ ∆ α ⋊ G and x ∈ X, (6.1) and θ(∆ α ) = ∆ β , where ∆ α and ∆ β are subgroups of the semi-direct groups as before.
Consequently, G α X and H β Y are conjugate.
Assume that ∆ α is dense in X. Remark that θ(a, e) ∈ ∆ β , thus β θ(a,e) (y) = θ(a, e)y for a ∈ ∆ α and y ∈ Y . Letting g = e, the identity of G, and x = e, the identity of X, in (6.1), one can see that ϕ(a) = θ(a, e)ϕ(e) for a ∈ ∆ α . Thus, by putting g = e in (6.1), we have ϕ(ax) = θ(a, e)ϕ(x) = (ϕ(a)ϕ(e) −1 )ϕ(x), which implies that ϕ(ax) = ϕ(a) ϕ(x) for every a ∈ ∆ α and x ∈ X. From the density of ∆ α in X, the map ϕ : X → Y is a continuous isomorphism. So G α X and H β Y are algebraically conjugate.

Proposition 6.6. Let G α (X, R) be an automorphism system on the local conjugacy relation from a topologically free, expansive automorphism action. Then the following statements are equivalent.
Proof. For the equivalence of (i), (ii) and (iii), we refer to [29,Corollary 3.9]. From Proposition 6.3, C * r (R ⋊ α G) is isomorphic to C(X) ⋊ r, α Γ, thus they have the same simplicity and the uniqueness of tracial states. From Proposition 6.4 and [11], X ⋊ α Γ is topologically principal, thus there is a one-to-one correspondence between the family of ideals of C(X) ⋊ r, α Γ and that of α-invariant open subsets of X ( [23]).
Assume (iii) holds. Since each non-empty α-invariant open subset U of X is invariant under left multiplication by elements of ∆ α , we have U = X. Hence C(X) ⋊ r, α Γ is simple, thus (iv) holds. Conversely, if (iv) holds, then C(X) ⋊ r, α Γ is simple, which leads to the fact that the α-invariant open subset X \ ∆ α of X is empty, where ∆ α is the closure of ∆ α in X. Thus ∆ α = X, i.e., (iii) holds.
For the implication (iii) ⇒ (v), assume that ∆ α is dense in X. Then the Haar measure µ 0 on X is the unique α-invariant Borel probability measure on X. From [30, Proposition 3.2.4], C(X) ⋊ r, α Γ, and thus C * r (R ⋊ α G) has a unique tracial state.
Example 6.7 (Hyperbolic toral automorphisms). For n ≥ 2, we consider an expansive Z-action on the n-dimensional torus R n /Z n generated by a single hyperbolic toral automorphism α. Let π : R n → R n /Z n be the usual quotient map. Recall that R n /Z n is a compact and connected additive group under the following metric compatible with the quotient topology: d(π(x), π(y)) = inf z∈Z n ∥x − y − z∥, for x, y ∈ R n , where ∥ · ∥ is the Euclidean norm on R n . The elements in R n are denoted by column vectors or row vectors.
Let A be the hyperbolic matrix in GL(n, Z) with det(A) = ±1 and having no eigenvalues of modulus 1, such that α(π(x)) = π(Ax) for x ∈ R n .
Then R n = E s ⊕ E u , where E s = {x ∈ R n : lim k→+∞ A k x = 0} and E u = {w ∈ R n : lim k→+∞ A −k w = 0} are two invariant subspaces of the linear map on R n determined by A.
Remark that E s ∩ Z n = {0}, E u ∩ Z n = {0}, and both subgroups π(E s ) and π(E u ), as well as the homoclinic group ∆ α = π(E s ) ∩ π(E u ) induced by α, are dense in R n /Z n . Moreover, the system Z α R n /Z n generated by α is topologically free ( [12]).
Each m ∈ Z n has the unique decomposition m = m s − m u ∈ E s ⊕ E u . Then the map θ : Z n → ∆ α by θ(m) = π(m s ) (= π(m u )) is a group isomorphism. As before, we let σ be the translation action of ∆ α on R n /Z n : σ u (x) = u + x for u ∈ ∆ α , x ∈ R n /Z n .
Let τ be the action of Z n on R n /Z n by homeomorphisms: τ n (x) = θ(n) + x for n ∈ Z n , x ∈ R n /Z n .
Then Z n τ R n /Z n and ∆ α σ R n /Z n are conjugate. Denote by Z n ⋊ Z the semi-direct product of Z n by the automorphism given by A: m ∈ Z n → Am ∈ Z n . Let γ be the action of Z n ⋊ Z on R n /Z n : γ (m,k) (x) = θ(m) + α k (x) for (m, k) ∈ Z n ⋊ Z and x ∈ R n /Z n .
So Z n ⋊Z γ R n /Z n and ∆ α ⋊Z α R n /Z n are conjugate, where α is given by Definition 6.2.
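As a purely numerical illustration of the hyperbolicity and the decomposition m = m s − m u above, the sketch below checks these facts for the cat-map matrix A = [[2, 1], [1, 1]]; the specific matrix and test vector are my own choices, not taken from the paper.

```python
import numpy as np

# Cat-map matrix: det = 1, eigenvalues (3 ± sqrt 5)/2, none of modulus 1.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
evals, evecs = np.linalg.eig(A)
assert not np.any(np.isclose(np.abs(evals), 1.0))  # hyperbolicity check

s = int(np.argmin(np.abs(evals)))  # index of the contracting eigenvalue
u = 1 - s                          # index of the expanding eigenvalue
m = np.array([1.0, 0.0])           # an integer vector m in Z^2 (illustrative choice)

# Write m in the eigenbasis: m = c_s v_s + c_u v_u, so taking
# m_s := c_s v_s in E^s and m_u := -c_u v_u in E^u gives m = m_s - m_u.
c = np.linalg.solve(evecs, m)
m_s = c[s] * evecs[:, s]
m_u = -c[u] * evecs[:, u]
assert np.allclose(m, m_s - m_u)

# m_s decays under forward iteration of A, m_u under backward iteration.
print(np.linalg.norm(np.linalg.matrix_power(A, 20) @ m_s))
print(np.linalg.norm(np.linalg.matrix_power(A, -20) @ m_u))
```

Both printed norms are on the order of the contracting eigenvalue to the 20th power, numerically confirming that m s ∈ E s and m u ∈ E u .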
Proposition 6.8. Let α be a hyperbolic toral automorphism on R n /Z n defined by a hyperbolic matrix A. Let R be the local conjugacy relation induced by α. Then (1) C * r (R) is generated by the unitaries U j , V j , 1 ≤ j ≤ n, satisfying the relations (6.2), thus is isomorphic to a simple 2n-dimensional noncommutative torus and is an AT-algebra with real rank zero.
Moreover, two hyperbolic toral automorphisms on R n /Z n are flip conjugate if and only if the Z-actions they generate are continuously orbit equivalent up to the associated local conjugacy relations.

The physical activity profiles of South Asian ethnic groups in England
Background: To identify what types of activity contribute to overall physical activity in South Asian ethnic groups and how these vary according to sex and age. We used the White British ethnic group as a comparison.

Methods: Self-reported physical activity was measured in the Health Survey for England 1999 and 2004, a nationally representative, cross-sectional survey that boosted ethnic minority samples in these years. We merged the two survey years and analysed data from 19 476 adults. The proportions of total physical activity achieved through walking, housework, sports and DIY activity were calculated. We stratified by sex and age group and used analyses of variance to examine differences between ethnic groups, adjusted for socioeconomic status.

Results: There was a significant difference between ethnic groups for the contributions of all physical activity domains for those aged below 55 years, with the exception of walking. In women aged 16–34 years, there was no significant difference in the contribution of walking to total physical activity (p=0.38). In the 35–54 age group, Bangladeshi males had the highest proportion of total activity from walking (30%). In those aged over 55 years, the proportion of activity from sports was the lowest in all South Asian ethnic groups for both sexes.

Conclusions: UK South Asians are more active in some ways that differ, by age and sex, from White British, but are similarly active in other ways. These results can be used to develop targeted population-level interventions for increasing physical activity levels in adult UK South Asian populations.
INTRODUCTION
Physical inactivity is associated with an increased risk of cardiovascular diseases, colon cancer, breast cancer, musculoskeletal disorders and depression. 1 More than a third of the English adult population is insufficiently physically active to meet the UK recommendations 2 of at least 150 min of moderate-intensity aerobic activity every week.
UK South Asians suffer from higher rates of cardiovascular disease and diabetes 3 ; these groups are also known to perform less physical activity than the White British population and this is particularly true of some women from South Asian groups. 4 Differences in physical activity prevalence between ethnic groups are most often attributed to cultural differences and socioeconomic factors. 5 There is qualitative research indicating a preference for types of physical activity in UK South Asians, 6 but there is little population-level information.
People can obtain sufficient physical activity through different domains, including walking, sports, housework, DIY and occupation. Previous research that investigated the relative contributions of different domains of active adults reported that younger active people tended to play more sports and older active people tended to do more walking. 7 Policies to increase physical activity in the adult population need to be targeted appropriately, but there is currently little information about which types of activities ethnic minority groups are likely to partake in and how this differs by age.
Our study aimed to investigate types of leisuretime activities among people from Indian, Pakistani and Bangladeshi ethnic groups, using the White British as a comparison; we investigated sex and age differences.
METHOD
This study used information from the Health Survey for England (HSE) 1999 and 2004, which is an annual, nationally representative cross-sectional survey. In these 2 years, the HSE boosted the ethnic minority sample; we therefore obtained a larger sample size by combining the two survey years, as has been done in previous research. 8 There are ethnic minority participants in other years of the HSE, but the sample size is too small to allow analysis by subgroup. To combine the 2 survey years, we identified all variables in both years to be included in the final data set. We then prepared the data in each year for merging by ensuring that the definitions, categories and names of each variable in both years matched. The files were then merged using Stata V.11 to create a master data set.
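The harmonise-then-append workflow described above (done in Stata in the paper) can be sketched in pandas; all column names and values below are illustrative stand-ins, not the HSE's actual variable names.

```python
import pandas as pd

# Hypothetical mini-versions of the two survey files.
hse1999 = pd.DataFrame({"ethnic_group": ["White British", "Indian"],
                        "age_band": ["16-24", "25-34"],
                        "year": [1999, 1999]})
hse2004 = pd.DataFrame({"ethgrp": ["Pakistani"],
                        "ageband": ["35-44"],
                        "year": [2004]})

# Step 1: rename so definitions, categories and names match across years.
hse2004 = hse2004.rename(columns={"ethgrp": "ethnic_group", "ageband": "age_band"})

# Step 2: stack the two years into a single master data set.
master = pd.concat([hse1999, hse2004], ignore_index=True)
print(len(master))  # 3
```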
The HSE physical activity questions were adapted from the Allied Dunbar National Fitness Survey 9 and asked of all people aged over 16 years. All respondents were asked questions about their physical activity in the past 4 weeks. For heavy housework, heavy manual work, walking and sports, respondents were asked to recall the total number of days in the past 4 weeks on which they had done that particular activity for 30 min or more. Occupational physical activity was not included in this analysis as we aimed to describe leisure-time activities.
To identify ethnic groups in the household, initial screening involved asking the person whether anyone from a list of ethnic groups lived at the household. Once this was established, individual respondents were asked to confirm their ethnic background, by choosing from a predefined list with an option of 'any other group'.
We grouped age into three categories, to retain a larger sample size when analysing by subgroup of age. The HSE provides an age variable grouped into 10-year age-bands, so we combined these to create age groups 16-34, 35-54 and 55 and above for this analysis. To measure occupational social class, participants answered a standard set of questions to assign them to the Registrar General's Social Class classification system.
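The collapse of 10-year bands into the three analysis age groups might look like the following; the band labels are assumed for illustration and may not match the HSE's exact codes.

```python
import pandas as pd

# Assumed 10-year band labels, mapped to the three analysis groups.
bands = pd.Series(["16-24", "25-34", "35-44", "45-54", "55-64", "65-74", "75+"])
to_group = {"16-24": "16-34", "25-34": "16-34",
            "35-44": "35-54", "45-54": "35-54",
            "55-64": "55+", "65-74": "55+", "75+": "55+"}
age_group = bands.map(to_group)
print(age_group.tolist())
```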
Analysis
We created a total physical activity variable by combining the individual responses for the different domains of heavy housework, heavy manual work and gardening (we refer to this as DIY), walking and sports; this contained the total number of physical activity events for each person. An event was defined as any activity done lasting 30 min or more. Using this information, we created stacked bar-graphs for each ethnic group, by age and sex, to illustrate the different contributions of each physical activity domain to overall physical activity. We then examined whether there were statistical differences in the proportions of physical activity domains between ethnic groups. To test for differences in physical activity types between ethnic groups, we used analysis of variance tests (ANOVAs) to explore whether there was a difference in the mean total number of physical events for total activity and for the proportion that each physical activity domain contributed to total physical activity. We used Bonferroni tests to explore how ethnic groups differed, for total physical activity and for each domain. We then added occupational social class into the ANOVA model to test the association of socioeconomic status with physical activity type independently of ethnicity.
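A minimal sketch of the outcome construction and the unadjusted one-way ANOVA on a domain proportion, using made-up data; the real analysis also adjusted for occupational social class, which would require a two-factor model (e.g. via an OLS regression with both terms).

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

# Made-up event counts per domain; an "event" is a bout of >= 30 min.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ethnic_group": rng.choice(["White British", "Indian", "Pakistani"], 300),
    "walking": rng.poisson(4, 300), "sports": rng.poisson(2, 300),
    "housework": rng.poisson(3, 300), "diy": rng.poisson(1, 300),
})
domains = ["walking", "sports", "housework", "diy"]
df["total_events"] = df[domains].sum(axis=1)

# Proportion of total activity contributed by walking (among people with any activity).
active = df[df["total_events"] > 0].copy()
active["prop_walking"] = active["walking"] / active["total_events"]

# Unadjusted one-way ANOVA: does the walking proportion differ by ethnic group?
groups = [g["prop_walking"].to_numpy() for _, g in active.groupby("ethnic_group")]
F, p = f_oneway(*groups)
print(round(F, 2), round(p, 3))
```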
RESULTS
The final data set contained 19 476 people. Table 1 describes the basic characteristics of the sample. South Asian ethnic groups had a younger age profile than did the White British group, with the majority being aged under 55 years.
The mean number of physical activity events was consistently higher in the White British group, followed by the Indian, Pakistani then Bangladeshi ethnic groups; the total amount of activity declined with age (table 2). Men had a higher mean number of physical activity events than women in the 16-34 and 55 and above age groups. In the 35-54 age group, Pakistani men had a lower mean number of events than Pakistani women of the same age and Indian men and women in this age group had a similar mean number of events.
The majority of people in all ethnic groups did not meet the pre-2011 recommended levels of physical activity. Overall, more men than women met the recommended levels of activity, which was consistent across ethnic and age groups. The lowest levels of physical activity were in the Pakistani and Bangladeshi groups, although for all South Asian women aged 55 years and above, more than 90% did not meet the recommended levels of physical activity.
Contributions of physical activity domains to total physical activity
Figure 1 illustrates how each domain contributes to total activity for each ethnic group by age and sex. Differences between ethnic groups are apparent within all three age groups and tables 3 and 4 provide ANOVA results showing whether there was a statistical difference between ethnic groups after adjusting for occupational social class. The proportion of total activity from sports declined with age for all ethnic groups and the proportion of activity from housework increased with age. The contribution of housework to total physical activity was much higher in women compared to men in all ethnic groups and ages, although differences between the ethnic groups are apparent for both sexes. Within women, the South Asian groups appear to have a higher proportion of housework and a lower proportion of DIY contributing to physical activity compared to the White British group.
Ages 16-34
There was a significant difference between all ethnic groups aged 16 to 34 years in the proportions of physical activity domains contributing to total physical activity, with the exception of walking. For women in this age group, the unadjusted model showed very weak evidence of a difference between ethnic groups for the contribution of walking to total activity (p=0.06); however, after adjusting for social class, there was no evidence of a statistical difference between ethnic groups (p=0.38). Adjusting for social class removed the statistically significant difference for the mean number of DIY events in this age group, indicating that occupational social class explains some of the difference between ethnic groups for the mean number of DIY events (p≤0.00 before adjustment for social class). Bonferroni tests showed that Indian men and women in the 16-34 years age group showed no evidence of being significantly different from White British men and women for the proportion of total activity from walking. Pakistani men had a significantly higher proportion of their total activity from sport compared to White British men. For South Asian women, however, sport contributes a much lower proportion to physical activity than for White British women, with over 60% of total activity coming from housework in Pakistani and Bangladeshi women aged 16-34 years.
Ages 35-54
There was a significant difference between ethnic groups for all physical activity domains. Of all the ethnic groups in this age group, Bangladeshi males have the highest proportion of total activity from walking (30%). For women, 77% of total activity came from housework for the Pakistani and Bangladeshi women, compared to 47% in the White British group.
Bonferroni tests showed that all three South Asian groups had a contribution of housework to total activity significantly different from the White British group, although the evidence was weaker for the men in the Indian ethnic group (p=0.05). The proportion of activity from walking, DIY and sports was highest in White British women. The proportion of activity from sports in the Indian women of this age group was more than double that of the Pakistani and Bangladeshi women.
Ages 55 and above
In those aged over 55 years, the proportion of activity from sports was the lowest in all South Asian ethnic groups for both sexes; it was particularly low for Pakistani women, and Bonferroni tests confirmed that Pakistani women in this age group were significantly different from White British women in the contribution of sport to total activity. South Asian men had a higher proportion of total activity from walking, but there was no evidence of a statistical difference between the ethnic groups after adjusting for occupational social class (p=0.47). South Asian men also had a higher proportion of total activity from housework than White British men, which was significantly different between the ethnic groups (p≤0.00). For sports, there was a significant difference in the mean number of events between ethnic groups, but not in its contribution to total activity. For women aged over 55 years, housework accounted for the majority of South Asian women's activity and in a much higher proportion compared to White British women in this age group. The contribution of walking to total activity was 18% for White British women, compared to half of this for Indian and Pakistani women, and only 3% for Bangladeshi women; however, these differences were no longer statistically significant after adjusting for occupational social class.
Summary of findings
We analysed 19 476 participants to produce the first physical activity domain profiles by ethnic group. We show that the mean number of total physical activity events differs between Indian, Pakistani, Bangladeshi and White British ethnic groups, and that this total activity comprises different types of physical activity. Figure 1 and tables 3 and 4 demonstrate clearly that the types of physical activity undertaken by people in England vary according to their ethnic group, sex and age group. This analysis also highlights some similarities between ethnic groups; for example, in the younger Indian group, walking contributes a similar proportion to physical activity as in the majority White British group.
Comparison to the existing literature
Bélanger et al 7 examined age-related differences in physical activity types in the general population using the HSE 2008. This paper showed that the proportion of activity that comes from sports and exercise and fitness declines with age, and is particularly low in those aged above 45 years. This pattern was also apparent for each ethnic group in our study, with the proportion of activity from sports consistently lower in those aged over 55 years. Bélanger et al also found that occupational physical activity is a large contributor to total activity in men aged below 65 years. Since occupational activity has been excluded from the analysis in our study, it is possible that differences between men and women of different ethnic groups are partially accounted for by occupational physical activity, although controlling for occupational social class may have partially reduced this bias. Although we could find no UK studies that had examined all types of physical activity undertaken by ethnic minorities, there are some studies published in the USA. An empirical study based in the USA showed that socioeconomic status explained much of the difference in the amount of leisure-time physical activity between Hispanics and non-Hispanic Whites, indicating that cultural differences are not always responsible for differences in physical activity behaviour between ethnic groups. 10 Kandula and Lauderdale 11 reported that immigrant Asian Americans were less likely to participate in leisure-time physical activity compared to American-born non-Asians; 'Asian American' in their study refers to Chinese and South Asian groups. Although our paper did not differentiate by generation, the majority of those who were UK-born were under the age of 35 years.
We were unable to investigate further the types of sports practised by different ethnic groups, but the Active People Survey indicates that there are cultural preferences for certain types of leisure-time physical activities in ethnic minority groups in England. Asian people are more likely to have participated in cricket and the gym, and weight-training and basketball are all popular among ethnic minority communities in the UK. 12 South Asian women aged 16-34 years all play less sports as a proportion of total activity as compared to White British women of the same age; it is possible that South Asian women do not have as much access as women in the White British population to the sports they may prefer, such as those mentioned. Since ethnic groups tend to cluster in geographical areas, 13 it is also possible that local facilities influence the types or amounts of sports that ethnic minorities in England play.
Strengths and limitations
To the best of our knowledge, this is the first study to assess at a population level the different types of physical activities undertaken by South Asian groups in the UK. By combining two large nationally representative data sets, we have been able to analyse ethnic groups separately by sex and age. Many studies combine Indians, Pakistanis and Bangladeshis as 'South Asian' for analysis in order to boost the sample size and thus the power of the results; however, this results in a limited ability to usefully interpret the results. In the UK, the socioeconomic profiles of Indians, Pakistanis and Bangladeshis are quite different, as are their main religious identities. Both these factors have a high potential to affect physical activity behaviour, and we have shown that there are differences in the physical activity behaviour of these three ethnic groups, only some of which are explained by occupational social class.
Some main limitations of this study are rooted in the nature of the HSE 1999 and 2004 surveys. First, the data come from studies published in 1999 and 2004 and the physical activity profiles of ethnic groups could have changed over the past decade. Second, the physical activity questions included in the 1999 and 2004 HSE surveys were limited in their scope, offering only a self-reported measure for physical activity, which may introduce recall bias. The nature of the question 'how many days in the past 4 weeks have you done (insert activity name here) for 30 min or more?' does not allow for an accurate calculation of the number of minutes of moderate to vigorous physical activity, as the more recent physical activity surveys do. There is also a possibility of misclassification bias in the chance of there being systematic differences between ethnic minorities in how they self-report physical activity; it is difficult to gain population-level information on this; however, there is some evidence that South Asians may under-report their total levels of activity. 14 Studies done in other populations 7 15 were able to use more domains of physical activity, but in this study only the four broad domains of housework, DIY, walking and sports could be included for analysis. Ideally, 'sports' could have been broken down into types of sport, such as in the paper by Bélanger et al, and 'walking' could have been broken down into 'walking for leisure' and 'active transport'. Occupational physical activity was not included in this study, which could affect the proportions of physical activity types contributing to total physical activity. However, the inclusion of occupational social class in the regression analyses should go some way towards assessing the contribution of manual work towards total physical activity.
Owing to sample size limitations, we were unable to stratify the results by occupational social class in addition to ethnic group, sex and age, or adjust for other socioeconomic variables. We would recommend that future research explore how much of the differences between ethnic groups can be explained by socioeconomic status and environmental factors relating to deprivation.
CONCLUSIONS
This analysis shows that while South Asian ethnic minority groups in England are active in different ways from the White British population, and from each other, there are also some similarities.
It is important to understand the different ways in which ethnic groups are active, as this allows physical activity interventions to be tailored appropriately, while in the cases where activity patterns are similar to the majority population, tailored interventions may be unnecessary. Activity patterns change with age, as has been shown in the general population, indicating that age-appropriate interventions are necessary for all South Asian groups. It is also possible that some of these age differences are due to generational status, with UK-born ethnic groups having very different childhood experiences from ethnic groups who were born in other countries. This analysis, however, cannot provide detailed information on why age differences in physical activity patterns exist. It is likely to be due to factors that change throughout the life course, such as health, income and leisure-time, but it may also be due to differences in early childhood experiences, which stay with people throughout life.
Understanding the role of individual social class in physical activity patterns is also important, especially as some ethnic groups are mainly in the lower social classes. Social class had some impact on physical activity patterns, but it is difficult to know whether these differences are due to occupation, income or education levels. As some South Asian ethnic groups often live in deprived areas, and people in lower social classes frequently live in deprived areas, future research should explore whether local facilities and resources or individual socioeconomic status factors contribute to differences in physical activity profiles between South Asian ethnic groups.
What is already known on this subject ▸ Physical inactivity is a risk factor for cardiovascular diseases, some cancers and musculoskeletal disorders. ▸ UK South Asians are less physically active than the White British population.
What this study adds ▸ We used a nationally representative survey to show that physical activity patterns differ between Indian, Pakistani and Bangladeshi ethnic groups in England. At all ages, sports contributes a lower proportion to total activity in Indian, Pakistani and Bangladeshi women compared to White British women, but Indian women aged 16-34 years have a similar proportion of total activity from walking as White British women in the same age group. We recommend that further research should be done on the types of physical activities available to and enjoyed by UK South Asian communities.
Twitter Follow Prachi Bhatnagar at @prachib2 Contributors PB analysed the data and wrote the paper. NT helped design the analysis and critically reviewed the manuscript. CF and AS critically reviewed and contributed to the manuscript.
Funding This research was funded by the British Heart Foundation, grant number 006/P&C/CORE/2013/OXFSTATS.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Open Access This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/

Application of Modified Derived Equations of Motion of Respiratory Mechanics in the Interpretation of Ventilator Graphics
Ventilator Graphics is an indispensable tool at the bedside for monitoring of mechanical ventilation. Amid the COVID-19 critical situation, ventilators play a major role in the management of these patients. Although continuous advancement in mechanical ventilator technology takes place, understanding and interpretation of Ventilator Graphics at the bedside is considered the most important in the management of intensive care unit (ICU) patients. [I] The basic mathematical model of breathing mechanics considers the respiratory system as a single-compartment model relating the pressure, volume and flow during ventilation. This is known as the Equation of Motion for the respiratory system or the force balance equation. [II,III] The movement of air takes place between the atmosphere and the alveoli inside the lungs during breathing. This movement is driven by the pressure gradient between these two sites and hindered by the presence of airway resistance. The amount of air flow through an airway in a given period of time depends on this pressure gradient and the airway resistance. [III]
INTRODUCTION
Ventilator Graphics is an indispensable tool at the bedside for monitoring of mechanical ventilation. Amid the COVID-19 critical situation, ventilators play a major role in the management of these patients. Although continuous advancement in mechanical ventilator technology takes place, understanding and interpretation of Ventilator Graphics at the bedside is considered the most important in the management of intensive care unit (ICU) patients.
[I] The basic mathematical model of breathing mechanics considers the respiratory system as a single-compartment model relating the pressure, volume and flow during ventilation. This is known as the Equation of Motion for the respiratory system or the force balance equation. [II,III] The movement of air takes place between the atmosphere and the alveoli inside the lungs during breathing. This movement is driven by the pressure gradient between these two sites and hindered by the presence of airway resistance. The amount of air flow through an airway in a given period of time depends on this pressure gradient and the airway resistance. [III] Inhalation is an active process that requires resistive work of breathing to overcome the frictional resistance to flow and elastic work of breathing to overcome the elastance (reciprocal of compliance) of the respiratory system. This can be compared to the pressure needed to inflate a balloon through a straw, where the pressure is needed to overcome both the resistance of the straw and the elasticity of the balloon.
[III] The energy required for inspiration is more than that required for expiration under physiological conditions because only resistive work of breathing is required for expiration. Tidal breathing during exhalation is a passive process that does not require active energy. The elastic energy stored during inspiration is partly utilized as resistive work of breathing for expiration and partly dissipated as heat energy.
[IV] The pressure required for expiratory flow to exhale the necessary tidal volume from the lungs is supplied by the potential energy in the lungs due to elastic recoil. [II,III,IV] The pressure associated with the delivery of a tidal breath is defined by the simplified equation of motion of the respiratory system (lungs and chest wall), and the parameters like pressure, volume and flow are all continuous functions of time. [II] A ventilator mode refers to the set of operating characteristics that control how the ventilator functions. It indicates the pattern of breath delivery and how the breaths are triggered, cycled and limited.
[V] If the inspiration is both triggered (started) and cycled (stopped) by the patient, then it is called a spontaneous breath. If the inspiration is either ventilator triggered or ventilator cycled or both, then it is called a mandatory breath. The spontaneous breaths may be assisted or unassisted. An assisted breath is a breath during which all or part of inspiratory (or expiratory) flow is generated by the ventilator doing work on the patient. [II] The total cycle time (TCT) or ventilator period denotes the sum of both inspiratory time and expiratory time, and it is inversely related to frequency. Minute ventilation is the tidal volume times the respiratory rate (frequency). [II] The change in volume is due to the variation in pressure, but the process is a time-consuming one. The time constant describes the speed of this process and specifies how much time is required to inhale an adequate tidal volume during inhalation and to exhale the required tidal volume during expiration. The time constant is usually defined as the time required for inflation up to 63% of the final volume or deflation by 63%. This is very useful to assess whether the respiratory system fills and empties slowly or quickly. High resistance leads to a long time constant, so the lung unit fills and empties slowly. Low compliance will result in a short time constant, so the lung unit fills and empties quickly. [VI-IX] During expiration a baseline or expiratory pressure is always measured and set relative to atmospheric pressure. In zero setting, the baseline pressure is set equal to the atmospheric pressure, and a positive value is called the positive end-expiratory pressure (PEEP). This is referred to as the baseline variable.
[II] An increased minute ventilation or obstruction (airway resistance) may cause an incomplete expiration due to inability in fully exhaling the tidal volume before the next breath. A large tidal volume requires a longer expiratory time to exhale and an increased respiratory rate will decrease the total cycle time that may shorten the expiratory time. If inspiration starts before the end of the previous expiration, some air will remain trapped inside the lungs. If allowed to equilibrate by preventing the next breath to happen, the trapped gas volume will generate a positive pressure.[X,XI,XII] This pressure is auto PEEP or intrinsic PEEP because it is not set directly by the clinician.
In the Ventilator Graphic display, curve or scalar waveforms and Ventilator loops or plots are used to analyze the respiratory system mechanics to provide the information about the patient-ventilator interaction.
[I] The aim of the current study is to derive and apply the newly modified equations of motion of the respiratory mechanics for better understanding in the interpretation of ventilator graphics.
MATERIALS AND METHODS
Compliance denotes the amount of air in ml the lungs can hold for every 1 cm H2O change in pressure. Elastance is the reciprocal of compliance. [VI,XIII,XIV,XV]
Compliance = Change in Volume in ml (ΔV) / Change in Pressure in cm H2O (ΔP)
Compliance = ΔV/ΔP ; Elastance = ΔP/ΔV
The time constant characterizes the rate of variation of a function over a period of time. The time constant is relevant when modelling a process using exponential functions, and it does not apply to constant functions. During inspiration, the time constant can only be evaluated in pressure controlled ventilation because the inspiratory flow-time waveform will be in exponential form. During exhalation, the time constant can be evaluated regardless of the ventilation mode because the expiratory flow-time waveform is an exponential function for passive expiration. So, the expiratory time constant is very useful for assessing the overall respiratory mechanics. [II,XVI] The inspiratory time (TI) is to be represented in terms of the inspiratory time constant (τi or RC ins) and similarly the expiratory time (TE) is to be represented in units of the expiratory time constant (τe or RC exp). Their calculated ratios, namely TI/RC ins and TE/RC exp, are used to assess the completion of the process of inspiration and expiration. A ratio of 5, 4, 3, 2 and 1 denotes a completion of the process of 99.3%, 98.2%, 95.0%, 86.5% and 63.2% respectively. The inspiration process will increase by these percentages and the expiration process will decrease by these percentages (clearly shown in figure 1). [VI,XIV]
Figure 1. Time Constant for Exponential Functions
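The completion percentages quoted above follow directly from the exponential charge/discharge law: after n time constants, a fraction 1 − e^(−n) of the total volume change is complete. A quick numerical check:

```python
# Completion fraction of an exponential filling/emptying process after
# n time constants: 1 - e^(-n).
import math

for n in (1, 2, 3, 4, 5):
    completion = 100 * (1 - math.exp(-n))
    print(f"t = {n} time constant(s): {completion:.1f}% complete")
# 1 -> 63.2%, 2 -> 86.5%, 3 -> 95.0%, 4 -> 98.2%, 5 -> 99.3%
```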
The time constant, measured in seconds, is calculated as the product of resistance (R) and compliance (C): time constant (τ) = R X C.
EQUATION OF MOTION OF RESPIRATORY SYSTEM
The equation of motion of the respiratory system, or force balance equation, given below is also a differential equation.
[II] Pressure as a function of time, P(t), is related to the volume and flow with constant coefficients (elastance and resistance).
The volume as a function of time, represented by v(t), is equal to the constant (or average) flow multiplied by time. The flow is calculated using the derivative of volume v(t) with respect to time t, which is denoted by dv/dt. [II] The volume curve is usually not measured directly but is derived from the flow measurement as the area under the flow-time curve using integral calculus. [II,XVI] The equation of motion of respiratory mechanics in simplified form is given below.
PAW = Flow X Resistance + Elastance X Tidal Volume
PAW = F X R + E X TV
PAW = PRES + PEL
PRES = F X R ; PEL = E X TV
Where PAW is the pressure generated either by the ventilator or the muscle.
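A minimal numerical sketch of the simplified equation of motion, PAW = F X R + E X TV. The values below are representative adult magnitudes chosen for illustration, not taken from the paper:

```python
# Simplified equation of motion: PAW = F*R + E*TV, with E = 1/C.
R = 10.0       # airway resistance, cmH2O/(L/s)  (assumed value)
C = 0.05       # respiratory system compliance, L/cmH2O  (assumed value)
E = 1.0 / C    # elastance, cmH2O/L
F = 0.5        # inspiratory flow, L/s
TV = 0.5       # tidal volume, L

P_res = F * R          # resistive pressure component
P_el = E * TV          # elastic recoil pressure component
P_aw = P_res + P_el    # total airway pressure above baseline
print(P_res, P_el, P_aw)   # 5.0 + 10.0 = 15.0 cmH2O
```

Doubling the flow raises only the resistive term, while doubling the tidal volume raises only the elastic term, which is the decomposition PAW = PRES + PEL used throughout the derivations below.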
DERIVATIONS OF EQUATIONS OF MOTION
At any moment during inspiration, the airway pressure must exactly balance the forces opposing lung and chest wall expansion. The opposing pressures are the sum of the flow resistive pressure (PRES), elastic recoil pressure (PEL) and inertance pressure (Pinertance) of the respiratory system. [III,XIII,XIV]
PAW = PRES + PEL + Pinertance
The inertial forces are usually negligible during conventional ventilation. The inertance pressure (Pinertance) of the respiratory system can be omitted from the equation and the equation of motion is simplified as given below.
PAW = PRES + PEL
In the presence of positive end expiratory pressure(PEEP), the equation of motion of the respiratory system is written as follows.
PAW = PRES + PEL + PEEP
The ventilator will generate a positive pressure and the gas will flow from higher to lower pressure. The muscle will generate a negative pressure at the other end so the gas will flow through the pressure gradient.
[XVII] So the above equation can be written as follows using the pressure generated by the ventilator and the muscle.
NEWLY MODIFIED DERIVED EQUATIONS OF MOTION
Let the End expiratory alveolar volume be denoted by V and the amount of tidal volume inspired during inhalation be denoted by Δ V. Then [V + Δ V] denote End inspiratory alveolar volume. Let the amount of tidal volume expired during exhalation be denoted by dV. If the amount of tidal volume inhaled(Δ V) and the amount of tidal volume exhaled(dV) are equal then the difference between these values will be zero. But under certain conditions, these volumes may differ and their difference will increase. Decreased expiratory flow increases the residual volume that results in air trapping which may lead to a decreased exhaled tidal volume. If there is leakage then exhaled tidal volume will be grossly lower than the amount of inhaled tidal volume. [II,XVI]
PAW - PEEP = PRES + PEL
Inspiration is an active process that needs energy for both resistive and elastic work. The resistive pressure gradient depends on the product of flow and resistance (R), and the elastic pressure gradient depends on the product of elastance (E) and the amount of tidal volume (TV) inhaled. In the absence of Auto PEEP, ΔP = iF X (R + E X Ti), where ΔP = PAW - Set PEEP.
iF = [(ΔP - R X rF) X C] / (τi + Ti)
In the absence of Auto PEEP, the residual flow rF is zero, so the above equation becomes iF = (ΔP X C) / (τi + Ti). Let the amount of tidal volume inspired during inhalation be denoted by ΔV; the above equation can then be written in terms of ΔV as follows.
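The two forms of the derived inspiratory-flow relation are algebraically equivalent: starting from ΔP = iF X (R + E X Ti) + R X rF, substituting E = 1/C and τi = R X C gives iF = (ΔP - R X rF) X C / (τi + Ti). A numerical check with assumed values:

```python
# Check that solving dP = iF*(R + E*Ti) + R*rF for iF agrees with the
# derived form iF = (dP - R*rF)*C / (tau_i + Ti), where tau_i = R*C.
# All values are illustrative.
R = 10.0     # resistance, cmH2O/(L/s)
C = 0.05     # compliance, L/cmH2O
E = 1.0 / C  # elastance, cmH2O/L
Ti = 1.0     # inspiratory time, s
rF = 0.1     # residual flow, L/s (nonzero -> auto PEEP present)
dP = 18.0    # pressure gradient PAW - set PEEP, cmH2O

tau_i = R * C
iF_direct = (dP - R * rF) / (R + E * Ti)         # solved algebraically
iF_derived = (dP - R * rF) * C / (tau_i + Ti)    # paper's derived form
assert abs(iF_direct - iF_derived) < 1e-12
print(iF_direct)   # ~0.567 L/s
```

Setting rF = 0 recovers the no-auto-PEEP case, iF = dP X C / (tau_i + Ti).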
DERIVATION OF INSPIRATORY FLOW WITH RESISTANCE AND PRESSURE GRADIENT
In the absence of Auto PEEP, residual flow is zero. So, the above equation is reduced to the following equation.
DERIVATION OF EXPIRATORY FLOW WITH RESISTANCE AND PRESSURE GRADIENT
Expiration is a process that needs energy only for the resistive work which is provided by the elastic energy stored during inspiration.
[IV] End expiratory alveolar pressure is the product of the end expiratory alveolar volume (V) and elastance (E). End inspiratory alveolar pressure is the product of the end inspiratory alveolar volume [V + ΔV] and elastance (E). The required passive elastic recoil pressure (PEL) is calculated as the difference between the end inspiratory alveolar pressure and the end expiratory alveolar pressure. At each moment in time, the pressure necessary to cause expiratory flow is equal to the pressure stored in the lungs due to elastic recoil. The negative sign indicates that flow during expiration is in the opposite direction of inspiration.
[II] The negative sign can be omitted to compare only the magnitude of the pressure. In the absence of Auto PEEP, the residual flow is zero, so the above equation becomes the following.
It is very clear from the above relation that the expiratory flow is driven by the gradient between alveolar pressure and set PEEP and decreased by resistance.
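The relation stated above can be sketched numerically: in passive expiration without auto PEEP, the flow magnitude is the alveolar-pressure-to-set-PEEP gradient divided by resistance. The values below are illustrative assumptions:

```python
# Passive expiratory flow magnitude: eF = (P_alv - PEEP) / R.
# Illustrative values, not measurements.
R = 10.0        # expiratory resistance, cmH2O/(L/s)
P_alv = 15.0    # end-inspiratory alveolar pressure, cmH2O
PEEP = 5.0      # set PEEP, cmH2O

eF = (P_alv - PEEP) / R    # initial expiratory flow magnitude, L/s
print(eF)                  # 1.0 L/s
```

Doubling R halves the flow, which is why obstruction prolongs expiration and predisposes to air trapping, as discussed in the auto PEEP section below.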
DERIVATION OF END EXPIRATORY ALVEOLAR PRESSURE, PEEP & AUTO PEEP
The pressure necessary to cause expiratory flow is equal to the pressure stored in the lungs due to elastic recoil. [II,IV]
DERIVATION OF EXPIRATORY TIME CONSTANT (τe)
The pressure necessary to cause expiratory flow in the presence of auto PEEP is as follows.
The difference between the inhaled tidal volume and the exhaled tidal volume will be the normal resting lung volume. It is very clear that an increased resting lung volume results if the exhaled tidal volume is lower than the inhaled tidal volume.
TOTAL WORK DONE DURING INSPIRATION
Total pressure required to inflate the lung is the sum of the pressure required to overcome the airway resistance and the elastance of the respiratory system. [II,III,IV]
PAW - Set PEEP = iF X (R + E X Ti) + R X rF
The above relation shows that the pressure gradient, increased above set PEEP, is necessary for the inspiratory flow. The work required to deliver a tidal breath during inspiration is the product of tidal volume (TV) and airway pressure. [II,XIV]
Work done = Pressure X Change in volume
WTOT = [iF X (R + E X Ti) + R X rF] X TV
The above equation represents the total work done to overcome the resistive and elastic elements of the respiratory system.
[VIII] Also, the work done is increased in the presence of auto PEEP, which is generated by the product of residual flow and resistance.
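The total-work expression above can be evaluated directly. With rF = 0 the auto PEEP term vanishes; any nonzero residual flow raises the pressure gradient, and hence the work, by R X rF X TV. Values below are assumed for illustration:

```python
# Total inspiratory work: W_tot = [iF*(R + E*Ti) + R*rF] * TV.
# Illustrative values (constant-flow breath, no auto PEEP).
R = 10.0       # resistance, cmH2O/(L/s)
C = 0.05       # compliance, L/cmH2O
E = 1.0 / C    # elastance, cmH2O/L
Ti = 1.0       # inspiratory time, s
iF = 0.5       # inspiratory flow, L/s
rF = 0.0       # residual flow; set > 0 to model auto PEEP
TV = iF * Ti   # tidal volume delivered at constant flow, L

P_total = iF * (R + E * Ti) + R * rF   # total pressure gradient, cmH2O
W_tot = P_total * TV                   # work, cmH2O * L
print(P_total, W_tot)                  # 15.0 cmH2O, 7.5 cmH2O*L
```

Note that 1 cmH2O X L is about 0.098 J, so this sketch reports work in the same pressure-volume units used in the paper's figure 3.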
WORK DONE DURING EXPIRATION BY PASSIVE ELASTIC RECOIL PRESSURE
Work done during inhalation is stored as potential energy which is recovered during exhalation. Elastic energy due to the elastic recoil of the lung is stored as potential energy which provides the necessary pressure required for exhaling the tidal volume.
[IV] The pressure necessary to cause expiratory flow in the presence of auto PEEP is as follows. The work required to exhale a tidal volume during expiration is the product of the amount of tidal volume exhaled (dV) and the passive elastic recoil pressure gradient. The work done during breathing is increased by resistance, as shown in figure 3 using the pressure-volume work during inspiration and expiration.
RESULTS
The novel derived modified equations of motion of respiratory mechanics (five equations) are tabulated in table 1. The passive elastic recoil pressure (PEL), end expiratory alveolar pressure, work done during inspiration and work done by the passive elastic recoil of the lung during expiration are tabulated in table 2. These equations are represented in two forms, one in the presence of auto PEEP and the other in the absence of auto PEEP. These are to be considered as differential equations and not as algebraic equations.
DISCUSSION
Ventilators play a major role in the management of intensive care unit patients. The ventilator graphical tool helps in monitoring mechanical ventilation at the bedside. A lot of advancement in mechanical ventilator technology has taken place, yet understanding and interpreting Ventilator Graphics at the bedside remains challenging.
[I] The Equation of Motion for the respiratory system is the basic mathematical model of breathing mechanics.
[II] The understanding and application of the various physical concepts involved in it plays a vital role in the management of these patients.
APPLICATION OF THE NEWLY DERIVED MODIFIED EQUATIONS
The application of the derived modified equations of motion of respiratory mechanics is discussed in detail below. The total work done during inspiration includes both resistive and elastic work. It is decreased by compliance and increased by the inspiratory resistance, the elastance, and the auto PEEP generated by the residual flow. The pressure gradient required for the inspiratory flow is decreased by the set PEEP and increased by the resistance at the start of inspiration.
ΔP = iF(R + E × Ti) + R × rF, where ΔP = PAW − set PEEP.

In the presence of auto PEEP, the pressure gradient required for causing the flow will be higher. In the absence of auto PEEP, the residual flow is zero and the above equation reduces to the following form.
ΔP = iF(R + E × Ti)

ΔP = iF(R + E × 0) at time Ti = 0, so ΔP = iF × R, or iF = ΔP/R

If the resistance is increased, then the peak inspiratory flow will decrease. In constant flow mode of ventilation, the peak inspiratory flow rate is seen at the beginning of the inspiratory cycle, depending on the set inspiratory rise time (normal, too fast or too slow) in the ventilator; the inspiratory flow is then constant throughout the inspiratory cycle and ceases once the preset or target value is achieved. The inspiratory rise time determines the amount of time it takes to reach the desired airway pressure or peak flow rate (shown in figure 4).
[XV]It determines the rate at which the ventilator achieves a target pressure in pressure control and pressure support modes or flow rate in volume control modes.
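As a small numerical illustration of the relation iF = ΔP/R at the very start of inspiration, the sketch below uses illustrative values, not patient data.

```python
def peak_inspiratory_flow(delta_p, R):
    """Peak flow at Ti = 0 with no auto PEEP: iF = dP / R.

    delta_p: pressure gradient above set PEEP (cmH2O),
    R: airway resistance (cmH2O/(L/s)); returns flow in L/s.
    """
    return delta_p / R

# Doubling the resistance halves the peak inspiratory flow
print(peak_inspiratory_flow(20.0, 10.0))  # 2.0 L/s
print(peak_inspiratory_flow(20.0, 20.0))  # 1.0 L/s
```

This mirrors the statement above that an increased resistance decreases the peak inspiratory flow for a fixed pressure gradient.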
Application of Modified Derived Equations of Motion of Respiratory Mechanics in the Interpretation of Ventilator Graphics
International Journal of Clinical Chemistry and Laboratory Medicine (IJCCLM) Page 30
Figure 4. Normal Inspiratory Rise Time
During the initial portion of inspiration, the pressure gradient is a function of flow and resistance. Thus the pressure-time waveform in the constant flow mode of ventilation begins with an exponential rise to a first step. If the resistance is higher, then the rise in this step will be much higher (figure 5A). The second portion of the pressure-time waveform is a linear increase to the peak inspiratory pressure (PIP), representing the peak dynamic pressure that includes both the resistive and elastic components. [XIV,XVI,XVIII] As the inspiratory time Ti increases, the term {R + E × Ti} will increase. The second portion of the pressure-time waveform depends mainly on the elastance, which is the reciprocal of compliance. If the compliance is decreased, then the elastance is increased and so the slope of the second portion is increased (figure 5B).
Figure 5B. Decreased Compliance
If the compliance is increased, then the elastance is decreased and so the slope of the second portion is decreased. The increased resistance and decreased compliance seen in the pressure- and volume-time waveforms, in both the constant flow and constant pressure modes of ventilation, are compared with their normals in figures 5A and 5B respectively.
iF = [(ΔP − R × rF) × C] / (τi + Ti)
Higher pressure gradient is required for causing the inspiratory flow in the presence of auto PEEP. The residual flow is zero in the absence of Auto PEEP and so, the above equation is reduced to become the following equation.
In constant pressure mode of ventilation, the Inspiratory flow is directly proportional to the pressure gradient (driving force) and the compliance and inversely proportional to the Inspiratory Time Constant. The peak inspiratory flow is seen at the start of inspiration (depending on the set inspiratory rise time) due to maximum pressure gradient and as the Inspiratory time increases, the pressure gradient is decreased due to the inspired tidal volume, so the inspiratory flow decreases till it reaches the baseline. The inspiratory flow will be higher if the patient has increased compliance(decreased elastance) and the inspiratory flow will be lower If the patient has decreased compliance (increased elastance)as depicted in the figure 6.If the inspiratory time constant is shorter, the tidal volume is inhaled quickly or short inspiratory time is sufficient to inhale the tidal volume. If the inspiratory time constant is longer, the tidal volume is inhaled slowly or long inspiratory time is needed to inhale the required tidal volume. As the inspiratory time constant (τi) increases, the inspiratory flow will decrease and vice versa.
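The decelerating flow profile described above can be sketched with a simple single-compartment exponential model; this is an illustrative assumption with hypothetical parameter values, not the paper's derived equation.

```python
import math

def inspiratory_flow(t, delta_p, R, tau):
    """Single-compartment sketch for constant pressure mode:
    iF(t) = (dP / R) * exp(-t / tau), with tau = R * C."""
    return (delta_p / R) * math.exp(-t / tau)

R, C = 10.0, 0.05            # cmH2O/(L/s), L/cmH2O (illustrative)
tau = R * C                  # inspiratory time constant = 0.5 s
f0 = inspiratory_flow(0.0, 20.0, R, tau)   # peak flow at the start
f1 = inspiratory_flow(tau, 20.0, R, tau)   # one time constant later
print(f0)       # 2.0 L/s at the start of inspiration
print(f1 / f0)  # ~0.368: the flow has decayed by ~63% after one tau
```

A longer time constant (higher R or C) slows the decay, matching the text: more inspiratory time is then needed to inhale the tidal volume.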
Figure 6. Decreased Compliance - Flow Time Waveform
The inspiratory flow requires a higher pressure gradient in the presence of auto PEEP. If the residual flow is zero (absence of auto PEEP), the above equation reduces to the following equation.
iF = (1/R) × {ΔP − E × ΔV}
In constant pressure mode of ventilation, the inspiratory flow is directly proportional to the pressure gradient (driving force) and inversely proportional to the resistance. The inspiration starts at the end expiratory alveolar volume, denoted by V. The pressure gradient at the start of inspiration is denoted by {ΔP − E × V}; the pressure gradient is maximum at this starting point, and so the peak inspiratory flow is always seen at the beginning of the inspiration. The pressure gradient at the end of inspiration is denoted by {ΔP − E × ΔV}; as the tidal volume (ΔV) is inhaled during inspiration, the pressure gradient {ΔP − E × ΔV} decreases, and the flow ceases when the pressure gradient is zero, {ΔP = E × ΔV}. The inspiratory flow will decrease with an increase in resistance, so more inspiratory time is needed to deliver the required volume; otherwise the inspired tidal volume will be decreased compared with normal-resistance patients, as clearly depicted in figure 7. [XIV]
Let the inspired tidal volume be denoted by ΔV. If the end expiratory alveolar volume is denoted by V, then [V + ΔV] denotes the end inspiratory alveolar volume. The pressure gradient is maximum at the beginning of expiration (end of inspiration), and then, as the tidal volume is exhaled, the pressure gradient decreases gradually. The peak expiratory flow will be seen at the beginning of expiration, and then the lung volume decreases towards the resting volume at the end of expiration. [II,XIV,XV,XVI] If airway obstruction is present, then the peak expiratory flow will be reduced, and it decreases very slowly towards the resting volume, which is clearly noticed as scooping in the flow-volume loop shown in figure 9.
Figure 9. Reduced Expiratory Flow Rate - Flow Volume Loop
The passive elastic recoil pressure (PEL) required for causing flow during expiration is calculated as the difference between the end inspiratory alveolar pressure and the end expiratory alveolar pressure (PEL = E × ΔV). The resistive work of expiration is provided by the elastic energy stored during inspiration.
[IV] The pressure necessary to cause expiratory flow in the presence of auto PEEP is shown in the equation below.

E × ΔV = R × eF + auto PEEP + set PEEP

If the tidal volume inhaled during inspiration is fully exhaled by the expiratory flow, then there is no residual flow. The pressure necessary to cause expiratory flow in the absence of auto PEEP is given in the equation below.

E × ΔV = R × eF + set PEEP
At the end of expiration, the expiratory flow (eF) is zero and the volume is the resting volume, because the tidal volume has been exhaled. Functional residual capacity (FRC) is the volume of gas that remains in the lungs at the end of expiration, which is the resting state; in the presence of positive end expiratory pressure (PEEP) it is called the end-expiratory lung volume (EELV). FRC is a lung volume measured without PEEP (at atmospheric pressure). PEEP contributes to an increased end-expiratory lung volume (EELV) by recruitment of previously non-aerated alveolar units and distension of previously open alveolar units. If no recruitment occurs, then the volume increase produced by the set PEEP will be the product of the compliance and the set PEEP (i.e., C × set PEEP). When there is a larger change in volume, the extra volume gain is due to recruitment. [XIX-XXII]

E × V = auto PEEP + set PEEP

The end expiratory alveolar pressure and volume depend on both the externally set PEEP and the auto-generated PEEP. The set PEEP increases the volume by recruitment of alveoli as well as distension of previously open alveoli. If there is no auto PEEP and the set PEEP is zero, then the end expiratory alveolar pressure will be at the atmospheric pressure level.
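The distension-only volume gain mentioned above (ΔV = C × set PEEP) can be sketched numerically; the compliance and PEEP values below are illustrative assumptions.

```python
def volume_gain_without_recruitment(C, set_peep):
    """EELV increase expected from distension alone: dV = C * set PEEP.

    C: compliance (L/cmH2O), set_peep: applied PEEP (cmH2O); returns L.
    A measured gain well above this value suggests alveolar recruitment.
    """
    return C * set_peep

expected = volume_gain_without_recruitment(C=0.05, set_peep=10.0)
print(expected)  # 0.5 L expected from distension of already-open units
```

Comparing a measured end-expiratory volume change against this product is the comparison described in the text for distinguishing distension from recruitment.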
The relationship between the expiratory time constant, expiratory flow, residual flow and the alveolar volume at the end of inspiration and expiration is given in the below equation.
If the inhaled tidal volume is completely expelled by the exhaled tidal volume, then the resting lung volume is normal. If the inhaled tidal volume is not completely expelled by the exhaled tidal volume due to incomplete expiration, then the resting lung volume is increased. When the expiration time is insufficient, the expiratory flow will decrease and consequently the residual flow will increase, leading to an increase in end expiratory lung volume that results in dynamic hyperinflation. This prevents the respiratory system from returning to its resting end expiratory equilibrium volume between breath cycles. [X,XI,XII] This is better managed by prolonging the expiratory time (e.g., by increasing the inspiratory flow or reducing the breathing frequency) to increase the ratio between the expiratory time and the time constant, rather than by decreasing the inspired tidal volume. [X,XI,XII,XV] The expiratory flow is a passive process, so the ratio between the expiratory time and the expiratory time constant is very important in assessing the respiratory mechanics. After one time constant (a ratio of one between the expiratory time and the expiratory time constant), the expiratory flow will have decreased by 63.2% to reach a value of 36.8%. After five time constants (a ratio of 5), the expiratory flow will have decreased by 99.3% to reach a value of 0.7%.
[VI,XIV] The expiratory flow will not reach the zero reference baseline if this ratio decreases. If the expiratory time is not sufficient, then the amount of tidal volume exhaled will be incomplete, resulting in trapping of air, which is clearly shown in figure 10. The inhaled tidal volume is the product of the constant (or average) inspiratory flow and the inspiratory time. If the inspiratory flow is increased, then the required tidal volume is inhaled within a short inspiratory time, which helps in increasing the expiratory time. The frequency and the total cycle time duration are inversely related, so decreasing the frequency will increase the total cycle time (TCT) duration, which may help in increasing the expiratory time depending on the I:E ratio. The inspiratory time and expiratory time can be adjusted to maintain a required I:E ratio.
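The time-constant percentages discussed above follow directly from passive exponential emptying, as this short check shows.

```python
import math

def remaining_flow_fraction(n):
    """Fraction of the initial expiratory flow remaining after n
    expiratory time constants, for passive exponential emptying."""
    return math.exp(-n)

for n in (1, 3, 5):
    left = remaining_flow_fraction(n)
    print(f"after {n} tau: {100 * (1 - left):.1f}% decayed, {100 * left:.1f}% left")
# after 1 tau: 63.2% decayed, 36.8% left
# after 5 tau: 99.3% decayed, 0.7% left
```

An expiratory time of at least three to five time constants is therefore what this simple model requires for near-complete exhalation of the tidal volume.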
The current research study clearly discussed the derivation and the application of these newly derived modified equations of motion of the respiratory mechanics which are considered as differential equations and not algebraic equations. Some of the parameters in these equations are held constant while others are a variable quantity that changes with changes in other parameters. These novel equations may help in better understanding of the ventilator graphics that play a significant role in management of critically ill patients.
CONCLUSION
The ventilator graphics are an indispensable tool that plays a significant role in saving the lives of mechanically ventilated patients. The study concludes that understanding the various physical concepts involved in mechanical ventilation, and applying these derived modified equations of motion of respiratory mechanics at the bedside for monitoring these patients, may provide the better understanding required for the interpretation of invaluable ventilator graphics.
"year": 2021,
"sha1": "f6cb8af81ab23a62f21f5cfe222078b78ce5cf7e",
"oa_license": null,
"oa_url": "https://doi.org/10.20431/2455-7153.0701003",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8ec97968492cf06dbdf4a8f015bd919235c8dc7b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
A review of Einstein Cartan Theory to describe superstrings with intrinsic torsion
This paper reviews the Einstein Cartan theory (ECT), the famous extension of general relativity (GR) in presence of spacetime torsion. The vacuum equations are derived step by step. Vielbein formulation is discussed for determining the field equations in presence of matter. This review would be easily comprehensible for any student familiar with general relativity. Further, ECT is used to describe superstrings with intrinsic torsion, assuming a $D_p$-brane in presence of a curved background of the NS-NS Kalb-Ramond field. D-brane worldvolume is a flat spacetime governed by the Dirac-Born-Infeld (DBI) action. In presence of the dynamical NS-NS $B$-field, the contortion tensor equals the totally antisymmetric torsion. Using this, the form of the $D_p$-brane action in presence of torsion is determined.
Motivation
Einstein Cartan Theory (ECT) is an extension of General Relativity (GR), the simplest theory of gravity, which has curvature as the only geometric property of spacetime. General relativity is a classical theory designed by Einstein on a pseudo-Riemannian manifold. ECT, on the other hand, has both curvature and torsion as geometric properties of the spacetime.
The motivation to devise this extension arose from comparing general relativity with the theories of the other three fundamental interactions. The strong, weak and electromagnetic forces are described by quantum relativistic fields in flat Minkowski space; the spacetime itself is unaffected by these fields. On the contrary, gravitational interactions modify the geometrical structure of spacetime, and they are represented not by another field but by the distortion of the geometry itself [1]. While three-fourths of modern physics, acting at the microscopic level, is described in the framework of flat spacetime, the remaining one-fourth, the macroscopic physics of gravity, needs the introduction of a dynamical or geometrical background. This situation is inadequate, because three fundamental interactions are completely disjoint from the remaining one. So a theory needs to be formulated which can, in some limit, give a common description of all four. In other words, the problem is to consider elementary particle interactions in a curved spacetime. A big drawback of general relativity is that it treats matter as a mass-energy distribution, whereas matter also carries spin density.
For macroscopic objects, spin averages out in general if we ignore objects like ferromagnets but at microscopic level, spin plays an important role. Since gravity is the weakest interaction at low energy, it appears that gravitation has no effect on the elementary particle interactions. However, when we consider microphysics in curved spacetime, we have some important phenomena like neutron interferometry which can be used to observe the interaction of neutrons with Earth's gravitational field [2]. Macroscopically, spin density plays significant role in early universe (big bang) and superdense objects like neutron stars and black holes.
A mass distribution in a spacetime is described by the energy-momentum tensor while a spin distribution in a field theory is described by the spin density tensor. So at the microscopic level, energy-momentum tensor is not sufficient to characterize the matter sources but the spin density tensor is also needed. However, if we consider a system of scalar fields depicting spinless particles, the spin density tensor vanishes.
Similar to the mass-energy distribution (a property of matter), which produces curvature in spacetime (a geometric property), spin density must also couple to some geometric property. That property should be torsion. ECT is GR extended to include torsion. It is also called the ECSK theory (after Einstein, Cartan, Sciama and Kibble, who laid the foundations of this theory), and is briefly denoted as U4 theory, where U4 is a four-dimensional Riemann-Cartan spacetime. Torsion leads to deviations from general relativity only in exceptional situations like the big bang, gravitational collapse and microscopic physics.
Spacetime torsion
In GR, we interpret gravity not as a force but as the curvature or bending of spacetime produced by a mass-energy distribution. We have the constraint of a torsion-free spacetime, and hence the connection is symmetric. In ECT, it is assumed that, in addition, a spin density of matter produces torsion in the surrounding spacetime, and the connection is in general asymmetric. Torsion is then the antisymmetric part of the connection. The torsion Q_μν^α is a third-rank tensor, antisymmetric in its first two indices. It has D²(D − 1)/2 independent components in a D-dimensional spacetime.
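For reference, the statement that torsion is the antisymmetric part of the connection can be written in display form; this is the standard identity, with index conventions chosen to match this review.

```latex
Q_{\mu\nu}{}^{\alpha}
  = \Gamma^{\alpha}{}_{[\mu\nu]}
  = \tfrac{1}{2}\left(\Gamma^{\alpha}{}_{\mu\nu}-\Gamma^{\alpha}{}_{\nu\mu}\right),
\qquad
Q_{\mu\nu}{}^{\alpha} = -\,Q_{\nu\mu}{}^{\alpha}.
% Antisymmetry in (\mu,\nu) leaves D^2(D-1)/2 independent components
% in D dimensions: D(D-1)/2 index pairs times D values of \alpha.
```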
Effect of torsion on the geometry of spacetime

To understand the geometrical meaning of torsion, we compare it with the intrinsic curvature of GR.
When a tangent vector is parallel transported along a closed path, it changes its direction. In the presence of torsion, however, if we try to parallel transport it along a closed path, it comes back translated with respect to its original position, i.e., the path does not close. This is illustrated in figure 1.
Metric Compatibility
In general relativity, there are two constraints: (1) metric compatibility of the affine connection and (2) a torsion-free spacetime, so that the connection is symmetric, i.e., the Christoffel connection. If we relax both of these constraints, then what we have is a general affine manifold, A4. For A4, the affine connection is

Γ^α_μν = {^α_μν} − K_μν^α + V_μν^α

where {^α_μν} is the Christoffel symbol, K_μν^α is the contortion tensor and V_μν^α arises from the non-metricity.
Contortion tensor: Torsion appears in the linear combination known as the contortion tensor,

K_μν^α = −Q_μν^α + Q_ν^α_μ − Q^α_μν

which is antisymmetric in its 2nd and 3rd indices. Another important combination is the modified torsion tensor.

Non-metricity: In eq. (2), V_μν^α arises from the non-metricity tensor D^(A4)_α g_μν, where D^(A4) denotes the covariant derivative of the affine manifold. In a Riemann-Cartan or U4 manifold, however, only one constraint is relaxed: that of the torsion-free spacetime. The metric compatibility condition still holds in U4, i.e.,

D_α g_μν = 0

As a result of metric compatibility, unit angles and lengths are preserved. The metric is covariantly constant, so the lengths of measuring rods and the angles between two of them do not change under parallel transfer. This preserves a locally Minkowskian structure of the spacetime. Since the Riemann-Cartan manifold is unit preserving, it is also called the U4 manifold.
The metric compatibility condition in U4 also implies metric compatibility in V4, i.e., the Riemann manifold: ∇_α g_μν = 0.
Trace-free contortion tensor
Tracing the contortion tensor K_μν^α = −Q_μν^α + Q_ν^α_μ − Q^α_μν over its various indices gives the torsion vector Q_ρ. The traceless part of the contortion tensor is denoted K̃_μν^α.
Autoparallels and extremals
When we study the curves of choice in a Riemann-Cartan spacetime, we must distinguish between the two classes of curves both of which reduce to the geodesics of the Riemannian space when we set torsion equal to zero [3].
Autoparallel curves (straightest lines) are curves along which a vector is transported parallel to itself, according to the affine connection of the manifold. Imposing parallel displacement of the tangent vector along the curve, with a suitably chosen affine parameter s, we get the differential equation of the autoparallels, in which the connection appears as Γ^α_(μν) = {^α_μν} − K_(μν)^α = {^α_μν} + 2Q^α_(μν). Notice that only the symmetric (but torsion-dependent) part of the connection enters this equation, because of the symmetry of the product dx^μ dx^ν = dx^ν dx^μ.
Extremal curves (shortest or longest lines) are curves of extremal length with respect to the metric of the manifold. According to ds² = −g_μν dx^μ dx^ν, the length between two points depends only on the metric field (and not on the torsion). The differential equation for the extremals is derived exactly as in the corresponding Riemannian space. In U4, the autoparallels and extremals coincide iff the torsion is totally antisymmetric, i.e., Q_μνρ = Q_[μνρ].
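The two classes of curves compared above can be summarized in display form; the sign conventions here follow this review, with {^α_μν} the Christoffel symbol.

```latex
% Autoparallels: only the symmetric, torsion-dependent part of the
% connection survives contraction with dx^\mu dx^\nu:
\frac{d^{2}x^{\alpha}}{ds^{2}}
  + \left(\left\{{}^{\alpha}_{\mu\nu}\right\}
  + 2\,Q^{\alpha}{}_{(\mu\nu)}\right)
    \frac{dx^{\mu}}{ds}\,\frac{dx^{\nu}}{ds} = 0,
% Extremals: only the metric enters:
\frac{d^{2}x^{\alpha}}{ds^{2}}
  + \left\{{}^{\alpha}_{\mu\nu}\right\}
    \frac{dx^{\mu}}{ds}\,\frac{dx^{\nu}}{ds} = 0.
% The two coincide iff Q^{\alpha}{}_{(\mu\nu)} = 0,
% i.e. iff the torsion is totally antisymmetric.
```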
Parallel or compatible volume element in U 4 manifold
In order to define a generally covariant volume element on a manifold, it is necessary to introduce a scalar density f(x). This is done in order to compensate the Jacobian that arises from the transformation law of the usual volume element d⁴x under a coordinate transformation. In GR, the density f(x) = √−g is taken for this purpose. In V4, the volume element √−g d⁴x is said to be compatible with the connection, since the scalar density √−g is covariantly constant. The same volume element, however, is not compatible with the connection of U4. In order to define such a parallel volume element in U4 manifolds, one needs to find a covariantly constant density f(x). Such a density exists only if the torsion vector Q_μ can be obtained from a scalar potential, Q_μ(x) = ∂_μΘ(x). In this case, e^(2Θ)√−g d⁴x is the volume element compatible with the connection in Riemann-Cartan manifolds.
Covariant derivative commutator in U 4 manifold
In a torsion free space, the covariant derivatives commute in their action on a scalar field. But in the presence of torsion, the commutator acts on a scalar field φ as proportional to its first derivative Action of the commutator on a vector field V ρ is evaluated as follows, The left hand side of (18) is manifestly a tensor so R µνα ρ must be a tensor too, even though it is constructed from non-tensorial segments. Curvature tensor in U 4 has the following antisymmetry properties Antisymmetry between first two indices is easy to see from eqn.(19), simply with µ ↔ ν. To see that between the last two, consider R αβωσ = g ρσ R αβω ρ . After some algebraic manipulations and with Thus the curvature can be expressed through the Riemann tensor (of V 4 ) depending only on the metric, covariant derivative ∇ (i.e. torsionless covariant derivative) and contortion tensor. From this, we easily see R αβωσ = −R αβσω .
The Ricci tensor in U4 is asymmetric. From (21), the Einstein (Cartan) tensor is, as usual, defined by G_μν = R_μν − (1/2) g_μν R. It is also asymmetric in Riemann-Cartan space.
Ricci scalar in U4
It is useful to work out the following. With K̃_ανρ as the tracefree contortion tensor defined in eqn. (10), the curvature scalar follows from (22), and can be written in terms of the tracefree contortion tensor using eqns. (25) and (26). This shows that the tracefree contortion tensor is symmetric in its 1st and 3rd indices. Also, from eqn. (10), it is antisymmetric in its 2nd and 3rd indices. Any tensor which has such symmetry properties has all its components vanishing, as shown below. With K̃_μνα = 0 in eqn. (10), the second integral yields the equation of motion for Θ.

C. Variation of the action S w.r.t. the metric tensor g_μν. To solve the last term, recall the variation δR. In U4, with the covariantly constant density e^(2Θ)√−g, this density must be used when evaluating the integral. The first two terms are surface terms, by Gauss's divergence law in U4; assuming the variation of the field to be zero at the boundary, the variation of the surface terms vanishes. Term (ii) also vanishes, since K^α_νρ is antisymmetric in ν ↔ ρ while {^α_ρν} is symmetric in these two indices. Using δ{^α_ρν}/δg_ηκ = (1/2)(δg^αβ/δg_ηκ)(∂_ρ g_βν + ∂_ν g_βρ − ∂_β g_ρν) and, as can easily be seen, δg^αβ/δg_ηκ = −g^αω g^βλ δg_ωλ/δg_ηκ, terms (i) and (iii) in (43) are evaluated after a somewhat lengthy calculation. Finally, from eqns. (38) and (47), the equation of motion for the g_μν field follows. Eqns. (31), (35) and (49) are the U4 vacuum equations. Taking the trace of (49) and comparing it with (35) gives another form of the U4 gravity equations in vacuum. Since the equations of motion are of algebraic type, and not differential equations, torsion is clearly non-propagating. The traceless tensor K̃_αβσ = 0, and only the trace Q_μ can be non-vanishing in vacuum, outside matter distributions.
Curvature and torsion are the surface densities of Lorentz transformations and translations, respectively [4]. Variation of the Einstein-Hilbert action of U4 with respect to the metric gives an equation sourced by T_μν, the canonical stress-energy tensor. Variation with respect to the torsion tensor Q_μν^ρ gives an algebraic equation sourced by S_μν^ρ, the spin density tensor. In vacuum, or outside matter, S_μν^ρ = 0 and hence Q_μν^ρ = 0, as is seen by contracting (54).
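Schematically, the pair of field equations obtained from these two variations reads as follows; the index placements and the precise form of the modified torsion follow common conventions and are an assumption here.

```latex
% Metric variation: curvature sourced by the canonical energy-momentum
G_{\mu\nu} = \kappa\,T_{\mu\nu}, \qquad \kappa = 8\pi G,
% Torsion variation: modified torsion sourced algebraically by spin density
Q_{\mu\nu}{}^{\rho}
  + \delta^{\rho}_{\mu}\,Q_{\nu\sigma}{}^{\sigma}
  - \delta^{\rho}_{\nu}\,Q_{\mu\sigma}{}^{\sigma}
  = \kappa\,S_{\mu\nu}{}^{\rho}.
% Outside matter S_{\mu\nu}{}^{\rho}=0; contracting then forces
% Q_{\mu\nu}{}^{\rho}=0, so torsion does not propagate.
```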
Field equations in matter: Spinors in curved space
Consider a classical field ψ(x), representing matter sources in the flat Minkowski space R 4 . Its Lagrangian density L m = L m (ψ, ∂ψ, η) is assumed to depend upon the constant Minkowski metric η µν , matter field and the gradient of the matter field. When the gravitational interaction is introduced, the matter Lagrangian has to be generalized to become a scalar under general coordinate transformations x µ → x ′µ . This can be achieved by minimal coupling procedure, i.e replacing the Minkowski metric with the world metric tensor η µν → g µν and the partial derivative with the covariant one, ∂ → ∇. Also we must add to the matter Lagrangian, a kinetic term for the gravitational field, L g = R where R is the curvature scalar for U 4 .
The symmetry group of general relativity is the Lorentz group of local rotations and boosts. In special relativity, however, the group of symmetries is the global Poincaré group. Einstein Cartan theory, describing spinors in curved space, extends this symmetry group to local Poincaré transformations.
Vielbein or Cartan formulation of general relativity
Spinors transform under the spinor representation of the Lorentz group. In general, given a world tensor B_μν, its corresponding components B_ab in the flat tangent manifold can be obtained by directly contracting the indices with the vierbein fields, B_ab = e^μ_a e^ν_b B_μν, and vice versa. It is important to stress that if B_μ B_ν is a world tensor, i.e., a tensor under general coordinate transformations, then B_a B_c is a world scalar but transforms like a tensor with respect to the local Lorentz transformations; B_μ B^μ is both a world scalar and a Lorentz scalar. In the absence of gravity, the world metric tensor reduces to the Minkowski metric, g_μν = η_μν, and the vierbein field is given by e^a_μ = δ^a_μ, with inverse e^μ_a = δ^μ_a. Using the vierbein field, the Dirac matrices γ^μ(x) for the U4 manifold can be defined as γ^μ = e^μ_a γ^a, where the γ^a are the (constant) flat-space Dirac matrices.
The derivative of a geometrical object carrying Lorentz indices, which are anholonomic indices, can be made covariant under local Lorentz rotations provided that a tangent space connection ω_μ^ab is introduced; ω_μ^ab is called the spin connection or the anholonomic connection. For example, a local Lorentz contravariant vector A^b transforms as A'^b = Λ^b_c A^c, but its partial derivative does not transform like a vector. We can, however, define a Lorentz covariant derivative

D_μ A^b = ∂_μ A^b + ω_μ^b_c A^c

which transforms correctly, provided that the spin connection transforms inhomogeneously. Similarly, the covariant derivative of a Lorentz covariant vector is

D_μ A_b = ∂_μ A_b − ω_μ^c_b A_c

However, the total covariant derivative of a geometrical quantity carrying both flat and curvilinear indices must be performed using both the anholonomic connection ω_μ^ab and the holonomic connection Γ^α_μν. The resulting derivative is then covariant under both local Lorentz and general coordinate transformations. Thus the covariant derivative of the vierbein field is

D_α e^a_μ = ∂_α e^a_μ + ω_α^a_b e^b_μ − Γ^ν_αμ e^a_ν

Note that ω acts only on the flat indices while Γ acts only on the curved ones. Expression (69) transforms like a second-order covariant tensor under a general coordinate transformation and like a contravariant vector under a local Lorentz transformation. In Einstein Cartan theory, the vierbein field is assumed to be covariantly constant, D_α e^a_μ = 0, with the torsion T^a = De^a defined as the Yang-Mills curvature or field strength of the vierbein [5]. This provides a relation between the two connections ω and Γ. Moreover, from the metricity condition D_α g_μν = 0, the spin connection is constrained to be antisymmetric in its last two indices. In a Riemannian spacetime, the spin connection is not an independent field but rather a function of the vierbein and its derivatives. In the Riemann-Cartan spacetime, however, the spin connection represents independent degrees of freedom associated with the non-zero torsion.
Thus, in the presence of matter (fermions), the complete action for the Einstein Cartan theory is built from the vierbein and the spin connection, where κ = 8πG, with G the gravitational constant, and the Riemann tensor R_μν^ab(ω) is the Yang-Mills curvature or field strength of the spin connection, R = dω + ω ∧ ω.
For the Dirac field coupled to gravity with torsion, the Lagrangian density is the minimally coupled Dirac Lagrangian. In general, the energy-momentum tensor is obtained by varying the matter action with respect to the vierbein, and the spin density tensor by varying it with respect to the spin connection.

Gravitational field equations in presence of matter

Eqn. (79) says that a matter-energy distribution curves the spacetime, and eqn. (80) says that a spin density distribution sources the torsion in spacetime. However, since the field equations relate torsion algebraically to the spin sources, as seen from eqn. (54), torsion is non-propagating in Einstein Cartan theory. Thus torsion is the source of a contact interaction, i.e., a spinning particle cannot influence another spinning particle by means of the torsion of the manifold. Torsion disappears immediately outside the spinning bodies. This is one of the main characteristics of the Einstein-Cartan theory, and in this way torsion becomes physically interesting only at the microscopic level or, macroscopically, when considering extremely collapsed matter.
Nonetheless, if the gravitational Lagrangian is chosen in analogy to the standard gauge theory formalism, then we are led to a Lagrangian quadratic in the curvature. It contains a kinetic term for the torsion, and hence torsion becomes a propagating field. But a theory with such a Lagrangian differs from Einstein Cartan theory and is no longer equivalent to general relativity even when the torsion vanishes, i.e., in vacuum [1].
Superstrings with intrinsic torsion
Superstring theory is a well studied candidate for a quantum theory of gravity. Dp-branes are intrinsic to type II superstring theory, whose lowest energy state is type II supergravity. In type IIA, D0 couples electrically to C1 and D2 to C3, while D4 couples magnetically to C3, D6 couples magnetically to C1, and D8 is a domain wall. In type IIB, we have D(−1) coupling electrically to C0, D1 to C2 and D3 to C4, while D5 couples magnetically to C2, D7 couples magnetically to C0, and there is a spacetime-filling D9 brane. These couplings to the R-R potentials are the well known Wess-Zumino-type couplings. A natural electric coupling is given by ρ_p ∫ P[C_(p+1)], where ρ_p is the charge density of the brane and P[C_(p+1)] is the pullback of the (p+1)-form gauge potential onto its worldvolume [6]. A natural magnetic coupling is given in terms of the magnetic dual potential C_(7−p). Both the NS-NS and R-R closed strings propagate in the bulk of spacetime. The total action is the sum of the bulk or supergravity action, the Dirac-Born-Infeld action and the Chern-Simons terms. A constant Kalb-Ramond NS-NS B-field with components parallel to a D-brane cannot be gauged away, because whenever we vary B_MN with a gauge parameter Λ_M = (Λ_μ, Λ_m), we must simultaneously shift the gauge field A_μ on the D-brane. Here Greek indices are used for coordinates along the brane and the index m for coordinates normal to it. Thus the fully gauge-invariant combination is 2πα′F_μν + B_μν ≡ 𝓕_μν. On the D-brane, F_μν alone is not fully physical because it is not gauge invariant; 𝓕_μν is the physical field strength [7].
The Dirac-Born-Infeld (DBI) action on the brane involves the combination g µν + B µν + F̂ µν , where g µν and B µν are the components parallel to the brane, F̂ µν = 2πα′F µν , and F µν is the field strength of the gauge field living on the brane. The coefficient, the D p -brane tension, is determined at B = 0. Eq. 86 holds for slowly varying fields f, i.e., neglecting derivative terms, √κ ∂f/f ≪ 1. Here, κ = 2πα′ is a parameter defining the size of a string [8]. The corresponding two-dimensional non-linear sigma model action describes the propagation of strings in curved spacetime; the background field is understood to arise from the condensation of an infinite number of strings. Torsion is interpreted as the field strength associated with the vacuum expectation value of the antisymmetric tensor field which appears in the supergravity multiplet [9]. In the presence of the totally antisymmetric torsion on the D p -brane, the contortion tensor in eq. 3 becomes totally antisymmetric as well. Since the trace of the totally antisymmetric torsion vanishes, the Ricci scalar in eq. 26 simplifies accordingly. The F-string − D p -brane action in terms of the closed string variables g µν , B µν , g s and the commutative gauge field A µ is then a sum of the DBI action and the bulk or supergravity action of the dynamical KR field. Here, [C 2 ] = 6 − p − 1 = 5 − p. In terms of the open string variables G (NS) µν , θ µν = (B −1 ) µν , G s and the non-commutative gauge field Â µ , the D p -brane action in the presence of torsion follows, where eq. 89 has been used in the second term. The first term in eq. 91 is the open string analog of the DBI action [8]. Seiberg-Witten [8] showed that the ordinary (or commutative) Abelian gauge field A with constant curvature F and a constant NS 2-form B is equivalent to a noncommutative gauge field Â with θ = 1/B. Thus the Born-Infeld parts in eqs. 90 and 91 are equivalent. A further deformation of the D p -brane in a weakly curved NS-NS background is studied by the author in [10], where a simple heuristic derivation of the open string metric in the presence of torsion is suggested.
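For reference, the open string variables quoted above are related to the closed string ones by the standard Seiberg-Witten map (a sketch in the conventions of [8]; signs and numerical factors may differ between references):

```latex
\left(\frac{1}{g + 2\pi\alpha' B}\right)^{\mu\nu}
  = \left(G^{(NS)\,-1}\right)^{\mu\nu} + \frac{\theta^{\mu\nu}}{2\pi\alpha'} ,
\qquad
G^{(NS)}_{\mu\nu} = g_{\mu\nu} - (2\pi\alpha')^{2}\left(B\,g^{-1}B\right)_{\mu\nu} .
```

The symmetric part of the left-hand side gives the inverse open string metric and the antisymmetric part gives the noncommutativity parameter; in the zero-slope limit with the B-field dominant, θ µν → (B −1 ) µν , which is the identification θ µν = (B −1 ) µν used above.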
Conclusion
We have seen that the Einstein-Cartan theory is a theory of gravitation that differs minimally from general relativity. In the ECT field equations, spin is algebraically related to torsion, so the torsion is non-propagating. Contracting the torsion equation shows that the torsion tensor vanishes whenever the spin tensor vanishes; in vacuum, or outside matter, torsion therefore vanishes and the two theories are identical. In the presence of matter or a fermion field, however, spin sources non-propagating torsion. The effects of spin and torsion are significant only at very high densities of matter, yet these densities are still much smaller than the Planck density at which quantum gravitational effects are believed to dominate. Possibly, the Einstein-Cartan theory will prove to be a better classical limit of a future quantum theory of gravitation than general relativity. We next realized an F-string − D p -brane setup by assuming a D p -brane in a dynamical background of the Kalb-Ramond NS-NS field, while setting the dilaton field, the R-R fields and the fermions to zero. Using the formula obtained for the Ricci scalar in a U 4 manifold, we determined the Ricci scalar on the D-brane in the presence of the totally antisymmetric torsion, and thus arrived at a D p -brane action that describes a superstring with intrinsic torsion.
A frameshift mutation in GON4L is associated with proportionate dwarfism in Fleckvieh cattle
Background: Low birth weight and postnatal growth restriction are the most evident symptoms of dwarfism. Accompanying skeletal aberrations may compromise the general condition and locomotion of affected individuals. Several paternal half-sibs with a low birth weight and a small size were born in 2013 in the Fleckvieh cattle population.

Results: Affected calves were strikingly underweight at birth in spite of a normal gestation length and had craniofacial abnormalities such as elongated narrow heads and brachygnathia inferior. In spite of a normal general condition, their growth remained restricted during rearing. We genotyped 27 affected and 10,454 unaffected animals at 44,672 single nucleotide polymorphisms and performed association tests followed by homozygosity mapping, which allowed us to map the locus responsible for growth failure to a 1.85-Mb segment on bovine chromosome 3. Analysis of whole-genome re-sequencing data from one affected and 289 unaffected animals revealed a 1-bp deletion (g.15079217delC, rs723240647) in the coding region of the GON4L gene that segregated with the dwarfism-associated haplotype. We showed that the deletion induces intron retention and premature termination of translation, which can lead to a severely truncated protein that lacks domains that are likely essential to normal protein function. The widespread use of an undetected carrier bull for artificial insemination has resulted in a tenfold increase in the frequency of the deleterious allele in the female population.

Conclusions: A frameshift mutation in GON4L is associated with autosomal recessive proportionate dwarfism in Fleckvieh cattle. The mutation has segregated in the population for more than 50 years without being recognized as a genetic disorder. However, the widespread use of an undetected carrier bull for artificial insemination caused a sudden accumulation of homozygous calves with dwarfism. Our findings provide the basis for genome-based mating strategies to avoid the inadvertent mating of carrier animals and thereby prevent the birth of homozygous calves with impaired growth.

Electronic supplementary material: The online version of this article (doi:10.1186/s12711-016-0207-z) contains supplementary material, which is available to authorized users.
Background
Bovine stature is a prototypical complex trait that is controlled by a few loci with large effects and numerous loci with small effects. Genome-wide association studies using dense molecular markers detected several quantitative trait loci (QTL) for growth-related traits in cattle [1][2][3]. The identified QTL account for a reasonable fraction of the phenotypic variation of bovine height [2,4]. Sequence variants associated with mature height may also affect the size and weight of newborn calves [2,3,5].
Birth size and weight vary between breeds, parities and male and female calves [6,7]. Birth weight in Fleckvieh cattle typically ranges from 38 to 45 kg [8]. Calves with a strikingly low birth weight and small size in spite of a normal gestation length are commonly referred to as "dwarfs". Dwarfism (DW) has been observed in several cattle breeds including Fleckvieh [9][10][11]. Low birth size and postnatal growth restriction are the most apparent characteristics of DW. Undersized animals may be normally proportionate and have an undisturbed general condition (i.e., proportionate DW [12]). However, DW may also be accompanied by disproportionately shortened limbs and skeletal deformities (i.e., disproportionate DW, chondrodysplasia [13]). Depending on the severity of the structural aberrations, disproportionate DW may be fatal [14,15].
Here, we present the phenotypic and genetic characterization of autosomal recessive DW in Fleckvieh cattle. The use of genome-wide association testing, autozygosity mapping and massive re-sequencing data enabled us to identify a frameshift mutation in the Gon-4-like (C. elegans) (GON4L) gene that is likely causal for the growth failure.
Animal ethics statement
Two animals were hospitalized at the animal clinic of Ludwig-Maximilians-Universität München. Another two animals were pathologically examined at the Institute for Veterinary Disease Control (IVDC) of the Austrian Agency for Health and Food Safety. One hospitalized calf was euthanized because of recurrent tympania with no prospect of improvement, and subsequently necropsied. Tissue samples were collected during necropsy. All affected animals resulted from inadvertent matings between carriers that occurred on Fleckvieh farms. No ethical approval was required for this study.
Animals
Twenty-seven paternal half-sibs (16 males and 11 females) with strikingly low birth weight and postnatal growth restriction were inspected by breeding consultants at ages ranging from 3 weeks to 18 months. Ear tissue samples were collected by breeding consultants and DNA was prepared following standard DNA extraction protocols.
Genotyping, quality control and haplotype inference
Twenty-seven affected animals were genotyped with the Illumina BovineSNP50 v2 BeadChip that includes 54,609 SNPs. The per-individual call rate ranged from 98.96 to 99.60 % with an average call rate of 99.33 %. In addition, genotypes of 10,454 unaffected Fleckvieh animals that had been genotyped with the Illumina BovineSNP50 v1 BeadChip and the Illumina BovineHD BeadChip were available [18,19]. The genotype data of cases and controls were combined and SNPs that were present in both datasets were retained for further analyses. Following quality control (minor allele frequency higher than 0.5 %, no deviation from the Hardy-Weinberg equilibrium (P > 0.0001), and per-SNP and per-individual call rates higher than 95 %), 10,481 animals (27 affected, 10,454 unaffected) and 44,672 SNPs remained for association testing. The Beagle software [20] was used to impute sporadically missing genotypes and to infer haplotypes.
Haplotype-based association testing
A sliding window of 25 contiguous SNPs (corresponding to an average haplotype length of 1.42 ± 0.43 Mb) was shifted along the genome in steps of two SNPs. Within each sliding window, all haplotypes with a frequency higher than 0.5 % (N = 787,232) were tested for association with DW using Fisher exact tests of allelic association. Haplotypes with a P value less than 6.35 × 10 −8 (5 % Bonferroni-corrected significance threshold) were considered as significantly associated.
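The per-window test described above is a 2×2 allelic-association test per haplotype. As a minimal illustrative sketch (not the authors' pipeline; the counts below are hypothetical, chosen to mirror 27 cases and 10,454 controls), a one-sided Fisher exact p-value can be computed directly from the hypergeometric distribution, together with the Bonferroni-corrected threshold quoted in the text:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    the probability of observing at least `a` haplotype copies among the
    cases under the hypergeometric null."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# 5 % Bonferroni-corrected threshold for the 787,232 tested haplotypes
threshold = 0.05 / 787_232  # ~6.35e-8, matching the text

# hypothetical counts: 54 of 54 case chromosomes carry the haplotype,
# 81 of 20,908 control chromosomes do (27 cases, 10,454 controls)
p = fisher_one_sided(54, 0, 81, 2 * 10_454 - 81)
print(p < threshold)  # → True
```

With counts of this kind the p-value is many orders of magnitude below the threshold, in line with the chromosome-3 signal reported in the Results.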
Generation of sequence data
Genomic DNA was prepared from a frozen semen sample of the assumed founder (DW het ) and from an ear tissue sample of one affected animal (DW hom ) following standard DNA extraction protocols. Paired-end libraries were prepared using the paired-end TruSeq DNA sample preparation kit (Illumina) and sequenced using the HiSeq 2500 instrument (Illumina). The resulting reads were aligned to the University of Maryland reference sequence of the bovine genome (UMD3.1 [21]) using the BWA software tool [22]. Individual files in SAM format were converted into BAM format using SAMtools [23]. Duplicate reads were marked with the MarkDuplicates command of Picard Tools [24]. To help identify the causal mutation, we used sequence data from another 288 unaffected animals from nine cattle breeds (Gelbvieh, Nordic Finncattle, Fleckvieh, Holstein-Friesian, Brown-Swiss, Original Braunvieh, Original Simmental, Red-Holstein, Ayrshire) that had been generated previously [25,26].
Variant calling and imputation
DW hom , DW het and 288 control animals from nine cattle breeds were genotyped simultaneously for SNPs, short insertions and deletions using the multi-sample approach implemented in mpileup of SAMtools along with BCFtools [23]. Beagle phasing and imputation (see above) was used to improve the primary genotype calling by SAMtools. The detection of structural variants was performed on DW hom , DW het and 203 sequenced control animals that had an average genome fold coverage greater than 10× using the Pindel software package with default settings [27].
Identification of candidate causal variants
To identify mutations that were compatible with the recessive mode of inheritance of DW, all polymorphic sites within the DW-associated region were filtered for variants that met three conditions: (1) DW hom was homozygous for the alternate allele, (2) DW het was heterozygous and (3) all control animals were homozygous for the reference allele. Candidate causal variants were annotated using the Variant Effect Predictor tool [28,29]. Sequence variants of 1147 animals from Run4 of the 1000 bull genomes project [15] were analyzed to obtain the genotype distribution of candidate causal variants in various bovine populations.
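The three filtering conditions can be expressed in a few lines of code. The sketch below is illustrative only: genotypes are coded as alternate-allele counts (as in a VCF), and every variant record except the identifier rs723240647 is hypothetical:

```python
def recessive_candidates(variants):
    """Keep variants compatible with recessive inheritance of DW:
    the affected animal homozygous alternate (2), the known carrier
    heterozygous (1), and every control homozygous reference (0)."""
    return [
        v for v in variants
        if v["dw_hom"] == 2 and v["dw_het"] == 1
        and all(g == 0 for g in v["controls"])
    ]

# toy call set (hypothetical genotypes; real filtering used 288 controls)
variants = [
    {"id": "rs723240647", "dw_hom": 2, "dw_het": 1, "controls": [0, 0, 0]},
    {"id": "varA",        "dw_hom": 2, "dw_het": 1, "controls": [0, 1, 0]},
    {"id": "varB",        "dw_hom": 1, "dw_het": 1, "controls": [0, 0, 0]},
]
print([v["id"] for v in recessive_candidates(variants)])  # → ['rs723240647']
```

In the toy data, varA fails because a control carries the alternate allele, and varB fails because the affected animal is not homozygous.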
Manual re-annotation of the bovine GON4L gene
A mutation in the coding sequence of the GON4L gene, i.e., rs723240647 was associated with DW. Since the annotation of the bovine genome may be flawed, we manually re-annotated the genomic structure of GON4L (ENSBTAG00000020356) based on the University of Maryland (UMD3.1) bovine genome sequence assembly [21] and the Dana-Farber Cancer Institute bovine gene index release 12.0 [30] using the GenomeThreader software tool [31]. The GenomeThreader output was viewed and edited using the Apollo sequence annotation editor [32].
Validation of candidate causal variants
PCR primers were designed to analyze the polymorphism of rs723240647 using Sanger sequencing (see Additional file 1: Table S1). Genomic PCR products were sequenced using the BigDye ® Terminator v1.1 Cycle Sequencing Kit (Life Technologies) on the ABI 3130x1 Genetic Analyzer (Life Technologies). Genotypes for rs723240647 and rs715250609 were obtained for 3882 and 1851 Fleckvieh animals, respectively, using KASP ™ (LGC Genomics) genotyping assays (see Additional file 1: Table S1).
Clinical and pathological examination of four animals with DW
Two calves with DW were pathologically examined at the Institute for Veterinary Disease Control (IVDC) of the Austrian Agency for Health and Food Safety at the ages of 101 and 143 days. Another two calves with DW were referred to the animal clinic at the ages of 57 and 93 days. An initial examination (including weighing) was performed upon admission. The younger calf suffered from recurrent tympania and was euthanized 4 days after admission because there was no prospect of improvement; it was subsequently necropsied. Tissue samples were collected during necropsy. The older calf was hospitalized for 400 days. Weight records were collected once a week.
RT-PCR
Total RNA from lymph nodes, thymus, lung, heart, pancreas, liver, kidney and spleen of the euthanized animal was extracted from tissue samples using Trizol (Invitrogen) according to the manufacturer's protocol with some modifications. After DNase I (Ambion) treatment, RNA was quantified using a NanoDrop ND-1000 (PeqLab) spectrophotometer, and RNA integrity was determined by RNA Nano6000 Labchip (Agilent Technologies). Complementary DNA (cDNA) was synthesized using the SuperScript IV transcriptase (Thermo Fisher Scientific). GON4L mRNA was examined by RT-PCR using primers 1F-GAGTCAAGCAGCTCAAACCC and 1R-AGCCAAGTCAGTTTCTCCATT, which hybridize to exons 20 and 21 and amplify a 348-bp product based on the mRNA reference sequence (NCBI accession number: XM_010802911) of the bovine GON4L gene. The shorter version of exon 21 was amplified using the reverse primer 2R-CTCAGACTCACCCTCCTGACTC. RT-PCR was performed in 20-µL reaction volumes containing diluted first-strand cDNA equivalent to 50 ng of input RNA. PCR products were loaded on 2 % agarose gels.
Phenotypic manifestation of dwarfism
Twenty-seven calves (16 males and 11 females) with a strikingly low birth weight (~15 kg) and a small size in spite of a normal gestation length were detected among the descendants of an artificial insemination bull that was used for more than 290,000 inseminations. Four affected calves were clinically and pathologically examined. At the ages of 61, 97, 101 and 143 days, they were underweight, with weights of 42, 79, 53 and 51 kg, respectively. The calves had multiple craniofacial aberrations (i.e., brachygnathia inferior, elongated narrow heads, structural deformities of the muzzle) and spinal distortions. Wrinkled skin, areas with excessive skin and a disproportionately large head became visible during rearing (Fig. 1) and (see Additional file 2: Figure S1). Although the general condition, feed intake and locomotion of the animals were normal, their growth remained restricted. The average weight gain of an affected animal during a hospitalization period of 400 days was only 450 g per day, i.e., less than half the weight gain of healthy Fleckvieh bulls (Fig. 1h). The growth of the sire and all dams was normal. Since both sexes were affected and most dams had a common ancestor, we hypothesized an autosomal recessive mode of inheritance. Dominant inheritance of DW was unlikely because less than 0.1 % of the progeny were affected.
Dwarfism maps to chromosome 3
To identify the genomic region associated with DW, 27 affected and 10,454 unaffected animals were genotyped using a medium-density genotyping array. After quality control, 44,672 SNPs were retained for genome-wide association testing. Because all affected animals were highly related with each other, the haplotype-based association study with DW revealed many significantly associated haplotypes. However, a striking association between DW and a proximal region of bovine chromosome 3 was identified (Fig. 2a). The most significant association signal (P = 2.18 × 10 −124 ) resulted from two contiguous haplotypes located between 14,884,969 and 16,557,950 bp on bovine chromosome 3. (Fig. 1h legend: the lower dotted line is a growth curve assuming an average weight gain of 1000 g/day, i.e., a lower bound estimate for the growth of Fleckvieh bulls [46].)
Autozygosity mapping revealed a 1.85-Mb segment (between 14.88 and 16.73 Mb) of extended homozygosity that was shared between the 27 affected animals, which corroborated a recessive mode of inheritance (Fig. 2b). The shared segment of extended homozygosity encompassed 71 transcripts/genes. However, none of them had previously been associated with DW.
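Autozygosity mapping of this kind amounts to finding the longest run of consecutive markers at which all affected animals are homozygous for the same allele. A toy sketch (hypothetical genotypes in alternate-allele-count coding; not the authors' implementation):

```python
def shared_homozygous_segment(genotypes):
    """Longest run of consecutive markers at which every affected animal
    is homozygous for the same allele (genotype 0 or 2, never 1 and never
    discordant). Returns a half-open (start, end) marker-index interval."""
    n = len(genotypes[0])
    best, start = (0, 0), None
    for i in range(n + 1):
        shared = False
        if i < n:
            col = {g[i] for g in genotypes}  # genotypes seen at marker i
            shared = len(col) == 1 and col.pop() in (0, 2)
        if shared and start is None:
            start = i                        # open a shared homozygous run
        elif not shared and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)            # close and keep the longest run
            start = None
    return best

# toy genotypes for three affected animals over eight ordered SNPs
cases = [
    [1, 2, 2, 2, 2, 0, 1, 0],
    [0, 2, 2, 2, 2, 0, 2, 0],
    [2, 2, 2, 2, 2, 0, 0, 2],
]
print(shared_homozygous_segment(cases))  # → (1, 6)
```

Applied to the real genotype matrix of the 27 affected animals, the same idea yields the 1.85-Mb segment between 14.88 and 16.73 Mb described above.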
Among the 10,454 control animals, 81 were heterozygous and none was homozygous for the DW-associated haplotype, which corresponded to a haplotype frequency of 0.38 %. In the recent male breeding population (birth years 2000-2012), the frequency of the DW-associated haplotype was 0.25 % (see Additional file 3: Figure S2). Haplotype frequency was considerably higher (2.6 %) in the female population because of the widespread use of an undetected carrier bull for artificial insemination [33].
Haplotype and pedigree analysis enabled us to track the DW-associated haplotype back (up to 12 generations) to an artificial insemination bull (DW het ) born in 1959. DW het was present in the maternal and paternal lineage of 21 affected animals. However, DW het was not detected within the pedigree of six dams, which may be due to incomplete pedigree information and recording errors (see Additional file 4: Figure S3). The missing connection between six dams and DW het may also indicate that the mutation occurred several generations before DW het .
Identification of candidate causal variants for dwarfism
One affected animal (DW hom ) and DW het were sequenced to an average read depth of 13×. In addition, to help identify the underlying mutation, we exploited sequence data from 288 animals from nine breeds, including 149 Fleckvieh animals. None of the 149 sequenced control animals of the Fleckvieh population carried the DW-associated haplotype.
Multi-sample variant calling within the 1.85-Mb segment of extended homozygosity revealed 11,475 single nucleotide and short insertion and deletion polymorphisms as well as 3158 larger structural variants. These 14,633 polymorphic sites were filtered for variants that were compatible with a recessive mode of inheritance, i.e., DW hom homozygous for the alternate allele, DW het heterozygous and 288 control animals homozygous for the reference allele (assuming that the mutation is specific to the Fleckvieh breed). This approach revealed ten candidate causal variants for DW (Table 1), among which five were intergenic, four were located in introns of the KCNN3, ADAR and TDRD10 genes, and one variant was located in the coding region of the GON4L gene (see Additional file 5: Table S2).
Eight of the ten compatible variants were excluded as being causative for DW because they segregated in 1005 animals from 28 breeds other than Fleckvieh that had been sequenced for the 1000 bull genomes project [15] ( Table 1) and (see Additional file 6: Table S3). In conclusion, only an intronic variant in the TDRD10 gene (rs715250609) and a coding variant in the GON4L gene (rs723240647) segregated with DW. The intron variant in TDRD10 is unlikely to be deleterious to protein function because it is more than 4000 bp away from the most proximal splice site. Thus, we considered the coding variant in GON4L as the most likely causal mutation for DW.
A 1-bp deletion in GON4L is associated with dwarfism
Bovine GON4L consists of 31 exons that encode 2239 amino acids. The variant that is compatible with recessive inheritance is a 1-bp deletion (rs723240647, g.15079217delC, ENSBTAT00000027126:c.4285_4287delCCCinsCC) in exon 20 (Fig. 3a). Sanger sequencing confirmed that DW hom and DW het were homozygous and heterozygous, respectively, for g.15079217delC. The deletion induces a translation frameshift that is predicted to alter the protein sequence from amino acid position 1430 onwards, and a premature translation termination codon at position 1496 (p.Glu1430LysfsX66). The Gon-4-like protein contains highly conserved paired amphipathic helix (PAH) repeats and caspase 8-associated protein 2 myb-like (CASP8AP2) domains. The mutant protein is predicted to be shortened by 745 amino acids (33 %) and to lack domains that are likely to be essential for normal protein function (Fig. 3b).
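The effect of such a 1-bp deletion can be illustrated with a toy coding sequence (NOT the real GON4L sequence): deleting a single base shifts the reading frame and can bring a stop codon, hidden across a codon boundary in the wild type, into frame:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def residues_before_stop(cds):
    """Count codons translated before the first in-frame stop codon."""
    for n, i in enumerate(range(0, len(cds) - 2, 3)):
        if cds[i:i + 3] in STOP_CODONS:
            return n
    return len(cds) // 3  # no stop encountered

# hypothetical mini-CDS: the TGA straddling a codon boundary in the wild
# type comes into frame after the 1-bp deletion (analogous, in miniature,
# to the frameshift induced by g.15079217delC)
wt = "ATGAAACCCGTGACCTTTTAA"
mut = wt[:6] + wt[7:]  # delete one C from the CCC run

print(residues_before_stop(wt))   # → 6
print(residues_before_stop(mut))  # → 3  (premature termination)
```

In the real transcript the same mechanism shifts the frame from codon 1430 onwards and terminates translation prematurely, truncating the predicted protein by about a third.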
Genotypes for rs723240647 and rs715250609 were obtained for 27 affected individuals and a large number of randomly selected unaffected Fleckvieh animals using customized KASP genotyping assays (Table 2). rs723240647 was significantly associated with DW (P = 1.55 × 10 −98 ). Twenty-seven calves with DW were homozygous carriers of the deletion variant, while 3855 unaffected animals were either heterozygous or homozygous for the reference allele. One animal that carried the DW-associated haplotype was homozygous for the reference allele, which may be due to a laboratory error, such as DNA sample swapping, or to haplotype recombination or imperfect genotype phasing. The intron variant in TDRD10 (rs715250609) was almost in complete linkage disequilibrium (r 2 = 0.98) with rs723240647 ( Table 2).
The deletion in GON4L causes intron retention and mRNA degradation
The effect of the g.15079217delC variant on GON4L transcription was examined by RT-PCR using RNA extracted from several tissues of a homozygous animal.
Using primers located in exons 20 and 21, we obtained two RT-PCR products of 348 and ~310 bp from a wild type and a mutant homozygous animal, respectively. The longer PCR fragment corresponded to the reference mRNA sequence (NM_001192626) of the bovine GON4L gene. The ~310-bp PCR fragment showed a superimposed sequence of 35 bp at the 5′ end of exon 21, suggesting the presence of an alternative variant of exon 21, which is not directly associated with DW. The presence of different isoforms in the 3′ terminal end of GON4L in humans and cattle has been reported previously. The intensity of the signal corresponding to the alternative cDNA fragment was stronger for the mutant homozygote than for the wild type animal, which may be caused by degradation of the mutant transcript in the homozygous animal (see Additional file 7: Figure S4). We designed a reverse RT-PCR primer specific for the alternative exon 21, and obtained a unique 348-bp RT-PCR product from the wild type animal and two RT-PCR products of 313 and ~1500 bp from the mutant homozygous animal (Fig. 4). Analysis of the DNA sequence of the 348-bp wild type RT-PCR product revealed that it corresponded to the mRNA reference sequence of the bovine GON4L gene. Sequence analysis of the longer fragment from the mutant homozygous animal revealed that intron 20 was retained; the length of this fragment was 1488 bp. Retention of intron 20 is predicted to introduce a frameshift and to lead to a premature translation termination codon at position 1492. Thus, the GON4L transcript of the animal homozygous for the g.15079217delC variant carries a premature translation termination codon at position 1492.
Discussion
A 1-bp deletion in the GON4L gene (g.15079217delC) is associated with DW in Fleckvieh cattle. The g.15079217delC variant causes intron retention and premature translation termination and leads to a truncated protein. Compared to the wild type variant, the mutant GON4L protein is shortened by more than 30 %. RNA analysis indicated that the mutant transcript is less abundant, suggesting degradation via nonsense-mediated mRNA decay. If the truncated protein is (partially) retained, however, its function may be compromised because it lacks domains that are possibly essential for normal protein function. Loss-of-function variants in Udu, a gene that is similar to GON4L, compromise cell cycle progression and response to DNA damage and thereby disturb embryonic growth in D. rerio [34][35][36][37]. In our study, the g.15079217delC variant was also associated with prenatal growth failure as evidenced by the strikingly low birth weight of homozygous calves. The phenotypic manifestation of homozygosity for g.15079217delC, i.e., pre- and postnatal growth restriction and craniofacial aberrations, resembles phenotypic patterns of human primordial DW that result from DNA repair disorders [38,39]. Such findings suggest that disturbed growth of homozygous animals might result from defective responses to DNA damage due to impaired GON4L function. However, the actual mechanism(s) and pathway(s) that cause the extremely low birth weight and postnatal growth restriction of homozygous animals have yet to be elucidated. Congenital disorders that manifest as growth failure have been identified in several cattle breeds. Affected calves may be born underweight or fail to thrive during rearing [26,[40][41][42]. The phenotypic consequences of homozygosity at g.15079217delC occur at birth. Unlike mutations in the ACAN and COL2A1 genes that cause lethal disproportionate DW in cattle [14,15], homozygosity for g.15079217delC is not fatal.
Apart from large heads, affected animals were normally proportionate, and moreover, their general condition and locomotion were normal and their weight gain was constant, although considerably less than that of healthy animals. Thus, homozygosity for the g.15079217delC variant is less detrimental than, e.g., homozygosity for a mutation in EVC2, which compromises both growth and locomotion of affected animals [13]. Nevertheless, animals homozygous for the g.15079217delC variant are more likely to be culled at juvenile ages because of their reduced growth performance.
The g.15079217delC variant has segregated in the Fleckvieh population for more than 50 years, but due to its low frequency, DW was rarely reported. Assuming a frequency of 0.2 % for the deleterious allele, equal use of all bulls and 1,500,000 annual births in the German and Austrian Fleckvieh populations, one would expect only six homozygous calves with DW per year. However, the widespread use of undetected carriers of rare recessive alleles in artificial insemination may cause a sudden accumulation of affected calves, as our study demonstrates. Twenty-seven calves with DW were descendants from a bull that was used for more than 290,000 inseminations. The frequent use of this carrier bull resulted in a more than tenfold increase in allele frequency in the female population [33]. Our findings now enable the rapid identification of carrier animals. The g.15079217delC variant was almost in complete linkage disequilibrium with the DW-associated haplotype. Only one animal was misclassified using haplotype information, which demonstrates a high sensitivity and specificity of the haplotype-based identification of DW-mutation carriers. Since all male breeding animals are routinely genotyped with dense genotyping arrays, carriers can be readily identified using haplotype information. However, only direct gene tests will unequivocally distinguish between carrier and non-carrier animals [43]. The identification of the frameshift mutation in GON4L will now permit the development of customized genotyping assays to identify carrier animals. Excluding carrier bulls from artificial insemination will prevent the emergence of homozygous animals and remove the rare DW-associated allele from the Fleckvieh population within a few generations. However, sophisticated strategies are required to simultaneously consider multiple deleterious alleles in genomic breeding programs while maintaining genetic diversity and high rates of genetic gain [44,45].
Monitoring Elderly People at Home: Results and Lessons Learned
Introduction
Elderly people may be affected by a decline in functioning that usually involves the reduction and discontinuity in daily routines and a worsening in the quality of life. Recently, solutions have been proposed to unobtrusively monitor activities of elderly people [1]. Tele-assistance systems that rely on a conjunction of sensors -each one devoted to monitor a specific status or activity-are normally used [2].
In this paper, we present our experience in monitoring 9 elderly people for 5 months through eKauri, a tele-assistance system.
The solution
eKauri is composed of a set of sensors: presence-illumination-temperature sensors (i.e., TSP01 Z-Wave PIR), to identify the room where the user is and movement from one room to another (one sensor for each room); and a presence-door-illumination-temperature sensor (i.e., TSM02 Z-Wave PIR), to detect when the user enters/exits the premises. They send the retrieved data to a gateway (based on a Raspberry Pi) that collects and securely redirects them to the cloud to be stored, processed, mined, and analyzed by an intelligent system.
Therapists and caregivers receive notifications, summaries, statistics, and general information about the monitored users through a Web application.
From a microscopic perspective, the system is able to recognize whether the user is at home or away and whether s/he is alone. It is also able to detect the following events: leaving home; returning home; receiving a visit; remaining alone after a visit; going to the bathroom; going to sleep; and waking from sleep. From a macroscopic perspective, therapists and caregivers become aware of the user's habits and may detect unusual situations.
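A rule such as "leaving home" can be sketched as a simple check over the sensor event stream. This is an illustrative sketch only: the sensor names (`door`, `pir_*`) and the 10-minute absence window are assumptions, not eKauri's actual detection logic or configuration.

```python
from datetime import datetime, timedelta

# Assumed absence window: if no room-level motion is seen for this long
# after the entrance door opens, we flag "leaving home".
ABSENCE_WINDOW = timedelta(minutes=10)

def detect_leaving_home(events):
    """Scan (timestamp, sensor, value) tuples, sorted by time, and
    return the timestamps of detected 'leaving home' events."""
    detections = []
    for i, (ts, sensor, value) in enumerate(events):
        if sensor == "door" and value == "open":
            window_end = ts + ABSENCE_WINDOW
            presence_after = any(
                s.startswith("pir_") and v == "motion" and ts < t <= window_end
                for t, s, v in events[i + 1:]
            )
            if not presence_after:
                detections.append(ts)
    return detections
```

With a stream where the door opens and no room-level motion follows within the window, that opening is flagged; an opening followed by motion in some room is not.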
Results
eKauri has been installed in the homes of 9 elderly people (7 women) over 65 years old in Barcelona. To test eKauri, we asked the monitored users to answer a daily questionnaire composed of 20 questions (12 optional). Moreover, they received a daily phone call from a caregiver who manually verified the data. This information has been used as a baseline to evaluate the performance of the system. We calculated the accuracy in recognizing whether the user is at home (98%), alone (68%), and sleeping (78%).
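The per-status accuracies quoted above reduce to the fraction of checked days on which the system's flag agrees with the verified baseline. A minimal sketch (the daily boolean flags below are invented for illustration, not the study's data):

```python
def accuracy(predicted, baseline):
    """Fraction of days on which the system's boolean flag matches
    the questionnaire/phone-call baseline."""
    assert len(predicted) == len(baseline) and predicted
    hits = sum(p == b for p, b in zip(predicted, baseline))
    return hits / len(predicted)

# Toy example: five days of "user is at home" flags vs. the baseline.
system_flags = [True, True, False, True, True]
baseline_flags = [True, True, False, False, True]
at_home_accuracy = accuracy(system_flags, baseline_flags)  # 4/5 = 0.8
```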
All detected events are shown in the Web application and reviewed by therapists and caregivers. Their feedback has been used to improve the interface and add functionality.
Lessons learned
Although users were somewhat reticent at the beginning, during the monitored period they became comfortable with the services provided by eKauri. They really appreciated, on the one hand, that the system is non-intrusive and allows them to follow their normal lives; and, on the other hand, being called by phone. In other words, it is important to provide a system that may become part of the home without losing social interactions. Thus, a tele-assistance system does not substitute the role of caregivers.
Therapists and caregivers recognize eKauri as a support for detecting users' habits, helping to diagnose a user's condition and decline, if any.
Finally, let us mention two real cases.
-Case-1. A user, a woman with Alzheimer's disease and heart problems, needs continuous assistance, and thus a caregiver visits her daily. One day, eKauri detected that no visit had been received, an alarm was generated, and the caregiver was called. The caregiver confirmed that she had not visited the user that day.
-Case-2. During the afternoon, the user is accustomed to going out for a walk. One day, she stayed in the bedroom. eKauri detected the change in her habit and a caregiver called her. It turned out that she had a knee problem and could not walk. A physiotherapist was asked to visit her.
Conclusion
The goal of eKauri is twofold: helping and supporting elderly people who live alone at home; and constantly providing feedback to therapists and caregivers about the evolution of the status of each monitored user.
Status
Completed | 2019-03-16T13:12:31.288Z | 2016-12-16T00:00:00.000 | {
"year": 2016,
"sha1": "cc51b22632b2cae55b8483c200be0a2b0025b588",
"oa_license": "CCBY",
"oa_url": "http://www.ijic.org/articles/10.5334/ijic.2823/galley/3639/download/",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "569d98cc820fc69f85910032d204762a19dc1768",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264105441 | pes2o/s2orc | v3-fos-license | Use of Silicon Nanoparticles as a Seed-Priming Solution for Increasing the Germination and Growth Parameters of Faba Bean (Vicia faba L.) Seedlings under Salinity Stress
The use of saline water in agriculture has been increasing in recent decades due to the scarcity of fresh water. The use of silicon nanoparticles is a new approach that can alleviate the adverse effects of saline water on plant growth. The current study was carried out to evaluate the impact of bio-silica nanoparticles (BSNPs) under salinity stress on the seed germination and seedling growth parameters of faba bean (Vicia faba L.). The research investigated the interaction between six BSNP concentrations (0, 1500, 3000, 4500, 6000, and 7500 mg L-1) and five salinity levels (fresh water : sea water mixtures; 0.70, 1.1, 1.6, 2.1, and 3.1 dS m-1). The results revealed a significant decrease in germination percentage (GP) with increasing salinity levels, which was more pronounced for seeds primed in fresh water alone than for seeds primed in BSNPs. Salinity adversely affected the radicle length of faba bean whether seeds were primed in fresh water or in BSNPs; this was more noticeable for seeds primed in fresh water than for seeds primed in BSNPs at high salinity levels. Priming in BSNPs significantly reduced the radicle radius of the seedlings, and priming in 7500 mg L-1 gave the lowest radicle radius. Increasing salinity levels decreased the total biomass of faba bean for seeds primed in fresh water, contrary to seeds primed in BSNPs. Vigor and salt tolerance indices were significantly affected by salinity levels, BSNP treatments, and their interaction. The study concluded that BSNPs are an alleviating material that can be used to enhance the germination and growth parameters of plants under salinity stress. Moreover, these results highlight the positive effects of silicon nanoparticles synthesized from rice straw in reducing the harmful effects of high salinity levels.
INTRODUCTION
Salinity stress is a severe inhibitor of plant growth. Salinity limits plant growth through nutrient imbalance, osmotic stress and oxidative stress (Balasubramaniam et al., 2023). Salt accumulation in the soil decreases the ability of plants to extract water, so physiological drought takes place, which is the major stress affecting plant growth. Application of silicon is a promising method for improving plant resistance to salinity stress. Silicon treatment has improved salinity tolerance in many plants, such as wheat, rice, sorghum, cucumber and maize (Ahmad et al., 1992; Zhu et al., 2004; Rohanipoor et al., 2013; Yin et al., 2013; Flam-Shepherd et al., 2018).
Plant productivity is adversely affected by salinity stress. Around 800 million hectares are affected by salinity or sodicity (FAO, 2005). Salinity is one of the major abiotic stresses that adversely affect crop productivity and quality. The problem of soil salinity in semi-arid and arid areas is further increasing because of high temperature, high evapotranspiration, low precipitation, poor irrigation water quality and poor drainage.
Grain legumes are an important source of protein and carbohydrates. Faba bean is one of the main winter grain legume food crops in Egypt. Faba bean has a high protein and nutritive content, with protein content reaching up to 24%. Faba bean also plays an important role in the nitrogen fixation process. The cultivation area of faba bean has decreased around the world. Faba bean is more sensitive (highly to moderately) to salinity and drought than some other seed legumes (McDonald and Paulsen, 1997; Amede and Schubert, 2003).
Silicon is the second most abundant component of the soil, forming 60-70% of the soil mass (Richmond and Sussman, 2003). Many studies have revealed the role of silicon in alleviating abiotic stresses such as salinity, water stress, metal toxicity and temperature stress (Tripathi et al., 2012a; Tripathi et al., 2012b; Soundararajan et al., 2014; Deshmukh et al., 2020a). Silicon is a non-essential but beneficial element that increases growth by improving tolerance against salinity, insects, waterlogging, metal toxicity and nutrient deficiency or toxicity. The beneficial effects of Si in increasing plant tolerance to biotic and abiotic stresses have been proved in many studies (Chen et al., 2014; Debona et al., 2017; Kube et al., 2021).
The application of nanotechnology in agriculture is a promising and powerful tool that can change crop production in different ways, such as enhancing nutrient use efficiency, increasing plant tolerance, and regulating plant germination and growth (Rastogi et al., 2019). The effects of nanoparticles on plants depend on many factors, such as the size, shape, and chemical and physical properties of the nanoparticle and its method of application (Rastogi et al., 2017). Several studies reported that chemically synthesized nano-SiO2 improved seed germination and growth characteristics by reducing electrolyte leakage. Application of nano-SiO2 also improved seed germination and seedling parameters such as fresh and dry weight (Siddiqui and Al-Whaibi, 2014), decreased chlorophyll degradation, and increased transpiration rate and water use efficiency (Hussain et al., 2021; Rahimi et al., 2021).
Therefore, the main objective of the present study was to investigate the effect of seed priming in a synthesized bio-silica nanoparticle (BSNP) solution on the seed germination and seedling growth parameters of faba bean under salinity stress.
Chemicals
The rice straw was obtained from rice mills in Egypt.Hydrochloric acid was purchased as an analytical reagent from El Gomhouria Company, Egypt.MilliQ water was used during the experiment.
Synthesis of Bio-Silica Nanoparticles (BSNPs):
Figure (1) shows a schematic diagram of BSNP formation according to Alshatwi et al. (2015). Rice straw (RS) contains cellulose, silica, lignin, hemicellulose, and a trace amount of metal ions. RS is pre-treated with hydrochloric acid, which eliminates the metal-ion impurities and decomposes the cellulose, hemicellulose and lignin in the rice straw. The acid-treated waste is then calcined, which removes the organic matter and produces silica nanoparticles of high purity and amorphous structure.
The synthesis of BSNPs is shown in Figure (2). First, the rice straw was mixed with 1 N HCl under magnetic stirring; the mixture was then transferred to an autoclave under pressurized conditions (15 lbs). Next, the acid-digested rice straw was washed three times with deionized (DI) water to remove the HCl. The sample was then dried at 85 °C for 5 hours in an oven. After that, the samples were calcined at 700 °C for 1 hour in a muffle furnace. Finally, the color of the brown residue changed to white, indicating silica formation.
Characterization of the BSNPs
The crystallinity of the samples was characterized using powder X-ray diffraction (XRD) (JEOL). Silica samples were milled with KBr to prepare pellets for Fourier transform infrared (FTIR) characterization, which was performed on a PerkinElmer Spectrum One FT-IR spectrophotometer. Dynamic light scattering (DLS) analysis of samples prepared in aqueous solution was performed using a Zetasizer Nano ZS-90 analyzer (Malvern, UK). Average volume was calculated by the software based on density and volume distributions and measured numbers. The obtained silica powders were dispersed in absolute ethanol and ultrasonicated before transmission electron microscopy (TEM) characterization. The shapes, sizes, and elemental compositions of the samples were examined using a JEOL TEM at an accelerating voltage of 200 kV.
Experimental layout:
Faba bean (Vicia faba L.) seeds were washed with distilled water prior to use. The priming solutions comprised six BSNP concentrations (0, 1500, 3000, 4500, 6000 and 7500 mg L-1) and five salinity levels (fresh water : sea water mixtures; 0.70, 1.1, 1.6, 2.1 and 3.1 dS m-1). The seeds were soaked in these solutions for 24 h at 28 °C, with three replicates per treatment. Germination and seedling characteristics were then measured daily for up to 9 days by taking three seeds from each treatment.
Radicle Length, radius and surface area of radicle
The radicle length, radius and surface area were determined according to Mahdy et al. (2020). Plant fresh and dry weights were measured and recorded. The germination percentage was calculated according to Jones (2011) using the following equation: Germination percentage (GP), % = (number of normally germinated seeds / total number of seeds) x 100
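As a check, the GP formula above translates directly into code; the seed counts below are illustrative, not the experiment's data:

```python
def germination_percentage(normally_germinated, total_seeds):
    """GP (%) = normally germinated seeds / total seeds sown x 100
    (Jones, 2011)."""
    return 100.0 * normally_germinated / total_seeds

# e.g. 27 of 30 sown seeds germinating normally:
gp = germination_percentage(27, 30)  # 90.0
```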
Seed water content:
The seed water content (WC, %) was calculated following Gairrola et al. (2011).
Germination index (GI):
The germination index (GI, unitless) was calculated following Li (2008), where di is the number of days from the start of the experiment, Ni is the number of germinated seeds, and S is the number of seeds sown.
Mean Germination Time (MGT):
MGT was calculated using the following equation (Ellis and Roberts, 1981): MGT, day = Σ(ni x di) / Σni, where ni is the number of seeds germinated on day di and di is the number of days from the start of the experiment. 3.5 Radicle length change (RLC): RLC (%) was calculated from RLS0, the radicle length in the control, and RLSx, the radicle length at salt concentration x.
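The MGT formula (Ellis and Roberts, 1981) can be computed directly from daily germination counts; the counts below are illustrative, not the experiment's data:

```python
def mean_germination_time(counts_by_day):
    """MGT (days) = sum(n_i * d_i) / sum(n_i), where n_i seeds
    germinate on day d_i (Ellis and Roberts, 1981).
    `counts_by_day` is a list of (day, count) pairs."""
    total = sum(n for _, n in counts_by_day)
    return sum(d * n for d, n in counts_by_day) / total

# e.g. 5 seeds on day 1, 10 on day 2, 5 on day 3:
mgt = mean_germination_time([(1, 5), (2, 10), (3, 5)])  # 2.0
```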
Salinity Tolerance and vigor indices:
The salinity tolerance index is the ratio of the total fresh weight of stressed plants to that of control plants: Salinity tolerance index (%) = (total fresh weight of stressed plants / total fresh weight of control plants) x 100 (Fathi and Gaafar, 2015). The vigor index (VI) of faba bean seedlings was calculated following Elouaer and Hannachi (2012).
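The STI as defined above (fresh weight of stressed plants relative to control) is a one-line computation; the weights used below are taken from the Results section (5 g control, 3 g at 3.1 dS m-1):

```python
def salinity_tolerance_index(fw_stressed, fw_control):
    """STI (%) = total fresh weight of stressed plants /
    total fresh weight of control plants x 100
    (Fathi and Gaafar, 2015)."""
    return 100.0 * fw_stressed / fw_control

# Fresh weights (g) reported in the Results section:
sti = salinity_tolerance_index(3.0, 5.0)  # 60.0
```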
Statistical analysis
The experiment followed a completely randomized design (CRD) with three replicates per treatment. The results were statistically analyzed using the CoStat software package 6.311 for analysis of variance; statistical significance was assessed by two-way ANOVA at the p ≤ 0.05 level. Significant differences among treatments were tested by the least significant difference (LSD) test (p ≤ 0.01). Mean values were taken over the three replicates.
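The two-way ANOVA behind these comparisons partitions the total sum of squares into factor, interaction, and error components. The sketch below computes the three F statistics for a balanced design in plain Python (p-values would additionally need an F distribution, e.g. from SciPy, and are omitted); the 2x2 layout and numbers are illustrative, not the paper's measurements:

```python
def two_way_anova_f(data):
    """F statistics (factor A, factor B, interaction) for a balanced
    two-way CRD with replication. data[i][j] holds the r replicate
    measurements at level i of factor A and level j of factor B
    (requires r >= 2)."""
    a, b, r = len(data), len(data[0]), len(data[0][0])
    n = a * b * r
    gmean = sum(x for row in data for cell in row for x in cell) / n
    cmeans = [[sum(c) / r for c in row] for row in data]
    ameans = [sum(row) / b for row in cmeans]
    bmeans = [sum(cmeans[i][j] for i in range(a)) / a for j in range(b)]
    ss_a = b * r * sum((m - gmean) ** 2 for m in ameans)
    ss_b = a * r * sum((m - gmean) ** 2 for m in bmeans)
    ss_ab = r * sum((cmeans[i][j] - ameans[i] - bmeans[j] + gmean) ** 2
                    for i in range(a) for j in range(b))
    ss_e = sum((x - cmeans[i][j]) ** 2
               for i in range(a) for j in range(b) for x in data[i][j])
    ms_e = ss_e / (a * b * (r - 1))
    return (ss_a / (a - 1) / ms_e,
            ss_b / (b - 1) / ms_e,
            ss_ab / ((a - 1) * (b - 1)) / ms_e)

# Tiny 2x2 layout with 2 replicates where only factor A has an effect:
f_a, f_b, f_ab = two_way_anova_f(
    [[[1.0, 1.1], [1.0, 1.1]],
     [[3.0, 3.1], [3.0, 3.1]]])
```

In this toy layout, F for factor A is large while the factor-B and interaction F statistics are essentially zero, as expected.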
Characterization of biogenic Si nanoparticles
SEM, EDX, XRD, TEM and FTIR analyses of BSNPs are shown in Figure 2. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images of the BSNP samples clearly showed different sizes and shapes, indicating that the prepared BSNP particles possessed an irregular and non-uniform structure (Figure 2a) (Sun et al., 2014). On the other hand, particle sizing with a Bettersizer 2600 (wet) confirmed that the representative single-particle dimension is in the range of 1-28 nm (nanostructure) (Figure 2f). The EDX analysis revealed that the elements carbon, oxygen and silicon represent 97.54% of the total chemical composition (Figure 2c).
Surface characterization of BSNPs using FTIR spectroscopy helps to identify the functional groups present in their structure (Ngah & Hanafiah, 2008). Figure 2d shows the FTIR spectra of BSNPs. The spectra showed intense bands at 3922.38, 3898.27, 3885.73, 3880.91 and 3471.02 cm−1, ascribed to the vibrational stretching of the O-H bond (Stuart, 2004; Han et al., 2010; Gonçalves et al., 2011; Feng et al., 2011). The spectral peak at 2350.34 cm−1 is attributed to the vibrational stretching of the C-O bond of alkane groups (Barbosa, 2007), which confirmed the modification of silicon to BSNPs. The strong band observed at 1873.91 cm−1 may be attributed to the vibrational stretching of the C=O bond. The peak at 1641.48 cm−1 could be attributed to the vibrational stretching of the C-H bond of amide (Han et al., 2010). The band at 1094.64 cm−1 corresponds to Si-O-Si bonds (Guo et al., 2008). The peak observed at 462.82 cm−1 corresponds to the stretching vibration of the Si-O-Si group (Sekhar et al., 2009).
The XRD pattern of BSNPs shows sharp peaks, which indicate good crystallinity (Figure 2e). The XRD analysis indicates the presence of silicon oxide (47.55%), copper iron phosphate (25.66%), chromium phosphate (23.03%), and carbonyl cobalt (3.76%). The XRD results obtained in the present investigation are in good agreement with reported results (Bouyer et al., 2000).
The water content of faba bean seeds
Increasing salinity levels decreased the water content of seeds in all treatments (Table 1). However, priming in BSNPs increased the seed water content compared with the control.
Germination percentage (GP)
The germination percentage is a key parameter in the plant life cycle and is highly sensitive to salinity stress. The results showed that increasing salinity from 0.7 to 3.1 dS m-1 led to a decrease in germination percentage (Table 1). This decrease was more noticeable in unprimed than in primed seeds at all salinity levels. The germination percentage declined to 10% for the control (unprimed) seeds at 3.1 dS m-1, in comparison with the primed seeds.
Mean germination time (MGT)
The mean germination time increased significantly (p ≤ 0.01) with increasing salinity levels (Table 2). The MGT of BSNP-primed seeds was lower than that of unprimed seeds. However, at a salinity level of 3.1 dS m-1, priming in BSNPs significantly (p ≤ 0.05) increased the mean germination time from 0.67 days at 0 mg L-1 BSNPs to 1.67, 1.67, 1.67, 1.67 and 1.50 days at 1500, 3000, 4500, 6000 and 7500 mg L-1 BSNPs, respectively. ANOVA showed that BSNP priming significantly (p ≤ 0.05) affected the MGT (Table 1). These results are consistent with those of Solanti (2012), Mushtaq et al. (2017) and Mahdy et al. (2020). Increasing salinity also reduced the radicle length (Table 3), but this effect was more obvious in the root length of seedlings from the control treatment (seeds not primed in BSNPs) than in primed seeds, and the maximum reduction in root length was observed at the 3.1 dS m-1 salt level (Table 3).
Radicle radius and radicle surface area
Table 3 shows the radicle radius of faba bean seedlings for seeds primed in BSNPs or unprimed under the different salinity levels. The ANOVA showed that the RR was significantly affected by salinity levels (Table 4).
The radicle surface area of faba bean seedlings from unprimed seeds decreased under salinity stress, in contrast to that of seeds primed in BSNPs, which increased with increasing BSNP concentration (Table 3). The interaction between salinity and BSNP priming significantly affected the RSA (Table 4).
Total fresh and dry weights
Figure 5a, b displays the total fresh and dry weights of faba bean under the salinity levels for primed and unprimed seeds. Increasing salinity levels decreased both fresh and dry weights of unprimed seeds. The highest salinity level (3.1 dS m-1) reduced the fresh weight from 5 g in the control to 3 g, and the dry weight decreased from 0.83 g for the control to 0.62 g at 3.1 dS m-1.
Salt tolerance (STI) and vigor (VI) indices
Figure 6a displays the salt tolerance index of faba bean under the salinity levels for primed and unprimed seeds. Increasing salinity levels decreased the STI of unprimed seeds. The highest salt tolerance was observed in the control and then decreased with increasing salt concentration, reaching its lowest value at 3.1 dS m-1.
Figure 6b shows the vigor index of faba bean under the salinity levels for primed and unprimed seeds. The VI decreased with increasing salinity levels in unprimed seeds, except at EC 2.1 dS m-1; it dropped from 16.98 at 0.70 dS m-1 to 2.35 at 3.1 dS m-1. In contrast, seeds primed in BSNPs showed an increase in VI at the different salinity levels (Figure 6b). The ANOVA results showed highly significant (p ≤ 0.01) effects of salinity levels, and of their interaction with BSNPs, on VI (Table 4).
The water content of faba bean seeds
The results indicated that water content decreased with increasing salinity levels, which lowered both the water potential and the availability of water and thereby decreased the water content of seeds (Munns, 2005; Panuccio et al., 2014; Sozharajan and Natarjan, 2014); this is contrary to the finding of Tsegay and Gebreslassie (2014), where water content increased with increasing salinity.
On the contrary, at all salinity levels, priming seeds in BSNP solutions of different concentrations significantly (p ≤ 0.05) increased the water uptake of faba bean seeds. The highest significant water content was observed at a BSNP concentration of 3000 mg L-1, and the increase in water content at higher BSNP concentrations was non-significant in comparison with 3000 mg L-1 BSNPs.
The seed coat plays a crucial role in regulating the penetration of water into seeds. According to the findings presented in Table 1, it is evident that as salt levels rise across the various priming treatments, there is a noteworthy decrease in the percentage of water uptake by faba bean plants. Elevated salt concentrations in the root zone reduce the water potential, subsequently triggering osmotic consequences and ultimately resulting in a physiological drought condition (Kaya et al., 2006). This observation aligns with the findings of Moaveni (2011) and Hasegawa et al. (2000), but contradicts the outcomes reported by Yan (2015). In contrast, contrary results were presented by researchers who reported an increase in the water content of Lathyrus sativus and Pisum sativum seeds as the salt concentration increased. Additionally, when seeds were primed with nano-silicon at varying ratios, a substantial enhancement in water uptake was observed, with the most significant improvement occurring at a concentration of 4500 mg L-1. This notable increase in water uptake can be attributed to the remarkable water-retention capability of nano-silicon particles, which may penetrate plant cells due to their small size (28 nm).
Germination percentage (GP)
The results indicated that priming in BSNPs increased the germination percentage in comparison with unprimed seeds; priming in BSNPs alleviates salt stress and increases the germination percentage of faba bean seeds. These results are in close agreement with Mahdy et al. (2020). The reduction in germination percentage with increasing salinity levels is due to the adverse effect of salt on physiological parameters (Khan et al., 2002). Khajeh et al. (2003) reported that the ion-specific toxicity of Na and Cl decreases the germination percentage. Elouaer and Hannachi (2012) stated that priming allows the metabolic processes of germination to happen sooner, speeding progress toward radicle emergence. Afzal et al. (2008) showed that germination increases in primed seeds because of protein metabolism and the rapid synthesis of nucleic acids during seed soaking.
Radicle length changes
Increased salt concentration significantly (p ≤ 0.05) reduced radicle length because of the effects of ion toxicity and osmotic pressure on seedling growth. Moreover, salinity can delay the absorption of K and P during seed growth (Ma et al., 2020). In the current study, priming seeds with different concentrations of BSNPs significantly improved the radicle length of faba bean exposed to different salt levels.
The greatest growth enhancement was observed at 0.70 dS m-1 and 3000 mg L-1 BSNPs, and the values of radicle length at the other BSNP concentrations were also significantly increased. Janmohammadi et al. (2015) announced that the strong enhancement in radicle length caused by priming seeds in BSNPs is induced by changes in tissue pliability. More vigorous seedlings were produced from seeds primed in BSNPs compared with the control treatment (unprimed in BSNPs). These results are in good agreement with Janmohammadi et al. (2015), who showed that seed priming in nano-silicon significantly enhanced the root length of sunflower.
Concerning radicle length changes, application of different salinity levels significantly reduced the radicle length of faba bean seedlings for both control and BSNP-primed seeds (Figure 3). However, this adverse effect was smaller in seeds primed in BSNPs than in unprimed seeds (Figure 3). Priming seeds in different concentrations of BSNPs significantly reduced the effect of salinity stress on radicle length compared with unprimed seeds.
Radicle radius (RR) and radicle surface area (RSA)
The radicle radius increased with increasing salinity levels for unprimed seeds, with the highest value observed at the 3.1 dS m-1 salinity level. In contrast, priming in BSNPs significantly decreased the radicle radius of faba bean seedlings. The greatest reduction in RR was found at the highest BSNP level, so priming faba bean seeds in BSNPs improved seedling tolerance to salinity by decreasing the radicle radius. The RSA of primed seeds was higher than that of unprimed seeds at the 7500 mg L-1 level. This was due to improved metabolism of nucleic acids and proteins in the BSNP treatment (Awadallah, 2019).
Total fresh and dry weights
The significant reductions in fresh and dry weights with increasing salinity levels were due to the adverse effect of salinity on growth and physiological parameters (Kapoor, 2015). These results are in good agreement with Achakzai (2010) and Anuradha (2014). Priming seeds in BSNPs improved both fresh and dry weights compared with unprimed seeds. BSNPs enhanced fresh weight under salinity stress by regulating plant growth and maintaining a high photosynthetic rate under salt stress (Yin, 2013; Coskun et al., 2016; Zagar et al., 2019).
Salt tolerance (STI) and vigor (VI) indices
Priming seeds in BSNPs improved the salt tolerance index compared with the control. The silicon nanoparticles resulted in a significant increase in STI at all salinity levels, with 7500 mg L-1 giving the best effect. BSNPs alleviate the salinity effect and increase salt tolerance by reducing Na and Cl levels in the root system (Liang, 2003). The highest STI and VI were observed at 0.7 dS m-1 and the lowest values of the salt tolerance and vigor indices at 3.1 dS m-1. Generally, priming faba bean seeds in different ratios of nano-fertilizers significantly (p < 0.05) raised the STI and VI at all salt stress concentrations, with 7500 mg L-1 being the best soaking treatment. Thus, priming in BSNPs produced seedlings with a higher tolerance to saline conditions than fresh water. This result agrees with Mahdy et al. (2020), who reported that nano-priming in water treatment residuals raised the STI of cucumber seedlings under salt stress. The study of Włodarczyk et al. (2022) indicated that nano-priming with ZnO (500 ppm), TiO2 (50 ppm) and SiO2 (25 ppm) had the greatest effect on the groundnut seedling vigor index. The study of Elkhatib et al. (2019) also found that nano-priming in mango peels improved the vigor index of maize seeds. The small size of the nanoparticles would allow them to easily enter through cracks on the outer seed surface and react with free radicals, resulting in enhanced seed vigor (Cumbal et al., 2005).
CONCLUSION
Seed priming with BSNPs at a concentration of 7500 mg L-1 showed a positive response in mitigating the adverse effects of salinity stress on the growth parameters of faba bean seedlings. This treatment led to notable enhancements in radicle and plumule length, as well as improvements in the salt tolerance index and vigor of the faba bean seedlings. Furthermore, the application of nano-silicon during priming resulted in a significant increase in the total biomass of the seedlings and in the surface area of their radicles compared with seedlings treated with fresh water only. However, it is worth noting that the radicle radius decreased in response to nano-silicon priming.
Our findings suggest that nutrient seed priming using a BSNP solution holds significant potential for agricultural purposes. This approach offers a straightforward, cost-effective, and environmentally friendly method that could establish a protective mechanism against oxidative harm and enhance the salt stress tolerance of faba bean plants. This enhancement may be attributed to the effective absorption of nutrients from the nano-silicon solution through the seed coat. However, future research should focus on elucidating the physiological and biochemical aspects of salinity stress in various crop species to provide a more comprehensive understanding of this promising technique's applicability.
Figure 1 :
Figure 1: Flow chart of the procedure used to produce silica powders from rice straw.
Figure 3 :
Figure 3: Effect of salinity stress on radicle length changes of faba bean seedlings. The standard error of the mean of three replicates is represented by error bars.
Figure 4 :
Figure 4: Effect of priming in BSNPs on seedling length changes (%) of faba bean seedlings under different salinity levels. The standard error of the mean of three replicates is represented by error bars.
Figure 5 :
Figure 5: Effect of priming in BSNPs on total fresh and dry weights of faba bean under different salinity levels. The standard error of the mean of three replicates is represented by error bars.
Figure 6 :
Figure 6: Effect of priming in BSNPs on vigor indices of faba bean seedlings under salinity levels. The standard error of the mean of three replicates is represented by error bars. | 2023-10-15T15:14:31.602Z | 2023-10-11T00:00:00.000 | {
"year": 2023,
"sha1": "da1bda83ee2e3f486ee895fd0c663db89115ee10",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.21608/alexja.2023.237309.1046",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "06e66364e9d5bf9f6baaa14b3461dac7815a9df0",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
225738486 | pes2o/s2orc | v3-fos-license | Simultaneous Recovery of Ammonium and Phosphate from Leachate by Using Activated Zeolite
Ammonium and phosphate in leachate have the potential to contaminate both surface water and groundwater. Zeolite has a high affinity for ammonium (NH4+), phosphate (PO43-) and other organic compounds, so it can be used as an adsorbent. This research aims to remove these pollutants simultaneously using activated zeolite. The leachate used for this experiment had an initial ammonium concentration of 508.2 mg/L and a phosphate concentration of 7.77 mg/L. The zeolite had a particle size of 100 mesh. Adsorption experiments were carried out with physically activated zeolite and physico-chemically activated zeolite at doses of 15-120 g/L with a contact time of 12 hours. The results showed that a physically activated zeolite dose of 120 g/L yielded the smallest residual ammonium concentration of 72.6 mg/L and a residual phosphate concentration of 0.37 mg/L. A physico-chemically activated zeolite dose of 45 g/L produced an ammonium residue of 198 mg/L and a phosphate residue of 0.74 mg/L. Ammonium adsorption with both activated zeolites can be described very well using a first-order kinetics model.
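The first-order (Lagergren pseudo-first-order) kinetics fit mentioned above can be reproduced with a simple linearisation, ln(qe − qt) = ln(qe) − k1·t, fitted by ordinary least squares. The sketch below uses synthetic uptake data (qe = 4.0 mg/g and k1 = 0.5 h⁻¹ are illustrative values, not the paper's fitted parameters):

```python
import math

def fit_pseudo_first_order_k1(times, qt, qe):
    """Estimate the pseudo-first-order rate constant k1 from the
    linearised model ln(qe - qt) = ln(qe) - k1 * t, via ordinary
    least squares on (t, ln(qe - qt)); returns k1 (= -slope)."""
    ys = [math.log(qe - q) for q in qt]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in times)
    sxy = sum((x - mx) * (y - my) for x, y in zip(times, ys))
    return -sxy / sxx

# Synthetic uptake curve q(t) = qe * (1 - exp(-k1 * t)):
qe_true, k1_true = 4.0, 0.5
t_hours = [1.0, 2.0, 4.0, 8.0, 12.0]
q_obs = [qe_true * (1.0 - math.exp(-k1_true * t)) for t in t_hours]
k1_est = fit_pseudo_first_order_k1(t_hours, q_obs, qe_true)
```

On noise-free data the linearised fit recovers the rate constant exactly; with measured data, qe itself is usually estimated from the fit as well.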
Introduction
Leachate generation is a major problem for municipal solid waste (MSW) landfills and poses a significant threat to surface water and groundwater. Leachate can be defined as a liquid that has passed through a landfill and extracted dissolved and suspended matter from it. Leachate results from precipitation entering the landfill and from moisture that exists in the waste when it is deposited [1]. Leachate also comes from rainwater that percolates through the disposed waste or from groundwater around the landfill.
Leachate contains various pollutants in high concentrations, such as organic matter, nutrients (nitrogen, phosphorus), and various metals, including heavy metals [2]. Some pollutants in leachate are classified as difficult or impossible to degrade naturally, for example heavy metals, polymers and synthetic organic materials [3]. Many landfills in Indonesia are not equipped with adequate leachate management facilities. In many cases, leachate is only collected in ponds and then immediately disposed of into the environment. This practice has the potential to pollute surface water and groundwater.
In general, leachate treatment can be realized physically, chemically and biologically, or with combinations of these. However, physical and biological treatment of leachate is generally not able to produce effluent quality that meets the quality standards, especially with respect to the content of ammonium, phosphate and heavy metals. Therefore, further treatment is needed to produce effluents which can be safely disposed of into the environment [4]. One potential leachate treatment method for this purpose is adsorption with zeolite. The removal of ammonium from wastewater by adsorption with zeolites has been found to be promising. The advantages of zeolite include high selectivity towards ammonium ions in the presence of other cations in the wastewater, wide occurrence in nature, and low cost [5]. Zeolites are natural, structured crystalline rocks, mainly composed of SiO4 and AlO4 tetrahedra that are connected by oxygen atoms so that they have interconnected cavities in all directions [6,7]. Zeolite eliminates ammonium from leachate through an ion-exchange mechanism, in which calcium ions at the cation sites of the zeolite exchange with ammonium ions; these calcium ions then react with phosphate and form precipitates [7,8]. On the other hand, geologically Indonesia has the potential to produce zeolites, such as those found in Lampung, West Java, Central Java, East Java, East Nusa Tenggara, and Sulawesi, with 447,490,160 tons of resources [9]. Similar values were also reported by the Ministry of Energy and Mineral Resources (2015) [10], according to which Indonesia has 432 million tons of natural zeolite spread over various regions. Moreover, zeolite used to adsorb ammonium can be reactivated for reuse or used as a slow-release fertilizer [2].
The removal of ammonium ions with zeolites is a result of ion exchange and/or adsorption. Both processes take place in parallel; usually one of them prevails, depending on the solid-to-liquid ratio [5]. The adsorption of ammonium ions using zeolite has been studied by many researchers [5,11,12]; however, interest in experimental studies of this process for specific wastewaters still exists, because the effectiveness of the process strongly depends on the properties of the applied zeolite and of the wastewater being treated.
The aim of this research work was to evaluate the ability of zeolite to eliminate ammonium and phosphate from leachate, with the main attention given to the effect of the activation method, zeolite dose and contact time on the elimination of ammonium, phosphate and organic matter. The ammonium adsorption process by zeolite is described by an adsorption kinetics model.
Material
The materials used in the study consisted of leachate taken from the Galura landfill and natural zeolite taken from Lampung. The zeolite used for the experiments has a size of 100 mesh. The chemicals used for laboratory analysis included 3N NaOH, mengsel indicator, 2% boric acid, 6N NaOH, 0.02N H2SO4, K2Cr2O7, COD (chemical oxygen demand) acid, and Fe indicator.
Equipment
The equipment used in this experiment included shakers, distillation devices, HACH spectrophotometers, pH meters, and turbidimeters. The supporting equipment consisted of filter paper, glass beakers, Erlenmeyer flasks, Mohr pipettes, spatulas, burettes, a balance, sample bottles, 20 L jerry cans, funnels, measuring cylinders, and bulbs.
Research Procedure
Characteristics of leachate. Leachate was taken from the Galura landfill in Bogor Regency, put into jerry cans, and stored in a cooling chamber. The leachate samples were then analysed for their initial characteristics, including pH, temperature, colour, turbidity, TSS (total suspended solids), ammonium, phosphate and COD concentrations, using methods according to the APHA standard (2005) [13]. The leachate used for this study is blackish brown and has TSS, ammonium and phosphate contents above the standard (table 1). This shows that the leachate requires treatment before being discharged into the environment.
Activation of Natural Zeolites. Activation of natural zeolite is intended to remove organic and inorganic impurities present in the zeolite in order to improve its adsorption ability. Zeolite activation was carried out by physical treatment and by chemical-physical treatment. A total of 100 grams of zeolite was first washed with 500 ml of distilled water. The washing process was carried out using a stirrer at a speed of 500 rpm at 100 °C for three hours. After the washing process, the zeolite was dried in an oven and crushed back to 100 mesh. The zeolites were then activated physically and chemically-physically. Physical activation was carried out by heating the zeolite in a furnace at 400 °C for three hours. Chemical-physical activation was done by soaking the zeolite in 3N NaOH solution at a ratio of 1:3 and stirring at a speed of 500 rpm at 80 °C. Figure 1 shows natural zeolite, zeolite that has been physically activated, and zeolite that has been chemically-physically activated. Physically activated zeolite is pale white and cleaner than zeolite that has not been activated. This is because, during the calcination process, water vapour that is firmly bound within the structure of the zeolite evaporates, and impurities that are weakly bound to the main zeolite framework melt. Zeolite activated with base has a yellow colour, due to the formation of an oxide compound as a result of the reaction between zeolite and NaOH [16]. The difference in activation method affects the ability of the zeolites to adsorb ammonium and other pollutants.

Adsorption Experiments. Adsorption experiments were carried out with both activated zeolites at doses of 15-120 g/L; a contact time of 12 hours was applied, because it was considered sufficient to achieve the steady state according to the results of the preliminary experiments. After the adsorption process, the leachate was settled for 8 hours, then filtered with filter paper and stored in sample bottles for analysis of its characteristics. All experiments were carried out with two replications for each treatment.
Changes in pH
The longer the contact time with the zeolite, the higher the pH of the leachate becomes. The increase in pH occurs because of alkali cations released into the leachate during the adsorption process [17]. Chemically-physically activated zeolite increased the pH of the leachate to 9.98 over a contact time of 24 hours, while physically activated zeolite increased it to 7.97 at the same contact time. Chemically-physically activated zeolite thus raises the pH more than physically activated zeolite. Activation of zeolite with NaOH causes the zeolite to absorb Na+ and release OH- when in leachate, so the pH of the leachate becomes more alkaline [18]. The leachate also tends to experience an increase in pH as more zeolite is added. The highest pH change with chemically-physically activated zeolite was obtained at a zeolite dose of 120 g/L, namely up to pH 9.3. Physically activated zeolite gave its highest pH change at a zeolite dose of 15 g/L, namely up to pH 7.97. The effect of adding zeolite on the change in pH can be seen in figure 2.
Ammonium in the bulk solution exists in both ionized and molecular forms, so the pH and temperature of the solution affect the form of ammonium in solution. When pH < 7, more than 95% of the ammonium exists in ionized form (NH4+); when pH approaches 11, only about 1% of the ammonium is left in ionized form [19]. Figure 3 shows the decrease in ammonium concentration in leachate during the adsorption time (a), and the decrease in concentration at various doses of activated zeolite (b). Chemically-physically activated zeolite can reduce the ammonium concentration more than physically activated zeolite. This shows that chemically-physically activated zeolite has a higher ammonium adsorption capacity than physically activated zeolite. The results of this study are in line with, and thus confirm, the results of Ngapa (2017) [20].
Figure 3. Effect of contact time (a) and activated zeolite dose (b) on ammonium concentration
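The pH dependence of the ionized fraction described above follows from the ammonium/ammonia acid-base equilibrium. A minimal sketch, assuming the textbook ammonia pKa of about 9.25 at 25 °C (a value not stated in this paper), reproduces the quoted figures of more than 95% ionized at pH 7 and about 1% near pH 11:

```python
def nh4_fraction(pH, pKa=9.25):
    """Fraction of total ammonia nitrogen present as the ion-exchangeable NH4+ form."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

frac_pH7 = nh4_fraction(7.0)    # essentially all nitrogen is NH4+
frac_pH11 = nh4_fraction(11.0)  # almost none is left as NH4+
```

The function name and the pKa default are illustrative assumptions, not quantities taken from this study.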
Increasing the dose of zeolite is expected to provide a greater decrease in ammonium concentration [21]. The highest decrease for chemically-physically activated zeolite occurred at a dose of 45 g/L, with a residual ammonium concentration of 198 mg/L, while for physically activated zeolite the highest ammonium reduction was obtained at a dose of 120 g/L, with a residual ammonium concentration of 72.60 mg/L. The residual ammonium concentration with chemically-physically activated zeolite increased again after a zeolite dose of 60 g/L. This is due to the change in the pH of the leachate, which becomes larger (pH > 9) with increasing doses of zeolite. At pH 5 to 7, ammonia takes the form of the ammonium ion (NH4+), which is the main species that can undergo ion exchange, so under these conditions the efficiency of ammonium reduction is high. Above pH 8 the equilibrium shifts rapidly toward free ammonia (NH3) instead of the ammonium ion, which makes ammonia the dominant species in the leachate. Ammonia has a low ion-exchange ability, so at pH > 8 the ammonium reduction efficiency is lower [22]. In addition, the calcium ions in physically activated zeolite have a higher exchange rate than the sodium ions bound to chemically-physically activated zeolite, so at the same dose and contact time the physically activated zeolite can reduce ammonium more than the chemically-physically activated zeolite. Therefore, the dose of 120 g/L for physically activated zeolite and the dose of 45 g/L for chemically-physically activated zeolite were chosen as the optimum doses for reducing leachate ammonium.
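As a quick arithmetic check, the removal efficiencies implied by the residual concentrations above can be computed from the measured initial ammonium concentration of 508.2 mg/L; the helper function below is illustrative, not part of the study:

```python
def removal_efficiency(c0, ce):
    """Percent of a pollutant removed, given initial (c0) and residual (ce) concentrations in mg/L."""
    return 100.0 * (c0 - ce) / c0

# Residual ammonium values reported in this study (mg/L):
eff_physical = removal_efficiency(508.2, 72.6)    # 120 g/L physically activated, about 85.7 %
eff_chemical = removal_efficiency(508.2, 198.0)   # 45 g/L chemically-physically activated, about 61.0 %
```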
Phosphate Removal
The addition of zeolite at each contact time produced residual phosphate concentrations ranging from 0.61 to 0.84 mg/L for physically activated zeolite and from 3.18 to 4.40 mg/L for chemically-physically activated zeolite. The lowest residue was obtained at a contact time of 6 hours with physically activated zeolite and at a contact time of 24 hours with chemically-physically activated zeolite. The decrease in phosphate as a function of the contact time between zeolite and leachate can be seen in figure 4.
The capacity of zeolite to reduce phosphate is achieved optimally under neutral to slightly alkaline conditions [23]. The use of chemically-physically activated zeolite makes the pH of the leachate very alkaline (pH > 9), so the decrease in phosphate becomes lower. The contact time of 12 h was chosen as the best contact time, the same as the contact time chosen for decreasing the ammonium concentration. The chemically-physically activated zeolite dose of 15 g/L produced the smallest phosphate residue, which was 0.53 mg/L. The smallest phosphate residue of 0.37 mg/L was obtained at a dose of 120 g/L of physically activated zeolite. The efficiency of phosphate reduction increases with increasing doses of zeolite because of the increase in the amount of adsorbent and of calcium ions, which can precipitate phosphate [24] (figure 4). The mechanism of elimination of dissolved phosphate is the reaction of ions released from the zeolite with the dissolved phosphate, which forms a solid phase (and settles).

COD Removal

Chemically-physically activated zeolite and physically activated zeolite produced their lowest COD residues at a contact time of 24 h, equal to 304.88 mg/L and 269.17 mg/L, respectively. The COD concentration of the leachate tends to decrease with increasing contact time. According to research by Amosa et al. (2014) [26], zeolite reduces the COD concentration maximally at a contact time of 80 minutes and becomes saturated after that. The adsorption ability of saturated zeolite decreases, so the longer the contact time, the smaller the additional COD decrease. The chemically-physically activated zeolite dose of 15 g/L produced the smallest COD residue of 329.60 mg/L, while the smallest COD residue of 315.87 mg/L was obtained at a dose of 120 g/L of physically activated zeolite. The more zeolite is added, the greater the decrease in COD concentration, due to the increased adsorption capacity [27]. The COD concentration of the leachate after adsorption is still above the quality standard.
The results of the study by Malekmohammadi et al. (2016) [28] also showed that zeolite has a low ability to adsorb COD and was only able to reduce the COD concentration by 10%. Therefore, zeolite is suggested to be applied only as an advanced leachate treatment, after the leachate has been treated biologically to eliminate as much organic matter as possible.
Kinetics of Ammonium Adsorption
The rate of elimination of ammonium from leachate can be described with the help of a kinetic model. Adsorption kinetics describes the rate of adsorption of a substance by an adsorbent over a certain period of time [29]. There are generally four kinetic models that can be used, namely orders 0, 1, 2, and 3. The mechanism involved in the present sorption process and the potential rate-controlling steps, such as chemical reaction processes, were also studied using kinetic models by Kučić et al. [29]. The kinetic parameters are helpful for predicting the adsorption rate, which gives important information for designing and modelling the process.
Based on the results of this study, both the physically activated and the chemically-physically activated zeolites follow the 1st order kinetic model. A first order reaction is a reaction whose rate depends on one substance that reacts or is adsorbed. The first order kinetics model can be expressed by the equation ln Ce = -k·t + ln C0, where Ce is the ammonium concentration in the effluent (mg/L), C0 is the initial ammonium concentration (mg/L), k is the reaction rate coefficient (h^-1), and t is the adsorption contact time (h). From the equation, it can be seen that the relationship between ln Ce and t is linear, with slope -k and intercept ln C0. The ln Ce vs. t plots give the slopes for physically activated zeolite and chemically-physically activated zeolite presented in table 2. For zeolite doses between 15 and 120 g/L, adsorption by physically activated zeolite follows the 1st order kinetics model ln Ce = -0.011·t + 5.818 with R² = 0.989, while adsorption with chemically-physically activated zeolite follows the 1st order kinetics model ln Ce = -0.032·t + 5.815 with R² = 0.980 (figure 6). The resulting distributions of the adsorption kinetics can be seen in figure 6.
Based on the figure, the ammonium concentrations obtained from the study correspond to the ammonium concentrations obtained from theoretical calculations following the kinetics model. This means that the kinetics of both physically activated and chemically-physically activated zeolite are in accordance with the 1st order kinetic model. The first order kinetic model equation can then be used to design the adsorption system for leachate treatment: by setting the target ammonium concentration in the effluent (Ce) and knowing the concentration in the influent (C0), the adsorption time can be determined.

Figure 6. ln Ce vs. t plot of the 1st order kinetics model of ammonium adsorption by physically activated zeolite and chemically-physically activated zeolite
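As a sketch of this design use, the fitted first-order model for physically activated zeolite (ln Ce = -0.011·t + 5.818) can be inverted for the contact time. Note that the intercept 5.818 is the fitted value, not the logarithm of the measured influent concentration, and the function name is illustrative:

```python
from math import log

def contact_time(ce, k=0.011, ln_c0=5.818):
    """Contact time (hours) needed to reach effluent concentration ce (mg/L),
    obtained by solving ln Ce = -k*t + ln C0 for t."""
    return (ln_c0 - log(ce)) / k

t_to_100 = contact_time(100.0)   # roughly 110 h to bring ammonium down to 100 mg/L
```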
Conclusion
Most leachate pollutant parameter values are above the specified quality standard. The optimum contact time for both physically activated and chemically-physically activated zeolite is 12 hours. The optimum dose of physically activated zeolite was 120 g/L, which produced the smallest residues for ammonium and phosphate, namely 72.6 mg/L and 0.37 mg/L, respectively. No significant change in pH occurred during adsorption with the addition of physically activated zeolite, i.e., the pH ranged from 6 to 8. The optimum dose for chemically-physically activated zeolite was identified as 45 g/L, with an ammonium residue of 198 mg/L and a phosphate residue of 0.74 mg/L. The adsorption rates of both physically activated and chemically-physically activated zeolite follow the first order kinetics model, with the equations ln Ce = -0.011·t + 5.818 for physically activated zeolite and ln Ce = -0.032·t + 5.815 for chemically-physically activated zeolite. With the help of the model, the required adsorption time (t) can be determined once the ammonium concentration in the effluent (Ce) is set and the influent ammonium concentration (C0) is known.
Recommendation
Further studies on the effect of pH on the ammonium adsorption process are needed to improve leachate treatment designs.
Reducing metal artifacts by restricting negative pixels
When the object contains metals, its x-ray computed tomography (CT) images are normally affected by streaking artifacts. These artifacts are mainly caused by the x-ray beam hardening effects, which deviate the measurements from their true values. One interesting observation of the metal artifacts is that certain regions of the metal artifacts often appear as negative pixel values. Our novel idea in this paper is to set up an objective function that restricts the negative pixel values in the image. We must point out that the naïve idea of setting the negative pixel values in the reconstructed image to zero does not give the same result. This paper proposes an iterative algorithm to optimize this objective function, and the unknowns are the metal affected projections. Once the metal affected projections are estimated, the filtered backprojection algorithm is used to reconstruct the final image. This paper applies the proposed algorithm to some airport bag CT scans. The bags all contain unknown metallic objects. The metal artifacts are effectively reduced by the proposed algorithm.
Introduction
Due to the wide energy spectrum of x-rays, beam hardening effects are severe when the object being imaged contains metals. The beam hardening effects introduce large errors in the x-ray computed tomography (CT) projection measurements. These measurement errors in turn produce artifacts in the reconstructed CT images. Typical metal artifacts appear as dark and bright streakings. This metal artifact problem has been recognized for a long time and it is still an open problem [1].
Most methods to combat the metal artifacts are iterative algorithm based [2][3][4][5][6][7][8][9]. Among these iterative algorithms, projection data inpainting is popular. The basic principle of inpainting is first to remove the metal-affected measurements and to assume that there is no metal in the object. Next, estimation methods such as interpolation, lowpass filtering, or some non-linear approaches are used to inpaint the measurements that were artificially removed in the first step. To date, the inpainting methods are still not accurate enough to reproduce the true metal-free projections.
The modern metal artifact reduction methods are iterative methods. Iterative algorithms are designed to optimize an objective function, which can contain Bayesian terms. For example, the total variation (TV) norm is effective in enforcing the piecewise constant prior [10,11]. Noise weighting is often incorporated in the objective function as well.
Our proposed method is inspired by the observation that the metal artifacts usually have regions with negative pixel values. The innovation of this paper is to establish an objective function that restricts the negative pixel values in the reconstructed images. The proposed method will be presented in the next section. Results with real x-ray CT measurements are presented. The measurements are obtained from airport bags that contain metal objects inside.
Methods
A usual objective function in image reconstruction consists of two parts: the data fidelity part and the Bayesian part. The data fidelity part projects the image array to generate pseudo projections and then matches them to the measurements. Noise weighting can be applied in the data fidelity part. The main purpose of the Bayesian part is regularization, because the image reconstruction problem may be ill posed. An L2-norm of the reconstructed image can be used to regularize the image to enforce smoothness. The TV norm of the image can be used to denoise and maintain sharp edges by encouraging the piecewise constant constraint. Projection data inpainting is usually required before iterative image reconstruction. Unfortunately, inpainting methods are problematic, and the pseudo projections are not the same as the projections that would be measured when metals are absent.
Our innovation is an objective function that does not have a data fidelity term. Our objective function is inspired by the observation that the metal artifacts often have regions with negative pixel values. However, the x-ray attenuation coefficients cannot be negative. This paper proposes an objective function, which is the squared L2-norm of the negative pixel values of the filtered backprojection (FBP) reconstruction.
Let A be the operator of the FBP algorithm, P be the projection measurements, and X be the FBP reconstruction. Both P and X are expressed in vector form, and A is expressed in matrix form. The FBP reconstruction X is AP, with elements x_i. Let Y be the column vector containing the entries

y_i = min{0, x_i}.    (1)

Thus, the vector Y is the same as the FBP reconstruction X, except that all positive pixels of X are set to zero. The proposed objective function is the squared L2-norm of Y,

F = ||Y||² = Σ_i (min{0, x_i})².    (2)

We would like to minimize this objective function (2). The variables for this objective function are the metal-affected projections P_M. Here, the entries in P_M are determined by the following procedure:

Step 1: Use the FBP algorithm to generate a raw image X_raw using the raw projection measurements P. The raw image may contain severe metal artifacts.
Step 2: Segment the raw image to obtain a metal-only image, using a threshold value, for example, 1/3 of the maximum image value of X_raw. All image values smaller than this threshold value are set to zero.
Step 3: Forward project the metal-only image to obtain the indices of P_M. Other projections in P are not affected by metal and are denoted by P_notM.
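The three steps can be sketched with a toy example. The pixel values and the binary matrix standing in for the forward projector are illustrative assumptions, not the scanner geometry used in this paper:

```python
import numpy as np

# Step 1 stand-in: a tiny "raw FBP image" with one bright metal pixel (index 2).
X_raw = np.array([0.2, 0.3, 5.0, 0.1, 0.4, 0.2])

# Step 2: segment the metal with a threshold of 1/3 of the maximum image value.
threshold = X_raw.max() / 3.0
X_metal = np.where(X_raw >= threshold, X_raw, 0.0)

# Step 3: forward project the metal-only image; A_fwd[i, j] = 1 means that
# (made-up) ray i passes through pixel j.
A_fwd = np.array([[1, 1, 0, 0, 0, 0],
                  [0, 0, 1, 1, 0, 0],
                  [0, 0, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0]], dtype=float)
trace = A_fwd @ X_metal
metal_idx = np.flatnonzero(trace > 0)   # rays whose measurements belong to P_M
```

Here only rays 1 and 3 pass through the metal pixel, so only those measurements are marked as metal-affected.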
We propose to use a gradient descent algorithm to minimize the objective function (2) by updating the variables in P_M. Let p_j be an entry in P_M. To find the gradient ∂F/∂p_j is not straightforward, because the min function in (1) makes (1) undifferentiable. We can use the subdifferential concept to find the gradient ∂F/∂p_j as [12,13]

∂F/∂p_j = 2 [A^T min{0, AP}]_j,    (3)
where A^T is the adjoint operator of the FBP algorithm and min{0, AP} sets each positive entry of AP to zero. Here, AP is the FBP image reconstruction using projections P, and A^T is the forward projection followed by the ramp filtration with the one-dimensional convolution kernel, which is defined as

h(n) = 1/4 for n = 0, h(n) = 0 for even n ≠ 0, h(n) = -1/(π²n²) for odd n.    (4)

The gradient descent iterative algorithm is given as

P_M^(k+1) = P_M^(k) - 2β D(A^T min{0, A P^(k)}),    (5)

where the superscript (k) is the iteration index. The projection vector P consists of two parts: the metal-affected part P_M and the metal not-affected part P_notM. The metal not-affected part P_notM does not get updated from iteration to iteration. The operator D in (5) is a dimension reduction operator that discards the entries in P_notM. The parameter β in (5) controls the step size of the gradient descent algorithm.
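A minimal numerical sketch of this update follows, assuming a small dense random matrix in place of the FBP operator and treating every projection as metal-affected (so D is the identity); neither assumption holds for the real scanner geometry, but the sketch shows the subgradient step driving the negative-pixel energy down:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # stand-in for the FBP operator
P = rng.standard_normal(10)         # stand-in projection vector

def objective(P):
    Y = np.minimum(0.0, A @ P)      # keep only the negative pixels, as in (1)
    return float(Y @ Y)             # squared L2-norm, as in (2)

beta = 1e-3
f_start = objective(P)
for _ in range(200):
    grad = 2.0 * A.T @ np.minimum(0.0, A @ P)  # subdifferential, as in (3)
    P = P - beta * grad                        # gradient descent step, as in (5)
f_end = objective(P)                           # f_end is smaller than f_start
```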
The proposed algorithm (5) was implemented in MATLAB and applied to CT data of airport bags. The original projections of the airport bags were acquired with an Imatron C300 clinical CT scanner. The contents and details were not disclosed to us; the detector and x-ray source details were unknown, and the objects were treated as unknown objects.
The step size β was chosen to be 1, and the number of iterations was 500. The original CT data resolution was 0.5 mm. The original cone-beam data was reformatted into parallel-beam, lower-resolution data with 0.92 mm spatial resolution in this paper. The number of views for the scaled-down version was 180 over 180°. The field-of-view was 475 mm. The image was 475 mm × 475 mm. The parallel-beam data had 597 bins on each detection row, and the bin size was 0.92 mm. The reconstructed image was a 420 × 420 two-dimensional array, and the pixel size was 0.92 mm.
In our airport bag application, the ground truth is unavailable. Therefore, quantitative evaluation is not appropriate. For metal artifact reduction evaluations, ref. [14] suggested task-based human observer studies or channelized Hotelling observer studies. The task is usually small lesion detection in medical imaging, and the ground truth should be known. Therefore, the suggested studies do not apply in our situation. The only evaluation we can perform is visual appearance evaluation, which is subjective and may not be reliable enough to make any definite conclusions. In this paper, we are very careful not to make any strong claims about the superiority of the proposed algorithm. The only claim we make in this paper is that the proposed method is different from the method that sets all negative image pixels to zeros.
Results
Some results obtained by the proposed algorithm are shown in Figs. 1, 2, 3, 4 and 5 for five different airport bags, respectively. Three images are shown in each figure: the raw FBP reconstruction, the result of the proposed algorithm followed by the FBP reconstruction, and the raw FBP reconstruction with negative pixels replaced by zeros. The negative values are shown as the darkest color. The metals appear as the brightest color. Since the attenuation coefficients of the metals are much greater than those of the rest of the object, the display window is set to [−0.1a, 0.45a], where a is the maximum image value. The display window for the raw image and the final image is the same.
The negative image pixel values appear only in the close neighborhood of the metals in the raw FBP reconstructions. After the proposed iterative algorithm removes the negative image pixels, the dark streaking artifacts are also reduced. This effect cannot be achieved by simply setting the negative image pixel values to zeros in the raw FBP reconstructions.
The raw FBP reconstruction images for bags 1-5 with the negative image pixel values replaced by zeros are also displayed in Figs. 1, 2, 3, 4 and 5 for comparison purposes, and they look almost the same as the raw FBP images.
The raw sinogram and processed sinogram are compared in Fig. 6. The proposed algorithm does not alter the sinogram values if they are not affected by the metals.
As a side product, the angular aliasing artifacts (due to insufficient view angles) are also reduced with the proposed algorithm.
Discussion
This paper uses a unique objective function for reducing the errors in the projection measurements. The errors are caused by the beam hardening effects of the metallic objects. The objective function is inspired by the observation that the metal artifacts in CT FBP reconstructions may have negative undershoots close to the metals. The new objective function penalizes those negative-valued pixels. It is interesting to observe that once the negative undershoots are removed, the streaking artifacts are significantly reduced, even though the streaking artifacts may not contain negative pixels.
The traditional iterative algorithm's main goal is to iteratively reconstruct the image. In contrast, we use the FBP algorithm to reconstruct the image in every iteration of the proposed algorithm. Most iterative algorithms use image pixels as the unknowns, while the proposed algorithm uses the metal-affected projections as the unknowns.
To our knowledge, this is the first time in image reconstruction that the L2-norm of the negative pixels is used as the objective function to be minimized.
It is not straightforward to optimize an objective function that is undifferentiable. We do not know the partial derivatives of the objective function with respect to the variables, which are the metal-affected projections. In this paper, we propose a subdifferential to approximate the gradient, which does not exist. With this subdifferential, a gradient descent algorithm is developed and tested with real CT data.
From another point of view, the proposed algorithm is able to minimize some features of the metal artifacts. The phenomenon of negative undershoots is one such feature. There could be other features; in principle, once we can express a feature, we are able to minimize it. In our previous paper, the TV was used as a feature of the metal artifacts [15]. The TV norm is useful and effective, but it may smooth the image too much.
We would like to point out that our method does not belong to the traditional category of projection data inpainting. In traditional projection data inpainting, the metallic objects are first removed from the image by segmentation methods, and the corresponding metal-affected projections are removed as well. The projection data inpainting methods then replace the removed projections by estimates from their neighbors. The metal-free image is reconstructed from the newly modified projections. The metal-only image and the metal-free image are combined to generate the final image. In our proposed algorithm, the metallic objects and their projections are never removed; we do not reconstruct metal-free and metal-only images separately. The proposed algorithm overcomes some difficulties of performing data inpainting.
Conclusions
This paper suggests that the total 'energy' of the negative image pixels be used as a feature of the metal artifacts.
R-convexity in R-vector spaces
In this paper, for every relation R on a vector space V, we consider the R-vector space (V, R) and define the notions of R-convexity, R-convex hull, and R-extreme point in this space. Some examples are provided to compare them with the reference cases. The effects of some operations on R-convex sets are investigated. In particular, it is shown that the R-interior of an R-convex set is also an R-convex set under some restrictions on R. Also, we give some equivalent conditions for R-extremeness. Moreover, the notions of R-convex and R-affine maps on R-vector spaces are defined, and some results that assert the relation between an R-convex map f and its R-epigraph under some limitations on R are considered. Several propositions, such as R-continuous maps preserve R-compact sets and R-affine maps preserve R-convex sets, are presented, and some results on the composition of R-convex and R-affine maps are considered. Finally, some applications of R-convexity are investigated in optimization. More precisely, we show that the extrema values of R-affine R-continuous maps are reached on R-extreme points. Moreover, local and global minimum points of an R-convex map f on an R-convex set K are considered.
Introduction and preliminaries
Various generalizations of the classical concept of a convex function have been introduced, especially during the second half of the twentieth century. These generalizations have been explored in various fields, such as economics, engineering, statistics, and applied sciences, and they have provided interesting results in several branches related to mathematics such as convex analysis, nonlinear optimization, linear programming, geometric functional analysis, control theory, and dynamical systems; see for example [2,13,21], and the references therein. Recently, the extensions of convexity have been considered by many researchers. For example, Nikoufar et al. studied convexity in various branches of pure and applied mathematical areas [3,18,25]. Also, we refer the readers to η-convexity and coordinate convexity [9,27,37]; GA-convexity and GG-convexity [15,20,39]; s-convexity [1]; preinvexity [35]; strong convexity [29,30,38]; quasi-convexity [32]; Schur convexity [28,34]; and pseudo-convexity [24]. Also, see the following recent related references: [12,19,31], and [36].
Over the last forty years, another type of extension of convexity, in which the convex coefficients need not commute with each other, has been considered. Examples include C * -convexity [22,23], matrix convexity and operator convexity [8,33], and the extension of C * -convexity to * -rings [4][5][6], and [7]. The basic concepts of convex analysis can be seen in [26] and [14].
Recently, the notions of orthogonal metric spaces and metric spaces with relation have been considered by many researchers [10,16,17], and [11]. In [16], the authors introduced R-metric spaces and studied some of the properties of these spaces. We recall some notions and some notations as follows.
Suppose that (M, d) is a metric space and R is a relation on M. Then the triple (M, d, R), or briefly M, is called an R-metric space. An R-sequence {x_n} in an R-metric space M is a sequence such that x_n R x_{n+k} for all n, k ∈ N. An R-sequence {x_n} is said to converge to x if, for every ε > 0, there is an integer N such that d(x_n, x) < ε for every n ≥ N; in this case, we write x_n →_R x. The R-sequence {x_n} is said to be an R-Cauchy sequence if, for every ε > 0, there exists an integer N such that d(x_n, x_m) < ε for all n, m ≥ N; it is clear that for such terms, x_n R x_m or x_m R x_n.
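For readability, the three conditions above can be restated in display form (same notation; a typeset restatement only, not new material):

```latex
% R-sequence, R-convergence, and R-Cauchy, restating the prose definitions
\begin{align*}
\{x_n\}\ \text{is an $R$-sequence} &\iff x_n \,R\, x_{n+k}\quad \text{for all } n,k\in\mathbb{N},\\
x_n \xrightarrow{R} x &\iff \forall\varepsilon>0\ \exists N\in\mathbb{N}:\ d(x_n,x)<\varepsilon\ \text{ for all } n\ge N,\\
\{x_n\}\ \text{is $R$-Cauchy} &\iff \forall\varepsilon>0\ \exists N\in\mathbb{N}:\ d(x_n,x_m)<\varepsilon\ \text{ for all } n,m\ge N.
\end{align*}
```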
Also, the concepts of open and closed sets are defined in these spaces. For E ⊆ M, an element x ∈ M is called an R-limit point of E if there exists an R-sequence {x_n} in E such that x_n ≠ x for all n ∈ N and x_n →_R x; the set of all R-limit points of E is the R-derived set of E.

The paper is organized as follows. We continue this introductory section with a review of the basic definitions and notations of relative metric spaces, i.e., metric spaces equipped with relations, that are needed in the next sections.
In Sect. 2, we first define the notions of R-vector space (a vector space equipped with a relation) and R-convexity in these spaces. After giving some examples that distinguish the notions of convexity and R-convexity in general, the effect of some operations on R-convex sets is investigated. More precisely, we show that the R-interior of an R-convex set is R-convex under certain constraints on R.
Section 3 is devoted to studying R-extreme points, the relative extreme points of R-convex sets. After defining this notion and giving some examples, we prove that every extreme point is an R-extreme point, but the converse is not necessarily true. Next, we define the R-convex hull of a set and establish conditions under which R-co(W) = W for an R-convex set W. In the main theorem of this section, we give several equivalent conditions for R-extremeness, and in the last example of the section, we show that, in general, a Krein-Milman type theorem does not hold. It seems that one can deduce a Krein-Milman type result for R-convex R-compact sets by putting additional restrictions on the relation R.
In Sect. 4, we introduce the notions of R-convex maps and R-affine maps on R-vector spaces. In classical convexity, f is a convex function if and only if the epigraph of f is a convex set; in this section, we prove such a result for R-convex maps and give some corollaries of this theorem. Then, under additional conditions on the relation R, we prove several propositions asserting that R-continuous maps take R-compact sets to R-compact sets and that R-affine maps preserve R-convexity. Also, the composition of an R-affine map and an R-preserving R-affine map is R-affine, and the composition of an increasing R-convex map and an R-preserving R-convex map is again an R-convex map.
The results presented in this manuscript provide tools for important applications in optimization theory. Finally, we concentrate on some applications of R-convexity in optimization. More precisely, we show that R-affine R-continuous maps take their extreme values on R-extreme points. Moreover, for an R-convex map f on an R-convex set K, the set of all elements of K on which f attains its minimum is an R-convex set, and in an R-vector metric space M, every local minimum x_0 of f is a global minimum of f on the set [x_0]_R ∩ K.
R-convex sets
In [10, 16], and [11], the authors considered spaces equipped with relations and obtained important and interesting results. It seems that some of these properties are independent of the relation, a fact that was not considered there. This section is devoted to the preliminaries of R-vector spaces needed to study the R-convexity property for sets. Some examples are given to clarify the contents.
Definition 2.1
Let R be a relation on a vector space V. Then V (or the pair (V, R)) is called an R-vector space.
In [16], the authors introduced R-convex sets for the R-metric space R^k. We recall this notion for an R-vector space as follows.
Definition 2.2 A subset W of an R-vector space V is said to be R-convex if λw_1 + (1 − λ)w_2 ∈ W whenever w_1, w_2 ∈ W, w_1 R w_2, and 0 < λ < 1. In this case, the combination λw_1 + (1 − λ)w_2 is called an R-convex combination of the two elements w_1 and w_2.
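Since the examples below probe R-convexity on subsets of R, a small numerical sketch may help. The helper names here are ours, not the paper's, and sampling λ on a grid gives only a necessary check of Definition 2.2, not a proof:

```python
from fractions import Fraction

# Necessary check of Definition 2.2 on sampled data: a set W is R-convex
# if lam*w1 + (1-lam)*w2 stays in W for every related pair w1 R w2 in W.
def is_r_convex(W_contains, pairs_in_relation, lambdas):
    """W_contains: membership predicate; pairs_in_relation: (w1, w2) with w1 R w2."""
    for w1, w2 in pairs_in_relation:
        if not (W_contains(w1) and W_contains(w2)):
            continue  # only pairs taken from W matter in the definition
        for lam in lambdas:
            if not W_contains(lam * w1 + (1 - lam) * w2):
                return False
    return True

lambdas = [Fraction(k, 10) for k in range(1, 10)]

# Example 2.4: V = R, R the equality relation, W = N. Then N is R-convex
# (w1 = w2 forces every combination to stay in N) although it is not convex.
naturals = lambda x: x == int(x) and x >= 1
eq_pairs = [(n, n) for n in range(1, 6)]          # equality relation sampled on W
print(is_r_convex(naturals, eq_pairs, lambdas))   # True

# With the full relation R = V x V, R-convexity reduces to ordinary
# convexity, and N fails: 1 R 2 but 1.5 is not a natural number.
all_pairs = [(1, 2)]
print(is_r_convex(naturals, all_pairs, lambdas))  # False
```

The first call mirrors Example 2.4 below; exact rational arithmetic (`Fraction`) avoids spurious floating-point misses of set membership.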
The following remark and examples illustrate the relation between the two notions of "convexity" and "R-convexity".

Remark 2.3 Every convex set W in an R-vector space V is an R-convex set. However, the converse is not true in general.
Example 2.4 Suppose that V = R and R is the equality relation on V , and W = N. Then N is an R-convex set, but it is not a convex set.
Similar examples can be constructed in which W is an R-convex set but not convex.
Example 2.7 Let V be an R-vector space such that R is an equivalence relation. If there exists v_0 ∈ V such that v_0 R v for all v ∈ V, then R = V × V and the notions of R-convexity and convexity coincide.

The intersection (and, under suitable assumptions, the union) of sets preserves the R-convexity property. In the next proposition, we investigate this subject.
Proposition 2.8 Let V be an R-vector space. Then the intersection of every family of R-convex sets in V is an R-convex set.
Furthermore, not all properties of convex sets hold for R-convex sets, as is illustrated in the following two remarks.
Remark 2.9 A scalar multiple of a convex set is convex, but this is not true for R-convex sets: if E is an R-convex set and α ∈ C, then the set αE is not necessarily R-convex. For example, take E = (0, 1) ∪ (2, 3) with a suitable relation on R.

Remark 2.10 For convex sets E_1 and E_2, the set E_1 + E_2 is also convex, but this is not valid for R-convex sets. To see this, let E_1 = (0, 2) ∪ (3, 5) and E_2 = {−1}, and for x, y ∈ R define x R y ⟺ x, y ∈ (1/2, 3), or x, y ∈ (3, 5), or x = y = −1.
It can be verified that E_1 and E_2 are R-convex, but the set E_1 + E_2 = (−1, 1) ∪ (2, 4) is not R-convex: for x = 3/4 and y = 2.5, we have x R y, yet some of their R-convex combinations are not in E_1 + E_2.
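Under the relation above (our reading of the partly garbled definition), Remark 2.10 can be verified numerically on a grid of rational points:

```python
from fractions import Fraction as F

# Assumed relation: x R y iff x,y in (1/2,3), or x,y in (3,5), or x = y = -1.
E1 = lambda x: 0 < x < 2 or 3 < x < 5          # E1 = (0, 2) U (3, 5)
E2 = lambda x: x == -1                          # E2 = {-1}
S  = lambda x: -1 < x < 1 or 2 < x < 4          # E1 + E2 = (-1, 1) U (2, 4)

def R(x, y):
    return (F(1, 2) < x < 3 and F(1, 2) < y < 3) or \
           (3 < x < 5 and 3 < y < 5) or (x == y == -1)

def r_convex(contains, points, lambdas):
    # necessary condition only: membership is probed on sampled related pairs
    return all(contains(l * a + (1 - l) * b)
               for a in points for b in points
               if contains(a) and contains(b) and R(a, b)
               for l in lambdas)

pts  = [F(k, 10) for k in range(-10, 51)]
lams = [F(k, 10) for k in range(1, 10)]

print(r_convex(E1, pts, lams))   # True: E1 is R-convex
print(r_convex(E2, pts, lams))   # True: E2 is R-convex
print(r_convex(S, pts, lams))    # False: E1 + E2 is not R-convex
# Witness from the text: x = 3/4 and y = 5/2 are related, but combinations
# between them fall in the gap [1, 2], outside (-1, 1) U (2, 4).
```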
The closure of any convex set is convex. This is investigated in the following example using an R-convex set and its R-closure.

Example 2.11 Let V = R and E = (0, 1) ∪ (4, 5).

The set of all interior points of a convex set is convex, but this is not true for R-convex sets; in other words, the R-interior of an R-convex set is not necessarily R-convex. The following example provides a counterexample.
In the following theorem, we provide conditions under which R-convexity is preserved from E to R-int(E).
Theorem 2.13 Let (M, d, R) be an R-metric vector space such that R is an equivalence relation on M with the following properties for every x, y, z ∈ M and every 0 < λ < 1:

i. x R y ⇒ x R (λx + (1 − λ)y);
ii. if x R z and {z_n} is an R-sequence with z_n →_R z, then x R z_n for all sufficiently large n.

Then the R-interior of every R-convex set E ⊆ M is R-convex.

Proof Let x, y ∈ R-int(E) with x R y, let 0 < λ < 1, and set z = λx + (1 − λ)y. The set E is R-convex, so x R y implies that z ∈ E. Using condition i, x R y implies x R z, and hence y R z (since R is an equivalence relation).
On the other hand, since z_n →_R z and x R z, condition ii gives x R z_n for all n ≥ N_1, and hence y R z_n for all n ≥ N_1, for some N_1 ∈ N. For each m ∈ N, define the auxiliary points x_{n,m} and y_{n,m} accordingly. Then, for each n ≥ N_1, in view of condition i, we have x R x_{n,m} and y R y_{n,m} for all m ∈ N.

Since R is an equivalence relation on M, from x R x_{n,m} for all m ∈ N it follows that {y_{n,m}}_{m=1}^∞ is an R-sequence with y_{n,m} →_R y as m → ∞. Thus, there are positive integers M_1 and M_2 such that x_{n,m} ∈ E for all m ≥ M_1 and y_{n,m} ∈ E for all m ≥ M_2. Setting M_0 = max{M_1, M_2}, we conclude that for all m ≥ M_0 the corresponding combinations lie in R-int(E), and the proof is completed.
R-extreme points
In this section, we define the concept of an R-extreme point for an R-convex subset of an R-vector space. We also define the R-convex hull of sets in R-vector spaces. The main results of this section are presented in Proposition 3.9 and Theorem 3.10, where some equivalent conditions for R-extremeness in special R-vector spaces are obtained.
Definition 3.1
In an R-vector space V, an R-open line segment is a set of the form {λv_1 + (1 − λ)v_2 : 0 < λ < 1} with v_1 R v_2; it is proper if v_1 ≠ v_2. A point v of an R-convex set W is called an R-extreme point of W if it does not lie on any proper R-open line segment whose endpoints belong to W. In the following, some examples are given to illustrate the concept of R-extreme points and the differences between extreme points and R-extreme points.
It is well known that W is a convex set and also an R-convex set, with ext(W) = ∅ but R-ext(W) = {(x, 0) : x ∈ R}: for every x ∈ R and 0 < λ < 1, writing (x, 0) = λw + (1 − λ)v with 0 ≤ w < v yields a contradiction, and hence (x, 0) cannot be written as a proper R-convex combination of elements of W. Note that the situation changes if '<' is replaced with '≤' in the relation R.

In another example, W is a convex set, and so R-convex. If the relation is replaced with a suitable one, then R-ext(W) = ext(W) = {(x, y) ∈ W : x² + y² = 1}.

Now, we define the concept of the R-convex hull of a set, and then impose some limitations on the relation R to obtain a necessary and sufficient condition for a set to be R-convex.

Definition 3.8 Let W be a subset of an R-vector space V. The R-convex hull of W, denoted R-co(W), is the set of all R-convex combinations of elements of W.

Proposition 3.9 Let V be an R-vector space such that the relation R has the following properties:
i. v R v for all v ∈ V.
ii. If v R v_1 and v R v_2, then v R (λv_1 + (1 − λ)v_2) for all v, v_1, v_2 ∈ V and every 0 < λ < 1.
Then a subset W of V is R-convex if and only if W = R-co(W).
Proof First, assume that W = R-co(W), and let v_1, v_2 ∈ W with v_1 R v_2. Then every R-convex combination λv_1 + (1 − λ)v_2 lies in R-co(W) = W, so W is R-convex. Conversely, let W be R-convex. By property i, each w ∈ W is a trivial R-convex combination of itself, so W ⊆ R-co(W). The reverse inclusion R-co(W) ⊆ W is obtained by induction on the number of elements in an R-convex combination, using property ii and the R-convexity of W; similarly, for every n ∈ N, we obtain R-co{v_1, ..., v_n} ⊆ W. Thus R-co(W) ⊆ W and the proof is complete.
Note that in Proposition 3.9 the given conditions on R are necessary, and if they are omitted, the result fails; to see this, let V = R and R := '<'.

In the last theorem of this section, some equivalent conditions for an element to be an R-extreme point are given.
Theorem 3.10 Let V be an R-vector space such that R is reflexive and such that v R v_1 and v R v_2 imply v R (λv_1 + (1 − λ)v_2) for all 0 < λ < 1. Then, for every R-convex subset W of V and every v ∈ W, the following statements are equivalent: v is an R-extreme point of W; v cannot be written as a nontrivial R-convex combination of elements v_1, ..., v_n ∈ W distinct from v, where n ∈ N; and W \ {v} is R-convex.

Proof i → ii. This is clear from the definition of an R-extreme point.
ii → iii. If λ = 1/2 we are done; otherwise, without loss of generality, assume 1/2 < λ < 1. Then the required equality is obtained, and v_1 R y by the assumption; therefore part iii is valid. iii → iv. By induction, the properties of R, and Proposition 3.9, we reach a contradiction, so λv_1 + (1 − λ)v_2 ≠ v. Finally, suppose v lies on a proper R-open line segment contained in W; then v = λv_1 + (1 − λ)v_2 for some 0 < λ < 1 with v_1 ≠ v_2, whence v ≠ v_1 and v ≠ v_2. Also, W \ {v} is R-convex and v_1, v_2 ∈ W \ {v}, so v ∈ W \ {v}, a contradiction; hence v is an R-extreme point of W.
One of the most important subjects is the Krein-Milman theorem for R-vector spaces. In the following example, we see that, in general, this theorem is not valid for an R-compact R-convex set in an R-vector space.
R-convex functions
An important part of mathematics is the study of maps between two spaces; one such type is the convex map. This section introduces R-convex maps and related concepts and considers their properties with respect to the relation R.

i. Let R_1 be another relation on V. A map f : V → V is called R-convex with respect to R_1 if, for each 0 < λ < 1 and v_1, v_2 ∈ V with v_1 R v_2, we have f(λv_1 + (1 − λ)v_2) R_1 (λf(v_1) + (1 − λ)f(v_2)).
ii. A map f : V → R is called R-convex if, for each 0 < λ < 1 and v_1, v_2 ∈ V with v_1 R v_2, we have f(λv_1 + (1 − λ)v_2) ≤ λf(v_1) + (1 − λ)f(v_2).
iii. A map f : V → V (or a function f : V → R) is called R-affine if, for each 0 < λ < 1 and v_1, v_2 ∈ V with v_1 R v_2, we have f(λv_1 + (1 − λ)v_2) = λf(v_1) + (1 − λ)f(v_2).

The following example illustrates that an R-convex map is not necessarily convex: for a suitable relation R, a piecewise-defined map f : R → R is R-convex on R but not convex, since for α = 1/2, v_1 = −1, and v_2 = 1 the convexity inequality fails.

In classical convexity, there is a direct relation between convex functions and their epigraphs. In the following theorem and its corollaries, under some conditions, we obtain similar results for R-convex maps.
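For orientation, the classical fact that this section generalizes (obtained when R is the full relation and the relation on values is '≤') can be displayed as:

```latex
% Classical special case: f : V -> R is convex iff its epigraph is convex.
\operatorname{epi}(f) = \bigl\{ (v,t)\in V\times\mathbb{R} : f(v)\le t \bigr\},
\qquad
f\ \text{convex} \iff \operatorname{epi}(f)\ \text{is a convex subset of } V\times\mathbb{R}.
```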
Theorem 4.4
Let R_1 and R_2 be two relations on a vector space V, and let f : V → V be a map. Assume that R_2 is transitive and reflexive and that, whenever w_1 R_2 w'_1 and w_2 R_2 w'_2, also (λw_1 + (1 − λ)w_2) R_2 (λw'_1 + (1 − λ)w'_2) for every 0 < λ < 1. Moreover, suppose that S is a relation on V × V with the following two properties:
i. v_1 R_1 v_2 implies (v_1, w_1) S (v_2, w_2) for (v_1, w_1), (v_2, w_2) ∈ R_2-epi(f);
ii. (v_1, w_1) S (v_2, w_2) implies v_1 R_1 v_2.
Then f is R_1-convex with respect to R_2 if and only if R_2-epi(f) is an S-convex set.
Proof Suppose that f is R_1-convex on V with respect to R_2, and let (v_1, w_1), (v_2, w_2) ∈ R_2-epi(f) with (v_1, w_1) S (v_2, w_2). Then f(v_1) R_2 w_1 and f(v_2) R_2 w_2, so by the property of R_2, for each 0 < λ < 1, we conclude that (λf(v_1) + (1 − λ)f(v_2)) R_2 (λw_1 + (1 − λ)w_2). By ii, we have v_1 R_1 v_2, and by the R_1-convexity of f, f(λv_1 + (1 − λ)v_2) R_2 (λf(v_1) + (1 − λ)f(v_2)). Now, since R_2 is transitive, we deduce f(λv_1 + (1 − λ)v_2) R_2 (λw_1 + (1 − λ)w_2). Therefore λ(v_1, w_1) + (1 − λ)(v_2, w_2) ∈ R_2-epi(f), and R_2-epi(f) is S-convex.

Conversely, let R_2-epi(f) be an S-convex set, and take 0 < λ < 1 and v_1, v_2 ∈ V with v_1 R_1 v_2. By the reflexivity of R_2 and property i, (v_1, f(v_1)) S (v_2, f(v_2)), with both pairs in R_2-epi(f). Then the S-convexity of R_2-epi(f) yields λ(v_1, f(v_1)) + (1 − λ)(v_2, f(v_2)) ∈ R_2-epi(f), and hence f(λv_1 + (1 − λ)v_2) R_2 (λf(v_1) + (1 − λ)f(v_2)). So f is R_1-convex on V with respect to R_2.
The special cases of the above theorem for a real vector space with different relations are given in the following corollaries. Recall that for f : R → R, the epigraph of f is {(x, y) : f(x) ≤ y}.

Corollary 4.5 Let R be a relation on the vector space R, let f be a map on R, and let S be a relation on R × R satisfying the conditions of Theorem 4.4 with R_2 taken to be '≤'. Then f is R-convex if and only if epi(f) is S-convex.

Proof Since the relation '≤' is reflexive and transitive on R, this is a straightforward consequence of Theorem 4.4.
Corollary 4.6
Assume that R is a relation on the vector space R, f is a map on R, and S is the relation induced by R on R × R. Then f is R-convex if and only if epi(f) is S-convex.

Proof This follows from Corollary 4.5.
Corollary 4.7 Let V be an R-vector space and f : V → R a function. Also, let S be a relation on V × V with the properties stated in Theorem 4.4. Then f is R-convex if and only if epi(f) is an S-convex set.

Proof It is a consequence of Theorem 4.4, since the relation '≤' is reflexive and transitive.
In classical convexity, every convex function on an open set is continuous, but there exist R-convex functions which are not R-continuous. For a suitable relation, a function f can be R-convex on the R-convex set R yet not R-continuous: setting x_n = 1/n for all n ∈ N, {x_n} is an R-sequence converging to zero, while {f(x_n)} does not converge to f(0).

It is known that every continuous map preserves compact sets. In the following proposition, we show that every R-continuous map, under an additional condition, preserves R-compact sets.
The goal of the following proposition is to show the preservation of R-convex sets under the special R-affine maps.
Proposition 4.3 In an R-metric vector space, the following statements hold:
i. Sums, differences, and scalar multiples of R-affine maps are again R-affine.
ii. If f and g are R-affine maps and g is an R-preserving map, then f∘g is also R-affine.
Proposition 4.4 Let f be an increasing R-convex function on the R-metric vector space M, and let g be an R-preserving R-convex map on M. Then f∘g is also an R-convex map.
Proof Let x, y ∈ M with x R y. Then g(x) R g(y). For 0 < α < 1,

(f∘g)(αx + (1 − α)y) = f(g(αx + (1 − α)y)) ≤ f(αg(x) + (1 − α)g(y)) ≤ αf(g(x)) + (1 − α)f(g(y)),

where the first inequality uses the R-convexity of g together with the monotonicity of f, and the second uses the R-convexity of f and g(x) R g(y). This shows that f∘g is R-convex.
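A quick numerical illustration of Proposition 4.4 (not a proof): with R the full relation, R-convexity reduces to ordinary convexity, and the composition inequality can be spot-checked on sample pairs. The function choices here are ours:

```python
import math

# With R = M x M, the proposition becomes the classical fact that an
# increasing convex f composed with a convex g is convex.
f = math.exp              # increasing and convex on R
g = lambda x: x * x       # convex (hence R-convex for the full relation)

def violates(h, x, y, alphas):
    """True if the convexity inequality fails for h at some sampled alpha."""
    return any(h(a * x + (1 - a) * y) > a * h(x) + (1 - a) * h(y) + 1e-12
               for a in alphas)

alphas = [k / 20 for k in range(1, 20)]
pairs = [(-2.0, 1.5), (-1.0, 3.0), (0.5, 2.5)]
print(all(not violates(lambda t: f(g(t)), x, y, alphas) for x, y in pairs))  # True
```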
Some applications in optimization
An optimization problem asks for the minimum or maximum of a given real function on a subset of its domain; in other words, one seeks the best available values of an objective function over a feasible set. Optimization theory and its techniques are useful and very important in a large area of applied mathematics. In this section, we study some results in optimization theory; more precisely, we study the extreme values of some R-convex maps on R-convex sets. In the first theorem, we show that every R-continuous R-affine function attains its extrema at R-extreme points.

Proof Let f take its maximum on B at x_0 ∈ B. Then there exists an R-sequence {x_n} ⊂ R-co(R-ext(K)) such that x_n →_R x_0. Notice that x_n = Σ_{i=1}^{N_n} λ_{n,i} y_{n,i}, where N_n ∈ N, y_{n,i} ∈ R-ext(K) for 1 ≤ i ≤ N_n, and λ_{n,i} ∈ (0, 1] with Σ_{i=1}^{N_n} λ_{n,i} = 1. Thus, by the R-affinity and R-continuity of f, the value f(x_0) is attained at some R-extreme point y_0. Since f(x_0) is the maximum of f on B, f(x_0) = f(y_0), and f attains its maximum on B at y_0. Similarly, we can prove the theorem for the minimum case.
In the succeeding proposition, we show that the set of all elements on which an R-convex function takes its minimum is an R-convex set.
Proposition 5.2 Let V be an R-vector space, K an R-convex subset of V, and f : K → R an R-convex function on K. Then the set B = {x ∈ K : f(x) = min_{y∈K} f(y)} is R-convex.
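The argument behind Proposition 5.2 can be sketched as follows (our reconstruction of the omitted proof, with m = min_{y∈K} f(y)): for x, y ∈ B with x R y and 0 < λ < 1,

```latex
% The combination attains the minimum value m again:
m \le f\bigl(\lambda x + (1-\lambda)y\bigr)
  \le \lambda f(x) + (1-\lambda) f(y)
  = \lambda m + (1-\lambda) m = m ,
```

so the combination lies in B (it lies in K because K is R-convex, and it attains the minimum by the squeeze above).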
The following theorem asserts that every local minimum is a global minimum for R-convex functions.

Proof Suppose that f attains its minimum at x_0 on a neighborhood N of x_0, and let x ∈ [x_0]_R ∩ K. Then, for sufficiently small λ > 0, the point (1 − λ)x_0 + λx lies in N, so f(x_0) ≤ f((1 − λ)x_0 + λx) ≤ (1 − λ)f(x_0) + λf(x), and hence λ(f(x) − f(x_0)) ≥ 0, which implies that f(x_0) ≤ f(x), and the proof is completed. In addition, if x_0 R x for all x ∈ K, then x_0 is a global minimum of f on K, since f(x_0) ≤ f(x) for all x ∈ K.

Proof Since f is strictly R-convex on K, as in the proof of the previous theorem, we obtain f(x_0) < f(x) for all x ∈ [x_0]_R ∩ K with x ≠ x_0.

Theorem 5.5 Let (V, R) be an R-vector space such that R is an equivalence relation on V with the following property: a R b ⇒ a R (λa + (1 − λ)b) for all λ ∈ (0, 1).
If K is an R-convex subset of V and f : K → R is an R-convex function which has a global maximum at x_0, then f is constant on [x_0]_R ∩ K.
Microbiological status of bulk tank milk and different flavored gomolya cheeses produced by a milk producing and processing plant
In bulk milk, the mean coliform count was 3.83±0.17 log10 CFU/ml, the mean E. coli count was 1.38±0.14 log10 CFU/ml, the mean mold count was 3.74±1.30 log10 CFU/ml, and the S. aureus count was <1.00 log10 CFU/ml. In the gomolya cheeses, the mean coliform count was 3.69±1.00 log10 CFU/g, the mean E. coli count was 2.63±0.58 log10 CFU/g, the mean S. aureus count was 3.69±1.35 log10 CFU/g, and the mean mold count was 1.74±0.37 log10 CFU/g. By flavor, the coliform counts in the gomolya cheeses were 2.91±0.61 log10 CFU/g (natural), 3.42±0.46 log10 CFU/g (dill), 4.60±1.00 log10 CFU/g (onion), and below 2.00 log10 CFU/g (garlic).
INTRODUCTION
Milk contains nutrients that are important for the human body, for example proteins, fats, vitamins, minerals and water. These nutrients are also needed by microbes; therefore, many microorganisms can be present in milk, including pathogenic organisms, e.g. Staphylococcus aureus (Akindolire et al. 2015). In addition, the high water activity and neutral pH of milk provide optimum conditions for their growth (Deák 2006, Quigley et al. 2011). The milk of a healthy cow contains a small amount of microbes and is considered sterile in the udder, but during handling after milking it can easily be contaminated (Biró 2014). Contamination may occur, for example, from the skin of the animals, from the environment, or from milking machines, milk lines and storage tanks. Mostly, heat treatment is used to reduce the number of bacteria. The initial microbiological quality of the milk is important not only for food safety; it can also influence the quality of the dairy products (Cilliers et al. 2014).
Some of the most popular dairy products are cheeses, which have many varieties and forms around the world (El-Hofi et al. 2010). The cheese making process starts with the curdling of milk, followed by the molding of curd. The cheese is then pressed, salted, and then, for certain types of cheeses, maturation follows (Laczay 2008). The Codex Alimentarius Hungaricus Directive Number 2-51 (2004) contains requirements for milk and dairy products. According to this directive, the gomolya cheeses are made with mixed (acidic and enzymatic) curdling and can be consumed immediately after production, so gomolya cheeses can be regarded as fresh cheese.
According to the Codex Alimentarius Hungaricus, for the manufacture of dairy products the materials should meet the relevant requirements, national recommendations, commercial requirements and dietary goals.
Food manufacturing companies try to meet consumers' expectation of safe and high quality products by operating rigorous quality management systems. However, despite their efforts, there may be problems with the presence of microbes (e.g. S. aureus, Escherichia coli, etc.) (El-Hofi et al. 2010). The contamination of dairy products may occur in the dairy plant itself or on the farms due to improper hygiene practices (Campolo et al. 2013). Hygiene indicator microorganisms can give a picture of the microbiological status of foodstuffs and their environment; indicator microbes include, for example, coliform bacteria, E. coli and molds (Vasek et al. 2008, Campolo et al. 2013, Martin et al. 2016).
For about a century, coliform bacteria have been used as indicator microorganisms. These bacteria are gram-negative, aerobic or facultatively anaerobic, and do not produce spores. They ferment lactose at 32-35 °C, producing acid and gas (Martin et al. 2016). As they are generally present in the environment, their presence in food may indicate environmental contamination. The coliform bacteria include E. coli, which is considered a frequent contaminant of raw and processed milk. E. coli is found in the intestinal tract of most mammalian species, so it can serve as an indicator of fresh faecal contamination. It can get into different foods (such as milk and dairy products) from different sources, and its presence may indicate the presence of enteropathogenic and/or toxigenic microbes (Altalhi and Hassan 2009, Mhone et al. 2011).
Milk provides excellent conditions for the growth of Staphylococci and their production of enterotoxin. Enterotoxins produced by enterotoxin-producing S. aureus strains can cause food poisoning in people who consume food contaminated with this bacterium. The symptoms (diarrhea, vomiting, abdominal cramps) appear 1 to 8 hours after the contaminated food is consumed. In dairy farms, raw milk may be contaminated with the bacteria from the environment, from the hands of the milkers, from the milking equipment and from the animal skin (Korpysa-Dzirba and Osek 2011).
Molds are often found in raw milk but do not survive pasteurisation. In pasteurized milk or dairy products, they may occur when re-infection happens during manufacturing. Some molds play a role in manufacturing dairy products (e.g. Penicillium camemberti, Penicillium roqueforti), but the presence of other molds is undesirable, because they can impair the organoleptic properties of the dairy products or even pose a health hazard through the production of mycotoxins (Wouters et al. 2002, Torkar and Teger 2006). Molds can also be considered indicators of environmental contamination (Vasek et al. 2008).
In this study, our aim was to assess the microbiological status of the bulk milk of a milk-producing farm and of some natural and flavored (garlic, dill, onion) gomolya cheeses made from pasteurized milk in its own processing plant. We determined the counts of some indicator microbes, i.e. coliform bacteria, E. coli and molds, as well as the amount of S. aureus. Based on the results, we suggest further hygiene studies to gain a better understanding of the microbiological status of the dairy products.
Place and date of sampling
For the microbiological examination, we collected bovine bulk milk samples (n=3) in sterile plastic sample tubes from a medium-sized milk-producing farm. The housing technology used on the farm is deep litter, and the Hungarian Spotted cattle are milked in a milking parlour.
The milk is processed in the farm's own dairy plant, and a variety of dairy products, for example different flavored gomolya cheeses are produced, and then sold in the farm's own retail units. In this study, 8 gomolya cheeses in 4 flavors (natural, garlic, dill, onion) were examined. The samples were tested at the Microbiological Laboratory of the Institute of Food Science, University of Debrecen (Hungary). The tests were carried out in July, August and September of 2017.
Microbiological analysis
Preparations for the tests were carried out in accordance with the MSZ EN ISO 6887-1 (2000) standard. The milk sample was stored in the refrigerator (at 4 °C) until testing and homogenized by shaking before the decimal dilutions were prepared. For the preparation of the cheese samples, the packaging was removed and 10 g of sample was added, under sterile conditions, to a sterile Stomacher® bag (Seward Ltd., UK) carrying the appropriate sample identification mark, together with 90 ml of sterile peptone water. For one liter of peptone water, 8.5 g of sodium chloride (VWR International Ltd., Hungary) and 1.0 g of peptone (Merck Kft., Hungary) were dissolved in distilled water and then sterilized. The sample was then homogenized in a lab blender for 2 minutes (paddle speed: 240/min; fixed speed: 8 strokes/s).
To prepare the decimal dilution line, 9 ml of the peptone water was measured in test tubes, which were then sterilized in pressure cooker for 30 minutes at about 120 °C.
Microbiological tests were performed according to standards for the microbes.
The determination of the coliform count was carried out in accordance with the ISO 4832 (2006) standard. Sterile Violet Red Bile Lactose (VRBL) agar (Biolab Ltd., Hungary) was used and the determination was done by the pour plate technique: 1 ml of each dilution was pipetted into a sterile plastic Petri dish, the medium was poured onto it and mixed, allowed to solidify, and the plates were then incubated at 37 °C for 24 hours.
The amount of E. coli was determined according to the MSZ ISO 16649-2 (2005) standard. Sterile Tryptone Bile X-Glucuronide (TBX) agar (Biolab Ltd., Hungary) was used, and the samples were prepared by pour plate technique. Incubation lasted for 18 to 24 hours at 37 °C.
The S. aureus count was determined by spread plate method, in accordance with the MSZ EN ISO 6888-1 (2008) standard. Baird-Parker agar (Biolab Ltd., Hungary) supplemented with egg yolk tellurit emulsion (LAB-KA Ltd., Hungary) was used, and the plates were incubated at 37 °C for 48 hours. When performing the spread plate technique, 0.1 ml of the dilutions was pipetted onto the medium and then spread by a sterile glass rod. The identification of S. aureus was performed by latex agglutination test kit (Prolex Staph Xtra Kit, Ferol Ltd., Hungary). The evaluation of the amount of S. aureus in bulk milk was performed according to the regulation of the Hungarian Ministry of Agriculture and Regional Development and the Hungarian Ministry of Health, Social and Family Affairs 1/2003 (I. 8).
The determination of the mold count was carried out in accordance with the MSZ ISO 21527-1 (2013) standard. Dichloran Rose Bengal Chloramphenicol (DRBC) agar (VWR International Ltd., Hungary) was used, and the determination was done by the spread plate technique. The plates were incubated at 25 °C. Because rapidly growing molds can be a problem, the colonies were counted after 2 days and again after 5 days.
Statistical analysis
Calculation of averages and standard deviations (SD), logarithmic transformation of the microbial counts, t-tests and analysis of variance were performed using SPSS v22.0 (SPSS 2013).
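The plate-count arithmetic behind values such as "3.83±0.17 log10 CFU/ml" can be sketched as follows; the colony numbers here are illustrative, not the study's raw data:

```python
import math
from statistics import mean, stdev

# A colony count from a plated dilution is converted to CFU per ml (or per g)
# and then log10-transformed before averaging, as described above.
def cfu_per_unit(colonies, dilution_exponent, plated_volume_ml):
    """CFU/ml (or CFU/g) = colonies / (dilution factor x volume plated)."""
    return colonies / (10.0 ** -dilution_exponent * plated_volume_ml)

# e.g. 68, 54, and 81 colonies on the 10^-2 pour plates (1 ml plated each)
counts = [cfu_per_unit(n, 2, 1.0) for n in (68, 54, 81)]
logs = [math.log10(c) for c in counts]
print(f"{mean(logs):.2f} ± {stdev(logs):.2f} log10 CFU/ml")  # 3.82 ± 0.09 log10 CFU/ml
```

Averaging the log-transformed counts (rather than the raw CFU values) matches the mean±SD format reported throughout the Results section.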
RESULTS AND DISCUSSION
The mean coliform count in bulk milk was 3.83±0.17 log10 CFU/ml, which exceeds the limit (m=1.00 log10 CFU/ml) set in the regulation of the Hungarian Ministry of Health 4/1998 (XI.11). In the case of the cheeses, a colony count lower than 2.00 log10 CFU/g was measured in the garlic flavored gomolya cheeses, which is therefore not shown in Figure 1. For the other flavors, values higher than the limit (m=1.00 log10 CFU/g) were detected: the mean coliform count was 2.91±0.61 log10 CFU/g in the natural flavored cheeses, 3.42±0.46 log10 CFU/g in the dill flavored cheeses, and 4.60±1.00 log10 CFU/g in the onion flavored cheeses. The highest value was detected in the onion flavored gomolya, and its results differ significantly from those of the other cheeses (P<0.05).
In the bulk milk samples, the mean E. coli count was 1.38±0.14 log10 CFU/ml, so the limit (m=0.00 log10 CFU/ml) was exceeded. In the gomolya cheeses, less than 1.00 log10 CFU/g of E. coli was present, except for the dill flavored cheeses, so the results of the other gomolya cheeses could not be illustrated in Figure 2. The amount of E. coli in the dill flavored gomolya cheeses (2.63±0.58 log10 CFU/g on average) was higher than in the bulk milk, but the difference was not significant (P>0.05). The limit for E. coli in cheeses is not specified in the regulation. Less than 1.00 log10 CFU/ml of S. aureus was detected in bulk milk, which is below the limit (m=2.70 log10 CFU/ml) set in the regulation of the Hungarian Ministry of Agriculture and Regional Development and the Hungarian Ministry of Health, Social and Family Affairs 1/2003 (I. 8). In the natural and garlic flavored gomolya cheeses, less than 1.00 and 3.00 log10 CFU/g of S. aureus were detected, respectively; hence these results are not shown in Figure 3. In the dill and onion flavored gomolya cheeses, the mean S. aureus counts were 3.66±1.86 log10 CFU/g and 3.71±0.52 log10 CFU/g, with no significant difference between them (P>0.05). These values exceeded the limit for S. aureus in cheeses; contamination of the cheeses with the bacterium can occur during manufacturing or handling of the finished products.
In bulk milk, the mean mold count was 3.74±1.30 log10 CFU/ml; there is no limit in the regulation related to this. In the onion flavored gomolya cheeses, the mold count (1.74±0.37 log10 CFU/g on average) exceeded the limit (m=1.00 log10 CFU/g). There were fewer molds in the finished product than in the bulk milk, but the difference was not significant (P>0.05). In the other gomolya cheese samples, less than 1.00 and 3.00 log10 CFU/g of molds were detected.

The microbiological quality of the tested bulk milk and gomolya cheese samples is summarized in Table 1. In the bulk milk samples, the mean coliform count was 3.83±0.17 log10 CFU/ml. Peles et al. (2008) also studied the amount of coliform bacteria in the bulk milk of dairy farms and found a mean coliform count of 1.77±1.18 log10 CFU/ml for a medium-sized farm, which is lower than the values we detected. El-Hamdani et al. (2016), in their study of bovine raw milk, reported coliform counts of 2.78 log10 CFU/ml and 3.48 log10 CFU/ml for autumn and spring, also lower than our values. In our study, the mean E. coli count was 1.38±0.14 log10 CFU/ml, which is higher than the mean E. coli count (1.09±1.05 log10 CFU/ml) in the study of Peles et al. (2008) and lower than the mean count in the raw milk samples (6.2±5.5 log10 CFU/ml) of smallholder dairy farms examined by Mhone et al. (2011). The mean mold count in this study was 3.74±1.30 log10 CFU/ml, higher than the mean mold count of Peles et al. (2008), who detected 1.03±0.67 log10 CFU/ml. In this study, the amount of S. aureus was less than 1.00 log10 CFU/ml in bulk milk. Peles et al. (2007) studied the amount of S. aureus in the bulk milk of two farms and obtained 3.15 log10 CFU/ml and 2.41 log10 CFU/ml, respectively, both higher than the count obtained in our study. Mhone et al. (2011) also obtained a higher mean S. aureus count (5.4±5.1 log10 CFU/ml) when examining raw milk of smallholder dairy farms.
In the cheese samples, the mean coliform count was 3.69±1.00 log10 CFU/g. The mean E. coli count was 2.63±0.58 log10 CFU/g, lower than the mean E. coli count (6.15 log10 CFU/g) reported by Torkar and Teger (2006), who evaluated the microbiological quality of cheese samples produced at small dairy-processing plants. In our study, the mean S. aureus count was 3.69±1.35 log10 CFU/g and the mean mold count was 1.74±0.37 log10 CFU/g, as shown in Table 1. In both cases, lower values were obtained than those reported by Torkar and Teger (2006), whose mean S. aureus and mold counts were 4.40 and 4.30 log10 CFU/g, respectively.
CONCLUSION
Coliform bacteria can be used as hygiene indicator microbes. As they are generally present in the environment, their presence in food may indicate environmental contamination (Altalhi and Hassan 2009, Mhone et al. 2011, Martin et al. 2016). Since the hygiene status of dairy products is indicated by the presence of coliform bacteria (including E. coli), their amount was determined in the bulk milk and gomolya cheese samples. In the majority of samples, coliform bacteria were detected above the limit, which is 1.00 log10 CFU/g for cheeses and 1.00 log10 CFU/ml for bulk milk. The samples were therefore contaminated from the environment, either during processing of the milk or during handling of the dairy products.
E. coli is also often used as a hygiene indicator microbe, as it can signal direct or indirect faecal contamination because of its origin in the human and animal intestinal tract (Ombarak et al. 2016). Based on our results, E. coli was present above the limit value in the bulk milk, suggesting inadequate hygiene conditions during milk production.
S. aureus can cause significant problems on dairy farms, as it is one of the microbes responsible for mastitis in dairy animals. It can get into the milk from an animal suffering from mastitis; the contaminated milk may then pose a public health hazard to the consumer. If enterotoxin-producing strains are present in the milk and the amount of bacteria exceeds 10^5 CFU/ml, food poisoning can occur (Hill et al. 2012). In our study, less than 10 CFU/ml of S. aureus was detected in bulk milk, below the amount of microbes that would cause a food-borne disease in the consumer. The low colony count indicates that the milk was not contaminated with S. aureus from either the animal or the environment. In the dill- and onion-flavored gomolya cheeses, the S. aureus count was above the limit. Contamination of the cheeses with this bacterium can occur during cheese making or when handling the finished products, or it may have been introduced into the products through the spices used as flavoring.
Raw milk or pasteurized milk is used for cheese production. Pasteurization reduces the amount of microbes in milk, including molds, which means that molds can get into the finished products during the production process or when handling the products (Valkaj et al. 2013). In this study, a large amount of mold was detected in bulk milk, which may have several explanations, e.g. their number did not decrease during pasteurization, or the milk handling was not adequate after pasteurization.
Phylogenetic tree shapes resolve disease transmission patterns
The shapes of phylogenies of pathogens can reveal patterns in how an outbreak spreads. We used simple features to summarise the shapes of pathogen phylogenies. This provided enough information to distinguish outbreaks with super-spreaders, outbreaks spreading homogeneously, and those with chains of transmission.
INTRODUCTION
Whole-genome sequence data contain rich information about a pathogen population from which several evolutionary parameters and events of interest can be inferred. When the population in question comprises pathogen isolates drawn from an outbreak or epidemic of an infectious disease, these inferences may be of epidemiological importance, able to provide actionable insights into disease transmission. Indeed, since 2010, several groups have demonstrated the utility of genome data for revealing pathogen transmission dynamics and identifying individual transmission events in outbreaks [1][2][3][4][5][6][7][8][9], with the resulting data now being used to inform public health's outbreak management and prevention strategies. To date, these reconstructions have relied heavily on interpreting genomic data in the context of available epidemiological data, drawing conclusions about transmission events only when they are supported by both sequence data and plausible epidemiological linkages collected through field investigation and patient interviews.
Given the rapidly growing interest in this new field of genomic epidemiology, several recent studies have explored whether transmission events and patterns can be deduced from genomic data alone. Phylogenies derived from whole-genome sequence data can be compared with theoretical models describing how a tree should look under particular processes; this has been done for viral sequence data over the past several decades [10,11]. For example, predicted branch lengths from sequences modelled using birth-death processes can be compared with branch lengths in trees inferred from viral sequence data to explore transmission patterns [1,[12][13][14]. The field of linking properties of pathogen phylogenies to underlying dynamics is termed 'phylodynamics', a term coined by Grenfell et al. [15]. Tools from coalescent theory have been adapted to pathogen transmission; where coalescent theory describes probability distributions on trees under a given model for the population size, epidemiological versions take into account pathogen prevalence (population size) as well as incidence [16,17]. These approaches are powerful but computationally intensive, and they have not explicitly focused on another potential source of information within a phylogeny: 'tree shape'.
The number of different phylogenetic tree shapes on n leaves is a combinatorially exploding function of n: there are (2n-3)(2n-5)(2n-7)...(5)(3)(1) rooted labelled phylogenetic trees, or roughly 10^184 trees on 100 tips, compared with roughly 10^80 atoms in the universe. For the increasingly large outbreak genome datasets being obtained and analysed (390 [3], 616 [18] and recently 1000 [19] bacterial genomes), the numbers of possible tree shapes are effectively infinite. In the homogeneous birth (Yule) model, the distribution of labelled histories (tree shape together with the ordering of internal nodes in time) is uniform, so that there is a close relationship between the branching times and the tree shapes [20]. Perhaps for this reason, tree shapes have not typically been seen as very informative. However, for bacterial pathogens, particularly those with long durations of carriage and variable infectious rates, there is variability in the infection process which is not captured by homogeneous models. This motivates the question: does tree shape carry epidemiological information? Recent work indicates that tree shape reveals aspects of the evolution of viral pathogens [13,[21][22][23][24], but to date we have lacked methods that exploit tree shape in an analysis of pathogen transmission dynamics, built upon simulated data and validated using real-world outbreak data.
Host contact network structure is one of the most profound influences on the dynamics of an outbreak or epidemic, and outbreak management and control strategies depend heavily upon the type of transmission patterns driving an outbreak. It is reasonable to expect that pathogen genomes spreading over different contact network structures (chains, homogeneous networks, or networks containing super-spreaders, as illustrated in Fig. 1) would accrue mutations in different patterns, leading to observably different phylogenetic tree shapes. We therefore characterized the structural features of phylogenetic trees arising from the simulated evolution of a bacterial genome as it spreads over multiple types of contact network. We found simple topological properties of phylogenetic trees that, when combined, can be used to classify trees according to whether the underlying process is chain-like, homogeneous, or super-spreading, demonstrating that phylogenetic tree structure can reveal transmission dynamics. We use these properties as the basis for a computational classifier, which we then use to classify real-world outbreaks. We find that the computational predictions of each outbreak's overall transmission dynamics are consistent with known epidemiology.
Transmission model
We simulated disease transmission networks with three different underlying transmission patterns: homogeneous transmission, transmission with a super-spreader and chains of transmission. Each simulation started with a single infectious host who infects a random number of secondary cases over his or her infectious period; each secondary case infects others, and so on, until the desired maximum number of cases is reached. The models share two key parameters: a transmission rate β and a duration of infection parameter D. Our baseline values are β = 0.43 per month and D = 3 months, reflecting a basic reproduction number of 1.3, which is also the mean number of secondary infections for each infectious case. We do not consider depletion of susceptible contacts over time (saturation), as we model small growing outbreaks at or near the beginning of their spread in a community, and our data (for tuberculosis (TB) in a developed setting) suit this assumption. The homogeneous transmission model assigns each infectious host a number of secondary infections drawn from a Poisson distribution with parameter R0 = βD. New infections are seeded uniformly in time over the host's infectious period. In the super-spreader model, one host (chosen at random from the first five hosts) seeds 7-24 new infections (uniformly at random), and all other hosts are as in the homogeneous transmission model. In the chain-of-transmission model, almost all hosts infect precisely one other individual; however, 2 (with probability 2/3) or 3 (with probability 1/3) of the hosts infect two other individuals, so that the transmission tree consists of several chains of transmission randomly joined together.
Durations of infection are drawn from a gamma distribution with a shape parameter of 1.5 and a scale parameter of D/1.5. To reflect transmission of a chronically infecting pathogen, such as Mycobacterium tuberculosis, cases were infectious for between 2 and 14 months, with an average specified by D. The mean infectious period was 4.3 months; a histogram is shown in supplementary Fig. S2. We simulated 1000 outbreaks containing a super-spreader, 1000 with homogeneous transmission and 1000 chain-like outbreaks. These used a fixed parameter set; we also performed a sensitivity analysis using alternative parameters. To ensure that the size of the outbreak did not affect the tree shape and classification, we simulated outbreaks with 32 hosts, a similar size to the real-world outbreaks we later investigated. We consider the effects of phylogenetic noise in the Supplementary Material.
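The three branching processes above can be sketched in a few lines. This is an illustrative re-implementation, not the authors' code: the function and variable names are ours, and the chain model is simplified so that each host has a small fixed chance of infecting two others, rather than exactly 2 or 3 such hosts per outbreak.

```python
import random

def poisson(rng, lam):
    """Knuth's algorithm for a Poisson draw using only the stdlib RNG."""
    L, k, p = 2.718281828459045 ** -lam, 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_outbreak(model, n_cases=32, beta=0.43, D=3.0, seed=None):
    """Return a list of (infector, infectee, time) transmission events
    under 'homogeneous', 'super' (super-spreader) or 'chain' dynamics."""
    rng = random.Random(seed)
    R0 = beta * D                                # basic reproduction number
    ss = rng.randrange(5) if model == "super" else -1
    events, queue, n = [], [(0, 0.0)], 1
    while queue and n < n_cases:
        host, t0 = queue.pop(0)
        dur = rng.gammavariate(1.5, D / 1.5)     # infectious period
        if model == "chain":
            k = 2 if rng.random() < 0.1 else 1   # simplification: see lead-in
        elif host == ss:
            k = rng.randint(7, 24)               # the super-spreader
        else:
            k = poisson(rng, R0)                 # homogeneous transmission
        for _ in range(k):
            if n >= n_cases:
                break
            t = t0 + rng.uniform(0, dur)         # seeded uniformly over the period
            events.append((host, n, t))
            queue.append((n, t))
            n += 1
    return events
```

Because every host in the chain model infects at least one other, a chain outbreak always reaches the target size; the homogeneous and super-spreader runs can die out early, as in the branching processes they mimic.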
Genealogies and phylogenies from the process
We extracted the true genealogical relationships as a full rooted binary tree (a 'phylogeny'), with tips corresponding to hosts and internal nodes corresponding to transmission events among the hosts, as follows. The outbreak simulations create lists of who infected whom and at what time. Each host also has a recovery time. We sort the times of all of the infection events, and proceed in reverse order. The last infection event must correspond to a 'cherry', i.e. it must have two tip descendants, one corresponding to the infecting host and one to the infectee. For all other infection events, proceeding in reverse order through the transmission history, we create an internal node, and determine its descendants by determining whether the infector and the infectee went on to infect anyone else subsequently. If not, then the node's descendants are the infector and infectee at the time of sampling. If so, then the descendant represents the infector or infectee at the time of their next transmission. The tree is rooted at the first infection event. Branch lengths correspond to the times between infection events or, for tips, the time between the infection event and the time of sampling. This approach uses the simplifying assumption that branching points in the pathogen genealogy correspond to transmission events, as is done in almost all phylodynamic methods (see [1,17,24]). However, where there is in-host pathogen diversity, transmission events do not correspond to phylogenetic branching points [7,9]. We comment on the constraints tree shape places on the space of possible transmission trees consistent with a phylogeny in [9].
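The reverse-time construction described above can be sketched as follows. This is a toy re-implementation with our own naming; branch lengths are omitted, so only the tree shape is produced, returned as a Newick string:

```python
def genealogy(events):
    """Build the true genealogy (shape only) from a list of
    (infector, infectee, time) transmission events.

    Events are walked in reverse chronological order; each one becomes
    an internal node joining the infector's and infectee's current
    lineages. Tips are the sampled hosts; the root is the first event.
    """
    hosts = {0} | {v for _, v, _ in events}
    cur = {h: str(h) for h in hosts}            # each host starts as a tip
    for u, v, _ in sorted(events, key=lambda e: e[2], reverse=True):
        cur[u] = "(%s,%s)" % (cur[u], cur[v])   # transmission = branching point
        del cur[v]
    return cur[0] + ";"

# Host 0 infects 1 and 2; host 1 infects 3:
# genealogy([(0, 1, 1.0), (0, 2, 2.0), (1, 3, 3.0)]) gives "((0,2),(1,3));"
```

The last event processed this way is automatically a cherry, matching the description in the text.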
In the main text, we use the true genealogical relationships among the hosts in our outbreak, extracted from the simulations; this reduces phylogenetic noise and allows us to compare the resulting trees to 1000 samples of the BEAST posterior timed phylogenies derived from whole-genome sequence (WGS) data from the two real-world outbreaks. To determine how sensitive our approach is to phylogenetic noise, we also classified the outbreaks using neighbour-joining phylogenies derived from simulated gene sequences (Supplementary Information).
Topological summaries of trees
Eleven summary metrics were used to summarize the topology of the trees (see supplementary Table S1).
(1) Imbalance. The Colless imbalance [25] is defined as 2/((n-1)(n-2)) × Σ_{i=1..n-1} |T_ri − T_li|, where n is the number of tips and T_ri and T_li are the numbers of tips descending from the right and left sides at internal node i. It is a normalized measure of the asymmetry of a rooted full binary tree: a completely asymmetric tree has an imbalance of 1 and a symmetric tree has an imbalance of 0 [26]. The Sackin imbalance [27] is the average length of the paths from the leaves to the root of the tree.
(2) Ladders, IL nodes. We define the 'ladder length' to be the maximum number of connected internal nodes with a single leaf descendant, divided by the number of leaves in the tree. This measure is not unrelated to tree imbalance but is more local: a long ladder motif may occur in a tree that is otherwise quite balanced. For this reason, ladder length may detect trees in which there has been differential lineage splitting in some clades or lineages, but where this occurred too locally, or in clades that are too small, to have affected traditional approaches to characterizing rapid expansion in some lineages. Furthermore, traditional ways of detecting positive selection may not be appropriate in this context because the super-spreader, if present, does not pass any advantageous property to descendant infections. The portion of 'IL' nodes is the portion of internal nodes with a single leaf descendant.
(3) Maximum width; maximum width over maximum depth. The 'depth' of a node in a tree is the number of edges between that node and the tree's root. The 'width' of a tree at a depth d is the number of nodes with depth d. We calculated the maximum width of each tree divided by its maximum depth (max d, the maximum depth of any leaf in the tree).
(4) Maximum difference in widths. We compared Δw = max_i {|w(d_i) − w(d_{i−1})|} in the trees. This summary reflects the maximum absolute difference in widths from one depth to the next, over all depths d_i in the tree.
(5) Cherries. A cherry configuration is a node with two leaf descendants.
(6) Staircase-ness. We use two measures of the 'staircase-ness' of phylogenies defined by Norström et al. [21]: (i) the portion of subtrees that are imbalanced (i.e. that have different numbers of descending tips on the left and right sides) and (ii) the average of min(T_li, T_ri)/max(T_li, T_ri) over the internal nodes of the tree.
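Several of these summaries are straightforward to compute on a tree represented as nested pairs. The sketch below is our own code (not the phyloTop implementation) and computes the Colless imbalance, the number of cherries and the normalised ladder length:

```python
def tips(t):
    # number of leaves below node t (a leaf is any non-tuple label)
    return 1 if not isinstance(t, tuple) else tips(t[0]) + tips(t[1])

def colless(t):
    """Normalised Colless imbalance: 2/((n-1)(n-2)) * sum |T_r - T_l|."""
    def s(t):
        if not isinstance(t, tuple):
            return 0
        return abs(tips(t[0]) - tips(t[1])) + s(t[0]) + s(t[1])
    n = tips(t)
    return 2.0 * s(t) / ((n - 1) * (n - 2))

def cherries(t):
    # internal nodes with exactly two leaf children
    if not isinstance(t, tuple):
        return 0
    if not isinstance(t[0], tuple) and not isinstance(t[1], tuple):
        return 1
    return cherries(t[0]) + cherries(t[1])

def ladder(t):
    """Longest run of connected internal nodes, each with one leaf child
    (IL nodes), divided by the number of leaves."""
    def run(t):
        if not isinstance(t, tuple):
            return 0, 0                  # (run ending here, best anywhere)
        a, b = t
        ia, ib = isinstance(a, tuple), isinstance(b, tuple)
        ra, best_a = run(a)
        rb, best_b = run(b)
        here = 0
        if ia != ib:                     # exactly one leaf child: an IL node
            here = 1 + (ra if ia else rb)
        return here, max(here, best_a, best_b)
    return run(t)[1] / tips(t)
```

A fully asymmetric (caterpillar) tree on four tips scores a Colless imbalance of 1, one cherry and a ladder of 0.5; the balanced four-tip tree scores 0 and two cherries.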
Outbreak classification
We trained k-nearest-neighbour (KNN) classifiers using matlab's ClassificationKNN.fit function with a Minkowski distance, inverse distance weighting and 100 neighbours. KNN classification was performed on 1000 trees of each type (homogeneous transmission, super-spreaders, chains) using 10-fold cross-validation. The 10 resulting classifiers were then used to classify the groups of simulations in the sensitivity analysis, allowing us to report on the variability of classification results. KNN classification is suitable for sets of data that have any number of groups. Here, there were three groups: homogeneous outbreaks, super-spreader outbreaks and chains of transmission. KNN classifiers' quality can be assessed with a table reporting how many in each group are correctly classified, and how many are classified into which incorrect group.
Alternatively, the quality can be summarized by reporting the portion of each group that is classified correctly.
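The KNN vote with inverse-distance weighting is simple to write down. Below is a minimal NumPy sketch of the idea, not the matlab ClassificationKNN implementation, and using a smaller k than 100 when applied to small toy data:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=100, p=2):
    """Classify x by a k-nearest-neighbour vote using the Minkowski
    distance of order p and inverse-distance weighting, mirroring the
    settings used with matlab's ClassificationKNN.fit."""
    d = np.sum(np.abs(X_train - x) ** p, axis=1) ** (1.0 / p)
    nearest = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nearest], 1e-12)        # inverse-distance weights
    classes = np.unique(y_train)
    votes = [w[y_train[nearest] == c].sum() for c in classes]
    return classes[int(np.argmax(votes))]
```

On three well-separated clusters of 11-dimensional summary vectors (standing in for the homogeneous, super-spreader and chain groups), this recovers the correct label; cross-validation then simply repeats the vote over held-out folds.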
When there are only two groups to compare, so that classification is binary, better methods are available. One of the most powerful of these is the support vector machine (SVM) approach. We used a 10-fold cross-validated SVM to resolve differences between homogeneous transmission versus superspreader networks. Because SVMs are binary classifiers, their quality can be assessed by reporting the sensitivity (portion of true positives that are classified as positive) and specificity (portion of true negatives that are classed as negatives) of the predictions. The sensitivity and specificity of a classifier trade off with each other, because it is always possible to classify all cases as positive (sensitivity 1 but specificity 0) or all as negative (specificity 1 but sensitivity 0). Classifiers use a cutoff, calling a data point positive if the cutoff is above some threshold, and negative otherwise. The overall quality of a binary classifier can be visualized using a receiver operator characteristic (ROC) curve, which captures the change in sensitivity and specificity of a classifier when its threshold is changed. See Cristianini and Shawe-Taylor [28] for a full discussion of SVMs and classification.
Here, SVMs were constructed using the SVMtrain method in matlab with a linear kernel function. The training data x_i in the ith 'fold' were the 11 summary metrics for 900 trees derived from each process. The test data were the remaining 100 trees. This was done 10 times (10 'folds' of cross-validation). All training data were from simulations with the baseline set of parameters. The 10 SVMs (one for each 'fold') were tested on the remaining trees using matlab's SVMclassify, which computes y = Σ_i a_i k(x_i, x) + b, where a_i are weights, x_i are the support vectors, x is the input to be classified, k is the kernel function and b is the bias. These tests were done separately on the different groups of simulated trees. The SVMclassify function was modified to return y (i.e. the degree to which an outbreak could be considered super-spreading) rather than only the sign of y (a binary prediction). We have also performed 10-fold SVM classification in R using the e1071 package. Classifiers are available along with a script to profile the structure of a tree in newick format, using the phyloTop package [29] (see Supplementary Information).
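The decision value returned by the modified SVMclassify is just the kernel expansion above. A minimal sketch of that evaluation, with hand-picked weights and support vectors that are purely illustrative:

```python
import numpy as np

def svm_decision(alphas, support_vectors, bias, x, kernel=np.dot):
    """Evaluate y = sum_i a_i * k(x_i, x) + b. The sign of y gives the
    predicted class; its magnitude is the degree to which an outbreak
    can be considered super-spreading."""
    return sum(a * kernel(sv, x) for a, sv in zip(alphas, support_vectors)) + bias

# A hand-built 1-D linear separator whose decision value is y = x:
# svm_decision([-0.5, 0.5], [np.array([-1.0]), np.array([1.0])], 0.0, np.array([2.0]))
# evaluates to 2.0, so the point is classed as positive.
```

With a linear kernel, as used here, this reduces to a dot product with a fixed weight vector plus a bias; other kernels change only the function k.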
Sensitivity analysis
To determine whether the classifier is robust to different choices of model parameters and to sampling, we simulated three groups of 500 homogeneous and super-spreader outbreaks with (i) randomly selected parameters, (ii) a random sampling density and (iii) both random parameters and random sampling. Group (i) had randomized parameters in which βD was uniformly distributed between 1.25 and 2.5. Group (ii) had fixed parameters, but the number of cases varied uniformly between 100 and 150, and we sampled only 33 of those cases. The third group had both randomized parameters and random sampling.
To ensure that the classification is detecting variability in the number of secondary cases (i.e. super-spreading), we performed classification on outbreaks in which we used a negative binomial distribution to determine the numbers of secondary cases in the outbreak. We varied the parameters of the distribution such that the mean number of secondary cases was the same (R 0 ) but the variance differed, with the expected variance ranging from two to five times what it would be in the Poisson (homogeneous transmission) case. We classified the outbreaks using the 10 SVM classifiers obtained under 10-fold cross-validation on the baseline case. We report the mean and standard deviation of the specificity, i.e. the portion of cases correctly classified as 'super-spreader' outbreaks.
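Holding the mean at R0 while inflating the variance by a factor f can be done with the following parameterisation. This is our own sketch using NumPy's (n, p) convention, under which the negative binomial has mean n(1−p)/p and variance n(1−p)/p²:

```python
import numpy as np

def secondary_cases(R0, f, size, seed=0):
    """Draw secondary-case counts with mean R0 and variance f * R0.

    f = 1 recovers the Poisson (homogeneous) case; f > 1 gives the
    over-dispersed negative binomial used in the sensitivity analysis.
    """
    rng = np.random.default_rng(seed)
    if f == 1.0:
        return rng.poisson(R0, size)
    p = 1.0 / f                      # variance = mean / p = f * R0
    n = R0 * p / (1.0 - p)           # solves mean = n * (1 - p) / p = R0
    return rng.negative_binomial(n, p, size)
```

For example, f = 3 with R0 = 1.3 yields counts whose mean stays near 1.3 while the variance is near 3.9, three times the Poisson value.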
To determine whether the classifier is relevant to different kinds of models, we applied it to simulated phylogenies described in Robinson et al. [22]. In that work, dynamic networks of sexual contacts were created based on random graphs with a Poisson distribution, and with a distribution of contacts derived from the National Survey on Sexual Attitudes and Lifestyles (NATSAL) [30]. See Supplementary material for further details.
Classification of outbreaks from published genomic data
We used the classifier on phylogenetic trees derived from two real-world tuberculosis outbreak datasets. Outbreak A was previously published by Gardy et al. [31] and is available in the NCBI Sequence Read Archive under the accession number SRP002589. This dataset comprises 31 M. tuberculosis isolates collected in British Columbia over the period 1995-2008 and was sequenced using paired-end 50 bp reads on the Illumina Genome Analyzer II. Outbreak B comprises 33 M. tuberculosis isolates collected in British Columbia over the period 2006-11, and was sequenced using paired-end 75 bp reads on the Illumina HiSeq. The outbreak, sequences and single nucleotide polymorphisms (SNPs) are presented in Didelot et al. [9].
For both datasets, reads were aligned against the reference genome M. tuberculosis CDC1551 (NC002755) using Burrows-Wheeler Aligner [32]. Single nucleotide variants were identified using samtools mpileup [33] and were filtered to remove any variant positions within 250 bp of each other and any positions for which at least one isolate did not have a genotype quality score of 222. The remaining variants were manually reviewed for accuracy and were used to construct a phylogenetic tree for each outbreak as described above. We apply the classification methods to 1000 samples from the BEAST posterior timed phylogenies estimated from WGS data using a birth-death prior.
Different transmission networks result in quantitatively different tree shapes
To determine whether tree shapes captured information about the underlying disease transmission patterns within an outbreak, we simulated evolution of a bacterial genome over three types of outbreak contact network (homogeneous, super-spreading and chain) and summarized the resulting phylogenies with five metrics describing tree shape. Figures 2 and 3 illustrate the distributions of these metrics across the three types of outbreaks, revealing clear differences in tree topology depending on the underlying host contact network. Super-spreader networks gave rise to phylogenies with higher Colless imbalance, longer ladder patterns, lower Δw and deeper trees than transmission networks with a homogeneous distribution of contacts. Trees derived from chain-like networks were less variable, deeper, more imbalanced and narrower than the other trees. Other topological summary metrics considered did not resolve the three outbreak types as fully (Supplementary Information).
Classification on the basis of tree shape
Topological metrics can be used to computationally classify outbreaks
To evaluate whether the topological summary metrics could reliably and automatically differentiate between the three types of outbreaks, we trained a series of computational classifiers on the simulated datasets. We first trained a KNN classifier using the 11 tree features to discern which combinations of features correspond to phylogenies derived from the three underlying transmission processes. The KNN classifiers identified the underlying transmission dynamics well (see Table 1), with an average of 89 (0.03)% of the homogeneous outbreaks, 86 (0.05)% of the super-spreader outbreaks and 100% of the chain outbreaks correctly classified under 10-fold cross-validation. Mis-classifications were between the homogeneous and super-spreader outbreaks.
SVM improves classification accuracy
To better resolve the separation between super-spreader-type outbreaks and those with homogeneous transmission, we trained an SVM classifier to distinguish between those two types of outbreaks alone. Figure 4a shows the ROC curve for an SVM classification trained on 300 of the 1000 simulated homogeneous and super-spreader outbreaks. The area under the curve (AUC) is 0.97, reflecting very good classifier performance; the theoretical maximum AUC is 1, and 0.5 corresponds to random guessing. We performed 10-fold cross-validation, each time training a new SVM on 900 of the 1000 trees and testing it on the remaining 100. The average sensitivity was 0.93, the average specificity was 0.89 and the average AUC was 0.98. These values are listed in Table 1.
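The AUC has a simple rank interpretation: it is the probability that a randomly chosen super-spreader outbreak receives a higher decision value than a randomly chosen homogeneous one. A small sketch of that computation (ours, not matlab's perfcurve):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    the fraction of (positive, negative) score pairs ranked correctly,
    counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

Perfectly separated scores give an AUC of 1.0, and identical score distributions give 0.5, matching random guessing as described in the text.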
Effects of the extent of super-spreading, sampling, and early classification
Outbreak classification is robust to variable parameters and model choice, but not to sampling
To explore how robustly phylogenetic structure captures variation in transmission processes, we performed sensitivity analyses in which we explored the effect of varying the transmission parameters β and D, varying the sampling, and varying both the parameters and the sampling together.
Using the KNN classifier applied to the three outbreak types, we found that the overall classifier error remained at approximately 10% when the parameters were varied (Table 1). [Table 1 caption: Sensitivity (the true negative rate) is the portion of homogeneous outbreaks correctly classified as homogeneous, and specificity (the true positive rate) is the portion of super-spreader outbreaks correctly classified. For SVM classification, sensitivity and specificity trade off, such that greater sensitivity can be achieved at the cost of reduced specificity and vice versa; they are computed with the optimal threshold returned by matlab's perfcurve function. The AUC captures the overall classifier quality. For KNN classification we report the portion correct by outbreak type, as there are three types. Numbers shown are mean (standard deviation) using the 10 classifiers found with 10-fold cross-validation of the baseline case.] The effect of reduced sampling density was much greater: while the portion of homogeneous outbreaks correctly predicted was high (98%), the error was high because only 21% of super-spreader outbreaks were correctly classified. Mis-classification was between these two outbreak types, and chains of transmission were always correctly classified. Varying both the sampling and the parameters decreased the quality of the predictions slightly. We also evaluated the sensitivity of SVM classification to different transmission model parameters by training and testing an SVM on a further 500 simulated super-spreading and homogeneous networks with variable transmission parameters β and D. As with the baseline parameter networks, the SVM returned an AUC of 0.98 for the variable-parameter groups (Table 1). However, the SVM's performance declined with decreased sampling density (AUC of 0.86; sensitivity 0.76 and specificity 0.79), and with decreased sampling combined with variable transmission parameters (AUC of 0.87, sensitivity 0.74 and specificity 0.83).
Notably, the decline in performance was much smaller with the SVM method than with the KNN method. Figure 4b shows the ROC curves for SVM classification on these groups. The decline in performance due to lower sampling density occurs for two primary reasons. First, super-spreaders are relatively rare; if they were not, then the outbreak would not really be a 'super-spreader' outbreak, but one with a higher rate of transmission overall. When sampling density is reduced, there is therefore a good chance that the super-spreader individuals are not sampled. In addition, under weak sampling, only a few of a super-spreader's secondary cases would be sampled. Both of these factors reduce the ability of the genealogy to capture super-spreading. Under very low sampling densities, it is likely that the probability of a given tree approaches what it would be under the homogeneous birth-death model or the appropriate coalescent model, even where the infectious period is not memoryless. Though we have not shown this, very low sampling should reduce the asymmetry that arises from one lineage continuing in the same host and another continuing in a new host, because each lineage would be expected to change hosts multiple times along a branch under low sampling densities. Accordingly, if sampling density is low enough that coalescent methods are appropriate, they may be used to relate branching times and some aspects of tree shapes to epidemic models [24].
We varied the extent of heterogeneity in the numbers of secondary cases in our outbreaks, using a negative binomial distribution and varying its parameters. We found that the classifier (trained on outbreaks each with a single super-spreader but with varying secondary case numbers) had a high sensitivity of classification (>0.7) when the ratio of the standard deviation to the mean of the secondary case number distribution was 2 or more. Figure 4a shows the average sensitivity increasing with the variability in secondary case numbers.
We tested the SVM classifiers to determine whether they could distinguish between phylogenetic trees derived from simulated sequence transmission on different contact networks, namely dynamical models of sexual contact networks over a 5-year simulated time period [22]. The performance was good when sampling was done over time, such that cases infected early in the simulation were likely to be sampled. When sampling was done at one time, years after seeding the simulated infection, neither classifier detected differences between the two types of contact network. Details are presented in the Supplementary Information.
[Figure 4 caption: ROC curves are a visual way to assess a classifier's quality: a perfect classifier will obtain all the true positives and will have no false positives, giving an AUC of 1. An imperfect classifier has a trade-off, and can attain a specificity (true positive rate) of 1 at the cost of having a false-positive rate of 1 (top right corner of the plot). The ROC curve illustrates the shape of this trade-off; the higher the AUC, the higher the quality of the classifier. Guessing yields an AUC of 0.5. In (b), different lines correspond to the different groups of simulations in the SVM sensitivity analysis. Panel (c) shows the SVM classifier's performance when only the earliest outbreak isolates are sampled; performance is poor with 10 isolates (black line) and better with 20 (blue line).]
Outbreak classification is possible using early isolates only
To determine whether classification of an outbreak is possible early in an outbreak (information that could potentially inform real-time deployment of a specific public health response), we evaluated the 10 KNN and 10 SVM classifiers' performance when only the first 10 and first 20 genomes of the outbreak were sampled (10 of each, constructed using 10-fold cross-validation). The KNN performed poorly on the first 10 isolates, with none of the homogeneous outbreaks correctly classified and only 50% of the others; mis-classifications were between the super-spreader and chain outbreaks. After 20 isolates had been sampled, KNN classifiers grouped all outbreaks with homogeneous transmission. The SVM had AUC values of 0.61 and 0.78 after 10 and 20 isolates were detected, respectively (see Table 1 and Fig. 4c), although the optimal cutoffs gave low sensitivity values. These data suggest that SVM classification can give some information about an outbreak's transmission dynamics at early points within the outbreak.
Real-world outbreaks
Topological metric-based classification recapitulates known epidemiology of real-world outbreaks

Finally, to evaluate the classifiers' performance on real-world outbreaks with known epidemiology, we applied the classifiers to genome sequence data from two tuberculosis outbreaks whose underlying transmission dynamics have been described through comprehensive field and genomic epidemiology. Outbreak A [31] was reported to arise from super-spreading activity, while Outbreak B displayed multiple waves of transmission, resulting in a somewhat more homogeneous network.
We found that our classification results agreed with the empirical characterizations of the two outbreaks' underlying transmission dynamics. In the KNN classification, Outbreak A was grouped with super-spreader outbreaks most often (56(0.5)%), with 44% of the posterior trees grouping with homogeneous outbreaks and none with chains. 77(0.7)% of the trees from Outbreak B were classed as homogeneous, with the other 23% classed with super-spreader outbreaks. As above, numbers in parentheses are standard deviations over the 10 classifiers from the 10-fold cross-validation. The SVM classification grouped 75(8)% of BEAST posterior Outbreak A trees with super-spreaders, and 76(9)% of Outbreak B trees with homogeneous transmission. We also applied the classifiers to the maximum clade credibility (MCC) trees for the two outbreaks; the MCC tree from Outbreak A grouped with super-spreaders and that from B grouped with homogeneous outbreaks in all of the 10 cross-validated classifications. Thus both classifiers' predictions agree with the epidemiological investigations of the outbreaks, using tree shapes alone to classify transmission patterns.
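The "56(0.5)%"-style figures above are means and standard deviations of per-classifier vote fractions over the 10 cross-validated classifiers. A minimal sketch of that aggregation, with invented votes for only two of the ten classifiers:

```python
# Sketch of aggregating per-classifier vote fractions into mean(SD) figures.
# Vote lists are invented for illustration.
import statistics

def class_fraction(per_classifier_votes, target):
    """Mean and population SD of the fraction of trees assigned to `target`."""
    fracs = [votes.count(target) / len(votes) for votes in per_classifier_votes]
    return statistics.mean(fracs), statistics.pstdev(fracs)

votes = [["super"] * 56 + ["homog"] * 44,
         ["super"] * 57 + ["homog"] * 43]  # two of the ten classifiers
mean, sd = class_fraction(votes, "super")
print(round(mean, 3), round(sd, 3))  # 0.565 0.005
```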
DISCUSSION
We have found that there are simple topological properties of phylogenetic trees which, when combined, are informative as to the underlying transmission patterns at work in an outbreak. Tree structures can be used as the basis of a classification system, able to describe an outbreak's dynamics from genomic data alone. These topological signatures are robust to variation in the transmissibility, and to the nature and structure of the model, but sampling has a detrimental effect on the strength of the signal. Signs of the underlying transmission dynamics are present within the first 20 genomes sampled from an outbreak, and the classifiers are able to recapitulate known, real-world epidemiology from actual outbreak datasets.
The relationship between host contact heterogeneity and pathogen phylogenies is complex. In large datasets, phylogenetic branch lengths can reveal heterogeneous contact numbers [12], but distributions of branch lengths are not a suitable tool for small outbreaks of a chronically infecting and slowly mutating organism like TB. Early work made the assumption that heterogeneous contact numbers would yield heterogeneous cluster sizes in viral phylogenies [34]. But cluster sizes also depend on the pathogen population dynamics [22] and the epidemic dynamics [24]. The relationship between heterogeneous contact numbers and tree imbalance [13] is not robust to the dynamics of a contact network [22], sampling [22,24] or the epidemic model used [24]. It is clear from this body of work that increased heterogeneity in contact numbers will not always lead to a simple increase or decrease of some measure, like imbalance, of tree structure. However, we have found that in small outbreaks, several simple topological features, taken together, can distinguish between outbreaks with high heterogeneity (a super-spreader) and low heterogeneity.
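Two of the simple topological summaries of the kind discussed here, cherry counts and the Sackin imbalance index, can be computed by straightforward recursion. This is a generic illustration on toy nested-tuple trees, not the paper's exact feature set:

```python
# Illustrative computation of two common tree-shape summaries.
# A tree is a nested pair of subtrees; a leaf is a string.

def cherries(tree):
    """Count internal nodes whose two children are both leaves."""
    if isinstance(tree, str):
        return 0
    left, right = tree
    if isinstance(left, str) and isinstance(right, str):
        return 1
    return cherries(left) + cherries(right)

def sackin(tree, depth=0):
    """Sackin index: sum of leaf depths (higher = more ladder-like/imbalanced)."""
    if isinstance(tree, str):
        return depth
    return sum(sackin(child, depth + 1) for child in tree)

ladder = ("a", ("b", ("c", "d")))     # chain-like, imbalanced
balanced = (("a", "b"), ("c", "d"))   # fully balanced

print(cherries(ladder), sackin(ladder))      # 1 9
print(cherries(balanced), sackin(balanced))  # 2 8
```

On four tips the difference is small, but as the text notes, several such coarse summaries taken together separate transmission patterns far better than any single one.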
In any modelling endeavor, when a model reproduces features of real data-whether those are tree structures, branch lengths or other data such as prevalence and incidence of an infection, locations of cases and so on-it remains possible that there are processes not included in the model that are the real origin of the observations. When we use models to interpret data, we use formal or informal priors to weigh the likelihoods of the assumptions behind the model when compared with other processes that could drive the same phenomena. Here, one aspect of the complex relationship between contact heterogeneity and phylogeny structure is illustrated by the fact that genealogies from a long chain of transmission can look similar to genealogies derived from a super-spreader. Indeed, if one individual infects 10 others over a long period, and none of those infects anyone else, the genealogy among isolates would look the same as a genealogy in which each case infected precisely one other. However, it is unlikely that such a chain of cases would occur, with no one 'ever' infecting two others rather than one. Similarly, it is unlikely that one host could infect everyone in an outbreak, with no onward transmission by anyone else. In our simulations, once the occasional person in a long chain can infect two others, and if non-super-spreader individuals infect others homogeneously, we find that simple topological structures are well able to resolve the differences between chains and super-spreader outbreaks.
We have used 11 coarse and simple summaries of tree topology. However, any small set of a few summary statistics cannot capture the topology with much resolution. In contrast, most methods to compare phylogenies in fine detail are suited only for phylogenies on the same sets of tips [35], and so cannot be used to compare different outbreaks or to compare simulations to data. Finding the correct balance, summarizing trees sufficiently that they can be compared across different tree sizes, different outbreaks and different settings without summarizing them so much as to remove the most useful information, is a challenge, and a number of methods will likely be developed, beginning with viral pathogens as in the recent work of Poon et al. [23]. Indeed, although we feel that the measures we have used demonstrate that tree structure is revealing, they are not intended to be comprehensive or exhaustive descriptions of tree topology. The fact that a few simple topological summaries can reveal underlying transmission patterns is a proof of principle that tree shape is informative.
We have taken a different approach than has recently been taken in a number of studies aiming to infer transmission trees from phylogenetic data [7,9,36,37], or to identify or at least rule out transmission events based on epidemiological and genetic data [2][3][4][5][6]. These methods use the timing of case presentation (and estimated times of infection) to help determine who infected whom. In contrast, in pathogens with long and variable infectious duration, the timing of case presentation does not provide much information about the timing of infection. In this setting, even whole-genome sequence data may not contain sufficient information to clearly characterize individual transmission events, as we have recently found [9]. However, individual transmission events are often of interest mainly because they reveal 'patterns' of transmission. When we reconstruct an outbreak we are not seeking to determine whether case C will infect case D in the next outbreak, but rather, to find sufficient information about how the outbreak occurred that public health practices can benefit. Here, we have found that tree shapes can reveal overall patterns of transmission without first inferring who infected whom.
The classification method we have developed provides not only an important empirical quantification of the degree to which genomic data is informative in the absence of epidemiological information, but is also a useful tool that can be used to describe outbreaks both retrospectively and prospectively. The ability to situate an outbreak on the spectrum from homogeneous transmission to super-spreading and to do so within the earliest stages of an outbreak when neither a large number of specimens nor detailed epidemiological information is available represents an important opportunity for public health investigations. Situating an outbreak on this spectrum does not require pinning down individual transmission events, but relies more on characterizing summary features of the outbreak and/or its phylogeny. If the data point towards a significant role for super-spreading in an outbreak, a containment strategy will require intensive screening of the superspreader's contacts. In an outbreak where onward transmission is occurring in chains, a focus on active case finding around multiple individuals will be needed instead. Ultimately, investigation of any outbreak of a communicable disease will involve the collation of multiple sources of information, including epidemiological, clinical and genomic data. The approach described here represents one part of this toolbox, and has the advantages of being robust to the unique nature of complex chronic infection, providing useful information even when epidemiological information is incomplete, and being informative within the earliest stages of an outbreak.
Supplementary data
Supplementary data is available at EMPH online. | 2016-11-01T19:18:48.349Z | 2014-03-05T00:00:00.000 | {
"year": 2014,
"sha1": "95d30045393c1a2f0b071974fcf4eb60c761f79e",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/emph/article-pdf/2014/1/96/23677457/eou018.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fc660406483eb74a86121c7da0e404dbb44643b",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
58725195 | pes2o/s2orc | v3-fos-license | Sepse na Unidade de Terapia Intensiva: Etiologias, Fatores Prognósticos e Mortalidade* Sepsis in the Intensive Care Unit: Etiologies, Prognostic Factors and Mortality
and the most prevalent pathogens were gram-negative bacilli (53.2%). Mean APACHE II score was 18 ± 9, and mean SOFA score was 5 ± 4. Median ICU stay was 6 (3-11) days and the overall mortality rate was 31.1%: 6.1% for non-infectious SIRS, 10.1% for sepsis, 22.6% for severe sepsis, and 64.8% for septic shock. CONCLUSIONS: Sepsis is an important health problem that leads to an extremely high mortality rate in the ICU of Passo Fundo, Brazil.
INTRODUCTION
Sepsis is an important cause of hospitalization and the main cause of death in intensive care units (ICU) [1][2][3]. In 1990, the Centers for Disease Control and Prevention (CDC) estimated an incidence of 450 thousand cases of sepsis per year and over 100 thousand deaths in the United States 4. In 2001, Angus et al. 5 studied over six million records of hospital discharges in seven states in the US and found an estimate of 751 thousand cases of severe sepsis per year, with a mortality rate of 28.6%. Martin et al. 6 reviewed data on hospital discharges for 750 million admissions in the US over 22 years, and found more than 10 million cases of sepsis and an increase in frequency to 82.7/100,000 inhabitants in 2000. Studies conducted in Europe, Australia and New Zealand reported that the prevalence rate of sepsis in ICU ranged from 5.1% to 30% [7][8][9][10][11]. The Brazilian Sepsis Epidemiological Study (Bases Study), conducted in five ICU, found mortality rates of 11%, 33.9%, 46.9% and 52.2% in patients with SIRS, sepsis, severe sepsis and septic shock 12. Another study conducted in Brazil analyzed data from 75 ICU in different regions, and found mortality rates of 16.7% for sepsis, 34% for severe sepsis, and 65.3% for septic shock 13. Overall mortality rates for sepsis have decreased, but, at 20% to 80%, are still unacceptably high 14. The incidence of sepsis has increased because of population ageing, more invasive procedures, the use of immunosuppressive drugs and the increased prevalence of HIV infection, and this trend is expected to accelerate in the future 5,6. Few studies have investigated the epidemiology of sepsis in ICU in the state of Rio Grande do Sul, where Passo Fundo is located. This study evaluated epidemiologic data and mortality rates of patients with sepsis in the ICU of three hospitals in Passo Fundo, Brazil.
METHODS
This prospective multicenter observational cohort study included patients at the time of diagnosis of SIRS (time zero). It was conducted from August 2005 to February 2006, in Passo Fundo (population, 180,000), a city in the State of Rio Grande do Sul, Brazil, whose hospitals provide care to the population living in the northern area of this state and in the western region of the neighboring state of Santa Catarina. The general ICU studied are located in three hospitals: Hospital da Cidade de Passo Fundo (HCPF), Hospital Prontoclínica (HP) and Hospital São Vicente de Paulo (HSVP). The three are tertiary general hospitals, and two of them are university hospitals affiliated with the Universidade de Passo Fundo (UPF) and the Brazilian Health System (SUS). They have from 90 to 550 hospital beds, and 9 to 22 ICU beds. This study was approved by the Ethics in Research Committee of UPF, and all patients or their legal guardians signed an informed consent form. Patients were included if they were 18 years or older and developed systemic inflammatory response syndrome (SIRS) 17 while in the ICU. Exclusion criteria were: ICU stays shorter than 24 hours; and pregnancy. Each new admission was classified as a new patient in this study. A questionnaire was used to collect data and to keep uniform records for the three ICU. A manual with detailed information about how to fill out the questionnaire and definitions of all variables was handed out to all researchers. The patients were followed up until discharge from the ICU, death, or the 28th day after inclusion in the study. Demographic data, the cause of admission, immunosuppression, APACHE II score, SOFA score, and source of infection were collected. APACHE II scores were calculated in the first 24 hours of hospitalization according to the Knaus method 15, and the SOFA scores 16 were calculated daily during ICU stay. When a variable was absent, it was classified as normal and a value of zero was entered for that variable. The use of
antibiotics, predisposing factors for infection, laboratory culture results, ICU length of stay, and ICU mortality rate were also used for the analyses. Patients were classified according to 4 stages: non-infectious SIRS, sepsis, severe sepsis, and septic shock, according to the definitions established by the consensus of the American College of Chest Physicians and the Society of Critical Care Medicine (ACCP/SCCM) in 1991 17. Patients could change from one severity stage to the other, but did not go back to a previous stage; therefore, their data might be entered in more than one stage. The authors did not play any role in the decisions made by the patients' attending physicians. Clinical concepts and criteria introduced in the last decade to define SIRS established a more accurate classification of inflammatory events in patients in ICU. SIRS, sepsis, severe sepsis and septic shock were defined according to the consensus of the ACCP/SCCM. Infection was defined as the presence of pathogenic microorganisms in any sterile medium (blood, cerebrospinal fluid, and ascitic fluid) or the clinical suspicion of infection, whether treated with antibiotics or not 17.
Statistical Analysis
Data are presented as mean ± SD, median (interquartile range) and percentages. The Student t test was used to analyze normally distributed variables; the Mann-Whitney test, for non-normal variables; and the Fisher Exact test, for categorical variables. To evaluate the discriminatory power of APACHE II scores for mortality, a receiver operating characteristic (ROC) curve was used, and values between 0.7 and 0.8 for the area under the curve were classified as good discrimination and, between 0.8 and 0.9, as excellent [18][19][20]. The level of statistical significance was set at p < 0.05 (two-tailed). The SPSS 13.0 for Windows (Chicago, US) software was used for statistical analyses.
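The cutoff-selection step reported later (an APACHE II threshold with good sensitivity and specificity) can be sketched as a scan over candidate cutoffs maximizing Youden's J (sensitivity + specificity − 1). The patient scores below are invented, and the paper does not state that Youden's J was the exact criterion used; this is only one common approach:

```python
# Hedged sketch of ROC cutoff selection via Youden's J.
# Scores are invented; not patient data from the study.

def best_cutoff(scores_dead, scores_alive, candidates):
    """Return (cutoff, J, sensitivity, specificity) maximizing Youden's J."""
    best = None
    for c in candidates:
        sens = sum(s > c for s in scores_dead) / len(scores_dead)
        spec = sum(s <= c for s in scores_alive) / len(scores_alive)
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j, sens, spec)
    return best

dead = [24, 27, 30, 20, 19, 25]    # hypothetical APACHE II, non-survivors
alive = [15, 12, 18, 14, 22, 10]   # hypothetical APACHE II, survivors
cutoff, j, sens, spec = best_cutoff(dead, alive, range(10, 31))
print(cutoff, round(sens, 2), round(spec, 2))  # 18 1.0 0.83
```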
RESULTS
This study was conducted in the general ICU of three hospitals in Passo Fundo, Brazil. Percentages of the total number of admissions were 50.7%, 36.6% and 13% for HSVP, HCPF and HP. The two teaching hospitals affiliated with the Brazilian Health System (SUS) had 87% of all admissions (Table 1). During the study, 971 patients consecutively admitted to the ICU were evaluated, and 560 met inclusion criteria, which corresponds to a prevalence rate of 58%. Mean age was 60.7 ± 18.6 years, and 56.8% of the patients were older than 60 years; 55.5% were men. Four hundred eleven patients (42%) were excluded because they did not develop SIRS, were younger than 18 years, stayed in the ICU for less than 24 hours, or data were missing from their records (Figure 1). Patients were admitted to the ICU due to neurologic (29.8%), respiratory (24.3%) or surgical (17.1%) problems; clinical causes were found for 76.1% of all cases. Non-infectious causes were responsible for 28.7% of all cases of SIRS, and infectious causes, for 71.3%; sepsis, severe sepsis and septic shock were found in 36.4%, 27.8% and 35.8% of the cases of infectious SIRS. The most frequent symptoms of SIRS were tachycardia (82.3%) and tachypnea (80%). Overall mean APACHE II score was 18 ± 9; for survivors, it was 15 ± 8; and for non-survivors, 24 ± 9 (p < 0.001). According to the receiver operating characteristic (ROC) curve, a cutoff point of 18.5 was established as the value to obtain good sensitivity (67.6%) and specificity (67.1%); the area under the curve was 0.734 ± 0.02 (Figure 2). Mean SOFA scores for SIRS, sepsis, severe sepsis and septic shock were 3.99, 2.65, 4.90 and 8.12. Overall mean SOFA score was 5.4 ± 3.5. The mean first and last SOFA scores of patients that survived were statistically different from the mean first and last SOFA scores of non-survivors (p < 0.001) (Figure 3). System or organ failures were most common in the respiratory (60.4%), neurologic (42.1%) and renal (37.1%) systems. Failure in 3 or more organs was found for
36.4% of the patients; mortality rate ranged from 14.6% for patients with fewer than 3 organ failures to 59.8% in patients with 3 or more organ failures (p < 0.001). Of all study patients, 414 (73.9%) developed infection; cultures were made for 340 (60.7%) and were positive in 50.3% of the cases. Nosocomial infection was found in 53.8% of the cases, and the most frequent sites of infection were the lungs (71.6%), urinary tract (4%) and surgical wound (3.0%). Positive cultures were most frequently obtained from sputum (23%), urine (18.8%) and blood (12.7%). The most frequent pathogens were gram-negative bacilli (Escherichia coli, Pseudomonas aeruginosa, Enterobacter sp and Acinetobacter sp) in 53.2% of the cases, and gram-positive cocci (coagulase-negative Staphylococcus and Staphylococcus aureus). More than one pathogen was identified in 2.8% of the cases, and fungi, in 1.3%. The antibiotics used most frequently were cephalosporins (48.4%), antianaerobic agents (36.3%) and beta-lactam antibiotics (26.4%). Only one antibiotic was used in 26.1% of the cases; 2, in 28.6%; three or more, in 24% of the patients. The most important infection risk factors were urethral catheter in 87% of the cases; nasogastric catheter in 73%; central venous catheter in 61%; and mechanical ventilation in 51%. The overall median number of days in the ICU was 6 (3-11), and the median ICU stay of patients classified according to stages was 6 (2-14). Overall ICU mortality was 31.1%, and on the 28th day after inclusion in the study, 34.6%. Mortality for non-infectious SIRS, sepsis, severe sepsis, and septic shock was 6.1%, 10.1%, 22.6% and 64.8% (Table 2).
DISCUSSION
This is the first prospective study in our region to analyze the occurrence of sepsis in patients admitted to the ICU. Sepsis remains a global medical challenge and one of the main causes of death in the ICU. This study found a high frequency of sepsis, an overall ICU mortality rate of 31.1% and a rate of 34.6% on the 28th day after inclusion in the study (p = 0.237). Studies in Europe and the US with patients with sepsis reported general mortality rates that ranged from 13.5% to 53.6% 1,[21][22][23]. Brazilian studies found general ICU mortality rates of 21.8% and 46.4% 12,13. When patients were divided into groups of non-infectious SIRS, sepsis, severe sepsis or septic shock, ICU mortality rates were 6.1%, 10.1%, 22% and 64.8%. Rangel-Frausto et al. 24 and Salvo et al. 10 12,13. Our overall mortality rates and rates according to sepsis stages were similar to those reported in the literature 25,26. Overall mean APACHE II score was 18 ± 9; for survivors, it was 15 ± 8, and for non-survivors, 24 ± 9, and the difference was statistically significant (p < 0.001). APACHE II scores were significantly associated with death, and a greater score was associated with a greater likelihood of death. A cut-off point of 18 was found using the ROC curve, a value that was adequate to obtain good sensitivity (67.6%) and specificity (66.6%); the area under the curve was 0.734 ± 0.02 20 (Figure 3). The use of the APACHE II score as a predictor of mortality is controversial. Some studies reported that it successfully predicted outcome for their patients 19,20,26, but Lundeberg et al.
27 failed to demonstrate the efficacy of APACHE II as a predictor of mortality in patients with sepsis. The SOFA score was associated with overall mortality in our study. The comparison of the mean first and last SOFA scores of survivors and non-survivors revealed a statistically significant difference (p < 0.001) (Table 2), results that are similar to those reported in Brazilian and European studies 14,16. In our study, patients with two or fewer organ failures had a mortality rate of 14.6%, and those with three or more, 59.8% (p < 0.001). These findings are similar to those reported in a study conducted by Vincent et al. 21, in which patients without any organ dysfunction had a mortality rate of 6%, whereas those with 4 or more organ dysfunctions had a rate of 65%. In this study, gram-negative bacteria were found in 53.2% of the cases; gram-positive bacteria, in 30.4%; and fungi, in 1.3%. Martin et al. 6 studied the epidemiology of sepsis in the US and reported that gram-positive bacteria were the most frequent in the ICU. The most frequent site of infection was the lungs (71.6%), which is in agreement with findings in the literature 12,13,21. The mortality rate in the group of patients on mechanical ventilation or immunosuppressed patients was significantly greater than that of patients not on ventilation or not immunosuppressed (47.6% vs. 18.2%, p < 0.001; 40.9% vs. 28.7%, p = 0.018). Vincent et al. 21 also found a significantly greater mortality rate for patients on mechanical ventilation or immunosuppressed. Median length of ICU stay was 6 (3-11) days, similar to those reported in the literature 5,8,28.
One of the limitations of this study was that it was conducted over a period of 6 months (August to February). As the prevalence of infections may be affected by season, this study may have failed to demonstrate the actual prevalence of the germs that cause such infections or even of the sites of infection 29,30. Another limitation was that patients were followed up only up to the 28th day after inclusion in the study, and the data may have failed to demonstrate actual middle- and long-term morbidity and mortality. Moreover, the study was conducted in only 3 ICU in the city of Passo Fundo, which receives patients only from parts of two Brazilian states. However, few studies had investigated the epidemiology of sepsis in Brazil before, and this is the first study on the epidemiology of sepsis in this region. This study described the epidemiologic profile of patients with sepsis in ICU in the city of Passo Fundo, Brazil, and found a high prevalence of sepsis and an unacceptably high mortality rate in the region. Future studies should include a larger number of patients and ICU to better understand and treat patients with sepsis.
Figure 1 - Patients Admitted to the Three ICU and Mortality Rates.
Figure 3 - Comparison of Mean First and Last SOFA Scores of Surviving and Non-Surviving Patients.
Table 1 - Demographics and General Data.
a Percentage. b Median and interquartile range.
Table 2 - Data of Patients that Survived and Patients that Died. | 2017-09-24T00:12:51.324Z | 2008-06-01T00:00:00.000 | {
"year": 2008,
"sha1": "6271876696e22dee3e865e51c2eb6dc80accbca3",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/rbti/a/3PtZ3BsVPWTGprJndZFbKSt/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "61879892b2894e2515460fc96c1e7d2cb819d31d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34002506 | pes2o/s2orc | v3-fos-license | Vapor-Solid Growth of High Optical Quality MoS2 Monolayers With Near-Unity Valley Polarization
Monolayers of transition metal dichalcogenides (TMDCs) are atomically thin direct-gap semiconductors with potential applications in nanoelectronics, optoelectronics, and electrochemical sensing. Recent theoretical and experimental efforts suggest that they are ideal systems for exploiting the valley degrees of freedom of Bloch electrons. For example, Dirac valley polarization has been demonstrated in mechanically exfoliated monolayer MoS2 samples by polarization-resolved photoluminescence, although polarization has rarely been seen at room temperature. Here we report a new method for synthesizing high optical quality monolayer MoS2 single crystals up to 25 microns in size on a variety of standard insulating substrates (SiO2, sapphire and glass) using a catalyst-free vapor-solid growth mechanism. The technique is simple and reliable, and the optical quality of the crystals is extremely high, as demonstrated by the fact that the valley polarization approaches unity at 30 K and persists at 35% even at room temperature, suggesting a virtual absence of defects. This will allow greatly improved optoelectronic TMDC monolayer devices to be fabricated and studied routinely.
as a transition from an indirect bandgap in the bulk to a direct bandgap at monolayer thicknesses, 3,4 massive Dirac-like behavior of the electrons, 5 excellent field-effect transistor performance at room temperature, 6 and completely tunable 2D excitonic effects. 7 Recently, these monolayers have also been suggested as good candidates for the realization of valley-based electronics. 5,8,9,10 In monolayer MoS2 there are two energy-degenerate Dirac valleys at the corners of the hexagonal Brillouin zone. 5,10 The Berry curvature and magnetic moments of electrons associated with different valleys have opposite sign and are linked to measurable quantities which can distinguish the valleys, such as k-resolved optical dichroism, offering the possibility of manipulating and utilizing the valley degree of freedom. 11,12 Valley polarization has been demonstrated in MoS2 monolayers by circularly polarized light excitation, 8,9,10 and electrical control of it has been reported in bilayer samples. 13 Progress thus far has relied mainly on mechanically exfoliated samples, where scaling for device applications 14 is probably impossible. Recent attempts to develop more scalable techniques include exfoliation in liquids, 2,15,16 hydrothermal synthesis, 17 epitaxial growth using graphene, 18 and soft sulfurization. 19,20 However, these methods are not easily integrated with device fabrication. Chemical vapor deposition has also been explored using an Mo film 21 (or MoO3 powder 22) and sulfur powder as the reactants, yielding monolayers of MoS2 on 300 nm SiO2/Si substrates compatible with device fabrication. 21,22 It has yet to be proven, though, that such monolayers have sufficient quality for investigating valley-related physics. Inter-valley scattering enhanced by defects and impurities can reduce or destroy the valley polarization, as evident from the disparate degrees of polarization reported by different groups.
8,9,10,13,23 A high degree of valley polarization is required for valley physics and is also a hallmark of crystal quality.
Here we introduce a new and straightforward method for obtaining high optical quality monolayer MoS2 via a vapor-solid (VS) growth mechanism. 24 Monolayer flakes of up to 400 µm2 with triangular shape are directly produced on insulating substrates such as SiO2, sapphire, and glass, without using any catalysts. The growth procedure is simple physical vapor transport, using an MoS2 powder source and Ar carrier gas (details are given in Fig. 1 and Methods), similar to the procedure used for growing Bi2Se3 topological insulator nano-plates. 24 Using polarization-resolved photoluminescence (PL), 13 we observe valley polarization approaching unity at low temperature (30 K) and 35% at room temperature. This observation demonstrates that these monolayers are of high quality and are suitable for valley physics and applications.
Results and Discussion
The resulting MoS2 monolayers are characterized by optical microscopy (OM, Zeiss Axio Imager A1), atomic force microscopy (AFM, Veeco Dimension 3100), scanning electron microscopy (SEM, FEI Sirion), and micro-Raman spectroscopy (Renishaw inVia Raman Microscope). Figure 2 is a typical SEM image of a sample grown on SiO2/Si. The crystallites have lateral dimensions up to 25 microns, and are approximately equilateral triangles (see Fig. 2 inset). This is consistent with the triangular symmetry of monolayer MoS2 (Fig. 1c). It suggests that each is a single crystal without extended defects or grain boundaries; 25,26 the facets are then the most slowly growing or stable symmetry-equivalent crystal planes; it remains to be established whether these are the "zigzag" or the "armchair" edges. Therefore another advantage over exfoliation techniques is that the crystal axes can be immediately identified by inspection.
Optical and AFM characterization. Figures 3a-c show optical microscope images of growths on sapphire, glass, and 300 nm SiO2/Si substrates, respectively. The color contrast of all the larger crystallites is uniform; moreover, for those on SiO2/Si (Fig. 3c) it is identical to that of exfoliated monolayers on the same substrate. These facts strongly indicate that they are monolayers. 3,27 The growth on sapphire is much denser than that on both SiO2 and glass, but on all the substrates nucleation appears to be random, as was found for VS growth of topological insulators. 24 Smaller (<2 µm), thicker crystallites are also present, especially on SiO2. We speculate that the growth kinetics are such that a monolayer is favored, and grows rapidly, if the nucleating crystal is aligned suitably with the substrate; otherwise more three-dimensional growth occurs. The monolayer thickness is confirmed by atomic force microscopy (AFM). 3,6,28 Impurities and defects in the crystal will cause intervalley scattering even at low temperature. 9 In our measurements, a 632 nm He-Ne laser beam is circularly polarized by a quarter-wave plate (QWP) and focused at normal incidence onto the monolayer sample held in a cryostat. The PL signal is selectively detected for both σ+ and σ− polarization using the setup described in Ref. 13.
The laser spot size is about 2 µm with an intensity of ~150 W/cm2. We define the degree of PL polarization, which reflects the valley polarization, as P = [I(σ+) − I(σ−)]/[I(σ+) + I(σ−)], where I(σ±) is the intensity of the σ± PL component. 8,9,13 In many reported samples a second broad impurity peak is present at ~1.77 eV; the absence of an impurity peak here is powerful evidence of excellent crystal quality. 30 The PL signal is highly σ+-polarized for both substrates. Reported degrees of valley polarization at low temperatures from mechanically exfoliated monolayers in the literature vary widely: 30%, 9 50%, 10 80%, 13 and up to 100% on a boron nitride substrate, 8 showing that intervalley scattering is very sensitive to sample details. The degree of polarization in our monolayers is plotted in Figs. 5c and 5e for both SiO2 and sapphire substrates. We see nearly unity polarization on SiO2 and more than 95% on sapphire, with the polarization decreasing at lower photon energies as in previous reports. 8,9 Interestingly, the PL polarization is substantial even at room temperature, approaching a maximum of 35% at ~1.92 eV on both substrates, as shown in Fig. 6. Inter-valley scattering increases with temperature due to enhanced phonon populations, 5 resulting in a decrease of the valley polarization that usually makes it vanish at room temperature, 9 although recently 23 there has been a report of 40% valley polarization at 300 K from a mechanically exfoliated sample.
Thus our VS grown samples are as good as the highest optical quality samples obtained by mechanical exfoliation.
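As a quick numerical check of the polarization definition used above, the degree of valley polarization is just the normalized difference of the two circularly polarized PL intensities. A minimal sketch (the intensity values below are hypothetical):

```python
def valley_polarization(i_plus, i_minus):
    """Degree of PL polarization: P = (I(s+) - I(s-)) / (I(s+) + I(s-))."""
    return (i_plus - i_minus) / (i_plus + i_minus)

# Hypothetical detected intensities (arbitrary units); a nearly pure
# sigma+ signal gives near-unity polarization:
print(valley_polarization(100.0, 1.0))  # ~0.98
```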
Conclusion
In summary, we report a simple method for growing high optical quality monolayer MoS2 directly on various insulating substrates, which should facilitate device fabrication without the need for a transfer process. The absence of impurity luminescence and the substantial room temperature polarization imply excellent crystal quality and the potential for optoelectronic applications without the need for low temperatures. The technique could also be applicable to other TMDCs.
Supporting Information
The temperature profile of the growth, AFM characterization on the sapphire substrate, and complementary data for the PL polarization are shown in the supplementary material. This material is available free of charge via the Internet at http://pubs.acs.org.
An IDE-Based Context-Aware Meta Search Engine
Traditional web search forces developers to leave their working environments and look for solutions in web browsers. It often does not consider the context of their programming problems. The context switching between the web browser and the working environment is time-consuming and distracting, and keyword-based traditional search often does not help much in problem solving. In this paper, we propose an Eclipse IDE-based web search solution that collects data from three web search APIs (Google, Yahoo, and Bing) and a programming Q&A site (Stack Overflow). It then provides search results within the IDE, taking into account not only the content of the selected error but also the problem context, the popularity of the result links, and the search engines' recommendations of them. Experiments with 25 runtime errors and exceptions show that the proposed approach outperforms keyword-based search approaches with a recommendation accuracy of 96%. We also validate the results with a user study involving five participants, where we get a result agreement of 64.28%. While the preliminary results are promising, the approach needs to be further validated with more errors and exceptions, followed by a user study with more participants, to establish itself as a complete IDE-based web search solution.
I. INTRODUCTION
During development and maintenance of a software system, developers face different programming challenges, and one of them is the runtime error or exception. The Eclipse IDE helps diagnose encountered errors or exceptions, and developers get valuable clues for fixing them from the stack traces produced by the IDE. However, the information from the stack trace alone may not be helpful enough, especially when the developers are novices or the encountered problems are relatively unfamiliar. Thus, for more informative and up-to-date solutions, developers are often forced to dig into the world wide web and look for fixes. In a study by Brandt et al. [2], developers on average spent about 19% of their programming time surfing the web for information. Goldman and Miller [5] analyzed the events produced by the web browser and the IDE in temporal proximity, and concluded that 23% of the web pages visited were related to software development or maintenance.
Finding a working solution to a programming problem on the web is a matter of web-surfing as well as programming experience. Novice developers involved in software development or maintenance often spend a lot of time looking for such solutions. Traditional web search forces developers to leave the working environment (i.e., the IDE) and look for the solution in a web browser. The context switching between the IDE and the web browser is distracting and time-consuming. Moreover, checking relevance across hundreds of search results is a cognitive burden on novice developers.
Existing studies focus on integrating commercial-off-the-shelf (COTS) tools into the Eclipse IDE [8], recommending StackOverflow posts and displaying them within the IDE environment [4], embedding a web browser inside the IDE [3], and so on. Cordeiro et al. [4] propose an IDE-based recommendation system for runtime exceptions. They extract question and answer posts from the StackOverflow data dump and suggest posts relevant to the occurred exceptions, considering the context from the stack trace information generated by the IDE. They also offer a nice solution to the context-switching issue through visualization of the solution within the IDE. However, the proposed approach suffers from several limitations. First, they consider only one source (the StackOverflow Q&A site) rather than the whole web, and thus their search scope is limited. Second, the developed corpus cannot be easily updated and is subject to the availability of the data dump. For example, they use the StackOverflow data dump of September 2011, which means the system cannot provide suggestions for software bugs or errors introduced after September 2011. Third, the visualization of the solutions is not efficient, as it uses plain text to show post contents such as source code, stack traces, and discussion. Thus the developers do not really experience the style and presentation of a web page.
In this paper, we propose an Eclipse IDE-based search solution, called SurfClipse, for encountered errors or exceptions, which addresses the concerns identified in the existing approaches. We package the solution as an Eclipse plug-in which (1) exploits the search and ranking algorithms of three reliable web search engines (Google, Bing, and Yahoo) and a programming Q&A site (StackOverflow) through their API endpoints, (2) filters and ranks the extracted results based on the content (e.g., error message), context (e.g., stack trace and surrounding source code of the subject error), popularity, and search engine recommendation of the result links, (3) provides access to the most recent solutions from a complete and extensible solution set, pulling solutions from numerous forums, discussion boards, blogs, programming Q&A sites, and so on, and (4) provides a real web-surfing experience within the IDE using a Java-based browser.
We conduct an experiment on SurfClipse with 25 programming errors and exceptions, which shows interesting findings. Our approach recommends correct solutions for 24 errors and exceptions, an accuracy of 96%, and most of the solutions are provided within the top five results. In order to validate the applicability of the proposed approach, we conduct a user study involving five participants. We observe 64.28% agreement between the solutions chosen by the participants and the solutions proposed by our approach. Given that relevance checking of a solution to a programming problem is a subjective process controlled by different subjective factors, our approach performs considerably well. While the preliminary results are promising, the proposed approach needs to be further validated with more errors and exceptions, followed by a user study with more users, to establish itself as a complete IDE-based web search solution.
II. MOTIVATION
Traditional web search forces the developer to leave the working environment (i.e., the IDE) and look for the solution in a web browser. In contrast, SurfClipse allows her to check the search results within the context of the IDE (e.g., Fig. 1-(b)). Once she selects an error message using the context menu option (e.g., Fig. 1-(a)), the plug-in pulls results from three reliable search engines and one programming Q&A site against that error message. It then calculates the proposed metrics for each result, related to the error content, error context, popularity, and search engine recommendation, to determine its relevance to the occurred error or exception, and then sorts and displays the results. Moreover, the plug-in allows the developer to browse the solution in a Java-based web browser (e.g., Fig. 1-(c)) without leaving the context of the IDE, which makes it time-efficient and flexible to use. The plug-in by Cordeiro et al. [4] also shows results within the context of the IDE; however, (1) the result set is limited (i.e., only from StackOverflow, not the whole web), (2) it cannot address newly introduced issues (i.e., it uses a fixed corpus subject to the availability of the StackOverflow data dump), (3) it only considers stack trace information as problem context, and (4) the developer cannot enjoy a web-browsing experience. Fig. 1 shows the schematic diagram of our proposed approach for IDE-based web search. Once the developer selects an exception from the Error Log or Console view of the Eclipse IDE, our approach collects the necessary information about it, such as the error message, stack trace, and source code context. Then, it collects results from three reliable search engines (Google, Bing, and Yahoo) and one programming Q&A site (StackOverflow) through API endpoints against the error message and builds the corpus.
The proposed approach then considers the context of the occurred error or exception, and the popularity and search engine recommendation of the collected results, and calculates the proposed metrics to determine their acceptability and relevance to the target exception. Once the final scores are calculated from those metrics, the results are filtered, sorted, and displayed to the developer within the context of the IDE. The following sections discuss the proposed metrics and the scores we use in our approach.
III. PROPOSED APPROACH
A. Proposed Metrics

1) Search Engine Weight Based Score (S sew ): According to Alexa 1 , one of the widely recognized web traffic data providers, Google ranks second, Yahoo ranks fourth, and Bing ranks sixteenth among all websites this year. While these ranks indicate their popularity (e.g., site traffic) and reliability (i.e., users' trust) as information service providers, it is reasonable to think that search results from search engines of different ranks have different levels of acceptance. We conduct an experiment with 75 programming-task- and exception-related queries 2 against those search engines and a programming Q&A site (StackOverflow) to determine their relative weights, or acceptance. We collect the top 15 search results for each query from each search tool and get their Alexa ranks. Then, we consider the Alexa ranks of all result links provided by each search tool and calculate the average rank of a result link provided by it. The average rank for each search tool is then normalized and inverted, which provides a value between 0 and 1. We get a normalized weight of 0.41 for Google, 0.30 for Bing, 0.29 for Yahoo, and 1.00 for StackOverflow. The idea is that if a result link against a single query is found in all three search engines, it gets the search engine scores (i.e., confidence) from all three of them, which sum to 1.00. StackOverflow has drawn the attention of a vast community (1.7 million 3 ) of programmers and software professionals, and it also has a far better average Alexa rank than the search engines; therefore, the results returned from StackOverflow are given a search engine score (i.e., confidence) of 1.00.
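The inversion and normalization of the average Alexa ranks can be sketched as follows. The exact normalization used in the paper is not spelled out, so this assumes a simple inverse-rank weighting, and the average ranks below are hypothetical:

```python
def engine_weights(avg_ranks):
    """Invert and normalize average Alexa ranks so that the search-engine
    weights sum to 1.0 (a lower rank number yields a higher weight).
    This inverse-rank scheme is an assumption; the paper only states that
    ranks are 'normalized and inverted'."""
    inverse = {name: 1.0 / rank for name, rank in avg_ranks.items()}
    total = sum(inverse.values())
    return {name: v / total for name, v in inverse.items()}

# Hypothetical average Alexa ranks of result links per engine:
weights = engine_weights({"google": 1000.0, "bing": 1400.0, "yahoo": 1450.0})
print(weights)  # google gets the largest weight; the three weights sum to 1.0
```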
2) Title Matching Score (S title ): During errors or exceptions, the IDE or Java framework generally issues notifications from a fixed set of error or exception messages. Thus, there is a great chance that a result page titled with an error or exception message similar to the search query discusses the problem encountered by the developer and contains relevant information for fixation. We consider the cosine similarity 4 measure between the search query and the result title as the Title Matching Score, S title , which provides a value between zero and one. Here, zero indicates complete dissimilarity and one indicates complete similarity between the search query and the title of the result.
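A minimal token-frequency implementation of this cosine similarity (the whitespace tokenization is an assumption; the paper does not specify how the text is tokenized):

```python
from collections import Counter
from math import sqrt

def cosine_similarity(query, title):
    """Token-frequency cosine similarity between a search query and a
    result title; returns a value in [0, 1]."""
    a = Counter(query.lower().split())
    b = Counter(title.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

q = "java.lang.NullPointerException in Eclipse plug-in"
print(cosine_similarity(q, q))                       # 1.0 (identical texts)
print(cosine_similarity(q, "unrelated page title"))  # 0.0 (no shared tokens)
```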
3) Stack Trace Matching Score (S st ): To solve programming errors or exceptions, the associated contextual information, such as the stack trace generated by the IDE, plays an important role. A stack trace contains the error or exception type, system messages, and method call references in different source files. In this research, we provide an incentive to result links containing stack traces similar to that of the selected error or exception. The result links may contain stack traces; however, these are likely generated in different contexts or by different user programs. Thus, complete lexical similarity with the target stack trace is unlikely, and partial similarity is a suitable measure of their relevance. The SimHash algorithm performs well for partial similarity matching between two blocks of content [9], and we use it to determine the relevance between corresponding stack traces. We extract the stack trace information from the result page through HTML scraping and apply the SimHash algorithm to both stack traces. We get their SimHash values and determine the Hamming distance. We repeat the process for all result links containing stack traces and use equation (1) to determine their Stack Trace Scores.
Here, d k represents the Hamming distance between the hash values of each result stack trace and the stack trace of the selected exception, max(d k ) represents the maximum Hamming distance found, α represents the minimum Hamming distance found, and S st refers to the Stack Trace Score. The score ranges from zero to one and indicates the relevance of the result link to the target exception in terms of stack trace information.

4) Source Code Context Matching Score (S cc ): Sometimes, the stack trace may not be enough for problem fixation, and developers post related source code in forums and discussion boards for clarification. We are interested in checking whether the source code contexts of the discussed errors or exceptions in the result links are similar to that of the exception selected in the IDE. Such code contextual similarity is plausible because developers often reuse code snippets from programming Q&A sites, forums, or discussion boards in their programs, directly or with minor modifications. Therefore, a result link containing a source code snippet similar to the surrounding code block of the selected error or exception location is likely to discuss relevant issues. We consider three lines before and after the affected line in the source file as the source code context of the error or exception, and extract the code snippets from result links through HTML scraping. Then, we apply the SimHash algorithm to both code contexts and generate their SimHash values. We use equation (1) to determine the Source Code Context Matching Score for each result link. The score ranges from zero to one and indicates the relevance of the result link to the target error in terms of source code context.

5) StackOverflow Vote Score (S so ): The StackOverflow Q&A site maintains a score for each asked question post, answer post, and comment, and the score can be considered a social and technical recognition of their merit [6].
Here, a user can up-vote a post if he/she likes something about it, and can down-vote it if the post content seems incomplete, confusing, or not helpful. Thus, the difference between up-votes and down-votes, called the score of a post, is considered an important metric for evaluating the quality of the question or solution posted. In our research, we consider this score for the result links from the StackOverflow site. Once the corpus is formed dynamically, we consider the scores of all StackOverflow result links and calculate their normalized scores using equation (2).
Here, SO k refers to the StackOverflow post score, max(SO k ) represents the maximum score found, β represents the minimum post score found, and S so is the StackOverflow Vote Score for the result link. The score ranges from zero (i.e., least significant) to one (i.e., most significant) and reflects the subjective evaluation of the link by a large developer crowd of 1.7 million.

6) Top Ten Score (S tt ): The ranking of the first 10 results is considered very important for any search engine. Generally, users look for the solution in the first 10 results before switching to the next query. In this research, we exploit the top-ten ranking information provided by all the web search APIs and StackOverflow, and we provide incentives to the result links found in the top 10 positions. We provide a normalized score using equation (3) for each top result.
Here, P k represents the average position of a result link in the top-ten rankings of the search tools, and S tt represents the Top Ten Score.

7) Page Rank Score (S pr ): The PageRank score is an interesting metric for determining the relative importance of a list of web pages or web sites based on their interconnectivity. The idea is that a page sets hyperlinks to another page or site when its contents are somehow related, and the first page thereby recommends the other page or site to its visitors. This type of recommendation is important, as it carries the users' subjective evaluation of the web site, and we consider it in this research. We calculate the Page Rank Score as a measure of the worth of a recommendation. We develop an interconnected network of all the result entries in the corpus, considering their incoming and outgoing links, and calculate the score using the PageRank algorithm [1]. The score is also normalized and ranges between zero and one.

8) Search Traffic Rank Score (S str ): The amount of search traffic to a site can be considered an important indicator of its popularity. In this research, we consider the relative popularity of the result links found in the corpus. We use the statistical data from two popular site-traffic analytics companies, Alexa and Compete, through their provided APIs, and get the average ranking for each result link. Then, based on their ranks, we provide a normalized Search Traffic Rank Score between zero and one, considering the minimum and maximum search traffic ranks found.
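Several of the metrics above are min-max normalizations over the dynamically built corpus. Since the bodies of equations (1)-(3) are not reproduced in this text, the sketch below assumes the natural reading of equation (1): a SimHash Hamming distance mapped linearly onto [0, 1], with the smallest distance scoring highest. The SimHash here is a toy MD5-based implementation, not the one used by the authors:

```python
import hashlib

def simhash(text, bits=64):
    """Toy SimHash: bit-voting over MD5 token hashes (illustrative only)."""
    votes = [0] * bits
    for token in text.split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a, b):
    """Number of differing bits between two SimHash values."""
    return bin(a ^ b).count("1")

def stack_trace_scores(target_trace, result_traces):
    """Assumed reading of equation (1): map each Hamming distance d_k onto
    [0, 1] via S_st = (max(d) - d_k) / (max(d) - min(d)), so the closest
    stack trace scores 1 and the farthest scores 0."""
    t = simhash(target_trace)
    d = [hamming(t, simhash(r)) for r in result_traces]
    lo, hi = min(d), max(d)
    return [1.0 if hi == lo else (hi - dk) / (hi - lo) for dk in d]
```

The same normalization pattern applies to the source-code-context, vote, and traffic-rank scores, with distances replaced by the corresponding raw quantities.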
B. Result Scores Calculation
The proposed metrics focus on four aspects of evaluation for each result entry against the search query. They are-popularity of the result link, error content-based similarity, error contextbased similarity, and search engine factors. StackOverflow Vote Score, Page Rank Score and Search Traffic Rank Score are considered as the measures of popularity of the result link from different viewpoints. For example, StackOverflow Vote Score is directly computed from the votes provided by StackOverflow community, Search Traffic Rank Score is calculated from the site ranks provided by Alexa and Compete, and Page Rank Score is based on the interconnectivity among the result links. We consider the average of these component scores as the Popularity Score, S pop , of the result and use equation (4) to get the score.
Title Matching Score measures the content similarity between search query and result title. Stack Trace Matching Score and Source Code Context Matching Score determine the relevance of the result link based on its contextual similarity with that of the selected error or exception; therefore, they constitute the Context Relevance Score, S cxt . We get this score using equation (5).
Search Engine Weight Based Score denotes the relative acceptance of the result link based on its availability in the result sets provided by different search tools, and Top Ten Score refers to the relevance of the result link based on its availability in top 10 positions. Thus, Search Engine Recommendation score, S ser , for the result link can be considered as the product of its acceptance (i.e., confidence) measure and relevance measure. We use equation (6) to get the score.
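Putting the pieces together, the aggregation described in this section can be sketched as below. Equations (4) and (6) follow the prose directly (an average and a product); the form of equation (5) and the weighting of the four aspects in the final ranking score are assumptions (simple averages), since those equations are not reproduced in this text:

```python
def final_score(s_so, s_pr, s_str, s_title, s_st, s_cc, s_sew, s_tt):
    """Combine the component metrics into one ranking score. All inputs
    are assumed to lie in [0, 1]."""
    s_pop = (s_so + s_pr + s_str) / 3.0  # eq. (4): popularity = average of the three scores
    s_cxt = (s_st + s_cc) / 2.0          # eq. (5), assumed: average of the two context scores
    s_ser = s_sew * s_tt                 # eq. (6): engine confidence x top-ten relevance
    return (s_pop + s_title + s_cxt + s_ser) / 4.0  # assumed equal-weight aggregation

# A StackOverflow link with a strong context match and full engine confidence:
print(final_score(0.9, 0.5, 0.6, 0.8, 0.9, 0.7, 1.0, 1.0))
```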
A. Experimental Results
In our experiment, we select 25 runtime errors and exceptions related to Eclipse plug-in development and collect the associated information, such as error or exception messages, stack traces, and source code context. We then use this information (i.e., the error content and context) to search for solutions using our approach. We also perform an extensive manual web search with different available search engines and find the solutions for all errors and exceptions. We should note that we choose the most appropriate solution as the accepted one for each exception or error. During search and ranking, we consider different combinations of the calculated scores (Section III-B) to get the search results for each query, and then identify the accepted solution within the top 10 and top 20 result entries. We also calculate the average ranks of the identified solutions. Table I shows the results of our experiments. We note that the approach that, in addition to the query error message, considers the encountered error context, popularity, and search engine recommendation of the result links for relevance and acceptance (i.e., confidence) checking, outperforms the traditional keyword-based search (which only considers the keywords in the query) both in terms of recommendation accuracy and average rankings of the solutions. (Table I notes: solutions found within the first 10 results; average rank for solutions within the first 10 results; solutions found within the first 20 results; average rank for solutions within the first 20 results.)
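The two headline numbers in Table I can be reproduced from a list of per-query solution ranks; a small helper, with hypothetical rank data (None marks a query whose accepted solution was not found within the cutoff):

```python
def accuracy_and_avg_rank(solution_ranks, cutoff=10):
    """Recommendation accuracy (fraction of queries whose accepted solution
    appears within `cutoff` results) and the average rank of those hits.
    Ranks are 1-based; None marks a query with no solution found."""
    hits = [r for r in solution_ranks if r is not None and r <= cutoff]
    accuracy = len(hits) / len(solution_ranks)
    avg_rank = sum(hits) / len(hits) if hits else None
    return accuracy, avg_rank

# Hypothetical ranks for four queries:
print(accuracy_and_avg_rank([1, 2, None, 3]))  # (0.75, 2.0)
```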
B. Validation of the Proposed Approach by User Study
We select five frequent exceptions from the list used in the experiment and involve five graduate research students in the user study. Each exception was associated with ten solutions recommended by our approach, and the participants were instructed to choose one or more solutions for each exception. We should note that the answers were not ranked, and were presented to the participants in random order. The idea is to prevent the participants' bias toward selecting top answers, and to discover their subjective views about the relevance of the solutions to an exception. We determine the matching between the solutions chosen by the participants and the top five results recommended by our approach. Table II shows the results of the user study. We get 64.28% agreement between the responses of the participants and our proposed approach. Given that relevance checking of a solution against the selected error is a completely subjective process controlled by various subjective factors, this level of agreement is significant.
V. RELATED WORKS
Existing studies related to our research focus on integrating commercial-off-the-shelf (COTS) tools into the Eclipse IDE [8], recommending StackOverflow posts and displaying them within the IDE environment [4,7], embedding a web browser inside the IDE [3] for code example recommendation, and so on. In this paper, we propose a novel approach that exploits result data from state-of-the-art web search APIs and provides filtered and ranked search results, taking the problem content, the problem context, the result links' popularity, and the search engines' recommendations of the result links into consideration. Our proposed approach not only collects solution posts from a large set of forums, discussion boards, and Q&A sites with the help of search engines, but also ensures access to the most recent content of StackOverflow through API access. In contrast, the existing approaches by Cordeiro et al. [4] and Ponzanelli et al. [7] provide results from a single, fixed-size data dump of StackOverflow, and therefore the results contain neither the most recent posts (i.e., those discussing the most recent errors or exceptions) from StackOverflow nor promising solutions from other programming Q&A sites.
VI. CONCLUSION AND FUTURE WORKS
To summarize, we propose a novel IDE-based web search solution that (1) exploits the search and ranking capabilities of three reliable search engines and a programming Q&A site through their API endpoints, (2) considers not only the content of the search (i.e., query keywords) but also the problem context, such as the stack trace and source code context, link popularity, and link recommendation from the search engines, and (3) provides search results within the context of the IDE with web browsing capabilities. We conduct an experiment with 25 runtime errors and exceptions related to Eclipse plug-in development. Our approach recommended solutions with 96% accuracy, which outperforms the traditional keyword-based search. In order to validate the results, we conduct a user study involving five participants, which gave a response agreement of 64.28%. Given that the relevance checking of a solution against the selected error is a completely subjective process, the preliminary results are promising. However, the proposed approach needs to be further validated with more errors and exceptions, followed by an extensive user study, to establish itself as a complete IDE-based web search solution. We also plan to enable multiprocessing for the application and host it as a web service API so that others can readily use it in real time and can also use the API in their own IDEs rather than Eclipse.
Comparison of the Degree of Shrinkage Under Air and Nitrogen Atmospheres by Laser Displacement Sensor
Radical UV curable resin commonly shrinks during photopolymerization, which is difficult to avoid. This study developed an apparatus to measure the degree of shrinkage under nitrogen and air atmospheres. The apparatus consisted of a laser displacement sensor to detect the height of the spun-cast resin on a sapphire plate, two fused silica windows, a nitrogen purging line, and a UV light source. The initial and final thickness of a mixture of diurethane dimethacrylate/1-hydroxycyclohexyl phenyl ketone showed a linear relationship between 50 and 140 μm. The linear relationship was used to calculate the degree of shrinkage assuming an initial film thickness of 80 μm. The degree of shrinkage in nitrogen atmosphere was 1.3% greater than that in air. The relationship between the degree of shrinkage and UV intensity in the nitrogen atmosphere exhibited a single linear relation, while that in air showed two lines. UV intensities lower than 15 mW/cm2 exerted a more significant influence on the degree of shrinkage.
Introduction
Radical UV curable resins typically shrink during photopolymerization. The degree of shrinkage ranges between 1 and 10%, and depends on the number of moieties and curing conditions.
Polymerization shrinkage and the related residual stress have been studied in the field of dental materials. Photopolymerization has attracted attention since bis-GMA-based composites were developed for direct and indirect dental restoratives [1,2]. Volume shrinkage is also an important issue in nanoimprint lithography, for improving demolding and the final shape of nano/microstructures [3][4][5][6][7][8]. Stereolithography and other types of additive manufacturing using photopolymers are layer-by-layer building processes in which volume shrinkage should be controlled [9][10][11][12]. The primary studies of volume shrinkage based on free-volume theory were directed by the group of Bowman [13][14][15].
The measurement of shrinkage, residual stress, and conversion has been studied extensively over the past 20 years with the development of new instruments. Watts et al. developed the Bioman shrinkage stress device, which enabled the measurement of shrinkage displacement [16]. Lu et al. developed an apparatus consisting of a cantilever and a near-infrared spectrometer to measure the degree of shrinkage, residual stress, and conversion simultaneously [17]. Neo and Park used a laser displacement sensor and the ATR FT-IR (Attenuated Total Reflection Fourier Transform Infrared spectroscopy) technique to monitor the degree of shrinkage and conversion simultaneously [18]. Jian introduced a simple laser displacement sensor to detect changes in film thickness and calculate the degree of shrinkage. This device is quite simple, although conversion and residual stress cannot be measured with it [19]. Schmidt and Scherzer evaluated shrinkage using a hyphenated photorheometer and near-infrared spectroscopy [20]. Arenas et al. developed an interferometric technique to measure both local and global shrinkage phenomena [21].
Although many new devices have been developed, the effect of the atmosphere on the degree of shrinkage remains unclear. The degree of shrinkage in air is expected to be lower than under a nitrogen atmosphere [22]. Shrinkage is related to the sequential processes of polymerization, cross-linking, and network formation.
In this study, an apparatus for measuring the degree of shrinkage under nitrogen and air atmospheres was developed to understand the effect of oxygen on the degree of shrinkage.
Materials
A mixture of diurethane dimethacrylate (DUDM, Sigma-Aldrich) and 1-hydroxycyclohexyl phenyl ketone (Irgacure 184, BASF) with a 99:1 weight ratio was used as a UV-curable resin. The mixture was stirred at room temperature and stored for at least one night to ensure better homogeneity.
Hybrid UV-LED
The laser displacement sensor is simple and versatile for measuring the height of materials. To compare the degree of shrinkage under air and nitrogen-purged atmospheres, the laser displacement sensor was attached to an originallydeveloped optical apparatus, as shown in Fig. 1. The laser displacement sensor (LK-H008, laser wavelength of 655 nm) was obtained from Keyence (Japan). The working distance was 8 mm, applicable thickness was ±0.5 mm, and the minimum and maximum thickness that could be measured were 20 and 50 μm, respectively. The UV light source was a high-pressure mercury lamp (Omnicure S2000, ExFo). The UV light was passed through a liquid light guide (ϕ 5 mm × 1500 mm, P/N 805-00028, ExFo). The UV intensity was monitored with a UV meter which was sensitive to the 365 nm wavelength (UIT-150, USHIO, Japan). The UV light was collimated by an off-axis parabolic mirror (MPD129-F01, Thorlabs) with a reflected focal length of 2 inches. In the previous study [19], UV light was irradiated from 45° above the sample. The UV light distribution was inhomogeneous, as shown in Fig. 2(a). However, even though a simple optical system, the off-axis parabolic mirror formed a uniform UV light distribution at the sample stage, as shown in Fig. 2 The UV light distribution was captured using a band pass filter (360 nm, FB360-10, Thorlabs), fluorescence glass (excitation wavelength 200-420 nm, fluorescence wavelength 610 nm, Lumirace R-7, Sumita optical glass, Japan), long pass filter (cut off wavelength 590 nm, FGL590, Thorlabs), and a fixed-focal length lens (focal length 4.5 mm, #86-900, Edmund) by the CMOS camera (DCC1545M, Thorlabs). The UV light exited the fluorescence glass and 610 nm fluorescence light was emitted and subsequently captured by the CMOS camera. The UV intensity distribution was determined from the intensity of the fluorescence light. [19].
Fig. 2. UV light distribution at the sample stage: (a) previous study [19]; (b) this study. The diameter of the image is 12 mm.
A sapphire plate was used as a substrate instead of fused silica. The sapphire plate is almost transparent in the UV region, and its refractive index (n = 1.77) is larger than those of fused silica (n = 1.46) and the UV-curable resin (n = 1.48). Figure 3 shows a comparison of signals from the laser displacement sensor using fused silica and sapphire substrates.
As shown in Fig. 3(a), two peaks can be observed, originating from the reflection at the air/UV-curable resin interface (1) and from the bottom of the substrate. For the sapphire substrate, the reflection from the substrate/UV-curable resin interface appeared as a third peak. The positions of peaks (1) and (3) indicate the thickness of the sample, so the sample thickness could be measured using the sapphire substrate. The film thickness was corrected using the refractive indexes of the UV-curable resin and the sapphire plate. Two fused silica plates and a nitrogen-purging line were used to reduce the oxygen concentration. The nitrogen gas was supplied from a cylinder (purity 99.999%) at a flow rate of 50 mL/min for 20 min.
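The text says only that a refractive-index correction was applied to the measured peak separation; a common first-order form of such a correction (an assumption here, not the paper's stated formula) is the near-normal-incidence apparent-depth relation, under which the physical thickness is the apparent separation between peaks (1) and (3) multiplied by the resin's refractive index:

```python
# Hedged sketch of a refractive-index thickness correction.
# Assumption: near-normal incidence, so t_apparent = t_physical / n,
# hence t_physical = n * t_apparent. n = 1.48 is the resin index quoted
# in the text; the exact correction used in the study (which also
# involves the sapphire index) is not specified.

def physical_thickness_um(apparent_um: float, n_resin: float = 1.48) -> float:
    """Convert an apparent peak separation (um) to physical thickness (um)."""
    return apparent_um * n_resin
```

Under this assumption, an apparent separation of about 54 μm would correspond to roughly 80 μm of resin.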
The measurement was conducted as follows. The sapphire plate (ϕ24 mm, t 0.3 mm) was cleaned using Piranha solution, and the UV-curable resin was then spin-cast onto it. The rotation speed and casting time were adjusted to obtain a thickness of 50 to 130 μm. The sample was then placed on the measurement apparatus and allowed to settle for 20 min. For measurements performed under the nitrogen atmosphere, nitrogen purging was started after the sample and the two fused silica plates were fixed to the apparatus.
Measurement of the sample thickness was started at a sampling rate of 10 Hz prior to UV irradiation. After 10 s, UV irradiation was performed for 3 s. Measurement continued during and after UV irradiation. After 500 s, the film thickness had stabilized. Data points from 490 to 500 s were averaged to determine the final thickness; similarly, data points from 0 to 10 s were averaged to calculate the initial thickness.
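The averaging protocol above can be sketched in a few lines; the window boundaries (0-10 s and 490-500 s) come from the text, while the function and variable names are illustrative:

```python
import numpy as np

def initial_final_thickness(t_s, h_um,
                            init_win=(0.0, 10.0),
                            final_win=(490.0, 500.0)):
    """Average a 10 Hz thickness trace over the pre-irradiation
    window (0-10 s) and the settled window (490-500 s)."""
    t = np.asarray(t_s)
    h = np.asarray(h_um)
    initial = h[(t >= init_win[0]) & (t < init_win[1])].mean()
    final = h[(t >= final_win[0]) & (t < final_win[1])].mean()
    return initial, final
```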
The initial and final thicknesses were measured as a function of UV intensity and atmosphere. The design and parameters of the experiment are listed in Table 1.
Results and discussion
Figure 4 shows the change in film thickness during UV irradiation under nitrogen and air atmospheres. The UV intensity was 10 mW/cm² and the irradiation time was 3 s. The film thickness decreased upon UV irradiation. After the UV light was shut off, the thickness continued to decrease gradually until 500 s, because radicals persist after irradiation, allowing polymerization and the accompanying shrinkage to continue. The relationships between the initial and final thicknesses measured in nitrogen and air atmospheres are plotted in Figs. 5 and 6, respectively. As expected, the final thickness was smaller than the initial thickness. Interestingly, a linear relationship was observed in which the final thickness was proportional to the initial thickness. The slope and intercept of each linear relationship are listed in Table 1. The coefficient of determination, r², is greater than 0.99, indicating a good linear correlation between the final and initial thicknesses. The final thickness for an initial thickness of 80 μm, for instance, was calculated from the fit, from which the degree of shrinkage could also be obtained. This evaluation method for determining the degree of shrinkage is useful when the reproducibility of the initial thickness is relatively poor. The effects of UV intensity and type of atmosphere on the degree of shrinkage are shown in Fig. 7.
The degree of shrinkage under a nitrogen atmosphere was determined to be larger than that in air, as expected. The slope for the sample under nitrogen is constant from 10 to 100 mW/cm², and the degree of shrinkage is 1.3% higher than in the air atmosphere. However, the sample in air exhibits a two-phase linear relationship, with a larger slope from 5 to 15 mW/cm² than from 15 to 100 mW/cm². In a previous study using the same material and UV light source [23], the thickness of the unreacted layer formed by oxygen inhibition was measured. The unreacted-layer thickness decreased with increasing UV intensity, and its slope as a function of UV intensity changed at 10 mW/cm²; the unreacted-layer thickness increased sharply with decreasing UV intensity. Although the transition intensities for the degree of shrinkage and the unreacted layer were not in exact agreement, UV intensities below 10 mW/cm² formed a thick unreacted layer. As the unreacted layer does not shrink, the thickening of this layer decreased the degree of shrinkage.
Fig. 7. Effect of the atmosphere on the degree of shrinkage of the UV-curable resin.
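The fit-and-evaluate procedure used above (a least-squares line through the final-versus-initial data, then evaluation at an initial thickness of 80 μm) can be reproduced numerically. The thickness values below are invented for illustration and constructed to follow an exact linear law; only the method mirrors the text:

```python
import numpy as np

# Hypothetical initial thicknesses (um); finals obey
# final = 0.96 * initial + 0.5 exactly -- illustrative data only.
initial = np.array([55.0, 75.0, 95.0, 120.0])
final = 0.96 * initial + 0.5

slope, intercept = np.polyfit(initial, final, 1)   # least-squares line
r2 = np.corrcoef(initial, final)[0, 1] ** 2        # ~1.0 for exact data

final_80 = slope * 80.0 + intercept                # final thickness at 80 um
shrinkage_pct = (80.0 - final_80) / 80.0 * 100.0   # degree of shrinkage (%)
```

For these made-up numbers the fit recovers slope 0.96 and intercept 0.5, giving a final thickness of 77.3 μm and a degree of shrinkage of about 3.4% at an initial thickness of 80 μm.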
Limitation
It should be noted that this measurement technology has several limitations.
When the viscosity of the UV-curable resin is too low, the resin surface moves inhomogeneously during photopolymerization. In this apparatus, the UV light is applied to a ϕ12 mm area of the ϕ24 mm circular disk. When the UV light was applied to the central region of the disk (around 6 mm), the irradiated area shrank and the resin outside the ϕ12 mm area flowed toward the center. The film thickness therefore first decreased by shrinkage and then increased because of the inflow of the outside resin. This situation is likely to occur with low-viscosity UV-curable resins. To avoid the flow of resin, a metal washer was placed on the center of the sapphire plate, which effectively prevented the flow.
Another limitation concerns the calculation method for the degree of shrinkage. This study used the slope and intercept of the linear correlation between the final and initial thicknesses. When the degree of shrinkage is calculated, the initial thickness must be set within the range where the final thickness can be obtained by interpolating the linear correlation; extrapolation does not always give reasonable degrees of shrinkage.
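That interpolation-only caveat is easy to enforce programmatically; the following sketch (illustrative names, not part of the study's apparatus) refuses to evaluate the fitted law outside the range of initial thicknesses it was fitted on:

```python
def shrinkage_percent(initial_um, slope, intercept, fitted_range):
    """Degree of shrinkage (%) from the fitted linear law
    final = slope * initial + intercept, restricted to interpolation."""
    lo, hi = fitted_range
    if not lo <= initial_um <= hi:
        raise ValueError("initial thickness outside fitted range; "
                         "extrapolation may give an unreasonable value")
    final_um = slope * initial_um + intercept
    return (initial_um - final_um) / initial_um * 100.0
```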
Conclusion
The degree of shrinkage was determined by developing a novel laser displacement apparatus to monitor film thickness during UV curing. The apparatus consisted of an off-axis parabolic mirror for a uniform UV intensity distribution, a sapphire plate substrate for measuring the exact film thickness, and a nitrogen purge port with two fused silica windows for oxygen-free UV curing. The results indicated that the initial and final thicknesses exhibited a linear relationship. The effect of UV intensity on the degree of shrinkage was evaluated by calculating the final thickness for an initial thickness of 80 μm using the slope and intercept of each linear relationship. Under an air atmosphere, the degree of shrinkage increased steeply with UV intensity at first and then more gently; under a nitrogen atmosphere, it increased monotonically.
Although the developed apparatus has some limitations, the advances made here are crucial to developing multi-purpose thickness-measuring devices for determining the degree of shrinkage, conversion, and residual stress simultaneously. The comparison of the degree of shrinkage is important for predicting shrinkage distributions in complex micromanufacturing processes.
Surgical margins and handling of soft-tissue sarcoma in extremities: a clinical practice guideline
Abstracts from conference proceedings: 1
Articles found in hand-search of reference lists: 1
Articles included in this report: 33
Articles and guidelines that outline margin criteria (question 1): 32 (28 studies, 4 guidelines)
Articles and guidelines that describe proper handling of specimens (questions 2 and 3): 4 (3 guidelines, 1 protocol)
INTRODUCTION
Sarcomas are a heterogeneous group of mesenchymal malignancies that arise in soft tissue and bone. They affect all age groups and can arise in any part of the body. They are relatively rare, comprising approximately 2% of tumours in adults and 15% of pediatric malignancies 1 . Soft-tissue sarcomas (stss) are the more common type, and these tumours occur most frequently in the extremities. Treatment is often multimodal and complex, and patients can experience significant morbidity and mortality as a consequence of treatment or the disease. The goals of sarcoma management include both a cure and functional preservation of involved tissues and adjacent critical structures.
Background
Surgery is the primary treatment for extremity sts. The combination of radiotherapy with surgery allows for limb salvage by using radiation to biologically "sterilize" microscopic extensions of tumour and to spare neurovascular and osseous structures. Adjuvant chemotherapy in sts, except for rhabdomyosarcoma and Ewing sarcoma, continues to be controversial.
Methods
The medline and embase databases (1975 to June 2011) and the Cochrane Library were searched for pertinent studies. The Web sites of the main guideline organizations and the American Society of Clinical Oncology conference proceedings (2007-2010) were also searched.
Results and Conclusions
Thirty-three papers, including four guidelines, one protocol, and one abstract, were eligible for inclusion.
The data suggest that patients with clear margins have a better prognosis, but no prospective studies have indicated how wide margins should be. In limb-salvage surgery for extremity sts, the procedure should be planned to achieve a clear margin. However, to preserve functionality, surgery may result in a very close (<1 cm) or even microscopically positive margin.

Surgery is the primary treatment for extremity sts. In the past, surgery consisted of amputation, but several studies have now demonstrated the efficacy of limb-sparing surgical techniques, alone or combined with preoperative or postoperative radiation, in achieving acceptable local control and equivalent overall survival. The combination of rt with surgery allows for limb salvage by using radiation to biologically "sterilize" microscopic extensions of disease and to spare neurovascular and osseous structures. Developments in cross-sectional imaging (including computed tomography and magnetic resonance imaging) and in treatment planning processes, such as computed tomography simulation, have greatly improved the targeting of tissues at risk for tumour involvement. The use of adjuvant chemotherapy in localized sts, except for rhabdomyosarcoma and Ewing sarcoma, continues to be controversial, especially in sarcomas resected with negative margins (R0) 2,3 .
Surgical excision is the primary treatment for extremity sts, and although surgery is necessary for cure, recurrence and metastases can occur even after what is considered complete resection, raising the question of what constitutes an adequate margin. This question is complicated by the type of tissue at the margin (for example, fascia or fat). In addition, there is evidence that a planned positive microscopic margin 4 , such as one against a neurovascular bundle, does not result in a worse outcome, although a recent preliminary re-review of the issue has suggested otherwise 5 . As well, how is adequate assessment of resection margins (gross assessment and number of histologic samples) to be defined?
To answer those questions and to provide guidance for clinicians, the Sarcoma Disease Site Group (dsg) of Cancer Care Ontario's Program in Evidence-Based Care (pebc) decided to prepare a clinical practice guideline on this topic, based on a systematic review of the available evidence.
Guideline Development
The guideline was developed using the methods of the practice guidelines development cycle 6 , and the core methodology was the systematic review. Evidence was selected and reviewed by the working group, which included four Sarcoma dsg members (RK, JW, JE, SV) and a methodologist from the pebc (NC). The resulting evidentiary base and related recommendations are intended to promote evidence-based practice in Ontario, Canada.
Literature Search Strategy
The medline (1975 to June 2011), embase (1975 to June 2011), and Cochrane Library (2011, Issue 2) databases were searched for published practice guidelines, technology assessments, systematic reviews, clinical trials, and studies. Reference lists of papers and review articles were scanned for additional citations.
The Canadian Medical Association Infobase (http://www.cma.ca/index.cfm/ci_id/54316/la_id/1.htm), the National Guidelines Clearinghouse (http://www.guideline.gov/), and other Web sites were searched for existing evidence-based practice guidelines. The American Society of Clinical Oncology conference proceedings from 2007 to 2010 were searched. Search terms indicative of sarcoma, surgical margins, and handling of specimens were used.
3.3.1 Inclusion Criteria
Articles were eligible for inclusion in this systematic review of the evidence if they reported on studies that met these criteria:
• The definition of what was considered to be a negative or positive margin, through measurements or detailed descriptions, was reported.
• The study included adult patients with extremity (arm and leg) sts, and limb-sparing surgery was the primary treatment.
• The study reported on at least one of the following outcomes: local recurrence, recurrence-free survival, overall survival, or disease-free survival.
• For questions 2 and 3, the study reported an outcome resulting from the handling techniques for sts specimens.
3.3.2 Exclusion Criteria
Studies were excluded if they
• were published in a language other than English (because translation capacity was not available).
• included patients with other sarcoma types and the results for sts were not specifically reported.
• did not specify what constituted a negative or positive surgical margin.
• were retrospective studies with fewer than 100 subjects.
Literature Search Results
Thirty-three papers, including four guidelines, one protocol, and one abstract, were eligible for inclusion in the systematic review. Four guidelines that assessed the criteria for positive margins in sts or that provided information on proper handling of specimens were considered relevant to the present guideline [7][8][9][10] . The Dutch Association of Comprehensive Cancer Centres stated that their guideline was evidence-based. However, the methods were not available in English, and so that assertion could not be verified 10 . The other guidelines were consensus-based documents [7][8][9] . The quality of the literature was poor because the studies were most commonly retrospective cohort studies. Furthermore, most studies did not describe how tumours were sampled or margins were evaluated. In some papers, statistical analysis was lacking, and in others, analyses were done in the presence of mixed treatment groups (for example, rt with or without chemotherapy). Thirty-two studies addressed the question of negative compared with positive criteria for surgical margins. Of those studies, only three were prospective [11][12][13] . The rest were retrospective studies using collected patient data.
Three guidelines [8][9][10] and one protocol 39 described the handling of surgical specimens. Table i summarizes the literature search results.
Question 1
Thirty-three papers provided a definition of what were considered negative and positive surgical margins. Some papers did not quantify margin distance, but did state that a clear margin had no residual microscopic disease left at the tumour site. No agreement on what is an adequate margin could be discerned. The published range runs from "negative for tumour at the inked margin" to 5 cm.
4.2.1 Surgery Alone
Two studies addressed the question of an adequate surgical margin with surgery alone. The criterion for a clear margin in one study was less than 2.5 cm 16 ; in the other, a clear margin was described as "all normal tissue surrounding the specimen" 11 . The studies by Enneking et al. and Berlin et al. 11,16 showed that local recurrences were reduced in patients with negative margins. A potential bias in the surgery-alone group is that the tumours so treated are usually superficial 14,26,32 .
Surgery in Combination with Adjuvant or Neoadjuvant Chemotherapy, or with RT, or Both
Most of the studies that addressed margin criteria involved surgery in combination with adjuvant or neoadjuvant chemotherapy, or with radiation, or both. Chemotherapy was given in seventeen studies 12,13,15,17,[20][21][22][23]25,[27][28][29][30][31][32]35,36 and was discussed in two guidelines. However, not all the studies provided detailed results for the patients receiving chemotherapy.
The two guidelines reported on clinical situations warranting the administration of chemotherapy. The Dutch Association of Comprehensive Cancer Centres recommends that chemotherapy be given only in the context of a clinical trial 10 . The esmo guideline (a consensus document) states that adjuvant chemotherapy is not standard treatment in adult sts, but can be used in certain high-risk patients with deep tumours 8 .
Only one study provided results for patients receiving chemotherapy. That study was a randomized trial 13 in which, after surgery, patients were randomized to a doxorubicin or a control group. The adjuvant postoperative chemotherapy consisted of doxorubicin 60 mg/m² intravenously on day 1. Cycle length was 28 days, and 9 cycles were given. The postoperative treatment with doxorubicin did not influence the risk of local recurrence, although patients with a marginal excision also received rt. The width of the surgical margin did not influence outcomes 13 .
Twenty-five studies and four guidelines reported on outcomes after surgery and rt, and also provided information about the surgical margin width.
The guidelines listed in Table ii vary only slightly in their recommendations. The esmo guideline does not state a margin size, but recommends that radiation be given when tumours are larger than 5 cm. The Dutch Association of Comprehensive Cancer Centres, the National Comprehensive Cancer Network, and the Association of Directors of Anatomic and Surgical Pathologists all recommend that radiation be given when margins are less than 1 cm in the fixed state and less than 2 cm in the fresh state. Only the Dutch Association of Comprehensive Cancer Centres provided a recommendation concerning the width of the field that should be radiated around the tumour. They suggested 5-10 cm depending on the tumour type. The most common reason for giving rt was a positive margin. This factor was reported in eight studies 14,17,23,24,26,27,32,34 . The administration of rt on the basis of a discussion between the surgeon and radiation oncologist was reported in three studies [19][20][21] . In two studies, all patients received rt 4,28 . Patients with positive margins were given a boost in three studies 4,28,37 . In two studies, rt was given based on the size of the tumour 17,29 and, in one study, on its grade 34 . Six studies did not provide reasons for rt 15,18,22,30,31,35 . Six studies also provided details about the width of the field irradiated around the tumour site. All treated a field of 5 cm or more 10,17,27,28,32,34 .
Twenty-five studies that provided results for patients treated with surgery and rt characterized the width of the surgical margin in some way. Twenty-one studies demonstrated that positive margins had an unfavourable effect on local recurrence rates 4,12,14,15,17,[19][20][21]23,24,26,[28][29][30][31][32][33][34][35][36]38 . One study reported that local recurrence rates did not differ between margins of less than 1 cm and margins of 1 cm or greater 27 . Another study had only patients with positive margins. In that study, addition of a local postoperative radiation boost in patients who had received rt preoperatively did not alter the recurrence rate 37 .
The rate of distant metastasis was analyzed in nine studies. A positive margin was associated with a greater rate of distant metastasis in six studies 15,20,30,32,35,38 , but in three studies, there appeared to be no difference associated with margin status 4,21,31 .
Overall survival was examined in four studies. Only the study by Popov and colleagues found that margin status was related to overall survival 32 . The other three studies found no association between overall survival and margin status with at least 3 years of follow-up 4,27,30 .
In most of the studies, the results for patients who received rt were combined with the results for patients who did not receive rt. Three studies reported local control outcomes data pertaining to rt and margins 22,27,32 . The studies by Heslin et al., Khanfir et al., and Popov et al. showed no difference in local control between the groups that received radiation and the groups that did not, although the study by Heslin and colleagues analyzed only patients with positive margins. The Heslin et al. study was further complicated by the fact that some patients received chemotherapy 22 . However, given that those three studies were retrospective and not randomized controlled trials, patients with more clinically aggressive disease might be in the rt group, potentially confounding the results.
Question 2
Three guidelines and one protocol addressed the question of the appropriate number of samples to take from the margins of a surgical resection specimen [8][9][10]39 . No available evidence-based data addressed how to adequately assess margins or whether the assessment should be done using fresh or fixed resection specimens.
The Association of Directors of Anatomical and Surgical Pathology and the College of American Pathologists advocate the use of perpendicular (rather than en face) blocks from margins in sts 9,39 . The Association of Directors of Anatomical and Surgical Pathology recommends that any margin macroscopically more than 5 cm be considered clear and that it need not be sampled except in cases of epithelioid sarcoma and angiosarcoma, which are prone to subclinical proximal or satellite spread 9 . However, no recommendation about the number of sections that should be taken is made.
The Dutch guideline states that margins in millimetres should be provided, but gives no guidance about how to accomplish that assessment. On one page, the guideline states that margin distances should be based on the gross assessment of the specimen; on the next page, it states that distances should be assessed microscopically.
The National Comprehensive Cancer Network guideline states that the surgeon and the pathologist should both assess the margins and that the margin distances should be provided in the surgical report. However, the guideline gives no advice on how to assess margin adequacy.
Question 3
Three guidelines and one protocol addressed the appropriate handling of surgical resection specimens [8][9][10]39 . The guidelines written by the Association of Directors of Anatomic and Surgical Pathology, the Dutch Association of Comprehensive Cancer Centres, and the College of Pathologists all recommend that resections arrive in the pathology lab unfixed as soon as possible after excision 9,10,39 . The Dutch guideline further recommends that the specimens arrive preferably on gauze moistened with physiologic salt solution. In addition, the Dutch guideline recommends storing representative tissue and freezing it for later testing as needed 10 . The Association of Directors of Anatomic and Surgical Pathology and the esmo guidelines recommend that, whenever possible, the orientation of a resection specimen be verified with the operating surgeon 8,9 .
DSG CONSENSUS PROCESS
The draft guideline was circulated to the Sarcoma dsg for review and discussion. The group approved the document and agreed that no major changes were necessary.
REVIEW AND APPROVAL BY THE PEBC REPORT APPROVAL PANEL
The final report was also reviewed and approved by the pebc report approval panel, which consists of three members, including two oncologists with expertise in clinical and methodology issues, and the pebc director. Key issues raised by the Report Approval Panel included the lack of a discussion of health benefits and side effects, of an explicated definition of sts, and of any comment on the type or quality of radiation administered in the studies. The Sarcoma dsg received and responded to all comments. The discussion section was expanded to address most of the concerns and to provide additional context and commentary.
EXTERNAL REVIEW
The pebc external review process is two-pronged: a targeted peer review aims to obtain direct feedback on the draft report from a small number of specified content experts, and a professional consultation facilitates dissemination of the final guidance report to Ontario practitioners.
7.1.1 Targeted Peer Review
During the guideline development process, 4 targeted peer reviewers from Canada (considered clinical or methodology experts on the topic) were identified by the guideline authors. Three reviewers agreed to participate, and the draft report and a questionnaire were sent by e-mail for their review. The questionnaire consisted of items evaluating the methods, results, and interpretive summary used to inform the draft recommendations, and questions about whether the draft recommendations should be approved as a guideline. Written comments were invited. The questionnaire and draft document were sent on June 12, 2012. Follow-up reminders were sent at 2 weeks and at 4 weeks. All the targeted peer reviewers were required to complete a conflict of interest form. Two reviewers completed their questionnaires; one reviewer joined the professional consultation.
Professional Consultation
The guideline authors identified 60 potential participants. Feedback was obtained through a brief online survey of these health care professionals, who are the intended users of the guideline. Participants were asked to rate the overall quality of the guideline recommendations and whether they would use and recommend them. Written comments were invited. Participants were contacted by e-mail and directed to the survey Web site, where they were provided with access to the survey. The notification message was sent June 11, 2012. Two follow-up reminders were sent on June 25 and July 9, 2012.
Summary of Written Comments from the Targeted Peer Review
The main concerns raised were
• that "preoperative radiation" should be added in recommendation 1 ("the use of postoperative radiation should be considered").
Current Oncology, Volume 20, Number 3, June 2013. Copyright © 2013 Multimed Inc. Following publication in Current Oncology, the full text of each article is available immediately and archived in PubMed Central (PMC).
• that the intent of the guideline was to provide clinicians with guidance on the definition of an adequate surgical margin, but the document did not provide any clinically useful guidance on how to proceed.
The poor quality of many of the studies and the lack of a randomized controlled trial made providing such guidance difficult. The authors inserted a recommendation based on the consensus opinion of the expert panel.
Summary of Written Comments from the Professional Consultation
From among the 60 participants, 15 responses were received, with 6 respondents saying that they had no interest in this area. Requests were made to change the verb "has" to "may have" in one qualifying statement ("A microscopic positive margin in sts of the limb treated with surgery and radiation has an increased rate of local recurrence").
Question 1
Recommendation: In limb-salvage surgery for sts, the operation should be planned with the objective of achieving a clear margin. However, to preserve functionality, surgery may result in a close or even a microscopically positive margin. Based on the consensus opinion of the expert panel, a "close" margin is considered to be less than 1 cm after formalin fixation. In the circumstance of a close or microscopically positive margin, the use of preoperative or postoperative radiation may be considered.
Qualifying Statements:
In limb-sparing surgery for sts, an adequate margin for surgical treatment alone or for surgery with rt cannot be defined, because the studies identified in the literature search did not definitively identify an appropriate margin distance. Intact fascia (which can be measured in millimetres) is considered an adequate margin by some 8,15 .
A microscopically positive margin in sts of the limb treated with surgery and rt may have an increased rate of local recurrence.That possibility suggests that every effort should be made to achieve a negative margin.
In the event that limb function will be compromised, surgeons and patients may wish to discuss the benefits and risks of accepting a very close margin that may even be microscopically positive, and the importance of preoperative or postoperative rt.
Local recurrences have been observed even when negative margins are achieved with surgery and with the combination of surgery and rt, suggesting that tumour characteristics other than margin status are important. Further study is required.
At this time, there is no evidence to support the use of postoperative chemotherapy in soft-tissue tumours of an extremity that have been treated with intralesional or marginal excisions.
Question 2
Recommendation: In the histology assessment of margins, no definitive recommendations can be made concerning an appropriate required number of margin samples.
Question 3
Recommendation: It is not possible to make evidence-based recommendations concerning the appropriate handling of surgical resection specimens to assess the adequacy of excision. Where this topic is mentioned, guidelines endorse inking the margins and sampling them perpendicular to (and not en face to) the margin. In the absence of evidence-based recommendations, the Sarcoma dsg recommends the following, based on the expert opinion of the working group and consensus of the dsg members:
• The specimen should be received fresh, with the orientation indicated by the surgeon.
• The specimen and the tumour should be measured in three dimensions.
• The distances from all 6 margins should be measured, and the location of the tumour (superficial or deep) and its relationship to fascia, if present, should be indicated.
• All margins should be sampled perpendicular to the margin, with at least 2 sections being taken from the closest margin and 1-2 sections from all other margins.
• More extensive margin sampling should be considered for tumours such as angiosarcoma, epithelioid sarcoma, and chondrosarcoma.
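As a concrete reading of the consensus sampling rule above, the section counts can be generated from the six measured margin distances. The function and the margin labels are illustrative, and the counts are the minimums named in the recommendation (at least 2 from the closest margin, 1 of the recommended 1-2 from each other margin):

```python
def sampling_plan(margin_distances_mm):
    """Minimum perpendicular sections per margin: 2 from the closest
    margin, 1 from each of the others (illustrative encoding of the
    DSG consensus; the guideline allows 1-2 from non-closest margins)."""
    closest = min(margin_distances_mm, key=margin_distances_mm.get)
    return {name: (2 if name == closest else 1)
            for name in margin_distances_mm}

# Example for the six margins of an oriented specimen (distances in mm):
plan = sampling_plan({"superior": 22.0, "inferior": 15.0, "medial": 4.0,
                      "lateral": 18.0, "anterior": 9.0, "deep": 6.0})
```

Here the medial margin, being closest, gets two sections and every other margin gets the one-section minimum.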
DISCUSSION
Although many studies have considered what constitutes an appropriate margin, no randomized trials or prospective studies have assessed surgical margins and outcomes for sts of the extremities. Most of the available evidence comes from retrospective reviews of charts and databases. The studies are confounded by differences in treatments received: some patients received preoperative, and others postoperative, rt or chemotherapy, or both. Many studies had to be excluded because they did not categorize their results by the type of sarcoma. For example, bone and soft tissue were analyzed together, or truncal and extremity sarcomas were grouped together. When the clinical groupings are not uniform, it is difficult to interpret results because it is impossible to tell whether a treatment is effective or whether some combination of the location, type, size, or grade of the sarcoma is influencing the results.
There is a need for guidance concerning what constitutes an adequate surgical margin with respect to the management of sts of the extremities. There is no standard of care, and different surgeons have different definitions of what constitutes an adequate margin. After extensive review of the literature, the working group recommends that the goal should be to obtain negative margins. Local recurrences have been observed even when negative margins are achieved with surgery or with surgery and rt, suggesting that tumour characteristics other than margin status are important. It would seem that the width of the margin obtained should be influenced by the subsequent effect on functionality. A close margin or even a planned microscopically positive margin may be acceptable given the study by Gerrand et al. 4 , although even that finding is controversial 5 . In cases with close margins (<1 cm measured in the fixed state by the pathologist), consideration should be given to the administration of postoperative rt. Clearly, other factors (tumour type, grade, and biology, or even the type of tissue at the margin, such as fascia) affect the rate of both local and systemic recurrence. The causes of recurrence need further investigation. It may be that ongoing molecular studies will provide insight into other relevant tumour characteristics that influence outcome.
No studies addressed the number of sections that needed to be taken from the resection margins. The evidence for the number of sections that should be taken from a surgical specimen to assess adequacy of excision was nonexistent. Few studies mentioned how the specimens in their studies were sampled or how many sections were taken from margins. This lack of consistency makes it difficult to compare results study to study. There is a great need for evidence-based standardization concerning how to sample tumours.
PRACTICE GUIDELINE DATE
This guideline was completed in September 2012. Practice guidelines developed by the PEBC are reviewed and updated regularly. Please visit Cancer Care Ontario's Web site (https://www.cancercare.on.ca/toolbox/qualityguidelines/diseasesite/sarcomaebs/) for the full evidence-based series report and subsequent updates.
ACKNOWLEDGMENTS
The PEBC is supported by the Ontario Ministry of Health and Long-Term Care through Cancer Care Ontario. All work produced by the PEBC is editorially independent from its funding source.
Table I: Literature search results (1975 to June 2011)
Table II: Comparison of guideline criteria for giving radiotherapy and for irradiated margins | 2017-06-02T23:38:30.729Z | 2013-06-01T00:00:00.000 | {
"year": 2013,
"sha1": "bf06f1fe24fa7c807df816eeec6f9e84c654d7e8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1718-7729/20/3/1308/pdf?version=1609334067",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "bf06f1fe24fa7c807df816eeec6f9e84c654d7e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
227168603 | pes2o/s2orc | v3-fos-license | Extracorporeal life support as bridge to recovery in yew poisoning: case reports and literature review
Abstract Aims This short communication aims to review the treatment of cardiogenic shock in patients with yew poisoning based on two case reports from our institution, focusing on the use of extracorporeal life support (ECLS). Methods and results We report two cases of Taxus baccata poisoning treated with ECLS at our institution and review the literature based on a search in PubMed and Google Scholar on the topic of yew poisoning and ECLS. All cases were combined for analysis of demographics, ECLS therapy, and outcome. Case 1: A 35‐year‐old woman developed polymorphic ventricular tachycardia followed by cardiovascular arrest 5 h after orally ingesting a handful of yew needles. Successful resuscitation required ECLS for 72 h due to ongoing cardiac arrhythmias and cardiogenic shock. The patient left the hospital without neurological sequelae after 10 days. Case 2: A 30‐year‐old woman developed refractory cardiac arrhythmias and circulatory arrest. Resuscitation included ECLS for 71 h. T. baccata needles found by gastroscopy confirmed the diagnosis. The patient had no neurologic deficits and was transferred to psychiatry after 11 days. Review of the literature: Nine case reports were found and analysed along with our two cases. Five out of the 11 (45%) patients were female. Median (range) age was 28 (19–46) years. T. baccata needles were ingested with a suicidal intention in all patients. Median (range) duration of ECLS was 70 h (24–120 h). Eight (73%) patients had full neurological recovery. Conclusions Yew poisoning is a differential diagnosis in young psychiatric patients presenting with polymorphic ventricular tachycardia and cardiogenic shock. A characteristic cardiac contraction pattern in echocardiography may present a diagnostic clue. The early use of ECLS is a valuable bridge to recovery in most of these patients.
Background
The common yew (Taxus baccata) is a conifer native to Europe ( Figure 1). The wood, bark, needles, and seeds contain cardiotoxic alkaloids (taxines) in different concentrations. The ingestion of 50-100 g of needles may have a lethal effect. 1 T. baccata poisoning is observed after accidental ingestion or attempted suicide. Clinical symptoms are caused by the blockade of cellular sodium and calcium channels and range from mild gastrointestinal symptoms to seizures, cardiac dysrhythmias, and cardiogenic shock. 1,2 Confirmation of Taxine B, 3,5-dimethoxyphenol (3,5-DMP), or other taxines in the blood serve as markers of ingestion. 3
Aims
The aim of this report is to raise awareness of the early use of extracorporeal life support (ECLS) as a successful resuscitation measure in patients suffering from cardiogenic shock due to T. baccata poisoning.
Methods
We report two patients treated for T. baccata poisoning with ECLS at a tertiary hospital. Written informed consent was obtained from both patients. A literature search was conducted on PubMed and Google Scholar with the following criteria: 'ECLS and Yew', 'ECMO and Yew', 'Taxus baccata and ECLS', and 'Taxus baccata and ECMO'. Eight case reports were found (PubMed, n = 5; Google Scholar, n = 3). 4-11 A case report recently published in a Swiss medical journal was additionally included. 12 All cases were combined for analysis of demographics, ECLS therapy, and outcome.
Case 1
A 35-year-old female patient with a history of depression and suicidal attempts lost consciousness during family lunch. Her husband performed lay cardiopulmonary resuscitation (CPR) for 10 min, which was continued by professional CPR for another 20 min after the arrival of the emergency service. Return of spontaneous circulation was obtained after three defibrillation shocks (200 Joules), and administration of 4 mg epinephrine and 300 mg amiodarone intravenously (iv). Ventricular fibrillation (first rhythm) was changed to pulseless electrical activity and finally to a broad complex ventricular tachycardia (VT). During transport to the hospital, the blood pressure was stabilized with cumulative 1 mg of epinephrine. In the emergency department, VT ( Figure 2) persisted. Transthoracic echocardiography (TTE) showed a desynchronized, 'vermicular'-like ventricular contraction pattern with a severely reduced ejection fraction (EF). Owing to progression of the cardiogenic shock, an ECLS system (Maquet Cardiohelp; bi-femoral veno-arterial cannulation; venous cannula 25 French, arterial cannula 17 French; initial blood flow 4.3 L/min) was implanted 2.5 h after symptoms onset. Three cardioversion shocks (200 Joules) and 100 mg lidocaine iv did not convert the VT. Coronary artery disease was excluded by coronary angiography, and the patient admitted to the intensive care unit (ICU). Meanwhile, the husband found recent search activity regarding 'yew poisoning' on the patient's mobile phone. Therefore, 7 h after symptoms onset, activated charcoal (65 g) was administered via a nasogastric tube (NT) and 100 mL sodium bicarbonate 8.4% iv (pH goal 7.45-7.5). Two hours later, the electrocardiogram (ECG) demonstrated a normal sinus rhythm without conduction blocks. Yew poisoning was confirmed by liquid chromatography coupled with mass spectrometry, showing the presence of Taxine B and Isotaxine B in the blood. 
The following day, a TTE showed a mildly reduced left ventricular (LV) and a normal right ventricular function. Following the administration of levosimendan, the ECLS was weaned and explanted after 72 h. A cannula-related stenosis of the external iliac artery was stented. The patient woke up with fine motor skill disturbances, which completely resolved over time. She admitted to having ingested a handful of yew needles in a suicidal intention approximately 5 h prior to her loss of consciousness. The patient was discharged from the hospital with a normal heart function after 10 days.
Case 2
A 30-year-old female patient with a history of recurrent depression was referred from a psychiatric institution, after a non-observed collapse with a bleeding wound on her forehead. After arrival of the emergency service, three generalized tonic-clonic seizures were terminated by midazolam iv. During the transport to the hospital, she suffered from nausea and vomiting. In the emergency department, her clinical state deteriorated. The ECG showed an alternating broad complex tachycardia and bradycardia with recurrent episodes of asystole (up to 45 s). A focused TTE demonstrated spiral 'vermicular' movements of the LV myocardium and a severely reduced EF. The patient lost consciousness, and CPR was immediately initiated. She was defibrillated four times and received cumulatively 6 mg adrenaline, 300 mg amiodarone, 16 mmol magnesium chloride, and 1.5 mg atropine without heart rhythm stabilization. Because of her age, low cardiovascular risk profile and, the erratic nature of the arrhythmia, coronary ischaemia seemed unlikely to cause the cardiac arrest. Based on the psychiatric history, ECG morphology, and TTE myocardial contraction pattern, a suicide attempt with antidepressant drugs, digitalis or yew, was suspected. To stabilize the patient haemodynamically, an ECLS system (Maquet Cardiohelp, cannulation site and material as in Case 1, initial blood flow 4.5 L/min) was implanted under CPR (total duration 60 min) 2 h after symptoms onset; 400 mg digoxin antibodies (fab fragments) and 200 mL sodium bicarbonate 8.4% were administered iv. Two hours later, a large quantity of tree needles macroscopically compatible with yew was found and evacuated by gastroscopy from the patient's stomach.
Activated charcoal (75 g) was administered via a NT. After exclusion of intracranial lesions by a computed tomography scan, the patient was admitted to the ICU. Six hours later, the ECG showed a regular sinus rhythm without conduction blocks. Laboratory blood analysis confirmed the presence of Taxine B. The following day, a TTE presented a normal heart function. The ECLS was weaned and explanted after 71 h. Due to the emergent ECLS implantation, the femoral artery had to be surgically reconstructed bilaterally. The patient woke up in a psychotic mental state. Magnetic resonance imaging ruled out ischaemic brain injury. After treatment with antipsychotics, the patient mentally recovered and confirmed the ingestion of yew needles in a suicidal attempt. The patient was referred to psychiatry after 11 days.
Review of the literature
The literature search revealed nine case reports of patients treated with ECLS for yew intoxication between 2010 and 2019. 4-12 With our two cases, a total of 11 patients were analysed. Patient characteristics are summarized in Table 1. Five out of 11 (45%) patients were female. The median age was 28 years (range 19-46 years). T. baccata needles were orally ingested with a suicidal intention in all patients. The median ECLS duration was 70 h (range 24-120 h). Five (45%) patients received anti-digoxin antibodies (fab fragments), 5,7-9 and two (18%) patients therapeutic hypothermia. 7,8 Eight (73%) patients completely recovered, 5-8,10,11 two (18%) patients died, 4,12 and one patient was bedridden due to post-hypoxic encephalopathy. 9
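The medians, ranges, and proportions above can be reproduced mechanically. The sketch below uses an invented per-case list: only the resulting summary statistics match the figures reported in this review; the individual entries are illustrative placeholders, not the actual published cases.

```python
from statistics import median

# Invented per-case values; ONLY the summary statistics printed below are
# reported in the reviewed literature -- individual entries are illustrative.
ages = [19, 22, 25, 27, 28, 28, 30, 33, 35, 40, 46]          # n = 11
ecls_hours = [24, 48, 60, 66, 69, 70, 72, 80, 96, 110, 120]  # n = 11
n_female, n_recovered = 5, 8

print(f"age: median {median(ages)}, range {min(ages)}-{max(ages)}")
print(f"ECLS: median {median(ecls_hours)} h, range {min(ecls_hours)}-{max(ecls_hours)} h")
print(f"female: {n_female / len(ages):.0%}, full recovery: {n_recovered / len(ages):.0%}")
```

With these placeholder values the output reproduces the review's summary figures (median age 28, range 19-46; median ECLS 70 h, range 24-120 h; 45% female; 73% full recovery).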
Discussion
We report the successful resuscitation with ECLS of two patients in cardiac arrest due to yew (T. baccata) poisoning. The psychiatric history, an ECG with polymorphic/bimorphic ventricular brady-tachyarrhythmia, and a TTE with characteristic LV contraction pattern raised the suspicion of yew poisoning. The presence of yew needles in the stomach and taxines in the blood confirmed the diagnosis. In all cases, yew poisoning occurred in young patients with a suicidal intention.
Along with gastroscopy and charcoal administration, multiple measures are used to limit the toxicity of T. baccata. 2 A particular problem is the poor response of associated arrhythmias and cardiogenic shock to atropine, 5,13 external/internal pacing 14,15 or catecholamines, 5,7,13 intravenous administration of lidocaine, 16 and sodium bicarbonate. 17,18 Few case reports attribute beneficial effects to digoxin-specific antibodies (fab fragment). 5,19 Because taxines are lipophilic compounds, intravenous lipid emulsion (ILE) may be considered for their removal. 20,21 Noticeably, the use of ILE by itself has been associated with severe side effects: acute kidney and lung injury, venous thromboembolism, hypersensitivity, and fat overload syndrome have been described. 22 Caution is advised if ILE is to be used in a patient on ECLS because blood clot formation in the circuit and malfunction of the membrane oxygenator due to fat emulsion agglutinations have been reported. 23 In our opinion, ILE treatment may be used in poisoned patients with cardiogenic shock only if ECLS is not readily available or not an option.
ECLS may be used as a bridge to recovery, bridge to decision, bridge to bridge, or bridge to transplantation. 24 Its increasing use engenders morbidity and mortality. 25 Complications include vascular injury, bleeding, neurological adverse effects, and infection. 26 In yew poisoning, ECLS supports the circulation during cardiac arrhythmias and/or failure. The biological half-life of taxine metabolites ranges between 11 and 13 h. 27 Therefore, ECLS explantation can be accomplished in less than 72 h in most patients, as cardiac function recovers when the taxines are cleared from the patient's blood. Timely transfer to a tertiary care centre with an ECLS program is highly recommended for patients with suggested reversible causes of cardiogenic shock like yew poisoning. According to the available literature, the outcome of patients with yew poisoning on ECLS appears to be excellent if ECLS implantation is preceded by optimal CPR.
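The sub-72-hour explantation window follows from the cited 11-13 h half-life: after six or so half-lives, little taxine remains. A minimal decay sketch, assuming simple first-order (exponential) elimination (the single-compartment model is our assumption; the half-life range is from the text):

```python
def fraction_remaining(hours, half_life_h):
    """Fraction of a substance remaining after `hours`,
    assuming first-order elimination with the given half-life."""
    return 0.5 ** (hours / half_life_h)

# After 72 h of support, with taxine half-lives of 11-13 h:
for t_half in (11.0, 13.0):
    print(f"t1/2 = {t_half} h -> {fraction_remaining(72, t_half):.1%} remaining")
```

Under these assumptions roughly 1-2% of circulating taxine remains at 72 h, consistent with cardiac function recovering on that timescale.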
Conclusions
Yew poisoning is a differential diagnosis in young psychiatric patients presenting with polymorphic ventricular tachyarrhythmias, characteristic echocardiography findings, and cardiogenic shock. The diagnosis is confirmed by the presence of yew needles in the patient's stomach and/or laboratory evidence of typical toxins in the blood. The early use of ECLS is a valuable bridge to recovery in most of these patients. | 2020-11-26T09:05:03.156Z | 2020-11-24T00:00:00.000 | {
"year": 2020,
"sha1": "aaefb539b791fa8abf19beb5d0ebc52a76a98a11",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ehf2.12828",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b87b71870792e1fd041028aee8f9ba9b72d06192",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259315205 | pes2o/s2orc | v3-fos-license | Buccal nerve schwannoma mimicking a salivary gland tumor: a rare case report
Schwannomas are benign tumors originating from myelinating cells constituting nerve sheaths but rarely contain cellular elements of the nerve. The authors encountered a 47-year-old female patient with a schwannoma on the anterior mandibular ramus arising from the buccal nerve, measuring 3 cm×4 cm. Surgical resection was performed with preservation of the buccal nerve via microsurgical dissection. After one month, the sensory function of the buccal nerve was recovered without complications.
II. Case Report
This retrospective clinical study was approved by the Institutional Review Board of Gangnam Severance Hospital (IRB No. 3-2021-0298). The authors have read the Helsinki Declaration for research on humans and have followed the guidelines in this investigation. A 47-year-old female patient complained of swelling on the anterior ramus ascending branch. She had no history of trauma, bleeding, limitation of mouth opening, or pain. Although there appeared to be linea alba on the overlying mucosa, an intra-oral examination revealed firm and mobile swelling without pain or other symptoms. Radiography revealed expansion of the mandibular ramus and a well-defined heterogeneously enhancing mass, approximately 3 cm×4 cm in size, without aggressive destruction of the underlying bone. (Fig. 1. A) MRI showed a multifocal hypointense portion and a well-bounded lesion. (Fig. 1. B) Although no malignant appearance was detected on these images, the plan was to excise the tumor completely under general anesthesia because it had been misdiagnosed as a salivary gland tumor, and the patient refused incisional biopsy under local anesthesia due to fear. Laboratory investigations for routine preoperative screening did not yield anything significant.
Under general anesthesia, surgical excision was started by carefully dissecting the mass from the buccal mucosa on the anterior part of the mandibular ramus. (Fig. 2. A) The middle portion of the tumor was peripherally attached to the buccal nerve. (Fig. 2. B) A tumor fragment was sent for frozen biopsy to preserve the buccal nerve. Immunohistochemical staining confirmed a favorable schwannoma positive for the S-100 protein and vimentin. The tumor was excised via microsurgical dissection from the buccal nerve. (Fig. 2. C)
Pathological examination of the surgical specimen revealed a spindle cell proliferative lesion surrounded by a fibrous capsule and exhibiting a characteristic biphasic morphology. (Fig. 3. A) Most of the tumor was composed of compact spindle cells with indistinct cytoplasmic borders and a nuclear palisading pattern. (Fig. 3. B) A small portion of the tumor was composed of loosely arranged spindle cells in the myxoid stroma and irregularly spaced blood vessels (not shown). Neither nuclear atypia nor increased mitotic activity was observed. The final diagnosis of schwannoma was established from these histopathological findings and the S-100 positivity on immunohistochemistry. Postoperatively, the patient complained of partial paresthesia of 70% and a limitation of mouth opening for approximately 30 mm, but these symptoms resolved after one month. The five-month follow-up examination showed no signs of recurrence or neuropathic symptoms.

Fig. 1. Preoperative images. A. Computed tomography revealed a homogeneous mass, slightly less dense than the adjacent muscles, and expansion of the ramus with some irregular resorption borders (arrow). B. T2-weighted magnetic resonance imaging showed a multifocal hypointense portion, but the long buccal nerve was not visualized clearly (arrow).
III. Discussion
To the best of the authors' knowledge, this is the first described oral schwannoma that originated from the buccal nerve. The authors could minimize the patient's discomfort by preserving the nerve, which was not detected in the preoperative images. Hence, a three-dimensional diagram should be used when considering the nerve, as shown in Fig. 4.
Although the recurrence rate of schwannoma was reported to be 4%-6% after resection 1, the rate was as high as 33.3% while preserving the facial nerve from jugular foramen schwannomas 8. With regard to recurrence, the size of the schwannoma was suggested to be a critical risk factor, with risk increasing by 15.7% for every 1 cm increase in size (hazard ratio, 1.157; 95% confidence interval, 1.016-1.319) 1. Therefore, long-term follow-up of patients with a large schwannoma will be needed to preserve the parental nerve.
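Under the proportional-hazards model implied by the cited per-centimetre hazard ratio, risk scales multiplicatively with size. A small sketch (extrapolating the 1.157 per-cm ratio across several centimetres is our assumption; the cited study reports only the per-cm estimate):

```python
def relative_hazard(extra_cm, hr_per_cm=1.157):
    """Relative recurrence hazard for a tumour `extra_cm` larger than a
    reference lesion, assuming the hazard multiplies by `hr_per_cm` per cm."""
    return hr_per_cm ** extra_cm

print(f"+1 cm: {relative_hazard(1):.3f}x")  # 1.157x, i.e. +15.7%
print(f"+4 cm: {relative_hazard(4):.2f}x")
```

Compounding like this is why a tumour only a few centimetres larger carries a substantially higher recurrence hazard, motivating the long-term follow-up recommended above.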
In general, schwannomas are well-encapsulated and eccentric to the nerve fascicles, with few axons embedded in the tumor mass, and contain variable amounts of Antoni A and B areas. The Antoni A area is a compact cellular area characterized by well-aligned nuclei and interdigitating cellular processes. The intense laminin-positive staining in schwannomas is believed to represent the tight organization due to the adhesive properties of laminin, which also expressed the S-100 protein 9 . In contrast, the Antoni B area contains relatively few cells and loosely arranged cellular components. A schwannoma with a large portion of Antoni B tissues might be thin and wispy due to microcysts filled with basophilic mucin 9 .
The schwannoma of this patient was uniform with a solid parenchyma and was histologically confirmed during surgery, by frozen section, as the favored type based on the S-100-positive stain and dominant Antoni A areas. These features might allow a dissection from the buccal nerve. As this is the first report of schwannoma from the buccal nerve, the authors cannot suggest a long-term prognosis for recurrence. Malignant transformation of schwannomas is rare, and they rarely metastasize, though the lung is the most common site if metastasis occurs 10. In conclusion, this case report showed
"year": 2023,
"sha1": "df7ef66e23e3ab29e1fc79d85868f03cdc71c7e2",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "966c14a32bd1afb77380d89428fcbc762e770a03",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218858254 | pes2o/s2orc | v3-fos-license | Exhalation delivery system with fluticasone improves quality of life and health status: pooled analysis of phase 3 trials NAVIGATE I and II
Background Chronic rhinosinusitis with or without nasal polyps (CRSwNP/CRSsNP) seriously impairs health‐related quality of life (HRQoL). This analysis describes the impact of the exhalation delivery system with fluticasone (EDS‐FLU) on HRQoL, assessed by the 36‐item Short‐Form Health Survey version 2 (SF‐36v2), and on utilities, assessed via the Short‐Form 6‐Dimension (SF‐6D), in patients with CRSwNP. Methods Post hoc analysis of pooled randomized clinical trial data (NAVIGATE I and II; N = 643) to examine change from baseline in SF‐36v2 and SF‐6D at end‐of‐double‐blind (EODB: 16 weeks) and end‐of‐open‐label (EOOL: 24 weeks; following 8 weeks of open‐label treatment) for EDS‐FLU vs placebo (EDS‐PBO). Baseline characteristics predictive of change in SF‐36 and SF‐6D scores were assessed. Results Mean baseline SF‐36v2 scores were below population norms. At EODB, mean improvement was greater for all SF‐36v2 domain and component scores with EDS‐FLU (range: 2.9 [physical functioning] to 5.11 [bodily pain {BP}]) vs EDS‐PBO (range: 0.81 [mental health] to 2.87 [BP]) (each comparison p < 0.01); physical and mental component score improvements within the EDS‐FLU group exceeded the minimal clinically important difference (MCID). Clinically meaningful and statistically significant improvements in SF‐6D utility scores were seen in EDS‐FLU–treated patients compared to EDS‐PBO–treated patients (0.058 vs 0.023, respectively, p < 0.001). At EOOL, SF‐36v2 and SF‐6D mean scores were at or above population norms, with clinically meaningful and statistically significant improvements from baseline. Conclusion In this pooled analysis of 2 large pivotal EDS‐FLU trials, health domain and health utilities improvements were significantly greater with EDS‐FLU than EDS‐PBO and were comparable to population norms.
Figure 1. Gamma camera image deposition information (logarithmic hot-iron intensity scale) from the nasal cavity that is superimposed on the corresponding sagittal MRI section. The image represents deposition 0 to 2 minutes after delivery using a conventional liquid spray (A) and an exhalation delivery system (B). In the first image (A), deposition of spray was greatest in the lower anterior regions of the nose, whereas in the second image (B), deposition of liquid was greatest in the upper posterior regions of the nose. The images were from the same healthy subject after each method of administration (Image used with permission from Djupesland 18). MRI = magnetic resonance imaging.
pain, and dysosmia, lasting 12 weeks or longer, and is frequently associated with flares of acute sinusitis, increased healthcare resource utilization, 2,3 and increased antibiotic prescriptions. 4 A recent claims-based study estimated that an antibiotic is prescribed in approximately 70% of CRS visits, and that CRS is responsible for 7.1% of all antibiotic prescriptions, more than any other primary diagnosis. 4 In addition, CRS has been shown to be associated with significant impairments in sleep, mood, and work productivity. [5][6][7][8] Not surprisingly, studies have found that patients with CRS report decreased health-related quality-of-life (HRQoL) and health status. [9][10][11][12] QoL instruments are classified into 2 main types: general (or generic) and disease-specific, and the measurement of both of these perspectives has proven instrumental in understanding the impact of diseases and conditions on patients' QoL. The measurement of the impact on general QoL can help better understand the relative burden of diseases, and is useful in evaluating the impact of healthcare interventions relative to general population health levels. 13 For U.S. patients with CRS, general QoL domains and mean health utility levels (as measured by the Short Form-36 Health Survey, version 2 (SF-36v2, standard 4-week recall) and the Short-Form 6-Dimension (SF-6D), respectively, have been reported to be below U.S. norms, and its effect on health utilities has been shown to be comparable to other serious chronic conditions. 11,14 Significant improvements in general QoL and health utility levels have been observed following successful medical or surgical therapy. 15 CRS has been shown to be responsive to systemic steroid therapy, 16 and many patients respond to conventional nasal steroid sprays. 17 However, due to the limited deposition of conventional nasal steroid medication at the level of the ostiomeatal complex (OMC) and deeper portions of the nasal cavity (Fig. 1A), 18 patients with more severe disease may not experience adequate symptom control. 17 Recent treatment strategies for these failures include surgery and alternative methods to deliver steroids to sinus tissue where polyps grow. 17 Furthermore, polyps typically originate in the upper regions of the nasal cavity, contributing to continued CRS symptoms, and exacerbations of CRS. Conventional medical treatments are frequently insufficient for patients with more severe disease due to this inability to deliver medication throughout the sinonasal cavity.
Although functional endoscopic sinus surgery (FESS) has been shown to improve general and disease-specific QoL in patients with CRS, 19,20 it does not specifically treat the underlying inflammation of CRS. For many patients surgery is not curative, with approximately 50% reporting a return of CRS symptoms and polyps within 18 months 21,22 and approximately 20% requiring revision surgery within 5 years. 19 The Exhalation Delivery System with Fluticasone (EDS-FLU, XHANCE R ) uses a different approach to intranasal drug delivery shown to achieve high/deep deposition for treatment of nasal polyps (Fig. 1B). 18,23 EDS-FLU has demonstrated a broad improvement in all 4 defining symptoms of CRS (congestion, rhinorrhea, facial pain/pressure, hyposmia) and total 22-item Sino-Nasal Outcome Test (SNOT-22) score. 24,25 However, its impact upon general HRQoL and health utility has not been described. Therefore, the objective of this study was to describe the impact of EDS-FLU on general HRQoL and health utility in patients with CRS with nasal polyps (CRSwNP). Secondary aims were to evaluate the impact of baseline characteristics on changes in HRQoL and health utility.
Study population
In this post hoc analysis, data from patients participating in 2 identically-designed, double-blind, randomized, controlled trials were pooled and analyzed (NAVIGATE I and NAVIGATE II). The countries from which patients were recruited was the only difference between the NAVIGATE I and II studies. Pooling the unit data across studies provided wider geographic representation and greater statistical precision. Figure 2 provides a brief description of the study designs, which are described in detail elsewhere. 24,25 Briefly, patients were randomized to a double-blind, 16-week, EDS-placebo controlled phase followed by an 8-week open-label extension without revealing prior treatment allocation. Eligible patients with CRSwNP were 18 years old and required to have moderate or severe nasal congestion/obstruction as reported by the patient (morning score ࣙ2 [0 = none, 1 = mild, 2 = moderate, 3 = severe] for at least 5 days during the 7-day period leading up to screening) and a total nasal polyp grade of 2 or greater (minimum polyp grade of 1 in each nasal cavity). Exclusion criteria included complete nasal cavity obstruction or inability to achieve bilateral nasal airflow, perforated septum, >5 prior sinonasal surgeries, or sinonasal surgery within 6 months of screening.
Outcome measures
Change from baseline in HRQoL (measured by the SF-36v2, standard 4-week recall) and health utility (measured by the SF-6D) was calculated at end of double-blind (EODB: week 16) and end of open label (EOOL: week 24).
HRQoL
The SF-36v2 is a validated, patient-reported outcomes instrument widely used to measure HRQoL in patients with a wide variety of health conditions, and enables comparisons to established general population HRQoL norms. 26
Health utility
The SF-6D is a health status measure representing a preference-based score, or "health utility," with a value between 0 (worst health state) and 1 (best health state). It is derived from a subset of 11 SF-36v2 questions and calculated from UK population nonparametric Bayesian preference weights. It is used to calculate quality-adjusted life years (QALYs) in economic evaluations. 27 U.S. population norms range between 0.76 and 0.80. 28 The MCID for the SF-6Dv2 is 0.03 points. 29
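The utility-to-QALY and MCID arithmetic used throughout the Results can be made explicit. A minimal sketch (the 0.03 MCID and the 0-1 utility scale are from the text; the helper names are ours):

```python
SF6D_MCID = 0.03  # minimal clinically important difference (from the text)

def mcid_multiple(change, mcid=SF6D_MCID):
    """Express a utility change as a multiple of the MCID."""
    return change / mcid

def qaly_gain(utility_change, years):
    """QALYs gained if a utility improvement is sustained for `years` years."""
    return utility_change * years

# The 0.058-point EDS-FLU improvement reported at week 16:
print(f"{mcid_multiple(0.058):.1f}x the MCID")       # 1.9x, matching the text
print(f"{qaly_gain(0.058, 1):.3f} QALYs over 1 year (illustrative)")
```

The same arithmetic underlies statements such as "1.9 times the MCID" and "2.6 times the MCID" in the Results.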
Analysis
All patients with CRSwNP receiving EDS-FLU were pooled in 1 treatment arm (All EDS-FLU) and compared to placebo (EDS-PBO). SF-36v2 (domain and component) and SF-6D scores were classified relative to U.S. population norms (ie, at, below, or above population norms) at baseline, EODB, and EOOL. Change from baseline in SF-36v2 and SF-6D scores at EODB and EOOL was also classified according to their clinical meaningfulness using MCID levels. Statistical significance of changes from baseline between EDS-FLU and EDS-PBO was assessed using an analysis of covariance (ANCOVA) model that included the corresponding SF-36v2 or SF-6D baseline score, treatment group (EDS-PBO, All EDS-FLU), and country. To assess the baseline characteristics associated with change from baseline in SF-36v2 and SF-6D scores at EODB, the following baseline characteristics were added to the ANCOVA model: age, sex, race; history of FESS, asthma, or allergic rhinitis; baseline polyp grade and SNOT-22 score; current FESS eligibility (using criteria established for the studies by a panel of rhinology experts 24,25 ), and study. No imputation of missing values was undertaken.
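The ANCOVA described above (change from baseline, adjusted for baseline score and treatment group) reduces to an ordinary linear model with a dummy-coded treatment indicator. A minimal sketch with invented toy data (the country term is omitted for brevity; all values are illustrative, not trial data):

```python
import numpy as np

# Toy data: baseline score, treatment (1 = EDS-FLU, 0 = EDS-PBO),
# and change from baseline, constructed so the true model is
# change = 1 + 0.1*baseline + 4*treat (all values invented).
baseline = np.array([40.0, 42.0, 38.0, 45.0, 41.0, 39.0])
treat    = np.array([1.0,  1.0,  1.0,  0.0,  0.0,  0.0])
change   = np.array([9.0,  9.2,  8.8,  5.5,  5.1,  4.9])

# Design matrix: intercept, baseline covariate, treatment indicator.
X = np.column_stack([np.ones_like(baseline), baseline, treat])
coef, *_ = np.linalg.lstsq(X, change, rcond=None)
intercept, b_baseline, b_treatment = coef
print(f"baseline-adjusted treatment effect: {b_treatment:.2f} points")  # 4.00
```

The coefficient on the treatment indicator is the baseline-adjusted between-group difference that the trial's ANCOVA reports; in a full analysis the country factor would be added as further dummy columns.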
Patient demographics
In the pooled population of 643 patients with CRSwNP, 161 were in the EDS-PBO arm and 482 in the All EDS-FLU arm (161 in 93 µg twice per day [BID], 160 in 186 µg BID, and 161 in 372 µg BID). Of these, 633 had data available at baseline and EODB, and 575 had data at baseline and EOOL. Table 1 shows the demographic profile of the study population.
SF-36v2 domain and component summary score changes

Baseline to EODB

At baseline, scores were comparable between all EDS-FLU and EDS-PBO, with mean scores within 1 MCID between groups (Table 2). The mean improvement from baseline to EODB for all SF-36v2 domain and component scores was significantly higher in the All EDS-FLU arm compared to EDS-PBO (all p < 0.01), with the EODB means in the All EDS-FLU arm at or above population norms. These improvements in the All EDS-FLU arm exceeded the MCID in all domains and components except PF and RE. Although all domains and components showed some improvement in the EDS-PBO arm, none exceeded the MCID. The magnitude of the change in each mean SF-36v2 domain and component score was between 1.8 times (PF) and 4 times (SF) greater for All EDS-FLU vs EDS-PBO-treated patients. The absolute increase in SF-36v2 from baseline to week 16 relative to the MCID is shown in Figure 3.
Baseline to EOOL
After 8 weeks of open-label treatment following the end of the double-blind phase, patients experienced an additional improvement in mean SF-36v2 domain and component scores (Table 2). Patients previously treated with EDS-PBO appeared to generally catch up to those who had been on drug for 24 weeks. All mean scores at EOOL were at or above U.S. population norms, and improvements from baseline were clinically meaningful, ranging from 1.1 times the MCID (RE) to 2.7 times the MCID (PCS).
SF-6D utility changes
Baseline to EODB

At baseline, all treatment arms had SF-6D utility scores well below population norms (EDS-PBO, 0.680; EDS-FLU, 0.686) (Table 3). The mean improvement from baseline to EODB for SF-6D was significantly higher in the All EDS-FLU arm compared to EDS-PBO (0.058 vs 0.023, respectively; p = 0.0001). The mean SF-6D utility at EODB in the EDS-FLU arm did not reach a value within the population norms; however, the magnitude of the change from baseline for the All EDS-FLU arm was clinically meaningful, 1.9 times the MCID, and was over 2.5 times greater than the change observed in the EDS-PBO arm.
Baseline to EOOL
By the end of the open-label treatment phase, both treatment groups experienced additional improvement in mean SF-6D utility from the EODB, achieving a clinically meaningful improvement from baseline (Table 3) to levels comparable to population norms. The change from baseline to EOOL in both treatment arms was more than 2.6 times the MCID for the SF-6D.
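The MCID-based classification used throughout these results reduces to simple arithmetic on change scores. A minimal sketch follows; the SF-6D MCID of 0.030 used in the example is an assumption for illustration only (it is consistent with the "1.9 times the MCID" figure reported above, but the studies' exact MCID values are not restated here):

```python
def mcid_multiple(change: float, mcid: float) -> float:
    """Express a change-from-baseline score as a multiple of the MCID."""
    return change / mcid

def is_clinically_meaningful(change: float, mcid: float) -> bool:
    """A change is clinically meaningful if it meets or exceeds the MCID."""
    return change >= mcid

# With an assumed SF-6D MCID of 0.030, the 0.058 improvement reported
# above works out to roughly 1.9 times the MCID.
print(round(mcid_multiple(0.058, 0.030), 1))
print(is_clinically_meaningful(0.023, 0.030))  # the smaller placebo-arm change
```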
ANCOVA model of baseline characteristics associated with change in SF-36v2 and SF-6D
Results of the multivariate regression analysis of baseline characteristics on changes in QoL and health utilities at week 16 are shown in Table 4. In addition to treatment with EDS-FLU, predictors of improvement in PCS were younger age, male gender, and white race. An additional predictor of improvement in MCS was no history of ESS. For health utilities at week 16, the additional predictor was male gender. No other baseline characteristics assessed were associated with change in PCS, MCS, or SF-6D.
Discussion
A large and growing body of evidence demonstrates that patients with CRSwNP have significantly decreased QoL. 6,8,10,11,14 This impairment leads patients to seek treatment and has dramatic economic impacts through impaired patient productivity. 7,8 Similar to prior reports, patients in this study reported baseline general HRQoL and health state utility scores significantly below population norms. The severity of this impairment has an impact comparable to other chronic illnesses such as asthma, coronary artery disease requiring percutaneous coronary intervention, and end-stage renal disease requiring hemodialysis. 10 In addition to objective improvements in endoscopic or computed tomography (CT) grading, it is also critical that any therapy for CRSwNP results in meaningful improvement in QoL. At the completion of the study, all patients receiving EDS-FLU reported significant improvement in HRQoL and mean health utility scores, with both measures returning to or exceeding population norms.
In addition to allowing comparison of the health impact of various diseases, health state utility data also allow comparisons of treatment efficacy across these different disease states. These data can be important for policymakers and those who must make decisions on how to apportion healthcare resources. The change in health utility with EDS-FLU is similar to or greater than that reported for many commonly utilized treatments for chronic disease, including anti-tumor necrosis factor (TNF) drugs for psoriasis, joint replacement surgery, and coronary angioplasty (Fig. 4). 10 With regard to CRS, the change in SF-6D scores was similar to that reported after ESS for CRS, a treatment widely considered to have significant patient-reported benefit. Although it may be tempting to suggest that EDS-FLU is equivalent to surgery based on these data, it is not appropriate to make direct comparisons, as surgery is typically reserved for those who have failed a medical therapy such as EDS-FLU. However, the similarity in absolute change does provide context for providers as they counsel patients who may be familiar with the extent of patient-reported benefit typically seen after surgery.
Much of the data regarding patient-reported treatment outcomes for CRS comes from uncontrolled observational cohorts. The weakness of these studies is that one cannot entirely discount the possibility of the placebo effect, wherein a patient's perception of their HRQoL is influenced by their belief in the treatment's efficacy. Furthermore, uncontrolled treatments can suffer from bias related to regression to the mean and/or fluctuations based on the natural history of disease. The randomized, double-blind, placebo-controlled design of this study reduces or eliminates these potential concerns. These improvements in patient-reported outcome measures are further corroborated by objective reduction in polyp grade with EDS-FLU. Thus, these data demonstrate both biologic responsiveness, as evidenced by improved polyp scores, and clinically relevant improvements in HRQoL. A secondary aim of this study was to determine whether the impact of EDS-FLU varied based on baseline characteristics including age, sex, race, polyp grade, SF-36/6D scores, history of surgery/allergy/asthma, or study-specific surgical eligibility at baseline. This question has direct clinical relevance as penetration of topical medication can be influenced by prior surgery, which typically improves access to sinus mucosa, as well as by other patient characteristics. Importantly, these baseline characteristics had minimal impact upon HRQoL or health utility improvements with EDS-FLU.
Strengths of this study include the randomized, double-blind design, and use of validated general HRQoL and health utility surveys. However, there are several caveats worth keeping in mind. Patients enrolled in these clinical trials had a relatively high burden of disease, as evidenced by high baseline SNOT-22 scores. Therefore, the magnitude of response might not be uniformly seen across patients who report less baseline QoL impairment. The study was also carried out within the confines of a tightly controlled clinical trial, wherein compliance is expected to be high and treatment breaks are prohibited, in contrast to real-world settings where patients can run into logistical hurdles acquiring and using their medications regularly. Last, the total study duration was just under 6 months. Although this was clearly adequate to assess the efficacy and safety of treating CRSwNP, it does not allow for conclusions of treatment efficacy beyond this time period. Considering CRSwNP is a chronic disease without a known cure, understanding long-term treatment impacts on general HRQoL and health utility would be important, particularly regarding future comparative effectiveness research.
Safety findings
The most commonly reported adverse events (AEs) in the active treatment groups were associated with local effects at the site of administration in the nasal cavity (epistaxis, nasal congestion, erythema, and nasal septum ulceration) or associated with the underlying disease (acute sinusitis or nasopharyngitis). The majority of these AEs were mild and are known to have resolved with continued use of study drug. 24,25

Conclusion

In this post hoc analysis of patients with CRSwNP treated with EDS-FLU, clinically meaningful and statistically significant improvements in HRQoL and health utility were observed after the 16-week double-blind period, and additional clinically meaningful and statistically significant improvements were observed during the open-label extension phase. Age, sex, and race were predictive of change in some but not all SF-36 domains/components; medical history and ESS eligibility/history were not predictive of change in SF-36/6D scores. The magnitude of change in the SF-6D with EDS-FLU was comparable to that of other medical and surgical interventions, such as the pharmacological treatment of Parkinson's disease, anti-TNF psoriasis treatments, joint replacement therapy, and coronary angioplasty.
Effect of crude glycerol on in-vitro ruminal fermentation kinetics
levels did not affect ammonia nitrogen content, metabolizable energy content, in-vitro digestibility of organic matter and neutral detergent fiber, or ruminal degradation parameters. However, this by-product of biodiesel production may be tested in-vivo as an alternative energy feedstuff in ruminant diets.
INTRODUCTION
As ruminant production systems have become increasingly intensified, economic assessment related to feeding has become critical, as feed accounts for 30 to 70% of total production costs depending on the activity and type of operation (RESTLE et al., 2007; RODRIGUES & RONDINA, 2013), reducing the profit margin for producers (GOES et al., 2008). Energy is the most expensive component of ruminant diets and its price has been influenced by the use of corn, soybean, and other grains for ethanol and biodiesel production. Also, the oil price has risen due to growth of the global population and income in countries whose economies grew at a faster rate. In this context, there has been increasing use of renewable energy sources due to the rise in oil prices caused by the possible exhaustion of fossil energy reserves, coupled with concerns about global climate change. Among the renewable energy sources, biodiesel production has received much attention. Brazil has a great potential for the production of biofuels. In addition to planting several oil seeds that can be used for biodiesel production, it has cutting-edge technology and industrial capacity to develop it (OLIVEIRA et al., 2008). Crude glycerol is the main by-product generated from biodiesel production: approximately 100 milliliters of crude glycerol are produced from each liter of biodiesel (THOMPSON & HE, 2006). There are several industrial applications of purified crude glycerol, such as in the cosmetics, pharmaceuticals and food industries. However, the degree of purity required for these applications demands complex and expensive processes, as crude glycerol contains impurities, such as water, oil, catalyzer, reagent residues, ethanol or methanol, propanediol and minerals (SOARES et al., 2010). On the other hand, crude glycerol could be used as an alternative energy source for ruminant feeding, due to its increasing availability and favorable price as a result of the expansion of the biofuel industry, as well as the increase in grain prices. Nevertheless, adequate inclusion levels, the impact and level of contaminants, and the nutritional value of crude glycerol need to be determined in order to prevent intoxication or a reduction in the efficiency of utilization of other dietary components. The objective of this study was to evaluate the in-vitro digestibility and ruminal fermentation kinetics of substituting corn for different levels of crude glycerol in diets consisting of alfalfa hay and ground corn.
MATERIALS AND METHODS
The experiment was carried out in the ruminant sector of the Laboratory of Animal Science, and the chemical analyses were performed at the Animal Nutrition Laboratory, both belonging to the Department of Animal Science of the School of Agronomy of the Federal University of Rio Grande do Sul. Two ruminally cannulated Texel sheep with 40 kg average body weight were used as inoculum donors. The animals were kept in a 120 m² paddock with a shelter during the entire experiment. Alfalfa hay (88.51% dry matter, 88.32% organic matter, 19.20% crude protein, 54.45% neutral detergent fiber, and 30.64% acid detergent fiber) was fed at 3% body weight twice daily (8:00 AM and 5:00 PM). Mineral salt (13.2% calcium, 8.0% phosphorus, 1.8% sulphur, 14.7% sodium, 0.13% manganese, 0.27% zinc, 0.0044% cobalt, 0.0088% iodine, 0.0018% selenium and 0.0800% fluorine) and water were supplied ad libitum. Before the experiment started, the animals were subjected to a 10-day adaptation period to the diet described above. The experimental protocol followed the guidelines of the Ethics Committee on the Use of Animals in Research (Number 18.442), in compliance with Law 11.794. The experimental treatments consisted of substituting corn for liquid crude glycerol (0, 4, 8 and 12%) on a dry matter basis. Alfalfa hay (Medicago sativa) was used as roughage and comprised 60% of the diet. Table 1 shows the nutritional composition of the ingredients of the experimental diets.
In-vitro true digestibility was determined according to Goering & Van Soest (1970); however, in addition to the 48 hours traditionally used, digestibility was also determined at different times (0, 4, 8, 16, 48, 72 and 96 hours), aiming at studying the digestion kinetics of the different treatments. One day before the incubation, 0.5 grams of a sample consisting of the experimental treatments was placed in each 120 ml fermentation flask, and the flasks were then kept in an oven at 39ºC. On the incubation day, two hours after the animals received the morning meal, ruminal fluid was collected and kept in a thermos bottle at 39ºC, filtered through four gauze layers, and kept in a water bath to maintain the temperature close to 39ºC. The ruminal fluid was mixed with artificial saliva (McDOUGALL, 1948), which was kept in a water bath at 39ºC and saturated with CO2, at a ratio of 1 part of ruminal fluid to 4 parts of artificial saliva. The mixture was homogenized in the water bath and saturated with CO2. Subsequently, 50 ml of the mixture containing the culture medium and the ruminal fluid were added to each of the fermentation flasks containing the different treatments, saturated with CO2 for about 30 seconds, rapidly closed with a rubber stopper with a Bunsen valve and placed in the incubator. The incubator was opened to agitate the flasks three times per day, at 08:00, 12:00 and 17:00 hours. This procedure was carried out as quickly as possible to avoid a drop in temperature. Eight flasks (4 treatments and 2 replicates) were removed from the incubator at 0, 4, 8, 16, 48, 72 and 96 hours of incubation and placed in iced water to interrupt microorganism activity. Flasks were then immediately centrifuged at 10,000 rpm for 10 min, the supernatant was removed, and 100 ml of neutral detergent solution were then added, following the method of Van Soest & Robertson (1985). Flasks were sealed with aluminum foil and placed in a forced-circulation oven at 90ºC for 16 hours, according to the technique proposed by
Chai & Udén (1998). The flask content was then filtered in a sintered glass crucible with coarse pore diameter. The crucibles with the residue were placed in an oven at 105ºC for 12 hours and weighed in order to obtain the moisture-free residue weight, and later placed in the oven at 450ºC for 5 hours. The in-vitro organic matter true digestibility (IVOMTD) was calculated as the difference between the incubated organic matter (OM) and the non-digested OM, considered as the residue remaining in the crucibles. In-vitro neutral detergent fiber digestibility (IVNDFD) was calculated as the difference between the incubated NDF and the non-digested NDF, considered as the residue remaining in the crucibles after filtration and drying in the oven at 105ºC. After the flasks were removed from the incubator and centrifuged, and before the neutral detergent solution was added, 20 ml samples of the supernatant were removed for analysis of ammonia nitrogen (NH3-N). The concentration of NH3-N was determined by distillation with magnesium oxide (PRATES, 2007). In order to study the kinetics of ruminal degradability, the IVOMTD results obtained at the different times were fitted to the model of McDonald (1981), as follows: Yt = a + b(1 − e^(−c(t − to))), where Yt = degradation after t hours; a = substrate solubilized immediately; b = insoluble, but potentially degradable material; a + b = potential degradability; c = degradation rate of b; to = lag time. The same model was used to obtain the NDF degradation parameters, but the "a" parameter was excluded due to the absence of NDF solubilized immediately. Effective degradability (ED) was calculated using the equation proposed by McDonald (1981): ED = a + [(b × c)/(c + k)] × e^(−(c + k) × to), where a, b, and c follow the previous definitions, and k = feed passage rate of 2 or 5%/h. The same model was used to calculate the ED of NDF, but the parameter "a" was excluded from the model. The experiment was replicated in three runs with two duplicates within runs (56 treatment flasks
corresponding to seven different times, 4 levels of glycerol substitution and 2 duplicates, plus 4 blank flasks in each run). Samples of the incubated feedstuffs (alfalfa hay and ground corn) were ground and analyzed for dry matter, organic matter and crude protein (PRATES, 2007). Neutral detergent fiber and acid detergent fiber were determined using a fiber analyzer (ANKOM's Fiber Analyzer Ankom®) as described by Prates (2007). The enzyme alpha-amylase was used to determine NDF. Crude energy, expressed in MJ/kg dry matter, was determined in duplicate using an isoperibol bomb calorimeter (IKA® calorimeter C 2000). The aforementioned analyses were performed in triplicate. The nutritional composition of crude glycerol and its contamination with methanol were evaluated by a specialized laboratory (CBO Análises Laboratoriais, Campinas, SP). Metabolizable energy (ME), expressed in MJ/kg dry matter, was estimated by applying the organic matter degradability (OMDeg) values obtained with the above-mentioned technique to the equation proposed by Menke & Steingass (1988), where ME = 1.15 + (0.16 × OMDeg). The effects of the dietary inclusion of increasing crude glycerol levels on in-vitro digestibility at 48 hours of incubation, degradation rate, effective degradability at passage rates of 2 or 5%/h, lag time and average NH3-N at all times were analyzed as a completely randomized design using PROC MIXED (STATISTICAL ANALYSIS SYSTEM, 2012), according to the following mathematical model: yij = µ + αi + βj + (αβ)ij + εij, where yij is the observation at run j given treatment i; µ is the overall mean; αi is the fixed effect of treatment i (0, 4, 8 and 12% glycerol); βj is the random effect of run j (1, 2, 3); and (αβ)ij is the interaction of treatment i by run j. Means were compared by PROC MIXED (STATISTICAL ANALYSIS SYSTEM, 2012) considering linear or quadratic effects of the glycerol inclusion level. Statistical significance was declared at P ≤ 0.05.
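The McDonald (1981) equations above translate directly into code. A minimal Python sketch follows; the parameter values in the example are arbitrary illustrations, not values from this study:

```python
import math

def degradation(t, a, b, c, t0):
    """Y(t) = a + b*(1 - e^(-c*(t - t0))): degradation after t hours (t >= t0)."""
    return a + b * (1.0 - math.exp(-c * (t - t0)))

def effective_degradability(a, b, c, k, t0):
    """ED = a + [b*c / (c + k)] * e^(-(c + k)*t0), for passage rate k (per hour)."""
    return a + (b * c / (c + k)) * math.exp(-(c + k) * t0)

# Arbitrary illustrative parameters: a = 20 %, b = 60 %, c = 0.05/h, lag = 4 h.
print(round(degradation(48, 20, 60, 0.05, 4), 1))
print(round(effective_degradability(20, 60, 0.05, 0.02, 4), 1))  # k = 2 %/h
print(round(effective_degradability(20, 60, 0.05, 0.05, 4), 1))  # k = 5 %/h
```

As the passage rate k rises, the feed spends less time in the rumen, so ED falls below the potential degradability a + b, which is the pattern the 2 vs 5%/h comparison in the Results exploits.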
RESULTS AND DISCUSSION
Substituting corn for crude glycerol had no effect on in-vitro organic matter true digestibility (IVOMTD) or on in-vitro neutral detergent fiber digestibility (IVNDFD) after 48 hours of incubation, with average values of 81.89% and 60.04%, respectively (Tables 2 and 3). These results are consistent with previous research showing that feeding levels of glycerol up to 20% of the total ration does not have any effect on nutrient digestibility or animal performance (DONKIN, 2008; KRUEGER et al., 2010). On the other hand, Wang et al. (2009) and Paggi et al. (2004) observed that glycerol decreased the in-vitro organic matter digestibility and in-vitro neutral detergent fiber digestibility of diets. These results suggest that glycerol can modulate digestion in a dose-dependent manner. In this study, it can be concluded that the IVOMTD and IVNDFD were not affected by crude glycerol due to its low level in the diet, allowing optimum rumen fermentation, as growth, adhesion and cellulolytic activity were inhibited when glycerol was included in cultures at a high concentration but not at a low concentration (ROGER et al., 1992; PAGGI et al., 2004).
No effect of the increasing levels of substitution of corn for crude glycerol was observed on mean OM and NDF lag time, expressed in hours, with values of 4.46 h for OM and 5.02 h for NDF (Tables 2 and 3).
In the present study, we worked with effective degradability (ED) values at solid fraction passage rates of 2 and 5%/h because, whereas low-quality forages present passage rates of approximately 2%/h, most concentrates mixed with forages have passage rates of about 5%/h. However, passage rate is closely related to intake, and therefore, it would be more correct to say that the results with passage rates corresponding to low and high intake were analyzed (SHAVER et al., 1986). In in-vitro media, the NH3-N concentration works as an indicator of protein degradability, because there is no nitrogen absorption or recycling as in the rumen in-vivo (DETMANN et al., 2011). Because most cellulolytic bacteria require ammonia for growth, low NH3-N concentrations may limit microbial activity and thereby reduce the rate and degree of cell wall digestion. The mean NH3-N value obtained in the present study, 15.69 mg/dl, was within the optimal ruminal NH3-N range (12 to 17 mg/dl; MAPATO et al., 2010; LUNSIN et al., 2012) for rumen ecology, fermentation and optimal microbial growth (ANANTASOOK & WANAPAT, 2012). These results are consistent with previous work showing that feeding glycerol substituting corn or barley grain in the diet does not have any effect on NH3-N concentration (ABO EL-NOR et al., 2010; AVILA et al., 2011). The energy value of glycerol is approximately equal to the energy contained in corn starch. However, the energy value of glycerol is variable due to differences between the levels studied, unknown interactions with other dietary components and the proportion of corn and starch in the diet (DONKIN, 2008). Mach et al.
(2009) estimated, for Holstein young bulls, a metabolizable energy content of crude glycerol (86% glycerol) of 14.52 MJ/kg DM, higher than the value observed in this study. However, the lack of differences in ME in the present study suggests that corn can be substituted for glycerol without adjustments for the energy content of the diet. The dietary substitution of corn for increasing crude glycerol levels did not affect ammonia nitrogen content, metabolizable energy content, in-vitro digestibility of organic matter and neutral detergent fiber, or ruminal degradation parameters. However, this by-product of biodiesel production may be tested in-vivo as an alternative energy feedstuff in ruminant diets.
FINANCIAL SUPPORT
The present study received financial support of CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico -Brazilian Scientific and Technological Development Council).
Table 2 .
Effect of crude glycerol inclusion level on the mean values of in-vitro organic
Table 3 .
Effect of crude glycerol inclusion level on the mean values of in-vitro neutral detergent fiber digestibility (IVNDFD, % -with 48 hours of incubation), and neutral detergent fiber degradation parameters (b, c, and lag time) and effective degradability of neutral detergent fiber at passage rates of 2 and 5%/h
Table 4 .
Effect of crude glycerol inclusion level on the mean ammonia nitrogen values (NH3-N, mg/dL) and metabolizable energy content (ME, MJ/kg DM)
Is self-weighing an effective tool for weight loss: a systematic literature review and meta-analysis
Background There is a need to identify effective behavioural strategies for weight loss. Self-weighing may be one such strategy. Purpose To examine the effectiveness of self-weighing for weight loss. Methods A systematic review and meta-analysis of randomised controlled trials that included self-weighing as an isolated intervention or as a component within an intervention. We used subgroups to analyse differences in frequency of weighing instruction (daily and weekly) and also whether including accountability affected weight loss. Results Only one study examined self-weighing as a single strategy and there was no evidence it was effective (-0.5 kg, 95 % CI -1.3 to 0.3). Four trials added self-weighing/self-regulation techniques to multi-component programmes and resulted in a significant difference of -1.7 kg (95 % CI -2.6 to -0.8). Fifteen trials comparing multi-component interventions including self-weighing with no intervention or minimal control resulted in a significant mean difference of -3.4 kg (95 % CI -4.2 to -2.6). There was no significant difference between interventions with weekly or daily weighing. In trials which included accountability there was significantly greater weight loss (p = 0.03). Conclusions There is a lack of evidence on whether advising self-weighing without other intervention components is effective. Adding self-weighing to a behavioural weight loss programme may improve weight loss. Behavioural weight loss programmes that include self-weighing are more effective than minimal interventions. Accountability may improve the effectiveness of interventions that include self-weighing. Electronic supplementary material The online version of this article (doi:10.1186/s12966-015-0267-4) contains supplementary material, which is available to authorized users.
Introduction
Finding simple, yet effective, ways in which individuals can be helped to lose weight and sustain weight loss could improve public health. One promising behaviour change technique is to prompt self-monitoring, which has been shown to be an effective technique for healthy eating, physical activity and alcohol reduction [1][2][3].
Programmes in which participants set a target for their weight, and monitor performance against that target, may prove to be an effective stand-alone or adjunct technique for weight loss programmes. Self-weighing is monitoring of the outcome (i.e. weight) rather than behaviour and thus may be used as a prompt to change dietary and physical activity behaviours. There have been two systematic reviews specifically examining self-weighing for weight management and both concluded that regular self-weighing appeared to be a good predictor of moderate weight loss, less weight regain or avoidance of initial weight gain in adults [4,5]. The first systematic review included a mix of study designs and it was not possible for the authors to do a meta-analysis or identify the key elements of the interventions that might have led to the apparent effectiveness of self-weighing. The second systematic review did not separate the effects of self-weighing for weight loss, prevention of weight regain after weight loss and prevention of weight gain, and there may be differential effects for these interventions. There was also no meta-analysis or estimate of the likely effect of self-weighing. Here we aim to assess self-weighing for weight loss and identify elements associated with greater effectiveness, focusing exclusively on studies with randomised controlled trial designs.
We examine whether self-weighing is effective for weight loss and also examine whether advising people to weigh themselves can be effective as a single intervention or only in the context of a behavioural support programme. If self-weighing can be effective on its own as a prompt to action, then perhaps advice to do so might form the basis of a public health campaign. This is important as many people try to lose weight by themselves rather than seeking advice from a clinician or attending a programme [6]. Having effective techniques people can use for self-regulation of weight is important, as many people would benefit from weight loss. However, it could also be recommended by clinicians to help patients manage their weight. If self-weighing can work but only with adjunctive interventions, then incorporating advice on self-weighing into behavioural programmes could enhance their effectiveness. Currently, widely used behavioural programmes in the UK advise their participants against weighing themselves. There are also widely expressed concerns that self-weighing may have adverse psychological consequences and we will assess this [7,8].
We address two theoretical issues. Firstly, for selfweighing to be effective it probably needs to become habitual and this might be easier to achieve if it occurs daily rather than, say, weekly [9]. Daily weighing may also be more effective than weekly because it provides more immediate feedback on how behaviour influences weight and immediate feedback leads to greater learning than feedback that is delayed [10]. Secondly, participants in behavioural weight loss programmes often report that it is the weekly weigh-in that is the most salient component of the programme that keeps them committed to their diet and physical activity plan. This is primarily because it provides accountability as it is done in front of the group leader [11]. We assess here whether accountability enhances the effectiveness of self-weighing. Accountability is defined as creating in a person the sense that someone other than themselves is observing and cares whether they weigh themselves or not.
Trial eligibility criteria
RCTs were included and participants were adults (aged ≥18 years). Trials were included if self-weighing was the main intervention strategy or a strategy within a multi-component intervention. Self-weighing was defined as participants being asked to weigh themselves rather than being weighed as part of a programme. The primary outcome of interest was weight change at programme end, defined by the last point of intervention contact. A further outcome was weight change at final follow-up, which in some cases was beyond the end of the intervention. Only trials reported in the English language were included. Trials were excluded if participants were pregnant. Although the initial search was part of a wider search of self-weighing for weight management, here we present only the weight loss trials. A trial was defined as a trial of a weight loss intervention if the aim of the intervention was to achieve weight loss and it enrolled only people of an unhealthy weight. These interventions commonly incorporated strategies for preventing weight regain but the main focus was still on achieving weight loss. Trials were excluded if they enrolled people after weight loss where the prime aim of the intervention was to prevent weight regain, or trials that enrolled people with the aim of preventing gradual weight gain.
Search strategy
A systematic search of the following databases was conducted: the Cochrane Central Register of Controlled Trials (CENTRAL; The Cochrane Library), CINAHL (EBSCO Host) (1982 to August 2014), MEDLINE (OVID SP) (1946 to August 2014), EMBASE (OVID SP) (1980 to August 2014), PsycINFO (OVID SP) (1806 to August 2014) and Web of Science. The ISRCTN and clinical trials registries were also searched. Search terms included: body weight, weight loss, weight maintenance, self-monitoring, self-care, self-weighing and weight monitoring. MeSH terms were used where applicable (online Additional file 1). We searched the reference lists of included trials and of three previous systematic reviews of self-weighing and self-monitoring [4,5,12].
Study selection
Two independent reviewers screened all search results (titles and abstracts) for possible inclusion and those selected by either or both authors were subject to full-text assessment. The reviewers were not blinded to trial authors, institution, or publication journal.
Data collection process
One author independently extracted data using forms based on the Cochrane systematic review data collection forms and a second author checked the forms for any discrepancies [13]. Five authors were contacted for further data and one response was received [14].
Data items
Information was extracted about the study design, inclusion criteria, participants, study setting, duration of intervention and follow-up, intervention and comparator group weight management strategies, number providing follow-up data, imputation method used for missing weight data and any adverse events. Information was also collected about the two theoretical components proposed to influence the effectiveness of self-weighing; frequency of self-weighing and accountability. We extracted behaviour change techniques based on the CALO-RE behaviour change taxonomy [15] and clustered the techniques to make them more manageable based on previous recommendations [16]. Weight change data for intervention and control groups with standard deviations (SD) were recorded.
Risk of bias in individual trials
The risk of bias of included trials was assessed in accordance with the Cochrane guidelines [13]. We collected information as detailed in the online Additional file 2. This was independently extracted and checked by another author. A high risk of bias for reporting outcome data was defined as a difference in follow-up rates between the groups of ≥10 %, or attrition of ≥30 %. Other measures of bias were based on the Cochrane guidelines [13].
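As an illustration only, the attrition rule stated above can be written as a small predicate. The function and parameter names are ours, not the review's:

```python
def high_risk_of_attrition_bias(followup_rate_a, followup_rate_b, attrition):
    """Flag a trial as high risk of bias for reporting outcome data.

    Mirrors the rule in the text: high risk if follow-up rates differ
    between arms by >= 10 percentage points, or if overall attrition
    is >= 30 %. All rates are fractions in [0, 1].
    """
    differential = abs(followup_rate_a - followup_rate_b) >= 0.10
    heavy_attrition = attrition >= 0.30
    return differential or heavy_attrition
```

For example, a trial retaining 85 % of one arm but only 70 % of the other would be flagged even with modest overall attrition.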
Summary measures
The outcomes of interest were mean weight change from baseline to programme end and weight change from baseline to last follow-up. Follow-up was defined as a point of data collection after the last intervention contact. For each study we extracted weight change for each group, reporting the mean, the SD of the change, and the number of participants contributing data. Where SDs were not presented, these were calculated from standard errors.
Studies varied in how they imputed weight change data for those missing follow-up weights, and synthesising such studies' raw data would create spurious differences. Therefore, we standardised the imputation method by calculating change in weight using baseline observation carried forward (BOCF) [17]. We used BOCF because it mitigates the bias that may arise because participants who do less well may be reluctant to be followed up. In one trial [18] weight change was not available but mean baseline and end weights were. The mean weight change and its SD were calculated using a standard formula, which imputes a correlation between the baseline and follow-up weights. This correlation was taken from two previously published trials [19,20]. One trial used a conservative method of imputation, similar to BOCF, by adding 0.5 kg to the last weight observed carried forward. This trial was included within the analysis as presented [21].
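The two imputation steps described above, BOCF for missing follow-up weights and the standard formula for the SD of a change score given an imputed baseline/follow-up correlation, can be sketched as follows. This is illustrative only; the helper names are ours:

```python
import math

def bocf_changes(baselines, finals):
    """Baseline observation carried forward: participants with a
    missing final weight (None) are assigned zero weight change."""
    return [f - b if f is not None else 0.0
            for b, f in zip(baselines, finals)]

def sd_of_change(sd_baseline, sd_final, r):
    """SD of within-person change from the SDs of baseline and final
    weights, given a correlation r imputed from other trials
    (the standard Cochrane-style formula)."""
    return math.sqrt(sd_baseline**2 + sd_final**2
                     - 2.0 * r * sd_baseline * sd_final)

def sd_from_se(se, n):
    """Recover a standard deviation from a reported standard error."""
    return se * math.sqrt(n)
```

Note that a correlation of 1 with equal SDs gives zero change SD, and a correlation of 0 reduces the formula to adding the variances.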
Synthesis of results
Meta-analyses were conducted using Review Manager 5.3. Random effects models were used, as the diversity of intervention components and control conditions meant that treatment effects were expected to differ. A pooled mean difference was calculated for weight change at programme end and at last follow-up separately, and I² was reported to quantify heterogeneity. The range of treatment effects from self-weighing was quantified by calculating 95 % prediction intervals, provided there were at least four comparisons in a meta-analysis [22]. If there were more than two intervention groups, the comparator group was divided by the number of intervention groups and each intervention group was analysed individually.
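The pooling itself was done in Review Manager; purely to illustrate the mechanics of a random-effects pooled mean difference (not the review's actual code), a minimal DerSimonian-Laird sketch might look like this:

```python
import math

def random_effects_md(effects, ses):
    """DerSimonian-Laird random-effects pooling of mean differences.

    effects: per-trial mean differences (e.g. weight change in kg)
    ses:     their standard errors
    Returns (pooled MD, approximate 95 % CI, tau^2, I^2 in %).
    """
    k = len(effects)
    w = [1.0 / se**2 for se in ses]                  # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)               # between-trial variance
    wr = [1.0 / (se**2 + tau2) for se in ses]        # random-effects weights
    pooled = sum(wi * y for wi, y in zip(wr, effects)) / sum(wr)
    se_pooled = math.sqrt(1.0 / sum(wr))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return pooled, ci, tau2, i2
```

With identical trial results tau² collapses to zero (no heterogeneity), while widely spread effects inflate tau² and I², widening the pooled interval.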
Analysis strategy
We examined whether advising self-weighing as a standalone intervention could be effective. We then examined self-weighing as an addition to a behavioural programme, in which the same behavioural programme without self-weighing instruction constituted the control group. Within this group, there were two subgroups: trials where self-weighing was the only addition to the behavioural programme and trials where several self-regulatory interventions including self-weighing were added to the behavioural programme. Finally, we examined the largest group of trials, in which self-weighing was part of a behavioural intervention that was compared with a minimal or no intervention control group. Within this largest group of trials we used subgroup analysis to examine whether the theoretical propositions we identified were supported by the evidence, i.e. daily versus less than daily weighing, and accountability. We also conducted a sensitivity analysis to investigate the association between the length of the programme and weight change in this largest group of trials. Table 1 summarises the participants, interventions, control group intervention and outcome measures that were included within this review.

Results

The search identified 1401 studies after duplicates were removed. Titles and abstracts were screened and 79 full text articles were assessed for eligibility. Of those, 24 trials were included in the descriptive synthesis (Fig. 1). The reasons for excluding studies are given in Fig. 1. Data from three trials could only be included descriptively because these studies did not provide standard deviations or data to derive them [23][24][25]. Table 2 provides a concise summary of the trials; detailed information can be found in the online Additional file 3 and the clustered behaviour change techniques in Additional file 4. All trials were RCTs, with the majority conducted in the USA (n = 15). The number of participants ranged from 23 to 415 (median 110).
Four trials included only women, and the percentage of women in the other trials ranged from 26 to 91 % (median 75 %). Eleven interventions were delivered predominantly via the internet or a mixture of internet and face-to-face sessions, and four were conducted in primary care [26][27][28]. Intervention length varied from a single session to fifteen months (median: 6 months). Follow-up periods ranged from the end of the intervention to two years. The three most frequently reported clusters of behaviour change techniques were goals and planning, feedback and monitoring, and shaping knowledge (online Additional file 4).
Risk of bias
Risk of bias for individual trials is documented in online Additional file 2. Several trials did not give sufficient information to assess risk of bias in detail. Eleven trials [21,[27][28][29][30][31][32][33][34][35] were at low risk of bias for sequence generation; for the other trials it was unclear, since they did not provide enough information. Seven trials [26-29, 32, 34, 36] had low risk of bias for allocation concealment and four trials were considered as high risk [21,33,34,37]; the remainder were unclear.
Two trials [27,31] did not blind staff to treatment condition at outcome assessment and six trials were classified as low risk of bias for outcome assessment [26,28,29,32,33,38]; the rest were unclear. All but one trial reported the percentage of participants who were followed up and, of these, 18 were classified as low and six as high risk of bias [23,24,27,30,41,40], because the rate of follow-up differed by more than 20 % between the trial arms or was reported as significantly different. There were only four trials in which selective reporting could be assessed, as a protocol was available. Three studies were at high risk of bias because they did not report all outcome data [23,25,30]. All trials except one used objective data to assess weight change. Fujimoto and colleagues [30] did not report that weight was measured objectively, but follow-ups took place at a hospital, so it is probable that weight was measured and not self-reported.

[Table 1 Eligibility criteria. Participants: adults, non-pregnant. Interventions: self-weighing as a standalone intervention or as a component of a weight loss intervention. Control/comparator group: no intervention/comparator, or a weight loss intervention that did not include self-weighing. Outcome: weight change from baseline to programme end and weight change from baseline to last follow-up point.]

[Figure 1 Study flow diagram. Records identified through database searching: n = 1960; additional records identified through other sources: n = 3 from previous systematic reviews, n = 3 from screening of reference lists, n = 1 from a trials registry, n = 1 known from previous research. Records after duplicates removed and screened: n = 1401; records excluded: n = 1322. Full-text articles assessed for eligibility: n = 79; full-text articles excluded: n = 55 (n = 9 both groups self-weighing, n = 1 unable to obtain full text, n = 13 study design, n = 11 no self-weighing intervention, n = 2 weight change not an outcome, n = 3 protocols only, n = 2 systematic reviews, n = 3 secondary analyses of included trials, n = 8 weight maintenance, n = 3 weight loss maintenance). 24 trials were included in the qualitative synthesis and 21 in the quantitative synthesis (meta-analysis).]
Synthesis of results
In one study, after the initial intervention, participants in both groups were offered an optional weight loss maintenance intervention; therefore only end of treatment weight was included in our analysis [40]. Two weight loss trials had a later follow-up and were thus analysed separately [30,31]. One involved a single treatment session and no further contact [31] and the other had end of treatment weights and follow-up weights two years from baseline [30]. One trial had more than three intervention groups and a comparator group that received a behavioural weight management programme. We included only the comparator group and the intervention group that received the same programme with additional self-monitoring [39]. Two trials were cluster randomised controlled trials [27,35]. The trial by Mehring and colleagues did not take account of clustering because some clusters included only one participant [27]. The trial by Batra and colleagues [35] also did not account for clustering. We undertook a sensitivity analysis by removing these two trials from the analysis; in the main outcome the estimate was reduced by only 0.1 kg and therefore we included them. [Table 2 footnote: all studies are RCTs unless stated to be cluster RCTs; intervention types: 1 = self-weighing in isolation, 2 = the same behavioural weight management programme given to both groups but with self-monitoring/self-weighing techniques added for the intervention group, 3 = self-weighing added to a behavioural weight management programme.] There was no evidence of subgroup differences in weight change at programme end between programmes that lasted 3 months or less, 6 months, and 12+ months, so we analysed all trials together. A summary of the meta-analyses can be found in Table 3 and Fig. 2 displays the results for the three main groups in a forest plot. One trial examined the impact of self-weighing without a behavioural programme to achieve weight loss. The mean effect of this intervention was -0.5 kg (95 % CI -1.3 to 0.3 kg) [28].
Four trials [30,38,39,41] compared a behavioural weight management programme plus self-weighing/self-regulation components with a behavioural weight management programme alone. One of these trials included self-regulatory strategies, i.e. how to use and interpret the scales like a blood glucose monitor, as well as receiving feedback about weight [41]. The other three trials gave participants the option to record their diet and physical activity [38,39]. The self-weighing intervention arms had a significantly greater mean weight loss of -1.7 kg (95 % CI -2.6 to -0.8 kg). The prediction intervals ranged from -7.5 to 4.1, indicating that in some interventions participants would lose a considerable amount of weight, but in other interventions participants may gain weight. All but one of these trials instructed participants to weigh themselves daily [39].
Theoretical concepts
Of the multicomponent interventions, seven trials asked participants to weigh themselves at least daily [18, 21, 35-37, 43, 44]. Eight trials asked participants to weigh less than daily [26, 27, 29, 32-34, 40, 42] and the mean difference was -3.3 kg (95 % CI -4.0 to -2.5 kg). There was no significant difference in the weight differences of the two subgroups (Table 3). Only three trials measured adherence to the self-weighing instruction, and all three asked participants to weigh daily, so we could not examine whether adherence differed between weekly and daily programmes. Adherence was 44 % [36], 50 % [21] and 95 % [37]. A fourth trial instructed participants to weigh daily but asked them to submit weekly logs and found participants did this 76.8 % (SD 23.7 %) of the time [35].
In 14 trials, the intervention group asked to weigh themselves knew that they were accountable to a therapist/researcher [18, 21, 26, 27, 29, 32-37, 40, 42, 43], while this was not the case in two trials [14,29]. The mean difference between intervention and control groups was -3.6 kg (95 % CI -4.6 to -2.7 kg) for trials with accountability and -2.3 kg (95 % CI -3.1 to -1.5 kg) for trials without accountability. This difference was significant (p = 0.03). The intervention in two trials had particularly strong accountability because participants knew that the therapist would contact them if they did not weigh themselves [21,37]. Although there was accountability in other trials, this was more closely related to weight lost than to the act of weighing. The difference between intervention and control groups was larger in the trials with high accountability than in the other trials. Three trials [30,31,34] followed up participants beyond the end of the intervention. The first trial followed up participants approximately 18 months after the last intervention contact and found a mean difference of -8.0 kg (95 % CI -12.5 to -3.5 kg) [30]. The second trial followed up participants six months after the last intervention contact and found a mean difference of -0.3 kg (95 % CI -11.4 to 3.7 kg) [31]. The third trial followed up participants nine months after the last intervention contact and found a mean difference of -7.5 kg (95 % CI -11.3 to -3.7 kg). The three trials that could not be included in the meta-analysis found no differences between groups at programme end [23][24][25].
Three trials measured adverse psychological outcomes by questionnaire. Steinberg and colleagues examined the change in body dissatisfaction, anorectic cognitions, depressive symptoms, dietary restraint, disinhibition, susceptibility to hunger and binge eating episodes between the groups and found no significant differences [45]. Gokee La Rose examined change in depressive symptoms, dietary restraint, body shape concerns, eating concerns, weight concerns and number of binge eating episodes by a trial arm x time interaction [41]. They reported that psychological symptoms improved in both groups and that there were no significant differences in change between groups. In the other trial participants were asked about their mood and how they felt about their body at three months follow-up and there was no difference between groups [28].
Three trials reported that there were no serious adverse events related to the intervention in either the self-weighing or control group [21,26,32]. One trial detected five serious adverse events possibly related to the intervention but not specifically to self-weighing: three fractures and one case of chronic subdural haematoma that occurred during an intervention session and led to surgery, and was therefore counted twice [33].
Two trials provided non-randomised exploratory analyses to examine further whether self-weighing led to adverse psychological outcomes. Steinberg and colleagues conducted a sensitivity analysis of those in the intervention group (instructed to weigh daily) who did not lose weight, and found no difference in body dissatisfaction or depressive symptoms compared to those who did lose weight [45]. This is important, as those who lost weight may have had more positive experiences of self-weighing than those who weighed regularly but did not lose weight. Gokee LaRose and colleagues found no relationship between change in frequency of self-weighing and disordered eating [41].
Discussion
One trial has tested the effectiveness of self-weighing as a single intervention compared with no intervention, and there was no evidence that it was effective. There was evidence that adding advice to self-weigh to a behavioural programme improves its effectiveness, but only four trials have assessed this, and the estimate of effect was imprecise and clouded by the use of other self-regulatory elements. There was strong evidence that behavioural weight loss programmes that incorporate self-weighing are more effective than minimal interventions. There was some evidence to suggest that adding accountability to a self-weighing programme improves its effectiveness.
The previous descriptive systematic review of self-weighing, using a pre-post analysis, found that self-weighing would result in a 5.4 to 8.1 kg weight loss [5]. Our findings are similar, but represent mean differences between intervention and control groups rather than total weight losses; they are therefore more conservative and represent the net effect of the self-weighing intervention itself. In the present review only experimental studies with a control group (imputing BOCF for missing weight data) were included, which may explain the lower weight change.
Michie and colleagues' reviews of effective behavioural techniques for healthy eating, physical activity and reduction of alcohol consumption concluded that self-monitoring was effective alone, but when combined with other techniques the effect size nearly doubled [1,2]. The other techniques were prompt intention formation, prompt specific goal setting, prompt review of behavioural goals and provide feedback of performance [1]. Unlike Michie and colleagues, however, we found that self-monitoring alone was ineffective for weight loss, although only one study investigated this and the estimate was imprecise enough to encompass effects that would be worthwhile. Additionally, self-weighing differs from the behaviours investigated by Michie and colleagues, as it monitors the outcome rather than the behaviour. To improve the effectiveness of self-weighing, additional intervention components may need to be included. This is because people need to reflect on their weight and then change their dietary and physical activity behaviours; it may be that not all people were prompted to reflect by weighing themselves, or were unable to use that reflection to create new strategies to manage their energy intake and expenditure. We did find that adding self-weighing/self-regulation components to a behavioural weight management programme resulted in greater weight loss than the same programme without self-monitoring. This suggests that adding self-weighing to a behavioural programme might enhance its effectiveness. Additionally, because self-weighing is less cumbersome than recording diet and physical activity, it might be a behaviour that can be continued and therefore help weight control in the longer term. The National Weight Control Registry has found that those who are successful at preventing weight regain after weight loss weigh themselves on a regular basis [46].
Self-weighing may be used as a strategy to obtain feedback that supports cognitive restraint of eating, and it may result in an improved ability to detect changes in weight and thus prompt action if needed.
Multicomponent programmes that included self-weighing, compared with a minimal/no intervention comparator group, resulted in a significant weight loss of -3.4 kg (95 % CI -4.2 to -2.6 kg). These findings are similar to a systematic review of behavioural weight management programmes that found a significant difference of -2.6 kg (95 % CI -2.8 to -2.4 kg) [47]. That review assessed weight 12-18 months after the start of the programme, which may explain its smaller mean difference compared with assessment at programme end only.
We hypothesised that daily self-weighing would more easily lead to the development of habits, however adherence to the self-weighing recommendation was not always reported. There was no evidence that daily weighing led to greater weight loss than weekly weighing and it appears that both may be effective when combined with multi-component interventions. Previous research has examined self-weighing frequency for both weight loss and weight maintenance using a prospective design without a comparison group [48]. Higher weighing frequency was associated with greater weight loss and less weight regain at 24 months follow-up. However, greater motivation to maintain weight or success in achieving weight maintenance may motivate people to weigh themselves frequently, which makes observational data difficult to interpret [49].
We hypothesised that accountability could enhance the effectiveness of self-weighing, as participants may feel the need to conform when others are observing what they are doing. Our findings suggested that interventions with accountability produced significantly greater weight loss than those without accountability. Gardner and colleagues conducted a systematic review examining a behaviour change technique similar to accountability, called audit and feedback [50]. They investigated whether audit and feedback changed healthcare professionals' behaviour and found a significant effect (OR = 1.43, 95 % CI 1.28 to 1.61). Audit and feedback is similar to accountability in that participants are aware of being observed; however, there is the additional technique of providing feedback, which was not necessarily considered within the analyses in our review.
There were no adverse effects of self-weighing reported in the trials; however, few trials assessed whether self-weighing led to psychological problems. Those that did found no evidence of negative consequences.
Strengths and limitations
This is the first systematic review to include only RCTs to examine the effect of self-weighing. The risk of bias was also reduced by imputing missing weight data using the same method for all studies. There was significant heterogeneity between trials, although this was expected, and random effects models and planned subgroup analyses were used to investigate it.
Interpreting the data was complicated because there were only a few trials in our subgroup analyses and there were differences between trials, such as in length of follow-up, comparator groups, and intervention components. However, we believe that a random effects meta-analysis is appropriate to investigate whether self-weighing programmes can be effective. Our aim is not to produce a definitive estimate of the effect of self-weighing on weight loss at a particular point in time; rather it is to find evidence that self-weighing as a technique is effective. As length of follow-up is the same in both intervention and control groups, any differences are due either to random variation or to differences in the effectiveness of self-weighing. Thus the estimates we produce should not be read as estimates of the effect of self-weighing, but as valid evidence that self-weighing can be effective in these contexts. Our analyses addressing theoretical constructs were analyses across trials and therefore observational, as no trial directly addressed these issues. We extracted the behaviour change techniques used in each intervention; however, these were generally poorly reported. It was impossible to separately code those techniques used to promote self-weighing and those that related to other components of the intervention.
Future research
There was insufficient evidence that self-weighing alone is effective, but it is an appealing self-help strategy. Future research should examine other behavioural techniques that can be effectively combined with self-weighing to build low cost public health interventions. Adding accountability may improve the effectiveness of self-weighing. Both daily and weekly weighing may be effective strategies for weight loss, but it is not clear whether one is more effective. A trial currently being conducted is comparing a behavioural weight management programme without self-weighing to the same intervention with either daily or weekly weighing [51].
Not all interventions will result in effective weight management for all people, and it may prove helpful to identify people who respond to self-weighing and those who do not. Pacanowski reported a subgroup analysis from a trial of self-weighing which found that people with an internal weight locus of control, and males, lost more weight [44]. However, showing that on average some groups respond better does not necessarily make these predictors useful screening tools to exclude people from self-weighing. Given that the advice is apparently simple, it may prove that the only screening required is to get people started and react to their responses. | 2016-05-04T20:20:58.661Z | 2015-08-21T00:00:00.000 | {
"year": 2015,
"sha1": "d1a9113c67412f39debc7efdf8f215659430f9aa",
"oa_license": "CCBY",
"oa_url": "https://ijbnpa.biomedcentral.com/track/pdf/10.1186/s12966-015-0267-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ac51803e609e720c5c87d6e2250da704591c4c5",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
117374177 | pes2o/s2orc | v3-fos-license | Iron line emission in X-ray afterglows
Recent observations of X-ray afterglows reveal the presence of a redshifted K-alpha iron line in emission in four bursts. In GRB 991216, the line was detected by the low energy grating of Chandra, which showed the line to be broad, with a full width of ~15,000 km/s. These observations indicate the presence of a > 1 solar mass of iron-rich material in the close vicinity of the burst, most likely a supernova remnant. The fact that such strong lines are observed less than a day after the trigger strongly limits the size of the remnant, which must be very compact. If the remnant had moved at the observed velocity since the supernova explosion, its age would be less than a month. In this case nickel and cobalt would not yet have decayed into iron. We show how to solve this paradox.
Introduction
There are now four bursts displaying evidence of an emission line feature during the X-ray afterglow: GRB 970508 (Piro et al., 1999); GRB 970828 (Yoshida et al., 1999); GRB 991216 (Piro et al., 2000, hereafter P2000); GRB 000214 (Antonelli et al., 2000). These lines have been observed 8-40 hours after the burst explosion, have a large equivalent width (0.5-2 keV) and a flux of about 10^{-13} erg cm^{-2} s^{-1}. Given these properties, each iron atom has to produce at least 2000 line photons in order not to exceed 0.1 M_sun of emitting iron. Fast recombination and ionization is therefore required. The line of GRB 991216 is resolved in the Chandra gratings, with a width of 0.05c (P2000). As discussed by Lazzati et al. (1999), the detection of the line implies the presence of a sizable fraction of a solar mass of iron concentrated in the vicinity of the GRB site. This is naturally accounted for in the SupraNova scenario (Vietri & Stella 1998).
General Constraints
The size problem

If the line is detected a time $t_{\rm obs}$ after the burst, the line emitting material must be located within a distance $R$ given by
$$R \le \frac{c\, t_{\rm obs}}{(1+z)(1-\cos\theta)} , \qquad (1)$$
where $\theta$ is the angle between the line emitting material and the line of sight at the GRB site. This limit implies a large scattering optical depth:
$$\tau_{\rm T} \simeq \frac{\sigma_{\rm T} M}{4\pi \mu m_{\rm p} R^2} , \qquad (2)$$
where $M$ is the mass of the line emitting material and $\mu$ is its mean atomic weight.
The kinematic problem

For a radial velocity of the remnant of $v = 10^9\, v_9$ cm s$^{-1}$, the time elapsed from the supernova (SN) is $t_{\rm SN} \simeq 12.5\, (t_{\rm obs}/10\,{\rm hr})/[(1+z)(1-\cos\theta)\, v_9]$ days. Such short times imply that most of the $^{56}$Co nuclei (and a fraction of the $^{56}$Ni nuclei) have not yet decayed to $^{56}$Fe (half-lives of 77.3 and 6.08 days, respectively; see Vietri et al. 2000).
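To make the scalings concrete, a small numerical sketch (the helper names are ours) evaluating the maximum radius of the emitter and the implied remnant age:

```python
import math

C = 2.9979e10          # speed of light, cm/s
DAY = 86400.0          # seconds per day

def max_radius_cm(t_obs_hr, z, theta_deg):
    """Largest radius of the line emitter compatible with a line seen
    t_obs after the burst: R = c t_obs / [(1+z)(1 - cos theta)]."""
    t_obs = t_obs_hr * 3600.0
    return C * t_obs / ((1.0 + z) * (1.0 - math.cos(math.radians(theta_deg))))

def remnant_age_days(t_obs_hr, z, theta_deg, v9):
    """Time since the supernova if the remnant expanded at
    v = 1e9 * v9 cm/s out to the radius above."""
    return max_radius_cm(t_obs_hr, z, theta_deg) / (v9 * 1e9) / DAY
```

With t_obs = 10 hr, z = 0, θ = 90° and v9 = 1 this reproduces the ~12.5 days quoted in the text.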
Line emission rate
We can derive the photon line luminosity by estimating the volume $V_{\rm em}$ effectively contributing to the line emission, and assuming a given iron mass. If the layer contributing to the emission has $\tau_{\rm T} \sim 1$ (to avoid Compton broadening), and in this layer $\tau_{\rm FeXXVI} \sim$ a few (to efficiently absorb the continuum), we have $V_{\rm em} = S/(\sigma_{\rm T} n_e)$, where $S$ is the emitting surface. The line emission rate from $V_{\rm em}$ is then given by Eq. 3, where the total volume is $V = S\,\Delta R$ (slab or shell geometry).

Mass

Eq. 3 shows that the total iron mass must be a sizable fraction of a solar mass in order to give rise to the observed line photon luminosity of $4 \times 10^{52}$ s$^{-1}$.
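A minimal sketch of the geometric part of this estimate, i.e. the thickness of the τ_T = 1 skin and the corresponding emitting volume (the fiducial density in the usage note is our own illustrative choice):

```python
SIGMA_T = 6.652e-25    # Thomson cross-section, cm^2

def layer_thickness_cm(n_e):
    """Thickness of a layer with Thomson optical depth tau_T = 1:
    delta_r = 1 / (sigma_T * n_e), for electron density n_e in cm^-3."""
    return 1.0 / (SIGMA_T * n_e)

def emitting_volume_cm3(surface_cm2, n_e):
    """V_em = S / (sigma_T * n_e): only the tau_T ~ 1 skin of the
    remnant contributes to the unbroadened line."""
    return surface_cm2 * layer_thickness_cm(n_e)
```

For example, at n_e = 10^10 cm^-3 the skin is about 1.5 × 10^14 cm thick, so only a thin shell of the remnant reflects the continuum into the line.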
Notice also that Eq. 3 establishes that the line emitting material must be a SNR: no other known astrophysical object contains this iron mass.
Models
The wide funnel

Consider a wide funnel excavated in a young plerionic remnant. This solves the size problem, since the funnel extends to large radii while keeping the time delay contained, because it is built close to the polar axis (see Fig. 2). Fixing the line photon rate (Eq. 3) yields $R = 6 \times 10^{15}$ cm, and thus an opening angle $\theta = 48°$ to fit the time delay. Assuming a cone geometry for simplicity, we can rewrite Eq. 3 accordingly; the resulting rate is a lower limit, since a parabolic funnel has a larger surface and we neglected the (likely) density stratification inside the remnant. Consider now the kinematic properties of the funnel. We expect radiation pressure to exert a force parallel to the surface, accelerating the layer with $\tau_{\rm T} = 1$. The absorbed fluence $E_{\rm ion}$ accelerates the funnel layer to $v_f = (2 E_{\rm ion}/M_{\rm layer})^{1/2} \sin\phi \simeq 10^4\, E_{\rm ion,50}^{1/2} \sin\phi$ km s$^{-1}$ if $R = 6 \times 10^{15}$ cm, where $\phi$ is the angle between the funnel's normal and the incoming photons. Thus, we expect ablation by radiation pressure to be able to propel the reflecting layer to velocities comparable to those seen in GRB 991216.
Back illuminated equatorial material
The model above assumes that a SN explosion preceded the GRB by some months. We now explore the possibility of a simultaneous GRB-SN explosion. Assume that the GRB ejects and accelerates a small amount of matter in a collimated cone, while a large amount of matter is instead ejected, at sub-relativistic speeds, along the progenitor's equator. Massive star progenitors are inevitably surrounded by dense material produced by strong winds, with mass loss rates $\dot m_w = 10^{-5}\, \dot m_{w,-5}$ M$_\odot$ yr$^{-1}$ and velocity $v_w = 10^7\, v_{w,7}$ cm s$^{-1}$. This wind scatters back a fraction of the photons produced by the burst and its afterglow (Thompson & Madau 2000). The scattered luminosity $L_{\rm scatt}$ is constant, since there is an equal number of electrons in each shell of constant width $\Delta R$ (for a density profile $\propto R^{-2}$). Scattered photons illuminate the expanding equatorial matter after a time $2R/c$, giving rise to the line emission. Since in this case the SN and GRB explosions are supposed to be simultaneous, the emitting iron must be produced directly by the SN and not through nickel decay. Iron ($^{54}$Fe) is directly synthesized for high neutronization of the material at the SN shock.
Conclusions
The recently detected features in the X-ray afterglows of GRBs impose strong constraints on models, the most severe being how to arrange a large amount of iron close to the GRB site while avoiding, at the same time, a large Thomson scattering opacity. This limit applies to all bursts showing a line feature. An additional limit comes from the Chandra observation of a broad line in GRB 991216. These observations require a very large amount of iron, known to be contained only in SNe. We have described two models. The "wide funnel" model is in better agreement with observations: its geometry solves the size problem, and the acceleration of the line emitting material by grazing incident photons solves the kinematic problem, allowing the remnant to be a few months old (enough for most cobalt to have decayed into iron). This model implies that the GRB progenitors are massive stars that exploded as SNe some months before the burst, inundating the surroundings of the burst with iron-rich material. This two-step process and the time delay between the two steps are exactly what is predicted in the SupraNova scenario of Vietri & Stella (1998).
"year": 2001,
"sha1": "382d763a96a74476aea8fe76089ffe255557bcb8",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "382d763a96a74476aea8fe76089ffe255557bcb8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Collective flow in ultrarelativistic $^3$He-Au collisions
The triangular flow in ultrarelativistic $^3$He-Au collisions at RHIC energies is enhanced due to the triangular arrangement of the nucleon configurations in $^3$He. We study the fireball eccentricities in the Glauber Monte Carlo model and find that since the configurations of the projectile $^3$He are elongated triangles, the created fireball has a large ellipticity and a smaller triangularity. The dependence of the triangularity on centrality is weak, so it cannot be extracted from the centrality dependence of the triangular flow $v_3$, as it is dominated by the centrality dependence of the hydrodynamic response. We propose to look at the centrality dependence of the ratio $v_n\{4\}/v_n\{2\}$, where the uncertainties from the hydrodynamic response cancel, and show that the basic signature of the geometry-driven collective flow is the rise of the ratio $v_3\{4\}/v_3\{2\}$ with the number of participant nucleons for centralities less than 10%.
Introduction
Collective behavior in relativistic collisions of small systems is an active field of experimental studies at RHIC and the LHC [1][2][3][4]. A large number of measurements are consistent with calculations in the hydrodynamic model [5][6][7][8][9][10]. Some observations can also be explained in the color glass condensate framework [11,12]. On-going studies are aimed at elucidating the nature of the observed flow correlations and at testing the limits of collectivity in small systems.
The azimuthal deformation of the fireball in small systems is due to fluctuations, as in p-Pb collisions, or to a combination of fluctuations and the intrinsic deformation of the small projectile, as in d-Au collisions. Collisions involving a projectile with a triangular deformation, ³He-Au [13] or ¹²C-Au [14], are particularly interesting, as they provide systems with a geometry-driven triangular flow. The difficulty in the study of the geometry-driven flow in small systems comes from the large contribution of shape fluctuations to the initial eccentricities of the fireball. While the large quadrupole deformation of the deuteron makes it possible to trigger on central events to get a sample of events with a large eccentricity [5], for ³He-Au collisions the centrality dependence of the triangularity is weaker and it is much more difficult to identify the triangular flow driven by the projectile geometry [9].
We study the eccentricities of the fireball formed in ³He-Au collisions as a function of centrality (here defined via the number of wounded nucleons [15]) to find signatures of the triangular flow caused by the geometrical deformation of the projectile. We find that the effect is clearly seen in the ratio of the cumulant moments of the eccentricities, ε_3{4}/ε_3{2}, thus suggesting to investigate the ratio v_n{4}/v_n{2} in experimental studies. We show that the basic signature of the geometry-driven triangular flow is the rise of this ratio with the number of wounded nucleons for centralities below 10%.
Method
The Fourier coefficients v_n of the azimuthal dependence of the spectra of particles emitted in relativistic nuclear collisions appear due to the collective expansion of an azimuthally deformed source (in the following we consider the flow coefficients integrated over the transverse momentum). The hydrodynamic evolution that generates the azimuthally asymmetric particle distribution gives an approximately linear response of the flow coefficients v_n, for n = 2, 3 [16][17][18], to the eccentricities of the initial source density ρ(x, y) in the transverse plane,

ε_n e^{inΦ_n} = −∫ ρ(x, y) r^n e^{inφ} dx dy / ∫ ρ(x, y) r^n dx dy,

with φ = arctan(y/x) and Φ_n denoting the angles of the principal axes. The flow fluctuates from event to event. The cumulant method allows one to extract even cumulant moments v_n{m} of the distribution of flow coefficients v_n [19]. With the linear hydrodynamic response one has the proportionality v_n = κ_n ε_n, hence the cumulant flow coefficients can be related to the corresponding moments of the eccentricity distributions in the initial state, namely

v_n{m} = κ_n ε_n{m},   (3)

where the response coefficient κ_n is independent of the rank m, but it does depend on dynamic features such as the multiplicity or the collision energy. Relation (3) allows one to discuss the cumulant moments of the eccentricity instead of the flow coefficients, i.e., the features of the initial state can be used to make certain predictions for the final flow coefficients. In particular, with the Glauber model of the initial state one finds a large ellipticity ε_2 for collisions with the deuteron projectile [5], and a substantial triangularity ε_3 for collisions with the ³He [13] or ¹²C [14] projectiles. The geometric deformation increases for collisions with a larger number of participants, corresponding to high-multiplicity events. On the other hand, the eccentricity due to fluctuations of independent sources decreases with the number of participants.
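For a discrete set of source positions the eccentricity integral above reduces to a sum over points. The following minimal sketch (not the implementation used in the paper) computes ε_n from participant coordinates, using the identity r^n e^{inφ} = (x + iy)^n:

```python
import numpy as np

def eccentricity(x, y, n):
    """Eccentricity eps_n of point sources, measured from the center of mass.

    Implements |sum r^n exp(i n phi)| / sum r^n with r, phi relative
    to the centroid, i.e. the discrete version of the moment definition.
    """
    z = (x - np.mean(x)) + 1j * (y - np.mean(y))
    return np.abs(np.sum(z**n)) / np.sum(np.abs(z)**n)

# Equilateral triangle: no elliptic deformation, maximal triangularity.
ang = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
x, y = np.cos(ang), np.sin(ang)
print(round(eccentricity(x, y, 2), 3))  # → 0.0
print(round(eccentricity(x, y, 3), 3))  # → 1.0
```

The equilateral-triangle check reproduces the limiting values quoted later in the text (ε_2 = 0, ε_3 = 1).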
We recall that for a finite number of wounded nucleons N_w the eccentricity distribution is not of the Bessel-Gaussian form [20,21]. In particular, ε_n{m} ≠ 0 for m ≥ 4, and ε_n{m} decreases as 1/N_w^{1−1/m}. Accordingly, for p-Pb collisions a nonzero value of the higher-order cumulants is expected from fluctuations alone [22][23][24], which does not by itself signal an intrinsic geometric deformation of the source.
For events with a large number of participants, the contribution from fluctuations to ε_n decreases, while the geometrical deformation is enhanced due to the preferential orientation of the deformed projectile hitting the large nucleus [14]. This brings the possibility to identify the geometric deformation in the initial state through the increase of v_2 or v_3 for the high-multiplicity events. Unfortunately, the argument cannot be applied directly, since the hydrodynamic response (3) depends on the centrality, i.e., κ_n increases with the multiplicity of the event. Therefore, just from the increase of v_n with centrality one cannot infer that the deformation of the fireball grows as well. This is especially difficult for ³He-Au collisions, where, as we shall see, the increase of ε_3 for central events is very mild.
One possibility, of course, is to run the involved hydrodynamic simulations, as in [9]. However, such modeling introduces the uncertainties of hydrodynamics, which for small systems may lead to substantial sensitivity and, in fact, difficulty in pinpointing the signatures of the geometric deformation of the initial state. We thus propose a different strategy to evidence the presence of an initial intrinsic deformation. By considering the ratio of cumulants of different order for a given flow coefficient, v_n{m}/v_n{2} with m ≥ 4, we gain two things. First, the hydrodynamic response with its unknown centrality dependence cancels out in the ratio, and the centrality dependence of the ratio of flow cumulants can be directly compared to the corresponding ratio of eccentricity cumulants. Second, the ratio ε_n{m}/ε_n{2} has a known behavior as a function of the number of participants in two important limits. For a fireball with deformations driven solely by fluctuations of independent sources, the ratio decreases monotonically with N_w, whereas if the fireball possesses an intrinsic geometric deformation, the ratio approaches 1 from below for (very) large N_w.
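The two limits can be checked in a toy Monte Carlo. Below, event-by-event eccentricity vectors are drawn either from a zero-mean Gaussian (pure fluctuations, for which ε{4} vanishes and the ratio is small) or from a Gaussian centered at a nonzero intrinsic deformation (ratio close to 1). The cumulant formulas ε{2}² = ⟨ε²⟩ and ε{4}⁴ = 2⟨ε²⟩² − ⟨ε⁴⟩ are the standard ones; the numerical values of the deformation and width are purely illustrative, not fitted to ³He-Au:

```python
import numpy as np

rng = np.random.default_rng(0)

def cumulant_ratio(ex, ey):
    """eps{4}/eps{2} from an ensemble of eccentricity vectors (ex, ey)."""
    eps2 = ex**2 + ey**2
    m2, m4 = eps2.mean(), (eps2**2).mean()
    c4 = 2.0 * m2**2 - m4          # eps{4}^4; vanishes for a Bessel-Gaussian
    return np.sign(c4) * abs(c4)**0.25 / np.sqrt(m2)

n = 200_000
# Pure fluctuations: zero-mean eccentricity vector, the ratio is small.
fluct = cumulant_ratio(rng.normal(0, 0.1, n), rng.normal(0, 0.1, n))
# Intrinsic deformation 0.3 plus the same fluctuations: ratio close to 1.
geom = cumulant_ratio(rng.normal(0.3, 0.1, n), rng.normal(0, 0.1, n))
print(f"fluctuations only: {fluct:.2f}, with intrinsic deformation: {geom:.2f}")
```

For the Bessel-Gaussian toy model used here the analytic values are 0 and a/√(a² + 2σ²) ≈ 0.90, respectively, illustrating why the ratio discriminates between the two limits.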
³He wave functions and eccentricities
To obtain a large triangular deformation in ³He-Au collisions, two conditions must be met. First, the plane of the ³He nucleus should be more or less aligned with the transverse plane (flat-on collision); second, the configuration of the ³He wave function should have a large triangularity, which happens for configurations close to an equilateral triangle. In practice, it is difficult to realize these conditions in a typical event, which makes the experimental observation of the geometrical triangularity challenging. Thus our first goal is to understand the structure of ³He in simple, geometric terms. Similarly to Ref. [9], we use the samplings of the ³He wave functions as provided, e.g., in the distribution of the Phobos Monte Carlo code [25], generated within the state-of-the-art Monte Carlo Green's function method [26]. We start our analysis with a closer look at these distributions. The centers of the three nucleons form a triangle. We consider eccentricities defined by these three points, evaluated in the plane determined by the triangle.
Configurations that follow from the ³He wave function, with the positions of the nucleons fluctuating, only very rarely realize configurations of maximum triangularity characteristic of the equilateral triangle, where ε_2 = 0 and ε_3 = 1. Indeed, we note widely distributed ε_2 and ε_3 in Figs. 1 and 2. The ε_2 distribution has a pronounced maximum at ε_2 ≈ 1. These configurations correspond to a very elongated isosceles triangle. Such configurations also yield ε_3 ≈ 0.6, a value corresponding to the maximum of the triangularity distribution in Fig. 2.
Of course, in the collision one does not control the orientation of the nucleus, which is random. In that case the relevant characteristics of the triangle are the eccentricities evaluated for the triangle projected on the transverse plane, which is then reflected in the fireball eccentricity. After projection of the ³He configurations with random orientations, the distribution of the ellipticity is even more peaked at ε_2 ≈ 1, and the distribution of the triangularity at ε_3 ≈ 0.6. Thus the configurations projected on the transverse plane are mostly elongated isosceles triangles. The ³He nuclei in such configurations, when hitting the large Au nucleus at a small impact parameter, generate a fireball with large ε_2 and moderate ε_3.
Eccentricities of the fireball
The fireball created in the collision of a small ³He nucleus with a large Au target largely inherits the shape of the smaller projectile, as discussed in the previous Section. Each of the three He nucleons wounds several nucleons in the Au target. The result is a concentration of participant nucleons around the positions of the three He nucleons in the transverse plane. Therefore, the shape of the fireball partly preserves the ellipticity and triangularity of the incoming ³He nucleus, but with considerable smearing. Our simulations are carried out with GLISSANDO [27,28] and for most of the results use the simplest wounded-nucleon model. We use a realistic wounding profile, which results in a larger smearing than for the black-disc case [29]. We investigate the RHIC energy of √s_NN = 200 GeV, where the inelastic NN cross section is equal to 42 mb. The source density is obtained by smearing the density at the Monte Carlo-generated positions of the wounded nucleons with a Gaussian of width 0.4 fm, which introduces a further reduction of azimuthal asymmetries. Additional fluctuations in entropy deposition at each source (considered in Sec. 5) smear the initial geometry even more. While triangularity increases due to fluctuations, at the same time the imprint of the geometric triangularity from the deformed ³He configuration is washed out to a large degree.
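The smearing-induced reduction of the triangularity can be seen in a small toy model (this is a sketch, not the GLISSANDO code): participants are clustered around three triangle vertices, and the Gaussian smearing of the density is represented by Monte Carlo sampling around each participant. All scales below (cluster width, vertex spacing) are illustrative assumptions; only the 0.4 fm smearing width is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def eps_n(pts, n):
    """Eccentricity eps_n of a point cloud, positions from the center of mass."""
    z = (pts[:, 0] - pts[:, 0].mean()) + 1j * (pts[:, 1] - pts[:, 1].mean())
    return np.abs(np.sum(z**n)) / np.sum(np.abs(z)**n)

# Participants clustered around three nucleon positions (fm, illustrative).
centers = np.array([[0.0, 1.0], [-0.87, -0.5], [0.87, -0.5]])
part = np.vstack([c + rng.normal(0, 0.3, (15, 2)) for c in centers])

# Represent the Gaussian smearing of width 0.4 fm by sampling the
# smeared density: 200 samples around each participant.
smeared = np.repeat(part, 200, axis=0) + rng.normal(0, 0.4, (len(part) * 200, 2))

print(f"eps_3 point-like: {eps_n(part, 3):.2f}, smeared: {eps_n(smeared, 3):.2f}")
```

The same configuration gives a visibly smaller ε_3 once the smearing is applied: an isotropic Gaussian leaves the complex moment ⟨(x+iy)³⟩ unchanged in expectation while inflating the ⟨r³⟩ normalization.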
The cumulant moments of ellipticity, ε_2{2} and ε_2{4}, are very large (Fig. 3). Moreover, for centralities below 10% they do not decrease with the increasing number of wounded nucleons, which signals a significant contribution of the intrinsic geometric deformation. This observation is consistent with the characteristics of ³He configurations (Sect. 3). The projectile ³He has a dominant quadrupole deformation, whereas its triangular deformation is significantly smaller.
A similar trend is visible in the dependence of the cumulant moments of ε_3 on N_w, i.e., they also do not decrease for the most central events. In fact, the behavior is non-monotonic, the effect being especially strong for ε_3{4}. The change in the trend reflects switching from fluctuation-driven triangularity at smaller N_w to domination of the intrinsic geometry deformation for the most central events. The modification of the trend in the centrality dependence of ε_3 is probably not strong enough to imply a noticeable signature in the centrality dependence of the triangular flow v_3. This is because the hydrodynamic response increases with the multiplicity of the event and its effect in small systems depends on details of the hydrodynamic evolution [7].
In Figs. 1 and 2 we also show the distributions of eccentricities of the fireball at N_w = 22 and N_w = 34, corresponding to centralities 10% and 0.1%, respectively. We notice that the fireball eccentricities are significantly smaller than the eccentricities of the ³He configurations. As discussed above, this is due to the random orientation of the incoming ³He nucleus, and to the smearing of the initial density with the Gaussians centered at the positions of the wounded nucleons. Triggering on the most central events increases the ellipticity of the fireball, but has a small effect on the average triangularity. Even triggering on ultra-central events (c < 0.1%) is not enough to provide a direct experimental signature of the geometric triangularity in the system, as the increase of the average triangular flow is not much stronger than the expected increase of v_3 from the stronger hydrodynamic response in the very central collisions.
A comparison of eccentricities for different collision systems, p-Au, d-Au, and ³He-Au, exhibits differences that signal the different origins of the fireball eccentricities (Fig. 4). The moment ε_2{2} is large for d-Au and ³He-Au collisions, reflecting the large elliptic deformation of the projectile nucleus. The triangularity in p-Au and d-Au collisions originates from fluctuations only, and thus decreases for central events, while the opposite behavior is observed for the geometry-driven triangularity in the ³He-Au case. Therefore, the comparison of the triangular flow v_3 in p-Au and d-Au reactions to the ³He-Au case might display the geometric triangularity in the latter. However, the argument may be difficult to apply in practice. First, the same number of participants corresponds to very different centralities in the three systems. Second, the hydrodynamic response depends not only on the multiplicity in the system, but also on its size, hence relations between eccentricities cannot be compared directly to the analogous relations between flow coefficients in different systems.
Predictions for measurable quantities
As stated in Sec. 2, a simple way to assess the properties of the collective flow without heavy hydrodynamic simulations is to consider the ratios of cumulant moments of the flow coefficients (6). Importantly, these ratios provide simple signatures distinguishing the appearance of intrinsic geometry from a purely fluctuation-driven flow asymmetry.
The ratio of the flow coefficients v_3{4}/v_3{2} is a non-monotonic function of N_w (Fig. 5). For small N_w it decreases, as expected from a fluctuation mechanism of source-shape deformation. At around N_w = 22 the trend is reversed, signaling the dominance of the geometric triangular deformation. The change in the trend is due to two reasons. First, by triggering on high-N_w events the selected configurations of the incoming ³He projectile become somewhat more deformed. Second, the fluctuations of v_3 decrease as the number of participant nucleons increases, and the ratio v_3{4}/v_3{2} increases towards 1. We note that a similar change in the trend of the dependence on N_w is visible for the elliptic case as well.

The balance between the geometry and fluctuations depends on the model of the initial fireball formation. One source of additional fluctuations comes from fluctuations in the entropy deposited by each participant. Here we use a model with a gamma distribution for the deposited entropy [7,28] superimposed over the distribution of participants. On the other hand, the admixture of binary collisions in the fireball makes the geometric deformation stronger, as the concentration of binary collisions (located at the mean position of the two colliding nucleons) follows the shape of the ³He projectile more closely than the distribution of the wounded nucleons from the Au nucleus. In Fig. 5 we show the results of the Glauber model with an admixture of binary collisions and fluctuations of the deposited entropy. In all variants of the calculation we find that the proposed signature of the geometric flow, namely the change in the trend of the ratio v_3{4}/v_3{2}, is still present.
The described change in the trend of the ratio v_3{4}/v_3{2} as a function of centrality is the main result of this Letter. The centralities where the minimum of the ratio occurs (∼10%) are easily accessible in experimental analysis. Experimentally, the ratio v_3{4}/v_3{2} for the centrality bin 5-10% should be compared to the one in ultra-central events, 0-0.1%, and in semi-peripheral events, e.g., 20-40%, to search for a non-monotonic dependence on centrality.
As the sensitivity of the results shown in Fig. 5 to the fireball-formation model is significant, precise measurements of this quantity may be used to discriminate between these models.
Finally, we remark that the results for ³H collisions with the configurations of Ref. [26] are indistinguishable from the ³He case presented in this work.
Conclusion
The ³He-Au collisions form a system where the intrinsic triangular deformation could lead to a large triangular flow. Hydrodynamic simulations predict a triangular flow of emitted particles [9], but the contributions to the flow from geometry and from fluctuations in the initial state cannot be easily separated, since the relatively small number of participant nucleons gives large fluctuations of the fireball shape. The small effect of the intrinsic triangular deformation of the ³He projectile on flow signatures can be traced to a number of reasons: 1) The most probable three-nucleon configurations in the ³He wave function have the shape of an elongated triangle with ε_3 ≈ 0.6 and ε_2 ≈ 1. 2) In the collisions, the fireball is determined not by the wave-function configuration, but by its projection on the transverse plane. The three-nucleon configurations projected on the transverse plane are even more dominated by configurations with ε_3 ≈ 0.6 and ε_2 ≈ 1. 3) As a result, the fireball created in a ³He-Au collision most often has a very large ellipticity ε_2 and a smaller triangularity ε_3. 4) Triggering on central events does not change the average triangularity ε_3 significantly.
In that situation, we propose to look at the ratio v_3{4}/v_3{2} as a function of centrality, or N_w. For ³He-Au collisions this ratio has a non-monotonic behavior, with a minimum at centrality ∼10%. For collisions with a small number of wounded nucleons, fluctuations dominate the triangularity and the ratio v_3{4}/v_3{2} decreases with increasing N_w, while for the most central events the geometric deformation dominates, and the ratio increases. The main reason to look at the centrality dependence of the ratio instead of the centrality dependence of v_3{2} or v_3{4} is that a large part of the centrality dependence of v_3{m} comes from the change of the hydrodynamic response coefficient with centrality. Moreover, the centrality dependence of the hydrodynamic response in small systems is not very well constrained in the models. The proposed signature v_n{4}/v_n{2} can be straightforwardly investigated in experiments with ³He-Au collisions at RHIC or, more generally, when looking for elliptic or triangular flow driven by the projectile geometry in d-Au, ⁹Be-Au, or ¹²C-Au collisions.
Neuropsychological Performance and Cardiac Autonomic Function in Blue- and White-Collar Workers: A Psychometric and Heart Rate Variability Evaluation
The 21st century has brought a growing and significant focus on performance and health within the workforce, with the aim of improving the health and performance of the blue- and white-collar workforce. The present research investigated heart rate variability (HRV) and psychological performance between blue- and white-collar workers to determine if differences were evident. A total of 101 workers (n = 48 white-collar, n = 53 blue-collar, aged 19–61 years) underwent a three-lead electrocardiogram to obtain HRV data during baseline (10 min) and active (working memory and attention) phases. The Cambridge Neuropsychological Test Automated Battery was used, specifically the spatial working memory, attention switching, rapid visual processing, and spatial span tasks. Differences in neurocognitive performance measures indicated that white-collar workers were better able to detect sequences and made fewer errors than blue-collar workers. The heart rate variability differences showed that white-collar workers exhibit lower levels of cardiac vagal control during these neuropsychological tasks. These initial findings provide some novel insights into the relationship between occupation and psychophysiological processes and further highlight the interactions between cardiac autonomic variables and neurocognitive performance in blue- and white-collar workers.
Introduction
In the 21st century, productivity is a crucial element in the strength and sustainability of a company's gross business performance [1]. Both white-collar and blue-collar professions often require executive function to perform the tasks required for their work. However, compared to white-collar workers, blue-collar employees have been shown to have a higher prevalence of a large range of health complications, particularly cardiovascular disease (CVD) [2]. The workplace can often play a major role in the onset of cardiovascular disease and the current European guidelines on the prevention of CVD recommend an assessment of long-term stress, which includes occupational psychological stressors [3].
Executive cognitive function refers to a family of mental processes that are recruited for concentration and attention [4]. These executive functions have also been implicated in other aspects of health, such as obesity [5], occupational prosperity [6], and public safety [7]. Increasing evidence suggests an association between CVD and reduced psychological performance; however, few recent studies have delved into the inner workings which relate working memory (WM) to CVD. Additionally, many previous studies linking memory and working memory deficits to cardiac failures have mostly focused on patients with severe CVD [8].
Heart rate variability (HRV) has been extensively used to reflect the sympathetic and parasympathetic activity of the autonomic nervous system [9]. Furthermore, previous research has linked HRV to CVD [10,11], as well as various psychological processes [12,13]. Hansen et al. [12] established a relationship between HRV and performance tasks that taxed executive function in normal subjects (n = 53 male, average age = 23 years) and found that the qualitative differences between task demands could be predicted by the subject's cardiac vagal tone. Other researchers have investigated this connection, but vagal tone relationships remain largely unexplored [14]. Furthermore, in order to predict cognitive performance by utilising cardiac vagal tone as an independent variable, Johnsen et al. [15] investigated attentional bias in 20 patients with anxiety in a dental setting using a modified Stroop-test [16] (14 male and 6 female, mean age = 36 years). Results showed that poor attentional performance was characterized by reduced HRV as compared to patients with higher HRV [15].
This indication of decreased HRV with increased working memory load and higher HRV in better performers supports the notion that, during working memory function, HRV may qualitatively predict cognitive differences among individuals [17]. This also implies that executive performance and autonomic functions, such as HRV, may be adaptively regulated by an interrelated neural network. Therefore, HRV may provide an index of an individual's ability to function effectively in a dynamic environment [17].
Limited research has linked working memory and attentional deficits to cardiac deficits [18], with most studies focused on end stage patients [19]. Therefore, more research needs to be centred around healthy individuals, which may implicate HRV as a pre-emptive biomarker for working memory and attentional performance.
This study aims to investigate neuropsychological processes (working memory and attention) in two major working populations, white-collar (n = 48) and blue-collar (n = 53) workers, further identifying the fundamental associations between working memory, attention, and HRV. Heart rate variability and executive function are evaluated in a sample of healthy blue and white-collar workers to better understand the cardiac autonomic vagal influence during neuropsychological performance and risk factors that may contribute to cardiovascular complications. It was hypothesized that (1) attentional states will increase cardiac vagal input, HF and RMSSD HRV in white-collar workers while indicating a decrease in blue-collar workers, and (2) spatial neuropsychological stress will exhibit a decrease in cardiac vagal input, HF and RMSSD HRV in white-collar workers and an increase in blue-collar workers.
Participant Recruitment
Healthy participants between the ages of 18-68 years (n = 101) were recruited from the community. Participants were required to abstain from caffeine and nicotine for 4 h and alcohol for 12 h prior to the commencement of testing. These factors are known to influence physiological measures, and restricting them enhances the reliability of the data. Additionally, participants with pre-study blood pressure (BP) measures greater than 160 mmHg (systolic) or 100 mmHg (diastolic) were excluded [20]. Testing was conducted between 8:30 am and 12:00 pm to minimize the effect of circadian rhythm fluctuation [21] on the data obtained. No volunteer was excluded from the current study and written informed consent was obtained prior to commencement of the study protocol. This study was approved by the Institutional Human Research Ethics Committee of the University of Technology Sydney (HREC: 2014000110 and HREC ETH19-3676).
Experimental Methodology
Participants were seated for 5 min prior to three BP recordings using an automated monitor (OMRON IA1B, Kyoto, Japan). Three blood pressure readings were obtained both before and after the study protocol with 2-min intervals between each measurement [22]. Participants were then asked to complete the General Health Questionnaire (GHQ60) [23], which obtained detailed health information. Participants then underwent a baseline electrocardiogram (ECG) for 10 min followed by an ECG recording during the neurocognitive tasks performed. The ECG was obtained using a FlexComp Infiniti encoder (Thought Technology Ltd., Montreal, QC, Canada) and an ECG-Flex/Pro amplifier sensor (Thought Technology Ltd., Canada) connected to three electrode leads. BioGraph Infiniti software (T7900) (Thought Technology Ltd., Canada) was used to record and display the ECG wave. Prior to placement of the electrodes, the skin was cleaned using Liv-Wipe (Livingstone International Pty Ltd., Sydney, Australia) 70% alcohol swabs. Disposable electrodes were used in all cases (Ag/AgCl ECG electrodes (Red Dot TM ) 2239, Tukwila, WA, USA).
The electrodes were placed in an inverted triangle to allow for positive deflections corresponding to the P, Q, R, S, and T waves [24]. The negative electrode was placed beneath the right clavicle, the ground electrode was placed beneath the left clavicle, and the positive electrode was placed 2 centimeters beneath the sternum and over the xyphoid process. Additionally, the ECG was sampled at 2048 samples per second for high precision detection of successive heart beats [25].
Neuropsychological Tasks
The tasks performed utilized the Cambridge Neuropsychological Test Automated Battery (CANTAB) and tests included were the spatial working memory (SWM) task, attention switching task (AST), rapid visual processing task (RVP), and the spatial span (SSP) task [26]. The SWM task requires the retention and manipulation of visuospatial information. Outcome measures include errors, strategy, and latency. The AST is a test of a participant's ability to shift attention between tasks and to ignore irrelevant information during interfering and distracting events. This test measures top-down cognitive control and provides measures of latency and errors. The RVP task is a measure of sustained attention assessing latency, probability, and sensitivity to pattern recognition. Finally, the SSP task is an assessment of working memory capacity and provides outcome measures of span length, errors, attempts, and latency.
Heart Rate Variability
Prior to statistical analysis, ECG data was pre-processed to obtain time and frequency parameters of heart rate variability (HRV). The ECG data was imported into Kubios HRV software (Version 3.1, University of Kuopio, Kuopio, Finland). The R-waves were automatically detected by applying the built-in QRS detection algorithm [27]. Frequency bands obtained were low frequency (LF) (0.04-0.15 Hz), high frequency (HF) (0.15-0.4 Hz), total power HRV (TP), and the ratio of LF to HF (LF/HF). The inbuilt process within Kubios and the smoothness priors method was used to correct for artefacts and ectopic beats in the raw ECG data [27,28]. It should also be noted that the data were log-transformed prior to analysis, where relevant.
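For readers who want to reproduce this kind of pre-processing outside Kubios, the time-domain RMSSD and the LF/HF band powers can be computed from an RR-interval series with standard tools. The sketch below uses synthetic data; the band limits match those above, while the 4 Hz interpolation rate and the Welch settings are common choices and are not claimed to be those of Kubios:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

def band_powers(rr_ms, fs=4.0):
    """LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) power from RR intervals (ms),
    via cubic interpolation to an even time grid and a Welch periodogram."""
    t = np.cumsum(rr_ms) / 1000.0                 # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr = interp1d(t, rr_ms, kind="cubic")(grid)
    f, pxx = welch(rr - rr.mean(), fs=fs, nperseg=256)
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df
    return lf, hf

# Synthetic ~5-min series: 1000 ms mean RR with a 0.25 Hz respiratory modulation.
rr_series = 1000 + 50 * np.sin(2 * np.pi * 0.25 * np.arange(300))
lf, hf = band_powers(rr_series)
print(round(rmssd(rr_series), 1))   # → 50.0
print(hf > lf)                      # the modulation falls in the HF band
```

A respiratory-rate oscillation lands in the HF band, which is why HF and RMSSD are commonly read as indices of cardiac vagal control.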
Statistical Analysis
Statistical analysis was performed using SPSS Version 22.0 (IBM Corp., 2013, New York, NY, USA) IBM Corp [29] with statistical significance reported at p < 0.05. Independent sample t-tests were applied to establish significant differences in HRV parameters and neurocognitive performance measures between the blue and white-collar workers.
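The group comparisons reduce to independent-sample t-tests, which can be sketched with simulated data (the values below are illustrative draws, not the study data; the group sizes are those of this study and the means/SDs are borrowed from the log HF result reported later):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated log HF values for the two groups (illustrative, not real data).
white_collar = rng.normal(4.81, 0.58, 48)
blue_collar = rng.normal(5.07, 0.67, 53)

t_stat, p_val = stats.ttest_ind(white_collar, blue_collar)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```

With effect sizes of this magnitude a single simulated draw may or may not cross p < 0.05; when group variances differ, Welch's variant (`equal_var=False`) is often preferred over the pooled-variance test.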
Demographic Data of Blue and White-Collar Workers
The demographic data of the blue- and white-collar groups are shown below in Table 1. Compared to the blue-collar workers, the white-collar workers had spent significantly more time in education (3.4 ± 1.2 years and 4.33 ± 1.2 years, respectively) (p < 0.001). Key: GHQ60 = General Health Questionnaire (Goldberg, 1972); n = Sample size; SD = Standard deviation; % = Percentage.
Neuropsychological Performance of Blue and White-Collar Workers
Independent sample t-tests of neuropsychological performance showed significant differences in the tasks (SWM, AST, RVP, SSP) between white-collar (n = 48) and blue-collar (n = 53) workers. The significant findings are presented in Table 2. Key: AST = Attention switching task; df = Degrees of freedom; F = F statistic; n = Sample size; p = Level of statistical significance (p < 0.05); RVP = Rapid visual processing; SD = Standard deviation; SSP = Spatial span.
Attention Switching Task: During the AST, the white-collar group made fewer errors when incongruent cues were given than the blue-collar group (8 ± 3.14 and 9.4 ± 3.62, respectively) (p = 0.04). When the "side" cue was given, the white-collar group made more errors than the blue-collar group (4.33 ± 2.56 and 3.19 ± 2.16, respectively) (p = 0.02). Moreover, the white-collar group made significantly more correct responses than the blue-collar group overall (144.20 ± 8.53 and 139.43 ± 9.80, respectively) (p = 0.01).
Rapid Visual Processing Task: Throughout the RVP task, the ability to detect signals was significantly higher in the white-collar worker group as compared to the blue-collar worker group (0.90 ± 0.08 and 0.85 ± 0.08, respectively) (p = 0.002). Spatial Span Task: Finally, the SSP task saw the white-collar worker group make more total errors than the blue-collar worker group (13.48 ± 5.65 and 9.74 ± 6.55, respectively) (p = 0.003).
HRV in Blue and White-Collar Workers
Independent sample t-tests were used to compare HRV parameters between the white and blue-collar worker groups. The significant findings are summarised in Table 3.
Spatial Span Task: During the Spatial Span (SSP) task, it was found that log HF was significantly lower in the white-collar worker group as compared to the blue-collar worker group (4.81 ± 0.58 and 5.07 ± 0.67, respectively) (p = 0.03).
Discussion
The present study aimed to investigate the differences in HRV and psychological performance between a sample of blue and white-collar workers. The analysis indicated higher vagal cardiac mediation in blue-collar workers, as indexed by RMSSD and HF HRV, in response to spatial working memory and attention based cognitive tasks. Additionally, these results show that blue-collar workers performed significantly better on spatial tasks while white-collar workers performed better on attentional process tasks.
The current literature comparing these two sub groups is very limited; however, early work by Myrtek [29] investigating the level of stress and strain and its relationship to heart rate, physical activity, emotional strain, and mental strain found no differences in variability of heart rate (HR) between the two groups. The authors did, however, find that white-collar workers were more stressed, subjectively [29]. Additionally, it is thought that blue-collar workers are subject to an increased physical workload while white-collar workers are thought to have a high mental workload, and although interviews and questionnaires supported this idea, the physiological measurements did not [29].
Early work in the literature highlights conflicting evidence regarding the predisposition of blue and white-collar workers to CVD with some studies suggesting blue-collar workers were more at risk [30] while others suggested white-collar workers were more at risk [2]. Moreover, there is very little research investigating HRV parameters, psychological performance measures, and their associations with CVD in these two cohorts, and the present research aimed to provide more information and data regarding the relationship between different occupational and physiological risk measures and CVD.
When comparing the two sample cohorts, the only statistically significant difference in demographics was the years spent in education, where the blue-collar workers had spent less time in education than the white collar workers. Interestingly, Prihartono et al. [2] found that the increased level of education of white-collar workers significantly increased the prevalence of CVD. Moreover, prevalence of CVD by diagnosis was higher in the white-collar worker population, while the prevalence by symptoms was higher among the blue-collar worker group [2]. Even though the blue-collar workers are inherently more physically active in their day to day work, their socio-economic status and lifestyle choices may have a significant impact, particularly in the available access to health care. Lower education and lower salaries are more likely to predispose to unhealthy lifestyle choices [31]. Moreover, a higher BMI increased the prevalence of CVD in both blue and white-collar workers [2].
Spatial Working Memory
During the SWM task, the LF, LF/HF, and TP parameters of HRV were all greater in the white-collar worker group compared to the blue-collar worker group. LF HRV was traditionally thought to reflect sympathetic activity, as previously mentioned, but recent research indicates it is influenced by both the sympathetic and parasympathetic branches of the ANS [32]. This increase in LF HRV activity may point to increased sympathetic activity and dominance during these tasks for the white-collar worker group. This finding has been previously associated with an increased risk of CVD [33]. This has also been contrasted by other literature reporting that low LF HRV was associated with certain risk factors which predispose to CVD, for example, hypertension [34]. Moreover, a review by Hillebrand et al. [35] highlighted that low HRV indices, including LF HRV, indicated a higher risk of CVD in populations without any prior CVD. Interestingly, much of the prior research indicates that vagal withdrawal, and therefore an increased sympathetic response, is responsible for the cardiovascular disease risks [36]. However, Hamaad et al. [33] provide a differing perspective, suggesting that it is sympathetic activation which may be associated with cardiac events, rather than vagal withdrawal. The authors of [33] investigated the associations between indices of HRV (time and frequency) and inflammatory biomarkers in patients with acute coronary syndrome (n = 100, male = 77, average age = 63 ± 12 years) and healthy controls (n = 49, male = 32, average age = 60 ± 10 years). Though the correlations were modest, the authors reported an inverse relationship between LF HRV and inflammatory biomarkers and, therefore, implicate sympathetic tone in CVD [33]. This idea is further supported by several studies which further investigate the inflammatory biomarkers and associated HRV changes [37].
Rapid Visual Processing
The RVP task showed that the blue-collar workers had higher HRV parameters across the board, particularly LF, HF, TP and SDNN. This is an interesting finding as the RVP task is one of sustained attention, and it was therefore expected that the white-collar worker group would exhibit higher levels of cardiac vagal control, as indexed by HF HRV or RMSSD. The findings of the present research may reflect high levels of stress within the white-collar working population, as shown by lower HF HRV. The previous research is inconclusive as to which occupational group is more stressed, with the literature suggesting a multitude of contributing variables. Dedele et al. [38] indicate that blue-collar workers are 1.5 times more likely to perceive higher levels of stress in general. However, the white-collar workers had a four times increased likelihood of perceiving greater stress when they had been sedentary for more than 3 h per day [38]. Contrastingly, Nydegger [39] found no significant differences in stress levels between blue and white-collar workers, nor any differences between genders. Given that these studies only assessed perceived stress by way of surveys, the results may be too subjective, with numerous factors potentially influencing the responses. The use of a more objective measure would have been of great benefit to support their findings. Notwithstanding, they do provide grounds to indicate intricate interrelationships between workplace stress, HRV, and CVD. Moreover, recommendations made to white-collar workers include making improvements in sedentary lifestyle and increasing physical activity during work hours, while blue-collar workers must avoid unhealthy lifestyle habits [39,40]. These practices will ultimately reduce stress, improve cardiac autonomic activity and parasympathetic input, and therefore may reduce the risk of a cardiovascular event.
Spatial Span
The final difference in HRV between the blue and white-collar worker group found in this study was related to the SSP task, whereby the blue-collar worker group showed higher vagal mediation than the white-collar worker group. This is indicative of better control and better performance. Moreover, it may indicate a more relaxed scenario, as the SSP task is designed to evaluate working memory capacity in the 3D space around them, an environment familiar to blue-collar workers.
Comparison of Neuropsychological Performance between Blue and White-Collar Workers
Occupation has been considered an important predictor of cognitive ability and decline over time [41]. Furthermore, the executive function requirements in the workplace, as well as the complexities of the environment, appear to correlate with cognitive decline [42]. Prior research has tended to focus on age-related decline in cognitive processing and few studies have focused on the occupational effects. However, given that people spend a substantial portion of life at work, the workplace environment may have a significant effect [43].
Attention Switching
The AST showed that the white-collar workers made fewer errors when the cues were changing and more errors when the "side" cue was given. However, in the task as a whole, the white-collar workers gave significantly more correct responses than the blue-collar workers. In a longitudinal study spanning 10 years, Kim et al. [44] assessed executive function in blue (n = 1216, 61% Female, aged 70.7 ± 4.64 years) and white-collar workers (n = 242, 22% Female, aged 69.98 ± 4.18 years). The authors gathered data using the Mini-Mental State Examination (MMSE) [45] and other potential covariates, including sociodemographic factors, health related factors and occupational factors [44]. Primary findings between the longest-held lifetime occupation and executive function decline showed that males had no significant risks, whilst females showed a 2.5-fold increased risk of cognitive impairment amongst blue-collar workers compared to white-collar workers [44].
Rapid Visual Processing/Spatial Span
The white-collar workers showed significantly better performance during the RVP task, where their ability to detect sequences was much better. However, the white-collar workers made more errors during the SSP task. The relationship between mental workload and cardiovascular parameters is further illustrated by Capuana et al. [46]. These authors assessed 22 young adults (17 women, 18-27 years, average age = 20.5 years (SD not specified)) and 18 older adults (11 women, 65-83 years, average age = 72.3 (SD not specified)) and indicated relationships between cardiac measures and performance, as well as an association between increased cardiac workload and more errors in the older adults but not the younger adults [46]. This further supports and adds to the age-related literature regarding neurocognitive performance with the added element of cardiac risk measures. The results of previous literature and the present findings suggest that the effects of occupation on executive functions are multifaceted [41]. Prior research has indicated that white-collar workers are more cognitively inclined in the later years [41]. Moreover, manual labor workers (including machine operators, assembly workers and plant operators) have been shown to have a significantly higher chance of reduced executive function as compared to non-manual laborers (including business executives, administrators, and managers) [47]. As a whole, the white-collar workers seem to have performed better on the executive function tasks. Notwithstanding the varying performance on different tasks, an in depth analysis must be conducted to supplement broader examinations in order to identify specific relationships between cardiac variables and neurocognitive performance measures. Several factors may be considered when assessing the performance and risks between the blue and white-collar worker populations. 
Most people spend a large portion of their life at work, and so the inherent risks related to employment are something that must be further researched. These risks may be a result of the complexity in given occupations, which was first touched upon by Schooler [48] and further by Schooler et al. [49]. These authors suggested that complex environments at work, or during leisure time, allow for continued reinforcement of executive function. This greater intellectual stimulation increases neural growth and synaptic density, which protects against cognitive decline [50]. Therefore, lower intellectual demands for blue-collar workers may predispose them to executive function impairments. This is just one facet by which the literature suggests the enhanced ability of white-collar workers. Another theory indicates that, since blue-collar work is associated with a lower income, this translates to poor housing, nutrition, environment, and poor lifestyle habits and practices, which may be linked to cognitive decline [51,52]. Interestingly, white-collar workers are more educated in the traditional sense, but this does not necessarily reflect in overall intelligence. Given that white-collar workers are known to use cognitive abilities more often than blue-collar workers, it could be assumed that they have superior cognitive abilities. This may not be the case however, as a study showed that there was no evidence that regular use of computerized brain trainers improves general cognitive functioning [53].
Limitations and Future Directions
The present findings suggest that changes in HRV are influenced by the various tasks, spanning all professions. Increased sample numbers in each profession would allow for stratification and observations within the same job type. For example, one white-collar worker may perform more administrative tasks while another may perform more data analytics, and these differences in neuropsychological load may further influence HRV. Moreover, this cross-sectional design provides a snapshot in time of the measures. Therefore, a longitudinal study would allow for a more in-depth analysis of how a particular profession may influence these physiological variables over the course of one's life. It is also acknowledged that, even though only 18% of the blue-collar worker group was made up of female workers, this is an accurate reflection of this population sample [54]. Though the present study identified numerous findings, it may only be predictive in nature and not causal. Therefore, future studies may be able to investigate the causal link between vagal tone, working memory, and attention through various techniques, such as transcutaneous vagal nerve stimulation or other neuroimaging techniques.
Conclusions
Overall, the present research identified multiple significant differences in HRV parameters and neurocognitive performance measures between the blue and the white-collar workforce. Blue-collar workers indicated higher vagally mediated cardiac control during neuropsychological tasks with better performance in spatial working memory exercises, whilst white-collar workers had superior performance on attention-based tasks.
Notably, reduced parasympathetic modulation of the heart, particularly in white-collar workers, was observed.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.
Data Availability Statement:
All data relevant to the present study are available from the corresponding author on reasonable request.
Selection of the Better Dual-Timed Up and Go Cognitive Task to Be Used in Patients With Stroke Characterized by Subtraction Operation Difficulties
Background: The Timed Up and Go Test (TUG) with serial subtraction is commonly used to assess cognitive-dual task performance during walking for fall prediction. Some stroke patients cannot perform number subtraction, and it is unclear which cognitive task can be used to substitute for the subtraction task in the TUG test. The aim of this study was to determine the type of cognitive task that produced the highest decrease in both motor and cognitive performances during TUG-dual in stroke patients. Methods: A total of 23 persons with stroke who were capable of completing subtraction (ST) and 19 persons with subtraction operation difficulties (SOD) participated. Both groups had a similar age range (ST: 59.3 ± 10.4 years and SOD: 62.0 ± 6.8 years) and stroke onset duration (ST: 44.13 ± 62.29 months and SOD: 42.34 ± 39.69 months). The participants performed TUG without a cognitive task (TUG-single) followed by a cognitive task when seated (cognitive-single). In addition, TUG with a cognitive task (TUG-dual) was performed, with the activity randomly selected from four cognitive tasks, including alternate reciting, auditory working memory, clock task, and phonologic fluency. The main outcome variables, TUG duration measured by an OPAL accelerometer and the cognitive dual-task effect (DTE), were analyzed using repeated-measures analyses of variance (ANOVA). Results: The number of correct responses when seated was significantly lower in the SOD as compared to the ST (p < 0.05) during all cognitive tasks, except the phonologic fluency. During TUG-cognitive, TUG duration in the ST was significantly longer for all cognitive tasks compared with TUG-single (p < 0.0001), whereas TUG duration in the SOD was significantly increased only during the phonologic fluency task (p < 0.01). In the ST, there was a significant difference in cognitive DTE between the subtraction and the phonologic fluency tasks (p < 0.01).
The highest cognitive cost was found in the subtraction task, whereas the highest cognitive benefit was shown in the phonologic fluency task. No significant cognitive DTE was found among the cognitive tasks in the SOD. Conclusion: For stroke persons with SOD, phonologic fluency is suitable to be used in the TUG-cognitive assessment. In contrast, subtraction (by 3s) is recommended for the assessment of TUG-cognitive in stroke persons who can perform subtraction.
INTRODUCTION
People who have suffered a stroke exhibit a greater risk of falling than similarly aged individuals (1). Cognition, mobility, and functional performance are major factors that contribute to fall risk and fall-related injury in persons with stroke (2). In addition, the measurement of cognitive function provides essential information that can assist in the prediction of falls (3). Dual-task methodology has been used for assessing cognitive-motor interference (CMI) while walking in various populations, including among persons with stroke. Previous studies reported an association between impaired dual-task performance and the increased risk of falls (4)(5)(6)(7). The clinical measures commonly used to assess cognitive-dual task performance during walking include the Stop Walking While Talking Test, the Walking While Talking Test, the Multiple Tasks Test, and the Timed Up and Go cognitive (TUG-cognitive) (8). Among these four clinical tests, the TUG-cognitive is widely used for clinical assessment in stroke patients (9,10). In the TUG-cognitive, the subjects are asked to perform the TUG [standing up from a chair, walking 3 m, turning, walking back, and sitting down (11)] with the addition of a cognitive task (subtracting by 3s) (12). The TUG-cognitive is useful for the evaluation of walking balance. Performing TUG-cognitive has a detrimental effect on functional mobility, with the additional secondary task increasing the time taken to complete the TUG by 22-25% (12). The completion time of TUG-cognitive has 80% sensitivity and 93% specificity for identifying community-dwelling older adults who are prone to falls (12).
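The 80% sensitivity and 93% specificity quoted for TUG-cognitive come from a standard confusion-matrix calculation over fallers and non-fallers; a minimal sketch with hypothetical counts (not the data from ref 12):

```python
def screening_stats(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # proportion of fallers correctly flagged
    specificity = tn / (tn + fp)   # proportion of non-fallers correctly cleared
    return sensitivity, specificity

# Hypothetical cohort: 15 fallers, 30 non-fallers screened with a TUG cutoff
sens, spec = screening_stats(tp=12, fn=3, tn=28, fp=2)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.80, 0.93
```

Shifting the completion-time cutoff trades one statistic against the other, which is why both are reported together for a screening test.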
A mismatch between available cognitive resources and current task demands often results in the recruitment of additional neural resources, resulting in better or worse task performance (13). Changes in cognitive processes from neural damage following a stroke can be assessed under the dual-task condition. In persons with stroke, the occurrence of CMI while performing simultaneous motor and cognitive tasks results in the impaired performance of one or both tasks (14). Decreased cognitive performance and gait changes can be influenced by the type and the difficulty of the cognitive task (15). Previous studies reported that mental tracking (serial subtraction) was more detrimental to motor performance than discrimination/decision-making and reaction time tasks (16,17). The TUG-cognitive test with serial subtraction was also found to be the most reliable assessment for CMI (18). Decreased motor performance during TUG-serial subtraction, such as taking a longer time during the walk, turn, and turn-to-sit tasks, decreased stride length and stride velocity, and increased single leg stance when walking, has been observed in people with stroke (19).
However, some stroke patients are unable to perform subtraction due to damage in the areas of the central nervous system responsible for arithmetic performance (20,21). It has been reported that quantitative number processing is likely to require bilateral inferior parietal areas, with patients exhibiting damage to this area presenting subtraction deficits. This type of damage limits the use of the TUG-cognitive with serial subtraction test as an assessment for CMI. In patients with subtraction operation difficulties (SOD), it is unclear which cognitive tasks can be used as a substitute for serial subtraction when performing TUG-cognitive assessments. Therefore, the objective of the present study was to determine the type of cognitive task that, when combined with TUG, would lead to the most significant decrease in both motor and cognitive performances in stroke patients not capable of performing subtraction. Data from persons with stroke who did not have a subtraction operation difficulty were also collected as a reference. We hypothesize that adding a cognitive task in the same category as serial subtraction will result in decreased performance in both motor and cognitive assays in stroke patients where the use of subtraction is not possible.
Participants
Fifty persons with stroke were recruited from two hospitals and five rehabilitation centers based on the following inclusion criteria: diagnosis of cerebrovascular accident, medically stable, and able to walk independently for at least 6 m with or without walking aids. The participants were excluded if they had (1) brainstem or cerebellar lesion, (2) cerebral aneurysm, (3) color blindness, (4) hearing loss, (5) aphasia, (6) severe visual impairment, (7) major depression (a score on 2Q ≥ 1 and a score on the 9Q questionnaire ≥ 19), (8) orthopedic conditions or pain affecting natural gait, (9) other neurological disorders that sufficiently disturb balance, (10) inadequate language comprehension resulting in an inability to understand instructions, or (11) a comprehension problem (defined as having a Mini-Mental State Examination Thai version score of < 24) (22). The participants were then classified into two groups based on their ability to perform subtraction by 3s; the groups were designated as able to subtract (ST) and as with subtraction operation difficulties (SOD). The criterion for inclusion in the SOD group was the inability to perform serial subtraction, with one or fewer correct answers out of five within 1 min. Ethical approval was granted by the Institutional Review Board of Srinakharinwirot University. All the participants signed a written informed consent prior to participating in the study.
The sample size calculation for the repeated-measures analyses of variance (ANOVA) was carried out using G*Power version 3.1. The minimum number of subjects required in each group was 19 persons, based on an error probability (α) of 0.05, a power (1 − β) of 0.8, six repeated measurements, and an effect size specification of 0.25.
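The G*Power calculation can be reproduced approximately from first principles: a one-group repeated-measures ANOVA is a noncentral F-test, and the smallest n achieving the target power is found by iteration. The sketch below follows one common convention (noncentrality λ = n·m·f²/(1 − ρ), with an assumed correlation ρ = 0.5 among repeated measures and sphericity ε = 1); G*Power's exact output depends on its option settings, so treat this as an approximation rather than the authors' exact computation.

```python
from scipy import stats

def rm_anova_sample_size(f_effect=0.25, alpha=0.05, power=0.8,
                         n_measurements=6, corr=0.5, eps=1.0):
    """Smallest n for a one-group repeated-measures ANOVA (within factor)."""
    m = n_measurements
    for n in range(2, 1000):
        df1 = (m - 1) * eps                       # numerator degrees of freedom
        df2 = (n - 1) * (m - 1) * eps             # denominator degrees of freedom
        lam = n * m * f_effect ** 2 / (1 - corr)  # noncentrality parameter
        f_crit = stats.f.ppf(1 - alpha, df1, df2) # critical F under H0
        achieved = 1 - stats.ncf.cdf(f_crit, df1, df2, lam)
        if achieved >= power:
            return n
    return None

n_required = rm_anova_sample_size()
```

With these assumed defaults the result lands near the 19 per group reported, though the exact figure shifts with the correlation and nonsphericity conventions used.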
Measurement Tools
Baseline information including age, gender, diagnosis of stroke, hemiplegic side, time since stroke, occurrence of recurrent stroke, use of walking aids, and number of education years was collected from all participants using a questionnaire. The motor and walking performance of the participants with stroke was determined using the Fugl-Meyer Assessment motor subscale of the lower extremity and stride velocity, respectively. The responses to the cognitive tasks were recorded using digital recorders. Two raters evaluated the responses from stroke patients to ensure accurate scoring of the answers and that any repetition was scored once.
For assessing motor performance during TUG-single and TUG-dual, APDM's Mobility Lab™ (APDM Inc.) was used to collect and store data. A gyroscope (±400°/s range) and an accelerometer (±5 g range) were used to capture the angular movement and the acceleration at a sampling rate of 200 Hz; gait cycles and related events were detected and estimated (23). Four portable 3D inertial sensors were placed on the participant at the mid-thoracic level, the 5th lumbar vertebra, and the left and right ankles. In the TUG protocol, the subjects were instructed to stand up from a chair, walk 3 m at a self-selected speed, turn 180°, then walk back, and sit down. During TUG-dual, the participants were asked to perform the TUG and a cognitive task simultaneously with the instruction to "perform both tasks as well as possible without prioritizing either gait or cognitive tasks." The cognitive tasks can be classified into five categories, including reaction time, mental tracking, working memory, discrimination and decision-making, and verbal fluency (24). Reaction time was not explored in this study as it was impractical in the clinical setting. Based on the results from our pilot study with 29 persons with stroke, the cognitive task from each type that produced the largest detrimental effect on walking (i.e., statistically significant slower gait speed as compared to walking with no cognitive task) was selected. The tasks selected for this study are as follows: (1) subtraction by 3s and alternate reciting (mental tracking category), (2) auditory working memory (working memory category), (3) clock task (discrimination and decision-making category), and (4) phonologic fluency (verbal fluency category) (Table 1).
Procedures
The participants were asked to perform TUG without a cognitive task (TUG-single) at the beginning of the test. Then, four cognitive tasks were randomly selected until all tasks were performed. The participants in the ST group were asked to perform one additional task of serial subtraction. The participants received standardized verbal instructions regarding the cognitive task procedures and were allowed to practice while sitting on a chair. To avoid learning effects, the contents of the cognitive tasks used during practice were not similar to those performed during the actual analysis (e.g., different numbers, different letters, etc.). After a practice trial, the participants performed the cognitive task when seated (cognitive-single) for 60 s and the number of correct responses was collected by using the tape recorder. Then, they were asked to perform TUG with the same cognitive task category (TUG-dual). Each condition of the task was performed once, and the participants were allowed to rest at the end of each task for 2 min before performing the next task to prevent mental fatigue. To avoid learning effects, the rater randomly assigned different sets of letters or numbers when assessing the same cognitive category during cognitive-single or TUG-dual. The results from our pilot study ensured that the different sets of letters or numbers selected in this study produced comparable difficulty in the stroke patients. All the test conditions were completed within 1 h.
Data Analyses
The total TUG duration, stride velocity, and duration of TUG components (sit-to-stand duration, straight walk duration, turn duration, and turn-to-sit duration) were calculated using APDM's Mobility Lab software.
The dual-task effect (DTE) was used to determine the influence of the added cognitive task on the cognitive task performance. To determine the DTE, first, the cognitive correct response rate (CRR), which is the rate of correct answers from each cognitive task, was calculated by using the following equation (18):

CRR = number of correct responses / task duration (s)

Then, the DTE was calculated as (25):

DTE (%) = [(CRR during TUG-dual − CRR during cognitive-single) / CRR during cognitive-single] × 100

A negative value of the DTE indicates a decrement of cognitive performance under dual-task conditions (referred to as "cognitive costs"), while a positive value of the DTE indicates an improvement of cognitive performance under dual-task conditions (referred to as "cognitive benefits") (24). All statistical analyses were conducted using SPSS statistics software. An independent t-test was used for comparing the age, the onset of stroke, the scores for FM-Motor, and the gait speed. The number of education years was compared using the Mann-Whitney U-test. An independent t-test was also used to compare the number of correct responses at sitting between the ST and the SOD. The level of significance was set at 0.05. Repeated-measures ANOVA was used to examine the effect of cognitive task on TUG durations (total TUG duration and subcomponent TUG durations) and cognitive DTE. In the ST, the design was 1 × 6 (one group and six task conditions), whereas in the SOD the design was 1 × 5 (one group and five task conditions). The level of significance was set at 0.05, with the Bonferroni test used for post-hoc analyses.
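The CRR and DTE described above reduce to two one-line functions; a minimal sketch with hypothetical counts (assumed numbers, not data from this study):

```python
def correct_response_rate(n_correct, duration_s):
    """Cognitive correct response rate: correct answers per second."""
    return n_correct / duration_s

def dual_task_effect(crr_single, crr_dual):
    """Percentage change in cognitive performance under dual-task conditions.

    Negative values are cognitive costs; positive values are benefits.
    """
    return (crr_dual - crr_single) / crr_single * 100.0

# Example: 12 correct answers in the 60 s seated trial,
# 3 correct answers during a 20 s TUG-dual trial
crr_seated = correct_response_rate(12, 60)   # 0.20 answers/s
crr_tug = correct_response_rate(3, 20)       # 0.15 answers/s
dte = dual_task_effect(crr_seated, crr_tug)  # -25.0, i.e., a cognitive cost
```

Normalizing by duration is what makes the seated 60 s trial comparable with the much shorter TUG-dual trial.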
RESULTS
From 50 participants, 24 persons were classified into the ST group and 26 were classified into the SOD group. Five participants were excluded from the analyses because of invalid data, and several other participants were excluded due to their inability to perform all tasks. The final groups consisted of 19 participants in the SOD and 23 participants in the ST (Figure 1). The participants in both groups were similar in age, onset, lower limb function, number of education years, and walking speed (Table 2). The ability to perform the cognitive-single task in the ST and the SOD was measured from the number of correct responses during sitting (Figure 2). In comparison to the ST, the SOD demonstrated a lower number of correct responses during almost all types of cognitive tasks, including alternate reciting, auditory working memory, and clock task (p < 0.01, 0.05, and 0.001, respectively), with the exception of phonologic fluency where no group difference was found.
The total TUG duration for both groups during TUG-single and TUG-dual is shown in Figure 3. In the ST, total TUG duration and straight walk duration were significantly longer during TUG-dual, for all tasks, as compared to TUG-single (p < 0.001). In contrast, total TUG duration and straight walk duration in the SOD significantly increased only during the phonologic fluency task (p < 0.01 and 0.05, respectively). Figure 3 also shows the time spent in each of the four components of TUG. There was no significant difference found in duration during the sit-to-stand component between TUG-single and all TUG-dual for both groups. Straight walk duration and turning duration were significantly longer between TUG-single and all TUG-dual in the ST (p < 0.01 and 0.0001, respectively), while in the SOD, straight walking duration and turning duration were significantly longer only between TUG-single and TUG-phonologic fluency (p < 0.05). Turn-to-sit duration was different from TUG-single during TUG-subtraction (p < 0.05), TUG-alternate reciting (p < 0.05), and TUG-clock task (p < 0.01) in the ST group. In contrast, in the SOD group, no significant difference between tasks was found for turn-to-sit duration. Three different patterns of the cognitive DTE were found in individuals: decline, no change, and improvement of cognitive performance when compared to cognitive function during sitting (cognitive-single) (Figure 5). A significant effect of adding a cognitive task on cognitive performance during TUG-cognitive was found in the ST [F(4, 88)] (Figure 4). There were also significant differences in cognitive DTE between subtraction and phonologic fluency tasks in the ST (p < 0.01); however, no significant difference was found between these tasks in the SOD (Figure 4). The majority of the participants (65.22%) in the ST group showed a decreased cognitive performance during TUG-dual in the subtraction task compared to that in the cognitive-single.
In contrast, many participants in the SOD demonstrated a decline in cognitive performance (57.89%) during the phonologic fluency task (Figure 5).
DISCUSSION
This study is the first to examine the types of cognitive tasks that cause detrimental effects in both motor and cognitive performances during TUG-dual in persons with stroke who have a subtraction operational difficulty. Our initial hypothesis, that similar types of cognitive tasks would interfere with the cognitive-motor performance in both the ST and the SOD groups, was not supported by this analysis. We found instead that the type of cognitive task played different roles in interfering with the cognitive-motor performance during walking in persons with stroke who are capable of performing the subtraction task compared to those who are unable to complete it.
FIGURE 3 | Average duration (with SD) of (A) total Timed Up and Go Test duration, (B) sit-to-stand duration, (C) straight walk duration, (D) turning duration, and (E) turn-to-sit duration, comparing between cognitive tasks in persons with stroke who were able to subtract and those with subtraction operation difficulties. *depicts a significant difference at p < 0.05. **depicts a significant difference at p < 0.01. ***depicts a significant difference at p < 0.001.
FIGURE 4 | Group average (with SD) of percentage of Cognitive Dual-task Effect (%DTE), comparing between different cognitive tasks in the able to subtract group and in the subtraction operation difficulties group. A positive value means an improvement in cognitive performance (cognitive benefits); a negative value means a decline in cognitive performance (cognitive costs). **depicts a significant difference between tasks at p < 0.01.
In persons with stroke but who can perform number subtraction, the subtraction task produced larger negative effects on cognitive-motor performance during cognitive-dual tests than in the other cognitive tasks examined. In a previous study by Patel and Bhatt (16), the subtraction task was also found to cause a higher negative cognitive effect compared to the Stroop task (discrimination and decision-making). This indicates that the type and the complexity of the task are important in dual-task interference (26). We demonstrated in this study that the subtraction task was more complex than phonologic fluency as it resulted in a higher cognitive cost. The difficulty in performing the subtraction task may be due to a requirement for higher neural activity compared to the phonologic fluency task. The subtraction task triggered neural activity in the bilateral inferior parietal network (20,27), whereas phonologic fluency activated neural networks only in the left inferior frontal cortex and supplementary motor area (28)(29)(30). In addition, working memory is required for the subtraction task, and this task is more directly related to executive function than the verbal fluency task (31).
Compared to the subtraction task, the other cognitive tasks used in this study produced more limited detrimental effects on motor and cognitive performances in persons with stroke but who can perform subtraction. The auditory working memory task caused impaired motor and cognitive performance, although at a lower magnitude than the subtraction task. The alternate reciting letter and clock task resulted in decreased TUG performance. However, the effects on cognitive performance were inconclusive as nearly equal numbers of participants were observed with negative and positive effects on cognitive performance. These findings were in agreement with previous studies which reported limited effects from the clock and alternate reciting letter tasks on dual-task gait performance. Dennis et al. (17) reported no change in gait speed in individuals with stroke during the performance of the clock task. Additionally, a report by Liu-Ambrose et al. (32) found that the alternate reciting letter task did not interfere with gait performance in the elderly.
On the contrary, phonologic fluency was found to produce a more detrimental effect than the other cognitive tasks in the group of stroke patients with SOD. We demonstrated in the SOD that total TUG duration was significantly longer only during the phonologic fluency task, which also carried the highest cognitive cost. Differences in cognitive task difficulty may be responsible for this finding in the SOD. The number of correct responses obtained during the cognitive-single task (Figure 2) suggested that the ability to perform cognitive tasks in general was lower in the SOD than in those who can perform the subtraction task. The phonologic fluency task was considered the easiest of the four cognitive tasks for the SOD, as they were able to perform it in a manner comparable to the ST. It is plausible that, when the cognitive tasks were too difficult for a person in the SOD, they prioritized their attention to the task that they could perform (the motor task), leading to no deterioration of the motor task during the three other cognitive tasks: alternate reciting letter, auditory working memory, and clock task. Another explanation lies in the method of calculating the cognitive DTE when the number of correct responses is very low (< 3 correct responses) in both the single and the dual tasks. The relative comparison of the dual task against the single task in this case could lead to a misinterpretation of no cognitive interference during the cognitive-dual condition when the actual reason is the inability to perform the cognitive task.
FIGURE 5 | (caption fragment) ... (N = 19). A positive value means dual-task benefit, a negative value means dual-task cost, and a zero value means no effect. The upper number represents the percentage of participants with cognitive benefit, and the lower number represents the percentage of participants with cognitive cost. The rest of the percentages (not shown in the figure) represent those participants with no cognitive effect (zero).
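The sensitivity of the relative DTE measure at low response counts, noted above, can be illustrated with a short sketch. This assumes the commonly used relative definition %DTE = (dual − single)/single × 100 (positive = benefit, negative = cost); the exact formula used in the study is not reproduced here.

```python
def cognitive_dte_percent(dual_correct, single_correct):
    """Percent dual-task effect on cognitive performance.

    Positive = dual-task benefit, negative = dual-task cost.
    """
    return (dual_correct - single_correct) / single_correct * 100.0

# With very few correct responses, a one-response drop swings the DTE wildly:
low = cognitive_dte_percent(1, 2)     # one fewer response out of 2 -> -50%
high = cognitive_dte_percent(19, 20)  # one fewer response out of 20 -> -5%
```

A floor on single-task responses (such as the < 3 threshold the authors mention) guards against this artifact of the relative measure.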
Another possible explanation for why phonologic fluency led to the highest deterioration of both motor and cognitive performances in the SOD could be that phonologic fluency triggers more neural activity in the supplementary motor area compared to the other cognitive tasks examined. The left intraparietal sulcus, bilateral superior temporal gyrus, and inferior frontal gyrus are activated during the alternate reciting task (33). The working memory task involves an executive attention control mechanism, and this ability is mediated by portions of the prefrontal cortex (34). The activation of the inferior frontal gyrus and the anterior insula bilaterally, the left supramarginal gyrus, and the putamen was noted during the performance of the clock task (35). The supplementary motor area plays an important role in postural control and contributes to the timing and amplitude of the anticipatory postural adjustment of human gait initiation (36). Therefore, the competitive cognitive demand between retrieving specific words within lexical memory and gait control may impair performance during TUG-dual with the phonologic fluency task.
Some limitations are noted for this study. Due to the heterogeneity of our participants with stroke, the information regarding cognitive DTE during the alternate reciting letter, auditory working memory, and clock tasks was inconclusive. A larger sample size is required in future studies to unravel the cognitive DTE during those cognitive tasks. Next, the measurement of phonologic fluency is differentially sensitive to age and education (37,38). The results of this study were obtained from participants, the majority of whom had a primary education level; thus, the generalization of the results is limited. Lastly, gait pattern, cognitive abilities, and motor and functional outcomes after stroke are correlated with brain lesion site and location (29,39,40). A lesion assessment based on CT or MRI images was not taken for all participants. Furthermore, a longitudinal study is required to explore the relationship between performance under TUG-verbal fluency and falls in stroke patients who are not capable of performing subtraction.
Impaired dual-task performance has been associated with an increased risk of falls in people with stroke. The TUG test with serial subtraction is a useful tool to assess cognitive-dual task performance during walking in order to identify persons with stroke who are prone to fall. Apart from the traditional use of an arithmetic task such as number subtraction, this study provided the alternative of using phonologic verbal fluency in conjunction with the TUG when assessing the cognitive-motor ability in individuals with stroke who have SOD. This can be applied in clinical practice, as it will enable clinicians to customize the cognitive tasks for assessment based on individual limitations, so that a fall prevention program can be implemented as early as possible to prevent fall-related consequences in persons with stroke.
CONCLUSIONS
When combined with the TUG, phonologic fluency led to the largest deteriorating effect on dual-task performance in stroke patients with SOD. Therefore, phonologic fluency is suitable for use in the TUG-cognitive assessment for persons with stroke who have SOD. In contrast, number subtraction (by 3s) is recommended for the TUG-cognitive assessment in persons with stroke who can perform subtraction, as it caused the largest reduction in their cognitive-motor performance.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Institutional Review Board of Srinakharinwirot University. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
RB conceived and designed the project, procured funding, and prepared the final manuscript. AP collected and analyzed the data and wrote the first draft of the manuscript. NC helped with the instrumentation and data analysis. VS helped in the preparation of the final manuscript.
FUNDING
Financial support was received from the Capacity Building Program for New Researcher from the National Research Council of Thailand (NRCT) (grant number: 2561NRCT32018) and the Graduate School of Srinakharinwirot University, Thailand. | 2020-04-23T13:10:26.761Z | 2020-04-23T00:00:00.000 | {
"year": 2020,
"sha1": "dcef206e6c901fd83515befce192cdbc2da53b93",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2020.00262/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dcef206e6c901fd83515befce192cdbc2da53b93",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
215827441 | pes2o/s2orc | v3-fos-license | Stochastic Peierls-Nabarro Model for Dislocations in High Entropy Alloys
High entropy alloys (HEAs) are single phase crystals that consist of random solid solutions of multiple elements in approximately equal proportions. This class of novel materials has exhibited superb mechanical properties, such as high strength combined with other desired features. The strength of crystalline materials is associated with the motion of dislocations. In this paper, we derive a stochastic continuum model based on the Peierls-Nabarro framework for inter-layer dislocations in a bilayer HEA from an atomistic model that incorporates the atomic level randomness. We use asymptotic analysis and a limit theorem in the convergence from the atomistic model to the continuum model. The total energy in the continuum model consists of a stochastic elastic energy in the two layers, and a stochastic misfit energy that accounts for the inter-layer nonlinear interaction. The obtained continuum model can be considered a stochastic generalization of the classical, deterministic Peierls-Nabarro model for the dislocation core and related properties. This derivation also validates the stochastic model adopted by Zhang et al. (Acta Mater. 166, 424-434, 2019).
1. Introduction. Different from the conventional alloys developed based on one primal element, high entropy alloys (HEAs) are single phase crystals that consist of random solid solutions of multiple elements (five or more) in approximately equal proportions [32,5,28,37,10,18,9]. Because each lattice site in HEAs is randomly occupied by one of the main elements, HEAs have significantly higher mixing entropies than those in conventional alloys. It is widely believed that the high mixing entropies in these materials facilitate the formation of simple structures (e.g., face-centered cubic or body-centered cubic lattices) and enable many ideal engineering properties, such as high temperature stability, high strength, high fracture resistance, and high radiationdamage resistance, etc. Because of these promising properties, HEAs have attracted considerable research interest ever since the discovery of this novel class of materials. One attractive mechanical property of HEAs is the high strength combined with high ductility and other desired features, which cannot be achieved in single-component crystals and conventional alloys. There are extensive experimental studies (e.g., [24,20,34]) and atomistic simulations/ab initio studies (e.g., [26,25,13,23,35,21]) available on the high strength of HEAs (see also the reviews [28,37,10,18,9]).
Theoretically, the strength of crystalline materials is determined by the motion of dislocations (line defects) [12]. Many of the existing models for the strength of HEAs are based on the classical ideas of solute solution strengthening; e.g., the Labusch model [14]. While the original Labusch model is directly applicable for cases where there is a distinction between solute and solvent atoms in conventional alloys (unlike in HEAs), some extensions to the HEA case have focused on how to combine contributions from each component to the strength. Toda-Caraballo et al. [27] adopted an averaging procedure for this purpose. Curtin et al. [30,29,17] explicitly considered the interaction energy between a solute atom and a dislocation in a matrix that was described as an effective medium with random local concentration fluctuations.
Recently, Zhang et al. [36] have developed a stochastic continuum model under the framework of the Peierls-Nabarro model [22,19,12] to understand how random site occupancy affects the intrinsic strength of HEA materials. The stochastic Peierls-Nabarro model accounts for the randomness and short-range order on the atomic level in HEAs. The nonlinear effect associated with the dislocation core is described by a stochastic nonlinear interplanar potential. The model predicts the intrinsic strength of HEAs as a function of the standard deviation and the correlation length of the randomness. They also found that the compositional randomness in an HEA significantly increases the intrinsic strength, which agrees with atomistic simulations and experiments.
Despite the success of these theories in predicting results that agree with those of experiments and atomistic simulations, convergence from atomistic models to these theories has not been examined in the literature. The theory in Ref. [27] focuses on averaging the result of the Labusch model and does not explicitly consider the elastic interaction of dislocations with the atomic level randomness in HEAs. The theories in Ref. [30,29,17] were derived from continuum models of interactions under linear elasticity theory; as a result, these models may not necessarily accurately incorporate the influence of the atomic level randomness on the dislocation core, in which linear elasticity theory does not apply. In Ref. [36], the stochastic effects in the nonlinear interaction under the Peierls-Nabarro model are incorporated phenomenologically instead of direct derivation from the atomistic model.
In this paper, we derive a continuum model for inter-layer dislocations in a bilayer HEA from an atomistic model that incorporates the atomic level randomness. The continuum model is under the framework of the Peierls-Nabarro model, in which the nonlinear effect within the dislocation core region is included. The total energy in the obtained stochastic continuum model consists of a stochastic elastic energy in the two layers, and a stochastic misfit energy that accounts for the nonlinear inter-layer interaction and whose energy density is the stochastic generalized stacking fault energy (or the γ-surface). The obtained continuum model can be considered as a stochastic generalization of the classical, deterministic Peierls-Nabarro model [22,19,12] with generalized stacking fault energy [31]. This derivation also validates the stochastic model adopted in Ref. [36].
We use asymptotic analysis and (modified) central limit theorem in the convergence from the atomistic model to the continuum model. The atomic level randomness is incorporated by assuming that each lattice site is occupied by atom species with certain distributions. In the derivation, we introduce a supercell whose size is much greater than the lattice constant, and in the meantime, much smaller than the length unit of the continuum model, and employ the Cauchy-Born rule [3] for the derivation of the continuum formulation of the elastic energy and definition of the generalized stacking fault energy [31] for the calculation of the misfit energy.
The rest of the paper is organized as follows. In Sec. 2, we review the classical Peierls-Nabarro model for dislocations. In Sec. 3, we introduce the atomistic model for a bilayer HEA, from which the continuum model will be derived. In Sec. 4, we first calculate the generalized stacking fault energy of the bilayer HEA using the atomistic model, and then derive stochastic continuum formulations for it and the misfit energy. In Sec. 5, we derive stochastic continuum formulation of the energy due to the intralayer elastic interaction of the bilayer HEA from the atomistic model. In Sec. 6, we formulate the continuum stochastic total energy that incorporates the covariance of the randomness in the misfit and elastic energies, and rigorously prove the convergence from the atomistic model by modified central limit theorem. The stochastic model adopted in Ref. [36] is examined. In Sec. 7, we summarize the results.
2. Review of classical Peierls-Nabarro model. The Peierls-Nabarro model for dislocations [22,19,31,12] is a continuum model that combines a long-range elastic field of a dislocation and an atomic-level description of its core. In its classical form, it describes a straight dislocation with its core spread over a small, finite region along the slip plane.
We assume that there is an edge dislocation located along the z-axis, its Burgers vector b is in the +x-axis, and the y = 0 plane is the slip plane. The slip plane separates two linear elastic continua (y > 0 and y < 0). Across the slip plane y = 0, there is a jump in the displacement in the x direction, which is called disregistry across the slip plane (i.e., slip in the x-direction). The disregistry function φ(x) = u + (x) − u − (x), where u + (x) and u − (x) are respectively the displacements in the x direction on the atomic layers right above and below the slip plane, and φ(−∞) = 0, φ(+∞) = b, where b is the length of the Burgers vector b. The Burgers vector distribution is ρ(x) = φ (x), which characterizes the dislocation core and takes the form of a regularized delta-function. See Fig. 1(b) for a schematic illustration of the disregistry function φ(x).
The total energy of a dislocation in the Peierls-Nabarro model can be written as

E_total = E_elastic + E_misfit, (2.1)

where E_elastic is the elastic energy in the upper and lower continua delimited by the slip plane and E_misfit is the misfit energy associated with the nonlinear atomic interactions across the slip plane. The misfit energy can be written in terms of the disregistry:

E_misfit = ∫_{−∞}^{+∞} γ(φ(x)) dx, (2.2)

where γ(φ) is the nonlinear interplanar potential. In the classical Peierls-Nabarro model, γ(φ) is approximated by the Frenkel sinusoidal potential [7,12],

γ(φ) = (μ b² / (4π² d)) [1 − cos(2πφ/b)], (2.3)

where μ is the shear modulus, and d is the atomic interplanar spacing perpendicular to the slip plane. In general, the nonlinear potential γ(φ) is the generalized stacking fault energy (or the γ-surface) [31] that is defined as the energy increment per unit length when there is a uniform shift of φ between the upper and lower halves of a perfect lattice along the slip plane. See Sec. 4.1 and Fig. 1(d) for more details of the generalized stacking fault energy of a bilayer system. In the case of a bilayer system with an inter-layer edge dislocation being considered in this paper, the elastic energy due to the intra-layer elastic interaction is

E_elastic = (α/2) ∫_{−∞}^{+∞} [φ'(x)]² dx, (2.4)

where α is an elastic constant. Note that an edge dislocation in a three-dimensional space is considered in the classical Peierls-Nabarro model [22,19,12], with the elastic energy E_elastic = (1/2) ∫_{−∞}^{+∞} σ_xy(x) φ(x) dx, where the shear stress on the slip plane is σ_xy(x) = μ/(2π(1−ν)) ∫_{−∞}^{+∞} φ'(x₁)/(x − x₁) dx₁ (ν is the Poisson ratio). For a bilayer system, when the Frenkel sinusoidal potential in Eq. (2.3) is used for the misfit energy, together with the elastic energy in Eq. (2.4), the model is the Frenkel-Kontorova model [8].
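For the three-dimensional case just mentioned, the classical Peierls-Nabarro model with the Frenkel potential admits the well-known closed-form disregistry φ(x) = b/2 + (b/π) arctan(x/ζ), with half core width ζ = d/(2(1 − ν)). The sketch below (a textbook illustration, not this paper's bilayer model) checks the boundary conditions φ(−∞) = 0, φ(+∞) = b and that the Burgers vector distribution ρ = φ' integrates to b.

```python
import math

def pn_disregistry(x, b=1.0, zeta=0.5):
    # classical PN solution: phi(-inf) = 0, phi(+inf) = b, phi(0) = b/2
    return b / 2 + (b / math.pi) * math.atan(x / zeta)

def burgers_content(b=1.0, zeta=0.5, L=1e6):
    # integral of rho = phi' over (-L, L) equals phi(L) - phi(-L) -> b
    return pn_disregistry(L, b, zeta) - pn_disregistry(-L, b, zeta)
```

Here ζ is treated as a free parameter; physically it sets the core width through the competition between the elastic and misfit energies.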
3. Atomistic model of HEAs. HEAs are different from conventional alloys in the sense that each lattice site is randomly occupied by one of the main elements (normally more than five) with nearly equal proportions. We focus on a bilayer HEA with an inter-layer straight edge dislocation; see Fig. 1(a) for an illustration of the atomic configuration (to be explained at the end of this section). The averaged perfect lattice structure (without dislocation) has a triangular atomic configuration; see Fig. 1(c). The randomness of lattice occupation is expressed by a probability model. Assume that in the bilayer HEA, there are m elements that could possibly occupy each lattice site. All these elements form the sample space of a random variable ω:

Ω = {e_1, e_2, · · · , e_m}, (3.1)

which is equipped with the probability measure

P(e_1) = p_1, P(e_2) = p_2, · · · , P(e_m) = p_m, (3.2)

with p_1 + p_2 + · · · + p_m = 1. (3.3)

The probability of each element occupying a lattice site is the proportion of this element over all elements in the HEA. In particular, in equimolar HEAs, the probabilities of all elements are equal, i.e., p_1 = p_2 = · · · = p_m = 1/m. At each lattice site, say atom i, there is a random variable ω_i that describes the element on that site. In this paper, we assume that all the random variables {ω_i} for all the lattice sites of the HEA are independent and identically distributed with the distribution given in Eqs. (3.1)-(3.3).
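The i.i.d. site-occupancy model described above can be sketched directly. The element labels and the equimolar probabilities below are illustrative placeholders, not values from the paper.

```python
import random

ELEMENTS = ["e1", "e2", "e3", "e4", "e5"]      # sample space Omega (placeholder labels)
PROBS = [1 / len(ELEMENTS)] * len(ELEMENTS)    # equimolar HEA: p_i = 1/m

def sample_sites(n_sites, seed=0):
    # each lattice site is occupied independently with distribution P
    rng = random.Random(seed)
    return rng.choices(ELEMENTS, weights=PROBS, k=n_sites)
```

For a large sample, the empirical frequency of each element approaches its occupation probability p_i, as assumed in the continuum derivation.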
We use a pair potential V_pair(r, ω_{i1}, ω_{i2}) in the atomistic model of the HEA, from which the continuum model will be derived. This interatomic potential is a function of not only the inter-atomic distance r but also the two atom species ω_{i1} = χ_1 and ω_{i2} = χ_2, with χ_1, χ_2 ∈ Ω. We focus on the nearest neighbor interaction in the derivation. An example of such a pair potential is the Lennard-Jones potential [15]

V_pair(r, χ_1, χ_2) = 4 ε(χ_1, χ_2) [(a(χ_1, χ_2)/r)^12 − (a(χ_1, χ_2)/r)^6], (3.4)

with Lorentz-Berthelot's combining rules [16,1]

ε(χ_1, χ_2) = √(ε(χ_1, χ_1) ε(χ_2, χ_2)), a(χ_1, χ_2) = (a(χ_1, χ_1) + a(χ_2, χ_2))/2. (3.5)

That is, in this potential, the dependence on atom species is defined through the empirical parameters ε(χ_1, χ_2) and a(χ_1, χ_2). This and similar forms of the Lennard-Jones potential have been used for atomistic simulations of HEAs [25,6,33] and other systems [11] in the literature. In the numerical validation after the continuum model is derived, without loss of generality, we will use this Lennard-Jones potential. Note that this specific potential is only for numerical validation, and the obtained continuum model does not depend on the specific form of the pair potential V_pair(r, ω_{i1}, ω_{i2}).
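A minimal implementation of a species-dependent Lennard-Jones pair potential with Lorentz-Berthelot mixing is sketched below; the two pure-species parameter sets are placeholders, not the values of Table 1.

```python
import math

# pure-species LJ parameters (epsilon, a); placeholder values for illustration
PURE = {"A": (0.50, 2.30), "B": (0.30, 2.50)}

def lb_parameters(chi1, chi2):
    # Lorentz-Berthelot: geometric mean for epsilon, arithmetic mean for a
    e1, a1 = PURE[chi1]
    e2, a2 = PURE[chi2]
    return math.sqrt(e1 * e2), 0.5 * (a1 + a2)

def v_pair(r, chi1, chi2):
    # species-dependent LJ pair potential
    eps, a = lb_parameters(chi1, chi2)
    return 4.0 * eps * ((a / r) ** 12 - (a / r) ** 6)
```

Note that v_pair is symmetric in its species arguments, as required of a pair potential, and attains its minimum at r = 2^{1/6} a(χ_1, χ_2).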
The empirical parameters ε and a of the Lennard-Jones potential for some transition metal elements, which are commonly used ingredients of HEAs, are listed in Table 1 (from [6,11]).
Table 1. The empirical parameters of the Lennard-Jones potential for some transition metals.
is the averaged value of φ(x). We will derive a continuum model from this atomistic model in the following sections.
4. Stochastic misfit energy of HEAs.
In this section, we first calculate the misfit energy density, i.e., the generalized stacking fault energy of the bilayer HEA using the atomistic model with randomness described in the previous section, and then derive stochastic continuum formulations for the generalized stacking fault energy and the misfit energy.
4.1. Review of the definition of the generalized stacking fault energy [31]. In the definition proposed by Vitek [31], for a given plane, the generalized stacking fault energy (or the γ-surface) as a function of the disregistry φ is the energy increment per unit area after a perfect crystal is cut along this plane and then reconnected after a uniform shift φ.
For a bilayer single-element crystal with a triangular lattice as shown in Fig. 1(c), the generalized stacking fault energy γ(φ) is the energy increment per unit length after the top and bottom layers have a uniform shift (disregistry) φ relative to each other (i.e., along the x direction); see Fig. 1(d). This is the traditional crystal and can be regarded as a special case under our framework when there is only one possible element in the probability space, i.e., Ω = {e_1} with P(e_1) = 1. In this classical case, the interatomic potential becomes a function of only the distance r. When the nearest neighbor interaction is considered, under a uniform inter-layer disregistry φ, the increment in the interaction energy of one atom with all the other atoms consists of the increments of the interaction energies with its two nearest neighbors on the other layer, U(φ) and V(φ) (due to the red and blue bonds, respectively, in Fig. 1(d)):

ΔE(φ) = U(φ) + V(φ). (4.1)

Here we have used the fact that, with the uniform inter-layer disregistry φ, the distances between one atom and its two nearest neighbors in the same layer do not change; thus the associated interaction energies do not change and do not contribute to the energy increment. Therefore, the generalized stacking fault energy γ(φ) can be expressed as

γ(φ) = [U(φ) + V(φ)]/h. (4.2)

Fig. 2 shows γ(φ) calculated using the Lennard-Jones potential with parameters from Table 1. Note that γ(φ) is a periodic function with period h. The approximation of Frenkel's sinusoidal-type potential in Eq. (2.3) [7] adopted in the classical Peierls-Nabarro model [22,19] is also plotted in Fig. 2, with the same period and amplitude as the calculated γ(φ). It can be seen that the Frenkel sinusoidal potential indeed provides a good approximation to the generalized stacking fault energy in this case. This also validates that using the Frenkel sinusoidal potential as the averaged nonlinear inter-layer potential in the studies of HEAs in Ref. [36] is a reasonable approximation.
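The construction of γ(φ) from the two inter-layer bonds can be checked numerically. The sketch below assumes a triangular bilayer with in-layer spacing h and inter-layer spacing √3 h/2, and a generic single-species LJ potential whose minimum is placed at the equilibrium bond length h; the geometry and parameters are illustrative, not the paper's.

```python
import math

def v_lj(r, a):
    # single-species LJ with unit well depth
    return 4.0 * ((a / r) ** 12 - (a / r) ** 6)

def gamma(phi, h=1.0):
    # energy increment per unit length under a uniform inter-layer shift phi
    a = h / 2 ** (1 / 6)                 # put the LJ minimum at bond length h
    d = math.sqrt(3) / 2 * h             # inter-layer spacing (triangular lattice)
    x_up = h / 2 + phi                   # shifted upper-layer atom position
    lower = [k * h for k in range(-2, 5)]
    bonds = sorted(math.hypot(x_up - x, d) for x in lower)[:2]  # two nearest bonds
    v0 = v_lj(h, a)                      # bond energy at phi = 0 (distance h)
    return sum(v_lj(r, a) - v0 for r in bonds) / h
```

As the text states, gamma(0) = 0, gamma is h-periodic, and gamma > 0 in between, mirroring the qualitative shape of the Frenkel potential.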
4.2. Stochastic generalized stacking fault energy.
In order to incorporate the atomic level randomness into the continuum model, we introduce the concept of supercell. One supercell of type-n contains 2n atoms (n atoms on each layer), with species denoted by random variables ω 1 , ω 2 , · · · , ω 2n ∈ Ω. We further define the atomic configuration of the supercell as ω := (ω 1 , ω 2 , · · · , ω 2n ) ∈ Ω 2n . Periodic boundary condition is used for the supercell. See Fig. 3 for illustrations of the supercell and supercell with a disregistry φ for the calculation of the generalized stacking fault energy. The atoms in the upper layer are labeled as 2i, i = 1, 2, · · · , n, and those in the lower layer are 2i − 1, i = 1, 2, · · · , n.
We will derive a continuum model from the atomistic model under the assumption that the size of the supercell δ = nh is large on the atomic level and small on the continuum level, i.e., h ≪ δ ≪ L, where L is the length scale of the continuum model. The derivation will be given in Secs. 4.3 and 6.2. We have assumed that the occupation of atom species on one lattice site is independent from that of any other site. Therefore, the probability measure of any atomic configuration is well established by the direct product of the probabilities from each site:

P(ω) = P(ω_1) P(ω_2) · · · P(ω_{2n}). (4.3)

The interaction energy between each pair of atoms and the total interaction energy within the supercell are functions of ω.
With a uniform inter-layer disregistry φ, following the formulation of the deterministic case in Eq. (4.1), the increment in the interaction energy of one atom (without loss of generality, the atom with label 2i in the upper layer) with all the other atoms consists of the interaction energy increments U_i(φ, ω) and V_i(φ, ω) of atom 2i with its two nearest neighbors 2i − 1 and 2i + 1 in the lower layer. Therefore, for this type-n supercell with atomic configuration ω under disregistry φ, the value of the generalized stacking fault energy, i.e., the average energy increment per unit length of the supercell, is

γ_n(φ, ω) = (1/(nh)) Σ_{i=1}^{n} [U_i(φ, ω) + V_i(φ, ω)]. (4.5)

If the probability space contains only one possible element, i.e., Ω = {e_1}, each lattice site is occupied by this element with probability 1. In this extreme case, the stochastic γ_n in Eq. (4.5) reduces to the classical, deterministic expression in Eq. (4.2). This indicates that our definition of the stochastic generalized stacking fault energy is consistent with the classical definition by Vitek [31]. Now we calculate the mean and variance of γ_n(φ, ω). Since the random variables ω_i for the elements on the lattice sites have identical distributions, using Eq. (4.5), the mean of γ_n(φ, ω) is

E[γ_n(φ, ω)] = (1/h) (E[U_i(φ, ω)] + E[V_i(φ, ω)]) (4.6)

for i = 1, 2, · · · , n. Next, we calculate the variance of γ_n(φ, ω). Subtracting (4.6) from (4.5), we have

γ_n(φ, ω) − E[γ_n(φ, ω)] = (1/(nh)) Σ_{i=1}^{n} [(U_i − E[U_i]) + (V_i − E[V_i])].

It can be calculated that

Var[γ_n(φ, ω)] = (1/(n h²)) [Var(U_i) + Var(V_i) + 2 Cov(U_i, V_i) + 2 Cov(U_i, V_{i−1})].

Here we have used the fact that all the U_i and all the V_i have identical distributions, respectively. Moreover, since the random variables ω_i for the elements on the lattice sites are independent of each other, each U_i is correlated only with V_{i−1} and V_i and is independent of all the other V_j's; see Fig. 3. Introducing the notation θ(φ):

θ(φ)² := (1/h²) [Var(U_i) + Var(V_i) + 2 Cov(U_i, V_i) + 2 Cov(U_i, V_{i−1})],

we have Var[γ_n(φ, ω)] = θ(φ)²/n. (4.14)

4.3. Continuum limit of the stochastic generalized stacking fault energy. In this subsection, we will derive a continuum formulation of γ(φ, ω) from the atomic-level expression γ_n(φ, ω) in Eq. (4.5) by letting the size of the supercell n → ∞. Here we perform numerical samplings to examine this limit.
More rigorous convergence proof using a modified central limit theorem will be given in Sec. 6.2.
We consider an HEA that consists of the five elements shown in Table 1, which form the probability space equipped with the corresponding probability measure. In this calculation example, we set one supercell containing n = 7 atom pairs. We sample a total of 10^6 atomic configurations from the probability distribution (4.3).
Each atomic configuration ω_sample corresponds to one curve of γ_n(φ, ω_sample) shown in Fig. 4. With all those samples, we also statistically find the distributions of γ-values at five fixed disregistries, namely φ = 0.5h, 0.4h, 0.3h, 0.2h, and 0.1h. The total of 10^6 samples means that there are 10^6 sampled values of γ_n(φ, ω) in Eq. (4.5) at each fixed disregistry φ. Fig. 5 shows the normalized distributions of those sampled γ-values at each of these values of φ, using the mean and variance of γ_n(φ, ω) in Eqs. (4.6) and (4.14), and a comparison with the probability density function of the Gaussian distribution with mean 0 and standard deviation 1. The results show that the sample distributions agree excellently with the Gaussian distributions for this supercell with size n = 7. We have also performed samplings with larger sizes of the supercell, and the results are almost identical to those shown in Figs. 4 and 5.
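The Gaussian convergence seen in the sampling can be reproduced in miniature with a toy version of γ_n: i.i.d. per-bond energy increments averaged over a supercell. The per-species energies below are arbitrary placeholders; the point is only the 1/√n scaling of the spread of the supercell average.

```python
import random
import statistics

BOND_ENERGY = {"e1": 0.2, "e2": 0.5, "e3": 0.9}   # toy per-bond increments
SPECIES = list(BOND_ENERGY)

def gamma_n(n, rng):
    # supercell average of n i.i.d. bond-energy increments (toy gamma_n)
    return sum(BOND_ENERGY[rng.choice(SPECIES)] for _ in range(n)) / n

rng = random.Random(42)
n = 50
samples = [gamma_n(n, rng) for _ in range(20000)]
mean = statistics.fmean(samples)
sd = statistics.stdev(samples)
# population mean is 8/15; population sd of one bond is sqrt(18.5/225),
# so the supercell average should have sd close to sqrt(18.5/225) / sqrt(50)
```

A histogram of the standardized samples would closely follow the standard Gaussian density, as in Fig. 5 of the paper.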
These numerical results show that, for each value of the disregistry φ, the value of the generalized stacking fault energy γ_n(φ, ω) converges to a random variable with Gaussian distribution. That is,

√n (γ_n(φ, ω) − E[γ_n(φ, ω)]) → N(0, θ(φ)²) in distribution as n → ∞, (4.15)

where N(μ, σ²) is the Gaussian distribution with mean μ and standard deviation σ. The numerical results show that the convergence is already quite good for n = 7. More rigorous convergence proof using a modified central limit theorem will be given in Sec. 6.2. The above limit is equivalent to

γ_n(φ, ω) ≈ E[γ_n(φ, ω)] + (θ(φ)/√n) N(0, 1), (4.17)

which will be used in the later derivation.
4.4. Stochastic misfit energy. Now we derive the formulation for the misfit energy based on the stochastic generalized stacking fault energy γ n (φ, ω).
We have assumed that the size of the supercell δ = nh is much smaller than the length unit of the continuum model. Defining the Gaussian random variable Y δ ∼ N (0, δ), which has mean 0 and standard deviation √ δ, and using Eqs. (4.8) and (4.17), we obtain the misfit energy within the supercell. We discretize the slip plane x-axis into a series of such small intervals, i.e., microscopic supercells δ 1 , δ 2 , δ 3 , · · · , and each interval is associated with a Gaussian random variable for the atomic structure within it: Y δ1 , Y δ2 , Y δ3 , · · · . (For an infinite domain, we can start from a finite number A < 0 and then let A → −∞.) Because the atomic configuration within one interval is almost independent of that of any other interval, due to the assumptions of nearest-neighbor interaction and δ ≫ h, the Y δ 's are approximately mutually independent and can be regarded as independent Gaussian increments. Therefore, the sequence {Y δ } defines a Brownian motion (Wiener process) B x (ω), as given in Eq. (4.20). Since δ is small on the continuum length scale, the microscopic misfit energy in Eq. (4.19) can be written on the continuum length scale in integral form. This is the formulation of the stochastic misfit energy on the continuum level.
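The construction of the Brownian motion from independent increments Y δ ∼ N (0, δ) can be checked numerically; the parameter values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

delta = 1e-3      # supercell length on the continuum scale (illustrative)
n_cells = 2000    # number of supercells along the slip plane
n_paths = 50_000  # independent realizations of the atomic disorder

# One independent Gaussian increment Y_delta ~ N(0, delta) per supercell.
Y = rng.normal(0.0, np.sqrt(delta), size=(n_paths, n_cells))

# Partial sums of the increments sample the Brownian motion B_x at the
# supercell boundaries x = k * delta.
B = np.cumsum(Y, axis=1)

x_end = n_cells * delta   # = 2.0
var_end = B[:, -1].var()  # Wiener process: Var(B_x) = x
```

The defining properties E[B x ] = 0, Var(B x ) = x, and independence of increments over disjoint intervals all hold up to sampling error.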
In the extreme case that there is only one possible element in the probability space, i.e. Ω = {e 1 } with P (e 1 ) = 1, then θ(φ) ≡ 0 and the formulation of the misfit energy reduces to that in the classical Peierls-Nabarro model shown in Eq. (2.2).
5. Stochastic elastic energy.
In this section, we first calculate the energy due to the intra-layer elastic interaction of the bilayer HEA using the atomistic model, and then derive a stochastic continuum formulation from it.

5.1. Elastic energy using the atomistic model. The elastic energy comes from the pairwise interactions between intra-layer neighboring atoms. Fig. 6 illustrates one supercell with and without displacements. The supercell for calculating the elastic energy is the same as that for evaluating the misfit energy, i.e. the atomic configuration ω = (ω 1 , ω 2 , · · · , ω 2n ) in Fig. 6 is the same as that in Fig. 3. We denote the displacement of the i'th atom of the top layer by u + i , and that of the bottom layer by u − i . Because only nearest-neighbor interactions are considered in our model, the equilibrium atomic lattice is reached when the distance between each pair of nearest neighbors equals the energy-minimizing distance of the pair potential. In fact, in this case the total energy of the lattice is minimized, as can be seen from the fact that any perturbation of the location of an atom leads to an increase in the total energy.
We consider a nearest-neighbor pair whose species are generically denoted χ 1 , χ 2 and whose interaction is given by the pair potential V pair (r, χ 1 , χ 2 ), where r is the distance between them. The equilibrium distance h is the value at which the pair potential reaches its minimum, i.e., r = h is the solution of dV pair /dr = 0 (Eq. (5.1)). Because the pair potential V pair depends on the atom species χ 1 and χ 2 , the equilibrium distance h determined by solving Eq. (5.1) should also be regarded as a function of the pair species, i.e. h = h(χ 1 , χ 2 ). For the Lennard-Jones potential in Eq. (3.4), the equilibrium distance is h(χ 1 , χ 2 ) = 2 1/6 a(χ 1 , χ 2 ). For the supercell shown in Fig. 6, we denote the equilibrium distance of the i'th nearest-neighbor pair accordingly for the top and bottom layers, respectively.
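For the standard 12-6 Lennard-Jones form V pair (r) = 4ε[(a/r) 12 − (a/r) 6 ] (assumed here for the elided Eq. (3.4)), the stated equilibrium distance h = 2 1/6 a, and the positive curvature at the minimum that underlies the Hooke's-law stiffness coefficients of the next subsection, can be verified numerically:

```python
import numpy as np

def v_pair(r, eps=1.0, a=1.0):
    """Standard 12-6 Lennard-Jones pair potential (assumed form)."""
    return 4.0 * eps * ((a / r) ** 12 - (a / r) ** 6)

# Locate the minimum on a fine grid.
r = np.linspace(0.9, 2.0, 1_100_001)
h_numeric = r[np.argmin(v_pair(r))]
h_analytic = 2.0 ** (1.0 / 6.0)  # solution of dV_pair/dr = 0 for a = 1

# Curvature at the minimum: a positive value is what makes a
# Hooke's-law (harmonic) stiffness coefficient well defined.
dr = 1e-5
beta = (v_pair(h_analytic + dr) - 2.0 * v_pair(h_analytic)
        + v_pair(h_analytic - dr)) / dr**2
```

The numerically located minimum agrees with 2 1/6 a, and the second derivative there is positive, so an expansion of the bond energy about h is harmonic to leading order.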
When the atomic lattice is deformed, the elastic energy stored in the bond between the i'th and the (i + 1)'th atoms can be expressed as in Eq. (5.3) for the top layer and Eq. (5.4) for the bottom layer (see Fig. 6). The variables u + (x) and u − (x) denote the continuous displacements of the top and bottom layers, respectively, and the stiffness coefficients β + i and β − i are defined in Eq. (5.5) for the i'th neighboring atom pairs of the top and bottom layers, respectively. The elastic energies associated with them, i.e., P i and Q i in Eqs. (5.3) and (5.4), are in the form of Hooke's law. As shown in (5.5), the stiffness coefficients β ± i are random variables depending only on the species of the i'th neighboring atom pair.
As in the previous section, we will derive a continuum model from the atomistic model under the assumption that the size of the supercell δ = nh is large on the atomic level and small on the continuum level, i.e., h ≪ δ ≪ L, where L is the length scale of the continuum model. Following the Cauchy-Born rule [3] for deriving a continuum model from the atomistic model of an elastically deformed crystal, we assume that the deformation gradient, which here is du + /dx or du − /dx in the top or bottom layer, is constant in the supercell. Under this assumption, the elastic energy of the supercell for the top or bottom layer is the summation of all the bond energies P i or Q i of that layer. The randomness in this elastic energy is associated with the random variable n i=1 (β ± i −β). Its variance can be calculated using the facts that β + i = β + i (ω 2i , ω 2i+2 ), that β + i is independent of β + j for j ≠ i − 1, i, i + 1, and similarly for β − i , all due to the assumption that the {ω i } are independent random variables.
5.2. Stochastic continuum elastic energy.
As in the previous section for the misfit energy, here we obtain the continuum limit of the stochastic elastic energy under the assumption that h ≪ δ ≪ L, where δ is the size of the supercell and L is the length scale of the continuum model. The elastic energies in Eq. (5.9) depend on the summation of stochastic stiffness coefficients n i=1 (β ± i −β). We derive a continuum formulation of n i=1 (β ± i −β) by letting the size of the supercell n → ∞. In this subsection we perform numerical simulations to examine this limit; a more rigorous convergence proof using a modified central limit theorem will be given in Sec. 6.2.
In the numerical simulations, we use the same HEA system, Eq. (4.15), that was used for deriving the misfit energy in the previous section, which consists of five elements with the parameters shown in Table 1. We sample a total of 10^6 atomic configurations from the probability distribution. As illustrated in Fig. 7, when n becomes large enough, the probability distribution of the summation n i=1 (β ± i −β) converges to a Gaussian distribution with mean 0 and variance nσ ββ ; that is, the convergence stated in Eq. (5.12) holds. A rigorous convergence proof for the general case using a modified central limit theorem will be given in Sec. 6.2.
We have assumed that the size of the supercell δ = nh is much smaller than the length unit of the continuum model. Using the notation Y δ ∼ N (0, δ) defined in Eq. (4.18), which is the Gaussian distribution with mean 0 and standard deviation √ δ, and Eq. (5.12), the elastic energies of the top and bottom layers in the supercell given in Eq. (5.9) can be written in terms of Y δ , as in Eq. (5.13). As in Sec. 4.4 for the misfit energy, the slip plane is discretized into infinitely many such small intervals δ 1 , δ 2 , δ 3 , · · · , and each interval is associated with a Gaussian random variable, forming the sequence Y δ1 , Y δ2 , Y δ3 , · · · . As argued in Sec. 4.4, the {Y δ } are approximately mutually independent and can be regarded as independent Gaussian increments, forming the Brownian motion given in Eq. (4.20). Since the size of the supercell δ = nh is much smaller than the length unit of the continuum model, from Eq. (5.13) we obtain the expression for the elastic energies on the continuum level, which can be rewritten in the form of Eq. (5.15). For the bilayer HEA system (4.15), it can be calculated that ε e = 0.0914.
6. The Peierls-Nabarro model for HEAs. In this section, we formulate the stochastic total energy of the Peierls-Nabarro model for the bilayer HEA, and rigorously prove the convergence from the atomistic model. The stochastic model adopted in Ref. [36] is also examined.
6.1. Total energy of the supercell using the atomistic model and its continuum limit. In the Peierls-Nabarro model for an inter-layer dislocation, there are both a disregistry φ across the slip plane and an elastic deformation {u ± i } within each layer. We consider the same supercell of size nh as in the previous two sections (see Figs. 3 and 6), and the supercell has both φ and {u ± i } (with constant du ± /dx as in the previous section). Using Eqs. (4.8), (4.7) and (5.9), (5.7), (5.8), the total energy of the supercell can be calculated, with a first term that is the average value of the total energy and a second term that is a stochastic contribution whose mean value is 0; hereP andQ denote the mean elastic energy densities built from du + (x)/dx and du − (x)/dx, respectively. The variance of this total energy is given in Eq. (6.2). In Sec. 4.2, we have calculated the variances of the terms of the misfit energy (Eq. (4.13)). Using the variances of the elastic energies in the top and bottom layers calculated in Sec. 5.2 (Eqs. (5.6), (5.9), and (5.10)), we obtain the corresponding elastic contributions. Since the atomic configurations of the top and the bottom layers are mutually independent, the covariance of their elastic energies vanishes. The remaining part in Eq. (6.2) (the sum of the last four terms) is the covariance between the misfit energy and the elastic energy. The covariances between the different terms of the misfit energy and the elastic energy can be calculated; here, similar to the calculation of σ uv (φ) in Eq. (4.11b), we have used the property that U i is independent of all P j except P i−1 and P i (i.e., β + i−1 and β + i ), and similarly for the other covariances.
Summarizing Eqs. (6.3)-(6.6), the variance of the total energy of the supercell in Eq. (6.2) can be written compactly, with the standard deviation σ defined in Eq. (6.9). Similar to the continuum limits of the misfit energy in Eq. (4.17) (shown in Fig. 5) and the elastic energy in Eq. (5.12) (shown in Fig. 7), numerical simulations also suggest that the stochastic perturbation in the total energy ∆E PN in Eq. (6.1) converges to a Gaussian distribution, as stated in Eq. (6.11). This limit will be proved in the next subsection. When du ± /dx = 0, this limit reduces to the continuum limit of the misfit energy in Eq. (4.17). When φ = 0 and only the elastic energy of either the top or the bottom layer is considered, this limit reduces to Eq. (5.12). The standard deviation of the total energy density, σ(φ, du + /dx, du − /dx), defined in Eq. (6.9), depends on the elastic strains du ± /dx in the top and the bottom layers and on the disregistry φ between the two layers through the functions θ(φ) and η(φ), where θ(φ) is the standard deviation of the misfit energy (see Eq. (4.16)) and η(φ), defined in Eqs. (6.6) and (6.10), is associated with the covariance between the elastic energy and the misfit energy. For the bilayer HEA system (4.15), the calculated functions θ 2 (φ) and η(φ) are shown in Fig. 8(a). We also compare the function θ(φ) with the gamma surfaceγ(φ) for the bilayer HEA system (4.15); see Fig. 8(b). It can be seen that the relation (6.12) between θ(φ) andγ(φ) holds for some small ε m ; here we can take ε m = 1/11.

6.2. Proof of convergence to Gaussian distribution. In probability theory, the central limit theorem states that the normalized sum of independent random variables tends towards a normal distribution as the number of random variables goes to infinity. However, the random variables {(U i −Ū ) + (V i −V ) + (P i −P ) + (Q i −Q)} in the summation in Eq. (6.11) are not mutually independent as the sub-index i varies. Thus the central limit theorem does not apply directly.
A modified central limit theorem still holds when the assumption of independence in the classical central limit theorem is relaxed to weak dependence [2]. In this subsection, we apply the modified central limit theorem to prove the convergence in Eq. (6.11) (and, accordingly, the convergences in Eqs. (4.17) and (5.12) as two special cases).
Weak dependence means that random variables far apart in a sequence are nearly independent [4]; this property is called α-mixing and is measured by a mixing coefficient. For the random variable sequence {X i } ∞ i=1 , the mixing coefficient α n is defined as (6.13) α n = sup{ |P (A ∩ B) − P (A)P (B)| : k = 1, 2, · · · , A ∈ F k 1 , B ∈ F ∞ k+n }, in which F b a denotes the σ-field generated by {X a , X a+1 , · · · , X b }. If α n → 0, then X k and X k+n are approximately independent for large n, uniformly over all k. With this definition of the mixing coefficient, the modified central limit theorem holds for a weakly dependent random-variable sequence [2].
To prove the convergence in Eq. (6.11), we set X i = (U i −Ū ) + (V i −V ) + (P i −P ) + (Q i −Q). We now check the α-mixing coefficient. Note that the energy components U i , V i , P i and Q i are defined from the local atomic configurations ω 2i−1 , ω 2i , ω 2i+1 and ω 2i+2 . Thus X i is independent of X i±n when n ≥ 2. Therefore, in our case, for k = 1, 2, · · · and ∀A ∈ F k 1 , ∀B ∈ F ∞ k+n , the mixing coefficient α n vanishes for n ≥ 2, so the condition of the modified central limit theorem holds. The convergence in Eq. (6.11) then follows from the conclusion of the theorem in Eq. (6.14). The convergences in Eqs. (4.17) and (5.12) hold accordingly as special cases.
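The mechanism at work here, a sequence in which each term shares lattice sites only with its immediate neighbors, so that α n = 0 for n ≥ 2, can be illustrated with a toy 1-dependent sequence (the construction below is hypothetical and is not the paper's energy decomposition):

```python
import numpy as np

rng = np.random.default_rng(2)

n, n_samples = 400, 20_000
# i.i.d. site variables playing the role of the omega_i.  Each X_i is
# built from two consecutive sites, so X_i and X_{i+m} share no site
# for m >= 2: the sequence is 1-dependent and alpha_m = 0 for m >= 2.
omega = rng.normal(size=(n_samples, n + 1))
X = omega[:, :-1] * omega[:, 1:]  # toy mean-zero 1-dependent sequence

lag2 = np.mean(X[:, :-2] * X[:, 2:])  # E[X_i X_{i+2}] = 0 exactly here
S = X.sum(axis=1) / np.sqrt(n)        # normalized sum
frac_1sigma = np.mean(np.abs(S / S.std()) < 1.0)
```

Despite the dependence between adjacent terms, the normalized sums come out near-Gaussian (about 68.3% of the mass within one standard deviation), which is exactly what the modified central limit theorem guarantees for weakly dependent sequences.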
6.3. Stochastic total energy. From Eq. (6.11), as n −→ ∞, the total energy (6.1) of the supercell with δ = nh can be written in terms of the Gaussian random variable Y δ ∼ N (0, δ), as in Eq. (6.16). As in the continuum limits of the previous sections, the slip plane is divided into infinitely many such small intervals δ 1 , δ 2 , δ 3 , · · · , and each interval is associated with a Gaussian random variable, forming a sequence Y δ1 , Y δ2 , Y δ3 , · · · , which are independent Gaussian increments and form the Brownian motion described in Eq. (4.20). Using the assumption that δ is small compared with the length unit of the continuum model, the continuum limit of Eq. (6.16), in integral form, is Eq. (6.17). Recall that in this formula φ(x) is the disregistry across the slip plane, and u + (x), u − (x) are the displacements in the upper and lower layers, respectively; they satisfy φ(x) = u + (x) − u − (x). In the Peierls-Nabarro models [22,19], it is assumed that u + (x) = −u − (x), and accordingly u + (x) = −u − (x) = 1 2 φ(x) from the equation above. Under these conditions, the total energy in Eq. (6.17) can be written as an expression, Eq. (6.18), that depends only on φ(x), where, from Eq. (6.9), the standard deviationσ(φ, dφ/dx) is built from θ 2 (φ) and a term proportional to (1/32h)σ ββ involving dφ/dx. If we consider the randomness in the misfit energy and the elastic energies separately, as in the previous two sections, we obtain the formulation in Eq. (6.19) for the total energy of the Peierls-Nabarro model, where the Brownian motions B (1) x , B (2) x and B (3) x represent the randomness in the misfit energy and in the elastic energies of the top and bottom layers, respectively. Because the randomness in each energy component corresponds to the same random atomic configuration, these Brownian motions are not mutually independent. Using the covariances of the different energies on the atomic level calculated in Sec. 6.1, we obtain the covariances between these Brownian motions as follows.
For any s 1 ≤ s 2 , τ 1 ≤ τ 2 , with δ c denoting the length of the overlap between the two open intervals (s 1 , s 2 ) and (τ 1 , τ 2 ), the correlations can be written in terms of the small parameters ε e and ε m defined in Eqs. (5.15) and (6.12). This energy formulation is an alternative form of Eq. (6.17). When u + = −u − in the Peierls-Nabarro model, the total energy takes the form of Eq. (6.23), where the Brownian motions B (1) x and B (2) x represent the randomness in the misfit energy and the elastic energy, respectively, and the covariance between them is expressed with the same notations s 1 , s 2 , τ 1 , τ 2 and δ c as above. This energy formulation is an alternative form of Eq. (6.18).
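The key property used here, that the covariance of Brownian increments over two intervals is proportional to the length δ c of their overlap, can be checked with a small simulation (the interval endpoints below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths = 200_000

# Brownian increments over the disjoint segments (0, 0.5), (0.5, 1.2),
# (1.2, 2.0), (2.0, 3.0); each has variance equal to the segment length.
lengths = np.array([0.5, 0.7, 0.8, 1.0])
segs = rng.normal(0.0, np.sqrt(lengths), size=(n_paths, 4))

inc1 = segs[:, 1] + segs[:, 2]  # B(2.0) - B(0.5), i.e. (s1, s2) = (0.5, 2.0)
inc2 = segs[:, 2] + segs[:, 3]  # B(3.0) - B(1.2), i.e. (t1, t2) = (1.2, 3.0)
cov = np.mean(inc1 * inc2)      # overlap (1.2, 2.0) has length delta_c = 0.8
```

For mutually correlated Brownian motions such as B (1) x and B (2) x , the same overlap length appears multiplied by the cross-correlation coefficient.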
6.4. Smoothed stochastic total energy. Using the stochastic energy in Eq. (6.19) or (6.23) (or the formulation in Eq. (6.17) or (6.18) using a single Brownian motion), we would have a Dirac delta function-like energy density and, accordingly, an infinite point force in the Peierls-Nabarro model, which is not practical for describing the continuum profile of the dislocation core structure. On the other hand, resolution in the continuum Peierls-Nabarro model below the atomic distance is not physically meaningful. Based on these considerations, we average the obtained continuum models over the size of an atomic site as follows.
Performing a similar average in the elastic energy E elastic , we obtain the smoothed stochastic total energy, in which the Gaussian variables Y 1 (x, ω), Y 2 (x, ω) ∼ N (0, 1) represent the randomness in the misfit energy and the elastic energy, respectively. These Gaussian random variables are independent at different locations, and the correlation between them involves σ em . In Ref. [36], the stochastic effects in the nonlinear interaction associated with the dislocation core under the Peierls-Nabarro model are incorporated phenomenologically through a stochastic misfit energy of the form E misfit = ∫ +∞ −∞ η(x)γ(φ)dx, with η(x) being a random variable at each location x (Eq. (8) of [36], with slightly different notations). In the stochastic Peierls-Nabarro model in Eq. (6.27) obtained here, if we consider only the misfit energy, it is E misfit = ∫ +∞ −∞ [1 + ε m Y 1 (x, ω)]γ(φ)dx. Perfect agreement is obtained if we choose η(x) = 1 + ε m Y 1 (x, ω) in the stochastic model of Ref. [36]. This validates the stochastic model adopted in Ref. [36].

7. Summary. We have derived a continuum model for inter-layer dislocations in a bilayer HEA from an atomistic model that incorporates the atomic-level randomness. The continuum model is formulated within the framework of the Peierls-Nabarro model, in which the nonlinear effect within the dislocation core region is included. The obtained continuum stochastic total energy can be written in terms of either a single Brownian motion or multiple Brownian motions (separating the stochastic effects in the different energies). Smoothed formulations of the stochastic total energy are also presented. The derivation validates the stochastic model adopted in Ref. [36].
Gravitational wave memory in $\Lambda$CDM cosmology
We examine gravitational wave memory in the case where sources and detector are in a $\Lambda$CDM cosmology. We consider the case where the universe can be highly inhomogeneous, but the gravitational radiation is treated in the short wavelength approximation. We find results very similar to those of gravitational wave memory in an asymptotically flat spacetime; however, the overall magnitude of the memory effect is enhanced by a redshift-dependent factor. In addition, we find the memory can be affected by lensing.
I. INTRODUCTION
Gravitational wave memory, a permanent displacement of the gravitational wave detector after the wave has passed, has been known since the work of Zel'dovich and Polnarev [1], was extended to the full nonlinear theory of general relativity by Christodoulou [2], and has been treated by several authors [3,4,7-18]. See also [5,6] for heuristic ideas in the weak-field regime that, however, do not mention memory. It has been shown [11] that the memory found by Zel'dovich and Polnarev in a linearized situation and the one found by Christodoulou in the nonlinear theory are two different effects, the former (i.e. linear) called ordinary memory and the latter (i.e. nonlinear) called null memory. The ordinary memory is very small, whereas the null memory is large enough to be detected by Advanced LIGO and other experiments. Most of these works treat memory in an asymptotically flat spacetime. However, we live in an expanding universe, not an asymptotically flat spacetime. Furthermore, the sources of gravitational waves are so rare that the ones that have been detected so far [19-21] have been at distances at which the expansion of the universe cannot be neglected. As the detectors become ever more sensitive, one can expect detections from sources at even greater distances, where the expansion of the universe will be even more important.
A proper treatment of memory in an expanding universe is thus crucial. This has been done in [17] for de Sitter spacetime, and in [18] for a more general cosmology, but with a particular idealized source. Our universe, however, contains both ordinary and dark matter (which for cosmological purposes can be treated as pressureless dust), as well as dark energy (which can be treated as a cosmological constant). Although the universe started as a small perturbation of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, by the present time these perturbations have grown to the point that the local density is very far from that of the unperturbed cosmology. Thus a realistic treatment of memory in our universe must treat the propagation of gravitational waves through this highly inhomogeneous medium. Furthermore, unlike the case of de Sitter spacetime, the dust equations of motion are coupled to those of the gravitational waves, so a consistent treatment of the gravitational waves should take this coupling into account.
To treat these complications we take the following approach. We begin by dividing the region far from the source into a "wave zone" and a "cosmological zone." Here the wave zone is taken to be the region where the distance from the source is large compared to the wavelength of the waves, but small compared to the Hubble distance. In the wave zone, the fields that give rise to memory will behave to an excellent approximation just as they do in Minkowski spacetime. Thus the behavior of the fields is essentially as given in [11]. The cosmological zone consists of regions where the distance is not small compared to the Hubble distance.
If we can determine how the waves change as they propagate from the wave zone to the cosmological zone, then this change, along with the results of [11], will allow us to read off the behavior of memory in the expanding universe. To treat this propagation in the cosmological zone, we will use the fact that the wavelength of the gravitational waves is short compared to all other length scales in the problem. We will therefore use the short wavelength approximation of [22], though generalized from the vacuum case to the case with dust and a cosmological constant. This approach is similar to that of [23,24], though explicitly taking into account the matter equations of motion and their coupling to gravity.
In this article, we derive the gravitational wave memory in both the wave zone and the cosmological zone. We show that in the wave zone the memory is given via an expression involving the radiated energy per unit solid angle. As the background in the wave zone is approximated well by the Minkowski metric, this memory is computed as in [11]. We also show that in the cosmological zone the memory is given by the memory computed for the wave zone multiplied by (1 + z)M , where z denotes the redshift and M is a magnification factor due to lensing and the Sachs-Wolfe effect. The computations are done with respect to the luminosity distance, which in FLRW spacetimes is the natural replacement for the r coordinate, recalling that in Minkowski spacetime the luminosity distance is equal to r.
Our derivation of the ΛCDM memory makes use of the results for asymptotically flat spacetimes [11] by the first and second of the present authors and of the short wavelength approximation [22] by Choquet-Bruhat. We recall that in [11], using a gauge-invariant method of perturbing the Weyl tensor away from a Minkowski background, two types of memory are computed, namely the null memory, or Christodoulou memory, and the ordinary memory that dates back to Zel'dovich and Polnarev. A memory tensor is derived that consists of exactly these two parts. The null memory, which is much larger than the (tiny) ordinary one, is due to the energy radiated away to infinity per unit solid angle, whereas the ordinary memory is due to changes in the (r, r) component of the electric part of the Weyl tensor. Decomposing the memory into spherical harmonics, it is shown that the major part of the null memory is due to energy radiated in the l = 2 modes. This paper restricts attention to the null memory, i.e. the Christodoulou memory. However, in cosmological spacetimes there is no "null infinity", which plays a crucial role in analyzing radiation and memory in asymptotically Minkowskian spacetimes. This problem is solved, first, by our separate treatment of memory in the wave and cosmological zones and, second, by the use of the short wavelength approximation. The latter [22] makes use of the fact that the wavelength of the gravitational waves is short compared to all other scales in the problem. In particular, we consider a one-parameter family of spacetime metrics consisting of a background metric plus ω −2 times a radiative metric of frequency ω. The short wavelength limit is then the limit of large ω. We take the background to be FLRW on large scales and add a perturbation that takes into account the local inhomogeneities in our universe. We also express each component of the stress-energy tensor in a similar way.
Thus, the spacetime metric and matter content are given by the tensor fields in (2)-(4). In situations such as the one studied here, our spacetime metric solves the Einstein equations asymptotically in the high frequency limit, that is, to a given order in 1/ω. An interesting feature of the analysis is that these perturbations can only be purely gravitational or purely fluid; therefore, for a gravitational perturbation, the fluid part vanishes at lowest order. It then follows that the decay behavior of the gravitational wave amplitude is given by a simple argument: in particular, apart from gauge terms, it is computed using the divergence of the null geodesic vector field introduced in the next section. The inhomogeneities in our universe generate curvature that can interfere with the waves. As these travel on null geodesics, we investigate the latter and seek to understand how the Weyl curvature changes. As light rays follow null geodesics, we draw on what is known about gravitational lensing in the corresponding situation to show that the Weyl tensor is multiplied by a magnification factor due to gravitational lensing. Here we use results obtained in [30]. Finally, knowing how the Weyl curvature behaves, we use this in the geodesic deviation equation to compute the memory in the two different zones, and thereby derive the results mentioned above.
The remainder of this paper is organized as follows. Section II will treat the short wavelength cosmological gravitational waves and will obtain a result for the behavior of the Weyl tensor as the waves propagate from the wave zone to the cosmological zone. Section III will use the results of Sec. II to obtain the cosmological memory. Section IV contains our conclusions and the observational implication of our work.
A. Field Equations
We want to consider waves in a cosmology that consists of dust and a cosmological constant. However, we do not want to assume that the spacetime is nearly FLRW, since at the present time initially small density perturbations have become large. Instead, we will use the approximation that the wavelength of the waves is short compared to all other scales in the problem, and we will use the weak progressive wave method of [22].
We begin with a background solution of the Einstein field equations with dust and a cosmological constant. This background solution consists of a metric ḡ ab (x µ ), a dust density ρ̄(x µ ) and a four-velocity ū a (x µ ) that satisfy the background field equations. Here R̄ ab is the Ricci tensor of ḡ ab , R̄ is the scalar curvature and Λ is the cosmological constant. This background represents the cosmology of our evolving universe, which we take to be FLRW on large scales, though with (possibly large) density contrasts on small scales. This background, however, does not describe gravitational waves or their sources. With this in mind, we introduce another one-parameter family of tensor fields ĝ ab (x µ , ξ), ρ̂(x µ , ξ), û a (x µ , ξ), and a scalar field φ(x µ ). These perturbations represent high-frequency deformations of the background that are uniformly bounded in ξ, with the only restriction that the length scale of the inhomogeneities is large compared to the much smaller wavelength of the gravitational waves.
The full spacetime metric and matter content of the universe are then given by the one-parameter family of tensor fields (g ab , ρ, u a ), which we write as in Eqs. (2)-(4), where ω is the frequency of the perturbations. The fields (g ab , ρ, u a ) represent our universe in the sense that they satisfy the Einstein-fluid equations to the appropriate order, Eq. (5), where R ab is the Ricci tensor of g ab and R is the scalar curvature. Note that here the parameter ω plays a dual role, both as the frequency of the perturbation and as an inverse amplitude. The surfaces φ = const are wavefronts, since in the large ω limit the waves vary rapidly in the direction perpendicular to them. Note also that Eq. (5) describes the waves only in the region away from their sources. This approach is somewhat different from the usual perturbative approach in general relativity. In the usual perturbative approach, we assume that there is a one-parameter family of metric tensor fields, where each member of the family is an exact solution of the field equations, but we only calculate that family to first order in the parameter. In contrast, in the weak progressive wave approach, the one-parameter family of metric tensor fields is not expanded only to first order in the parameter; rather, the field equations themselves (Eq. (5)) are only satisfied to a given order in ω −1 . The approach is similar to that of Isaacson in [24], but differs from that of Isaacson in [25], where the waves can be strong enough that their effective stress-energy has a strong effect on the background geometry. The factor of ω −2 in Eq. (2) and the conditions entailed by Eq. (5) ensure that no such strong effect is present. Our approach also differs from the usual perturbative approach in that we never use the diffeomorphism invariance of the theory to choose a particular gauge. Instead, in keeping with the methods of [11,17], all our calculations and results are stated in a manifestly gauge-invariant way.
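Since Eqs. (2)-(4) are not reproduced in this extract, the ansatz described in the text can be summarized schematically (a sketch based on the verbal description; the exact index placement and the powers of ω for the matter fields should be checked against the original equations):

```latex
g_{ab}(x^\mu) \;=\; \bar g_{ab}(x^\mu)
  \;+\; \frac{1}{\omega^{2}}\,\hat g_{ab}\bigl(x^\mu,\xi\bigr)\Big|_{\xi=\omega\phi(x^\mu)} ,
\qquad k_a \;=\; \nabla_a\phi ,
```

with ρ and u a expanded analogously about ρ̄ and ū a . The prefactor ω −2 keeps the amplitude small, while each derivative along k a brings down a factor of ω, so the waves are weak but rapidly varying.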
We now compute the covariant derivative operator and Riemann tensor of g ab . For any one-form A a , we have exactly [26] that ∇ a A b = ∇̄ a A b − C c ab A c , where ∇ a and ∇̄ a are the covariant derivative operators of g ab and ḡ ab respectively, and where the difference tensor C c ab is given in Eq. (7). (The formula for the difference tensor C c ab is similar to that for the Christoffel symbol Γ c ab ; indeed, the Christoffel symbol of a metric g ab is the difference tensor between the covariant derivative ∇ a and the coordinate derivative ∂ a . See [26] for details.)
Using Eq. (2) in Eq. (7) and expanding in 1/ω, we then obtain the leading behavior of C c ab , where k a = ∇ a φ and ĝ ab is considered as a function of x µ and ξ. A prime denotes a derivative with respect to ξ, and ∇ a takes derivatives only with respect to x µ . It is only once all these operations are performed that we evaluate all quantities at ξ = ωφ(x µ ).
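For reference, the standard expression for the difference tensor between the two connections (see, e.g., [26]), together with the leading high-frequency behavior obtained by inserting the ansatz, plausibly matches the content of the elided Eqs. (7) and (8); the second expression is a reconstruction and should be checked against the original:

```latex
C^{c}{}_{ab} \;=\; \tfrac{1}{2}\,g^{cd}\left(\bar\nabla_a g_{bd}
  + \bar\nabla_b g_{ad} - \bar\nabla_d g_{ab}\right),
\qquad
C^{c}{}_{ab} \;=\; \frac{1}{2\omega}\left(k_a\,\hat g'_{b}{}^{c}
  + k_b\,\hat g'_{a}{}^{c} - k^{c}\,\hat g'_{ab}\right) + O(\omega^{-2}) .
```

In particular, the terms quadratic in C c ab are then manifestly O(ω −2 ), as used below.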
Let us now compute the Riemann tensor to O(ω −1 ). A standard result of general relativity is that the Riemann tensor of a metric g ab can be written exactly in terms of the background Riemann tensor R̄ abc d and the difference tensor. From Eq. (8), it follows that terms quadratic in C c ab are O(ω −2 ). Using Eq. (8) in this expression, we obtain the expansion in which the tensors R (0) abc d and R (1) abc d are given in Eqs. (11) and (12). Contracting on the indices b and d, we find the Ricci tensor of g ab . Here R̄ ac is the Ricci tensor of the background, and the tensors R (0) ac and R (1) ac are given in Eqs. (14) and (15), where we raise and lower indices with the background metric, and the one-forms P a and L a are then defined, with ĝ = ĝ a a . The quantities P a and L a can be changed by a gauge transformation, and indeed in the usual perturbative approach the radiation gauge is used to set analogous quantities to zero. As stated before, we take a gauge-agnostic view here, and thus we will not impose such gauge conditions.
B. Solutions to O(ω 0 )
We now consider the consequences of the field equations, Eq. (5). Equations (3) and (4) imply that the matter terms give no corrections to the field equations at O(ω 0 ). Therefore, Eq. (5) implies that R (0) ab = 0. Thus at zeroth order our results are the same as those of [22] for vacuum progressive waves. These results comprise two cases: • Case (i): k a k a = 0.
It then follows from Eq. (14) that P ′′ a = 0, but then we must have P ′ a = 0, since otherwise P a would grow linearly with ξ, and this would violate our assumption that ĝ ab is uniformly bounded. Since k a = ∇̄ a φ, it follows from k a k a = 0 that k a ∇̄ a k b = 0, i.e. the waves propagate along null geodesics.
• Case (ii): k a k a ≠ 0.
It then follows from Eq. (14) that ĝ ab takes the form ĝ ab = k (a s b) for some s a , but this is a pure gauge mode. Consider a vector field η a = ω −3 q a (x µ , ωφ) and act on the background metric ḡ ab with a diffeomorphism along η a . This produces a physically identical metric that, to O(ω −2 ), takes the form of a perturbation of ḡ ab by a ĝ ab of the form ĝ ab = k (a s b) , which is thus a pure gauge mode.
For our purposes, a different way of seeing that a perturbation of the form ĝ ab = k (a s b) is pure gauge is to note that, from Eq. (12) and the vanishing of R (0) ab , the curvature of the wave at lowest order is the zeroth order Weyl tensor, which is given by Eq. (19). Note then that a ĝ ab of the form ĝ ab = k (a s b) leads to a zero Weyl tensor of the wave.
Let us begin by considering the equation of motion for the matter fields. From the Bianchi identities and Eq. (5) we obtain Eq. (21), from which Eq. (22) follows. Now using Eqs. (3) and (4) in Eqs. (21) and (22), we find that to O(ω 0 ) the perturbations satisfy k a ū a ρ̂ + ρ̄ k a û a = 0 , together with the relations Eqs. (23) and (24).
From Eq. (23) it follows that either k a ū a = 0 or û a = 0. However, k a ū a = 0 is not compatible with k a k a = 0 (if k a is orthogonal to ū a , then k a is spacelike and therefore cannot be null). Thus, if we have a (not pure gauge) gravitational wave, then we must have û a = 0. It then follows from (24) that ρ̂ = 0. In other words, the fluid perturbation vanishes. In physical terms, what all this means is that gravitational perturbations (which travel at the speed of light) and fluid perturbations (which travel at the speed of sound, in this case zero because the fluid is dust) cannot have the same wavevector. Thus a perturbation with a single wavevector must be pure gravity or pure fluid.
Let us now consider a non-trivial gravitational perturbation. Since the fluid perturbation vanishes at lowest order, it follows that R (1) ab = 0, which yields Eq. (25). That is, up to terms that are pure gauge, the fall-off of the gravitational wave amplitude is determined by the properties of the divergence of the null geodesic vector field k a . This result can be stated in a manifestly gauge invariant way as follows. Taking the derivative with respect to ξ of Eq. (25) and using the result in Eq. (19), we obtain Eq. (26).
D. Implications in Homogeneous Background Spacetimes
We now consider the implications of Eq. (26) in Minkowski spacetime and in an FLRW spacetime. Recall that the line element of Minkowski spacetime can be written in spherical polar coordinates as Eq. (27). It then follows that k a (defined to be the affinely parameterized radial outgoing null geodesic) is given by k a = ∇ a (r − t) and therefore that ∇ a k a = 2/r. It then follows from Eq. (26) that Eq. (28) holds. Similarly, in FLRW spacetime the line element is Eq. (29), with the corresponding k a given by Eq. (30) and its divergence by Eq. (31). However, it follows from Eqs. (29) and (30) that k a ∇ a r = a −2 and that k a ∇ a a = ȧ/a, and thus from Eq. (31) we know that ∇ a k a = (2/(ar)) k a ∇ a (ar) .
From Eq. (26) it then follows that k e ∇ e ( ar C (0) abcd ) = 0 , i.e. the combination ar C (0) abcd is constant along the null rays.
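The Minkowski relation ∇ a k a = 2/r used above is easy to verify numerically: for the outgoing radial null congruence k a = ∇ a (r − t) one has k r = 1, and the spherical-coordinate divergence reduces to (1/r²) d/dr (r² k r). The grid spacing and the sample radii below are arbitrary choices for this sketch.

```python
# Finite-difference check, in Minkowski spacetime, that the outgoing radial
# null congruence k_a = grad(r - t) (so k^r = 1) has divergence
# div k = (1/r^2) d/dr (r^2 k^r) = 2/r.

def divergence(kr, r, h=1e-5):
    """(1/r^2) d/dr (r^2 * kr(r)) by central differences."""
    fp = (r + h) ** 2 * kr(r + h)
    fm = (r - h) ** 2 * kr(r - h)
    return (fp - fm) / (2.0 * h * r ** 2)

kr = lambda r: 1.0          # radial component of k^a for k_a = grad(r - t)

for r in (0.5, 1.0, 10.0):
    assert abs(divergence(kr, r) - 2.0 / r) < 1e-6
```

The central difference is exact for the quadratic r², so the agreement is limited only by floating-point rounding.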
E. Implications in an Inhomogeneous Background Spacetime
The background spacetimes we consider are FLRW on large scales, but on small scales the null geodesics can encounter curvature that can lead to modifications in wave propagation. How, then, are we to take this additional effect into account? Since light rays are described by null geodesics, the effect of lensing on the brightness of a light wave is given by an equation of the same form as Eq. (26). We therefore expect that the additional effect of inhomogeneities is precisely to multiply the Weyl tensor by a magnification factor due to gravitational lensing. In this section, we will re-derive this result, using results from [30].
Consider a scalar A that satisfies Eq. (34). We can then use this equation to rewrite Eq. (26) as Eq. (35), which we recognize as the product-rule distributed version of Eq. (36). The expression above can be used to calculate the memory, if we can find an expression for A −1 . Let us thus specialize to an inhomogeneous FLRW spacetime background, described by the line element of Eq. (37), where δ ij is the Kronecker delta and (Φ, Ψ) are matter inhomogeneities that in principle depend on conformal time τ (related to the time coordinate t via dt = a(τ )dτ ) and the Cartesian coordinates x i . Such a perturbed FLRW spacetime suggests similar perturbative decompositions of other quantities, such as the scalar function A = A 0 (1 + ζ), where A 0 and ζ are independent of and linearly dependent on the matter inhomogeneities, respectively. By comparison with Eq. (33), we immediately see that A 0 = 1/(ar). The part of A that is linearly proportional to the matter inhomogeneities can be obtained by solving Eq. (34) linearized in (Φ, Ψ). This equation, in turn, depends on the solution of the null-geodesic equation in the perturbed spacetime of Eq. (37). The calculation, using the methods of [30], is given in appendix B. Here, we just present the final result, Eq. (38), where λ is the affine parameter of null geodesics in the non-expanding but inhomogeneous spacetime (Eq. (37) with a(τ ) set to unity), Ψ e is the value of Ψ at emission, and D A D A is the Laplacian on the unit two-sphere. The first term in Eq. (38) corresponds to the standard Sachs-Wolfe effect [31], while the second is a magnification due to lensing [30]. The result in Eq. (38) is similar to the corresponding equation in [30]; however, we correct an overall minus sign in that reference, and we improve the accuracy of the terms involving angular derivatives.
With A calculated, we then find that Eq. (26) in the spacetime of Eq. (37) simplifies to Eq. (39), where once more we have expanded in small matter inhomogeneities. As we will see in the next section, the ζ term magnifies the signal, thus magnifying the memory effect.
III. COSMOLOGICAL MEMORY
We now apply the results of the previous section, and in particular of Eq. (33), to gravitational wave memory. Let us begin by introducing two 4-dimensional spacetime regions: the wave zone and the cosmological zone. The wave zone is defined through the asymptotic relation H −1 0 ≫ r ≫ λ, while the cosmological zone is defined through r ≳ H −1 0 , where r is the distance from the gravitational wave emitting source to a field point, λ is the gravitational wave wavelength and H 0 is the Hubble parameter today. In the wave zone, the FLRW background spacetime can be well-approximated by the Minkowski metric, while in the cosmological zone one must use the full FLRW metric. Let us then imagine that a gravitational wave is emitted at r 0 = 0, detected first at r 1 in the wave zone and then detected again at r 2 in the cosmological zone. The goal of this section is to compare a memory measurement in the wave zone to another measurement in the cosmological zone. We begin by considering the cosmological memory in a homogeneous FLRW background, and then conclude this section with a discussion of the effect of inhomogeneities.
Let us first recall some of the basic properties of the gravitational wave memory [11]. For two nearby geodesics with four-velocity u a and separation s a acted on by a gravitational wave with Weyl tensor C abcd , the geodesic deviation equation requires that s̈ a satisfy Eq. (40), where an overdot denotes a derivative with respect to the proper time of the geodesics. For simplicity we assume an initial displacement orthogonal to the direction of propagation of the wave, and we consider only the memory due to energy radiated to infinity [2]; we use capital letters to denote indices in this two-sphere of orthogonal directions. Measurements in the wave zone can be related to measurements in the cosmological zone through the definition of the luminosity distance. In Minkowski spacetime, the luminosity distance is the same as the usual r coordinate, but in an FLRW spacetime this is not the case, since the FLRW r coordinate does not have a physical meaning by itself. The luminosity distance is defined as d L = [P/(4πF )] 1/2 , where P is the power of a light source and F is the flux through a sphere of radius equal to the luminosity distance. Because the time of flight and the frequency of photons and gravitons redshift as they propagate in an expanding universe, we then have that d L = ra(1 + z), where z is the redshift and a is the scale factor at the location of the measurement. For the general treatment we present in this paper, we will find it convenient to express all of our results in terms of d L .
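The luminosity distance can be evaluated concretely once a background expansion history is chosen. The sketch below computes d L (z) for an assumed flat ΛCDM model, with d L = (1 + z) (c/H 0 ) ∫ dz′/E(z′); the values of H 0 and Ω m are illustrative assumptions, not inputs from this paper.

```python
import math

# Illustrative flat-LambdaCDM luminosity distance:
#   d_L(z) = (1+z) * (c/H0) * Integral_0^z dz'/E(z'),  E(z) = sqrt(Om(1+z)^3 + OL).
# H0 = 70 km/s/Mpc and Om = 0.3 are assumed placeholder parameters.

C_KM_S = 299792.458     # speed of light, km/s

def luminosity_distance(z, h0=70.0, om=0.3, n=10000):
    """d_L in Mpc, by trapezoidal integration of the comoving distance."""
    ol = 1.0 - om
    dz = z / n
    zs = [i * dz for i in range(n + 1)]
    integrand = [1.0 / math.sqrt(om * (1.0 + zz) ** 3 + ol) for zz in zs]
    comoving = (C_KM_S / h0) * dz * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
    return (1.0 + z) * comoving

# Hubble-law limit: d_L -> c z / H0 for z << 1
assert abs(luminosity_distance(0.01) / (C_KM_S * 0.01 / 70.0) - 1.0) < 0.02
```

At low redshift this reproduces the Hubble law, while at z of order unity the distance deviates strongly from the naive cz/H 0 extrapolation.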
Equation (40) implies that after the wave has passed there will be a residual change in the separation ∆s a . Let the original separation s be in the B direction. Then the change in separation ∆s in the A direction is given by Eq. (41), where the memory tensor m A B is given by Eq. (42). Here x a and y a are respectively unit vectors in the A and B directions. For simplicity, we treat the case where x a and y a are orthogonal to the direction of wave propagation. (Note that these conventions are opposite to those used in [30], where the observer is at the end of the gravitational wave worldline. This, for example, affects the sign of the n i vector, which should point in the direction of arrival of the gravitational wave. However, since only even powers of n i enter our equations, the signs cancel and the results are the same.) One can write a similar expression for the differential change in arm length given a gravitational wave at a generic sky location through the inclusion of an antenna pattern tensor, as we will show in appendix A. In a spacetime with a Minkowski background, there is a relation between the memory tensor m AB and the energy radiated to null infinity. Specifically, let F (θ, ϕ) be the energy per unit solid angle radiated to infinity in the direction given by the two-sphere coordinates (θ, ϕ) and let D A be the derivative operator on the unit two-sphere. Then m AB is the unique traceless tensor satisfying Eq. (43), where Φ is the solution of Eq. (44) and F [1] is the sum of the ℓ = 0 and ℓ = 1 pieces of F . Equations (40-42) are general and thus apply both at r 1 with a memory tensor m (1) AB and at r 2 with a memory tensor m (2) AB . Equations (43-44), on the other hand, are specific to spacetimes with a Minkowski background and thus apply only at r 1 . Thus, our strategy for calculating cosmological memory is as follows: m (1) AB is determined by the local (i.e. at r 1 ) F using the usual Minkowski spacetime methods. Then, using Eqs.
(33) and (42) we will determine m (2) AB from the local F . Let w a , z a and q a be the vectors that start as u a , x a and y a respectively at r 1 and are parallel propagated along k a toward r 2 . With these definitions in hand, it then follows from Eq. (33) that the quantity in Eq. (45) is constant along the null geodesic to which k a is tangent, and therefore this quantity is the same at r 2 as it is at r 1 . However, it follows from the properties of FLRW spacetimes that z a = x a and q a = y a and that Eq. (46) holds, where a 1 = a(t 1 ) and t 1 is the time at which the gravitational waves cross r 1 , which for our purposes is essentially the time the waves are emitted. However, it follows from Eq. (19) that k a C (0) abcd = 0, and then from Eqs. (45) and (46) that a second combination is constant along the null geodesic to which k a is tangent, and therefore this quantity is the same at r 2 as it is at r 1 , Eq. (48), where a 1,2 = a(t 1,2 ) and t 1,2 is the time when the gravitational wave is detected at r 1,2 . Let us use this relation to express the memory in terms of the luminosity distance and the redshift. For the gravitational waves of interest to us, the luminosity distance from the source to the wave zone measurement at r 1 is simply d (1) L = r 1 a 1 (1 + z 1 ) ∼ r 1 a 1 , while the luminosity distance from the source to the cosmological measurement at r 2 is d (2) L = r 2 a 2 (1 + z 2 ), where z 1 ≪ 1 is the redshift between r 0 and r 1 and z 2 , defined by 1 + z 2 = a 2 /a 1 , is the redshift between r 0 (or r 1 ) and r 2 . Thus, it follows from Eq. (48) that the wave amplitude at r 2 can be expressed in terms of the local amplitude at r 1 , the luminosity distance and the redshift. We can now use this expression in Eq. (42), together with the fact that dt at r 2 is 1 + z 2 times dt at r 1 , to find the cosmological memory. This would be the result if the spacetime were exactly FLRW without matter inhomogeneities. As we discovered in Sec. II E, matter inhomogeneities introduce a lensing correction to the amplitude of gravitational waves. Following the same reasoning as above and using Eq. (39), we then find the lensing-corrected result, where the contribution of ζ at r 1 vanishes because it is very close to the emission point λ e .
Using again Eq. (42) and expanding in ζ 2 ≪ 1, we then find m (2) AB , in which ζ 2 induces a magnification or a demagnification (analogous to focusing and de-focusing) of the signal. This result is consistent with the analysis of [30] and the results of [17,18].
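Schematically, the chain of results above says that a wave-zone memory measurement extrapolated to cosmological distances picks up a factor (1 + z) relative to the naive 1/r fall-off once distances are expressed through d L , together with a lensing (de)magnification factor (1 + ζ). A minimal sketch of that bookkeeping follows; the function name and the numbers are illustrative, not the paper's notation.

```python
# Rescale a wave-zone memory measurement to a cosmological observer, assuming
# the scaling derived above: memory ~ (1+z)(1+zeta)/d_L relative to the local
# 1/(physical distance) behavior at r_1.

def memory_at_observer(m_wavezone, r1_phys, d_lum, z, zeta=0.0):
    """Memory at the cosmological observer given a wave-zone value m_wavezone
    measured at physical distance r1_phys from the source."""
    return m_wavezone * (r1_phys / d_lum) * (1.0 + z) * (1.0 + zeta)

m1 = 1.0e-22           # wave-zone memory amplitude (arbitrary units)
m2 = memory_at_observer(m1, r1_phys=1.0, d_lum=6.6e3, z=1.0, zeta=0.05)

# enhancement relative to a pure 1/d_L extrapolation is (1+z)(1+zeta):
assert abs(m2 / (m1 * 1.0 / 6.6e3) - (1.0 + 1.0) * (1.0 + 0.05)) < 1e-12
```

The point of the sketch is only the bookkeeping: at z = 1 the memory is roughly twice what a naive Minkowski 1/d L extrapolation would suggest, with lensing adding a percent-level (de)magnification.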
IV. ASTROPHYSICAL IMPLICATIONS
We have seen that the gravitational wave memory acquires a redshift enhancement and a lensing correction when the gravitational waves travel large cosmological distances through matter inhomogeneities. Let us now study the degree to which these inhomogeneous and cosmological modifications affect astrophysical observations with gravitational waves.
Current (second-generation) ground-based detectors are sensitive only to low-redshift sources. This is because ground-based instruments operate in the hecto-Hz range, allowing for the detection of black hole mergers with masses not larger than O(10 2 M ⊙ ), which restricts the detection range to redshifts below O(10 −1 ). This immediately implies that the redshift magnification will not exceed order 10%, while lensing modifications are probably an order of magnitude smaller than that. Such small modifications will not magnify the gravitational wave memory enough to make it detectable with single observations. Recent work has suggested that the stacking of multiple observations may make the memory effect detectable [32], and here including the redshift magnification will probably be important, although lensing is unlikely to matter. Fortunately, the waveform models that the LIGO collaboration uses already include the redshift magnification, and thus, no modifications to the analysis are needed.
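The stacking argument can be quantified with a back-of-the-envelope estimate: for N independent events the combined signal-to-noise ratio grows like √N, so reaching a detection threshold requires a number of events that scales as the inverse square of the single-event SNR. The numbers below are illustrative assumptions only.

```python
import math

# If one event yields a memory SNR of snr_single, coherently combining N
# independent events gives SNR_total = sqrt(N) * snr_single, so detection at
# threshold `thresh` needs N ~ (thresh/snr_single)^2 events.

def events_needed(snr_single, thresh=5.0):
    return math.ceil((thresh / snr_single) ** 2)

assert events_needed(0.5) == 100          # snr_1 = 0.5 -> 100 events for SNR 5
assert events_needed(5.0) == 1            # a loud single event suffices
```

This is the sense in which sub-threshold memory signals can accumulate into a detection over a catalog of events.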
Once the next generation (third-generation) gravitational wave detectors come online, the redshift enhancement of the memory will become very important and lensing might also need to be included. The gravitational wave community is currently studying the possibility of upgrading the current aLIGO detectors (e.g. Voyager, Cosmic Explorer, Einstein Telescope) within the next one or two decades [33,34]. Such detectors will have a significantly improved sensitivity that will allow for the detection of gravitational waves emitted at much larger redshift. For such events, the redshift magnification will enhance the gravitational wave amplitude by an order of magnitude, while lensing may affect it by O(10%). Recent work that included the redshift magnification suggests that such third-generation detectors may be able to detect the memory without the need of stacking (i.e., with single events) [35]. The statistical combination of multiple events detected with third-generation observatories may also allow for the mapping of the lensing potential, although it is not clear how important this modification in the memory part of the signal will be.
Space-borne detectors, such as LISA [37], are also being planned by both the European Space Agency and NASA, with an expected launch date in the early 2030s. Such space-borne detectors will detect gravitational waves in the milliHz frequency range, allowing for the detection of supermassive black hole mergers (with masses as large as O(10 7 M ⊙ )) at redshifts as large as 10. Clearly, for such events the redshift magnification will increase the amplitude of the signal by an order of magnitude and the focusing or defocusing effect of lensing will also be important. Recent work has suggested that such observations will also be able to detect the memory effect with single events [35]. Here again, the inclusion of the redshift magnification to the memory is crucial, although it is less clear how relevant the lensing correction is. Fortunately, the LISA community already has waveform models that include both the redshift magnification and the lensing correction calculated in this paper, so no additional model-building is necessary. It would be interesting to see if the lensing correction to the memory can contribute to the mapping of the lensing potential with many LISA observations, as has been argued could be possible with the non-memory part of the signal [30,36].
Last but not least, gravitational waves may also soon be detected in the nano-Hz range with pulsar timing arrays [38]. The idea here is to cross-correlate the signal of multiple pulsars to disentangle any correlated residuals in the times of arrival of the pulses. Because of their frequency of operation, pulsar timing arrays are expected to detect gravitational waves produced in the mergers of supermassive black holes (with masses of O(10 9 -10 10 M ⊙ )) at redshifts of O(5-10). Once again, the inclusion of the redshift enhancement and the lensing correction to the memory should also be important in the extraction of the memory from pulsar-timing gravitational wave observations. Work along this line has only recently begun and is currently ongoing.
To zeroth order in perturbation theory, one finds the Friedmann equations for the scale factor, which we can solve for any energy content of the Universe. To first order in perturbation theory, one finds a wave equation for the perturbation after imposing the Lorenz gauge. The differential operator on FLRW is simply expressed in terms of H = a ′ /a, where primes denote partial differentiation with respect to conformal time.
Let us now use the geometric optics approximation (also known as the short-wavelength approximation or the WKB approximation) to express an ansatz for the solution to this differential equation, where κ is the conformal wavenumber, n k is a spatial unit vector (pointing in the direction of propagation of the wave), χ i is a conformal spatial coordinate and φ(η) is assumed to vary much more rapidly than the amplitude tensor A ab (η), i.e. A ′ /A ≪ φ ′ . With this ansatz, the propagation equation reduces to Eq. (A8). Alternatively, one can also arrive at this equation through a Fourier analysis. The dispersion relation in Eq. (A8) is a differential equation for φ. One can solve this equation easily if one assumes that φ ′′ ≪ (φ ′ ) 2 and that H ′ ≪ H 2 . The second condition is true in cosmology, unless one is considering the inflationary era. The first condition is true when f ′ ≪ f 2 ; this, for example, holds during the inspiral of compact binaries. With these conditions, one then finds an explicit solution for φ, since H/κ ≪ 1 for the sources we have in mind, with κ nearly constant. With this at hand, we can now reconstruct the propagated GW, Eq. (A10), where we have used that a s /a o = (1 + z) −1 . This result is sensible because one expects the GW amplitude to be Hubble diluted as the GW propagates in an expanding background. Let us then reconstruct the full GW, including the source dependence, as observed a cosmological distance away. Comparing Eq. (A10) to Eqs. (A1) and (A2), we see that A ab = A e ab , where e ab is a polarization tensor and A is an overall amplitude. The radial distance here, r FLRW , is that associated with the FLRW conformal coordinate in Eq. (A7), and is thus related to the comoving distance d M and the luminosity distance d L . Writing the amplitude in terms of d L and re-expressing the result in terms of the observable frequency, f s = (1 + z)f o , one then sees that this is identical to what one finds with Thorne's trick, namely Eqs. (A4) and (A5).
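The observational bookkeeping of the appendix, amplitude diluted by the luminosity distance and frequency redshifted as f o = f s /(1 + z), can be sketched in a few lines. The function name and numbers are illustrative placeholders.

```python
# Geometric-optics bookkeeping: a monochromatic source-frame strain with
# amplitude a_s and frequency f_s arrives at a cosmological observer with
# amplitude diluted by the luminosity distance and frequency redshifted,
# f_o = f_s / (1 + z).

def observed_waveform(a_s, f_s, z, d_lum):
    """Return (amplitude, frequency) measured by the distant observer."""
    return a_s / d_lum, f_s / (1.0 + z)

amp, f_o = observed_waveform(a_s=1.0, f_s=100.0, z=1.0, d_lum=6.6e3)
assert abs(f_o - 50.0) < 1e-12          # frequency halved at z = 1
assert amp < 1.0                        # distance dilution of the amplitude
```

This is exactly the replacement r → d L with f s = (1 + z) f o described in the text.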
Appendix B: Amplitude of gravitational waves in an inhomogeneous universe
We want to find the amplitude A satisfying eqn. (B1) in the perturbed FLRW metric of eqn. (37). Here k a is an affinely parameterized null geodesic congruence with affine parameter λ, and θ = ∇ a k a is the divergence of that congruence. The metric of eqn. (37) can be written as a 2 (η ab + γ ab ), where η ab is the metric of Minkowski spacetime, γ ab is given by eqn. (B2), and t a is the unit vector in the time direction.
It is a standard result that if k a is an affinely parameterized null geodesic in the metric g ab then Ω −2 k a is an affinely parametrized null geodesic of the metric Ω 2 g ab . It then follows that if A is a solution of eqn. (B1) for the metric g ab then Ω −1 A is a solution of eqn. (B1) in the metric Ω 2 g ab . In this appendix we will calculate A for the perturbed flat metric η ab + γ ab (see eqns. (B12-B13) below). It then follows that A for the perturbed FLRW metric of eqn. (37) is the result of eqn. (B12) multiplied by a −1 .
We start by calculating θ in the perturbed metric. We do this in two ways: first with a coordinate method, like that of [30], and then with a geometric method.
We decompose k a as k a = (1 + β)k̄ a + αt a + s a (B3), where k̄ a = t a + r̂ a is the background value of k a , and s a is orthogonal to both t a and k a . We will calculate only to first order in perturbation theory. Note that α, β and s a are all first order quantities. Therefore, to zeroth order in perturbation theory, we can use k a and k̄ a interchangeably. Also note that k̄ a ∇ a r = 1. Therefore, to zeroth order in perturbation theory, we can use r and the affine parameter λ interchangeably. The fact that g ab k a k b = 0 immediately yields a first constraint. A decomposition of the Christoffel symbols then yields an expression in which P ab = δ ab − r̂ a r̂ b is the projection to the two-sphere. The geodesic equation then determines the evolution of α, β and s a . We then find ∇ a k a = ∂ a k a + Γ a ab k b = ∂ a [(1 + β)k̄ a + αt a + s a ] + Γ a ab k̄ b = (2/r)(1 + β) + ∂ λ β + ∂ t α + ∂ a s a + ∂ λ (Φ − 3Ψ). However, we have L k (∂ a s a ) = ∂ a (L k s a ) = ∂ a [ −2λ −1 s a − P ab ∂ b (Φ + Ψ) ], where D A D A is the Laplacian on the unit two-sphere. We then find θ, and integrating we obtain A up to an integration "constant" c that can depend on the angle; the quantity ζ is given by eqn. (B13). We now turn to a geometric derivation of the result for A. The null Raychaudhuri equation governs θ along the congruence. But our null geodesic congruence is the light cone of the point of emission of the waves. So the shear σ ab and the rotation ω ab vanish in the flat spacetime background, and their squares are second order and can therefore be neglected in first order perturbation theory. Thus to first order the Raychaudhuri equation becomes a linear equation for θ, which can be integrated along the congruence. A standard formula for the perturbed Ricci tensor is eqn. (B21). Applying eqn. (B21) to the metric perturbation in eqn.
(B2) yields the intermediate results eqns. (B22)-(B24). Using eqn. (B24) in eqn. (B20) then yields the result of eqn. (B11). Thus the geometric method agrees with the coordinate method.
"year": 2017,
"sha1": "a41142ca5b58783aac455d491e35a41cd59dc49f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1706.02009",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f4a6d2a6b3f72d534a5daea83b136c7128db642a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
To understand the sample-to-sample fluctuations in disorder-generated multifractal patterns we investigate analytically as well as numerically the statistics of high values of the simplest model - the ideal periodic $1/f$ Gaussian noise. By employing the thermodynamic formalism we predict the characteristic scale and the precise scaling form of the distribution of number of points above a given level. We demonstrate that the powerlaw forward tail of the probability density, with exponent controlled by the level, results in an important difference between the mean and the typical values of the counting function. This can be further used to determine the typical threshold $x_m$ of extreme values in the pattern which turns out to be given by $x_m^{(typ)}=2-c\ln{\ln{M}}/\ln{M}$ with $c=3/2$. Such observation provides a rather compelling explanation of the mechanism behind universality of $c$. Revealed mechanisms are conjectured to retain their qualitative validity for a broad class of disorder-generated multifractal fields. In particular, we predict that the typical value of the maximum $p_{max}$ of intensity is to be given by $-\ln{p_{max}} = \alpha_{-}\ln{M} + \frac{3}{2f'(\alpha_{-})}\ln{\ln{M}} + O(1)$, where $f(\alpha)$ is the corresponding singularity spectrum vanishing at $\alpha=\alpha_{-}>0$. For the $1/f$ noise we also derive exact as well as well-controlled approximate formulas for the mean and the variance of the counting function without recourse to the thermodynamic formalism.
Abstract. Motivated by the general problem of studying sample-to-sample fluctuations in disorder-generated multifractal patterns we attempt to investigate analytically as well as numerically the statistics of high values of the simplest model -the ideal periodic 1/f Gaussian noise. Our main object of interest is the number of points N M (x) above a level x 2 V m , with V m = 2 ln M standing for the leading-order typical value of the absolute maximum for the sample of M points. By employing the thermodynamic formalism we predict the characteristic scale and the precise scaling form of the distribution of N M (x) for 0 < x < 2. We demonstrate that the powerlaw forward tail of the probability density, with exponent controlled by the level x, results in an important difference between the mean and the typical values of N M (x). This can be further used to determine the typical threshold x m of extreme values in the pattern which turns out to be given by x (typ) m = 2 − c ln ln M / ln M with c = 3 2 . Such observation provides a rather compelling explanation of the mechanism behind universality of c. Revealed mechanisms are conjectured to retain their qualitative validity for a broad class of disorder-generated multifractal fields. In particular, we predict that the typical value of the maximum p max of intensity is to be given by − ln p max = α − ln M +
Introduction
Investigation of multifractal structures of diverse origin has for several decades been a very active field of research in various branches of the applied mathematical sciences, such as chaos theory, geophysics and oceanology [1,2] as well as climate studies [3], mathematical finance [4,5], and such areas of physics as turbulence [6,7], growth processes [8], and the theory of quantum disordered systems [9]. The main characteristic of a multifractal pattern of data is high variability over a wide range of space or time scales, associated with huge fluctuations in intensity which can be detected visually.
To set the notations, consider a certain (e.g. hypercubic) lattice of linear extent L and lattice spacing a in d−dimensional space, with M ∼ (L/a) d ≫ 1 standing for the total number of sites in the lattice. The multifractal patterns are then usually associated with a set of non-negative "heights" h i ≥ 0 attributed to every lattice site i = 1, 2, . . . , M such that the heights scale in the limit M → ∞ differently at different sites: h i ∼ M x i , with exponents x i forming a dense set. To characterize such a pattern of heights quantitatively it is natural to count the sites with the same scaling behaviour. Then a multifractal measure is characterized by a (usually concave) single-smooth-maximum singularity spectrum function f (x). Denoting the position of its maximum as x = x 0 , such a function describes the (large-deviation) scaling of the number of points in the pattern whose local exponents x i belong to some interval around x 0 . More precisely, defining the density of exponents by ρ M (x) = ∑ M i=1 δ( ln h i / ln M − x ), a nontrivial multifractality implies that such a density should behave in the large-M limit as in Eq. (1) [10], with a prefactor c M (x) of the order of unity which may still depend on x. We will refer below to the above form as the multifractal ansatz. The major effort in the last decades has been directed towards determining the shape and properties of f (x). In contrast, our main object of interest will be the behaviour of the prefactor c M (x), which is much less studied, to the best of our knowledge. In particular, if the multifractal pattern is randomly generated, like e.g. those considered in [9], the ansatz (1) is expected to be valid in every realization of the disorder. One may then be interested in understanding the sample-to-sample fluctuations of the prefactor c M (x).
To that end we find it convenient to introduce the counting functions (a side remark on conventions: usually one defines the exponents γ i via the relation h i ∼ L γ i , i.e. by reference to the linear scale L instead of the total number of sites M ∼ (L/a) d , and similarly for the density of exponents ρ(γ) ∼ L f (γ) ; we however find it more convenient to use instead the exponents x i = γ i /d and the singularity spectrum f (x) = (1/d) f (γ)), Eq. (2),
for the total number N > (x) of sites of the lattice where the heights satisfy h i > M x (respectively, h i < M x for N < (x)). Substituting the multifractal form of the density into (2) and performing at ln M ≫ 1 the resulting integral for x > x 0 by the Laplace method, we find N > (x) ≈ c M (x) M f (x) / ( |f ′ (x)| √ ln M ) and a similar expression for N < (x) for x < x 0 , relating the singularity spectrum f (x) to the counting functions. As both N > (x) and N < (x) cannot be smaller than unity, we necessarily have f (x) ≥ 0 for all x, and the condition f (x) = 0 generically defines the maximal x + and the minimal x − threshold values of the exponents which can be observed in a given height pattern.
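A minimal numerical sketch of the objects just defined, using the ideal periodic 1/f Gaussian noise studied in this paper: we synthesize one sample with Fourier amplitudes ∝ 1/√k (the √(2/k) normalization is our choice, made so that Var[V ] ≈ 2 ln M), set h i = e V i , and evaluate the counting function N > (x). The seed and sample size are arbitrary.

```python
import numpy as np

# One sample of periodic 1/f Gaussian noise on M lattice points, and its
# counting function N_>(x) = #{ i : h_i = exp(V_i) > M^x }.

rng = np.random.default_rng(0)
M = 2048
k = np.arange(1, M // 2 + 1)                       # Fourier mode numbers
phase = 2.0 * np.pi * np.outer(k, np.arange(M)) / M
V = (np.sqrt(2.0 / k)[:, None]
     * (rng.standard_normal(M // 2)[:, None] * np.cos(phase)
        + rng.standard_normal(M // 2)[:, None] * np.sin(phase))).sum(axis=0)

def n_above(x):
    """Counting function N_>(x): number of sites with h_i = e^{V_i} > M^x."""
    return int(np.sum(V > x * np.log(M)))

# N_>(x) is non-increasing in the level x and, to leading order, ~ M^{f(x)}
levels = [0.25, 0.5, 1.0, 1.5]
counts = [n_above(x) for x in levels]
assert all(c1 >= c2 for c1, c2 in zip(counts, counts[1:]))
assert counts[0] > counts[-1] >= 0
```

Repeating this over many seeds gives direct access to the sample-to-sample fluctuations of N > (x) that are the subject of the paper.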
The singularity spectrum f (x) is not a quantity which is easily calculated analytically, or even numerically, for a given multifractal pattern of heights [10]. An alternative procedure for analysing the multifractality is frequently referred to in the literature as the thermodynamic formalism [1,2]. In that approach one characterizes the multifractal pattern by the set of exponents ζ q describing the large-M scaling behaviour of the so-called partition functions Z q , as in Eq. (3). To relate ζ q to the singularity spectrum f (x) discussed above, one rewrites (3) in terms of the density as Z q = ∫ ∞ −∞ M qx ρ M (x) dx, and again employs the multifractal ansatz (1) for ρ M (x). Evaluating the integral in the ln M ≫ 1 limit by the steepest descent (Laplace) method gives Eq. (4), where we have assumed that x − < x * < x + for simplicity. This shows that the relation between ζ q and f (x) is given essentially by the Legendre transform. We thus see that formally the original definition of multifractality based on the density (or, equivalently, the counting functions N >,< (x)) and the thermodynamic formalism approach (3)-(4) should have exactly the same content for ln M → ∞, provided the singularity spectrum is concave (examples of a non-concave multifractality spectrum and the associated thermodynamic formalism are discussed in [11]). Note also the normalization identity Z 0 = ∫ ∞ −∞ ρ M (y) dy ≡ M , implying ζ 0 = 1. It also shows that at the point of maximum x = x 0 we must necessarily have f (x 0 ) = 1 and that c M (x 0 ) is indeed of the order of unity.
The formalism described above is valid for general multifractal patterns, and is insensitive to the spatial organization of intensity in the pattern. In the present paper we will be mostly interested in disorder-generated multifractal fields, whose common feature is the presence of certain long-ranged powerlaw-type correlations in the data values [12]. In practice, to extract singularity spectra from a given multifractal pattern obtained in real or computer experiments, one frequently employs the so-called box counting procedure, which can be briefly described as follows. Subdivide the sample into M l = (L/l) d non-overlapping hypercubic boxes Ω k of linear dimension l. Associate with each box the mean height H k (l) = (l/a) −d ∑ i∈Ω k h i , and define the scale-dependent partition functions accordingly. Note that for l = a obviously M a = M = (L/a) d and Z q (a, L) coincides with the partition function Z q featuring in the thermodynamic formalism. One may however observe that in the range a ≪ l ≪ L the scale-dependent partition functions are sensitive to the spatial correlations in the heights at different lattice points. In particular, a simple consideration shows that when the heights are powerlaw-correlated in space, as is actually the case for many systems of interest (see [12] and also below), the scaling behaviour of Z q (l, L) depends non-trivially on both l/a and L/a. At the same time the behaviour of the combination I q (l, L) = Z q (l, L)/ [Z 1 (l, L)] q turns out to be a function only of the scaling ratio L/l, given by a pure power law of L/l with exponent τ q , which allows one to get reliable numerical values of the scaling exponents by varying the ratio L/l over a big range.
Further noticing that for $q = 1$ the $l$-dependence of the partition function is trivial due to linearity, $Z_1(l,L) = (l/a)^{-d}\sum_{i=1}^{M} h_i \sim (l/a)^{-d} M^{\zeta_1}$, we can also reliably extract $\zeta_1$ from the same data, and hence relate the set of exponents $\tau_q$ to $\zeta_q$ for $q \neq 1$.
The quantities $I_q = I_q(a,L)$ have the interpretation of inverse participation ratios (IPRs) and are very popular in the theory of Anderson localization [9] and related studies. Passing from the partition functions $Z_q$ of the thermodynamic formalism to the IPRs is equivalent to focusing on the properties of the normalized probability measure $0 < p_i = h_i/Z_1 < 1$, $\sum_i p_i = 1$, rather than on the original height pattern $h_i$ itself. In fact, in such a setting it is more natural to introduce the scaling of those weights in the form $p_i \sim M^{-\alpha_i}$, $\alpha_i \ge 0$, and to consider the corresponding singularity spectrum $f(\alpha)$ related directly to the Legendre transform of the exponents $\tau_q$. Working with the exponents $\tau_q$ has some advantages, as one can show they must form a monotonically increasing concave function of $q$: $\frac{d\tau_q}{dq} > 0$, $\frac{d^2\tau_q}{dq^2} \le 0$. In many situations, e.g. in diffusion-limited aggregation [8] or indeed in Anderson localization, the multifractal probability measures arise very naturally. In other contexts, e.g. in turbulence or in financial data analysis, the normalization condition seems superfluous. In the main part of the present paper we are mainly interested in the pattern of heights and therefore concentrate on partition functions. We will discuss the normalized multifractal probability measures and the associated IPRs briefly in the end.
The major features of the picture outlined above are of general validity for a given single multifractal pattern of any nature, not necessarily random. In recent years considerable efforts were directed towards understanding disorder-generated multifractality, see e.g. [9,13,14,15] for a comprehensive discussion in the context of Anderson localisation transitions and various associated random matrix models, [16], [17] in the context of the harmonic measure generated by conformally invariant two-dimensional random curves, and [18,19,20] for examples related to Statistical Mechanics in disordered media. We just briefly mention here that one of the specific features of multifractality in the presence of disorder is the possible existence of two sets of exponents, $\tau_q$ versus $\tilde\tau_q$, governing the scaling behaviour of the typical IPR, $I_q^{(t)} \sim M^{-\tau_q}$, versus the disorder-averaged ("annealed") IPR, $\overline{I_q} \sim M^{-\tilde\tau_q}$. Here and henceforth the overline stands for averaging over different realisations of the disorder. Namely, it was found that for large enough $q > q_c$ the two exponents will in general have different values: $\tau_q \neq \tilde\tau_q$. The possibility of the "annealed" average producing results different from the typical one is related to the possibility of disorder-averaged moments being dominated by exponentially rare configurations. As a result, the part of the "annealed" multifractality spectrum recovered via the Legendre transform from $\tilde\tau_q$ for $q > q_c$ will be negative [21], [9]: $\tilde f(x) < 0$ for $x < x_-$, and similarly for $x > x_+$. Further detail can be found in the cited papers and in the lectures [22].
Another important aspect of random multifractals, revealed originally by Mirlin and Evers [14] in the context of the Anderson localization transition, is the fact that the IPRs $I_q$ for disorder-induced multifractal probability measures are generically powerlaw distributed [14,9]. Such behaviour suggests that the actual values of the counting functions $N_>(x)$, $N_<(x)$ should also show substantial sample-to-sample fluctuations, even in the range $x_- < x < x_+$ where the singularity spectrum $f(x)$ is self-averaging and the same multifractal scaling $N_>(x) \sim M^{f(x)}$ is observed in every realization of the pattern. Though the presence of such fluctuations was already mentioned in [18], a detailed quantitative analysis seems not to be available yet. The main goal of our paper is to achieve a better understanding of the statistics of the counting functions $N_>(x)$, $N_<(x)$ by performing a detailed analytical as well as numerical study of arguably the simplest, yet important, class of multifractal disordered patterns: those generated by one-dimensional Gaussian processes with logarithmic correlations, the so-called 1/f noises.
The structure of the paper is as follows. In the next section we introduce the 1/f noise signals and discuss their properties already known from previous works. Then we use that knowledge to show that the probability density of the counting function $N_>(x)$ for such a model is characterized by a limiting scaling law with a power-law forward tail, with the power governing the decay changing with the level $x$. We then demonstrate that such power-law decay has nontrivial implications for the position of the maxima (or, with due modifications, minima) of such processes, and derive the expression for the threshold of extreme values. Finally, using 1/f noises as a guiding example, we attempt to reinterpret the results of the theory developed in [14] to get a rather general prediction for the position of the extreme value threshold for a broad class of disorder-generated multifractal patterns whose intensity is characterized by power-law correlations. We conclude by briefly discussing a few open questions.
1/f noise: mathematical model and previous results
An ideal 1/f (or "pink") noise is a random signal such that spectral power (defined via the Fourier transform of the autocorrelation function of the signal) associated with a given Fourier harmonic is inversely proportional to the frequency ω = 2πf . Signals of similar sort are known for about eighty years and believed to be ubiquitous in Nature, see [23] for a discussion and further references. Rather accurate 1/f dependences may extend for several decades in frequency in some instances, es. e.g. in voltage fluctuations in thin-film resistors [24] or resistance fluctuations in singlelayer graphene films [25], in non-equilibrium phase transitions [26], and in spontaneous brain activity [27]. Still, the physical mechanisms behind such a behaviour are not yet fully known, and are a matter of active research and debate. It was noticed quite long ago (see e.g. [28]) that a generic feature of all such signals is that the twopoint correlations (covariances) depend logarithmically on the time separation. During the last decade it became clear that random functions of such type appear in many interesting problems of quite different nature, featuring in physics of disordered systems [18], [29], [30], [31], [32], quantum chaos [33], mathematical finance [34,35], turbulence [36,37,38] and related models [39,40,41], as well as in mathematical studies of random conformal curves [42], Gaussian Free Field [43,44] and related models inspired by applications is statistical mechanics [45] and quantum gravity [46], and the most recently in the value distribution of the characteristic polynomials of random matrices and the Riemann zeta-function along the critical axis [47]. Let us note that a simple argument outlined in [22] and repeated in section 5 of the present paper shows that by taking the logarithm of any spatially homogeneous powerlaw-correlated multifractal random field we necessarily obtain a field logarithmically correlated in space. 
A suggestion somewhat similar in spirit, to refocus attention from the multifractal ("intermittent") signals and fields to their logarithms, was also put forward in [37]. All this makes logarithmically-correlated Gaussian processes an ideal laboratory for studying disorder-induced multifractality, though investigating the effects of non-Gaussianity remains a challenging outstanding issue.
Despite being of intrinsic interest and fundamental importance, a coherent and comprehensive description of the statistical characteristics of ideal 1/f noises does not yet seem to be available, and relatively few properties are firmly established even for the simplest case of a Gaussian 1/f noise. Among the works which deserve mentioning in this context are the paper [48], which provided an explicit distribution of the "width" (or "roughness") of such a signal, as well as the work [49] describing a curious property of spectral invariance with respect to amplitude truncation. In the recent papers [30], [31] and [45] the statistics of the extreme (minimal or maximal) values of various versions of ideal 1/f signals was thoroughly addressed. From that angle, the subject of the present paper is to provide a fairly detailed picture of the statistics of the number of points in such signals which lie above a given threshold set at some high value. The latter can rather naturally be defined as being at a finite ratio to the typical value of the absolute maximum.
In this paper we are going to consider only Gaussian ideal 1/f noises, a 2π-periodic version of which is naturally defined via a random Fourier series of the form
$$ V(t) = \sum_{n=1}^{\infty} \frac{1}{\sqrt{n}}\left(v_n e^{int} + v_n^* e^{-int}\right), \qquad (7) $$
where $v_n$ is a set of i.i.d. complex Gaussian variables with zero mean and variance $\overline{|v_n|^2} = 1$, with the asterisk standing for complex conjugation and the bar for statistical averaging. It implies the following covariance structure of the random signal:
$$ \overline{V(t_1)V(t_2)} = 2\sum_{n=1}^{\infty}\frac{\cos n(t_1 - t_2)}{n} = -2\ln\left|2\sin\frac{t_1 - t_2}{2}\right|. \qquad (8) $$
Mathematically such a series represents the periodic version of the fractional Brownian motion with the Hurst index H = 0. The corresponding definition is formal, as the series in (7) does not converge pointwise, a fact reflected, in particular, in the logarithmic divergence of the covariance in (8)+. Although it is possible to provide several bona fide mathematically correct definitions of the ideal 1/f noise as a random generalized function (based, for example, on sampling the 2d Gaussian free field along specified curves, e.g. the unit circle for the periodic noise, see [42], or the constructions proposed in [39] or [45]), for all practical purposes the 1/f noises should be understood after a proper regularization. In what follows we will use explicitly the regularization proposed by Fyodorov and Bouchaud [30], though we expect the main results to hold, mutatis mutandis, for any other regularization.
In the model proposed in [30] one subdivides the interval $t \in (0, 2\pi]$ by a finite number $M$ of observation points $t_k = \frac{2\pi}{M}k$, where $k = 1, \ldots, M < \infty$, and replaces the function $V(t)$, $t \in [0, 2\pi)$ with a sequence of $M$ random mean-zero Gaussian variables $V_k$ correlated according to the $M \times M$ covariance matrix $C_{km} = \overline{V_k V_m}$ such that the off-diagonal entries are given by
$$ C_{km} = -2\ln\left|2\sin\frac{\pi(k-m)}{M}\right|, \qquad k \neq m. \qquad (9) $$
To have a well-defined set of Gaussian-distributed random variables one has to ensure the positive definiteness of the covariance matrix by choosing appropriate diagonal entries $C_{kk}$. A simple calculation [30] shows that as long as we choose
$$ C_{kk} \ge 2\ln M, \qquad (10) $$
the model is well defined, and we will actually take the minimal possible value: $C_{kk} = 2\ln M + \epsilon$, $\forall k$, with a small positive $\epsilon \ll 1$. We expect that the statistical properties of the sequence $V_k$ generated in this way correctly reflect the universal features of the 1/f noise. An example of the signal generated for M = 4096 according to the prescription above via the Fast Fourier Transform (FFT) method, as explained in detail in [31], is given in the figure. Using the model (9)-(10) the authors of [30] defined the associated random energy model via the partition function $Z(\beta) = \sum_{i=1}^{M} e^{-\beta V_i}$, with the temperature $T = \beta^{-1} \ge 0$, and succeeded in determining the distribution of $Z(\beta)$ in the range $\beta < 1$. To reinterpret those findings in the context of multifractality we introduce the height variables $h_i = e^{V_i} > 0$ and rename $\beta \to -q$, converting $Z(\beta)$ of the random energy model to $Z_q$ of the "thermodynamic formalism", eq. (3).

+ One can also define other, in general non-periodic, versions of similar log-correlated random processes on finite intervals using different bases of orthogonal functions, or even exploit the appropriate random Fourier integral to define the process on the half-line $0 < t < \infty$. The corresponding models arise very naturally in the context of Random Matrix Theory and will be discussed in a separate publication [50].
Note that due to the statistical equivalence of $V_i$ and $-V_i$ in the model, all results may depend only on $|q|$. Then the findings of [30] can be summarized as follows. The probability density of the random variable $Z_{|q|<1}$ consists of two pieces, the body and the far tail. The body of the distribution has a pronounced maximum at $Z \sim Z_e(q) = M^{1+q^2}/\Gamma(1-q^2) \ll M^2$, and a powerlaw decay when $Z_e \ll Z \ll M^2$. Introducing $z = Z_q/Z_e(q)$, the probability density of this variable is given explicitly by
$$ \mathcal{P}_q(z) = \frac{1}{q^2}\, z^{-1-\frac{1}{q^2}} \exp\left(-z^{-\frac{1}{q^2}}\right). \qquad (11) $$
For $z \gg M^{1-q^2}$ the above expression is replaced by a lognormal tail [30]. Note that the probability density (11) is characterized by the moments
$$ \overline{z^s} = \Gamma\left(1 - s q^2\right), \qquad s < q^{-2}, \qquad (12) $$
where $\Gamma(z)$ is the Euler Gamma-function. It is worth noting that although the particular form of the density (11) is specific to the chosen model of 1/f noise, the power-law forward tail $\mathcal{P}_q(Z) \propto Z^{-1-\frac{1}{q^2}}$ is expected to be universal [31], and so is the divergence of the moments of the partition function for $\mathrm{Re}\, s > q^{-2}$. A closely related fact, which can also be traced back to the existence of the universal forward tail, is that the typical partition function scale $Z_e(q)$ in all 1/f models is expected to behave for $q \to 1$ as $Z_e(q)/M^{1+q^2} \sim (1-q) \to 0$. This property will have important consequences at the level of the counting function.
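These moment and tail properties are easy to probe by simulation. Assuming, as our reading of the Fyodorov-Bouchaud result, that the scaled variable can be parametrized as $z = E^{-q^2}$ with $E$ a standard exponential random variable (this parametrization reproduces the forward tail $z^{-1-1/q^2}$ and the moments $\Gamma(1-sq^2)$), a short Monte Carlo check reads:

```python
import numpy as np
from math import gamma

def sample_z(q, n, rng=None):
    """Sample z = E**(-q^2), E ~ Exp(1); this density has the forward tail
    z^(-1 - 1/q^2) and moments <z^s> = Gamma(1 - s*q^2) for s < 1/q^2."""
    rng = np.random.default_rng(0) if rng is None else rng
    return rng.exponential(size=n) ** (-q * q)

q = 0.5
z = sample_z(q, 200000)
print(z.mean(), gamma(1 - q * q))  # both close to Gamma(0.75) = 1.2254...
```

For $s$ approaching $q^{-2}$ the sample moments become dominated by the rare large-$z$ events in the tail, which is precisely the mechanism behind the divergence of the moments mentioned in the text.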
To get some understanding of how the above asymptotic results agree with direct numerical simulations of the model (9)-(10) for large but finite $M$, it is useful to provide the exact finite-$M$ expression for the second moment of the partition function, see [30]. The second term of that expression is dominant in the large-$M$ limit: replacing the sum by an integral, convergent for $q^2 < 1/2$ and reducible to the Euler beta-function, see (14), reproduces precisely the result (12), i.e. $\overline{z^2} = \Gamma(1-2q^2)$. The main correction to this asymptotic result is given by the first term in (13), whose relative contribution is small, of order $M^{-(1-2q^2)}$, for $q^2 < 1/2$. In figure 2 we show the exact evaluation of the relative variance $\delta_z(q, M) = \overline{z^2}/(\overline{z})^2 - 1$, with $z = Z_q/Z_e(q)$. We see that $\delta_z(q, M)$ clearly approaches the asymptotic value $\delta_z(q, M \to \infty) = \Gamma(1-2q^2)/\Gamma^2(1-q^2) - 1$.

3. Thermodynamic formalism for the counting function of the 1/f noise sequence and the threshold of extreme values

The statistics of $Z_q$ for the model under consideration suggest corresponding strong sample-to-sample fluctuations in the counting function of the pattern of heights. Our goal is to quantify the statistics of those fluctuations by considering the total number $N_>(x)$, which in the present context will be denoted $N_M(x)$, of the $x$-high points in the (regularized) 1/f sequence $V_1, \ldots, V_M$. Those points are defined as those $k$ such that
$$ V_k \ge x \ln M. $$
We then relate the number $N_M(x)$ to the partition function $Z_q$ by the thermodynamic formalism:
$$ N_M(x) = \int_x^{\infty} \tilde\rho_M(y)\, dy, \qquad Z_q = \int_{-\infty}^{\infty} M^{q y}\, \tilde\rho_M(y)\, dy, \qquad (16) $$
where now the density $\tilde\rho_M(y) = \ln M \sum_{k=1}^{M} \delta(V_k - y \ln M) = \sum_{k=1}^{M} \delta\!\left(y - V_k/\ln M\right)$ is anticipated to be given in the large-$M$ limit by the multifractal ansatz of the particular "improved" form:
$$ \tilde\rho_M(y) = n_M(y)\, \frac{1}{2}\sqrt{\frac{\ln M}{\pi}}\; \frac{M^{1-\frac{y^2}{4}}}{\Gamma\!\left(1-\frac{y^2}{4}\right)}. \qquad (17) $$
Here $n_M(y)$ is assumed to be a random coefficient of order unity which strongly fluctuates from one realization of the sequence $V_i$ to another, in such a way that its probability density is given by the formula (11) with the $q$ value chosen to be $q = y/2$.
Indeed, substituting the density (17) into $Z_q$ in (16) and performing the integral in the limit $\ln M \gg 1$ by the Laplace method, we arrive at the asymptotic behaviour $Z_q \approx n_M(2q)\, Z_e(q)$, $q < 1$, with the value $Z_e(q) = M^{1+q^2}/\Gamma(1-q^2)$, precisely as we have found from the exact solution (11). On the other hand, substituting the same ansatz into the counting function in (16) yields, by the same method, $N_M(x) \approx n_M(x)\, N_t(x)$ with the typical scale
$$ N_t(x) = \frac{M^{1-\frac{x^2}{4}}}{x\sqrt{\pi\ln M}\;\Gamma\!\left(1-\frac{x^2}{4}\right)}. \qquad (18) $$
Thus, our main conclusion is that the two random variables $n = N_M(x)/N_t(x)$ and $z = Z_{x/2}/Z_e(x/2)$ must be distributed in the large-$M$ limit according to the same probability law, which after invoking (11) yields the asymptotic probability density for the scaled counting function in the form
$$ \mathcal{P}_x(n) = \frac{4}{x^2}\, n^{-1-\frac{4}{x^2}} \exp\left(-n^{-\frac{4}{x^2}}\right). \qquad (19) $$
The shape of the distribution for a few values of x is presented in Fig. 3.
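In simulations the counting function and its typical scale are straightforward to evaluate. The sketch below (our own helper functions) counts the $x$-high points of a sequence and uses only the leading-order scale $M^{1-x^2/4}$, deliberately ignoring the subleading prefactors:

```python
import numpy as np

def counting_function(V, x):
    """N_M(x): number of x-high points, i.e. entries with V_k >= x ln M."""
    return int(np.sum(V >= x * np.log(len(V))))

def typical_scale(M, x):
    """Leading-order typical scale N_t(x) ~ M**(1 - x^2/4); powers of ln M
    and Gamma-function prefactors are deliberately dropped."""
    return M ** (1.0 - x * x / 4.0)

# Deterministic check: for M = 3 and x = 1, the entries 1.5*ln3 and 2.5*ln3
# lie above ln 3 while 0.5*ln3 does not, so N_3(1) = 2.
V = np.log(3.0) * np.array([0.5, 1.5, 2.5])
print(counting_function(V, 1.0))  # prints 2
```

Applying `counting_function` to many independent realizations of a log-correlated sequence and normalizing by `typical_scale` produces (up to slow finite-size corrections discussed below) histograms of the scaled variable $n$.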
The following qualification is needed here. For large but finite $M$, this form of the density ceases to hold for extremely large $n \to \infty$, as in any realization obviously $N_M(x) < M$ at the very least. Therefore there must exist an upper cut-off value $N_c(x)$ such that for $n > n_c = N_c(x)/N_t(x)$ the scaling form of the probability density (19) loses its validity. The cutoff $n_c$ should diverge as $M \to \infty$. Similarly, another restriction on the validity of (19) should exist in the region of extremely small $n \to 0$, due to the implicit condition $N_M(x) \gg 1$. The precise values of the cutoffs cannot be extracted in the framework of the thermodynamic formalism, and their determination remains an open issue.
The important scale $N_t(x)$ defined explicitly in (18) describes the typical values of the counting function $N_M(x)$ for a given observation level $x$. In particular, it can be used to define one of the objects of central interest in the present paper, the threshold of extreme values. The latter stands for a level above which one can typically find, for $\ln M \gg 1$, only a few, i.e. of the order of one, points of our random sequence. The scaling behaviour $N_t(x) \sim M^{f(x)}$, $f(x) = 1 - x^2/4$, is the hallmark of multifractality. A very similar parabolic singularity spectrum characterizes the high-value pattern of the two-dimensional Gaussian free field, as revealed in [18] and proved in a mathematically rigorous way in [44]. The above result for $f(x)$ is the simple one-dimensional analogue of that fact, see e.g. [45] where it is rigorously shown that $\lim_{M\to\infty} \frac{\ln N_M(x)}{\ln M} = 1 - x^2/4$ in our notations. Note that as the singularity spectrum $f(x)$ vanishes at $x = 2$, the typical position of the absolute maximum of the random sequence of $V_i$'s is given by $V_{\max} = 2\ln M$ at the leading order. The corresponding subleading term was conjectured in the work by Carpentier and Le Doussal [29] to be $V_m = 2\ln M - c\ln\ln M + O(1)$, with $c = 3/2$. That conclusion was based upon an analysis of the travelling wave-type equation [51] appearing in the course of a one-loop renormalization group calculation, and the value $c = 3/2$ was conjectured to be universally shared by all systems with logarithmic correlations. Such a result is markedly different from the value $c = 1/2$ typical for short-range correlated random signals, so the value of $c$ may be used as a sensitive indicator of the universality class. Indeed, in a recent numerical study of the behaviour of the logarithm of the modulus of the Riemann zeta-function along the critical line [47] the value 3/2 was used to confirm the consistency of describing that function as a representative of logarithmically correlated processes.
Despite its importance, no transparent qualitative argument explaining the value $c = 3/2$ vs. $c = 1/2$ has ever been provided, to the best of our knowledge, though for the case of the 2D Gaussian free field the value 3/2 was very recently rigorously proved by Bramson and Zeitouni [52] by exploiting elaborate probabilistic arguments. Below we suggest a very general and transparent argument showing that the change from $c = 1/2$ to $c = 3/2$ is a direct consequence of the strong fluctuations in the counting function reflected in the power-law decay of the probability density (19). That observation not only allows one to explain the origin of $c = 3/2$ for the Gaussian case, but also has predictive power in more general situations, as will be demonstrated in section 5.
To present the essence of our argument, we first observe that (19) implies $\overline{n} = \Gamma\left(1 - \frac{x^2}{4}\right)$ for $0 < x < 2$, so that the mean value of the counting function is given asymptotically by
$$ \overline{N_M(x)} = \frac{M^{1-\frac{x^2}{4}}}{x\sqrt{\pi\ln M}}. \qquad (20) $$
We shall see in the next section that the above expression is asymptotically exact for any real $x > 0$, without restriction to $0 < x < 2$. Notice however that the mean value (20) and the characteristic scale $N_t(x)$ in (18) differ from each other by the factor $\frac{1}{\Gamma(1-x^2/4)}$ tending to zero as $x \to 2$. Such a difference, whose origin can again be traced back to the specific power-law tail of the probability density (19), see Fig. 3, is one of the hallmarks of random signals with logarithmic correlations. Indeed, consider for comparison the case of an uncorrelated i.i.d. Gaussian sequence sharing the same variance $\overline{V_i^2} = 2\ln M$ with the logarithmically correlated noise (for historical reasons it is natural to refer to such a model as the Random Energy Model, or REM [53]). A straightforward calculation shows that we would still have precisely the same mean value (20) of the counting function as in the logarithmically-correlated case, but unlike the latter it will simultaneously be the typical value of that random variable, as no power-law tail is present in that case (see Fig. 4 and the discussion in the next section).
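The REM statement is easy to verify directly: for i.i.d. Gaussian $V_i$ with variance $2\ln M$, the exact mean counting function is $M \cdot \mathrm{Prob}(V > x\ln M)$, which a short Monte Carlo reproduces (the helper functions and the tolerance below are ours):

```python
import numpy as np
from math import erfc, log, sqrt

def mean_count_exact(M, x):
    """Exact i.i.d. mean: M * Prob(V > x ln M) for V ~ N(0, 2 ln M)."""
    return M * 0.5 * erfc(x * sqrt(log(M)) / 2.0)

def mean_count_mc(M, x, n_samples=400, seed=0):
    """Monte Carlo estimate of the mean counting function, REM case."""
    rng = np.random.default_rng(seed)
    sigma = sqrt(2.0 * log(M))
    V = sigma * rng.standard_normal((n_samples, M))
    return (V >= x * log(M)).sum(axis=1).mean()

M, x = 1024, 1.0
print(mean_count_mc(M, x), mean_count_exact(M, x))  # agree within statistical error
```

For the log-correlated sequence the same exact mean holds, but the realization-to-realization spread around it is dramatically wider, which is exactly the content of the discussion above.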
Such a difference between the two cases has important implications for the location of the threshold $x = x_m$ which corresponds to the region of extreme value statistics of multifractal heights. Indeed, approximating the singularity spectrum $f(x)$ close to its right zero $x_+ = 2$ as $f(x) \approx (2 - x)$, and similarly writing $\Gamma\left(1-\frac{x^2}{4}\right) \approx \frac{1}{2-x}$, the condition $N_t(x_m) \sim 1$ defining the threshold takes the form $(2-x_m)\ln M + \ln(2-x_m) - \frac{1}{2}\ln\ln M = O(1)$, which is solved by $2 - x_m \approx \frac{3}{2}\,\frac{\ln\ln M}{\ln M}$, i.e. $V_m = x_m \ln M = 2\ln M - \frac{3}{2}\ln\ln M + O(1)$. Had we instead used the mean value (20), as is legitimate in the REM case, the term $\ln(2-x_m)$ would be absent and we would arrive at $c = 1/2$. The power-law tail of the density (19) should be universal for one-dimensional Gaussian processes with logarithmic correlations. It should also show up, mutatis mutandis, in higher-dimensional versions of the model, e.g. the Gaussian Free Field on the lattice. Such a tail ensures the difference between the typical and the mean value of the counting function by a factor which vanishes linearly on approaching the extreme value threshold $x = x_m$. This factor then leads to the value $c = 3/2$ by the mechanism illustrated above for our particular explicit example. The case of non-Gaussian signals with logarithmic correlations is relevant for general disorder-generated multifractals and is discussed in Section 5 of the paper.
Finally it is worth mentioning that for $x > 2$ the mean value of the counting function (20) is exponentially small. This fact reflects the need to generate an exponentially large number of samples in order to observe, for $\ln M \gg 1$, at least a single event with $V_i > x\ln M$ when $x > 2$. Indeed, such values of $V_i$ will not show up in a typical realization (cf. the earlier discussion of "annealed" vs. "quenched" singularity spectra).
Exact results versus asymptotics
The above results for the counting function, obtained in the framework of the thermodynamic formalism, are expected to be valid as long as $\ln M \gg 1$. To get a feeling for how big $M$ should be in practice to ensure the validity of our asymptotic formulae, it is natural again to perform direct numerical simulations of the regularized version of the ideal 1/f noise. We start by directly checking the distribution of the scaled counting function, Eq. (19), for the particular value $x = 1$. The results are presented in Fig. 4. They show that although the main qualitative features of the distribution (in particular, the well-developed power-law tail) are clearly in agreement with the theoretical predictions, the curve is still rather far from its predicted asymptotic shape for $M = 2^{18}$, and the convergence is too slow to claim quantitative agreement.
To get a better understanding of the mechanism behind this disagreement at a quantitative level, and to check the results obtained in the framework of the thermodynamic formalism, we choose to consider in much greater detail the first two moments of the counting function. The asymptotic formula (19) yields the expression (20) for the mean of the counting function, and for the asymptotic relative variance it gives
$$ \delta_\infty^{(n)}(x) = \frac{\overline{n^2}}{(\overline{n})^2} - 1 = \frac{\Gamma\left(1-\frac{x^2}{2}\right)}{\Gamma^2\left(1-\frac{x^2}{4}\right)} - 1, \qquad (22) $$
whereas the binomial-type contribution to the variance, of relative order $\overline{N_M(x)}^{-1}$, is present for any sequence and is independent of the correlations. The problem of deriving a closed-form expression for the variance which is amenable to accurate numerical evaluation for very big $\ln M \gg 1$ is less trivial, and may have independent interest. Before presenting the results we find it most convenient to define the following object,
$$ \Delta_M(x) = \sum_{i\ne j}\left[\mathrm{Prob}\left(V_i \ge x\ln M,\ V_j \ge x\ln M\right) - \mathrm{Prob}\left(V_i \ge x\ln M\right)\mathrm{Prob}\left(V_j \ge x\ln M\right)\right], $$
in terms of which the correlation-induced part of the relative variance is expressed as $\delta_n(x;M) = \Delta_M(x)/\left(\overline{N_M(x)}\right)^2$. $\Delta_M(x)$ is thus a convenient measure of correlation-induced fluctuations, and from the explicit form above it is evident that $\Delta_M(x)$ vanishes for any i.i.d. sequence. In the latter case the relative variance tends to zero in the large-$M$ limit, assuming $N_M(x) \to \infty$ for $M \to \infty$. This simply means that in the i.i.d. case the variable $N_M(x)/\overline{N_M(x)}$ is self-averaging, i.e. its limiting density approaches the Dirac delta-function, see Fig. 4. On the other hand, $\Delta_M(x)$ is in general different from zero for correlated variables. Nevertheless, using the general formalism exposed in Appendix A one can convince oneself that for all stationary Gaussian sequences with correlations decaying to zero fast enough at large separations (e.g. as a power of the distance) the quantity $\delta_n(x;M)$ still tends to zero as $M \to \infty$. In contrast, we will see below that in the logarithmically correlated case $\delta_n(x;M)$ tends to a finite positive number for $M \to \infty$, and thus gives the leading behaviour of the variance of the counting function.
In Appendix A we have derived the exact expression for $\Delta_M(x)$ for a general correlated Gaussian sequence. For the periodic 1/f noise sequence, using (9), the result is the discrete sum (27). In the figure we test its validity by comparing the results obtained for a moderate value of $M$ by direct numerical simulations of the log-correlated sequences with the predictions of (27).
The expressions above are suitable for developing a well-controlled approximation to the exact expression (27) in the large-$M$ limit, assuming $\ln M \gg 1$. First of all, it is clear that in the large-$M$ limit we may replace the discrete sum in (27) by an integral, treating $\frac{\pi n}{M}$ as a continuous variable $\theta$. Using the symmetry $\theta \to \pi - \theta$ we then arrive at an approximation to the ratio $\delta_n(x;M) = \Delta_M(x)/\left(\overline{N_M(x)}\right)^2$, given by the double integral (28), where the lower limit of integration over $\tau$ is given by (29). At the next step we assume that the $\theta$-integral is dominated by finite values $0 < \theta < \pi/2$. This allows us to replace the lower limit of integration over $\theta$ by zero, and also to expand $h_\theta \approx 1 + \ln|2\sin\theta|/\ln M$ for $\ln M \to \infty$. Changing then the integration variable to $\tau = (1 + u/\ln M)^{1/2}$ and keeping only the leading-order terms, the $u$-integral is easily calculated and the emerging $\theta$-integral takes the form $\int_0^\pi \left(4\sin^2\theta\right)^{-x^2/4} d\theta$. The latter is convergent as long as $x < \sqrt{2}$, where it can be reduced to the Euler beta-function, see (14). As a result, we arrive at the expression
$$ \delta_\infty^{(n)}(x) = \frac{\Gamma\left(1-\frac{x^2}{2}\right)}{\Gamma^2\left(1-\frac{x^2}{4}\right)} - 1, \qquad (30) $$
which is fully equivalent to the variance result (22) we anticipated on the basis of the thermodynamic formalism. For $x > \sqrt{2}$ the $\theta$-integral diverges at the lower limit $\theta \to 0$, rendering our large-$M$ procedure invalid; this case will be treated separately at the end of the section. Now we are in a position to check numerically the range of applicability of the approximations derived. First we compare the results of exact numerical evaluation of the discrete sum in (27) to the integral (28). The direct evaluation of the sum in Mathematica is affordable up to M = 50000. To go up to higher values of $M$ we use the identity (31), where $R(n) = n$ for $n < M/6$ and $R(n) = n - M/3 + \frac{1}{2}$ for $n \ge M/6$.
Since for large $\ln M$ the difference $|h_n - h_{n+1}|$ is very small, the integral in the above expression can be very accurately approximated by the trapezoidal rule. This trick allows us to evaluate the sum in (27) for much larger values of $M$. The results are presented in figure 5 and figure 6 for $x = 1$. In figure 5 we compare the fast convergence of $\delta_z(q=0.5, M)$ against the very slow convergence of $\delta_n(x=1; M)$. For $M \sim 2^{15}$, $\delta_z(q=0.5, M)$ is already very close to its asymptotic value, whereas $\delta_n(x=1, M)$ shows a curious non-monotonicity, and even for $M \sim 2^{30}$ we are still very far from the asymptotic value predicted by (30), which is $\delta_\infty^{(n)}(x=1) = 0.180341$. We observe that for $M$ larger than $2^{17}$ the continuum approximation of Eq. (28) matches perfectly the numerical integration involving the identity (31) and the trapezoidal rule. The latter matches perfectly the exact discrete sum (27) even for moderate $M$. As we are now confident in the accuracy of the continuum approximation formula (28), given by a double integral with $\ln M$ entering as a simple parameter, we can use it to check which values of $\ln M$ actually ensure the validity of the asymptotics (30). The results are presented in fig. 6 and show that one needs astronomically big values of $M$ even to achieve rather modest agreement with the asymptotic value. These facts explain our failure to confirm by direct simulation the infinite-$M$ asymptotics of the variance of the counting function predicted for the periodic 1/f noise in the framework of the thermodynamic formalism. In any realistic signal analysis of such a variance one must therefore rely upon the exact formula (27), or its analogues, instead of the asymptotic value (30). This conclusion should be of significant practical importance, in particular in view of the growing interest in numerical investigations of the statistical properties of high values of the modulus of the Riemann zeta-function and of the characteristic polynomials of large random matrices.
Having verified by an independent approach the asymptotics of the second moment of the counting function for $x < \sqrt{2}$, we can now apply similar methods beyond that range. The formal divergence of the right-hand side of (30) for $x \to \sqrt{2}$ simply means that the second moment of the counting function is no longer proportional to the squared typical scale, but is parametrically larger. An accurate analysis of the second moment in the range $\sqrt{2} < x < 2\sqrt{2}$, performed in Appendix B, shows that the leading asymptotic behaviour of $\delta_n(x;M)$ is given by (32). As is easy to see, $\delta_n(x;M) \gg \overline{N_M(x)}^{-1}$ in this domain, and it is therefore the dominant term in the relative variance (25).
The position of threshold of extreme values in generic disorder-generated multifractal patterns
Results obtained so far in the paper suggest a natural question about the statistics of high values and the positions of extremes of more general power-law correlated multifractal random fields with a generic non-parabolic singularity spectrum. The most obvious examples include the variety of Anderson transitions [14], but in fact many more random critical systems should be of this sort, see e.g. [12,16,17,19]. A straightforward calculation outlined in [22] shows that behind each pattern of this type lurks a certain logarithmically correlated field, though in general of a non-Gaussian nature. Below we sketch that simple argument for the sake of completeness. Consider a $d$-dimensional sample of linear size $L$, and assume, following [12], that the multifractal pattern of intensities $p(r)$ is self-similar,
$$ \overline{p^q(r_1)\, p^s(r_2)} \propto L^{-y_{q,s}}\, |r_1 - r_2|^{-z_{q,s}}, \qquad q, s \ge 0, \quad a \ll |r_1 - r_2| \ll L, \qquad (33) $$
and spatially homogeneous,
$$ \overline{p^q(r)} \propto L^{-(\tau_q + d)}. \qquad (34) $$
The consistency of the above two conditions for $|r_1 - r_2| \sim a$ and $|r_1 - r_2| \sim L$ implies:
$$ y_{q,s} = \tau_{q+s} + d, \qquad z_{q,s} = \tau_q + \tau_s - \tau_{q+s} + d. \qquad (35) $$
If we now introduce the field $V(r) = \ln p(r) - \overline{\ln p(r)}$ and combine the identity $\frac{d}{ds}\,\overline{p^s}\big|_{s=0} = \overline{\ln p}$ with the fact that $\tau_0 = -d$, we straightforwardly arrive at the relation
$$ \overline{V(r_1)V(r_2)} = -\tau''(0)\, \ln\frac{L}{|r_1 - r_2|}. \qquad (36) $$
Thus we conclude that the logarithm of any multifractal intensity is a log-correlated random field. The above argument does not say anything about the higher cumulants of the field $V(r)$, but it is easily checked that had those fields always been Gaussian, the resulting singularity spectrum $f(\alpha)$ obtained from $\tau_q$ via the Legendre transform would invariably be parabolic. Therefore, any non-parabolicity of the singularity spectra necessarily implies the non-Gaussian nature of the underlying log-correlated fields.
Nevertheless, combining our previous insights with properties of disorder-generated multifractal patterns revealed in [14] suggests the way in which our results on Gaussian 1/f noise can be generalized to statistics of high values and positions of extremes of more general non-Gaussian logarithmically correlated random processes and fields.
As was already mentioned in the introduction, in the case of the Anderson transition the probability density of the inverse participation ratios $I_q$ was shown to depend only on the scaling ratio $z = I_q/I_q^{(t)}$, with $I_q^{(t)}$ standing for the typical value. Moreover, that ratio is expected to be power-law distributed: $P_q(z) \sim z^{-1-\omega_q}$ [14,9]. We may try to combine that fact with the theory developed in the present paper to conjecture the typical position of the extreme values (maxima or minima) in a pattern of normalized multifractal probability weights $p_i \sim M^{-\alpha_i}$ for $i = 1,\ldots,M \sim L^d$ such that $\sum_i p_i = 1$. A brief account of such a procedure is as follows. Suppose the mean participation ratios are given by $\overline{I_q} = B(q)\,M^{-\tau_q}$, with a coefficient B(q) of order unity, and concentrate on those q for which the typical and annealed exponents coincide. From it we recover in the usual way the singularity spectrum f(α) by the corresponding Legendre transform: $f(\alpha)\ge 0$ for $\alpha\in[\alpha_-,\alpha_+]$, and we further assume $\alpha_->0$ to avoid complications related to the so-called "multifractality freezing" [9,13,20], which would require special care. Define α(q) to be the solution of the equation $q = f'(\alpha)$, and denote the mean of the scaling ratio $z = I_q/I_q^{(t)}$ by $\overline{z_q} = \int_0^\infty P_q(z)\,z\,dz$. Further, given any function $\varphi_q$ of the variable q, define a "Lagrange conjugate" function $\varphi_*(\alpha)$ by the relation $\varphi_*(\alpha(q)) = \varphi_q$. Then, by naturally generalizing our earlier consideration of the 1/f noise, we suggest that the density of exponents, defined as $\rho_M(\alpha) = \sum_{i=1}^{M}\delta\!\left(\alpha + \frac{\ln p_i}{\ln M}\right)$, should be given asymptotically, in every realization, by the following "improved multifractal Ansatz", cf. (17):
$$\rho_M(\alpha) \approx n_*(\alpha)\,\frac{B_*(\alpha)}{z_*(\alpha)}\,\sqrt{\frac{|f''(\alpha)|\ln M}{2\pi}}\; M^{f(\alpha)}. \tag{37}$$
Here $n = n_*(\alpha)$ is assumed to be a random coefficient of order unity, distributed for a given α according to a probability density $\mathcal{P}_*^{\alpha}(n)$ defined in terms of the density of the IPR scaling ratio $P_q(z)$ via the rule $\mathcal{P}_*^{\alpha(q)}(n) = P_q(n)$.
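The Legendre-transform step from $\tau_q$ to $f(\alpha)$ can be made concrete numerically. The sketch below is illustrative only: the quadratic $\tau_q = 2.5q - 1.5q^2 - 1$ is a hypothetical exponent (chosen so that $\tau_0 = -d = -1$ and $\tau_1 = 0$), not a spectrum taken from the paper. It recovers $f(\alpha) = \inf_q[q\alpha - \tau_q]$ on a dense grid and checks it against the closed form $f(\alpha) = 1 - (\alpha-2.5)^2/6$.

```python
import numpy as np

# Hypothetical concave exponent in d = 1 (illustrative, not from the paper):
# tau_q = 2.5 q - 1.5 q^2 - 1, so tau_0 = -d = -1 and tau_1 = 0 (sum_i p_i = 1).
# Closed form of its Legendre transform: f(alpha) = 1 - (alpha - 2.5)^2 / 6.
def tau(q):
    return 2.5 * q - 1.5 * q**2 - 1.0

q_grid = np.linspace(-5.0, 5.0, 200001)

def f_of_alpha(alpha):
    # f(alpha) = inf_q [ q * alpha - tau(q) ], evaluated on a dense grid
    return np.min(q_grid * alpha - tau(q_grid))

# Compare the numerical transform with the closed form at a few points
alphas = np.array([0.5, 1.5, 2.5, 3.5])
f_num = np.array([f_of_alpha(al) for al in alphas])
f_exact = 1.0 - (alphas - 2.5)**2 / 6.0

# Left zero of f: alpha_- = 2.5 - sqrt(6); here q_c = f'(alpha_-) = sqrt(6)/3
alpha_minus = 2.5 - np.sqrt(6.0)
```

For this concave toy spectrum the grid minimum reproduces the analytic f(α) to high accuracy, and f vanishes at α₋ as required for the extreme-value discussion that follows.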
Indeed, substituting the Ansatz (37) into the definition $I_q = \int_0^\infty M^{-q\alpha}\rho_M(\alpha)\,d\alpha$ and performing the integral by the Laplace method for $\ln M \gg 1$ gives $I_q \approx n_*(\alpha(q))\,\overline{I_q}/\overline{z_q}$, where the random variable $z = n_*(\alpha(q))$ is distributed according to the probability density $P_q(z)$. This is precisely what is required, provided we identify $I_q^{(t)} = \overline{I_q}/\overline{z_q}$. Now we can substitute the same Ansatz into the definition of the counting function $N_<(\alpha) = \int_{-\infty}^{\alpha}\rho_M(\alpha')\,d\alpha'$, choosing the value of α to the left of the maximum of f(α). This gives asymptotically $N_<(\alpha) = n_*(\alpha)\,N_t(\alpha)$, where the scale $N_t(\alpha)$ is now given by
$$N_t(\alpha) = \frac{B_*(\alpha)}{z_*(\alpha)\,f'(\alpha)}\,\sqrt{\frac{|f''(\alpha)|}{2\pi\ln M}}\; M^{f(\alpha)} \tag{38}$$
and defines the typical value of the counting function for a given α. Then, in a typical realization of the multifractal pattern, a few "extreme" values among the probability weights $p_i$ will be of the order of $p_m = M^{-\alpha_m}$, where $\alpha_m$ is determined from the condition $N_t(\alpha_m)\sim 1$. Clearly, at leading order $\alpha_m = \alpha_-$ given by the left root of $f(\alpha)=0$, and the goal is to extract the subleading term. For doing this properly, a crucial observation taken from [14] is that for $q\to q_c = f'(\alpha_-)$ the tail exponent $\omega_q$ characterizing the IPR probability density $P_q(z)\sim z^{-1-\omega_q}$ should tend to $\omega_{q_c}=1$. As the derivative $\frac{d}{dq}\omega_q\big|_{q=q_c}$ is generically neither zero nor infinite, the mean value $\overline{z_q} = \int_0^\infty P_q(z)\,z\,dz$ will diverge close to $q=q_c$ as $\overline{z_q}\sim(q_c-q)^{-1}$. In turn, as $z_*(\alpha)$ is the Lagrange conjugate of $\overline{z_q}$, the divergence of the latter implies a similar behaviour $z_*(\alpha)\sim(\alpha-\alpha_-)^{-1}$ in the vicinity of $\alpha_-$. At the same time, generically $f'(\alpha)$ and $f''(\alpha)$ neither vanish nor diverge at $\alpha=\alpha_-$, and we see no reason to expect that $B_*(\alpha)$ vanishes or diverges at this point either.
Approximating $f(\alpha_m)\approx f'(\alpha_-)(\alpha_m-\alpha_-)$, we arrive at the following equation for the extreme value threshold $\alpha_m$:
$$(\alpha_m-\alpha_-)\,M^{f'(\alpha_-)(\alpha_m-\alpha_-)} \sim \sqrt{\ln M}. \tag{39}$$
Solving it for $\ln M \gg 1$ to the first non-trivial order beyond $\alpha_m = \alpha_-$ gives
$$\ln p_m^{(typ)} = -\alpha_-\ln M - \frac{3}{2 f'(\alpha_-)}\,\ln\ln M + O(1), \tag{40}$$
which constitutes our main prediction for the typical position of the threshold of extreme values in disorder-induced multifractals. In particular, the value of the absolute maximum $p_{\max}$ will be such that $y = \ln p_{\max} - \ln p_m^{(typ)}$ is a random variable of order unity.
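To get a feel for how slowly the subleading term decays, here is a small numerical sketch. It assumes, purely as an illustration, a hypothetical parabolic spectrum $f(\alpha) = 1 - (\alpha-2.5)^2/6$ in d = 1 (so $\alpha_- = 2.5-\sqrt6$, $q_c = f'(\alpha_-) = \sqrt6/3$), and takes the threshold shift to be $(3/(2q_c))\,\ln\ln M/\ln M$, the standard log-correction form for log-correlated fields:

```python
import numpy as np

# Illustrative spectrum (hypothetical): f(alpha) = 1 - (alpha - 2.5)^2 / 6,
# giving alpha_- = 2.5 - sqrt(6) and q_c = f'(alpha_-) = sqrt(6)/3.
alpha_minus = 2.5 - np.sqrt(6.0)
q_c = np.sqrt(6.0) / 3.0

def alpha_m(M):
    # threshold with the assumed (3 / (2 q_c)) ln ln M / ln M subleading shift
    lnM = np.log(M)
    return alpha_minus + 1.5 / q_c * np.log(lnM) / lnM

shift_1e6 = alpha_m(1e6) - alpha_minus   # shift still ~0.35 at M = 10^6
shift_1e9 = alpha_m(1e9) - alpha_minus   # only modestly smaller at M = 10^9
```

Even at M = 10⁹ the shift has decayed by barely a quarter relative to M = 10⁶, which illustrates why direct numerical extraction of the O(1) term is so difficult.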
Discussion and Conclusion
In conclusion, we have studied, both analytically and numerically, the strongly fluctuating multifractal pattern associated with high values of the periodic ideal 1/f noise. In particular, we concentrated on signal levels comparable with the typical maximum value of the 1/f noise. The exploitation of the thermodynamic formalism allowed us to translate the distribution of the partition function found in previous studies [30,31] into a similar distribution for the counting function of exceedances of such a high level. The power-law forward tail of the latter distribution was shown to give rise to a parametric difference between the mean and the typical value of the counting function when the position of the high level approaches the threshold x_m of extreme values. Such a mechanism, which can be traced back to the logarithmic correlations inherent in the 1/f noise, allowed us to explain the universal coefficient in front of the subleading term in the position of the threshold x_m.
We have also performed direct numerical simulations of the 1/f signal and calculated numerically the lowest two moments of the partition function. This served to demonstrate that for samples of M ~ 10^6 points the numerics follows the M = ∞ results rather faithfully. Performing the same check for the counting function moments, however, showed that the truly asymptotic results can never be achieved even with very moderate accuracy due to prohibitively slow convergence. Instead, even for M as big as M ~ 10^9 one still has to use the more elaborate finite-M formulas which we derived in the present paper for that goal. This lesson may prove important in view of the growing interest in numerical simulations of related systems arising in the framework of the Random Matrix Theory and the Riemann zeta function along the critical line [47].
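As a minimal, self-contained sketch of such a simulation (not the authors' code), one can synthesize the periodic Gaussian signal from Fourier modes with amplitudes $\sqrt{2/k}$, so that the covariance is approximately $-2\ln|2\sin(\pi(i-j)/M)|$ and the pointwise variance grows like $2\ln M$. The snippet draws one realization and records its spatial variance, its maximum, and the q = 1 partition function:

```python
import numpy as np

# One realization of a periodic Gaussian 1/f signal on M points, synthesized
# from Fourier modes with amplitude sqrt(2/k) (a sketch, not the authors' code).
rng = np.random.default_rng(0)
M = 2**14
k = np.arange(1, M // 2)
a = rng.standard_normal(k.size)
b = rng.standard_normal(k.size)

# Pack the modes into an rfft spectrum so that
# V_i = sum_k sqrt(2/k) * (a_k cos(2 pi k i / M) + b_k sin(2 pi k i / M)).
C = np.zeros(M // 2 + 1, dtype=complex)
C[1:-1] = (M / 2.0) * np.sqrt(2.0 / k) * (a - 1j * b)
V = np.fft.irfft(C, n=M)

var_emp = V.var()        # expected near sum_k 2/k ~ 2 ln(M/2) ~ 19
vmax = V.max()           # typically of order 2 ln M minus a log-log correction
Z1 = np.exp(V).sum()     # partition function at q = 1
```

Repeating this over many realizations (and many M) is exactly the kind of experiment where the slow ln ln M convergence discussed above becomes visible.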
Finally, by comparing the results obtained for 1/f noises in our paper with those known to hold for multifractal patterns of wave-function intensity at the points of Anderson transitions [9,14], we propose a quite general formula (40) for the position of extreme values in generic disorder-generated multifractal patterns with non-parabolic singularity spectra. We hope that this prediction can be checked against accurate numerical data in random multifractals of various origins, and will generate further interest in the statistics of high and extreme values in such multifractal patterns. We leave this issue, as well as a related but much more difficult question about the actual statistics of the counting function in the region of extreme states, for future investigations. For 1/f noise the latter should involve understanding how the so-called "freezing phenomena", known to have a profound influence on the behaviour of the partition function $Z_q$ with |q| > 1, are reflected in the thermodynamic formalism correspondence between $Z_q$ and the counting function. Note that the freezing mechanism suggested in [29,30,31] predicts for |q| > 1 the tail behaviour of the distribution of $Z = Z_q$ to be $P_{|q|>1}(Z) \sim Z^{-(1+\frac{1}{|q|})}\ln Z$. It is based on considering properties of the generating function $g_q(y) = \overline{\exp\{-e^{-qy} Z_q/Z_e(q)\}}$, which in the limit M → ∞ is conjectured to stay q-independent (i.e. "frozen") at the value $g_{|q|=1}(y)$ for all $|q| > q_c = 1$. The factor ln Z in $P_{|q|>1}(Z)$ plays a prominent role and is believed to be a universal feature within the class of Gaussian logarithmically correlated fields. Note that such a factor will be precisely absent for the i.i.d. case of the standard REM model.
To that end, let us mention that in the context of Anderson Localization a certain use of the thermodynamic formalism for the IPRs, combined with a clever heuristic power counting [14,9], led to predicting the probability density for $I = I_q$ with $q > q_c = f'(\alpha_-)$ to be of the form $P_{q>q_c}(I) \sim I^{-(1+q_c/q)}$. It is most natural to suspect that a logarithmic factor ln I should be present in the above formula for $P_{q>q_c}(I)$ as well, and that the accuracy of the power-counting procedure used in [14,9] was simply not enough to account for it. Closely related questions are whether the generating function $\tilde g_q(y) = \overline{\exp\{-e^{-qy} I_q/I_q^{(t)}\}}$ will actually be q-independent for $q > q_c = f'(\alpha_-)$ for general non-parabolic multifractals, and whether the probability density of the logarithm of the (appropriately shifted) absolute maximum $y = \ln p_m^{(typ)} - \ln p_{\max}$, with $p_m^{(typ)}$ given by (40), will show a characteristic non-Gumbel tail $|y|e^{-|y|}$, y → −∞ [29,30,31], as our extended analogy would suggest. All these intriguing issues certainly deserve further investigation, both numerically and analytically.
where $f' \equiv \frac{df}{dV}$. To verify the proposition we introduce the vector $v = (V_1, V_2)^T$, denote
$$\hat C = \begin{pmatrix} c_0 & c \\ c & c_0 \end{pmatrix},$$
and write the joint probability density of $V_1$ and $V_2$ as
$$\mathcal P(v) = \frac{1}{2\pi\sqrt{\det\hat C}}\,e^{-\frac{1}{2}v^T\hat C^{-1}v},$$
from which it is immediately clear that $v\,\mathcal P(v) = -\hat C\,\nabla_v\mathcal P(v)$. This implies
$$\overline{V_1 f(V_2)} = -\int f(V_2)\left(c_0\,\partial_{V_1}+c\,\partial_{V_2}\right)\mathcal P(v)\,dv = c\,\overline{f'(V_2)},$$
where the last equality follows after integration by parts.
Our goal is to extract the large-M asymptotic behaviour of the integral featuring in (.1). For $2 < x < 2\sqrt{2}$ we have $u_*\in(0,1)$, so we can apply the standard Laplace method. Using $L(u_*) = -1 + \sqrt{2}\,x - x^2/4$ and $\frac{d^2}{du^2}L(u)\big|_{u=u_*} = 2\sqrt{2}/x$, we find the leading-order contribution. Finally, using the asymptotic formula $\mathrm{Erfc}\big(x\sqrt{\ln M}\big) \approx \frac{M^{-x^2}}{x\sqrt{\pi\ln M}}$, we arrive at the expression (32).
For robust big data analyses: a collection of 150 important pro-metastatic genes
Metastasis is the greatest contributor to cancer-related death. In the era of precision medicine, it is essential to predict and to prevent the spread of cancer cells to significantly improve patient survival. Thanks to the application of a variety of high-throughput technologies, accumulating big data enables researchers and clinicians to identify aggressive tumors as well as patients with a high risk of cancer metastasis. However, there have been few large-scale gene collection studies to enable metastasis-related analyses. In the last several years, emerging efforts have identified pro-metastatic genes in a variety of cancers, providing us the ability to generate a pro-metastatic gene cluster for big data analyses. We carefully selected 285 genes with in vivo evidence of promoting metastasis reported in the literature. These genes have been investigated in different tumor types. We used two datasets downloaded from The Cancer Genome Atlas database, specifically, datasets of clear cell renal cell carcinoma and hepatocellular carcinoma, for validation tests, and excluded any genes for which elevated expression level correlated with longer overall survival in any of the datasets. Ultimately, 150 pro-metastatic genes remained in our analyses. We believe this collection of pro-metastatic genes will be helpful for big data analyses, and eventually will accelerate anti-metastasis research and clinical intervention.
Background
Cancer metastasis is the greatest cause of death in almost all types of malignancies [1]. Multiple factors from the tumor and the host contribute to the formation and progression of distant secondary tumors [1,2], and most of the mechanistic studies to date have mainly focused on the metastatic potential of tumor cells. It is believed that the metastasis of single cancer cells begins with the cells gaining the ability to migrate and invade. The cancer cells can gain motility in several ways, including epithelial-mesenchymal transition (EMT) and fusion of cancer cells to highly mobile bone marrow-derived cells [3,4]. In the metastases formed by clusters of tumor cells, EMT may not be necessary [5]; however, the layer of endothelial cells enveloping the entire tumor cluster/embolus seems critical for the survival of tumor clusters [6].
The ability to identify cancer patients with a high risk of metastasis is essential in the era of precision medicine. In addition to applying combinations of clinicopathologic parameters, known as clinical prognostic classifiers in some circumstances, molecular profiling based on high-throughput technologies is expected to allow for a more accurate and robust prognostic prediction of metastatic potential in patients. How to effectively analyze big data generated from high-throughput screening is an emerging issue for many bioinformaticians. We hypothesize that, with optimal weighting on the impact of each individual gene, a collection of key pro-metastatic genes could be useful to generate a prognostic tool to identify the metastatic potential of a specific tumor and novel signaling pathways underlying metastasis.
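A minimal sketch of the weighted-signature idea described above (entirely synthetic numbers; in a real application the per-gene weights would be fit on a training cohort, e.g. via Cox regression):

```python
import numpy as np

# Hypothetical linear risk score over a 150-gene panel: weight each
# standardized expression value and split the cohort at the median score.
# All numbers here are synthetic; real weights would be fit on training data.
rng = np.random.default_rng(42)
n_patients, n_genes = 200, 150
expr = rng.standard_normal((n_patients, n_genes))   # standardized expression
weights = rng.uniform(0.0, 1.0, n_genes)            # assumed per-gene impact

score = expr @ weights                              # per-patient risk score
high_risk = score > np.median(score)                # median split for survival curves
```

The two resulting groups can then be compared with the usual survival machinery (Kaplan-Meier curves and a log-rank test).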
Main text
The increased investigation of cancer metastasis in recent years has identified over 200 pro-metastatic genes. In this review, we aim to identify a group of key pro-metastatic genes with in vivo functional evidence and reasonable clinical relevance for application to big data analyses. Figure 1 summarizes the analytic procedure of this review. First, we carefully selected 285 genes from the literature through searching PubMed based on the following criteria: (1) author-provided evidence of promoting migration and/or invasion of cancer cells; (2) author-provided evidence of promoting metastasis in vivo using animal models; (3) when a gene has been reported as pro-metastatic in several articles, all articles reporting the link were reviewed, and the most convincing studies are listed as the key references.

Although different tumor types are believed to rely on different molecular mechanisms for metastasis, 23 common pro-metastatic genes have been identified in our analyses, associating with poor prognosis in both cancer types. Among them, we are most interested in 11 genes that are not only statistically significant in terms of prognostic impact but also associated with distinct overall survival curves in both cohorts, suggesting these genes' profound biological impact on tumor progression. For the other 12 genes, although their biological impact on tumor progression was found to be significant in log-rank tests in both cohorts, the survival curves of the high versus low expression groups crossed at some time points. The 11 most interesting genes are BIRC5 (Survivin), CXCL1, CXCL8 (IL8), E2F1, ETV4, EZH2, MMP1, MMP9, MYB, PTTG1, and YBX1. Figure 2 shows the survival curves of patients with either ccRCC or HCC expressing these 11 genes.

Fig. 2 The survival curves of two cohorts of cancer patients comparing the mRNA expression levels of 11 genes. The data were retrieved from The Cancer Genome Atlas (TCGA) database. The survival curves were plotted using the Kaplan-Meier method and compared using the log-rank test. Consistently, among all 11 genes presented in this figure, elevated gene expression levels significantly associate with shorter overall patient survival (P < 0.05) in both tumor types. ccRCC, clear cell renal cell carcinoma; HCC, hepatocellular carcinoma.

Our findings suggest that different tumor types may partially share some common metastatic mechanisms, therefore strengthening the rationale of applying the list of 150 pro-metastatic genes to big data analyses. Interestingly, 4 of these 11 genes encode secreted proteins, namely, CXCL1, CXCL8, MMP1, and MMP9, which are ideal pharmaceutical targets for blocking cancer metastasis.
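The log-rank comparison behind such survival analyses can be sketched from scratch as follows (synthetic survival times, not TCGA data; this is the standard two-group log-rank chi-square with 1 degree of freedom):

```python
import numpy as np
from scipy.stats import chi2

def logrank(times_a, events_a, times_b, events_b):
    """Two-group log-rank test (standard chi-square form, 1 d.f.)."""
    times_a = np.asarray(times_a, float); events_a = np.asarray(events_a, bool)
    times_b = np.asarray(times_b, float); events_b = np.asarray(events_b, bool)
    event_times = np.unique(np.concatenate([times_a[events_a], times_b[events_b]]))
    o_minus_e, var = 0.0, 0.0
    for t in event_times:
        n_a = np.sum(times_a >= t)                  # at risk, group A
        n_b = np.sum(times_b >= t)                  # at risk, group B
        n = n_a + n_b
        d_a = np.sum((times_a == t) & events_a)     # events in A at time t
        d = d_a + np.sum((times_b == t) & events_b)
        o_minus_e += d_a - d * n_a / n              # observed minus expected
        if n > 1:
            var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    stat = o_minus_e**2 / var
    return stat, chi2.sf(stat, df=1)

# Synthetic, clearly separated "high" vs "low" expression groups (all events):
t_high = np.arange(1, 51)    # early events
t_low = np.arange(51, 101)   # late events
stat, p = logrank(t_high, np.ones(50, bool), t_low, np.ones(50, bool))
```

With two completely separated groups the test returns an extremely small P-value, mirroring the P < 0.05 comparisons reported for the 11 genes.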
Although not covered in this review article, emerging data regarding the regulatory roles of non-coding RNAs in metastasis have linked different pro-metastatic genes into signaling cascades [7-9]. Further investigation into the roles of non-coding RNAs in metastasis is warranted.
Conclusions
In summary, we present here a collection of 150 important pro-metastatic genes for big data analyses. We expect more key molecules to be identified and validated in the near future to be included in the list, thereby accelerating the efforts in preventing and treating cancer metastasis.
Causal relationship between asthma and ulcerative colitis and the mediating role of interleukin-18: a bidirectional Mendelian study and mediation analysis
Objective Numerous observational investigations have documented a correlation between asthma and ulcerative colitis (UC). In this Mendelian Randomization (MR) study, we utilized extensive summary data from Genome-Wide Association Studies (GWAS) to further estimate the association between adult-onset asthma and the risk of UC, and to investigate the role of Interleukin-18 (IL-18) as a potential mediator.

Materials and methods A two-step, two-sample MR study was conducted through mediation analysis. For this study, we employed a two-sample MR analysis using the inverse variance-weighted (IVW), weighted median, weighted mode, and MR-Egger regression techniques. We utilized publicly accessible summary statistics from a GWAS meta-analysis of adult-onset asthma in the UK Biobank (n=327,253; cases=26,582; controls=300,671) as the exposure factor. The outcomes were derived from GWAS data of individuals with European ancestry (n=26,405; cases=6,687; controls=19,718). GWAS data for IL-18 were obtained from individuals of European ancestry (n=9,785,222; cases=3,636; controls=9,781,586).

Results The MR analysis indicates that adult-onset asthma is associated with an increased risk of UC, with an odds ratio (OR) of 1.019 (95% CI 1.001-1.045, P=0.006). However, there is no strong evidence to suggest that UC significantly impacts the risk of adult-onset asthma. IL-18 may act as a potential mediator in the causal relationship between adult-onset asthma and UC, with a mediation proportion of 3.9% (95% CI, 0.6%-6.9%).

Conclusion In summary, our study established a causal relationship between asthma and UC, in which IL-18 contributes to a small extent. However, the primary factors underlying the influence of asthma on UC remain unclear. Future research should focus on identifying other potential mediators. In clinical practice, it is important to pay greater attention to intestinal lesions in patients with asthma.
Introduction
Asthma, a diverse and inflammatory respiratory condition, is closely linked to the remodeling of the airways. Individuals with asthma experience difficulty breathing and wheezing, caused by blockage and increased sensitivity of the air passages (1). Asthma is a prevalent chronic condition in childhood (2), and is also frequently found in adult populations (3). Approximately 339 million individuals worldwide suffer from asthma, with projections suggesting that this figure will rise to 400 million by 2025. Asthma is understood to be caused by a combination of different environmental factors and an individual's genetic makeup, classifying it as a genuine multifactorial condition (4). In Europe, more than 8% of adults are afflicted with asthma. Among them, ten million individuals developed the disease before the age of 45 years (5), with the highest prevalence rates observed in countries such as the United Kingdom and Sweden (6). In the United Kingdom, one out of every twelve adults (8.3% prevalence) is affected by asthma (7).
UC is a persistent and recurring non-penetrating condition that causes inflammation in the colon, primarily impacting the large intestine. Its progression is unpredictable, typically extending continuously from the distal to the proximal end, without skip lesions. Common symptoms include sores in the mucous membranes, bloody stools, rectal spasms, and an increased vulnerability to developing colorectal cancer (8). UC shows a higher prevalence in adult males compared to females, with a ratio of around 1.5 males per female. However, in children, girls are more likely to develop UC. The peak age of onset for UC is between 30 and 40 years (9). UC affects approximately 1.5 million people in Europe and over 3 million people worldwide (10), with a prevalence rate exceeding 181.1 individuals per 100,000 in North America and Europe (11). About 24.2011% of the British population is affected by UC (12). The prevalence of this illness is increasing annually, and individuals in advanced stages are susceptible to complications, leading to a decline in quality of life. There is also a rise in mortality rates among individuals who have recently been diagnosed and those with advanced disease (13).
The pathogenesis of asthma and UC remains uncertain, despite extensive research on the genetic and environmental aspects of these conditions (14, 15). Observational studies suggest a link between long-term respiratory conditions like asthma and gastrointestinal disorders (10, 12). This phenomenon of mutual influence is termed the 'gut-lung axis' (16). Certain exposures during early life are associated with an increased vulnerability to respiratory illnesses and alterations in the composition of gut microbiota (17). However, the precise mechanisms responsible for the interaction between the gut and lungs remain unclear. For example, some observational studies have focused on the correlation between asthma and inflammatory bowel disease (IBD) (18). Yet, which specific disease causes the other is still uncertain. This uncertainty persists even in systematic reviews and meta-analyses. Therefore, further investigation is required to unravel the complex connections among these ailments.
To the best of our knowledge, there is currently no research on the potential pathways between asthma and UC. Previous studies have provided evidence suggesting an association between IL-18 and the pathogenesis of asthma, wherein increased IL-18 expression was found in the serum of patients (19). Additionally, another clinical study has shown that IL-18 levels are related to the severity of UC (20). Therefore, IL-18 may be a potential mediator between asthma and UC.
Over the past few years, the application of two-sample MR analysis has become prevalent as a robust approach for inferring causality and investigating the impact of exposure factors on various diseases (21). Mendelian randomization utilizes genetic variations as instrumental variables (IVs) to evaluate the causal association between exposure factors and diseases, thus mitigating the influence of genetic and environmental confounders (22). This approach estimates causal effects by collecting exposure and outcome data from separate samples (23).
Abbreviations: UC, ulcerative colitis; IL-18, interleukin-18.

Through the utilization of this approach, we can enhance the precision of evaluating the influence of asthma on the susceptibility to UC and acquire additional understanding regarding the intricate correlation existing between these two medical conditions. New insights and approaches for preventing and treating UC are anticipated to be revealed by the findings of this research.
Study design

Choosing genetic variants and identifying data sources
The genetic variation data will be sourced from GWAS of asthma and UC, which we will utilize as datasets. The datasets usually contain numerous individual samples, genotype data, and disease-related effect sizes. We explored the IEU Open GWAS project (https://gwas.mrcieu.ac.uk/), which compiles summary statistics from numerous GWAS.
As the exposure factor, we chose summary data from the UK Biobank cohort for individuals with adult-onset asthma, with a sample size of 327,253 (cases = 26,582, controls = 300,671) (24). To conduct the two-sample MR analysis, we utilized genetic variants linked to asthma as IVs, obtaining summary statistics with a genome-wide significance P-value threshold of 5.00E-08. In particular, we acquired summary data for 25 single nucleotide polymorphisms (SNPs) linked to asthma. As the outcome dataset, the dataset for UC (n = 26,405; cases = 6,687, controls = 19,718) was sourced from a European population (25). The summary statistical data for IL-18 levels were derived from a published GWAS meta-analysis, which included individuals of European ancestry (n = 9,785,222; cases = 3,636; controls = 9,781,586) (26).
Statistical analysis
In MR studies, IVW is a commonly employed method used to amalgamate causal effect estimates for each single nucleotide polymorphism (SNP), assessing the impact of a specific biological factor on a particular outcome (27). Additionally, MR studies utilize MR-Egger (28) and weighted median methods (29) to validate and complement IVW results. MR-Egger assesses directional pleiotropy of IVs, whereas the weighted median method offers enhanced precision relative to MR-Egger (30). These methods, employed under varying assumptions of validity, serve to derive MR estimates and facilitate a deeper understanding of causal relationships. It is worth noting that the IVW method does not require individual-level data but can instead utilize summary data to directly compute causal effect estimates. The IVW method incorporates information from multiple genetic variants and can be regarded as a two-stage least squares or allele score analysis conducted at the individual level; it is employed as the primary approach for MR analysis in this context (31).
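With summary statistics, the fixed-effect IVW estimate reduces to an inverse-variance-weighted average of the per-SNP Wald ratios (SNP-outcome effect divided by SNP-exposure effect). A minimal sketch with synthetic effect sizes (not the study's data):

```python
import numpy as np

# Summary-data IVW: weighted average of per-SNP Wald ratios beta_out/beta_exp
# with weights beta_exp^2 / se_out^2 (all numbers below are synthetic).
rng = np.random.default_rng(1)
beta_exp = rng.uniform(0.05, 0.20, 13)   # SNP -> exposure effects (13 IVs)
se_out = rng.uniform(0.01, 0.03, 13)     # SEs of SNP -> outcome effects
beta_out = 0.5 * beta_exp                # noiseless outcome effects, theta = 0.5

ratio = beta_out / beta_exp              # per-SNP Wald ratios
w = beta_exp**2 / se_out**2              # inverse-variance weights
theta_ivw = np.sum(w * ratio) / np.sum(w)
se_ivw = np.sqrt(1.0 / np.sum(w))        # fixed-effect IVW standard error
```

Because the synthetic outcome effects are exactly 0.5 times the exposure effects, the pooled estimate recovers 0.5; on the log-odds scale, exponentiating such an estimate gives the OR reported by packages like TwoSampleMR.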
Primary analysis
In Mendelian randomization analysis, it is crucial that genetic variation is linked to the specific exposure and is not influenced by potential confounding factors (32). Initially, we evaluated the individual correlations between asthma and SNPs. Furthermore, we investigated the connections between each SNP and the susceptibility to UC. We utilized summary data from GWAS of asthma and UC, employing a two-sample MR approach (33). An overview of the study is detailed in the technical flowchart (Figure 1), which assesses the mutual causal relationship between asthma and UC (Figure 1A), referred to as the total effect.
Diagrams illustrating the associations investigated in this study are provided below. (A) The total effect between asthma and UC, where 'c' represents the total effect when genetically predicted asthma is considered as the exposure and UC as the outcome, while 'd' represents the total effect when genetically predicted UC is considered as the exposure and asthma as the outcome. (B) The total effect is further decomposed into two components: (i) the indirect effect, which is calculated using a two-step approach (with 'a' denoting the total effect of asthma on IL-18, and 'b' representing the effect of IL-18 on UC) along with the product method (a × b), and (ii) the direct effect (c′ = c − a × b). The proportion mediated is defined as the ratio of the indirect effect to the total effect.

Genetic IVs were constructed according to the following criteria (27, 34): (1) genetic IVs exhibited a level of significance reaching genome-wide association (P < 5.00E-08); (2) genetic IVs demonstrated no linkage disequilibrium (LD) among them (r² < 0.001, window size = 10,000 kb); (3) genetic IVs had a minimum minor allele frequency (MAF) > 0.01; (4) IVs located within palindromic sequences were excluded; (5) SNPs with erroneous causal directions were identified through MR Steiger filtering; (6) genetic IVs associated with confounding factors were removed using the PhenoScanner database; (7) outliers in the IVs were excluded using the outlier-corrected method from the Mendelian randomization pleiotropy residual sum and outlier (MR-PRESSO) model. We excluded weak IVs characterized by an F-statistic less than 10. The F-statistic was calculated as F = R²(n − k − 1) / [(1 − R²)k], where n represents the sample size, k the number of IVs used, and R² the extent to which the IVs explain the exposure (35, 36).
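As a quick numeric illustration of this instrument-strength screen, using the asthma GWAS sample size and 13 retained IVs from this study together with an assumed, purely illustrative R² of 2% (the study's exact R² per instrument is reported only as a range):

```python
# First-stage instrument strength: F = R^2 (n - k - 1) / ((1 - R^2) k).
# n and k are taken from this study (asthma GWAS, 13 retained IVs);
# r2 = 0.02 is an assumed, illustrative value, not a reported one.
n, k, r2 = 327_253, 13, 0.02
F = r2 * (n - k - 1) / ((1.0 - r2) * k)
# Instruments with F > 10 are conventionally considered non-weak.
```

With GWAS-scale samples, even a modest R² yields an F-statistic far above the conventional weak-instrument threshold of 10.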
Mediation analysis
In this study, we employed a two-step MR design to conduct a mediation analysis, aiming to investigate whether IL-18 acts as a mediator in the causal pathway between asthma and UC (see Figure 1B). The total effect was decomposed into a direct effect [the impact of asthma on UC without intermediaries (depicted in Figure 1B, path c′)] and an indirect effect [the influence of asthma on UC through mediators (shown in Figure 1B, path a × b)]. We assessed the magnitude of the mediation effect by calculating the proportion of the indirect effect relative to the total effect. Additionally, we calculated 95% confidence intervals using the delta method to assess the reliability of our results (37). This analysis contributes to a deeper understanding of the causal relationship between asthma and UC, including the mechanisms of mediation involved.
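The two-step bookkeeping can be sketched as follows. The effect sizes below are illustrative log-odds-ratio-scale numbers chosen only to mirror the magnitudes reported in this paper (a ≈ ln 1.0149 for asthma → IL-18 and c ≈ ln 1.019 for asthma → UC; b and all standard errors are hypothetical):

```python
import numpy as np

# Two-step mediation: indirect effect a*b, proportion mediated (a*b)/c,
# delta-method CI for the indirect effect. All inputs are illustrative
# log-OR-scale numbers (b and the SEs are hypothetical).
a, se_a = 0.0148, 0.0045    # asthma -> IL-18 (~ ln 1.0149)
b, se_b = 0.050, 0.015      # IL-18 -> UC (hypothetical)
c = 0.0188                  # asthma -> UC total effect (~ ln 1.019)

indirect = a * b
se_indirect = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # delta method for a*b
prop_mediated = indirect / c
ci_low = (indirect - 1.96 * se_indirect) / c
ci_high = (indirect + 1.96 * se_indirect) / c
```

With these hypothetical inputs the mediated proportion comes out to a few percent of the total effect, the same order as the 3.9% reported in the abstract; the direct effect is simply c − a·b.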
Heterogeneity and sensitivity test
We employed the MR Steiger filtering method to assess the causal relationships between each extracted SNP and the exposure factor, as well as the research outcomes (38). This approach calculates the variance explained in the exposure and outcomes by instrumental SNPs, verifying whether the variance in outcomes is less than that in the exposure. 'TRUE' MR Steiger results indicate a causal relationship in the expected direction, while 'FALSE' results suggest a causal relationship in the opposite direction. We excluded SNPs with 'FALSE' results because they significantly impacted the research outcomes rather than the exposure factor.
This study employed various methods to assess heterogeneity and horizontal pleiotropy among SNPs. We utilized Cochran's Q statistic (39, 40), funnel plots, the MR-Egger intercept method (28), and MR-PRESSO (41) to detect and address outliers, while a random-effects model was employed to evaluate the stability of results. Additionally, a leave-one-out analysis was conducted to validate the impact of each SNP on the overall causal estimates. These methodologies collectively contribute to ensuring the credibility and robustness of the MR analysis (42). All statistical analyses were conducted using R (version 4.2.3), in conjunction with the TwoSampleMR and MR-PRESSO packages (43).
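Cochran's Q compares each SNP's Wald ratio with the pooled IVW estimate and refers the result to a chi-square distribution with J − 1 degrees of freedom. A minimal sketch with synthetic per-SNP ratios (not the study's data):

```python
import numpy as np
from scipy.stats import chi2

# Cochran's Q heterogeneity check on per-SNP Wald ratios (synthetic numbers):
# Q = sum_j w_j (theta_j - theta_IVW)^2, compared to chi-square with J-1 d.f.
theta = np.array([0.45, 0.52, 0.48, 0.55, 0.50, 0.47])   # per-SNP causal ratios
se = np.array([0.05, 0.06, 0.04, 0.07, 0.05, 0.06])      # their standard errors

w = 1.0 / se**2
theta_ivw = np.sum(w * theta) / np.sum(w)                # pooled IVW estimate
Q = np.sum(w * (theta - theta_ivw)**2)                   # Cochran's Q
p_het = chi2.sf(Q, df=theta.size - 1)                    # heterogeneity P-value
```

A large Q (small p_het) flags heterogeneity among the instruments, which is what motivates switching to a random-effects IVW model or running MR-PRESSO outlier removal.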
Association of asthma with UC
In the GWAS for asthma, 25 IVs reached significance for association differences. After identifying SNPs with erroneous causal directions through MR Steiger filtering and examining LD among these IVs, one variable (rs35441874) located in a palindromic sequence was excluded. Consequently, 24 independent IVs with no LD association (r² < 0.001, window = 1000 kb) were identified. After screening, we identified seven variables (rs12470864, rs17454584, rs2338821, rs6866614, rs72743461, rs7936312, and rs943689) associated with inflammatory bowel disease, and no potential confounding factors such as smoking, alcohol consumption, or body mass index were found. We then used the MR-PRESSO method to remove four outliers (rs2381712, rs28635831, rs4771332, rs7824278) to ensure the robustness of the assessment results, and excluded genetic variants with weak instruments (F-statistic < 10). The individual SNPs explain a variance proportion of 2% to 5% for asthma. Ultimately, 13 independent SNPs were included as IVs for asthma (Supplementary Table S1).
Choosing IVW as the primary method of analysis, the results indicate a causal relationship between asthma and UC (OR = 1.019, 95% CI 1.001-1.045, P = 0.006). The weighted median method (OR = 1.023, 95% CI 0.998-1.053, P = 0.004) and the weighted mode method (OR = 1.026, 95% CI 0.999-1.052, P = 0.041) are consistent with the main analysis method, IVW. The MR-Egger regression shows that directional pleiotropy is unlikely to bias the results (intercept = -0.194; P = 0.103), but the results of the MR-Egger regression method (OR = 1.027, 95% CI 0.802-1.518, P = 0.077) differ from the aforementioned analysis methods. Considering that weighted median estimation has the advantage of maintaining higher estimation precision compared to MR-Egger analysis (44), the results of the MR analysis may support a potential causal relationship between asthma and UC. The forest plots and scatterplots depicting the results of the four MR methods can be found in Figure 2 and Figure 3.
Association of asthma with IL-18
After removing palindromic and ambiguous SNPs, as well as SNPs identified by MR Steiger filtering as having the incorrect causal direction, we were left with 32 genome-wide significant SNPs to use as IVs. Individual SNPs explain 1.4% to 5% of the variance in asthma, and the F-statistics are far greater than 10 (Supplementary Table S2). Genetically predicted asthma was positively associated with IL-18 risk according to the IVW (OR 1.0149, 95% CI 1.006-1.024, P = 0.002) and weighted median (OR 1.02, 95% CI 1.004-1.030, P = 0.009) methods; the MR-Egger (OR 0.89, 95% CI 0.775-1.013, P = 0.086) and weighted mode (OR 1.02, 95% CI 0.991-1.046, P = 0.195) estimates did not reach statistical significance. The forest plot and scatterplot of the results can be found in Figure 2 and Figure 3.
Association of UC with asthma
To assess the potential causal relationship between UC and asthma, we conducted a reverse Mendelian randomization (MR) analysis. Within the GWAS of UC, 96 IVs were statistically significant. After addressing linkage disequilibrium among the IVs and excluding 11 IVs located in palindromic sequences, we obtained a set of 85 IVs free from linkage disequilibrium (r² < 0.001, window = 1000 kb). Additionally, 39 IVs associated with confounding factors such as asthma, body weight, smoking, diabetes, and lipid levels were excluded using the PhenoScanner database. We further applied the MR-PRESSO method to eliminate 24 outlier IVs. Finally, 22 independent SNPs were included as IVs for UC. Although their F-statistics are all greater than 10, the individual SNPs explain only 0.1% to 0.7% of the variance in UC (Supplementary Table 4).
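The F-statistic used above to screen for weak instruments is commonly approximated, for a single SNP, as F = R²(N − 2)/(1 − R²), where R² is the variance in the exposure explained by the SNP and N is the GWAS sample size. The sketch below shows why an SNP can clear F > 10 while explaining well under 1% of variance; the sample size used is hypothetical.

```python
def single_snp_f(r2, n):
    """Standard single-SNP instrument-strength approximation:
    F = R^2 * (N - 2) / (1 - R^2)."""
    return r2 * (n - 2) / (1.0 - r2)

# With a hypothetical GWAS of N = 20,000, even an SNP explaining
# only 0.1% of the exposure variance exceeds the F > 10 threshold:
f_weak = single_snp_f(0.001, 20000)    # ~20
f_strong = single_snp_f(0.007, 20000)  # ~141
```

This is why the text can simultaneously report F > 10 for all UC instruments and note that each SNP explains only 0.1-0.7% of variance: instrument strength depends on sample size as well as R².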
Reverse analysis revealed that the IVW (OR = 7.860, 95% CI 5.268-11.729, P < 0.001), weighted median (OR = 6.607, 95% CI 5.462-7.993, P < 0.001), and weighted mode (OR = 6.329, 95% CI 5.296-7.562, P < 0.001) methods all support a causal relationship between UC and asthma. However, the results of the MR-Egger analysis (OR = 0.577, 95% CI 0.117-2.847, P = 0.507) are inconsistent with those of the three other methods (Figure 2). Cochran's Q test indicates the presence of heterogeneity (Table 1). Repeated application of MR-PRESSO to the IVs indicated the absence of horizontal pleiotropy, as suggested by the Global Test P-value of 0.32 (Table 1). However, the MR-Egger regression analysis revealed the presence of horizontal pleiotropy (Intercept = 0.138; P = 0.004) (Table 1). Therefore, while the primary IVW analysis suggests an association between UC and asthma, the presence of horizontal pleiotropy in the IVs may make the results less robust and prone to false positives. Furthermore, the individual SNPs have relatively little power to explain the genetic effects on UC (Supplementary Table 4). Thus, we conclude that there is insufficient genetic evidence to support a causal relationship between UC and asthma.
FIGURE 2 Forest plot to visualize the causal effects of IL-18 with asthma and UC.
Proportion of the association between asthma and UC mediated by IL-18
Our analysis indicates that IL-18 plays a significant role in the pathway from asthma to UC. Specifically, asthma is associated with an increase in IL-18 levels, and this increase in IL-18 is in turn linked to an elevated risk of UC. Using mediation analysis with the delta method for computation, IL-18 explains 3.9% (mediation proportion: 3.9%, 95% CI, 0.9%-6.9%) of the increased risk of UC associated with asthma (Figure 4).
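The mediated proportion above comes from a two-step product-of-coefficients calculation with a delta-method standard error. The sketch below shows one common simplified form of that computation; the input effect sizes are hypothetical, and the version here ignores the uncertainty in the total effect, a simplification some applied analyses make.

```python
import math

def mediation_proportion(b1, se1, b2, se2, b_total):
    """Two-step MR mediation: indirect effect = b1 * b2
    (exposure->mediator times mediator->outcome); proportion
    mediated = indirect / total. Delta-method SE for the product
    term; uncertainty in b_total is ignored for simplicity."""
    indirect = b1 * b2
    se_indirect = math.sqrt(b1 ** 2 * se2 ** 2 + b2 ** 2 * se1 ** 2)
    prop = indirect / b_total
    se_prop = se_indirect / abs(b_total)
    ci = (prop - 1.96 * se_prop, prop + 1.96 * se_prop)
    return prop, ci

# Hypothetical log-odds effects: asthma->IL-18, IL-18->UC, asthma->UC
prop, ci = mediation_proportion(0.015, 0.004, 0.05, 0.02, 0.019)
```

More careful treatments propagate the variance of the total effect as well (a second delta-method term), which widens the confidence interval slightly.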
Sensitivity analysis
To assess and correct for pleiotropy in the causal estimates, sensitivity analyses concerning the causal relationship between UC and asthma were presented in Section 3.4 of this paper. We also performed sensitivity analyses for the two other sets of MR studies, namely the relationships between asthma and UC and between asthma and IL-18. Using Cochran's Q-test and funnel plots, no evidence of heterogeneity or asymmetry among these SNPs within the causal relationships was observed (Table 1 and Supplementary Figure S1). The MR-PRESSO global test did not detect potential horizontal pleiotropy (Table 1). Furthermore, we employed leave-one-out analysis to validate the impact of each SNP on the overall causal estimate (Supplementary Figure S1): after removing each SNP in turn, we repeated the MR analysis on the remaining SNPs, and the consistent results suggest that no single SNP drives the causal estimates. Additionally, in the MR study of IL-18 on UC, the limited number of IVs (only two) meant that MR-Egger regression analysis and MR-PRESSO global tests could not be conducted for sensitivity analysis. We therefore used the Wald ratio method to test for horizontal pleiotropy for each SNP (45). The results consistently showed no evidence of horizontal pleiotropy (rs385076 P = 0.019, rs71478720 P = 0.018) (Supplementary Table 3).
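The Wald ratio mentioned above is the simplest single-SNP MR estimator: the outcome association divided by the exposure association. A minimal sketch, with a first-order standard error and a normal-approximation p-value, follows; the example inputs are hypothetical.

```python
import math

def wald_ratio(beta_exp, beta_out, se_out):
    """Single-SNP Wald ratio estimate with a first-order SE
    (exposure SE ignored) and a two-sided normal p-value."""
    beta = beta_out / beta_exp
    se = se_out / abs(beta_exp)
    z = beta / se
    # Two-sided p from the standard normal CDF via math.erf
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return beta, se, p

# Hypothetical SNP: IL-18 effect 0.08, UC effect 0.004 (SE 0.002)
b, s, p = wald_ratio(0.08, 0.004, 0.002)
```

With only two instruments, estimates like these can be combined by IVW but cannot support MR-Egger or MR-PRESSO, which is the limitation the text describes.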
Discussion
Multiple studies have indicated that people with asthma are more prone to developing UC than the general population (18). Nevertheless, the connection between asthma and UC remains unclear. To address this question, we performed an MR analysis using four estimation techniques: IVW, weighted median, weighted mode, and MR-Egger regression. Our findings suggest a causal connection between asthma and UC (OR = 1.019, 95% CI 1.001-1.045, P = 0.006). Despite the variation among the MR estimates obtained from IVW, MR-Egger, weighted mode, and weighted median analyses, the IVW, weighted mode, and weighted median analyses all provide evidence for a causal link between asthma and UC, especially considering the greater estimation accuracy of the weighted median estimator compared with MR-Egger analysis (29). Our results provide additional evidence for a potential link between asthma and the likelihood of developing UC, validating the association between asthma and UC previously noted in observational investigations. In addition, our results indicate that IL-18 plays a mediating role in this causal relationship, accounting for approximately 3.9% of the effect. However, in the reverse MR analysis, the presence of horizontal pleiotropy in the IVs and the limited power of the instruments to explain genetic effects mean there is insufficient evidence to establish a causal relationship between UC and asthma.
In a population-based retrospective cohort study, the occurrence of UC and Crohn's disease was assessed in individuals who had asthma. The findings suggest that asthma could influence the progression of UC, with potential involvement of various mechanisms (18). The potential cause is the breakdown of immune tolerance and consequent immune damage, resulting in the heightened sensitivity to environmental triggers frequently seen in intestinal and respiratory conditions (46). The gut and respiratory tract originate from the same embryonic structure and have similarities in epithelium, glands, and lymphoid tissue (47). Bronchoalveolar lavage and bronchial biopsies frequently reveal signs of subclinical pulmonary inflammation in individuals diagnosed with UC and Crohn's disease (48, 49). In summary, both intestinal and respiratory epithelial cells originate from the same embryonic source, and it is believed that the correlation between asthma and UC arises from disturbance by possible immune and environmental factors (46, 47). Nevertheless, as not all individuals with asthma develop UC, some uncertainty remains regarding the cause-and-effect connection between asthma and UC. The study by L-X Chen et al. (50) revealed that, in an analysis of 200 asthma patients, IL-18 levels were significantly higher than in the control group; these findings indicated a notable association of serum IL-18 expression levels and IL-18 genetic polymorphism with asthma. Another study (20) demonstrated that the average plasma concentration of IL-18 in patients with active UC (422 ± 88 pg/mL) was twice that of the healthy control group (206 ± 32 pg/mL), suggesting a close correlation between UC activity and plasma interleukin-18 concentration, which aligns with the outcomes of our own research.
The study's robustness derives from the use of Mendelian randomization, which mitigates the inherent biases that may exist in observational studies (51). Nevertheless, even with the Mendelian randomization method, the problem of pleiotropy (correlation) bias remains unresolved (52). Genetic variants can be linked to multiple phenotypes, referred to as 'pleiotropy', which can complicate and bias causal estimates (53). Including multiple variants in MR analysis can enhance statistical power, but it may also introduce invalid IVs and pleiotropy (54). Hence, sensitivity analysis techniques are necessary to verify the accuracy of MR findings. To tackle pleiotropy, we employed weighted median estimation, which yields accurate estimates even when half of the SNPs are not valid instruments (44). Furthermore, MR-Egger regression was employed to examine unbalanced pleiotropy and assess the influence of exposure on outcomes. Despite the potential decrease in precision and power, our weighted median estimates align closely with the IVW estimates, enhancing our confidence in these associations. Our data corroborate prior observational research indicating a link between asthma and UC, and the present results offer a possible way to assess the influence of asthma on the likelihood of developing UC. Simultaneously, the two-step two-sample MR mediation analysis revealed that IL-18 is a potential mediator in the causal relationship between asthma and UC.
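The weighted-median estimator discussed above takes the median of per-SNP Wald ratios under inverse-variance weights, and is consistent as long as SNPs carrying more than 50% of the total weight are valid instruments. The sketch below uses a simple "first crossing" rule rather than the interpolation of the original method, and the inputs are hypothetical.

```python
def weighted_median(ratios, weights):
    """Weighted median of per-SNP Wald ratios: sort ratios and
    return the first one at which the cumulative weight reaches
    half the total. Simplified (no interpolation)."""
    order = sorted(range(len(ratios)), key=lambda i: ratios[i])
    total = sum(weights)
    cum = 0.0
    for i in order:
        cum += weights[i]
        if cum >= total / 2:
            return ratios[i]

# Hypothetical ratios with one pleiotropic outlier (10.0): the
# weighted median stays near the valid instruments' value.
estimate = weighted_median([0.02, 0.018, 0.021, 10.0],
                           [1.0, 1.0, 1.0, 0.5])
```

This robustness to a minority of invalid instruments is why the text leans on the weighted median when it disagrees with MR-Egger.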
In recent years, research has confirmed the significant role of IL-18 in the pathogenesis of asthma (19). IL-18, a cytokine of the IL-1 family, plays a crucial role in fine-tuning cellular immunity: it serves as an auxiliary factor in the development of Th2 cells and the production of IgE, and it is vital in the differentiation of Th1 cells. Studies have found increased expression of IL-18 in the serum of asthma patients, with an association between IL-18 polymorphism and susceptibility to asthma, highlighting its potential importance in asthma treatment. Additionally, the role of IL-18 in UC has gained new understanding. A study investigating the mechanistic role of IL-18 in colitis induced by dextran sulfate sodium (DSS) used a DSS-induced colitis mouse model to examine its functional role (55) and found that IL-18's pro-inflammatory effects became more pronounced in the later stages of the disease. From these studies, we observe that IL-18's role differs between asthma and UC: in asthma, elevated levels of IL-18 are associated with disease activity, while in UC, IL-18 exerts pro-inflammatory effects. These findings suggest that monitoring and regulating IL-18 levels may be crucial in the management of both diseases. For asthma patients, controlling the disease and monitoring IL-18 levels can contribute to disease prevention and early detection and may also reduce the risk of developing UC. This is particularly significant given that UC, a challenging-to-treat disease, can lead to severe complications such as an increased risk of tumors in later stages. These studies therefore offer new insights into the combined management of asthma and UC.
There are several limitations to this study. Firstly, the impact of genetic variants on the exposures is limited, as they can account for only a small portion of the variability in a given exposure (56); our analysis may therefore have limited power to detect associations. Secondly, the investigation of asthma, IL-18, and UC relies on individuals of European descent; additional MR studies in diverse populations are needed to account for potential racial disparities and selection biases that could affect these relationships. Thirdly, this research used GWAS summary-level repositories and did not have individual-level data, making it unfeasible to conduct subgroup analyses by age or gender or to compare causal effects among subgroups. Fourthly, we acknowledge that, due to sample size limitations, the instrumental variables may not provide high statistical power, particularly within smaller subgroups (57). Additionally, despite our efforts to address potential pleiotropy and minimize horizontal pleiotropy, its complete elimination in MR analysis remains challenging.
FIGURE 3 (A) Scatterplot results depicting the relationship between asthma and UC in the four MR analyses; (B) scatterplot results for the relationship between asthma and IL-18 in the same four MR analyses. The slope of each line represents the causal estimate for that method: the blue line is the inverse-variance weighted estimate, the green line the weighted median estimate, the dark blue line the MR-Egger estimate, and the dark green line the weighted mode estimate.
FIGURE 4 Schematic diagram of the IL-18 mediation effect. | 2023-12-16T16:22:38.663Z | 2023-12-14T00:00:00.000 | {
"year": 2023,
"sha1": "b063775deb28781f73f83a22d78f4c6c2e789515",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2023.1293511/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c57d178c330930dd96ac2c2a0aa2b1fb9ee5b92c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55098195 | pes2o/s2orc | v3-fos-license | Distinct Effects of Bovicin HC5 and Virginiamycin on in vitro Ruminal Fermentation and Microbial Community Composition
Antibiotics are used as feed additives for cattle to alter rumen fermentation and increase weight gain. However, this practice can potentially lead to the presence of antibiotic residues in milk and meat and the selection of multiresistant bacteria. Bacteriocins have been suggested as an alternative to antibiotics used in animal production. This work aimed to evaluate the in vitro effects of bovicin HC5 and virginiamycin on ruminal fermentation and on microbial community composition. Ruminal fluid was collected from fistulated cows fed corn silage and incubated with Trypticase (15 g L -1 ). Cultures treated with bovicin HC5 or virginiamycin decreased (P < 0.05) ammonia accumulation by 47.46% and 66.17%, respectively. Bovicin HC5 and virginiamycin also decreased (P < 0.05) the concentration of organic acids and gas production, but the effects were somewhat distinct. Molecular fingerprinting of the microbial community using PCR-DGGE revealed that community structure varied between treatments and was distinct from the controls. These results demonstrate that bovicin HC5 and virginiamycin have distinct effects on ruminal fermentation and modify the microbial community composition differently. These results also expand the knowledge about the effects of antibiotics and bacteriocins on bacterial and archaeal communities involved in protein metabolism in the rumen.
Introduction
In ruminant livestock, feedstuffs are fermented by rumen microorganisms, generating microbial protein, volatile fatty acids (VFAs), ammonia, methane and heat (Rychlik & Russell, 2000; Bach et al., 2005). Many of these products are used as protein and energy sources by the host, but dietary losses due to urea excretion and methane production can raise the cost of production of dairy and beef cattle.
For decades, ruminant nutritionists have used chemical additives in rations of dairy and beef cattle to decrease dietary losses and increase useful end-products of ruminal fermentation, thus enhancing the efficiency of feed utilization (Callaway et al., 1997;Shen et al., 2017).Ionophore antibiotics are the most commonly used feed additives for manipulation of rumen fermentation in cattle (Patra, 2012).However, the European Union and countries in Asia have gradually banned the use of antibiotics as growth promoters in food-producing animals between 1997 and 2006 (Maron et al., 2013).Therefore, several alternatives to growth promoters have been sought by farmers and ruminant nutritionists to reduce feeding costs, to maintain animal health and growth performance, and to decrease the environmental impact of animal production systems.Among these, several feeding management strategies and chemical and biological additives are being investigated as potential alternatives to control or manipulate the processing and assimilation of dietary nutrients.
Chemical and biological additives such as essential oils, non-ionophore antibiotics, antimicrobial peptides, yeasts, probiotics and direct-fed microbials could be dosed in-feed to entire herds of ruminants, a route typically used for prophylaxis, metaphylaxis, and growth promotion of production animals (Cameron & McAllister, 2016).Previous work demonstrated that antimicrobial peptides could have a role modulating the utilization of dietary nutrients and bovicin HC5, a ruminal bacteriocin, inhibited amino acid deamination and methane production by rumen microorganisms in vitro (Lee et al., 2002).However, its effect on microbiota composition and other fermentation parameters has not been investigated.In this work, we aimed to evaluate the in vitro activities of bovicin HC5 and virginiamycin, a non-ionophore antibiotic that has been successfully applied to improve feed efficiency in different livestock production systems, and evaluate if a ruminal bacteriocin would have similar effects on ruminal fermentation as commercially used feed additives.We focused our analysis on amino acid utilization by the rumen microbiota due to the fact that protein is the most expensive component of ruminant rations and both inhibitors appear to have protein-sparing effects.
Preparation of Antimicrobial Agents
Previously, we demonstrated by reverse-phase HPLC analysis that S. equinus HC5 semi-purified extracts had only one peak with antimicrobial activity, corresponding to bovicin HC5 (Lima et al., 2009). Briefly, stationary-phase S. equinus HC5 cultures (1 L, c. 400 µg mL -1 microbial protein) were harvested by centrifugation (1710 g, 10 min, 4 °C). The cell pellets were washed and resuspended in acidic sodium chloride (100 mmol L -1 , pH 2.0, 39 °C, 2 h). The cell suspensions were then centrifuged (1710 g, 10 min, 4 °C), and the cell-free supernatants were stored at -20 °C until use. The antimicrobial activities of the bovicin HC5 extracts were evaluated by agar well-diffusion assays (Tagg et al., 1976). Semi-purified extracts of bovicin HC5 were serially diluted (2-fold increments) in phosphate solution (5 mmol L -1 , pH 2.0) and assayed for antimicrobial activity using Alicyclobacillus acidoterrestris DSMZ 2498 as the indicator organism (10 6 CFU mL -1 ). Bacteriocin activity was expressed in arbitrary units (AU), defined as the reciprocal of the highest dilution that showed a zone of inhibition of at least 5 mm in diameter (Lewus et al., 1991). Stock solutions of bovicin HC5 had an activity of 40,960 AU mL -1 . Virginiamycin (Phigrow®) was obtained from Phibro Corporation Ltd. The commercial preparation was composed of 10% virginiamycin and 90% calcium carbonate. The virginiamycin solution (1.0 mmol L -1 ) was prepared by dissolving the commercial additive in sterilized distilled water prepared under anaerobic conditions with O 2 -free nitrogen (N 2 ). The virginiamycin solution was prepared fresh on the same day the incubations were carried out.
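The arbitrary-unit convention above (reciprocal of the highest two-fold dilution still showing inhibition) can be sketched as below. The per-millilitre scaling factor is an assumption here (a 100 µL assay volume would give a ×10 factor); it is not stated in the text, but only under that assumption does a 12-step titre reproduce the reported 40,960 AU mL⁻¹ stock activity.

```python
def activity_au_per_ml(last_inhibitory_step, per_ml_factor=10):
    """AU/mL from a two-fold serial dilution series: activity is the
    reciprocal of the highest dilution with an inhibition zone
    (1/2^n -> 2^n), scaled to 1 mL. per_ml_factor is an assumed
    assay-volume conversion (100 uL well -> x10)."""
    reciprocal_dilution = 2 ** last_inhibitory_step
    return reciprocal_dilution * per_ml_factor

# Inhibition observed through the 12th two-fold dilution:
au = activity_au_per_ml(12)   # 4096 * 10 = 40960 AU/mL
```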
Animal Sampling and in vitro Incubations
Ruminal fluid was collected from rumen-cannulated dairy cows about two hours after feeding.All procedures were performed in accordance with a protocol approved by the Universidade Federal de Viçosa Ethics and Animal Care and Use Committee (nº 05/2016).The diet of animals sampled consisted of corn silage (31.2% DM; 7.2% CP; 54.5% NDF and 2.8% EE) and 30% concentrate (89.4% DM; 28.1% CP; 13.3% NDF and 2.7% EE) (Valadares Filho et al., 2016), provided ad libitum.Ruminal digesta was filtered through four layers of cheesecloth into thermal containers and transported to the laboratory.Rumen fluid was anaerobically transferred to tubes and centrifuged (500 g, 5 min) to remove feed particles.The supernatant (35 mL) was anaerobically transferred to serum bottles and Trypticase was added as a source of peptides and amino acids to a final concentration of 15 g L -1 .The study was conducted in a completely randomized design performed in triplicate with two inhibitors and four doses of each inhibitor.Concentrations of bovicin HC5 were 0; 500; 1,000 and 2,000 AU mL -1 , while the following concentrations of virginiamycin were used 0; 5; 10 and 20 µmol L -1 .The in vitro incubations were carried out in 50 mL anaerobic serum bottles containing 35 mL of rumen contents (final volume) incubated under stirring (160 rpm) at 39 °C for 24 h.
Analysis of Ammonia Concentration, Microbial Protein and Amino Acid Deamination Activity
Concentration of ammonia in ruminal fluid (1 mL) was monitored according to the method of Chaney and Marbach (1962).Absorbance at 630 nm was measured in a spectrophotometer Spectronic 20D (Thermo Fisher Scientific, Madison, WI, USA) and ammonium chloride (NH 4 Cl) was used as the standard.Total ammonia (mmol L -1 ) was expressed as the difference in ammonia concentration determined after 24 h of incubation and the initial concentration of ammonia (0 h). Concentration of microbial protein was determined according to the colorimetric method of Bradford (1976), using lysozyme as the standard.Specific activity of deamination was calculated from the difference in ammonia concentration (mmol L -1 ) between the times zero and six hours of incubation, divided by microbial protein concentration (mg L -1 ) and the incubation time (minutes).
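The specific activity of deamination (SAD) defined above is a unit-conversion calculation: the ammonia increase over the first six hours (mmol L⁻¹, i.e. 10⁶ nmol L⁻¹) divided by microbial protein (mg L⁻¹) and incubation time (min). A minimal sketch, with hypothetical input values:

```python
def sad(nh3_0h_mmol_l, nh3_6h_mmol_l, protein_mg_l, minutes=360):
    """Specific activity of deamination in nmol NH3 per mg protein
    per min. Ammonia in mmol/L, protein in mg/L, time in minutes."""
    delta_nmol_l = (nh3_6h_mmol_l - nh3_0h_mmol_l) * 1e6  # mmol -> nmol
    return delta_nmol_l / (protein_mg_l * minutes)

# Hypothetical example: a 5.2 mmol/L ammonia rise over 6 h with
# 400 mg/L microbial protein gives a SAD of about 36 nmol
# NH3 mg protein^-1 min^-1, the order of magnitude reported later.
example_sad = sad(0.0, 5.2, 400.0)
```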
Analysis of the Bacterial and Archaeal Community by Denaturing Gradient Gel Electrophoresis (PCR-DGGE)
Changes in the diversity of the ruminal bacterial and archaeal community caused by the addition of antimicrobials (bovicin HC5 or virginiamycin) were assessed in rumen fluid samples (25 mL) collected after 24 hours of incubation. The samples were stored at -80 °C and defrosted at room temperature immediately before being processed for DNA extraction, using the phenol-chloroform procedure described by Stevenson and Weimer (2007). Genomic DNA extracted from the rumen fluid was used in amplification reactions with primers specific for the V3 region of the 16S rRNA of Bacteria and Archaea (Muhling et al., 2008; DeLong, 1992). The V3-V4 and V4-V5 regions of the 16S rRNA of the Firmicutes and Bacteroidetes phyla, respectively, were also amplified to investigate changes in community composition within these phylogenetic groups (Muhling et al., 2008).
PCR reactions were performed in a Biocycler MG96G (São Paulo, Brazil) using the primers and amplification conditions previously described (Bento et al., 2016).DGGE was performed in a DGGE-2401 apparatus (CBS Scientific Company, San Diego, CA, USA) as previously described (Bento et al., 2016).The software Bionumerics 7.5 (Applied Maths, Kortrijk, Belgium) was used to analyze the bands in the DGGE gel.Comparison of the data sets was based on Dice's similarity coefficient with the optimization and tolerance parameters set at 1.0%.Clustering was performed using the unweighted pair group method (UPGMA).Shannon-Wiener index was calculated using the Past software (Hammer et al., 2001).
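The two fingerprint statistics named above can be sketched from band presence/absence data. Dice's coefficient compares two lanes' band sets; the Shannon-Wiener index here is shown in its simplified binary-matrix form, treating each detected band as equally abundant (software such as BioNumerics can also weight by band intensity). The band positions are hypothetical.

```python
import math

def dice(bands_a, bands_b):
    """Dice similarity between two DGGE lanes given as sets of
    band positions: 2|A & B| / (|A| + |B|)."""
    a, b = set(bands_a), set(bands_b)
    return 2 * len(a & b) / (len(a) + len(b))

def shannon(band_presence):
    """Shannon-Wiener index from a binary band vector under the
    equal-abundance simplification: reduces to ln(richness)."""
    richness = sum(band_presence)
    if richness == 0:
        return 0.0
    p = 1.0 / richness
    return -richness * p * math.log(p)

# Hypothetical lanes: control vs bovicin-treated
similarity = dice({1, 2, 3, 5, 8}, {1, 2, 3, 5, 9})
diversity = shannon([1, 1, 1, 1, 0, 1])
```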
Analysis of Organics Acids, pH and Gas Production
Organic acids were analyzed by high performance liquid chromatography (HPLC) using a Dionex Ultimate 3000 Dual detector HPLC (Dionex Corporation, Sunnyvale, CA, USA) coupled to a refractive index (RI) Shodex RI-101 detector maintained at 40 °C.Separation was performed in a Phenomenex Rezex ROA column (300 × 7.8 mm) maintained at 45 °C.The mobile phase was 5 mmol L -1 sulfuric acid (H 2 SO 4 ) and the flow rate was maintained at 0.7 mL min -1 .Rumen fluid samples (2.0 mL) were harvested after 24 hours of incubation, centrifuged (12,000 ×g, 10 min) and the cell-free supernatants were treated as described by Siegfried et al. (1984).
The pH values of the in vitro cultures were recorded at 0, 6 and 24 h of incubation using a pH meter (Model TEC-2 mp, Tecnal Scientific Equipments, Piracicaba, Brazil).The volume (mL) of total gas accumulation was measured using lubricated syringes that were coupled to the fermentation bottles at time 6 and 24 hours of incubation (Theodorou et al., 1994).
Statistical Analysis
All in vitro incubations were performed in triplicate and with three biological replicates.The data were subjected to analysis of variance (ANOVA) and significant differences were analyzed with the Tukey's test using the Statistical Analysis System and Genetics software (Ferreira, 2011).Differences among means with P < 0.05 were considered statistically significant.
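The one-way ANOVA applied to the triplicate data reduces to a ratio of between-group to within-group mean squares. A minimal pure-Python sketch of the F-statistic, with hypothetical measurement groups:

```python
def anova_f(groups):
    """One-way ANOVA F-statistic: MS_between / MS_within for a list
    of groups of replicate measurements."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2
                    for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical triplicate ammonia readings for control vs treated
f_stat = anova_f([[57.1, 58.0, 57.4], [28.2, 29.0, 28.6]])
```

In practice the significance of F is read from the F(k−1, n−k) distribution, and pairwise comparisons then use a multiple-comparison procedure such as Tukey's test, as in the study.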
Results
When the ruminal microbiota was incubated in vitro with Trypticase (15 g L -1 ), the specific activity of deamination and ammonia accumulation were 36.11 nmol of NH 3 mg protein -1 min -1 and 57.49 mmol L -1 , respectively (Table 1). Addition of bovicin HC5 or virginiamycin to the incubated cultures did not affect the concentration of microbial protein, but most of the other parameters evaluated were influenced by one or both inhibitors. The ruminal pH decreased (P < 0.05) with the addition of 2,000 AU mL -1 bovicin HC5 or with 10 and 20 µmol L -1 virginiamycin compared to the controls. Bovicin HC5 did not affect (P > 0.05) synthesis of microbial protein or the specific activity of deamination (SAD). However, a significant decrease in total ammonia accumulation (47.46% reduction in NH 4 + ) was observed with the addition of 2,000 AU mL -1 (Figure 1A). In the case of virginiamycin, a reduction in SAD was observed at 10 µmol L -1 , while ammonia accumulation was inhibited by 66.17% with the addition of 20 µmol L -1 of the inhibitor (Figure 1B). Differences (P < 0.05) were also observed for the contrasts control vs antimicrobials (57.49 vs 28.59) for SAD and bovicin HC5 vs virginiamycin (33.63 vs 24.82) for ammonia accumulation (Table 1).
Bovicin HC5 and virginiamycin decreased (P < 0.05) the concentration of total organic acids produced during ruminal fermentation in vitro (Table 1).The addition of bovicin HC5 or virginiamycin decreased (P < 0.05) the concentration of isobutyric acid, valeric acid, and isovaleric acid.Bovicin HC5 increased the concentration of acetic and propionic acid and the acetate:propionate ratio was not affected.Treatments containing virginiamycin increased (P > 0.05) the concentration of propionic acid and showed lower (P < 0.05) acetate:propionate ratios compared to controls.Total gas production also decreased (P < 0.05) throughout the fermentation when bovicin HC5 or virginiamycin were added to the cultures, being reduced by 32% and 53%, respectively, when compared to the controls (Table 1).
Diversity analysis of Bacteria and Archaea domains and different bacterial phyla by denaturing gradient gel electrophoresis (DGGE) revealed higher similarity of the microbial communities within each treatment and lower similarity between treatments (Figure 2).Analysis of the band profiling in ruminal fluid generated from this study revealed 38, 41, 56, and 29 bands for the domain Bacteria, phylum Firmicutes, phylum Bacteroidetes and domain Archaea, respectively (data not shown).The structure of the microbial community (domain Bacteria and phylum Bacteroidetes) in samples treated with bovicin HC5 were more similar to the controls compared to the samples treated with virginiamicin (Figures 2A and 2B).Note. 1 Microbial protein (mg ml -1 ). 2 Specific activity of deamination (SAD; nmol of NH 3 mg protein -1 min -1 ). 3 Ammonia concentration (NH 4 + ; mmol l -1 ). 4 Total gas concentration (ml). 5Total volatile fatty acids (Total VFA; mmol l -1 ). 6Acetic acid, propionic acid, butyric acid, isobutyric acid, formic acid, succinic acid, valeric acid and isovaleric acid (%). 7Acetate:Propionate ratio (A:P). 8Antimicrobial effects were tested using contrasts.Cont.(control with no antimicrobial); Bov.(different doses of bovicin HC5); Vir (different doses of virginiamycin).Means followed by at least one letter in the line for the different doses of each antimicrobial and control do not differ at 5% significance level by Tukey test.
Richness analysis and the Shannon-Wiener index were calculated from a binary matrix generated based on the electrophoretic profiles in the DGGE gels using the software BioNumerics 7.5 (Figures 3A and 3B).Our results showed no differences (P > 0.05) in species richness and diversity of Firmicutes and Bacteroidetes between the controls and treatments added with bovicin HC5 or virginiamycin.The addition of antimicrobials increased (P < 0.05) the species richness and the Shannon-Wiener index of the domain Bacteria; however, only virginiamycin increased (P < 0.05) the species richness and the Shannon-Wiener index of the domain Archaea (Figures 3A and B).
Discussion
The emergence of multidrug-resistant bacteria associated with livestock has increased the threat of antibiotic resistance genes spreading through the food chain. Therefore, alternatives have been investigated to decrease the use of antibiotics in animal production. Among these, antimicrobial peptides (bacteriocins) produced by Gram-positive bacteria have been evaluated in vitro and in vivo as potential alternatives. Bacteriocins have traditionally been studied as potentially useful biological tools in the food industry (Deegan et al., 2006), but studies demonstrated that these antimicrobials are also effective in controlling animal pathogens (Twomey et al., 2000; Wu et al., 2007). These peptides also show synergistic interactions with antibiotics (Todorov, 2010), and could be useful to manipulate rumen fermentation (Lima et al., 2009; Shen et al., 2017). for the treatments with the addition of bovicin HC5, and might be related to changes in the structure and function of the microbial community.
In conclusion, our results show that both the lantibiotic bovicin HC5 and the non-ionophore antibiotic virginiamycin had positive effects on ruminal biochemical parameters, but the impacts of these inhibitors on rumen microbial community composition were distinct. The results indicated that bovicin HC5 had more pronounced effects against members of the phylum Firmicutes. Based on the results shown here, it appears that both antimicrobials have a protein-sparing effect, and it seems plausible that bovicin HC5 could have potential as an additive to manipulate rumen fermentation in cattle production.
Table 1 .
Effect of bovicin HC5 and virginiamycin on ruminal fermentation parameters in vitro | 2018-12-06T00:39:02.981Z | 2018-07-10T00:00:00.000 | {
"year": 2018,
"sha1": "08fb135daa8122af3c810f658818c143de455831",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/jas/article/download/75118/42275",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "08fb135daa8122af3c810f658818c143de455831",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
82540152 | pes2o/s2orc | v3-fos-license | Antixenosis Induced by Talc-based Bioformulation Pseudomonas fluorescens against Okra Shoot and Fruit Borer, Earias vittella
In studies undertaken to assess the antixenotic resistance induced by talc-based bioformulations, plants treated with the microbial consortium Pseudomonas fluorescens Pf1 + Beauveria bassiana B2 isolate (Pf1 + B2) were least preferred by Earias vittella (Fab.) moths for settling (3.11 and 6.00 nos.) and oviposition (18.36 and 21.62 nos.) in the variety Arka Anamika and the hybrid CoBhH1, respectively, as against the untreated check (9.00 and 11.00 nos. of moths and 56.66 and 80.66 nos. of eggs, respectively). The Pf1 + B2 treated plants had a high repellency index (43.11%) and a negative relative ovipositional index (-59.72%). The variety Arka Anamika was less preferred for the settling of moths compared to the hybrid CoBhH1. Among the different plant parts, egg laying was significantly greater on the terminal bud, with 37.33 eggs on Arka Anamika and 53.00 eggs on CoBhH1.
Induced protection of plants against various insect pests and pathogens by biotic or abiotic elicitors has been reported since the 1930s. Various studies indicated that Pseudomonas fluorescens (Migula) Pf1 and Beauveria bassiana (Bals.) Vuill. B2 were able to induce the defense mechanism in host plants through alterations in the secondary plant compounds, thus enhancing the resistance in plants against challenging insect pests, pathogens and nematodes (Saravanakumar et al., 2007; Sivasundaram et al., 2008; Karthiba et al., 2010). The chemical composition of the plant is of fundamental significance in the acceptance or rejection of a host plant for shelter, oviposition and feeding by insects. P. fluorescens treated plants deterred Plutella xylostella (L.) moths from oviposition on cauliflower (Mohana Sundaram et al., 2006) and Helicoverpa armigera (Hubner) on tomato (Murugan et al., 2007). Fungal metabolites produced by B. bassiana reduced infestation due to feeding deterrence rather than direct fungal infection (Vega et al., 2008). A preliminary field study also showed that okra plants, on application of a microbial talc-based formulation, exhibited resistance against the okra shoot and fruit borer, Earias vittella (Fab.). Laboratory studies were hence carried out to assess the mechanism underlying the antixenotic resistance due to microbial induction in okra plants, as expressed in the behaviour of E. vittella.
Materials and Methods
Pot culture studies were carried out in a screen house at the Insectary (35±1.5°C and 72±3% RH), Department of Agricultural Entomology, Agricultural College and Research Institute, Madurai, during the year 2009. Okra shoot tips were collected from the plants and utilized for laboratory experiments to assess settling and oviposition preference.
Induction of resistance in pot culture okra plants
The okra variety Arka Anamika and a hybrid, CoBhH1, were used for the experiment. Five treatments and two varieties/hybrids with three replications were maintained. Microbials were applied as seed treatment, soil application and foliar spray. Endosulfan 35 EC (treated check) at 0.07% was given only as a foliar spray.
Okra seeds were treated with talc-based bioformulations at the rate of 10 g/kg of seed by the wet seed treatment method. Untreated seeds were used for the treated and untreated checks. Treated and untreated seeds were sown in the respective treatment pots at the rate of three seeds per pot (5 kg soil capacity). On 30 days after sowing (DAS), five grams of talc-based bioformulation mixed with FYM (100 g) was applied per pot as soil application. The talc-based bioformulations were dissolved in water (20 g/lit), soaked overnight, filtered through muslin cloth, and the filtrate was sprayed using a knapsack hand sprayer on potted plants on 30 DAS and 45 DAS (Saravanakumar et al., 2008). Endosulfan 35 EC (2 ml/lit) was applied only as a foliar spray on 30 and 45 DAS; it served as the treated check. A foliar spray with water was given on 30 and 45 DAS in the untreated check. The plants were maintained free of insect infestation. The shoot tips were excised five days after the second spraying and utilized for settling and ovipositional preference studies.
Mass culturing of E. vittella
Infested okra fruits were collected from the field and brought to the Insectary for mass culturing of E. vittella. Larvae were released individually onto okra fruit blocks kept in a multicavity tray and allowed to grow. Pupae were collected and kept in an adult emergence cage (44 x 45 x 43 cm). Moths were provided with ten per cent honey solution as artificial food. On emergence, moths were paired and allowed to mate and oviposit at the rate of ten pairs per cage. Eggs were collected daily on gada cloth and disinfected with 0.05 per cent formaldehyde on the second day. Upon hatching, the neonate larvae were released individually, using a soft camel hairbrush, onto the okra fruit blocks kept in a multicavity tray (32 cells/tray). The spent fruit blocks were replaced with fresh ones daily until pupation. Pupae were collected and kept in the adult emergence cage to continue mass culturing. Moths were utilized for the conduct of the laboratory experiment (Shanthi, 2000).
Evaluation of settling and ovipositional preference of E. vittella moths
Settling and ovipositional preferences of E. vittella moths towards treated okra shoot tips were studied under free-choice conditions. Shoot tips (20 cm long) were excised at the flowering phase (5 days after the second spraying) from treated and untreated okra (variety/hybrid) plants grown in pots under screen house conditions and placed immediately in conical flasks containing sugar solution (3%) to maintain leaf turgidity (Sharma, 2008). They were arranged equidistantly in a circular manner, at random, in an insect cage (44 x 45 x 43 cm) without touching each other (Dhillon and Sharma, 2004). Thirty pairs of freshly emerged moths were released inside the cage for each replication. Moths were provided with honey solution (10%). The setup was maintained for three days. The experiment was laid out in a Completely Randomized Design (CRD) with three replications.
The number of moths settled on each shoot tip was recorded at 24-hour intervals for three days and expressed as the number of moths settled per shoot. Adult repellency was estimated with the formula suggested by Roman Pavela and Gerhard (2007) and expressed as the Repellency Index (RI) (%).
In the same experiment, moths were allowed to oviposit for three days. Shoot tips were removed after the third day, the number of eggs laid on different plant parts was recorded individually (Dhillon and Sharma, 2004), and the total number of eggs laid was estimated. Using these data, the relative ovipositional preference (ROP) was estimated (Mehta and Saxena, 1970).
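The two indices can be sketched as short functions of the recorded counts. The paper cites the formulas of Pavela (2007) and Mehta and Saxena (1970) without reproducing them, so the symmetric (C − T)/(C + T) form used below is an assumption for illustration only:

```python
def repellency_index(control, treated):
    """Repellency Index (%) from mean moths settled on control vs. treated
    shoot tips. Positive values indicate repellency. The (C - T)/(C + T)
    form is assumed; the paper cites Pavela (2007) for the exact formula."""
    return (control - treated) / (control + treated) * 100.0


def relative_ovipositional_preference(eggs_treated, eggs_control):
    """Relative ovipositional preference (%); negative values indicate
    oviposition deterrence. The (T - C)/(T + C) form is assumed; the paper
    cites Mehta and Saxena (1970) for the exact formula."""
    return (eggs_treated - eggs_control) / (eggs_treated + eggs_control) * 100.0


# Illustrative call with the Pf1+B2 consortia counts for Arka Anamika
ri = repellency_index(control=9.00, treated=3.11)
rop = relative_ovipositional_preference(eggs_treated=18.36, eggs_control=56.66)
```

Note that these assumed forms do not exactly reproduce the 43.11% RI and -59.72% ROP reported in the abstract, which underlines that the original formulas should be taken from the cited sources.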
Statistical analysis
The data were transformed using square root and arcsine transformations and analysed statistically. Analysis of variance was performed, and the means were separated by Duncan's Multiple Range Test (DMRT) (Duncan, 1955).
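The two variance-stabilizing transformations mentioned here have standard forms, sketched below. The +0.5 offset in the square-root transform is a common convention for count data that may contain zeros; the paper does not state which variant was used:

```python
import math


def sqrt_transform(count):
    """Square-root transform for count data; the +0.5 offset (assumed here)
    guards against zero counts."""
    return math.sqrt(count + 0.5)


def arcsine_transform(percent):
    """Angular (arcsine) transform for percentage data, returned in degrees,
    as used for the repellency percentages in Table 2."""
    return math.degrees(math.asin(math.sqrt(percent / 100.0)))
```

For example, a repellency of 50% transforms to exactly 45 degrees, and a count of 0 maps to √0.5 ≈ 0.71 rather than 0.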
Results and Discussion
In the present study, the application of talc-based bioformulations, either individually or as consortia, on the okra plants significantly reduced both the settling and oviposition of E. vittella under laboratory conditions (Tables 1-4).
Settling of E. vittella moths
Settling of moths was affected by the microbial talc-based formulations and also by the insecticide treatment. The chemical (Endosulfan 35 EC) treatment proved the maximum deterrent to moths settling on treated shoot tips. Among the microbial talc-based formulations, the consortia (Pf1+B2) treated shoot tips were the least preferred (Table 1). When the moths were offered a choice among treated shoot tips of the same variety (Arka Anamika) or hybrid (CoBhH1), the preference was significantly lower for chemical and microbial consortia treated shoots than for the untreated shoots (Tables 1 and 2). In microbial consortia inoculated plants, the maximum reduction in settling of moths was recorded in the variety (65.44%), followed by the hybrid (45.45%). It can be inferred that, between the variety and the hybrid, Arka Anamika was less preferred for moth settling (4.73 nos.).
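The reduction figures quoted in this paragraph follow directly from the settled-moth counts as a plain percent reduction over the untreated check, which can be verified:

```python
def reduction_over_check(check, treated):
    """Percent reduction in moth settling relative to the untreated check."""
    return (check - treated) / check * 100.0


# Mean moths settled (Pf1+B2 consortia vs. untreated check):
# 3.11 vs. 9.00 for the variety Arka Anamika, 6.00 vs. 11.00 for CoBhH1
variety_reduction = reduction_over_check(check=9.00, treated=3.11)   # ≈ 65.44 %
hybrid_reduction = reduction_over_check(check=11.00, treated=6.00)   # ≈ 45.45 %
```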
There was no noteworthy variation in the moths' settling response over time from 24 hours to 72 hours. During the different periods of observation, only 3.00 to 3.33 moths settled on Arka Anamika and 5.66 to 6.33 moths on CoBhH1 in the microbial consortia treated shoot tips, as against 8.66 to 11.66 moths in the untreated check (Table 1). Arka Anamika exhibited a higher RI (25.22 to 71.67%) compared to CoBhH1 (19.15 to 44.95%) (Table 2). This is in line with the findings of Murugan et al. (2007), who reported that volatiles released by P. fluorescens treated plants affected the adult settling and egg laying behaviour of Bemisia tabaci (Gennadius) and H. armigera in tomato. P. fluorescens also possibly induces the octadecanoid pathway mediated by jasmonic acid (JA) (Van Loon et al., 1998). This stimulates enhanced activities of some enzymes required for the volatile synthesis critical for terpenoid biosynthesis, which may deter the insects from settling.
Oviposition of E.vittella moths
Volatiles also have an indirect role in repelling females and reducing oviposition (Kessler and Baldwin, 2001). Some phenolics and sesquiterpenes, along with volatiles, can repel herbivores from ovipositing on host plants. In the present study, the data on the total number of eggs laid revealed that oviposition behaviour was greatly influenced by microbial bioformulation inoculation. The Pf1+B2 consortia treated shoot tip was the least preferred for oviposition among the bioformulations, while Endosulfan 35 EC (14.14 eggs) was the least preferred among all treatments. The untreated check supported 68.66 eggs. In Arka Anamika, the Pf1+B2 consortia (18.36 eggs) and B. bassiana B2 (20.66 eggs) treatments were equally effective in reducing oviposition. B. bassiana B2 treated Arka Anamika was on par with microbial consortia treated CoBhH1, which supported 21.62 eggs. With regard to untreated plants, CoBhH1 was more highly preferred for oviposition (80.66 eggs) than Arka Anamika (56.66 eggs) (Table 3).
Relative ovipositional preference (ROP) results also revealed that the microbial consortia inoculated shoot tips were the least preferred, with -62.87 and Murugan (2003), who observed that P. fluorescens induced plants recorded an enhanced concentration of acyl sugar, which is known to reduce oviposition and feeding by Liriomyza trifolii (Burgess).
In the untreated check, among the plant parts, preference for egg laying was significantly greater on the terminal bud, with 37.33 eggs in Arka Anamika and 53.00 eggs in CoBhH1 (Table 4). The terminal bud supported only a minimum of eggs on the Pf1+B2 consortia treated shoot tips of Arka Anamika and CoBhH1, which was 65.82 and 73.58 per cent less than the untreated check, respectively. The next preferred part was the petiole, followed by the leaf and the stem. With regard to the petiole, the maximum number of eggs was recorded in untreated CoBhH1 (14.00 eggs), but fewer eggs were laid on B. bassiana B2 (1.33 eggs) and Pf1 (4.00 eggs) treated shoot tips, which amounted to 90.50 and 71.43 per cent reduction over the untreated check, respectively. It is concluded that the microbial talc-based bioformulation (Pf1+B2) is effective in inducing antixenotic resistance against the shoot and fruit borer; the biochemical parameters and volatiles released from microbial bioformulation treated plants greatly affect the settling and oviposition behaviour of E. vittella.
Table 1 . Effect of microbial consortia on settling behaviour of E. vittella moths in okra
T= Treatment; V= Variety # Mean of three replications; ST -Seed treatment; SA -Soil application; FS -Foliar spray. Figures in parentheses are square root transformed values. In a column, means followed by similar letters are not significantly different by DMRT (P = 0.05). **Highly significant
Table 2 . Repellent effect of microbial consortia on E. vittella adults
T= Treatment; V= Variety # Mean of three replications; ST -Seed treatment; SA -Soil application; FS -Foliar spray. Figures in parentheses are arc sine transformed values. In a column, means followed by similar letters are not significantly different by DMRT (P = 0.05). **Highly significant *Significant
Table 4 . Effect of microbial consortia on oviposition of E. vittella on different plant parts in okra
T= Treatment; V= Variety # Mean of three replications; ST -Seed treatment; SA -Soil application; FS -Foliar spray. Figures in parentheses are square root transformed values. In a column, means followed by similar letters are not significantly different by DMRT (P = 0.05). **Highly significant *Significant | 2019-03-19T13:05:58.197Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "98cb883c824be1748abffc41763c862e8545c001",
"oa_license": "CCBY",
"oa_url": "http://masujournal.org/store_file/archive/98-7-9-273-276.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "11868639b37fba93d79ee5e96f6079dde2295321",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
229464049 | pes2o/s2orc | v3-fos-license | Solar energy efficient - AOP process for treatment of cyanide in mining effluents
A photochemical process using the UV component of natural sunlight (hybrid Solar method) in the presence of an oxidizing agent (persulfate) was studied for the destruction of cyanides in mining effluents. The kinetic regularities of the cyanide photooxidation process were studied. Comparative cyanide destruction experiments showed that the efficiency of the destruction process in the selected oxidative systems changed in the following order: {Solar + PS} > {Solar} > {UV + PS} > {UV}. The high treatment efficiency of cyanides using the hybrid system {Solar + PS} is due to the high intensity of the UV-C component of sunlight and the corresponding rates of generation of hydroxyl radicals (•OH) and sulfate anion radicals (SO4•-). The results obtained indicate the high efficiency of the hybrid Solar-induced method for the purification of cyanide-containing pollutants, which allows complete destruction of toxic cyanides to non-toxic products.
Introduction
It is known that in the process of cyanide leaching of gold and silver from refractory sulfide ores and flotation concentrates in gold processing plants, technogenic waters are formed containing cyanide compounds, both in the form of slightly dissociated, very toxic hydrocyanic acid (free CN-/HCN) and in the form of toxic complex cyanides (WAD, SAD) [1,2]. The cyanide concentrations in these wastewaters and circulating solutions are hundreds and thousands of times higher than the legal limit of 0.05 mg·L-1 [3]. The wastewater and recycling water of the mill are of particular danger to humans and animals, as well as to the environment, and must be neutralized before discharge or reuse.
The mining industry uses technologies based on regeneration methods and chemical oxidation (destruction) methods to neutralize cyanide-containing effluents. Regenerative methods include the "acidification-volatilization-reneutralization" method (AVR) [4], bio-oxidation of cyanides by dissolved oxygen O2, and solar photolytic decomposition [5]. The main disadvantages of these methods include the need for post-treatment of wastewater due to the high residual concentration of cyanide compounds, which does not meet the legal standards, the significant processing time, and the need to constantly maintain environmental conditions (oxygen regime, ambient temperature and pH).
Among the methods of oxidation and destruction of toxic cyanide compounds, "environmentally dirty" reagents, for example chlorine compounds (hypochlorites, bleach, liquid chlorine, etc.), are still widely used as oxidizing agents [6]. Their main disadvantages are the toxicity of the reagent itself, which poses a real danger when stored in warehouses, a decrease in the activity of the oxidizing agent over time, and the need for strict pH control to avoid the formation of the toxic gas cyanogen chloride.
Strict environmental and economic requirements strongly dictate the need to create new low-waste / non-waste and energy-efficient technologies that give the greatest environmental effect.
Currently, much attention of researchers is focused on the development of new combined oxidation processes (AOPs, "advanced oxidation processes"), which consist in the in situ generation, by various methods, of highly reactive radicals, predominantly reactive oxygen species (ROS), that can induce the oxidation and mineralization of pollutants dissolved in water [7][8][9][10][11].
Among the AOPs, the most promising in our opinion are combined photochemical methods. The use of ultraviolet radiation (UV) is becoming increasingly common for the intensification of various oxidative processes. There is evidence that shows the possibility of decomposing complex cyanides of zinc, cadmium, nickel and copper by heterogeneous photocatalysis [12,13]. For carrying out photochemical processes, sources of a wide optical range (from short-wave UV to infrared) are used in combination with various oxidizing agents [14,15].
Among environmentally friendly oxidizing agents (ozone, hydrogen peroxide, Caro's acid, ferrates, etc.), persulfates are technologically convenient because they are solids, practically do not lose their activity over time, and are easily dosed. Due to their inertness, persulfates are of low toxicity; USEPA limits their content in drinking water to 250 mg·L-1 and classifies them as pollutants that worsen only organoleptic properties.
In the world scientific literature, there is a trend of increasing interest in using persulfates in AOPs [16]. EAAOP-4 (the 4th European Conference on Environmental Applications of Advanced Oxidation Processes, Athens, Greece, October 21-24, 2015) proposed the inclusion of methods for the in situ generation of sulfate radical anions, SR-AOPs (Sulfate Radical-based AOPs), into the modern classification of advanced combined oxidative methods.
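The sulfate-radical chemistry behind SR-AOPs can be summarized by the commonly cited activation reactions for UV/solar-irradiated persulfate (standard textbook forms, not taken from this paper):

```latex
S_2O_8^{2-} + h\nu \longrightarrow 2\,SO_4^{\bullet-}
SO_4^{\bullet-} + OH^{-} \longrightarrow SO_4^{2-} + {}^{\bullet}OH \qquad \text{(alkaline media)}
```

The second reaction is relevant here because the cyanide solutions studied later in the paper are alkaline (initial pH of 10-11).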
An alternative to UV treatment can be the UV component of natural solar radiation. Thus, a group of scientists obtained data on the photochemical oxidation of certain cyanide compounds, including hexacyanoferrates, induced by sunlight in the presence of a TiO2 photocatalyst [17,18].
The aim of the work was to study the photochemical destruction of cyanides using natural solar radiation (hybrid Solar method) in the presence of an oxidizing agent, potassium persulfate.
Experimental section
The experiments were carried out on sunny days from May to September. The intensity of solar radiation (Solar) in the ultraviolet and visible ranges, and the general illumination, were measured using a metrologically verified UV radiometer "TKA-PKM" and a Luxmeter/UV radiometer (A, B) "TKA-PKM-6". The range of values measured during the experiments is presented in Table 1. Simple cyanide (NaCN) was chosen as the object for studying the kinetic laws of neutralizing the priority ecotoxicant of gold mining effluents under the photochemical action of solar radiation. All chemical reagents (sodium cyanide, potassium persulfate) were of analytical grade. Distilled water (conductivity 2 mS·cm-1) was used for the preparation and dilution of solutions. The cyanide content in solution was determined by the photometric method [19]. The concentrations of ammonium, nitrites and nitrates were determined by standard photometric methods [20][21][22] (Figure 3b). In the combined oxidation system, the process rate gradually decreases, and accordingly so does the process efficiency (67% after 240 min); the shape of the kinetic curve reflects a decrease in the probability of effective collisions between reacting particles, i.e. between CN- ions and ROS molecules, due to their consumption (Figure 3a). The rate constants obtained under solar exposure are almost 5.3-6 times greater than those for photolysis by UV radiation from a quasi-solar lamp (Table 2). It is assumed that this effect is caused by the fact that the intensity of the UV-C component of sunlight is several times higher and, correspondingly, so is the rate of in situ generation of ROS (hydroxyl radicals •OH and sulfate anion radicals SO4•-). It is important to note that during the processing of the pollutant solutions, the pH values vary significantly. In direct photolysis, the pH value decreases slightly to 9.57-9.93.
In the combined {Solar + PS} treatment of solutions at initial pH values of 10-11, the final values tend towards neutral and lie in the range of 6.3-7.0. This behavior indicates that, under the selected conditions, the oxidation of the cyanide pollutant is quite effective and proceeds to completion with the formation of neutral salts. It was experimentally found that when using the combined {Solar + PS} system, toxic cyanides are effectively mineralized to less toxic products (ammonium ions, nitrites, nitrates), which were quantified (Table 3). The results obtained are in good agreement with published data on the heterogeneous photocatalytic oxidation of cyanides using a titanium dioxide suspension, where it was shown that pH ≥ 9 conditions are necessary for the highly efficient mineralization of cyanide ions [26].
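The removal figures reported above can be used to back out an apparent rate constant if pseudo-first-order kinetics are assumed. This rate law is a common fit for AOP data but is an assumption here, since the paper does not state the form of its kinetic model:

```python
import math


def pseudo_first_order_k(removal_fraction, time_min):
    """Apparent pseudo-first-order rate constant (1/min) from C/C0 = exp(-k*t),
    given the fraction of pollutant removed after time_min minutes."""
    return -math.log(1.0 - removal_fraction) / time_min


# 67 % cyanide removal after 240 min in the combined oxidation system
k_app = pseudo_first_order_k(removal_fraction=0.67, time_min=240.0)
```

Under this assumption, the 67% removal after 240 min corresponds to an apparent constant of roughly 4.6 × 10⁻³ min⁻¹.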
Conclusion
The results obtained indicate the high efficiency of the solar-induced method for the purification of cyanide-containing pollutants, which allows complete destruction of toxic cyanides to non-toxic end products. The developed hybrid method can be used to treat wastewater containing highly toxic cyanide compounds in areas with high solar activity. Thus, it is possible to effectively purify cyanide-containing wastewater using natural sunlight. | 2020-11-19T09:13:31.197Z | 2020-11-18T00:00:00.000 | {
"year": 2020,
"sha1": "280892c1a4b3e308e6852f64d2f6ffb18325acd9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/962/4/042079",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "5323f4f9004fe36e41a1531a7cbca9ba67602f3b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
151890675 | pes2o/s2orc | v3-fos-license | Non-representational approaches to the unconscious in the phenomenology of Husserl and Merleau-Ponty
There are two main approaches in the phenomenological understanding of the unconscious. The first explores the intentional theory of the unconscious, while the second develops a non-representational way of understanding consciousness and the unconscious. This paper aims to outline a general theoretical framework for the non-representational approach to the unconscious within the phenomenological tradition. In order to do so, I focus on three relevant theories: Maurice Merleau-Ponty’s phenomenology of perception, Thomas Fuchs’ phenomenology of body memory, and Edmund Husserl’s phenomenology of affectivity. Both Merleau-Ponty and Fuchs understand the unconscious as a “sedimented practical schema” of subjective being in the world. This sedimented unconscious contributes to the way we implicitly interpret reality, fill in the gaps of uncertainty, and invest our social interactions with meaning. Husserl, however, approaches the unconscious in terms of affective non-vivacity, as a sphere of sedimentation and the horizon of the distant past which stays affectively connected to the living present. Drawing on these ideas, I argue that these two accounts can reinforce one another and provide the ground for a phenomenological understanding of the unconscious in terms of the horizontal dimension of subjective experience and a non-representational relation to the past.
1 Introduction: situating the problem of the unconscious I can live more things than I can represent to myself, my being is not reduced to what of myself explicitly appears to me (Merleau-Ponty 2012, 310).
The topic of the unconscious spans such various fields as philosophy, cognitive science, theoretical and clinical psychology, notoriously including psychoanalysis, where this notion finds its most famous conceptualization and therapeutic application. Even though the unconscious is no longer a scandal for science and philosophy, it still stands as one of the most challenging topics in research on the human mind. One such challenge in particular concerns a conceptual contradiction that the idea of unconscious mental life presents for our understanding of consciousness. As John Kihlstrom points out, it is not the existence of automatic processes in our body and brain that challenges consciousness, but rather the assumption that mental life itself can be devoid of its conscious character (Kihlstrom 2013). Such an assumption can in turn lead to the hypothesis that conscious awareness is not a necessary characteristic of the mind and that some of our mental representations, such as thinking, memory, and perception, can take place without phenomenal awareness and influence our behavior without us realizing it. The theoretical question then arises as to how to conceive of the possibility for a mental state (e.g., a thought about one's brother, a memory of one's childhood or a seeing of a road sign) to occur in mental life without being a conscious thought, an explicit memory or a conscious case of seeing.
This puzzling question has defined, for the large part, the respective relations between phenomenology and psychoanalysis in the course of the last century. 1 Its formulation can be traced back to the work of Franz Brentano, who not accidentally was a teacher of both Edmund Husserl and Sigmund Freud during their studies in Vienna. Brentano famously argues against the possibility of unconscious representations, claiming that it amounts to the idea of an unconscious consciousness, which in turn bears a serious contradiction. This contradiction, however, is not a contradiction in terms: the idea of an unconscious consciousness, as he puts it, is not the same as a non-red redness (Brentano 1973, 79). The contradiction is rather a contradiction in essence: something analogous to an unconscious representation would be "an unseen seeing," that is, such a seeing that does not see. Maurice Merleau-Ponty brings this line of thought even further when he writes that "an unconscious thought would be a thought that does not think" (Merleau-Ponty 2012, 396). 1 Such relations and corresponding theoretical issues have been the subject of several productive investigations. See, for instance, the volume Founding Psychoanalysis Phenomenologically, edited by Dieter Lohmar and Jagna Brudzinska and featuring different approaches to this topic (Lohmar and Brudzinska 2012), as well as a collection of essays Approches phénoménologiques de l'inconscient co-edited by Maria Gyemant and Délia Popa (Gyemant and Popa 2015). Other relevant recent contributions to the topic, such as those by Rudolf Bernet, Aaron Mishara, Dan Zahavi, Thomas Fuchs, Bruce Bégout, Jagna Brudzinska, Nicolas De Warren, and Nicholas Smith, are all to a larger or lesser extent discussed in the present paper.
This argument, developed in Brentano's Psychology from an Empirical Standpoint (Brentano 1874), is directly related to his view of consciousness as inner representation (innere Vorstellung) 2 which accompanies mental acts, but in such "a peculiarly intimate way" that would lead neither to an objectifying, reflexive relation, nor to infinite regress. 3 As he points out, the term "consciousness" refers to the mental phenomenon insofar as this phenomenon has certain content and can therefore be conceived of as a representation of this content accompanied by the representation of the mental phenomenon itself. This implies that, for Brentano, the inconceivability of an unconscious consciousness ensues from the inconceivability of an internally unperceived representation. It also suggests that only mental phenomena with representational content are necessarily accompanied by inner consciousness. For Brentano, of course, this encompasses the totality of mental states, since they are all defined by intrinsic intentionality, i.e., directedness towards their primary objects.
Thus, the central point in understanding the problem of consciousness and correlatively of the unconscious, in this perspective, revolves around the representational nature of conscious phenomena. This perspective has been implicitly adopted in both Freud's and most of Husserl's writings on the matter and shaped the way they approached the issue.
Unlike Brentano, Freud is not threatened by the conceptual contradiction involved in the idea of unconscious representations and instead advocates the possibility of non-conscious mental states which can influence one's conscious life and behavior. As Bernet points out, Freud's aim is to understand "the way in which unconscious representations appear in consciousness without negating their origin in the unconscious" (Bernet 2002, 329). In this sense, Freud, in his attempts to clarify the unconscious, still largely relies on the possibility of conceiving of unconscious representations or, more generally, of an unconscious way of appearing and manifestation. At the same time, Freud is convinced that consciousness has strict boundaries and that it makes no sense to expand any notion of it so that the concept could somehow include all the complexity of the unconscious. Thus, in A Note on the Unconscious in Psychoanalysis, he claims that not only the form of presentation but also "the laws of unconscious activity differ widely from those of the conscious" (Freud 2008, 39) and that "we have no right to extend the meaning of this word [i.e., conscious] so far as to make it include a consciousness of which its owner himself is not aware" (Freud 2008, 36).
Husserl, on the other hand, especially at the early stages of his thought, agrees with Brentano that there is a contradiction in the idea that the unconscious is opposite to consciousness while influencing it without the subject's awareness. Along these lines, in Logical Investigations, he dismisses the task of accounting for "obscure, hypothetical events in the soul's unconscious depths" (Husserl 1970, 105). In Appendix IX to his lectures on time-consciousness, Husserl refutes the idea that there can be any "unconscious" content that subsequently becomes conscious in retention and insists that "consciousness is necessarily consciousness in each of its phases" (Husserl 1991, 123). For Husserl, consciousness encompasses both the sphere of explicit wakeful awareness and the obscure background of conscious life. In this spirit, in the Ideas II, he points out that the sphere of self-consciousness cannot be restricted only to the narrow scope of attentive or alert awareness, but must include in itself equally all "background," obscure conscious experiences (Husserl 1989, 115).
2 Vorstellung is often translated as either "presentation" or "representation." The latter appears to be more common and adequate and will be preferred here as well. The main reason for this is that the use of the term in its current philosophical meaning was established in Kant's philosophy; Kant employed it as a German version of the Latin term representatio (Cassin and Rendall 2014, 891). Note, however, that in the English translation of Brentano's Psychology from an Empirical Standpoint the term is translated as "presentation."
3 In this spirit, he claims: "The presentation (Vorstellung) of the sound and the presentation of the presentation of the sound form a single mental phenomenon; it is only by considering it in its relation to two different objects, one of which is a physical phenomenon and the other a mental phenomenon, that we divide it conceptually into two presentations" (Brentano 1973, 98).
In the Appendix to Husserl's Crisis of European Sciences and Transcendental Phenomenology, written by Eugen Fink, the phenomenological stance regarding the problem of the unconscious finds a somewhat different elaboration. Instead of dismissing the significance of the challenge altogether, Fink states that the problem of the unconscious relies on "a naïve and dogmatic implicit theory about consciousness" that requires systematic reconsideration. This suggests that a phenomenological idea of the unconscious is possible, but it should necessarily be based on "an explicit analysis of consciousness" that employs the methodical means of phenomenological philosophy in general and of intentional analysis in particular (Husserl 1936/1970). Fink's proposal clearly goes in the direction of the intentional theory of the unconscious and supports Husserl's brief remarks in the same text concerning "unconscious" intentionalities (Husserl 1936/1970). The above-mentioned Appendix was written by Fink in 1936 and is consistent with the general attitude of Husserl's phenomenology towards "depth psychology" and especially towards the critical position the latter assumes in relation to the "consciousness-idealism of phenomenology." Husserl's way to overcome this "naïve and dogmatic" theory of consciousness is related to the transformation of Brentano's idea of inner consciousness into the absolute inner time-consciousness (Husserl 1991). The resulting conception of consciousness, as Bernet argues, is not at odds with the idea of the unconscious and paves the way to the possible detection of the "unconscious mode of appearance" in acts of presentification (Vergegenwärtigung). In this regard, consciousness and the unconscious are understood as two different types of representations. Such a position is generally consistent with Fink's indication in the mentioned Appendix that phenomenological analysis of consciousness might contribute to the intentional theory of the unconscious.
This direction in the phenomenological exploration of the unconscious still relies on the theory of the representational structure of consciousness, even if with significant differences from the one advocated by Brentano and implicitly accepted by Freud. However, this is not the only possible way of exploring consciousness and the unconscious phenomenologically. Another way would be to approach this issue in non-representational terms and to question not merely the mode of appearance of the unconscious, but rather its intrinsic immanence to consciousness and subjective experience. This latter perspective explores the complexity of the unconscious that cannot be easily reduced only to a question of manifestation and representation. The most elaborate version of this approach is pursued in the works of Maurice Merleau-Ponty and Thomas Fuchs. Another non-representational approach to the unconscious can be found in Husserl's later works related to genetic phenomenology and the passive constitution of subjective experience.
Thus I assume that there are two main directions in the phenomenological understanding of the unconscious: one exploring the intentional theory of the unconscious and the other inquiring into a non-representational way of approaching consciousness and the unconscious respectively.
In the framework of contemporary phenomenology, an example of the first account can be found in Bernet's analysis of the unconscious representations in phantasy. His approach underlines a particular aspect of the issue, namely the manifestation of unconscious representations in the reproductive inner consciousness. According to Bernet's interpretation, the unconscious can be clarified phenomenologically not as "amputated, unperceived consciousness" (Bernet 2002, 330) but as another type of self-consciousness. Such self-consciousness is defined in respect of what appears (the absent, the alien) and how it appears in consciousness (reproductively as opposed to impressionally), but not in terms of this appearance being itself devoid of a certain "conscious" quality or accompanying representation. Jagna Brudzinska's work is also closely related to Bernet's approach. She describes the unconscious as "a phantasmatic-imaginary structure of intentionality" and views the unconscious as the manifestation of absence and originary otherness (Brudzinska 2006).
The importance of non-representational approaches to the unconscious has been emphasized by Dan Zahavi in his book Self-Awareness and Alterity. Notably, he claims that when "phenomenology moves beyond an investigation of object-manifestation and act-intentionality, it enters a realm that has traditionally been called the unconscious" (Zahavi 1999, 207). By drawing attention to Husserl's analyses of affectivity and passivity, Zahavi proposes that we see the phenomenological unconscious as a fundamentally altered form of consciousness and a "depth-structure of subjectivity" (Zahavi 1999, 206). Aaron Mishara provides a thorough analysis of this aspect of Husserl's work in the phenomenological clarification of the unconscious in his article "Husserl and Freud: Time, memory and the unconscious" (Mishara 1990). This line of thought is also central to the present article.
Husserl's analyses of passivity and pre-predicative experience have been a major source of inspiration for phenomenological approaches to the unconscious and other closely related phenomena. For instance, in his recent book Towards a Phenomenology of Repression, Nicholas Smith develops a phenomenological model for understanding Freud's concept of the unconscious by focusing on the theory of repression. He also relies heavily on Husserl's analyses of passivity and genetic constitution and paves the way toward understanding repression within the sphere of the living present as "a necessary part of all constitution" (Smith 2010, 305). Another interesting facet of contemporary phenomenological discussion about the unconscious is the topic of dreams and sleep in the works of Nicolas de Warren. In his article "The Inner Night: Towards a Phenomenology of (Dreamless) Sleep," de Warren takes on the challenge that the phenomenon of dreamless sleep presents to our conceptions of consciousness and self-awareness. By exploring this experience that "exists for no one" and venturing into the depths of Husserl's analyses of time-consciousness and passivity, de Warren presents sleep as the way that consciousness constitutes itself as the absence of itself (de Warren 2010). His insightful analyses of the metaphor of sleep contribute to the phenomenological clarification of the past, self-forgetting, and the unconscious. De Warren's analyses in fact run parallel to my own attempts to bring together Husserl's and Merleau-Ponty's ideas of sedimentation, past-horizon, and the past existing in the mode of oblivion.
Equally important to the phenomenological clarification of the unconscious is the topic of drives and instincts. In Husserl's approach, drive-intentionality (Triebintentionalität) has quite broad implications that range from analyses of concrete bodily drives to a universal drive-intentionality that generally motivates the protentional openness of the streaming life of subjectivity (Husserl 1973, 595). Drives, as tendencies emerging in obscurity, belong to the sphere of the senses, which Husserl also calls "a hidden reason" (verborgener Vernunft) and which constitutes the precognitive, passive level of subjective experience (Husserl 1989, 277). Non-objectifying drive-intentionality presents an interesting case of a non-representational relation to objects and has direct links to the psychoanalytic perspective, which becomes particularly evident in Husserl's later manuscripts (Husserl 2014). Husserl's deliberations on this topic have attracted a considerable amount of attention in phenomenological literature, some of which draws explicit connections to the issues of repression and drives in Freud's work (Lee 1993; Brudzinska 2004; Smith 2010).
These discussions demonstrate that contemporary phenomenology has no reservations about taking seriously the challenge of the unconscious and that there are several valuable avenues of thought arising out of this topic. In the framework of this paper, I will focus on one of these, namely, the non-representational way of accounting for the unconscious in phenomenology. The idea is to explore not only critical arguments exposing the insufficiency of representational approaches but also to propose a constructive phenomenological account of the non-representational unconscious. As will be shown, such an account can be found in theories which explore how subjectivity is connected to its past life beyond the representational relation constituted by explicit remembering. When it is acknowledged that our experience is not restricted to representational content, it becomes possible to see the unconscious as a horizontal dimension that connects past and present without making this past an explicit object of observation. Such an approach therefore is not restricted to the living present, nor is it focused on unconscious intentionalities. Rather, it explores this particular horizontal "opening upon the past" (Merleau-Ponty 2012, 413) that makes subjective experience meaningful and consistent with itself.
On the one hand, the main advantage of such non-representational approaches consists in breaking with the unfortunate view of the unconscious as a hidden reservoir of past experiences piling up "behind the back" of consciousness, as well as with the contradictory concept of the unconscious consciousness, or "memory that does not remember." On the other hand, phenomenological treatment of such phenomena as sedimentation, forgetting, body memory, and affectivity allows for the preservation of the very idea of the unconscious by approaching it as a horizontal dimension of subjective experience. The results of this approach can arguably be extended to the area of cognitive psychology and its views on the so-called cognitive unconscious and implicit memory.
Accordingly, in what follows I will look into two major examples of non-representational accounts. First, I will present Maurice Merleau-Ponty's and Thomas Fuchs' proposal for an approach to the unconscious as a "sedimented practical schema" of subjective being in the world. Afterwards, I will turn to Husserl's idea of affective consciousness and examine another possible non-representational phenomenological account of the unconscious. My argument is that both Husserl's and Merleau-Ponty's accounts of the unconscious can reinforce one another and provide a ground for the phenomenological approach to the unconscious in terms of non-representational relation to the past. In the conclusion, I will situate the non-representational approach to the unconscious in relation to the issue of the cognitive unconscious and implicit memory as discussed in cognitive psychology.
2 Non-representational accounts of the unconscious: Merleau-Ponty and Fuchs on the unconscious and body memory

The critique of the representationalist approach to consciousness and correspondingly to the unconscious is characteristic of several post-Husserlian phenomenological projects. 4 Arguably the most fruitful account of non-representational consciousness inside the phenomenological tradition is given by Maurice Merleau-Ponty, who emphasizes the role of embodiment, being in the world, and of intersubjectivity as fundamental constitutive dimensions of subjectivity. He asserts that "there is no private sphere of consciousness" (Merleau-Ponty 2012, 395) and that consciousness is entirely transcendence, "the simultaneous contact with my being and with the being of the world" (Merleau-Ponty 2012, 396). For him, this implies the reevaluation of the very idea of transcendence and of intentionality, which accordingly can be understood not as a cognitive relation to an object by positing it mentally in one's mind, but rather as a concrete embodied and situated directedness towards the world.
In his Phenomenology of Perception, Merleau-Ponty adopts Husserl's notion of "operative" intentionality (fungierende Intentionalität) and interprets it as a pre-reflective directedness which establishes a natural, pre-predicative unity of our being in the world (Merleau-Ponty 2012, lxxxii). Contrary to act-intentionality, which describes the relation to objects on the level of judgments and reasoning, and thereby constitutes the basis for objective knowledge, operative intentionality can be understood as "the body-subject's concrete, spatial and pre-reflective directedness towards the living world" (Reuter 1999, 72). While bringing the subject's embodiment and the practical nature of bodily directedness to the foreground of the constitutional issue, Merleau-Ponty points to an apparent insufficiency of representational accounts. Such accounts, so his argument goes, fail to make sense of a particular intentionality involved in the performance of movements 5 and all essentially bodily phenomena. Furthermore, they lead to an altogether false image of subjectivity, featuring it as consisting of distinct representations which are either available or unavailable to conscious awareness.
Merleau-Ponty highlights two main problems in understanding consciousness and the unconscious in representational terms. The first problem, which he ascribes to the philosophies of consciousness, consists in the impossibility of conceiving of any content of experience beyond the "manifest content spread out in distinct representations" (Merleau-Ponty 2012, 171). The second problem, belonging to the theories of the unconscious, "is to double this manifest content with a latent content, also made up of representation" (Merleau-Ponty 2012, 171). He uses the example of sexuality to make the point that featuring it in terms of either conscious or unconscious representations does not come any closer to understanding its continuous presence "in human life as an atmosphere" (Merleau-Ponty 2012, 171).
Merleau-Ponty's critique of the approach to consciousness and the unconscious as consisting of representations is directly related to his idea that subjective experience cannot be made transparent to itself, but is instead intrinsically characterized by its self-opacity and fundamental ambiguity. In this case, Merleau-Ponty clearly diverges from the Cartesian as well as the Husserlian ideal of certainty and their belief that self-consciousness provides us with a perfect vantage point towards the inner workings of our minds. Instead, he draws on the idea of the bodily structure of perception, where the body is both what perceives and what stays invisible for itself: "it [the body] is neither tangible nor visible insofar as it is what sees and touches" (Merleau-Ponty 2012, 94). The ambiguity of bodily experience and the non-representational character of bodily awareness and perception lie at the foundation of Merleau-Ponty's view of subjectivity and inspire his descriptions of various phenomena. Contrary to representational approaches that feature contents of conscious experience through what appears to the subject, Merleau-Ponty believes that what we acquire through experience is not represented in our minds in either a conscious or an unconscious way. 6 He claims that we can live more things than we can represent to ourselves and that our experience is by no means restricted to the content of intentional representations (Merleau-Ponty 2012, 310).
Thus Merleau-Ponty makes a radical suggestion for the phenomenological theory of the unconscious: to avoid talking about conscious vs. unconscious representations altogether, and rather to understand the unconscious as a "sedimented practical schema" (Merleau-Ponty 2010, 191) and as our own self-opacity. In a similar vein, in the Phenomenology of Perception, he gives examples of situated feelings and actions, which are defined as much by their directedness to objects as by their ambiguity and obscurity regarding their own contextuality: "We would be equally wrong by making sexuality crystallize in 'unconscious representations' or by setting up in the depths of the dreamer a consciousness that can identify sexuality by name. Similarly, love cannot be given a name by the lover who lives it. It is not a thing that one could outline and designate, it is not the same love spoken of in books and newspapers, because it is rather the way the lover establishes his relations with the world; it is an existential signification. The criminal does not see his crime, nor the traitor his betrayal, but not because these exist deep within him as unconscious representations or tendencies, but rather because these crimes or betrayals are so many relatively closed worlds and so many situations. If we are situated, then we are surrounded and cannot be transparent to ourselves, and thus our contact with ourselves must only be accomplished in ambiguity" (Merleau-Ponty 2012, 401; my emphasis). Here we can see that such ambiguity and self-opacity refer not merely to the impossibility of complete self-knowledge but rather to what Merleau-Ponty calls the "situatedness" of subjective experience. In other words, we are intransparent to ourselves because our experience is not restricted to representational content and thereby cannot be made an explicit object of observation.
Along the same lines, in his lecture courses on Institution and Passivity and Visible and Invisible, Merleau-Ponty presents the unconscious as "perceptual consciousness," 7 drifting not that far from the definition of the unconscious in terms of the intrinsic self-opacity of conscious experience. Already in Husserl, perception is described as an unending process, in which objects appear only to a certain degree of approximation and never in fullness (Husserl 2001). For Merleau-Ponty, it means that perceptual consciousness relies on unconscious syntheses which complete our otherwise fragmentary view of reality by means of particular subjective predispositions and a sedimented history. The unconscious can therefore be understood as a background against which we see objects, not as something that can be grasped in our representations of these objects: "This unconscious is to be sought not at the bottom of ourselves, behind the back of our 'consciousness,' but in front of us, as articulations of our field. It is 'unconscious' by the fact that it is not an object, but it is that through which objects are possible, it is the constellation wherein our future is read-It is between them as the interval of the trees between the trees, or as their common level. It is the Urgemeinschaftung of our intentional life, the Ineinander of the others in us and of us in them" (Merleau-Ponty 1968, 180).
The description of the unconscious as the "interval between the trees" appears to be quite a precise analogy: the unconscious is literally taken to be the way we fill in the gaps of uncertainty in the perception of objects, and, what is more, a way which determines how exactly we will relate to them. Different people will fill up the gaps between these metaphorical trees quite differently: depending on their background and individual history, someone might see a situation as threatening, while someone else might see an equivalent situation as promising and exciting. It is an interesting feature of our experience that when a certain amount of information is missing (which is the case for any kind of inadequate or essentially incomplete experience, such as perception and interaction with other people), we tend to fill it in with our expectations based on previous experiences. Even if we see objects only from a certain perspective and never from all possible angles, our perception still functions as if it were complete.
Thus, when Merleau-Ponty claims that "perception is unconsciousness" (Merleau-Ponty 1968, 189), he intends to emphasize not that what one directly perceives as an object is the unconscious, but that perception functions as a medium through which objects are perceived in this or that manner. He states that unconsciousness "is and is not perceived. For one perceives only figures upon levels-and one perceives them only by relation to the level, which therefore is unperceived" (Ibid). Such a definition of the unconscious as a perceptual consciousness, however, does not imply that Merleau-Ponty ever intended to reject the distinction between consciousness and the unconscious altogether. He rather sought to avoid understanding the unconscious in terms of another psychic reality or some kind of other "I think," which forms representations "behind the back" of the conscious subject (Merleau-Ponty 2010, 207). Instead of the strictly dualistic idea separating conscious and unconscious processing, Merleau-Ponty develops the idea that the unconscious is a necessary part of any conscious experience. The unconscious thus is not the opposite of consciousness; it is "the very perceptual consciousness in its ambiguity, opacity, multiplicity of meanings, and unending quest for interpretation" (Stawarska 2008, 62).
A similar critique of representationalism regarding consciousness and the unconscious returns in Merleau-Ponty's accounts of memory in his lecture course on Institution and Passivity. In this course, the problem of memory oscillates between two modes of our relation to the past: memory as "construction" and memory as "conservation" of the past. In the first mode, roughly corresponding to that of explicit memory, the past is constituted as an object of one's recollections. This is a transcendent past which gets to be constantly recreated in the history of subjective transformations. It is a "construction" as long as it becomes the past which I can remember and bring to my present awareness and link it actively to other events in my life. This is not the past which merely happened, but rather the past as it is remembered. As to the second mode, Merleau-Ponty first calls it "conservation" of the past, only to subsequently criticize this formulation as it relies on the idea of memory-traces or representations residing in some kind of reservoir or collector of past experiences. Refuting this idea, Merleau-Ponty nevertheless claims that there is the past for us, which exists not in the mode of remembering but in the mode of oblivion. 8 Once again, the very idea of representation proves to be the main enemy obstructing the comprehension of subjective relations with the past, which makes the past either a mere construction of one's memory or a mere collection of memory-traces. Merleau-Ponty thinks that the truth lies in between these two modes of past-relation and can only be articulated when the idea of representation regarding memory is abandoned altogether. He claims that memory should not be seen as an opposite of forgetting but that it could be elucidated through our relation with the past on the pre-reflective level of embodied existence (Merleau-Ponty 2010, 208-209).
To summarize, there are several important steps clarifying Merleau-Ponty's approach to psychoanalysis and to the problem of the unconscious. First of all, unlike Husserl, Merleau-Ponty's phenomenology finds itself confronted with the same challenge which was central to the psychoanalytic endeavor and which concerns the issue of consciousness being intransparent to itself and defined as much by its explicit as by its implicit or latent dimensions. As he puts it: "Phenomenology and psychoanalysis are not parallel; much better, they are aiming toward the same latency" (Merleau-Ponty 1993, 71). Secondly, Merleau-Ponty believed that the idea of representation obscures the understanding of both consciousness and the unconscious. He aims to overcome this limitation in his theory of operative intentionality, embodiment, and perceptive consciousness. In the perspective opened by these ideas, he features the unconscious as a sedimented practical schema and as the subject's ambiguity with regard to his own situatedness in the world. And finally, he applies his critique of representationalism to the phenomenon of memory and suggests that the subject's relation to the past is mediated by forgetting as much as by remembering.
These last two directions in understanding the unconscious (via situated, embodied, perceptive consciousness and via non-representational relations to the past) remain very close to each other within Merleau-Ponty's thought. The necessary step to bring them together has been accomplished by Thomas Fuchs' phenomenology of body memory, one of the aims of which is to bring to the fore the basic temporal structure of embodied existence. By analyzing the phenomenon of implicit memory, Fuchs shows that it consists in a different kind of presence of the past than that of the explicit memory. While explicit recollection presumes the presentification of one's past experiences in a personal autobiographic memory, implicit memory, for its part, cannot be clarified via any kind of representational relation. As embodied subjects we cannot be said to have the past as an object, but rather we are ourselves this past (Fuchs 2000, 76). This past becomes a modus of one's bodily existence and stays unnoticed but effective, unseen but present through bodily dispositions, familiarities, habits, unintentional avoidances and omissions.
Body memory serves as a foundation for our personal identity: an identity which exists beyond explicit memory and the narratives we tell about our lives, and which instead constitutes the indispensable basis for our self-familiarity. It is personal inasmuch as it accumulates experiences and dispositions specific to each particular individual.
The unconscious character of body memory is, once again, not due to any incarnation of an implicit core of subjectivity behind the back of consciousness in the form of either subconscious psychic or automatic brain processes. Similarly to Merleau-Ponty's views, Fuchs understands the unconscious not in terms of representations or hidden intentionalities but as a sum of bodily dispositions which tacitly define the individual relation to the world and to other people. For instance, a shy person does not need to form representations, either consciously or unconsciously, in which her attitude would find its manifestation. Instead, as Fuchs remarks, such a person would exhibit her attitude in her very posture or tone of voice, in her avoidance of asserting herself firmly in front of other people or of risking the expression of her opinions in public. In the same vein, in Merleau-Ponty's example, love is described not as a relation to a person which could be grasped in a particular object-directed intentionality, but rather as "an existential signification," as a "way the lover establishes his relations with the world" (Merleau-Ponty 2012, 401).
Another example can be found in the phenomenon of traumatic experience, which contributes to the phenomenological clarification of the dynamic unconscious. The repressed trauma does not survive as some kind of representation, objective "trace" or "image," which cannot be erased. Instead, it survives "only as a style of being and only to a certain degree of generality" (Merleau-Ponty 2012, 85). As Fuchs points out, the influence of past traumatic experiences on a traumatized person manifests itself in resistance and defensive behavior (not necessarily transparent for the person) in situations triggering such unconscious dispositions (Fuchs 2012a, 98). The unconscious influence of traumatic experiences persists not in the form of explicit menacing objects, but as a medium making these objects appear as threatening. The dynamic unconscious is therefore not understood as a reservoir for repressed feelings, thoughts or desires, but as transformations of the lived body and the lived space, which restructure one's field of experience and determine against which background one would see and judge new existential situations and interactions with other people.
By extending the life of consciousness beyond the narrow focus of self-knowledge and present awareness, by bringing the experiencing subject back into the intersubjectively shared world and into the concreteness of its embodied and affective being, the phenomenology of the lived body overcomes the idea of the unconscious as hidden "behind the back" of consciousness, and takes it as the practical schema of our bodily being in the world and as the structure of our field of perception. Summarizing this position, Fuchs writes: "[The unconscious] surrounds and permeates conscious life, just as in picture puzzles the figure hidden in the background surrounds the foreground, and just as the lived body conceals itself while functioning. It is an unconscious which is not located in the vertical dimension of the psyche but rather in the horizontal dimension of lived space, most of all lodging in the intercorporeality of dealings with others, as the hidden reverse side of day-to-day living" (Fuchs 2012a, 100).
While Bernet claims that the unconscious is the presence of the absent, the appearance of the non-appearing, Fuchs develops Merleau-Ponty's opposing view that the unconscious is "absence in presence, the unperceived in the perceived" (Fuchs 2012a, 101). This absence, however, is not the concealed or isolated reverse side of consciousness, but rather its own way of being: the sum of incorporated predispositions, habits and the like, which themselves do not appear in any graspable way, but instead constitute a background against which we relate to the world.
A non-representational account of the affective unconscious in Husserl's Analyses Concerning Passive Synthesis
Husserl's own most consistent attempt to provide an account of the unconscious hinges upon the level of pre-predicative experience and passive constitution. Similarly to the previously discussed phenomenological approaches, for Husserl the unconscious is also a problem of consciousness. He decides, however, to work on it against the background of the idea of affectivity and associative syntheses, and not starting from the idea of the cogito or of intentional representation. A sketch of the phenomenological theory of the unconscious can be found in Husserl's Analyses Concerning Passive Synthesis and later manuscripts, which are now published in volume 42 of Husserliana: Grenzprobleme der Phänomenologie: Analysen des Unbewusstseins und der Instinkte, Metaphysik, späte Ethik: Texte aus dem Nachlass.
In my view, there are three important aspects of the affective unconscious in Husserl that should be made explicit here. The first concerns its formal definition in terms of a Grenzphänomen, which designates the unconscious as the zero-level of affective vivacity and features it as relative to the graduality of consciousness. The second corresponds to the idea of the affective past-horizon and the unconscious as "sedimented." The third explores the topic of the affective conflict and Husserl's take on the issue of repression. Inquiry into these three aspects of the unconscious in Husserl's genetic phenomenology will allow us to see that not only Merleau-Ponty's but also Husserl's phenomenology can contribute to understanding consciousness and the unconscious in the non-representational way. 9
Zero-point of affective vitality and the unconscious as Grenzphänomen
The first and the most basic sense of the unconscious for Husserl is non-vivacity as opposed to the different degrees of vivacity of consciousness. In the Analyses, Husserl employs several metaphors to describe this phenomenon. Some of them, as Aaron Mishara illustrates (Mishara 1990, 36), evoke images from the German Romantic literary tradition, such as those of the "nightfall" or the "night of the unconscious." Nicolas de Warren underlines Husserl's employment of wakefulness and sleep as metaphors for transformations of time-consciousness, where de-presentification in retention and loss of "intuitivity" are seen as analogous to "falling asleep" (de Warren 2010). Other terms are also used by Husserl to feature the unconscious as the underworld and the realm of death. Closely related to these metaphors are the archeological images of sedimentation. 10 Other expressions play with the psychological and even psychophysical vocabulary of the time and situate Husserl's notion of the unconscious at the threshold of affective intensity. The difference between conscious and unconscious is grasped in terms of foreground/background differentiations and in reference to affective power and powerlessness (Kraftlosigkeit). Mathematical vocabulary provided Husserl with another useful term for the unconscious as the zero level of vivacity and an "affective zero-horizon" (affektiver Nullhorizont) (Husserl 2001, 216/167).
What brings these different metaphors and analogies together is an attempt to situate the unconscious at the border of the affective vivacity of consciousness. Such a border, however, is not something that exists objectively, which could be measured or determined in quantitative terms. Moreover, Husserl does not need to suggest any functional relation between the intensity of conscious representations and the intensity of physical phenomena, since from the start he attributes intensity or vivacity to consciousness itself and not to its content. In this spirit, he writes: "[The unconscious] designates the nil of this vivacity of consciousness and, as will be shown, is in no way a nothing: A nothing only with respect to affective force and therefore with respect to those accomplishments that presuppose precisely a positively valued affectivity (above the zero-point). It is thus not a matter of a 'zero' like a nil in the intensity of qualitative moments, e.g., in intensity of sound, since by this we mean that the sound has ceased altogether" (Husserl 2001, 216).
The unconscious in Husserl is clearly a concept founded on the idea of affective graduality of consciousness and designates the zero-level of affective vivacity. However, the unconscious in this sense is by no means an opposite of consciousness, but is necessarily relative to it. It should be noticed that this formulation makes of the unconscious a Grenzphänomen and does not contribute to the substantial definition of the phenomenon. However, based on this general definition, Husserl succeeds, if not in fully developing a phenomenological account of the unconscious, then at least in sketching several directions of its possible elaboration.

9 An important aspect of this topic, namely the one that concerns drives and instincts, will as such be absent from the current interpretation. However, it is essential to Husserl's analyses of association and affectivity and thereby makes up part of what I designate here as the affective unconscious.

10 All those metaphors get mixed in Husserl's descriptions, as for instance: "…every accomplishment of sense or of the object becomes sedimented in the realm of the dead, or rather, dormant horizontal sphere, precisely in the manner of a fixed order of sedimentation: While at the head, the living process receives new, original life, at the feet, everything that is, as it were, in the final acquisition of the retentional synthesis becomes steadily sedimented" (Husserl 2001, 227; my emphasis).
According to Mishara, there are two different types of the unconscious which can be separated here: the pre-affective unconscious in the impressional sphere of consciousness and the unconscious as the sphere of forgetfulness and the remote past (Mishara 1990). In Husserl, this distinction can be found in Appendix 22 to §35 of the Analyses (Husserl 2001, 525). The pre-affective unconscious mostly designates the whole multiplicity of affective tendencies which do not reach the ego's awareness and thereby stay in the background against which prominent tendencies come to be differentiated. In my view, this sense of the unconscious as pre-affective should rather be called preconscious and distinguished from the proper unconscious which refers to the past-horizon. In what follows, I will restrict my analyses to the unconscious in this latter sense. This terminological choice finds its support in Husserl's later differentiation between the sphere of the affective past-horizon and of "sedimentation," on the one hand, and the pre-affective background, on the other hand. The term "unconscious" is then reserved for the sedimented: "there are no other unconscious backgrounds than those of sedimentation" (Husserl 2014, 37). Thus, in order to understand Husserl's idea of the unconscious in this sense, we need to focus on the three following notions: background consciousness, past-horizon, and sedimentation. These clarifications will allow us to go beyond the merely formal definition of the unconscious as Grenzphänomen and to make explicit the important link between the problem of the unconscious and the problem of memory.
Affective past-horizon and the unconscious as "sedimented"

The past is a real stumbling block for any theory of memory which seeks not only to explain processes of retention and remembering but equally to understand how past experience can be preserved so that it can be brought back to awareness. Merleau-Ponty pinpoints a certain paradox here, consisting in the fact that any idea of past-preservation already presumes that this past should be present in some peculiar way (Merleau-Ponty 2012, 436). Husserl successfully deals with this paradox in the case of retention, which serves the double purpose of being the past in the present and the preservation of this present at the same time. The same goes for remembering, which by definition is a presentification of the past. Only the remote past, the sphere of forgetfulness and sedimentation, appears to have this status of inexplicable absence: it is nowhere to be found, it does not appear in any way, and yet it must be somehow preserved since it affects our present life implicitly and can be reawakened in explicit memory.
It is almost impossible to avoid this paradox within the frame of the temporal analytics of consciousness, since this paradox itself belongs to the temporal order. As long as one approaches the problem of the past exclusively in terms of its temporal distance, the past becomes necessarily transcendent to the present life of consciousness. However, as already argued in the previous section of the paper, the presence of the remote past and its effectiveness in nearly any domain of one's present life can be approached without necessarily conceiving of it in terms of hidden representations, but rather as an implicit dimension incorporated in one's way of being. Both Merleau-Ponty and Fuchs appeal to this dimension in terms of one's personal history as sedimented in the living body and the way it inhabits its space. Husserl also developed an idea of sedimentation and the remote past which served the purpose of solving the aforementioned paradox and of explaining how the "sphere of forgetfulness" can remain connected with the present life of consciousness.
In order to do so, Husserl speaks of the constitution of the past in terms of horizon, which makes the inclusion of the past in the sphere of the living present possible only in its potentiality and not in its actuality. This potentiality of the past-horizon is made possible due to the retentional structure of consciousness as well as due to the fact that near retention belongs to the impressional present which serves as the source of all affective force. The past-horizon is further divided into the sphere of the close past, as the near horizon of living retention, and the horizon of the distant past, or "'the forgotten' that carries on the differentiated retentional path of the past" (Husserl 2001, 529). This retentional path is carried on into an indeterminate empty horizon, which Husserl describes as "dead horizon," "endless past," "sphere of forgetfulness" and finally as the unconscious (Husserl 2001, 513-525). 11 The horizon of the distant past presents a serious problem for the idea of the temporal continuity of consciousness because it presumes the extension of the retentional process beyond the point where this process itself is finished. 12 An important aspect of Husserl's solution to this issue consists in considering this remote past-horizon not exclusively in terms of its temporal constitution but as an "affective horizon," that is, as constituted essentially through modifications of the affective vivacity of consciousness. Importantly, in the Analyses, retention means not only temporal modification but designates equally the loss of affective vivacity. The past-horizon, accordingly, is described as a horizon of affective gradations, which extends from its peak in the impressional present to the less and less affective retentional past until it reaches the point of ineffectiveness.

11 Similarly, in her analyses of retention in Husserl, Lanei Rodemeyer distinguishes between "near" and "far" retention (Rodemeyer 2006, 88-91). Whereas the former is involved in the constitution of the living present, the latter designates what is here called the distant past-horizon. In my work, I prefer maintaining this distinction in terms of retention and "past-horizon" (instead of distinguishing between near and far retention) for several reasons. First, this terminological choice allows one to overcome all possible confusion between "near" and "far" types of retention, while preserving the sense of retention as the continuous temporal modification of the living present into the just-past. Secondly, it allows one to clearly preserve Husserl's own difficulties regarding the extent of the retentional process. For him, retention presupposes, in the first place, a "connection to the immediate realm of the present" (Husserl 2001, 416), whereas the distant "submerged" past exceeds the process of retentional modification. Husserl underlines that the retentional process stops at some point and gets transformed into the sphere of the sedimented unconscious. This sedimented distant past constitutes the core of the past-horizon. Finally, the use of the term "past-horizon" instead of "far retention" allows one to overcome the merely temporal aspects of the constitution of the past. The term "past-horizon," therefore, is conceptually more suggestive and allows accounting for not merely temporal, but also "unconscious" and affective aspects of the distant past, as well as underlining its horizontal connectedness with the present.

12 In this spirit, Husserl points out that the unconscious as sedimented past history goes beyond temporal modifications: "The past is finished time (erledigte Zeit), the finished duration […]" (Husserl 2001, 520). He further asserts that the retentional process ceases and sinks into the atemporal unconscious: "Earlier I thought that this retentional streaming and the constitution of the past would continue to go on incessantly even within complete obscurity. But now it seems to me that one can dispense with this hypothesis. The process itself ceases."
The retentional modification, as Husserl underlines repeatedly, is a transformation of consciousness itself, consisting in changing modes of temporal appearance as well as in the affective depleting of the original impressions. In this regard, de Warren points out that retentional consciousness not only "'de-presentifies' its intentional object but also 'de-presentifies' itself" (de Warren 2010, 283). By this he means that becoming unconscious is inherent to conscious self-transformation. However, the retentional process is not only depleting and "clouding over," but it is equally a process of identification, inasmuch as it is the conservation of the noematic senses of objects: "And when there is no affection coming from the diverse objects, then these diverse objects have slipped into sheer nightfall, in a special sense, they have slipped into the unconscious" (Husserl 2001, 221). This "nightfall," however, is not nothing: all noematic senses are preserved there, but in such a peculiar and undifferentiated manner that they are prevented from reaching conscious awareness.
Thus, on the one hand, the retentional process is a process of identification securing the sameness of objective senses. On the other hand, it is a process of affective depleting and temporal modification. This means that an objective sense's temporal mode changes and loses its affective impact on the impressional present, and yet the sense itself is not altered in these transformations. A song heard yesterday is still the same song, even if it no longer belongs to one's actual field of experience: "In the fading away, the tone itself thus does not lose anything that it originally was; if it is given at the end as completely empty of differences with respect to content, then this concerns its mode of givenness, not it itself" (Husserl 2001, 220). Such a transformation of the mode of givenness consists in a shift "from an explicit sense to an implicit sense" (Husserl 2001, 223). Moreover, empty presentations themselves cannot be described in terms of representational or explicit intentionality. The object-directedness in the past is therefore grasped as "implicit intentionality" (Husserl 2001, 222), which can be reawakened and brought back to intuitive presentification, but which as such is in no way an actual objectifying intention. Now, an important question needs to be answered as to how this affectively depleted and temporally distant past can be reawakened again. Husserl claims that the unconscious past-horizon is a necessary condition for affective awakening, and the latter is a prerequisite for remembering: "Awakening is possible because the constituted sense is actually implied in background-consciousness, in the non-living form that is called here unconsciousness" (Husserl 2001, 228). In the process of the awakening of the distant past, an affectively discharged, sedimented sense "emerges" from out of the "fog" and "what is implicit becomes explicit once more" (Husserl 2001, 223-224).
Such an awakening is a product of affective communication 13 and therefore a product of associative synthesis.
Affective awakening of the past and remembering are two closely related phenomena which, however, should not be identified. While the first is essentially a phenomenon of an affective nature, by means of which a past sense regains its affective force, the latter is an act of intuitive presentification, in which a sense becomes the object of an explicit intention. "The affective awakening," as Husserl remarks, "does not bring the uniform sense to intuition […], but does indeed effect an uncovering" (Husserl 2001, 225). Not all affectively awakened senses become actual intuitions or recollections; most of them never reach this level. In this sense, remembering is the transition of an awakened empty presentation into reproductive intuition. Without this awakening no remembering would ever be possible.
Thus remembering is a modification of the mode of givenness of an objective sense, and thereby of consciousness itself, and so is retention: the latter changes the impressional consciousness into an undifferentiated past-horizon, the former transforms it into a reproductive consciousness of the past. Bernet claims that such a reproductive consciousness can itself be understood as unconscious representation. However, in Husserl, the unconscious does not correspond to reproduction, but rather to the undifferentiated consciousness of the past-horizon. Moreover, I think it is consistent to claim that this consciousness is by no means a representational or an intentional one, but is an affective consciousness of the indistinct horizon of the past, which Husserl also calls background-consciousness: "One may well say that within the zero-stage, all special affections have passed over into a general undifferentiated affection; all special consciousnesses have passed over into one, general, persistently available background-consciousness of our past, the consciousness of the completely unarticulated, completely indistinct horizon of the past, which brings to a close the living, moving retentional past" (Husserl 2001, 220).
In this sense, the past and all its content is preserved as a "horizon," temporally and affectively relative to the impressional consciousness. Accordingly, the preliminary conclusion can be drawn that there are two main modes of our relation to the past: the remembered past, in which it becomes an object of explicit recollection, and the affective past, which is present as an affective horizon and as a sphere of sedimentation and forgetfulness. In this latter perspective, the past has no other reality which could be attributed to it besides affective reality, relative to one's impressional present. In the Analyses as well as in later manuscripts, Husserl clarifies it as a sphere of unconscious sedimentation (Sedimentierung), whose affective status is always dependent on actual impressional experience.
The idea of the unconscious as the past-horizon constituted through affective and temporal modifications is closely linked to the idea of its ineffectiveness. If, as Husserl insists, "positive affective force is the fundamental condition of all life" (Husserl 2001, 219), and if the affective vivacity of the unconscious is close to zero, then its affective impact must be fully dependent on the conditions of present subjective experience. And indeed, this seems to be exactly what Husserl implies in claiming that the affective reinforcement for the awakening of past senses must always come from the living present, as well as from the dispositions and motivations inherent to it.
Although this position is arguably justified when it comes to the general conditions of affectivity (if the living present is completely empty and lifeless, no communication with the past is possible), it nevertheless causes some trouble regarding the affective status of the past itself. Moreover, the reality of our subjective experience may cast some doubt on Husserl's view. The riddle of the past asserts its importance not because it has lost its impact on our present life but precisely because it has not. There are past experiences which, however temporally distant, remain constantly affectively present to us, even if their influence as such remains unnoticed. Also, the distinction between the sedimented, as characteristic of the distant past, and the totality of the non-sedimented, as characteristic of the living present (Husserl 2014, 37), might appear contradictory. There is indeed a level of implicit and sedimented experience which by no means can be called unconscious in the sense of being ineffective and dead for us. In what follows, I shall investigate the possibility of accounting for this issue within Husserl's own approach. Notably, it is in these deliberations concerning repression and affective conflict that Husserl comes closer than ever to drawing explicit connections between the psychoanalytical and phenomenological approaches to the unconscious.
Affective conflict and the unconscious as repressed
One of the radical differences between Freud's and Husserl's theories of the unconscious concerns the affective status of the past and its capacity to affect the present. While for Husserl the unconscious corresponds to the zero level of affective intensity, it is the affective capacity of the unconscious which plays the major role for Freud. The main reason for taking the unconscious as ineffective and incapable of exercising any influence on the present consciousness lies in the very idea which specifies the unconscious as a frontier and the final point of modification and vitality.
However, Husserl also outlined other directions of enquiry concerning the affective status of the remote past and the sphere of forgetfulness. Already in Appendix 19 to the Analyses, he questions the possible development of affections as "progressing" or "rousing from the unconscious" (Husserl 2001, 518-519). In order to understand this line of thought, it is fruitful to address Husserl's take on the issue of affective suppression.
First of all, in the Analyses, Husserl approaches the suppression of affective tendencies as a function of contrast. In general, contrast delineates the affective relation between opposite or antagonistic tendencies. The highest form of contrast is affective conflict: "Contrast is the affective unification of opposites […] Rivalry, conflict, is the dissension of opposite things" (Husserl 2001, 514). The applications of the principle of contrast are quite broad. On the one hand, association of contrast can lead to the increase of affective intensity of affectively unified opposite terms. Husserl's examples include the augmentation of the vivacity of the whole (a string of lights, a melody) by means of contrast between parts, so that a louder tone makes a softer one more noticeable, or a sudden change in brightness of a particular light influences the noticeability of the whole string. On the other hand, contrast in the form of affective conflict can lead to the suppression of concurrent affections, especially if they are not integrally cohesive (Husserl 2001, 514). Interestingly, such suppression can equally result in an increase of affective vivacity which in this case is confined to the unconscious: "In this case, a special repression takes place, a repression of elements, which were previously in conflict, into the 'unconscious,' but not into the integrally cohesive sphere of the distant past; by contrast, in the living conflict, repression takes place as a suppression, as a suppression into non-intuitiveness, but not into non-vivacity; on the contrary, the vivacity gets augmented in the conflict, as analogous to other contrasts" (Husserl 2001, 514-515).
To a certain extent, the concurrence of affective tendencies which Husserl describes as pertaining to the affective relief of the living present is already a case of suppression and affective conflict: stronger affective tendencies win over their weaker counterparts and suppress them into the background. Moreover, any retentional modification also presupposes the suppression of other affections which gradually lose their affective impact. However, as can be seen in the quote cited above, Husserl also has something more specific in mind. Affective conflict suppresses affective tendencies into the unconscious, but in such a way that the affective vivacity of these tendencies increases instead of diminishing. In this case, the affection which is "winning out does not annihilate the other ones, but suppresses them" (Husserl 2001, 518), and this suppression has a reverse effect on the vivacity of the contrasted affections. In this passage, Husserl underlines that repressed elements sink into the unconscious. However, this is not the unconscious in the sense of the cohesive, undifferentiated past that has lost its affective impact. Husserl's version of the "repressed" unconscious is alive and has its own affectivity, which even implies that affections can evolve or progress from it.
Whether Husserl ultimately meant to separate these two versions of the unconscious, as undifferentiated past-horizon and as repressed, cannot be elucidated on the basis of his texts. Nevertheless, the fact that he was aware of the challenge that repression presents to the phenomenological theory of the unconscious is clear. Not accidental in this sense is the way he approaches it, seeing the repressed unconscious more as an open question than a solution: "Affections can play to each other's advantage here, but they can also disturb one another. An affection, like that of extreme contrast ('unbearable pain') can suppress all other affections, or most of them […]—this can mean to reduce to an affective zero—but is there not also a suppression of the affection in which the affection is repressed or covered over, but is still present, and is that not constantly in question here?" (Husserl 2001, 518). 14 It was clear to Husserl that repressed affections do not lose their affective vivacity and can even evolve from the unconscious. Not accidentally, he sees the question of repressed affects as one closely related to Freud's psychoanalysis. 15 In Husserl's opinion, the phenomenological clarification of instinctual drives and repressed affections can contribute to the eidetic (as opposed to merely subjective) analyses of the unconscious which were first brought to light by the psychoanalytic approach (Husserl 2014, 126).
Bégout, who first linked these fragments from Husserl's later manuscripts to the question of the affective efficacy of the past, believes that this might prove that Husserl's view on the affectivity of the past is not uniform. He writes in this regard: "In fact, Husserl develops the decisive idea according to which the repressed affections do not lose, contrary to what one might have thought, their affective validity and effectiveness. Indeed, repression of an affection by another affection privileged by the self does not nullify its affective force" (Bégout 2000, 187-188), my translation.
Bégout suggests distinguishing between, on the one hand, the retentional process, which corresponds to the constitution of the distant past as devoid of affective force, and, on the other hand, the process of repression, which also leads to the non-intuitivity of the past but maintains the affective vivacity of the repressed tendencies (Bégout 2000, 216). In a similar vein, when Smith addresses the topic of the repressed unconscious in Husserl's work, he also underlines this double destiny of affective modification in retention. Notably, he shows how Husserl's analysis of the perseverance of sedimented experiences, especially in the sphere of drives and feelings, contributes to understanding the repressed unconscious through the lens of genetic phenomenology (Smith 2010, 228-241).

14 A similar line of thought returns in the later manuscripts (1934), in which Husserl comes to thematize another kind of affective conflict, the one that belongs to the sphere of drives (Triebe) and affects (Affekte). In the Appendix XIV entitled "Eingeklemmter Affekt," he notes that the intensity of desire is increased not only in an actual turning of one's attention towards the object of such desire but also in the opposite case, when one's desire is ignored and repressed (Husserl 2014, 112).

15 When he claims, for instance: "Alles Verdeckte, jede verdeckte Geltung fungiert mit assoziativer und apperzeptiver Tiefe, was die Freud'sche Methode ermöglicht und voraussetzt" (Husserl 2014, 113).
The phenomenon of repression illustrates that the past cannot be reduced only to temporally modified and obscure experience. Quite the contrary, seeing the past from the perspective opened up by the analyses of affectivity allows us to account for essential differences in the ways that it maintains connections to the living present. In this sense, it is plausible to accept the zero-affectivity of the past-horizon and repressed affectivity as the two main types of affective modification, both of which contribute to the phenomenological understanding of the unconscious.
To summarize, there are several important points clarifying the conception of the unconscious that emerges from Husserl's analyses of passive synthesis. First, Husserl approaches the unconscious not in terms of cognitive or intentional structure, but as a phenomenon belonging to the affective order of subjective constitution. Husserl's idea of affectivity as a constitutive dimension of subjectivity paves the way to seeing consciousness and the unconscious not as mutually exclusive phenomena but as different levels on the scale of affective intensity. Secondly, Husserl develops his understanding of the affective unconscious as the sphere of the sedimented past, horizontally connected to the living present. The concept of the affective past-horizon designates a particular mode of givenness of the past and is intended to account for the connectedness between the present and the past life of consciousness which exists beyond the level of explicit memory and underlies the possibility of retroactive affective awakening. Finally, Husserl's inquiries into the topic of affective conflict and the issue of repression allow him to enrich his idea of affective modification and thereby contribute to a phenomenological clarification of the affective vivacity of the past.
4 Discussion and conclusion: the unconscious as a non-representational relation to the past

The aim of this paper has been to present an alternative to intentional or representational analyses of the unconscious. By systematically exploring the views of Merleau-Ponty, Fuchs and the later Husserl, I intended to outline a general theoretical framework for a non-representational approach to the unconscious within the phenomenological tradition. One of the main goals of such an approach is to counter the understanding of unconscious mental life in terms of mental acts or representations of essentially the same type as conscious mental acts but devoid of any conscious quality. Instead of trying to account for unconscious intentionalities, the authors featured in this paper paved the way to quite a different idea of the unconscious.
We have seen that Merleau-Ponty and Fuchs understand the unconscious as a "sedimented practical schema" of subjective being in the world, which contributes to the ways by which we implicitly interpret reality, fill in the gaps of uncertainty, and invest our social interactions with meaning. Husserl's genetic phenomenology reveals another non-representational approach to the unconscious, namely one based on the broadened idea of consciousness and its affective graduality. The most important contribution of Husserl's account of the unconscious concerns his ideas of sedimentation and the affective past-horizon. Significantly, both Merleau-Ponty's and Husserl's conceptions of the unconscious converge on the concepts of sedimentation and the horizontal openness of subjective experience upon the distant past. These conceptions, however, allow for quite different concrete interpretations of the unconscious: While Merleau-Ponty and Fuchs explore the bodily dimension of the sedimented unconscious, Husserl ventures into a constitutive problematic and accounts for those "blind" rules and operations that govern the pre-cognitive life of subjectivity and contribute to the continuous interconnectedness of the sedimented past and the living present.
However, when it comes to Husserl's approach to the unconscious, it should be noted that representational phenomena regarding the past are by no means dismissed by him. As we have seen, he attributes to the unconscious a peculiar form of "empty presentation," devoid of affective vitality. Distinct from the non-objectifying intentionality of awakened affections, as well as from the explicit intentionality of recollections, "empty presentations" must be yet another kind of implicit intention. In these, Husserl asserts, identical senses must be preserved in an implicit form without any actual intention taking place. 16 In Merleau-Ponty's terms, one could say that the idea of the past as the preservation of memory "traces" is not completely alien to Husserl's thought. There is still some vagueness in Husserl's idea of the past: On the one hand, he conceives of it as horizontal and constituted through temporal and affective modifications while remaining connected to the present and containing the intrinsic possibility of awakening. On the other hand, the status of empty presentations, in which objective senses are preserved in the unconscious, is far from clear. I believe that at this point Merleau-Ponty's critique of the representational intentionality of the unconscious is justified and should complement Husserl's idea of affective past-constitution. If our present is directed towards the past in a horizontal manner, 17 this should not imply that the past is preserved in the form of unconscious, empty presentations. Merleau-Ponty's idea is that the unconscious and the past should be thought of not as sedimented in any representational way but rather as sedimented in the very structure of one's personality and behavior, in the way one perceives and interprets the world.
16 As Bégout shows, such an idea might even undermine Husserl's fundamental definition of intentionality in terms of the noetic-noematic structure. Namely, he asks how an objective sense can be conceived beyond its mode of givenness, and how, consequently, it is possible that a noematic sense can be preserved beyond any affective or active intention (Bégout 2000, 204).

17 The horizontal structure of subjective experience is not limited to the so-called "horizontal intentionalities," which contribute to the adumbrational givenness of perceptual objects. Horizontality equally applies to expectations and to past-experience, meaning that the living present is always open towards not only its future but also its past.

Although I have stressed that representational approaches to the unconscious are characteristic of early psychological and philosophical theories such as those of Brentano, Freud, and early Husserl, it is important to note that understanding the unconscious in terms of hidden mental representations is by no means absent from contemporary research. For instance, Kihlstrom's influential conception of the cognitive or psychological unconscious still largely relies on the idea of unconscious representations. Kihlstrom distinguishes between two different uses of the term "unconscious." The first refers to automatic mental processes, which themselves have nothing to do with consciousness, but which generate mental content that is essentially available for conscious awareness (Kihlstrom 1987). This category encompasses, for instance, processes involved in calculating distances between objects, or the use of certain phonological and linguistic principles in speech. Even if such unconscious calculations are indispensable for perception, the performance of movements, and the use of language, they do not need to resemble our conscious computation, estimation, or spatial perception. Kihlstrom suggests distinguishing these automatic unconscious processes from the psychological unconscious, which describes mental acts that occur without phenomenal awareness or voluntary control, but that still bear an influence on conscious experience. To this category belong the phenomena of subliminal perception, implicit memory, learning, and thinking (Kihlstrom 1990). Unlike automatic processes, this latter type of the psychological unconscious conceptually resembles our conscious experience. This becomes especially clear when implicit perception, memory, and thinking are defined by means of an analogy with conscious representations. For instance, Kihlstrom and his co-authors Jennifer Dorfman and Victor Shames define implicit thought in terms of "activated mental representations of current and past experience [that] can influence experience, thought, and action even though they are inaccessible to conscious awareness" (Dorfman et al. 1996). 18 A similar situation occurs in research on implicit memory, which makes up part of the psychological unconscious. In cognitive psychology, the definition of implicit memory as distinct from explicit memory usually calls upon conscious awareness. For example, in Daniel Schacter, we find the following definition: "Explicit memory is roughly equivalent to 'memory with consciousness' or 'memory with awareness.'
Implicit memory, on the other hand, refers to situations in which previous experiences facilitate performance on tests that do not require intentional or deliberate remembering" (Schacter 1989, 356). In other words, implicit memory designates situations in which "people are influenced by a past experience without any awareness that they are remembering" (Schacter 1996, 161). An operational definition of the phenomenon is then reduced to a presence of retention or response in absence of and/or independent of explicit recollection. In experimental conditions, this means that implicit recall is shown to be independent of the explicit memory performance. This general definition allows for including within the group of implicit memory such phenomena as procedural memory of bodily skills, priming on both perceptual and conceptual levels, and emotional memory without recall. 19 Simple recognition is excluded from the category of implicit memory, as it cannot be shown to be independent of explicit recollection. For the same reason, other phenomena, such as emotional and traumatic memory, fall into the grey area between implicit and explicit cognition.
From the phenomenological point of view, "remembering without awareness" or even an "unconscious retrieval" is an ambiguous definition. At first glance, it may suggest that a subject remembers in the usual fashion, but simply shows no sign of awareness. This can mean that conceptually implicit memory is the same type of intentional experience as explicit recollection except that it is unconscious. In other words, when one sees unconscious processes as analogous to conscious representations, there is a danger of supposing that there is some actual thinking or remembering going on behind the spotlight of phenomenal awareness.
The core of the phenomenological argument consists of pointing out the apparent paradox contained in understanding the psychological unconscious as "seeing that does not see," "thinking that does not think," or "memory that does not remember." In this spirit, Zahavi argues that the unconscious cannot be understood as an ordinary intentional act devoid of self-awareness. He goes on to assert that the unconscious, in its proper sense, should instead be identified with "a quite different depth-structure of subjectivity" (Zahavi 1999, 206). This depth-structure, as we have seen, is to be found not on the level of act-intentionality but rather in a dimension of "opaque passivity which makes up the foundation of our self-aware experience" (Zahavi 1999, 210).
The non-representational accounts of the unconscious discussed here take this line of thought even further. Specifically, they clearly show that a proper understanding of the unconscious cannot limit discussions of memory to its representational or explicit form; it instead demands an understanding that both connects the past and present life of consciousness on an immanent level and grasps the affective, non-representational presence of the past. In the phenomenology of the lived body, this non-representational relation is further understood as essentially a bodily one. On these grounds, implicit memory is clarified as body memory and includes different types of memory that could not be ascribed to it based solely on the psychological definition of this phenomenon. More specifically, implicit memory is identified as encompassing habitual bodily skills, situational memory, and traumatic and intercorporeal memory, as well as involuntary memories and pre-thematic recognitions (Casey 1987; Fuchs 2012b; Summa 2014).
It has been argued that Husserl's investigations on affectivity and his conception of the unconscious can be taken as another possible explication of this nonrepresentational past-relation. For instance, the phenomenon of affective awakening of the past discussed in section 3.2 may prove useful for phenomenological clarification of implicit remembering. From this perspective, the latter can be understood not in terms of unconscious representations but in terms of implicit or non-objectifying intentionality of affective awakening. As the affective conditions of retroactive awakening precede those of active recollections, they can be preserved even when the explicit memory functions decline. In this case, one's past can continue influencing one's present through affective awakenings that simply never reach the level of intuitive recollections.
Another contribution of phenomenology to the topic of implicit memory is the development of a constructive approach to forgetting. In Husserl, forgetting is seen as a function of affective modification in retention. What is forgotten does not disappear but becomes a part of the implicit background of subjective experience. This past is not presentified, nor is it given to any consciousness. Its mode of givenness is that of an indistinct horizon, a "dimension of escape and absence" (Merleau-Ponty 2012, 436). For Merleau-Ponty, such a view of the unconscious allows for attention to be brought to the dimension of our past experience that inevitably escapes objective thought and exists in the mode of oblivion. In Institution and Passivity, he clearly underlines this fundamental ambiguity: "We have to be able to think of the past beyond representation, that is, beyond the past as construction or as preservation" (Merleau-Ponty 2010, 208).
There must be, as he says, another way in which we relate to our past, and yet such another way is constantly missing, most likely because this dimension of the past inevitably escapes objective thought. He expresses this idea in Phenomenology of Perception: "Existence always takes up its past, either by accepting or by refusing it. We are, as Proust said, perched upon a pyramid of the past, and if we fail to see it, that is because we are obsessed with objective thought. We believe that our past, for ourselves, reduces to the explicit memories that we can contemplate. We cut our existence off from the past itself, and we only allow our existence to seize upon the present traces of this past. But how would these traces be recognized as traces of the past if we did not otherwise have a direct opening upon this past?" (Merleau-Ponty 2012, 413).
From this viewpoint, it becomes evident that an important part of memory actually belongs not solely to what emerges on the surface of our affective consciousness but equally to what stays in the background. A person who once fell in love, learned how to read, heard a lion's roar, understood Bayes' theorem, or experienced a car accident will always remain affected by these experiences even if they are not constantly reactualized in his or her memory. Clearly, not all of these events will necessarily have an equal impact on that person's life: some will become fundamental and define his or her personality, others will become acquired skills or habits, some will be reawakened only when similar situations are encountered, and a significant portion of them will probably simply sink into the undifferentiated background. The past remains: not as hidden senses or traces in some deep repository of the mind, but rather in the way these events shape and change one's experience and thereby prefigure the totality of one's attitudes towards the present and the future. Similar to the horizontal structure of perception, in which an object is always approached from different sides while still maintaining a quasi-complete way of appearing, the unconscious past-horizon is what enables the present itself to be experienced in a way that has a meaning within, and is coherent with, the whole of one's experience. A. Kozyreva | 2019-05-10T13:07:42.902Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "8e450f7d213bdfc62a1fd9688099858ca6974cc7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11097-016-9492-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "42ca5ca7f314071bbecd807810dc5583b8b6ede0",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Psychology"
]
} |
220948332 | pes2o/s2orc | v3-fos-license | ACE2 and SARS-CoV-2 Infection: Might GLP-1 Receptor Agonists Play a Role?
Type 2 diabetes mellitus (T2DM) represents an important risk factor for a more severe evolution, associated with higher lethality, of the infection from the new coronavirus disease 2019 (COVID-19; caused by severe acute respiratory syndrome coronavirus 2, SARS-CoV-2), responsible for the current pandemic that originated from the epidemic which initially affected the Wuhan region in China in December 2019 [1]. SARS-CoV-2 uses the angiotensin-converting enzyme 2 (ACE2) receptor for the infection of respiratory epithelial cells [1]; this was also the case for SARS coronavirus (SARS-CoV), responsible for the epidemic that affected more than 8000 people, mainly in Asia, during the 2002–2003 period [2].
Liraglutide modulated different elements of the renin angiotensin system (RAS), significantly increasing ACE2 and Mas receptor (MasR) mRNA expression in pup lungs from food-restricted mothers.
Some action mechanisms support the hypothesis of a protective action of GLP-1R agonists, capable of mitigating a more serious clinical course among SARS-CoV-2-infected individuals with T2DM.

In contrast to SARS-CoV-2 and SARS-CoV, which both use the ACE2 receptor [1, 2], Middle East respiratory syndrome (MERS) coronavirus, which caused (as of November 2019) 2494 confirmed cases of infection reported to the World Health Organization and 858 deaths [3], uses the enzyme dipeptidyl peptidase 4 (DPP4) as a cellular receptor [4]. This enzyme is able to degrade glucagon-like peptide 1 (GLP-1), an enterohormone produced by the L cells of the ileum in response to the intestinal transit of glucose (the so-called incretin effect) [5].
People with T2DM are frequently treated with orally delivered DPP4 inhibitor drugs (sitagliptin, vildagliptin, and saxagliptin, with mimetic inhibition mechanisms, and alogliptin and linagliptin, non-mimetic inhibitors) or GLP-1 receptor agonists (GLP-1 RAs) with daily (exenatide BID, lixisenatide, liraglutide) or weekly (semaglutide, exenatide LAR, dulaglutide) subcutaneous administration, or once-daily oral administration (semaglutide) [6,7]. Recently it has been hypothesized that these two antidiabetic drug classes can have a beneficial effect on SARS-CoV-2 infection. DPP4 inhibitors seem to act through an immunoregulating activity by regulating M1/M2 macrophage polarization [8], whereas GLP-1 RAs have been considered excellent candidates for the treatment of patients with COVID-19 with or without T2DM owing to their multiple beneficial effects on excessive inflammation-induced acute lung injury [9].
Indeed, multiple preclinical studies performed in mice and rats with experimental induced lung injury demonstrated that GLP-1 RAs attenuate pulmonary inflammation, through inhibitory activity on cytokine release [10,11], as a result of their interference with nuclear factor-kB signaling pathways [12].
In particular, an experimental study conducted on streptozotocin-treated rats showed that liraglutide was able to stimulate the expression of pulmonary ACE2 and angiotensin (1-7) [A(1-7)] and to increase the production of the lung surfactant proteins A and B (SP-A and SP-B) [13].
More recently a study conducted in an animal rat model showed that liraglutide significantly restored the SP-A mRNA expression in pup lungs from food-restricted mothers [14]. Moreover, liraglutide modulated different elements of the renin angiotensin system (RAS), significantly increasing ACE2 and Mas receptor (MasR) mRNA expression in pup lungs from food-restricted mothers [14].
Several studies have demonstrated the protective role of ACE2 in acute respiratory distress syndrome (ARDS) in many lung diseases partially as a consequence of restored A(1-7) production [15], and it has been suggested that ACE2 can favorably modulate the SARS-CoV infection [16].
In relation to SARS-CoV-2 infection, it has been hypothesized that the RAS dysregulation can be an important causative event leading to ARDS and multi-organ dysfunction [17].
Both ACE and ACE2 are zinc metalloenzymes. ACE cleaves C-terminal dipeptide residues from susceptible substrates, generating angiotensin II (AII) from angiotensin I, with vasoconstrictor action mediated by AII receptor 1 (AT1R) activation [17]. ACE2 acts as a simple carboxypeptidase that can hydrolyze AII to A(1-7), which exerts numerous salutary effects, opposite to those of AII, through an efficient binding with the G protein-coupled receptor MasR [18]. Therefore, the ACE2 → A(1-7) → MasR axis is counter-regulatory to the ACE → AII → AT1R axis [17]. Moreover, the ACE2 → A(1-7) → MasR axis activity has an important antithrombotic effect through prostacyclin and nitric oxide production [19], which opposes the pro-thrombotic effects of AII [20].
AII is indeed able to determine the overproduction of interleukin-6 (IL-6), tumor necrosis factor alpha (TNF-α) and other pro-inflammatory cytokines [21]. Moreover, it is noteworthy that AII is also able to activate the disintegrin and metalloproteinase domain-containing protein 17 (ADAM17), a Zn-dependent enzyme which cleaves the membrane-anchored ACE2, thereby releasing a circulating form of ACE2 with loss of the catalytic activity of the remaining part of the membrane-anchored enzyme [22]. The endocytosed spike SARS-CoV-2 viral proteins stimulate ADAM17 activity, too [22]. Moreover, ADAM17 (also known as TNF-α-converting enzyme) is able to mediate the extracellular domain shedding and activation of TNF-α, which exhibits auto- and paracrine functionality [22]. TNF-α activation of its tumor necrosis factor receptor represents a third pathway elevating ADAM17 activity, thus increasing ACE2 shedding and impairing production of A(1-7), with further RAS-mediated detrimental effects in a positive feedback cycle [22]. The enhanced AII and TNF-α activation, along with systemic cytokines released as a result of SARS-CoV-2 infection, can indeed exacerbate the "cytokine storm" ultimately leading to ARDS [22] (Fig. 1).
Therefore, the capacity of GLP-1 RAs to enhance the activity of the ACE2 → A(1-7) → MasR axis by directly stimulating ACE2 expression would contribute to reducing the progression of the inflammatory and thrombotic processes frequently associated with the poor prognosis of SARS-CoV-2 infection [23], through the fostering of an antithrombotic and anti-inflammatory milieu.
It should also be considered that ACE2 is mainly expressed by type II pneumocytes, which represent the Sp-A-and SP-B-producing cells and the progenitor cells of the type I pneumocytes [17]. The damage of type II pneumocytes due to SARS-CoV-2 infection causes loss of lung surfactant, alveolar collapse, and impaired tissue repair capacity.
The aforementioned ability of GLP-1 RAs to induce the synthesis of the pulmonary surfactant proteins [13,14], which exhibit anti-inflammatory and immune-modulating protective properties against bacterial and viral infections [24], can directly preserve the type II pneumocytes, with a consequent ARDS-preventing effect.
In addition, the expression of ACE2 by intestinal enterocytes, where ACE2 has a RAS-independent function regulating the intestinal amino acid homeostasis and the gut microbiome [25], can represent a second site of SARS-CoV-2 infection, with consequent intestinal barrier leakage and bloodstream invasion by endotoxin and other intestinal bacterial metabolites, exacerbating the multi-organ dysfunction and septic shock [22]. Therefore, the ability of GLP-1 RAs to restore the ACE2 → A(1-7) → MasR axis can exert a protective intestinal effect, with a further favorable effect on the clinical course of SARS-CoV-2 infection.
In conclusion, it is nowadays unclear whether GLP-1 RAs can play a role in modulating SARS-CoV-2 infection, but it is conceivable that administration of these drugs can exert a pulmonary protective effect, as has been hypothesized regarding other therapeutic approaches to increase the ACE2 → A(1-7) → MasR axis activity [22,26]. Particularly regarding the possible role of AT1R blockers (ARBs), it has been hypothesized that chronic treatment with ARBs, already in place in patients infected with SARS-CoV-2, would stimulate greater ACE2 activity (ACE2 is the recognized viral receptor), and that this paradoxically would play a protective role against acute lung injury due to viral infection rather than promoting it [27].
Moreover, the current position statements of the Council on Hypertension of the European Society of Cardiology (www.escardio.org) clarify that ACE inhibitors and ARBs should be continued during SARS-CoV-2 infection [11]. Two trials of losartan as additional treatment for SARS-CoV-2 infection in hospitalized (NCT04312009) or non-hospitalized (NCT04311177) patients have been announced, and it is hoped that the results of these trials will answer this question [17].
Nevertheless, regarding the protective role of these antihypertensive drugs, it is noteworthy that they lack any action on the synthesis of pulmonary surfactant proteins and lack direct inhibitory activity on the synthesis of pro-inflammatory cytokines, exerting anti-inflammatory properties through the inhibition of unbalanced AT1R activation [28]. It is therefore conceivable that GLP-1 RAs can exert an adjunctive anti-inflammatory and antithrombotic activity beyond the enhancing effect on ACE2 → A(1-7) → MasR axis activity.
To better explore this issue, the percentage of people with T2DM treated with GLP-1 RAs in the general population should be compared with that of inpatients with T2DM presenting serious symptoms of SARS-CoV-2 infection, as already suggested [9,11]. If the latter percentage turned out to be significantly smaller, this would support the hypothesis in favor of a protective action of GLP-1 RAs, capable of mitigating a more serious clinical course among SARS-CoV-2-infected individuals with T2DM. As shown in Fig. 1, SARS-CoV-2 depresses ACE2 activity, thus imbalancing the RAS towards predominantly ACE activity and ADAM17 activation. This leads to a pro-thrombotic milieu and overproduction of pro-inflammatory cytokines that, along with systemic cytokines released as a result of SARS-CoV-2 infection, can exacerbate the "cytokine storm". The cited preclinical studies have shown that GLP-1 RAs can significantly increase ACE2 and MasR mRNA expression at the pulmonary level [13,14]. Thus, increased ACE2 levels could shift the RAS toward A(1-7) production, exerting two different protective effects: increased prostacyclin and nitric oxide levels, which could reduce the thrombotic complications of SARS-CoV-2 infection, and reduced AT1R activation, counteracting its detrimental effects.
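The comparison of percentages proposed above is, statistically, a comparison of two proportions. As a generic illustration (not part of the paper; the counts in the usage line are purely hypothetical), it could be sketched with a two-proportion z-test:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test comparing the proportion of GLP-1 RA users in two groups,
    e.g., the general T2DM population (x1 of n1) vs. hospitalized T2DM patients
    with severe SARS-CoV-2 infection (x2 of n2). Returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0: p1 == p2
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided tail of the standard normal
    return z, p_value

# Hypothetical counts for illustration only:
z, p = two_proportion_ztest(150, 1000, 8, 200)
```

A significantly positive z (small p) would correspond to the situation described in the text, where the inpatient percentage is smaller than the general-population one.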
Finally, it is conceivable that the rebalancing of ACE2 → A(1-7) → MasR axis activity through ACE2 expression enhancement can at least partially account for the well-documented anti-atherosclerotic and cardiovascular protective effects of GLP-1 RAs [29]. These ACE2-mediated favorable effects are entirely insulin-independent and may be of particular relevance in those pathological conditions of reduced ACE2 production and RAS dysregulation such as T2DM. In addition, it is conceivable that the documented renal protective effects of GLP-1 RAs [30] may also be at least partially mediated by the stimulation of ACE2 and A(1-7) expression. Further clinical and experimental studies are therefore warranted in the near future to demonstrate the importance of ACE2 in mediating the favorable multi-organ extraglycemic effects of GLP-1 RAs.
Funding. No Rapid Service Fee was received by the journal for the publication of this article.
Authorship. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published: Vincenzo Maria Monda, Felice Strollo, Francesca Porcellati and Sandro Gentile. The authors declare that the text is original and has not been submitted to another journal at the same time.
Authorship Contributions. VMM created the paper and wrote it. SG and FS critically revised and approved the paper. FP contributed with a critical reading of the text and with the revision of the pharmacological aspects of the paper. All authors approved the final text.
Disclosures. Vincenzo M. Monda, Francesca Porcellati and Felice Strollo have nothing to disclose. Sandro Gentile is a member of the journal's editorial board.
Compliance with Ethics Guidelines. This study was conducted in conformance with good clinical practice standards. No human or animal data are present in the paper for which compliance with the ethical guidelines of the Helsinki Declaration and subsequent versions are not applicable.
Data Availability. No data obtained directly from the authors appear in the text.
Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/bync/4.0/. | 2020-08-04T14:27:43.713Z | 2020-08-04T00:00:00.000 | {
"year": 2020,
"sha1": "af778e41c98e9c8b80cc137944327004a398c3c4",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13300-020-00898-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "af778e41c98e9c8b80cc137944327004a398c3c4",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235508784 | pes2o/s2orc | v3-fos-license | STORAGE (STOchastic RAinfall GEnerator): A User-Friendly Software for Generating Long and High-Resolution Rainfall Time Series
The MS Excel file with VBA (Visual Basic for Application) macros named STORAGE (STOchastic RAinfall GEnerator) is introduced herein. STORAGE is a temporal stochastic simulator aiming at generating long and high-resolution rainfall time series, and it is based on the implementation of a Neyman–Scott Rectangular Pulse (NSRP) model. STORAGE is characterized by two innovative aspects. First, its calibration (i.e., the parametric estimation, on the basis of available sample data, in order to better reproduce some rainfall features of interest) is carried out by using data series (annual maxima rainfall, annual and monthly cumulative rainfall, annual number of wet days) which are usually longer than observed high-resolution series (that are mainly adopted in literature for the calibration of other stochastic simulators but are usually very short or absent for many rain gauges). Second, the seasonality is modelled using series of goniometric functions. This approach makes STORAGE strongly parsimonious with respect to the use of monthly or seasonal sets for parameters. Applications for the rain gauge network in the Calabria region (southern Italy) are presented and discussed herein. The results show a good reproduction of the rainfall features which are mainly considered for usual hydrological purposes.
Introduction
Many hydrological applications, mainly related to small and ungauged catchments that are characterized by a short response time of runoff to rainfall, require the use of continuous rainfall time series at high resolutions [1]. However, these data series usually present a very short sample size or they are absent for many sites of interest, where only 1.
the model calibration is carried out by using summary statistics from annual maxima rainfall (AMR), annual / monthly cumulative rainfall, and annual number of wet days, which are usually longer than continuous observed high-resolution series (mainly adopted for SRG calibration but typically very short or absent in many locations). In this way, the SRG generates 1 min or 5 min continuous rainfall series which present, at coarser resolutions, summary statistics which are comparable with those of the above-mentioned sample data; 2.
the seasonality is modelled by using series of goniometric functions. This approach makes STORAGE more parsimonious with respect to the use of monthly or seasonal sets for parameters.
Concerning the latter aspect, the proposed approach is very flexible, because it is possible to model seasonality: • by using goniometric series only for some rainfall descriptors, and by considering the other ones as invariant during the year; • by setting the maximum number of harmonics for each selected descriptors, with the goal of having a parsimonious model.
Obviously, this methodology can be applied for any SRG proposed in literature.
The present manuscript is organized as described in the following. A brief overview of the investigated study area, i.e., the rain gauge network of the Calabria region in southern Italy, is presented in Section 2. The theoretical background of the STORAGE model and the user-friendly interface are described in Section 3. Applications are then discussed in Section 4, and the conclusions are drawn in Section 5.
Study Area
The investigated study area is the rain gauge network of the Calabria region (southern Italy). The employed data were downloaded from the website of the Multi Risks Centre of the Calabria region [48]. In particular, the authors selected as reference the rain gauges with at least 30 years of observed data concerning AMR series with rainfall durations from 1 to 24 h. In total, time series of AMR, annual and monthly cumulative rainfall values, and annual number of wet days were analyzed for 64 stations (Figure 1). It is noteworthy that in Italy a day is classified as wet if the daily rainfall is greater than or equal to 1 mm. The Calabria region is located in the central part of the Mediterranean area and the total area is about 15,000 km²; the territory is hilly in 49.2% and mountainous in 41.8% of the total area. From the collected data, the mean annual precipitation (MAP) assumes an average value of about 1150 mm, with higher values in mountainous areas and lower values in the coastal areas (particularly on the north-eastern one). As explained in [49], many rainfall events are induced by cyclones that develop close to the Alps and in the western part of the Mediterranean, and impact on the Tyrrhenian side, moving from west to east. Cyclones from North Africa and the Balkans are less frequent and mainly affect the region's eastern side. In general, the western part of Calabria receives the greatest rainfall amounts, while the most extreme events occur in the eastern part, as it is exposed to more intense cyclones [50].
Theoretical Overview of the Implemented Model
The basic version of the NSRP model [13,51] is suitable for stationary (i.e., without any seasonality and trend) continuous rainfall processes. In such a model, five quantities, which are considered as random variables, hence following assigned probability distributions, play a crucial role. In detail, the five quantities are (see also Figure 2):

Figure 2. Representation of the Neyman-Scott Rectangular Pulses (NSRP) stochastic process for at-site rainfall modeling. In the upper part of the Figure, 2 storm occurrences (red dots) with an inter-arrival t_s, 2 bursts for the first storm and 1 burst for the second storm, are represented. The corresponding waiting times, intensities and durations are also indicated. Then, in the lower part of the Figure, the total precipitation intensity at time t can be calculated as the sum of all the intensities from the active bursts at time t.

• the inter-arrival time, T_s, between the origins of two consecutive storms, which is assumed to be an exponential random variable. Consequently, the probability P[T_s ≤ t_s] of having a new storm origin after a waiting time T_s ≤ t_s from the previous one can be calculated as:

P[T_s ≤ t_s] = 1 − exp(−λ t_s),

where 1/λ represents the mean value for the inter-arrival times, i.e., E[T_s] = 1/λ;
• the number M of rain cells (also indicated as bursts or pulses) inside a specific storm, which is set in this work as a geometric random variable with a mean value E[M] = θ;
• the waiting time W between a specific burst origin and the origin of the associated storm, which follows an exponential distribution:

P[W ≤ w] = 1 − exp(−β_W w);

• the intensity I of each rectangular cell, exponentially distributed with mean E[I] = 1/β_I;
• the duration D of each rectangular cell, exponentially distributed with mean E[D] = 1/β_D.

By considering all these five mentioned quantities, the total precipitation intensity Y(t) at time t is then calculated as the sum of all the intensities from the active bursts at time t (see also Figure 2), and the rainfall height R_j^(τ), aggregated at the temporal resolution τ and related to the time interval j with extremes (j − 1)τ and jτ, is:

R_j^(τ) = ∫ from (j−1)τ to jτ of Y(t) dt.

An SRG model such as NSRP is usually calibrated by minimizing an Objective Function (OF), which is defined as the sum of residuals (normalized or not) between the statistical properties of the observed data, considered by the user at chosen time resolutions, and their theoretical expressions. The statistical properties typically refer to high-resolution continuous time series (e.g., 1 or 5 min rainfall time series): mean, variance, and k-lag autocorrelation for R_j^(τ).

A first crucial aspect of the NSRP model is represented by the seasonality modelling of the rainfall process, for which monthly or seasonal parameter sets are usually considered, i.e., by carrying out a specific calibration for each considered month or season. This procedure clearly implies an increase in the number of parameters to be estimated, and then a reduced data/parameters ratio.
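The generative scheme described above (exponential storm inter-arrivals, a geometric number of cells per storm, and exponential waiting times, intensities, and durations) can be sketched in a few lines of Python. This is a minimal illustration, not the STORAGE implementation; the parameter values used below are arbitrary assumptions.

```python
import random

def simulate_nsrp(t_max, lam, theta, beta_w, beta_i, beta_d, seed=0):
    """Generate rain cells as (start, duration, intensity) tuples over [0, t_max]
    following the basic NSRP scheme: storms arrive with exponential inter-arrival
    times (mean 1/lam); each storm spawns a geometric number of cells (mean theta);
    each cell has exponential waiting time (mean 1/beta_w), duration (mean 1/beta_d)
    and intensity (mean 1/beta_i)."""
    rng = random.Random(seed)
    cells = []
    t = rng.expovariate(lam)                 # origin of the first storm
    while t < t_max:
        m = 1                                # geometric cell count on {1, 2, ...}, mean theta
        while rng.random() > 1.0 / theta:    # stop with probability 1/theta each step
            m += 1
        for _ in range(m):
            w = rng.expovariate(beta_w)      # waiting time of the cell after the storm origin
            d = rng.expovariate(beta_d)      # rectangular-pulse duration
            i = rng.expovariate(beta_i)      # rectangular-pulse intensity
            cells.append((t + w, d, i))
        t += rng.expovariate(lam)            # origin of the next storm
    return cells

def total_intensity(cells, t):
    """Y(t): sum of the intensities of all cells active at time t."""
    return sum(i for (start, d, i) in cells if start <= t < start + d)
```

Aggregating `total_intensity` over intervals of width τ would yield the rainfall heights R_j^(τ) used for calibration.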
In this context, another important aspect emerges: continuous high-resolution data sets are typically very short (in general, no more than 15-20 years) or absent in many locations, and a calibration with these data sets may therefore not be suitable for a robust estimation of parameters.
To overcome these critical issues, a modified version of NSRP was implemented in the STORAGE software, which is discussed in this work. STORAGE represents the implementation of the framework presented in [6,7], and its innovation regards the following features:

• In order to reproduce the seasonality of the rainfall process, goniometric series are adopted (Section 3.1.1). In doing so, the model is more parsimonious with respect to the use of monthly or seasonal sets of parameters. Moreover, this approach is very flexible, because it is possible to model seasonality:
  - by using goniometric series only for some rainfall descriptors, and by considering the other ones as invariant during the year;
  - by setting the maximum number of harmonics for each selected descriptor, with the goal of having a parsimonious model.
• Moreover, model calibration is carried out by using data series, such as AMR, annual and monthly rainfall, and annual number of wet days series (Section 3.1.2), which are usually longer than continuous observed high-resolution series.
Obviously, like for other SRGs proposed in literature, a transient version can be implemented [6,7] in order to obtain perturbed synthetic series, which are representative of future hypothesized rainfall scenarios on spatial and temporal hydrological scales. However, in this work we describe only the implementation in STORAGE software of the cycle-stationary process (i.e., without temporal trends).
Seasonality Modelling with Goniometric Series
Focusing on the five NSRP summary statistics:
1. 1/λ: mean value of the inter-arrival times between two consecutive storms;
2. θ: mean value of the number of rain cells (or bursts) for each storm;
3. 1/β_W: mean value of the waiting time between a specific rain cell and the associated storm;
4. 1/β_I: mean value of the intensity of the cells with a rectangular shape;
5. 1/β_D: mean value of the duration of the cells with a rectangular shape.
The adoption of different sets for each month would imply the estimation of 60 parameters. Alternatively, it is possible to use goniometric series for the seasonal variation of an investigated quantity p:

p(t) = p_0 + Σ_{n=1}^{K} A_n · sin(2πnt/T_y + φ_n), (6)

where p(t) is the summary statistic along the time t (expressed in min); p_0 is the mean value of p(t) over the whole year; K is the maximum number of goniometric functions (also named harmonics) to be considered; n is the n-th harmonic function; T_y is the total number of minutes in the whole year (here considered with 365 days); A_n is the amplitude of the n-th harmonic function; and φ_n is the phase shift of the n-th harmonic function. Adoption of Equation (6) implies the estimation of 1 + 2K parameters for each summary statistic, i.e., the annual mean value and the K couples of amplitude and phase shift for the harmonics.
Under the assumption that the seasonal variation regards all the five summary statistics, the proposed SRG is characterized by: 15 parameters if K = 1 for all, 25 parameters if K = 2 for all, 35 parameters if K = 3 for all and so on. Obviously, K can be also different from a summary statistic to the other.
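As a quick illustration, Equation (6) is straightforward to evaluate numerically. The sketch below assumes the harmonics are sine functions (the exact sine/cosine convention of the original formulation is not reproduced here) and uses illustrative names.

```python
import math

T_Y = 365 * 24 * 60  # total number of minutes in a 365-day year, as in Equation (6)

def seasonal_value(t_min, p0, harmonics, T_y=T_Y):
    """Evaluate p(t) = p0 + sum over n of A_n * sin(2*pi*n*t/T_y + phi_n).

    harmonics is a list of (A_n, phi_n) couples, one per harmonic, so the
    number of calibrated parameters is 1 + 2K, with K = len(harmonics).
    """
    return p0 + sum(A * math.sin(2 * math.pi * n * t_min / T_y + phi)
                    for n, (A, phi) in enumerate(harmonics, start=1))
```

With harmonics=[] the descriptor is invariant during the year (K = 0, as assumed for 1/β_W); a single (A, φ) couple reproduces the K = 1 case used for θ, 1/β_I and 1/β_D.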
For the selected case study (described in Section 2), the STORAGE software was organized with the following assumptions:

(a) The quantities 1/λ, θ, 1/β_I and 1/β_D present a seasonal variation. Specifically, K = 2 is used for 1/λ (according to [52]):

1/λ(t) = 1/λ_0 + A_{1,λ}·sin(2πt/T_y + φ_{1,λ}) + A_{2,λ}·sin(4πt/T_y + φ_{2,λ}),

where 1/λ_0 represents the mean annual value without any seasonal variation; A_{1,λ} = 1/λ_0 − (1/λ)_min, and (1/λ)_min is equal to the smallest value of the mean inter-arrival times between two consecutive storms; A_{2,λ} = ξ·A_{1,λ}; φ_{1,λ} and φ_{2,λ} are the phase shifts for the two adopted harmonics.

(b) As regards θ, 1/β_I and 1/β_D, we adopted K = 1:

θ(t) = θ_0 + A_{1,θ}·sin(2πt/T_y + φ_{1,θ}),
1/β_I(t) = (1/β_I)_0 + A_{1,β_I}·sin(2πt/T_y + φ_{1,β_I}),
1/β_D(t) = (1/β_D)_0 + A_{1,β_D}·sin(2πt/T_y + φ_{1,β_D}),

where θ_0, (1/β_I)_0 and (1/β_D)_0 are the mean annual values without any seasonal variation; A_{1,θ} = θ_0 − θ_min, and θ_min is the smallest value of the mean number of cells for each storm; A_{1,β_I} = (1/β_I)_0 − (1/β_I)_min, and (1/β_I)_min is the smallest value of the mean intensity of a rain cell; A_{1,β_D} = (1/β_D)_0 − (1/β_D)_min, and (1/β_D)_min is the smallest value of the mean duration of a rain cell. The phase shifts were set so that 1/β_I(t) attains its maximum during the summer months and 1/β_I(t) = (1/β_I)_min during the winter, while 1/β_D(t) attains its minimum during the summer months.
These assumptions are compatible with the climatology of the Calabria region. In this part of Italy, the summer period is characterized by a lower average number of rain events with respect to the winter season. Moreover, the summer season usually presents rain events with higher intensities and shorter durations, compared with winter months, due to convective phenomena [53]. No seasonal variation (i.e., K = 0) is assumed for 1/β W .
Overall, calibration requires the estimation of twelve parameters, including 1/λ_0, (1/λ)_min and ξ. Obviously, as also reported in Section 4, future developments of STORAGE will allow considering a more comprehensive ensemble of combinations of K for the involved parameters, together with more flexibility for the phase shifts here fixed, in order to adequately model rainfall series in other climatic areas around the world.
Calibration
An a priori ensemble of simulations, described below, was carried out, and the results were filed into an "information reservoir" in the STORAGE software, ready to be queried for a specific site of interest. In detail, all the previously mentioned twelve parameters were considered uniform random variables with assigned ranges of variation, reported in Table 1 [7,54]. Then, 50,000 parametric sets were generated with the Monte Carlo technique and, for each one, a simulation of a 200-year rainfall series with a resolution of 1 min was carried out by using the same macros which were afterwards implemented in STORAGE. At the end, we filed in the STORAGE software only the parametric sets for which the 200-year synthetic series presented summary statistics within the variation ranges of those from the observed data of a wide area of interest (i.e., all the rain gauges of the Calabria region for the presented application). Specifically, we focused on the following summary statistics:
• Mean Annual Precipitation (MAP);
• mean annual number of wet days (i.e., mean annual number of days for which the daily rainfall is greater than or equal to 1 mm);
• parameters of Amount-Duration-Frequency (ADF) curves, related to rainfall durations from 1 to 24 h;
• mean values of seasonal rainfall in DJF (December-January-February), MAM (March-April-May), JJA (June-July-August), and SON (September-October-November).
The results of this composite filter, constituted by a subset of the 50,000 parametric sets, are illustrated in Section 4 for the Calabria region. The storage of this information further justifies the choice of the acronym STORAGE. In fact, the software allows using, for the synthetic generation related to a single rain gauge of interest, parametric sets belonging to this pre-existing "information reservoir" (regarding a wide previously investigated area), for which the corresponding series of AMR, annual rainfall, seasonal rainfall and number of wet days are comparable with those related to the sample historical data. Obviously, this aspect considerably reduces the calculation times for the model calibration on a specific site of interest, with respect to a usual calibration procedure carried out without any a priori indication about possible model outcomes. It is clear that this "information reservoir" can be continuously updated when other areas are investigated as case studies. Moreover, refinement algorithms will be implemented in future versions of STORAGE, in order to enhance the performance of the calibration for a specific rain gauge. The ranges of variation of the parameters are reported in Table 1 [7,54].
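The a priori calibration procedure amounts to uniform Monte Carlo sampling followed by a composite filter on summary statistics. A schematic version (with hypothetical parameter and statistic names; the actual simulation step is the NSRP generation itself, omitted here) could look like:

```python
import random

def sample_parameter_sets(ranges, n_sets, seed=0):
    """Draw parametric sets, each parameter uniform within its Table 1 range."""
    rng = random.Random(seed)
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
            for _ in range(n_sets)]

def passes_filter(stats, bounds):
    """Composite filter: keep a set only if every summary statistic of its
    200-year synthetic series lies inside the observed regional range."""
    return all(bounds[k][0] <= stats[k] <= bounds[k][1] for k in bounds)
```

Applying passes_filter to the summary statistics of each of the 50,000 generated series reduces the ensemble to the subset stored in the "information reservoir".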
The User-Friendly Interface of STORAGE
When a user executes STORAGE, after having enabled the VBA macros, the Main worksheet will appear as in Figure 3. Two different procedures are allowed for the generation of a synthetic rainfall time series, each associated with a specific command button: PARAMETER ESTIMATION AND RUN, and RUN with parameter values chosen by the user. In addition to the Main worksheet, STORAGE contains the following worksheets:
1. Annual and Monthly Rainfall, in which the generated rainfall values, aggregated at the monthly and annual timescale, as well as the annual number of wet days, will be printed (for each generated year);
2. Annual Maxima, in which the Annual Maximum Rainfall (AMR) series at several durations will be printed (for each generated year);
3. Statistics, in which the mean and standard deviation values will be calculated and printed for all the quantities reported in the previous points 1 and 2;
4. EV1 Plots, in which the frequency distributions of all the previously listed AMR series will be represented on EV1 (Extreme Values type 1) probabilistic plots;
5. Average Monthly Rainfall Plot, which contains the histogram of the average monthly rainfall values related to the generated rainfall series;
6. Annual Rainfall Plot, where the annual cumulative rainfall series is represented.
Concerning the Annual Maxima worksheet, the series from 60 min to 24 h are estimated by considering the continuous series with a time step of 1 h. This choice is justified by the fact that many observed AMR series around the world were extracted, until 20-30 years ago, by using 1-h continuous data, while data with resolutions lower than 1 h are available only from 1990 or later [55]. Consequently, the comparison between synthetic and observed AMR series is best carried out with this setting.
Data Input
For both previously mentioned procedures of time series generation, it is necessary to insert the following input information before starting the elaborations:
• the number of years to be generated (Cell D3); the maximum allowed is 500 years;
• the time resolution, expressed in minutes (Cell D4); the software allows for resolutions of 1, 5, 10, 15, 20, 30 and 60 min.
If the option RUN with parameter values chosen by the user is selected, then the user has to fill all the cells from C10 to C22 (Figure 3).
On the contrary, if PARAMETER ESTIMATION AND RUN is chosen, then the user has to insert the following input data, which are sample estimates from the observed series of the investigated case study:
• The values of the parameters of the Amount-Duration-Frequency (ADF) curves, expressed as a power function:

h_T(d) = a_T · d^(n_T), (11)

where d is the rainfall duration (hours), ranging from 1 to 24 h, T is the return period (years), h_T(d) is the d-AMR associated with T, and a_T and n_T are the ADF parameters. In detail, the values of a_T and n_T associated with specific T values are requested. If the size of the sample AMR series for the investigated case study is limited (less than 20 years), then it is advisable to use only sample estimates for low T values (2, 5 and 10 years). For larger sample sizes, information deriving from higher return periods can also be entered.
• The values of the Mean Annual Precipitation (MAP) into cell L5, of the mean annual number of wet days into cell M5, and of the mean cumulative seasonal precipitation, associated with December-January-February (DJF), March-April-May (MAM), June-July-August (JJA) and September-October-November (SON), into cells L8, M8, N8 and O8, respectively. Moreover, also in this case, it is not necessary to fill all the listed cells: the VBA macro will run the model calibration on the basis of the available information. Concerning cell M5, strictly related to the wet-day proportion, it should be remarked that trivial rainfall (whose amount is less than the capacity of the tipping bucket of the rain gauges) could highly distort the result of the calibration in some cases, so leaving this cell empty can avoid this possibility.
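The ADF relation of Equation (11) is a simple power law and can be sketched as follows (illustrative Python, with the rainfall height in mm):

```python
def adf_height(d_hours, a_T, n_T):
    """Amount-Duration-Frequency curve: h_T(d) = a_T * d**n_T,
    with d the rainfall duration in hours and T the return period."""
    return a_T * d_hours ** n_T
```

For d = 1 h the curve returns a_T itself, which is why a_T can be read directly as the 1-h rainfall quantile for the given return period.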
An example of Data Input is shown in Figure 4, if the option PARAMETER ESTIMATION AND RUN is selected by the user.
Synthetic Generation of Rainfall Time Series at a High Resolution
After completing the Data Input step, it is possible to run one of the two generation procedures. In the following pages, attention is focused on the PARAMETER ESTIMATION AND RUN button (Figure 4). It must be highlighted that, in a worksheet hidden from the user, the results deriving from the use of about 3500 parametric sets are stored, in terms of a_T and n_T for the ADF curves, MAP, mean annual number of wet days, and mean cumulative seasonal rainfalls (DJF, MAM, JJA, SON). In detail (see also Sections 3.1.2 and 4), for each single parametric set, 200 years of precipitation were synthetically generated.
By clicking on the PARAMETER ESTIMATION AND RUN button, the userform shown in Figure 5 is displayed; from the combobox at the top (Figure 6) it is possible to select the statistical descriptors to be reproduced, i.e.:
1. only the parameters a_T and n_T of the ADF curves;
2. only MAP and the mean value of the annual number of wet days (NumWetDays);
3. a_T, n_T, MAP and NumWetDays;
4. a_T, n_T, MAP, NumWetDays and the mean cumulative seasonal rainfalls (DJF, MAM, JJA, SON).
After the choice of the descriptors to be reproduced (for example, a_T, n_T, MAP, NumWetDays, DJF, MAM, JJA, and SON, as in Figure 6), it is possible to click on the PARAMETRIC ESTIMATION button. The software will display by default, in the cell range C10:C22, the parametric set (indicated as ID SET 1) which is characterized, among the 3500 used offline, by the best (i.e., the lowest) value of the evaluated Objective Function (OF) (Figure 7), expressed in percentage terms as:

OF = OF_a_n (Option 1)
OF = OF_MAP_NumWetDays (Option 2)
OF = OF_a_n + OF_MAP_NumWetDays (Option 3)
OF = OF_a_n + OF_MAP_NumWetDays + OF_Seasons (Option 4) (12)

in which OF_a_n, OF_MAP_NumWetDays and OF_Seasons are given by Equations (13)-(15), where:
• a_i is the i-th value (i = 1, ..., K_a) of parameter a for an ADF curve of an assigned T, inserted by the user into an input cell, while a*_i is the corresponding NSRP value; K_a is the number of return periods T considered by the user for parameter a;
• n_j is the j-th value (j = 1, ..., K_n) of parameter n for an ADF curve of an assigned T, inserted by the user into an input cell, while n*_j is the corresponding NSRP value; K_n is the number of return periods T considered by the user for parameter n.
Whatever option is selected in the combobox, STORAGE will provide the corresponding values of all the OFs (Equations (13)-(15)) for a specific parameter set.
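Since Equations (13)-(15) are each a percentage sum of residuals between user-inserted and NSRP values, the combination rule of Equation (12) can be sketched as below. The residual form used here (a mean absolute percentage residual) is an assumption, as the exact normalization of Equations (13)-(15) is not reproduced above.

```python
def of_component(observed, simulated):
    """One OF term as a mean absolute percentage residual between the
    user-inserted values (e.g., a_i, n_j) and the corresponding NSRP values.
    NOTE: assumed form; stands in for Equations (13)-(15)."""
    return 100.0 * sum(abs(o - s) / abs(o)
                       for o, s in zip(observed, simulated)) / len(observed)

def total_of(option, of_an, of_map_wet, of_seasons):
    """Combine the OF components according to the selected option of Eq. (12)."""
    return {1: of_an,
            2: of_map_wet,
            3: of_an + of_map_wet,
            4: of_an + of_map_wet + of_seasons}[option]
```

STORAGE then ranks the stored parametric sets by the selected total OF and proposes the lowest-OF set (ID SET 1) by default.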
Moreover, by using the spin button (Figure 8), it is possible to adopt other parameter sets for simulation, which are sorted (by STORAGE in the hidden worksheet) on the basis of the values related to the selected OF.
After the choice for parametric set, the user can click on the RUN button for carrying out the generation of a synthetic rainfall series.
During the run, the user can control the progress of generation by analyzing the several worksheets in STORAGE.xlsm. As examples, the cell D5 in Main (Figure 9) and the histogram for Annual Rainfall (Figure 10) can be checked.
A message box will appear when simulation is completed. Then, the final results can be analyzed in the several tables and plots of STORAGE (Figure 11), while the whole synthetic rainfall series at the selected high resolution (cell D4 in Main), will be printed in "C:\NSRP\RainSim.txt".
As explained in the following sections, STORAGE also allows for rainfall generation with multisets approaches, as an alternative way to the run with a single parametric set.
Multisets Approaches
Focusing on Option 4 of Equation (12), different parametric sets can be characterized by very similar OF values among them, but some sets could better reconstruct ADF curves, while other ones could best fit MAP and NumWetDays, and so on.
In this context, if the fourth option of Equation (12) (i.e., a_T, n_T, MAP, NumWetDays, DJF, MAM, JJA, SON) is chosen as the ensemble of statistical descriptors to be reproduced, the user can take advantage of several parametric sets by selecting one of these two multisets approaches (Figure 12):
• Ranking from total OF;
• Merging different OFs, which is further subdivided into 3 OFs and 4 OFs.
The proposed multisets approaches are based on the concept of equifinality [56], which means that "different parametric sets within a chosen model structure may be behavioural or acceptable in reproducing the observed behaviour of that system".
Ranking from Total OF
It is possible to select S parametric sets (sorted by increasing values of Option 4 in Equation (12)) by using the spin button of Figure 12. Automatically, STORAGE will assign to a specific set a frequency of use which is inversely proportional to its overall OF value (Option 4 in Equation (12)). In detail, let f_i be the frequency of use of the i-th parametric set (i = 1, ..., S) and OF_i its corresponding OF value; f_i is computed as:

f_i = (1/OF_i) / Σ_{j=1}^{S} (1/OF_j). (16)

Then, considering the total number N of years to simulate (input data in cell D3 of the Main worksheet, Figure 4), f_i·N years will be generated with the i-th parametric set.
It should be highlighted that:
• if a multisets approach is selected, a user should consider at most S = 4 and a large value of N (we suggest N = 500 years), in order to have a significant number of years for each set (with N = 500 years and S = 4, on average 125 years are simulated with each set);
• in a context, such as this one, of a stationary/cycle-stationary process (i.e., without any climatic trend), it is not necessary to generate a large number L of distinct N-year synthetic series (in which each i-th set would regard f_i·L series); it is sufficient to generate a single long series. This is allowed by the ergodicity property of a stationary process [57], which means that the statistics computed from one long temporal series are equal to the statistics computed over an ensemble of independently generated series of the same total length.
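Under the inverse-proportionality rule of Equation (16), the frequencies of use and the resulting years per set can be computed as in this sketch (the rounding of f_i·N to whole years is an illustrative choice):

```python
def set_frequencies(of_values):
    """Equation (16): f_i proportional to 1/OF_i, normalized to sum to 1."""
    inv = [1.0 / of for of in of_values]
    total = sum(inv)
    return [v / total for v in inv]

def years_per_set(of_values, n_years):
    """Number of years generated with each parametric set (f_i * N)."""
    return [round(f * n_years) for f in set_frequencies(of_values)]
```

With S = 4 sets of equal OF and N = 500 years, each set generates 125 years, matching the average figure quoted above.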
After clicking on the RUN command button (Figure 12), the user is able to check the progress of the rainfall generation, similarly to the procedure with only one parameter set (Figures 9-11). It is clear that this approach can be well used for a more comprehensive sensitivity analysis (i.e., not only related for the first ranked parametric sets) in further upgraded versions of STORAGE software.
Merging Different OFs
This approach can be carried out with two options: 3 OFs or 4 OFs. In the first case, from the worksheet (hidden from the user) where the information of the offline generations with about 3500 parametric sets is stored, the VBA code selects the three parametric sets with the lowest values of, respectively, Equations (13)-(15). Then, STORAGE will assign to each selected set a frequency f_i, evaluated by considering Option 4 of Equation (12) as OF_i in Equation (16).
In the second case (4 OFs), the parametric set with the lowest value of the overall OF (option 4 of Equation (12)) is also considered, together with the three above mentioned sets.
It must be highlighted that these two options are allowed by STORAGE only if all the 3 OFs of the first option are inside the first 10 positions of the ranking for OF calculated with Option 4 of Equation (12).
Also in this case, after clicking on the 3 OFs or 4 OFs buttons (Figure 13), the user is able to check the progress of the rainfall generation, similarly to the procedures with only one parameter set (Figures 9-11).
RUN with Parameter Values Chosen by the User
This option allows for manually setting the values of the parameters in the cell range C10:C21 of the Main worksheet (Figure 4). Also in this case, after clicking on the corresponding command button for the run, the user is able to check the progress of the rainfall generation, similarly to the previously described procedures. The ranges of variation for the parameters are reported in Table 1, according to [7,54].
Application for Rain Gauge Network of the Calabria Region and Discussion
As regards the application for the Calabria region, we saved in STORAGE about 3500 parametric sets, for which the 200-year synthetic series presented summary statistics ranging inside specific intervals (according to the observed data in the whole region). In detail:
• concerning MAP, a value between 450 and 2500 mm;
• concerning the mean annual number of wet days, a value between 50 and 120;
• concerning the ADF curves (Equation (11)), values of a and n for T = 5 years between 20 and 65 mm/h and between 0.12 and 0.65, respectively;
• concerning the SON cumulative rainfall, a mean value inside a variation of ±50 mm with respect to the linear regression curve between the observed MAP and SON of the investigated data series.
By applying this composite filter, graphical comparisons between synthetic and observed summary statistics are shown in Figures 14 and 15. From the analysis of these dispersion plots, the good STORAGE reconstruction of the investigated rainfall descriptors can be assessed. The main features of the three selected rain gauges (Montalto Uffugo, Reggio Calabria and Vibo Valentia) are reported in Tables 2 and 3. For all three stations, 500-year synthetic rainfall time series with a resolution of 5 min were generated, and model validation was carried out by analyzing the reproduction of the frequency distributions of the sample data of AMR, annual and seasonal rainfall, and annual number of wet days. The best STORAGE performances were obtained:
• by using the parametric set with the lowest value of the total OF (Option 4 in Equation (12)) for Montalto Uffugo;
• by considering the multisets approach Ranking from total OF for Reggio Calabria and Vibo Valentia, with S equal to 3 and 4, respectively.
For the Montalto Uffugo rain gauge, STORAGE provided a 500-year synthetic rainfall time series which satisfactorily reproduces the frequency distributions of the AMR sample data (see the EV1 probabilistic plots in Figure 16), with an over-estimation only for the 24-h AMR series. The reproduction of the frequency distributions of the sample series for annual rainfall, annual number of wet days, and seasonal precipitation in DJF, MAM and SON is analyzed on Gaussian plots (Figure 17): a slight underestimation is obtained only for JJA rainfall. As regards the Reggio Calabria and Vibo Valentia rain gauges, the obtained results (Figures 18-21) highlighted some crucial aspects to be investigated further in future developments of the STORAGE software. In detail:
• when AMR sample data present outliers from an EV1 behaviour (Figures 18 and 20), or if extremes are underestimated, it could be useful to consider other probability distributions for the cell intensity I (e.g., Weibull, Gamma or a mixture of exponential functions [20,25,58]), and/or to use other shapes for the rain cells (such as the sinusoidal one [59]), in order to better reproduce quantiles at high values of the return period T;
• though the frequency distributions of annual rainfall are properly reproduced, an increase in the maximum number of harmonics for 1/λ (i.e., the mean inter-arrival time between two consecutive storms) and/or modelling seasonality also for 1/β_W (i.e., the mean waiting time between a specific burst origin and the origin of the associated storm) could improve the reconstruction of both the annual number of wet days and the seasonal rainfall in some specific cases.
Starting from this latter aspect, a more in-depth investigation of the maximum number of harmonics for some quantities, and of their phase shifts, could justify the STORAGE application also in regions far from the investigated area, i.e., characterized by drier or wetter climates. This obviously means to increase the number of parametric sets to be stored in the software.
Further analyses of the STORAGE performances were carried out focusing on the Montalto Uffugo rain gauge, characterized by a 30-year continuous time series at a resolution of 20 min. Such analyses aim to evaluate the model capacity of reproducing summary statistics of high-resolution continuous series (not used for STORAGE calibration) and to compare the STORAGE results with those from a standard NSRP (i.e., calibrated by only using continuous high-resolution data). In detail:
• we calibrated a basic version of NSRP with the 1-h continuous data series (aggregated from the available 20-min one), by estimating parameters for each month (according to [14]) in order to avoid possible underestimation of extremes (as mentioned in the Introduction); this version of NSRP is indicated as NSRP_v0 in the following;
• we compared STORAGE and NSRP_v0 performances, graphically and in terms of Root Mean Square Error (RMSE), as regards the modelling of:
• mean, standard deviation and percentage of dry intervals of the continuous series at 20-min and 1-h resolutions;
• mean values of monthly rainfall heights;
• rainfall heights of the ADF curves for return periods T = 5, 50 and 200 years.
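The RMSE used for these comparisons is the standard root-mean-square error between observed and simulated statistics; for completeness:

```python
import math

def rmse(observed, simulated):
    """Root Mean Square Error between paired observed/simulated values."""
    return math.sqrt(sum((o - s) ** 2
                         for o, s in zip(observed, simulated)) / len(observed))
```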
Concerning the summary statistics of the continuous series, it is clear that NSRP_v0 provides the best performances at the 1-h resolution, because this time step was used for NSRP calibration in this case. However, the STORAGE results for the 1-h data series can be considered acceptable for the mean and the percentage of dry intervals (Table 4 and Figure 22). For the 20-min resolution, STORAGE and NSRP_v0 performances are comparable (Table 4 and Figure 23), as they are for the mean monthly rainfall heights (Table 5 and Figure 24). The clear benefit of using STORAGE is highlighted by focusing on the ADF curves (Table 5 and Figure 25). As expected, STORAGE provides a very good reconstruction (RMSE values between 5.5 and 6 mm), because it is calibrated with a_T and n_T of the sample ADF curves (Sections 3.2.1 and 3.2.2). On the contrary, NSRP_v0 significantly overestimates rainfall extremes in this specific case; as its parametric estimation is only based on summary statistics from high-resolution continuous series, an acceptable reproduction of ADF curves cannot be guaranteed in general (as for the Montalto Uffugo rain gauge), even when using monthly or seasonal parameter sets.
This last comparison allows us to remark the most important aspect of the usefulness of STORAGE, i.e., the possibility of calibrating an SRG by only using information at coarser resolutions (AMR, MAP, and so on) and then generating continuous series which preserve, in an acceptable way, sample features at high resolutions that are often unknown for lack of data.
Conclusions
The developed STORAGE software constitutes a very useful, user-friendly tool for generating long rainfall time series at high resolutions, which can be applied as input data in many hydrological analyses, such as continuous rainfall-runoff modelling.
The innovative aspects of the software regard: (i) the possibility of using, for model calibration, information from observed time series which are longer than the continuous high-resolution data samples; (ii) the modelling of seasonality by adopting goniometric series, which allows for a more parsimonious approach with respect to considering monthly parametric sets (as is usually done).
The presented version of the STORAGE software, available at https://sites.google.com/unical.it/storage, is currently suitable for the reproduction of rainfall series which exhibit a clear EV1 behaviour in terms of AMR and present values of annual and seasonal precipitation that are typical of the Mediterranean area.
Future developments will concern: (i) the extension of the ensemble of the parametric sets and the possibility to use other probability distributions for some rainfall features and other shapes besides the rectangular one for rain cells, in order to apply the model in other regions with different climates with respect to the investigated area; (ii) the implementation of a module for obtaining perturbed synthetic series, which can be representative of future hypothesized rainfall scenarios on spatial and temporal hydrological scales.
Moreover, the authors consider as very important the possibility of implementing in STORAGE specific modules related to soft computing methods (widely used in recent literature [60][61][62]), in order to provide different approaches for a specific case study. This aspect will allow to immediately compare the performances of an SRG (having a mathematical structure which is "physically-based", as it models some aspects of rainfall genesis, see Figure 2) with those from approaches such as Artificial Neural Networks (ANNs), Support Vector Regression (SVR) and Fuzzy Logic (FL), which are characterized by high nonlinearity, flexibility and data-driven learning.
The perceptual matching (same-different judgment) paradigm was used to investigate precision in position coding for strings of letters, digits, and symbols. Reference and target stimuli were 6 characters long and could be identical or differ either by transposing two characters or substituting two characters. The distance separating the two characters was manipulated such that they could either be contiguous, separated by one intervening character, or separated by two intervening characters. Effects of type of character and distance were measured in terms of the difference between the transposition and substitution conditions (transposition cost). Error rates revealed that transposition costs were greater for letters than for digits, which in turn were greater than for symbols. Furthermore, letter stimuli showed a gradual decrease in transposition cost as the distance between the letters increased, whereas the only significant difference for digit and symbol stimuli arose between contiguous and non-contiguous changes, with no effect of distance on the non-contiguous changes. The results are taken as further evidence for letter-specific position coding mechanisms.
Introduction
There is a general consensus nowadays that visual word recognition is essentially letter-based, at least in languages that use an alphabetic script. Within this perspective, efficient reading requires the association of different letter identities with different positions in the printed word, and the question of how letter position information is encoded has become a major issue in reading research in the last decade (see [1] for a review). One key question is whether the mechanism used to code for letter position information in printed words is essentially the same mechanism as might be used to code for positional information in arrays of any kind of visual object. Different approaches to letter position coding provide different answers to this question depending on how they account for the kind of flexibility in position coding that has been revealed by recent research. One phenomenon in particular has been used to illustrate this flexibility - the fact that we can easily read text in which letter odrer has been slightly mofidied. More precisely, an impressive amount of evidence obtained from various paradigms suggests that letter strings formed by transposing two letters of a real word are perceived as being more perceptually similar to the base word than letter strings formed by substituting two letters of the base word [2][3][4][5][6][7][8][9]. As noted by Grainger (2008), these transposed-letter effects have become one of the principal benchmark phenomena that models of orthographic processing must account for.
There are two very different accounts of transposed-letter effects. One account [10,11] proposes that they reflect the operation of generic noise (i.e., object-position uncertainty [12,13]) on an otherwise rigid position-coding mechanism. Such models were developed specifically to account for transposed-letter effects by adding positional noise to a position-coding mechanism that cannot otherwise produce transposition effects (i.e., slot-coding [14]). Another class of models [15][16][17][18] have proposed letter-specific position coding mechanisms in order to account for location-invariant, and to a certain extent, length-independent orthographic processing [19][20][21]. It was subsequently discovered that such letter-specific coding mechanisms could also account for transposed-letter effects [22]. According to these models, transposed-letter effects are the result of the very mechanism used to code for letter position information. Of course noise will affect processing in these models, just like it will affect processing of positional information for any kind of visual object (i.e., generic positional noise), but this noise operates on top of a mechanism that is already endowed with a certain amount of positional flexibility.
Models that apply generic positional noise [10,11] predict that letter stimuli should behave like other kinds of visual stimuli, at least when familiarity is controlled for. In support of this approach, García-Orza, Perea and Muñoz (2010) [23] used the masked priming version of the perceptual matching task in order to investigate transposition effects on different types of stimuli (letter strings, digit strings, symbol strings and pseudoletter strings). Results showed that the transposition priming effects were not specific to letter strings, supporting the hypothesis that position coding takes place before the distinction of different types of stimuli. Critically, a highly similar transposed-character effect was found for letter, digit and symbol strings, suggesting a generic position-coding scheme which is governed by domain-general principles (see also [24]). However, one recent study [25] has provided clear evidence for greater transposition costs for letter stimuli compared with both digit and symbol stimuli. Duñabeitia et al. combined the perceptual matching task with ERP recordings in order to explore changes in character position coding in different types of strings (i.e., letters, digits and symbols). In their experiment, the authors used the classic version of the perceptual matching task [26,27], in which a reference stimulus is explicitly presented, immediately followed by a target stimulus. Participants are then asked to judge whether or not the two stimuli are the same. Duñabeitia et al. observed an early transposed-character similarity effect only for letter strings, while a generalized transposed-character similarity effect arose at around 350 ms post-target onset for all types of characters. Furthermore, behavioral data showed that transposition costs (difference between the transposed and substitution conditions) were significantly greater for letter strings compared with the other types of characters.
Interestingly, these data highlighted that the most familiar items (which presumably are the letter strings) are the ones that suffer the greatest level of positional uncertainty as compared to other items (i.e., digit and symbol strings), leading to the greatest transposition costs.
The only way that models that apply generic positional noise can account for the greater transposition costs found for letters compared with digits and symbols in the Duñabeitia et al. (2012) study is by postulating that such noise is greater for letter stimuli. Although this is a possibility, it runs counter to the evidence suggesting that, if anything, positional noise should be reduced for letters compared with other kinds of visual stimuli [28]. On the other hand, the greater transposition cost found for letter stimuli in the Duñabeitia et al. study is perfectly in line with models according to which such costs are at least partly driven by flexible letter-specific position coding mechanisms.
Given the theoretical importance of Duñabeitia et al.'s (2012) finding of differential transposition costs for letters and other kinds of visual stimuli, the present study provides a further examination of such effects. Here we go one important step further than the Duñabeitia et al. study by manipulating the distance (measured in number of characters) separating the two transposed elements in strings of letters, digits, and symbols. Participants were presented with pairs of 6-character strings and were asked to decide whether they were identical or different. The two strings could be identical or could differ by transposing or replacing two contiguous characters, two non-contiguous characters with one intervening character, or two non-contiguous characters with two intervening characters.
Prior research has shown that transposed-letter effects can also be obtained with nonword primes involving transpositions of non-contiguous letters (e.g., cholocate-CHOCOLATE; e.g., [5,7,29,30]). The magnitude of the transposed-letter effect highly depends on the number of other letters intervening between the two transposed letters, diminishing as a function of this distance (e.g., contiguous, 1-letter apart, 2-letter apart; see [5]). We therefore expected to observe a diminishing transposition cost as distance increases in the present study. More important, this manipulation of distance provides us with another opportunity for observing a dissociation between letter strings and other kinds of stimuli, which is the focus of the present work. That is, given the hypothesized role of letter-specific position coding mechanisms, letter strings might not only exhibit greater transposition costs than the other kinds of stimuli, but these transposition costs might also be differentially modulated by distance.
Ethics Statement
All the participants signed informed consent forms before the experiment and were appropriately informed regarding the basic procedure of the experiment, according to the ethical commitments established by the BCBL Scientific Committee and by the BCBL Ethics Committee that approved the experiment.
Participants
Thirty-two participants (16 women) with a mean age of 22.06 years (SD = 2.34) took part in the experiment. They were paid for their collaboration. All of them were native speakers of Spanish and had normal or corrected-to-normal vision.
Materials
1296 reference-target pairs were used as stimuli. Each of the pairs was composed of two 6-character long strings of uppercase consonants, digits, or meaningful symbols. These three categories were assigned to three blocks, so that each block consisted of 432 letter strings, 432 digit strings, or 432 symbol strings. For the digit strings, the numbers 1, 2, 3, 4, 5, 6, 7, 8 and 9 were used. For the letter strings, the uppercase consonants G, N, D, K, F, T, S, B and L were used. For the symbol strings, the characters %, ?, !, &, +, <, ), $ and # were used. While digit-pair judgments seem to be unaffected by the similarity between the digits and existing letters [31], it is unknown whether or not the same would hold for letter-like symbols. Hence, in order to minimize the potential impact of the £€€T effect [32,33], we decided to substitute two of the letter-like symbols that were used previously in other studies. However, even if the symbols are not exactly matched between studies [25,28], we do not predict any difference in the processing of the symbol strings across experiments, given that in all cases only symbols highly familiar to the participants were used. The same reference stimulus appeared twice in the experiment, once requiring a "same" response and once a "different" response. In each block, half of the items required a "same" response (216 trials, e.g., 349256-349256, DKLNFT-DKLNFT, &+!?$#-&+!?$#). The other half (216 trials) required a "different" response. Half of the different pairs differed by means of character transpositions (i.e., the transposed condition), and the other half by character replacements (i.e., the replaced condition).
The distance between the two transposed or replaced characters was also manipulated, measured in terms of number of intervening characters between the two critical ones (i.e., 72 trials per block of contiguous transpositions or replacements, DKLNFT-DLKNFT; 72 trials including noncontiguous transpositions or replacements with one intervening character, KTDLNB-KLDTNB; 72 trials with non-contiguous transpositions or replacements with two intervening characters, LNBKTD-LTBKND). Critically, transpositions or replacements never involved the outer characters. The same proportion of transpositions or replacements was carried out in all the possible within-string locations. Following a counterbalanced design, the reference-target pairs were separated into two subsets to create two lists of experimental stimuli that were presented to different participants.
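As an illustration, the construction of the "different" targets can be sketched as follows. This is hypothetical helper code (not part of the study's materials); positions are 0-based, and `dist` is the number of intervening characters between the two critical positions.

```python
# Hypothetical sketch of the transposition/replacement manipulation.
def transpose(ref, i, dist):
    """Swap the characters at positions i and i + dist + 1,
    where dist is the number of intervening characters."""
    j = i + dist + 1
    s = list(ref)
    s[i], s[j] = s[j], s[i]
    return "".join(s)

def replace(ref, i, dist, alphabet):
    """Substitute the two critical characters with characters
    from the block's alphabet that do not occur in the string."""
    j = i + dist + 1
    unused = [c for c in alphabet if c not in ref]
    s = list(ref)
    s[i], s[j] = unused[0], unused[1]
    return "".join(s)

LETTERS = "GNDKFTSBL"  # consonant set of the letter block
print(transpose("DKLNFT", 1, 0))  # contiguous -> DLKNFT
print(transpose("KTDLNB", 1, 1))  # one intervening character -> KLDTNB
print(transpose("LNBKTD", 1, 2))  # two intervening characters -> LTBKND
```

Choosing `i` between 1 and `len(ref) - dist - 3` keeps the outer characters uninvolved, as in the design above.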
Procedure
The presentation of the stimuli and recording of the responses were carried out using Presentation software. All stimuli were presented on a CRT monitor. Participants were informed that two strings of characters were going to be subsequently displayed. All stimuli were presented in white Courier New font (size 16 pt.) on a black background. Each trial began with the centered presentation of a fixation stimulus (*) displayed for 500ms. Immediately after this, the reference was presented for 300ms, horizontally centered and positioned 3mm above the exact center of the screen. The reference was immediately replaced by the target stimulus, which was horizontally centered and positioned 3mm below the center of the screen. The target stimulus remained on the screen for 2000ms or until a response was given. Each trial ended with a blank screen displayed for 500ms. The manipulation of the location of references and targets on the vertical axis was carried out in order to avoid physical overlap between the two strings (see Figure 1 for a schematic representation of a trial). Participants were instructed to decide as rapidly and as accurately as possible whether or not the two strings were exactly identical. They responded "same" by pressing the "L" button on the keyboard and "different" by pressing the "S" button. The experiment was divided into three separate blocks that only included items belonging to the same stimulus category. A short practice session was administered before the main experiment to familiarize participants with the procedure and the task.
Results
Statistical analyses were performed only over the "different" trials, since there was no experimental manipulation within the set of "same" trials. Incorrect responses and reaction times below 250ms or above 1300ms (1.34% of the data) were excluded from the latency analysis. Mean latencies for correct responses and error rates are presented in Table 1. The transposition costs are presented in Figure 2. ANOVAs over participants (F1) and items (F2) were conducted on the transposition costs (the data in the replacement condition minus the data in the transposition condition) for both response latencies and error rates, based on a 3 (Type of Character: letter, digit, symbol) x 3 (Distance: contiguous, non-contiguous 1-apart, non-contiguous 2-apart) factorial design.
Error rates
The error rate analyses showed a significant main effect of Type of Character, F1(2,62) = 14.23, p<.001; F2(2,142) = 24.59, p<.001. A significant main effect of Distance was also observed, F1(2,62) = 75.48, p<.001; F2(2,142) = 61.86, p<.001. The interaction between the two factors was significant, suggesting that the magnitude of the transposition cost differed across character types and distances, F1(4,124) = 5.53, p=.001; F2(4,284) = 3.82, p=.006. The interaction is illustrated in Figure 2, where we can see the different influence of the distance factor for the three types of stimulus. In the following sections we unravel this interaction by looking separately at the transposition cost for the different distances (the Distance effect) and at the transposition cost for the different types of characters (the Character effect). Regarding digit and symbol strings, the transposition cost was larger for contiguous manipulations than for the two non-contiguous manipulations, which did not differ from each other. In a nutshell, there was a clear gradation of the Type of Character effect for the contiguous manipulations, with the largest cost for letter strings, followed by digit strings and finally by symbol strings. Regarding the Type of Character effect in the non-contiguous 1-apart manipulations, the transposition cost was larger for letter strings than for digit and symbol strings (which did not differ from each other). Note: Mean reaction times and percentages of errors for the "same" trials were 590 ms (7.29%), 596 ms (11.41%) and 601 ms (…).
Discussion
The present study employed a perceptual matching task in order to investigate participants' ability to judge that two strings of items are different when the difference lies in the transposition of two characters compared with the substitution of two characters. In line with prior research, we found that detecting a transposition change was harder than detecting a substitution change, an effect referred to as a transposition cost [25]. We also investigated whether this transposition cost was modulated as a function of the distance between the characters involved in the change (contiguous, non-contiguous with one intervening character, and non-contiguous with two intervening characters). In line with prior research using masked priming [5,7,29,30], we found evidence for a decrease in transposition costs as distance increased.
Most important, however, is that we investigated whether such transposition costs, and their modulation by transposition distance, would be the same for the different types of stimuli we tested, or differ as a function of stimulus type. According to one account of how letter position information is encoded during visual word recognition [10,11], transposed-letter effects are driven by generic positional noise that operates identically for different types of familiar visual stimuli. A very different account of letter position encoding postulates that transposed letter effects are not only driven by positional noise, but also by the flexibility that is inherent in the very mechanism that codes for positional information [15,16,18]. According to the latter approach to letter position coding we should find evidence for letter-specific effects in the present study, thus providing a replication and extension of the prior evidence in this direction [25].
The present data show that transposition costs are larger for contiguous character transpositions than for non-contiguous character transpositions for all types of materials. The overall graded effects of contiguity are in line with the predictions of the Overlap model [10]. This model assumes that the locations of objects in a sequence are coded at a level of processing that precedes the distinction between object types. In that sense, the probability of a given character being at a given position diminishes as a function of the distance from its exact location, following a Gaussian distribution. In the case of transposed-character manipulations, the Overlap model predicts that transposing contiguous characters will lead to a higher perceptual overlap with regard to the reference (namely, a larger cost) than transposing non-contiguous characters involving one intervening character, which in turn will lead to greater perceptual overlap than transposing non-contiguous characters involving two intervening characters. The results of simulations with the Overlap model are shown in Figure 3. According to this point of view, transposition effects are a consequence of object position uncertainty as depicted by general models of visual attention [34,35]. This process of position encoding is assumed to be effective regardless of the type of visual objects. Thus, this apparent flexibility in the encoding of positional information would be a by-product of a general property of the visual recognition system.
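The graded prediction just described can be conveyed with a minimal sketch. This is our simplification for illustration, not the published implementation or its fitted parameters: each character's coded position is a Gaussian with standard deviation s, and the match between reference and target sums the positional weight of identical characters.

```python
# Illustrative sketch of the Overlap model's core idea (assumed
# overlap formula; not the published implementation).
import math

def pos_overlap(delta, s):
    # Gaussian positional weight for a character whose reference and
    # target positions differ by `delta` slots
    return math.exp(-(delta ** 2) / (2 * s ** 2))

def match(ref, tgt, s=1.0):
    # Sum positional overlap over identical characters (the strings
    # here contain no repeated characters), normalized by length
    total = sum(pos_overlap(i - j, s)
                for i, r in enumerate(ref)
                for j, t in enumerate(tgt) if r == t)
    return total / len(ref)

print(round(match("DKLNFT", "DLKNFT"), 3))  # contiguous TL: 0.869
print(round(match("KTDLNB", "KLDTNB"), 3))  # 1-apart TL:    0.712
print(round(match("LNBKTD", "LTBKND"), 3))  # 2-apart TL:    0.670
```

The higher the match score, the harder the "different" decision, so the monotonic decrease with distance mirrors the graded transposition costs described above.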
However, and more importantly, our results also revealed that transposition costs were overall larger for letter strings than for both digit and symbol strings. Furthermore, the distance factor had a different impact on these transposition costs for letter stimuli compared with digit and symbol stimuli. More precisely, letter stimuli showed a gradual decrease in transposition cost with increasing distance, whereas digit and symbol stimuli both showed a decrease in transposition cost from contiguous to non-contiguous transpositions, but no significant effect of the number of intervening characters in the non-contiguous conditions. Assuming similar parameter estimates for all types of characters, the present data cannot be accommodated by the Overlap model, since it does not a priori predict any interaction between transposition effects and the type of character. However, this model would be able to account for the differences in the transposition costs between letter, digit and symbol strings by tuning the values of the s parameter (which corresponds to the standard deviation of the letter position distribution function) as a function of the type of input (letters, digits or symbols; see Figure 3) in order to fit the data. Nonetheless, it is worth mentioning that even with this parameter tuning, the model would run into difficulties fitting the high error rate found for contiguous letter transpositions. Hence, even if the Overlap model seems a reasonably good candidate to account for most of the data reported here, the pieces of data that are not readily captured by the model seem to favor models of orthographic processing based on letter-specific principles (over and above domain-general principles).
Greater transposition costs for letters compared with other kinds of familiar visual stimuli is a natural consequence of models that code for letter-in-string position in a flexible manner. This is the case for models that employ open-bigram coding [16,18] and spatial coding [15]. Such flexibility in position coding is used in order to achieve a location-invariant (i.e., independent of viewing position) sublexical representation of orthographic information that codes for position-in-word rather than position-on-retina. However, current implementations of these models would appear not to generate the amount of flexibility required to capture the transposition costs that occurred with 2-character separations. The results of simulations (the match scores were obtained from the MatchCalculator application (v. 1.9) developed by Colin Davis, available at: http://www.pc.rhul.ac.uk/staff/c.davis/Utilities/MatchCalc/index.htm) are shown in Figure 4. These simulations revealed that none of the models predicted a transposition cost when the change involved letters separated by two letters. It should nevertheless be noted that a more recent version of open-bigram coding [17] has opted for increased flexibility by implementing distance as a parameter that can change as a function of encoding conditions. Most important, however, is that all these simulations are noise-free. Now, assuming that noise operates on position coding mechanisms, whatever their nature, we have a simple means to extend the above models in order to capture the complete pattern of results observed in the present study. Here we provide one example of this extension, couched in the framework of Grainger and van Heuven's (2003) model of orthographic processing. In this particular model, generic positional noise operates at the level of retinotopic letter detectors [20], and this noise will affect the coding of word-centered bigram representations.
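The flavor of such noise-free simulations can be conveyed with a simplified open-bigram scheme. This is an illustrative approximation of our own, not the MatchCalculator implementation: a string is coded as the set of ordered character pairs with at most two intervening characters, and similarity is the proportion of the reference's bigrams preserved in the target. Under this scheme, a distance-2 transposition preserves no more bigrams than a distance-2 substitution, hence no predicted transposition cost at that distance (X and Z below are arbitrary filler characters standing in for the substituted ones).

```python
# Simplified open-bigram similarity (illustrative approximation).
def open_bigrams(s, max_sep=3):
    # ordered character pairs separated by at most max_sep positions,
    # i.e., up to two intervening characters when max_sep = 3
    return {(s[i], s[j]) for i in range(len(s))
            for j in range(i + 1, min(i + max_sep + 1, len(s)))}

def similarity(ref, tgt):
    ref_bg = open_bigrams(ref)
    return len(ref_bg & open_bigrams(tgt)) / len(ref_bg)

# contiguous change: transposition preserves far more bigrams
print(round(similarity("DKLNFT", "DLKNFT"), 3))  # TL:  0.833
print(round(similarity("DKLNFT", "DXZNFT"), 3))  # sub: 0.333
# two intervening characters: the advantage disappears
print(round(similarity("LNBKTD", "LTBKND"), 3))  # TL:  0.417
print(round(similarity("LNBKTD", "LXBKZD"), 3))  # sub: 0.417
```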
Assuming minimal positional noise here, such that a letter at position N can be erroneously encoded as being at position N-1 or N+1, the computation of a bigram with distance 2 can sometimes lead to the computation of a bigram formed of two letters separated by four letters (distance 4). Noisy retinotopic coding increases the flexibility of word-centered open-bigram coding, enabling such models to capture TL effects with two intervening letters. This is very different from the so-called "overlap open-bigram model" (see [20]), according to which positional uncertainty only arises at the level of retinotopic letter detectors, and it is due to this positional noise that non-contiguous bigrams are encoded [36].
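The distance-extending effect of such noise can be made concrete with a toy calculation (our construction, for illustration only): if each letter's retinotopic position can be mis-registered by one slot, a pair nominally separated by two intervening letters can be encoded with anywhere from zero to four intervening letters.

```python
# Toy illustration of ±1 positional noise extending bigram distance.
def possible_intervening(i, j, noise=1):
    """Counts of intervening letters after each of the two positions
    shifts independently by up to ±noise slots."""
    return sorted({(j + dj) - (i + di) - 1
                   for di in range(-noise, noise + 1)
                   for dj in range(-noise, noise + 1)})

# letters at positions 0 and 3, i.e., two intervening letters
print(possible_intervening(0, 3))  # [0, 1, 2, 3, 4]
```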
In sum, the observed differences in transposed-character effects found for letters compared with both digits and symbols in the present study point to the existence of a letter-specific position-coding mechanism. Generic positional noise operating on an otherwise rigid position coding mechanism cannot capture the present results. Nevertheless, generic positional noise must still influence the processing of letter stimuli, just like any other kind of visual stimulus, and it might be the case that when added to existing flexible position-coding mechanisms, this would provide the additional flexibility required to provide a complete account of the present data.
"year": 2013,
"sha1": "dac74d092e2c74b1c8668dca2eeeeb17f0cb2688",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0068460&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f9a12523395fc4e19bac3b90879706743df2302",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Spatial Growth Factor Delivery for 3D Bioprinting of Vascularized Bone with Adipose-Derived Stem/Stromal Cells as a Single Cell Source
Encapsulating multiple growth factors within a scaffold enhances the regenerative capacity of engineered bone grafts through their localization and controls the spatiotemporal release profile. In this study, we bioprinted hybrid bone grafts with an inherent built-in controlled growth factor delivery system, which would contribute to vascularized bone formation using a single stem cell source, human adipose-derived stem/stromal cells (ASCs), in vitro. The strategy was to provide precise control over ASC-derived osteogenesis and angiogenesis at certain regions of the graft through the activity of spatially positioned microencapsulated BMP-2 and VEGF within the osteogenic and angiogenic bioinks during bioprinting. The 3D-bioprinted vascularized bone grafts were cultured in a perfusion bioreactor. The results demonstrated localized expression of osteopontin and CD31 by the ASCs, which was made possible through the localized delivery activity of the built-in delivery system. In conclusion, this approach provides a methodology for generating off-the-shelf constructs for vascularized bone regeneration and has the potential to enable single-step, in situ bioprinting procedures for creating vascularized bone implants when applied to bone defects.
INTRODUCTION
However, the presence of these cells alone is insufficient to initiate the bone healing cascade. Correct signaling sequences of both osteoconductive and osteoinductive factors, including bone morphogenic proteins (BMPs), are essential. 4,5 Equally important in this healing cascade is the coactivity of angiogenic cells and their well-orchestrated physiological process known as angiogenesis. 6,7 It was discovered a century ago that the bone tissue possesses a remarkable network of highly vascularized blood vessels, extending through its osteons and Haversian/Volkmann's canals in the cortical section and penetrating into the medullary-positioned cancellous section. 8−11 Moreover, the lack of a bone vessel network leads to poor graft viability and nonuniform osteointegration, both of which can ultimately lead to graft failure in the postoperative phase. 5,8,12,13−19 Since the natural bone formation process is fairly complex and involves multiple growth factors released in different regions of the bone tissue, delivering multiple factors to the forming tissue microenvironment is essential. 20 Scaffold-based applications of bone tissue engineering have introduced innovative concepts over the last few decades to construct multifaceted tissue grafts capable of fulfilling all necessary functions. 21 Nonetheless, conventional scaffold fabrication methodologies face certain limitations, such as the challenge of fabricating highly ordered, porous, and complex architectures. 22 Ever since the introduction of 3D printing technology, research focus has shifted in this direction due to its innovative and groundbreaking capabilities in customizing manufactured products compared to standard manufacturing practices. 23 This state-of-the-art technology enables researchers to fabricate intricate geometries with high precision, accuracy, and most importantly reproducibility.
24 Accordingly, bone tissue engineering utilizes 3D printing of cell-laden bioinks, known as bioprinting, to harness its versatile advantages. 25,26 This additive manufacturing technology allows precise control over all the aforementioned properties, as well as controlled spatial distribution and the deposition of multiple cell-laden biomaterials in a layer-by-layer manner. 27 For these reasons, in this study, we aimed to utilize bioprinting technology to fabricate sophisticated scaffolds with spatiotemporal inductive cues by incorporating human adipose-derived stem cells and stromal cells (ASCs) into predetermined locations.
ASCs, a heterogeneous source of precursor cells, possess MSC properties and osteogenic tissue formation potential upon osteoinduction (such as with BMPs). Additionally, they include endothelial cells capable of inducing de novo vessel formation within a synthetic graft through stimulation with the vascular endothelial growth factor (VEGF). 28 In this work, BMP-2 and VEGF proteins were encapsulated in polymeric microparticles and placed at predetermined locations within the graft during bioprinting. Polycaprolactone (PCL) was coprinted with ASC-laden bioink, and growth factor-loaded microparticles were supplemented within the alginate-based bioink. The native structure of the vascularized bone tissue was practically mimicked by means of a unique architectural design. Figure 1 illustrates the methodology followed in this approach in order to produce a bioprinted vascularized bone structure.
2.1. Microencapsulation of BMP-2 and VEGF.
PCL microcapsules were produced following a previously established method. 29 Briefly, PCL was dissolved in methylene chloride at varying concentrations. Aqueous solutions of BMP-2 and VEGF were added and sonicated for 15 s at 50 Hz. The resulting emulsion was then introduced to a 4% (w/v) poly(vinyl alcohol) (PVA) solution and underwent further sonication under the same conditions. The double emulsion was homogenized in 0.3% (w/v) PVA solution, and the solvent was evaporated under continuous stirring overnight. The microcapsules were subsequently rinsed with Tris−HCl (10 mM, pH 7.4) and lyophilized for 24 h. The morphology of the microcapsules was assessed using scanning electron microscopy (SEM, Quanta 400F Field Emission SEM) after sputter-coating with gold.
2.2. Assessment of Encapsulation Efficiency and Release Kinetics.
To evaluate release kinetics and encapsulation efficiency, bovine serum albumin (BSA) was employed as a model protein, since its molecular weight (66 kDa) is comparable to those of BMP-2 and VEGF (ca. 40 kDa). 5 mg of microparticles was suspended in 1 mL of phosphate-buffered saline (PBS) solution (pH 7.4) and incubated at 37 °C for 21 days. At various time points (days 1, 3, 7, 14, and 21), the samples were subjected to centrifugation and the supernatants were collected. The particles were subsequently resuspended in 1 mL of fresh PBS. Quantification of BSA was performed using the Bradford assay (Coomassie Plus Bradford Assay, Pierce) according to the manufacturer's instructions. To determine the encapsulation efficiency, microparticles were disrupted in methylene chloride and the protein content was extracted with distilled water prior to quantification. Release kinetics were assessed from both the free microparticles and the loaded particles embedded within the alginate bioink. A similar procedure was applied by suspending microparticle-loaded 3D-printed scaffolds in PBS.
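Because the entire supernatant is replaced with fresh PBS at each time point in this scheme, the cumulative release curve is simply the running sum of the protein recovered at each collection. A sketch with made-up numbers (the study's actual values are not reproduced here):

```python
# Illustrative only; all quantities are hypothetical.
def cumulative_release_percent(measured_ug, loaded_ug):
    """Percent cumulative release at each sampling time point, given
    the protein amounts measured in successive supernatants."""
    released, curve = 0.0, []
    for amount in measured_ug:
        released += amount
        curve.append(round(100.0 * released / loaded_ug, 1))
    return curve

def encapsulation_efficiency(recovered_ug, initial_ug):
    """Percent of the initially added protein recovered from
    disrupted microparticles."""
    return 100.0 * recovered_ug / initial_ug

# hypothetical BSA amounts (ug) in supernatants on days 1, 3, 7, 14, 21
print(cumulative_release_percent([120, 60, 40, 30, 20], loaded_ug=500))
# -> [24.0, 36.0, 44.0, 50.0, 54.0]
print(encapsulation_efficiency(recovered_ug=380, initial_ug=500))  # 76.0
```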
2.3. Adipose-Derived Stem/Stromal Cell Isolation and Culture. ASCs were isolated from human subcutaneous adipose lipoaspirates obtained from Ankara University Faculty of Medicine, Department of Reconstructive and Plastic Surgery, in accordance with Clinical Research Ethical Committee approval (#13-25-15). The cell isolation protocol was followed as described previously. 30 In brief, lipoaspirate was enzymatically digested using 0.1% collagenase type I, 1% BSA, and 2 mM CaCl2 in PBS for 1 h at 37 °C. Samples were then centrifuged to isolate the stromal vascular fraction (SVF). Subsequently, the SVF pellet was plated onto tissue culture dishes to obtain the plastic-adherent population (P0). Vials containing P0 were cryopreserved and stored until use. ASCs were then thawed in expansion medium (Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 1% penicillin/streptomycin (P/S), and 1 ng/mL FGF-2) under standard culture conditions (37 °C and 5% CO2). P3 ASCs were employed for all experiments conducted in this study.

Figure 1. Schematic representation of the hierarchical and spatial organization followed for 3D bioprinting of the vascularized bone graft. Three different print heads were used for the biofabrication. The first print head was used to 3D print PCL. The second print head included osteogenic bioink (ASCs and BMP-2-loaded particles within alginate). The third print head was used to bioprint angiogenic bioink (ASCs and VEGF-loaded particles within alginate). The magnified figure on the right-hand side represents the structure of the bioprinted vascularized bone graft with distinct regions including (i) PCL (blue lines), (ii) osteogenic bioink (bone compartment, red squares), and (iii) angiogenic bioink (vessel compartment, white circles).
2.4. Bioprinting of Vascularized Bone Scaffolds. PCL (Perstorp AB, Sweden) (Mw = 37,000 g/mol) was melted in the high-temperature print head of the 3D Bioplotter (EnvisionTEC, Germany) at 130 °C. The molten PCL was extruded through a syringe (460 μm) at 3.5 bar and 3D printed on a 37 °C glass surface in the form of filaments. The other print heads were loaded with 2% (w/v) sterile alginate solution (loaded with 1.25 × 10^6 ASCs/mL and GF-laden microparticles), which was coprinted as filaments. Cross-linking of the bioink was achieved using a sterile 0.5 M CaCl2 solution.
To optimize and characterize the manufacturing process, cubic scaffolds with dimensions of 10 × 10 × 15 mm were printed. Subsequently, to establish the feasibility of our concept, anatomically shaped scaffolds were designed and 3D printed, aiming to craft custom-made vascularized bone grafts using a single cell source (ASCs). Patient DICOM data from an MRI scan were obtained from the Ankara University Department of Plastic, Aesthetic and Reconstructive Surgery Clinic (Approval No. 13-25-15). The 3D model generation and implant design were performed using Mimics software (Materialise). After radiological image segmentation, 3D image creation (surface rendering) was carried out, followed by 3D implant design using the Mimics software's "Materialise-3-matic" module. The 3D models produced for patients from DICOM images were obtained as .stl files and further used in the 3D printing.
Circular canals were designed and incorporated into the hybrid scaffolds to localize neovascularization triggered by VEGF release (the vessel configuration was optimized as described below). Print optimization was conducted to enhance reproducibility, repeatability, and rigidity by adjusting parameters such as the fiber spacing and fiber arrangement. Additionally, the linear speed of the print head, loading density, print pressure, and temperature were also optimized for the ideal construction of the inner structure.
For the preparation of the bioink, alginate powder and the BMP-2- and VEGF-laden PCL microcapsules were sterilized under UV light for 30 min. BMP-2 (40 ng/mL) and VEGF (10 ng/mL) were loaded per scaffold, with doses determined previously. 4,31,32 ASCs collected at P3 were resuspended in 4% (w/v) sterile alginate solution, along with the microcapsules, at a concentration of 1.25 × 10^6 cells/mL for both the osteogenic and angiogenic bioinks. The 3D grafts were incubated in 0.5 M CaCl2 solution for 5 min to cross-link the bioinks prior to culture (static or perfusion) for 21 days.
2.5. Optimization of Vessel Configuration in 3D-Bioprinted Scaffolds. The 3D printing of the vascularized bone structure was carried out by creating distinct vessel and bone compartments within the graft structure, as depicted in Figure 1. The inclusion of each of these compartments facilitated the accommodation of both the osteogenic and angiogenic bioinks. In order to optimize the spatial arborization of the vessels in the bone matrix, several 3D models were developed, drawing inspiration from the Haversian canal microarchitecture found in native tissue. As a result, seven different types of vessel and bone structure placements were designed and fabricated with the 3D Bioplotter. The effect of vessel placement on the mechanical strength of the hybrid bone graft was evaluated through compression testing (as described below). Straight cylindrical and three-spin (slalom) vessel structures (one-slalom, two-slalom, and four-slalom configurations) were printed to investigate the impact of 3D vessel geometry on the mechanical strength of the vascularized bone graft.
2.6. Uniaxial Compression Testing. Uniaxial compression testing was conducted to assess the mechanical strength of the grafts using a Universal Testing Machine (Shimadzu AGS-X, Japan). A 50 kN load cell was employed with a linear compression speed of 200 μm/min. The applied stroke was adjusted in accordance with the specimen's thickness. Stress−strain curves were used to derive mechanical properties, including Young's modulus and ultimate compressive strength.
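For reference, Young's modulus is the slope of the initial linear region of the stress−strain curve, and the ultimate compressive strength is the peak stress. A generic least-squares sketch (not the authors' analysis script; the data values are hypothetical):

```python
# Generic sketch: modulus from the linear region of a stress-strain
# curve via least-squares slope. Data below are made up.
def youngs_modulus(strain, stress):
    n = len(strain)
    mx = sum(strain) / n
    my = sum(stress) / n
    num = sum((x - mx) * (y - my) for x, y in zip(strain, stress))
    den = sum((x - mx) ** 2 for x in strain)
    return num / den  # slope, in the same units as stress (e.g., MPa)

# samples from the linear region: strain (dimensionless), stress (MPa)
strain = [0.00, 0.01, 0.02, 0.03, 0.04]
stress = [0.0, 1.5, 3.0, 4.5, 6.0]
E = youngs_modulus(strain, stress)
print(round(E, 6))   # 150.0 (MPa)
print(max(stress))   # peak stress within the sampled region
```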
2.7. FTIR Analysis. Characterization of the PCL and PCL microparticle-loaded alginate bioink structures was carried out using Fourier transform infrared (FTIR) spectroscopy (Jasco FT/IR Spectrometer 4600, Japan). The transmittance was calculated, and the FTIR spectra were recorded by performing 128 scans at a resolution of 4 cm−1. Samples were categorized as follows: (1) PCL 3D scaffold, (2) growth factor-laden PCL microcapsules, (3) 3D alginate scaffold, (4) PCL microcapsule-loaded 3D alginate scaffold. FTIR analyses were processed using the KnowItAll Informatics System (Wiley, 2023), and the spectral range between 500 and 1450 cm−1 was selected to enhance the clarity of the analysis, with a focus on the regions containing the relevant functional groups; the analysis was then completed using this refined spectral range.
The bioprinted vascularized bone scaffolds were subjected to culture under both static cell culture conditions and a perfusion bioreactor system. In static culture, the scaffolds were placed in 12-well plates and submerged in DMEM-based high-glucose growth medium, supplemented with 10% FBS, 1% P/S, and 1 ng/mL FGF-2. The medium was refreshed every other day throughout a 21-day culture period. Osteogenic induction was achieved with DMEM-based low-glucose medium, supplemented with 10% FBS, 1% P/S, 100 nM dexamethasone, 50 μM ascorbic acid, and 10 mM glycerol 2-phosphate.
Bioprinted vascularized bone scaffolds were cultured in a perfusion bioreactor system (3D CulturePro Electroforce, TA Instruments) for 21 days to ensure the effective nutrition and homogeneously distributed cell survival in the 3D matrix.100 mL of growth medium was perfused at a rate of 60 rpm.
2.9. Determination of Viable Cell Number and Morphology.
The number of living cells within the grafts was determined using the Alamar Blue assay (US Biological) on days 1, 7, 14, and 21. Briefly, samples were incubated under standard cell culture conditions for 1 h with Alamar Blue solution in DMEM without phenol red (10% v/v). Following the incubation period, 200 μL of the test solution was transferred to 96-well plates, and the optical density was measured on a microplate reader.
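A common way to reduce such readings to a proliferation metric is a blank-corrected fold change over day 1. The sketch below is purely illustrative: the optical-density values, the blank, and the day-1 normalization are assumptions, not data or procedures from the paper.

```python
# Hypothetical proliferation readout from Alamar Blue optical densities:
# subtract a cell-free blank and normalize each timepoint to day 1.
# All OD values below are invented for illustration.

def fold_change(od_by_day, od_blank):
    """Blank-corrected OD at each day divided by the day-1 value."""
    days = sorted(od_by_day)
    base = od_by_day[days[0]] - od_blank
    return {d: (od_by_day[d] - od_blank) / base for d in days}

od = {1: 0.30, 7: 0.55, 14: 0.80, 21: 1.05}  # raw OD readings per day
fc = fold_change(od, od_blank=0.05)
print({d: round(v, 2) for d, v in fc.items()})
```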
Both living and dead cells within the 3D grafts were stained with the LIVE/DEAD Viability/Cytotoxicity kit (Invitrogen) and subsequently imaged under confocal scanning laser microscopy (CLSM, Zeiss, Germany). The morphology of the cells within the scaffolds was observed with SEM (Quanta 400F Field Emission SEM). Scaffolds were fixed in 3.7% (v/v) glutaraldehyde, washed in 0.1 M Sorenson buffer, dehydrated in a series of ethanol, dried in hexamethyldisilazane (HMDS), and finally coated with Au−Pd (15 nm). Observations were carried out at 20 kV, and images were recorded at low magnifications (50−500×).
2.10. Alizarin Red Staining. The osteogenic differentiation of ASCs was assessed through Alizarin red staining at various time points during the culture period (days 7, 14, and 21). The samples were fixed in 3.7% paraformaldehyde (PFA) for 20 min, rinsed with distilled water, and then incubated with a 40 mM Alizarin red solution at room temperature for 30 min. Any excess dye was removed by washing with distilled water until the rinse solution became clear. The samples were then left to air-dry overnight before examination under brightfield microscopy. Images were captured from the same areas of interest in all samples.
2.11. Alkaline Phosphatase Activity Assay. To assess ASC osteogenic differentiation, the activity of the osteoblast-specific enzyme alkaline phosphatase (ALP) was quantified. The samples were first rinsed with PBS and then preserved by freezing until the time of testing. Upon thawing, the frozen scaffolds were rinsed with PBS and subsequently lysed with a 1% Tris−Triton X-100 solution. Freeze−thaw cycles were performed in triplicate, followed by a 10 min sonication (30 s pulses with 30 s breaks). After centrifugation to remove cell debris and other components, 40 μL of the supernatant was diluted at a 1:2 ratio with ALP Assay buffer (Abcam, USA); this mixture was then incubated in a 96-well culture dish, and 50 μL of the p-nitrophenyl phosphate (pNPP) substrate was added. The incubation occurred at 37 °C for 60 min. The reaction was halted by adding 40 μL of ALP Stop Solution (Abcam, USA), and the absorbance was measured at 405 nm. The ALP enzyme activity was determined based on a standard curve prepared with known concentrations of pNPP.
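The conversion from a sample absorbance to an activity value via such a standard curve can be sketched as follows. Only the 1:2 dilution and 60 min incubation come from the text; the standard concentrations, the slope of the curve, and the sample absorbance are invented for illustration.

```python
# Illustrative ALP activity calculation from a pNPP standard curve.
# Standards and sample absorbance are made-up numbers.

def fit_standard_curve(conc, absorbance):
    """Least-squares line A = m*c + b through the standards."""
    n = len(conc)
    mc = sum(conc) / n
    ma = sum(absorbance) / n
    m = sum((c - mc) * (a - ma) for c, a in zip(conc, absorbance)) / \
        sum((c - mc) ** 2 for c in conc)
    return m, ma - m * mc

def alp_activity(a405, m, b, minutes=60, dilution=2):
    """Convert a sample A405 to nmol pNP, correct for the 1:2 dilution,
    and express activity as nmol pNP formed per minute."""
    nmol = (a405 - b) / m * dilution
    return nmol / minutes

# Standards: 0-20 nmol pNPP giving a linear A405 response (slope 0.05/nmol).
std_nmol = [0, 4, 8, 12, 16, 20]
std_abs = [0.05 * c + 0.02 for c in std_nmol]
m, b = fit_standard_curve(std_nmol, std_abs)
activity = alp_activity(0.52, m, b)
print(round(activity, 4))
```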
2.12. Immunocytochemistry. The analyses of ASC-derived osteogenesis and angiogenesis within the grafts were conducted through immunocytochemistry. In brief, samples were fixed with 3.7% PFA on days 7, 14, and 21. Subsequently, the samples were rinsed with PBS and incubated in 1× PBS with 0.1% Tween 20 and 1% BSA for 2 h at 4 °C. This step served the purpose of preventing nonspecific binding and permeabilizing the cell membranes. Following blocking and permeabilization, the samples were exposed to primary antibodies [mouse antiosteopontin (1:500) (Abcam ab8448) and rabbit anti-CD31 (1:200) (Abcam ab9498)] diluted in PBS containing 1% BSA. This incubation took place at 4 °C overnight. Subsequently, the samples were treated with fluorochrome-conjugated secondary antibodies [Alexa Fluor 488-conjugated goat antimouse IgG (1:200) and Alexa Fluor 594-conjugated goat antirabbit IgG (1:200)] for 3 h at room temperature. To visualize cell nuclei, DAPI staining was carried out for 10 min at room temperature. Imaging was conducted using CLSM (Zeiss).
2.13. Statistical Analysis. All quantitative results were presented as means ± standard deviation (n > 3). Statistical significance between groups (p < 0.05) was assessed through one-way analysis of variance (ANOVA), followed by Tukey's post hoc tests.
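The ANOVA step can be sketched in a few lines of Python. This is a minimal illustration (Tukey's post hoc test is omitted for brevity), and the three groups of readings are invented, not data from the study.

```python
# Minimal one-way ANOVA sketch: returns the F statistic and the
# between/within degrees of freedom for k independent groups.

def one_way_anova(*groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Example: invented readings (arbitrary units) for three hypothetical groups.
no_induction = [1.0, 1.2, 0.9, 1.1]
osteo_medium = [1.8, 2.0, 1.9, 2.1]
capsules = [2.6, 2.8, 2.7, 2.9]
F, df_b, df_w = one_way_anova(no_induction, osteo_medium, capsules)
print(round(F, 1), df_b, df_w)
```

In practice, libraries such as SciPy (`scipy.stats.f_oneway`) and statsmodels (Tukey HSD) would be used rather than a hand-rolled implementation.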
3.1. PCL Microcapsule Production and Assessment of Release Kinetics. Microparticles made from PCL were developed to encapsulate BMP-2 and VEGF. Prior to this, BSA was employed as a model protein to investigate the release kinetics and encapsulation efficiency. Different formulations of PCL solutions (5, 10, 15, and 20% (w/v)) were prepared. No capsular structure formation was observed with the use of 5% (w/v) PCL. PCL concentrations of 10% (w/v) and higher led to the formation of microcapsules, as shown in Figure 2A.
The frequency distribution of the microcapsules was evaluated with ImageJ software based on the SEM images at 8000× magnification. For this, spherical structures were detected and their diameters were measured with the straight-line tool of the software. GraphPad's histogram analysis tool was used to plot the frequency distribution of the microcapsules over the particle diameter, as shown in Figure 2B (n = 10). For all groups, the particle size varied between 50 nm and 2 μm. For 10% (w/v) PCL capsules, the highest frequency of particle diameters was between 800 and 1400 nm, whereas smaller and larger particles were present in relatively smaller amounts. Particles with diameters below 400 nm were not observed under 20% (w/v) conditions. The majority of the capsule diameters in the 15 and 20% (w/v) PCL groups were 1000 nm and above. The average diameter for the 10% (w/v) condition was 1200 nm, while this average goes up to 1900 nm for the 15 and 20% w/v conditions (Figure 2C).
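The binning behind such a frequency plot is straightforward to reproduce. The sketch below is illustrative: the diameters, bin width, and range are assumptions, not the measured data set.

```python
# Sketch of a frequency-distribution analysis: bin measured particle
# diameters (nm) into fixed-width classes and report the mean diameter.
# The diameters below are invented for illustration.

def size_histogram(diameters_nm, bin_width=200, d_min=0, d_max=2000):
    edges = list(range(d_min, d_max + bin_width, bin_width))
    counts = [0] * (len(edges) - 1)
    for d in diameters_nm:
        # clamp values at the upper edge into the last bin
        i = min((d - d_min) // bin_width, len(counts) - 1)
        counts[i] += 1
    return list(zip(edges[:-1], edges[1:], counts))

diameters = [850, 900, 1100, 1200, 1200, 1300, 1350, 1400, 1600, 1900]
mean_d = sum(diameters) / len(diameters)
for lo, hi, c in size_histogram(diameters):
    if c:
        print(f"{lo}-{hi} nm: {c}")
print("mean:", mean_d, "nm")
```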
Encapsulation efficiency of the microcapsules was determined by using BSA as the model protein. Results indicated that the encapsulation efficiency was not affected significantly by the particle diameter and/or the size distribution. Encapsulation efficiencies of 10, 15, and 20% (w/v) PCL microparticles were found to be 49.32% ± 2.50%, 57.06% ± 1.09%, and 60.18% ± 0.73%, respectively. Encapsulation efficiency increased with increasing PCL concentration, although the increase was not statistically significant.
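Encapsulation efficiency (EE%) is conventionally computed as the fraction of protein actually entrapped relative to the amount initially loaded; the paper does not give its exact formula, so the convention and the BSA amounts below are assumptions for illustration.

```python
# Conventional EE% calculation: encapsulated amount over loaded amount.
# The 10 mg load and 4.93 mg entrapped values are invented, chosen only
# to land near the ~49.3% reported for 10% (w/v) PCL.

def encapsulation_efficiency(encapsulated_mg, loaded_mg):
    return 100.0 * encapsulated_mg / loaded_mg

ee = encapsulation_efficiency(4.93, 10.0)
print(round(ee, 2))
```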
The impact of PCL concentration on the release kinetics, again with BSA as a model protein, was examined for both free particles and particle-laden bioink. The cumulative release profiles for the two groups are shown in Figure 2D and E, respectively. It was observed that increasing the polymer concentration from 10 to 20% caused the BSA release to slow over time. Considering the thickened capsule walls, the increased polymer concentration led to a more sustained release of the content. Furthermore, embedding the capsules into the hydrogel matrix gave BSA an even more sustained release profile over time.
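A cumulative release profile of this kind is built by summing the amount released in each sampling interval and expressing it as a percentage of the encapsulated load. The values below are invented for illustration; the paper does not report per-interval amounts.

```python
# Sketch: convert per-interval released amounts (mg) into a cumulative
# percentage of the total encapsulated load. All numbers are illustrative.

def cumulative_release(released_mg, total_mg):
    out, running = [], 0.0
    for r in released_mg:
        running += r
        out.append(100.0 * running / total_mg)
    return out

days = [1, 3, 7, 14, 21]
released = [0.8, 0.6, 0.5, 0.4, 0.3]  # mg BSA released per interval
cum = cumulative_release(released, total_mg=4.93)
for d, p in zip(days, cum):
    print(f"day {d}: {p:.1f}% released")
```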
3.2. Scaffold Mechanical Properties in Relation with the Vessel Configuration. Vascularized bone scaffolds were produced with preformed vessel structures. The 3D vessel configuration was optimized for a higher compressive strength. For this, the effect of fiber distance during 3D printing was evaluated first. Figure 3A depicts the fiber distance and orientation, while Figure 3B shows the corresponding compressive modulus. The Young's moduli of the scaffolds decreased as the distance between the fibers increased, as expected, due to the reduced material presence. Although Square Grid-1 (SG-1) exhibited the highest compressive modulus, it was not chosen due to its diminished porosity. Instead, the SG3 scaffold geometry, with a 1 mm fiber distance and appropriate porosity, was selected for further studies.
Several designs were developed in order to emulate the Haversian canal architecture of native bone tissue, with the goal of optimizing the vessel density across the graft without hindering the mechanical stability and structural integrity. For this, a single canal was implemented in the center of the square grid. Subsequently, the canal layout was further optimized by introducing new canals into the configuration, as illustrated in Figure 3C. The results of the compression test are presented in Figure 3D,E. According to these, the single-vessel (SV) structure resulted in a compressive modulus of 30.23 ± 2.30 MPa, which gradually decreased as new canals were added (TV = 29.35 ± 3.30 MPa, FV = 26.94 ± 2.50 MPa). Diagonal placement of the vessels (CTV) further reduced the compressive modulus to 27.04 ± 3.98 MPa in comparison with TV (Figure 3D).
The 3D vessel configuration pattern was varied from straight lines to continuous slaloms in order to assess the effect of this biomimetic orientation on the mechanical properties of the scaffolds. This change resulted in slightly higher compressive moduli, as seen in the SS vs SV comparison. A similar trend emerged when the TS vs TV and FS vs FV models were compared with each other. Furthermore, it was revealed that there were no statistically significant differences in ultimate compressive strength among the different vessel configurations (Figure 3E).
3.3. Characterization of PCL Particles Embedded within the Alginate Bioink Matrix.
The morphology of 3D-printed PCL scaffolds without (Figure 4A,B) and with (Figure 4C) the alginate bioink matrix was investigated using SEM imaging. The presence of PCL particles within the alginate matrix was visualized. FTIR spectroscopy was utilized to validate the presence of growth-factor-laden PCL particles within the alginate bioink matrix, and the corresponding spectra are presented in Figure 4D. Given that PCL was employed both in the scaffolding of the bone matrix and in the particle preparation, it was noted that the transmission values for the PCL filament and the capsule analysis results displayed similar patterns in the common bands. Specifically, the prominent IR peak observed at 1720−1722 cm−1 corresponds to in-plane vibrations of the C═O bond. The shared peaks at 2865−2867 and 2942−2943 cm−1 represent the strong symmetric and antisymmetric stretch bands of the C−H bond, respectively.
The presence of alginate within the structure was confirmed by the corresponding peak. Consequently, the broad stretch band of the O−H hydrogen bond at 3336−3348 cm−1 was evident in both spectra (alginate and alginate + PCL capsule). The peak at 1638−1639 cm−1 signified the asymmetric stretching vibration of the O−C−O carboxylate group in the alginate. Upon examination of the alginate−capsule group, it was evident that vibration bands at 2866 and 1721 cm−1 were present, originating from both the alginate solution (at 3348 and 1639 cm−1) and the PCL capsule.
3.4. Cell Viability and Proliferation Is Maintained within the Grafts under Static Culture Conditions.
All cell culture experiments were carried out with the following experimental groups: (I) no induction, (II) osteogenic medium, and (III) capsule incorporation (BMP-2 and VEGF release from scaffolds). Initially, cell viability was assessed qualitatively using the Live/Dead cell viability assay, and subsequently, metabolic activity was quantified through the Alamar Blue assay. The results revealed that cell viability was maintained in the bioink despite the fact that cells were exposed to physical strain and high thermal exposure during the printing process, and cells proliferated enough to cover the entire surface of the graft by day 7 (Figure 5A). The introduction of PCL microcapsules into the system did not significantly impact the number of viable cells, as shown in Figure 5B. However, the induction of osteogenic differentiation led to considerably less proliferation compared to the control group. Similarly, it was observed that the addition of capsules containing BMP-2 and VEGF had no significant effect on cell viability. Examination of the groups over different time periods indicated a statistically significant increase in the number of viable cells between day 1 and day 21, signifying cell proliferation within the bioprinted structure.
The gradual increase in the number of viable cells from day 7 to day 21 aligned well with the cell viability results described above (Figure 5C). When encapsulated BMP-2 was integrated into the grafts, it was observed that cells had spread across the entire surface (Figure 5D), a finding consistent with the quantified number of viable cells within the grafts.
3.5. Osteogenesis and Vascularization Is Triggered Spatially by the Inherent Dual Growth Factor Delivery System. BMP-2 and VEGF were encapsulated within PCL particles to create the osteogenic and angiogenic bioinks combined with the ASCs. The impact of this dual temporal growth factor delivery on the local osteogenic differentiation and vessel formation was evaluated under static culture conditions. Figure 6A,B displays the distribution and intensity of red in Alizarin red staining. The variation in red color intensity among groups reflects the extent of Ca2+ deposition within the scaffolds. Significant differences in red color intensity were observed between images taken on day 7 and day 14 for the BMP-2-treated group, as well as between day 7 and day 21 for the osteogenic medium group.
Assessment of ALP activity is a commonly used method for quantifying osteogenic differentiation.33,34 ALP activity significantly increased in the capsule-incorporated group on both days 14 and 21 when compared to both the osteogenic medium and no-induction groups (Figure 6C). This gradual increase in activity can be attributed to osteogenic differentiation initiated by the presence of BMP-2. These findings align with a study by Tian et al., which emphasized ALP as a vital marker for osteogenic differentiation, closely associated with the presence of BMP-2.35 In order to analyze the effects of spatial delivery of both growth factors BMP-2 and VEGF, samples from day 7 to day 21 underwent double immunostaining with antiosteopontin and anti-CD31. Immunofluorescence staining for osteopontin is frequently used in the evaluation of ASC osteogenic differentiation.36 The results demonstrated that spatial delivery of growth factors successfully induced local vascularization in the canal section and osteogenesis in the remaining regions of the scaffold (Figure 6D).
3.6. Perfusion Culture. Scaffolds 3D bioprinted together with the spatial osteogenic and angiogenic bioinks were cultured under perfusion culture (Figure 7A,B). ASC- and dual growth factor delivery system-laden scaffolds were cultured in the bioreactor for 21 days. At the end of days 7, 14, and 21 in the perfusion culture, cells were fixed and stained for their actin filaments and nuclei to examine cell migration and morphology within the grafts (Figure 7C). The number of viable cells was determined by analyzing the images stained with DAPI, as shown in Figure 7D. It was observed that there was a statistically significant increase in the number of cells at day 21 of the perfusion culture.
ALP activity assessment was performed in order to quantitatively measure osteogenic differentiation. The amount of p-nitrophenyl phosphate converted in each chamber under dynamic culture per minute was read as the absorbance of p-nitrophenol (Figure 7E). It was observed that, along with the number of cells, the amount of total ALP also increased, although this increase was less statistically pronounced than the increase in cell number over the culture period.
The intensity of the red color and its distribution in the images of Alizarin red-stained grafts cultured under perfusion culture reflect the calcium deposition on the samples due to osteogenic differentiation. Similar to the pattern observed in ALP activity, there was a notable increase in calcium (Ca2+) deposition over the course of 21 days. Importantly, this increase was statistically significant (p < 0.0001) when compared to the changes in ALP activity (Figure 7F). The quantified color intensity (intensity of red color per image) results are presented in Figure 7G, as measured with ImageJ. A white area was selected for background subtraction, and measurements of average light intensity were taken from five different regions of interest with the same area.
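The underlying arithmetic of this quantification can be sketched as follows. The pixel intensities and the "background minus ROI" darkness convention are assumptions made for illustration; the actual measurement was done in ImageJ on the micrographs.

```python
# Sketch of stain-intensity quantification: average the mean intensity of
# five regions of interest relative to a white (background) reference.
# All pixel values below are invented 8-bit intensities.

def mean_intensity(region):
    return sum(region) / len(region)

def roi_darkness(rois, background):
    """Average (background - ROI) intensity; higher = darker stain."""
    bg = mean_intensity(background)
    return sum(bg - mean_intensity(r) for r in rois) / len(rois)

background = [245, 250, 248, 252, 247]  # near-white reference area
rois = [
    [120, 130, 125], [110, 115, 118],
    [140, 138, 142], [128, 132, 130], [119, 121, 123],
]
darkness = roi_darkness(rois, background)
print(round(darkness, 1))
```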
To monitor osteogenic differentiation and vascularization within the graft simultaneously, 3D grafts were fixed and double-stained with antiosteopontin/anti-CD31 primary antibodies. It was observed that, similar to the results observed under static culture, vascularization in the canal section and osteogenesis in the rest of the scaffold were successfully induced by the spatial growth factor cues provided (Figure 7H).
It was also shown, as a proof of principle, that anatomically shaped scaffolds can be produced with the two-straight-canals-per-unit-volume configuration to yield prevascularized scaffolds coprinted with osteogenic and angiogenic bioinks, and that these anatomically shaped grafts can be cultured within the perfusion bioreactor system, as shown in Figure 8.
DISCUSSION
The tissue engineering field actively seeks solutions to meet the need for large-scale, vascularized bone substitutes for clinical transplantation. However, most of the strategies applied involve complex material combinations that are less feasible and harder to adapt to clinical applications. The scope of this work was to fabricate an off-the-shelf construct produced from simpler components (PCL as the only load-bearing component, alginate as the bioink material, and ASCs as the single source of cells) to produce a vascularized bone graft.
The graft was designed and 3D printed to mimic the native macro- and microarchitecture of bone, including the Haversian canal systems. In order to make the bone substitutes functional, osteogenic induction and control of cellular behavior within the engineered graft must be addressed. If these parameters are controlled, osteogenic differentiation of ASCs and vascularization can be achieved. BMP-2 and VEGF are two potent growth factors that stimulate osteogenesis and angiogenesis of ASC populations, respectively. In this work, the spatial delivery of BMP-2 and VEGF was implemented by positioning the capsules that encapsulate them within specific regions of the 3D-printed construct.
Higher amounts and relatively uniform capsular structures were produced by using a 10% w/v polymer solution. In the preliminary studies, it was found that a 5% PCL concentration was not sufficient to form capsules; thus, the SEM characterization of that batch could not be conducted. The insufficient formation and particle loss during fabrication concur with the previous results.4 The particle size distribution for increasing concentrations of PCL is shown in Figure 2C. The average diameter of the 10% w/v batch was found to be 1200 nm, whereas the average goes up to 1900 nm for the 15 and 20% w/v batches. Since Iqbal et al.37 report that the average particle size is dependent on the duration of the ultrasonication process, the sonication period can be optimized to obtain a better particle distribution. In further studies, new modifications of the particle production process can be investigated to decrease the average particle size. Byun et al.38 suggested that an increased polymer concentration results in higher efficiency since the average particle size is higher. Since bigger particles lead to needle tip obstruction during 3D printing, it was decided to continue with the 10% polymer concentration rather than maximize the encapsulation efficiency.
In light of the capsule diameter, EE, and release profile results, the most suitable polymer concentration was selected for the controlled release of growth factors. Even though the EE% is higher in the 15 and 20% w/v groups, the total cumulative BSA release at the end of 21 days was found to be greater at the 10% PCL concentration; thus, the 10% PCL concentration was used for VEGF and BMP-2 encapsulation.

Figure 8. Patient DICOM data with cranial defects were obtained from MRI scans. The 3D model generation and implant design were performed using MIMICS software (Materialise) according to these defects (left-hand side). After radiological image segmentation, 3D image creation (surface rendering) was carried out, followed by 3D implant design using the MIMICS software's "Materialise 3-matic" module. The 3D models produced for patients from DICOM images were obtained as .stl files and further used in 3D printing, as shown in the middle images. Anatomically shaped scaffolds were 3D printed with two straight-vessel configurations within the unit volume. Moreover, these structures were 3D bioprinted with osteogenic and angiogenic bioinks in the bone and vessel regions to induce spatial ASC osteogenesis and the formation of vascular structures. The anatomically shaped vascularized bone scaffolds can be cultured within a perfusion bioreactor system.
Scaffold architecture has been reported to be of utmost importance in terms of cell response. It was found that SG-1 has the highest compressive modulus; however, due to its diminished porosity, it was not suitable for in vitro studies. That is why the SG3 scaffold geometry with a 1 mm fiber distance offers the best trade-off between porosity and Young's modulus. Numerous studies31,39−41 reported similar porosity and pore interconnectivity for the scaffolds.
In order to imitate the Haversian canal system of bone, and thus to create vascular structures, a central vessel was designed inside the square grid structure that represents the bone matrix. When the Young's modulus values are compared, it was observed that the modulus decreased when TV or FV vessel structures were added per unit volume. A further decrease was determined when the bilateral vessel placement within the unit volume was changed from straight to diagonal (cross) placement. When the twisted vessel layouts are compared with the straight ones, a high Young's modulus value was obtained for the single slalom (SS), while lower and closer values were obtained for the double (TS) and quadruple (FS) slaloms. Compared with all canal models except SS and TV, the SV model showed significant differences in the Young's modulus value.
The cell viability and uniform distribution of cells across the grafts were critical indicators of construct success. Utilizing Live/Dead cell viability assays and Alamar Blue assays, we evaluated the effects of the different experimental groups, including those with and without encapsulated growth factors. Intriguingly, we observed that cell viability was well maintained within the alginate bioink during the fabrication process, with cells promptly colonizing the scaffold surface by day 7. The incorporation of PCL carriers did not substantially alter the viable cell count, suggesting favorable compatibility of this composite system. Importantly, the induction of osteogenic differentiation yielded valuable insights, with a noticeable reduction in cell proliferation compared to the control group. This finding underlines the influence of microenvironmental cues on cellular behavior, aligning with our previous findings and reports by other researchers.31,42,43 Furthermore, when comparing induction medium groups, the presence of BMP-2 and VEGF within capsules demonstrated minimal impact on cell viability, affirming the controlled and targeted delivery of these growth factors.
The success of our approach hinged on the effective delivery of BMP-2 and VEGF within the grafts, stimulating osteogenic and angiogenic processes. Alizarin red staining enabled the visualization of calcium deposition, which is a key indicator of osteogenic differentiation. The variation in red color intensity across time points reflected dynamic changes in calcium deposition, emphasizing the role of temporal growth factor cues. Furthermore, the quantification of ALP activity, a hallmark of osteogenic differentiation, highlighted the efficacy of the dual growth factor delivery system.−46 Immunofluorescence staining corroborated the successful local induction of osteogenesis and vascularization within the scaffold, underscoring the potential of our spatial growth factor delivery strategy.
Transitioning into perfusion culture allowed us to explore the effects of continuous nutrient supply and mechanical stimulation on graft development.47 The increase in cell number and ALP activity over the 21-day period hinted at the dynamic interplay between perfusion conditions and cellular behavior. Interestingly, the observed increase in calcium deposition, as indicated by Alizarin red staining, exhibited a statistically significant rise compared to ALP activity, indicating an intricate relationship between mineralization and osteogenic markers. The combination of dual staining with antiosteopontin and anti-CD31 antibodies further confirmed our scaffold's ability to concurrently induce vascularization and osteogenic differentiation.
Moreover, our study provided a proof of concept for anatomically shaped, prevascularized grafts cofabricated with osteogenic and angiogenic bioinks. This advancement holds significant promise for personalized regenerative medicine strategies, where patient-specific constructs could be tailored to mimic native bone architecture and functionality.48
CONCLUSIONS
In this work, the spatial delivery of BMP-2 and VEGF was applied by positioning capsules that encapsulate them within specific regions of the 3D-bioprinted construct. Results of both static and perfusion cell culture studies showed that ASCs successfully proliferated within the grafts. In addition to positive immunostaining for the osteogenic marker osteopontin, Alizarin red staining and quantification of ALP production quantitatively reflect that osteogenic induction of ASCs was achieved. Moreover, the vascularization process of the endothelial population within the ASCs was evaluated with immunostaining for the CD31 marker, which also proved spatial vascularization through induction with VEGF. In conclusion, this work presents a novel yet simple vascularized bone graft production procedure that can possibly be developed into an off-the-shelf product for clinical use in the future.
Bioprinted vascularized bone scaffolds cultured in a perfusion bioreactor system (3D CulturePro Electroforce, TA Instruments) for 21 days to ensure the effective nutrition and homogeneously distributed cell survival in the 3D matrix (MP4)
Figure 3 .
Figure 3. (A) Scaffold layouts with varying fiber distances and pore sizes. (B) Compressive modulus of canal-free square grids at 10% deformation. (C) CAD models of varying vessel configurations. (D) Compressive Young's modulus. (E) Ultimate compressive strength of scaffolds with varying vessel configurations.
Figure 7 .
Figure 7. (A) Schematic representation of the perfusion bioreactor system that operates within the CO2 incubator, depicting the inlet, outlet, flow direction, and sample holders. (B) Assembly of the perfusion bioreactor system showing the connection ports and tubing. (C) Phalloidin (green)/DAPI (blue) staining of cells cultured within the grafts in perfusion culture representing homogeneous cell distribution over the filaments of the graft (scale bar = 200 μm). (D) Quantification of the viable cell number within the 3D grafts during perfusion culture. (E) ALP activity indicated ASC osteogenesis under perfusion culture during 21 days. ALP enzyme activity reflects the amount of p-NP formed per minute. Data are expressed as mean ± standard deviation of three samples. (F) Quantification of color intensity in Alizarin red staining (p values: *<0.05, ****<0.0001). (G) Light microscopy images representing the Alizarin red staining of samples under perfusion culture on days 7, 14, and 21. Scale bar = 100 μm. (H) Immunofluorescent staining for osteopontin (red), CD31 (green), and DAPI (blue) on day 21 under perfusion culture. Scale bar = 50 μm.
Justifying the Dependability and Security of Business-Critical Blockchain-based Applications
In the industry, blockchains are increasingly used as the backbone of product and process traceability. Blockchain-based traceability participates in the demonstration of product and/or process compliance with existing safety standards or quality criteria. In this perspective, services and applications built on top of blockchains are business-critical applications, because an intended failure or corruption of the system can lead to an important reputation loss regarding the products or the processes involved. The development of a blockchain-based business-critical application must be then conducted carefully, requiring a thorough justification of its dependability and security. To this end, this paper encourages an engineering perspective rooted in well-understood tools and concepts borrowed from the engineering of safety-critical systems. Concretely, we use a justification framework, called CAE (Claim, Argument, Evidence), by following an approach based on assurance cases, in order to provide convincing arguments that a business-critical blockchain-based application is dependable and secure. The application of this approach is sketched with a case study based on the blockchain HYPERLEDGER FABRIC.
I. INTRODUCTION
As a concept, blockchains are expected to uphold applications with valuable properties such as data immutability, transparency and accountability in many areas such as finance, industry and public services. As the properties of blockchains lead to use cases that involve critical business missions such as certification, financial exchanges or traceability, many blockchain-based systems are considered "business-critical systems", whose Dependability and Security (D&S) must be thoroughly justified from the design to the operational phases to decision-makers including senior management and regulators. Usually, dependability engineering is used to deal with safety-critical systems (like transportation, aerospace, ...) whose failure can have a significant impact on people, environment or assets, all of which are valuable in the physical world. By analogy, a business-critical system performs missions whose failure can have a significant impact on the intangible assets (like financial, legal, reputation ...) of an organisation; for this reason dependability engineering is also relevant for these systems.
Unfortunately, there is currently a lack of standards that can be used as a guide for engineering business-critical systems based on blockchains. Numerous published works contribute to strengthening the D&S of specific blockchain protocols. On the one hand, formal works are conducted to prove desirable properties of the distributed protocols (like the Byzantine fault tolerance of consensus algorithms [4] or the safety and liveness of transaction validation [7]). On the other hand, analyses are performed to identify specific risks and investigate mitigation solutions (e.g., [12] for Bitcoin, [8] for Hyperledger Fabric, [25] for public blockchains...). Nevertheless, it is not trivial to determine how this collection of evidence can be put together to build an end-to-end justification that a business-critical application relying on a blockchain system meets its high-level D&S requirements.
Therefore, this paper addresses the following issue: how can a collection of risk mitigation measures be organized into an argumentation justifying the D&S of a blockchain application? It proposes a dedicated approach as a contribution.
The proposed approach is inspired by "assurance (security and safety) cases" [6], [14], coming from the nuclear and aerospace industries [5], [9], [19]: a structured argument, supported by evidence, intended to justify that a system is acceptably assured relative to a concern (such as reliability or security) in its intended operating environment. In a nutshell, the argumentation proposed in this paper consists of decomposing the system under study into functional elements representing the elementary services it should deliver. The D&S justification is then conducted via the provision of arguments and evidence documenting how the identified risks have been mitigated. To structure this argumentation, we adopt the Claim-Argument-Evidence (CAE) framework [6], [11].
We believe that this paper will help pave the way for the industrial engineering of blockchain-based systems, thus promoting their use and acceptability.
The paper is organised as follows: section II develops the industrial motivation and the related works. Section III proposes a general guideline for engineering a dependable and secure blockchain-based system, introducing in particular a CAE template for blockchain-based applications. Section IV instantiates the CAE template on a fictitious application based on the HYPERLEDGER FABRIC blockchain. Finally, section V concludes this paper by introducing our ongoing work on the dependability engineering and use of blockchain-based applications.
II. INDUSTRIAL MOTIVATION AND RELATED WORKS
Regulated industrial activities (such as nuclear) require a reliable and secure traceability of quality-demonstrating data collected during the life-cycle of safety-critical equipment (fabrication, qualification, maintenance...). These data are distributed among all the actors of the industrial sector (integrators, suppliers, subcontractors...), which makes it difficult to ensure their long-term integrity and consistency. Trust in these data is generally provided by costly third-party certification procedures ensuring only a partial, arbitrary coverage. Blockchain technologies (or, more generally, distributed ledgers) promise the integrity and availability of the registered data, while allowing control of the data through smart contracts ensuring their compliance and consistency. These properties are a priori relevant arguments for gaining the trust of regulators in a certification-aided system based on blockchain technologies [23]. Since the targeted data should prove the quality of safety-critical equipment, the registering system is business-critical. Therefore, to be ultimately acceptable to regulators, the D&S of such a system should be thoroughly justified (dependability is the ability to deliver a service that can justifiably be trusted [3]).
In the literature, several studies contribute to strengthening the D&S of blockchain-based applications by analysing the risks of specific blockchain implementations and proposing mitigation measures [1], [8], [12], [15], [20], [24], [25]. These works provide a large number of factual pieces of evidence of different natures (simulation, demonstration, testing, statistical analysis, formal analysis, etc.) contributing to building the assurance case. Nevertheless, to manage the complexity of the whole system, we claim that a framework is needed to build upon them an argumentation capturing, without any missing links, the whole justification chain of the top-level claim: "the application is dependable and secure". The notion of a justification framework refers to structuring and capitalizing on the reasoning chain followed throughout the design process of a complex system to provision its D&S (by ensuring that there is a clear link between D&S claims and D&S design decisions). Such an approach is called "assurance (security and safety) cases" in [6]; these are defined as "documented bodies of evidence that provide valid and convincing arguments that a system is adequately dependable in a given application and environment" [19].
The two main identified justification frameworks are the Goal Structuring Notation (GSN) [16] and the Claim-Argument-Evidence (CAE) framework [5], [11], which are highly recommended by safety-critical systems regulators [19] and are standardized in [14]. The main difference between the two frameworks resides in the characterization of arguments: GSN uses a generic notion of argument strategy, whereas CAE introduces three argument types: decomposition, substitution and concretization. Hence CAE can be seen as a refinement of GSN. For this reason, we adopt the CAE framework in this paper.
III. ON THE ENGINEERING OF DEPENDABLE AND SECURE BLOCKCHAIN BASED SYSTEMS
This section proposes a guideline for engineering a business-critical blockchain-based system while justifying the fulfillment of its D&S requirements. It mainly refines, for blockchains, the usual system engineering workflow of mission-critical system design, which entails a functional analysis: breaking the system down into smaller functional elements.
A. Functional analysis
D&S requirements refer to the expected functionalities of the system. The first step is therefore to perform a functional analysis. Figure 1 illustrates the fundamental functions of a blockchain system: as a distributed ledger, it must asynchronously handle read and write operation requests. To be eventually registered following a write operation, data must be encapsulated in transactions that are syntactically correct and semantically valid. Validity criteria depend both on the system specification (e.g. a transaction must carry a valid signature) and on the application needs, which are usually enforced by smart contracts (e.g. a numerical value extracted from the transaction data must be in a specific range). The yellow note in Figure 1 gives an example of validity criteria matching token-based blockchain protocol specifications. Moreover, as a distributed system, a blockchain is expected to guarantee consistency: users should read consistent values despite the underlying distribution (at this stage of the system description, consistency can be described in an intuitive way and refined later through formal specifications [22]). To summarize, the functional elements (FE) of a distributed ledger system are specified as follows:
• FE1: register any valid transaction eventually
• FE2: register only valid transactions
• FE3: answer consistently to read requests
These fundamental FEs map easily to classical requirements of distributed systems: FE1 expresses a liveness requirement, while FE2 and FE3 express safety requirements (validity and consistency). It is important to notice that some blockchains may weaken their service with respect to these FEs (and the associated requirements). Typically, public blockchains may occasionally fail to register valid transactions or temporarily fail to guarantee consistency under degraded network conditions.
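To make these three functional elements concrete, the following is a minimal single-node sketch in Python. It is purely illustrative (the class, method names and the example validity predicate are hypothetical, not modeled on any real blockchain protocol), but it shows how FE1-FE3 can be stated as executable obligations of a ledger service.

```python
class ToyLedger:
    """Single-node model of a ledger service exposing FE1 (register any
    valid transaction eventually), FE2 (register only valid transactions)
    and FE3 (answer read requests consistently)."""

    def __init__(self, is_valid):
        self.is_valid = is_valid      # application-defined validity predicate
        self.chain = []               # ordered, append-only transaction log

    def write(self, tx):
        # FE2: reject invalid transactions outright.
        if not self.is_valid(tx):
            return False
        # FE1: every valid transaction is registered (here, immediately).
        self.chain.append(tx)
        return True

    def read(self):
        # FE3: reads reflect exactly the registered history, in order.
        return list(self.chain)

# Example validity criterion in the spirit of the yellow note of Figure 1:
# the transaction must be signed and carry an amount in a specific range.
def valid(tx):
    return tx.get("signed", False) and 0 < tx.get("amount", -1) <= 1000

ledger = ToyLedger(valid)
ledger.write({"signed": True, "amount": 50})    # accepted (FE1)
ledger.write({"signed": False, "amount": 50})   # rejected (FE2)
print(ledger.read())                            # consistent view (FE3)
```

In a real distributed ledger, FE1 and FE3 are the hard part: they must hold despite replication, asynchrony and faults, which is exactly what the consensus machinery discussed later provides.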
Given a blockchain protocol and an application logic, the validity criteria of transactions and the consistency criteria of read results should be defined. Moreover, depending on the application, additional functional elements may be expected. For example, if the system should protect its confidentiality then specific functional elements should be put in place.
B. Risk preliminary analysis
The second step of the engineering process is to list the risks of faults and failures and to classify them according to their criticality and likelihood. According to the functional analysis, the feared events to avoid here are:
1) registering of an invalid transaction
2) rejection or deletion of a valid transaction
3) inconsistent read result
Such failures can be refined to better match the application services. For example, the main feared event for financial applications is the well-known double spending, which compromises consistency: if the system is ill-designed or flawed, an attacker can register two conflicting transactions.
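A double spend can be illustrated with a toy UTXO-style model, where two transactions conflict when they spend the same unspent output. The sketch below is illustrative only (the data model and names are assumptions, not taken from any real blockchain): a correct system must accept the first spend and reject the conflicting second one.

```python
def apply_tx(utxos, tx):
    """Apply tx = (spent_output, new_output) to the set of unspent outputs.
    Returns True on success, False if the output was already spent,
    i.e. the transaction is the second leg of a double spend."""
    spent, created = tx
    if spent not in utxos:          # already consumed by a conflicting tx
        return False
    utxos.remove(spent)
    utxos.add(created)
    return True

utxos = {"coin-1"}
assert apply_tx(utxos, ("coin-1", "coin-2"))       # first spend succeeds
assert not apply_tx(utxos, ("coin-1", "coin-3"))   # double spend rejected
```

The consistency requirement (FE3) is what guarantees that all honest participants agree on which of the two conflicting transactions was applied.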
Such failures are induced by the propagation of errors in the system, activated under unfavorable conditions by the presence of elementary faults [3]. These fault risks should be identified as exhaustively as possible. As a computing system, a blockchain is subject to all the classical fault risks affecting both software and hardware (poor development quality, malicious activity, physical hazards...). But because blockchains are designed to be trustworthy, they by nature address workflows in which some actors have an interest in behaving incorrectly. It is therefore often very relevant to focus on Byzantine faults when analysing the risks of a blockchain application [18]. In particular, fraud and censorship are two kinds of malicious faults that an attacker can inject in an attempt to provoke, respectively, the feared events 1 and 2 above.
Additional non-malicious faults should also be thoroughly considered, depending on the protocol specification and the application logic. Typically, some blockchain applications suffer from performance issues, so that when the system is overloaded (too many transactions to handle in parallel), some valid transactions are unreasonably delayed or even aborted. This kind of failure results from non-malicious faults (poor development quality or misuse during the operational stage): the inadequacy of the system's performance to scale up to the application's needs.
C. Risk mitigation
The more critical fault risks should be mitigated by appropriate measures taken during the engineering or operational phase. According to [3], there are four categories of risk mitigation measures:
• fault prevention: applying best practices during the development stage is a simple way, often sufficient, to prevent many basic faults. Such recommendations are often supplied in the (official or side) documentation of a computing system (including blockchain-based ones). Fault prevention measures can also be deployed during the operational stage, e.g. by limiting system operation to permissioned users (for consortium blockchains like HYPERLEDGER FABRIC [2]).
• fault elimination: this is the main purpose of the verification stage in the development process. Development faults can be detected and then removed using dynamic methods in which the system is exercised (testing, symbolic execution). Furthermore, to prove that an implementation is free of faults, formal static methods may be exploited (static analysis, deductive verification, model-checking). During the operational phase, maintenance measures can be deployed to eliminate remaining faults.
• fault tolerance: this kind of mitigation measure is generally provided by design. Fault tolerance is obviously a natural key feature of distributed systems. Measures can be as simple as replication on a redundant architecture, or based on more complex techniques to detect, diagnose and handle errors and faults in order to recover an erroneous system. But to take advantage of a fault-tolerant design, particular attention must be paid to the system configuration. For example, if a Byzantine fault-tolerant consensus algorithm is deployed on an architecture of only 3 nodes, the system has a single point of failure, despite being theoretically fault-tolerant: such an algorithm can, by nature, tolerate only a proportion of Byzantine nodes strictly less than 1/3 [18]. The configuration of a fault-tolerant complex system can be guided by simulation techniques.
• fault forecasting: once a set of measures has been taken to prevent, eliminate and tolerate a large diversity of faults, the residual risks can be analysed by qualitative or quantitative assessment to check that the D&S requirements have been met. Simulation techniques can once again be exploited to perform this kind of analysis.
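The 3-node example above follows directly from the classical Byzantine fault-tolerance bound: with n nodes, at most f Byzantine nodes can be tolerated where f is strictly less than n/3 (equivalently, n >= 3f + 1). A small illustrative helper (not part of any blockchain API) makes the arithmetic explicit:

```python
def max_byzantine_faults(n):
    """Largest f such that f < n/3, i.e. f = floor((n - 1) / 3)."""
    return (n - 1) // 3

for n in (3, 4, 7, 10):
    print(n, "nodes tolerate", max_byzantine_faults(n), "Byzantine faults")
# With only 3 nodes the tolerated number of faults is 0: a single faulty
# node is a single point of failure, as noted in the text. The smallest
# architecture tolerating one Byzantine fault has 4 nodes.
```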
D. Justifying the D&S using a CAE tree
Because the D&S of a critical system must, by definition, be justified in order to be accepted [3], this section introduces a framework for structuring a justification argumentation. A justification of a claim is a structured collection of arguments built upon elementary evidence. This structure can be represented as a tree whose root is the claim to justify, whose intermediate nodes are argumentation steps and whose leaves are elementary facts. Such a formalism is called a Claim-Argument-Evidence (CAE) tree [5].
Three kinds of argumentation can be used to refine a high-level claim into several focused and less ambiguous claims:
• Decomposition: a claim is decomposed into a conjunction of simpler subclaims. When it is not trivial, the validity of the inference relation gives rise to an auxiliary subclaim. For example, a claim on a computing system may be decomposed into claims on its software and hardware components.
• Substitution: a claim related to a given object is transposed into the analogous claim related to an equivalent object. Again, when it is not trivial, the validity of the equivalence relation gives rise to an auxiliary subclaim. For example, a claim on the target system may be substituted by a claim on an equivalent simulated system.
• Concretization: an abstract claim is refined by introducing, for example, a definition or a quantified value.
Finally, evidence may consist of facts such as design choices that prevent or tolerate some faults by nature, or the results of analyses performed to eliminate or forecast other faults. There are two categories of evidence:
• Hypothesis: what should be commonly accepted
• Proof: what cannot be disputed
Figure 2 shows a template of a CAE tree that can be instantiated to justify the D&S of a given blockchain-based application (in our graphical representation, claims, arguments and evidence are colored in blue, yellow and green respectively; evidence is introduced by its type, "Hypothesis" or "Proof", colored in orange). The first argument is a decomposition justified by the general functional analysis made in subsection III-A. For a particular application built upon a specific blockchain protocol, the corresponding validity criteria for transactions and consistency criteria for read queries have to be defined. Finally, any risk identified during the risk analysis as affecting one of the Functional Elements (FE) must be addressed, either by providing evidence of its mitigation measures or by justifying its acceptability.
Note that this level of abstraction is independent of the type of blockchain.
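The CAE tree structure described above can be captured directly as a small data structure. The sketch below is a hypothetical modelling choice (class and field names are ours, not part of the CAE standard): a claim is supported either directly by evidence leaves, or through an argument refining it into subclaims that must all be supported in turn.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    kind: str          # "Hypothesis" or "Proof"
    text: str

@dataclass
class Claim:
    text: str
    argument: str = ""                              # e.g. "decomposition"
    subclaims: list = field(default_factory=list)   # Claim children
    evidence: list = field(default_factory=list)    # Evidence leaves

    def is_supported(self):
        """A claim is supported when it has direct evidence, or when its
        argument refines it into subclaims that are all supported."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)

root = Claim("The application is dependable and secure", "decomposition", [
    Claim("FE2: only valid transactions are registered",
          evidence=[Evidence("Proof", "verification test report")]),
    Claim("FE3: read requests are answered consistently"),   # still open
])
print(root.is_supported())  # False: one subclaim lacks evidence
```

Traversing the tree this way mechanizes the point made in the text: a missing link anywhere in the justification chain invalidates the top-level claim.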
IV. CASE STUDY: JUSTIFYING THE D&S OF A HYPERLEDGER FABRIC BASED APPLICATION
This section instantiates the CAE template represented in Figure 2 for an application built upon the HYPERLEDGER FABRIC blockchain [2]. The application is fictitious and will not be described, since the aim here is not to discuss the relevance of a dependable design for a specific application but to demonstrate the application of the CAE method to a given blockchain.
A. Refining the functional analysis
Subsection III-A introduces three generic functional elements (FE1, FE2 and FE3) of a blockchain-based application, which should be refined by specifying the validity criteria of the transactions and the consistency criteria of the read results. This subsection applies this refinement to a HYPERLEDGER FABRIC based application. 1) Validity criteria: Seven validity criteria are checked in turn whenever a transaction is processed. They can be split into two sets, each enforced by a different participant role in the protocol.
First, in the course of the execution phase, the endorsers have to assess whether three properties are met:
• V1 - the transaction emitter is legit: its signature has been recorded by the Membership Service Provider (MSP) and it belongs to the emitter set.
• V2 - the business logic is respected: this point is validated by a successful execution of the application chaincode.
• V3 - the transaction is unique: i.e. it has not already been acknowledged (as checked with a nonce-based anti-replay mechanism).
If these three conditions are met, the endorser peers endorse the transaction by sending its effects on the database (read and write sets) to the application's client. The reply is signed by the endorser. Once the client has received the endorsements, it can submit the transaction to the orderer service. The orderers execute a consensus algorithm to commit a new block, deciding on an order for the pending transactions. During this phase, no validity criterion is checked.
Finally, once any peer receives a new block, it performs the validation phase, checking whether:
• V4 - the endorsement policy has been respected: the transaction complies with the governance rules (which are part of the application configuration).
• V5 - the endorsers are legit: their signatures have been recorded by the Membership Service Provider (MSP) and they belong to the endorser set.
• V6 - the answers of the endorsers are consistent: they have computed the same effects on the database.
• V7 - the transaction is not in conflict with another one already applied: two transactions are in conflict if their effects on the database are conflicting (e.g. if the two transactions would write to the same key). This check follows the Multiversion Concurrency Control (MVCC) method commonly used by database management systems.
If these four conditions are met, the peers tag the transaction as valid and apply its effects to their local copy of the database (this means that the transaction has been accepted by the system).
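The two validation stages can be summarized as a pair of predicates over V1-V7. The sketch below is a deliberately simplified model: the data layout, function names and the quorum-count reading of the endorsement policy are our illustrative assumptions, standing in for the real HYPERLEDGER FABRIC mechanisms rather than reproducing its API.

```python
def endorser_checks(tx, msp_members, seen_nonces):
    """Execution-phase checks performed by an endorser (V1-V3)."""
    v1 = tx["emitter"] in msp_members            # V1: emitter is legit
    v2 = tx.get("chaincode_ok", False)           # V2: business logic respected
    v3 = tx["nonce"] not in seen_nonces          # V3: anti-replay uniqueness
    return v1 and v2 and v3

def peer_checks(tx, msp_members, policy_quorum, applied_writes):
    """Validation-phase checks performed by any peer on a new block (V4-V7)."""
    endorsements = tx.get("endorsements", [])
    v4 = len(endorsements) >= policy_quorum                       # V4: policy met
    v5 = all(e["signer"] in msp_members for e in endorsements)    # V5: legit signers
    v6 = len({e["write_set"] for e in endorsements}) == 1         # V6: same effects
    v7 = not (set(tx["write_keys"]) & applied_writes)             # V7: MVCC conflict
    return v4 and v5 and v6 and v7

# Illustrative transaction passing both phases.
tx = {"emitter": "alice", "chaincode_ok": True, "nonce": 1,
      "endorsements": [{"signer": "e1", "write_set": ("k1",)},
                       {"signer": "e2", "write_set": ("k1",)}],
      "write_keys": ["k1"]}
msp = {"alice", "e1", "e2"}
assert endorser_checks(tx, msp, seen_nonces=set())
assert peer_checks(tx, msp, policy_quorum=2, applied_writes=set())
```

Note how V7 rejects the second of two transactions writing to the same key, which is exactly how the MVCC check prevents the double-spend failure discussed in section III.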
The CAE tree extract drawn in Figure 3 shows how the claim C1 (from Figure 2) can be concretized and then decomposed into subclaims related to the relevant HYPERLEDGER FABRIC components. First, the abstract notion of validity is concretized through the seven criteria listed above, resulting in the claim C1c. This claim is then decomposed into 5 subclaims. The subclaims C1c.1 and C1c.4 are straightforward since they introduce the two roles that check the validity criteria. The subclaim C1c.2 introduces the MSP component, since it is used by the endorsers and the peers to check the criteria V1 and V5 respectively. The subclaim C1c.3 introduces the orderer components because, although they do not apply any validation criterion, the committing of blocks is a prerequisite to the checking of criteria V4 to V7 by the peers. Finally, the subclaim C1c.5 introduces the communication network, because all these components communicate by message passing. The hypothesis H1c' justifies this decomposition by defining what the acceptance of a transaction exactly means.
2) Consistency criteria: Every peer in the system maintains a local replica of the application database (a key-value store). To answer read requests consistently on this database, peers have to synchronize their local states. This is done through the blocks of transactions committed by the orderer service and propagated to the network using a gossip protocol. Then, as we can see in Figure 4, the claim C2 can be concretized by introducing the consistency criterion C2c: two correct peers synchronized on the same block have the same state of their local key-value store. To achieve this criterion, HYPERLEDGER FABRIC uses three kinds of components, following an Execute-Order-Validate paradigm:
• Execute: the application's transactions are first executed in parallel by endorser peers against the current blockchain state. This execution may fail for different reasons (detailed in subsection IV-A1); it computes the effects of the transaction on the application database state but does not apply them.
• Order: pending transactions are ordered into a new block by orderer peers executing a raw consensus algorithm (no transaction validity criterion is checked during this phase).
• Validate: all peers apply the transactions that are valid after ordering, updating their local copy of the database (cf. subsection IV-A1).
Thus, assuming that the local databases of all correct peers are initialized with the same initial state (H2c'), if we can justify that the three involved sets of peers (endorsers, orderers and standard peers) are able to deliver their corresponding services (C2c.1, C2c.2 and C2c.3), then we justify the claim C2c and therefore the claim C2.
B. Risk analysis and mitigation instantiation examples
The refinement of the functional analysis results in several subclaims concerning the elementary services of the blockchain's components. To justify these subclaims, we have to demonstrate that any identified fault risk affecting these components has been mitigated and that the residual risks are acceptable. What follows exemplifies this process for the subclaims C1c.1 and C2c.2. Several published works identify generic risks regarding the HYPERLEDGER FABRIC blockchain ([1], [8]), but they can be completed or refined to better fit a specific application (for instance, some applications have no particular privacy concerns). Note that the purpose here is not to discuss the relevance of the risk analysis and the mitigation measures, but only to illustrate how they can contribute to the justification of higher-level claims. Figure 5 shows how the mitigation of these fault risks can be justified. The first kind of fault is eliminated by the testing methods applied by the Hyperledger foundation; the System Verification Test (SVT) report is used here as a proof (P1c.1.1). The second kind of fault is prevented by the application of formal methods that guarantee the absence of flaws and the correctness of the implementation with respect to a formal specification (P1c.1.2). Finally, the four other faults are mitigated by a fault-tolerant endorsement policy, which determines the quality and quantity of endorsements that a client of the application has to collect in order to validate its transaction proposal (cf. the validity criterion V4). The endorsement policy should be configured wisely to find a compromise between the tolerance of the different identified risks. Indeed, extreme policies cannot tolerate all kinds of faults: a policy requiring that all endorsers sign the transactions tolerates fraud attempts very well, but a single malicious endorser can easily censor a transaction.
Conversely, a policy requiring transactions to be signed by any single endorser tolerates censorship attempts very well, but a single malicious endorser may bypass the chaincode execution to fraudulently endorse an invalid transaction. For a particular application, depending on the actual participants and the trust relations between them, a compromise should be found, possibly resulting in a custom endorsement policy. The determination of such a fault-tolerant policy can be guided by a simulation-based approach. The report of this simulation analysis is used in the justification to establish that the residual risks are acceptable (P1c.1.3).
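The fraud/censorship trade-off can be quantified by modelling an endorsement policy as a "k signatures out of n endorsers" threshold. The numbers and helper functions below are illustrative assumptions (not the paper's simulation results, nor HYPERLEDGER FABRIC's actual policy syntax): forging an endorsement requires k colluding endorsers, while censoring requires n - k + 1 refusing endorsers.

```python
def fraud_tolerance(k):
    """Max number of colluding malicious endorsers that still cannot
    forge a k-of-n endorsement on their own (they need k signatures)."""
    return k - 1

def censorship_tolerance(n, k):
    """Max number of refusing endorsers that still cannot block a k-of-n
    endorsement (k honest signatures remain available)."""
    return n - k

n = 7
for k in (1, 4, 7):
    print(f"{k}-of-{n}: tolerates {fraud_tolerance(k)} fraudsters, "
          f"{censorship_tolerance(n, k)} censors")
# The extremes match the text: k = n resists fraud best but a single
# refusing endorser censors everything (censorship_tolerance = 0), while
# k = 1 resists censorship best but a single malicious endorser can
# commit fraud (fraud_tolerance = 0). Intermediate k values balance both.
```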
2) Risks impacting the orderers: Let us now consider that the designer of the target application has chosen to use the Raft consensus algorithm (HYPERLEDGER FABRIC supports several consensus algorithms). This algorithm is Crash-Fault Tolerant (CFT) [21]. This implies in particular that the fault analysis does not identify any risk of Byzantine faults among the orderer peers (for example, we could argue that in a permissioned blockchain, orderers have no rational interest in attacking the consensus). Figure 6 shows a CAE justifying the subclaim C2c.2. First, the claim is substituted by the claim C2c.2s, replacing the generic consensus algorithm with the particular Raft algorithm. Then, two mitigation measures are used to justify that Raft can deliver the ordering service, assuming the risk analysis conclusions (H2c.2s'):
• Bugs in etcd (the Raft implementation embedded in HYPERLEDGER FABRIC) have been eliminated (C2c.2s.1) by functional tests performed by its developers (P2c.2s.1).
• Design faults of Raft have been prevented (C2c.2s.2) by a formal proof provided by its author [21] (P2c.2s.2).
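The crash-fault-tolerance bound of a majority-based algorithm like Raft contrasts with the Byzantine bound discussed in section III-C: progress only requires a majority of the n orderers to be alive, so up to f crashed nodes are tolerated with n >= 2f + 1. The helper below is an illustrative sketch of this arithmetic, not part of HYPERLEDGER FABRIC's or etcd's API.

```python
def max_crash_faults(n):
    """Largest f such that a majority (n - f > n/2) of nodes remains,
    i.e. f = floor((n - 1) / 2) for a crash-fault-tolerant quorum."""
    return (n - 1) // 2

print([(n, max_crash_faults(n)) for n in (3, 5, 7)])  # [(3, 1), (5, 2), (7, 3)]
```

This is why, under the no-Byzantine-orderers assumption, a 3-node Raft ordering service already tolerates one failure, whereas the Byzantine bound would tolerate none.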
C. Discussion
We found that a CAE tree is a relevant framework for building confidence in the D&S of a blockchain-based application. Indeed:
• it enforces a well-structured argument, which is a necessary condition for managing the complexity of such systems,
• it supports discussion and reduces the time-to-agreement on what evidence is needed and what the evidence means,
• by establishing the argument structure early in the engineering process, it focuses D&S activities on the missing evidence,
• it enables the recognition of convincing argument patterns and thereby supports the monitoring of project progress towards successful qualification in the prospect of regulator acceptance.
Our case study is based on a permissioned blockchain, because such blockchains are natural candidates to support industrial business-critical systems. Nevertheless, we believe that our approach is generic enough to be applicable to permissionless blockchains, where risk mitigation measures should focus more on the operational phase than on the engineering phase of the considered application. The main difference would therefore be in the nature of the evidence, which would be provided more by probabilistic assessment than by deterministic analysis. These intuitions have to be verified in future work.
V. CONCLUSION AND PROSPECTS
This paper takes advantage of a justification framework, called Claim-Argument-Evidence [5], to build a convincing argument that a business-critical blockchain-based system is dependable and secure. This approach is inspired by the assurance cases that traditionally concern safety-critical systems. The framework is applied to a fictitious use case based on the HYPERLEDGER FABRIC blockchain. Such an approach requires a preliminary D&S engineering of the targeted application, in which risks are identified and mitigation measures are applied to prevent, eliminate, tolerate or forecast them. Since this approach is encouraged for demonstrating the D&S of a complex critical system to a regulatory auditor, we hope that our work will favor the acceptability of blockchains for regulated industrial applications.
For future work, we plan to apply our approach to real use cases (addressing the certification of industrial operations such as tensile testing or welding [23]) and to develop specific mitigation measures to justify their D&S. We are contributing in particular to the formal verification of HYPERLEDGER FABRIC smart contracts and to the multi-agent simulation of blockchain systems [13] (using the tools Why3 [10] and MAX [17], respectively).
"year": 2021,
"sha1": "ce60ace7f861f02657155d6757cdc2554569b8c8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ce60ace7f861f02657155d6757cdc2554569b8c8",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
“It Was as Though My Spirit Left, Like They Killed Me”: The Disruptive Impact of an HIV-Positive Diagnosis among Women in the Dominican Republic
An HIV diagnosis may be associated with severe emotional and psychological distress, which can contribute to delays in care or poor self-management. Few studies have explored the emotional, psychological, and psychosocial impacts of an HIV diagnosis on women in low-resource settings. We conducted in-depth interviews with 30 women living with HIV in the Dominican Republic. Interviews were audio-recorded, transcribed, and analyzed using the biographical disruption framework. Three disruption phases emerged (impacts of a diagnosis, postdiagnosis turning points, and integration). Nearly all respondents described the news as deeply distressful and feelings of depression and loss of self-worth were common. Several reported struggling with the decision to disclose—worrying about stigma. Postdiagnosis turning points consisted of a focus on survival and motherhood; social support (family members, friends, HIV community) promoted integration. The findings suggest a need for psychological resources and social support interventions to mitigate the negative impacts of an HIV diagnosis.
Introduction
An HIV-positive diagnosis can be a life-altering event with detrimental emotional and psychological effects. Research suggests that an HIV diagnosis can act as a traumatic stressor and is associated with mental distress and trauma. 1,2 Those newly diagnosed may experience emotional distress, fear of HIV status disclosure, and anxiety about HIV-related stigma and discrimination. [3][4][5] These impacts are of particular concern if they contribute to delays in seeking appropriate health-care services or impede appropriate self-management and adherence to antiretroviral therapy (ART). [6][7][8][9][10] Increased global access to ART and advances in treatment has transformed how individuals respond to an HIV-positive diagnosis. In the early years of the epidemic, individuals may have faced significantly direr impacts due to scarce to nonexistent treatment options and high levels of external stigma. Today, people with HIV are living longer and the condition has transitioned from an acute to chronic illness for many given the widespread availability of treatment. 4,[11][12][13] According to the biographical disruption framework, individuals diagnosed with a chronic illness may be confronted with a rupture in their sense of identity and in various domains of their life. 11,[14][15][16][17][18] Identity is an important element of psychological functioning for people living with HIV (PLHIV) and may influence decisions about treatment and self-management. 13 Individuals can accept or reject aspects of the illness (and treatment) as they reshape their biography and sense of self and identity. 4,19 HIV is also a stigmatized illness, 13 and the stigma can be particularly disruptive if it leads to the disintegration of social relationships or loss of economic opportunities. 
To the best of our knowledge, all prior work that has employed the biographical disruption framework to study the experience of living with HIV/AIDS has been conducted in Europe, 17,18,[20][21][22] Africa, 4,12,23 and the United States. 11,15,16 Furthermore, limited qualitative research has been conducted with Caribbean people to examine the emotional, psychological, and psychosocial impacts after an HIV-positive diagnosis and related coping strategies in the Caribbean 24,25 and elsewhere. 18,26 How PLHIV reshape their identities may impact their decisions to engage in treatment or appropriate self-management. Certain types of coping strategies after an HIV diagnosis are associated with differential mental health outcomes and wellbeing. For instance, maladaptive coping behaviors (eg, extended rumination and avoidance) are associated with worse mental health outcomes. 27 According to a meta-analysis, women living with HIV (WLHIV) seem to experience greater harm from maladaptive coping than men. 28 How women experience and cope with an HIV diagnosis in a low-resource environment is not well understood, particularly in the context of their own narratives and daily circumstances. 12 Women are also generally less likely to disclose their HIV status compared to men, 29,30 potentially making them more vulnerable to isolation. Understanding how women process and cope with HIV as a chronic illness is valuable toward understanding their use of resources and disease self-management. 14,16,20 Additional research is also needed to better understand coping mechanisms and their impact on women's health, which can be compounded by poor social and economic environments that include high levels of stigma, gender inequality, and poverty. A qualitative study in rural South Africa explored how WLHIV (n = 19) coped with the disease; results emphasize the importance of taking into consideration the role of the social context of emotional distress in resource-poor communities.
For mothers, the distress caused by these social conditions resulted from the impact on their children (ie, not being able to feed their children) instead of concerns about their own personal health, 31 thus highlighting the importance of examining how perceived roles and social context intersect when examining coping motivations and behaviors.
To address existing gaps in the literature, this qualitative study explores the various ways in which an HIV-positive diagnosis impacts the identity and behavior of urban-dwelling women in low-resource communities in the Dominican Republic (DR). The burden of HIV in the Caribbean region is relatively high, and the region has the second highest HIV prevalence rate in the world after sub-Saharan Africa. Among Spanish-speaking countries in the region, the DR has the highest general HIV prevalence rate, estimated at 1.0% among individuals aged 15 to 49 years (approximately 69 000 people). 32 Reported HIV transmission is mostly driven by unprotected heterosexual sex and the infection rate has been disproportionately increasing among women. 33 More than half of adults living with HIV in the DR and the Caribbean are women. 34 In addition to exploring the impacts of an HIV diagnosis on identity and behavior among WLHIV, this study investigates how women cope with the diagnosis and self-manage their condition. Understanding these dimensions can lead to the development and integration of culturally appropriate behavioral and social support interventions to improve self-management behaviors, quality of life, and HIV outcomes.
Method
A qualitative approach was used to explore the impact of an HIV diagnosis on WLHIV in the DR and coping strategies after diagnosis. We conducted in-depth, semistructured interviews with a purposive sample of WLHIV (N = 30).
Setting
Eligible participants resided in 1 of 3 cities with medium to high HIV prevalence rates in the southeast part of the country (Santo Domingo, San Pedro de Macorís, and La Romana).
What Do We Already Know about This Topic?
HIV is a stigmatized illness and individuals diagnosed with HIV may experience emotional distress, fear of HIV status disclosure, and anxiety about HIV-related stigma, which can contribute to delays in care and/or impede treatment and self-management.
How Does Your Research Contribute to the Field?
In-depth interviews with a sample of urban-dwelling women living with HIV in the Dominican Republic reveal 3 critical disruption phases after an HIV diagnosis (impacts of a diagnosis, postdiagnosis turning points, and integration). Fear of disclosure and stigma-related anxiety were key psychosocial barriers to treatment and appropriate self-management, whereas survival and motherhood identities served as motivators to engage in care.
What Are Your Research's Implications toward Theory, Practice, or Policy?
Newly diagnosed individuals in low-resource countries need psychological resources and social support interventions to help mitigate the negative impacts following an HIV diagnosis.
Three clinics agreed to participate as the recruitment sites for the study (ie, 1 clinic per city). The first clinic is a government-operated HIV clinic with a longstanding history of providing care for PLHIV (beginning in the early 1990s). The second clinic is faith based and operates as part of the Health Ministry of an Episcopal Church. This clinic provides primary and specialty care to indigent populations and integrated HIV/AIDS services for PLHIV, including behavioral health services. The third clinic is a nonprofit health-care organization that primarily provides primary and specialty care to PLHIV.
Recruitment
Local study staff recruited females from HIV clinics located in the study cities. Eligibility criteria included being 18 years of age or older, being a registered patient in an HIV clinic, residing in an urban or periurban area in 1 of the 3 selected cities, and reporting recent food insecurity. Food insecurity was assessed using 2 questions from the validated Latin American and Caribbean Food Security Scale/Escala Latinoamericana y Caribeña de Seguridad Alimentaria, 35,36 namely, (1) In the past 3 months, was there a time when you were worried that you would not have enough food to eat because of a lack of money or other resources? (2) In the past 3 months, has your family been unable to eat healthy and nutritious food because of a lack of money or other resources? HIV clinic staff referred female patients on select recruitment days to learn about the study from a trained interviewer. The interviewer explained the purpose of the study and asked screening questions to confirm eligibility. Interviewers provided eligible women with additional information about the study and obtained verbal consent.
Data Collection
Two trained, local field workers conducted in-person, semistructured interviews in Spanish (range: 60-90 minutes). Both interviewers were women with considerable professional experience working with PLHIV and HIV programs. A total of 30 eligible women (10 per site) were recruited and consented to be interviewed and recorded. The average age of participants was 38 years (range: 20-56 years). All the interviews were audiorecorded and transcribed verbatim. Table 1 lists the key interview themes and select questions.
Data Analysis
Spanish language transcripts were uploaded to Dedoose, a qualitative data management software program. During fieldwork, the interviewers and research team regularly debriefed and discussed the field notes to assess whether data saturation had been attained for the key themes and to document emergent findings. We employed content analysis methods [37][38][39] and an inductive approach 40,41 to identify principal and emergent themes. The full codebook is described in detail elsewhere. 42 Key subcategories relevant to this article included mental health effects of a diagnosis, negative sentiments, positive sentiments, mental health, HIV diagnosis, HIV disclosure to others, external stigma, internal stigma, social support, economic changes related to HIV, ART adherence, and non-ART treatment adherence (eg, clinical visits).
Two researchers, fluent in Spanish, independently coded the transcripts using a systematic coding approach. Any discrepancies were resolved through consensus. Detailed coding summaries were extracted for relevant categories and iteratively revised by the analysis team. A third researcher, the lead author, further examined how an HIV diagnosis disrupted respondents' identity and lives using a 5-component biographical disruption model of HIV/AIDS identity (diagnosis, postdiagnosis turning point, immersion, postimmersion turning point, and integration) 15 adapted from Bury 14 to analyze relevant content within the summaries. Preliminary findings were discussed and interpreted with collaborators and stakeholders at various times to reduce the threat of researcher bias. 43 For example, the lead author was not involved in data collection and reviewed the results with collaborators in the DR to corroborate and confirm the findings. Illustrative quotes were selected and translated to English for inclusion in this article.
Ethical Approval and Informed Consent
The study's protocols and materials were approved by the RAND Corporation Human Subjects Protection Committee (2008-0345-AM02) and a local institutional review board (Consejo Nacional de Bioética en Salud del Ministerio de Salud Pública) in the DR. All participants provided verbal informed consent prior to enrollment in the study. The consent was audiorecorded in the presence of a trained research staff member. Written informed consent was not pursued as an option since obtaining documentation with participants' names could pose a risk to protecting the confidentiality of the WLHIV.
Results
Nearly all respondents described an HIV-seropositive diagnosis as a highly stressful and emotionally impactful event. Several spoke of the life-changing nature of the diagnosis and severe psychological and emotional distress, which was often compounded by psychosocial stressors (eg, perceived HIV-related stigma, fear of disclosure). Nearly a quarter were diagnosed while pregnant, piquing mothers' concerns about mother-to-child HIV transmission. Three of the 5 biographical disruption phases were identified based on the data (ie, impacts of an HIV diagnosis, postdiagnosis turning points, and integration). Figure 1 shows the different biographical disruption phases and key findings from this study, which are explained in detail below.
Perceived Psychological and Emotional Impacts of a Diagnosis
Most respondents described the immediate impacts of an HIV-positive diagnosis as being psychological and emotional. Many were deeply shocked and distressed upon receiving the news. The following quote illustrates the grief and disconnect several women expressed: "I sobbed so much. I left there and arrived home late. It was because I didn't go home right away. I left [the clinic] and just walked and did not even pick up the car. I just walked, my thoughts were so far away."
For many, the diagnosis carried considerable mental anguish and immediate shock. One respondent said, "Look, at once this mind of mine was transformed in a way I can't describe. I don't know how to describe it. It was as though my spirit left, like they killed me." Another felt as though she were in a foreign country when she was told and desired to leave the clinic: Well, I went in my mind very far away for about 5 minutes, it was as though I was in Egypt. It wasn't until a woman began to knock on the door that I could . . . well I couldn't. I just wanted to stand up and go so far away and disappear.
Diagnosis during Pregnancy and Postpartum
Nearly a quarter of the women reported being diagnosed while receiving prenatal care or after giving birth. Being informed during pregnancy or postpartum was described as highly traumatic. One woman spoke about her feelings of depression: At that moment, I felt extremely depressed. Oh yes, I felt so depressed and as if I was wandering on the street and headed somewhere very far away! I would go to the park and just sit there thinking, "My God, what will happen to me and my child." I also thought about giving up my child for adoption.
Another said she felt "extremely traumatized" because she was pregnant when she was notified she was HIV positive and the news altered how those around her responded to the pregnancy. In her case, a health-care provider and her mother encouraged her to abort, which led to additional stress and anxiety. A third, who was diagnosed 20 years prior to the interview, said she was informed after her child was born and coped with alcohol: They sent me to take all of these tests and then they told me . . . it was after the caesarian. I took all of the test results and there they said, "Do you know that you have HIV?" I didn't know I had HIV. Then they explained, "HIV is AIDS" and I went crazy. I became so ill, so, so ill. It was horrible. I became an alcoholic. In a few cases, women said a health-care provider tried to alleviate their anxiety by informing them of treatment options. A respondent said she was separated from her child after giving birth and told she could not breastfeed her child, however, "the psychologist told me they realized I had this [condition], but that it would not kill me and there was medication for it." The respondent said she felt assured simply knowing HIV treatment existed.
Feelings of Depression and Reduced Self-Worth
Several respondents said their lives completely changed postdiagnosis. A respondent described this change by saying, "in every way, you live life differently." Another felt she "died" on the day she was diagnosed. Many alluded to the disempowering effect of the diagnosis on their sense of identity and used phrases denoting reduced self-worth or self-efficacy, as in the case of the following respondent: "I felt like I was worthless. I said, 'ah, well, I am no longer worth anything because I have HIV.'" A deep sense of sadness and grief extended into a depressed state for some. They used the term "depressed" to characterize their emotional state postdiagnosis, whereas others described indicators of depression, including constant crying and expressions of grief, loss of appetite, social isolation, and reduced interest in regular activities. Among these respondents, some attributed these symptoms to the distressing nature of the diagnosis, as illustrated by the following quote, "Nothing was appetizing. Perhaps the news really hurt me emotionally." Others ceased engaging in regular daily activities and isolated themselves. One woman shared, "Well, I stayed quiet and fell into depression. I didn't shower, and I didn't do anything." For some, the diagnosis was conflated with questions about their moral condition, which contributed to extended grief and feelings of shame, "When I learned I was HIV positive, I wept so much. I wept every day and asked God, 'why is this happening to me since I am not a bad person?'"
Denial and Fear
A few reported being in denial after learning of their diagnosis and coped by behaving as though they were not HIV positive. These women reported delaying care until they internalized and accepted the news or began to experience symptoms. The following quote reflects the narrative of a woman who delayed care until she experienced symptoms: "I began to cry and said I would not return [to the clinic] to get tested. I left and sobbed so much. Afterward, when I got worse and my CD4 was 24, then I decided to return to repeat the exam and it came out positive." Another was in denial for nearly 4 years before she began treatment: "I did not want to accept it. I rejected it for 4 years, ha! Four years. And I said, 'Do not talk to me about that.' I went 4 years with a positive result without medication, nothing."
Perceived Psychosocial Impacts of a Diagnosis: Fear of Disclosure and Perceived Stigma
Nearly all women expressed fear of disclosure and anxiety about HIV-related stigma and rejection as key concerns. They said they were either previously fearful or remained concerned about disclosing their status to members of their social network due to potential HIV-related stigma and rejection from family members, friends, and/or employers.
For some, fear of disclosure stemmed from a belief that others would not keep the information confidential or they would be rejected. Several believed they would be treated differently: "The changes occur when you have this illness and the family distances themselves. Your family treats you differently." One said she felt she could not tell "her family or anyone because they talk too much" and constantly worried about rejection. Some reported feeling depressed, ashamed, or conflicted by not disclosing the diagnosis with family members. Others informed select individuals but continued to worry they would be rejected if others knew: "I was very careful not to let others know because I did not want to be rejected. I told my mother and my two children, but we all kept it a secret. It never left my house." Concern about disclosure and perceived stigma was described as a barrier to self-management and treatment adherence. One said she went to great lengths to avoid going to a local clinic out of fear that someone would find out she was HIV positive: It was very difficult. I decided that I was not going to come here [the HIV clinic]. I would not go to places where I was referred since there were probably a lot of people and perhaps someone from my neighborhood would see me, and I would become a laughing stock.
Several women communicated a desire for others to view HIV as a "normal condition" that was akin to other chronic illnesses that were not stigmatized. I wish it didn't have this taboo . . . I wish we believed it was just a health condition like any other. It doesn't matter how we got it, the important thing is that we have it. We should see this like something normal, like high blood pressure.
Postdiagnosis Turning Points
Most participants came to view an HIV diagnosis as a chronic illness requiring consistent treatment (ie, ART and regular clinical visits) over time and reported engaging in a variety of positive coping strategies after being diagnosed. Survival and motherhood identities emerged as facilitators to engaging in self-management and treatment.
Survival Identity
Most came to terms with their HIV-positive diagnosis and said they eventually accepted the diagnosis as a lifelong chronic illness that was a part of their new identity. For many, an HIV-positive diagnosis was seen as an ongoing ordeal and perceived as a constant struggle for survival. Survival was identified as a strong motivator for engaging in self-care and daily activities after being diagnosed: "But when you get this, it's like something comes over you and you lose your pride in order to survive." Several mentioned the importance of survival to overcome the initial phase of shock and emotional distress. Survival appeared to motivate many to reengage in regular activities and cope with a new reality. Several described eating as a means of surviving and said eating-particularly eating healthily-was a necessary part of warding off illness and even death. One woman summarized this viewpoint, "You have to go through with your treatments and you also have to eat well. You have to eat healthy food to survive." Some said they consumed food to "survive the virus" and to avoid "getting sicker." Another spoke of the decreased enjoyment she received from food after being diagnosed and said she would now eat to survive: "Before I would eat, I would eat a lot. But not anymore. Now I only eat because it's necessary." The survival lens persisted for many as motivation for taking care of oneself and extended beyond health to these other domains. One simply said, "I survive however I can," referring to temporary jobs or seeking alternative sources of food to address issues stemming from poverty.
According to respondents, socioeconomic barriers posed difficulties to survival. Several said surviving with HIV was more difficult due to economic and material resource constraints, which made it challenging to adhere to appropriate self-management and recommended dietary guidelines.
Motherhood Identity
Motherhood was also frequently mentioned as a motivator for engaging in positive coping mechanisms and adhering to treatment. A woman who was tested and diagnosed with HIV during pregnancy said the negative effect of the diagnosis was buffered by her pregnancy, which provided her with a sense of purpose and a reason to live: To tell you the truth, I don't know what would have happened to me had I not been pregnant with my son. When they told me the news, they actually left me here by myself in the room until the afternoon so that I could take in the news. But to tell you the truth, I honestly don't know if I would have come back had I not been pregnant.
Having children motivated some to pursue treatment. One woman began treatment at an HIV clinic due to her desire to see her children mature and thrive: "I said I would go [to the clinic] because I have my children and I want them to study and to grow old. I want to see them grow up so I have to finish my treatment. I am not going to give up."
Integration: Mobilizing Support and Resources through Social Networks
Positive factors that facilitated WLHIV's transition from a state of initial shock and fear to integration of the diagnosis into their sense of self and identity consisted of support from existing social networks and networks of PLHIV. Women mentioned the importance of family members and friends who provided support. Emotional support by family members and friends was the most common type of support mentioned. One woman found her older sister's words of support and encouragement particularly helpful after disclosing her status, "she gave me hope. Yes, she said 'don't think about that. There are people who suffer through illnesses worse than that and you are not the only one. There are many just like you.'" Another was emotionally buoyed by words from a friend who counseled her: "She hugged me and then said that although I had [the virus], she was not going to abandon me. She said I had to take care of myself. She told me I had to use a condom when I had sex." A small number said their family members and friends encouraged them to seek treatment and facilitated self-management by reminding or accompanying them to clinical appointments. A woman's teenage daughter frequently reminded her mother to take her medication and encouraged her to attend her appointments while helping her with household chores and duties. In another case, a woman's husband regularly reminded her to take medication and attend clinical appointments.
A few also mentioned the economic and material support provided by members of their social networks, such as housing, food, and money. However, this type of support was rare, likely due to conditions of poverty or economic scarcity.
Positive Impact of Connecting with Networks of PLHIV
New social networks involving the broader community of PLHIV were described as comforting and valuable sources of empowerment. Perceived benefits of getting involved with HIV support groups included an increased sense of purpose, empowerment, and motivation to engage in positive coping strategies. Five spoke about the importance of interacting with other PLHIV and their involvement as attendees or counselors in the group. They said HIV support groups were instrumental in helping them cope by providing support and encouragement: The support groups have helped me so much. I used to think that I was the one and only person with this condition but when I began to see how many people had HIV, I said "I have to fight for my life and not give up." This woman said she had learned from members of the support group and was now in a place to help newly diagnosed persons. Another said she greatly benefited from attending HIV support groups, "and my life changed when I started to attend those support groups." A third worked as a counselor to encourage screening and treatment in high-risk neighborhoods and said the work resulted in "many beautiful memories," which further motivated her to take care of herself.
Discussion
This study explored the impacts of an HIV diagnosis on Dominican WLHIV. The immediate psychological and emotional impacts of an HIV-positive diagnosis were described as severe, leading to feelings of depression and reduced self-worth. According to the accounts, several women were vulnerable to depression after their diagnosis and reported self-isolating or disengaging from regular activities (eg, eating). A few were in denial and nonadherent to treatment until they were symptomatic. Denial is also referred to as a state of submersion in other studies where PLHIV attempt to not think about their diagnosis. 8,26,44,45 The tension to disclose their condition to family members was especially straining, particularly for those who primarily relied on kin for social support. Further, self-management of the condition was challenging due to perceived stigma and lack of disclosure. Several reported coping with this stigma by masking their condition/treatment with family members, friends, and coworkers.
Studies on initial responses to an HIV diagnosis similarly identified shock, emotional distress, and withdrawal as reactions to an HIV-positive diagnosis in a variety of settings. 4,11,12,18,19,26,44,46,47 For some, the immediate shock and distress may potentially reduce a newly diagnosed individual's capacity to comprehend and absorb what they are being told and information about their treatment. 48 The emotional burden and feelings of stress associated with living with and managing a chronic illness, including concerns about access to healthy food and treatment, can exacerbate perceptions of poor health and further propagate stress as found in a study with adults diagnosed with diabetes in the DR. 49 Further research is needed to examine how HIV diagnoses are delivered 50 to inform the development of culturally appropriate patient-physician communication interventions related to HIV notification in low-resource settings. For women who are diagnosed during pregnancy or postpartum, it may be valuable for providers to emphasize the effectiveness of ART in preventing mother-to-child transmission to reduce trauma.
Similar to prior research, we found a high level of fear of disclosure and anxiety about HIV-related stigma and rejection postdiagnosis. 8,24,26,[51][52][53] Another study with WLHIV in the DR referred to decisions related to disclosure as "HIV disclosure control" and referred to it as a stigma coping strategy. 25 Fear of disclosure and stigma-related anxiety are important psychosocial factors that may act as barriers to treatment and appropriate self-management for stigmatized chronic illnesses such as HIV. The need to address such concerns is even more critical in this era of treatment as prevention. 54,55 According to a meta-analysis examining the effects between stressors and coping mechanisms on behavioral health outcomes among WLHIV in the United States, those who cope by avoidance or social isolation are at risk for more severe mental health outcomes. 28 Disclosure is an important component of how PLHIV manage their identity 4,11,12,19 and may play an important role in facilitating the process of acceptance and support before seeking care. 19 A systematic review found disclosure to a spouse was associated with improved treatment outcomes (ie, initiation, adherence, and retention in care) among pregnant and postpartum WLHIV. 8 Worry associated with disclosure and lack of disclosure suggest a need to further explore disclosure processes and decision-making among WLHIV in Latin America and the Caribbean since few related studies have been conducted in these regions. 25,29,56 Our study is the first to use the biographical disruption framework to assess how PLHIV integrate their diagnosis into their identity in the Caribbean. Interestingly, the same 3 biographical disruption components that emerged in our study (diagnosis, postdiagnosis turning point, and integration) were also identified in a study with PLHIV in the United States after medication was made readily available. 11
In our study with WLHIV in a low-resource setting, important postdiagnosis turning points were characterized by a focus on survival and motherhood to overcome distress and engage in self-management strategies. The will to survive and motherhood may have encouraged respondents to engage in positive reappraisal, thus motivating them to initiate treatment and self-management. Although motherhood and concern for their children's well-being have previously been mentioned as a motivator to engage in care, 20,26,56 this is the first study to identify a survival identity and its context as part of a postdiagnosis turning point during a period of adjustment. Future studies may wish to explore the potential interrelatedness of these identities (ie, survival for a child's sake) postdiagnosis among expectant or new mothers diagnosed with HIV. 20 The findings suggest a need for psychological and psychosocial resources for recently diagnosed WLHIV in the DR. Integrating HIV counseling or a behavioral health provider into primary care or the HIV care continuum could help facilitate positive coping strategies for newly diagnosed patients 57 and help reduce the stigma of mental illness by focusing on prevention. This strategy can be particularly beneficial for women, given the prevalence of domestic abuse in the country 58 and for WLHIV. 42 However, this approach can be challenging in countries with limited behavioral health services 47 such as the DR, which faces a shortage of behavioral health service providers and has insufficient funding for these services. 58,59 In light of the limited availability of mental health services, nonclinical strategies may be of use. 60 In our study, important facilitators to integration included receiving support from friends and family and, for some, the broader community of PLHIV. These sources of social support have also been identified in other studies 4,5,11,15,25,47,51 and may help alleviate psychological distress postdiagnosis. 61
According to women's accounts, most support was provided in the form of encouragement and reminders from family members and friends to engage in treatment, and those who participated in HIV support groups reported feeling a sense of empowerment and improved self-efficacy. Support and counseling groups for newly diagnosed individuals are a potential mechanism to increase social support levels for WLHIV and also provide timely information and instructions regarding appropriate treatment and self-management. Peer health workers used in resource-limited settings have been found to be effective in reducing stigma, improving retention in care, and improving quality and outcomes of HIV care. 60,[62][63][64][65][66] In Nairobi, community health workers who serve PLHIV in specific regions are also an important source of social capital and support. 4 These groups can help mitigate the trauma, distress, and social isolation associated with an HIV diagnosis. Our findings suggest interventions may be needed to help WLHIV in the DR develop new sources of social support to improve adherence and quality-of-life outcomes.
Limitations
The study has several limitations, including the use of a convenience sample. The sample may only reflect experiences of WLHIV who access ART and who are more adherent to treatment since they were recruited from HIV clinics. As such, we may have excluded the perspectives of those who have not linked to HIV care, attend infrequently, or have abandoned care altogether. 18 Another limitation is that we did not ask respondents when they had been diagnosed. Future studies should collect this information as there is some evidence that long-term diagnosed respondents (ie, PLHIV diagnosed >10 years) use different coping strategies than those recently diagnosed. 26
Conclusion
An HIV-positive diagnosis can be a life-altering event with harmful psychological and psychosocial effects. Emotional distress, fear about disclosure, and anxiety about HIV-related stigma and discrimination from others were common and of concern given that these factors have been found to be barriers to self-management and ART adherence. This study uses the biographical disruption framework to conceptualize these impacts and coping strategies after an HIV diagnosis. Key stages of biographical disruption consisted of the diagnosis, postdiagnosis turning points, and integration.
WLHIV in the DR may benefit from counseling and support services to improve their treatment adherence and quality of life. Culturally appropriate behavioral and social support interventions, such as mental health services and HIV peer support groups, are needed in low-resource settings to mitigate the negative impacts of an HIV-positive diagnosis that impede ART adherence, appropriate self-management, and quality of life among PLHIV. Continued efforts to reduce HIV-related stigma more broadly are also needed for improved outcomes across the HIV care continuum.
Computational design of experimentally validated multi-epitopes vaccine against hepatitis E virus: An immunological approach
Hepatitis E virus (HEV) is one of the leading causes of acute liver infection triggered by viral hepatitis. Patients infected with HEV usually recover, and the annual death rate is negligible. Currently, there is no licensed HEV vaccine available globally. This study was carried out to design a multi-epitope HEV peptide-based vaccine by retrieving experimentally validated epitopes from the ViPR database and prioritizing them. Epitopes selected as potential vaccine candidates were non-allergenic, immunogenic, soluble, non-toxic and IFN-gamma positive. The epitopes were linked together by AAY linkers, and the EAAAK linker was used to join the adjuvant with the epitopes. A Toll-like receptor (TLR)-4 agonist was used as an adjuvant to boost the efficacy of the vaccine. Furthermore, codon optimization followed by disulfide engineering was performed to analyse the designed vaccine's structural stability. Computational modeling of the immune simulation was done to examine the immune response against the vaccine. The designed vaccine construct was docked with the TLR-3 receptor to study their interactions and then subjected to molecular dynamics simulations. The vaccine model was examined computationally for its capability to induce immune responses, which showed the induction of both humoral and cell-mediated immunity. Taken together, our study proposes an In-silico designed HEV multi-epitope peptide-based vaccine (MEPV) that needs to be validated with wet-lab data before it can help develop a potential vaccine against HEV.
Introduction: HEV infection
Viral hepatitis is typically known as liver infection and results in liver damage. HEV is also associated with the onset of liver inflammation [1]. HEV infections can be symptomatic as well as asymptomatic and are associated with general symptoms such as nausea, fatigue, and jaundice [2]. HEV has eight different genotypes; however, humans are vulnerable to infection with genotypes 1-4 [3]. Infections due to genotype 1 and genotype 2 are confined to humans, and transmission occurs through contaminated water bodies, leading to acute infection. HEV genotypes 3 and 4 are considered zoonotic and exhibit a broad host range including humans, cattle, and swine, leading to acute infections [4]. A significant contributing factor to this disease's spread is the consumption of inadequately cooked meat [5]. Globally, the morbidity of HEV is around 20 million cases annually, of which 3.3 million are symptomatic infections. This viral infection is associated with approximately 70,000 deaths. HEV is most prevalent in areas with poor sanitation and hygiene conditions [6]. HEV infections exhibit higher prevalence in European countries, while they are categorized as emerging infections in Japan and Korea. Additionally, cases of chronic HEV infection or re-infection have been documented among individuals with compromised immune systems [7].
HEV infection often goes undiagnosed and untreated, as its causes remain ambiguous and overlap with other viral hepatic infections. The diagnosis of HEV infection is confirmed by detecting anti-HEV antibodies or by utilizing an alternative approach such as qRT-PCR on plasma, stool, and serum samples [8].
Currently, there is no commercially available vaccine against HEV; therefore, the goal of this study is to design an effective vaccine against this viral infection. Traditional vaccine design can be challenging due to the complex nature of pathogens, the time required, and the extensive experimental trial-and-error involved in identifying suitable antigens. Immunoinformatics, a field that integrates bioinformatics and immunology, has therefore emerged as an effective translational tool. Immunoinformatics utilizes computational tools and algorithms to analyze large datasets and predict immunogenic epitopes, improving the efficiency and accuracy of vaccine design. By exploiting the power of computational methods, immunoinformatics enables researchers to identify potential vaccine targets, optimize antigen selection, and predict vaccine efficacy. It offers a faster and more cost-effective approach to vaccine development, complementing traditional methods and accelerating the discovery of novel vaccines [9].
In this study, experimentally validated epitopes of HEV were retrieved and prioritized, and immunoinformatics-based tools were then employed to design the MEPV construct. The constructed MEPV was modelled, refined, and subjected to molecular docking to investigate immune responses.
Methodology
The flowchart for this research study is shown in Fig 1. Experimentally validated epitopes were retrieved from the database and subjected to epitope prioritization using various tools. Shortlisted epitopes, as potential candidates for the vaccine construct, were analysed against the set of alleles to evaluate their population coverage globally. The vaccine construct, based on epitopes, linkers, and an adjuvant, was modelled into a 3D structure following loop modeling and refinement. The designed vaccine was subjected to in-silico cloning, disulfide engineering, and computational simulation before docking. To predict the interaction between the vaccine and human immune receptors, molecular docking was performed, followed by molecular dynamics simulation, as shown in Fig 1.
Epitope retrieval
Experimentally validated epitopes of HEV belonging to the species Orthohepevirus A were retrieved from the ViPR database (Virus Pathogen Database and Analysis Resource) [10]. ViPR is an online tool that helps researchers search, analyze, and visualize data about different families of human pathogenic viruses; it is user-friendly and saves time. Using MHCPred, a tool to predict binding affinity for the major histocompatibility complex (MHC), the best possible epitopes were screened [19]. VaxiJen [11], an online tool, was employed to check the antigenicity (antigenic versus non-antigenic character) of the epitopes, and antigenic epitopes scoring above the 0.7 threshold were fed into AllerTOP, a web tool, to assess allergenicity [12] (allergen/non-allergen). Non-allergenic epitopes were subjected to a solubility check via Innovagen, and soluble ones were further analysed for toxicity using ToxinPred [13]. Non-toxic epitopes were examined for IFN-gamma induction with the IFNepitope tool [14], followed by virulence evaluation with VirulentPred [15]. The population coverage for each shortlisted epitope was obtained using the Immune Epitope Database (IEDB) population coverage analysis [16].
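The filtering cascade above can be sketched as a simple pipeline. The tools themselves are web services, so this sketch assumes their scores have already been collected into plain records; the sequences, field names, and score values below are illustrative placeholders, not the study's actual data (the IC50 < 100 and 0.7 antigenicity cut-offs follow the thresholds stated in the text):

```python
candidates = [
    # illustrative records only; real scores come from MHCPred, VaxiJen,
    # AllerTOP, Innovagen, ToxinPred, and IFNepitope
    {"seq": "PLGSAWRDQ", "ic50": 42.0, "antigenicity": 1.05,
     "allergen": False, "soluble": True, "toxin": False, "ifn_gamma": True},
    {"seq": "AAAAAAAAA", "ic50": 310.0, "antigenicity": 0.40,
     "allergen": True, "soluble": True, "toxin": False, "ifn_gamma": False},
]

def prioritize(epitopes, ic50_max=100.0, antigenicity_min=0.7):
    """Keep only epitopes that pass every filter used in the study."""
    return [e for e in epitopes
            if e["ic50"] < ic50_max                    # strong MHC binder
            and e["antigenicity"] >= antigenicity_min  # VaxiJen threshold
            and not e["allergen"]                      # AllerTOP
            and e["soluble"]                           # Innovagen
            and not e["toxin"]                         # ToxinPred
            and e["ifn_gamma"]]                        # IFNepitope

shortlisted = prioritize(candidates)  # only the first record survives
```

The order of the checks does not matter here; each epitope must pass all of them to be shortlisted.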
Designing vaccine construct
MEPVs are an effective solution for treating viral infections. The vaccine construct was designed with an adjuvant that is intended to boost its effectiveness and make the vaccine more immunogenic. A TLR-4 agonist was used as the adjuvant and connected to the epitopes via the EAAAK linker, while AAY linkers were used for the epitope-epitope junctions.
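Assembling the construct is string concatenation in the stated order: adjuvant, EAAAK, then the epitopes joined by AAY. A minimal sketch (the adjuvant and epitope sequences here are short placeholders, not the actual TLR-4 agonist or HEV epitopes):

```python
def build_construct(adjuvant, epitopes):
    """Adjuvant joined to the epitope block by EAAAK;
    epitopes joined to one another by AAY."""
    return adjuvant + "EAAAK" + "AAY".join(epitopes)

print(build_construct("MKADJ", ["PLGSAWRDQ", "LLDPTYQ"]))
# -> MKADJEAAAKPLGSAWRDQAAYLLDPTYQ
```

For n epitopes the construct therefore contains one EAAAK linker and n-1 AAY linkers.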
Physiological and immunological properties and 3D modeling
The vaccine construct was further subjected to allergenicity and antigenicity checks using AllerTOP and VaxiJen, respectively. ProtParam on the ExPASy server [17] (a tool for computing the physical and chemical parameters of a protein) was utilized to examine the physiological properties of the designed vaccine construct, which was then fed into I-TASSER [18] as the input sequence for 3D modeling.
Codon optimization and disulfide engineering
An In-silico analysis of structural stability, known as disulfide engineering, was applied using Disulfide by Design 2.0 [19]; it is based on introducing disulfide bonds into the structure. Disulfide bonds play a crucial role in stabilizing the 3D structure of mature proteins and can also have important redox activity. Disulfide engineering focuses on the formation and stabilization of disulfide bonds in proteins: by strategically introducing or modifying these bonds, the vaccine's protein structure becomes more stable, leading to improved functionality and durability. JCat [20] (Java Codon Adaptation Tool) was used for codon optimization and adaptation. Codon optimization is a technique used to enhance the codon composition of a recombinant gene without changing the amino acid sequence; this is feasible because multiple codons can encode the same amino acid. By considering criteria such as codon usage bias, the gene's expression can be improved: the genetic code of the vaccine is tailored to the host organism, enhancing protein expression and overall vaccine efficacy. Codon optimization was evaluated based on the percentage GC content and the CAI (codon adaptation index) value. The CAI is a handy measure of codon usage bias: it uses a set of highly expressed genes as a reference to evaluate the effectiveness of each codon, and by calculating the frequency of codon usage in a gene, a score is generated to assess its codon adaptation. Both codon optimization and disulfide engineering are essential techniques that contribute to the overall stability and effectiveness of vaccines. JCat optimization was followed by insertion of the DNA sequence of the vaccine construct into the expression vector pET-28a(+) via SnapGene (software for in-silico cloning).
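The two optimization metrics above can be stated precisely: GC content is the fraction of G/C bases, and the CAI is the geometric mean of each codon's relative adaptiveness w (1.0 for the most-used synonymous codon in the reference gene set). A minimal sketch with a made-up two-codon weight table, not E. coli K12's real values:

```python
import math

def gc_content(dna):
    """Percent G+C in a DNA sequence."""
    dna = dna.upper()
    return 100.0 * sum(base in "GC" for base in dna) / len(dna)

def cai(codons, weights):
    """Codon Adaptation Index: geometric mean of relative adaptiveness w."""
    log_sum = sum(math.log(weights[c]) for c in codons)
    return math.exp(log_sum / len(codons))

print(gc_content("ATGGCC"))                             # two thirds GC
print(cai(["AAA", "GCC"], {"AAA": 1.0, "GCC": 0.25}))   # -> 0.5
```

A CAI close to 1.0 and a GC content in the host's typical range are the usual acceptance criteria after JCat optimization.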
Molecular docking
Molecular docking of the modelled vaccine with an immune receptor was performed using the PatchDock server [21] (an algorithm designed for molecular docking). Molecular docking plays a significant role in computational vaccine design: it helps us understand how the vaccine interacts with specific target molecules, such as viral proteins or immune receptors. By simulating the docking process, we can predict the binding affinity and stability of the vaccine-target complex. This information guides the selection and optimization of vaccine candidates, leading to more effective and targeted vaccine designs.
The vaccine model obtained was docked with TLR-3. TLR-induced proinflammatory responses serve as the initial defence mechanism of the host: they not only combat pathogens but also facilitate the healing process to restore immune homeostasis. The molecular docking was performed with TLR-3 to understand the interaction and potential immune response. TLR-3, or Toll-like receptor 3, is a key component of the innate immune system; it recognizes viral RNA and triggers an immune response, leading to the production of cytokines and activation of immune cells. By docking the vaccine with TLR-3, researchers can evaluate how the vaccine stimulates the immune system and potentially enhances the immune response against HEV. The docking results from PatchDock were further refined by the FireDock server [22] (a web server providing high-throughput refinement), and UCSF Chimera (a program to analyse and visualize molecular structures) was utilized for visualization.
C-immune simulation
An in-silico simulation of the designed vaccine using C-ImmSim (an immune system simulator for investigating immunological processes) was performed to assess whether the vaccine construct is capable of inducing immunogenicity. Epitope-immune interactions were predicted and analysed by this server; it is a computational model of cellular interactions in the immune system that can mimic how immune cells communicate and respond to pathogens, aiding in understanding and improving immune function [23].
Molecular dynamic simulation
Molecular dynamics simulation is a powerful computational technique used to study the movement and behaviour of atoms and molecules over time. It plays a crucial role in understanding the dynamics and interactions of biological molecules, such as proteins and small molecules. By simulating their motions, we can gain insights into their structural changes, binding events, and functional mechanisms. This information is valuable for drug discovery, protein engineering, and understanding biological processes at the molecular level. Molecular dynamics simulation comprises three steps: system preparation, pre-processing, and simulation. Assisted Model Building with Energy Refinement (AMBER18), a software package for investigating biomolecular interactions, was employed to perform 100-ns molecular dynamics simulations. The first step, system preparation, used the Antechamber module to construct the initial complex libraries. The docked complex (receptor-vaccine) was solvated in a Transferable Intermolecular Potential with 3 Points (TIP3P) water box with a border size of 12. The "ff14SB" force field was employed to parameterize both the receptor and the vaccine molecules. To neutralize the system, 12 Na+ ions were added. The second step, pre-processing, involved minimization of the hydrogen atoms for 500 cycles and the water box for 100 rounds with an energy limit of 200 kcal/mol, while the alpha-carbon atoms were minimized for 1000 rounds with an energy limit of 5 kcal/mol, and the non-heavy atoms in the remainder of the system were minimized for 300 runs with a 100 kcal/mol energy limit. Following that, the systems were heated to 300 K for 20 ps in the constant number of particles, volume, and temperature (NVT) ensemble under periodic boundary conditions, while bonds involving hydrogen atoms were constrained by the SHAKE algorithm. The system was equilibrated for 100 ps. In addition, pressure was equilibrated in the constant number of particles, pressure, and temperature (NPT) ensemble for 50 ps, initially with restraints on the carbon atoms and then without any restriction. System equilibration was completed after the preceding procedures. The cut-off value for non-bonded interactions was set at 0.8, and the production run was conducted for 100 ns. Trajectory analysis was performed with the CPPTRAJ module, and the resulting trajectories were visualised in Visual Molecular Dynamics (VMD) to inspect the interaction between the vaccine and the human immune receptor.
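For reference, the core RMSD quantity reported from the trajectory analysis can be sketched in a few lines of numpy (this assumes frames are already superimposed on the reference, i.e. no fitting step, which CPPTRAJ normally performs first):

```python
import numpy as np

def rmsd(frame, ref):
    """RMSD between two (n_atoms, 3) coordinate arrays, in the same
    units as the inputs (Angstrom here); assumes prior superposition."""
    diff = frame - ref
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

ref = np.zeros((4, 3))
frame = ref + np.array([1.0, 0.0, 0.0])  # every atom displaced 1 A in x
print(rmsd(frame, ref))  # -> 1.0
```

A low, plateauing RMSD over the production run is what indicates a stable complex.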
Retrieval and prioritization of potential epitopes
Experimentally determined epitopes of HEV retrieved from ViPR were used to design the vaccine construct against HEV. Potential epitope screening was done against several parameters, including antigenicity, allergenicity, solubility, toxicity, IFN-gamma induction, and virulence. The 208 experimentally validated epitopes retrieved from the ViPR database were fed into the MHCPred server. Those with IC50 < 100 were selected for further analysis, and of those, any that were allergens, poorly soluble, non-antigenic, toxins, or IFN-gamma negative were discarded. The 7 shortlisted epitopes were ligated with linkers to design the vaccine construct. The threshold set for antigenicity was 0.6, the antigenicity values for these 7 epitopes ranged from 0.7 to 2.6, and the virulence score was 0.98 for all except PLGSAWRDQ, which had a virulence score of 1.05. All these epitopes were used for the MEPV construct design (Table 1).
Population coverage analysis
Global population coverage was analysed with the IEDB population coverage analysis server. The epitopes, interacting with the set of targeted alleles, covered 99.74% of the global population. Population coverage for different geographic regions was also analysed. The predicted PC50 value for MHC I was 10.6 and for MHC II was 3.85. High population coverage is desirable because it means a larger portion of the population can respond to the vaccine, which contributes to its potential effectiveness. When a significant percentage of the population is immunized, it creates a collective immunity known as herd immunity. This helps protect individuals who are unable to receive the vaccine due to medical reasons or who have a weakened immune system. Additionally, high population coverage reduces the overall transmission of the disease, making it harder for the pathogen to spread and infect susceptible individuals. This ultimately helps control the spread of infectious diseases and reduces the likelihood of outbreaks [Figs 2 and 3].
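As a sanity check, the class-combined figure follows from the per-class coverages reported for this construct (98.55% for MHC I and 81.81% for MHC II, Fig 3): an individual is missed only if both classes miss them, so, assuming the two coverages are independent,

```python
def combined_coverage(p_class1, p_class2):
    """Probability that at least one of two MHC classes covers an individual."""
    return 1.0 - (1.0 - p_class1) * (1.0 - p_class2)

print(round(100 * combined_coverage(0.9855, 0.8181), 2))  # -> 99.74
```

which matches the 99.74% class-combined coverage reported above.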
Vaccine construct
Vaccines designed from epitopes using a computational approach are reliable and can be more beneficial because they can be produced using simple computational tools and provide specific immune responses. The 7 shortlisted epitopes were linked by AAY linkers to design the MEPV construct. The EAAAK linker was used to join the TLR-4 agonist to the MEPV construct to increase immunogenicity, so that the immune response against the antigen is enhanced. The resulting vaccine construct comprised 93 amino acids, as shown in Fig 4.
Multiepitope peptide-based vaccine
A MEPV model was obtained using I-TASSER; the first model, ranked by Z-score, was selected and then subjected to loop modeling. Loop modeling of the 5 loops of the predicted model was done using GalaxyWEB, and the model was then sent for refinement. The 3D structure of the vaccine model is given in Fig 5A. After being designed, the MEPV construct was examined for different physiological and biochemical properties (Fig 5B). The total number of residues is 93, and the GRAVY score was -0.468. The GRAVY value is calculated to determine the hydrophobicity or hydrophilicity of the sequence; the negative value indicates that the designed construct is hydrophilic in nature. Being non-allergenic and having an antigenicity of 0.788, the construct was considered overall stable.
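The GRAVY score mentioned above is simply the mean Kyte-Doolittle hydropathy over all residues, so its sign can be checked directly; this sketch hard-codes the standard hydropathy table rather than calling ProtParam:

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
    "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
    "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
    "Y": -1.3, "V": 4.2,
}

def gravy(sequence):
    """Grand average of hydropathy: negative -> hydrophilic protein."""
    seq = sequence.upper()
    return sum(KYTE_DOOLITTLE[aa] for aa in seq) / len(seq)

print(gravy("KDE"))  # charged residues -> strongly negative
```

A construct rich in charged and polar residues will therefore come out with a negative GRAVY, as reported here.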
Disulfide engineering
The designed vaccine went through the process of disulfide engineering, which can provide stability by enforcing a favourable geometric conformation. A mutant of the vaccine was obtained in which residue pairs with unfavourable interaction energies for vaccine stability were mutated to cysteine residues. Mutated residues have a binding energy > 1 kcal/mol. For this designed vaccine, 6 residue pairs were mutated. All 6 residue pairs, with their angles and binding energies, are given in Table 3. The mutant vaccine model with its mutated cysteine residues is shown in Fig 6.
In-silico cloning
Using the JCat server, the vaccine sequence was reverse-translated into a DNA sequence optimized for a high level of expression in the standard organism E. coli. For recombinant proteins, the expression system (E. coli) was specified, followed by codon optimization to ensure production of the recombinant vaccine protein at a high level in the E. coli K12 system (Fig 7).
Molecular docking. A computational method (molecular docking) was performed to examine the interaction between the vaccine model and human immune receptors, informing drug/vaccine development [24]. The immune receptor TLR-3 was selected to predict the binding interactions. Blind docking (for an unknown target site) [25] was applied using PatchDock: the designed vaccine construct was docked with the TLR-3 receptor. TLR-3 (Toll-like receptor 3) is considered a PRR (pattern recognition receptor) and is an inducer of type 1 interferon production, playing a key role in activating both types of immune responses [26]. The top PatchDock solutions were submitted to FireDock (online server) for refinement of the predicted docked complexes, and the top 10 refined solutions were obtained. The solution ranking is based on the global energies. For the TLR-3 docked complex, solution 1, with a global energy of -35.18 (Table 4), was selected. The docked complex for TLR-3 and its protein-protein interactions obtained from PDBsum are shown in Fig 8. Salt bridges between the two protein chains of the docked complex are shown in red, hydrogen bonds are coloured blue, disulfide bonds are yellow, and dotted orange lines are non-bonded interactions.
Computational modeling of immune simulation
C-IMMSIM server was used for immune simulation as shown in Fig 9.
Molecular dynamic simulations
Molecular dynamics simulations were employed on the vaccine-TLR-3 complex. The resulting trajectories were analysed with respect to four parameters: root mean square deviation (RMSD), root mean square fluctuation (RMSF), radius of gyration (RoG), and B-factor. RMSD was calculated to interpret the structural stability of the vaccine-TLR-3 complex; it measures the average deviation between the positions of atoms in a molecule compared to a reference structure. The average RMSD calculated for the system was 3.5 Å.
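RMSF differs from RMSD in that it averages over time per atom rather than over atoms per frame. A minimal numpy sketch (fluctuation about the time-averaged structure, with no alignment step):

```python
import numpy as np

def rmsf(trajectory):
    """Per-atom root-mean-square fluctuation for a trajectory of shape
    (n_frames, n_atoms, 3), measured about the time-averaged structure."""
    mean_structure = trajectory.mean(axis=0)
    sq_dev = ((trajectory - mean_structure) ** 2).sum(axis=2)  # (frames, atoms)
    return np.sqrt(sq_dev.mean(axis=0))                        # one value per atom

traj = np.zeros((2, 2, 3))
traj[1, 0, 0] = 2.0  # atom 0 oscillates along x; atom 1 is fixed
print(rmsf(traj))  # -> [1. 0.]
```

High RMSF values flag flexible regions, typically loops and termini, while stable binding interfaces should show low fluctuation.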
Discussion
The MEPV against HEV has shown its effectiveness and highlighted its advantages over conventional vaccines. The fact that it is cost-effective and can be developed quickly is a significant advantage. Considering that there is currently no globally recognized HEV vaccine, our research provides a valuable and innovative approach to combating HEV infections, which are particularly common in Asia and many European countries. This work has the potential to make a meaningful impact in the field of HEV prevention. The computational analysis performed on the MEPV indicates that it is highly effective in stimulating immune responses and triggering the necessary processes. This is a significant finding that highlights the potential of the vaccine in combating HEV infections. The need for an effective HEV vaccine arises from the significant burden of hepatitis E virus infections worldwide. As an RNA virus, HEV can cause acute hepatitis and occasionally progress to chronic infection, leading to severe liver disease. Developing a vaccine against HEV is crucial for preventing transmission and reducing the global impact of this virus. By providing protection against HEV infection, a vaccine can contribute to public health efforts and improve overall well-being. HEV is involved in liver infection along with extrahepatic manifestations, and the treatment is still not specified. Vaccination is considered the most efficient and effective method to prevent infections; thus, a vaccine against HEV infection is required. MEPV design using computational approaches is a very cost-effective method, takes far less time, and is driving advancements in vaccine design. The use of immunoinformatics and computational research techniques to design an epitope-based vaccine is a better option than the conventional method of vaccine design, which takes much more time and shows less efficiency and effectiveness than the epitope-based vaccine [28]. Similar research based on the use of immunoinformatics states that
this approach to vaccine design is leading to promising results against viral infections [29].
A MEPV made from experimentally validated epitopes of HEV can be highly useful in combating HEV infections. By incorporating these specific epitopes into the vaccine, it can effectively target the viral proteins and elicit a targeted immune response. This approach enhances the vaccine's ability to induce a strong and specific immune response against HEV, potentially providing better protection against the virus. The use of experimentally validated epitopes ensures that the vaccine is designed based on reliable and accurate information, increasing its efficacy and safety. In this study, we picked the experimentally validated epitopes of HEV from ViPR and shortlisted 7 vaccine candidates conforming to the required physiochemical properties to design an MEPV against HEV infection. MEPVs are proficient enough to activate both types of immune response, cellular and humoral, proving themselves better than monovalent vaccines [30]. The epitopes used in our study are already experimentally validated epitopes of HEV, so they were directly scrutinized for epitope prioritization. Epitope prioritization is the process of selecting potential epitopes and evaluating them against different parameters: the epitopes were subjected to antigenicity, allergenicity, toxicity, solubility, and IFN-gamma positive/negative checks. Epitopes obtained after prioritization are regarded as safer, more efficient, and capable of inducing long-lasting immune responses [31].
The potential immunogenic epitopes shortlisted after these checks were linked together to design the vaccine construct. A TLR-4 agonist was used as an adjuvant in the vaccine construct designed for HEV infection. The adjuvant was linked to the epitopes via the EAAAK linker, while AAY linkers were used for the epitope-epitope connections. Adjuvants help boost the vaccine's effectiveness and functionality. Linkers avoid overlap between epitopes and confer stability on the structure; they also play a role in activating the immune system [32]. The TLR-4 agonist, specifically involved in the activation of T cells, was used as an adjuvant and helps raise the immunogenicity of the vaccine. It is considered safe, supporting the efficacy and effectiveness of the vaccine [33]. Global applicability of a vaccine supports its utility; therefore, population coverage analysis was performed, and the vaccine covered 99.74% of the world population. The physiochemical properties of the vaccine construct were evaluated: the molecular weight was 10 kDa and the GRAVY value obtained from ProtParam was -0.463 for the MEPV against HEV. A molecular weight of less than 100 kDa and a more negative GRAVY score indicate a more stable construct [34]. The 3D structure was predicted and verified by plotting and analysing the Ramachandran plot, which shows 88.86% of residues in the most favoured region and 2.5% in the disallowed region, indicating good quality. However, the best structure quality is obtained when 90% of residues are in the Rama-favoured region and the disallowed region contains 0 residues [34]. The MEPV against HEV infection designed in this study was then subjected to molecular docking. The HEV vaccine was docked with TLR-3 (a human immune receptor) using PatchDock, followed by refinement of the docked complexes with FireDock. The docked complex obtained for TLR-3 showed good protein-protein interaction. The global energy predicted for the docked complex was -35.18. Docking with TLR-3 produced the best interaction, supporting the idea
of activating a good immune response. The more negative the global energy of a docked complex, the more effective the docking, resulting in a better docked complex thought to be involved in inducing immunity. Molecular docking is performed to predict the interaction and binding affinity between the ligand (vaccine model) and the receptor (immune receptors such as Toll-like receptors). This interaction evaluates the efficacy of the designed vaccine: the extent to which it can induce immunity and activate the immune system [35].
An MEPV against HEV had been designed previously. In comparison to our study, that work used the capsid protein only, leading to the prediction of helper T-lymphocyte (HTL) epitopes [36]. Furthermore, they performed in vitro analysis based on the results of their in-silico approach, whereas experimentally validated epitopes were utilized in our study.
Conclusion
HEV causes a self-limiting yet acute infection prevalent in developing countries with poor hygiene and sanitation. Immunocompromised individuals are more vulnerable to chronic infection, and it can be fatal to patients with other clinical manifestations. There is no specific treatment for HEV to date, nor any vaccine introduced globally. Conventionally produced vaccines are still under trial; to save time, we propose an MEPV against HEV, which is considered good in terms of efficacy, using an immunoinformatics approach and reverse vaccinology. Several analyses were performed in this study to test the efficacy of the proposed vaccine, and they yielded satisfactory results.
Based on this study, we suggest that the proposed vaccine model is effective in terms of provoking immune responses in humans, but its efficacy might be improved by using additional or different adjuvants. Nevertheless, while these In-silico approaches provide promising results, they need to be validated with experimental findings to draw a conclusive assessment of the immunological response against HEV infection.
Fig 1. Schematic representation of the study. Flow chart of the strategy used to design a multi-epitope peptide-based vaccine against HEV using an In-silico approach. https://doi.org/10.1371/journal.pone.0294663.g001
Fig 2. The world population coverage for MHC I was found to be 98.55%, depicted in Fig 3A, and 81.81% for MHC II, as shown in Fig 3B, while class-combined population coverage was 99.74% (Fig 3C).
Fig 4A is a depiction of the vaccine's amino acid sequence. The amino acids in the favoured regions of the vaccine model are 86%, with 8.9% in the additional allowed region, while the Ramachandran disallowed region shows 2.5% of amino acids (Fig 4C). The secondary structure predicted for the vaccine model has 7 helices, 12 helix-helix interactions, 5 beta-turns, and 2 gamma-turns, as in Fig 4B.
Table 1. Selected epitopes used to design the multi-epitope vaccine construct. https://doi.org/10.1371/journal.pone.0294663.t001
The 3D structure is given in Fig 5A. Binding energies for the top 10 models are shown in Table 2. Model 1 was selected using the criteria of low MolProbity score, low GALAXY energy, and high RMSD value. The values for RMSD, MolProbity, and GALAXY energy were 2.295, 1.541, and -1794 (the lowest energy score favours the best model), respectively.
The maximum RMSD was 5.1 Å at 50 ns, with few variations (Fig 10A). RMSF indicates the flexibility or fluctuation of atoms within a molecule. The residue fluctuation was observed as 1.4 Å, with a maximum residue fluctuation of 4.5 Å (Fig 10B), indicating that the stability of the vaccine's binding modes was disturbed. Protein compactness and relaxation are predicted by estimating the RoG, which represents the compactness or size of a molecule. The estimated RoG was 55 Å (Fig 10C) and remained stable, confirming the stability of the protein. The B-factor is an estimate of residual deviation driven by temperature, reflecting the thermal motion or flexibility of atoms in a crystal structure. The B-factor was observed as 50 Å² with few variations, with a maximum of 590 Å² (Fig 10D), showing that the binding mode of the vaccine with the receptor was disturbed by temperature. These values help researchers understand the conformational changes, stability, and dynamics of biomolecules. | 2023-12-16T05:14:03.853Z | 2023-12-14T00:00:00.000 | {
"year": 2023,
"sha1": "6d05ac4412e4f1f820497e9525c2b164076eac13",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6d05ac4412e4f1f820497e9525c2b164076eac13",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2410022 | pes2o/s2orc | v3-fos-license | doi:10.5455/vetworld.2013.95-99 Effects of ketamine-xylazine and propofol-halothane anesthetic protocols on blood gases and some anesthetic parameters in dogs
How to cite this article: Alkattan LM and Helal MM (2013) Effects of ketamine-xylazine and propofol-halothane anesthetic protocols on blood gases and some anesthetic parameters in dogs, Vet. World 6(2): 95-99. Abstract Aim: The anesthetic effects and side effects of ketamine-xylazine and propofol-halothane under four different anesthetic protocols were examined in twenty healthy dogs. Methods: Four treatments were conducted using five dogs in each. The first group was treated with ketamine at 15 mg/kg intramuscularly and xylazine at 5 mg/kg. The second group was treated with ketamine-xylazine the same as the first group, but the dogs underwent pneumoperitoneum with CO2. The third group was anesthetized with propofol at 2 mg/kg intravenously with inhalational halothane as maintenance anesthesia. The fourth group was treated the same as the third group but underwent pneumoperitoneum with CO2. The behavioral changes, onset of action, induction time, duration of surgical anesthesia, reflexes, recovery period, and blood gas changes (pH, paO2 and paCO2) were recorded before treatment and at 10 and 30 minutes after treatment. Results: The results showed differences in the quality of anesthesia among the four groups. The onset of anesthesia was shortest in the third group (0.88±0.13 min). There were no significant changes in pH and paCO2 in any of the groups. No adverse reactions or complications were encountered during the anesthesia. The paO2 significantly increased 10 and 30 min after anesthesia in all groups in comparison with the respective pretreatment values. Conclusion: The anesthetic protocol of propofol as induction agent with halothane as maintenance anesthesia induced good quality anesthesia with a short duration of action and rapid, smooth recovery without complications during CO2 insufflation in dogs.
Introduction

Laparoscopy is a minimally invasive technique for viewing the internal structures of the abdominal cavity. The procedure involves distention of the abdominal cavity with gas and then using a rigid telescope placed through a portal position in the abdominal wall [1]. This procedure has the advantage of a relatively non-invasive nature, but it needs technical skill [2]. Before performing any operation, pneumoperitoneum is an essential step, and it is obtained by insufflating the abdomen with carbon dioxide [3]. Blood-gas and acid-base balances give an idea about the biochemical status during pneumoperitoneum. On the other hand, pneumoperitoneum may also cause undesired effects such as transient or permanent arterial thrombosis, bleeding, hematoma, and infection [4,5]. Anesthesia is defined as a state of unconsciousness produced by a process of controlled reversible drug-induced intoxication of the central nervous system [6]. Anesthesia during the laparoscopic procedure is more challenging in patients with diminished opportunities for compensation [13].

Propofol produces satisfactory sedation with good hemodynamic stability and fast, unexcited recovery [8]. Comparing the hemodynamic stability of propofol with that of inhalant anesthetic agents such as sevoflurane, propofol has less haemodynamic stability, so the amount of ephedrine needed to maintain haemodynamic stability was lower during sevoflurane anaesthesia than during propofol anaesthesia [9]. Propofol causes decreases in brain functional integration and simultaneously induces loss of consciousness [10]. It can be mixed with thiopentone with minimal signs of apnea and a smooth induction and recovery [11]. Ketamine is used as an intramuscular general anesthetic. It appears to provide good somatic analgesia but poor visceral analgesia. Ketamine increases muscle tone and may cause rigidity if administered alone, so it is usually combined with drugs with good muscle-relaxant properties, such as alpha-2 adrenoceptor agonists [12].

The present study was conducted to examine the effects of four different anesthetic protocols and the influence of pneumoperitoneum on the outcome of anesthesia in twenty healthy dogs.

Materials and Methods

Ethical approval:

All treated animals received humane care according to the standard local guidelines. The study protocol was approved by the Animal House of the College of Veterinary Medicine, University of Mosul.

Both sexes of mixed-breed adult dogs were used in the current study. The mean ±SE of their weight and ... The dogs were provided with free access to standard chow and tap water prior to the experiment. Twenty dogs were divided into four groups, five dogs in each group. All animals were prepared aseptically for arterial blood sample collection.

First group (G1):

Five animals were premedicated with xylazine at a dose of 5 mg/kg intravenously, then 5 min later with ketamine given intramuscularly at 15 mg/kg. Blood samples were collected into heparinized test tubes under secure aseptic conditions at 10 and 30 min post injection and at recovery for blood gas analysis.

Second group (G2):

Five animals were subjected to the same treatment as the first group, but underwent pneumoperitoneum with CO2.

Third group (G3):

Anesthesia was induced by propofol at 2 mg/kg intravenously. The trachea was intubated to maintain the source of O2, and inhalation with halothane 1-1.5% was used to maintain anesthesia. A Fluotec halothane vaporizer and a closed rebreathing circuit were used.

Fourth group (G4):

Treatment was given the same as in the third group, except that the operative animals underwent pneumoperitoneum with CO2 at a pressure of 12 mmHg and a flow rate of 5 L/min [14,15].

In all groups the criteria recorded included: onset of analgesia, disappearance of the pedal reflex, duration of anesthesia, duration of recumbency, and recovery time. Side effects linked with the administration of propofol were also observed. Two-way analysis of variance (ANOVA) was used to determine the statistical significance. A P value of less than 0.05 was considered significant.

Results

All animals showed anesthesia and lateral recumbency within 1-3 min, but there were wide differences in the quality of anesthesia between the four groups, as manifested by significant differences in onset of action, induction, diminishing of pedal reflexes, duration of action, and recovery time between the first and second groups (Table-1). There was adequate muscle relaxation and analgesia for the surgical procedures to be performed. The time for intubation was 2.5-3.5 min, fast, easy, and convenient, in the third and fourth groups. In spite of the rapid and smooth recovery after propofol anesthesia ...

The result of gas analysis showed no significant difference among the four treatment groups in pH (Table-2). There was no significant change in paCO2 in the second and fourth groups after anesthesia until recovery (Table-3). The paO2 significantly increased 10 and 30 min after anesthesia in all the groups in comparison with the respective pretreatment value (Table-4).

Table-1. The anesthetic quality and parameters of the four protocols in the dogs. The different letters BA and BC mean significant differences between groups at P<0.05.

Table-2. pH value in the four anesthetic protocols in dogs (pre anesthesia at 0 minutes, post anesthesia at 10 and 30 minutes, and at recovery).

Table-3. paCO2 value in the four anesthetic protocols in dogs.

Discussion

General anesthesia was induced by using an inhalational agent or the injectables propofol and ketamine. Ketamine and xylazine were selected because of their safe use for clinical anesthesia in several species of animals. The combination induces rapid sedation with good analgesia and adequate muscular relaxation, which are dose dependent [7]. Balanced anesthesia is characterized by muscle relaxation, unconsciousness, and analgesia induced by a combination of drugs, each having a different predominant mechanism of action. It permits decreases in the doses of the drugs used as well as in their side effects [16]. Total intravenous anesthesia gives a much smoother recovery from anesthesia with minimal postoperative side effects such as vomiting [17]. There is a very low frequency of occurrence of unusual reactions in local dogs premedicated with xylazine [21].

Propofol is highly lipid soluble, which results in rapid blood-brain equilibrium and hence a rapid onset of action. Rapid clearance makes it unsuitable as a single intravenous agent, but it is used for maintenance [22]. We used it with halothane to obtain balanced anesthesia. In spite of these advantages, there are a few signs after using propofol, such as pain on injection, apnea, cyanosis, excitement, retching, and vomiting. Limb withdrawal during propofol injection was considered to be a sign of pain perception [23]. The signs of apnea in this study in the third and fourth groups were alleviated by supplying oxygen [24].

paCO2 increased immediately from 10 min post anesthesia until recovery, especially in the second and fourth groups, but not significantly. The hypercarbia occurs due to the increase of the CO2 level in blood because of hypoxia due to insufflation with CO2 gas [10,25]. This suggests that the bicarbonate level increased due to an increase of the carbon dioxide level of the blood during pneumoperitoneum in dogs. There were no significant changes, especially in the third and fourth groups, due to the use of an endotracheal tube which provided continuous ventilation, so that respiratory acidosis did not develop. It returns to normal at 15 minutes after termination of propofol infusion in buffalo calves [30]. Muscle rigidity occurs due to the effect of ketamine on muscle tone and spontaneous muscle activity. For this reason, ketamine could not be used alone in dogs but can be used with ...
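The two-way ANOVA (anesthetic protocol × sampling time) described in the Methods can be sketched as below. This is only an illustrative computation: the blood-gas values are randomly generated placeholders, not the study's data, and the 4 × 3 × 5 layout (four protocols, three sampling times, five dogs per group) is assumed from the design described above.

```python
# Illustrative two-way ANOVA (protocol x time) with hypothetical data;
# mirrors the Methods' design of 4 groups, 3 sampling times, 5 dogs each.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# data[i, j, k]: protocol i, sampling time j (0, 10, 30 min), dog k
data = rng.normal(loc=7.35, scale=0.05, size=(4, 3, 5))

a, b, n = data.shape
grand = data.mean()
ss_total = ((data - grand) ** 2).sum()
ss_a = b * n * ((data.mean(axis=(1, 2)) - grand) ** 2).sum()  # protocol effect
ss_b = a * n * ((data.mean(axis=(0, 2)) - grand) ** 2).sum()  # time effect
cell_means = data.mean(axis=2)
ss_cells = n * ((cell_means - grand) ** 2).sum()
ss_ab = ss_cells - ss_a - ss_b                                # interaction
ss_err = ss_total - ss_cells                                  # within-cell error

df_a, df_b = a - 1, b - 1
df_ab, df_err = df_a * df_b, a * b * (n - 1)
f_a = (ss_a / df_a) / (ss_err / df_err)
f_b = (ss_b / df_b) / (ss_err / df_err)
p_a = stats.f.sf(f_a, df_a, df_err)
p_b = stats.f.sf(f_b, df_b, df_err)
print(f"protocol effect: F={f_a:.2f}, p={p_a:.3f}")
print(f"time effect:     F={f_b:.2f}, p={p_b:.3f}")
```

An effect is reported as significant when its p-value falls below the 0.05 threshold used in the paper; the sums of squares partition exactly into protocol, time, interaction, and error components.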
Table-4. paO2 value in the four anesthetic protocols in dogs. Different letters mean significant differences between groups at P<0.05. | 2014-10-01T00:00:00.000Z | 2013-04-01T00:00:00.000 | {
"year": 2013,
"sha1": "1237e60c9a59162e21b90965bf969c5b5427bebb",
"oa_license": "CCBY",
"oa_url": "http://www.veterinaryworld.org/Vol.6/February%20-%202013/Effects%20ketamine-xylazine%20and%20propofol-halothane%20anesthetic%20protocols%20on%20blood%20gases%20and%20some%20anesthetic%20parameters%20in%20dogs.pdf",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "1237e60c9a59162e21b90965bf969c5b5427bebb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
269829462 | pes2o/s2orc | v3-fos-license | Novel Endocrine Therapeutic Opportunities for Estrogen Receptor-Positive Ovarian Cancer—What Can We Learn from Breast Cancer?
Simple Summary Low-grade serous ovarian cancer is a rare type of ovarian cancer that usually has an indolent growth and affects younger women. It has markers that suggest it might respond to hormone therapy, but unfortunately, this treatment does not work well for many patients because the cancer becomes resistant to it. We do not fully understand why this happens. In breast cancer, similar resistance mechanisms are studied, so we are exploring whether we can apply what we learn there to improve treatment for this type of ovarian cancer. This review looks at why hormone therapy might stop working in ovarian cancer and explores new ways to make it more effective. The goal is to find better treatment options for patients with advanced low-grade serous ovarian cancer, who currently do not have many choices for treatment. Abstract Low-grade serous ovarian cancer (LGSOC) is a rare ovarian malignancy primarily affecting younger women and is characterized by an indolent growth pattern and high estrogen/progesterone receptor expression, suggesting potential responsiveness to endocrine therapy. However, treatment efficacy remains limited due to the development of endocrine resistance. The mechanisms of resistance, whether primary or acquired, are still largely unknown and present a significant hurdle in achieving favorable treatment outcomes with endocrine therapy in these patients. In estrogen receptor-positive breast cancer, mechanisms of endocrine resistance have been largely explored and novel treatment strategies to overcome resistance have emerged. Considering the shared estrogen receptor positivity in LGSOC and breast cancer, we wanted to explore whether there are any parallel mechanisms of resistance and whether we can extend endocrine breast cancer treatments to LGSOC.
This review aims to highlight the underlying molecular mechanisms possibly driving endocrine resistance in ovarian cancer, while also exploring the available therapeutic opportunities to overcome this resistance. By unraveling the potential pathways involved and examining emerging strategies, this review explores valuable insights for advancing treatment options and improving patient outcomes in LGSOC, which has limited therapeutic options available.
Introduction
Low-grade serous ovarian cancer (LGSOC) is a rare histologic subtype of epithelial ovarian carcinoma, accounting for approximately 10% of all ovarian cancer cases [1]. It is clinically, histologically, and molecularly distinct from the most common type of ovarian cancer, high-grade serous ovarian cancer (HGSOC). Generally, women with LGSOC are diagnosed at a younger age (43-55 years) and have a prolonged overall survival [1]. LGSOC is often characterized by high estrogen and progesterone receptor positivity. Additionally, these tumors usually show an activated mitogen-activated protein kinase (MAPK) pathway with KRAS and BRAF mutations and, unlike patients with HGSOC, demonstrate a wild-type TP53 expression pattern [2].

LGSOC has a good prognosis in the early stages of the disease, but treating advanced and recurrent cases of the disease poses significant challenges. The current treatment approach to LGSOC primarily relies on the treatment strategies of HGSOC, involving debulking surgery and platinum-based chemotherapy, even though LGSOC is considered to be potentially resistant to chemotherapeutic agents. In recurrent cases, reported overall response rates (ORRs) to chemotherapy range from 2.1% to 17% [3,4]. These numbers emphasize that there is a need for the development of improved, more tailored treatment strategies, especially in the recurrent setting, where no standard-of-care treatment is available [5]. These treatment strategies should be focused on the molecular characteristics of LGSOC in order to improve clinical outcomes.

Numerous studies have emphasized the similarities between LGSOC and estrogen receptor-positive (ER+) breast cancer. The first similarity is the expression of the estrogen receptor (ER). The majority of LGSOCs have a strong ER positivity, and most tumors also demonstrate positivity for the progesterone receptor [6,7]. Additionally, both cancer types have a relatively indolent disease course, compared with their hormone receptor-negative or high-grade counterparts. They may progress slowly over time, allowing for long-term management strategies [6].

In early-stage ER+ breast cancer, endocrine therapy has led to significant reductions in both recurrence and mortality rates [8]. However, in the metastatic and recurrent setting, resistance to treatment and disease progression inevitably occurs, despite initial responses to endocrine therapy [9]. The response to endocrine therapy may be influenced by acquired or intrinsic factors that contribute to endocrine resistance. In metastatic or advanced breast cancer, acquired resistance typically emerges after an initial response to therapy, which generally occurs after six or more months of treatment. In contrast, metastatic or advanced breast cancers that are intrinsically resistant may not respond to treatment within a shorter timeframe, typically less than 6 months [10].

According to published data, the level of ER expression has been identified as the most reliable predictor of sensitivity to endocrine therapy in breast cancer [11]. Therefore, endocrine therapy is also commonly used in the treatment of recurrent ER+ ovarian cancer. However, no study has shown that ER positivity is related to the response to endocrine therapy in LGSOC, or ER+ ovarian cancer in general. Reported ORRs to endocrine therapy in LGSOC range between 9% and 14%, with aromatase inhibitors showing the highest ORRs [12,13]. Besides these minimal ORRs, endocrine therapy demonstrates a relatively high clinical benefit of around 60% in LGSOC [12,14].

In ER+ metastatic breast cancer, multiple mechanisms of resistance have been characterized. However, in ER+ ovarian cancer, these mechanisms remain largely unexplored. Given the similarities between ER+ breast cancer and ER+ ovarian cancer, we tried to uncover mechanisms of endocrine resistance in LGSOC. By extrapolating insights from these mechanisms observed in ER+ breast cancer, we can generate potential hypotheses of endocrine resistance in LGSOC that warrant further investigation. Moreover, this review aims to promote the exploration of innovative therapeutic approaches to overcome the challenge of endocrine resistance in rare ovarian cancer, where treatment options are currently limited.
Mechanisms of Estrogen Receptor Signaling
ER signaling is a complex process mediated by various estrogen receptor isoforms, including estrogen receptor alpha (ERα, ESR1) and estrogen receptor beta (ERβ; ESR2), both primarily functioning in the cell nucleus. Estrogen binds to the ER, resulting in receptor dimerization. The ligand-bound ER dimers translocate from the cytoplasm to the nucleus. In the nucleus, ER dimers bind to specific DNA sequences known as estrogen response elements (EREs), located in the promoter regions of the target genes. This ER-mediated transcriptional regulation leads to changes in the expression of target genes involved in various cellular processes, including cell proliferation, survival, differentiation, and metabolism. This classical genomic signaling is further enhanced via G protein-coupled estrogen receptor 1 (GPER1), a membrane-bound protein receptor that binds estrogen and activates multiple downstream signaling cascades such as the MAPK/ERK pathway and the PI3K/AKT/mTOR pathway [15]. This so-called non-genomic signaling can elicit rapid cellular responses, including changes in cell proliferation, migration, and survival [16].
In ER+ breast cancer, estrogen signaling primarily involves the classical ER-mediated transcriptional activation of target genes leading to cancer cell proliferation and survival [16].
In ER+ ovarian cancer, the role of estrogen signaling is less well defined, and other pathways may be more influential in driving tumor growth and progression [17]. In ovarian cancer models, the role of ERα in the growth regulation of ovarian cancer cells has been shown in vitro and in vivo [15].
Possible Mechanisms of Endocrine Resistance
Although the majority of ER+ breast cancers initially respond to endocrine therapy, approximately 15% to 20% of tumors inherently show resistance to the therapy and an additional 30% to 40% develop resistance over time [18]. In LGSOC, endocrine therapy is often used for recurrent disease [12,13]. However, more than 85% of recurrent LGSOCs are not responsive to endocrine therapy, demonstrating that endocrine resistance is an even greater challenge in these types of tumors.
Loss of ER Expression
In some cases, breast cancer cells may lose their ER expression over time, resulting in the decreased efficacy of endocrine therapy [18,26]. The transcriptional repression of the ESR1 gene might be caused by epigenetic modifications, including the abnormal CpG island methylation of the ER promoter and histone deacetylation mediated by histone deacetylase enzymes. These modifications result in a condensed nucleosome structure that restricts the transcription process [27]. In ovarian cancer, the loss of ER has not been documented yet.
Mutations in the ESR1 Gene
Mutations in the ligand-binding domain of the ESR1 gene (Figure 1) are observed in resistant breast cancer cells [21,28]. Activating ESR1 mutations can contribute to tumor cell resistance through several mechanisms. In metastatic breast cancer, about 50% of endocrine-resistant cases are associated with an ESR1 mutation. However, merely having an ESR1 mutation is not enough to cause complete endocrine resistance [21,29].
Ligand-independent activation
The most common mutations in the ESR1 gene are D538G and Y537S (Figure 1). They can alter the structure and function of the receptor, leading to reduced binding affinity to endocrine therapies and a decreased response to treatment. These activating mutations in the ligand-binding domain of ESR1, especially in helix 12, are an important mechanism of resistance to aromatase inhibitors [20,30]. Aromatase inhibitors block the aromatase enzyme and, therefore, inhibit the conversion of androgens into estrogens. This mechanism of action leads to a significant decrease in estrogen levels in the body. Normally, ERα requires the binding of estrogen to become activated. However, with an activating ESR1 mutation, the receptor can be constitutively active even in the absence of estrogen. This means that the tumor cells can continue to grow and proliferate, despite the inhibition of estrogen production caused by aromatase inhibitors.
In recent publications, ESR1 mutations have been detected in a subset of patients with LGSOC and uterine endometrioid carcinomas, highlighting the potential significance of these mutations in hormone-responsive gynecological malignancies. There are some case reports of ESR1 mutations identified in LGSOC tumors. More specifically, McIntyre et al. detected an ESR1 Y537S mutation in one patient with LGSOC when analyzing 26 primary tumor samples [19]. Additionally, Stover et al. also identified an ESR1 Y537S mutation in a patient with LGSOC. This patient had a sustained response to endocrine therapy for 5 years but developed progressive disease with an isolated recurrent lesion that harbored an ESR1 Y537S mutation [25]. Gaillard et al. reported the frequency of ESR1 mutations in gynecological malignancies, which was relatively low overall (3.0% of all cases). Whether ESR1 mutations are enriched in low-grade gynecological malignancies could not be determined because of the restricted information regarding tumor histology subtypes [24]. Fader et al. investigated ESR1 Y537S mutations in three patients diagnosed with recurrent LGSOC, but no mutations were detected [31].
Stergiopoulou et al. evaluated the frequency and the clinical relevance of ESR1 mutations in HGSOC. They reported the presence of ESR1 mutations in 9 out of 60 (15%) FFPE samples and in 11 out of 80 (13.8%) circulating tumor DNA (ctDNA) samples from advanced and metastatic ovarian cancer patients. However, these samples included all types of ovarian cancer [32]. Besides this study and the case reports mentioned before, there has been a lack of subsequent reports regarding the identification of ESR1 mutations in ovarian cancer. Therefore, further investigations are needed to explore the prevalence and clinical implications of ESR1 mutations in LGSOC. These ESR1 mutations can be monitored using ctDNA, making it a promising tool to predict endocrine resistance in these patients [33].
Activation of alternative signaling pathways
ESR1 mutations can lead to the activation of downstream signaling pathways, such as the PI3K/AKT/mTOR pathway, which can promote cell survival and growth. The activation of alternative pathways can compensate for the inhibition of estrogen signaling caused by endocrine therapy and render the tumor cells resistant to treatment [21]. Consequently, the effectiveness of endocrine therapy can be compromised when cancer cells harbor activating ESR1 mutations. These mutations allow ER signaling pathways to remain active and alternative growth-promoting pathways to promote tumor growth. To overcome this resistance, strategies may involve targeting these modified signaling pathways and developing novel therapies that specifically target these ESR1 mutations [21].
Crosstalk between ER and Growth Factor Signaling Pathways
As depicted below, breast cancer cells can activate alternative signaling pathways, such as growth factor receptor pathways (e.g., HER2, EGFR, and FGFR), the MAPK/ERK pathway, and the PI3K/AKT/mTOR pathway, due to alterations (mutations or amplifications) in multiple genes. Subsequently, this can activate alternative survival and cell proliferation signals and contribute to endocrine resistance (Figure 2).
Growth factor receptor pathways
Research has shown that the amplification of HER2 (ERBB2), a receptor tyrosine kinase (RTK), can lead to the activation of the alternative MAPK/ERK pathway and PI3K/AKT/mTOR survival pathways. Both intracellular pathways play an important role in gene expression regulation, cellular growth, motility, and survival [34].

ERBB2 amplification also occurs in LGSOC, with reported rates ranging from 1.5% to 11.5% [35,36]. The clinical significance of ERBB2 amplification in LGSOC is not yet fully understood. Anglesio and colleagues found that ERBB2-activating mutations are associated with an increase in the activity of the MAPK/ERK signaling pathway [36]. However, further research is necessary to better understand the role of ERBB2 amplification in LGSOC and to develop new treatment strategies.

Aberrations in components of the MAPK pathway, including NF1, KRAS/NRAS/HRAS, BRAF, and MAP2K1, are often reported in metastatic breast cancer [29,37]. Alterations in these genes may lead to increased or uncontrolled cell proliferation, leading to resistance to endocrine therapy in breast cancer [38]. As mentioned before, LGSOC is also molecularly characterized by MAPK pathway mutations. Several studies have reported KRAS mutations in 16-44% of cases, BRAF in 2-20%, and NRAS in as many as 26% of cases [39]. Further studies are necessary to investigate the correlation between these mutations and endocrine resistance in ER+ ovarian cancer.

The hyperactivation of the PI3K/AKT/mTOR pathway has also been associated with endocrine resistance in ER+ breast cancer [40][41][42]. A study performed by Beltrame et al. in ovarian cancer showed mTOR missense mutations in HGSOC and LGSOC [43]. However, it seems evident that LGSOC is characterized by a relatively low frequency of aberrations in the PI3K/AKT/mTOR pathway [44,45]. The phase II trial in patients with unresectable LGSOC comparing the combination of pimasertib, a MEK inhibitor, with SAR245409, a PI3K inhibitor, to pimasertib alone was terminated early due to high rates of discontinuation and low ORRs [46].
Epigenetic Modification
Endocrine resistance has previously been shown to be associated with epigenetic alterations in breast cancer, including DNA methylation, chromatin accessibility, histone modifications, and the binding of different transcription factors, due to their effect on gene expression [47,48]. Differential DNA methylation has been implicated in endocrine-resistant tumors [49]. For example, in breast cancer, different methylation profiles were found between tamoxifen-sensitive and tamoxifen-resistant cell lines [50]. Another study showed that the hypermethylation of estrogen-responsive enhancers modulates endocrine response in cell lines, which could be used to identify patients who positively respond to endocrine therapy [51]. Furthermore, the methylation profile of estrogen-responsive enhancers can increase during endocrine therapy and differ between endocrine-sensitive and endocrine-resistant patients [52].

In ER+ breast cancer, several preclinical studies and early-phase clinical trials have investigated the efficacy of epidrugs in combination with endocrine therapies or other targeted agents [47]. While some promising results have been observed, further research is needed to determine the optimal treatment combinations, patient selection criteria, and long-term outcomes.

MiRNAs have the capacity to regulate ER expression. In breast cancer, they play pivotal roles in both normal breast development and breast tumor formation [53]. Research states that miRNAs are dysregulated in endocrine-resistant breast cancer [53,54]. Several upregulated as well as downregulated miRNAs and their targets have been associated with endocrine resistance. These identified miRNAs can be useful as predictive serum biomarkers for developing endocrine resistance. In this way, patients who will or will not benefit from endocrine therapy can be determined.
In LGSOC, epigenetics has not yet been explored.
Novel Potential Therapeutic Strategies
In the treatment of ER+ breast cancer, emerging endocrine therapies are being developed to overcome common mechanisms of endocrine resistance, for example, ESR1 mutations; these include combination therapies, next-generation selective estrogen receptor degraders (SERDs) and selective estrogen receptor modulators (SERMs), and other new classes of endocrine therapies [55]. In this regard, it is important to understand that resistance to endocrine therapy can be agent-dependent. For instance, when aromatase inhibitor therapy has failed, tumors can still respond to alternative endocrine therapy approaches, including a different class of aromatase inhibitors (steroidal versus non-steroidal), SERMs (e.g., tamoxifen) or SERDs (e.g., fulvestrant), or other next-generation endocrine therapies [56].
Combination Therapies with Molecularly Targeted Agents
In ER+/HER2- breast cancer, the use of molecularly targeted therapies in combination with endocrine therapy has been widely explored. These compounds include cyclin-dependent kinase 4 and 6 (CDK4/6) inhibitors, mitogen-activated protein/extracellular signal-regulated kinase (MEK) inhibitors, and PI3K/mTOR inhibitors. The main goals are to improve patient outcomes by targeting multiple pathways and to reduce side effects and the development of resistance to therapy [57]. Following encouraging outcomes in clinical trials in ER+ breast cancer, distinct molecularly targeted combination agents have also been or are being tested in patients with LGSOC (Table 1) [58].
CDK4/6 Inhibitors Combinations
In patients with advanced and metastatic ER+ breast cancer, the addition of CDK4/6 inhibitors to endocrine therapy has significantly increased survival rates [59][60][61]. Several trials have investigated the efficacy and safety of CDK4/6 inhibitors, and they have been approved for the treatment of advanced ER+ and early high-risk breast cancer [60][61][62][63].
The results of these trials can form the rationale for designing clinical trials with the same drug combinations in patients with ER+ ovarian cancer. Preclinical research demonstrated the biological activity of palbociclib in ovarian cancer cells with low p16 expression [64]. However, the use of CDK4/6 inhibitors as a single agent in a phase II clinical trial showed limited benefits in recurrent, Rb-proficient ovarian cancer with low p16 expression, with a median progression-free survival (PFS) of 3.7 months [65,66]. In this trial, all types of epithelial ovarian tumors were included; no distinction was made between HGSOC and LGSOC. Nevertheless, combination therapies with CDK4/6 inhibitors seem to have potential in the treatment of LGSOC. Colon-Otero et al. found a promising clinical benefit of ribociclib in combination with letrozole in three out of three patients with recurrent LGSOC, including one complete response and two partial responses [67].
Upcoming and ongoing clinical trials testing the efficacy of CDK4/6 inhibitors in combination with endocrine therapy in LGSOC are listed in Table 1 [68].
MEK Inhibitor Combinations
Previous research has shown that MEK inhibitors increase PFS and ORRs in patients with recurrent LGSOC [69], with ORRs ranging between 15% and 26%. However, there is some evidence that adding another therapy to MEK inhibitors, for example, endocrine therapy or targeted therapy, can improve the ORRs even further [70,71].
In advanced breast cancer, the addition of selumetinib (a MEK inhibitor) to fulvestrant resulted in poor tolerance and a worse disease control rate than fulvestrant alone [72]. The authors hypothesized that selumetinib may have impaired the effect of fulvestrant.
However, in an ovarian cancer mouse model, the addition of a MEK inhibitor to fulvestrant improved the tumor response compared with a MEK inhibitor alone [73]. Additionally, a case report of a heavily pretreated, endocrine- and platinum-resistant patient with LGSOC treated with trametinib and fulvestrant revealed a PFS of 9 months [71]. In an upcoming trial in patients with LGSOC, the combination of regorafenib, a multikinase inhibitor, with fulvestrant will be investigated to provide more insight into the efficacy of MAPK pathway inhibitors in combination with endocrine therapy (NCT05113368).
Additionally, there is some evidence that combining inhibitors of the MAPK pathway with inhibitors of the PI3K/AKT/mTOR pathway could provide benefits in LGSOC because they might have synergistic activity [74]. However, additional studies investigating the efficacy of combined MEK and PI3K inhibition are necessary to evaluate its utility in LGSOC [46].
PI3K/mTOR Inhibitors Combinations
For the treatment of advanced ER+ breast cancer, several clinical trials have evaluated combinations of endocrine therapy and inhibitors of the PI3K/AKT/mTOR pathway, including mTOR inhibitors and dual PI3K/mTOR inhibitors [75]. In the SOLAR-1 trial, treatment with the PI3Kα-specific inhibitor alpelisib in combination with fulvestrant improved outcomes in patients with PIK3CA-mutated ER+ advanced breast cancer [76].
The phase II basket study in ER+ recurrent, metastatic gynecological cancers, including LGSOC, which aims to evaluate the efficacy of letrozole in combination with alpelisib or ribociclib (ACTRN12621000639820), is an expansion of the PARAGON trial, which focused on single-agent anastrozole [13]. It aims to assess whether combining letrozole with alpelisib or with ribociclib results in a higher ORR. Participants will be allocated to one of the two treatment groups based on PIK3CA mutation status.
In the BOUQUET trial, a study evaluating the efficacy and safety of multiple biomarker-driven therapies in patients with recurrent and advanced rare epithelial ovarian tumors, several combination therapies were introduced based on alterations in the tumor DNA.
A detailed overview of the different therapies in the BOUQUET trial is provided in Table 1. However, this trial was prematurely terminated for undisclosed reasons.
Additional clinical trials on the combination of molecularly targeted agents with endocrine therapy are strongly warranted to investigate their synergistic effect in ER+ ovarian cancer. In the future, it is imperative for clinical trials to incorporate a broader array of biomarkers for patient selection. This proactive approach will significantly augment the overall efficacy and clinical benefit for trial participants.
Next-Generation SERMs and SERDs to Overcome Endocrine Resistance
In ER+ breast cancer, next-generation oral SERMs and SERDs are increasingly being applied in clinical research, both as single agents and in combination with established targeted therapies [38]. Additionally, other novel endocrine therapies, including proteolysis targeting chimeras (PROTACs), selective estrogen receptor covalent antagonists (SERCAs), and complete estrogen receptor antagonists (CERANs), are also emerging (Figure 3). SERMs competitively bind the ER and exhibit tissue-dependent agonistic or antagonistic properties. In breast tumors, they exert an anti-estrogenic effect by downregulating the transcriptional activity of EREs. Tamoxifen is the most commonly used SERM in the treatment of ER+ metastatic breast cancer. However, novel agents have recently been developed [55].
SERDs also bind the ER and thereby inhibit receptor dimerization, causing the degradation and downregulation of the ER. Fulvestrant was the first-in-class approved SERD and is used to treat ER+ metastatic and advanced breast cancer, both alone and in combination with CDK4/6 inhibitors. However, fulvestrant seems ineffective in patients with advanced breast cancer and ESR1 mutations [77].
In recent years, multiple novel oral SERMs, SERDs, and other next-generation endocrine therapies have been investigated in clinical trials in ER+ breast cancer [78]. An overview of completed trials of new endocrine drug therapies to overcome endocrine resistance in advanced and metastatic ER+ breast cancer can be found in Table 2. Abbreviations: ER+, estrogen receptor-positive; HER2-, human epidermal growth factor receptor 2-negative; HR, hazard ratio; PFS, progression-free survival; OS, overall survival; SERD, selective estrogen receptor degrader; SERM, selective estrogen receptor modulator.
Overall, these new agents have more potent anti-estrogenic activity and properties to overcome endocrine resistance, including degradation of the ER, higher potency and specificity, and the ability to target ESR1 mutations. These properties could lead to a more effective and durable response in patients with ER+ breast cancer. These novel endocrine agents have shown activity after fulvestrant and CDK4/6 inhibitors, and against mutant ESR1, in breast cancer. Hence, these agents also offer a viable option for determining a 'what comes next' approach in addressing LGSOC.
The phase II ELAINE trial demonstrated a numerically improved PFS and ORR with lasofoxifene compared to fulvestrant. These findings support further investigation of lasofoxifene in this patient population [79].
The phase III EMERALD trial demonstrated improved outcomes with the oral SERD elacestrant in ER+ metastatic breast cancer compared with standard-of-care endocrine therapy, including fulvestrant or aromatase inhibitors. These patients had progressed on one or two prior lines of endocrine therapy and were pretreated with a CDK4/6 inhibitor [80]. Based on these positive results, elacestrant was granted accelerated FDA approval in November 2023 for the treatment of this patient population.
Among the ESR1-mutated population, a longer prior exposure to CDK4/6 inhibitors was associated with an extended PFS. Additionally, encouraging findings with other novel SERDs, including amcenestrant [81], camizestrant [84], giredestrant, imlunestrant, and rintodestrant, have led to the development of several clinical trials in advanced ER+ breast cancer.
PROTACs bind to the ER and recruit the E3 ubiquitin-ligase, leading to ER ubiquitination and subsequent proteasomal degradation of the receptor [86]. SERCAs are a class of compounds that covalently bind to a unique cysteine residue at position 530. This cysteine residue is not present in other hormone receptors, and binding leads to the inactivation of both wild-type and mutant ERs [87]. CERANs inhibit activation function 1 (AF1) and activation function 2 (AF2), both domains of the ER that activate gene transcription. CERANs can bind both wild-type and mutant ERs [88].
However, further clinical trial data need to be collected before any strong conclusions can be drawn about the efficacy and safety of these next-generation anti-estrogens. An overview of ongoing trials of new endocrine drug combination therapies to overcome endocrine resistance in advanced and metastatic ER+ breast cancer can be found in Table 3 (randomized) and Table 4 (non-randomized).
In ovarian cancer, next-generation SERMs and SERDs are still largely unexplored, despite their potential clinical benefit in ER+ breast cancer. It would be of great interest to explore the activity of these agents in patients with LGSOC, especially when resistance to other endocrine therapeutic agents has developed. Abbreviations: CERAN, complete estrogen receptor antagonist; ER+, estrogen receptor-positive; HER2-, human epidermal growth factor receptor 2-negative; PROTAC, proteolysis targeting chimera; SERCA, selective estrogen receptor covalent antagonist; SERD, selective estrogen receptor degrader; SERM, selective estrogen receptor modulator.
Future Directions
Initiating clinical trials to investigate the efficacy of novel endocrine combination therapies previously shown to be successful in breast cancer is one of the most important goals of future research on low-grade serous ovarian cancer, especially in patients who developed resistance after treatment with traditional endocrine therapies. Next-generation endocrine therapeutic agents, designed to specifically target mechanisms of endocrine resistance, are emerging in the field of breast cancer and represent a pivotal area of research in ER+ ovarian cancer.
Patient-derived ovarian cancer organoid models could provide a powerful tool to screen novel endocrine combination therapies in the future [99,100]. These models present a promising platform for preclinical drug testing and screening, accelerating the discovery of effective treatments and the repurposing of existing medicines for new indications, including rare cancers. Ultimately, this may lead to the development of tailored therapeutic approaches and improved outcomes for patients.
Conclusions
Research on the mechanisms of endocrine resistance in ER+ ovarian cancer is currently limited. Although there may be some overlap in the mechanisms of endocrine resistance between ER+ breast cancer and ER+ ovarian cancer, there are also distinct differences. As a result, findings from studies on ER+ breast cancer cannot be directly applied to ER+ ovarian cancer, highlighting the need for further research. Therefore, it is crucial to develop translational research models to identify biomarkers that can predict response or resistance to endocrine (combination) therapies and to define novel treatment strategies.
In ER+ breast cancer, the utilization of endocrine combination therapies has demonstrated enhanced efficacy, resulting in improved control of breast cancer and lowered rates of recurrence. Ongoing research endeavors aim to refine these endocrine combination therapies for ER+ breast cancer, particularly in cases of endocrine-resistant disease. This involves assessing novel agents such as next-generation SERDs and SERMs, as well as investigating alternative combinations.
Exploring next-generation SERDs and SERMs in the context of ER+ ovarian cancer presents a promising area of research. While current clinical data regarding their application in ER+ ovarian cancer remain scarce, ongoing research and clinical trials hold promise for the emergence of innovative and efficacious treatment strategies for this complex disease.
Figure 1. Schematic representation of the estrogen receptor-α (ERα), including the most common LBD point mutations. The structural domains of ERα are shown, including the transcription activation function domains (AF1 and AF2), the DNA-binding domain, the receptor dimerization and nuclear localization (hinge) domain, and the ligand-binding domain. Abbreviations: AF1, activation function 1 domain; AF2, activation function 2 domain; DBD, DNA-binding domain; LBD, ligand-binding domain. Created with BioRender.com.
Figure 2. Activation of alternative pathways. (a) Normal estrogen signaling; (b) crosstalk between growth factor receptor pathways (MEK and PI3K/mTOR pathways) and the estrogen pathway. The estrogen pathway is activated in the absence of estrogen as a mechanism of endocrine resistance. Created with BioRender.com. Abbreviation: ERE, estrogen response element.
Figure 3. Mechanisms of action of different endocrine therapies. (a) Estrogens bind to the ER, promoting receptor dimerization and translocation to the nucleus. The estrogen-bound ER dimer regulates gene transcription, resulting in gene expression and cancer cell growth and survival; (b) aromatase inhibitors block the aromatase enzyme, preventing the conversion of androgens into estrogen and leading to the inhibition of gene transcription; (c) SERMs competitively bind to the ER, leading to the inhibition of ER-dependent gene transcription by recruiting transcriptional co-repressors; (d) SERDs bind to the ER, inhibiting its translocation to the nucleus and promoting its destabilization and degradation; (e) PROTACs mediate an interaction between the ER and the E3 ubiquitin-ligase complex, facilitating ubiquitination of the ER and subsequently its degradation; (f) SERCAs covalently bind to the C530 residue of both wild-type and mutant ERs, leading to the inactivation of these receptors; (g) CERANs bind the ER, leading to the inactivation of AF1 and AF2 of both wild-type and mutant ERs, consequently inhibiting gene transcription. Created with BioRender.com. Abbreviations: AF1, activation function 1; AF2, activation function 2; CERAN, complete estrogen receptor antagonist; ER, estrogen receptor; ERE, estrogen response element; PROTAC, proteolysis targeting chimera; SERCA, selective estrogen receptor covalent antagonist; SERD, selective estrogen receptor degrader; SERM, selective estrogen receptor modulator.
Table 1. Ongoing and upcoming clinical trials with combination therapies for LGSOC.
Table 2. Completed trials in advanced and metastatic ER+ breast cancer with new endocrine drug therapies to overcome endocrine resistance.
Table 3. Ongoing randomized trials in advanced and metastatic ER+ breast cancer with new endocrine drug combination therapies to overcome endocrine resistance.
Table 4. Ongoing non-randomized trials in advanced and metastatic ER+ breast cancer with new endocrine drug (combination) therapies to overcome endocrine resistance.
Prevalence of neck and upper limb musculoskeletal disorders in artisan fisherwomen/shellfish gatherers in Saubara, Bahia, Brazil
This study was conducted in an artisanal fishing community whose main health complaints included musculoskeletal disorders (MSD) attributable to working conditions. The present work estimated the prevalence of neck and distal upper limb MSD among the artisan fisherwomen/shellfish gatherers in Saubara, Bahia, Brazil. This was a cross-sectional epidemiological study involving 209 artisanal fisherwomen/shellfish gatherers. The Brazilian version of the Job Content Questionnaire (JCQ), the Nordic Musculoskeletal Questionnaire (NMQ) and a survey listing physical demands adapted to shellfish gathering were used for the study. The prevalence of MSD in any part of the body, in the neck or shoulder, and in the distal upper limb was 94.7%, 71.3% and 70.3%, respectively. The shellfish gatherers were found to work long shifts despite the high prevalence of MSD. The factors that lead these women to keep performing such activities include the need to make a living and provide food for their families through the sale and consumption of seafood.
Introduction
Few epidemiological studies assessing musculoskeletal disorders (MSDs) in artisan fishermen/shellfish gatherers have been published in the literature. These workers mostly live in traditional communities and engage in informal seafood trade and processing practices 1,2. Work-related MSDs were identified among the main demands of a community of artisan fisherwomen/shellfish gatherers in the course of a research study conducted in Saubara, a municipality on Baía de Todos os Santos (BTS) whose population lives almost exclusively on artisan fishing.
As BTS fishing communities are considered traditional communities, they are culturally differentiated groups who recognize themselves as such. They have their own forms of social organization and occupy and use territories and natural resources as a condition for their cultural, social, religious, ancestral and economic reproduction, using knowledge, innovations and practices generated and transmitted by tradition 3.
Data reported in the Statistical Bulletin of Sea and Estuarine Fisheries of the State of Bahia (Boletim Estatístico da Pesca Marítima e Estuarina do Estado da Bahia) 4 indicate that the total annual production estimated in 2003 for the group of 14 municipalities of Baía de Todos os Santos was 14,413.45 tons of fish, which corresponded to 33.22% of the production estimated for the State in the same year. In 2010, Brazilian fish production was approximately 1,265,000 tons, ranking 19th worldwide that year 5. Approximately 45% of the annual production comes from artisan fishing 6. The vast majority (75%) of the fish produced in the Northeast comes from artisan fishing, according to the diagnosis of fishing in Brazil 7.
Despite their large contribution to Brazilian fishing, artisanal fishing communities are generally among the poorest segments of the population, which may be explained by their dependence on exploiting a limited natural resource and the inherent uncertainty of the fishing profession 8.
The shellfish gatherers of BTS may be characterized as artisan fisherwomen because they perform their work for subsistence or commercial purposes, simply and individually (autonomously) or as a type of family business (as opposed to an industrial company), with family support 9, and are responsible for their own work tools and all stages of the production process 10. The work of shellfish gatherers ranges from the preparation of materials for shellfish gathering to the final product for sale and is performed in dwellings, outbuildings and outdoor environments.
Shellfish gatherers have no vacations, weekly rest or paid holidays, according to Pena et al. 10. Their illness may cause losses at work, compromising their food security.
The activities performed by shellfish gatherers along the Brazilian coast, including the gathering of crustaceans and mollusks by hand, may cause health problems for these workers, according to Rios et al. 11. They are subjected to muscle strain in the neck, shoulders, back, upper limbs and lower back and to repetitive strain injury of the wrist. Thus, the activities performed by shellfish gatherers pose an ergonomic risk for these workers 10.
MSDs are found worldwide, in both industrial and non-industrial groups 12, and there is growing concern about their social and economic consequences, especially in the workplace 13. Although their causes do not result exclusively from occupations or working conditions 14,15, MSDs comprise a key part of all work-related diseases recorded in many countries 14. The National Research Council/Institute of Medicine 16 reports that the onset of MSDs depends on the interaction of three main risk factors: individual factors, mechanical stressors (physical demands) and the psychological characteristics of individuals (psychological demands).
Some authors point to the heterogeneity of MSDs 12,14,17 because they involve different issues and body parts and are common problems in various occupations and work groups 12. These disorders include inflammatory and degenerative conditions that affect muscles, tendons, ligaments, joints, blood vessels, peripheral nerves and nerve roots in different body segments, according to Punnett & Wegman 14.
The presence of MSD symptoms has already been reported among shellfish gatherers and fishermen in general 18-21, fish industry workers 22,23 and rural worker populations 24.
Knowledge of the characteristics of the environments in which shellfish gatherers live, the particularities of these groups of workers and their relationship with the work they perform is necessary to understand the factors that may affect neck and upper limb MSDs in these workers. Artisan fishing is an important activity for Bahia and Brazil; thus, MSDs may have impacts on the economy and food security of these populations.
Thus, the present study aims to identify the prevalence of neck/shoulder and distal upper limb MSDs and their main risk factors among artisan fisherwomen/shellfish gatherers in Saubara, Bahia, Brazil.
methods
This study is part of a larger project titled "Health, Environment and Sustainability of artisan fishing workers" (Saúde, ambiente e sustentabilidade de trabalhadores da pesca artesanal). A literature search and epidemiological research were used in the present study. The literature search, performed in the PubMed database through February 2014, aimed to identify the national and international literature on neck and upper limb MSDs already published. The search for scientific articles was performed using the keywords "musculoskeletal disorders of the upper limbs" AND "occupation" AND "epidemiology". The following inclusion criteria were used: the study preferably involved women, was epidemiological and provided data on neck and upper limb MSD prevalence, the case definition was provided, and the full article was available through the journal portal of the Coordination for the Improvement of Higher Education Personnel (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, CAPES).
This cross-sectional epidemiological study was performed with 209 artisan fisherwomen/shellfish gatherers from the municipality of Saubara, Bahia. The Brazilian versions of the Job Content Questionnaire (JCQ) and the Nordic Musculoskeletal Questionnaire (NMQ) and a questionnaire addressing physical demands 25 adapted to the work of shellfish gatherers were used for this study. Informed consent forms were signed by the participating subjects, and the project was approved by the Research Ethics Committee (Comitê de Ética em Pesquisa, CEP) of the School of Medicine, Federal University of Bahia.
Population and area
Saubara is a city located 94 km from Salvador by highway and less than 20 km by sea, in the interior of BTS and near the mouth of the Paraguaçu River. The city covers an area of approximately 163 km² and includes the villages of Cabuçu, Bom Jesus dos Pobres and Araripe 26. According to the 2010 census, it has a population of 11,201 inhabitants 27, including 48.9% men and 51.1% women. The economically active population (EAP) of Saubara consists of 5,196 people 27. Thus, the 568 artisan fishermen registered in the association of shellfish gatherers correspond to 11% of the EAP of Saubara. These data show the importance of artisan fishing, considered one of the main economic activities for the municipality.
sampling and inclusion criteria
Simple random sampling without replacement was used, with individuals drawn from the total number of shellfish gatherers registered in the Association of Artisan Fisherwomen/Shellfish Gatherers of Saubara (Associação de Pescadoras Artesanais/Marisqueiras de Saubara). A 50% prevalence, a 5% error and the total population (N) of 426 artisan fisherwomen registered in the association were used to calculate the sample, according to the equation for determining sample size (n) based on the estimated population proportion. The final sample consisted of 209 shellfish gatherers, 3% over the expected minimum sample.
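The sample-size equation itself is not written out in the text; the sketch below assumes the standard formula for estimating a proportion with finite population correction, which reproduces a minimum sample that the reported 209 exceeds by about 3%:

```python
import math

def sample_size(population, p=0.5, error=0.05, z=1.96):
    """Minimum sample size for estimating a proportion, with finite
    population correction (assumed formula; not stated in the text)."""
    n0 = (z ** 2) * p * (1 - p) / error ** 2   # infinite-population size (~384)
    n = n0 / (1 + (n0 - 1) / population)       # finite population correction
    return math.ceil(n)

# N = 426 registered shellfish gatherers, 50% prevalence, 5% error
minimum = sample_size(426)
print(minimum)  # 203; the 209 interviewed are about 3% above this minimum
```

Using p = 0.5 maximizes p(1 − p) and therefore gives the most conservative (largest) sample size when the true prevalence is unknown.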
The inclusion criteria were being female, 18 years of age or older and active in shellfish gathering for at least one year, because this activity is primarily performed by women in this community. To minimize the healthy worker survivor effect, workers who were drawn but were no longer working in shellfish gathering could still participate if they reported having left the profession because of diseases possibly related to MSDs.
Data were collected from April 10 to May 10, 2013, forming a primary-source database. The questionnaire used included the main risk factors reported in the literature. The following items were included: identification; sociodemographic characteristics; job information; current and past occupational history; time worked in shellfish gathering; daily work hours; lifestyle, including tobacco smoking, alcohol consumption, use of medication, and physical activity; comorbidities; housework; musculoskeletal symptoms; and physical and psychosocial demands at work. Most data were self-reported, except for weight, height and waist circumference (WC), which were measured by trained interviewers. The weight and height measurements were used to calculate the body mass index (BMI), and WC was measured to assess the accumulation of fat in the abdominal region.
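As an illustration of the anthropometric step, BMI is weight (kg) divided by height (m) squared; the cut-offs below are the standard WHO categories, which the text does not itself specify:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Standard WHO adult BMI categories (assumed; not stated in the text)."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

print(round(bmi(65.0, 1.60), 1))      # 25.4
print(bmi_category(bmi(65.0, 1.60)))  # overweight
```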
Physical demands at work were adapted to the work of artisan fisherwomen/shellfish gatherers from the questionnaire prepared by Fernandes 25. In this adaptation, physical demands were evaluated according to the stages of shellfish gathering (gathering, washing, transport, cooking and processing). The questions covered working postures (sitting, standing, walking, crouching, bending over at the waist, twisting the torso, holding the arms aloft), repetitive and precise hand movements, arm muscle strength, and load handling. The variables were measured using a 6-point (0 to 5) response scale regarding frequency, intensity and duration.
Data on psychosocial demands, assessed with scores for psychological demand, control and social support at work and job dissatisfaction, were collected using the JCQ 28,29. Exposure to psychosocial demands was rated according to Devereux 30 as high or low. Psychosocial demands were dichotomized by the medians. Shellfish gatherers with high exposure to these demands had a demand score higher than 34, a control score equal to or lower than 66, and a social support score equal to or lower than 13. Low exposure was rated as a demand score equal to or lower than 34, a control score higher than 66, and a social support score higher than 13. At least two of the criteria had to be met for a shellfish gatherer to be rated in each group. Job satisfaction was also analyzed using the median and was stratified as low job satisfaction (satisfaction > 0.40) and high job satisfaction (satisfaction ≤ 0.40).
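The dichotomization rule above can be expressed as a small predicate (a sketch; the cut-offs 34, 66 and 13 are the medians reported in the text, and "high" requires at least two of the three criteria):

```python
def psychosocial_exposure(demand, control, support):
    """Classify JCQ psychosocial exposure as in the study's rule:
    'high' when at least two high-exposure criteria are met."""
    criteria = [demand > 34, control <= 66, support <= 13]
    return "high" if sum(criteria) >= 2 else "low"

print(psychosocial_exposure(demand=40, control=60, support=15))  # high
print(psychosocial_exposure(demand=30, control=70, support=15))  # low
```

Because the low-exposure criteria are the exact complements of the high-exposure ones, meeting at least two high-exposure criteria is equivalent to failing the low-exposure rule.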
The data on musculoskeletal symptoms were collected using the expanded version of the NMQ, an instrument used worldwide in research on musculoskeletal disorders. The presence of pain or discomfort in the previous 12 months in anatomical areas of the musculoskeletal system was also assessed, along with the severity, duration and frequency of these symptoms 31. MSD was defined as pain in the last twelve months lasting at least one week or with a monthly minimum frequency, which motivated the individual to seek medical care, to be absent from work or to change jobs, with grade 3 or higher severity on a scale from 0 to 5. Shellfish gatherers who suffered acute trauma in the segment of interest were excluded from the MSD calculation.
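One reading of this case definition can be expressed as a simple predicate. The function below is an illustrative sketch of the criteria as stated in the text (pain in the last 12 months, lasting at least one week or occurring at least monthly, that motivated care-seeking, absence or a job change, with severity ≥ 3 on the 0–5 scale); all names are ours, and trauma cases are excluded rather than scored.

```python
def is_msd_case(pain_last_12mo, duration_weeks, at_least_monthly,
                care_absence_or_job_change, severity, acute_trauma):
    """Illustrative MSD case predicate per the study's definition.

    Returns None for workers excluded due to acute trauma in the
    body segment of interest, otherwise True/False.
    """
    if acute_trauma:
        return None  # excluded from the MSD calculation
    return (pain_last_12mo
            and (duration_weeks >= 1 or at_least_monthly)
            and care_absence_or_job_change
            and severity >= 3)
```

For example, a worker with two weeks of pain, severity 4, who sought medical care would count as a case, while the same history with severity 2 would not.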
Statistical methods
The measures of central tendency (means, medians) and dispersion (standard deviations) were calculated for continuous variables.Categorical variables were expressed as absolute values and percentages.
The statistical software programs Ri386 version 2.15.2 and Epi Info version 7.1.3.3 were used for the data analysis.
Results
Shellfish gathering in Saubara is a predominantly female activity, encompassing 75% of the individuals registered as shellfish gatherers in the Association of Artisan Fisherwomen/Shellfish Gatherers of Saubara. The sample characteristics are described in Table 1.
The sample predominantly consisted of shellfish gatherers with little schooling who self-reported their ethnicity as black or brown. The age of the respondents ranged from 21 to 68 years. Only 10.1% reported having children younger than two years of age. The income exclusively from shellfish sales ranged from R$0.0 to R$600.0, averaging R$137.1 (SD = 104.7), corresponding to approximately 20% of the minimum wage at the time, which was R$678.0 32.
The experience of shellfish gatherers in the work they perform, their early age at the start of work and the high average daily working hours dedicated exclusively to shellfish gathering are notable among the occupational variables, which are outlined in Table 1. The average time worked was approximately 27 years (SD = 12.9). The average age at the start of work was approximately 13 years (SD = 7.2), with a minimum of 4 years and a maximum of 53 years. The average working day was 8.7 hours (SD = 3.1). There was a high number of weekly hours dedicated to housework in addition to work hours, reported by most shellfish gatherers interviewed (76.6%), characterizing a double work shift. Some study participants (29.2%) reported working in another occupation at the time of the interview, although the vast majority (70.8%) only worked in shellfish gathering.
Among the lifestyle variables, most shellfish gatherers (78.9%) notably consumed alcohol at least once a week. Tobacco smoking was only found in 5.3% (n = 11) of the sample. Most shellfish gatherers (67.5%) reported practicing physical activity during their leisure time, including running, gymnastics, swimming, playing soccer, cycling, walking, and caring for their vegetable garden or backyard, at least three times a week for at least 30 minutes at a time. Overweight (BMI ≥ 25 kg/m²) was identified in 70.3% (n = 147) and obesity (BMI ≥ 30 kg/m²) in 32.5% of the sample. Excess fat in the abdominal area (WC ≥ 80 cm) was present in 74.6% (n = 156) of workers.
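The anthropometric cutoffs used here (BMI = weight in kg divided by height in m squared; overweight at BMI ≥ 25, obesity at BMI ≥ 30, abdominal fat at WC ≥ 80 cm) can be sketched as follows. The function names and example values are illustrative, not taken from the study data.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify(weight_kg, height_m, waist_cm):
    """Apply the cutoffs used in the study: BMI >= 25 overweight,
    BMI >= 30 obesity, waist circumference >= 80 cm abdominal fat."""
    b = bmi(weight_kg, height_m)
    return {
        "bmi": round(b, 1),
        "overweight": b >= 25,
        "obesity": b >= 30,
        "abdominal_fat": waist_cm >= 80,
    }

print(classify(70, 1.55, 85))
# → {'bmi': 29.1, 'overweight': True, 'obesity': False, 'abdominal_fat': True}
```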
Table 2 outlines the prevalence of pain or discomfort in the last twelve months and the MSDs in any body part (upper limbs, lower limbs or back), in the neck or shoulder, in the wrist or hand, in the forearm or elbow and in the distal upper limbs (wrist or hand or forearm or elbow).
The prevalence rates of musculoskeletal symptoms in the last twelve months and of MSDs detected in some body part were 97.6% (n = 204) and 94.7% (n = 198), respectively. High prevalence rates of neck or shoulder (71.3%) and distal upper limb (70.3%) MSDs were noted.
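The prevalence figures above are simple proportions of the eligible sample (n = 209 overall; per the Table 2 note, trauma exclusions reduce the denominator for some body parts). A minimal sketch, with the function name ours:

```python
def prevalence(cases, sample):
    """Point prevalence as a percentage of the eligible sample."""
    return 100 * cases / sample

# Symptoms in any body part: 204 of 209 gatherers
print(round(prevalence(204, 209), 1))  # → 97.6
# MSDs in any body part: 198 of 209 gatherers
print(round(prevalence(198, 209), 1))  # → 94.7
```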
The physical demands at work (means ± SD) are outlined in Table 3, according to the main stages of shellfish gathering. The main demands identified in the gathering were, in descending order, performing repetitive hand movements (4.55 ± 1.07), using arm or hand muscle strength (4.05 ± 1.14), bending over at the waist (3.94 ± 1.51), physical pressure with hands on a work tool (3.92 ± 1.29) and crouched posture (3.53 ± 1.73). The walking (4.44 ± 1.13), holding the arms aloft (3.54 ± 1.78) and standing (3.22 ± 2.00) postures prevailed during transport, along with the use of arm and hand muscle strength (3.81 ± 1.42) and load lifting (3.60 ± 1.46). The greatest physical demands in shellfish processing were the sitting posture (4.55 ± 0.99) and performing repetitive (4.54 ± 1.06) and precise and fine (3.70 ± 1.81) movements. The psychosocial demands and job satisfaction are outlined in Table 4.
High psychosocial demand was detected in 50.7% and low job satisfaction in 56% of the shellfish gatherers interviewed. Only eight of the 69 articles gathered in the literature review met the inclusion criteria (Table 5).
Discussion
High prevalence rates of neck or shoulder and distal upper limb MSDs were assessed in artisan fisherwomen/shellfish gatherers. Almost all shellfish gatherers reported pain or discomfort in any body part in the last year. Only 2.9% (n = 6) of shellfish gatherers with symptoms had no MSDs when applying the severity-rating criterion (higher than or equal to 3 on a scale from zero to 5). This finding illustrates the importance of this painful condition for the population of shellfish gatherers in Saubara.
Lower prevalence rates than those reported in the present study were reported among all the studies assessed that evaluated shellfish gatherers 20, commercial fishermen 21, fishing industry workers 22,23, workers from other branches 30,33,34 and rural populations 24, according to MSDs in some body part, in the neck or shoulder or in the distal upper limbs. Symptoms retrieved from self-reported musculoskeletal pain, per body part, were rated and assessed differently from this study, which considered stricter case definition criteria. However, the high prevalence rates of musculoskeletal pain among shellfish gatherers from another continent, with different cultures, work regulations and hours, are highlighted 20.
Self-reported symptoms may often be more informative than physical examination, according to Punnet & Wegman 14. The authors argue that objective measures are extremely useful in establishing a safer diagnosis, but subjective measures best capture the impact on the patient. Some authors mention the difficulty of comparing upper limb MSD studies 12,35,36. The comparison difficulties compromise assessing the extent of the problems because the case definitions, diagnostic criteria 12,36 and official statistics differ between studies 12. The shellfish gatherers of Saubara noticeably had higher MSD prevalence rates than the other workers, from rural to industrial, even with the stricter MSD rating of the present study. The different MSD ratings and prevalence rates per body part are outlined in Table 5.
In a population of rural Greek workers, 82.6% of respondents reported at least one musculoskeletal problem during the previous year, and 48.1% of subjects reported limited activity because of their symptoms during the same time period 24 .
The study by Andersen et al. 33 showed that nursing assistants and cleaning staff complained of pain in both functional units more than the other workers. The occurrence of pain among these workers was much lower than that found among the shellfish gatherers in Saubara, although still considerable. Chiang et al. 22 highlighted that the greater the use of strength and the performance of tasks involving repetitive movement, the more musculoskeletal symptoms would be reported. Attention was drawn to the shoulder disorders, which nearly tripled in group 2 compared to group 1 (Table 5). The prevalence rates of self-reported symptoms were highest in group 3 for all body parts assessed.
Shellfish gatherers are highly vulnerable to ergonomic hazards at all stages of shellfish gathering. Gathering, transport and processing were considered the most important stages because they require longer time dedicated to the task and greater volume and workload. Bending over at the waist and/or crouching while performing repetitive movements are the postures most used in gathering. Hand and arm strength are also used when working with tools and lifting loads, in addition to posture, according to Pena et al. 10: The work of shellfish gatherers on sandy beaches and in mangroves consists of walking, wherein dorsiflexion is sustained for extended periods of time. They roam and dig with fast-paced movements of the upper limbs, almost always in dorsiflexion, moving over rocks and sandy beaches, under the intense sun and with eyes fixed on the sand to identify seafood.
In transport, shellfish gatherers usually carry what they gather in overhead buckets, walking from their workplace to their homes. The postures of holding the arms aloft, standing and walking, the use of muscle strength and the lifting of loads were the physical demands with the highest means, which statistically characterize this stage. In processing, shellfish gatherers remain seated, performing repetitive and precise movements almost constantly until the end of this stage. Pena et al. 10, in their study with shellfish gatherers from Ilha de Maré, Bahia, observed muscle overload in the neck, shoulders, back, upper limbs and lower back and repetitive strain injury of the wrist. According to Andersen et al. 33, physical demands at work are related to worsened pain in specific areas. Symptoms, injuries and disabilities have different meanings among individuals, determining a wide variety of psychological and social responses 16. The explanation for the involvement of psychosocial factors in the onset of MSDs relates to muscle tension secondary to stress 37,38. The literature demonstrates that MSDs affect more women than men, and studies should consider work demands according to gender 24,39.
The predominance of women in shellfish gathering activities has been measured or reported in other studies 10,40-42, except in an article with workers - mostly men (88.4%) - who performed fishing and shellfish gathering activities in the sea 21.
Rheumatic disorders were the most reported comorbidities (17.2%) among the shellfish gatherers of Galicia 20. The prevalence of diabetes mellitus in the present study was higher than that reported for shellfish gatherers in Galicia, where the illness was reported by 3.6% (n = 33) of the sample 20. Conversely, the prevalence of diabetes mellitus in the sample of shellfish gatherers was lower than the percentage of adults (35 years or older) who reported having diabetes in the year 2012, according to the total and the five Brazilian regions 43. The difficulty that shellfish gatherers from Saubara experience accessing healthcare services may contribute to the failure to diagnose these diseases. The values are substantial, even with this difficulty, indicating the importance of health actions for these communities.
The overweight and obesity prevalence rates of the study were much higher than the respective prevalence rates among Brazilian women aged 18 years or older (47.5% and 17.9%, respectively, in 2012) 43. "A striking level of overweight and obesity" was noted in the community of shellfish gatherers and artisan fishermen of Ilha de Maré, located in Baía de Todos os Santos. Many shellfish gatherers are obese but do not always feel sick 44. Overweight is described in the literature as a factor related to upper limb MSDs 16,33.
Although no report of the role of income in MSD development was found during the literature review, its importance for these populations is emphasized. According to Dias et al. 45, the average monthly income from the shellfish gathering activity was 108.00 Brazilian Reais per month, a value lower than the average income resulting from the sale of seafood in the present study. Pena et al. 10 reported an even more extreme situation regarding income from shellfish gathering (approximately 50 Brazilian Reais per month). According to Pena et al. 10, social misery imposes an intense work pace to generate more products for sale.
Similar to the study conducted by Pena et al. 10, the shellfish gatherers of Saubara are also responsible for their work tools and for all stages of production. They have autonomy to decide on their work. However, these women perform their tasks even in the presence of pain to ensure the livelihood that comes from the sea. The shellfish gatherers of the present study noticeably used their production not only to gain income but also to ensure daily access to nutrients. Only the surplus production is sold to middlemen.
The way in which the shellfish-gatherer work is developed and the individual characteristics are important for MSD occurrence. These workers constitute the management forces for their own work and showed great experience in the activity. Although they have autonomy to perform their activities, these shellfish gatherers work notably long hours, even with high prevalence rates of MSDs, demonstrating that the need to ensure the livelihood and food security of their families by selling and consuming seafood is among the determining factors of the permanence of these people in the activity.
Table 2. Prevalence of pain and MSDs in any body part (upper limbs, lower limbs or back), in the neck or shoulder and in the distal upper limbs in a sample* of shellfish gatherers from Saubara, BA, 2013. * Shellfish gatherers who suffered acute trauma in the body part of interest were excluded from the MSD calculations. Therefore, the numbers of shellfish gatherers of the sample for wrist or hand and forearm or elbow MSDs were 197 and 202, respectively. The sample remained the same (n = 209) for the other ratings.
Table 3. Physical demands at work, according to the main work stages of a sample (n = 209) of shellfish gatherers from Saubara, BA, 2013.
Table 4. Psychosocial demands and job satisfaction in a sample (n = 209) of shellfish gatherers from Saubara, BA, 2013.
Table 5. Prevalence and rating of the musculoskeletal symptoms or neck and distal upper limb MSDs from self-reports in epidemiological studies with various categories of workers. [Table 5 entry fragment: fish processing workers from eight small- to medium-sized factories on the outskirts of the Kaohsiung port in Taiwan; the sample was divided into three groups: G1, low repeatability and low use of strength (managers, office personnel and specialists); G2, high repeatability and high use of strength (semiskilled workers working on conveyor belts, fish processing and packers); G3, high repeatability and strength (workers who cut, separated and rated the fish or seafood); cross-sectional study; Chiang et al. (1993).]
"year": 2015,
"sha1": "a4e6cda4ad43517de0f05241e36a2b430bff1d5c",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/csc/a/MkvXZWD8qvyVCskcst7Rv7h/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ae62d94847ef997921c7d3c110bcf2d130d46f12",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Aβ(M1–40) and Wild-Type Aβ40 Self-Assemble into Oligomers with Distinct Quaternary Structures
Amyloid-β oligomers (AβOs) self-assemble into polymorphic species with diverse biological activities that are causally implicated in Alzheimer's disease (AD). Synaptotoxicity of AβO species depends on their quaternary structure; however, the low abundance and environmental sensitivity of AβOs in vivo have impeded a thorough assessment of structure–function relationships. We developed a simple biochemical assay to quantify the relative abundance and morphology of cross-linked AβOs. We compared oligomers derived from synthetic Aβ40 (wild-type (WT) Aβ40) and a recombinant source, called Aβ(M1–40). Both peptides assemble into oligomers with common sizes and morphology; however, the predominant quaternary structures of the Aβ(M1–40) oligomeric states were more diverse in terms of dispersity and morphology. We identified self-assembly conditions that stabilize high-molecular-weight oligomers of Aβ(M1–40) with apparent molecular weights greater than 36 kDa. Given that mixtures of AβOs derived from both peptides have been shown to be potent neurotoxins that disrupt long-term potentiation, we anticipate that the diverse quaternary structures reported for Aβ(M1–40) oligomers using the assays reported here will facilitate research efforts aimed at isolating and identifying common toxic species that contribute to synaptic dysfunction.
A major goal in efforts to understand mechanisms of synaptic dysfunction caused by endogenous AβOs is the preparation of AβOs in vitro that are structurally and functionally similar to their in vivo counterparts. However, AβOs self-assemble via a dynamic equilibrium process, which is characterized by unstable intermediates that interconvert and structurally rearrange upon isolation. Self-assembly is sensitive to environmental conditions [19,20], and experimental handling influences the relative abundance of different oligomeric intermediates [21][22][23]. Therefore, the structures of biologically active AβO assemblies cannot easily be separated for structural characterization, and conditions used to generate different AβO intermediates are difficult to reproduce between laboratories. Therefore, robust methods for generating and analyzing AβO structure-function relationships would accelerate mechanistic studies towards understanding AβO toxicity.
The recombinant Aβ expression system developed by Walsh and coworkers represents a high-yielding source of Aβ that can be produced quickly and in high purity [24][25][26]. The peptide sequence is called Aβ(M1-40) due to the presence of an N-terminal methionine introduced at a start codon. Aβ(M1-40) exhibits fibril aggregation rates that are kinetically faster than those of synthetically prepared wild-type Aβ40 (herein referred to as WT Aβ40). Nevertheless, oligomers derived from both Aβ(M1-40) and WT Aβ40 potently disrupt long-term potentiation [16]. Structurally, the conformation of di-tyrosine cross-linked Aβ(M1-40) dimers differs from that of both WT Aβ40 and oxidized Aβ(M1-40)S26C dimers, and they form different aggregated end-products. These data suggest that their shared synaptotoxicity may be a function of a common aggregation intermediate that features shared quaternary structural elements. Given the importance of endogenous Aβ40 dimers in disrupting synaptic function [12,16,27], the Aβ(M1-40) monomer might also adopt quaternary structures with biological activity that mirror native AβOs in vivo.
Here, we asked whether common oligomer quaternary structures were shared among intermediates derived from the self-assembly of monomeric WT Aβ40 and Aβ(M1-40). We cross-linked oligomer samples after they were self-assembled to stabilize their native morphologies. We developed a simple assay to characterize the morphology of low-molecular weight (LMW, 8-36 kDa) and high-molecular weight (HMW, 36-250 kDa) species. Under our assay conditions, WT Aβ40 self-assembled into 2mer to 5mer oligomers and Aβ(M1-40) oligomers assembled into a polydisperse mixture of LMW and HMW oligomers. The predominant morphologies of AβOs derived from the two peptides were also distinct, and common intermediates were only present at low concentrations. Using this assay, we rapidly identified oligomer self-assembly conditions that favored HMW Aβ(M1-40) oligomers. Taken together, our results provide evidence that the major oligomeric products of Aβ(M1-40) self-assembly are morphologically dissimilar from those generated by the self-assembly of WT Aβ40, though self-assembly conditions, such as the addition of biomimetic amphiphiles, significantly alter the relative abundance of different quaternary structures. We anticipate that the methods we report to characterize AβO morphologies will find general utility in preparing diverse quaternary structure variants that assist in reconciling mechanisms of pathophysiology during AD.
Expression and Purification of Monomeric Aβ(M1-40)
Aβ(M1-40) was expressed using the pET(Aβ1-40) Sac plasmid (provided by D. Walsh) transformed into BL21(DE3)-pLysS competent E. coli. Denatured inclusion bodies from expression lysates were purified by anion-exchange followed by gel filtration chromatography as described previously [25]. Synthetic WT Aβ40 was purified on a preparative Superdex HiLoad 16/60 column and eluted with a retention volume of approximately 80 mL (Figure 1A), which was confirmed to be monomeric by SDS-PAGE analysis (Figure 1B). Aβ(M1-40) also eluted at approximately 80 mL retention volume. The peak in Figure 1C labeled as "Aβ(M1-40)" was collected, and the presence of a protein with a molecular weight (MW) of approximately 5 kDa was confirmed using SDS-PAGE (Figure 1D). LC-MS characterization indicated that the purity of both WT Aβ40 and Aβ(M1-40) was approximately 95% (Supplementary Figures S1-S3). Self-assembly studies were therefore carried out without further purification.
WT Aβ40 Oligomers Assemble with Two Distinct Morphologies
WT Aβ40 oligomers were generated from monomeric peptide diluted to 100 µM in phosphate-buffered saline (PBS) for 24 h. Relative oligomer abundances were analyzed using SDS-PAGE (Figure 2A). Because LMW oligomers are sensitive to SDS-PAGE conditions, oligomer mixtures were cross-linked using photo-induced cross-linking of unmodified proteins (PICUP) [28,29]. PICUP covalently cross-links closely associated heteroaromatic sidechains during photo-irradiation, preventing dissociation of oligomers under denaturing conditions [30][31][32]. WT Aβ40 oligomers that were not subjected to PICUP cross-linking (unmodified WT Aβ40) were sensitive to SDS during PAGE and migrated as monomers and, to a lesser extent, dimers (Figure 2A, lane 1). Cross-linked AβOs migrated as bands corresponding to dimer through pentamer assembly states (referred to as 2mer-5mer). Thermal denaturation (lanes 3 and 4) did not significantly alter the abundance of species that were also observed in lanes 1 and 2. Protein bands were observed in the loading wells that resisted thermal and SDS denaturation, which may be attributed to larger aggregates, such as protofibrils or fibrils with molecular weights (MWs) greater than 250 kDa. These data are consistent with previous WT Aβ40 oligomer cross-linking studies, which found that oligomer distributions resolving as 2mer-5mer when cross-linked instead resolve as monomers by SDS-PAGE in the absence of cross-linking [31,32].
Figure 2. (A) Proteins were separated on Criterion™ XT 4-12% bis-tris gradient gels. Samples in lanes 1 and 3 were not covalently cross-linked. Samples in lanes 2 and 4 were subjected to photo-induced cross-linking of unmodified proteins (PICUP). Lanes 3 and 4 were thermally denatured (∆) for 5 min at 95 °C prior to SDS-PAGE separation. (B) SEC chromatogram of cross-linked WT Aβ40 oligomers separated using three analytical gel filtration columns linked in tandem. Approximate molecular weights of globular (top row, boxed labels) and linear (bottom row) standards are listed above the chromatogram. The oligomer assembly state corresponding to each peak is listed above an arrow for globular (G1, G2, etc.) and linear (L1, L2, etc.) morphologies.
SDS reportedly perturbs AβO assembly states in the absence of covalent cross-linking, so we analyzed cross-linked WT Aβ40 by analytical size-exclusion chromatography (SEC). WT Aβ40 oligomers were cross-linked as described above and separated without denaturation on a tandem gel filtration chromatography system equipped with one Bio-Rad ENrich 650 and two Bio-Rad ENrich 70 columns linked sequentially to improve separation of LMW from HMW AβOs. AβO morphology impacts migration through sepharose columns, resulting in retention volumes that correlate with linear dextran polymer standards [16,17]. Thus, we assigned SEC peaks based on retentions that correlated either to a globular protein or linear dextran MW standard calibration curve (Figures S4-S6). Cross-linked WT Aβ40 oligomeric states migrated predominantly as globular monomers and dimers in the chromatogram shown in Figure 2B. A peak shoulder that eluted at 40 mL, corresponding to a globular tetramer, was present at low concentrations. However, a distinct globular trimer peak was not observed despite the observation of an intense band corresponding to trimer in the SDS-PAGE gel shown in Figure 2A. AβOs that correlated more closely to linear dextran, or "linear AβOs", were also observed in low quantities and were retained as putative 2mer, 6mer, and higher-order oligomers ("12+"mers), though no linear trimer was observed. The oligomeric species eluting at 36.5 mL could not be unambiguously assigned, as both calibration curves predicted a globular 9mer or linear 2mer at this retention time. The SEC platform thus complements SDS-PAGE characterization of oligomeric states by detecting morphological differences and low-abundant AβO quaternary structures.
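SEC peak assignment of this kind typically relies on the roughly linear relationship between log(MW) and retention volume for a set of standards. The sketch below fits such a calibration curve and converts an apparent MW into an oligomer order, assuming a WT Aβ40 monomer of ~4.3 kDa. The standard points and query volume are invented for illustration; they are not the calibration data used in the study.

```python
import math

# Illustrative calibration standards: (retention volume in mL, MW in kDa)
standards = [(28.0, 250.0), (32.0, 75.0), (36.0, 25.0), (40.0, 8.0), (44.0, 2.5)]

def fit_calibration(points):
    """Least-squares fit of log10(MW) versus retention volume."""
    n = len(points)
    xs = [v for v, _ in points]
    ys = [math.log10(mw) for _, mw in points]
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def apparent_mw(volume, slope, intercept):
    """Apparent MW (kDa) predicted from a retention volume (mL)."""
    return 10 ** (slope * volume + intercept)

slope, intercept = fit_calibration(standards)
mw = apparent_mw(38.0, slope, intercept)   # apparent MW in kDa
order = round(mw / 4.3)                    # oligomer order for a ~4.3 kDa monomer
print(f"~{mw:.0f} kDa, approx. {order}mer")  # → ~14 kDa, approx. 3mer
```

In practice a peak is assigned against both the globular-protein and linear-dextran calibration curves, which is why a single retention volume can map to two candidate species, as with the ambiguous 36.5 mL peak above.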
Aβ(M1-40) Oligomers Are More Polydisperse Than WT Aβ40
Having demonstrated the utility of PICUP cross-linking followed by SDS-PAGE and SEC quaternary structural analyses, we used these methods to characterize Aβ(M1-40) oligomers formed under the same conditions. SDS-PAGE analysis shows that Aβ(M1-40) assembles into a polydisperse mixture of oligomers, of which only certain species resisted both SDS and thermal denaturation (Figure 3A). For example, SDS treatment denatured unmodified oligomers into 2mers and 3mers (lane 1 versus lane 2), and these bands were more intense after thermal denaturation (lane 1 versus lane 3). Bands in lanes 1 and 2 that migrated with an apparent MW of 25 kDa shifted to a slightly lower MW upon thermal denaturation (lanes 3 and 4). These changes to both cross-linked and unmodified AβOs observed after thermal denaturation may be due to conformational or morphological changes that influence migration in SDS-PAGE, which are independent of cross-linking.
Protein bands corresponding to HMW oligomers were also observed prominently, regardless of whether the sample was treated with cross-linking reagents. Silver staining artifacts precluded quantification of these species by densitometry, though the low intensity of bands corresponding to dimeric and trimeric species in cross-linked samples suggests that these two species comprise the building blocks of HMW Aβ(M1-40) oligomers. Staining of aggregates in the loading well was less intense for recombinant Aβ(M1-40) than for WT Aβ40 (Figure 2A), indicating that higher order aggregates of Aβ(M1-40) are primarily HMW oligomers after 24 h incubation.
We next used analytical SEC to determine whether Aβ(M1-40) oligomer abundances were consistent under conditions that are not expected to denature quaternary structures. Cross-linked samples containing Aβ(M1-40) oligomers were separated using serial analytical SEC columns and analyzed as described for WT Aβ40. Cross-linked Aβ(M1-40) oligomers migrated as linear and globular morphologies, though relative abundances were different for Aβ(M1-40) oligomers (Figure 3A). Similar to WT Aβ40, globular monomers and 2mers were present in the chromatogram shown in Figure 3B. No trimers were observed in the SEC trace after cross-linking, consistent with the SDS-PAGE analysis. A large peak corresponding to either a globular 9mer or linear 2mer eluted at approximately 36 mL. Given that two bands corresponding to 37 kDa aggregates were detected by SDS-PAGE in Figure 3A, this species is more likely to be a globular 9mer than a linear 2mer.
Globular 6mers and aggregates corresponding to HMW oligomers were abundant in the cross-linked Aβ(M1-40) oligomer SEC trace but absent in the WT Aβ40 chromatogram. Linear morphologies unique to Aβ(M1-40) eluted as a broad peak between 30-34 mL as a mixture of quaternary structures. Distinguishable peaks corresponding to linear 9mers and 12mers were among this mixture. Unfortunately, this region also corresponds to globular HMW AβOs. SDS-PAGE protein bands between 60-150 kDa in Figure 3A are expected to be retained between 29.8 and 34.1 mL in SEC. HMW oligomers with globular and linear morphologies likely co-elute under these SEC conditions, making it difficult to resolve the relative abundance of these HMW AβOs. Therefore, the tandem arrangement of sepharose columns used for SEC in this study cannot adequately resolve certain HMW oligomer quaternary structures.
Aβ(M1-40) Oligomer Folding Pathways Are Sensitive to Micellar Amphiphiles
Given evidence that Aβ(M1-40) oligomer synaptotoxicity results from oligomer intermediates that are larger than 2mers but smaller than their aggregation end-products, we assayed conditions that might be used to favor the generation of large quantities of different oligomeric states. We focused on the use of lipid-mimetic surfactants as additives during self-assembly because they have been shown previously to stabilize or inhibit WT Aβ self-assembly [19,22,23]. As initial validation of the screening approach, we co-incubated Aβ(M1-40) with Tween 20 or SDS above their critical micelle concentration (CMC). Tween 20 has been shown to stabilize pre-formed HMW Aβ42 oligomers, whereas SDS has variable effects on WT Aβ42 self-assembly, depending on the concentration of SDS [22]. Therefore, we initially co-incubated SDS at a concentration of 35 mM, which is above its CMC and is the concentration used in SDS-PAGE loading buffers.
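Deciding whether a co-incubation condition is micellar is a threshold check against the surfactant's CMC. A sketch (the CMC values below are literature approximations for water near room temperature, assumed for illustration and not measured in this study; ionic strength, e.g., in PBS, lowers the CMC of ionic surfactants such as SDS):

```python
# Approximate CMCs in mM (water, ~25 °C); assumed values for illustration only.
APPROX_CMC_MM = {"SDS": 8.2, "Tween 20": 0.06}

def above_cmc(surfactant, conc_mm, cmc_table=APPROX_CMC_MM):
    """Return True if the working concentration exceeds the (approximate) CMC."""
    return conc_mm > cmc_table[surfactant]
```

By this check, the 35 mM SDS used here sits well above the CMC, consistent with the text.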
Aβ(M1-40) was incubated at 100 µM in PBS under various conditions listed in Table 1. After PICUP cross-linking, the resultant oligomers were separated on a 12% polyacrylamide gel, which separates HMW species that were not well-resolved by SEC in Figure 3B. After 24 h, samples were cross-linked using PICUP and oligomer dispersity was analyzed by SDS-PAGE (Figure 4A). A qualitative analysis of silver-stained gels showed that most Aβ(M1-40) exists as lower order aggregates ranging from 2mers to 6mers that were not resolvable using the 12% polyacrylamide gels. However, 9mers to 12mers were resolved clearly enough to measure band intensities using densitometry. Tween 20 had the most significant influence on 12mer and HMW oligomer abundances, even after 24 h incubations (lane 3, Figure 4A,B). Figure 4C shows that after 48 h, oligomers were more polydisperse, though only 5mers-12mers could be resolved for quantification using densitometry. Relative oligomer abundances after 48 h were similar to those after 24 h incubations, with the exception of 12mers, which were present at levels nearly six-fold higher than any other assembly state, regardless of whether Tween 20 was added. In the absence of PICUP (lane 9), only 12mers were detected in significant quantities when co-incubated with Tween 20. Co-incubation of Aβ(M1-40) with SDS inhibited assembly of 12mers and other HMW oligomers (lane 8). Therefore, co-incubation with Tween 20 has a stabilizing effect on unmodified HMW, but not LMW, oligomers.
Figure 4 (partial caption). Proteins were separated on 12% Mini-PROTEAN ® TGX™ polyacrylamide gels. (B) Graphical representation of oligomer abundance determined using densitometry after 24 h incubation. ND: no 6mers were detected in lane 1.
(C) Silver-stained 12% TGX polyacrylamide gel of oligomers assembled under conditions 5-9 in Table 1.
Table 1 (column headers only): Condition, Incubation Time (h), Temperature (°C), Co-Incubation Additive, PICUP.
(D) Oligomer abundances from (C) determined using densitometry. Note, the y-axis is segmented due to the intensity of bands corresponding to 12mers.
Temperature increased the rate of oligomer formation in lane 2 of Figure 4A but did not impact the relative distribution of HMW assembly states, particularly after 48 h incubations. Co-incubation of Aβ(M1-40) monomer with concentrations of SDS used during SDS-PAGE inhibited self-assembly of Aβ(M1-40) oligomers. However, once formed, HMW oligomers are stable to SDS denaturation. Many quantitative analyses of native Aβ in tissue are conducted using SDS-PAGE at room temperature in the presence of detergents such as Tween 20. The results in Figure 4 suggest that native oligomer abundance may be perturbed by these conditions. Cross-linking native AβOs prior to analysis may therefore improve the accuracy of quantitative assays.
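The relative abundances plotted in Figure 4B,D reduce to normalizing raw densitometry band intensities within a lane. A minimal sketch (the band names and intensity values are illustrative, not measured values from this study):

```python
def relative_abundance(band_intensities):
    """Normalize raw densitometry intensities to fractional abundances per lane.

    band_intensities: dict mapping band label (e.g., '12mer') to raw intensity.
    Returns a dict of fractions that sum to 1 within the lane.
    """
    total = sum(band_intensities.values())
    if total <= 0:
        raise ValueError("lane contains no signal")
    return {band: intensity / total for band, intensity in band_intensities.items()}
```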
Discussion
Converging evidence implicates specific AβO quaternary structures as pathophysiologically relevant species that induce the synapse loss underlying dementia caused by AD. To unravel AβO structure-activity relationships, AβOs that faithfully model endogenous aggregates must be readily accessible. Recombinant Aβ(M1-40) represents a potential source of AβOs in this regard, as the monomer can be produced in high yields [24][25][26], and oligomers exhibit potent synaptotoxicity. Based on our hypothesis that the LMW oligomers common to both synthetic WT Aβ40 and recombinant Aβ(M1-40) self-assembly pathways contribute to observed synaptic dysfunction, we examined the quaternary structures of oligomers generated from monomers derived from the two peptides under in vitro self-assembly conditions. To overcome the meta-stability of AβOs, they were trapped after 24 h incubation times using PICUP prior to analyses, which enabled characterization of native oligomer orders and morphologies by SDS-PAGE and SEC analyses.
We observed similarities in oligomer size and morphology between WT Aβ40 and Aβ(M1-40) LMW oligomers that may constitute common toxic elements. WT Aβ40 assembled into LMW oligomers ranging from 2mers to 5mers in SDS-PAGE experiments shown in Figure 2A that were sensitive to denaturation by SDS. Aβ(M1-40) assembled into oligomers ranging from 2mers to HMW species (Figure 3A) that could not be easily resolved in the silver-stained gel. SEC separation revealed that morphologies of WT Aβ40 oligomers are primarily globular, while Aβ(M1-40) oligomers adopted abundant linear and globular morphologies (Figure 2B versus Figure 3B). Linear 6mers and 2mers of WT Aβ40 were also observed in SEC traces (Figure 2B) and were likewise present in Aβ(M1-40) oligomer samples. However, these shared quaternary structures were detected at low levels in WT Aβ40 samples, suggesting that common toxic quaternary structures, excluding globular dimers, are present at low concentrations.
A major difference between WT Aβ40 and Aβ(M1-40) oligomers was in the abundance of trimers generated from WT Aβ40 after PICUP modification (Figure 2A) that were absent in lanes containing cross-linked Aβ(M1-40) oligomers. SEC traces of oligomers derived from both peptides also lacked trimers, suggesting that WT Aβ40 aggregates comprising these building blocks are unstable under SDS-PAGE conditions. Given that HMW oligomers of WT Aβ40 are present at low concentrations, the cross-linked trimers in SDS-PAGE gels may be derived from dissociated protofibrils arranged into quaternary structures that cannot be cross-linked using PICUP. Conversely, Aβ(M1-40) trimers are stable to SDS-PAGE in the absence of cross-linking (lane 1, Figure 2A), and likely constitute the HMW oligomers (e.g., 6mers, 9mers, 12mers) observed.
WT AβOs assembled with trimer building blocks have been reported as pathophysiologically relevant species with clinical significance [4,7,33]. For example, Lesné and coworkers described an endogenous AβO species in transgenic AD mice, referred to as Aβ*56, that denatured as trimeric building blocks during SDS-PAGE [4,7]. Aβ*56 is synaptotoxic [8,9], despite being present at only a small fraction of the total Aβ pool in AD mouse brain extracts [4,7]. This species differs from other preparations of putative 12mer AβOs, such as globulomers, which dissociate into tetramers and monomers upon denaturation [14]. Aβ*56 and globulomers are derived from the more toxic Aβ42 isoform, which contains two C-terminal amino acids that are absent in Aβ40. Like Aβ(M1-40), some Aβ42 oligomer pathways produce HMW oligomers. Therefore, despite differences in their N- and C-termini, the resultant quaternary structure adopted by Aβ(M1-40) oligomers could potentially model pathologically relevant Aβ42 oligomers.
In an effort to generate abundant quantities of certain oligomeric states derived from Aβ(M1-40), we screened different self-assembly conditions and analyzed the relative abundance of resulting oligomers. PICUP and SDS-PAGE were used to rapidly assess the self-assembly products and their relative distributions. Previous reports describing WT Aβ42 oligomer self-assembly in the presence of amphiphiles showed that Tween 20 stabilizes HMW oligomers once they form. We found this phenomenon to be consistent for Aβ(M1-40) oligomers, as HMW oligomers were detected at higher levels in samples co-incubated in the presence of Tween 20. The use of common biomimetic additives to perturb recombinant AβOs thus generates diverse collections of AβO quaternary structures in a time- and cost-effective fashion.
In addition to identifying preparation conditions for diverse Aβ oligomeric states, the structural characterization methods reported in the present study are expected to assist in LMW and HMW AβO quaternary structure determination. The sensitivity of the SEC platform used in these studies resolved globular and linear LMW AβOs after cross-linking, which enabled us to detect WT Aβ40 oligomers with linear morphologies that were not clearly visualized using SDS-PAGE analysis with silver staining (Figure 2A versus Figure 2B). Cross-linked oligomer distributions determined using SDS-PAGE generally reflected their dispersity observed in non-denaturing SEC analyses. A caveat, however, is that some higher-order oligomers do not cross-link efficiently using PICUP and denature under SDS-PAGE conditions. Therefore, oligomer order and building block composition determined using non-denaturing SEC analyses are an ideal complement to SDS-PAGE when assigning quaternary structures.
Our results also highlight the difficulty in resolving oligomers of similar molecular weight ranges using a single polyacrylamide matrix. Gradient gels in Figures 1-3 were well-suited for separating LMW oligomers, while 10% tris-glycine gels separated intermediate and HMW oligomers more effectively (Figure 4). The use of silver-staining to detect AβOs provided the necessary sensitivity to visualize low-abundant oligomeric species; however, artifacts introduced during the development process precluded quantification of HMW oligomers by densitometry. This is especially important given batch-to-batch variability and environmental sensitivity of AβO self-assembly. Therefore, gel matrices must be chosen judiciously based on the desired AβO being quantified.
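The matrix choice reduces to matching acrylamide percentage to the target mass range. A rule-of-thumb sketch (the breakpoints below are generic vendor-style guidance assumed for illustration, not values validated in this study):

```python
def suggest_gel_percent(target_kda):
    """Suggest a tris-glycine gel percentage for a target protein mass (kDa)."""
    if target_kda < 20:      # LMW oligomers of a ~4.5 kDa monomer
        return "12-15%"
    if target_kda < 100:     # intermediate and HMW oligomers
        return "10-12%"
    return "4-12% gradient"  # very large aggregates / wide mass ranges
```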
Finally, we did not investigate the physicochemical origins underlying the differences in Aβ(M1-40) and WT Aβ40 oligomer assembly. However, it is surprising that the presence of an N-terminal methionine has such a profound influence on oligomer self-assembly, particularly because NMR structures of Aβ40 fibrils consistently show that the N-terminus is flexible and disordered in these aggregated end-products [34,35]. The diversity of Aβ(M1-40) oligomers observed during our analyses supports previous studies showing that N-terminal modification of Aβ influences oligomer aggregation. For example, molecular dynamics simulations of Aβ dimers predict that the N-terminus stabilizes a β-hairpin between residues 23-27 [36]. The kinetics of β-hairpin folding have been shown experimentally to be a critical step during early stages of Aβ self-assembly [37][38][39]. Moreover, spectroscopic analyses of N-terminal amino acid dynamics in Aβ consistently show that N-terminal residues are involved in early conformational changes during aggregation [40,41]. N-terminal amino acids also play a role in clinical manifestations of AD. Point mutations in this region, such as D7N and H6R, enhance Aβ aggregation propensity in favor of HMW oligomers and lead to early-onset AD [42]. Point mutations that have a neuroprotective effect, such as A2T, reduce Aβ40 aggregation rates [43]. These previous reports implicate the N-terminal methionine as playing a role in Aβ(M1-40) oligomer self-assembly, though the precise interactions that influence the formation of diverse quaternary structures are likely complex and deserve further study.
Materials and Methods
General. All reagents and supplies were used as received unless otherwise noted. Part numbers are listed for reagents that may impact reproducibility of oligomer assembly and characterization studies. Aqueous buffers were prepared using deionized water filtered on a Barnstead™ Pacific TII water purification system. All PICUP and SEC experiments were conducted in a cold room at 4 °C using pre-chilled plastic microcentrifuge tubes and pipette tips and with solutions stored at 4 °C.
Protein expression. The pET-Sac-Aβ(M1-40) was a gift from Dominic Walsh (Addgene plasmid # 71876; http://n2t.net/addgene:71876; RRID:Addgene_71876). The plasmid was transformed into BL21(DE3)-pLysS competent E. coli (Thermo Fisher, cat. no. C600003) [24]. Agar plates containing 50 µg/mL ampicillin and 34 µg/mL of chloramphenicol were streaked with bacterial glycerol stocks and incubated at 37 °C overnight. The following day, individual bacterial colonies were picked and placed in starter cultures consisting of 50 mL of LB broth (25 g/L, Fisher Scientific, cat. no. BP9723) containing 50 µg/mL ampicillin and 34 µg/mL of chloramphenicol in a 250 mL Erlenmeyer flask. The starter cultures were allowed to grow overnight at 37 °C and were shaken at 125 RPM. The next morning, the optical density (OD) at 600 nm of each starter culture was measured using a NanoDrop UV-vis spectrophotometer. Desired ODs ranged from 1.5-1.8 absorbance units (au). A 5 mL aliquot of each starter culture was added to 500 mL of LB broth containing 50 µg/mL ampicillin and 34 µg/mL of chloramphenicol in a 2.8 L flask. The flasks were placed on a shaker at 37 °C at 225 RPM until the OD reached ~0.6 au, which took approximately 4-6 h. Once the desired OD was reached, expression was induced with isopropyl β-d-1-thiogalactopyranoside (IPTG) diluted into the flask to a final concentration of 0.1 mM. Following induction, the bacteria were incubated until growth plateaued as measured using spectrophotometry. Typical incubation times after induction ranged from 4 to 5 h. Cells were pelleted at 4500× g for 12 min at 4 °C in 1 L centrifuge bottles. The resulting pellets were stored in 25 mL of buffer A (25 mM tris-base, pH 8.5 containing 5 mM EDTA) at −80 °C until further use. Following expression in E. coli, inclusion bodies containing the Aβ(M1-40) peptide were solubilized in 8 M urea buffer after a series of sonication and pelleting steps as described previously [24].
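The IPTG spike follows the standard dilution relation C1·V1 = C2·V2. A sketch (the 1 M stock concentration is an assumption for illustration; the protocol states only the 0.1 mM final concentration):

```python
def spike_volume_ml(stock_mm, final_mm, culture_ml):
    """Volume of stock (mL) to add for a target final concentration.

    Uses C1*V1 = C2*V2 and assumes the added volume is negligible
    relative to the culture volume.
    """
    return final_mm * culture_ml / stock_mm

# e.g., a 1 M (1000 mM) IPTG stock into a 500 mL culture for 0.1 mM final
volume_ml = spike_volume_ml(1000.0, 0.1, 500.0)  # 0.05 mL, i.e., 50 µL
```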
Briefly, the cell pellets were thawed at 4 °C and sonicated at 40 W in 30 s intervals for a total of 8 min (4 min total sonication). The resulting solution was centrifuged at 39,000× g for 12 min at 4 °C. Following centrifugation, the supernatant was decanted, and the resulting pellet was resuspended in 25 mL of buffer A. These sonication and centrifugation steps were repeated for a total of three cycles. Following the third centrifugation, the pellet was resuspended in 15 mL of 8 M urea in buffer A. The suspension was sonicated and centrifuged as previously described for one cycle. Following the final centrifugation step, the supernatant containing the solubilized inclusion bodies was decanted and stored on ice. The remaining pellet was discarded. Aβ was further purified using anion exchange chromatography (DEAE-Sepharose Fast Flow Resin, Sigma, DFF100, St. Louis, MO, USA).
Anion exchange chromatography. Approximately 15 mL of DEAE-Sepharose Fast Flow Resin (Sigma, DFF100) was added to a 60 mL fritted syringe. The resin was equilibrated through 10 successive washes of 30 mL buffer A until the flow-through and buffer A pH were equivalent. The 15 mL inclusion body solution was diluted up to approximately 40 mL with buffer A, added to the equilibrated resin, and allowed to incubate for 30 min. The plunger was used to push the solution through the frit until the liquid reached the top of the settled resin. The remaining resin in the syringe was incubated with 25 mL of buffer A for 5 min with gentle shaking. Following incubation, buffer A was eluted using gravity flow and collected. The resin was not allowed to run dry at any point during this process. These steps of incubation and gravity elution were repeated with a 25 mL low salt wash (25 mM NaCl in buffer A) and four 10 mL high salt washes (50 mM NaCl in buffer A). Each elution was characterized by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) followed by silver staining (see below). The elutions containing Aβ(M1-40) as determined by the appearance of a band at approximately 10 kDa were carried forward for further gel filtration purification.
Size exclusion chromatography (SEC) of denatured inclusion bodies. Following anion-exchange chromatography, fractions containing Aβ(M1-40) were dialyzed in SnakeSkin tubing, 3500 MWCO (Thermo Fisher, cat. no. 88242, Waltham, MA, USA) in 4 L of 50 mM ammonium bicarbonate. The ammonium bicarbonate solution was exchanged three times after 8 h of dialysis. Following dialysis, the samples were lyophilized and reconstituted in 5 mL of disaggregation solution (50 mM tris-base, pH 8.5 containing 7 M guanidine HCl and 5 mM EDTA) and incubated overnight at room temperature. Then, 5 mL of the dialyzed sample were injected onto a Bio-Rad NGC SEC system equipped with a GE Superdex HiLoad 16/60 column at 4 °C. The peptides were eluted in 50 mM ammonium bicarbonate, pH = 8.5 at a flow-rate of 0.8 mL/min. Fraction collection was begun after a dead volume of approximately 40 mL and peaks were collected in 2 mL fractions, lyophilized, and stored at −80 °C for future use.
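At a fixed flow rate, elution volumes map directly to run times, which is useful when scheduling fraction collection. A trivial sketch:

```python
def elution_time_min(volume_ml, flow_ml_per_min):
    """Time (min) for a given volume to elute at a constant flow rate."""
    if flow_ml_per_min <= 0:
        raise ValueError("flow rate must be positive")
    return volume_ml / flow_ml_per_min
```

At 0.8 mL/min the ~40 mL dead volume elutes in 50 min, and each 2 mL fraction spans 2.5 min.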
SDS-PAGE analysis using gradient gels: 4-12% Criterion™ XT Bis-Tris Protein Gels (Bio-Rad, cat. no. 3450123). To begin, 16.6 µL of 4X XT Sample Buffer containing 10% 2-mercaptoethanol (Bio-Rad, cat. no. 1610791) was added to 50 µL of each sample in a 0.5 mL microcentrifuge tube. Samples requiring thermal denaturation were heated to 95 °C for 5 min in a heating block (ThermoFisher, cat. no. 88870001). When analyzing non-cross-linked peptides, samples were centrifuged at 2000× g using a benchtop centrifuge for 10 s immediately prior to gel loading. Then, 40 µL of supernatant was loaded into the 45 µL loading well. Gel electrophoresis was performed within a Criterion Cell™ (Bio-Rad, cat. no. 1656001) at 150 V for 60 min using XT MES running buffer (Bio-Rad, cat. no. 161-0789, Hercules, CA, USA). Gels were developed using a Pierce Silver Stain Kit (Thermo Fisher, cat. no. 24612) using the manufacturer's recommended protocol.
WT Aβ40 monomer preparation. Aβ40 was obtained commercially as a film evaporated from a solution of hexafluoro-2-propanol (HFIP) (GenScript, cat. no. RP1004, Piscataway, NJ, USA). Upon receipt, the peptide was reconstituted in 1 mL HFIP, aliquoted into 100 µg fractions, freeze-dried, and stored at −80 °C before use. Thawed aliquots were reconstituted in DMSO to a concentration of 2 mM and then diluted to desired working concentrations using an appropriate buffer.
WT Aβ40 and Aβ(M1-40) oligomer incubations. To begin, 100 µg of either WT Aβ40 or Aβ(M1-40) that had been aliquoted and stored at −80 °C were thawed on ice. While the peptides thawed, detergent stock solutions containing 2% Tween 20 and 20% SDS were prepared in phosphate-buffered saline (PBS). The peptides were diluted in PBS to the desired working concentration and split into 18 µL aliquots in 0.5 mL tubes (Fisher Scientific basix, cat. no. 02-682-000, Waltham, MA, USA). Then, 1 µL of PBS or the appropriate detergent stock solution was added to the peptide solution and the tubes were incubated for 24-48 h at 4 °C.
Oligomer photo-induced cross-linking of unmodified proteins (PICUP). Solutions of 1 mM tris(2,2′-bipyridyl)dichlororuthenium(II) hexahydrate (Ru(bpy)3Cl2; Sigma Aldrich, 544981) and 20 mM ammonium persulfate (APS, Affymetrix, cat. no. 76322, Santa Clara, CA, USA) were prepared in PBS in 0.5 mL microcentrifuge tubes (0.5 mL, Fisher Scientific basix, cat. no. 02-682-000). Tubes were wrapped in aluminum foil prior to the experiment. Notably, APS was prepared fresh daily. To each tube, 1 µL of 1 mM Ru(bpy)3Cl2 was added followed by 1 µL of 20 mM APS. The sample was mixed thoroughly by pipetting the solution ten times using a 10 µL micropipette. Following mixing, the solution was irradiated with a MagLite™ flashlight light source for 10 s with manual rotation of the microcentrifuge tube around the beam of light. Following light exposure, the reaction was quenched with 8 µL of 3X Laemmli SDS sample buffer containing 20 mM of DTT (Fisher Scientific, cat. no. CX25040) if the samples were to be analyzed by SDS-PAGE. Samples analyzed using analytical size-exclusion chromatography (see below) were quenched with 8 µL of 20 mM DTT in PBS. Following oxidative cross-linking, the tubes were stored at −20 °C pending purification via SEC.
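Final reagent concentrations after the two 1 µL additions follow from simple dilution. A sketch (the ~21 µL total reaction volume is inferred from the incubation protocol above, not stated explicitly in the text):

```python
def final_conc_mm(stock_mm, added_ul, total_ul):
    """Concentration (mM) after diluting added_ul of stock into total_ul."""
    return stock_mm * added_ul / total_ul

ru_mm = final_conc_mm(1.0, 1, 21)    # Ru(bpy)3Cl2, ~0.048 mM
aps_mm = final_conc_mm(20.0, 1, 21)  # APS, ~0.95 mM
```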
Analytical size exclusion chromatography (SEC) of cross-linked oligomers. SEC was performed on a Bio-Rad NGC system equipped with one Bio-Rad ENrich 650 (cat. no. 780-1650) and two Bio-Rad ENrich 70 columns (cat. no. 780-1070) linked in series. Each 20 µL sample containing 100 µM Aβ was loaded onto the column and oligomers were eluted in PBS at 0.3 mL/min. The desired peaks were collected in 2 mL fractions after a dead volume of approximately 30 mL, pooled, and lyophilized.
SDS-PAGE separation in 12% polyacrylamide gels for screening oligomer growth conditions. Protein mixtures were separated via SDS-PAGE using 12% Mini-PROTEAN ® TGX™ polyacrylamide gels (Bio-Rad, cat. no. 4561044). Then, 10 µL of 6X Laemmli SDS sample buffer containing 20 mM dithiothreitol (Alfa Aesar, cat. no. J60660, Haverhill, MA, USA) was added to 50 µL of each sample. The protein mixture was thermally denatured by heating the sample in a 0.5 mL microcentrifuge tube at 95 °C for 5 min immediately prior to gel loading. A 43 µL aliquot of sample containing loading buffer was loaded into the 50 µL gel loading well. Gel electrophoresis was performed within a Bio-Rad Mini-PROTEAN ® Tetra Cell (Bio-Rad, cat. no. 1658004) at 200 V for 25 min in a TGX running buffer (25 mM tris-base, 192 mM glycine, and 0.1% SDS, pH = 8.3). Protein bands were visualized using silver staining, and band density was measured using ImageJ analysis software [44,45].
LC-MS analysis of WT Aβ40 and Aβ(M1-40). Prior to LC-MS analysis, peptides were desalted using Pierce™ C-18 Spin Columns (Thermo Fisher, cat. no. 89870) using the manufacturer's protocol. Samples were concentrated using a freeze-dryer. Approximately 20 µg of each peptide was dissolved in 40 µL of 20% acetonitrile (ACN) containing 0.1% formic acid. Then, 8 µL of each solution was injected onto a Thermo Scientific UltiMate 3000 UHPLC equipped with an Agilent Zorbax 3000SB-C3 column (0.3 × 100 mm) connected to a Thermo Scientific LTQ XL™ Linear Ion Trap Mass Spectrometer. HPLC grade ACN and deionized water, each containing 0.1% formic acid, were used as the mobile phase. The sample was eluted at 15 µL/min with a 10-90% ACN gradient over 23 min, at 40 °C. Chromatograms were recorded by measuring absorbance at 215 nm.
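The mobile-phase composition during the linear 10-90% ACN gradient is a straight-line interpolation over the 23 min run. A sketch:

```python
def percent_acn(t_min, start=10.0, end=90.0, duration_min=23.0):
    """%ACN at time t for a linear gradient, clamped to the run window."""
    t = min(max(t_min, 0.0), duration_min)
    return start + (end - start) * t / duration_min
```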
MALDI-TOF mass spectrometry analysis of WT Aβ40 and Aβ(M1-40). Prior to MALDI analysis, peptides were desalted using Pierce™ C-18 Spin Columns (Thermo Fisher, cat. no. 89870) using the manufacturer's recommended protocol. Then, 1 µL of each eluted solution was co-spotted onto a MALDI-TOF sample analysis plate with 1 µL of Super-DHB (2,5-dihydroxybenzoic acid, Sigma Aldrich cat. no. 50862-1G-F) matrix. All analyses were performed using the positive reflector mode collected over a mass range of 1000-6000 Da.
Conclusions
A detailed mechanistic understanding of AβO-mediated neurotoxicity remains unresolved, and mounting evidence suggests that AβO toxicity is dependent on the quaternary structure of individual component oligomers. Accessing AβOs that morphologically and functionally model endogenous AβOs is expected to facilitate efforts towards unraveling disease mechanisms and targeting toxic species therapeutically. As a step in this direction, we showed that the quaternary structures of oligomers generated with recombinant Aβ(M1-40) are polydisperse and morphologically diverse, adopting both globular and linear morphologies. Conversely, the morphologies of WT Aβ40 oligomers are primarily globular, though linear aggregates are observed at low concentrations. We also identified conditions that stabilize HMW Aβ(M1-40) oligomers, which we expect will provide researchers with accessible biomimetic AβO quaternary structure isoforms that model synthetic or brain-derived oligomers for investigating the biological activity of AβO species. Taken together, our results demonstrate that Aβ(M1-40) and WT Aβ40 oligomers share common quaternary structures, though Aβ(M1-40) oligomers are more structurally diverse. These structural similarities and differences deserve further functional characterization to identify relationships between quaternary structure, biological activity and their clinical significance.
Nutri-bullets Hybrid: Consensual Multi-document Summarization
We present a method for generating comparative summaries that highlight similarities and contradictions in input documents. The key challenge in creating such summaries is the lack of large parallel training data required for training typical summarization systems. To this end, we introduce a hybrid generation approach inspired by traditional concept-to-text systems. To enable accurate comparison between different sources, the model first learns to extract pertinent relations from input documents. The content planning component uses deterministic operators to aggregate these relations after identifying a subset for inclusion into a summary. The surface realization component lexicalizes this information using a text-infilling language model. By separately modeling content selection and realization, we can effectively train them with limited annotations. We implemented and tested the model in the domain of nutrition and health – rife with inconsistencies. Compared to conventional methods, our framework leads to more faithful, relevant and aggregation-sensitive summarization – while being equally fluent.
Introduction
Articles written about the same topic rarely exhibit full agreement. To present an unbiased overview of such material, a summary has to identify points of consensus and highlight contradictions. For instance, in the healthcare domain, where studies often exhibit wide divergence of findings, such comparative summaries are generated by human experts for the benefit of the general public. 2 Ideally, this capacity will be automated given a large number of relevant articles and a continuous influx of new ones that require a summary update to keep it current. However, standard summarization architectures cannot be utilized for this task since the amount of comparative summaries is not sufficient for their training.
Figure 1. Pubmed studies on Pears and Cancer. The key facts (bold) and consensus (contradiction) are realized in the text generated by our model.
In this paper, we propose a novel approach to multi-document summarization based on a neural interpretation of traditional concept-to-text generation systems. Specifically, our work is inspired by the symbolic multi-document summarization system of (Radev and McKeown, 1998) which produces summaries that explicitly highlight agreements, contradictions and other relations across input documents. While their system was based on human-crafted templates and thus limited to a narrow domain, our approach learns different components of the generation pipeline from data.
To fully control generated content, we frame the task of comparative summarization as concept-to-text generation. As a pre-processing step, we extract pertinent entity pairs and relations (Figure 1) from input documents. The Content Selection component identifies the key tuples to be presented in the final output and establishes their comparative relations (e.g., consensus) via aggregation operators. Finally, the surface realization component utilizes a text-infilling language model to translate these relations into a summary. Figure 1 exemplifies this pipeline, showing selected key pairs (marked in bold), their comparative relation, Contradiction (rows 1&3 and rows 4&5 conflict), and the final summary. 3 This generation architecture supports refined control over the summary content, but at the same time does not require large amounts of parallel data for training. The latter is achieved by separately training content selection and content realization components. Since the content selection component operates over relational tuples, it can be robustly trained to identify salient relations utilizing limited parallel data. Aggregation operators are implemented using simple deterministic rules over the database where comparative relations between different rows are apparent. On the other hand, to achieve a fluent summary we have to train a language model on large amounts of data, but such data is readily available.
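The deterministic aggregation operators can be illustrated as a grouping over extracted relation tuples. A toy sketch of the idea (the tuple schema and labels are illustrative; the paper's actual relation extractor and operators are richer than this):

```python
from collections import defaultdict

def aggregate(relation_tuples):
    """Label each (entity, condition) pair by comparing extracted directions:
    'consensus' if all input studies agree, 'contradiction' otherwise."""
    directions = defaultdict(set)
    for entity, condition, direction in relation_tuples:
        directions[(entity, condition)].add(direction)
    return {pair: ("consensus" if len(ds) == 1 else "contradiction")
            for pair, ds in directions.items()}
```

For instance, tuples extracted from two studies reporting that pears decrease versus increase cancer risk would label the pair a contradiction, mirroring the conflicting rows in Figure 1.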
In addition to training benefits, this hybrid architecture enables human writers to explicitly guide content selection. This can be achieved by defining new aggregation operators and including new inference rules into the content selection component. Moreover, this architecture can flexibly support other summarization tasks, such as generation of updates when new information on the topic becomes available.
We apply our method for generating summaries of Pubmed publications on nutrition and health. Typically, a single topic in this domain is covered by multiple studies which often vary in their findings, making it particularly appropriate for our model. We perform extensive automatic and human evaluation to compare our method against state-of-the-art summarization and text generation techniques. While seq2seq models receive competent fluency scores, our method performs stronger on task-specific metrics including relevance, content faithfulness and aggregation cognisance. Our method is able to produce summaries that receive an absolute 20% more on aggregation cognisance, an absolute 7% more on content relevance and 7% more on faithfulness to input documents than the next best baseline in traditional and update settings. (Footnote 3: We compare the selected content with other entries in the database, identifying two contradictions.)
Related Work
Text-to-text Summarization: Neural sequence-to-sequence models (Rush et al., 2015; Cheng and Lapata, 2016; See et al., 2017) for document summarization have shown promise and have been adapted successfully for multi-document summarization (Lebanoff et al., 2018; Baumel et al., 2018; Amplayo and Lapata, 2019; Fabbri et al., 2019). Despite producing fluent text, these techniques may generate false information which is not faithful to the original inputs (Puduppully et al., 2019; Kryściński et al., 2019), especially in low-resource scenarios. In this work, we are interested in producing faithful and fluent text cognizant of aggregation amongst input documents, where few parallel examples are available.
Recent language modeling approaches (Devlin et al., 2018; Stern et al., 2019; Shen et al., 2020; Donahue et al., 2020) can also be extended for text completion. Our work is a text-infilling language model where we generate words in place of relation-specific blanks to produce a faithful summary.
Prior work on text generation (Mueller et al., 2017; Fan et al., 2017; Guu et al., 2018) also controls aspects of the produced text, such as style and length. While these approaches typically utilize tokens to control the modification, using prototypes to generate text is also very common (Guu et al., 2017; Li, 2018; Shah et al., 2019). In this work, we utilize aggregation-specific prototypes to guide aggregation-cognizant surface realization.
Data-to-text Summarization: Traditional approaches for data-to-text generation have operated on symbolic data from databases. McKeown and Radev (1995); Radev and McKeown (1998); Barzilay et al. (1998) introduce the two components of content selection and surface realization. Content selection identifies and aggregates key symbolic data from the database, which can then be realized into text using templates. Unlike modern data-to-text systems (Wiseman et al., 2018; Puduppully et al., 2019; Sharma et al., 2019; Wenbo et al., 2019), these approaches capture document consensus and aggregation cognisance. While the neural approaches alleviate the need for human intervention, they do need an abundance of parallel data, which is typically from one source only. Hence, modern techniques do not deal with input documents' consensus in low-resource settings.

Figure 2: Illustrating the flow of our Nutribullets Hybrid system. In this example, our model takes in four Pubmed studies to produce a database (a). The Content Selection model selects two tuples (bold) and identifies the aggregation operator as Contradiction (b). Finally, the Surface Realization model takes in the tuples and aggregation operator to produce a summary which is faithful to input entities and aggregation cognizant (c).
Method
Our goal is to generate a text summary y for a food from a pool of multiple scientific abstracts X. In this section, we describe the framework of our Nutribullets Hybrid system, illustrated in Figure 2.
Overview
We attain food health entity-entity relations, for both the input documents X and the summary y, from entity extraction and relation classification modules trained on corresponding annotations (Table 2). Notations: For N input documents, we collect X_G = {G_{x_p}}_{p=1}^{N}, a database of entity-entity relations G_{x_p}. G_p = {(e_1^k, e_2^k, r^k)}_{k=1}^{K} is a set of K tuples of two entities e_1, e_2 and their relation r. r represents relations such as the effect of a nutrition entity e_1 on a condition e_2 (see Table 2). We thus have raw text converted into symbolic data.
Similarly, we denote the corpus of summaries as Y = {(y_m, G_{y_m}, O_{y_m})}_{m=1}^{M}, where y_m is a concise summary, G_{y_m} is the set of entity-entity relation tuples and O_{y_m} is the realized aggregation, over M data points.
Modeling: Joint learning of content selection, information aggregation and text generation for multi-document summarization can be challenging. This is further exacerbated in our technical domain with few parallel examples and varied consensus amongst input documents. To this end, we propose a solution using Content Selection and Aggregation and Surface Realization models. (Footnote 4: We train an entity tagger and relation classifier to predict G and also for computing knowledge-based evaluation scores. More details on models and results are shared later.)
Raw text from the N input documents is converted into a mini-database X_G of relation tuples. The content selection and aggregation model operates on such symbolic data. We use X_G and Y to train the content selection model. During inference, we identify from X_G a subset C of content to present in the final output. In order to produce a summary cognizant of the consensus amongst inputs, we identify the aggregation operator O based on C and other relevant tuples in X_G. The surface realization model produces a relevant, faithful and aggregation-cognizant output. The model is trained only using Y. During inference, the model realizes text using the selected content C and the aggregation operator O.
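As a concrete illustration of the symbolic data this pipeline operates on, the mini-database X_G can be represented as plain (e1, e2, r) tuples pooled across documents. This sketch and all its names are our own, not the paper's implementation:

```python
from typing import List, Tuple

# One relation tuple (e1, e2, r), e.g. the effect of a nutrition
# entity e1 on a health condition e2. All names are illustrative.
RelationTuple = Tuple[str, str, str]

def build_database(per_doc_tuples: List[List[RelationTuple]]) -> List[RelationTuple]:
    """Flatten per-document tuple sets G_{x_p} into one mini-database X_G,
    dropping exact duplicates while preserving first-seen order."""
    seen, database = set(), []
    for doc_tuples in per_doc_tuples:
        for tup in doc_tuples:
            if tup not in seen:
                seen.add(tup)
                database.append(tup)
    return database

docs = [
    [("pears", "ovarian cancer", "controls")],
    [("pears", "breast cancer", "decreases"),
     ("pears", "ovarian cancer", "controls")],
]
x_g = build_database(docs)  # two unique tuples survive
```

Keeping the database as an ordered, deduplicated list makes the later selection and aggregation steps straightforward to express over plain tuples.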
Content Selection and Aggregation
Our content selection model takes a mini-database of entity-entity relation tuples X_G as input, and outputs the key tuples C and the aggregation operator O.
Content selection and aggregation consists of two parts: (i) identifying key content P(C|X_G) and (ii) subsequently identifying the aggregation operator O using C and X_G. Content Selection: Identifying key content involves selecting important, diverse and representative tuples from a database. While clustering and selecting from the database tuples is a possible solution, we model our content selection as a finite Markov decision process (MDP). This allows for an exploration of different tuple combinations while incorporating delayed feedback from various critical sources of supervision (similarity with target tuples, diversity amongst selected tuples, etc.). We use a multi-objective reinforcement learning algorithm (Williams, 1992) to train the model. Our rewards (Eq. 2) allow for the selection of informative and diverse relation tuples.
At step t, the state s_t consists of {c_1, ..., c_t}, the content selected so far, and {z_1, z_2, ..., z_{m−t}}, the remaining entity-entity relation tuples in the m-sized database. The action space is all the remaining tuples plus one special token, Z ∪ {STOP}; the number of actions is therefore (m − t) + 1. As the number of actions is variable yet finite, we parameterize the policy π_θ(a|s_t) with a model f which maps each action and state (a, s_t) to a score, in turn allowing a probability distribution over all possible actions using softmax. At each step, the probability that the policy selects z_i as a candidate is

π_θ(a = z_i | s_t) = exp(f(ẑ_i, ĉ_{i*})) / Σ_j exp(f(ẑ_j, ĉ_{j*})),

where c_{i*} = argmax_{c_j} cos(ẑ_i, ĉ_j) is the selected content closest to z_i, ẑ_i and ĉ_{i*} are the encoded dense vectors, cos(u, v) = u·v / (||u|| · ||v||) is the cosine similarity of two vectors, and f is a feedforward neural network with non-linear activation functions that outputs a scalar score for each action a.
The selection process starts with Z. Our module iteratively samples actions from π_θ(a|s_t) until selecting STOP, ending with selected content C and a corresponding reward. We can also allow for the selection of partitioned tuple sets by adding an extra action, "NEW LIST", which lets the model place subsequent tuples in a new group. (Footnote 5: STOP and NEW LIST get special embeddings.)
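The sampling loop above can be sketched as follows. This is a simplification under stated assumptions: `score_fn` is a hand-written stub standing in for the trained scoring network f, and the tuples and scores are invented for illustration:

```python
import math
import random

random.seed(0)  # sampling is stochastic; seeded for repeatability

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def select_content(tuples, score_fn, max_steps=10):
    """Iteratively sample actions until STOP is chosen. The action space
    at each step is the remaining tuples plus the special STOP token."""
    remaining, selected = list(tuples), []
    for _ in range(max_steps):
        actions = remaining + ["STOP"]
        probs = softmax([score_fn(a, selected) for a in actions])
        choice = random.choices(actions, weights=probs, k=1)[0]
        if choice == "STOP":
            break
        selected.append(choice)
        remaining.remove(choice)
    return selected

# Stub scorer standing in for the trained network f: prefer tuples
# mentioning cancer, and prefer STOP once something has been picked.
def score_fn(action, selected):
    if action == "STOP":
        return 0.0 if not selected else 5.0
    return 3.0 if "cancer" in action[1] else -1.0

tuples = [("pears", "ovarian cancer", "controls"),
          ("pears", "hydration", "improves")]
picked = select_content(tuples, score_fn)
```

The variable-length action set is what makes a score-then-softmax parameterization convenient here: the same scoring function applies no matter how many tuples remain.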
We consider the following individual rewards:
• R_e = Σ_{c∈C} [cos(ê_{1c}, ê_{1y}) + cos(ê_{2c}, ê_{2y})] is the cosine similarity of the structures of the selected content C with the structures present in the summary y (each summary structure accounted for with only one c), encouraging the model to select relevant content.
• R_d penalizes the similarity between pairs within the selected content C, encouraging the selection of diverse tuples.
• r_p is a small penalty for each action step, to encourage concise selection.
The multi-objective reward is computed as a weighted combination of the above terms, where w_e, w_d and r_p are hyper-parameters. During training the model is updated based on the rewards. During inference the model selects an ordered set of key and diverse relation tuples corresponding to the appropriate health conditions.
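A minimal sketch of these reward terms over toy embeddings. The sign convention for the diversity term and the exact weighted combination are our own assumptions; the paper only specifies that relevance is rewarded, within-selection similarity is discouraged, and each step incurs a small penalty r_p:

```python
import math

def cos(u, v):
    """Cosine similarity of two plain-list vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def relevance_reward(selected_vecs, target_vecs):
    # R_e: each selected entity vector is credited with its best match
    # among the gold summary's vectors.
    return sum(max(cos(c, y) for y in target_vecs) for c in selected_vecs)

def diversity_reward(selected_vecs):
    # R_d: negated pairwise similarity within the selection (our own
    # sign convention).
    total = 0.0
    for i in range(len(selected_vecs)):
        for j in range(i + 1, len(selected_vecs)):
            total += cos(selected_vecs[i], selected_vecs[j])
    return -total

def multi_objective_reward(selected_vecs, target_vecs,
                           w_e=1.0, w_d=0.5, r_p=0.1):
    # Assumed combination: weighted sum of R_e and R_d minus a
    # per-step penalty r_p.
    return (w_e * relevance_reward(selected_vecs, target_vecs)
            + w_d * diversity_reward(selected_vecs)
            - r_p * len(selected_vecs))

sel = [[1.0, 0.0], [0.0, 1.0]]   # toy entity embeddings
tgt = [[1.0, 0.0]]
r = multi_objective_reward(sel, tgt)  # -> 1.0 + 0.0 - 0.2 = 0.8
```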
Consensus Aggregation: Identifying the consensus amongst the input documents is critical in our multi-document summarization task. We model the aggregation operator of our Content Selection using simple one-line deterministic rules, as shown in Table 1. The rules are applied to the key entity-entity relation pairs C in the context of X_G. In our example in Figure 1, O is Contradiction because of rows 1&3 and rows 4&5 (rows 1&3 alone would also make it Contradiction).
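Table 1's rules do not survive in this extraction, so the sketch below implements one plausible rule of this kind, our own guess at the Contradiction rule based on the Figure 1 description: two database rows over the same entity pair with different relations signal a contradiction.

```python
def aggregation_operator(selected, database):
    """Return a consensus label for the selected tuples given the full
    mini-database X_G. A single illustrative rule: two rows over the
    same entity pair with different relations signal a contradiction."""
    for (e1, e2, r) in selected:
        for (f1, f2, s) in database:
            if (e1, e2) == (f1, f2) and r != s:
                return "Contradiction"
    return "Agreement"

db = [("pears", "ovarian cancer", "controls"),
      ("pears", "ovarian cancer", "increases"),
      ("pears", "breast cancer", "decreases")]
sel = [("pears", "ovarian cancer", "controls")]
op = aggregation_operator(sel, db)  # -> "Contradiction"
```

Because the rules operate on symbolic rows, new operators (e.g. for Under-reported or Population Scoping) can be added as further deterministic checks without retraining anything.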
Surface Realization
The surface realization model P(y|O, C) performs the critical task of generating a summary guided by both the entity-entity relation tuples C and the aggregation operator O. The model allows for more robust, diverse and faithful summarization than traditional template and modern seq2seq approaches. We propose to model this process as a prototype-driven text-infilling task. The entities from C are used as fixed tokens, with relations as special blanks in between these entities. This is prefixed by a prototype summary corresponding to O. For the example shown in Figure 2, we concatenate, using |SEN|, a randomly sampled contradictory summary "Kale contains substances ... help fight cancer ... but the human evidence is mixed ." with C "<blank> pears <controls> ovarian cancer <decreases> breast cancer <blank>". The infilling language model produces text corresponding to the relations between entities while maintaining an overall structure which is cognizant of O. The model is trained on the few sample summaries from the training set, using G_{y_m} and O_{y_m} to produce y_m. Providing aggregation and content guidance during generation alleviates the low-resource issue.
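The prototype-plus-blanked-tuples input described above can be sketched as a simple string builder. The `<blank>` and `|SEN|` token names follow the example in the text; the exact serialization (writing a shared head entity once) is our own guess:

```python
def build_infilling_input(prototype, tuples):
    """Concatenate an aggregation-specific prototype summary with the
    selected tuples, leaving relation slots as special tokens."""
    parts = ["<blank>"]
    for i, (e1, e2, r) in enumerate(tuples):
        if i == 0:
            parts.append(e1)  # shared head entity written once
        parts.append("<" + r + ">")
        parts.append(e2)
    parts.append("<blank>")
    return prototype + " |SEN| " + " ".join(parts)

proto = ("Kale contains substances ... help fight cancer ... "
         "but the human evidence is mixed .")
tuples = [("pears", "ovarian cancer", "controls"),
          ("pears", "breast cancer", "decreases")]
inp = build_infilling_input(proto, tuples)
```

An infilling language model would then generate text in place of the `<blank>` and relation tokens, keeping the fixed entities verbatim, which is what keeps the output faithful to the selected content.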
Summary and Update Setting
In this section we describe the summary update setting. In the real world, we often receive new input documents, such as new scientific studies about the same subject, which necessitate a change in an old summary. In the context of our food and health summarization task, the goal is to update an old summary about a food and health condition on receiving results from new scientific studies from Pubmed. Our model can accommodate this scenario fairly easily. We describe the minor changes to the Content Selection and Aggregation and Surface Realization models for this setting.
We are provided an original summary and can extract its content C′, and can also construct the mini-database X_G from the text of the new documents. We first identify the aggregation between the new studies' X_G and the original summary's content C′. Depending on the aggregation identified, corresponding content C is selected from X_G. For instance, in case of a contradiction, we are keen on identifying the content leading to this contradiction. The subsequent Surface Realization is dependent on O, the selected C and the C′ present in the original summary (P(y|O, C + C′)). (Footnote 6: Summaries in our training data are labelled with O_{y_m} as belonging to one of the four categories of Under-reported, Population Scoping, Contradiction or Agreement to accommodate such training.)
Experiments
Dataset: We utilize a real-world dataset for Food and Health summaries, crawled from https://www.healthline.com/nutrition (Shah et al., 2021). The HealthLine dataset consists of scientific abstracts as inputs and human-written summaries as outputs. The dataset consists of 6640 scientific abstracts from Pubmed, each averaging 327 words. The studies in these abstracts are cited by domain experts when writing summaries in the Healthline dataset, forming natural pairings of parallel data. Individual summaries average 24.5 words and are created using an average of 3 Pubmed abstracts. Each food has multiple bullet summaries, where each bullet typically talks about a different health impact (hydration, diabetes, etc.). We assign each food article randomly into one of the train, development or test splits. Entity tagging and relation classification annotations are provided for the Pubmed abstracts and the Healthline summaries.
Settings: We consider three settings.
1. Single Issue: We use the individual food and health issue summaries as unique instances of the food and single-issue setting. We split the 1894 instances 80%/10%/10% into train, dev and test.
2. Multiple Issues: We group each food article's Pubmed abstract inputs and multiple summary outputs as a single parallel instance. The 464 instances are split 80%/10%/10% into train, dev and test.
3. Summary Update: We consider two kinds of updates: new information is fused into an existing summary, and new information contradicts an existing summary. For fusion we consider single-issue summaries that have multiple conditions from different Pubmed studies (bananas + low blood pressure from one study and bananas + heart health from another study). We partition the Pubmed studies to simulate an update. In the contradictory update setting we artificially introduce conflicting results in the input document set so that the aggregation changes from Agreement to Contradiction. We have a total of 103 test instances. All models are trained on top of the Single Issue data.
Evaluation: We evaluate our systems using the following automatic metrics. Rouge is an automatic metric used to compare the model output with the gold reference (Lin, 2004). KG(G) computes the number of entity-entity pairs with a relation in the gold reference that are generated in the output. This captures relevance in the context of the reference. KG(I), similarly, computes the number of entity-entity pairs in the output that are present in the input scientific abstracts. This measures faithfulness with respect to the input documents. Aggregation Cognisance (Ag) measures the accuracy of the model in producing outputs which are cognizant of the right aggregation from the input (Under-reported, Contradiction or Agreement). We use a rule-based classifier to identify the aggregation implied by the model output and compare it to the actual aggregation operator based on the input Pubmed studies.
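The KG metrics above can be sketched as tuple overlap. This is our own simplification: the paper matches entities with word-embedding cosine similarity, while exact string match is used here for brevity:

```python
def kg_score(output_tuples, reference_tuples):
    """Fraction of reference (e1, e2, r) tuples that also appear in the
    output. With gold-summary tuples as reference this plays the role
    of KG(G) (relevance); with input-document tuples, KG(I)
    (faithfulness)."""
    if not reference_tuples:
        return 0.0
    ref = set(reference_tuples)
    matched = sum(1 for t in set(output_tuples) if t in ref)
    return matched / len(ref)

out = [("whole grains", "type 2 diabetes", "decreases"),
       ("whole grains", "weight gain", "decreases")]
gold = [("whole grains", "type 2 diabetes", "decreases")]
score = kg_score(out, gold)  # -> 1.0
```

Running entity tagging and relation classification over both the model output and the reference reduces faithfulness checking to this kind of set comparison.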
In addition to automatic evaluation, we have human annotators score our models on relevance and fluency. Given a reference summary, relevance indicates whether the generated text shares similar information. Fluency indicates whether the generated text is grammatically correct and written in well-formed English. Annotators rate relevance and fluency on a 1-4 Likert scale (Albaum, 1997). We have 3 annotators score every data point and report the average across the scores.
Baselines: In order to demonstrate the effectiveness of our method, we compare it against text2text and data2text baselines. (Footnote 7: We run entity tagging plus relation classification on top of the model output and gold summaries. We match the gold (e_i^g, e_j^g, r^g) tuples using word-embedding-based cosine similarity with the corresponding entities in the output structures.)
Implementation Details: Our policy network is a three-layer feedforward neural network. We use a Transformer (Vaswani et al., 2017) implementation for Surface Realization. We train an off-the-shelf neural CRF tagger (Yang and Zhang, 2018) for entity extraction. We use BERT (Devlin et al., 2018) based classifiers to predict the relation between two entities in a text, trained using crowdsourced annotations from (Shah et al., 2021). Further implementation details can be found in Appendix A.
Results
In this section, we describe the performance of our Nutribullets Hybrid system and the baselines on summarization and summary updates. We report empirical results and human evaluation, and present sample outputs, highlighting the benefits of our method.
Single and Multi-issues Summarization: We describe the results on the task of generating summaries. Table 3 presents the automatic evaluation results for the food and single-issue summarization task. High KG(I) and KG(G) scores for our method indicate that the generated text is faithful to input entities and relevant. In particular, a high Aggregation Cognisance (Ag) score indicates that our model generates summaries which are cognizant of the varying degrees of consensus in the input Pubmed documents. Compared to other baselines we also receive a competitive score on the automatic Rouge metric, beating the Copy-gen, Entity Data2text and GraphWriter baselines while falling short (by 1.7%) of the Transformer baseline. The baselines, especially Transformer, tend to produce similar outputs for different inputs (see Table 4). Since a lot of these patterns are learned from the human summaries, Transformer receives a high Rouge score. However, in the low-resource regime the baseline does not completely capture the content and aggregation, so it fails to get a very high KG(G) or Ag score. A similar trend is observed for the other baselines, which in this low-resource regime produce a lot of false information, reflected in their low KG(I) scores. Human evaluation, conducted by considering scores on a 1-4 Likert scale from three annotators for each instance, shows the same pattern. Our model is able to capture the most relevant information when compared against the gold summaries, while producing fluent summaries. The Transformer baseline produces fluent summaries which are not as relevant. The performance is poorer for the Copy-gen, Entity Data2text and GraphWriter models.

Transformer (baseline):
* Whole-grain cereals may protect against obesity, diabetes and certain cancers. However, more research is needed.
* Whole grains, such as mozambican grass, are safe to eat with no serious side effects.
* Whole-grain cereals may protect against obesity, diabetes and certain cancers. However, more research is needed.
* Whole grains, such as blueberries, are likely safe to eat with no serious side effects.
* Whole grains are safe to eat. However, people with type 2 diabetes should avoid whole grains.
* Whole grains are lower in carbs than whole grains, making them a good choice for people with type 2 diabetes.
Our Method:
* Whole grains has been shown to lower weight gain and improve various type 2 diabetes risk factors.
* Whole grains has been shown to lower insulin resistance and improve various cancer risk factors.
* Whole grains has been linked to several other potential health benefits, such as improved CVD risk, eyesight, and memory. However, more studies are needed to draw stronger conclusions.
* There is some evidence, in both animals and humans, that whole grains can reduce mortality by regulating the hormone ghrelin.
Table 4: Example outputs of our model and the Transformer baseline for a multi-issues summary. Trained on limited parallel data, the Transformer baseline produces repetitive text with factual inaccuracies, while our method is able to provide more accurate and diverse summarization.
In the multi-issues setting, the baselines access the gold annotations with respect to the input documents' clustering. Our model conducts the extra task of grouping the selected tuples, using the "NEW LIST" action. Our model performs better than the baselines on both the KG(I) and KG(G) metrics, as seen in Table 5. Again, the pattern of producing very similar and repetitive sentences hurts the baselines. They fail to cover different issues and tend to produce false information in this low-resource setting. Our model scores 7% higher on KG(G) and 17% higher on KG(I) compared to the next best performance, in absolute terms. Table 4 shows the comparison between the outputs produced by our method and the Transformer baseline on the benefits of whole grains. Our method conveys more relevant, factual and organized information in a concise manner.
Summary Update: We study the efficacy of our model at fusing information into existing summaries on receiving new Pubmed studies. As the KG(G) metric in Table 6 shows, our model is able to select and fuse more relevant information. Table 7 shows two examples of summaries on flaxseeds where our model successfully fuses new information. We also report evaluation results to demonstrate the efficacy of maintaining Aggregation Cognisance (Ag), which is critical when updating summaries on receiving contradictory results. The high performance in this update setting demonstrates the Surface Realization model's ability to produce aggregation-cognizant outputs, in contrast to the baselines, which do not learn this reasoning in a low-resource regime.
Analysis: Information Extraction and Content Aggregation: Information extraction is the critical first step performed on the input documents in order to get symbolic data for content selection and aggregation. To this end, we report the performance of the information extraction system, which is composed of two models: entity extraction and relation classification.
As reported in Table 8, the entity extraction model, a CRF-based sequence tagging model, receives a token-level F1 score of 79%. The relation classification model, a BERT-based text classifier, receives an accuracy of 69%.
The performance of the information extraction models is particularly important for the content aggregation sub-task. In order to analyse this quantitatively, we perform manual analysis of the 179 instances in the dev set and compare them to the system-identified aggregation, i.e., information extraction followed by the deterministic rules in Table 1. Given the simplicity of our rules, the system's 78% accuracy in Table 8 is acceptable. Deeper analysis shows that the performance is lowest for Population Scoping and Contradiction, with accuracies of 52% and 56% respectively. The low performance on Population Scoping is due predominantly to the simplicity of the rules: most mistakes occur when the input studies are review studies that don't mention any population but analyze results from several past works. Contradiction suffers because of the information extraction system, and stronger models for the same should be able to alleviate the errors.
Table 8: Performance of our information extraction system and its impact on content aggregation.
Conclusion
While modern models produce fluent text in multi-document summarization, they struggle to capture the consensus amongst the input documents. This inadequacy, magnified in low-resource domains, is addressed by our model. Our model is able to generate robust summaries which are faithful to content and cognizant of the varying consensus in the input documents. Our approach is applicable to summarization and textual updates. Extensive experiments with automatic and human evaluation underline its impact over state-of-the-art baselines.
THE DESIGN OF EXPERT SYSTEM APPLICATION FOR DIAGNOSING AUTISM DISORDERS IN CHILDREN
ABSTRACT: The main motivation for this study is the public's still-limited understanding of autistic disorders and of how to handle them. Determining whether or not a child has an autistic disorder is generally done in a simple, manual way: by filling out a checklist or questionnaire that contains facts, attitudes and behaviors that often appear in children. The checklist is filled in by parents and then reviewed by a therapist to obtain clear, accurate and reliable results. The weakness of this method of diagnosis is that it must involve therapists/experts, who are currently few in number. The purpose of this study is to build a piece of software, an expert system application, that is able to diagnose autistic disorders in children and is tailored to the needs of parents. The results of this study are a series of product designs for an expert system to diagnose autistic disorders in children, including the flowchart of the ongoing system, the flowchart of the computerized system, Data Flow Diagrams, application program flowcharts and the screens of the application program that has been built. The resulting expert system software design is expected to be usable by therapists in diagnosing children who come for consultation.
INTRODUCTION
The phenomenon which is the basis of this research is that, according to 2011 UNESCO data, there are 35 million people with autism in the world; on average, 6 out of every 1000 people have autism. In Indonesia the ratio is 8 out of every 1000 people [1]. Public knowledge about autism is considered still low, with the result that autistic people are discriminated against, while the families of patients have difficulty in getting treatment to help them [2]. Some characteristics of autistic behavior in children can be seen in the language used in communication, the reactions shown when dealing with other people, concern for the environment, the responses shown to sensory input, and attitudes of inequality towards behavioral development [3]. The factors that cause autism are psychological and family factors, socio-cultural factors, and biological factors. Biological factors include genetic factors, pre-natal factors, additives that pollute the child's brain, neurobiology, and digestive system disorders [4]. There are many features that can be seen in people with autism; a prominent feature of Kanner syndrome is an empty facial expression, as if daydreaming, lost in thought and difficult for others to draw into attention or communication [5]. There are five stages in interpersonal communication for achieving effective communication with children with autism: openness, empathy, supportiveness, positive feelings (positiveness), and equality [6]. Therapy methods carried out for the success of therapy for people with autism include behavioral therapy, speech therapy, occupational therapy, remediation therapy, play therapy, music therapy, visual therapy, and togetherness therapy. In addition to these therapies, special attention, training and education can also be given.
This is so that the child is able to develop himself in communicating and interacting with his friends [7]. In various big cities, whose people are relatively modern and always keep abreast of the times, the term autistic disorder is well known. On the other hand, in small cities like Blitar, understanding of autistic disorders is still very lacking. One of the institutions specifically dealing with autistic disorders in Blitar is Whising Kids. There, the process of diagnosis and handling is still done in a simple way, using manual methods: filling out a checklist or questionnaire that contains facts, attitudes and behaviors that often appear in children. The checklist is filled in by parents and then reviewed by the therapist to obtain clear, accurate and reliable results. This manual process of diagnosing autistic disorders, which always requires experts/therapists, gives the system many shortcomings and weaknesses. There is therefore a need for a software application system that can take over the role of the expert/therapist in diagnosing autism disorders in patients. This expert system will also be able to provide direction and solutions for the parents of patients in handling them.
The purpose of this study is to design and build an expert system to diagnose autistic disorders in children. This expert system design is expected to help parents recognize the symptoms of autism early and learn how to deal with those symptoms without having to meet and consult an expert.
RESEARCH METHODS
This study used research and development methods, where the result of the research is an application design that is able to provide an alternative solution and facilities beyond the existing system. In this study, the following steps were taken:
Data Collection.
Data collection is the initial stage of the research method. The data used take the form of primary data and secondary data. Primary data include symptom data obtained directly from experts and from parents; the experts in question are therapists who are expert in diagnosing autistic disorders. In addition to the common symptoms often encountered in children, we also gathered information describing the system or procedure that must be followed by experts/therapists in the process of diagnosing children who have autistic disorders, up to the handling process. Primary data collection was done by conducting interviews and observations of the system running at Whising Kids in Blitar City. Besides primary data, secondary data are also needed. Secondary data were obtained to supplement the primary data by gathering information from reference books and several previous research journals related to autistic disorders.
Data Analysis and System Needs
The primary and secondary data obtained in the data collection stage need to be analyzed to arrive at the main problem to be solved. The problem faced by parents is the difficulty of detecting autism disorders from an early stage. Basic knowledge about autism among parents is very lacking, especially in recognizing symptoms and providing therapy/treatment periodically from early on. To learn more about symptoms, signs and therapy, and so be able to take steps and decisions early in dealing with autistic children, there needs to be a system that captures the knowledge of an expert in autism. Based on the analysis of the problems above, this system is expected to be an alternative channel for consultation, detection, and companion information for parents; the problem analyzed concerns the characteristics of autism and how to handle it.
System planning
The system planning stage is an important part of the process of building the expert system application. The steps in system planning include: 1. Make a system flowchart. The system flowchart in question is the flow of the old system, namely the flow of the procedures carried out at Whising Kids by the therapist for children suspected of having symptoms of an autism disorder. Next, through deeper analysis of the old system flowchart, a new system flowchart is created in which the weaknesses, shortcomings and constraints encountered in the old system are repaired by moving to a computerized system. In this case the conventional system is given a solution in the form of an expert system. This expert system will replace the role of experts in diagnosing autism disorders in children, so that the diagnosis and consultation process does not require meeting face to face with an autism expert. 2. Create a Data Flow Diagram. This diagram describes the flow of data through the software application system. The data flow starts with filling in the child's data by inputting the symptoms encountered; from these data the system produces a resulting category of autism disorder. From the type of autism produced, the application system provides the appropriate alternative treatment solution.
3. Creating the program flowchart and the display of the expert system software. The program flowchart is designed so that the programming can be formulated correctly and the expert system's output matches the design, the results of the previous analysis, and the user's needs.
The running system
The running-system flowchart shows the current flow: the patient comes to the registration counter with complete identity data, and the registration section records the patient data. Based on the patient database, the patient meets an expert/therapist for a consultation. On the basis of this consultation, the expert/therapist diagnoses whether the symptoms indicate that the patient has an autism disorder. The flowchart of the current system is illustrated in Figure 1.
Proposed System
The proposed system flowchart is a further stage of the running-system flowchart and describes the proposed diagnosis (new system). This analysis is made to facilitate the diagnosis of disorders that autistic children may experience. The new system flowchart for the expert system is shown in Figure 2. For the administrator level, the user must first log in by entering a username, password, and user level (administrator), after which the main diagnosis menu appears. The main diagnosis menu contains a diagnosis option, a patient list, and a patient search. On the patient-list menu, the user must first enter the patient's biodata before the diagnosis process can start. The user then answers all the symptoms felt by the patient, after which the program runs the diagnosis process and saves the result. On the diagnosis menu, the user can view and search diagnoses made previously. The logout menu exits the system and returns to the main login menu. This completes the system flow at the administrator level.
DFD level 0
The level-0 DFD is a diagram that shows the general flow of data among the patient, the expert, and the admin. It is derived from Figure 2; the data flow in the expert system is shown in Figure 3.
DFD level 1
The level-1 DFD is a data flow diagram that refines the previous level in Figure 3. It describes the application process of the expert system for autism disorders in children in more detail, breaking the process down into several sub-processes: disorder data, symptom data, weighting, patient data, and diagnosis. The level-1 DFD is shown in Figure 4.
Program flowchart for diagnosing autism disorders
The flowchart for diagnosing autism disorders refers to the Data Flow Diagrams in Figures 3 and 4; the program flowchart is more detailed and better describes the flow of the application program. The program starts with the user entering login input. If the login is incorrect, the program asks the user to log in again until it is correct. If the login is correct, the process continues to the diagnosis process, which begins with a menu of options where the user chooses whether or not to consult. If the user does not want a consultation, the application returns to the menu. If the user chooses to consult, the program displays the child's data entry form, and the diagnosis process then displays the choice of symptoms appearing in the child. From the entered symptoms, the application system determines the category of autism disorder in the child and displays it on the monitor. The flow of the expert system application program is shown in Figure 5. By selecting each symptom that may appear in a child, the system processes and categorizes the available choices, so the application produces a type of autism disorder matching the symptom input. Figure 8 shows the resulting expert system software classifying the autism disorder according to the symptoms in Figure 7.
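The symptom-to-category step described above is essentially a rule-based lookup. The sketch below is a hypothetical illustration only: the category names, symptom codes, and overlap-scoring rule are invented and are not taken from the paper's knowledge base.

```python
# Hypothetical rule base: each autism disorder category is described by
# a set of symptom codes (all names invented for illustration).
RULES = {
    "Autistic Disorder": {"S01", "S02", "S03", "S04"},
    "Asperger Syndrome": {"S02", "S05", "S06"},
    "PDD-NOS": {"S01", "S05", "S07"},
}

def diagnose(observed: set) -> str:
    """Return the category whose symptom set best overlaps the observed symptoms."""
    scores = {
        category: len(observed & symptoms) / len(symptoms)
        for category, symptoms in RULES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "no disorder detected"

print(diagnose({"S02", "S05", "S06"}))  # -> Asperger Syndrome
```

A real implementation would also store the per-category "weighting" mentioned in the level-1 DFD, for example as per-symptom weights instead of the uniform overlap ratio used here.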
CONCLUSION
Based on the problems faced at Whising Kids in diagnosing autism disorders in children, the research process began with data collection and analysis of the system requirements. The result of this study is the design of a software application that can diagnose autism disorders. The design comprises the old-system and computerized-system flowcharts, the level-0 and level-1 DFDs, and the application program flowchart, together with the application's diagnosis display. From this design, the software application that was built is expected to help parents and therapists diagnose autism in children.
SUGGESTION
This study produced the design of an expert system for autism disorders in children that has not yet been tested with parents of children showing symptoms of autism, so it is recommended that this design be continued and developed to make it more useful.
ACKNOWLEDGMENTS
The author would like to thank the Chancellor and the officials under him at Balitar Islamic University, who provided material and non-material support so that this research could be carried out as expected.
Exfoliated graphite with spinel oxide as an effective hybrid electrocatalyst for water splitting
The aim of this research was to develop hybrid nanostructures formed from MnCo2O4 and exfoliated graphite. Carbon added during the synthesis allowed a well-distributed MnCo2O4 particle size to be obtained, with exposed active sites contributing to increased electrical conductivity. The influence of the weight ratio of carbon to catalyst on the hydrogen and oxygen evolution reactions was investigated. The new bifunctional catalysts for water splitting were tested in an alkaline medium, showing excellent electrochemical performance and very good working stability. The results for the hybrid samples show better electrochemical performance compared to pure MnCo2O4. The highest electrocatalytic activity was found for sample MnCo2O4/EG (2/1), where the overpotential value was 1.66 V at 10 mA cm−2; for this sample a low Tafel slope (63 mV dec−1) was also recorded.
Synthesis of MnCo2O4
In the method used, MnCo2O4 was synthesized by co-precipitation using oxalic acid as the precipitating agent. First, 3 mmol of Co(NO3)2·6H2O and 3 mmol of Mn(CH3COO)2·4H2O were dissolved in 25 mL of distilled water. The resulting mixture was added to a solution containing 22.5 mmol of oxalic acid, 25 mL of ethyl alcohol, and 25 mL of distilled water and left on a magnetic stirrer for 2 h. After this time, the mixture was placed in a laboratory dryer at 50 °C for 6 h. The obtained suspension was then filtered on a Büchner funnel and dried in a laboratory dryer at 80 °C for 12 h. In the final step, the obtained mass was annealed in an inert N2 atmosphere at 900 °C with a heating rate of 3.14 °C min−1; after reaching the set temperature, the sample was annealed for 8 h.
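For convenience, the stated amounts above can be converted into weighed masses. The molar masses used below are standard values; treating the oxalic acid as anhydrous is an assumption, since the hydrate form is not specified here.

```python
# Convert the stated mmol amounts into gram masses (molar masses in g/mol).
MOLAR_MASS = {
    "Co(NO3)2*6H2O": 291.03,
    "Mn(CH3COO)2*4H2O": 245.09,
    "oxalic acid (anhydrous)": 90.03,
}
AMOUNTS_MMOL = {
    "Co(NO3)2*6H2O": 3.0,
    "Mn(CH3COO)2*4H2O": 3.0,
    "oxalic acid (anhydrous)": 22.5,
}

for reagent, mmol in AMOUNTS_MMOL.items():
    grams = mmol / 1000.0 * MOLAR_MASS[reagent]
    print(f"{reagent}: {grams:.3f} g")
# -> Co(NO3)2*6H2O: 0.873 g
# -> Mn(CH3COO)2*4H2O: 0.735 g
# -> oxalic acid (anhydrous): 2.026 g
```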
Synthesis of hybrid nanostructures
In the first step, the carbon material was obtained by electro-exfoliation of a graphite sheet in 0.5 M H2SO4 at +5 V until the graphite sheet was completely disintegrated. The material obtained after exfoliation was washed with distilled water, drained on a funnel, dried at 80 °C for 24 h, and denoted as EG. In the next step, the EG material was suspended in 10 mL of ethyl alcohol and 10 mL of distilled water and mixed with a mass of the spinel-type metal oxide before annealing. The hybrid nanostructures based on MnCo2O4 and EG (MnCo2O4/EG) were obtained at five different mass ratios of reagents (2:1, 1:1, 1:2, 1:3, and 3:1); the materials were then annealed in the same way as MnCo2O4. The obtained samples were labeled
Material characterization
Information on the crystallographic structure of the studied materials at the atomic level was provided by high-resolution transmission electron microscopy (HRTEM; FEI Europe, model Tecnai F20 X-Twin) operating at an accelerating voltage of 200 kV. X-ray diffraction (XRD) analysis of the obtained materials was performed using a Philips X'Pert X-ray diffractometer with an X'Celerator Scientific detector, employing Cu Kα radiation (λ = 0.15406 nm) over angles ranging from 10° to 80° with a step size of 0.02°. Raman spectra were obtained with a Renishaw InVia Raman analyzer (laser wavelength 532 nm, Renishaw Company). X-ray photoelectron spectroscopy (XPS) analysis was performed with a PHI5000 VersaProbe II Scanning XPS Microprobe spectrometer (ULVAC-PHI, Chigasaki), using a monochromatic Al Kα radiation source.
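For reference, diffraction angles from such a scan can be converted to interplanar spacings via Bragg's law, d = λ / (2 sin θ), using the Cu Kα wavelength quoted above. The example 2θ value below is hypothetical and is not a peak position reported here.

```python
import math

WAVELENGTH_NM = 0.15406  # Cu K-alpha

def d_spacing(two_theta_deg: float) -> float:
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_NM / (2.0 * math.sin(theta))

print(round(d_spacing(36.5), 3))  # -> 0.246
```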
Electrochemical measurements
To assess the electrocatalytic activity of the obtained hybrid materials in the H2 evolution reaction and the O2 evolution reaction, the samples were investigated using linear sweep voltammetry (LSV). Electrochemical measurements were performed with an Autolab electrochemical analyzer (PGSTAT128, the Netherlands) using a three-electrode system. The reference electrode was Ag/AgCl (saturated KCl), a platinum plate was used as the counter electrode, and the hybrid catalysts applied to a glassy carbon electrode (GCE) with a diameter of 3 mm were used as the working electrode. The catalytic ink was prepared by suspending 3 mg of the sample in 0.75 mL of distilled water, 0.2 mL of isopropanol, and 0.05 mL of 5% aqueous Nafion solution. The obtained suspension was sonicated in an ultrasonic bath for 60 minutes. In the next step, the prepared catalytic ink was placed on the polished surface of the glassy carbon electrode and then left to dry in an oven for a few minutes at 60 °C.
The catalytic activity of reference materials, a Pt/C catalyst (20 wt% Pt) for the hydrogen evolution reaction (HER) and a 1:1 mixture of ruthenium(IV) oxide and iridium(IV) oxide for the oxygen evolution reaction (OER), was also measured. The measured potentials were recalculated against a reversible hydrogen electrode (RHE). The activity of the samples in the OER and HER was tested at a scan rate of 1 mV s−1 in an aqueous 1 M KOH electrolyte. Stability tests for the best samples from the LSV measurements were performed at the constant potential needed to achieve a current density of 10 mA cm−2. To further characterize the obtained catalysts, cyclic voltammetry measurements were also performed before and after the HER and OER measurements.
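Two routine post-processing steps implied above, converting measured potentials to the RHE scale and extracting a Tafel slope from LSV data, can be sketched as follows. The 0.197 V offset for a saturated-KCl Ag/AgCl electrode and the 0.0591 V/pH Nernstian term are standard textbook values, and the data points are synthetic, not the paper's.

```python
import numpy as np

def to_rhe(e_ag_agcl_v: float, ph: float = 14.0, e0_ag_agcl_v: float = 0.197) -> float:
    """Convert a potential measured vs. Ag/AgCl (sat. KCl) to the RHE scale."""
    return e_ag_agcl_v + e0_ag_agcl_v + 0.0591 * ph

def tafel_slope_mv_per_dec(j_ma_cm2, eta_v):
    """Tafel slope from eta = a + b*log10(j), via a linear least-squares fit."""
    slope_v_per_dec, _ = np.polyfit(np.log10(j_ma_cm2), eta_v, 1)
    return slope_v_per_dec * 1000.0

# Synthetic LSV points obeying a 63 mV/dec Tafel law:
j = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
eta = 0.30 + 0.063 * np.log10(j)
print(round(tafel_slope_mv_per_dec(j, eta)))  # -> 63
```

In 1 M KOH a pH of about 14 is usually assumed for the conversion, which is why `ph=14.0` is the default here.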
The Net Micronutrient Balance Value Concept: Revisiting Orthomolecular Nutrition
Nutrition research has been pivotal in establishing causality between dietary (nutrient) intake and health outcome measures. Nutrition is also relevant in the determination of dietary requirements and levels of supplementation to achieve specific physiological outcomes. Careful nutritional research led to the conclusion that food products considered the same or equivalent may have significant differences due to soil quality, agricultural methods, contaminants, food processing, additives, and cooking methods. We propose the concept of the net micronutrient balance value (NMBV), which refers to the actual micronutrient content of the food product minus the portion not absorbed and the amount depleted in metabolic processes due to additives, contaminants, medication, and faulty metabolism. Diet quality over time determines physical growth, mental development, and numerous health risks, including cardiovascular disease, cancer, diabetes, and many other chronic conditions. Therefore, research in nutrition needs to identify and consider the specific variables that determine NMBV to provide better uniformity in nutrition research and produce more significant and meaningful findings.
INTRODUCTION
Nutrition, diet, and physical activity play important roles in promoting health and preventing and treating disease. Proper nutrition offers one of the most effective and least costly ways to decrease the burden of many diseases and their associated risk factors. The importance of nutrition as an integral part of the solution to many societal, environmental, medical, and economic challenges facing the world has been neglected for years. It may have just started to be meaningfully appreciated. Cultural influences, societal convenience, marketing, and other factors have influenced eating habits for decades. The detrimental impact of poor nutrition on the health and wellbeing of individuals, healthcare systems, and the economy is substantial.
Nutrition refers to the integrated processes by which cells, tissues, organs, and the whole body acquire the energy and nutrients through the diet needed for normal structure and function and the capacity to transform substrates and use cofactors necessary for metabolism.
The science of human nutrition considers the nature and interaction of two systems, one external and one internal. The external system is composed of the food ecosystem. It involves complicated factors determining the human ability to source a complete diet from the wider environment that provides adequate energy and nutrients. It encompasses the world created by the family and community, incorporating complex social systems and interactions that influence lifestyle choices. The internal system is the body's regulated biochemical, physiological, and metabolic processes that create an internal environment where cells, tissues, and organs can maintain their structure and function to ensure ongoing optimal health. Health is enabled and protected when the two systems are balanced and harmonious.
Defining healthy or optimal nutrition is a very complex task. Nutritional requirements (both maximum and minimum) may vary substantially according to age, sex, body weight, genotype, level of activity, physiological status (growth, pregnancy, and lactation), and the presence or absence of disease. Good nutrition is not simply met by the absence of nutrient deficiencies but by defining the appropriate intake to sustain growth and development across the life cycle, including immune development and function.
Nutritional status has been shown to play a key role in important physiological processes such as mucosal integrity and barrier function (e.g., respiratory and gastrointestinal), cognitive function, and immune response. Nutritional status can also affect resilience, susceptibility to disease, and response to therapy. It can also affect the body's response to medication. It is not surprising, therefore, that poor nutritional status, caused by either an unhealthy diet or malabsorption of nutrients, is a major risk factor for many chronic diseases.
Poor diet is a leading cause of ill health worldwide. Poor nutrition (under- and overnutrition) is not confined to developing or transitioning economies but also affects high-income, industrialized countries (the hidden hunger concept). Providing adequate nutritional support is vital for those with acute or long-term health conditions, whether during treatment, recovery, or palliative care.
Nutrition researchers are trained to examine the complex interplay between foods eaten and health and disease status in individuals or populations. Given the huge impact of diet on a person's health and the fact that everyone eats, it is no surprise that studies in human nutrition are crucially important. There are four main types of research studies about nutrition: animal or laboratory, case-control, cohort, and randomized. Nutrition research is needed to establish the required nutritional needs that best support survival, growth, and development in subpopulations such as chronically diseased patients, children, and aging adults.
To fully appreciate the many challenges surrounding nutrition and nutrition research, it is important to understand some key elements involved, including research designs, the complexity of the food environment, and approaches to collecting and analyzing dietary data.
NUTRITION RESEARCH CHALLENGES
For nutrition research and its associated disciplines, ethical considerations are often complicated by factors that range from overly holistic study designs to inextricable links with marketing. Consequently, constant vigilance is needed to assess and deal with apparent conflicts of interest. Also, there are few scientific disciplines that are more defined by cultural, religious, or political codifications than nutrition. Therefore, nutrition research questions are often extremely multifaceted and require dealing with complex variables.
Foods and food products are available in different varieties, brands, and flavors. There are many options for the same type of food, yet the ingredients in each option may differ in ways that matter greatly to nutrition and nutrition research. How food is cooked can change its nutrient profile. People vary in many ways, including by sex, race/ethnicity, BMI, economic status, metabolic rate, food preferences, exercise patterns, and fitness levels. All these different variables could affect what study participants eat, how they metabolize what they eat, and how much they remember about what they eat. Food and nutrition databases have many limitations.
Nutrition and nutrition research should focus on the following high-priority areas: (1) variability in individual responses to diet and foods; (2) healthy growth, development, and reproduction; (3) health maintenance; (4) medical management; (5) nutrition-related behaviors; (6) food supply/environment; and (7) nutritional supplementation and diseases. Findings from these areas will help elucidate strategies that can be applied toward the prevention and treatment of both infectious and non-communicable diseases, including cardiovascular disease, diabetes, and cancer.1
FOOD QUALITY ISSUES
Diet Quality Indices (DQIs) are evaluation tools used to quantify the overall quality of an individual's diet by scoring food and/or nutrient intakes, and sometimes lifestyle factors, according to how closely they align with dietary guidelines.2 There is a variety of DQIs that employ a variety of scoring matrices. Some use food frequency, quantity, or food group consumption, while others use nutrient intake. The quantity, quality, frequency, and specific timing of the ingestion of nutrients are all important considerations, as any variable by itself can make a difference. Quantity, which refers to the total amount of macronutrients in the diet and its associated calories, is the most obvious factor to measure.
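A DQI scoring matrix of the kind described above can be sketched as follows. The components, targets, and linear-capped scoring rule are invented for illustration and do not correspond to any specific published index.

```python
# Hypothetical diet-quality scoring matrix: each component's intake is scored
# linearly against a target and capped at a maximum component score.
TARGETS = {"vegetables_cups": 2.5, "fruit_cups": 2.0, "whole_grains_oz": 3.0}
MAX_COMPONENT_SCORE = 10.0

def diet_quality_score(intake: dict) -> float:
    """Sum of capped, linearly scaled component scores (0-10 each)."""
    total = 0.0
    for component, target in TARGETS.items():
        ratio = min(intake.get(component, 0.0) / target, 1.0)
        total += ratio * MAX_COMPONENT_SCORE
    return total

# Meets the vegetable and whole-grain targets, half the fruit target:
print(diet_quality_score({"vegetables_cups": 2.5, "fruit_cups": 1.0, "whole_grains_oz": 3.0}))  # -> 25.0
```

Real indices such as the HEI also include moderation components (e.g., sodium, added sugars) that score higher for lower intake, which this sketch omits.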
Evaluating food quality criteria is much more complex. Many quality determinants are involved when investigating a particular food or food product, macronutrient, or food group source such as fruits, vegetables, grains, proteins, and dairy. These determinants include wholeness, ripeness, freshness, additives, pesticides, and herbicides. Crop origin is particularly important for several reasons, including soil quality, agricultural methods, crop processing, packaging, and distribution. Soil quality is known to determine the nutritional content of the crop.3,4 Decades of bad farming practices produced historical declines in crop micronutrient density. In fact, between 1950 and 1999, there was a sustained decline in levels of six nutrients (protein, Ca, P, Fe, riboflavin, and ascorbic acid) in at least 43 foods.5-8
PROCESSING, REFINING, AND THE NUTRIENT SUBTRACTION EFFECT
Food refining processes that cause the most nutrient loss expose foods to high heat, light, and/or oxidative conditions.9 Processing is commonly used to improve taste and shelf stability and to simplify and shorten food preparation time. However, processing can also increase the glycemic index and reduce fiber, vitamin, mineral, and phytochemical content.10 Food additives are prevalent in packaged processed foods to increase shelf life and improve appearance or flavor. In fact, over 10,000 substances are approved as additives to food products in the United States.11 Some of these additives give rise to health concerns, especially for vulnerable populations like children. Additives include colorings, flavorings, and chemicals deliberately added to food during processing (direct food additives). In addition, indirect additives are substances in food contact materials that contaminate food as part of the packaging or manufacturing process, such as adhesives, dyes, coatings, paper, paperboard, plastics, and other polymers.11 It has been stated by the Council of Environmental Health that the designation "generally recognized as safe" (GRAS) is insufficient to ensure the safety of food additives. It also does not contain sufficient protection against conflicts of interest. Furthermore, the FDA needs more authority to acquire data on all chemicals on the market or independently verify their safety for human health.11 In addition, for nearly 80% of chemical additives intentionally added to food, it has been reported that the FDA's database lacks the information necessary to predict the amount people can safely consume. Moreover, despite FDA requirements, 93% of additives lack reproductive or developmental toxicity data.
12 These additives may have metabolic, enzymatic, or other biological consequences and thus must be considered when evaluating a food product. Many are already associated with health risks. Trans fats used for texture have been shown to increase the risk of heart disease, type 2 diabetes, and stroke.13 Nitrates and nitrites used in processed meats can lead to certain cancers and other significant health problems.14 High fructose corn syrup (HFCS) is a very common and cheap sweetener, but research indicates it can lead to obesity and type 2 diabetes.15 Recent studies show that fructose-induced uric acid production causes mitochondrial oxidative stress that promotes fat accumulation independent of excessive caloric intake.16 It has been proposed that fructose-mediated generation of uric acid may have a causal role in diabetes and obesity.17 Artificial sweeteners have been shown to induce glucose intolerance by altering the gastrointestinal microbiota.18 In addition, aspartame is an artificial sweetener commonly found in sugar-free sodas and many other products. Several studies have shown that aspartame can cause headaches in vulnerable individuals and may cause symptoms of depression or anxiety.19,20
THE IMPACT OF PESTICIDES
Pesticides and herbicides are very common in current commercial farming practices. Testing conducted by the Environmental Working Group (EWG) revealed that more than 90% of samples of strawberries, apples, cherries, spinach, nectarines, and leafy greens tested positive for residues of two or more pesticides.21 Also, glyphosate, a herbicide linked to many diseases, is widely used in the agricultural industry. The EWG found glyphosate in over 95% of popular oat-based food samples. The report included 12 wheat-based products, five dried pasta samples, and seven cereal samples.22 These toxic substances present in food travel through the gastrointestinal tract, where they damage the microbiome,23 possibly including bacteria necessary for the synthesis of vitamins such as vitamin K and B group vitamins, including biotin, cobalamin, folates, nicotinic acid, pantothenic acid, pyridoxine, riboflavin, and thiamine.24 In addition, these toxic compounds enter the systemic circulation and are distributed to various tissues. Pesticides from food undergo phase I and II enzyme detoxification,25 which requires micronutrients such as B complex vitamins, vitamin C, vitamin E, and glutathione as cofactors.
Increased use of these pollutants contaminates food and, when chronically consumed, can harm health. An animal study of long-term exposure to two pesticides demonstrated a subsequent increase in mitochondrial malondialdehyde, swelling, and dysfunction, and a decrease in glutathione.26 Herbicides and pesticides have been associated with increased oxidative stress, inflammation, and depletion of glutathione and antioxidants.27,28 Pesticides in food can cause an excess of reactive oxygen species (ROS) and subsequent depletion of antioxidants, micronutrients, and glutathione, which can result in an altered net balance of nutrients.
THE MICROBIOTA AND MICROBIOME
The human microbiota refers to the vast array of commensal microorganisms that live mostly in the intestines, with some residing in the oral and vaginal mucosa, skin, and other tissues. These microorganisms include bacteria, viruses, and fungi. The term microbiome denotes the set of genomes of all these microorganisms. Microbes in the human body are nourished by food and molecules in their environment, and they produce metabolites according to their metabolism and genome, which profoundly impact human metabolism. This influence can be broadly categorized in different areas, such as human nutrition, physiology, immunity, behavior, and disease.29 Research shows an important connection between the gut microbiome and stress response, inflammation, depression, and anxiety.30 Nutrient-dense foods are high in micronutrients and relatively low in calories.31 Consuming fast food and processed food, which are low in fiber and contain additives such as sweeteners, antibiotics, persistent organic pollutants, and pesticides, provides calories from added sugars and refined oils with minimal micronutrients. Low nutrient-dense processed foods create an environment that is hostile to microbiota health.18,23 They are also associated with the increased incidence of chronic disease seen over the past several decades. Therefore, any comprehensive nutritional analysis should also examine the influence of diet on the human microbiome.
NET MICRONUTRIENT BALANCE VALUE (NMBV)
The NMBV is the sum of all micronutrients present in a food minus those depleted (micronutrient depleting potential). The micronutrient depleting potential accounts for the depleting effect of antinutrients such as phytates, tannins, lectins, and oxalates, as well as the method of consumption (raw, sprouted, fermented, cooked).33,34 The NMBV can be reduced by natural antinutrients, refining processes, drug-nutrient interactions, and contaminants such as pesticides, herbicides, and medication. In contrast, organic produce is more nutritious because it contains only naturally occurring antinutrients, not agricultural contaminants. The animal's diet will determine the nutritional profile of animal-based food products such as meat, milk, and eggs. Grass-fed, free-range animals and wild-caught fish and seafood contain beneficial lipids that can significantly positively affect physiology over time.
The Net Micronutrient Balance Value concept proposes to distill a more accurate evaluation of nutrients by considering components that enhance the nutritional/physiologic benefit (enhanced value factors) and decrease the nutritional/physiologic benefit (subtraction factors) (Figure 1). Enhanced value or subtraction factors can relate to the food itself, for example, soil quality, where it was produced, genetic modification, pesticide use, how crops/animals were nourished, and level of processing. In addition, enhanced value or subtraction factors about the human organism include health status, genomics, age, and microbiome profile.
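The definition above amounts to a simple balance: measured micronutrient content minus the unabsorbed portion and the amount depleted in metabolism. A minimal sketch follows, with all field names and numbers invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FoodProfile:
    micronutrient_mg: float  # measured micronutrient content
    unabsorbed_mg: float     # portion blocked by antinutrients (phytates, oxalates, ...)
    depleted_mg: float       # amount consumed detoxifying additives/contaminants

def nmbv(food: FoodProfile) -> float:
    """Net micronutrient balance value: content minus unabsorbed and depleted portions."""
    return food.micronutrient_mg - food.unabsorbed_mg - food.depleted_mg

# Hypothetical comparison: organic vs. heavily processed version of the "same" food.
organic = FoodProfile(micronutrient_mg=12.0, unabsorbed_mg=2.0, depleted_mg=0.5)
processed = FoodProfile(micronutrient_mg=8.0, unabsorbed_mg=2.0, depleted_mg=3.0)
print(nmbv(organic), nmbv(processed))  # -> 9.5 3.0
```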
FOOD EXAMPLES
Eggs
36,37 In one study, the highest content was found in eggs from organic farming.38 Hens that get to spend time in the sun lay eggs that contain significantly more vitamin D. Omega-3 eggs have about five times as much omega-3 and 39% less arachidonic acid than a regular egg. In other words, these eggs contain significantly more anti-inflammatory and less inflammatory fats, and significantly more vitamin D.39
Meat
41,42 Red meat from ruminants such as cows, sheep, and goats is an important dietary source in developed countries and high socio-economic groups in developing countries. The diet of ruminants significantly influences the concentration of omega-3 fatty acids in their meat. Meat from grass-fed or forage-fed ruminants has a greater concentration of omega-3 fatty acids than grain-fed counterparts.43 An accurate nutritional assessment requires consideration of the meat's nutritional value based on the animal's diet.
AGE-RELATED DIGESTIVE ENZYME PRODUCTION
With aging, digestion slows down44 as pancreatic exocrine digestive secretions decrease after the third decade of age.45,46 The reduction of pancreatic digestive enzymes may eventually result in decreased nutrient absorption, insufficiencies, and deficiencies, with subsequent immune weakening, especially in old people or those with genetic or metabolic vulnerabilities.47 Supplementation of digestive enzymes that include proteases, amylases, lipases, pancreatin, and cellulase, in addition to improving digestion of foods, can decrease post-surgery recovery time and, in some cases, may act as an adjuvant in cancer treatment.48,49 Age and pancreatic enzyme function need to be considered in nutrition research as they may affect digestion and nutrient availability. The use of digestive dietary supplementation might reduce age-dependent pancreatic enzyme deficiencies.

risk in this population. In addition, specific diet scores were also associated with a statistically significant 20%-23% lower risk of cancer mortality.50 Another study examined the relationships between four indices (HEI-2010, AHEI-2010, aMED, and DASH) and all-cause, CVD, and cancer mortality in the NIH-AARP Diet and Health Study (N = 492,823). Data from a 124-item food-frequency questionnaire were used to calculate scores. Higher index scores were associated with a 12%-28% decreased risk of all-cause, CVD, and cancer mortality.51
CONCLUSION
Everyone must eat, making diet one of the most common factors influencing health and disease outcomes. Nutrition research is complicated by numerous factors inherent in this area of inquiry. More accurate and complete reporting in the lay press and even peer-reviewed publications can further complicate things and create confusion among the public. Nevertheless, nutrition research has enormous potential to clarify concepts necessary for preventing and improving disease management and overall health. With its myriad health implications, nutrition is an exciting, challenging, and ever-evolving area of research. The production and consumption of processed food have increased over the last decades,52 coinciding with consistently rising trends of obesity and chronic disease.53 Careful nutritional research must recognize that similar food products that are considered to have the same nutritional value may have significant differences in terms of their NMBV. Variables that affect NMBV include soil quality, agricultural methods, contaminants, food processing, additives, and cooking methods. These factors can enhance or decrease the micronutrient value. Identifying and considering the effects of these and other specific variables will help produce more uniformity and better insight into the role of nutrition in health and disease.
Do pandemic COVID 19 and business cycle influence the Indonesia composite index?
The impact of the Covid 19 pandemic on the Indonesian economy was worse than the impact of the global financial crisis in 2008. The business cycle can be seen in the contraction of economic growth, the volatility of the exchange rate, and inflation. Shocks to these business cycles could create systemic risks and influence financial system stability. Financial system stability is sensitive to shocks from changes in one of the macroprudential indicators, namely the Indonesia Composite Index (ICI). This study examines the effect of the Covid 19 pandemic and the business cycle on the ICI. The research was conducted in Indonesia using quarterly secondary data from 2008-2021, obtained from Bank Indonesia. The model used is the Auto Regressive Distributed Lag (ARDL) model, chosen to capture both the long-term and short-term relationships between the variables studied. The results show that the Covid 19 pandemic and business cycle variables such as economic growth, the exchange rate, and inflation have a relationship with the ICI in both the short and long term. The decline in economic growth during a crisis is difficult to avoid. However, the government must continue to accelerate economic growth so it does not decline sharply during the crisis. Monetary authorities must also maintain exchange rate stability and keep inflation from fluctuating sharply.
Introduction
Economic shock is characterized by a significant decline in economic growth; its indicators can be seen in fluctuations of economic growth [1]. Economic fluctuations are usually followed by inflation and unemployment as a result of the cycle [2]. In the Indonesian case, the economic recession in 2020 was caused by the Covid-19 pandemic, which produced a deep contraction shown by negative growth in the second and third quarters of 2020. This can be seen in Figure 1, where economic growth was recorded at -5.32% in the second quarter and -3.49% in the third quarter. This was the lowest recorded compared to pre-pandemic crises such as the 2008/2009 global crisis and the 2015 European crisis, during which economic growth declined but remained positive.
In 2020, the Indonesian economy experienced a sharp contraction due to the COVID-19 pandemic. The virus was first discovered in China [3]; COVID-19 causes a severe acute respiratory illness that not only spread fast around the world but also caused many fatalities. It was confirmed to have spread to Indonesia in March 2020. The enormous number of deaths and illnesses caused by the pandemic has caused extraordinary shocks to public health and the global economy. Stock markets, global supply networks, labor markets, and consumer behavior have all been impacted negatively by the pandemic.
* Corresponding author: chenny@unsyiah.ac.id
The economic recession shown by negative growth did not only affect Indonesia but also became a threat to the global economy. The Covid 19 pandemic has infected approximately 50.27 million people in the world, and nearly 1.26 million people have died [4]. The pandemic is expected to cause the world economy to grow negatively due to restrictions on mass movement (social and physical distancing), as shown by the studies of [5,6]. Even in developed countries such as the United States and the Euro Area, the crisis predicted due to this pandemic is greater than the 2008/2009 global crisis and almost equal to the Great Depression of the 1930s [7].
Other indicators of the economic shock are fluctuations in inflation and the rupiah exchange rate. Figure 2 shows the growth of inflation as a variable that is strongly influenced by the economic cycle. The decline in economic performance due to economic shocks has complex impacts, such as declining household consumption and declining business performance. Weakening purchasing power causes deflation, and this is what occurred: inflation fell and deflation took place. Moreover, in the financial sector, capital outflows due to the shock caused the rupiah to depreciate; shaken by the pandemic, the rupiah weakened to Rp16,310/USD in the second quarter. Furthermore, economic shocks have a major impact on financial system stability. Potential systemic risk makes the financial system vulnerable and can lead to a deeper crisis. The instability of the financial system causes several problems: the monetary policy transmission mechanism becomes ineffective, the intermediation function is hampered, public distrust of financial institutions grows, and restructuring becomes costly [8].
The financial system is divided into two parts, namely financial markets (capital markets) and financial intermediary institutions such as banks [9]. Macroprudential policy covers several areas, including financial institutions, macroeconomic conditions, financial markets and infrastructure, corporations, and households [10]. Several macroprudential variables are used to measure the financial system, one of which is the Composite Stock Price Index (ICI). A stable capital market reflects good economic conditions; conversely, unfavourable and unstable economic conditions make capital market performance vulnerable to decline, which can be seen in a falling ICI.
Economic cycle fluctuations greatly affect macroprudential indicators, including the composite index. Several studies have analysed the impact of economic shocks on the stock market. [11,12] analysed the effect of shocks to the economic cycle on the stock market using the Vector Autoregressive (VAR) approach; their findings show that economic turmoil is a condition of uncertainty and contributes to a decline in stock market performance. [13] examines the transmission of the economic crisis through global finance and the real sector in several regions, such as Asia and Europe; a larger regional shock to the real sector can be seen in the collapse of financial markets during the crisis. [14] looks at the synchronization between business cycles in the United States and the rest of the world; the crisis in America triggered a global crisis and financial crises in various countries, and the related equity markets across countries experienced a sharp decline during the crisis period.
Research on the effect of economic shocks that compares the periods before and after the occurrence of Covid 19 using dummy variables is relatively new. Financial system stability has also only recently been studied in depth by monetary authorities. Based on this background, this study examines the effect of the business cycle and the shock of the Covid 19 pandemic on the Composite Stock Price Index in the long and short term.
Research methodology
This study analyses the effect of the economic cycle and the Covid 19 pandemic on the Indonesian Composite Stock Price Index as one of the macroprudential variables in Indonesia. The data used are secondary data sourced from Bank Indonesia, Federal Reserve Economic Data (FRED), and the Financial Services Authority (OJK), covering quarterly observations from 2008-2020. The study uses a dynamic model and analyses the long-term and short-term relationships between variables with the ARDL (Autoregressive Distributed Lag) model. The dependent variable is one of the macroprudential indicators, namely ICI, the Composite Stock Price Index. The independent variables, as proxies for economic shocks, are economic growth, EXR (the exchange rate of the Rupiah against the dollar), INF (inflation), and D_Covid19 (a dummy for the periods before and during the Covid 19 pandemic). Ø, φ, γ, and ω denote short-term relationships; λ1, λ2, …, λ5 denote long-term relationships; t is time; and ut is the error term. All variables are in log form, except those in the form of proportions, so that the sensitivity of the dependent variable to the independent variables (economic shocks) can be read directly. In the ARDL model, several tests were carried out, including (1) a stationarity test, (2) an optimum lag test, and (3) model stability tests (classical assumption tests such as autocorrelation, multicollinearity, and heteroscedasticity).
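The ARDL specification described here can be written out explicitly. A plausible error-correction form, reconstructed as a sketch from the variables and coefficient symbols named in the text (the lag orders p and q1-q3, and the exact notation, are assumptions), is:

```latex
\Delta \ln ICI_t = \alpha_0
  + \sum_{i=1}^{p} \phi_i \,\Delta \ln ICI_{t-i}
  + \sum_{j=0}^{q_1} \varphi_j \,\Delta GDP_{t-j}
  + \sum_{k=0}^{q_2} \gamma_k \,\Delta \ln EXR_{t-k}
  + \sum_{m=0}^{q_3} \omega_m \,\Delta INF_{t-m}
  + \lambda_1 \ln ICI_{t-1} + \lambda_2\, GDP_{t-1}
  + \lambda_3 \ln EXR_{t-1} + \lambda_4\, INF_{t-1}
  + \lambda_5\, D\_Covid19_t + u_t
```

Here the φ, φ', γ, and ω terms capture the short-run dynamics and the λ terms the long-run (levels) relationship, matching the roles the text assigns to those symbols.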
Results and discussion
Before entering the regression stage, a unit root test is first performed on each variable. Based on the Phillips-Perron test statistic, none of the variables is stationary at the level; the test was then repeated at the first difference, and the results show that all variables are stationary at the first difference. After the stationarity test, the next step is the optimal lag test using the Akaike Information Criterion (AIC). Based on this test, the optimal lag used in this study is ARDL(1,1,1,2,3), and this lag is used in the ARDL estimation. The next stage is regression using the ARDL model, followed by model testing and classical assumption tests, namely the autocorrelation, multicollinearity, and heteroscedasticity tests. Table 2 shows the autocorrelation test results. Based on these results, the estimated model is free from autocorrelation problems, as seen from the insignificant probability value at the 5% confidence level. Table 2. Autocorrelation test.
(Q-statistic probabilities adjusted for 1 dynamic regressor.)
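The AIC-based lag search described above can be illustrated with a small numpy sketch. This is illustrative only: the study used an econometrics package, and the AR-only search below is a simplification of the full ARDL(p, q1, …, q4) grid, with simulated data standing in for the macro series.

```python
import numpy as np

def ols_aic(y, X):
    """Fit OLS via least squares and return a Gaussian AIC: n*ln(RSS/n) + 2k."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    return n * np.log(resid @ resid / n) + 2 * k

def select_lag(y, max_lag=4):
    """Choose the autoregressive lag order that minimizes AIC.

    Every candidate model is fitted on the same sample (the first max_lag
    observations are dropped) so the AIC values are comparable.
    """
    best_lag, best_aic = None, np.inf
    rows = len(y) - max_lag
    target = y[max_lag:]
    for p in range(1, max_lag + 1):
        # Regressor matrix [1, y_{t-1}, ..., y_{t-p}] on the common sample.
        X = np.column_stack([np.ones(rows)] +
                            [y[max_lag - i: len(y) - i] for i in range(1, p + 1)])
        aic = ols_aic(target, X)
        if aic < best_aic:
            best_lag, best_aic = p, aic
    return best_lag

# Hypothetical data: a simulated AR(1) series standing in for a differenced
# macro variable; the search should typically settle on a short lag.
rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal()
print("selected lag:", select_lag(y))
```

In practice the same idea is run over all combinations of lags for the dependent and each independent variable, which is how a specification like ARDL(1,1,1,2,3) is obtained.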
Further checks were carried out with a multicollinearity test and a heteroscedasticity test. The serial correlation check uses the Breusch-Godfrey Serial Correlation LM Test; the result shows that the model is free from this problem, as the Chi-Square probability is not significant. Likewise, the heteroscedasticity test using Breusch-Pagan-Godfrey shows no heteroscedasticity problem, as seen from the insignificant Chi-Square probability value. The last stage is testing model stability, as shown in Figure 3. Based on the CUSUM and CUSUM of Squares tests, the model is stable, as the CUSUM and CUSUM of Squares statistics remain within the significance bounds. After going through all the testing stages, the ARDL model can be used for interpretation. Fig. 3. Model stability test. Table 3 presents the short-term estimation results. In general, the economic cycle and the COVID-19 pandemic significantly affect the composite index in the short term. This can be seen from the value of CointEq(-1), known as the Error Correction Term (ECT), which is negative, significant, and less than 1 in absolute value. Partially, almost all variables are significant at the 5% level. The Covid 19 pandemic in the t-1 and t-2 periods greatly affected the decline in the ICI: the pandemic caused stress for investors, and demand for portfolio investments such as stocks and bonds declined, leading to a general fall in stock prices. The economic cycle variables, namely economic growth, the exchange rate, and inflation, also show significant short-term effects. The decline in economic growth caused the ICI to be sharply corrected in period t (the current period), and the weakening of the exchange rate in period t also has a negative effect on the ICI. The inflation variable, by contrast, shows a positive effect on the ICI.
This condition is probably caused by inflation targeting, which keeps inflation under control and at a low level; even a short-term increase raises positive expectations among investors, so public demand and purchasing power remain quite high. Based on the results of the long-term Bound Test in Table 4, all independent variables have a long-term relationship with the dependent variable: the F-statistic exceeds the I(0) and I(1) critical values at the 1-10 percent confidence levels. This means that in the long term, the economic cycle and the Covid-19 pandemic greatly affect the ICI. Partially, however, in the long term the individual variables do not show a significant effect on the ICI, as can be seen in equation 2 above. The relationship between the variables is in line with the results of Tsai (2017) and Caldara et al. (2016). Therefore, the slowdown in economic growth in the event of an economic shock will have a negative impact on the composite stock price index, especially in the short term.
Conclusion
The decline in economic growth during a crisis is difficult to avoid. However, the government must continue to accelerate economic growth so it does not decline sharply during the crisis. Monetary authorities must also maintain exchange rate stability and keep inflation from fluctuating, since a weakening exchange rate and rising inflation will further exacerbate the crisis. Regarding the Covid 19 pandemic, the government must try harder to reduce the spread of the coronavirus in the various regions of Indonesia, considering that the root cause of the economic downturn in the 2020-2021 period is the Covid 19 pandemic. In addition, it is essential to soften the profound impact of the pandemic on the financial system and the economy.
“Testing performance of an interest rate commission agent banking system (AIRCABS)”
This paper analyzes data and interprets statistical results in testing the performance of an interest rate commission agent banking system. Primary and secondary data were collected from the banking industry in Ethiopia to test the research hypotheses: credit risk and liquidity crunch have no impact on AIRCABS; investor loan funding has a positive impact on the profitability and sustainability of AIRCABS; and a discrete market deposit interest rate incentive has a positive impact on stable deposit mobilization in a bank. To test the hypotheses, statistical tools such as Cronbach's alpha, Kuder-Richardson (KR-20), canonical correlation, and multinomial logistic regression were used. The results show that credit risk and liquidity crunch have no effect on an interest rate commission agent banking system, that investor loan funding has a significant strong relationship with the profitability and sustainability of AIRCABS, and that a discrete market deposit interest rate incentive also has a significant strong relationship with stable deposit mobilization. This leads to the conclusion that the interest rate commission agent banking system (AIRCABS) model is viable and reliable.
INTRODUCTION
Disbursing loans while holding customer deposits as an asset has exposed the banking business to credit risk and liquidity crunches. The business models banks adopted to solve banking crises arising from credit risk and liquidity crunch were themselves a catalyst for the financial crisis. Transferring credit and liquidity risks to entrepreneurs and investors enables the bank to maintain its sustainability and profitability in the market. This can be done by empowering money depositors to exercise their full right over the use of their money and obtain a reasonable credit price, rather than offering them an unreasonable deposit interest rate that forces them into the informal market (Simon-oak & Jolaosho, 2013). Transferring credit risk using financial instruments such as derivatives aggravated the financial crisis (Gogoncea & Paun, 2013). The main reason behind this fact is that banks disburse loans considering customers' deposits as their own asset on their balance sheet.
Banks and Bank Systems, Volume 12, Issue 3, 2017
To maintain the mutual benefit of investor, entrepreneur, and bank, an interest rate commission agent banking system was developed (Tessema & Kruger, 2016). An interest rate commission agent banking system is defined as a system adopted by the bank to act as an agent for investors' loan funding to entrepreneurs, obtaining the agreement of the fund seller and buyer to administer the loan after disbursement and retaining a reasonable interest rate commission from the agreed credit price of the investors' loan funding (Tessema & Kruger, 2016). Since the agent bank does not hold customer deposits as an asset, it is exempted from deposit interest expense. In fact, the agent bank collects an interest commission from administering investor loan funding, without credit and liquidity risks, that is higher than the interest rate margin collected by the traditional banking system. When investor and entrepreneur present themselves to process a loan transaction at the agent bank, the selected agent bank assesses the entrepreneur's project in accordance with the central bank's rules and regulations. After the agent bank has had the investor, the entrepreneur, and the agent bank itself sign a tripartite loan contract agreement, the investor and entrepreneur open special deposit accounts through which the loan transaction from the investor's account to the entrepreneur's account takes place. The agent bank administers the loan after disbursement by maintaining an off-balance-sheet account for the loan accounting record. While the entrepreneur periodically repays the portion of interest and principal, the agent bank transfers the repayment into the investor's account, exercising the agreed interest rate commission stated in the loan contract. The investor thus has the opportunity to collect the money sold to the entrepreneur duly or in lump sums, per the agreement. Since the investor collects its benefit throughout the loan period without waiting until the bank accounting period in which its profit and
loss are disclosed, the investor can mitigate risks related to credit and liquidity. The agent bank also mitigates the investor's and entrepreneur's risks related to credit and liquidity by maintaining operational efficiency through enhanced human capital efficiency, structural capital efficiency, and capital employed efficiency (Tessema & Kruger, 2016).
CREDIT AND LIQUIDITY RISK TRANSFER MECHANISM OF AN INTEREST RATE COMMISSION AGENT BANKING SYSTEM
An interest rate commission agent banking system (AIRCABS) maximizes profitability, sustainability, operational efficiency, liquidity, and capital by transferring credit risk and liquidity crunch to investors and entrepreneurs. An interest rate commission agent bank needs to be highly efficient in technology, human capital, and finance; applying inadequate technology and human capital most likely exposes the agent bank to operational risk (Tessema & Kruger, 2016).
In the modern banking system, risk can be transferred either by selling the loan or by buying insurance through a credit default swap. In bank credit risk transfer, the insurance company plays a big role by giving insurance coverage for loans under the administration of a bank. So, under conventional banking, transferring credit risk can only improve risk diversification if the risk transfer is between the bank and insurance sectors. However, transferring credit risk can bring a contagion effect to the institutions where the transaction was carried out, which in turn increases systematic risk, financially damaging market participants in particular and later affecting the economy as a whole (Allen & Carletti, 2006).
Since credit risk depends on the borrower's internal and external factors, such as failure to administer the bank loan, commodity prices, and market price inflation before the business gets started, credit risk is not entirely manageable in the conventional banking system. However, the credit risk and liquidity crunch to which the banking business is prone are managed by an interest rate commission agent banking system, which transfers credit and liquidity risks to entrepreneurs and investors by selling services to earn an interest rate commission and fees.
An interest rate commission agent banking system adopted by a bank needs to develop three lending strategies: 360 degree, 180 degree, and 90 degree (Tessema & Kruger, 2016): • 360-degree lending strategy: involves an investor and entrepreneur who know each other and an agent bank. Investor and entrepreneur are present at the agent bank at the same point in time. An investor can fund loans to the entrepreneur by selecting the entrepreneur's project through an interest rate commission agent bank, with or without the entrepreneur pledging collateral.
• 180-degree lending strategy: involves an investor and entrepreneur who do not know each other and an agent bank. Investor and entrepreneur are present at the agent bank at different points in time. Under this lending strategy, the interest rate commission agent bank selects the entrepreneur's project to be financed by an investor against pledged collateral from the entrepreneur. For selecting the entrepreneur's project, the bank charges the investor a project selection fee.
• 90-degree lending strategy: involves the fund provider and the bank. Under this lending strategy, the fund provider is a money depositor who later shifts all or part of the fund into investment to finance the entrepreneur's project through the agent bank and collects all or part of the credit price over the loan period. Otherwise, the investor sells the fund to the bank to collect a discrete market deposit interest incentive according to the deposit increment level.
The lending strategies were designed to shift credit risk and liquidity crunch to investors and entrepreneurs and thereby maximize the agent bank's profitability and sustainability in the market. Transferring credit risk to non-bank parties makes for a more stable financial sector than transferring credit risk within the banking sector (Wagner & Marsh, 2006). While the investor provides loan funding to the entrepreneur, the agent bank (AIRCABS) does not hold the disbursed funds as an asset and ceases paying deposit interest on the disbursed amount.
As depicted in Figure 1, an interest rate commission agent banking system is designed to transfer credit risk and liquidity crunch to investors and entrepreneurs in order to maximize profitability and sustainability in the market. In the 360 degree lending strategy, investor and entrepreneur know each other, and pledging collateral by the entrepreneur to the investor is optional. However, if the entrepreneur fails to meet the debt obligation, the investor's agent bank (AIRCABS) searches for an entrepreneur with the same project interest and rents out the project until the loan is settled, without ownership transfer, on the ultimate decision of the investor and the agent bank. Because of internal or external factors, an investor that has already invested in an entrepreneurial project may want to withdraw during the loan period. In this case, the agent bank sells the project to a new investor with the same project interest in the market, refunding the former investor the balance as of the date of sale. The agent bank administers the loan disbursed to the entrepreneur until the loan is settled, collecting an interest rate commission and additional service fees from investors while transferring the credit risk to new investors and entrepreneurs.
Investors who have sufficient funds and wish to invest in alternative investment projects get advice from the agent bank on how to place their funds. In this case, in the 180 degree lending strategy, the agent bank can select a feasible project from those entrepreneurs who applied for funding at the agent bank earlier, or can select from the market to meet the investor's interest. Since investor and entrepreneur have no earlier acquaintance, pledging collateral by the entrepreneur is mandatory. The collateral may be the project under investment or an asset that gives service during the application period. The collateral pledged against the loan disbursed from the investor's account should have a safe margin rate between 91% and 100%, entitling the entrepreneur to a loan of 90% of the collateral value, unless the entrepreneur covers the remaining safe margin of 10% or more by buying insurance from an insurance company. The insurance company may recover the default amount beyond the original loan balance, or according to the agreement between the entrepreneur and the insurance company. The agent bank does not necessarily advise the entrepreneur to buy insurance for loan repayment coverage; rather, it manages the loan to get paid by the due time specified in the loan contract. Otherwise, if the entrepreneur fails to pay a debt obligation above 100% of the collateral value, the agent bank auctions the collateral together with the project under investment and is obliged to collect the disbursed funds together with the accrued interest, reimbursing the investor the remaining unpaid balance and covering its own uncollected interest rate commission and additional administrative expenses. The agent bank sells the pledged collateral only when no alternative investment solution can be found. The main target of the agent bank, however, is to benefit the investor and the entrepreneur by mitigating the risk of the business run by the entrepreneur. In fact, the agent bank rents the entrepreneur's project
to a new entrepreneur with the same project interest until the loan is settled, holding the collateral and without ownership transfer. The benefit for the new entrepreneur is that the business runs with the support of the entrepreneur's own fund, without paying rent, in a full-fledged facility, while collecting business profit beyond the loan repayment made to the investor. Here the agent bank transfers the credit risk of the investor and the entrepreneur to the new entrepreneur.
In the 90 degree lending strategy, a depositor who wishes to become an investor during the deposit period consults the bank, which has already invested the depositor's fund in a selected project earlier. The bank consulted by the depositor shifts to the agent position after a formal agreement between the new investor and the agent bank for the portion of funds invested. The agent bank ceases crediting deposit interest to the new investor's deposit account; instead, the investor receives a proportional credit price according to the share of its fund in the total funds already disbursed by the bank to the debtor, and the agent bank collects a proportional interest rate commission from the investor's credit price. This can be done if the interest rate commission agent banking system is a unit of a bank that already runs under the conventional banking system. The loan already disbursed to the debtor has insurance coverage and collateral pledged against it. So when the depositor later moves to the investor position during the deposit period, the investor's credit risk is transferred to the pledged collateral and the insurance engaged by the entrepreneur.
So an interest rate commission agent bank can transfer credit risk by selling loans to new investors, renting the project to new entrepreneurs, selling collateral, and obtaining loan repayment insurance coverage.
Since the agent bank does not hold customer deposits as an asset on its balance sheet, it is not affected by market and credit risks. Furthermore, the agent bank is equipped with cutting-edge risk-prediction employees who devote their utmost capacity to laying off the credit risk of investors and entrepreneurs.
PROBLEM STATEMENT
The business models adopted by banks made them either retain risk or transfer risk to other financial institutions, which ultimately had the same effect on the industry.
Research hypotheses
The research study aims to investigate and analyze the relationships between the independent and dependent variables of the following research hypotheses:
H0: credit risk and liquidity crunch have no positive effect on an interest rate commission agent banking system;
H1: investor loan funding has a positive effect on the profitability and sustainability of an interest rate commission agent banking system;
H2: a discrete market deposit interest rate incentive has a positive effect on stable deposit mobilization in the bank.
MATERIAL AND METHODS
The research study follows positivist data collection methods which help to test the hypotheses based on primary and secondary data.
Primary data were collected using self-administered, structured survey questionnaires from 300 commercial bank employees out of a population of 1,000 in banks located in Addis Ababa, Ethiopia. The reliability and validity of the survey questionnaires were tested using Cronbach's alpha for Likert-scale items and Kuder-Richardson (KR-20) for binary-scale items, and analyzed using factor analysis. As a rule of thumb, a Cronbach's alpha greater than or equal to 70% for Likert scales and a Kuder-Richardson (KR-20) greater than or equal to 60% for binary scales, as is not uncommon in exploratory research, is accepted as the degree to which the measurement instrument succeeds in describing the research interest (Cronbach, 1951).
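Both reliability coefficients have simple closed forms. The sketch below is a generic numpy implementation, not tied to the study's questionnaire; the toy data are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for Likert items (rows = respondents, cols = items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def kr20(items):
    """Kuder-Richardson 20 for dichotomous (0/1) items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                         # share answering 1 on each item
    total_var = items.sum(axis=1).var(ddof=0)      # population variance, the usual KR-20 convention
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

# Hypothetical toy responses, just to show the call signature.
likert = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2], [4, 4, 5]])
print("alpha:", round(cronbach_alpha(likert), 3))
```

Against the rule of thumb quoted above, a computed alpha of at least 0.70 (or KR-20 of at least 0.60) would mark an instrument as acceptably reliable.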
Secondary data were collected from commercial banks' audited financial statements and national bank publications of economic indicators for the period from July 1, 1993 to June 2016. To measure variables in the secondary data, financial ratios were applied.
Measurement instruments
The research variables' indicators used in the survey instrument were measured by Likert- and binary-scale questions. The survey questionnaires and the data collection method for secondary data stated below were adopted from Tessema and Kruger (2016).
Measurement instruments used to collect primary data using survey questionnaires
Measurement instruments for credit risk and liquidity crunch, investor loan funding, discrete market deposit interest rate incentive, and AIRCABS were used to collect primary data (see Appendix 1, Tables 1-4).
Measures of continuous data type instruments applied in the models
Ratios were used to collect secondary data on liquidity crunch, credit risk, investor loan funding, discrete market deposit interest incentive, and AIRCABS from financial statements and economic indicators (see Appendix 2, Tables 5-9).
Method of analysis
To investigate the impact of credit risk and liquidity crunch on an interest rate commission agent banking system, canonical correlation was used. To predict the relationship between investor loan funding and the sustainability and profitability of AIRCABS, on the one hand, and between the discrete market deposit interest incentive and stable deposit mobilization in the bank, on the other, multinomial logistic regression was used.
Canonical correlation analysis
Correlations between linear combinations of the two variable sets are defined as canonical correlations. From the set of credit risk and liquidity crunch variables (X1, …, Xp) and the set of AIRCABS variables (Y1, …, Yq), canonical variates can be constructed as the linear combinations u = a1X1 + … + apXp and w = b1Y1 + … + bqYq, with the weights chosen to maximize the correlation between u and w. The partial canonical correlation is calculated by eliminating one variable from each set, so that the remaining (p-1) and (q-1) variables are paired.
Whether a canonical correlation between credit risk and liquidity crunch and AIRCABS is meaningful depends on the level of significance, the magnitude of the canonical root, and the redundancy index (Hair, Anderson, Tatham, & Black, 1998). However, the interest of this research study is to find the null hypothesis true, i.e., to find no relationship between the independent variables (credit risk and liquidity crunch) and the dependent variable (AIRCABS), with all regression coefficients except the intercepts equal to zero.
To arrive at a single redundancy index, the redundancies across all roots can be summed; otherwise, the first significant root can be considered, as proposed by Stewart and Love (1968). However, a redundancy coefficient that explains less than 10% of the remaining variance, after the variance explained by a given number of functions, is considered to indicate no significant correlation (Sherry & Henson, 2005).
The three methods of determining the relative importance of a canonical function in a relationship are canonical weights, canonical loadings and canonical cross-loadings. Among these methods, some authors consider canonical loadings an alternative to canonical cross-loadings for interpreting the result (Thompson, 1991). Rotation in canonical correlation leads to losing the optimal interpretation of the analysis; however, the canonical functions, canonical loadings and standardized canonical coefficients were interpreted using Kaiser's (1974) normalized varimax rotation criterion.
To avoid a Type I error when testing the correlation, the significance value for interpreting the results was set at the 95% confidence level. To interpret the magnitude, or practical significance, of the results, the squared canonical correlation takes the values 1.96% for a small, 13.04% for a medium and 25.92% for a large effect, and the partial correlation the values 14%, 36% and 51%, respectively (Cohen, 1992).
The data for canonical correlation were analyzed using statistical software called SPSS.
Multinomial logistic regressions analysis
To investigate whether investor loan funding has a relationship with profitability and sustainability, and whether the discrete market deposit interest rate has a relationship with stable deposits in a bank, the dependent variable was treated as categorical and entered into the analysis as a dummy code: 1 for the existence of profitability and sustainability, 0 for its non-existence. Similarly, stable deposit was treated as a categorical variable, dummy-coded 1 for the existence of a stable deposit and 0 for its non-existence. When the value of a predicting coefficient equals zero in the multinomial logistic regression model, the hypothesis under test is the null hypothesis: no relationship exists between the predicting independent variable and the predicted outcome of the dependent variable, meaning the independent variables do not predict values close to those of the dependent variable. The hypothesis is significant when at least one of the predictor coefficients is greater than zero and close to the value of the predicted dependent-variable outcome.
Which of the independent variables' indicators were predictors of the dependent variable, profitability and sustainability of the agent bank in the market, in the first alternative research hypothesis is depicted by the following multinomial logistic regression equation, where gross profitability and sustainability (GPS) is calculated from the return on capital (ROC), itself calculated as total profit as a percentage of total capital, the result being interpreted as the presence of GPS when greater than the cut-off and the absence of GPS when less.
Investor loan funding (ILF) is calculated as total loans as a percentage of total deposits. Similarly, which of the independent variables were good predictors of the dependent variable, stable deposit mobilization, in the second alternative research hypothesis is depicted by the following multinomial logistic regression equation, where a is the y (GPS or SD) intercept and b is the parameter lying in the interval (0,1). Stable deposit (SD) is calculated as the change in deposits (CD) less the average deposit (AD), interpreted as the presence of SD when greater than 0 and its absence when less than 0, since the change should exceed the average deposit. The discrete market deposit interest incentive (DMDI) is calculated as the change in the ordinary saving deposit interest rate as a percentage of the period's interest rate. Since the minimum deposit interest rate is determined by the National Bank of Ethiopia, changes in the interest rate were not frequent.
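As a rough illustration of the two-category case of the multinomial logistic model described above, the sketch below fits a binary logit by Newton-Raphson on simulated data; the function name, the simulated predictor and the dummy-coded outcome are hypothetical assumptions, not the study's actual variables or its SPSS estimation routine.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Binary logit (the two-category case of a multinomial logistic
    regression) fitted by Newton-Raphson.

    X: (n, k) predictors; y: (n,) dummy-coded outcome, 1 for presence
    (e.g. of profitability and sustainability) and 0 for absence.
    Returns the coefficients (intercept first) and standard errors.
    """
    Xd = np.column_stack([np.ones(len(X)), X])      # intercept column
    b = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ b))           # fitted probabilities
        W = p * (1.0 - p)                           # IRLS weights
        H = Xd.T @ (Xd * W[:, None])                # information matrix
        b = b + np.linalg.solve(H, Xd.T @ (y - p))  # Newton step
    se = np.sqrt(np.diag(np.linalg.inv(H)))         # asymptotic SEs
    return b, se

# Simulated data with known intercept 0.5 and slope 1.0.
rng = np.random.default_rng(1)
x = rng.normal(size=(2000, 1))
prob = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x[:, 0])))
y = (rng.random(2000) < prob).astype(float)
b, se = fit_logit(x, y)
```

With well-behaved data the Newton iterations recover the generating coefficients to within sampling error, and the standard errors feed directly into the Wald statistics reported later in the parameter-estimates tables.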
To assess the fit of the models against the data collected to test the hypotheses, four inferential tests were adopted: the Brown chi-square, the Pearson chi-square, deviance-based measures and descriptive measures (Brown, 1982; Prentice, 1976; Hosmer & Lemeshow, 2000).
Mixing individual survey respondents' perception with quantitative data analysis result
Though the research study followed a positivist research philosophy, a mixed method was applied so that the quantitative results of the study could be explained with the support of human perceptions of the survey instruments, answering the same research question. Integrating the quantitative results of the study with individual survey respondents' perceptions helped to strengthen the problem-centred findings of the research by overcoming the weaknesses of the qualitative or quantitative method with the strengths of the other (Creswell, 2003; Castro, Kellison, Boyd, & Kopak, 2010).
The coefficient of variation, calculated as the standard deviation as a proportion of the mean, or as the standard error of an estimate as a proportion of the estimate itself, was applied to measure the precision of individual survey respondents on each survey item. Though the coefficient of variation does not capture non-response bias, it measures the precision of the estimated mean and can be applied as an estimator of a population parameter (Schouten, Calinescu, & Luiten, 2013). It is used to compare samples of data on the same variables when the means are very different (Lovie, 2005). It is a measure of the relative variability of a positive random-variable distribution whose standard deviation is less than its mean, showing the reliability of the respondents' perceptions of the survey items (Pryseley, Mintiens, Knapen, Stede, & Molenberghs, 2010). In finance it is applied to determine relative risk when choosing among alternative investments; a higher coefficient of variation means a larger deviation from the central mean (Curto & Pinto, 2009). Though the measure is widely applied in the sciences, it is not widely applied in social science (Kelley, 2007). For this reason, no fixed referral threshold exists for interpreting individuals' agreement with survey items, so the reference point for the coefficient of variation in this study was set based on the significance level of the parameters estimated from the quantitative data. Accordingly, for the Likert-scale survey questionnaires a coefficient of variation at or below .30 was considered an acceptable rate, while values above .30 were interpreted with caution by referring to the mean. For the binary survey questionnaires, a coefficient of variation at or below .50 was considered an acceptable rate, while a ratio above .50 was interpreted with caution by referring to the mean of the survey instruments.
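The coefficient-of-variation rule above can be illustrated with a short sketch; the Likert responses here are hypothetical, and the .30 ceiling is the study's own threshold for Likert items.

```python
import statistics

def coefficient_of_variation(responses):
    """CV = sample standard deviation / mean, used to gauge how
    tightly responses cluster around the central mean."""
    return statistics.stdev(responses) / statistics.mean(responses)

# Hypothetical 5-point Likert responses to one survey item.
likert = [4, 5, 4, 4, 3, 5, 4, 4]
cv = coefficient_of_variation(likert)
# cv is about .16, below the .30 ceiling used for Likert items,
# so the item would be read as an acceptable level of agreement.
```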
STATISTICAL RESULT AND ANALYSIS
Though the research study followed a positivist research philosophy, the individual perceptions gathered through the survey questionnaires were analyzed together with the quantitative results to answer the research question of the study.
The following section details how validity and reliability were established to measure the perceptions of the individual survey participants. The canonical correlation statistical results identifying the strength of the relationship between the independent variables, credit risk and liquidity crunch, and the dependent variable, AIRCABS, are discussed in section 4.2, followed in section 4.3 by the multinomial logistic statistical results showing the prediction of profitability and sustainability of AIRCABS by investor loan funding, and of stable deposit mobilization by the discrete market deposit interest incentive.
The statistical result of individual perception responses on credit risk and liquidity crunch survey instruments
The quality of the measuring instruments in the survey questionnaires can be ascertained by testing for validity and reliability; establishing an instrument's validity, in turn, helps to ensure its reliability. The internal consistency of the items in the measurement instruments was assessed with Cronbach's alpha (Cronbach, 1951). The alpha measures the interrelatedness of the items in a survey instrument, even though it is affected by test length and dimensionality, because it is not sufficient to measure the homogeneity or unidimensionality of the test items (Cortina, 1993). The KMO and Bartlett's test results for credit risk and liquidity crunch, on the one hand, and AIRCABS, on the other, showed a strong relationship among the items in the survey instruments. These results supported proceeding to factor analysis to establish the construct validity of the survey instruments.
Factor analysis for validity of credit risk and liquidity crunch and AIRCABS survey questionnaires
In analyzing the variance of the survey instruments, the dimensionality of the credit risk and liquidity crunch instrument was reduced and five factors were retained from the total of 16, while the dimensionality of the AIRCABS questionnaire was reduced and three factors were retained from the total of nine. The retained factors, whose eigenvalues were greater than 1, were considered for further factorial analysis, since the variance they explained exceeded the average variance in the data set. Accordingly, the factors of credit risk and liquidity crunch and of AIRCABS with eigenvalues greater than 1 accounted for 84.26% and 76.175% of the total variances, respectively, implying that the factors were reliable and well defined.
The rotated component matrix reduced the number of factors, making further analysis of the credit risk and liquidity crunch and AIRCABS scale dimensions easier. Since the KMO was significant, the data collected with the measurement instruments were factorable, and factor loadings greater than .50, approaching 1, were obtained for each survey item. As the value of a factor loading approaches 1, the variable's correlation with that factor increases; a strong correlation between a variable and a factor is created when the variable loads highly on it. This indicates that the data demonstrated factorial validity, where the different instruments measuring credit risk and liquidity crunch and AIRCABS that stand to measure a particular dimension within the domain of the survey instruments were highly correlated (Engel & Schutt, 2013). The five extracted factors for credit risk and liquidity crunch and the three factors for AIRCABS were thus found unidimensional and factorially distinct. All items used to operationalize a particular construct loaded onto a single factor, so each survey questionnaire of credit risk and liquidity crunch and of AIRCABS has a strong correlation with the loadings of its selected component.
Investor loan funding
The alpha value of investor loan funding was .616 and the alpha value for the discrete market deposit interest incentive was .701. Though the minimum requirement for the alpha value of Likert-scale items is .70 or more, alpha values of .60 or more are not uncommon in exploratory research. Salvucci, Walter, Conley, Fink, and Saba (1997) described reliability measures with alpha values between .50 and .80 as moderate, and alpha values above .80 as highly reliable. So the alpha values obtained from the Kuder-Richardson test results allowed proceeding to factor analysis to establish construct validity.
Factor analysis for validity of investor loan funding and discrete market deposit interest incentive survey questionnaires
The items in the survey questionnaires on investor loan funding and the discrete market deposit interest rate incentive were further analyzed using factor analysis to reveal the validity of the survey instruments. The investor loan funding and discrete market deposit interest incentive items strongly contributed to the principal components, which led to their being considered in the analysis.
Since the eigenvalues of the investor loan funding and discrete market deposit interest incentive questionnaire items approached 1, a high inter-correlation was found with the first factor, with lower loadings on the second factor.
Based on the significant Cronbach's alpha results supporting factor analysis, all factor loadings were found to be greater than .50, and the measurement instruments for investor loan funding and the discrete market deposit interest rate incentive were found reliable and valid. This implied that the survey instruments developed were appropriate for data collection and for the analysis of individuals' perceptions.
Canonical correlation statistical result
The relationship between credit risk and liquidity crunch, on the one hand, and an interest rate commission agent banking system, on the other, was identified using canonical correlation to answer the research question under the following hypothesis.
H0: Credit risk and liquidity crunch have no positive effect on an interest rate commission agent banking system in administering investor loan funding to entrepreneurs. To investigate the impact of credit risk and liquidity crunch (deposit run, credit crunch, liquidity risk, non-performing asset and credit risk) on AIRCABS (non-interest income, bank efficiency, return on asset, return on equity and capital adequacy), the following means and standard deviations were developed to ascertain the variables' deviation from the central tendency.
As discussed in Table 16, the relationship between the independent variables, credit risk and liquidity crunch, and the dependent variables, AIRCABS, was described by simple statistical means and standard deviations. A very high deviation of the variables from the central tendency left the variables too diverse to be correlated. Since the standard deviations of liquidity risk, non-performing loan, credit risk, bank efficiency and capital adequacy were greater than their mean values, high variability from the central tendency was seen for all variables except return on asset.
Canonical correlation is interpreted by the level of significance, the size of the canonical correlation and the magnitude of the redundancy index. To investigate further the relationship between the independent variables, credit risk and liquidity crunch, and the dependent variable, AIRCABS, a simple statistical correlation was conducted, as shown in Table 18.
Level of significance of canonical correlation
As indicated in Table 17, the t-value follows a t-distribution for testing the null hypothesis that the canonical coefficients of the independent and dependent variables are zero. Since the probability of the t statistic was greater than the alpha level (.05), the canonical correlation coefficient between the independent and dependent variables was found to be zero. This implied that no linear relationship was established between the independent variables, credit risk and liquidity crunch, and the dependent variables, AIRCABS. Accordingly, items on neither the independent nor the dependent side correlated with one another, so the null hypothesis that there is no relationship between credit risk and liquidity crunch, on the one hand, and AIRCABS, on the other, was accepted. However, additional analysis investigating the relationship between the independent and dependent variables was conducted. Overall, the statistical results showed that the test of canonical correlation between the independent variables, credit risk and liquidity crunch, and the dependent variables, AIRCABS, found no significant relationship, and the null hypothesis of the research study was accepted.
The magnitude of canonical correlation
The significant level of canonical function is based on the size of canonical correlation.
Though no accepted rules have been established for accepting or rejecting the size of a canonical correlation, the research study relied on the significance level of the multivariate test and on factor analysis.
To run the factor analysis, the sampling adequacy and model fit were tested using the Kaiser-Meyer-Olkin measure and Bartlett's test, as follows. As indicated in Table 20, the sampling adequacy is below the minimum requirement of .60, and Bartlett's chi-square test of model fit was insignificant at P=.868, above the required significance level (P>.05). The data were therefore not suited to factor analysis, because there was no relationship between the independent variables, credit risk and liquidity crunch, and the dependent variables, AIRCABS.
Redundancy measure of share variances
As indicated in Table 21 below, the shared variance accounts for 65.83% of the total shared variance between the canonical variables. However, the squared canonical correlation does not represent the variance extracted from the sets of variables, only the variance shared by the linear composites of the sets of dependent and independent variables (Alpert, Mark, & Robert, 1972). For this reason, instead of the squared canonical correlation, a redundancy index was calculated as a measure of shared variance, as proposed by Lambert and Durand (1975).
The redundancy index, which measures the amount of shared variance in the dependent variables explained by the independent canonical variate, is less than 10% of the variance in each function except the first canonical function, and is too unimpressive to interpret the corresponding canonical functions, since the overall model was insignificant.
The proportion of variance shared between the variable sets across all functions was calculated as 82.27% for the full model, which is higher than the first squared canonical correlation of 65.83% (= .8113²), even though the sum of the squared canonical correlation effect sizes is always greater than the full-model effect (Sherry & Henson, 2005). This implied that the second function was not created after the first had explained as much variability as the observed variables allowed, indicating that no relationship was found between the variate of credit risk and liquidity crunch, on the one hand, and the variate of AIRCABS, on the other.
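A minimal sketch of the Stewart-Love redundancy index described above; the canonical loadings used here are hypothetical, while .8113 is the first canonical correlation reported in the study (so its square is the 65.83% figure, up to rounding).

```python
def redundancy_index(loadings, rc):
    """Stewart-Love redundancy index: the mean squared canonical
    loading of one variable set times the squared canonical
    correlation, i.e. the share of that set's variance explained
    by the opposite canonical variate."""
    mean_sq_loading = sum(l * l for l in loadings) / len(loadings)
    return mean_sq_loading * rc ** 2

# Hypothetical loadings for a five-item variable set; .8113 is the
# reported first canonical correlation, so rc**2 is about .6582.
rd = redundancy_index([0.7, 0.6, 0.8, 0.5, 0.65], 0.8113)
```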
Because of the insignificant canonical correlation between the independent and dependent variables, the model did not fit the data, and further interpretation of the canonical root and redundancy index through factor analysis, to reveal the significant size of the original variables in the canonical correlation, was not considered.
As a result of a statistical test, the null hypothesis of the research study, credit risk and liquidity crunch have no impact on an interest rate commission agent banking system, was accepted.
Individual perception of credit risk and liquidity crunch survey questionnaires
As depicted in the table, the coefficient of variation calculated for all the survey questionnaires was very close to zero, as the individual survey participants' perceptions were very close to the central mean. Since all the survey instruments assessed by the individual participants reflected the contents of the independent and dependent variables applied in the quantitative analysis, the mix of individual survey participants' perceptions and the quantitative analysis of the financial statements showed that credit risk and liquidity crunch have no impact on an interest rate commission agent banking system.
Therefore, the null hypothesis that credit risk and liquidity crunches have no positive effect on an interest rate commission agent banking system was accepted.
Statistical result of investor loan funding and discrete market deposit interest rate incentive
Investor loan funding prediction of sustainability and profitability of AIRCABS and discrete market deposit interest incentive prediction of stable deposit mobilization
Model fitting information
The model fitting information detailed the dependent and independent variables together with their control variables to assess the final model.
To identify the relationship between sustainability and profitability of AIRCABS and investor loan funding, identifying the risk relating the predictor and predicted variables is vital (Bayaga, 2010). Analyzing the risk between the independent and dependent variables using multinomial logistic regression helped to identify the overall relationship.
Tables 22 and 23 detail the model fitting information for sustainability and profitability of AIRCABS, which was predicted by investor loan funding together with its control variables, such as bank efficiency, return on asset, return on equity and capital adequacy. The Chi-square (8.912) in Table 22 and the Chi-square (17.323) in Table 23, each the difference between the -2 Log Likelihood of the null model and of the final model, were found significant at P=.003 and P=.000, below the cut-off of P=0.05. This indicated that the models fitted the data better and more accurately than a null model. The values of AIC and BIC, information-theoretic measures of how well a model fits the data, were very close to the -2 Log Likelihood in both Table 22 and Table 23; this closeness implied the likelihood of the models being near the true expected value. So the null hypothesis, that there is no difference between the model without independent variables and the model with them, was rejected and the alternative hypotheses (H1 and H2) of the research study were accepted. The significance values reported in Table 24, and .899 and .966 in Table 25, respectively, were greater than the cut-off p-value (0.05), which made the models a good overall fit to the data, with predicted probabilities that did not deviate from the observed probabilities to the extent that a binomial distribution would predict.
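The model-fitting chi-square here is a likelihood-ratio statistic: the drop in -2 Log Likelihood from the null to the final model, referred to a chi-square distribution. The sketch below reproduces the reported 8.912 from hypothetical -2 Log Likelihood values; treating the contrast as having one degree of freedom is an assumption, since the tables' degrees of freedom are not quoted in the text.

```python
import math

def lr_test_df1(neg2ll_null, neg2ll_final):
    """Likelihood-ratio model-fitting test with one added predictor:
    the drop in -2 Log Likelihood from the intercept-only model to
    the final model, referred to a chi-square(1) distribution."""
    stat = neg2ll_null - neg2ll_final
    # chi-square(1) upper tail via the complementary error function,
    # since a chi-square(1) variate is a squared standard normal.
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical -2 Log Likelihood values whose difference reproduces
# the reported chi-square of 8.912; under df = 1 the p-value comes
# out near the reported .003.
stat, p = lr_test_df1(20.0, 11.088)
```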
Goodness-of-fit
The degrees of freedom by which the independent variables predict the dependent variables using Pearson's correlation and deviance in Tables 24 and 25, based on the amount of information in the data available to estimate the unknown population parameters, were 14 (df) and 19 (df), respectively. The insignificance of the goodness-of-fit tests implied that the independent variables, investor loan funding together with its control variables, predicted the dependent variables, sustainability and profitability of AIRCABS, as the insignificant Pearson's correlation and deviance results in Table 24 show. Likewise, the independent variable, the discrete market deposit interest rate incentive, predicted the dependent variable, stable deposit mobilization, as the insignificant Pearson's correlation and deviance results in Table 25 show.
Likelihood ratio tests
This is a test of the likelihood of the observations when the predictor variables are included in the model. Though the model fitting information in Tables 22 and 23 reports similar statistics for the null and full models, the likelihood ratio tests in Tables 26 and 27 compare components of the independent variables with the full model and show that each predicting independent variable contributed significantly to the full effect. According to the statistical results, the independent variables, investor loan funding and the discrete market deposit interest incentive, together with their control variables, contributed significantly to the effects on profitability and sustainability of AIRCABS and on stable deposit mobilization, respectively. The Chi-square (8.912) in Table 26 and the Chi-square (17.323) in Table 27 were significant at p=0.003 and p=0.000, respectively, below the cut-off of p<0.05. This indicated that the independent variables, such as investor loan funding, financial deepening, per capita income, gross domestic saving to GDP, total private investment to bank deposit and management efficiency, created a strong relationship with the dependent variables, profitability and sustainability of AIRCABS, as stated in Table 26. On the other hand, Table 27 displayed the independent variables, such as the discrete market deposit interest incentive, that created a strong relationship with the dependent variable, stable deposit mobilization.
Parameter estimates
Tables 28 and 29 show the outcomes of the multinomial logistic coefficient (B), standard error, Wald statistic, significance level, odds ratio (Exp(B)) and the confidence interval of the odds ratio.
In Table 28, the models estimated the likelihood of occurrence of sustainability and profitability of AIRCABS relative to the likelihood of their non-occurrence. Similarly, the likelihood of occurrence of stable deposit mobilization was estimated relative to the likelihood of its non-occurrence. The models thus predicted the dependent variables from the independent variables based on the magnitude of the parameter estimate, the coefficient, and the corresponding odds ratio.
As depicted in Table 28, a one-unit increment of each independent variable (investor loan funding together with its control variables such as financial deepening, per capita income, gross domestic saving to GDP, total private investment to bank deposit and management efficiency) increased the likelihood of predicting sustainability and profitability of AIRCABS by 111.242 times. Similarly, a one-unit increment of each predictor increased the likelihood of predicting stable deposit mobilization by 205.965 times. (Table note: the Chi-square statistic is the difference in -2 Log Likelihoods between the final model and a reduced model formed by omitting an effect from the final model; the null hypothesis is that all parameters of that effect are 0. The reference category is no profitability and sustainability of AIRCABS.) In Tables 28 and 29, the odds ratios (Exp(B)) associated with each predictor were greater than 1.0, which indicated that the dependent variables were strongly predicted by the independent variables. As the coefficients moved away from zero, the predictor variables had a higher influence in predicting the logit, and hence the likelihood of the outcome variables. The Wald statistical test (5.228), significant at P=.022 in Table 28, and the Wald statistic (4.217), significant at P=.04 in Table 29, showed that the models fitted the data sufficiently. This assured that the individual predictors contributed significantly to the improvement of the models and that the parameters are useful to the models (Bewick, Cheek, & Ball, 2005; El-Habil, 2012).
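The Wald statistic and odds ratio reported in the parameter-estimates tables relate to a coefficient and its standard error as (B/SE)² and Exp(B). The sketch below uses a hypothetical B and SE chosen so that the Wald statistic matches the reported 5.228; the paper does not give those two underlying values directly.

```python
import math

def wald_test(b, se):
    """Wald statistic (B/SE)^2 for one logit coefficient, its
    chi-square(1) p-value and the odds ratio Exp(B)."""
    wald = (b / se) ** 2
    p = math.erfc(math.sqrt(wald / 2.0))   # chi-square(1) upper tail
    return wald, p, math.exp(b)

# Hypothetical B and SE chosen so the Wald statistic matches the
# reported 5.228; the p-value lands near the reported P=.022.
wald, p, odds = wald_test(1.1433, 0.5)
```

An odds ratio above 1 corresponds to a positive coefficient, which is why the tables read every Exp(B) greater than 1.0 as the predictor raising the likelihood of the outcome.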
The confidence interval (95%), the interval in which the true effect lies, was at a very high level, and since it did not include the null value (1) and was greater than 1, it was found significant. This implied that, when exposed to the predictors, the likelihood of having sustainability and profitability of AIRCABS in Table 28 and a stable deposit in Table 29 increased more than when not so exposed. The odds ratios were far greater than 1, which in turn indicated that the likelihood with which the predictors predicted the dependent variables was very high, and the combination of independent variables selected to predict the dependent variables was efficient. As evidenced in Tables 22 to 29, the independent variables created a strong relationship with the dependent variables, and the models predicted 87.5% and 90.5% of cases correctly, as indicated in Tables 30 and 31. So, disregarding the abnormally wide confidence intervals, the independent variables displayed in Tables 28 and 29 efficiently predicted the respective dependent variables.
Classification table
Tables 30 and 31 show how well the full models correctly predicted the observed yes/no outcomes for profitability and sustainability of AIRCABS and for a stable deposit in a bank, respectively. The overall accuracy of the models was 87.5% and 90.5%, as stated below, for profitability and sustainability of AIRCABS and for stable deposit, respectively. The alternative hypothesis that investor loan funding has a positive impact on the profitability and sustainability of AIRCABS was therefore accepted by rejecting the null hypothesis that there is no difference between a model with and without the independent predicting variables.
Similarly, the result of the multinomial regression predicting stable deposit using the predictor variables, the discrete market deposit interest incentive together with its control variables (the deposit interest incentive rate, average deposit interest rate, special deposit ratio, efficiency of deposit utilization ratio and deposit interest payment capacity), was significant. So the alternative hypothesis, that the discrete market deposit interest incentive has a positive impact on stable deposit mobilization where a bank finances an entrepreneur and later shifts to an agent position when a depositor chooses to become an investor, was accepted by rejecting the null hypothesis that there is no difference between models with and without the independent predicting variables.
Individual perception on investor loan funding and discrete market deposit interest incentive
In the quantitative measurement, the coefficient of variation is the standard error relative to its parameter estimate. In Tables 28 and 29, the coefficient of variation of the investor loan funding predictor was calculated as .44 at a significance level of P=.022, whereas the coefficient of variation for the discrete market deposit interest rate incentive was calculated as .49 at a significance level of P=.040, against the study's standard significance level of P<.05. These coefficients of variation helped to assimilate the perceptions of the individual survey participants with the real practice depicted in the financial statements.
To investigate the degree of agreement and disagreement, the cut-off point for the coefficient of variation (CV) used to interpret the survey instruments on investor loan funding (ILF) and the discrete market interest incentive was set below .50; above this ceiling, data were interpreted with caution. Accordingly, all survey participants strongly agreed with the survey questions except questions ILF-Q3 (CV .55) and ILF-Q4 (CV .85), which were somewhat far from the central tendency.
Generally, among all survey respondents, 79% of individual participants of investor loan funding survey questionnaires agreed, whereas 88% of individual participants of discrete market deposit interest incentive questionnaires agreed with majority of questions in survey instruments.
In the quantitative analysis, the significant results showed that investor loan funding predicted and created a strong relationship with sustainability and profitability of AIRCABS, while the discrete market deposit interest incentive predicted and had a strong relationship with stable deposit mobilization. The coefficients of variation calculated from the quantitative data showed that investor loan funding was a true predictor of sustainability and profitability of AIRCABS, and the discrete market deposit interest rate incentive a true predictor of the stable deposit variables. The coefficients of variation calculated from the perceptions of the individual survey participants were almost all below those calculated in the quantitative measurement, implying that the individuals' perceptions of the survey questions and the results of the quantitative measurement were concurrent. Therefore, the alternative hypotheses (H1 and H2) were accepted.
CONCLUSION
In conventional banking, banks either retain credit and liquidity risk or transfer it to other financial institutions, which later has the same impact on the overall industry. Since a bank sells customer deposits, treating them as its own asset on its balance sheet, banks are in most instances exposed to toxic assets, non-performing assets or contagion, and to liquidity shortages. Once a bank is exposed to credit risk, it is indirectly affected by a hidden financial cost: the bank keeps paying interest to depositors on uncollectible loans already disbursed from the depositors' accounts.
To increase the sustainability, profitability and stable funding of AIRCABS by transferring credit risk and liquidity crunch to investors and entrepreneurs, an interest rate commission agent banking system was developed. This is done by empowering money depositors to exercise their full right over the use of their money and obtain a reasonable credit price, rather than being offered an unreasonable deposit interest rate that forces them into the informal market.
An interest rate commission agent banking system (AIRCABS) transfers risk to investors and entrepreneurs through its lending strategies: 360 degrees, 180 degrees and 90 degrees (Tessema & Kruger, 2016).
The reliability and viability of the interest rate commission agent banking system were investigated based on the significance test results for individual survey participants' perceptions and financial data.
Before analyzing individual perceptions, the validity and reliability of the survey questionnaires were tested using Cronbach's alpha, Kuder-Richardson, descriptive statistics and factor analysis, and significant results were found. The individual survey participants' perceptions supported the empirical analysis result, which was based on the financial statements of all commercial banks in Ethiopia.
Since an interest rate commission agent banking system administers investor loan funding to entrepreneurs by transferring credit and liquidity risks to investors and entrepreneurs, credit risk and liquidity crunch had no effect on the sustainability and profitability of AIRCABS. This idea was supported by testing the first hypothesis (H0) on the independent and dependent variables using canonical correlation. Accordingly, the impact of credit risk and liquidity crunch (deposit run, credit crunch, liquidity risk, non-performing assets and credit risk) on AIRCABS (non-interest income, bank efficiency, return on assets, return on equity and capital adequacy) was investigated, and no relationship was found.
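The canonical correlation test used for H0 can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' analysis code: canonical correlations between two variable sets are the singular values of the whitened cross-covariance matrix, here computed with NumPy.

```python
import numpy as np

def canonical_correlations(X, Y, ridge=1e-9):
    """Canonical correlations between variable sets X (n x p) and Y (n x q):
    the singular values of Sxx^{-1/2} Sxy Syy^{-1/2}."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Sample covariance blocks, with a small ridge for numerical stability
    Sxx = Xc.T @ Xc / (n - 1) + ridge * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + ridge * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        # Inverse matrix square root via the eigendecomposition
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)
```

A canonical correlation near zero for every pair of canonical variates corresponds to the "no relationship" finding reported above.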
The main activity of traditional banking is maximizing the net interest margin, the income from buying and selling funds while bearing credit risk and liquidity crunch. The main target of AIRCABS, however, is to maximize the agreed interest rate commission from administering investor loan funding and the project selection fee. Since AIRCABS does not hold customer funds as assets on its balance sheet, no financial expense is conceivable. AIRCABS thus enables the agent bank to collect a loan interest rate commission and fee by transferring liquidity and credit risk to investors and entrepreneurs. This notion was supported by testing the research hypothesis (H1) to investigate the relationship between the independent variable, investor loan funding, with its control variables (financial deepening, per capita income, gross domestic saving to GDP, total private investment to bank deposit, and management efficiency), and the dependent variables, profitability and sustainability of AIRCABS, using multinomial logistic regression. The statistical result showed that investor loan funding, together with its control variables, predicted the sustainability and profitability of AIRCABS. This implies that as the agent bank's efficiency in administering investor loans increases, the agent bank's sustainability and profitability increase. As a result, investment in innovative entrepreneurs' projects increases, which in turn increases import-substitution products and the country's GDP in general.
As competition among banks in service excellence increases, the likelihood of deposit stability at the origin depository bank decreases. Customers' money deposits are the lifeblood of traditional banks, maintaining their sustainability and profitability in the market in particular and financial stability in general. In most instances, retail deposits, made by the broader society, are more stable than wholesale deposits, made by a few, in connection with the benefit of the deposit interest rate. As the interest rate increases, the interest of small money depositors increases and thereby a stable deposit base is established. Applying a discrete market deposit interest rate incentive on the marginal increment of money deposited therefore enables the bank to have more deposit clientele, which in turn enhances the stability of deposits. This notion was supported by testing the research hypothesis (H2) by investigating the relationship between the independent and dependent variables using multinomial logistic regression. The statistical result showed that there was a strong relationship between the independent variable, discrete market deposit interest incentive, with its control variables (special deposit ratio, average deposit interest rate, deposit interest incentive rate, efficiency of deposit utilization ratio and deposit interest payment capacity), and the dependent variable, stable deposit. This implies that a fraction of a unit increment in the discrete market deposit interest rate incentive enables the bank to widen its deposit margin from time to time, which in turn significantly increases the stability of deposits.
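Both hypotheses above were tested with multinomial logistic regression. As a rough sketch of the mechanics (not the authors' implementation), a fitted multinomial logit turns one linear predictor per outcome category into class probabilities via the softmax function; all parameter values below are hypothetical:

```python
import math

def softmax(zs):
    # Numerically stable softmax over the linear predictors
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def predict_probs(x, coefs, intercepts):
    """Class probabilities from a multinomial logit.

    coefs[k] and intercepts[k] are hypothetical parameters for
    outcome category k; in the usual parameterization one
    reference category has all parameters fixed at zero."""
    zs = [b0 + sum(b * xi for b, xi in zip(bs, x))
          for bs, b0 in zip(coefs, intercepts)]
    return softmax(zs)
```

A significant positive coefficient on, say, the deposit interest incentive would shift probability mass toward the "stable deposit" outcome category.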
In general, the interest rate commission agent banking system was tested using different statistical tools to assess the internal and external resistance of the model at a buffer stage where market and economic shocks were exhibited. Since the statistical test results were significant for the cause-and-effect relationships examined in the hypotheses, it is concluded that the interest rate commission agent banking model is both viable and reliable.

6. A bank can transfer credit risk using AIRCABS to the fund holder and investor to increase its profitability and sustainability
7. AIRCABS enables the fund owner to search for potential borrowers with or without collateral in the market to provide a credit facility using the bank as an agent
8. The right of the investor and depositors to get their fund return will be safely kept by the bank using AIRCABS
9. Under AIRCABS the bank's profit will be simply maximized without financial expense

APPENDIX 2
Deposit run ratio: ΔD / TD (change in deposit over total deposit)

This ratio measures the deposit run on a bank as the percentage change in deposit relative to total deposit. As the percentage change in deposit declines, there is a run on the bank by depositors.
Credit crunch ratio: ΔL / TL (change in loans and advances over total outstanding loans)

This ratio measures the decline in the supply of loans at the macro level as the percentage change in loans and advances relative to total outstanding loans.
This ratio measures the liquidity risk of the bank as the percentage of liquid assets to the sum of customer deposits and the bank's short-term borrowing.
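The liquidity-crunch ratios defined above reduce to simple quotients. The helper functions below are an illustrative sketch; the names are ours, not the paper's:

```python
def deposit_run_ratio(change_in_deposit, total_deposit):
    # Percentage change in deposit relative to total deposit;
    # a falling (negative) value signals a run on the bank
    return change_in_deposit / total_deposit

def credit_crunch_ratio(change_in_loans, total_outstanding_loans):
    # Decline in loan supply relative to total outstanding loans
    return change_in_loans / total_outstanding_loans

def liquidity_risk_ratio(liquid_assets, customer_deposits, short_term_borrowing):
    # Liquid assets over total short-term funding
    return liquid_assets / (customer_deposits + short_term_borrowing)
```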
Indicator Interpretation
Non-performing asset ratio
This ratio measures the level of non-performing asset to total loan portfolio.
This is a measure of loan loss provision to total loans. As the ratio increases, the bank is exposed to credit risk.
Commodity price shock ratio: Δp / p₀

This is a measure of the change in the current price as a percentage of the last price.
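The credit-risk indicators above likewise reduce to straightforward ratios. A hedged sketch with hypothetical function names:

```python
def npa_ratio(non_performing_assets, total_loans):
    # Share of the loan portfolio that is non-performing
    return non_performing_assets / total_loans

def loan_loss_provision_ratio(provision, total_loans):
    # Rising provision signals growing credit-risk exposure
    return provision / total_loans

def commodity_price_shock(current_price, last_price):
    # Change in the current price as a share of the last price
    return (current_price - last_price) / last_price
```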
Indicator Interpretation
Financial deepening
This ratio is measured as broad money (M2) relative to GDP. It indicates the increased provision of financial services as a result of more liquid money being available in the economy.
This ratio is measured as Gross National Income (GNI) per total population.
This ratio is measured as total domestic saving relative to GDP.
Gross private domestic investment to total bank deposit ratio
This measures total gross private domestic investment in domestic production, using private business capital, relative to total domestic saving raised by banks.
This measures total non-interest expense as a percentage of total non-interest income.
Bank's efficiency ratio (EFR)
This ratio measures total non-interest income as a percentage of non-interest expense.
Return on asset (ROA)
This ratio measures the gross interest rate commission as a percentage of total fixed assets.
Return on equity (ROE)
This ratio measures net income, excluding interest expense, as a percentage of the equity of the bank.
Capital adequacy ratio (CA)
This ratio measures the bank's capital as a percentage of administrative expenses.
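The AIRCABS performance indicators listed above can likewise be expressed as one-line ratios, following the definitions in the text. This is an illustrative sketch; the argument names are ours:

```python
def bank_efficiency_ratio(non_interest_income, non_interest_expense):
    # Non-interest income as a proportion of non-interest expense
    return non_interest_income / non_interest_expense

def return_on_asset(gross_commission_income, total_fixed_assets):
    # Gross interest rate commission over total fixed assets
    return gross_commission_income / total_fixed_assets

def return_on_equity(net_income_excl_interest, equity):
    # Net income (excluding interest expense) over bank equity
    return net_income_excl_interest / equity

def capital_adequacy_ratio(capital, administrative_expenses):
    # Bank capital as a proportion of administrative expenses
    return capital / administrative_expenses
```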
Banks and Bank Systems, Volume 12, Issue 3, 2017

…construct validity of survey questionnaires.
Because of this fact, financial crises emanating from credit risk and liquidity crunch, resulting in bank failures, have not yet been solved (Moise & Ilie, 2012; Adrian, 2015; Memmel, Sachs, & Stein, 2012). To solve these problems, an interest rate commission agent banking business model that transfers credit and liquidity risks to investors and entrepreneurs, thereby increasing the agent bank's sustainability, profitability and stable deposit, had not yet been empirically tested to explore the model's viability and reliability (Tessema & Kruger, 2016).
Table 10 .
Descriptive statistics for credit risk and liquidity crunch
Table 11 .
Descriptive statistics for AIRCABS
Table 13 .
Model summary rotation. Note: a – rotation method: varimax with Kaiser normalization; b – total Cronbach's alpha is based on the total eigenvalue.
Table 14 .
Variance accounted for
Table 15 .
Variance accounted for
Table 16 .
Descriptive statistics for credit risk and liquidity crunch and AIRCABS
Table 17 .
Linear combination for canonical correlation
Table 18 .
Multivariate tests of significance
Table 19 .
Dimension reduction analysis
Table 21 .
Redundancy index and effect of shared variance Note: *(SV)( R c 2
In Table 16, the dispersion of the variables of credit risk and liquidity crunch and AIRCABS relative to the mean showed high variability. The variables' proportion of standard deviation to the mean, which should have been less than 1, revealed no relationship.
Table 22 .
Model fitting information of profitability and sustainability of AIRCABS
Tables 24 and 25 reported further evidence of a statistically insignificant level for the Pearson's correlation and deviance goodness-of-fit models. The Pearson's correlation and deviance for sustainability and profitability of AIRCABS and stable deposit were the difference between the current model and the full model, whose null values were .376 and .525 (in Table
Table 23 .
Model fitting information of stable deposit
Table 26 .
Likelihood ratio tests of profitability and sustainability of AIRCABS. Note: the Chi-square statistic is the difference in −2 Log Likelihoods between the final model and a reduced model. The reduced model is formed by omitting an effect from the final model. The null hypothesis is that all parameters of that effect are 0.
Table 27 .
Likelihood ratio tests of stable deposit
Table 28 .
Parameter estimates of profitability and sustainability of AIRCABS
Table 29 .
Parameter estimates of stable deposit
Table 31 .
Percentage classification of stable deposit
Table 32 .
Case processing summary. Note: a. The dependent variable has only one value observed in 16 (100.0%) subpopulations.
Table 33 .
Case processing summary
Table 1 .
Indicators of credit risk and liquidity crunch measures

6. High illiquid assets that are unaccepted for common valuation in the market are a source of liquidity risk
7. Instability of depositors leads the bank to liquidity risk
8. Diversifying a loan funded by the bank out of its intended purpose leads the borrower to default
9. Funding a loan from the bank to an entrepreneur as the bank's own asset increases the bank's credit risk
10. Credit operation weakness of the borrower leads the loan to default
11. A loan sanctioned by corruption leads the borrower to default
12. Lack of good credit assessment and follow-up by the bank leads to an increase in non-performing assets
13. Borrowers default for lack of management support from credit institutions
14. Buying and selling of money exposes the bank to credit risk
15. A decline in commodity prices for exporters who used a bank loan facility can result in higher non-performing loans (NPLs)
16. As capital adequacy increases, the credit risk of the bank decreases
Table 2 .
Indicators of investor loan funding measures

4. Funding a loan by an investor to an entrepreneur through an interest rate commission agent banking system eliminates the bank's exposure to credit risk and liquidity crunch
5. As the supply of loan funding by investors to entrepreneurs increases through an interest rate commission agent banking system, investment in the country is enhanced, thereby increasing the country's GDP
6. Benefiting credit price to investor loan funding enhances the agent bank's interest rate commission
Table 3 .
Indicators of discrete market deposit interest incentive measures

2. Applying a discrete market interest rate incentive for those deposit volumes increases the demand of depositors to keep their deposits stable
3. Applying various levels of deposit interest rate incentives for depositors enables the bank to get a more stable deposit
4. Allowing depositors to participate in the bank's investment by paying a proportionate credit price for their partial or full fund enables the bank to have a more stable fund
5. Interest incentives on deposits, in terms of incentives in kind, enable the bank to hold more clientele
Table 4 .
Indicators of AIRCABS measures

2. As the deposit and credit interest rates approach the equilibrium point, the bank shall work as an interest rate commission agent for investor loan funding to entrepreneurs to enhance its sustainability in the market
3. Providing an alternative investment opportunity to fund providers through AIRCABS enhances stable funds in the bank
4. Providing a high deposit interest rate and credit price through AIRCABS enables the bank to attract funds from the unbanked and banked society
Table 5 .
Measures of liquidity crunch ratio
Table 8 .
Measures of discrete market deposit interest incentive ratios

Average deposit rate (AVDR): measures the commercial bank's average deposit interest rate. As the deposit rate increases, the bank's deposit mobilization increases.
Special deposit ratio (SPDR): measures money deposited in the bank for a specific purpose of customer benefit, which will not be withdrawn at any time by the customer, as a ratio of total deposit.
Deposit interest incentive rate (DIIR): measures the change in growth of the deposit interest rate as a percentage of the total deposit interest rate.
Efficiency of deposit utilization ratio (EDUR): measures total interest expense as a percentage of the total loan interest rate.
Deposit interest incentive payment capacity ratio (DIPC): measures deposit interest expense as a percentage of total capital.
Table 9 .
Measure of AIRCABS ratio
Fear, distress, and perceived risk shape stigma toward Ebola survivors: a prospective longitudinal study
Background During the 2014–15 Ebola Virus Disease (EVD) epidemic, thousands of people in Sierra Leone were infected with the devastating virus and survived. Years after the epidemic was declared over, stigma toward EVD survivors and others affected by the virus is still a major concern, but little is known about the factors that influence stigma toward survivors. This study examines how key personal and ecological factors predicted EVD-related stigma at the height of the 2014–2015 epidemic in Sierra Leone, and the personal and ecological factors that shaped changes in stigma over time. Methods Using three waves of survey data from a representative sample in the Western Urban and Western Rural districts of Sierra Leone, this study examines factors associated with self-reported personal stigma toward Ebola survivors (11 items, α = 0.77) among 1008 adults (74.6% retention rate) from 63 census enumeration areas of the Western Rural and Western Urban districts of Sierra Leone. Participants were randomly sampled at the height of the EVD epidemic and followed up as the epidemic was waning and once the epidemic had been declared over by the WHO. Three-level mixed effects models were fit using Stata 16 SE to examine cross-sectional associations as well as predictors of longitudinal changes in stigma toward EVD survivors. Results At the height of the EVD epidemic, female sex, household wealth, post-traumatic stress, EVD-related fear and perceived infection risk are a few of the factors which predicted higher levels of stigma toward survivors. On average, stigma toward EVD survivors decreased significantly as the epidemic declined in Sierra Leone, but female sex, EVD fear, and risk perceptions predicted a slower rate of change. Conclusion This study identified key individual and psychosocial characteristics which may predict higher levels of stigma toward infectious disease survivors. 
Future studies should pursue a better understanding of how personal characteristics and perceptions, including psychosocial distress, fear, and perceived infection risk serve as pathways for stigma in communities affected by infectious disease. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-12146-0.
The 2014-2015 Ebola virus epidemic in Sierra Leone resulted in thousands of people becoming infected with the virus and left tens of thousands of families broadly affected. In Sierra Leone, Ebola virus disease (EVD) affected an estimated 14,122 confirmed, probable and suspected people, among whom 3955 died; however, thousands more survived after a confirmed EVD diagnosis [1]. Long before the EVD outbreak, Sierra Leone's 11-year civil war (1991-2002) resulted in the deaths of over 50,000 people, devastating the country's health system and fracturing its social fabric. As a result, Sierra Leone's health system was extremely vulnerable to the EVD outbreak. Resulting from the failure to strengthen and sustain health and social systems after the war, many survivors of EVD are faced with healthcare neglect, discrimination, and social exclusion [2,3].
To date, little research has been done to examine the nature, predictors, and effects of stigma toward survivors of infectious disease outbreaks like the 2014-15 EVD epidemic in West Africa. Although some studies, such as James et al. (2019) and (2020), have investigated the experiences of EVD survivors themselves, a gap in the literature persists regarding the key factors that predict stigma and discrimination against people affected by EVD and how, if at all, stigma is affected by trends in infection rates. Despite the persistent gap in EVD-related evidence, research on stigma among individuals affected by HIV/AIDS is informative in the study of stigma toward survivors of EVD. For example, people living with HIV/AIDS who experience stigma and discrimination report high rates of comorbid mental health problems and poor adherence to treatment, among other harmful effects [4].
Research on HIV/AIDS also underscores factors which may predict stigma toward those affected by EVD and other infectious diseases like COVID-19. For example, evidence from HIV/AIDS research suggests that the perceived contagiousness of a disease and the degree to which physical problems are manifest appear to influence stigma and discrimination against people with the disease [5]. Similarly, research in the context of HIV/AIDS points to several personal and ecological factors thought to contribute to infectious disease-related stigma. Lessons from HIV/AIDS research offer a strong foundation for stigma research and planning in response to the 2014-15 EVD outbreak in West Africa and the 2019-2020 EVD outbreaks in the Democratic Republic of the Congo and Guinea. The recent (2021) EVD outbreak in Guinea is especially concerning, as it underscores the real possibility that new EVD outbreaks may emerge in Sierra Leone and elsewhere. Overall, more evidence is needed about key predictors of stigma and discrimination, as this information is important in the design of evidence-based approaches to prevent stigma toward survivors of other infectious diseases, including the ongoing global COVID-19 pandemic.
This study uses a prospective longitudinal design to examine predictors and trends of stigma toward EVD survivors, cross-sectional associations, and changes in stigma from the height of the 2014-2015 EVD outbreak in Sierra Leone until after it was declared over. First, we explore which individual, social, ecological factors, if any, are associated with stigma toward EVD survivors in their community at the height of the outbreak; second, we investigate how, if at all, stigma toward EVD survivors in Sierra Leone changed as the epidemic declined; and third, we examine how, if at all, individual, social, and ecological factors predicted changes in stigma toward adult EVD survivors from the height of the 2014-2015 until it was declared over.
Methods
Surveys were conducted in the Western Rural and Western Urban districts of Sierra Leone (including the capital, Freetown) at three time points: January to April 2015, March to September 2015, and February to April 2016 ( Fig. 1). During the 2014-15 outbreak, this region of Sierra Leone was among the worst affected by EVD, where over 40% of confirmed cases occurred. Census enumeration areas (EAs) were the primary sampling units. To facilitate sampling, Statistics Sierra Leone provided a list of EAs for the two districts and maps defining the EA boundaries as well as a map with specified number of streets (2-5) among each of the EAs following procedures for the Sierra Leone 2004 Population and Housing Census [6].
Ethical review and approval for this study were obtained from the Sierra Leone Ministry of Health and Sanitation Ethics and Scientific Review Committee as well as the Institutional Review Board of the Harvard T. H. Chan School of Public Health (Protocol #15145, Approval #17). All participants in the survey were over the age of 18 and enrolled following informed consent delivered by a team of 16 trained local staff. All participants gave informed consent orally due to low levels of literacy in the sample. All research methods and protocols were reviewed and approved by a local community advisory board comprising community members and health care professionals.
Sample
Participants in the study were over 18 years of age and sampled using multistage cluster sampling in the Western Urban and Western Rural districts of Sierra Leone (Fig. 2). Of the 9671 enumeration areas (EAs) in the Western Area Urban and Western Area Rural districts of Sierra Leone, 63 EAs were randomly sampled. Within each EA, 16 households were selected using random geographic sampling techniques. Interviewers then selected a proportional sample of equally distanced households ("households" were defined as persons residing together). Households were approached and one adult among those available on first contact was chosen at random from the household by alphabetizing first names in ascending order and choosing the first one. When a randomly selected individual was unable or refused to participate, another individual from the same household was selected using the randomization procedure. Over the course of 1 day, if no member was available after three attempts, another household was selected using the same geographic randomization techniques. Similarly, if a participant who was interviewed at the first wave of data collection could not be located at the second or third wave following three attempts, another eligible adult in the household was interviewed.
Procedures
A team of 16 local research assistants (eight women and eight men) were trained in survey administration using Android tablets running Kobo Toolbox digital data collection software [7]. All research assistants received 3 days of training in survey procedures and research ethics and interviewers were assigned to interview participants of the same sex. Despite the challenging work conditions during the ongoing EVD epidemic, the study team and Community Advisory Board (CAB) determined that one-on-one interviews could be performed safely in an outdoor location close to participants' homes that provided privacy and confidentiality. Oftentimes, this private location was a compound away from family members and neighbors, or a nearby sitting area. To ensure the safety of respondents and enumerators, precautions were taken, such as no physical contact and sitting at a safe social distance to complete interviews. Participants were given $2 worth of food for their participation, as well as contact information in case they had any questions after participating in the study. In the event of a risk of harm situation, outreach was conducted with local medical and social service providers. Two such cases, related to suicidal ideation, were identified and referred to local mental health and social services for follow-up care.
Measurement
All measures that were new to this setting were reviewed by a local community advisory board (CAB) and also by local collaborators for face validity. Scales were examined for local comprehension and forward-and backward-translated following a standard protocol [8][9][10]. Stigma toward EVD survivors (Cronbach's α = 0.77) was measured using a Krio (the lingua franca of Sierra Leone) translation of an eleven-item adaptation of the HIV-related stigma scale, with items scored as a 4-point Likert scale. Items on the scale assessed whether a respondent agreed (1 = strongly disagree, 2 = somewhat disagree, 3 = somewhat agree, 4 = strongly agree) with statements designed to assess stigma and discriminatory attitudes toward individuals who were given a certificate to confirm their EVD-free status by the government health authority (Table S1). To assess personal EVD exposure, participants were asked whether a member of their household, a family member whom they did not live with, a friend, a neighbor, or someone in their community had been diagnosed with EVD in the past 12 months (0 = no, 1 = yes). To examine the effect of antistigma messaging, one item asked respondents whether they had seen or heard any messages, in the past month, condemning discrimination and stigmatization of Ebola survivors (0 = no, 1 = yes).
Scales previously validated for use in Sierra Leone, Liberia, and other conflict-affected countries in sub-Saharan Africa were used to assess mental health [11][12][13][14][15][16]. Depression and anxiety symptoms were assessed using a Krio adaptation of the Hopkins Symptom Checklist-25 (HSCL-25), scored on past-week symptom intensity (1 = not at all, 2 = a little, 3 = quite a bit, 4 = extremely), previously adapted for Sierra Leone [11][12][13]. Both the depression (Cronbach's α = 0.91) and anxiety (Cronbach's α = 0.93) subscales had excellent internal consistency. PTSD symptoms were assessed using a 16-item scale (0 = no, 1 = yes), the PTSD Symptom Scale-Interview, an adaptation of the Civilian PTSD checklist used in Liberia (Cronbach's α = 0.93) [14]. Perceived risk of EVD infection was measured using three items that assessed whether participants were concerned that they themselves, someone in their family, and someone in their community would get sick with Ebola in the following month (Cronbach's α = 0.96).
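The internal-consistency figures reported above follow the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σσᵢ²/σₜ²), where σᵢ² are the item variances and σₜ² the variance of the total score. A minimal sketch, not the study's analysis code:

```python
def cronbach_alpha(items):
    """items: one list of scores per scale item, all equal length."""
    k = len(items)
    n = len(items[0])

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score for each respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(pvar(it) for it in items) / pvar(totals))
```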
Ebola-related fear was measured using ten Likert-style questions about whether certain daily activities made respondents fearful in the context of the ongoing EVD outbreak (α = 0.76). To capture health care-seeking behaviors and contact with formal and informal health services, respondents were asked how many times in the previous 12 months they had sought treatment for health concerns from traditional healers and at a hospital or clinic, respectively. Last, to examine participants' EVD-related social ecology, two scales were used to assess respondent perceptions of leadership efficacy and community resilience. In scoring, the means of all scales were used to facilitate interpretation in terms of their original response scales. Limitations of the study design include the potential for bias arising from the repeated survey design, including respondent bias and social desirability bias.
Analysis
We used Stata 16 SE to conduct descriptive and inferential statistical analyses [17]. To better understand predictors of stigma toward EVD survivors, we estimated three-level hierarchical linear models (also called "multilevel" or "mixed" effects models) to accommodate the clustering of participant data within EAs and over time and to reduce consequent biases in the estimation of standard errors. For model interpretation, we used robust standard errors. Because the study used an equal allocation sampling design with 16 households per EA, we applied sampling weights based on the population of each EA to allow for generalization to the Western Urban and Western Rural districts of Sierra Leone where EAs were sampled from. Upon review of the data, we determined that households in two EAs were undersampled due to a clerical error and up-weighted the remaining households so that those EAs would be properly represented in the sample. We placed misidentified households in their proper EAs, resulting in oversampling of those EAs. In analysis, households in these EAs were down-weighted to prevent overrepresentation in the weighted sample.
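The EA-based sampling weights described above can be sketched as follows: each sampled household in an EA receives a design weight proportional to the EA's population divided by the number of households sampled there, so that larger EAs are up-weighted. This is an illustrative reconstruction with made-up numbers, not the study's weighting code:

```python
def normalized_weights(ea_populations, n_sampled):
    """One design weight per sampled household, scaled to mean 1.

    ea_populations[j] is the (hypothetical) population of EA j and
    n_sampled[j] the number of households interviewed there."""
    raw = []
    for pop, n in zip(ea_populations, n_sampled):
        raw.extend([pop / n] * n)  # base weight: population / n per household
    mean = sum(raw) / len(raw)
    return [w / mean for w in raw]
```

Under- or over-sampled EAs (as with the clerical error noted above) can be corrected by adjusting `n_sampled` so each EA is properly represented.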
To study basic trends in stigmatizing attitudes and behaviors, we observed longitudinal changes in participants' stigma toward EVD survivors, from the height of the outbreak until it was declared over. We fit mixed effects models in Stata 16 SE to examine predictors of stigma and longitudinal growth. To retain and use all available data and avoid possible bias associated with listwise missing value deletion, we created 20 multiply imputed datasets for the analyses, using all analysis variables plus additional demographic measures (participant age, sex, marital status, education, and household wealth, based on household land and asset ownership). The number of missing values for a given variable can be determined by comparing the n for a given variable in Tables 1 and 2 (e.g., 979 for depression symptoms score) to 1008, the total number of participants interviewed.
Hierarchical modeling approach
Our statistical modeling approach was hierarchical such that we added predictors to the unconditional growth model in 7 groups, retaining all predictors notwithstanding statistical significance in previous models. We compared each subsequent model to the null growth model as well as the preceding model to examine the proportion of variance in stigma explained by the added predictors. We specified all models with unstructured covariance to prevent Stata from setting the covariance and corresponding correlations to zero by default. We estimated deviance values to determine the fit of each model. Stata's mixed program uses a Bayesian approach to estimate the hierarchical linear model and calculates the mean of the a posteriori distribution of the random effect with robust standard errors. With respect to the model properties, Rabe-Hesketh and Skrondal (2012) explain that Stata mixed model results are conditionally biased, such that for any individual cluster the estimation is biased; however, the advantage of Stata's estimation method is that the conditional bias is countered by a lower mean-squared error for the entire population [18].
Multilevel data structure
We estimated two intraclass correlations (ICC) for the three-level mixed effects model to assess the similarity of stigma scores between respondents from the same enumeration area and within individuals over time. The ICC at level-3 (the correlation of responses from individuals in the same EA) was equal to 0.11. The low level-3 ICC suggests that only a small amount of the variance in stigma responses is attributed to differences between individuals from the same enumeration area. The level-2 ICC also indicates a modest within-person correlation of 0.18, indicating that there was not a high correlation between stigma scores from the same respondent over time.
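The two intraclass correlations follow directly from the variance components of the three-level model: the level-3 ICC is the EA-level share of total variance, and the level-2 ICC adds the person-level share. A minimal sketch with illustrative variance components (not the study's estimates):

```python
def three_level_iccs(var_ea, var_person, var_occasion):
    """ICCs from the variance components of a three-level model."""
    total = var_ea + var_person + var_occasion
    icc_level3 = var_ea / total                 # same-EA correlation
    icc_level2 = (var_ea + var_person) / total  # within-person correlation
    return icc_level3, icc_level2
```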
Longitudinal, hierarchical linear models predicting stigma toward EVD survivors
In the null model, the fixed effect for initial stigma status is the mean of the first plausible value for stigma in the sample, not accounting for between-enumeration-area or between-person differences or within-person variation over time. The first plausible stigma score in the null model is estimated as 0.586. In the unconditional growth model, the grand mean for stigma in the sample accounts for random between-enumeration-area, between-person, and within-person variation. Accounting for this variance, the mean of stigma in the unconditional growth model was estimated as 0.582, slightly lower than the mean in the same model without random effects. Controlling for all personal, social, and ecological factors, the estimated mean stigma score among respondents at the height of the outbreak was equal to 0.44 (95% CI: 0.25, 0.64).
Based on the analysis of residuals comparing the full model with all variables and the null unconditional growth model, sociodemographic characteristics, EVD exposures, mental health, fear, and perceived risk of EVD infection explain 57% of the variance in stigma at the height of the epidemic. Sociodemographic characteristics (i.e., sex, age, education, marital status, and household wealth) explained 7% of the variance in stigma at the height of the outbreak. EVD-related exposures (i.e., someone in the community, neighborhood, friend, non-household family, and household family) explained only 4% of the variance in stigma. Mental health symptoms, and fear and risk perceptions, explained 16% and 31% of the residual variance in stigma, respectively, at the height of the outbreak. Neither health care-seeking behaviors nor ecological trust contributed any additional information, with 0.0% variance explained by the variables in both models. Based on the sample-size-adjusted BIC, the full model retaining all individual and ecological predictors fits the data better (BIC = 944.85) than the unconditional growth model (BIC = 2969.98).
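The "proportion of variance explained" figures come from comparing residual variance across nested models against the unconditional growth model. A hedged sketch with hypothetical residual variances chosen only to match the reported 57%:

```python
def prop_variance_explained(resid_var_reference, resid_var_model):
    """Proportional reduction in residual variance (a pseudo-R^2)
    of a conditional model relative to a reference model."""
    return (resid_var_reference - resid_var_model) / resid_var_reference

# Hypothetical residual variances (not the study's actual estimates):
# unconditional growth model -> 1.00, full model -> 0.43
explained = prop_variance_explained(1.00, 0.43)  # 0.57, i.e. 57%
```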
Predictors of stigma toward EVD survivors at the height of the 2014-15 epidemic
On average, several individual and ecological characteristics predicted significantly higher levels of stigma toward EVD survivors at the height of the outbreak (Table 3). Female respondents reported significantly higher levels of stigma compared to male respondents (Cohen's d = 0.818; t = 7.04, p = 0.000) and respondents from wealthier households also reported higher levels of stigma toward EVD survivors (Cohen's d = 0.082; t = 2.47, p = 0.014). Interestingly, respondents' physical proximity to Ebola survivors living in their community was not significantly associated with stigma at the height of the epidemic; however, those who had been exposed to messaging intended to prevent EVD-related stigma reported higher levels of stigma toward EVD survivors compared to those who had not been exposed to antistigma messaging (Cohen's d = 0.445; t = 0.14, p = 0.001).
Post-traumatic stress symptoms (Cohen's d = 1.14; t = 4.77, p = 0.000), EVD-related fear (Cohen's d = 0.97; t = 6.08, p = 0.000), and perceived EVD risk (Cohen's d = 0.178; t = 3.41, p = 0.001) were all strongly positively associated with stigma toward survivors at the height of the outbreak. Neither a recent history of health care-seeking at a local clinic nor with traditional healers was found to be significantly associated with stigma toward EVD survivors at the height of the epidemic. Leader trust and perceptions of community resilience had no significant effect on stigma. On average, there was a small but non-significant reduction in stigma toward EVD survivors from the height of the 2014-15 EVD epidemic in Sierra Leone until it was declared over (Cohen's d = − 0.02; t = − 0.29, p = 0.711). Although stigma toward EVD survivors in the sample attenuated on average as the outbreak declined, certain personal characteristics and perceptions predicted a slower rate of change (Table 4).
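Effect sizes like the Cohen's d values above are standardized mean differences; a generic sketch using a pooled standard deviation (the inputs below are illustrative only, not the study data):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: standardized mean difference between two groups,
    using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Example: equal SDs of 0.2 and a mean difference of 0.2 give d = 1.0
d = cohens_d(0.70, 0.50, 0.20, 0.20, 50, 50)
```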
Discussion
This study examined predictors of stigma toward survivors at the height of the 2014-15 EVD outbreak in Sierra Leone and factors that shaped change in stigma over time. On average, female participants demonstrated higher levels of stigma at the height of the outbreak compared to male participants, controlling for all other variables. Similarly, participants with above-average household wealth reported significantly higher levels of stigma toward EVD survivors in their community. Mental health and psychosocial factors, including posttraumatic stress, EVD-related fear and risk perceptions were also positively and significantly associated with stigma at the height of the outbreak. A significant decrease in stigma was observed from the height of the epidemic until it was declared over, but several individual and psychosocial factors predicted higher levels of stigma toward survivors of EVD. Female participants, as well as participants from wealthier households, demonstrated a slower rate of change in stigma toward EVD survivors. Participants with above average posttraumatic stress symptoms, EVD-related fear, and risk perceptions also demonstrated a slowed rate of change in stigma compared to those without these characteristics.
The finding that anti-stigma messaging was associated with elevated stigma was unexpected and warrants further investigation, as it could emphasize that the design and implementation of anti-stigma messaging are key to success. The uncontrolled spread of misinformation and falsehoods regarding EVD and the epidemic at the start of data collection may have been instrumental in increasing stigma and discrimination against people who had, and who were affected by, the infectious disease. In particular, at the start of the EVD outbreak in Sierra Leone, misinformation and unclear public health messaging gave rise to risk behaviors like secret burials and other potential harms linked with EVD-related fear and stigma. Misconceptions like these may have counteracted anti-stigma messaging, or been stigmatizing themselves. Similarly, throughout the study period, data collection was ongoing to understand EVD viral reservoirs within the body and routes of transmission. Rumors and misinformation about this work proliferated through the Western Areas and led many to believe that EVD could be sexually transmitted by survivors over a long period of time. Finally, it is possible that the observed relationship between anti-stigma messaging exposures and higher levels of stigma toward survivors may be an artefact of Sierra Leone's reliance on government-issued certificates to prove EVD-free status.
Conclusion
This study found that female sex, mental distress, EVD fear, and risk perceptions were all important predictors of stigma toward EVD survivors over the course of the 2014-15 epidemic in Sierra Leone. The findings offer important insights into stigma toward EVD survivors as well as survivors of other infectious diseases. The findings also raise important questions about the role of health communication strategies and media outlets as major factors contributing to stigma and discrimination against infectious disease survivors. As the world continues to grapple with the effects of the ongoing COVID-19 pandemic, several similarities shared between the current SARS-CoV-2 pandemic and recent EVD epidemics stand out; namely the high degree of contagiousness, proliferation of virus- and outbreak-related misinformation across media platforms, and targeting of certain social groups for blame related to the outbreak. In light of the growing importance of social and other forms of media as sources of health information, more research is needed to understand how the form, content, and delivery of health communication, broadly, and anti-stigma campaigns, in particular, contribute to the effectiveness of such approaches. In communities affected by infectious disease epidemics, approaches to reduce distress, fear, and the spread of misinformation may serve an important dual purpose: preventing new infections as well as stigma and discrimination toward those affected by the disease.
Preclinical study of CD19 detection methods post tafasitamab treatment
Introduction Several CD19 targeted antibody-based therapeutics are currently available for patients with diffuse large B-cell lymphoma (DLBCL), including the Fc-modified antibody immunotherapy tafasitamab. This therapeutic landscape warrants the evaluation of potential sequencing approaches. Prior to a subsequent CD19-targeted therapy, CD19 expression on tafasitamab-treated patient biopsy samples may be assessed. However, no standardized methods for its detection are currently available. In this context, selecting a tafasitamab-competing CD19 detection antibody for immunohistochemistry (IHC) or flow cytometry (FC) may lead to misinterpreting epitope masking by tafasitamab as antigen loss or downregulation. Methods We analyzed a comprehensive panel of commercially available CD19 detection antibody clones for IHC and FC using competition assays on tafasitamab pre-treated cell lines. To remove bound tafasitamab from the cell surface, an acidic dissociation protocol was used. Antibody affinities for CD19 were measured using Surface Plasmon Resonance (SPR) or Bio-Layer Interferometry (BLI). Results While CD19 was successfully detected on tafasitamab pre-treated samples using all 7 tested IHC antibody clones, all 8 tested FC antibody clones were confirmed to compete with tafasitamab. An acidic dissociation was demonstrated essential to circumvent CD19 masking by tafasitamab and avoid false negative FC results. Discussion The current study highlights the importance of selecting appropriate CD19 detection tools and techniques for correct interpretation of CD19 expression. The findings presented herein can serve as a guideline to investigators and may help navigate treatment strategies in the clinical setting.
Introduction
A range of CD19-targeted therapies have been approved for the treatment of patients with DLBCL. In 2020, the anti-CD19 Fc-modified antibody immunotherapy tafasitamab received accelerated approval for patients with relapsed or refractory (r/r) DLBCL, not eligible for autologous stem cell transplant, in combination with the immunomodulatory drug lenalidomide (1). Tafasitamab, in combination with lenalidomide and R-CHOP, is currently being tested as a frontline therapy for newly diagnosed DLBCL patients (NCT04824092, frontMIND). Other anti-CD19-targeted DLBCL therapies include the antibody-drug conjugate loncastuximab tesirine, currently approved for r/r DLBCL patients after at least two previous lines of therapy, and the anti-CD19 chimeric antigen receptor T-cell (CART19) therapies axicabtagene ciloleucel (axi-cel) and lisocabtagene maraleucel (liso-cel), approved as a second line of therapy, and tisagenlecleucel (tisa-cel), as a third line of therapy (2)(3)(4)(5).
The availability of different anti-CD19 therapies opens possibilities for therapeutic sequencing. Reportedly, CART19 treatment may induce CD19 loss, highlighting the importance of CD19 expression monitoring post-treatment (6, 7). To confirm target expression prior to a subsequent anti-CD19 therapy, biopsy samples from patients treated with tafasitamab may be analyzed using flow cytometry (FC) or immunohistochemistry (IHC). At present, CD19 detection methods in routine clinical practise are not universal, with different institutions utilising different platforms and different commercially available CD19 detection antibodies. The target epitopes of most commercially available CD19 detection antibodies are often unknown to end users for business reasons, and it is unclear whether they compete with tafasitamab. Fine epitope mapping of the CD19 extracellular domain has demonstrated that three commonly used anti-CD19 antibody clones (FMC63, 4G7-2E3, and 3B10) bind overlapping epitopes of CD19 (8), and tafasitamab is derived from the clone 4G7 (9). Importantly, using a tafasitamab-competing antibody to detect CD19 on tafasitamab-treated samples may lead to signal reduction and confusion of CD19 epitope masking with antigen loss.
The current study aims to evaluate a comprehensive set of commercially available anti-CD19 antibodies on tafasitamab pretreated cell lines by FC and IHC.
Antibodies
Tafasitamab was provided by MorphoSys AG. Tafasitamab-AF488 was conjugated using AlexaFluor™ 488 C5 maleimide. The amino acid sequences of the variable regions of RB4 (loncastuximab) were obtained from the Inxight Drugs database of the National Center for Advancing Translational Sciences (NCATS). RB4 was recombinantly produced in house as human IgG1-kappa. In addition, RB4 was expressed in house bearing a fluorescent mScarlet tag.
DAB Enhanced Liquid Substrate System (Sigma, Cat# D3939-1SET) was used for signal detection, and no hematoxylin counterstaining was performed before slide mounting. Staining quantification was performed using ImageJ software (NIH, version 1.53f51) and GraphPad Prism (version 8.4.3).
Flow cytometry competition assays
All antibodies were diluted in FACS buffer consisting of D-PBS supplemented with 3% FCS. Raji, MEC-1 or JVM-2 cells (5E+04 per test) were incubated with different concentrations of tafasitamab (ranging from 50 to 0.00064 nM, 5-fold titration) for 20 min on ice. Cells were washed with FACS buffer 3 times and incubated with a saturating concentration (50 nM) of a PE-labelled CD19 detection antibody for 20 min on ice, protected from light (Table S3, Figure S7). In case the concentration of the commercial CD19 detection antibody was unknown (LT19 and REA675, Table S3) or too low (J3-119, 4 µg/mL, Table S3), antibodies were used at the supplier-recommended concentration per test. Antibody clone HD37 was used unconjugated and detected using a secondary anti-rabbit antibody (Table S4). Next, cells were washed 3 times using FACS buffer and analyzed using a FACS Verse I. DAPI (4',6-diamidino-2-phenylindole) was used as a live/dead differentiator. Data analysis was performed using FlowJo software v10.5.
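As a small numerical aid, the 5-fold titration series above (50 down to 0.00064 nM) and the percent signal reduction used to read out competition can be sketched as follows; the helper names and the background-subtraction step are illustrative, not part of the published protocol:

```python
def serial_dilution(top_nm=50.0, fold=5.0, points=8):
    """5-fold serial dilution from 50 nM, as used for tafasitamab blocking."""
    return [top_nm / fold ** i for i in range(points)]

def percent_signal_reduction(mfi_treated, mfi_untreated, mfi_background=0.0):
    """Reduction of detection-antibody signal on tafasitamab-treated vs
    untreated cells; 100% indicates complete epitope masking."""
    specific_treated = mfi_treated - mfi_background
    specific_untreated = mfi_untreated - mfi_background
    return 100.0 * (1.0 - specific_treated / specific_untreated)

concentrations = serial_dilution()  # [50.0, 10.0, 2.0, ..., 0.00064]
```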
Acidic dissociation assay
U-2932, SU-DHL-1, -4 and -6 cells were incubated with different concentrations of tafasitamab (ranging from 200 to 0.2 µg/mL, 10-fold titration) and washed, as described above. Next, cells were either left untreated or treated with an acidic dissociation buffer to remove pre-bound tafasitamab from the cell surface. Cells were plated in 96 KingFisher Deepwell V-bottom plates (Thermo Scientific, #95040455) and re-suspended via pipetting (a minimum of 10 times) in 200 µL acidic dissociation buffer (D-PBS + 3% FCS, pH 2.1 adjusted with HCl). The suspension was neutralized using 1200 µL ice-cold FACS buffer (a minimum of 5 times re-suspension via pipetting) and spun down. The procedure was repeated three times before staining using the tafasitamab-competing PE-conjugated anti-CD19 antibody clone (HIB19). The samples were analyzed as described above.
Results
Using an antibody competition approach, we evaluated 7 anti-CD19 antibodies for IHC and 8 for FC (Tables S1, S3). The IHC assays were performed on formalin-fixed paraffin-embedded (FFPE) cell pellets, generated using the high and low CD19-expressing cell lines Raji and JVM-2, respectively. Prior to fixation, the cells were incubated with tafasitamab at a saturating concentration (50 nM) or left untreated (Figure 1A). CD19 was confirmed to be completely blocked by tafasitamab (Figure S2). Before proceeding with competition experiments using the cell pellets, optimal staining conditions were established using FFPE human tonsil samples. We tested 12 commercially available CD19 detection antibodies for IHC and successfully established staining protocols for 7 clones: 3 targeting the intracellular and 4 the extracellular domain of CD19 (Table S1). Next, the established clones were tested on tafasitamab-treated Raji (CD19-high) and JVM-2 (CD19-low) cell pellet samples, alongside untreated controls, to assess antibody binding competition (Figures 1A, S1). Thus, reduction or absence of a signal on tafasitamab-treated samples, compared to untreated controls, would indicate CD19 masking by tafasitamab (Figure 1B). No differences in CD19 surface staining pattern were observed on tafasitamab-treated vs. untreated samples with any of the 7 antibody clones tested. This result was in line with expectations for all antibodies targeting the intracellular domain of CD19 (BT51E, LE-CD19 and D4V4E). To confirm that tafasitamab was indeed still present in the FFPE samples and not lost during FFPE processing, tafasitamab was detected using two different anti-human IgG antibodies, as demonstrated in Figures 1C, S4. Prior to this experiment, low/lack of surface IgG expression on Raji and JVM-2 was also confirmed using FC, thus excluding potential interference with tafasitamab detection (Figure S3). Furthermore, our experiments highlighted another important feature of the IHC technique to be kept in consideration: the method does not allow for quantitative evaluation of differential antigen expression. Despite the ~5-fold difference in CD19 expression between Raji and JVM-2 cells, quantified by FC, IHC could not discriminate between these levels, and staining intensity can easily be affected by the duration of chromogen incubation (Figures 1B, S1, S5A).
The FC assays were performed on 3 CD19+ malignant B-cell lines expressing CD19 at high (Raji), medium (MEC-1), and low (JVM-2) levels (Figure S1). The cells were first incubated with tafasitamab at a wide range of concentrations, washed, and subsequently incubated with a single fluorescently labelled CD19 detection antibody at a constant concentration. In this setup, reduction of the fluorescent signal on tafasitamab-treated cells compared to untreated cells indicates CD19 binding competition (Figure 2A). With all 8 clones, the fluorescent signal decreased with increasing concentrations of tafasitamab, and was completely abrogated at saturating tafasitamab concentrations (above 10 nM) on all 3 cell lines tested (Figures 2B, S6). Clone OTI3B10, demonstrated to detect CD19 in IHC, is also marketed for FC on live cells. It appeared not to compete with tafasitamab on Raji cells, but it did not bind to MEC-1 or JVM-2 cells in a FC assessment, which rendered it unsuitable as a FC detection tool (Figures S8, S9). In summary, the CD19 detection capabilities of all 8 FC antibody clones tested were reduced or diminished on tafasitamab-pretreated cells. Interestingly, therapeutically relevant antibody clones such as FMC63 (the CD19-targeting moiety of axi-cel, tisa-cel and liso-cel) and RB4 (loncastuximab tesirine) also competed with tafasitamab and each other (Figure 2B, Figure S10) (11-14). CD19 affinity characterization revealed that tafasitamab has a higher or similar affinity for CD19 and a slower dissociation rate than the other antibodies tested, explaining why tafasitamab was not replaced by the CD19 detection antibodies during FC staining (Table S7). Nevertheless, it is important to note that CART cells express multiple chimeric antigen receptors and are thus characterized by avidity, which is not accounted for in the experimental setups of this study. This parameter would be vital to consider from a therapeutic point of view.
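Why a slowly dissociating blocker is not displaced during a 20-minute staining step can be illustrated with first-order dissociation kinetics; the k_off value below is purely illustrative, not a measured tafasitamab constant:

```python
import math

def fraction_still_bound(k_off_per_s, minutes):
    """First-order dissociation: B(t)/B(0) = exp(-k_off * t).
    With a small k_off, nearly all blocker remains bound, so a
    competing detection antibody finds almost no free epitope."""
    return math.exp(-k_off_per_s * minutes * 60.0)

# e.g. a hypothetical k_off = 1e-4 s^-1 over a 20-min staining incubation:
remaining = fraction_still_bound(1e-4, 20)  # ~0.89 still bound
```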
As epitope binding competition prevented most screened FC antibodies from detecting CD19, an acidic dissociation protocol was developed, designed to strip tafasitamab from the cell surface prior to staining with a competing CD19 clone (Figure 2C). The technique involves incubating the tafasitamab-treated cells in a low pH (pH = 2.1) buffer, leading to dissociation of the antibody from the antigen (15). This method was tested on 4 DLBCL cell lines (U-2932 and SU-DHL-2/4/6) treated with tafasitamab (Figure 2D). Acidic dissociation efficiently unmasked CD19 on the surface of all cell lines and allowed for detection of CD19 levels comparable to untreated controls, using a tafasitamab-competing antibody clone.
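The antibodies-bound-per-cell (ABC) figures quoted in this study derive from bead-based calibration (the Quantibrite system). A rough sketch of that conversion, assuming a PE:antibody ratio of 1:1; the bead values below are placeholders, not actual lot data:

```python
import math

def fit_loglog(bead_mfi, bead_pe_per_bead):
    """Least-squares line through log10(PE molecules) vs log10(MFI)
    of the calibration bead populations."""
    xs = [math.log10(m) for m in bead_mfi]
    ys = [math.log10(p) for p in bead_pe_per_bead]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def mfi_to_abc(cell_mfi, slope, intercept):
    """Convert a cell population's PE MFI to antibodies bound per cell."""
    return 10 ** (slope * math.log10(cell_mfi) + intercept)

# Placeholder bead standards (4 populations with known PE per bead):
slope, intercept = fit_loglog([50, 500, 5000, 50000], [60, 600, 6000, 60000])
```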
Discussion
CD19 is an attractive target for the treatment of B-cell malignancies due to its consistent expression at most stages of B-cell development. This has led to the development of multiple therapeutic modalities targeting CD19, e.g. naked monoclonal antibodies, antibody-drug conjugates, and chimeric antigen receptor T cells (CART). The availability of different anti-CD19 therapies opens up the possibility of administering them sequentially to patients, which, however, raises concerns about potential effects on CD19 expression after treatment with prior CD19-directed therapies.
To address these concerns, the current study provides an overview of commonly used monoclonal antibody tools for CD19 detection post-tafasitamab treatment, some of which may be used in routine clinical practice. Our findings reveal that when attempting to detect CD19 by FC, prior acidic dissociation of tafasitamab from CD19 is critical for accurate measurements of CD19 levels on tafasitamab-treated samples. This acidic dissociation step is especially important, as we have seen that CD19 can be detected by IHC using commercial antibodies independent of tafasitamab pretreatment.

Figure 1: CD19 detection by IHC is independent of tafasitamab (TAFA) pre-treatment. (A) Schematic of the IHC competition workflow: CD19+ cell lines were pre-incubated with saturating TAFA (50 nM), washed, processed into formalin-fixed paraffin-embedded (FFPE) cell pellets, sectioned, and stained with commercial anti-CD19 antibodies; a reduced signal on TAFA pre-treated samples indicates competition (schematic created using BioRender.com). (B) IHC competition experiments on Raji and JVM-2 cells with detection antibodies against the cytoplasmic tail of CD19 (clones BT51E, LE-CD19, D4V4B) and the extracellular domain of CD19 (clones OTI3B10, 109, ZR212, 3117); samples were imaged at 40x magnification (Axiolab 5 microscope, Zeiss Axiocam 208 color camera, Zeiss N-Achroplan objective, numerical aperture 0.65, room temperature; images processed with the camera's built-in software, firmware version 1.3.6). (C) To confirm that TAFA was still present in the FFPE samples and not lost during cell pellet processing, untreated and TAFA pre-treated samples were stained with anti-human IgG; quantification was performed on 30 cells per condition, with relative staining intensity normalized to untreated cells; statistical analysis: Mann-Whitney test.

Figure 2: CD19 masking by tafasitamab does not allow for CD19 detection by FC using commercial monoclonal antibodies. (A) Schematic of the FC competition workflow: CD19+ tumor cell lines were pre-incubated with different concentrations of TAFA for 20 min on ice, washed, and incubated with a fluorescently labelled commercial anti-CD19 antibody; competition was confirmed by a reduction/loss of fluorescent signal from the commercial clone (schematic created using BioRender.com). (B) FC competition experiments on Raji (Burkitt lymphoma, ~105,000 CD19 ABC), MEC-1 (B-CLL, ~66,000 CD19 ABC) and JVM-2 (B-PLL, ~20,000 CD19 ABC) cells with clones HIB19, 4G7, FMC63, SJ24C1, J3-119, LT19, REA675 and HD37; N≥3 individual experiments. (C) Schematic of the acidic dissociation procedure: cells pre-incubated with TAFA underwent 3 rounds of dissociation in a low-pH buffer (pH = 2.1), removing pre-bound TAFA and exposing CD19 for binding by a competing anti-CD19 antibody; without this step, CD19 remains masked by TAFA and inaccessible to a competing antibody. (D) Four DLBCL cell lines (U-2932, SU-DHL-2, -4 and -6) were incubated for 20 min on ice with different concentrations of TAFA, washed, and stained with the competing clone HIB19 (50 nM) with and without prior acidic dissociation; detectable CD19 molecules were quantified using the Quantibrite system, BD; N=3 individual experiments. B-CLL, B-cell Chronic Lymphocytic Leukemia; B-PLL, B-cell Prolymphocytic Leukemia; ABC, Antibodies Bound Per Cell.
Tafasitamab exhibits high affinity to CD19 and a slow dissociation rate compared to the other tested anti-CD19 antibodies. Additionally, performing FC staining with and without acidic dissociation on the same samples can provide information about both CD19 occupancy by tafasitamab and total CD19 levels on the cell surface. When using the IHC antibodies screened in this study, CD19 could be detected independent of tafasitamab treatment, meaning that a wide range of commercially available anti-CD19 antibodies for IHC can be used freely on tafasitamab-treated samples, without the risk of data misinterpretation. Conversely, this also means that those antibodies would not provide information about CD19 occupancy by tafasitamab, and thus potentially the availability of CD19 for binding by subsequent anti-CD19 therapies if the epitopes should overlap.
As IHC was unable to discriminate between Raji and JVM-2 cells with respect to their CD19 levels in our experiments, while FC showed a 5-fold difference in expression, the semi-quantitative nature of IHC was confirmed by our results. However, in the JULIET study testing tisagenlecleucel in DLBCL, similar treatment responses were observed in patients with normal versus low/absent CD19 expression as determined by quantitative immunofluorescence (16), suggesting that accurate determination of CD19 expression levels may not be required when considering eligibility for CART19 therapy. Any positive staining obtained via IHC, regardless of intensity, may be sufficient for subsequent CART19 therapy after prior treatment with other anti-CD19 therapies. This would fit the hypothesis that CAR T cells are capable of mediating anti-tumor activity against cells with low antigen density (17).
The data outlined in this study are intended to help investigators select appropriate tools for CD19 detection and could serve as a valuable reference in both basic research and clinical practice. Differentiating total from masked antigen on patient biopsies can allow clinical investigators to properly interpret CD19 availability after anti-CD19 therapy, which could be critically important for future treatment strategies.
About Tafasitamab
Tafasitamab is a humanized Fc-modified cytolytic CD19-targeting monoclonal antibody. In 2010, MorphoSys licensed exclusive worldwide rights to develop and commercialize tafasitamab from Xencor, Inc. Tafasitamab incorporates an XmAb® engineered Fc domain, which mediates B-cell lysis through apoptosis and immune effector mechanisms including Antibody-Dependent Cell-Mediated Cytotoxicity (ADCC) and Antibody-Dependent Cellular Phagocytosis (ADCP). In January 2020, MorphoSys and Incyte entered into a collaboration and licensing agreement to further develop and commercialize tafasitamab globally. Following accelerated approval by the U.S. Food and Drug Administration in July 2020, tafasitamab is being co-commercialized by MorphoSys and Incyte in the United States. Conditional/accelerated approvals were granted by the European Medicines Agency and other regulatory authorities. Incyte has exclusive commercialization rights outside the United States.
XmAb® is a registered trademark of Xencor, Inc., and is protected by associated patents. Author DA is employed by the company and is a stockholder in Incyte Corporation.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
The influence of the dye adsorption time on the DSSC performance
Dye-sensitized solar cells (DSSC), a third-generation photovoltaic technology, are nowadays among the most investigated, due to the possibility of applying ecological and natural materials (dyes) such as alizarin. This paper reports the influence of electrode immersion time on the performance of alizarin-based dye-sensitized solar cells. The absorption spectra of alizarin dye were measured in the range of 300-800 nm. Fully structured dye-sensitized cells with a working area of 0.8 cm² were fabricated in the sandwich way using four different immersion times of the TiO2 electrodes: 10 min, 40 min, 1 h, and 24 h. The high-performance EL-HPE electrolyte was instilled into the space between the electrodes. Current-voltage (I-V) dark and illuminated characteristics were measured using a solar light simulator. Solar cell characterization was carried out under standard test conditions: the solar irradiance was set at 100 mW/cm² and the temperature of the module was maintained at 25 °C. Characteristic parameters of the fabricated cells were determined on the basis of the measured I-V curves. Series resistances were extracted from the I-V characteristics at the open-circuit voltage using first-order derivatives. It was found that 60 minutes of electrode immersion in the dye solution is sufficient to obtain an appropriate stage of dye adsorption.
Introduction
In recent years, intensive development of renewable energy sources has been observed all over the world. It is motivated by the depletion of fossil fuel resources, the increase of their prices, and still-growing global energy demands. It is estimated that, taking into account the current demand for electricity, the reserves of fossil fuels will last only 40, 60 and 200 years for oil, natural gas, and coal, respectively [1]. The climatic consequences of fossil fuel combustion are not without significance because of greenhouse gas emissions to the atmosphere. Preservation of the environment during the process of energy production is crucial. According to Directive 2009/28/EC of the European Parliament and of the Council of 23 April 2009, the target for European countries to be achieved by the end of 2030 is a 32% share of renewable energy in the EU's final energy consumption [2]. The above-mentioned factors are the most significant reasons to explore alternative energy sources, especially renewable ones. Many research groups focus on solar energy because it is considered an environmentally friendly source with huge potential, able to meet the energy requirements of current and subsequent generations. Direct conversion of energy from the Sun into electricity can be achieved by photovoltaic (PV) devices. Photovoltaic cells are divided into three categories called generations, which are related directly to different materials and have various photoelectric conversion efficiencies. The first generation of solar cells is based on crystalline silicon material. It is relatively expensive, which causes a high cost of production. However, silicon wafer-based solar cells are the most effective ones. Yoshikawa et al. [3] obtained the highest reported silicon cell efficiency of 26.7%. The second generation of solar cells is characterized by lower material usage [4], which allows high-efficiency cells to be produced.
Materials typically used in thin-film solar cells are cadmium sulfide (CdS), cadmium telluride (CdTe) [5], amorphous silicon (a-Si) [6] and copper-indium-diselenide/copper-indium-gallium-diselenide (CIS/CIGS) [7,8]. The highest recently reported efficiency of a laboratory CIGS cell equals 22.4% and is close to the efficiency of silicon devices, which raises the possibility of replacing crystalline Si solar cells with CIGS [9]. The third generation of photovoltaic cells is a promising alternative to conventional cells based on the p-n junction. Dye-sensitized solar cell (DSSC) technology can be considered an economical substitute with relatively high conversion efficiency. A typical DSSC device consists of a photoanode made of a mesoporous semiconductor layer and a dye sensitizer, a counter electrode with a platinum catalyst layer, and an electrolyte. Both electrodes are based on transparent conductive oxide materials (TCOs), especially indium tin oxide (ITO) [10] or fluorine-doped tin oxide (FTO) [11]. An interesting alternative to ITO is also ZnO doped with trivalent elements such as Al [12][13][14][15], Ga [12,16] or B [17], a material whose electrical properties, such as conductivity, can even improve during the annealing process required in DSSC.
The dye sensitizer is considered a crucial component that strongly affects the performance of the working cell. The phenomenon which plays an important role in the performance of dye-sensitized solar cells is electron transfer from the excited state of the dye molecule to the conduction band of the titanium dioxide nanoparticle [18][19][20]. The course of the light-induced transfer of electrons from the sensitizer to the semiconductor depends mainly on the type of the dye, i.e. the shape of its molecule and the positions of its energy levels. The best performing dyes are made of Ru complexes (e.g. N3, N719 and black dyes) [21,22]. Mathew et al. [23] obtained the record efficiency of 13% for a Ru-based DSSC. However, the high production cost, complicated synthesis and harmful effect on the environment made researchers search for alternatives to Ru-based compounds [24]. A variety of natural resources, which are environmentally friendly, can be used as a source of dye sensitizers, such as plants, flowers and fruits [25,26]. Nevertheless, the efficiency of natural-dye-based cells must be improved before extensive application of DSSCs. Various groups [27][28][29] have been working on implementing natural dyes, such as alizarin, quercetin or luteolin. Alizarin is a red dye extracted from the root of Rubia tinctorum; however, it can also be obtained by laboratory methods from anthraquinone. Alizarin is characterized by rapid injection of photoexcited electrons into the semiconductor conduction band.
The amount of adsorbed dye also influences the efficiency of the DSSC device and can be adjusted by changing the concentration of the dye solution or the time for which the photoelectrode is dipped in the solution. This paper presents the influence of electrode immersion time on the performance of alizarin-based dye-sensitized solar cells. DSSC devices were prepared with the use of prefabricated components and dyed with alizarin.
Materials and methods
In this research, the dye-sensitized solar cells were prepared with the use of alizarin dye (Fig. 1) purchased from Sigma-Aldrich. Initially, a 2 mM solution of alizarin dye was prepared in ethanol (99.8%). The absorption spectra were measured in the range of 300-800 nm by a Shimadzu UV-vis spectrophotometer. Addition of the dye resulted in a colour change from transparent to dark purple. In order to improve solubility, the dye solution was ultrasonically mixed for 10 minutes. Afterwards, the photoanodes coated with TiO2 (Greatcell Solar) were placed in the dye solution. The immersion time was different for each series of experiments: 10 minutes, 40 minutes, 1 hour and 24 hours. The container of dye solution was kept without access to light, owing to the fact that the dye particles are extremely photosensitive, especially in solution. After that, the electrodes were taken out and rinsed with ethanol in order to eliminate excess dye molecules. The photoelectrodes were dried for 30 minutes in the ambient environment and then were immediately used in the DSSC structures. DSSCs were assembled in a sandwich configuration. A counter electrode with a platinum catalyst layer (Greatcell Solar) was placed facing the titania surface deposited on the photoanode. DSSC devices were sealed with a thermoplastic sealant (Dye Sol). The whole assembly was heated on a high-temperature titanium hotplate, with heat supplied to the structure from the top and the bottom. Heating was carried out in three steps: first, the structure was heated to 90°C for 2 minutes; second, the temperature was raised to 110°C and held for 2 minutes; finally, the temperature was set to 115°C and kept for 2 minutes. After this procedure, the DSSC assemblies were left to cool down. Then, the high-performance EL-HPE electrolyte by Greatcell Solar was instilled into the space between the electrodes through a hole drilled in the counter electrode. The working area of the DSSC structure was fixed at 0.8 cm².
Then, the contacts were made with silver tape. A scheme of the dye-sensitized solar cell is presented in Fig. 2. The current-voltage characteristics were immediately measured under dark and illuminated conditions using a SUN 3000 solar light simulator by Abet Technologies, class AAA. The simulator is equipped with a Xe lamp (450 W). The solar irradiation was set at 100 mW/cm². The data were collected by a Keithley Instruments type 2440 electrometer with integrated power supply. The ReRa Tracer software was used to measure and calculate the performance parameters.
Results and discussion
Initially, the alizarin dye was characterized by spectrophotometry in order to obtain its absorption spectrum (Fig. 3). The dye exhibits a wide absorption band in the visible range of 400 to 600 nm, with a major peak at 432.5 nm; the dye thus has strong absorption in the violet region. On the basis of the absorption data, the value of the optical band gap (the HOMO-LUMO separation) was estimated to be 2.13 eV. The dye-sensitized solar cells were prepared in the way described in the Materials and methods section. The obtained cells sensitized with alizarin dye are shown in Fig. 4. A change in colour from orangish to dark purple was noticed for the different times of photoanode immersion in the dye solution (10 min, 40 min, 60 min and 24 h). As can be seen in Figure 4, the immersion time has a strong influence on the colour of the semiconductor layer. Electrodes immersed for a short time, such as 10 or 40 minutes, are characterized by a brighter colour (close to orange), while longer immersion times result in a dark purple colour of the electrodes, which means that the adsorption process proceeds over time.
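The band-gap estimate can be cross-checked with the photon-energy relation E = hc/(qλ), i.e. E[eV] ≈ 1240/λ[nm]; a band gap of 2.13 eV corresponds to an absorption onset near 582 nm, consistent with the long-wavelength edge of the reported 400-600 nm band. A minimal sketch (the onset wavelength below is an illustrative assumption, not a value taken from the spectrum itself):

```python
# Photon energy E (eV) from wavelength lambda (nm): E = h*c / (q * lambda)
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given vacuum wavelength in nm."""
    return H * C / (Q * wavelength_nm * 1e-9)

# Reported absorption peak, and an assumed onset wavelength for illustration
print(round(photon_energy_ev(432.5), 2))  # energy at the 432.5 nm peak
print(round(photon_energy_ev(582.0), 2))  # assumed onset -> ~2.13 eV band gap
```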
The I-V characteristics of the cells based on a TiO2 substrate using alizarin as a sensitizer were measured under a standard light intensity of 100 mW/cm² in the solar simulator. On the basis of the obtained parameters, i.e. short-circuit current (Isc), short-circuit current density (Jsc), open-circuit voltage (Voc), maximum power point current (IMPP) and maximum power point voltage (VMPP), the value of the fill factor was calculated. The fill factor is a parameter which defines the quality of the solar cell and is defined as the ratio of the maximum power to the product of the short-circuit current and the open-circuit voltage. The more the characteristic resembles a rectangle, the higher the fill factor. The fill factor can be calculated from equation (1) [26]:

FF = (IMPP · VMPP) / (Isc · Voc) (1)
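Numerically, the fill factor is obtained by locating the maximum of the product I·V along the measured curve and dividing by Isc·Voc. A minimal sketch (the sample I-V points below are illustrative, not measured data from this work):

```python
def fill_factor(voltages, currents):
    """Fill factor from an illuminated I-V curve.

    voltages: cell voltages (V), from 0 up to Voc
    currents: matching photocurrents (A), from Isc down to 0
    """
    i_sc = max(currents)   # current at V = 0
    v_oc = max(voltages)   # voltage at I = 0
    p_max = max(v * i for v, i in zip(voltages, currents))  # maximum power point
    return p_max / (i_sc * v_oc)

# Illustrative points loosely in the range reported for these cells
v = [0.0, 0.1, 0.2, 0.25, 0.3]       # V
i = [8e-5, 7e-5, 5e-5, 3e-5, 0.0]    # A
print(round(fill_factor(v, i), 3))
```

In practice the curve would be sampled much more densely, but the ratio is computed the same way.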
The fill factor is used to assess the power loss resulting from the internal resistance of the cell, losses caused by recombination, and other losses. Dark characteristics of the prepared dye-sensitized cells are shown in Fig. 5. Their shape is typical for a semiconductor diode. Comparison with the I-V curves obtained for the illuminated devices, presented in Fig. 6, shows that illumination results in a less bent shape of the curves; thus the role of series resistance becomes visible. The values of the series resistance were obtained from the inverse slope of a straight line fitted to the I-V curve near I = 0. The series resistance, Rs, represents the resistive losses which occur inside the solar cell as the current flows through its volume: the current generated inside the cell travels through the resistive FTO/TiO2 semiconductor layer, and there are additional resistances at the TCO interconnections and the cell metallization. Although the open-circuit voltage is independent of Rs, the short-circuit current and especially the fill factor depend strongly on it and decrease as the series resistance of the cell increases. The value of Rs can be calculated from the current-voltage (I-V) curve of the solar cell [30]. The slope at the open-circuit voltage Voc provides the negative inverse value of Rs,0, according to formula (2):

Rs,0 = -(dV/dI)|V=Voc (2)
Using the value of Rs,0 extracted from the I-V curve, the series resistance can be calculated on the basis of formula (3), where n = 1 and Vth is the thermal voltage (kT/q, with k = 1.381×10⁻²³ J/K, q = 1.602×10⁻¹⁹ C and T = 297 K). The reverse saturation current, I0, can be calculated from equation (4).
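Formulas (3) and (4) are not reproduced in this extract. In the standard single-diode analysis commonly used for this extraction (an assumption here, not necessarily the exact form of reference [30]), the diode term at open circuit gives Rs = Rs,0 − nVth/Isc and I0 = Isc/(exp(Voc/(nVth)) − 1). A minimal sketch under those assumptions:

```python
import math

K = 1.381e-23   # Boltzmann constant, J/K
Q = 1.602e-19   # elementary charge, C
T = 297.0       # temperature, K (as stated in the text)
N = 1.0         # diode ideality factor (as stated in the text)

V_TH = K * T / Q  # thermal voltage, ~25.6 mV at 297 K

def reverse_saturation_current(i_sc, v_oc):
    """I0 from the single-diode model at open circuit (assumed form)."""
    return i_sc / (math.exp(v_oc / (N * V_TH)) - 1.0)

def series_resistance(rs0, i_sc):
    """Rs from the slope-derived Rs,0 (assumed form): Rs = Rs,0 - n*Vth/Isc."""
    return rs0 - N * V_TH / i_sc

# Illustrative values in the range reported in the text
i_sc = 7.6e-5   # A (~0.095 mA/cm2 over the 0.8 cm2 active area)
v_oc = 0.27     # V
rs0 = 2.5e3     # ohm
print(reverse_saturation_current(i_sc, v_oc))
print(series_resistance(rs0, i_sc))
```

With Isc on the order of tens of microamps, the correction nVth/Isc is a few hundred ohms, i.e. roughly an order of magnitude below the kΩ-scale Rs values reported below.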
On the basis of the above-mentioned method, the Rs values for immersion times of 10 min, 40 min, 60 min and 24 h were estimated to be 2.11 kΩ, 2.27 kΩ, 2.57 kΩ and 2.74 kΩ, respectively. Figure 6 shows the current-voltage characteristics of the devices for the various immersion times. Use of the electrode immersed for 10 minutes led to the worst properties of the DSSC devices: adsorption of dye molecules onto the TiO2 layer in such a short time was insufficient, which resulted in low parameter values. A comparison of the performance parameters for the different immersion times is presented in Table 1. As can be seen, the highest short-circuit current density was found for the electrode submerged for 10 minutes; increasing the immersion time decreased the Jsc value within the range of 0.095-0.058 mA/cm². The open-circuit voltage increases for longer adsorption times; its value was found to be the highest for cell 4 (318.82 mV), whilst the lowest was obtained for cell 1 (270.52 mV). The open-circuit voltage depends on the energy of the LUMO level of the dye and its distance from the conduction band of the semiconductor used in the device; the main limitation of the Voc value is the recombination of conduction-band electrons with ions from the electrolyte. Comparing all the parameters, it can be concluded that cells based on the electrode immersed for 60 minutes present the best performance parameters, especially the fill factor, with a 24 times shorter immersion time than cell 4. The fill factor was found to increase with immersion time up to 60 min; further increase of the immersion time did not lead to better performance of the device, and a slightly worse fill factor was noticed for the immersion time of 24 h (cell 4).
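The parameters compared in Table 1 combine into the output power density via Pmax = FF · Jsc · Voc. As an illustration with values drawn from the reported ranges (the pairing of the three values into one cell is hypothetical, since Table 1 itself is not reproduced here):

```python
def max_power_density(ff, j_sc_ma_cm2, v_oc_mv):
    """Maximum output power density from FF, Jsc (mA/cm2) and Voc (mV).

    mA/cm2 * mV = uW/cm2, so the result is in microwatts per cm2.
    """
    return ff * j_sc_ma_cm2 * v_oc_mv

# Hypothetical combination drawn from the reported ranges:
# FF = 38.06%, Jsc = 0.070 mA/cm2, Voc = 310 mV
print(round(max_power_density(0.3806, 0.070, 310.0), 1))  # uW/cm2
```

The microwatt-scale result reflects the very low short-circuit currents discussed above.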
Conclusion
The influence of the immersion time of the photoelectrode in the dye solution on the performance of dye-sensitized solar cells was studied. DSSC devices were obtained on the basis of alizarin dye with immersion times of 10 min, 40 min, 60 min and 24 hours. The performed investigation leads to the conclusion that alizarin is suitable as a sensitizer in dye-sensitized solar cells, because it is characterized by a wide absorption band in the visible range of 400 to 600 nm, comparable with other dyes reported in the literature. The performance of the obtained devices was studied by means of illuminated and dark current-voltage characteristics. The shape of the dark I-V curves is typical for a semiconductor diode, and the influence of the series resistance on the illuminated characteristics is visible. The Rs values have a strong influence on the short-circuit current, which is very low. However, the cells were characterized by Voc values in the range from 270.52 to 318.82 mV, which are appropriate for dye-sensitized assemblies. The fill factor values varied within the range of 29.22%-38.06%, with the maximum FF shown by cell 3, characterized by 60 minutes of immersion time. It can be summarized that the DSSC structure immersed for 60 minutes reveals the best performance parameters, and this time can be considered optimal for alizarin dye; further increasing of the immersion time does not lead to better parameters, the fill factor especially. Further investigation is necessary in order to improve all the significant parameters of working dye-sensitized solar cells prepared with the use of a natural dye such as alizarin.
"year": 2019,
"sha1": "222d9f77f99901b270f416b189153cda0db8c4dd",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/26/e3sconf_eko-dok2019_00040.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "90e6ef19c790e2543bc08d0be4fd57c0f6504be4",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |