The More You Know: Using Knowledge Graphs for Image Classification
Humans have the remarkable capability to learn a large variety of visual concepts, often with very few examples, whereas current state-of-the-art vision algorithms require hundreds or thousands of examples per category and struggle with ambiguity. One characteristic that sets humans apart is our ability to acquire knowledge about the world and reason using this knowledge. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. Specifically, we introduce the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a fully end-to-end learning system. We show in a number of experiments that our method outperforms baselines for multi-label classification, even under low data and few-shot settings.
Introduction
Our world contains millions of visual concepts understood by humans. These often are ambiguous (tomatoes can be red or green), overlap (vehicles includes both cars and planes) and have dozens or hundreds of subcategories (thousands of specific kinds of insects). While some visual concepts are very common such as person or car, most categories have many fewer examples, forming a long-tail distribution [39]. And yet, even when only shown a few or even one example, humans have the remarkable ability to recognize these categories with high accuracy. In contrast, while modern learning-based approaches can recognize some categories with high accuracy, it usually requires hundreds or thousands of labeled examples for each of these categories. Given how large, complex and dynamic the space of visual concepts is, this approach of building large datasets for every concept is unscalable. Therefore, we need to ask what humans have that current approaches do not.
One possible answer to this is structured knowledge and reasoning. Humans are not merely appearance-based classifiers; we gain knowledge of the world from experience and language. We use this knowledge in our everyday lives to recognize unfamiliar objects. For instance, we might have read in a book about the "elephant shrew" (maybe even seen an example) and will have gained knowledge that is useful for recognizing one. Figure 1 illustrates how we might use our knowledge about the world in this problem. We might know that an elephant shrew looks like a mouse, has a trunk and a tail, is native to Africa, and is often found in bushes. With this information, we could probably identify the elephant shrew if we saw one in the wild. We do this by first recognizing (we see a small mouse-like object with a trunk in a bush), recalling knowledge (we think of animals we have heard of and their parts, habitat, and characteristics) and then reasoning (it is an elephant shrew because it has a trunk and a tail, and looks like a mouse, while mice and elephants do not have all these characteristics). With this information, even if we have only seen one or two pictures of this animal, we would easily be able to classify it.
The focus of this work is to exploit structured visual knowledge and reasoning to improve image classification. Recently, there has been a lot of work focused on building knowledge graphs that might be useful for image problems. For example, Never Ending Image Learner [38] learns relationships between visual concepts directly from images from the web. Similarly, Never Ending Language Learner [4] learns relationships between semantic categories, but in this case by reading text on the web. There are also a number of human-labeled knowledge bases such as WordNet [23], which was the starting point for the ImageNet [28] categories, and Visual Genome [15], which has human-annotated scene graphs for each image. However, given the scale and ambiguity issues, these knowledge graphs are quite noisy. The knowledge exists; the question is how to use it effectively in the presence of noise.
There has been a lot of work on end-to-end learning on graphs and neural networks trained on graphs [30,3,6,11,24,22,9,21]. Most of these approaches either extract features from the graph or learn a propagation model that transfers evidence between nodes conditional on the type of relationship. An example of this is the Gated Graph Neural Network [18], which takes an arbitrary graph as input. Given some initialization (annotation specific to the task, such as the starting and ending node for shortest path), it learns how to propagate information and predict the output for every node in the graph. This approach has been shown to solve basic logical tasks as well as program verification.
Our work improves on this model and adapts end-to-end graph neural networks to the image classification task. We introduce the Graph Search Neural Network (GSNN), which uses features from the image to efficiently annotate the graph, select a relevant subset of the input graph and predict outputs on nodes representing visual concepts. These output states are then used to classify the image. GSNN learns a propagation model which reasons about different types of relationships and concepts to produce outputs on the nodes which are then used for image classification. Our new architecture mitigates the computational issues of Gated Graph Neural Networks on large graphs, which allows our model to be efficiently trained for image tasks using large knowledge graphs. We show how our GSNN model is effective at reasoning about concepts to improve image classification tasks across the entire spectrum, from one-shot learning to large-scale dataset learning. Importantly, our GSNN model is also able to provide explanations of its classifications by following how information is propagated in the graph.
The major contributions of this work are (a) the introduction of the GSNN as a way of incorporating potentially large knowledge graphs into an end-to-end learning system that is computationally feasible for large graphs; (b) a framework for using noisy knowledge graphs for image classification; (c) the ability to explain our image classifications by using the propagation model; (d) the introduction of a new subset of Visual Genome designed to test 1-shot and few-shot learning without any overlap with classes from ImageNet [28] or COCO [19]. Our method significantly outperforms baselines for multi-label classification both in full-data and low-data settings.
Related Work
Learning knowledge graphs [38,4,29] and using graphs for visual reasoning [39,20] has recently been of interest to the vision community. For reasoning on graphs, several approaches have been studied. For example, [40] collects a knowledge base and then queries this knowledge base to do first-order probabilistic reasoning to predict affordances. [20] builds a graph of exemplars for different categories and uses the spatial relationships to perform contextual reasoning. Approaches such as [17] use random walks on the graphs to learn patterns of edges while performing the walk and predict new edges in the knowledge graph. There has also been some work using a knowledge base for image retrieval [12] or answering visual queries [41], but these works are focused on building and then querying knowledge bases rather than using existing knowledge bases as side information for some vision task.
However, none of these approaches are learned in an end-to-end manner, and the propagation model on the graph is mostly hand-crafted. More recently, learning from knowledge graphs using neural networks and other end-to-end learning systems to perform reasoning has become an active area of research. Several works treat graphs as a special case of a convolutional input where, instead of pixel inputs connected to pixels in a grid, the inputs are connected by an input graph, relying on either some global graph structure or doing some sort of pre-processing on graph edges [3,6,11,24]. However, most of these approaches have been tried on smaller graphs such as molecular datasets. In vision problems, these graphs encode contextual and common-sense relationships and are significantly larger, leading to scalability issues.
Li et al. [18] present Gated Graph Neural Networks (GGNN), which use neural networks on graph-structured data. This paper (itself an extension of Graph Neural Networks [30]) serves as the foundation for our Graph Search Neural Network (GSNN). Several papers have found success using variants of Graph Neural Networks applied to various simple domains such as quantitative structure-property relationship (QSPR) analysis in chemistry [22] and subgraph matching and other graph problems on toy datasets [9]. GGNN is a fully end-to-end network that takes as input a directed graph and outputs either a classification over the entire graph or an output for each node. For instance, for the problem of graph reachability, GGNN is given a graph, a start node and an end node, and must output whether the end node is reachable from the start node. They show results for simple logical tasks on graphs and more complex tasks such as program verification.
There is also a substantial amount of work on various types of kernels defined for graphs [35] such as diffusion kernels [14], graphlet kernels [32], Weisfeiler-Lehman graph kernels [31], deep graph kernels [26], graph invariant kernels [25] and shortest-path kernels [1]. These methods have various ways of exploiting common graph structures; however, they are only helpful for kernel-based approaches such as SVMs, which do not compare well with neural network architectures in vision.
Our work is also related to attribute approaches [8] to vision such as [16] which uses a fixed set of binary attributes to do zero-shot prediction, [33] which uses attributes shared across categories to prevent semantic drift in semi-supervised learning and [5] which automatically discovers attributes and uses them for fine-grained classification. Our work also uses attribute relationships that appear in our knowledge graphs, but also uses relationships between objects and reasons directly on graphs rather than using object-attribute pairs directly.
We further evaluate our model on one and few-shot learning. There have been a few works in this area recently such as [2] which inverts the process that creates the input and [37] which learns new categories by finding common patches between classes. Both evaluate on handwritten characters and it is unclear how well they would scale to realistic images. [36] improves few-shot classification by training a network on a high-data image dataset and then learning a model-to-model transformation onto the new dataset.
Gated Graph Neural Networks
The idea of GGNN is that given a graph with $N$ nodes, we want to produce some output, which can either be an output for every graph node $o_1, o_2, \ldots, o_N$ or a global output $o_G$. This is done by learning a propagation model similar to an LSTM. For each node $v$ in the graph, we have a hidden state representation $h_v^{(t)}$ at every time step $t$. We start at $t = 0$ with initial hidden states $x_v$ that depend on the problem. For instance, for learning graph reachability, this might be a two-bit vector that indicates whether a node is the source or destination node. In the case of visual knowledge graph reasoning, $x_v$ can be a one-bit activation representing the confidence of a category being present based on an object detector or classifier.
Next, we use the structure of our graph, encoded in a matrix $A$, which serves to retrieve the hidden states of adjacent nodes based on the edge types between them. The hidden states are then updated by a gated update module similar to an LSTM. The basic recurrence for this propagation network is

$h_v^{(1)} = [x_v^\top, 0]^\top$ (1)
$a_v^{(t)} = A_v^\top [h_1^{(t-1)\top} \ldots h_N^{(t-1)\top}]^\top + b$ (2)
$z_v^t = \sigma(W^z a_v^{(t)} + U^z h_v^{(t-1)})$ (3)
$r_v^t = \sigma(W^r a_v^{(t)} + U^r h_v^{(t-1)})$ (4)
$\widetilde{h_v^t} = \tanh(W a_v^{(t)} + U(r_v^t \odot h_v^{(t-1)}))$ (5)
$h_v^{(t)} = (1 - z_v^t) \odot h_v^{(t-1)} + z_v^t \odot \widetilde{h_v^t}$ (6)

where Eq. 1 is the initialization of the hidden state with $x_v$ and empty dimensions, Eq. 2 shows the propagation updates from adjacent nodes, and Eqs. 3-6 combine the information from adjacent nodes and the current hidden state of the nodes to compute the next hidden state. After $T$ time steps, we have our final hidden states. The node-level outputs can then be computed as

$o_v = g(h_v^{(T)}, x_v)$ (7)

where $g$ is a fully connected network, the output network, and $x_v$ is the original annotation for the node.
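As a concrete illustration of Eqs. 1-6, the following is a minimal NumPy sketch of one propagation pass, simplified to a single edge type (so the adjacency matrix is binary and the bias is shared). All parameter shapes, names and toy values here are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, A, Wz, Uz, Wr, Ur, W, U, b):
    """One GRU-style propagation step over all N nodes.

    h : (N, D) hidden states h_v^{(t-1)}
    A : (N, N) adjacency matrix (one edge type, for simplicity)
    Returns the updated hidden states h_v^{(t)}.
    """
    a = A @ h + b                              # Eq. 2: gather messages from neighbors
    z = sigmoid(a @ Wz + h @ Uz)               # Eq. 3: update gate
    r = sigmoid(a @ Wr + h @ Ur)               # Eq. 4: reset gate
    h_tilde = np.tanh(a @ W + (r * h) @ U)     # Eq. 5: candidate hidden state
    return (1 - z) * h + z * h_tilde           # Eq. 6: gated update

# Toy run: a 4-node chain graph, hidden size 3, small random parameters.
rng = np.random.default_rng(0)
N, D = 4, 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
h = np.zeros((N, D))
h[0, 0] = 1.0                                  # Eq. 1: annotate the source node
params = [rng.standard_normal((D, D)) * 0.1 for _ in range(6)]
b = rng.standard_normal(D) * 0.1
for t in range(2):                             # T propagation steps
    h = ggnn_step(h, A, *params, b)
print(h.shape)   # (4, 3)
```

In the full GGNN, $A$ would instead hold one learned block per edge type, so messages depend on the relationship between nodes.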
Graph Search Neural Network
The biggest problem in adapting GGNN for image tasks is computational scalability. NEIL [38], for example, has over 2000 concepts, and NELL [4] has over 2M confident beliefs. Even after pruning to our task, these graphs would still be huge. Forward propagation on the standard GGNN is $O(N^2)$ in the number of nodes $N$ and backward propagation is $O(N^T)$ where $T$ is the number of propagation steps. We perform simple experiments on GGNNs on synthetic graphs and find that after more than about 500 nodes, a forward and backward pass takes over 1 second on a single instance, even when making generous parameter assumptions. On 2,000 nodes, it takes well over a minute for a single image. Clearly, using GGNN out of the box is infeasible.
Our solution to this problem is the Graph Search Neural Network (GSNN). As the name might imply, the idea is that rather than performing our recurrent update over all of the nodes of the graph at once, we start with some initial nodes based on our input and only choose to expand nodes which are useful for the final output. Thus, we only compute the update steps over a subset of the graph. So how do we select which subset of nodes to initialize the graph with? During training and testing, we determine initial nodes in the graph based on likelihood of the concept being present as determined by an object detector or classifier. For our experiments, we use Faster R-CNN [27] for each of the 80 COCO categories. For scores over some chosen threshold, we choose the corresponding nodes in the graph as our initial set of active nodes.
Once we have initial nodes, we also add the nodes adjacent to the initial nodes to the active set. Given our initial nodes, we want to first propagate the beliefs about our initial nodes to all of the adjacent nodes. After the first time step, however, we need a way of deciding which nodes to expand next. We therefore learn a per-node score that estimates how "important" that node is. After each propagation step, for every node in our current graph, we predict an importance score

$i_v^{(t)} = g_i(h_v^{(t)}, x_v)$

where $g_i$ is a learned network, the importance network.
Once we have values of $i_v$, we take the top $P$ scoring nodes that have never been expanded and add them to our expanded set, and add all nodes adjacent to those nodes to our active set. Figure 2 illustrates this expansion. At $t = 0$ only the detected nodes and their neighbors are expanded. At $t = 1$ we expand chosen nodes based on importance values and add their neighbors to the graph. At the final time step ($t = 2$ in this case) we compute the per-node output and reorder the outputs into the final classification net.
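The expansion schedule above can be sketched as follows. This is a simplified, non-learned stand-in: the real importance scores come from the learned network $g_i$ after each propagation step (propagation is omitted here), and all function names and toy values are our own:

```python
def gsnn_expand(adj, init_scores, threshold=0.1, P=5, T=2, importance_fn=None):
    """Sketch of the GSNN node-expansion schedule.

    adj         : dict mapping node -> list of neighbor nodes
    init_scores : dict mapping node -> detector confidence
    Returns the set of active nodes after T steps.
    """
    if importance_fn is None:
        # Stand-in for the learned importance network g_i.
        importance_fn = lambda v: init_scores.get(v, 0.0)
    # Initial nodes: detections above the confidence threshold.
    expanded = {v for v, s in init_scores.items() if s > threshold}
    active = set(expanded)
    for v in list(expanded):
        active |= set(adj.get(v, []))          # neighbors join the active set
    for _ in range(T - 1):
        # Expand the top-P never-expanded nodes by importance.
        candidates = sorted(active - expanded, key=importance_fn, reverse=True)
        newly = candidates[:P]
        expanded |= set(newly)
        for v in newly:
            active |= set(adj.get(v, []))
    return active

# Toy graph: "dog" is detected confidently, "car" is below threshold.
adj = {"dog": ["animal", "tail"], "animal": ["cat"], "tail": []}
score = {"animal": 0.5}.get
active = gsnn_expand(adj, {"dog": 0.9, "car": 0.05}, P=1,
                     importance_fn=lambda v: score(v, 0.0))
print(sorted(active))   # ['animal', 'cat', 'dog', 'tail']
```

Because only a small subset of the graph is ever active, the propagation and output networks run over far fewer nodes than the full graph contains.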
To train the importance net, we assign a target importance value to each node in the graph for a given image. Nodes corresponding to ground-truth concepts in an image are assigned an importance value of 1. The neighbors of these nodes are assigned a value of $\gamma$. Nodes which are two hops away have value $\gamma^2$ and so on. The idea is that nodes closest to the final output are the most important to expand.
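These targets are just a discounted breadth-first distance from the ground-truth nodes. A small sketch (the function name is ours; nodes unreachable from any ground-truth concept are simply given no target here):

```python
from collections import deque

def importance_targets(adj, gt_nodes, gamma=0.3):
    """Target importance = gamma ** (hops from the nearest ground-truth node)."""
    dist = {v: 0 for v in gt_nodes}
    q = deque(gt_nodes)
    while q:                                   # breadth-first search from all gt nodes
        v = q.popleft()
        for u in adj.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return {v: gamma ** d for v, d in dist.items()}

# Toy chain: dog (ground truth) -> animal -> cat.
adj = {"dog": ["animal"], "animal": ["cat"], "cat": []}
t = importance_targets(adj, ["dog"])
print(t)
```

With $\gamma = 0.3$ as in our experiments, the targets fall off quickly with distance, so the importance net learns to stay near ground-truth concepts.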
We now have an end-to-end network which takes as input a set of initial nodes and annotations and outputs a per-node output for each of the active nodes in the graph. It consists of three sets of networks: the propagation net, the importance net, and the output net. The final loss from the image problem can be backpropagated from the final output of the pipeline back through the output net, and the importance loss is backpropagated through each of the importance outputs. See Figure 3 for the GSNN architecture.
One final detail is the addition of a "node bias" to GSNN. In GGNN, the per-node output function $g(h_v^{(T)}, x_v)$ takes in the hidden state and initial annotation of the node $v$ to compute its output. In a certain sense it is agnostic to the meaning of the node. That is, at train or test time, GSNN takes in a graph it has perhaps never seen before, and some initial annotations $x_v$ for each node. It then uses the structure of the graph to propagate those annotations through the network and then compute an output. The nodes of the graph could have represented anything from human relationships to a computer program. However, in vision problems, the fact that a particular node represents "horse" or "cat" will probably be relevant, and we can also constrain ourselves to a static graph over image concepts. Hence we introduce node bias terms that, for every node in our graph, have some learned values. Our output equation is now

$o_v = g(h_v^{(T)}, x_v, n_v)$

where $n_v$ is a bias term that is tied to a particular node in the overall graph. This value is stored in a table and its values are updated by backpropagation.
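A minimal sketch of what the node bias adds: the output net sees a per-node learned scalar alongside the hidden state and annotation. The single-layer linear head and all shapes here are our own toy choices, not the paper's exact output network:

```python
import numpy as np

rng = np.random.default_rng(1)
num_nodes, D = 316, 10
node_bias = np.zeros(num_nodes)               # learned table of n_v, one per graph node
Wg = rng.standard_normal(D + 2) * 0.1         # toy single-layer output net g

def node_output(v, h_v, x_v):
    """o_v = g([h_v, x_v, n_v]); n_v is looked up by the node's identity v."""
    feat = np.concatenate([h_v, [x_v], [node_bias[v]]])
    return float(feat @ Wg)

o = node_output(7, np.ones(D), 0.5)
```

Because `node_bias` is indexed by node identity, gradients flowing into it let the model specialize its output for "horse" versus "cat" even though $g$ itself is shared across all nodes.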
Image pipeline and baselines
Another problem we face in adapting graph networks for vision problems is how to incorporate the graph network into an image pipeline. For classification at least, this is fairly straightforward. We simply take the output of the graph network, reorder it so that nodes always appear in the same order in the final network, and zero-pad any nodes that were not expanded by our algorithm. Therefore, if we have a graph of 316 nodes and each node uses a 5-dim hidden variable, we create a 1580-dim feature vector from the graph. We also concatenate this feature vector with the fc7 layer (4096-dim) of a fine-tuned VGG-16 network [34] and the top score for each COCO category predicted by Faster R-CNN (80-dim). This 5756-dim feature vector is then fed into a 1-layer final classification network which is trained with dropout.
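The reorder-and-pad step can be sketched as follows (dimensions follow the paper; the function name and toy inputs are our own):

```python
import numpy as np

def assemble_features(active, node_states, fc7, coco_scores,
                      num_nodes=316, node_dim=5):
    """Reorder graph outputs into a fixed layout, zero-padding inactive nodes,
    then concatenate with CNN and detector features."""
    graph_feat = np.zeros((num_nodes, node_dim))
    for v, state in zip(active, node_states):
        graph_feat[v] = state                  # node v always lands in slot v
    # 316*5 + 4096 + 80 = 5756-dim final feature vector.
    return np.concatenate([graph_feat.ravel(), fc7, coco_scores])

# Toy inputs: only nodes 3 and 10 were expanded by the GSNN.
feat = assemble_features(active=[3, 10], node_states=np.ones((2, 5)),
                         fc7=np.zeros(4096), coco_scores=np.zeros(80))
print(feat.shape)   # (5756,)
```

The fixed layout is what lets the final classifier attach a consistent meaning to each slot, regardless of which subset of the graph was expanded for a given image.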
For baselines, we compare to: (1) VGG Baseline - feed just fc7 into the final classification net; (2) Detection Baseline - feed fc7 and top COCO scores into the final classification net.
Datasets and Graphs
For our experiments, we wanted to test on a dataset that better represents the complex, noisy visual world with its many different kinds of objects, where labels are potentially ambiguous and overlapping, and categories fall into a long-tail distribution [39]. Humans do well in this setting, but vision algorithms still struggle with it, and this is where we would expect knowledge-based approaches to help the most. To this end, we chose the Visual Genome dataset [15] v1.0.
Visual Genome contains over 100,000 natural images from the Internet. Each image is labeled with objects, attributes and relationships between objects entered by human annotators. Annotators could enter any object in the image rather than from a predefined list, so as a result there are thousands of object labels with some being more common and most having many fewer examples. There are on average 21 labeled objects in an image, so compared to datasets such as ImageNet [28], COCO [19] or PASCAL [7], the scenes we are considering are far more complex.
To evaluate our method, we perform two different experiments. In the first experiment, we evaluate the performance on multi-label classification. In the second experiment, we explore the performance on low-shot recognition.
Multi-Label Classification
For the first experiment, we create a subset from Visual Genome which we call Visual Genome multi-label dataset or VGML. In VGML, we take the 200 most common objects in the dataset and the 100 most common attributes and also add any COCO categories not in those 300 for a total of 316 visual concepts. Our task is then multi-label classification: for each image predict which subset of the 316 total categories appear in the scene. We randomly split the images into a roughly 80-20 train/test split. Since we used pre-trained detectors from COCO, we ensure none of our test images are from COCO training images.
We also use Visual Genome as a source for our knowledge graph. Using only the train split, we build a knowledge graph connecting the concepts using the most common object-attribute and object-object relationships in the dataset. Specifically, we counted how often an object/object relationship or object/attribute pair occurred in the training set, and pruned any edges that had fewer than 200 instances. This leaves us with a graph over all of the images with each edge being a common relationship. The idea is that we would get very common relationships (such as grass is green or person wears clothes) but not relationships that are rare and only occur in single images (such as person rides zebra). See Appendix for more details.
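The counting-and-pruning procedure described above amounts to the following sketch (the data layout and names are our own; in the paper the triples come from Visual Genome's train-split annotations):

```python
from collections import Counter

def build_graph(annotations, min_count=200):
    """Count (subject, predicate, object) triples over the train split and
    keep only edges that occur at least min_count times."""
    counts = Counter()
    for image_triples in annotations:
        counts.update(image_triples)
    return [t for t, c in counts.items() if c >= min_count]

# Toy data: pretend 250 images contain "grass is green" and 3 contain
# "person rides zebra"; only the common relationship survives pruning.
ann = [[("grass", "is", "green")]] * 250 + [[("person", "rides", "zebra")]] * 3
edges = build_graph(ann)
print(edges)   # [('grass', 'is', 'green')]
```

Object-attribute pairs are handled the same way, treating the attribute as the object of a "has attribute" style edge.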
The Visual Genome graphs are useful for our problem because they contain scene-level relationships between objects, e.g. person wears pants or fire hydrant is red, and thus allow the graph network to reason about what is in a scene. However, they do not contain useful semantic relationships. For instance, it might be helpful to know that dog is an animal if our visual system sees a dog and one of our labels is animal. To address this, we also create a version of the graph by fusing the Visual Genome graphs with WordNet [23]. Using the subset of WordNet from [10], we first collect new WordNet nodes not among our output labels, choosing those which directly connect to our output labels and are thus likely to be relevant, and add them to a combined graph. We then take all of the WordNet edges between these nodes and add them to our combined graph (see Appendix for more details on these graphs).
Low-Shot Recognition
The second dataset split, which we call the Visual Genome few-shot multi-label dataset or VGFS, tries to make the problem even more difficult. We would like to have a multi-label dataset where we have only a few examples per class, and classes that have no overlap with ImageNet (used for learning visual features) or COCO (used for activating initial nodes in the graph).
From Visual Genome, we select 100 classes that are distinct from any ImageNet or COCO classes (see Appendix for the complete list). For each class, we hand-select up to five training images per category for the dataset. For the "one-shot" experiments, we only have the 100 images corresponding to one image selected per category, and for 5-shot, we have the 500 images. Note that because it is a multi-label dataset, some classes end up with more than one example: we choose one image per category, but that category may also happen to appear in an image selected for another category. For the 1-shot dataset, 46 categories have one image, 22 have 2, 15 have 3, and 17 have 4 or more. We divide the remaining 53,155 images that have at least one of the 100 categories and are not in the COCO 2014 train split into a test and holdout set. We use the holdout set to create a graph over the 100 new categories as well as the categories from VGML (see Appendix for more details on the dataset and the knowledge graph). This knowledge graph represents common relationships among categories and could be learned from other sources such as NEIL [38]. For our experiments, we also use another version of the knowledge graph obtained by fusing in WordNet knowledge, similar to the VGML setting.
Quantitative Evaluation
We first report results on the VGML dataset. We train the models using ADAM [13] with an initial learning rate of $10^{-3}$ for all networks, except the pre-trained VGG where we use an initial learning rate of $10^{-4}$, and an initial momentum of 0.9, except the GSNN which used 0. We set our GSNN hidden state size to 10, importance discount factor $\gamma$ to 0.3, number of steps $T$ to 2, initial confidence threshold to 0.1 and our expand number $P$ to 5. Our GSNN importance and output networks are single-layer networks with sigmoid activations. All networks were trained for 2,653 iterations with a batch size of 32.

Table 1 shows the result of our method on the multi-label classification task. We show the performance for different training dataset sizes. In all experiments, we see that the GSNN model with the combined graph outperforms all baselines by a significant margin. In the low-data regime, both GSNN nets outperform baselines, likely because the baseline models cannot learn the relationships between all of the output categories, as they have not seen enough examples of many of the categories. In the full-data regime, baselines achieve comparable performance to the GSNN with just the Visual Genome graph but are outperformed by the combined graph. This suggests that including the outside semantic knowledge from WordNet and performing explicit reasoning on a knowledge graph allows our model to learn better representations compared to the other models.

Table 2 shows the result of our method on the few-shot task (VGFS). We use the same learning and network parameters as in our previous experiments, for 20 epochs. We again choose our GSNN hidden state size as 10, our expand number $P$ as 5 for 1-shot and 20 for 5-shot, our importance discount factor $\gamma$ as 0.3, our initial confidence threshold as 0.1 and our number of steps $T$ as 2. The $P$ for 1-shot is chosen lower to avoid overfitting since we had many fewer examples to train with.
The dataset is much smaller and the categories are not in COCO or ImageNet, so all models achieve lower precision. Nevertheless, our GSNN models show improvements over baselines, exploiting the information from our knowledge graphs, especially with the combined graph.

Table 2. Mean Average Precision for multi-label classification on the Visual Genome few-shot dataset. Results shown for 1-shot (100 total images) and 5-shot (500 total images).

As one might suspect, our method does not perform uniformly on all categories, but rather does better on some categories and worse on others. Figure 4 shows the differences in average precision for each category between our GSNN model with the combined graph and the detection baseline for the 1k VGML experiment. Figure 5 shows the same for our 5-shot VGFS experiment. To see this analysis for the other experiments, see Appendix. As expected, since the GSNN models have higher mAP, for the majority of categories the AP improves. Performance on some classes improves greatly, such as "laptop" in our 1k VGML experiment and "runway" in our VGFS 5-shot experiment. Others however, such as "field" in our VGML 1k experiment, perform worse. In the next section, we analyze our GSNN models on several examples to try to gain a better intuition as to what the GSNN model is doing and why it does well or poorly on certain examples.
Qualitative Evaluation
One way to analyse the GSNN is to look at the sensitivities of parameters in our model with respect to a particular output. Given a single image $I$, and a single label of interest $y_i$ that appears in the image, we would like to know how information travels through the GSNN and what nodes and edges it uses. We examined the sensitivity of the output to hidden states and detections by computing the partial derivatives $\frac{\partial y_i}{\partial h^{(1)}}$, $\frac{\partial y_i}{\partial h^{(2)}}$ and $\frac{\partial y_i}{\partial x_{det}}$ with respect to the category of interest. These values tell us how a small change in the hidden state of a particular node affects a particular output. We would expect to see, for instance, that for labeling elephant, we see a high sensitivity for the hidden states corresponding to grey and trunk.
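In a trained network these partials come from backpropagation; the same quantity can be illustrated with a finite-difference probe on a toy output head (the head, its weights and the helper name here are our own illustrative choices):

```python
import numpy as np

def sensitivity(f, h, eps=1e-5):
    """Finite-difference estimate of dy/dh for a scalar output y = f(h).
    A stand-in for the backprop-computed partials in the analysis."""
    grads = np.zeros_like(h)
    for i in range(h.size):
        hp, hm = h.copy(), h.copy()
        hp.flat[i] += eps
        hm.flat[i] -= eps
        grads.flat[i] = (f(hp) - f(hm)) / (2 * eps)
    return grads

# Toy output head: y depends strongly on hidden unit 2, weakly on the rest,
# so unit 2 should dominate the sensitivity vector.
w = np.array([0.01, 0.02, 5.0, 0.03])
f = lambda h: float(w @ h)
g = sensitivity(f, np.zeros(4))
print(int(np.argmax(np.abs(g))))   # 2
```

Ranking nodes by the magnitude of these sensitivities is what produces the "top detections or hidden states" lists shown in the qualitative figures.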
In this section, we show the sensitivity analysis for the GSNN combined graph model on the VGML 1k and VGFS 5-shot experiments. In particular, we examine classes that performed well under GSNN and a few that performed poorly, to try to get a better intuition into why some categories improve more over baselines. More examples for the other experiments are provided in the Appendix. Figure 6 shows the graph sensitivity analysis for the experiments, with the VGML experiment on the left and VGFS on the right, showing four successful detections by the GSNN network and two failures. Each example shows the image, the ground truth output we are analyzing and the sensitivities of the concept of interest with respect to the hidden states of the graph or detections. For convenience, we display the names of the top detections or hidden states. We also show part of the graph that was expanded, to see what relationships GSNN was using.
For the VGML experiment, the top left of Figure 6 shows that using keyboard, laptop, cat and dining table detections, GSNN is able to reason that because laptop and keyboard occur together, it is looking at a desk, making dining table and teddy bear less likely. In the next hidden state, the attributes for laptop, silver and open become more important. The middle left shows that the airplane detection is important. The GSNN in particular uses the "has" connection between airplane and wing. The bottom left shows the failure case for the 1k experiment. It uses the person detection very strongly and in the next hidden states looks at less helpful parts of the graph, such as head and building, that are not related to our output class "field." On the top right, for the VGFS 5-shot experiment, we see that our network highly correlates the airplane detection, and reasons that runway, smoke and sky are also in the figure using the graph. In the middle right we see that for neck, it uses the giraffe detection and the fact that there is a "has" edge in the graph from giraffe to neck. It also weakly uses the couch detection, but the graph reinforces the stronger giraffe detection during expansion. The bottom right shows a misclassified 5-shot example. While the GSNN is sensitive to the pizza detection, which has a connection to onion, it is also fooled by a high vase detection sensitivity and expands to the unrelated nodes bouquet and flower.
Conclusion
In this paper, we present the Graph Search Neural Network (GSNN) as a way of efficiently using knowledge graphs as extra information to improve image classification. We show that even in low-data settings, our model performs well. We provide analysis that examines the flow of information through the GSNN and provides insights into why our model improves performance. We hope that this work provides a step towards bringing symbolic reasoning into traditional feed-forward computer vision frameworks.
The GSNN and the framework we use for vision problems are completely general. Our next steps will be to apply the GSNN to other vision tasks, such as detection, Visual Question Answering and image captioning. Another interesting direction would be to combine the procedure of this work with a system such as NEIL [38] to create a system which builds knowledge graphs and then prunes them to get a more accurate, useful graph for image classification or detection.

Figure 7 shows the graph sensitivity analysis for the VGFS 1-shot experiment on the GSNN combined graph model. The top example shows that it uses the bowl, bottle, pizza, spoon and sandwich detections, many of which connect on the graph to counter, allowing it to reason that the scene contains a counter. The middle example shows that the bed connection to headboard, as well as the person detection to bed, allows it to reason that it is looking at a bed which relates to headboard. In the failed example on the bottom, the GSNN wrongly detects orange, and the other detections person, tie, horse and truck are not correct and don't have any graph connections to flower, so it fails to classify correctly.

Figure 8 shows the graph sensitivity analysis for the VGML full data experiment on the GSNN combined graph model. The top example shows that using the connection between the cat, person and couch detections, it is able to reason that the scene is on a couch or bed, and the connection from both of these to pillow therefore allows it to make the right classification. The middle example shows that it is able to use the connection from dining table, to plate, to cake to reinforce the weak cake detection and correctly classify the image. The failed example on the bottom shows that the network fails to use a detection for dining table, which causes it to fail to find any graph connection to wooden.
It does expand to bench, which has the attribute wooden, but the signal fails to propagate far enough to wooden to make a correct classification.

Figure 9 shows the graph sensitivity analysis for the VGML 5k experiment on the GSNN combined graph model. The top example shows that although train is only the second most strongly used detection, the model is able to reason using the connections between train and tracks that it should classify the image as tracks. The middle example shows that detecting TV and laptop reinforces each hypothesis on the graph, making desk more likely and dining table less likely. The bottom example shows a failed classification where the graph expands on the person detection into less useful states such as car, white, and hair. There are no obvious connections in the graph, such as surfboard to beach, that might have helped with this classification.

Figure 10 shows the graph sensitivity analysis for the VGML 500 experiment on the GSNN combined graph model. The top example shows that, using the airplane detection and the "has" connections from airplane to wing, the GSNN is able to classify the given example as wing. In the middle example, the elephant detection is important, with its obvious connection to trunk. The model is also sensitive to the person detection, which leads to nodes such as visible becoming important, but it is still able to reason correctly that trunk is in the image. In the failed classification on the bottom row, we see that the model uses the dining table and chair detections and even weakly uses the mouse and keyboard detections. Unfortunately, the graph does not contain edges that reinforce a connection between mouse and keyboard, so the graph expands irrelevant nodes such as white and black and fails to classify mouse.
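The reinforcement behavior described above can be illustrated with a toy example. This is only a sketch of the intuition: the actual GSNN uses learned, gated message-passing updates rather than this fixed linear propagation, and the labels, edge set, and weights below are made up for illustration.

```python
import numpy as np

# Toy knowledge graph: the first three labels each share an edge with "counter".
labels = ["bowl", "bottle", "pizza", "counter"]
A = np.array([[0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [1, 1, 1, 0]], dtype=float)

# Detector confidences: strong kitchen-object detections, no direct "counter" hit.
detections = np.array([0.9, 0.8, 0.7, 0.0])

# One propagation step lets the connected detections reinforce "counter".
scores = detections + 0.5 * (A @ detections)
print(labels[int(scores.argmax())])  # -> counter
```

Even with no direct counter detection, the accumulated evidence from its graph neighbors (0.5 × 2.4 = 1.2) outranks every individual detection, which is the qualitative behavior seen in the sensitivity analyses.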
B.1. Dataset and splits
Our dataset splits of Visual Genome into Visual Genome Multi-Label and Visual Genome Few-Shot will be made publicly available.
The classes we use for VGML are:
B.2. Graphs
The graphs we built for our experiments will also be publicly released. Below is some basic information about the graphs we use.

Figure 11. Difference in Average Precision for each of the 316 labels in VGML between our GSNN combined graph model and the detection baseline for the full-data experiment.

Figure 14. Difference in Average Precision for each of the 100 labels in VGFS between our GSNN combined graph model and the detection baseline for the 1-shot experiment. Top categories: beak, tusk, runway, snow, headboard. Bottom categories: road, onion, wave, bacon, sidewalk.
Experimental hybrid quantum-classical reinforcement learning by boson sampling: how to train a quantum cloner
We report on the experimental implementation of a machine-learned quantum gate driven by classical control. The gate learns optimal phase-covariant cloning in a reinforcement learning scenario with the fidelity of the clones as the reward. In our experiment, the gate learns to achieve nearly the optimal cloning fidelity allowed for this particular class of states. This makes it a proof of the present-day feasibility and practical applicability of the hybrid machine learning approach combining quantum information processing with classical control. Moreover, our experiment can be directly generalized to larger interferometers, where the computational cost of the classical computer is much lower than the cost of boson sampling.
INTRODUCTION
Machine learning methods are extensively used in an increasing number of fields, e.g., the automotive industry, medical science, internet security, and air-traffic control. This field encompasses many algorithms and structures, ranging from simple linear regression to almost arbitrarily complex neural networks (limited by computational resources and the amount of training data) which are able to find solutions to highly nonlinear and complex problems. Recently, considerable attention has been drawn to the overlap between quantum physics and machine learning [1]. Depending on the type of input data and the data processing algorithm, we can distinguish four types of so-called quantum machine learning (QML): CC (classical data and classical data processing, the classical limit of quantum machine learning), QC (quantum data and classical data processing), CQ (classical data and quantum data processing), and QQ (quantum data and quantum data processing).
In addition to its fundamentally interesting aspects, QML can offer reduced computational complexity with respect to its classical counterpart in solving some classes of problems [1]. Depending on the problem at hand, the speedup can be associated with various features of quantum physics. A number of proposals and experiments focused on QML have been reported; such works include, for instance, quantum support vector machines [2], Boltzmann machines [3], quantum autoencoders [4], kernel methods [5], and quantum reinforcement learning [6,7]. In reinforcement learning, a learning agent receives feedback in order to learn an optimal strategy for handling a nontrivial task. Next, the performance of the agent is tested on cases that were not included in the training. If the agent performs well in these cases, the validation is completed.
In this paper, we demonstrate experimentally that reinforcement learning can be used to train an optical quantum gate (see the conceptual scheme in Fig. 1). This problem is related to the boson sampling problem [8][9][10][11][12], where one knows the form of the scattering matrix of a system and learns its permanent. Here, however, we optimize the probabilities of obtaining certain outputs of the gate by finding the optimal parameters of the scattering matrix. Calculating the probabilities (permanents of the scattering matrix) is in general a computationally hard task, while measuring them is much faster. This feature of quantum optics allows us to expect that complex integrated interferometers could be applied as special-purpose quantum computers (for a 16-photon quantum interferometer see [13]). This sets our problem in the class of CQ quantum machine learning tasks. There are other QML approaches that could be used to optimize a quantum circuit. One approach uses classical machine learning to optimize the design of a quantum experiment in order to produce certain defined states [14]. Another QML approach consists of optimizing quantum circuits to improve the solution of some problems solved on a quantum computer [15]. The latter method is motivated by recent developments in quantum computing which suggest that using a quantum computer even for minor tasks within an experiment can save computational resources [16,17].
We applied online reinforcement learning methods to train an optimal quantum cloner. Quantum cloning is essential both for certain experimental tasks and for fundamental quantum physics. It is indispensable for security tests of quantum cryptography systems and other quantum communication protocols. Perfect quantum cloning of an unknown state is prohibited by the no-cloning theorem [18]. However, it is possible to prepare imperfect clones that resemble the original state to a certain degree. Usually, the approach to quantum cloning involves explicit optimization of the interaction between the system in the cloned state and other systems to maximize the fidelity of the output clones. One then uses such results explicitly to set the parameters of the experimental setup. In contrast to that, we present a quantum gate (learning agent) that is capable of self-learning such an interaction (policy) based on the provided feedback (implicit setting of the parameters). For the purposes of this proof-of-principle experiment, we limit ourselves to qubits of the form

|ψ_in⟩ = (|0⟩ + e^{iη}|1⟩)/√2, (1)

where |0⟩ and |1⟩ denote the logical qubit states. These qubits lie on the equator of the Bloch sphere; hence, we call them equatorial qubits. Cloning of these states is known to be the optimal means of attack on individual qubits of the famous quantum cryptography protocols BB84 [19] and RO4 [20,21] or the quantum money protocol [22] (see also Refs. [23,24] for experimental implementation).
RESULTS
We demonstrate a reinforcement-learned quantum cloner for a class of phase-covariant quantum states. For the training procedure, the figure of merit is the individual fidelity of the output copies. The fidelity of the j-th clone is defined as the overlap between the state of the input qubit |ψ_in⟩ and the state of the clone ρ̂_j:

F_j = ⟨ψ_in| ρ̂_j |ψ_in⟩. (2)

For the state in Eq. (1), the maximum achievable fidelity of symmetric 1 → 2 cloning is F_1 = F_2 = (1/2)(1 + 1/√2) ≈ 0.8535 [25,26]. We have constructed a two-qubit gate on the platform of linear optics. Qubits were encoded into the polarization states of the individual photons (|0⟩ ⇔ |H⟩ and |1⟩ ⇔ |V⟩). The gate operates formally as a polarization-dependent beam splitter with tunable splitting ratios for horizontal (H) and vertical (V) polarization. This tunability provides two parameters for self-learning, labeled φ and θ throughout the text. The third learnable parameter ω is embedded in the state of the ancillary photon,

|ψ_a⟩ = cos 2ω |H⟩ + sin 2ω |V⟩. (3)

We have experimentally implemented two machine learning models, using two and three parameters, respectively. In the first model, we fixed the ancilla state to its theoretically known optimum |ψ_a⟩ = |H⟩. The remaining two parameters of the gate, φ and θ, were machine learned. To minimize the cost function (i.e., optimize the performance of the cloner) we applied the Nelder-Mead simplex algorithm, which iteratively searches for a minimum of a cost function. We chose a cost function C built from F_1 and F_2, the fidelities of the first and second clone, respectively. This choice reflects the natural requirements of obtaining maximum fidelities of both clones and of forcing the cloner into a symmetric cloning regime. Training of the gate consists of providing it with training instances of equatorial qubit states (randomly generated at each cost-function evaluation, i.e., an online machine learning scenario) and with the respective fidelities of the clones.
In each training run, the underlying Nelder-Mead algorithm sets the gate parameters to the vertices of simplexes in the parameter space and then decides on a future action. In the case of a two-parameter optimization, these simplexes correspond to triangles, as depicted in Figure 2. In this figure, we plot the exact path taken by the Nelder-Mead simplex algorithm to minimize the cost function C for a real experiment and for its simulation. The initial simplex was intentionally chosen well away from the optimal position; its first vertex resembles the trivial cloning strategy [24,28]. In Figure 3a, we illustrate the evolution of both fidelities F_1 and F_2 during the training. After 40 runs (i.e., 40 instances from the training set), this model was deemed trained because the size of the simplexes dropped to the experimental uncertainty level (approximately 0.1° on the rotation angles of the wave plates). In general, however, setting the simplex to converge within a given precision is a nontrivial problem [29]. In the second model, we let the gate learn the optimal setting of the ancilla, ω, along with the gate parameters φ and θ. The training procedure ran similarly to the first model. The initial value of the parameter ω was naively set to ω = π/8, so that the ancilla lay on the equator of the Poincaré-Bloch sphere. We present the evolution of the intermediate fidelities of this three-parameter model in Figure 3b. Using a similar stopping criterion as in the first model, the training of the second model was terminated after 60 runs.
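The optimization loop can be sketched as follows. Everything here is a stand-in: the real fidelities come from coincidence measurements on the setup, the exact cost function of the experiment is not reproduced, and the smooth fidelity model peaking at the known optimum (φ ≈ 31.3, θ ≈ 13.7) is invented purely so the sketch runs; it only shows how a Nelder-Mead search drives the gate parameters toward the optimum.

```python
import numpy as np
from scipy.optimize import minimize

F_OPT = 0.5 * (1 + 1 / np.sqrt(2))  # theoretical fidelity limit, ~0.8535

def measured_fidelities(phi, theta):
    """Stand-in for the experimentally measured clone fidelities.

    Hypothetical smooth model peaking at phi ~ 31.3, theta ~ 13.7;
    a real run would query the experimental setup instead.
    """
    f1 = F_OPT - 1e-4 * (phi - 31.3) ** 2 - 2e-4 * (theta - 13.7) ** 2
    f2 = F_OPT - 2e-4 * (phi - 31.3) ** 2 - 1e-4 * (theta - 13.7) ** 2
    return f1, f2

def cost(params):
    f1, f2 = measured_fidelities(*params)
    # reward high fidelities and penalize asymmetry between the clones
    return (1 - f1) + (1 - f2) + abs(f1 - f2)

res = minimize(cost, x0=[0.0, 45.0], method="Nelder-Mead",
               options={"xatol": 0.1})  # stop once the simplex shrinks to ~0.1
print(res.x)  # converges near (31.3, 13.7)
```

The `xatol` option mirrors the stopping criterion of the experiment, where training terminates once the simplex size drops to the wave-plate positioning uncertainty.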
We have tested the performance of both our models on independent random test sets, each populated by 40 instances of equatorial states. We summarize the results of the two models in Table I, where we provide the final learned parameters together with the mean values of the fidelities on the test sets, F_1 and F_2. The observed fidelities on the test sets border on the theoretical limit (at most 0.013 below it), which renders our gate highly precise in the context of previously implemented cloners [25,[30][31][32][33][34].
EXPERIMENTAL REALIZATION
We constructed a device composed of a linear optical quantum gate and a computer performing classical information processing. While the gate itself is capable of a broad range of two-qubit transformations, this paper focuses on its ability to act as a phase-covariant quantum cloner. The experimental setup is depicted in Fig. 4. Pairs of photons are generated by Type I spontaneous parametric down-conversion in a nonlinear BBO crystal. This crystal is pumped by a Coherent Paladin Nd:YAG laser with integrated third-harmonic generation at a wavelength of λ = 355 nm. The generated photon pairs are both horizontally polarized and highly correlated in time. These photons are then spectrally filtered by 10 nm wide interference filters and spatially filtered by two single-mode optical fibers, each guiding one photon of the pair. In our experimental setup, qubits are encoded into the polarization states of the individual photons. The photon in the upper path (spatial mode 2) represents the signal qubit, whose quantum state we want to clone, and the photon in the lower path (spatial mode 1) serves as the ancilla.
Using polarization controllers (PC), we ensure that both photons are horizontally polarized at the output of the fibers. From this point on, the polarization states of the photons are set using a combination of half-wave plates (HWPs) and quarter-wave plates (QWPs). There are eight wave plates in total: two stationary QWPs fixed at an angle of 45° and six motorized HWPs, which make it possible to control the whole quantum gate from a computer. The first two half-wave plates, HWP1 and HWP2, are used to set the input polarization states of the photons. The core part of the presented quantum gate is a Mach-Zehnder-type interferometer consisting of two polarizing beam splitters (PBS) and two reflective pentaprisms, one of which is attached to a piezoelectric stage (PS). With the addition of two HWPs (HWP3 and HWP4) placed in its arms, this whole interferometer implements a polarization-dependent beam splitter with a variable splitting ratio. Mathematically, the scattering matrix of the gate, given in Eq. (5), is formally equivalent to the transformation performed by a polarization-dependent beam splitter whose intensity splitting ratios for horizontal and vertical polarizations are cot² 2φ and cot² 2θ, respectively. The two spatial modes at the output of the interferometer are subjected to polarization projection (QWP5, QWP6 and HWP7, HWP8) and then led via single-mode optical fibers to a pair of avalanche photodiodes by Perkin-Elmer running in Geiger mode. We use detection electronics to register both single photons at each of the detectors and coincident detections, as successful operation of the gate is indicated by the presence of a single photon in each output of the interferometer. The electronic signal is then sent to a classical computer.
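The polarization-dependent beam splitter action can be sketched as a block unitary on the four modes (two spatial arms × two polarizations). The amplitude convention below (real transmission amplitude cos 2φ for H and cos 2θ for V) is an assumption chosen to reproduce the stated intensity splitting ratios cot²2φ and cot²2θ; the paper's Eq. (5) may differ by phases.

```python
import numpy as np

def pdbs_unitary(phi, theta):
    """Polarization-dependent beam splitter on modes (1H, 1V, 2H, 2V).

    Assumed convention: real beam-splitter blocks with transmission
    amplitude cos(2*phi) for H and cos(2*theta) for V, which gives the
    intensity splitting ratios cot^2(2*phi) and cot^2(2*theta).
    """
    def block(a):  # 2x2 beam-splitter block for one polarization
        c, s = np.cos(2 * a), np.sin(2 * a)
        return np.array([[c, s], [s, -c]])
    U = np.zeros((4, 4))
    U[np.ix_([0, 2], [0, 2])] = block(phi)    # H modes of the two arms
    U[np.ix_([1, 3], [1, 3])] = block(theta)  # V modes of the two arms
    return U

phi, theta = np.deg2rad(31.3), np.deg2rad(13.7)
U = pdbs_unitary(phi, theta)
assert np.allclose(U @ U.T, np.eye(4))  # the transformation is unitary
# splitting ratio T/R for H equals cot^2(2*phi) by construction:
assert np.isclose(np.cos(2 * phi)**2 / np.sin(2 * phi)**2,
                  1 / np.tan(2 * phi)**2)
```

Because the H and V blocks act on disjoint mode pairs, the two polarizations mix only the spatial modes, never each other, which is exactly the polarization-dependent beam splitter behavior described above.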
For specific parameters of the presented linear-optical elements, this quantum gate functions as a 1 → 2 symmetric phase-covariant cloner, the optimal analytical cloning transformation of which is well known [25]. On a linear-optical platform, this optimal cloning transformation can be achieved by a polarization-dependent beam splitter with intensity transmittivities for horizontal and vertical polarization of t_H ≈ 0.19 and t_V ≈ 0.81, while setting the ancilla to be horizontally polarized. Note that our quantum gate is capable of implementing this transformation when set approximately to φ = 31.3° and θ = 13.7°; the maximum theoretical fidelities of the clones are then F_1 = F_2 ≈ 0.854. To showcase the capability of our gate to learn to clone phase-covariant states optimally, we deliberately ignore this analytical solution and employ a self-optimization procedure seeking to maximize the cloning fidelities. The optimization process consists of a number of measurements (runs), each performed for a set of variable optimization parameters φ, θ, ω: the variable splitting ratios for horizontal (φ) and vertical (θ) polarization, as well as the state of the ancilla (ω) (Eq. 3) controlled by the rotation of HWP1. In each run, the fidelities of the output clones are evaluated and supplied to the classical Nelder-Mead algorithm for a decision about the parameters of the future runs.
In between any two runs, the setup is stabilized. We first minimize the temporal delay between the two individual photons. In this case, all HWPs are set to 0°, with the exception of HWP4, which is set to 22.5°. In this regime, we minimize the number of two-photon coincident detections (the Hong-Ou-Mandel dip) by changing the temporal delay between the photons using a motorized translation stage (MT). In the next step, the phase in the interferometer is stabilized. Moreover, we make use of the fact that the phase shift in the interferometer contributes additively to the phase η of the signal state (Eq. 1). This allows us to use the interferometer phase stabilization to set any signal state of the equatorial class. We achieve this task by setting HWP2 to 22.5° and HWP8 to the value corresponding to the state orthogonal to the required input signal state. All other HWPs are set to 0°, and a minimum in single-photon detections on Det 2 is found by tuning the voltage applied to the PS. Note that the entire stabilization procedure is completely independent of the learning process itself.
While all six motorized HWPs are controlled by a PC, only three (HWP1, HWP3 and HWP4) are controlled by the optimization algorithm. In contrast, HWP2 is used to set the quantum state of the cloned qubit, and HWP7 and HWP8 are used to choose the polarization projections; their configuration is therefore not accessible to the optimization algorithm.
In this reinforcement-learning scenario, the cloner is trained on a sequence of random equatorial signal states (Eq. 1), different for each run. The phase η is randomly picked from the interval (0, 2π). The optimization algorithm then rotates HWP1, HWP3 and HWP4 to the chosen angles φ, θ, ω. Finally, the cloner is fed back the measured fidelities F_1, F_2 of the clones.
The fidelities are obtained by measuring coincident detections in four different projection settings that correspond to the angles set on HWP7 and HWP8. We label these coincident detections cc_ij, where i, j ∈ {∥, ⊥}. The ∥ and ⊥ signs denote projection onto the signal state |ψ_s⟩ and onto its orthogonal counterpart |ψ_s^⊥⟩, respectively. We calculate the fidelities as

F_1 = (cc_∥∥ + cc_∥⊥)/Σ, F_2 = (cc_∥∥ + cc_⊥∥)/Σ,

where Σ denotes cc_∥∥ + cc_∥⊥ + cc_⊥∥ + cc_⊥⊥. The core part of the optimization process is the Nelder-Mead simplex algorithm, which minimizes a chosen cost function C in a multidimensional space whose dimension corresponds to the number of function parameters N [27]. The algorithm takes (N + 1) points in the parameter space to create an N-dimensional initial simplex (each point corresponds to one of the simplex vertices). For example, with 2 parameters being optimized the algorithm creates a triangle, for 3 parameters it creates a tetrahedron, and so on. The value of the cost function at each vertex of the simplex is then evaluated, and the algorithm transforms the simplex in such a way as to find a point of local minimum. In our case, we define the cost function C so that its first two elements maximize the obtained fidelities, while the last element enforces symmetry of the cloning.
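The fidelity estimation from the four coincidence settings can be written as a small helper. The mapping of the first index to clone 1 and the second to clone 2 is an assumption about the labeling convention, and the counts are made-up example values.

```python
def fidelities_from_counts(cc):
    """Estimate clone fidelities from coincidence counts.

    cc maps (i, j) with i, j in {"par", "perp"} to coincidence counts,
    where "par" means projection onto the signal state and "perp" onto
    its orthogonal counterpart; i refers to clone 1, j to clone 2 (assumed).
    """
    total = sum(cc.values())
    f1 = (cc[("par", "par")] + cc[("par", "perp")]) / total  # clone 1 "par"
    f2 = (cc[("par", "par")] + cc[("perp", "par")]) / total  # clone 2 "par"
    return f1, f2

# Made-up counts roughly matching the optimal symmetric regime (F1 = F2 ~ 0.85):
cc = {("par", "par"): 730, ("par", "perp"): 123,
      ("perp", "par"): 123, ("perp", "perp"): 24}
f1, f2 = fidelities_from_counts(cc)
print(round(f1, 3), round(f2, 3))  # -> 0.853 0.853
```

Normalizing by the total coincidence count makes the estimate independent of the overall pair-generation rate, which drifts between runs.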
CONCLUSIONS
In our proof-of-principle experiment, we implemented a CQ reinforcement quantum machine learning algorithm driven by a hybrid of the classical Nelder-Mead method and quantum-computing-based permanent measurement. This approach was used to train a practical quantum gate (i.e., a quantum cloner). The task of the training was to optimize the parameters of the gate (interferometer), φ, θ and ω (the setting of the ancilla in the second experiment), to perform phase-covariant cloning. The quality of both clones, measured by their fidelities F_1 and F_2 evaluated in both experiments (Figure 3), successfully reached the theoretical limit for phase-covariant cloning of 0.854. Remarkably, the cloner managed to achieve almost optimal cloning by learning setup parameters, slightly different from the analytical values, that counter all experimental imperfections, including imperfections in the cloner itself and in the input state preparation.
To see the connection between boson sampling and our results, let us focus on computing the permanent Perm of the scattering matrix describing the gate operation. The unitary scattering matrix U performs a linear transformation on the annihilation operators a_i of the input modes (i can be an index labeling both the polarization and the spatial degrees of freedom), so the input-output relation of a quantum-optical interferometer is given as a_j^out = Σ_i U_ji a_i^in. If all the input modes of an interferometer are injected with single photons and single photons are detected at specific outputs (no bunching), the probability of obtaining the desired detection coincidence is p = |Perm U|². However, this expression becomes more complex if some modes are occupied by more than one photon: factorials of the mode-specific photon numbers then appear in the denominator, and the respective rows/columns of U must be repeated a corresponding number of times [8]. If some output modes are not to be populated, the respective rows of the U matrix are deleted. Calculating the permanents of the scattering matrix associated with our cloner by hand is already challenging (we have polarization and spatial degrees of freedom for two photons), and in general the task falls into the #P-hard complexity class. Our experiment can be directly generalized to larger interferometers, where the computational cost of the classical computer is much lower than the cost of boson sampling. This makes our research a relevant application of so-called quantum circuit learning described in Ref. [35].
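The coincidence-probability formula p = |Perm U|² can be checked numerically for a small interferometer. The brute-force permanent below is only practical for small matrices, which is exactly the point about #P-hardness; the balanced beam-splitter example reproduces the Hong-Ou-Mandel effect.

```python
import itertools
import numpy as np

def permanent(U):
    """Permanent via the definition: sum over all permutations, O(n! * n)."""
    n = U.shape[0]
    return sum(np.prod([U[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

# One photon in each input of a balanced (50:50) beam splitter:
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
p_coincidence = abs(permanent(U)) ** 2
print(p_coincidence)  # -> 0.0 (Hong-Ou-Mandel: the photons always bunch)
```

The vanishing coincidence probability is the same Hong-Ou-Mandel dip the experiment uses to zero the temporal delay between the photons during stabilization.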
Our results also open up possibilities for further research and applications in the field of quantum key distribution. Consider a typical attack on a key distribution scheme: Alice and Bob exchange quantum states while the attacker Eve eavesdrops on them, and via a classical line Alice and Bob can decide to stop exchanging qubits (because of noise). Let us assume that Eve is eavesdropping on both the quantum and the classical communication. Eve can in principle use reinforcement learning to train a cloner to perform the attack by feeding it with information on the behavior of Bob and Alice, e.g., their decision on continuing or aborting the exchange of a quantum key and/or their decision on the parameters of privacy amplification. For such an application the proposed gate would have to be modified, since Eve does not know the specific class of states used by Bob and Alice, but that is beyond the scope of this paper.
Factors Affecting Design and Management of Residential Community for Enhancing Well-being of Thai Early Stage Elderly
This article presents factors related to environmental perception, residential community, and design of the healing environment which affect Thai early stage elderly perceptions and requirements. Structural equation modeling was used to assess influences among the factors. Empirical data were collected from 419 samples of Thai early stage elderly. The research revealed that the requirements of the residential community fall into three groups: (1) general facilities and activities, (2) facilities related to health and security, and (3) facilities related to physical exercise. Results showed that the factors of the elderly's requirements have direct effects on willingness to join in the project activities of the residential community. The results informed policies for the design and management of residential communities that enhance the quality of life, both physically and mentally, of the Thai early stage elderly.
Introduction
The United Nations anticipates that by 2050, the number of elderly people, defined as those older than 60 years, will reach 2 billion worldwide. This poses challenges for rich and poor countries alike. Thailand also faces this challenge as its population ages. As a result of modern medical technology and improved public health policy and planning, the death rate in Thailand is declining, leading to an aging population. In its annual report for 2012, the Thailand National Committee on the Elderly [1] reported that the proportions of Thai elderly over 60 years in 2020, 2030, and 2040 will be 19.1%, 26.6%, and 32.1%, respectively. The key question is therefore how Thailand's government can address this scenario. In response, the National Committee on the Elderly under the Ministry of Social Development and Human Security Thailand [2] created a formal structure to define the condition of the elderly in Thailand. Firstly, elderly people defined as having good living standards are in good physical and mental health; have a happy family life; have access to a social care-friendly environment; have security and stability; have access to welfare and social services; lead valuable, independent lives with dignity; participate in the lives of family and community; and continue to follow news and information sources. Secondly, the family and the community are a key source of support for the elderly, acting as an efficient and enabling institution. Thirdly, the system of welfare and services guarantees a higher standard of living and allows the elderly to participate fully in family and community life. Fourthly, all involved parties and sectors must work to ensure the accessibility of the welfare and services system so the elderly can use these services in a safe and supported manner with the protection of the system.
Lastly, the correct approach shall be implemented in order to recognize the needs of all elderly people, including those facing difficulties, to ensure they are welcomed to participate fully in the community in all areas. This structure leads to the next issue, which is where the elderly will live once they have retired and are in the physical twilight of their lives. At this stage, they require help from their children, or other people, whenever they need to visit a hospital or seek medical assistance. The ideal solution may be a residential community that enhances elderly well-being. This research sought to examine a facility which would provide for all the needs of the elderly, which is well managed, where the elderly receive medical care when they are sick, where they can enjoy a happy social life and be a part of society, and where they can live conveniently and safely for the remainder of their lives. The aim of this study is to determine how such a place should operate. This first requires a study of the needs of the early stage elderly in Thailand in order to apply principles of environmental design and residential community management through the use of structural equation modeling (SEM). To achieve these goals, the study first determines the factors comprising the requirements of the elderly in Thailand, along with a means of measuring these. Then an SEM is created to show the associations between these factors and the willingness of the elderly to join such a residential community. Recommendations can then be made for residential community design and management so that the well-being of the Thai elderly can be enhanced.
Elderly
The National Committee on the Elderly under the Ministry of Social Development and Human Security Thailand [2] explained clearly that "the elderly are not a vulnerable nor social burden, but able to take part as the social development resources, so they shall be entitled to recognition and support by the family, community and the state to lead a valuable life with dignity and sustain their healthiness and living standards as long as possible". Meanwhile, the World Health Organization (WHO) [3] explained that, throughout the world in general, it is commonly accepted that the elderly can be defined as people aged 65 or older. However, this is a concept which may apply well in westernized nations but is less appropriate in Africa. Although the choice of number may seem arbitrary, it is also the age at which many countries begin to pay pensions. The UN offers no specific age definition, but has stated that 60 years is the agreed age at which the "older population" begins. The Act on the Elderly B.E. 2546 Thailand [4] sets out three different age groups: early stage elderly (60-69 years), who are still able to look after themselves; middle stage elderly (70-79 years); and late stage elderly (80 years and older), whose health deteriorates further, including disability and degeneration. This paper builds upon earlier studies [5], [6] which examined the factors associated with the environmental perceptions of such residential communities and the design of healing environments able to positively affect Thai elderly residents in the early stage, aged 60-69. This study focuses in particular on the early stage elderly.
Healing environment
The WHO [7] originally defined health in a manner which did not incorporate the notion of spirituality, but this was adjusted in 1998 to create a definition covering the four dimensions of physical, mental, and social health, along with spiritual well-being. Meraviglia [8] defined spirituality as the expression and experience of the spirit in a manner which reflects faith in a supreme being or god, along with a feeling of self-connectedness integrating mind, body, and spirit. In elderly adults, this concept can be linked to an improved quality of life [9]. In Thai culture, the idea of spiritual well-being is connected to a general respect for the elderly and also involves Buddhism. It may involve an appreciation of religious practices such as pouring water upon the hands of respected older people, requests for blessings, and participation in other rituals or meditation [10]. Barrera-Hernandez et al. [11] added that the notion of the spiritual environment encompasses the links between the physical and intangible elements which form a positive spiritual environment, creating human well-being, environmental quality, and sustainable activities. The Cambridge Dictionary [12] defines a spiritual home as a place where an individual has a sense of belonging, despite not having been born in that place, probably as a result of connections with the culture, people, and way of life. This definition fits the aims of this research to enhance elderly well-being in the spiritual and cultural environment of Thailand. Jonas and Chez [13] took the view that the future of medical management in the context of chronic diseases will lie in a focus on healing if a sustainable health care approach is to be employed.
Healing can be defined as an ongoing process of repair and recovery, and forms the foundation of a new medical vision which brings together diverse techniques from all over the world to relieve suffering, treat chronic conditions, and improve well-being. Healing requires suitable attitudes and intentions on the part of both the recipient and the provider of care, along with the use of self-care activities on the part of the elderly. The aim is to form healing relationships, making use of a deeper understanding of health promotion and maintenance, and of suitable ways to effectively combine complementary and conventional medical techniques. Nelson et al. [14] explained that the "healing environment" can be considered similar to the therapeutic environment, which is typically designed to make use of the latest medical advances to provide high-quality patient care in a safe and secure setting, while inviting the family of the patient to collaborate in creating a psychosocially beneficial environment. From this description, it might be concluded that a spiritual environment enhancing the well-being of the Thai early stage elderly might incorporate many aspects of the therapeutic or healing environment. These concepts should be compared and further analyzed using the literature in the Thai context, and the findings incorporated in the design of the research questionnaire.
Environmental design
Plunz [15] described environmental design as a system of taking into consideration the surrounding environmental parameters as plans and policies are formulated to develop buildings or products. Environmental design also plays a part in the arts and sciences when considering the human-designed environment. Such fields would include urban planning, interior design, and landscape architecture, as well as architecture and geography. Environmental design involves relating the physical surroundings to the needs of human activity, and therefore this context would include parks and buildings on a smaller scale, and whole community spaces on a larger scale. Environmental design can also be defined as the physical and constructed environment where people live their daily lives. It takes into account the experiences of the users of the designed space as well as their aesthetic perceptions of the environment created. In the context of this study, environmental design must be researched in order to meet the needs of the early stage elderly in Thailand.
Residential community
Paul et al. [16] explained that any community can be described as a social unit located in a particular area and sharing common values. It can be of any size, but will typically comprise a group of people who are connected for reasons other than family ties, and who value those connections in a social context. The WHO Regional Office for Europe [17] gave the definition of a community residential health facility as a non-hospital, community-based mental health facility which offers full-time accommodation and care for those persons who suffer from mental health problems. Such facilities might include supervised housing in group homes without staff, group homes which have a number of residential staff or visiting staff, hotels which provide staff by day and night, homes or hostels which offer nursing staff full-time, or simply therapeutic communities or halfway houses. These facilities would include both public and private institutions, and some would seek profits while others would not. Perkins et al. [18] developed a textbook covering the main considerations in building design for elderly people, and noted that there are a number of features which must be offered in skilled-nursing facilities for the elderly. These include a multipurpose room, library, coffee shop or snack bar, gift shop, outdoor seating areas, recreation facilities, art or activity rooms, clinics, and rehabilitation centers. The book added that where communities are designed for senior adults, it is important to create landscaped areas for walking, while facilities for leisure activities such as golf, lawn sports, fishing, gardening and so forth should be offered.
Factor identification and questionnaire design
The related factors and their indicators were identified. A literature review was undertaken of related theories and concepts from many sources such as textbooks, research articles, on-line databases, and annual reports. All factors were summarized and listed to compare similarities and differences in the meanings of their components. The listed factors were consolidated to saturation, categorized by the researcher, and prepared for analysis in the next section. A questionnaire survey was designed for data collection to confirm the factors or items of requirements in the residential community. The subjects comprised a group of Thai early stage elderly. The questionnaire was designed in three parts by referring to the listed factors. The first part requested general information about the respondents, including gender, marital status, age, health condition, education, and economic status (6 questions). The second part concerned their expected requirement items in the residential community (30 questions), while the last part identified the elderly's willingness to participate in residential community activities (3 questions). The first part was measured by frequency (percentage) of respondents, while the second and third parts were assessed on a 5-level Likert scale from 'strongly disagree' to 'strongly agree'. The questionnaire items are listed in Table 1.
Validity and reliability
To ensure that the research items were appropriate for the elderly, interviews were held with five experts who had relevant experience in elderly behaviors and residential communities. The experts reviewed the items and assessed whether they were accurate representations of the constructs in the research model. They also suggested some items which were more appropriate in the context of the research. This exercise provided content validity and ensured that the items were neither ambiguous nor confusing. Cronbach's alpha was used to evaluate the reliability of the questionnaire. A pilot study was conducted on 30 target elderly to assess reliability. The coefficients for location, activity, and facility were 0.942, 0.953, and 0.850 respectively, and the value for all items (Q1-Q30) was 0.964. All coefficients were above 0.7, demonstrating that the questionnaire was reliable [19].
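As an illustration of the reliability check described above, the sketch below computes Cronbach's alpha from a hypothetical pilot-study response matrix (respondents × Likert items). The data are invented and the function is a generic implementation of the standard formula, not the software used in the study.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses (1-5) from a small pilot group
pilot = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
alpha = cronbach_alpha(pilot)   # values of 0.7 or above are conventionally acceptable
```

In a real pilot of 30 respondents, the same function would be applied to each factor's items separately and to all 30 items together, as reported above.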
Data collection
Once the questionnaire was designed, a target group of Thai early stage elderly (aged 60 to 69 years) was selected using a convenience (non-probability) sampling technique. Almost all of the respondents lived in Bangkok, while some resided in large cities surrounding the Bangkok area. The survey period was three months. Face-to-face interviews were conducted to explain the details of the questionnaire and ensure that the respondents understood the purpose of the survey. In total, 500 questionnaires were completed, with 81 rejected due to incomplete or biased responses. As such, 419 data sets were accepted as valid and used for analysis in the next section.
Exploratory factor analysis
Exploratory factor analysis (EFA) with varimax rotation was implemented to categorize the 30 items (Q1-Q30) and determine the underlying factor structure of the environmental design and management of the residential community. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was .97 (KMO > .7) [20], Bartlett's test of sphericity was significant (p = .000, less than .05), approx. Chi-Square was 11805.305, and df was 435. Factor loading values of less than 0.5 were eliminated. The EFA output showed that the 30 items (Q1-Q30) were grouped into three factors: factor 1 comprised Q2, Q4, Q6, Q7, Q8, Q9, Q13, Q14, Q15, Q16, Q17, Q23, Q26, Q28, Q29, and Q30; factor 2 comprised Q1, Q3, Q5, Q10, Q11, Q12, and Q25; and factor 3 comprised Q18, Q19, Q20, Q21, Q22, Q24, and Q27. Furthermore, the reliability of the questionnaire in terms of the three factors was assessed on the basis of Cronbach's alpha coefficient. According to Nunnally [19], Cronbach's alpha coefficients of 0.7 or higher are recognized as acceptable. The coefficients were acceptable for all three factors, at 0.923, 0.939, and 0.962 respectively, and the value for all items (Q1-Q30) was 0.973. Outputs for all the questionnaire items are shown in Table 2, where the EFA results indicate that the requirements in the residential community of the Thai early stage elderly could be classified into three factors. These three factors were named by considering the majority of the included items: factor 1 'activities and general facilities', factor 2 'health and security', and factor 3 'exercises'.
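The KMO statistic reported above can be computed directly from the item correlation matrix and its anti-image (partial) correlations. The sketch below is a generic implementation applied to synthetic data with one common factor; it is for illustration only and is not the software used in the study.

```python
import numpy as np

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    r = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    inv = np.linalg.inv(r)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale                   # anti-image (partial) correlations
    np.fill_diagonal(r, 0.0)
    np.fill_diagonal(partial, 0.0)
    r2, p2 = (r ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)                    # values above .7 indicate adequacy

# Synthetic example: four items driven by one common factor
rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
items = factor + 0.4 * rng.normal(size=(300, 4))
adequacy = kmo(items)
```

When items share strong common variance, as in this synthetic example, the partial correlations are small relative to the raw correlations and the KMO value approaches 1, matching the .97 reported for the questionnaire data.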
Research hypotheses
To better understand the elderly's expected requirements in their residential community, research hypotheses were formulated. Once EFA had been performed, the three factors of the elderly's requirements in the residential community, 'activities and general facilities', 'health and security', and 'exercises', were treated as independent variables [21] with direct effects on the dependent variable, the elderly's willingness to join in with the activities of the residential community. This concept is presented as the conceptual research model (Figure 1). Three research hypotheses were developed as follows: H1: The factor of activities and general facilities has an effect on the elderly's willingness to join in with the activities of the residential community. H2: The factor of health and security has an effect on the elderly's willingness to join in with the activities of the residential community. H3: The factor of exercises has an effect on the elderly's willingness to join in with the activities of the residential community.
Demographic information
Data profiles of the 419 Thai early stage elderly respondents were analyzed in terms of demographics as shown below in Table 3.
Structural equation modeling
Structural equation modeling (SEM) was used for data analysis. Byrne [22] stated that SEM can explain influences or effects of latent variables (factors) on other latent variables (factors) in the model. Here, once the EFA had been performed, the latent variable named 'willingness', with its three observed variables W1 (interest in the community), W2 (willingness to live in the community), and W3 (willingness to recommend the community to others), was added to the model. The first output of the model analysis did not fit the data; the software suggested that some variables (items) should be deleted from the model and some covariance relations should be added, and these suggestions were implemented. The research hypotheses were then tested using the outputs of the adjusted model. Table 4 presents the test results for each factor and variable with significant effects (p < .05; *** = p < .001). The SEM results in Figure 2 and Table 4 show path coefficients as standardized regression weights. All hypotheses (H1, H2, and H3) were significantly supported at the .05 level.
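A full SEM estimates latent variables and measurement loadings jointly; the sketch below is a deliberately simplified stand-in that scores each factor as the mean of hypothetical item responses and fits only the structural paths by ordinary least squares, purely to illustrate how factor scores relate to a willingness outcome. All item values and coefficients are invented, with signs chosen to mirror the study's findings (a negative path for 'activities and general facilities').

```python
import numpy as np

rng = np.random.default_rng(0)
n = 419                                  # matching the study's sample size
# Hypothetical Likert-style item scores for the three factors
act  = rng.normal(3.5, 1.0, (n, 4))      # 'activities and general facilities'
heal = rng.normal(4.0, 1.0, (n, 2))      # 'health and security'
exer = rng.normal(4.0, 1.0, (n, 4))      # 'exercises'
# Simulated willingness with an assumed negative activities path
will = (-0.3 * act.mean(1) + 0.5 * heal.mean(1)
        + 0.4 * exer.mean(1) + rng.normal(0, 0.5, n))

# Structural part only, approximated by OLS on mean-scored factors
X = np.column_stack([np.ones(n), act.mean(1), heal.mean(1), exer.mean(1)])
beta, *_ = np.linalg.lstsq(X, will, rcond=None)
# beta[1] < 0 and beta[2], beta[3] > 0 recover the simulated path signs
```

Unlike this shortcut, SEM software additionally estimates measurement error and model fit indices, which is why the study's first model could "not fit" and required modification.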
Conclusions
Factors related to environmental perception, residential community, and design of the healing environment which have effects on Thai early stage elderly requirements were studied. All of the elderly in this study resided in metropolitan areas; profiles of the respondents are shown in Table 3. The requirements of the elderly were classified into three factors: 'activities and general facilities', 'exercises', and 'health and security'. The factor of 'activities and general facilities' comprised four variables: the requirements of 'building maintenance service' (Q9), 'health food shop' (Q4), 'beauty salon' (Q6), and 'cleaning service' (Q8). The factor of 'health and security' comprised two variables: the requirements of 'calm and natural environment' (Q1) and 'near hospitals' (Q3). The factor of 'exercises' comprised four variables: the requirements of 'garden and outdoor patio' (Q21), 'indoor activities' (Q22), 'recreational activities' (Q27), and 'sidewalks and bike lanes' (Q24). In Figure 2, the factor of 'activities and general facilities' had a negative effect on 'willingness'; thus, the Thai early stage elderly who required 'activities and general facilities' were not willing to join in with the activities of the residential community. The 'health and security' factor had a positive effect on 'willingness', and Thai early stage elderly who required 'health and security' were willing to join in with the activities of the residential community. Finally, the factor of 'exercises' had a positive effect on 'willingness', meaning that Thai early stage elderly who required 'exercises' were also willing to join in with the activities of the residential community. From these results, Thai early stage elderly in good health do not require all facilities and activities in the residential community. The most important requirements in terms of 'activities and general facilities' were 'building maintenance service' and 'health food shop'.
The most important requirements in terms of the 'health and security' were 'calm and natural environment' and 'near hospitals'. The most important requirements in terms of 'exercises' were 'garden and outdoor patio' and 'indoor activities'. In the aspect of willingness to join in with the activities of the residential community, the factor which had the highest effect on the elderly was not 'activities and general facilities' or 'exercises' but 'health and security'.
Pharmacokinetics of Orally Administered Prednisolone in Alpacas
This study aimed to determine the pharmacokinetics of prednisolone following intravenous and oral administration in healthy adult alpacas. Healthy adult alpacas were given prednisolone intravenously (IV, n = 4) as well as orally (PO, n = 6). Prednisolone was administered IV once (1 mg/kg); oral administration was once daily for 5 days (2 mg/kg). Each treatment was separated by a minimum 4 month washout period. Samples were collected at 0 (pre-administration), 0.083, 0.167, 0.25, 0.5, 0.75, 1, 2, 4, 8, 12, and 24 h after IV administration, and at 0 (pre-administration), 0.25, 0.5, 0.75, 1, 2, 4, 8, 12, and 24 h after the first and fifth PO administrations. Samples were also taken for serial complete blood count and biochemistry analysis. Prednisolone concentration was determined by high pressure liquid chromatography, and non-compartmental pharmacokinetic parameters were then determined. After IV administration, clearance was 347 mL/kg/h, elimination half-life was 2.98 h, and area under the curve was 2,940 h*ng/mL. After the first and fifth oral administrations, elimination half-life was 5.27 and 5.39 h, maximum concentration was 74 and 68 ng/mL, time to maximum concentration was 2.67 and 2.33 h, and area under the curve was 713 and 660 h*ng/mL, respectively. Oral bioavailability was determined to be 13.7%. Packed cell volume, hemoglobin, and red blood cell counts were significantly decreased 5 days after the first PO administration, and serum glucose was significantly elevated 5 days after the first PO administration. In conclusion, serum concentrations of prednisolone after IV and PO administration appear to be similar to other veterinary species. Future research will be needed to determine the pharmacodynamics of prednisolone in alpacas.
INTRODUCTION
The growing population of South American camelids within the United States has resulted in the need for veterinary care of both common and uncommon disease processes in these species. There are currently no drugs approved by the Food and Drug Administration for camelids, and pharmaceutical companies cannot economically justify seeking approval of drugs in these species. The pharmacokinetics of multiple classes of drugs have been described for camelids, including antibiotics (1)(2)(3)(4)(5), non-steroidal anti-inflammatories (6, 7), gastric acid suppressants (8)(9)(10), opioids (11,12), and other pharmaceuticals (13,14); however, no pharmacokinetic studies for prednisolone exist for camelids. Many of the dosage regimens used in camelids are empirical or extrapolated from species with different physiology and metabolism. However, several drugs have dissimilar kinetics in camelids as compared to other livestock species, which can result in improper dosing or unwanted side effects. There is a need for understanding the pharmacokinetics of drugs in camelids to optimize medical treatment and reduce side effects.
Prednisone and prednisolone are synthetic analogs of cortisol. Prednisone is more affordable than prednisolone but it needs to be converted by the liver to its active metabolite, prednisolone, to have a therapeutic effect. Prednisone is readily converted into prednisolone in humans and dogs (15,16). However, in the cat and horse there is evidence that prednisone is not efficiently metabolized to prednisolone, and therefore not therapeutic (17,18). There are no published studies to date to determine if camelids are able to convert prednisone into prednisolone. Prednisolone is available in oral and injectable formulations for use in some domestic species. In camelids, oral administration would be preferred as venous access can be challenging for most owners to administer and stressful to the animal.
Even though steroids are fundamental for the treatment of certain conditions such as autoimmune diseases, there can be adverse side effects which often make practitioners wary of using this therapy. Exogenous steroids can cause suppression of the hypothalamo-pituitary-adrenal (HPA) axis, which often leads to harmful side effects if discontinued abruptly, especially after prolonged therapy. Other side effects include polyuria/polydipsia, changes in appetite, muscle atrophy, susceptibility to infection, gastrointestinal ulceration, changes in liver function, and abortion (19). Given the prominent adverse side effects of glucocorticoid therapy, it is important to use the lowest effective dose possible. Determining the pharmacokinetics of oral prednisolone is an essential step in determining the most appropriate dose for camelids.
To date there are no studies analyzing the pharmacokinetics of orally administered corticosteroids in camelids. The aim of this study is to determine the bioavailability and pharmacokinetics of prednisolone in alpacas after oral administration, and to evaluate possible side effects during and after a 5 day treatment. We hypothesize that oral administration of prednisolone will result in blood levels comparable to levels of clinical value in other species and that a 5 day course of treatment will result in no or minimal side effects.
MATERIALS AND METHODS
This study was approved by the Institutional Animal Care and Use Committee of the University of Tennessee (protocol #2400-1215). Four clinically healthy alpacas were used, housed in box stalls at least 24 h before and during the experiment. Alpaca ages and weights (mean ± standard deviation) were 8.0 ± 4.3 years and 68.8 ± 9.7 kg. Two intravenous jugular catheters were placed (one in each jugular vein) the day prior to the intravenous component of the study. Each alpaca (n = 4) was administered 1.0 mg/kg prednisolone (Prednisolone, USP, Rockville, MD, USA) intravenously in one catheter, and samples were collected from the other catheter. Blood samples were collected at 0 (preadministration), 5, 10, 15, 30, and 45 min as well as 1, 2, 4, 8, 12, and 24 h after administration and were centrifuged at 1,000 g for 15 min. Plasma was removed and stored at −80 °C until analysis.
After a washout period of 4 months, six alpacas were housed in box stalls and had intravenous catheters placed in the jugular vein. Prednisolone tablets (PrednisTab R, Lloyd Inc., Shenandoah, IA) were dosed at 2 mg/kg every 24 h for 5 days to six alpacas. Blood samples were collected at 0 (preadministration), 15, 30, and 45 min as well as 1, 2, 4, 8, 12, and 24 h (pre-second administration). On days 2-4, blood samples were collected at peak (2 h after administration) and trough times (immediately prior to drug administration). On the 5th day, samples were collected at 0 (pre-administration of the fifth dose), 15, 30, and 45 min as well as 1, 2, 4, 8, and 12 h after the last administration. Samples were immediately centrifuged after collection with the serum stored at −80 °C until analysis. Additionally, before the initial drug administration (day 1), on the day of the last dose (day 5), and 5 days after the last dose (day 10), whole blood was collected into a tube containing ethylenediaminetetraacetic acid (EDTA) anticoagulant for complete blood count (CBC) and a tube containing heparin anticoagulant for plasma chemistry testing. These samples were submitted to the University of Tennessee Veterinary Medical Center (UTCVM) clinical pathology laboratory, with testing performed according to the laboratory standard operating procedure using an ADVIA 2120 hematology analyzer (Siemens, Munich, Germany) and a Cobas C501 chemistry analyzer (Roche Diagnostics, Basel, Switzerland) (20).
Analysis of prednisolone in plasma samples was conducted using reversed phase HPLC. The system consisted of a 2695 separations module and a 2487 UV detector (Waters, Milford, MA, USA.). Separation was attained on a Waters Symmetry Shield RP 18 4.6 x 150 mm (5 µm) protected by a 5 µm Symmetry Shield RP 18 guard column. The mobile phase was an isocratic mixture of 100 mM ammonium acetate pH 4.0 with concentrated glacial acetic acid and acetonitrile (70:30). It was prepared fresh daily using double-distilled, deionized water filtered (0.22 µm) and degassed before use. The flow rate was 1.0 ml/min and UV absorbance was measured at 254 nm.
Prednisolone was extracted from plasma samples using liquid-liquid extraction. Briefly, previously frozen plasma samples were thawed and vortexed, and 500 µL was transferred to a clean screw-top test tube followed by 25 µL internal standard (10 µg/mL methylprednisolone). Methylene chloride (3 mL) was added and the tubes were rocked for 20 min and then centrifuged for 20 min at 1,000 × g. The organic layer was transferred to a clean tube and evaporated to dryness with nitrogen gas. Samples were reconstituted in 250 µL of mobile phase and 100 µL was analyzed.
Standard curves for plasma analysis were prepared by fortifying untreated, pooled alpaca plasma with prednisolone to produce a linear concentration range of 5-1,000 ng/mL. Calibration samples were prepared exactly as plasma samples. Average recovery for prednisolone was 96%, while intra- and inter-assay variability ranged from 2.5 to 5.8% and 1.01 to 6.25%, respectively. The lower limit of quantification was 5 ng/mL and the limit of detection was 2.5 ng/mL.
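A calibration curve like the one described here maps peak-area ratios (analyte/internal standard) back to concentrations. The sketch below uses entirely hypothetical calibration points over the study's 5-1,000 ng/mL range to illustrate the linear fit and back-calculation step.

```python
import numpy as np

# Hypothetical calibration points: spiked concentration (ng/mL) vs.
# analyte/internal-standard peak-area ratio
conc  = np.array([5, 10, 50, 100, 500, 1000], dtype=float)
ratio = np.array([0.012, 0.021, 0.101, 0.203, 0.99, 2.01])

slope, intercept = np.polyfit(conc, ratio, 1)   # least-squares calibration line

def back_calculate(peak_ratio):
    """Convert an unknown sample's peak-area ratio to concentration (ng/mL)."""
    return (peak_ratio - intercept) / slope
```

In practice the calibrators would also be checked for accuracy and precision, and samples reading below the 5 ng/mL lower limit of quantification would not be reported.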
Pharmacokinetic Analysis
Pharmacokinetic parameters for prednisolone were calculated using Phoenix WinNonlin 6.4 (Certara USA, Inc., Princeton, New Jersey 08540, USA). Values for the elimination rate constant (λz), plasma half-life (t½), plasma concentration back-extrapolated to time 0 (C0), maximum plasma concentration (Cmax), time to maximum plasma concentration (Tmax), total body clearance (Cl), volume of distribution (Vd area), apparent volume of distribution at steady-state (Vd ss), mean residence time (MRT 0−∞), area under the plasma concentration-time curve from time 0 to infinity (AUC 0−∞), and area under the plasma concentration-time curve from time 0 to the last time point (AUC 0−Last) were calculated from non-compartmental analysis. The AUC was calculated using the log-linear trapezoidal rule. Variability in pharmacokinetic parameters was expressed as the standard deviation; in the case of the half-life, the harmonic mean and pseudo-standard deviation were used. Absolute systemic bioavailability (F) of prednisolone was calculated from non-compartmental parameters as F = (AUC 0−∞,PO / AUC 0−∞,IV) × (Dose IV / Dose PO). The global extraction ratio (E body) was calculated as reported by Toutain and Bousquet-Melou (21) as E body = Cl / cardiac output, first calculated for each individual animal and then combined for a mean value as previously described (22), with the alpaca cardiac output estimated from body weight using the allometric relationship given by Toutain and Bousquet-Melou (21).
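The non-compartmental quantities named above can be sketched as follows. This is a generic illustration applied to simulated mono-exponential IV data, not the Phoenix WinNonlin implementation; the terminal-slope fit and log-linear trapezoidal AUC follow the standard definitions, and all numbers are invented.

```python
import numpy as np

def nca_iv(t, c, dose):
    """Basic non-compartmental parameters after an IV bolus.

    t in h, c in ng/mL, dose in ng/kg; assumes declining concentrations.
    """
    t, c = np.asarray(t, float), np.asarray(c, float)
    lam_z = -np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]   # terminal slope
    t_half = np.log(2) / lam_z                          # elimination half-life
    # Log-linear trapezoidal rule, then extrapolation to infinity
    auc_last = np.sum((c[:-1] - c[1:]) * np.diff(t) / np.log(c[:-1] / c[1:]))
    auc_inf = auc_last + c[-1] / lam_z
    cl = dose / auc_inf                                 # clearance, mL/kg/h
    return t_half, auc_inf, cl

# Simulated IV profile: C(t) = 1000 * exp(-0.231 * t), i.e. t1/2 near 3 h
times = np.array([0, 0.083, 0.167, 0.25, 0.5, 1, 2, 4, 8, 12, 24])
conc = 1000 * np.exp(-0.231 * times)
t_half, auc_inf, cl = nca_iv(times, conc, dose=1e6)     # 1 mg/kg = 1e6 ng/kg
```

Oral bioavailability then follows from the ratio of dose-normalized AUCs, F = (AUC_PO / AUC_IV) × (Dose_IV / Dose_PO), as described in the text.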
Statistical Analysis
Pharmacokinetic variables for prednisolone following oral and intravenous administration were calculated with a commercial computer software program (Phoenix 6.30, Pharsight Corp.). Pharmacokinetic parameters were tested for normality of distribution and equal variance (GraphPad Prism, La Jolla, CA). When data were normally distributed and had equal variances, a t-test was performed to determine whether differences existed between pharmacokinetic parameters from the IV prednisolone administration and Day 1 of oral administration, as well as between Day 1 and Day 5 of oral administration. Values of P < 0.05 were considered significant for all statistical tests. CBC and plasma chemistry results were inspected for abnormalities using alpaca reference intervals established in the UTVMC clinical pathology laboratory. Additionally, paired t-tests were used to evaluate for statistical differences between values on days 0, 5, and 10 (MedCalc Software Ltd, version 20.009).
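The paired comparison described here can be illustrated with Python's standard library. The glucose values below are invented for demonstration, and the critical value is the two-sided 5% point of the t distribution for 5 degrees of freedom (six paired animals); this sketch is not the MedCalc analysis used in the study.

```python
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic for repeated measures on the same animals."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)

# Hypothetical serum glucose (mg/dL) for six alpacas, day 0 vs. day 5
day0 = [110, 115, 108, 120, 112, 118]
day5 = [140, 150, 138, 155, 142, 149]
t_stat = paired_t(day0, day5)
significant = abs(t_stat) > 2.571    # two-sided critical value, df = 5, alpha = .05
```

Because the same animals are measured at each time point, the paired test analyzes the within-animal differences rather than treating the two days as independent groups.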
RESULTS
Mean and standard deviation for plasma prednisolone concentrations for the single IV (Figure 1) and multi-dose PO study (Figure 2) are represented graphically. Pharmacokinetic parameters are summarized in Table 1. The mean of trough concentrations was similar on each of the days (D2, 18 ng/mL; D3, 17 ng/mL; and D4, 18 ng/mL).
A non-compartmental model was used to evaluate plasma concentrations after both IV and PO dosing. The half-life, volume of distribution at steady-state, and clearance for prednisolone after IV administration were 2.98 ± 0.795 h, 1,295 ± 242 mL/kg, and 347 ± 54 mL/h/kg, respectively. The half-life of prednisolone after oral dosing once a day for 5 days was 5.39 ± 1.36 h. The mean prednisolone bioavailability after oral dosing was 13.7%. The extrapolated area under the curve was 6.0 ± 4.6% for intravenous administration, and 26.5 ± 11.5% (after the first oral administration) and 25.7 ± 8.6% (after the last oral administration). The observed extraction ratio was 4.16 ± 0.58.
When IV vs. PO administration was statistically compared, significant differences were observed for lambda z (p = 0.0153), area under the curve to the last time point (p < 0.0001), mean residence time (p = 0.0129), and extrapolated area under the curve (p = 0.0135). The p-value for the comparison of elimination half-life was low (p = 0.0610) but not significant. There were no significant differences between any of the pharmacokinetic parameters between Day 1 and Day 5 for oral administration.
No clinical adverse effects were observed in any of the alpacas at any point during the study period. On the CBC, packed cell volume (PCV), hemoglobin, and red blood cell counts were significantly lower on day 5 compared to days 0 and 10, and eosinophil numbers were significantly lower on day 10 compared to days 0 and 5, but all results were still within the reference intervals with the exception of one alpaca with a very mildly decreased red blood cell count on day 5 (Figure 3). Additionally, glucose was significantly higher on day 5 compared to days 0 and 10 and was above the reference interval in all 6 alpacas on that day. There were trends toward increasing white blood cell count, neutrophils, lymphocytes, and monocytes over time, but these were not statistically significant and did not go above the reference interval in any individuals. No other relevant changes were identified on CBC or chemistry results.
DISCUSSION
To the author's knowledge, this is the first study to determine the pharmacokinetics of prednisolone in alpacas. Glucocorticoids, such as prednisolone, are a powerful and effective therapeutic tool for many inflammatory disease processes. This class of drugs includes dexamethasone, prednisone, prednisolone, and hydrocortisone among others. They are used as anti-inflammatory agents, immunosuppressives, for the treatment of lymphoma (cytotoxicity toward neoplastic lymphocytes), or to replace glucocorticoid activity in patients with adrenal insufficiency. They can be clinically beneficial in diseases where inflammation has detrimental effects such as uveitis, immune mediated diseases, asthma, inflammatory bowel disease, skin allergies, certain neoplasias, lameness, and many neurologic conditions (16,19). Descriptions of the use of prednisolone in the camelid medical literature are sparse. Historical reports describe prednisolone for chemotherapy of lymphosarcoma or lymphoma (23,24). Anti-inflammatory doses are used for management of soft tissue injury from causes such as cerebrospinal nematodiasis and rattlesnake envenomation (25)(26)(27). Additional descriptions exist for the treatment of certain dermatological conditions (28). Topical use is described in the species for ophthalmologic cases (29).
Table 1 | Pharmacokinetic parameters (mean ± SD) in alpacas on Day 1 and Day 5 after multi-dose oral administration of 2 mg/kg prednisolone (n = 6) and IV administration of 1 mg/kg (n = 4).
There are limited reports of pharmacokinetic data for prednisolone among large animal species. Studies exist for cattle, horses, and sheep (30)(31)(32). The clearance observed in the alpacas in this study (0.347 L/kg/hr) is similar to the range observed in horses (0.235-0.374), as well as cattle (0.42), and less than that reported for sheep (0.93) (30,31,33).
The elimination half-life of intravenous prednisolone reported in the alpacas in this study (2.98 h) is similar to the elimination half-life reported for cattle (3.6), but longer than horses (1.15-1.65) or sheep (0.85) after intravenous administration (30)(31)(32)(33). Table 2 displays pharmacokinetic information for prednisolone in sheep, cattle, horses as well as the alpacas from this study. The bioavailability demonstrated by the alpacas in our study after oral administration was low (13.7%). Reports of the oral bioavailability of prednisolone in large animal species are limited, however, this is similar to the low range of bioavailability (18%) reported for prednisolone tablets in dogs (34). When comparing the pharmacokinetic parameters from our study to other large animal studies, it is important to note the limit of quantification. When limits of quantification are more sensitive, there is the potential for some pharmacokinetic parameters, such as elimination half-life, to be increased (22). All of the large animal assays employed similar sensitivity (2-3 ng/mL), so it is likely that the pharmacokinetic parameter differences are true species differences instead of analytical method discrepancies.
Due to the multiple downstream effects of prednisolone, there currently are not many recommendations regarding therapeutic concentrations of prednisolone in the veterinary literature. In beagles administered oral prednisolone at 2.0 mg/kg, maximum plasma concentrations of 58.2 ng/mL have been observed; this is similar to the maximum concentrations of 74 and 68 ng/mL observed in the alpacas in this study (35). This dosage in dogs has been described for use as an anti-inflammatory agent as well as an antineoplastic agent (19,36). While more investigation is necessary, this comparative observation may suggest that the oral dosing regimen used in the alpacas in this study produces plasma concentrations similar to those in other species.
Adverse effects reported in association with the administration of steroids in veterinary medicine include: leukocytosis with neutrophilia, monocytosis, lymphopenia, and eosinopenia. Also a mild elevation in albumin and in liver enzymes has been reported after treatment with steroids in dogs (37). In this alpaca study, there was a significantly decreased PCV, hemoglobin, and red blood cell count after 5 days of prednisolone administration, which was resolved at the recheck 5 days later. In the majority of individuals, all three of these parameters were still within the reference interval at all time points, so this may not be a clinically relevant change. Nevertheless, based on these results it may be worthwhile to periodically monitor for anemia in alpacas that are treated with prednisolone. There was also a mild but significant decrease in eosinophils noted at day 10. Eosinopenia can be a consequence of corticosteroids, but is unlikely to be clinically relevant. When serum biochemistry information was evaluated, the only consistent change across all animals was mild hyperglycemia noted 5 days after administration of prednisolone, which may be due to the gluconeogenic effects of corticosteroids.
Future directions for prednisolone in alpacas include pharmacodynamic studies. One key area for further pharmacodynamic investigation is the use of prednisolone for anti-neoplastic therapy, considering the alpaca's role as a primarily companion animal and the propensity of older alpacas to develop cancer (24,38,39). Additional research is needed to elucidate the effects that body condition could have on the pharmacokinetics of prednisolone in alpacas, as over-conditioning (increased body fat percentage) in some species, such as cats, is associated with higher serum concentrations (40). Another consideration is the effect of multiple drug administration on the pharmacokinetics of prednisolone in camelids. In dogs, inhibition of P-glycoprotein by ketoconazole led to an increased area under the curve for prednisolone (41). With the potential for biochemical and hematopoietic adverse effects, non-linear mixed-effect modeling could be utilized to investigate the potential factors for these adverse effects when prednisolone is administered to alpacas (42). Limitations of this study include the small sample size; however, in veterinary pharmacology studies, sample sizes of 4-6 animals are customary for describing pharmacokinetic parameters (43).
In conclusion, prednisolone administered at one IV dose of 1 mg/kg or 5 consecutive oral daily doses of 2 mg/kg was well-tolerated by the alpacas in this study. The intravenous pharmacokinetics showed similarities to cattle, specifically in elimination half-life and plasma clearance. Evaluation of complete blood counts and serum biochemistry data suggested that mild hyperglycemia and neutrophilia may be encountered after prednisolone administration. The concentrations reached by repeated oral administration are similar to those noted in other veterinary species dosed at similar regimens. Future studies will be necessary to evaluate the pharmacodynamics of prednisolone administration in alpacas.
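As an illustration of the noncompartmental quantities discussed above (area under the curve, elimination half-life, plasma clearance), the sketch below derives them from a hypothetical IV concentration-time profile. All numbers are invented for illustration and are not data from this study.

```python
import numpy as np

# Hypothetical IV plasma concentration-time data; values are
# illustrative, not from the study.
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])  # hours
c = np.array([900.0, 780.0, 600.0, 360.0, 130.0, 17.0, 2.2, 0.04])

dose = 1.0  # mg/kg, matching the IV dose in the study

# AUC from first to last sample by the linear trapezoidal rule
auc_0_last = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))

# Terminal elimination rate constant from log-linear regression
# over the last four time points
term = slice(4, 8)
k_el = -np.polyfit(t[term], np.log(c[term]), 1)[0]

# Extrapolate AUC to infinity, then derive half-life and clearance
auc_inf = auc_0_last + c[-1] / k_el
t_half = np.log(2) / k_el          # hours
clearance = dose / auc_inf         # units depend on dose/concentration units
```

The same trapezoidal and log-linear steps underlie standard noncompartmental analysis software; here they are spelled out only to make the derivation of the reported parameters concrete.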
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee.
AUTHOR CONTRIBUTIONS
RV, CS, DS, and SC developed the study design. RV and CS contributed to sample collection. SC developed the analytical method. SC and JS contributed to pharmacokinetic analysis. RV, CS, DS, JS, and SC all contributed to data interpretation and analysis. All authors contributed to manuscript construction.
FUNDING
This work was entirely funded by the Alpaca Research Foundation.
A thermal control methodology based on a predictive model for indoor heating management
This paper presents a thermal control methodology based on an indoor temperature predictive model that regulates heating systems. The future indoor temperature is predicted from historical data of indoor temperature and other outside parameters. A control strategy is then defined in order to determine the optimal heating parameters. The building simulation shows that the developed methodology provides satisfactory results by avoiding overheating and underheating periods, thus improving thermal comfort and reducing energy consumption.
Context
In France, the building sector is the most energy-consuming. It dominates the total energy consumption with a 44% share (compared with, for example, 33% for the transport sector) and CO2 emissions that exceeded 123 million tonnes in 2015 (1).
In this context, France has established an energy policy for buildings, called Factor 4, with the aim of reducing greenhouse gas emissions by a factor of four by 2050 and cutting the energy consumption of government and public buildings by at least 40% by 2020 (1). These commitments concern both new and retrofitted buildings. Consequently, improving energy efficiency is a must in order to reach these objectives. This has been regulated in France under the Energy Performance Contract (EPC) law, which guarantees the improvement of building energy efficiency, based on several standards, after an energy renovation operation (1).
Generally, over the building life cycle, a significant difference is observed between the energy consumption predicted during the design phase and the energy actually consumed during the operating phase. This difference is often called the Energy Performance Gap (EPGAP) (2). The EPGAP is due to several factors:
- during the design phase: incorrect sizing and unsuitable simulation tools,
- during the construction phase: poor quality of equipment and materials,
- during the verification phase: lack of system verification and of checks on work execution,
- during the operation phase: occupant behaviour, weather and system control.
This paper presents a control methodology for indoor heating systems that optimizes indoor temperature and energy consumption in real time. It is organized as follows: Section 1 introduces the context of this work. Section 2 describes the proposed methodology. Section 3 presents an application case based on a simulated building. Finally, Section 4 concludes the paper and outlines some ways forward for this work.
Building thermal modeling
Usually, before each construction or renovation operation, the building is modelled in order to evaluate its performance and compare it with the requirements of current regulations. In research and industry, there are different methods for modeling a building to study its energy efficiency. These methods can be classified as follows:
- White box: studies physical phenomena and can estimate them at a given time and in a given space. It is performed with Dynamic Thermal Simulation (DTS) software (e.g. TRNSYS, EnergyPlus, etc.) and requires good knowledge of the building (materials, geometry, Heating, Ventilation and Air Conditioning (HVAC) description, control strategies, occupancy, location, etc.). It can provide very detailed results, but it can be very time-consuming (3).
- Black box: a model based on mathematical and statistical methods (machine learning, Markov chains, etc.). It may require more or less substantial training time depending on the method used, but the calculation time is generally low (3).
- Grey box: a solution combining the two previous models, requiring less learning data and less knowledge of the physical phenomena (3). One of the best-known grey-box models is the RxCy model (4).
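As a toy illustration of the grey-box RxCy family mentioned above, the sketch below simulates the simplest member, an R1C1 model (one thermal resistance, one capacitance), with explicit Euler integration. The parameter values and the 24-hour scenario are invented for illustration.

```python
# Minimal sketch of a grey-box R1C1 thermal model; r, c and the
# scenario below are illustrative assumptions, not identified values.

def simulate_r1c1(t_in0, t_out, heating_w, r=0.005, c=2.0e6, dt=900.0):
    """Euler integration of C*dT/dt = (T_out - T_in)/R + P_heat.

    t_out and heating_w are per-step sequences; dt is the step in
    seconds (900 s = 15 min, the time step used in the paper).
    """
    t_in = t_in0
    trace = []
    for to, p in zip(t_out, heating_w):
        t_in += dt / c * ((to - t_in) / r + p)
        trace.append(t_in)
    return trace

# 24 h at a constant 5 degC outside, 2 kW heating during office hours
outside = [5.0] * 96
power = [2000.0 if 32 <= k < 72 else 0.0 for k in range(96)]
temps = simulate_r1c1(18.0, outside, power)
```

With these values the indoor temperature decays toward the outside temperature overnight and toward an equilibrium of T_out + R*P during the heated period, which is the qualitative behaviour a grey-box model must capture.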
As part of this work, the following criteria were set to choose a suitable prediction model:
- Low calculation time: the model will be used to predict the indoor temperature and also to evaluate a high number of control strategies.
- Adaptable: in a building, the year is composed of a heating period, a cooling period and an inter-season; the model must be efficient during all three periods.
- Generalizable: the model must be easily deployable on any type of building if the measurements are available.
Based on these criteria, it was decided to test Artificial Neural Network (ANN) and multiple linear regression models. Indeed, many studies (5)(6)(7)(8) demonstrate the ability of ANNs to predict thermal data in the building sector. Nevertheless, ANNs showed a robustness problem, so it was decided to use the linear regression model. A comparison of the two methods will be discussed.
Control methodology
The proposed heating control methodology is based on a model that predicts the indoor temperature in order to avoid discomfort problems. The steps of the control methodology are presented in Figure 1. Based on this methodology, a platform was developed; its functioning is shown in Figure 2.
Step 1: Data gathering
In the first step, dynamic data (indoor measurements, HVAC and weather data) are gathered from sensors and the Building Management System (BMS). The collected data are then structured and pre-processed (normalization and averaging).
Step 2: Predicting indoor temperature
A multiple linear regression model is implemented. To determine the model inputs, a correlation study was conducted and the most influential inputs were selected. The data are normalized so that all inputs are on the same scale. From the measured data, the averages over the last hour and the last two hours are calculated to take into account the inertia of the building. The final architecture of the model (Figure 3) is validated by varying the input parameters (adding or deleting one or more parameters). The prediction horizon is set to 3 hours, which is sufficient to take into account the time response of the building while maintaining an acceptable level of model accuracy (Figure 4). At each step, 12 values are predicted with a time step of 15 minutes. The control aims to maintain the comfort setpoints (during unworking hours: 16°C) while reducing the heating consumption as much as possible; an anomaly is reported if the temperature is outside this range.
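The prediction step above can be sketched as follows. This is a simplified stand-in, not the paper's exact model: it predicts a single value at the 3-hour horizon (rather than all 12 intermediate steps) from the current measurements plus the 1 h and 2 h trailing averages, and the specific feature set is an assumption.

```python
import numpy as np

# Sketch of a multiple linear regression predictor with trailing
# averages as inertia features; the feature list is illustrative.

def make_features(t_in, t_out, heat, k, steps_per_hour=4):
    """Feature vector at time index k (15-min steps)."""
    h1 = slice(k - steps_per_hour, k)          # last hour
    h2 = slice(k - 2 * steps_per_hour, k)      # last two hours
    return [1.0,                               # intercept
            t_in[k], t_out[k], heat[k],
            np.mean(t_in[h1]), np.mean(t_in[h2])]

def fit_predictor(t_in, t_out, heat, horizon=12, window=150):
    """Least-squares fit on a sliding window; target is T_in at k+horizon."""
    rows, targets = [], []
    start = len(t_in) - window
    for k in range(max(start, 8), len(t_in) - horizon):
        rows.append(make_features(t_in, t_out, heat, k))
        targets.append(t_in[k + horizon])
    coef, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets),
                               rcond=None)
    return coef
```

Refitting `fit_predictor` at every time step over the most recent `window` samples reproduces the sliding-window retraining described later for the application case.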
Step 4: Application of the new strategy
The selected control strategy is tested with the prediction model and then sent to the BMS.
Application
Tests were carried out on a simulated building composed of two thermal zones (Figure 6) and equipped with a power-controlled heating system. Between 8am and 6pm the power is set to 100%, and to 0% for the rest of the day. This type of control is close to reality and can ensure the comfort criteria defined above. The control curve of the heating system is shown in Figure 5 and the building is described in Table 1. Based on historical data (from simulation results), the prediction model is trained and the indoor temperature is predicted. At every time step, the prediction model is retrained on the previous 150 data points; this method is called a sliding window (7). The size of the sliding window (the training data size) was determined experimentally. Figure 7 shows that the temperature exceeds 25°C, the comfort limit defined above; as a consequence, the data analysis detects anomalies. To address this anomaly, the best heating control strategy is defined for the next 3 hours. Since the prediction model is fast, all possible strategies are tested: for this application case, the heating is controlled in an On/Off mode, so 4096 possibilities (2^12, with 12 the number of time steps to predict and 2 the ON/OFF states) were tested. At each time step (15 minutes), heating is switched ON or OFF. Figure 7 compares the real indoor temperature to the predicted indoor temperature, and Figure 8 shows the result obtained by applying the new heating control strategy from the 152nd until the 155th hour, with data from the 3rd to the 151st hour used to train the prediction model. This methodology runs in real time, obtaining the results shown in Figure 10, which presents 30 hours of results for thermal zone 1 of the simulated building. Figure 9 shows the evolution of the indoor temperature and the heating power setpoints with standard control of the heating, while Figure 10 shows their evolution using the proposed methodology.
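The exhaustive search over the 4096 ON/OFF sequences can be sketched as below. Here `predict_temps` stands in for the trained prediction model, and the cost weighting (comfort violations penalized far more heavily than energy use) is an assumption for illustration; the paper does not specify its exact scoring function.

```python
from itertools import product

# Sketch of the exhaustive strategy search: every ON/OFF sequence over
# the 12 predicted steps (2**12 = 4096 candidates) is scored with the
# prediction model and the cheapest comfortable plan is kept.

def best_strategy(predict_temps, t_low=21.0, t_high=25.0, steps=12):
    best, best_cost = None, float("inf")
    for plan in product((0, 1), repeat=steps):
        temps = predict_temps(plan)           # 12 predicted temperatures
        discomfort = sum(max(t_low - t, 0.0) + max(t - t_high, 0.0)
                         for t in temps)
        energy = sum(plan)                    # number of ON steps
        cost = discomfort * 100.0 + energy    # comfort first, then energy
        if cost < best_cost:
            best, best_cost = plan, cost
    return best
```

Because the linear regression model evaluates each candidate in microseconds, scanning all 4096 plans every 15 minutes is easily fast enough for real-time use, which is exactly the "low calculation time" criterion set earlier.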
Comparing the data to the right of the two dotted lines, which mark the start time of the heating control correction, the overheating zones in Figure 10 are significantly reduced while the comfort interval (here taken between 21 and 25°C for representativeness reasons), represented by the horizontal lines, is respected. This improves user comfort and reduces energy consumption.
Conclusion
This work aimed to develop a thermal control methodology for reducing discomfort and energy consumption. In this context, a platform was developed based on a linear regression model to predict the indoor temperature and to test possible control strategies. To validate this platform, a simulated building was studied using a BIM-based approach. The results obtained during the heating period are promising: the heating control methodology reduces heating energy consumption and improves user comfort. In the next step, a real case study will be examined: a building located in Wasquehal (France) that has been equipped with a weather station and comfort sensors.
Extensive localization of long noncoding RNAs to the cytosol and mono- and polyribosomal complexes
Background Long noncoding RNAs (lncRNAs) form an abundant class of transcripts, but the function of the majority of them remains elusive. While it has been shown that some lncRNAs are bound by ribosomes, it has also been convincingly demonstrated that these transcripts do not code for proteins. To obtain a comprehensive understanding of the extent to which lncRNAs bind ribosomes, we performed systematic RNA sequencing on ribosome-associated RNA pools obtained through ribosomal fractionation and compared the RNA content with nuclear and (non-ribosome bound) cytosolic RNA pools. Results The RNA composition of the subcellular fractions differs significantly from each other, but lncRNAs are found in all locations. A subset of specific lncRNAs is enriched in the nucleus but surprisingly the majority is enriched in the cytosol and in ribosomal fractions. The ribosomal enriched lncRNAs include H19 and TUG1. Conclusions Most studies on lncRNAs have focused on the regulatory function of these transcripts in the nucleus. We demonstrate that only a minority of all lncRNAs are nuclear enriched. Our findings suggest that many lncRNAs may have a function in cytoplasmic processes, and in particular in ribosome complexes.
Background
The importance of noncoding RNA transcripts for key cellular functions has been well established by studies on for example XIST [1], which acts in X-chromosome silencing, and TERC [2], which functions in telomeric maintenance. Genomic studies performed in the last decade have shown that these are likely not isolated examples as many more long non protein-coding transcripts were identified [3][4][5].
Although it remains to be demonstrated that all of these transcripts have specific functions [6], functional studies showing the importance of long noncoding RNAs (lncRNAs) as regulators in cellular pathways are accumulating rapidly (for example, [7][8][9][10][11][12]). However, the function and the mechanisms of action of the majority of lncRNAs are still unexplored [13].
Cellular location is an important determinant in understanding the functional roles of lncRNAs. Subcellular RNA sequencing (RNA-seq) has been performed to explore the differences between nuclear, chromatin-associated and cytoplasmic transcript content in several cell lines [14] and macrophages [15]. Derrien et al. [3] specifically estimated the relative abundance of lncRNAs in the nucleus versus the cytosol and concluded that 17% of the tested lncRNAs were enriched in the nucleus and 4% in the cytoplasm. This is in line with the function of some individual lncRNAs, such as NEAT1 and MALAT1, which were shown to be involved in nuclear structure formation and gene expression regulation [7,8]. However, it has been argued that relative enrichment does not mean that the absolute number of transcripts for each lncRNA is also higher in the nucleus [13]. Some lncRNAs were enriched in the cytoplasm and ribosome profiling demonstrated that part of the cytoplasmic lncRNAs is bound by ribosomes [16]. More detailed characterization of the ribosome profiling data showed that ribosomal occupation of lncRNAs does not match with specific marks of translation [17].
While these results suggest diverse roles of lncRNAs in different cellular compartments and biological processes, comprehensive knowledge on the relative abundances of lncRNAs in ribosomes, the cytosol and the nucleus is currently still lacking. Moreover, as ribosomal profiling measures single sites in RNA molecules that are occupied by ribosomes, this technique does not yield information on the number of ribosomes that are present per single (physical) lncRNA transcript [18]. In a different method, named ribosomal fractionation, a cytosolic size separation is performed that results in the isolation of translation complexes based on the amount of ribosomes associated per transcript [19]. This method has been used in combination with microarrays to analyze ribosomal density on protein-coding transcripts [20][21][22] but not on lncRNAs.
Here we perform subcellular RNA-seq on nuclei, cytosol and mono-and polyribosomes separated by ribosomal fractionation. Our data show relative enrichment of specific lncRNAs in the nucleus, but also demonstrate that most lncRNAs are strongly enriched in the cytosol and in ribosomal fractions.
Nuclear, cytosolic and ribosomal fractions differ in transcript content
Different subcellular RNA fractions were isolated from the human cell line LS-174 T-pTER-β-catenin [23] (Figure 1). The cells were first subjected to a mild lysis after which the nuclei were separated from the cytosol and other organelles by centrifugation. Microscopic inspection and nuclear staining confirmed the presence of clean nuclei in the pellet and thus the co-sedimentation of the rough endoplasmic reticulum-derived ribosomes with the cytosolic supernatant (Additional file 1). The cytosolic sample was fractionated further using a sucrose gradient and ultracentrifugation, which sediments the sample components based on size and molecular weight. UV was used to measure the RNA content of the fractions and the amount of ribosomes in each of the fractions was established based on the resulting distinct peak pattern. We isolated each of the fractions containing one, two, three, four, five and six ribosomes and the fraction containing seven or more ribosomes. In addition, we isolated the fraction that contained the cytosolic part without ribosomes, which we will refer to as the 'free cytosolic' sample. RNA molecules in the free cytosolic fraction are, however, associated with various other types of smaller protein complexes that reside in the cytosol. The fractions containing 40S and 60S ribosomal subunits were also extracted and these two samples were pooled for further analysis. The RNA of three ribosomal fractionation experiments was pooled to level out single experimental outliers. Through this experimental setup we obtained a complete set of subcellular samples from which RNA was extracted.
Strand-specific RNA-seq was performed after rRNA depletion on all the subcellular samples and for each we obtained at least six million aligned reads. The GENCODE annotation [24] of coding and noncoding transcripts was used to establish the read counts per gene (Additional file 2). In our data analyses, we considered three types of transcripts: protein-coding transcripts; small noncoding RNAs (sncRNAs), which included small nuclear RNAs (snRNAs) and small nucleolar RNAs (snoRNAs); and lncRNAs, which included antisense transcripts, long intergenic noncoding RNAs and processed transcripts (these were transcripts that did not contain an open reading frame (ORF) and could not be placed in any of the other categories) [3]. We left out some small RNAs such as miRNAs, because these were not captured in our experimental setup. Also, to prevent false assignments of sequencing reads to noncoding transcripts, we did not consider lncRNAs in which the annotation partially overlapped with protein-coding transcripts on the same strand. We selected expressed transcripts using a stringent threshold to allow us to reliably detect quantitative differences. Our expressed transcript set contained 7,734 genes including 7,206 protein-coding genes, 152 lncRNAs (46 antisense transcripts, 71 long intergenic noncoding transcripts and 35 processed transcripts) and 376 sncRNAs (134 snoRNAs and 242 snRNAs).
To determine the similarity of the RNA content of the different subcellular samples we analyzed the correlations between each sample pair ( Figure 2A). The highest correlations were seen between ribosomal fractions, ranging from 0.60 to 0.97. By contrast, the correlations between the different ribosomal fractions and the nuclear sample ranged from 0.35 to 0.53. We investigated the source of the variable correlation between subcellular RNA samples by comparing the origin of the RNA reads from each fraction ( Figure 2B). This analysis showed that more than half of the reads in the nuclear sample aligned to sncRNAs and this group of small RNAs was visible as a distinct cloud in the comparative scatter plots (Figure 2A and Additional file 3). The ribosomal fractions primarily consisted of protein-coding genes as expected, but highly expressed lncRNAs were also clearly present. Because these read count distributions did not directly translate into transcript composition of the different samples, we also analyzed the sample composition based on reads per kilobase per million. This resulted in essentially the same distribution among the samples, but the relative contribution of sncRNAs was larger (Additional file 4).
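The per-sample comparisons above rest on two standard computations, sketched here with toy numbers: reads-per-kilobase-per-million (RPKM) normalization of a genes-by-samples count matrix, and pairwise Pearson correlation between samples on log-scaled counts. The toy counts and lengths are invented for illustration.

```python
import numpy as np

# Sketch of the two normalizations used when comparing fractions.

def rpkm(counts, lengths_bp):
    """counts: genes x samples matrix; lengths_bp: per-gene length."""
    per_million = counts.sum(axis=0) / 1e6            # library size scaling
    per_kb = np.asarray(lengths_bp, float)[:, None] / 1e3
    return counts / per_million[None, :] / per_kb

def sample_correlations(counts):
    """Pearson correlation between samples on log2(count + 1)."""
    return np.corrcoef(np.log2(counts + 1.0), rowvar=False)
```

Scaling by library size and transcript length is what allows the relative contribution of transcript classes (Additional file 4) to be compared across fractions of very different sequencing depth and composition.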
Combined, these analyses show that subcellular RNA samples have very different compositions and that lncRNAs are found in each of the subcellular RNA samples.
Long noncoding RNAs are primarily enriched in the cytosol and in the ribosomal fractions
The clear difference in composition of the subcellular RNA samples raises the question how individual transcripts are distributed among the samples, and in particular how lncRNAs behave compared with protein-coding transcripts. We therefore investigated the distribution of each lncRNA across the cellular fractions versus the distribution of each protein-coding transcript (Figure 3). The correlation between each protein-coding transcript-lncRNA pair was calculated and the obtained scores depicted in a clustered heatmap (Figure 3). A high correlation between two transcripts in this heatmap meant that the two showed a very similar distribution across the subcellular samples. This analysis showed that there is no general negative correlation between the cellular localization of lncRNAs and that of protein-coding transcripts, but that the relationships are complex.
To reduce this complexity and to focus on the distribution of protein-coding transcripts and non-proteincoding RNAs across the subcellular fractions we applied model-based clustering on the normalized read counts per transcript [25]. We applied the clustering algorithm using variable amounts of clusters and found that a separation in 11 clusters best describes the data ( Figure 4A and Additional files 5 and 6). All RNA-seq transcript levels were normalized to the total amount of sequencing reads produced per sample. Therefore, the normalized value of a transcript depended on the complexity of the sample (number of different transcripts) and the expression level of all other transcripts. Because of the large fraction of reads that arose from sncRNAs, we tested the effect of omitting these RNAs from the dataset and found that this did not affect the clustering results (Additional file 7). The final set of 11 clusters included one cluster (XI) containing transcripts that did not show an obvious enrichment in any of the samples, and 10 clusters (I to X) containing genes that did show a specific cellular localization. Clusters I, II and III all contained transcripts enriched in the nucleus and depleted from the ribosomal fractions, but the clusters differed from each other based on the relative transcript levels in the free cytosolic and the 40S/60S sample. Cluster IV and V contained transcripts enriched in the free cytosolic sample and transcripts enriched in the 40S/60S sample, respectively. Clusters VI through X contained transcripts enriched in specific ribosomal fractions. Each of these ribosomal-enriched clusters also showed mild enrichment in the free cytosolic sample, except for cluster X, which was higher in the nucleus than in the free cytosol.
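The cluster-number selection just described can be illustrated in miniature. The paper used a full model-based clustering method [25]; as a simplified numpy-only stand-in, the sketch below scores k-means partitions with a Bayesian information criterion (BIC) for spherical Gaussians, so that the cluster count with the lowest BIC "best describes the data".

```python
import numpy as np

# Simplified stand-in for model-based cluster-count selection:
# k-means partitions scored by a spherical-Gaussian BIC.

def kmeans(x, k, iters=50, init=None, seed=0):
    """Plain k-means; init may supply starting centers."""
    rng = np.random.default_rng(seed)
    centers = (np.array(init, float) if init is not None
               else x[rng.choice(len(x), k, replace=False)].astype(float))
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

def spherical_bic(x, labels, centers, k):
    """BIC = -2 log L + n_params * log n for one shared variance."""
    n, d = x.shape
    resid = ((x - centers[labels]) ** 2).sum()
    var = max(resid / (n * d), 1e-12)
    loglik = -0.5 * n * d * (np.log(2 * np.pi * var) + 1.0)
    return -2.0 * loglik + (k * d + 1) * np.log(n)
```

Running this over a range of k and keeping the minimum-BIC solution mirrors, in spirit, how a separation into 11 clusters was chosen for the fraction profiles.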
Overall, we consider clusters I, II and III as enriched in the nucleus; IV and V as enriched in the ribosome-free cytosol; and VI, VII, VIII, IX and X as enriched in the ribosomes. The distribution of protein-coding genes and sncRNAs among the clusters was largely as expected ( Figure 4B). Protein-coding transcripts were present in all of the clusters, but the majority (60%) was found in the ribosomal-enriched clusters. Nonetheless, 14% of the protein-coding transcripts were found in the nuclear clusters and depleted from ribosomes, suggesting that this large part of the protein-coding transcripts is not actively translated or has a rapid turn-over rate in the cytosol. sncRNAs were found only in the nuclear and ribosomefree cytosolic clusters and not in the ribosomal clusters, which matched expectations and thus demonstrated the effectiveness of the fractionation. The majority of the sncRNAs could be found in cluster III, showing high levels both in the nucleus and free in the cytosol, suggesting that many of these small RNAs shuttle between nucleus and cytoplasm.
The most notable result was the distribution of the lncRNAs among the different clusters. In line with previous analyses [3], 17% of the lncRNAs were found in one of the nuclear clusters (Figure 4B). However, in contrast to previous studies, a relatively large part of the lncRNAs (30%) was located in clusters enriched in the ribosome-free cytosol and a striking 38% was present in ribosome-enriched clusters. As noted above, the transcript levels determined by RNA-seq represent which part of the total RNA samples can be assigned to each specific transcript. These results thus show that many individual lncRNAs (38% of the expressed lncRNAs) make up a larger part of specific ribosomal fractions than of the nuclear sample.
Although the correlations between ribosomal fractions were high (Figure 2A), these clustering results highlight the transcripts that are differential across the ribosomal samples. Previous studies have shown that many proteincoding transcripts are not evenly distributed among the ribosomal fractions, but rather show enrichment for a specific number of ribosomes [20,21]. The coding sequence length was shown to be a major determinant of the modular amount of ribosomes per transcript. In our data, the total transcript length of protein-coding transcripts in the five ribosomal clusters also increased with increasing numbers of ribosomes present ( Figure 4C). For lncRNAs, we could determine such a relationship only between cluster VI (80S and two ribosomes) and VII (three and four ribosomes), because the number of lncRNAs in the clusters with a higher number of ribosomes was too low ( Figure 4A). lncRNAs in cluster VII (three and four ribosomes) had a longer transcript length, longer maximum putative ORF length and more start codons than the lncRNAs in cluster VI (80S and two ribosomes) ( Figure 4C and Additional file 8). However, the maximum ORF lengths of the lncRNAs were much shorter than the coding sequence length of the protein-coding genes in the same cluster, so these ORF lengths likely do not determine the number of ribosomes associated with a lncRNA.
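For illustration, the two sequence-derived quantities used above, the number of start codons and the maximum putative ORF length, can be computed from a transcript sequence as in the sketch below (ORF length counted in nucleotides with the stop codon included; that convention, and scanning only the forward strand, are simplifying assumptions).

```python
# Sketch of deriving start-codon count and longest putative ORF from
# a transcript sequence, scanning the three forward reading frames.

def orf_stats(seq):
    seq = seq.upper().replace("U", "T")
    n_starts = seq.count("ATG")
    max_orf = 0
    for frame in range(3):
        codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        start = None
        for j, codon in enumerate(codons):
            if codon == "ATG" and start is None:
                start = j                       # first ATG opens the ORF
            elif codon in ("TAA", "TAG", "TGA") and start is not None:
                max_orf = max(max_orf, (j - start + 1) * 3)
                start = None                    # ORF closed by stop codon
    return n_starts, max_orf
```

Applied to each expressed lncRNA, statistics of this kind are what allow the maximum ORF length and start-codon counts of cluster VI and cluster VII transcripts to be compared.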
Combined, these analyses showed that many lncRNAs were enriched in specific subcellular fractions. Although some lncRNAs were enriched in the nucleus, many more were enriched in the cytosolic and ribosomal fractions.
Known long noncoding RNAs are enriched in different ribosomal fractions
The cellular localization of some lncRNAs was established previously and our results were largely in agreement with earlier findings. For example, MALAT1 and NEAT1, which are known to regulate nuclear processes such as gene expression [8] and the formation and maintenance of nuclear speckles and paraspeckles [7,26] respectively, were located in nuclear cluster I ( Figure 5). Another lncRNA with a known nuclear function is TUG1 (Figure 5), which is involved in the upregulation of growth-control genes [27]. We indeed found high levels of TUG1 in the nucleus, but the transcript also showed a clear enrichment in the fractions containing five or six ribosomes. The association of TUG1 with polysomes has not been described previously and suggests mechanisms of action in regulation of translation at the ribosome in addition to the previously described function in the nucleus.
In the ribosome-free cytosolic sample we found enrichment of lncRNAs that are known components of cytosolic complexes, for example RPPH1 and RN7SL1. RPPH1 is part of ribonuclease P [28] and RN7SL1 is part of the signal recognition particle that mediates co-translational insertion of secretory proteins into the lumen of the endoplasmic reticulum [29,30]. In addition, we also found many unstudied lncRNAs in the free cytosolic fraction. In cluster V, which showed enrichment in the 40S/60S sample, we found the lncRNA DANCR ( Figure 5). DANCR was recently shown to be involved in retaining an undifferentiated progenitor state in somatic tissue cells [10] and osteoblast differentiation [31]. The exact mechanisms through which DANCR acts are unknown, but our data suggest a role for DANCR predominantly outside of the nucleus. One of the most abundant lncRNAs in our data was the evolutionary conserved and imprinted H19. This transcript is a strong regulator of cellular growth and overexpression of H19 contributes to tumor initiation as well as progression, making it a frequently studied noncoding RNA in cancer [9,32]. An enrichment of H19 in the cytoplasm over the nucleus has previously been observed [3]. Here, we found only moderate levels of H19 RNA in the nucleus and ribosome-free cytosol, but very high levels of H19 RNA associated with ribosomes ( Figure 5). This predominant association with ribosomes suggests a possible role for H19 in the regulation of the translation machinery and, more specifically, in polysomal complexes.
CASC7 was the only lncRNA that was enriched in the sample with seven or more ribosomes. Even though CASC7 has been identified as a cancer susceptibility candidate, not much is known about this transcript. Our data indicate that it is sequestered to large polysomal complexes and it may thus function in regulation of translation.
Using quantitative PCR, we confirmed the enrichment of NEAT1 and MALAT1 in the nucleus and the enrichment of TUG1 and H19 in ribosomes (Additional file 9).
These results reveal the subcellular enrichment of known and unknown lncRNAs and suggest that many lncRNAs function primarily outside the nucleus.
Discussion
We performed transcriptome analyses on subcellular samples of the human cell line LS-174 T-pTER-β-catenin and found that the lncRNAs that were expressed in these cells were present in all subcellular fractions, but the majority of the expressed lncRNAs were enriched in the cytosol and in ribosomes. Our data partially contradict an earlier study in which most lncRNAs were found enriched in the nucleus, compared to the cytoplasm [3]. This discrepancy could have resulted from the use of different cell types, but may also have partially resulted from measuring and comparing relative enrichments between multiple samples. Measuring the whole cytoplasm would thus result in different enrichment values compared to analysis of a specific subset of the cytoplasm, such as the ribosomes.
We are not the first to find lncRNAs associated with ribosomes. Ribosome profiling in mouse embryonic stem cells also showed examples of these interactions and our results overlap with the results from that study [16]. For example, both our work and work from Ingolia et al. pinpoint the lncRNA NEAT1 as not highly associated with ribosomes. The results for MALAT1 are more intricate: we found that MALAT1 was strongly enriched in the nucleus, but previous work showed binding of ribosomes to the 5′ part of this lncRNA [16,33]. It is possible that a small proportion of the MALAT1 transcripts is bound by ribosomes. It is also likely that ribosomal association with lncRNAs is specific to cell type, growth condition and organism.
Our data add significant insight into ribosomal association of lncRNAs, because ribosomal profiling and ribosomal fractionation provide different, yet complementary, information. In ribosome profiling, specific binding sites of ribosomes are measured and the amount of binding is estimated based on the total amount of reads in the ribosome-bound versus the total RNA sample. By applying ribosomal fractionation we can directly measure the amount of ribosomes associated per lncRNA. Moreover, we measured the full range of subcellular samples including free cytosolic and nuclear RNA in one analysis. From our data we can conclude that many lncRNAs are found in complexes that contain multiple ribosomes. In addition, the enrichment of lncRNAs in ribosomal fractions shows that many lncRNAs make up a relatively larger part of the ribosomal samples than of the nuclear sample. This did not change when sncRNAs were excluded from the analyses. It should be noted that the identification of the ribosomes was based on size fractionation and RNA content. We can therefore not fully exclude that the lncRNAs are associating with protein complexes of sizes similar to the specific amounts of ribosomes [34]. However, these thus far unknown complexes would have to be present in such high quantities that the result is an enrichment of the associated transcripts equal to the enrichment of proteincoding transcripts. Moreover, we found lncRNAs in different ribosomal fractions, so the alternative explanation would require the involvement of multiple different protein complexes.
So why do lncRNAs associate with ribosomes? The possibility that these lncRNAs all code for proteins was recently eliminated by in-depth comparison of ribosome occupancy around translation termination codons [17]. lncRNAs did not show a steep drop in ribosomal binding after the translation termination codons (determined by the ribosome release score), as was seen for protein-coding genes. However, that does not exclude the possibility that ribosomes spuriously bind initiation codons in lncRNAs. In our data, the amount of ribosomes per lncRNA correlates with lncRNA length, maximum ORF length and the number of ORFs present per lncRNA, but those three factors are not independent of each other.
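The dependence noted above, between the amount of associated ribosomes and lncRNA length, maximum ORF length, and number of ORFs, can be made concrete with a minimal ORF scan. This is an illustrative sketch (forward reading frames only, ATG start codon, TAA/TAG/TGA stops), not the annotation pipeline used in the study:

```python
START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def find_orfs(seq):
    """Return (start, end) pairs of ORFs in the three forward frames.

    An ORF runs from an ATG to the first in-frame stop codon;
    coordinates are 0-based and end-exclusive (stop codon included).
    """
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        for i in range(frame, len(seq) - 2, 3):
            if seq[i:i + 3] != START:
                continue
            for j in range(i + 3, len(seq) - 2, 3):
                if seq[j:j + 3] in STOPS:
                    orfs.append((i, j + 3))
                    break
    return orfs

def orf_stats(seq):
    """Number of ORFs and maximum ORF length (nt) for one transcript."""
    lengths = [end - start for start, end in find_orfs(seq)]
    return {"n_orfs": len(lengths), "max_orf_len": max(lengths, default=0)}
```

Applied across a set of transcripts, such per-transcript statistics are exactly the covariates whose mutual dependence (longer transcripts tend to harbor more and longer ORFs) complicates the correlation analysis.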
It is possible that one of the processes that keep lncRNAs at ribosomes is nonsense-mediated decay (NMD). NMD functions via ribosomal binding and has previously been described as a possible breakdown route of the noncoding RNA GAS5 [35]. However, if NMD of a transcript results in such strong enrichment in the ribosomal fractions as observed in our experiments, it would mean that under standard culturing conditions a very significant portion of transcripts at ribosomes are engaged in NMD and not in active translation.
Arguably the most attractive hypothesis is that lncRNAs have functional roles in regulating translation. This could be a general phenomenon in which the lncRNAs occupy the ribosomes to keep them in a poised state and inhibit the energetically expensive process of translation until specific stimulatory cues are received. Alternatively, lncRNAs could regulate translation of specific protein-coding transcripts, for example by sequence-specific pairing. Indeed, recent data show that at least some lncRNAs associate with ribosomes to exert such a function [36]. For another class of noncoding RNAs, the microRNAs, similar roles have also been described [34]. One specific lncRNA, the antisense lncRNA of Uchl1, has been shown to regulate the association of sense Uchl1 with active polysomes in mice [36]. This regulatory function was partially established via the sequence homology between the lncRNA and the target mRNA. Translation regulatory mechanisms based on sequence homology have also been found for noncoding transcripts in bacteria [37]. Of the 25 antisense lncRNAs expressed in our data, only three pairs had both partners expressed and showed subcellular co-localization: DYNLL1 and DYNLL1-AS1, PCBP1 and PCBP1-AS1, and WAC and WAC-AS1 (Additional file 10). The fact that we found so few co-localizing sense-antisense pairs makes it unlikely that a similar mechanism is abundant in the human system studied here.
Conclusions
Our data show that different subcellular compartments differ significantly in RNA content, especially when the nucleus is compared to the ribosomal fractions. The lncRNAs expressed in this cell line are found in all subcellular samples and show an intricate correlation profile with protein-coding transcripts. Most lncRNAs are enriched in the cytosolic samples (free and 40S/60S) and in the subcellular samples containing one, two or three ribosomes. The fact that lncRNAs show enrichment in diverse subcellular fractions and not only the nucleus suggests that lncRNAs may have a wider range of functions than currently anticipated. Our study provides insight into this diversity, and our data can serve as a valuable resource for the functional characterization of individual lncRNAs.
Accession numbers
All next-generation sequencing data used in this study can be downloaded from EMBL European Nucleotide Archive [PRJEB5049].
Cell culture and media
Human colon cancer cells carrying a doxycycline-inducible short hairpin RNA against β-catenin (LS-174 T-pTER-β-catenin [23]) were cultured in 1X DMEM + GIBCO GlutaMAX™ (Life Technologies, Carlsbad, CA, USA) supplemented with 10% fetal calf serum and penicillin-streptomycin. Cells were harvested during the exponential growth phase.
Ribosome fractionation
All steps of the mono- and polyribosome profiling protocol were performed at 4°C or on ice. Gradients of 17% to 50% sucrose (11 mL) in gradient buffer (110 mM KAc, 20 mM MgAc and 10 mM HEPES pH 7.6) were poured the evening before use. Three replicates of 15 cm dishes with LS-174 T-pTER-β-catenin cells were lysed in polyribosome lysis buffer (110 mM KAc, 20 mM MgAc, 10 mM HEPES, pH 7.6, 100 mM KCl, 10 mM MgCl, 0.1% NP-40, freshly added 2 mM DTT and 40 U/mL RNasin (Promega, Madison, WI, USA)) with the help of a Dounce tissue grinder (Wheaton Science Products, Millville, NJ, USA). Lysed samples were centrifuged at 1200 g for 10 min to remove debris and loaded onto the sucrose gradients. The gradients were ultra-centrifuged for 2 h at 120,565 g in an SW41 Ti rotor (Beckman Coulter, Indianapolis, IN, USA). The gradients were displaced into a UA6 absorbance reader (Teledyne ISCO, Lincoln, NE, USA) using a syringe pump (Brandel, Gaithersburg, MD, USA) containing 60% sucrose. Absorbance was recorded at an optical density of 254 nm. Fractions were collected using a Foxy Jr Fraction Collector (Teledyne ISCO). Corresponding fractions from each of the three replicates were merged prior to RNA isolation.
Nuclei isolation
Pelleted nuclei of LS-174 T-pTER-β-catenin cells were obtained by centrifugation at 1200 g after whole-cell lysis prior to ribosome fractionation (see previous section). To exclude the presence of rough endoplasmic reticulum and thus validate the purity of the isolated nuclei, nuclear staining and imaging were performed (Additional file 1).
Data analysis
RNA-seq reads were mapped using the Burrows-Wheeler Aligner [38] (BWA-0.5.9) (settings: -c -l 25 -k 2 -n 10) onto the human reference genome hg19. Only uniquely mapped, non-duplicate reads were considered for further analyses. Reads that mapped to exons were used to determine the total read counts per gene. Exon positions were based on the GENCODE v18 annotation [24]. The polyribosomal samples (from two to seven or more associated ribosomes) yielded 13 to 32 million reads. For the non-polyribosomal samples (nuclear, free cytosolic, combined 40S and 60S, and 80S (monosomes)), data from three sequencing lanes (technical replicates) were merged, yielding 6 to 64 million reads. Data analysis was performed on the genes with GENCODE gene_type: protein coding, antisense, processed transcript, long intergenic noncoding RNA and snRNA/snoRNAs. Filtering was performed on the read count per gene over all samples combined. The per-transcript sum of the sequencing reads in all samples showed a bimodal distribution (Additional file 11). Based on these data, we used a total read count threshold of 2,500 per transcript to select the expressed genes. Genes with a total read count below 2,500 were filtered out, leaving 7,734 genes for further analysis. Subsequently, normalization was performed using DESeq [39] to correct for library size and technical biases. Gene clustering was performed using a model-based clustering approach with the R package HTSCluster [25]. The protein coding-lncRNA correlation matrix was calculated using Spearman rank correlation and visualized in Figure 3.
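The filtering and normalization steps can be sketched in a few lines. This is an illustrative re-implementation, not the code used in the study (which used the R package DESeq); the size-factor computation below follows DESeq's median-of-ratios approach:

```python
import numpy as np

def filter_and_normalize(counts, min_total=2500):
    """Filter low-coverage genes and apply DESeq-style normalization.

    counts: genes x samples matrix of raw read counts.  Genes whose
    summed count over all samples is below ``min_total`` (the study's
    expression threshold of 2,500) are dropped; the remaining matrix
    is scaled by median-of-ratios size factors.
    """
    counts = np.asarray(counts, dtype=float)
    expressed = counts.sum(axis=1) >= min_total
    kept = counts[expressed]
    with np.errstate(divide="ignore"):
        logs = np.log(kept)
    geo_mean = logs.mean(axis=1)          # per-gene log geometric mean
    ok = np.isfinite(geo_mean)            # only genes with no zero count
    size_factors = np.exp(np.median(logs[ok] - geo_mean[ok, None], axis=0))
    return kept / size_factors, expressed, size_factors
```

The normalized matrix can then feed the Spearman rank correlations used for the protein coding-lncRNA matrix, e.g. `scipy.stats.spearmanr` on pairs of per-gene subcellular profiles.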
The Clinical Infection with Pigeon Circovirus (PiCV) Leads to Lymphocyte B Apoptosis But Has No Effect on Lymphocyte T Subpopulation
The pathology of pigeon circovirus (PiCV) is still unknown, but it is regarded as an immunosuppressant. This study aimed to find a correlation between natural PiCV infection and immunosuppression. The study was conducted with 56 pigeons divided into the following groups: PiCV-positive pigeons showing (group S) or not showing (group I) non-specific clinical symptoms, and asymptomatic pigeons negative for PiCV (group H). The percentage and apoptosis of T CD3+ and B IgM+ splenocytes; the expression of CD4, CD8, and IFN-γ genes in splenic mononuclear cells; the number of PiCV viral loads in the bursa of Fabricius; and the level of anti-PiCV antibodies were analyzed. The results showed that the percentage of B IgM+ cells was almost two-fold lower in group S than in group H, and that ca. 20% of these lymphocytes were apoptotic. No increased apoptosis was detected in the T CD3+ subpopulation. The PiCV viral loads were approximately one thousand and ten thousand times higher in group S than in groups I and H, respectively. Our results indicate a possible correlation between the number of PiCV viral loads and the severity of PiCV infection and confirm that PiCV infection leads to the suppression of humoral immunity by inducing B lymphocyte apoptosis.
Introduction
One of the most significant infectious agents found in pigeons is the pigeon circovirus (PiCV), which belongs to the genus Circovirus and the family Circoviridae [1]. The circovirus infection in pigeons was initially documented almost three decades ago in Canada, the USA, and Australia [2,3]. Today, PiCV is distributed worldwide with an average prevalence of ca. 70% [4]. Asymptomatic infections with this virus are quite common and were noted, on average, in ca. 44% of domestic pigeons in Poland and in ca. 63% of domestic pigeons in China [5][6][7]. The virus is transmitted mainly horizontally through ingestion or inhalation of virus-contaminated fecal material and feather dust [8]. The high PiCV prevalence results from the specifics of the pigeon breeding and rearing system. Bird racing, pigeon exhibitions, and "one loft race" breeding facilities could contribute to the rapid spread of PiCV infections in pigeon populations and to the production of recombinant variants of the virus, as has been noted for other avian circoviruses such as those infecting parrots [9]. The recombination events often detected in the pigeon circovirus genome could result from the practices mentioned above [5,10,11].
The pathology of PiCV infections in pigeons has not been fully investigated to date. PiCV mainly targets the bursa of Fabricius (bF), but its genetic material has also been detected in other organs associated with the immune system, including the thymus and the spleen. The virus or its genetic material has also been detected in various organs of the digestive tract and also in the skin, thyroid gland, and third eyelid [5,12,13]. Pigeon circovirus infections have led to the loss of lymphoid tissue in immune system organs, and for this reason, PiCV is regarded as an immunosuppressive agent in pigeons [14,15]. A higher prevalence of accompanying infections with various pathogens in PiCV-positive pigeons suggests that this virus could induce immunosuppression [15][16][17]. The mechanism of immunosuppression induced by PiCV has not been thoroughly elucidated, but it appears that PiCV infections cause lymphocyte damage [15]. The porcine circovirus type 2 and duck circovirus can activate lymphocyte apoptosis [18,19], and a similar mechanism could be involved in PiCV infection. A laboratory protocol for PiCV culturing has not been developed yet, which is why an experimental challenge with this virus is not possible. This makes performing experiments with PiCV difficult, but investigations with pigeons naturally infected with the circovirus are a possible alternative [20].
This study aimed to answer whether there is any correlation between PiCV natural infection and immunosuppression and, if yes, which main mechanisms of immunity are impaired by the virus.
The Percentage of T CD3 + and B IgM + Lymphocytes in the Spleen
Representative figures of all cytometric analyses of splenic mononuclear cells isolated from the examined pigeons are shown in Figure 1. The average percentage of T CD3+ lymphocytes was the highest in the spleen samples isolated from group S pigeons (75.22 +/− 8.92%). The values of this parameter differed statistically (p = 0.00) between birds from group I (59.05 +/− 9.20%) and group H (50.45 +/− 10.56%). In turn, the average percentage of splenic B IgM+ cells was the highest in the control group (H) pigeons, reaching 30.68 +/− 8.82%, and differed statistically (p = 0.00) between groups I (15.81 +/− 6.38%) and S (6.55 +/− 2.44%). The percentage of this lymphocyte subpopulation also differed between pigeons from groups S and I (p = 0.00) (Figure 2).
(Fragment of the Figure 1 caption: early and late apoptotic cells are located in Q1 and Q2 (Annexin V-APC/7-AAD = −/+ and +/+). The samples were standardized to 1 × 10^6 mononuclear cells.)
Figure 2. The results of the flow cytometry analyses. The samples were standardized to 1 × 10^6 mononuclear cells. The mean percentage and the ratio of T CD3+ to B IgM+ splenic lymphocytes isolated from the examined pigeons. The different letters (A, B, C) indicate a statistically significant difference between the investigated groups (p < 0.01 and p < 0.05, respectively) in the Kruskal-Wallis nonparametric test for independent samples. Error bars represent the standard error of the mean.
Apoptosis and Necrosis in Splenic T and B Lymphocytes
The percentage of apoptotic T CD3+ cells was the highest in group H pigeons (4.33 +/− 1.25%) and differed (p = 0.02) only compared to the birds from group I (2.60 +/− 0.96%). There was no statistically significant difference in the apoptotic T CD3+ cell percentage between birds from group S (3.00 +/− 1.56%) and the other two groups.
There were no statistically significant differences in the percentage of necrotic T CD3 + cells between all the investigated groups of pigeons. The average percentage of this splenocyte subpopulation was at 0.13 +/− 0.11%.
The highest percentage of necrotic B IgM + cells was detected in group S pigeons and reached 2.38 +/− 1.64%. It differed statistically (p = 0.00) from the values reported in groups I (1.14 +/− 0.58%) and H (0.70 +/− 0.35%), while there were no statistically significant differences between these two latter groups ( Figure 3B).
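The group comparisons above rely on the Kruskal-Wallis nonparametric test for independent samples. A minimal sketch with hypothetical per-bird percentages (the real inputs are the per-pigeon flow cytometry measurements reported above):

```python
from scipy.stats import kruskal

# Hypothetical per-bird B IgM+ percentages for the three study groups;
# the actual measurements are the flow cytometry values reported above.
group_s = [6.1, 7.2, 5.8, 6.9, 7.5]
group_i = [15.2, 16.8, 14.9, 16.1, 15.5]
group_h = [29.7, 31.2, 30.4, 32.0, 29.1]

# Rank-based H statistic; no normality assumption is required.
h_stat, p_value = kruskal(group_s, group_i, group_h)
significant = p_value < 0.01  # the study reports group differences at p < 0.01
```

Because the test operates on ranks, it is robust to the non-normal, bounded nature of percentage data, which is presumably why a nonparametric test was chosen here.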
The Expression of CD4, CD8, and IFN-γ Genes
The mean relative expression of all analyzed genes in both infected groups was similar to that found in the control group (H). The highest expression was detected in the case of the IFN-γ gene in splenocytes isolated from group S pigeons (1.90 +/− 2.68). However, the difference was not statistically significant due to a high standard deviation. The expression of the CD8 gene was insignificantly higher in the splenocytes isolated from group I pigeons (1.34 +/− 0.45) than group S birds (0.94 +/− 0.26). In turn, the expression of CD4 gene was statistically higher (p = 0.00) in the lymphocytes isolated from the spleens of group S pigeons (1.23 +/− 0.24) than in those isolated from the group I birds (0.86 +/− 0.24) ( Figure 4).
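The relative expression values reported above cluster around 1, consistent with quantification relative to a calibrator group. The exact calculation is not described in this excerpt; a common approach for qPCR data is the Livak 2^-ΔΔCt method, sketched here with hypothetical Ct values:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCt relative quantification (hypothetical Ct values).

    ct_target, ct_ref: Ct of the gene of interest and of the reference
    gene in the test sample; the *_cal arguments are the same values in
    the calibrator (e.g. the uninfected control group).  A result of
    1.0 means expression at the calibrator's level.
    """
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** -ddct
```

For example, a target gene whose Ct in the test sample is one cycle lower (relative to the reference gene) than in the calibrator corresponds to a two-fold higher relative expression.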
Determination of Anti-PiCV IgY
The OD450 value reached 1.28 +/− 0.97, 0.51 +/− 0.27 and 0.22 +/− 0.05 in groups S, I, and H, respectively. The statistical difference was detected only between group H and the other two groups at confidence levels of 99% and 95% (p = 0.00 and p = 0.02 for groups S and I, respectively) ( Figure 5).
PiCV Viral Loads in the Bursa of Fabricius Samples
The results of ddPCR for PiCV viral loads are presented in Table 1 and Figure 6.
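ddPCR yields absolute copy numbers from the fraction of positive droplets via Poisson statistics. The sketch below is illustrative; the droplet volume of 0.85 nL is an assumption (typical of Bio-Rad QX-series instruments), not a value stated in the paper:

```python
import math

DROPLET_NL = 0.85  # assumed droplet volume in nanolitres (QX-series typical)

def ddpcr_copies_per_ul(positive, total, droplet_nl=DROPLET_NL):
    """Template copies per microlitre of reaction from droplet counts.

    Molecules distribute over droplets following a Poisson law, so the
    mean number of copies per droplet is lambda = -ln(1 - p), where p
    is the fraction of PCR-positive droplets.
    """
    if not 0 < positive < total:
        raise ValueError("need at least one positive and one negative droplet")
    lam = -math.log(1.0 - positive / total)
    return lam / (droplet_nl * 1e-3)  # convert nL to uL
```

Because the Poisson correction accounts for droplets containing more than one template molecule, the estimate remains accurate even at high positive-droplet fractions, which underlies the wide dynamic range exploited here for viral-load comparison.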
Discussion
Because good-quality racing pigeons are very valuable domestic animals, it is essential to discover the mechanisms of pathology of the most common pathogens occurring in those birds, which is fundamental for the development of proper preventive schedules. One of the most significant infectious agents found in pigeons is PiCV. The circoviruses are well-known immunosuppressive agents in various animals. The pigeon circovirus is characterized by bursotropism, and the intracytoplasmic inclusion bodies are found mainly in bursal macrophages [13][14][15]. Histologic examination of bF sections positive for PiCV inclusion bodies revealed lymphocyte depletion and necrosis. However, lymphocytic depletion and inclusion bodies were also found in another primary lymphatic organ, the thymus. This has underlain a theory that circoviruses could be important factors causing general immunosuppression in pigeons by affecting both T and B lymphocytes [14]. Additionally, one of the previous studies has indicated apoptosis of bursal cells in pigeons positive for PiCV [15]; however, pigeon thymus samples were not investigated with methods allowing the detection of cellular death. One of the methods useful for such analyses is flow cytometry. Its application is feasible because, in the early stage of apoptosis, phosphatidylserine is translocated from the inner to the outer leaflet of the plasma membrane. If fluorochrome-labelled Annexin V is added to the samples, it will create a complex with the phosphatidylserine exposed on the cell surface. In this way, Annexin V staining allows distinguishing between viable, necrotic, and apoptotic cells. However, flow cytometry has some limitations in staining for apoptosis: the isolated mononuclear cells used for extracellular staining have to be alive. For this reason, the method used for cell isolation must not itself cause cellular death.
Unfortunately, both the bF and the thymus contain high amounts of fibrous connective tissue, and therefore, collagenase needs to be used during mononuclear cell isolation [21]. The treatment with collagenase leads to cellular death, which is why neither bF nor thymus was used for flow cytometry in our study. One of our previous studies has revealed that the secondary immune organ, the spleen, could be useful for flow cytometry, because the isolation of splenic mononuclear cells does not require digestion with collagenase; simple tissue homogenization with a manual grinder is enough. Moreover, high populations of both T and B cells are present in this organ [21,22], and for this reason, we decided to use spleen samples for the isolation of mononuclear cells.
The results of the flow cytometry seem very interesting to us. We detected significant differences in the percentages of both analyzed lymphocyte subpopulations, depending on the PiCV infection status. There were two opposite trends. The percentage of T CD3+ lymphocytes increased with the PiCV infection severity (PiCV viral loads) and was the highest in group S. In contrast, the B IgM+ cell subpopulation was almost two times lower in the birds from group S than in those from group H. The ratio of T CD3+ lymphocytes to B IgM+ varied from 1.6:1 in group H to 11:1 in group S. Those differences suggest that PiCV infection could affect the B lymphocytes. The Annexin V staining revealed that, despite a difference in group H, apoptosis was at a similar level in the T CD3+ cell subpopulations isolated from spleens of the examined pigeons. The lack of any trend in the group H birds suggests that this difference could be random and result from the fact that the examined birds originated from different clinical cases, not from experimental inoculation. Much clearer results were obtained for B IgM+ cells. We noted that approximately 20% (two times more than in the subclinically infected pigeons) of those cells were in an early apoptotic state. The mutual proportions between apoptotic T CD3+ and B IgM+ lymphocytes ranged from 1:2 in group H to 1:6 in group S. Apoptosis is a form of cellular death, which regulates cellular homeostasis by removing unnecessary or damaged cells. The role of virus-induced apoptosis in lymphocyte depletion and the progression of viral disease has been reported for the best-known animal circovirus, porcine circovirus 2 (PCV-2) [23]. One of the most common biochemical signs of apoptosis is the irreversible fragmentation of genomic DNA, resulting from the activation of nuclear endonucleases that cleave DNA between nucleosomal units.
These DNA fragments can be detected at the cellular level on tissue sections by the TUNEL assay, which was performed by Abadie et al. (2001) [15] in pigeons infected with PiCV. The results of those assays partially correspond to ours, because different patterns of lesions (based on apoptosis intensity) were observed in pigeons depending on the PiCV infection status. The number of bF apoptotic cells in sick and PiCV-infected birds was much higher than in the uninfected individuals. In addition, atrophy of bursal cells was detected. Similar observations were also made for other avian circoviruses, such as goose circovirus and psittacine beak and feather disease virus [24,25]. Until now, the mechanism of PiCV-induced apoptosis remains unknown. However, in the case of PCV-2, the ORF C3 protein is responsible for lymphocyte depletion and further immunosuppression [26]. The apoptotic activity of the ORF 3 protein was also confirmed for duck circovirus [19]. The ORF C3 protein is present in the pigeon circovirus genome [1]; however, its role is still unclear.
The highest percentage of T CD3+ cells in pigeons clinically infected with PiCV suggests a cellular immune response to the viral infection. The CD3 receptor is present on all T cells, and, for this reason, we decided to investigate the expression of genes encoding the T CD4 and CD8 cellular receptors. Our results showed that the expression of both genes in groups S and I was slightly higher than in group H. However, those differences were not as apparent as those detected in our previous study [20]. In our opinion, it could be because, in the previous study, we compared the cellular immune response in PiCV-positive and PiCV-negative pigeons in the context of an additional stimulating factor, which was the immunization with PiCV recombinant capsid protein. However, there are no other literature data with which to compare our findings. The IFN-γ gene plays a significant role in both immediate and long-term immune responses to a viral infection; hence, its expression was also evaluated in this study. The changes in IFN-γ gene expression were the highest amongst all genes analyzed in this study; however, the result was obscured by large variation between the samples, especially those collected from pigeons classified as group S. The above could be due to using birds naturally infected with PiCV, which originated from various clinical cases. Those birds were likely to be at different stages of infection.
A surprising observation was made for anti-PiCV IgY. All of the examined birds were seropositive, which corresponds with the results of our previous study, which revealed that PiCV seroprevalence was ca. 70% regardless of the pigeons' infectious status [6]. The highest antibody levels, as well as variability between samples, were observed in group S. In this group of pigeons, the values of standard deviation were significantly higher than in the other groups. This phenomenon is typical of investigations conducted with clinical field cases and, similarly to the IFN-γ gene expression, suggests that the examined pigeons could be at different stages of infection, which fully corresponds with findings from one of our previous studies [6]. However, because experimental infection with PiCV cannot be performed under controlled conditions, experiments conducted with birds originating from clinical cases are the only possible method. One doubt could arise because the B IgM+ lymphocyte percentage was the lowest, whereas the IgY antibody level was the highest in pigeons from group S. It must be remembered, however, that IgY antibodies appear later than IgM. Moreover, the half-life of antibodies may be longer than the life of the lymphocyte. Therefore, with time since infection, the antibody level increases, and the virus-infected cells gradually die.
Until recently, PiCV was considered as one of the putative factors contributing to the complex disease called Young Pigeon Disease Syndrome (YPDS) [27]. This theory was based on the fact that each pigeon presenting symptoms of YPDS was PiCV positive. Moreover, semi-quantitative analyses showed that the amount of PiCV genetic material in bF samples was much higher in the PiCV-positive and diseased birds than in the healthy pigeons [28]. Those observations were partially confirmed by the qPCR method developed by Duchatel et al. (2009) [29], who revealed that PiCV viral loads were significantly higher in liver samples collected from birds suffering from YPDS than in the subclinically infected individuals. A similar trend was noted for bF and spleen samples, but the differences were not significant, probably because of the small number of examined pigeons. In our study, we decided to use ddPCR for PiCV viral load quantification because of its high sensitivity, which allows detecting even single copies of the PiCV genome [20]. The results obtained indicate a strong correlation between PiCV viral loads in bF and the pigeons' clinical status, because the number of PiCV genome copies was approximately a thousand times higher in the group S pigeons than in the subclinically infected birds. These results correspond to those obtained by the authors mentioned above. It is also very interesting that pigeons classified as group H (negative for PiCV in cloacal swabs and showing no clinical symptoms of YPDS) proved to be positive in ddPCR. The average PiCV viral load was the lowest in this group and was approximately ten times lower than in bF from pigeons classified as subclinically infected. The above indicates that screening pigeons for PiCV based on swab sample examination is not a perfect method and could lead to false-negative results. However, the swab samples are a much better material for PiCV screening than blood samples, as described earlier [29].
The experimental infection with PiCV performed by Schmidt et al. (2008) [30] has revealed that it is impossible to induce YPDS by pigeon inoculation with this virus and that a combined effect of various factors, such as stress and virus-induced immunosuppression, could occur. However, since the confirmation of rotavirus A (RVA) infection in domestic pigeons in Australia, causing a disease similar to YPDS [31], the role of PiCV in the etiology of the disease syndrome has been depreciated. One of the most recent theories says that the characteristic clinical symptoms of the disease are directly related to RVA infection [32,33].
Given the current state of knowledge as well as facts found in this study, it should be noted that, despite no evidence for the role of PiCV in the etiology of YPDS, infection with this virus cannot be ruled out as an immunosuppression-inducing trigger. Our results indicate that the pigeon circovirus induces B cell apoptosis, which may affect the impairment of humoral immunity. The above can also explain the slower-developing post-vaccination immunity demonstrated in one of our earlier studies [20].
Given the current epizootic situation of PiCV, the intensity of circoviral infections in pigeons showing clinical symptoms of YPDS, and the emergence of RVA, it seems reasonable to conduct further studies to determine the real or potential role of these two viruses in the etiology of YPDS.
Ethical Statement
The experiment was carried out in strict observance of the Local Ethics Committee on Animal Experimentation of the University of Warmia and Mazury in Olsztyn (Authorization No. 41/2019). The researchers made every effort to minimize the suffering of birds.
Pigeons
The experiment was conducted in three groups of pigeons. The first group included pigeons naturally infected with PiCV showing non-specific symptoms of the disease (Sick group-S; n = 14); the second one included pigeons that were positive for PiCV but clinically healthy (Infected group-I; n = 35); and the third one included PiCV-negative pigeons showing no clinical symptoms (control group-H; n = 7). All birds were purchased from private breeding facilities. The classification into each group was based on the clinical status of birds and the presence of PiCV genetic material in cloacal swabs. The pigeons showing no less than five of the following clinical symptoms were classified as group S: apathy, decreased body weight, decreased food intake, watery droppings or greenish diarrhea, crop filled with liquid or regurgitation from the crop, increased water intake, extensive diphtheria lesions in beak cavity, respiratory disorders and ruffled feathers. The birds classified as group I showed none of the above clinical symptoms but were positive in PCR screening for PiCV. The pigeons were free of most common pigeon viruses such as avian orthoavulavirus 1, pigeon adenovirus, pigeon herpesvirus, and pigeon rotavirus, which was confirmed by PCR. All investigated birds were young (6-9 weeks old) racing/carrier pigeons. From each bird, blood samples were collected to separate the serum for the detection of anti-PiCV antibodies. Afterward, the birds were euthanized in the CO 2 chamber, and organs of their immune system (spleen and bursa of Fabricius) were collected for further analyses.
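The grouping rule described above can be encoded as a small decision function. This is an illustrative restatement of the study design; birds with one to four symptoms fall outside the three groups and are returned as unclassified:

```python
def classify_pigeon(picv_pcr_positive, n_symptoms):
    """Assign a pigeon to study group S, I, or H.

    S: PiCV-positive (cloacal swab PCR) with at least five of the
       listed clinical symptoms.
    I: PiCV-positive but clinically healthy.
    H: PiCV-negative and clinically healthy.
    Birds matching none of the definitions are left unclassified (None).
    """
    if picv_pcr_positive and n_symptoms >= 5:
        return "S"
    if picv_pcr_positive and n_symptoms == 0:
        return "I"
    if not picv_pcr_positive and n_symptoms == 0:
        return "H"
    return None
```

Note that group H membership here is defined by the swab PCR result; as the ddPCR results show, such birds can still carry low viral loads in the bursa of Fabricius.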
Isolation of Mononuclear Cells from the Spleen
Mononuclear cells from the whole spleens collected from each bird were isolated with the method described previously [21]. Afterward, the absolute lymphocyte count (ALC) was determined in each sample with a Vi-Cell XR automatic cell viability analyzer (Beckman Coulter, Brea, CA, USA). Next, each sample was divided into two parts: the first was used for flow cytometry examination, whereas the second was used for RNA extraction for the analysis of selected gene expression.
Extracellular Staining for T CD3+ and B IgM+ Lymphocytes
The mononuclear cells isolated from spleen samples were standardized to 1 × 10⁶. Next, they were stained for T cells with anti-CD3 (PE) specific monoclonal antibodies obtained in our previous study [21] and for B cells with anti-IgM (FITC) polyclonal antibodies (Bio-Rad, Hercules, CA, USA). After staining, the samples were incubated in darkness on ice for 30 min. Next, the cells were rinsed twice in PBS and centrifuged, and the resulting pellets were resuspended and used for staining for apoptosis evaluation.
Staining for Apoptosis and Necrosis Evaluation
The staining was performed according to the protocol described by Maślanka et al. [34]. Cells stained for CD3 and IgM extracellular markers were washed on ice in the Annexin V binding buffer (BD Biosciences, Franklin Lakes, NJ, USA). The supernatants were removed by centrifugation, and the cells were suspended in the same buffer. Next, APC-conjugated Annexin V (BD Biosciences, Franklin Lakes, NJ, USA) and 7-AAD (BD Biosciences, Franklin Lakes, NJ, USA) were added to the cells. After incubation, the cells were diluted in the Annexin V binding buffer and analyzed with flow cytometry within 1 h using a FACSCanto II flow cytometer (BD Biosciences, Franklin Lakes, NJ, USA). Data were acquired in FACSDiva Software 6.1.3 (BD Biosciences, Franklin Lakes, NJ, USA). Cells were analyzed and immunophenotyped in FlowJo 10.6.2 (BD Biosciences, Franklin Lakes, NJ, USA). Data were expressed as the mean percentage of a particular subpopulation of lymphocytes ± standard deviation.
RNA Isolation and qPCR for Selected Genes Expression
The number of mononuclear cells isolated from spleen samples was standardized to 5 × 10⁶ and used for RNA isolation. Genomic RNA was isolated using a commercial reagent kit (RNeasy Mini Kit; Qiagen, Hilden, Germany). Reverse transcription was conducted with a commercial kit (High-Capacity cDNA Reverse Transcription Kit; Life Technologies, Carlsbad, CA, USA) according to the manufacturer's guidelines. The relative expression of the genes encoding the CD4 and CD8 T cell receptors and IFN-γ was determined as described previously [22], using a commercial reagent kit (Power SYBR® Green PCR Master Mix kit; Life Technologies, Carlsbad, CA, USA) and a LightCycler 96 (Roche, Basel, Switzerland). The relative expression was calculated using the 2^(−ΔΔCq) method [35], normalized for efficiency corrections, averaged RT and qPCR repeats, the control group (H), and the reference gene encoding glyceraldehyde 3-phosphate dehydrogenase (GAPDH), in GenEx v. 6.1.0.757 data analysis software (MultiD Analyses, Göteborg, Sweden).
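As a rough illustration, the 2^(−ΔΔCq) calculation can be sketched in a few lines of Python. All Cq values below are hypothetical, and the efficiency corrections and replicate averaging performed in GenEx are deliberately omitted.

```python
# Minimal sketch of the 2^(-ΔΔCq) (Livak) relative expression method.
# All Cq values are hypothetical; efficiency corrections and replicate
# averaging (handled in GenEx in the study) are omitted for clarity.

def fold_change(cq_target_sample, cq_ref_sample,
                cq_target_control, cq_ref_control):
    """Relative expression of a target gene in a test sample versus the
    control group, normalized to a reference gene (here, GAPDH)."""
    delta_cq_sample = cq_target_sample - cq_ref_sample      # ΔCq, test group
    delta_cq_control = cq_target_control - cq_ref_control   # ΔCq, control group
    delta_delta_cq = delta_cq_sample - delta_cq_control     # ΔΔCq
    return 2 ** -delta_delta_cq

# Example: hypothetical IFN-γ Cq values, sick bird vs. healthy control
print(fold_change(24.0, 20.0, 26.0, 20.0))  # 2**-((24-20)-(26-20)) = 4.0
```

A fold change above 1 indicates higher expression in the test group than in the control group; a value of exactly 1 means no change after normalization.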
In-House ELISA for Determination of Anti-PiCV IgY
The assay was performed according to the protocol described in the previous study [6]. The concentration of the detecting antigen was 20 µg/mL, and pigeon sera were diluted at 1:400. The dilution rate of the primary antibodies (rabbit anti-pigeon IgG; Antibodies-online, Atlanta, GA, USA) was 1:30,000, whereas the dilution of the secondary antibodies (goat anti-rabbit IgG conjugated with horseradish peroxidase (HRP); BD Biosciences, Franklin Lakes, NJ, USA) was 1:1000. Each rinsing step was performed with an ELx405 automatic washer (BioTek, Winooski, VT, USA). Optical density was measured with an ELx800 spectrophotometer (BioTek, Winooski, VT, USA) at a wavelength of 450 nm. Data were expressed as mean OD₄₅₀ ± standard deviation.
Digital Droplet PCR (ddPCR) for PiCV Viral Loads in the Bursa of Fabricius Samples
Genomic DNA was extracted with the DNeasy Blood & Tissue Kit (Qiagen, Hilden, Germany) in accordance with the manufacturer's instructions. The volume of tissue homogenate used for DNA extraction was 200 µL. The homogenate was obtained by homogenization (TissueLyser II; Qiagen, Hilden, Germany) of 400 µg of the bursa of Fabricius in 500 µL of phosphate-buffered saline (PBS; Sigma-Aldrich, Schnelldorf, Germany). The ddPCR was carried out in a C1000 Touch thermal cycler (Bio-Rad, Hercules, CA, USA) with the ddPCR™ EvaGreen Supermix (Bio-Rad, Hercules, CA, USA). The primer sequences, the composition of the reaction mixture, and the thermal cycling conditions were described in our previous study [20]. After amplification, the plate containing the samples was placed in a QX200 droplet reader (Bio-Rad, Hercules, CA, USA) for analysis. The results were calculated with the following formula: the mean number of positive droplets in the sample minus the mean number of positive droplets in the negative control (background). The results were expressed as the mean PiCV genome copy number ± standard deviation per 22 µL of the sample.
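The copy-number formula quoted above amounts to simple arithmetic on droplet counts. A minimal sketch, using hypothetical droplet counts, might look like this:

```python
# Sketch of the background-corrected ddPCR readout described above:
# mean positive droplets in the sample minus mean positive droplets in
# the negative control. All droplet counts below are hypothetical.
from statistics import mean

def picv_copies_per_reaction(sample_positive, ntc_positive):
    """PiCV genome copies per 22 µL reaction, background-corrected."""
    background = mean(ntc_positive)            # mean positives in negative control
    return mean(sample_positive) - background  # mean positives in sample - background

print(picv_copies_per_reaction([1520, 1480], [10, 14]))  # (1520+1480)/2 - (10+14)/2 = 1488
```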
Statistical Analysis
The statistical analysis of the values of the parameters determined in the investigated groups was conducted with the Kruskal-Wallis non-parametric test for independent samples (flow cytometry, ddPCR, in-house ELISA) and the Mann-Whitney U test (gene expression) using Statistica v. 10.0 software (StatSoft, Kraków, Poland). Differences were considered significant at p < 0.05 and p < 0.01.

Funding: This project was financially co-supported by the Minister of Science and Higher Education under the program entitled "Regional Initiative of Excellence" for the years 2019-2022, Project No. 010/RID/2018/19, amount of funding 12,000,000 PLN.
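As an illustration only, the group comparisons described in the Statistical Analysis section could be reproduced with SciPy in place of Statistica. All measurements below are hypothetical, and group sizes are reduced for brevity.

```python
# Hedged sketch: reproducing the study's group comparisons with SciPy
# instead of Statistica. All values below are hypothetical.
from scipy.stats import kruskal, mannwhitneyu

# e.g. hypothetical percentages of CD3+ T cells in groups S, I, and H
s = [18.2, 15.4, 16.9, 14.8]   # sick
i = [22.1, 24.5, 21.7, 23.0]   # infected, clinically healthy
h = [27.3, 26.8, 28.1, 25.9]   # healthy controls

# Kruskal-Wallis test across the three independent groups
stat, p = kruskal(s, i, h)
print(f"Kruskal-Wallis: H = {stat:.2f}, p = {p:.4f}")

# Mann-Whitney U test for a pairwise comparison (as used for gene expression)
u, p2 = mannwhitneyu(s, i, alternative="two-sided")
print(f"Mann-Whitney U: U = {u}, p = {p2:.4f}")
```

Both tests are non-parametric, so they compare rank distributions rather than means, which suits the small group sizes and non-normal data typical of such experiments.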
"year": 2020,
"sha1": "17e29a605d24dd8930380b11f7bcf36cec9b1c8a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0817/9/8/632/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "50ba20ad16bd7c18aa6458c957f1e797cad25757",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Moving Beyond Retribution: Alternatives to Punishment in a Society Dominated by the School-to-Prison Pipeline
There is a growing national trend in which children and adolescents are funneled out of the public school system and into the juvenile and criminal justice systems, where students are treated as criminals in the schools themselves and are expected to fall into this pattern rather than even attempt to seek opportunities to fulfill the ever elusive "American Dream". There is a blatant injustice happening in our schools: places that, ironically, should be considered safe havens, places for knowledge, and means of escape for children who have already been failed by the system and sequestered in under-resourced, overcrowded, and over-surveilled inner cities. Focusing on the damage the public education system has caused and the ways in which policies and practices have effectively made the school-to-prison pipeline a likely trajectory for many Black and Latinx students, we hope to convey the urgency of this crisis and expose the ways in which our youth are stifled, repeatedly, by this form of systematic injustice. We will describe models of restorative justice practices, both within and beyond the classroom, and hope to convey how, no matter how well intentioned, they are not adequate solutions to a phenomenon tied to neoliberal ideologies. Thus, we ultimately aim to exemplify how a feminist approach to education would radically restructure the system as we know it, truly creating a path out of this crisis.
Introduction
Universal public education has two possible-and contradictory-missions. One is the development of a literate, articulate, and well-informed citizenry so that the democratic process can continue to evolve and the promise of radical equality can be brought closer to realization. The other is the perpetuation of a class system dividing an elite, nominally 'gifted' few, tracked from an early age, from a very large underclass essentially to be written off...The second is the direction our society has taken. The results are devastating in terms of the betrayal of a generation of youth. The loss to the whole of society is incalculable ([1], p. 162). 1

Our public education system is said to be one of the great equalizers of the United States-the leveler of the playing field in society, a way in which anyone can achieve the American Dream. Though, as we have seen throughout history and exemplified by Adrienne Rich's words, universal public education can create even starker class divides among those deemed worthy of a good education and those who are not [1]. Rich's assessment is bleak enough; however, what happens when the educational system continuously fails students, to the point that their trajectory into normative society is jeopardized?

1 Bolded emphasis added. Adrienne Rich's Arts of the Possible: Essays and Conversation is a collection of essays that imagine a more socially just world and speaks to those possibilities.
Educational policy changes in the past 20 years have intensified the inherent inequalities caused by the education system in the United States [2]. Currently, there are two journeys, as Alice Goffman notes, that lead people from childhood to adulthood: college (the ideal path for the elite) and prison (the predetermined path for many Black and Latinx 2 students) [3]. Rather than fostering an atmosphere of understanding, learning, collaboration, and limitless opportunity, current school-based practices and education models have more and more blurred the lines between school and jail, as we will shortly discuss. The school-to-prison pipeline is the pattern of (either subtly or forcibly) removing students from educational institutions, primarily through zero-tolerance policies 3 , and putting them directly and/or indirectly on the track to the juvenile and adult criminal justice systems [4,5].
Entering the school-to-prison pipeline does not happen by random chance [5]. Howard Witt's study shows that, disproportionately, the poor, the disabled, and students of color-in particular Black students-are suspended and expelled more often than their peers [6]. In part, the school-to-prison pipeline exists because, as The Free Thought Project 4 expands, "America's public schools contain almost every aspect of the militarized, intolerant, senseless, overcriminalized [sic], legalistic, surveillance-riddled, totalitarian landscape": we criminalize small disciplinary infractions with detention, suspension, or expulsion under zero-tolerance policies, we have police or security officers stationed at schools to "monitor" the daily happenings and to discipline "disorderly" students, and we focus on "standardized testing that emphasizes rote answers over critical thinking" [7,8]. In essence, many schools, usually schools in lower-income communities that are often majority-minority, look more and more like prisons than spaces for learning and knowledge, acclimating "young people to a world in which they have no freedom of thought, speech, or movement" [8]. Students who are suspended or expelled are also at a highly elevated risk of ending up in the prison system at some point during their adolescent and/or adult lives [2].
The school-to-prison pipeline cannot and does not exist in a vacuum: it is deeply connected to our current political and social climate which is increasingly harsh and interested in punitive punishment rather than understanding. It is also tied to neoliberal ideas of restoration which reactively seek to offer forms of "justice" instead of proactively combating the mass prison industrial complex through complete deconstruction of both the education and carceral systems. We see this through well-intended efforts of rehabilitation seen both in and out of the classroom. Can we imagine a justice system that prioritizes recovery and rehabilitation over retribution in a mass prison industrial complex without replicating ideologies of destructive neoliberalism? Our society depends on truly transforming our education and justice systems to move out of this time of crisis so that the hierarchy of lives can be rethought.
3 The American Psychological Association writes that "Zero Tolerance Policies" were "originally developed as an approach to drug enforcement . . . [and] became widely adopted in schools in the early 1990s as a philosophy or policy that mandates the application of predetermined consequences, most often severe and punitive in nature, that are intended to be applied regardless of the gravity of behavior, mitigating circumstances, or situational context. Such policies appear to be relatively widespread in America's schools, although the lack of a single definition of zero tolerance makes it difficult to estimate how prevalent such policies may be. Zero tolerance policies assume that removing students who engage in disruptive behavior will deter others from disruption...and create an improved climate for [others]."

4 The Free Thought Project, founded by Jason Bassler and Matt Agorist, seeks to "foster the creation and expansion of liberty minded solutions to modern day tyrannical oppression" through an online platform "hub", forwarding a "revolution of consciousness."

With this context in mind, this piece seeks to extend our understanding of these questions: what is this time of crisis, and what do our current system and restorative justice practices say about whose lives we value more in this society? How can feminist pedagogy create our path out of this crisis?
Reflection on Crisis
Public education is under assault by a host of religious, economic, ideological and political fundamentalists. The most serious attack is being waged by advocates of neoliberalism, whose reform efforts focus narrowly on high-stakes testing, traditional texts and memorization drills. At the heart of this approach is an aggressive attempt to disinvest in public schools, replace them with charter schools, and remove state and federal governments completely from public education in order to allow education to be organized and administered by market-driven forces. [9]

The "American Dream" is an illusion. It is an image created for us to aspire to. It is a promise to be fulfilled contingent upon following a path circumscribed by champions of "democracy" and justice. The questions become: what does democracy look like, and for whom is it intended? What is justice if there is no investment in the poor, the marginalized, and the disenfranchised? How do we get there? A common misconception is that education is the answer. If that is the case, what has our education system become to betray us?
The United States education system has devolved into an inherently flawed vehicle of global capitalism. Public education was developed in response to the need to equalize access to school for both poor folks and women, for whom the cost of private education was prohibitive [10]. In fact, in the South, public education stems from the agitation of former slaves during the era of radical reconstruction. Their fight was an effort to gain access to non-commoditized education, which ultimately benefited poor people, both White and Black [11]. During this time, education was equal to liberation. We have since deviated from these ideologies. Our current models of education are inextricably linked to neoliberal logics of hierarchy, consumerism and ultimately the destruction of our most vulnerable members of society-those with the least socioeconomic capital. The objective of education is no longer to create global citizens, who value and promote democratic visions and a just world, but rather to sustain a dystopic society in which wealth and power are maintained and unchallenged. Who we value and who we render disposable are tied to the structure of our schools and to methods of punishment within these schools, as punishment often results in the shuffling of the students deemed expendable into the penal system at staggering rates. Overwhelmingly, Black and Latinx students, especially from lower-income neighborhoods, are the victims of this process. Forget imagining the "Dream"-these children are disqualified from the start. This is the crisis we find ourselves in. 
Prominent news outlets, social media, politicians, and pundits carefully inundate us with propaganda to convince us that our most exigent crisis and the domestic war we are up against are waged by the hands of radical organizations, women-led, youth-driven, Black, Brown, queer, trans, intersectional protests, grassroots committees, and passionate dissenters, who refuse to pander to the status quo and are daring to reclaim their agency to create the change our collective humanity demands. 5 What these influencers fail to acknowledge is the real war that we are fighting-in the classrooms-and that Black and Latinx children are the casualties. Black and Latinx children are disproportionately targeted as fodder for the ravenous school-to-prison pipeline that robs them not only of their right to an uninterrupted education but also of the opportunity to forge a life unrestrained by the shackles of a criminal record. Through punitive "zero tolerance" and "broken window" 6 policies, these children are disciplined in ways that are not commensurate with their "misbehavior".

5 This reference intentionally describes the evolving #BLACKLIVESMATTER movement founded by Black, queer activists Alicia Garza, Patrisse Cullors, and Opal Tometi in response to the acquittal of George Zimmerman after he murdered Trayvon Martin on 12 February 2012 in Sanford, Florida. Due to the epidemic of Black genocide, the movement has gained momentum in correlation with each Black life taken with impunity; from Ferguson, South Carolina, Baltimore, and New York, the list continues to grow. Since its inception the movement has expanded to include activists, students, and community members of all intersecting identities, united in their goal of justice and fighting against the crisis that is systematic racism and state-sanctioned Black death.
Instead of a pedagogy grounded in justice and empathy, these children are being taught by educators who are more interested in penalizing them than in investing in their futures. As Henry Giroux asserts throughout his critiques of neoliberalism and its permeation into our education system, schools have become havens for social indifference and degrading individualism. 7
This is not to say that Black and Latinx children are more likely to commit minor infractions than their white counterparts. However, when 51% of high schools that are mainly comprised of Black students have police officers who function as security guards, it is no wonder that Black students are 2.3 times more likely to be disciplined by law enforcement than white students [13]. Giroux indicts public schools for their resemblance to "punishing factories", where punishment and fear are critical modalities mediating youth to the larger social order. Further, he describes the way in which selective punishment of infractions of poor or disabled students of color neglects to address the underlying social needs of these students. What he describes as potential "teachable" moments become criminal offenses. 8 These instances of punitive treatment are congruous to the function of the neoliberal machine that governs this country. Giroux and Brad Evans note, "The regime of neoliberalism is precisely organized for the production of violence. Such violence is more than symbolic. Instead of waging a war on poverty it wages a war on the poor-and does so removed from any concern for social costs or ethical violations [15]." This iteration of racialized violence against children is unconscionable.
If this argument sounds familiar, it is because it is. This paradigm is analogous to that of the United States' carceral state-a system that also targets, arrests, and sentences Black and Latinx men and women at disproportionate rates, namely for petty drug offenses. Black people comprise 13% of the United States' population, yet 40% of those incarcerated. Latinx people comprise 16% of the population, yet 19% of those incarcerated. White people comprise 64% of the United States total population, yet only 39% of those incarcerated [16]. If the United States public school system continues to function as a microcosm of its penal system, these are the statistics Black and Latinx children are projected to join. The material impact of this trajectory is already well established. Research studies have elucidated the ways in which incarceration deteriorates the fabric of communities and the deleterious effects it imposes on its members' physical, emotional and mental health [17]. Every fundamental right, from returning to school, securing a job, procuring safe housing to staying alive, is at jeopardy [18]. When simple prerequisites for living escape their grasp, the rate of recidivism increases [19]. When these trends become routine, why is there not coverage, commotion, concern? Why is the system the least gentle and most negligent with its most vulnerable members? Why are local legislatures and school administrators not being held accountable?
Instead of criminalizing Black and Latinx children, who with every turn of the fall are less and less present in the classrooms and more and more confined to juvenile or adult detention centers, a reimagining of the structure of public schools and their methods of punishment is necessary. 9 The livelihoods of Black and Latinx children matter. Despite the flawed logic of white supremacist, capitalist patriarchy 10 that manifests in public education institutions-incentivized to push out Black and Latinx students who "underperform", commit minor infractions and disobey, and are supported by the leniency afforded to them-the education system cannot be a tool weaponized against them. 11 We must consider what we are teaching these children-and all their witnesses-when we betray their humanity this way. Effectively, we are saying that they are dangerous, without value, and need to be surveilled, hounded, followed, attacked, gunned down, and dead. There is a better way, a more tender way, a more transformative way of empowering and preparing these children-our children-for a promising future rather than a prison cell or a tombstone.

6 Similar to the logic of "zero tolerance" policing, broken window policing originated in 1982 and argues that punishing low-level offenses will deter acts of major crime and therefore benefit the overall safety of a community.

7 Taken from several passages from Henry Giroux's "Education Incorporated?: Corporate Culture and the Challenge of Public Schooling," Educational Leadership ([12], pp. 12-17).

8 A reference to Henry Giroux's article "Schools as Punishing Factories: The Handcuffing of Public Education" [14].

9 According to the Sentencing Project's 2013 report on racial disparities in youth commitments and arrests, racial disparities among juvenile detainees have increased despite decreases in overall arrests and commitments of juveniles across the nation.

10 Prominent author, scholar, and activist bell hooks defines and uses this term widely to describe the interlocking socio-political systems that shape the politics of our society. While her analysis does not include the politics or actions of individuals, she notes that both individuals and systems can uphold, support, and perpetuate white supremacist, capitalist patriarchy [20].
Neoliberal Rehabilitation
How can we take seriously strategies of restorative rather than exclusively punitive justice? Effective alternatives involve both transformation of the techniques for addressing "crime" and of the social and economic conditions that track so many children from poor communities, and especially communities of color, into the juvenile systems and then on to prison. The most difficult and urgent challenge today is that of creatively exploring new terrains of justice, where the prison no longer serves as our major anchor. [21]

Rehabilitation models are the status quo's reactionary approach to combating the prison industrial complex. We can tie Michelle Chen's writing regarding rehabilitation in the private prison industrial complex to this situation: "Reform initiatives like rehabilitation . . . focus on making 'corrections' less punitive . . . rather than dismantling antisocial systems" [22]. So, while in many ways well intentioned, these efforts are not enough. By examining the value and efficacy of neoliberal rehabilitation models based on the concept of restorative justice within and beyond the classroom, which in part can serve to elevate rather than condemn our most vulnerable children, we can conclude that a feminist framework, and a mindset of radically undoing the current system as we know it, would be a true advance toward justice and equity.
Rehabilitative Models Within and Beyond the Classroom
There are some current models of rehabilitation programs that provide some alternatives to the practice of retribution and jail time and try to combat the effects of the school-to-prison pipeline. Rather than tackle the root of the issue, these models provide ways to handle the crisis we see in our classrooms after the fact. Nevertheless, these models are the dominant and prevailing way to handle this crisis and are designed with good, albeit flawed, thoughts of neoliberal intentions. It is, therefore, important to understand the current landscape of rehabilitative models.
The concept of "zero tolerance" and similar forms of policing were allegedly enforced under the premise that removing disruptive students will cease all classroom disruptions, and thus create an environment more suitable for learning. According to the American Psychological Association's (APA) Zero Tolerance Task Force, the impacts of these biased policies have not only been ineffectual but have also negatively impacted students' performance, created a hostile classroom climate, targeted Black and Latinx children, and effectively compromised children's right to an education. This study offers approaches rooted in an understanding of the socio-psychological needs of children-approaches that even still most public schools, as they stand, are unequipped to enforce. The task force research found that effective school discipline and anti-violence programs must include three levels of strategy: bullying prevention, threat assessment, and restorative justice. Implementation of these strategies has resulted in reduced office referrals, school suspensions, expulsions, and an improved school climate [4]. Overall, the APA task force echoes a fact that people across all disciplines have discovered to be true, which is that "zero tolerance" does more harm than good and that its most violent offenses are committed against Black and Latinx children. While the recommendations in and of itself are not transformative on a large scale, we know that they support the vehement protests against hostile classrooms that treat Black and Latinx like criminals.
Another more holistic and "zen" approach was introduced at Robert W. Coleman Elementary School in West Baltimore, MD. Students, instead of being sent to the principal's office, are being taught how to meditate in the "Mindful Moments Room" [23]. This policy has been in place since 2015 and has already shown remarkable results. Since implementation, there have been no suspensions and the effects of meditation have even demonstrated an impact on the children's home lives. It is easy to overlook the many ways children bring their baggage from home to school and vice versa. Imagine a method of teaching geared toward nourishing both their educational and personal lives. This idea was spearheaded by the Holistic Life Foundation in Baltimore; however, the benefits of meditation-physical, spiritual and mental-have been topics of research for centuries, and can be found inherent to many Chinese, Hindu, Jain and Buddhist traditions. What all forms of meditation share is the emphasis on being present in the moment. When used as an alternative to disciplining or reprimanding children, it allows the child to connect to what may be responsible for their behavior, reflect on it, and reach a state of calmness that subdues feelings of anger or frustration. This is the operative benefit. We are too quick to eradicate a "problem" and not invested enough to address it. When children misbehave, break the rules or agreements, when they throw a fit, act out, or emote erratically, it is not without reason. What meditation illuminates is that our feelings and their physical manifestations stem from something deep, much deeper than our prejudices allow us to see in others. We cannot diagnose a problem without then tending to its source. Most importantly, we cannot react to a child's misbehaving without offering forgiveness-especially for themselves-and the chance to repair. This is restorative justice-healing the broken pieces, not disposing of them. 
There have been material impacts of this much more tender method in this Baltimore school.
Furthermore, there are efforts beyond the classroom that react to and seek to combat the school-to-prison pipeline. Esperanza, an organization in New York City, is one type of program that is attempting to employ "community-based alternatives" to incarceration for court-involved youth living in New York City [24]. With both youth charged in family court as well as youth undergoing prosecution in Criminal or Supreme Court (being charged as juvenile offenders or as adults with felony charges), Esperanza is making a tangible difference in the lives of young people-mostly young people of color-by engaging with them through direct services and reducing the placement of youth in juvenile detention or prison. Through direct services, such as counseling for families and the young person involved, case management, and crisis intervention, Esperanza seeks to support and rehabilitate rather than punish them in a prison cell [24].
These are just a few examples of the ways in which our society has taken a reactive approach to combating the school-to-prison pipeline. Unfortunately, reactionary measures play into the logic of neoliberalism by failing to acknowledge the vast socio-political problems that undergird behavioral differences. If we ask ourselves the motive behind punishment, especially when applied selectively, we can no longer evade the obvious elephant in the room. We cannot pretend that criminalizing Black and Latinx children in the classrooms is not an extension of how they are perceived and valued within the larger socio-political hierarchies of our society. We need to shift the narrative to condemn the broken system that creates the need for these programs and work towards establishing a transformative educational system.
Feminism as a Tool for Educational Transformation
I often like to talk about feminism not as something that adheres to bodies, not as something grounded in gendered bodies, but as an approach-a way of conceptualizing, as a methodology, as a guide to strategies for struggle. [11]

The principles of feminism are antithetical to neoliberalism. The foundation of any intersectional feminist approach to problem-solving is rooted in notions of deconstruction and transformation. This is what we need to employ when dealing with this pernicious cycle of racialized violence and marginalization that plagues our education and penal systems. Prominent scholar-activists, such as Angela Davis, distance themselves from the language of reform in favor of the language of abolition. If we are to follow her logic, the way toward a just, truly democratic, and anti-violent education system would be to rebuild it from the ground up, not to offer solutions as a reaction to an intractable problem. Our current model is irreparably marred by racist, classist, and ableist ideologies, to no one's benefit, not even the white and wealthy's. Feminist pedagogy asserts that until we are all free, none of us are. Feminist pedagogy urges us to think of how this broken, neoliberalism-inflicted system, while enacting violence primarily on Black and Latinx bodies, has ruined us all. It has stunted our imagination. It has stifled our ability to challenge authority and to demand our agency. It has created a culture of social disengagement and complacency.
"Whenever you conceptualize social justice struggles, you will always defeat your own purposes if you cannot imagine the people around whom you are struggling as equal partners. You are constituting them as an inferior in the process of trying to defend their rights. The abolitionist movement has learned that without the participation of prisoners, there can be no campaign" [11].
So, this is where we begin. With feminism as our most potent tool in contending with the school-to-prison pipeline, we look to the students and their families to guide us towards a reconstruction of an education system that works for them, that holds their humanity tenderly, that fights for their potential and that aims to secure their futures. What is created will honor accurate and difficult histories that challenge the status quo and delegitimize notions of hierarchy, meritocracy and our current model of democracy. It will empower students by promoting the value of both their personal convictions and their compassion for others. It will approach socio-political and economic terrains through intersectional frameworks and encourage speaking truth to power. It will value the poor, queer, disabled and people of color and not only recognize how their humanity has been wrongfully threatened but also facilitate reparations. Feminism calls for the eradication of the current system as an acknowledgement of how deeply it has failed us. It may be difficult to imagine now, but a feminist approach looks like freedom.
Conclusions
Criminalizing children will have constitutional implications for generations to come. It is corrosive and rends the fabric of our erstwhile civil society, makes a lie of equal opportunity, and rewards authoritarian personality disorder at the expense of our humanity. [25]

For many vulnerable youth in our country, the school-to-prison pipeline, though at odds with the ideals of a society said to provide equal opportunity to all, is a well-known path. This matter has led us to a dire time of crisis, one that is endemic to our sociopolitical systems and to all people. The reality is that Black and Latinx students are being pushed from the classroom to the jail cell and treated as disposable in our society. This is where we are. These are the structures that exist, that have existed and persisted for so long that it can be hard to imagine something different. Maybe a world where Black children are allowed to be children? A universe where Latinx children are allowed to be human? However difficult it may be, we must. We seek to imagine a society in which all people are granted the benefit of humanity. We seek to envision a total eradication of social hierarchies that separate the valued from the unvalued and the living from the better off dead. We seek to imagine a time where our empathy for others in our communities belies our history of hegemonic social control and isolation. We dream of a day when the sources of tension, hatred and violence are acknowledged and repudiated. Demonizing and criminalizing communities of people, including their children, who are reminded daily of their tenuous and provisional affiliation to the "American Dream" is never just or to anyone's benefit. This is the lie we have been sold. This is the nightmare we must wake ourselves from. We are amidst an escalating crisis and something needs to be done.
With a feminist framework and complete transformation, we need to implement compassion and justice into the education system to stymie its evolution into an iteration of the penal system. We need to move past just functional models of rehabilitation to undo the psychological, physical, and material damage the foot soldiers of white supremacist capitalist patriarchy have already done to Black and Latinx children. We need methods of addressing and stopping the social, economic, and health disparities that plague marginalized communities that can induce behavioral and emotional issues which we are too quick to condemn. Unlike the punitive and reactionary system we have now that does not prioritize the lives of those involved, a radical and feminist approach will take love, time, and patience. But we will get there. We must.
Contrasting Behaviors of Mutant Cystathionine Beta-Synthase Enzymes Associated With Pyridoxine Response
Cystathionine β-synthase (CBS) deficiency is a recessive genetic disorder characterized by extremely elevated levels of plasma homocysteine. Patients homozygous for the I278T or R266K mutations respond clinically to pharmacologic doses of pyridoxine, the precursor of the enzyme's cofactor, 5′-pyridoxal phosphate (PLP). Here we test the hypothesis that these mutations are pyridoxine responsive because they lower the affinity of the enzyme for PLP. We show that recombinant R266K has 30 to 100% of the specific activity of the wild-type enzyme, while I278T has only 1 to 5% activity. Kinetic studies show that the decreased activity of both enzymes is due to a reduced turnover rate and not to impaired substrate binding. Neither I278T nor R266K appears to greatly affect the multimer status of the enzyme. The R266K enzyme has reduced affinity for PLP compared to the wild-type enzyme, providing a mechanism for the pyridoxine response observed in patients. Surprisingly, the I278T enzyme does not have altered affinity for PLP. To confirm that this was not an in vitro artifact, we examined the pyridoxine response in mice that stably express human I278T as their sole source of CBS activity. These mice have extremely elevated plasma homocysteine levels and do not respond significantly to large doses of pyridoxine. Our findings suggest that there may be multiple mechanisms involved in the response to pyridoxine.
INTRODUCTION
Human cystathionine beta-synthase (CBS; MIM# 236200) is the first enzyme in the transsulfuration pathway that converts homocysteine to cysteine. The enzyme uses 5′-pyridoxal phosphate (PLP) as a cofactor and efficiently catalyzes the condensation of serine or cysteine with homocysteine to form cystathionine (Fig. 1A). The wild-type enzyme has a subunit molecular weight of 63 kDa and functions as a homotetramer [Jhee and Kruger, 2005].
CBS deficiency, or classical homocystinuria, is a recessive genetic disorder characterized by extremely elevated levels of plasma total homocysteine (tHcy) [Mudd et al., 2001]. It is the most common inherited metabolic disorder involving sulfur metabolism. CBS-deficient individuals have resting tHcy levels of 200-300 μM, compared to about 10 μM in normal individuals. Individuals with untreated homocystinuria develop arterial occlusions and venous thromboembolisms at very young ages. Additional observed phenotypes include osteoporosis, mental retardation, psychiatric disorders, and dislocated lenses.
A subset of CBS-deficient patients respond to pharmacologic doses of vitamin B6 (pyridoxine) with a significant lowering of plasma homocysteine levels. Pyridoxine-responsive homocystinuric patients have much reduced rates of all of the major phenotypes associated with CBS deficiency [Mudd et al., 1985]. There is almost complete concordance of pyridoxine response between affected siblings, indicating that response to pyridoxine is linked to the CBS gene. More recently, molecular analysis of mutations in the CBS gene has identified several mutations present in pyridoxine-responsive patients. The most frequently observed allele is the pan-ethnic c.833T>C (I278T) [Shih et al., 1995] (mutation nomenclature based on RefSeq NM_000071.1; +1 is the A of the ATG initiation codon). This allele probably arose from a 68-bp repeat polymorphism within the CBS gene [Sperandeo et al., 1996]. Almost all of the patients homozygous for this allele are characterized as pyridoxine responsive, although there are several documented cases of non-responsive I278T compound heterozygotes [Kruger et al., 2003]. A second mutation definitively associated with the pyridoxine response is the R266K alteration. This allele was identified in several Norwegian patients with CBS deficiency [Kim et al., 1997]. Individuals with this allele tend to have a very strong response to pyridoxine, with tHcy levels correcting to near-normal levels.
The biochemical basis for pyridoxine response is not well understood. Since the CBS enzyme uses PLP as a cofactor, the most obvious hypothesis is that pyridoxine-responsive mutations affect the ability of the enzyme to bind and retain PLP. However, biochemical studies on fibroblast extracts from responsive and nonresponsive patients have failed to reveal any consistent differences with regard to pyridoxine response in vitro [Fowler et al., 1978; Lipson et al., 1980; Uhlendorf et al., 1973]. The only consistent finding is that almost all pyridoxine-responsive patients have at least some residual CBS enzyme activity in their fibroblasts. In previous work from our laboratory, we expressed both human I278T and R266K CBS in S. cerevisiae deleted for endogenous yeast CBS [Kim et al., 1997; Kruger and Cox, 1995]. We found that I278T failed to complement growth on cysteine-free media, but that R266K could complement growth.
In this work, we compare and contrast the behavior of I278T and R266K CBS in detail in a variety of experimental systems in vitro and in vivo. Our results show that these two mutant enzymes behave quite differently, suggesting that there may be multiple mechanisms underlying the pyridoxine response.
MATERIALS AND METHODS

CBS Enzyme Preparation and Purification
For bacterial expression, Escherichia coli BL21 (DE3) containing plasmids pGEX-CBS, pGEX-I278T, or pGEX-R266K was used to produce CBS protein.
Plasmids pGEX-I278T and pGEX-R266K were constructed using pGEX-CBS as a template for site-directed mutagenesis with the QuikChange site-directed mutagenesis kit (Stratagene, La Jolla, CA), followed by sequencing of the entire open reading frame (ORF). The resulting plasmids were then introduced into E. coli strain BL21, and a 1-liter culture was grown to an A600 of 0.6 in Luria Broth (LB) medium at 37°C (no PLP was added). Isopropyl 1-thio-β-D-galactopyranoside was added to a final concentration of 0.05 mM to induce expression of the fusion protein at 20°C. The cells were then resuspended in 20 mL of phosphate-buffered saline (PBS) containing 10 mM dithiothreitol, 100 mM MgCl2, 0.5 mg/mL lysozyme, 2 units/mL DNase, and 0.86 mg/mL protease inhibitor mixture (Sigma-Aldrich; www.sigmaaldrich.com) for 1 hr at 4°C and then lysed by freeze-thawing two times. The lysates were incubated at 4°C for an additional 30 min, briefly sonicated on ice to reduce viscosity, and centrifuged, and the clarified supernatants containing fusion proteins were filtered and applied to a 1-mL GSTrap column (Amersham Biosciences; www.amershambiosciences.com, Piscataway, NJ) connected to a Bio-Rad (www.bio-rad.com) Biologic DuoFlow FPLC equilibrated with PBS (pH 7.3) and 5 mM dithiothreitol. After being washed with 20 column volumes of the same buffer, the column was filled with thrombin protease, sealed, and incubated at room temperature for 16 hr. Cleaved protein was eluted with 20 mL of PBS, and bound glutathione-S-transferase (GST) and uncleaved GST fusion proteins were eluted with 20 mM reduced glutathione in Tris-HCl (pH 8.0). Analysis of the purified human CBS protein by SDS-PAGE and Coomassie Brilliant Blue staining indicated that the protein was more than 95% pure. Yield was significantly lower for the I278T protein due to inefficient cleavage by thrombin.
To express human CBS in yeast, we used yeast strain WY35, a strain that is deleted for yeast CBS and that contains a plasmid expressing either wild-type, I278T, or R266K CBS [Kruger and Cox, 1994, 1995]. Whole-cell yeast extracts were prepared by mechanical lysis as previously described [Shan et al., 2001].
CBS Enzyme Activity Assays
For kinetic and other studies, enzyme activities were measured using a standard reaction containing 100 mM Na/Bicine (pH 8.6), 200 μM PLP, 1 mM Tris(2-carboxyethyl)phosphine hydrochloride (TCEP), 100 μM AdoMet (S-adenosylmethionine), 0.25 mg/mL bovine serum albumin, 10 mM L-serine, and 10 mM L-homocysteine. Total reaction volume was 50 μL; 50 μg of yeast extract, 10 μL of in vitro translated CBS protein, or 50-200 ng of purified CBS protein was used in each reaction. For kinetic determinations, the concentrations of the cosubstrates serine, cysteine, or homocysteine were varied between 0 and 40 mM. Reactions were carried out at 37°C for 1 hr, and cystathionine production was measured with an amino acid analyzer. The amount of cystathionine produced was determined by comparison with a known standard, using the same batch of ninhydrin. The coefficient of variance (CV) between assays is less than 5%. Kinetic parameters were determined using EnzKinetics (Trinity Software; www.trinitysoftware.com).
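The kinetic parameters here were obtained with commercial software, but the same steady-state estimates can be produced by a direct nonlinear fit of the Michaelis-Menten equation. The sketch below uses hypothetical velocity data (chosen to match the wild-type Km[ser] reported later, not the paper's actual measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial velocity as a function of substrate concentration."""
    return vmax * s / (km + s)

# Hypothetical serine titration (mM) and velocities (nmol/hr), generated
# to be consistent with the wild-type Km[ser] of ~1.74 mM in the text.
s = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
v = michaelis_menten(s, 100.0, 1.74)

# Fit Vmax and Km from the substrate-saturation curve.
(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, s, v, p0=[max(v), 1.0])
print(f"Vmax = {vmax_fit:.1f} nmol/hr, Km = {km_fit:.2f} mM")
```

With real data, replicate reactions at each substrate concentration would be fit together and the covariance matrix returned by `curve_fit` used for standard errors.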
PLP Assay
PLP-free CBS was prepared by dialyzing samples for 6 hr at 4°C against 50 mM Tris-HCl buffer (pH 8.6), 25 mM NaCl, and 5 mM hydroxylamine, followed by dialysis of the sample against the same buffer lacking hydroxylamine overnight at 4°C. To reconstitute the apoprotein with PLP, 50 μg of PLP-free yeast extract expressing CBS or 100 ng of purified enzyme was added to the standard reaction containing 0 to 40 μM PLP. Catalytic activity of the reconstituted enzyme was determined as described above.
Mouse Studies
Tg-I278T Cbs−/− and Tg-negative Cbs−/− mice were derived from mating Tg-I278T Cbs+/− animals to Tg-negative Cbs+/− animals as previously described [Wang et al., 2005]. Matings were kept on zinc water to induce transgene expression and eliminate the neonatal lethality associated with homozygous deletion of Cbs. After weaning, the animals were put back on normal water. For the pyridoxine and zinc response experiment, 2- to 7-month-old animals were put on water containing either zinc (25 mM), pyridoxine (100 mg/L), or both. Blood was collected from the eye socket, put on ice, and then centrifuged at 4°C at 12,000g for 5 min. Serum was then isolated and stored at −70°C until analysis. Total homocysteine was analyzed using a Biochrom 30 amino acid analyzer as previously described [Wang et al., 2005].
Native Gel Analysis
For the native gel assays, H2S production was assayed by reaction with lead acetate using a modification of a previously described procedure [Chen et al., 2004]. In brief, 100 μg of yeast extract was loaded on 8% native Tris-glycine gels (Novex; Invitrogen; www.invitrogen.com) at 4°C. After electrophoresis, active protein bands in the native gels were detected by soaking the gel in 50 mL of the reaction assay mixture for several hours at room temperature. The reaction mixture contained 200 mM Na/Bicine (pH 8.6), 50 μM PLP, 0.25 mg/mL bovine serum albumin, 0.4 mM lead acetate, 30 mM L-cysteine, and 30 mM 2-mercaptoethanol. The gel was then photographed.
CBS Production in Reticulocyte Lysate
Plasmid constructs expressing CBS proteins were made by PCR amplification of the human CBS ORF with primers containing NotI and SalI restriction sites. The product was then cloned directionally into plasmid pTnT (Promega; www.promega.com) using NotI and SalI. The CBS proteins were expressed in vitro using the TnT Coupled Reticulocyte Lysate System (Promega) according to the manufacturer's instructions. To confirm protein expression, radiolabeling was performed with L-[35S]methionine (1,153 Ci/mmol; Amersham Corp.; www.amershambiosciences.com). The radiolabeled polypeptides were analyzed by SDS-PAGE, dried, and subjected to autoradiography.
RESULTS

Kinetic Analysis
Recombinant wild-type, I278T, and R266K CBS were produced as GST-fusion proteins in E. coli, and the GST tag was subsequently removed by thrombin cleavage during purification. Figure 1B shows a Coomassie-stained gel of the proteins after purification. We used these purified enzymes to determine steady-state kinetic parameters with three different substrates: serine, homocysteine, and cysteine. As shown in Figure 1B, we did not see large differences in the Km values between the wild-type and mutant enzymes. The wild-type enzyme had a Km[ser] of 1.74 mM, a Km[Hcy] of 7.17 mM, and a Km[cys] of 6.11 mM. These values are quite consistent with previously published values [Kery et al., 1998; Taoka et al., 1998]. Surprisingly, the I278T enzyme actually had slightly lower Km values for all three substrates compared to the wild-type enzyme, with a Km[ser] of 1.41 mM, a Km[Hcy] of 2.0 mM, and a Km[cys] of 3.59 mM. The R266K enzyme had Km values quite similar to those of the wild-type enzyme. These results show that these mutations have minimal effect on substrate binding. More significant differences between the wild-type and mutant enzymes were observed with respect to Vmax. The R266K Vmax values were two- to three-fold lower than those of the wild-type enzyme for both the serine and cysteine reactions (Fig. 1B). The I278T enzyme was much more severely impaired, with Vmax values of only 0.8 to 1.4% of the wild-type enzyme. These results show that both the recombinant R266K and I278T enzymes are impaired in their ability to catalyze the condensation reaction, and that the I278T enzyme is much more impaired than the R266K enzyme.
We also examined the specific activity of the same three enzymes expressed in S. cerevisiae deleted for CBS (cys4Δ). In these experiments the enzyme activity was assessed in total yeast cell extracts. Yeast expressing wild-type human CBS had 1,322 units of activity (nmol/mg-hr), while R266K yeast had 654 units and I278T yeast had only 65 units of activity. Western blot analysis showed that these differences in activity were not the result of differential CBS protein levels (data not shown). Thus the mutant enzymes expressed in yeast behaved similarly to the purified recombinant proteins from E. coli. These findings show that the large difference in enzyme activity between the R266K and I278T enzymes is not an artifact of the E. coli expression system or the purification procedures.
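The unit arithmetic behind these specific activities is straightforward; the sketch below uses a hypothetical product amount (chosen only to reproduce the wild-type figure above) and converts the mutant values into percent of wild-type activity:

```python
# Specific activity = nmol cystathionine formed per mg protein per hour.
# The product amount here is hypothetical, picked to reproduce the
# wild-type yeast value of ~1,322 units quoted in the text.
cystathionine_nmol = 66.1   # product formed in one reaction (illustrative)
protein_mg = 0.05           # 50 ug of yeast extract per reaction
time_hr = 1.0               # standard 1-hr incubation

specific_activity = cystathionine_nmol / (protein_mg * time_hr)

# Residual activity of the mutants relative to wild type (units from the text).
pct_r266k = 100 * 654 / 1322
pct_i278t = 100 * 65 / 1322

print(f"wild-type: {specific_activity:.0f} nmol/mg-hr")
print(f"R266K: {pct_r266k:.0f}% of wild type; I278T: {pct_i278t:.1f}%")
```

Note that the ~4.9% residual I278T activity computed this way matches the upper end of the 0.8 to 4.9% range cited in the Discussion.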
I278T Stability and Multimer Status
Full-length human CBS has been reported to be primarily a tetramer, although higher-molecular-weight multimers have been observed [Kery et al., 1998]. We were interested in assessing whether the decrease in Vmax might be caused by a change in the CBS multimer state. To assess multimer status, we ran total yeast extracts expressing either wild-type, I278T, or R266K CBS on a native gel and then identified CBS by examining enzyme activity in situ (Fig. 2). The basis for the in situ reaction is the production of H2S from the condensation of homocysteine and cysteine, which in turn reacts with lead acetate. The resultant lead sulfide precipitates on the gel, indicating the position of the CBS multimer. As shown in Figure 2, yeast expressing wild-type human CBS, I278T, or R266K all have enzyme activity centered near 240 kDa, corresponding to a tetramer. As expected, the enzyme activity of I278T is much reduced relative to both the R266K and wild-type extracts. These results indicate that neither I278T nor R266K has a large effect on multimer status. Interestingly, human CBS produced in E. coli appears to have two forms: a faster-migrating species that corresponds to the yeast form of the enzyme, as well as a slower-migrating form that is a higher-order multimer.
We also examined the multimer status of human CBS produced in transgenic mice. Livers were harvested and total cell extracts were made from mice expressing either wild-type mouse CBS, human hemagglutinin (HA)-tagged CBS (hCBS), or human HA-tagged I278T (hI278T) (described in the next section). These extracts were then subjected to native PAGE followed by immunoblot analysis with anti-hCBS antiserum. As shown in Figure 2B, the predominant CBS species in both the hCBS and I278T extracts was a tetramer running at about 240 kDa. Interestingly, the liver expressing mouse CBS showed some additional bands, suggesting that higher-order multimers may occur as well. These results support the view that the I278T mutation does not have a dramatic effect on the multimer status of CBS.
We also determined whether the enzyme activity of wild-type and I278T human CBS was stable over time. As shown in Figure 3, product formation by both the wild-type and I278T enzymes increases linearly over time, indicating that the enzymes are stable under normal reaction conditions. Interestingly, at the very earliest time point (15 sec) we detected no cystathionine formed by the I278T enzyme, but at the 45-sec time point we observed 0.131 nmol of cystathionine. The long lag in the production of cystathionine is consistent with the idea that I278T has a very slow catalytic rate compared to the wild-type enzyme. These results argue that the observed decreased Vmax of I278T is due to catalytic impairment of the enzyme and not decreased enzyme stability.
Binding of PLP
Since both the I278T and R266K alleles are associated with pyridoxine response in humans, we examined the effect of PLP on these enzymes in vitro. The addition of excess PLP to the reaction mixture did not result in any increase in enzyme activity for any of the three alleles, suggesting that under standard reaction conditions the enzymes are fully saturated with PLP (data not shown). Therefore, we examined the affinity of the different mutant forms of the enzyme for PLP after stripping endogenous PLP with hydroxylamine. We then added increasing amounts of PLP to determine the Km[PLP] for each of the three enzymes. This was done for both the purified recombinant proteins and for yeast extracts expressing the various mutants. Our Km[PLP] value for the wild-type enzyme of 1.18 ± 0.42 μM was quite similar to the 0.7 ± 0.1 μM reported previously [Kery et al., 1999]. As shown in Figure 3, both the purified R266K and the R266K produced in yeast had significantly higher Km[PLP] values (three- to 10-fold) compared to the wild-type enzyme produced in the same system. In contrast, the I278T enzyme's Km[PLP] value was quite similar to that of the wild-type enzyme. These findings suggest that the in vivo pyridoxine response of R266K, but not I278T, could be explained by reduced affinity of the enzyme for PLP.
We also examined the sensitivity of wild-type and I278T human CBS to inhibition by hydroxylamine, which competes with the enzyme for PLP binding, thus reducing enzyme activity. We compared inhibition of enzyme activity by hydroxylamine at 0, 10, and 1,000 μM concentrations. In this experiment, human CBS was produced in vitro using programmed reticulocyte lysates. As shown in Figure 4, although the wild-type human enzyme was over six times more active than I278T, the percent inhibition of enzyme activity was essentially identical: at the 1,000 μM concentration, wild-type CBS showed 61% inhibition while I278T showed 71% inhibition. These results show that the I278T enzyme does not bind PLP with significantly reduced affinity compared to the wild-type enzyme.
I278T Expressed in Mice Shows Minimal Response to Pyridoxine
One possible explanation for the contrasting behaviors of the R266K and I278T enzymes is that the I278T enzyme produced in yeast or E. coli behaves differently than the enzyme produced in a mammalian system. To test this possibility, we created a mouse in which the endogenous mouse Cbs gene was deleted and an HA-tagged human CBS gene carrying the I278T mutation was expressed from the mouse metallothionein promoter (mMT-I) [Wang et al., 2005]. Previously, we showed that in these mice the human I278T transgene is inducible by addition of zinc to the drinking water and is expressed stably in the liver and kidney, tissues that normally express high levels of CBS [Wang et al., 2005] (see Fig. 2). We refer to these animals as Tg-I278T Cbs−/− mice.
FIGURE 2. Multimer and stability analysis. A: Native gel enzyme activity assay; 100 μg of yeast extract or 1 μg of purified E. coli-produced protein was loaded in each lane. After the gel was run, it was incubated in the presence of 2-mercaptoethanol and cysteine to assess CBS activity (see Materials and Methods). Activity is indicated by the black-stained regions. The null control represents a yeast strain expressing no CBS protein. HCBS-T contains truncated human CBS (aa 1-409, MW 45 kDa). B: Native Western gel of CBS produced in the liver of mice. The lane labeled mCBS has endogenous mouse CBS, while the lanes labeled hCBS-I278T and hCBS-wild-type (wt) contain extracts from livers of either Tg-hCBS or Tg-I278T animals that are homozygously deleted for the endogenous mouse CBS gene. Marker, activity stain, and Western all came from the same gel. C: Enzyme stability of wild-type and I278T CBS. Purified wild-type and I278T CBS were incubated under standard reaction conditions for varying amounts of time, and cystathionine formation was measured. The inset shows a blowup of the region between 0 and 1.5 min.

We next used these mice to determine whether I278T showed a pyridoxine response in vivo. A group of 12 Tg-I278T Cbs−/− and three transgene-negative Cbs−/− control animals were put on normal water for 3 weeks, and then serum homocysteine was measured (see Fig. 5). As expected, both groups of animals had greatly elevated total homocysteine, with the mean of each group being over 300 μM. We next put the animals on zinc water (25 mM) for a week to induce the transgene and measured tHcy again. As expected, we found no significant lowering of tHcy when the transgene was activated by zinc. The mice were then put on water that contained both zinc and pyridoxine. The amount of pyridoxine added to the water was calculated to give the animals a dose of 19.5 mg/kg/day, which is the upper end of the range given to human patients [Mudd et al., 2001]. This level of pyridoxine is approximately six times the level normally found in rodent chow. To confirm that this increased level of pyridoxine was actually increasing PLP levels in vivo, we measured PLP levels in six mice before and after pyridoxine supplementation. We found that serum PLP increased from 44.9 nmol/L to 302 nmol/L (n = 9; P < 0.0006; paired t-test), indicating that the water supplementation was increasing PLP levels in vivo.
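A paired comparison of this kind can be reproduced with a standard paired t-test. The per-animal values below are hypothetical (only the group means of 44.9 and 302 nmol/L come from the text):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical before/after serum PLP values (nmol/L) for the same animals;
# illustrative only, scaled around the group means reported in the text.
before = np.array([40.1, 52.3, 38.7, 49.5, 47.2, 41.6])
after = np.array([280.4, 330.9, 265.1, 315.8, 301.3, 318.5])

# Paired t-test on the within-animal differences.
t_stat, p_value = ttest_rel(after, before)
print(f"t = {t_stat:.2f}, paired P = {p_value:.2e}")
```

Pairing each animal with itself removes between-animal variation in baseline PLP, which is why the paired test is the appropriate choice here rather than an unpaired comparison of group means.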
After 2 weeks on the zinc-pyridoxine water, mice were again bled and total homocysteine was analyzed. We did observe a slight lowering of tHcy, with the Tg-I278T Cbs−/− animals going from 291 μM on zinc water to 240 μM on zinc plus pyridoxine (P = 0.04). However, the three control animals showed a similar lowering, going from 300 to 256 μM, suggesting that the lowering by pyridoxine does not depend on the transgene. To explore whether this response is related to the concentration of pyridoxine in the water, we increased the pyridoxine concentration four-fold and again measured tHcy after 2 weeks. We saw no significant difference between the mice given the normal dose and the four-fold dose. These findings suggest that whatever small effect pyridoxine plus zinc has, it is not enhanced by increasing pyridoxine levels and does not depend on the transgene.
We also examined the response to PLP in vitro by measuring CBS activity in liver extracts from Tg-I278T Cbs−/− animals. We found that addition of five times the normal PLP concentration to the reaction mixture had no effect on I278T CBS activity (data not shown). These results indicate that I278T expressed in mouse behaves similarly to that observed in the E. coli and yeast systems.
DISCUSSION
The goal of this study was to understand the molecular basis for the clinical response to pyridoxine in CBS-deficient patients. We chose to study two different mutations associated with clinical pyridoxine response, R266K and I278T, and expressed the mutant enzymes in both bacteria and S. cerevisiae. From these studies, we observed widely differing behaviors for the two mutant enzymes.
The R266K CBS enzyme was only modestly impaired in its catalytic ability in vitro and also had substantial activity in vivo, as demonstrated by its ability to complement growth of CBS-deficient S. cerevisiae [Kim et al., 1997]. This activity was not an artifact of the expression system, as activity was similar when the enzyme was expressed in E. coli as a GST-fusion protein and in S. cerevisiae, in which no fusion partner was used. Furthermore, it was recently reported at a meeting that R266K expressed in E. coli without the fusion tag had 23% of wild-type activity when grown at 37°C and 83% when grown at 18°C (Viktor Kozich, personal communication). Despite the high level of residual activity under optimal conditions, we did observe a significant difference between R266K and wild-type CBS with regard to PLP binding. Specifically, the R266K enzyme showed a three- to six-fold reduction in PLP binding. Based on these results, we propose that in patients carrying R266K, oral pyridoxine results in increased intracellular PLP, which in turn keeps the R266K enzyme saturated and fully active, while in untreated patients the enzyme is not fully saturated and therefore exhibits reduced activity relative to the wild-type enzyme.
In sharp contrast, the I278T enzyme was much more severely impaired both in vitro (this study) and in S. cerevisiae [Kim et al., 1997], with only 1 to 5% of wild-type activity. Unlike R266K, we did not observe any significant difference in PLP binding between I278T and wild-type CBS. Furthermore, mice expressing human I278T showed essentially no homocysteine-lowering response to pyridoxine.
In some respects, the widely differing behavior of R266K and I278T observed in vitro and in S. cerevisiae does mimic the behavior observed in human patients. In general, I278T homozygous patients do not respond as robustly to pyridoxine as R266K patients. In a study of pyridoxine-treated Dutch homozygous I278T patients (n = 9), the mean total homocysteine was 30 μM, compared to a mean treated tHcy level of 12 μM in Norwegian R266K patients (n = 4) [Kim et al., 1997; Kluijtmans et al., 1999]. Furthermore, the Norwegian patients received only 40 mg of pyridoxine per day versus 750 mg per day for the Dutch patients. Thus, in humans, it might be appropriate to call R266K a "hyper"-responsive allele. Despite being not quite as responsive as R266K, there can be no doubt that the I278T allele in humans shows a significant pyridoxine response. How can we reconcile the findings presented here with the patient data?
One possible explanation is that the I278T enzyme may behave differently in human tissue compared to in the various systems described in this article. Specifically, one may speculate that a missense mutation like I278T may affect protein folding and that this effect may vary between cell types and organisms due to different intracellular folding environments. There are data supporting the view that subtle changes in CBS protein folding may be important for function. Previous work from our laboratory has shown that it is possible to restore enzymatic function to human I278T produced in yeast by either truncation of the C-terminal regulatory domain, or by the introduction of specific missense mutations in this domain [Shan et al., 2001;Shan and Kruger, 1998]. Recently, we have shown that one of these suppressing mutations, T424N, which restores activity very well in S. cerevisiae, had absolutely no ability to restore function in combination with I278T in mice [Wang et al., 2005]. This finding implies there may be subtle differences in the protein folding environment between S. cerevisiae and mice. There may also be subtle differences in the behavior of mutant proteins in different tissues. We have found that the wild-type CBS enzyme is abundantly expressed in fibroblasts from normal individuals, but cannot be detected by Western blot analysis in fibroblasts from I278T homozygous patients (X. Chen, unpublished results) [Janosik et al., 2001]. The fact that I278T patients respond to pyridoxine clearly implies that there must be some stable enzyme in the liver and kidney. In mice, I278T protein is stable in the liver and kidney in the absence of pyridoxine [Wang et al., 2005]. Taken together, these findings support the idea that the proper intracellular environment may be critical for proper folding of mutant CBS protein.
An alternative explanation may be that the pyridoxine response observed in human I278T patients may be occurring by a mechanism that does not involve direct binding to the CBS enzyme. Our data clearly show that the I278T CBS does have some residual activity, ranging from 0.8 to 4.9% depending on the expression system. Early studies on extracts from CBS-deficient patient fibroblasts have also shown that the vast majority of pyridoxine responsive patients have low residual CBS activity [Fowler et al., 1978;Uhlendorf et al., 1973]. However, these same studies fail to consistently find a strong response to pyridoxine in vitro. A possible hypothesis to explain these findings is that pharmacologic doses of pyridoxine may be altering methionine metabolism such that the low amount of CBS activity found in I278T patients may actually be sufficient to keep homocysteine levels low. For example, it may be that high levels of pyridoxine reduce the rate of formation of homocysteine from upstream metabolites, such that the lower levels of CBS activity are sufficient to reduce the buildup of the homocysteine pool. However, it should be noted that in our mouse studies we did not see a response to pyridoxine, suggesting that such an effect would be specific for human physiology.
In summary, we have found very different behaviors of two CBS alleles associated with pyridoxine-responsive homocystinuria. R266K CBS was only slightly impaired in catalytic turnover relative to the wild-type enzyme and showed a significantly reduced affinity for pyridoxine. I278T CBS was much more severely impaired, but surprisingly showed no decrease in affinity for pyridoxine. In addition, the I278T allele did not show pyridoxine response in a mouse model of homocystinuria. Our results suggest that although both mutations are associated with clinical response to pyridoxine, the molecular mechanism behind this response is probably quite different.

FIGURE 5. Total homocysteine levels in zinc- and pyridoxine-treated animals. A: A group of 12 Tg-I278T cbs/cbs animals and three transgene-negative cbs/cbs animals had their serum total homocysteine determined under four different conditions. "Normal" indicates the animals were on normal water, "Zn" indicates water containing 25 mM zinc sulfate, "B6" indicates water containing 100 mg/mL pyridoxine, "Zn + B6" indicates water containing 25 mM zinc sulfate and 100 mg/mL pyridoxine, and "Zn + 4XB6" indicates 25 mM zinc sulfate and 400 mg/mL pyridoxine. The error bars show the standard deviation for the experiments. B: This panel shows the same information as A, but plotted for each individual animal. Solid lines are Tg-I278T-containing animals, while dashed lines are nontransgenic.
The Renoprotective Effects of Naringenin (NGN) in Gestational Pregnancy
Introduction Gestational diabetes mellitus (GDM) is defined as glucose intolerance that is first diagnosed during pregnancy, a condition risking the health of both the mother and the baby. Naringenin (NGN) has been demonstrated to have multiple therapeutic functions and is also considered to exhibit antidiabetic properties. The present study aimed to investigate the protective effects of NGN in pregnant diabetic rats. Methods GDM was induced by feeding the rats a high-fat diet for 5 weeks, followed by intraperitoneal injection of streptozotocin (35 mg/kg). Fasting blood glucose levels were determined with a glucometer and 24-h urine protein (24-UPro) levels were determined by the sulfosalicylic acid method. The pathological morphological changes and apoptosis of glomerular cells in kidney tissue were assessed using hematoxylin and eosin (H&E) staining and TUNEL analysis. Enzyme-linked immunosorbent assay (ELISA) kits were used to detect the serum T-AOC, the activity of SOD, and the levels of GSH-Px, CAT, MDA, TNF-α, IL-6, TGF-β and ICAM-1. The expression of related genes was measured by RT-qPCR and Western blot analyses. Results In the NGN-treated groups, the general status of the rats was improved, while the levels of blood glucose and 24-UPro were significantly decreased. In addition, the histopathological changes in renal tissues and renal cell apoptosis were significantly improved upon treatment with NGN. The expression levels of oxidative stress- and inflammation-associated factors also differed significantly between the model and NGN-treated groups. Upon treatment with NGN, the levels of peroxisome proliferator-activated receptor α were significantly increased, while the activity of enzymes involved in the oxidative metabolism of fatty acids was significantly decreased. Conclusion These preliminary experimental findings demonstrate that NGN has a renoprotective effect in GDM, providing a novel therapeutic option for this condition.
Introduction
Gestational diabetes mellitus (GDM) is identified as glucose intolerance that is first diagnosed during pregnancy, a common condition risking the health of both the mother and the baby. 1 During pregnancy, women with GDM have an increased risk of preeclampsia, hypertension and premature delivery, as well as an increased likelihood of developing type 2 diabetes in the long term. 1-4 Therefore, timely and effective control of the occurrence and development of GDM and its complications is of great importance for the mother and baby. Naringenin (NGN) is a common dietary flavanone in citrus fruits, including grapefruits, lemons and oranges. 5 It is a flavanone with a molecular formula of C15H12O5 and a molecular weight of 272.26 g/mol, 6,7 and its chemical structure is shown in Figure 1A. NGN has been reported to have multiple therapeutic properties, including antioxidant, antithrombotic, antihypertensive, anti-hyperlipidemic and anti-inflammatory properties. 8-12

Figure 1 NGN decreases blood glucose and 24-h urine protein levels. STZ was administered after 5 weeks of HIF to induce gestational diabetes mellitus. (A) Chemical structure of NGN. (B) Blood glucose levels measured with a glucometer in the different groups following NGN treatment for 1 and 2 weeks, presented as histograms and line graphs. (C) 24-h urine protein levels determined by the sulfosalicylic acid method in the different groups following NGN treatment for 1 and 2 weeks, presented as histograms and line graphs. Data are expressed as the mean ± standard deviation. *P<0.05, **P<0.01 and ***P<0.001 vs. control group; #P<0.05, ##P<0.01 and ###P<0.001 vs. model group. NGN, naringenin; STZ, streptozotocin; HIF, high-fat feeding.

Furthermore, it has been suggested that
NGN regulates lipoprotein metabolism and may be used in the management of diabetes, insulin resistance and atherosclerosis, as has been widely discussed in a previous study. 13 In diabetes mellitus, NGN is considered to reduce plasma glucose levels. 14 A previous study suggested that, in a type 2 diabetes rat model, NGN was able to ameliorate cognitive deficits via effects on oxidative stress, pro-inflammatory factors and PPAR signaling. 15 In high-cholesterol-fed rats, NGN attenuated renal and platelet purinergic signaling perturbations by inhibiting ROS and NF-κB signaling pathways. 16 NGN also exhibits a potential cardioprotective effect via the regulation of oxidative stress and inflammatory markers in doxorubicin- and isoproterenol-induced cardiotoxicity. 17 Furthermore, it has been reported that NGN exhibits a protective effect against glycerol-induced acute renal failure in the kidney of rats. 18 However, the role of NGN in GDM has not been reported thus far.
Therefore, the present study aimed to explore the renoprotective effect of NGN on GDM in rats, as well as its effects on oxidative stress, PPAR signaling pathway, inflammatory response and cell apoptosis in rats.
Materials and Methods

Animals
A total of 63 female Sprague-Dawley rats (age, 6-8 weeks; weight, 220-250 g) were purchased from the Shanghai Laboratory Animal Center of the Chinese Academy of Sciences (Shanghai, China). The animals were housed in a specific pathogen-free animal facility and maintained under controlled room temperature (22-26°C) and humidity (40-70%) with a 12:12-h light:dark cycle. All animal experiments and the use of animals were approved by the Ethics Committee of Yixing People's Hospital. All experimental protocols conducted in the rats were carried out in accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health.
Establishment of a GDM Animal Model
Female rats were fed a high-fat (HIF) diet for 5 weeks and housed in the same cage with male rats during the estrus stage. Pregnancy was confirmed by the presence of a vaginal mucous plug the following morning, and that day was designated gestation day 1. After 5 days, the pregnant rats were weighed and injected intraperitoneally with STZ (35 mg/kg; S0130; Sigma-Aldrich; Merck KGaA, Darmstadt, Germany), which destroys pancreatic islet β cells, in order to generate gestational diabetic models. The model was considered successful when the fasting blood glucose measured at 24 and 72 h after injection remained stably above 13.5 mmol/l. NGN (SN8020; Beijing Solarbio Science & Technology Co., Ltd., Beijing, China) was dispersed in 0.9% saline solution. A total of 45 model rats were selected and randomly divided into a GDM group (n=9); NGN gavage therapy groups administered 30 (n=9), 50 (n=9) or 100 mg/kg (n=9) NGN p.o.; and a metformin hydrochloride therapy group (n=9), which received 200 mg/kg metformin hydrochloride (Zhonghui Pharmaceutical Co., Ltd., Beijing, China). Rats in each treatment group were administered the corresponding drug once a day for 2-3 weeks. In addition, untreated gestational rats were used as the normal group (n=9), while non-gestational female rats of the same age were used as the control group (n=9). Control and normal gestation rats were fed a normal diet and were not injected with STZ. Rats in the control, normal and GDM groups received the same volume of saline solution (0.9% NaCl).
General Observation
The mental state (spontaneous activity), appearance (coat color), eating, drinking, urine volume and weight of the rats were carefully observed and recorded every day before and during the 2-3 weeks of treatment.
Detection of Blood Glucose and 24-h Urine Protein (24h-UPro) Levels
After drug treatment for 1 and 2 weeks, blood was extracted from the caudal vein of the rats in each group in the fasting state. Fasting blood glucose levels were determined with a glucometer (OneTouch Basic 200-200; LifeScan, Inc., Milpitas, CA, USA). Subsequently, 24h-UPro levels were determined by the sulfosalicylic acid method according to the manufacturer's protocol (Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Shenzhen, China).
Determination of Serum Total Antioxidant Capacity (T-AOC)
After drug treatment, rats were sacrificed by cervical dislocation and blood was collected from the abdominal aorta, and blood serum was extracted by centrifugation at 15,000 x g at 4°C for 10 min. Then, the T-AOC was determined with an ultraviolet-visible (UV-VIS) spectrophotometer (UV-5100, Shanghai Precision Instrument Co. Ltd., Shanghai, China) according to the procedure of the Total Antioxidant Capacity Assay Kit (Sigma-Aldrich; Merck KGaA).
Pathology Examination
After blood collection, the left kidney was harvested. Samples were fixed in 4% paraformaldehyde overnight and embedded in paraffin. Paraffin sections of 5-μm thickness were cut for histological analysis. Hematoxylin and eosin (H&E) staining was performed using a standard protocol. 19 The pathological morphological changes of the kidney tissue were then observed under an inverted light microscope (CKX41; Olympus Corporation, Tokyo, Japan).
Apoptosis Assay
After treatment, rats were sacrificed by cervical dislocation. To investigate apoptosis, terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) analysis was performed on 5-μm paraffin sections using the in situ Cell Death Detection kit (Roche Molecular Biochemicals) following the manufacturer's protocol. Apoptosis of rat glomerular cells was observed under light microscopy. The number of apoptotic cells and the total number of cells were counted, and an apoptosis index was calculated.
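The apoptosis index in the paragraph above is simply the fraction of TUNEL-positive cells among all counted cells; a minimal sketch (the function name and the example counts are illustrative, not taken from the study):

```python
def apoptosis_index(apoptotic_cells: int, total_cells: int) -> float:
    """Apoptosis index: percentage of TUNEL-positive cells among all cells."""
    if total_cells <= 0:
        raise ValueError("total_cells must be positive")
    return 100.0 * apoptotic_cells / total_cells

# Illustrative counts pooled over several microscope fields
index = apoptosis_index(apoptotic_cells=46, total_cells=512)  # ~9.0%
```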
Oxidative Stress Assessment
After drug treatment, the tissue of the right kidney was extracted, shredded and homogenized in cold phosphate buffered saline (PBS; ThermoFisher Scientific). After centrifugation at 15,000 x g for 10 min, the clear supernatant was collected and analyzed for markers of oxidative stress. Following the manufacturer's instructions, the activity of superoxide dismutase (SOD), the levels of glutathione peroxidase (GSH-Px), catalase (CAT) and malondialdehyde (MDA) were measured with a UV-VIS spectrophotometer (UV-5100; Shanghai Precision Instrument Co.,Ltd.).
Statistical Analysis
Statistical analysis was performed with SPSS v20.0 software (IBM Corp., Armonk, NY, USA). Each experiment was repeated in triplicate and all data are presented as the mean ± standard deviation. Differences among multiple groups were analyzed using one-way analysis of variance (ANOVA) followed by Dunnett's post hoc test. P<0.05 was considered to indicate a statistically significant difference.
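The one-way ANOVA used above compares between-group to within-group variance; a self-contained sketch of the F statistic with illustrative numbers (the study itself used SPSS, not this code):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: ratio of between-group to within-group
    mean squares over a list of sample groups."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares: group means vs the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares: observations vs their own group mean
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative: control, model, and treated groups of glucose-like values
f_stat = one_way_anova_f([[9.1, 9.6, 8.8], [18.1, 17.6, 18.4], [13.2, 12.9, 13.8]])
```

A large F here would then be compared against the F distribution (or followed by Dunnett's test against the model group, as in the study) to judge significance.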
Effects of NGN on Body Weight, Blood Glucose and 24h-UPro Levels
Following NGN treatment for 1 and 2 weeks, a glucometer was used to measure blood glucose levels, while the sulfosalicylic acid method was used to determine the 24h-UPro levels. The results indicated that, compared with the normal gestation group, the blood glucose and 24h-UPro levels were significantly increased in the GDM model group (Figure 1B and C, P<0.001). Upon treatment with NGN, the blood glucose and 24h-UPro levels in the NGN groups were significantly decreased in a dose-dependent manner, while the metformin-treated group also differed significantly from the untreated model group. Overall, the general status of the rats in the NGN-treated groups appeared to improve.
Effects of NGN on T-AOC in the Serum
T-AOC level was determined by UV-VIS spectrophotometry. As shown in Figure 2A, compared with the normal gestation group, the T-AOC in the serum was markedly decreased in the GDM group (P<0.01). Following treatment with NGN, the T-AOC in the NGN (30, 50 and 100 mg/kg) groups was significantly increased compared with that in the model group in a dose-dependent manner, while the metformin-treated group (positive control) also exhibited a significant difference compared with the model group (P<0.01).
Effects of NGN on Renal Tissue Histopathological Changes in Gestational Diabetic Rats
H&E staining was performed to detect pathological morphological changes in the kidney tissue of the rats. As shown in Figure 2B, the renal tissues in the model group exhibited evident histopathological changes: the glomerular volume was increased, the renal tubular epithelial cells exhibited vacuolar degeneration, and there was basement membrane thickening as well as lymphocyte infiltration. Upon treatment with NGN, the histopathological morphology of the renal tissues was evidently improved in a concentration-dependent fashion, while the metformin-treated group also exhibited marked improvement compared with the model group.
Effects of NGN on Renal Cell Apoptosis in Gestational Diabetic Rats
To explore the effect of NGN on glomerular cell apoptosis, renal cell apoptosis was observed by TUNEL assay, and the apoptosis index was calculated. As shown in Figure 2C, the number of renal cells undergoing apoptosis was significantly higher in the GDM model group than in the control group (P<0.001, Figure 2D). In contrast, in the three NGN groups, renal cell apoptosis was evidently improved, and the apoptosis rate was markedly decreased compared with the model group in a dose-dependent manner (Figure 2D). Similarly, the metformin-treated positive control group also exhibited a significant reduction in the apoptosis rate compared with the GDM model group (P<0.001, Figure 2D). These results revealed that NGN was able to significantly inhibit the apoptosis of renal cells in GDM rats.
Effects of NGN on Oxidative Stress
The activities of SOD, GSH-Px and CAT, and the content of MDA in renal tissues were determined by ELISA. As shown in Figure 3, compared with the normal gestation group, the activities of SOD, GSH-Px and CAT in renal tissues were significantly decreased (P<0.01 or P<0.001), while the content of MDA in renal tissue was significantly increased (P<0.01) in the GDM model group. Upon treatment with NGN, the activities of SOD, GSH-Px and CAT in the renal tissues of GDM rats were significantly increased, while the content of MDA was significantly decreased compared with the model group in a dose-dependent manner. Compared with the GDM model group, metformin treatment also sharply elevated the activities of SOD, GSH-Px and CAT and decreased MDA in renal tissue (P<0.01 or P<0.001).
Effects of NGN on Inflammatory Cytokines
The levels of inflammation-associated factors were also determined by ELISA. The results revealed that, compared with the normal gestation group, the levels of plasma TNF-α, IL-6, TGF-β and ICAM-1 were significantly increased in the GDM model group (Figure 4, P<0.001). Upon treatment with NGN, the levels of TNF-α, IL-6, TGF-β and ICAM-1 were significantly decreased with increasing NGN dosages (Figure 4), while the metformin-treated group also exhibited a significant difference compared with the model group (P<0.001).
Effects of NGN on PPARα Signaling and Fatty Acid Oxidative Metabolic Enzymes
To explore the effect of NGN on fatty acid metabolism in the liver, RT-qPCR and Western blotting were performed to evaluate the expression levels of PPARα, Acox1, L-PBE and MCAD. As shown in Figure 5, the mRNA and protein levels of PPARα were significantly decreased (P<0.001), while the mRNA and protein levels of Acox1, L-PBE and MCAD were significantly increased, in the model group compared with the normal gestation group. Upon treatment with NGN, the expression of PPARα was significantly increased, while the expression levels of Acox1, L-PBE and MCAD were significantly decreased in a dose-dependent manner in GDM rats compared with the GDM model group. Meanwhile, metformin-treated rats also exhibited significant differences in the expression levels of these genes compared with the model group.
Discussion
Previous evidence suggested that NGN may reduce plasma glucose levels in diabetes. 14 Several animal studies have shown that NGN is an anti-hyperglycemic drug, as it ameliorates the complications of diabetes. 11,20 In the present study, blood glucose and 24h-UPro levels were significantly decreased following treatment with NGN in GDM rats. These results suggest that NGN may reduce blood sugar levels in gestational diabetes.
Hyperglycemia during pregnancy can induce complications in multiple maternal tissues, including the kidney. 21 In recent years, with advances in pathophysiology, oxidative stress injury and the inflammatory response caused by abnormal glucose levels and lipid metabolism have been reported to play a key role in the development of nephrotic complications. 22 Previous evidence suggested that oxidative stress plays an important role in the pathogenesis of diabetes mellitus. 23 NGN is considered to be an oxidative stress reliever in the treatment of diabetes mellitus. A previous study indicated that NGN reduces MDA levels and controls the insult of lipid peroxidation in diabetes. 24 Consistent with the aforementioned findings, the present study demonstrated that NGN could elevate the activities of SOD, GSH-Px and CAT and reduce the content of MDA in the renal tissues of GDM rats.
It was also observed that NGN improved pathological alterations in the organs of diabetic mice, 25 while it significantly inhibited lipid peroxidation in the kidney and liver. In the present study, the results of H&E staining revealed evident histopathological damage in the GDM model group. However, upon treatment with NGN, the histopathological damage in renal tissues was markedly improved, indicating that NGN had a protective effect on kidney tissue in GDM rats. In addition, it has been reported that NGN reduced hepatic triglyceride and cholesterol levels and significantly upregulated the expression of PPARα. 26 The present results demonstrated that, following NGN treatment, the expression of PPARα was significantly increased, while the expression levels of Acox1, L-PBE and MCAD were significantly decreased, in the GDM rats. Furthermore, it has previously been demonstrated that NGN normalized glucose levels and lipid metabolism and improved vascular dysfunction in type 2 diabetic rats by reducing oxidative stress and inflammation. 27 Similarly, the findings of the present study revealed that NGN promoted the activities of SOD, GSH-Px and CAT in renal tissues, while significantly decreasing the expression levels of inflammatory factors, including TNF-α, IL-6, TGF-β and ICAM-1.
Figure 5 Effects of NGN on PPARα and fatty acid oxidative metabolic enzymes. (A) RT-qPCR analysis of the mRNA expression levels of PPARα, Acox1, L-PBE and MCAD in liver tissues. PPARα mRNA was significantly decreased, and Acox1, L-PBE and MCAD were significantly increased, in the GDM model group compared with the normal and control groups; NGN treatment upregulated PPARα but reduced Acox1, L-PBE and MCAD expression in a dose-dependent manner. (B) Western blot analysis of the protein expression levels of PPARα, Acox1, L-PBE and MCAD; the protein-level results were similar to the RT-qPCR results in (A). Data are expressed as the mean ± standard deviation. **P<0.01 and ***P<0.001 vs. control group; #P<0.05, ##P<0.01 and ###P<0.001 vs. model group. NGN, naringenin; PPARα, peroxisome proliferator-activated receptor α; Acox1, acyl-coenzyme A oxidase 1; L-PBE, L-peroxisomal bifunctional enzyme; MCAD, medium-chain acyl-coenzyme A dehydrogenase.

In conclusion, the present study is the first to elucidate the renoprotective effects of NGN in GDM. The results indicated that NGN improved blood glucose and 24h-UPro levels in GDM rats and increased the serum T-AOC. Furthermore, NGN improved the histopathological damage in renal tissues, inhibited the apoptosis of renal cells, improved the activity of antioxidant enzymes, and reduced oxidative stress damage and the levels of inflammatory factors in the kidneys of GDM rats. These findings suggest that NGN significantly improves the antioxidant and anti-inflammatory capacity of the kidneys in GDM rats and may therefore serve a renoprotective role in GDM.
Ethics Approval and Consent to Participate
All animal experiments and the use of animals were approved by the Ethics Committee of Yixing People's Hospital. All experimental protocols conducted in the rats were carried out in accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health.
Data Sharing Statement
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
Inferring amino acid interactions underlying protein function
Protein function arises from a poorly defined pattern of cooperative energetic interactions between amino acid residues. Strategies for deducing this pattern have been proposed, but lack of benchmark data has limited experimental verification. Here, we extend deep-mutation technologies to enable measurement of many thousands of pairwise amino acid couplings in members of a protein family. The data show that despite great evolutionary divergence, homologous proteins conserve a sparse, spatially distributed network of cooperative interactions between amino acids that underlies function. This pattern is quantitatively captured in the coevolution of amino acid positions, especially as indicated by the statistical coupling analysis (SCA), providing experimental confirmation of the key tenets of this method. This work establishes a clear link between physical constraints on protein function and sequence analysis, enabling a general practical approach for understanding the structural basis for protein function.
The basic biological properties of proteins --structure, function, and evolvability --arise from the pattern of energetic interactions between amino acid residues (1-5). This pattern represents the foundation for defining how proteins work, for engineering new activities, and for understanding their origin through the process of evolution. However, the problem of deducing this pattern is extraordinarily difficult. Amino acids act heterogeneously and cooperatively in contributing to protein fitness, properties that are not simple, intuitive functions of the positions of atoms in atomic structures (6). Indeed, the marginal stability of proteins and the subtlety of the fundamental forces make it so that many degenerate patterns of energetic interactions could be consistent with observed protein structures. The lack of knowledge of this pattern has precluded effective mechanistic models for the relationship between protein structure and function.
In principle, an experimental approach for deducing the pattern of interactions between amino acid residues is the thermodynamic double mutant cycle (7-9) (TDMC, Fig. 1A). In this method, the energetic coupling between two residues in a protein is probed by studying the effect of mutations at those positions, both singly and in combination. The idea is that if mutations x and y at positions i and j, respectively, act independently, the effect of the double mutation (ΔG_ij^xy) must be the sum of the effects of each single mutant (ΔG_i^x + ΔG_j^y). Thus, one can compute a coupling free energy between the two mutations as ΔΔG_ij^xy = ΔG_ij^xy − (ΔG_i^x + ΔG_j^y): the difference between the effect of the actual double mutant and that predicted by the independent effects of the underlying single mutations. ΔΔG_ij^xy is typically proposed as an estimate for the degree of cooperativity between positions i and j.
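The thermodynamic double mutant cycle reduces to one subtraction: the double-mutant effect minus the sum of the single-mutant effects. A minimal numerical sketch (the free-energy values are invented for illustration):

```python
def coupling_energy(ddg_double: float, ddg_single_i: float, ddg_single_j: float) -> float:
    """Thermodynamic double mutant cycle coupling (kcal/mol):
    deviation of the double-mutant effect from additivity of the singles."""
    return ddg_double - (ddg_single_i + ddg_single_j)

# Additive (independent) mutations give zero coupling
independent = coupling_energy(1.5, 1.0, 0.5)    # 0.0

# A double mutant milder than expected indicates an energetic coupling
coupled = coupling_energy(0.8, 1.0, 0.5)        # -0.7 kcal/mol
```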
However, there are serious conceptual and technical issues with the usage of the TDMC formalism for deducing the energetic architecture of proteins. First, ΔΔG_ij^xy is not the coupling between the amino acids present in the wild-type protein (the "native interaction"). It is instead the energetic coupling due to mutation, a value that depends in complex and unknown ways on the specific choice of mutations made (10). Second, global application of the TDMC method requires a scale of work matched to the combinatorial complexity of all potential interactions between amino acid positions under study.
For even a small protein interaction module such as the PDZ domain (~100 residues, Fig. 1B) (11), a complete pairwise analysis comprising all possible amino acid substitutions at each position involves making and quantitatively measuring the equilibrium energetic effect of nearly two million mutations.
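The "nearly two million" figure follows from simple counting: 19 non-wild-type substitutions at each of ~100 positions, plus all pairwise combinations. A quick check of the arithmetic:

```python
from math import comb

positions = 100        # approximate length of a PDZ domain
substitutions = 19     # non-wild-type amino acids per position

singles = positions * substitutions                   # 1,900 single mutants
doubles = comb(positions, 2) * substitutions ** 2     # 1,786,950 double mutants
total = singles + doubles                             # 1,788,850 -- "nearly two million"
```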
Finally, even if these two technical issues were resolved, it is unclear how to go beyond the idiosyncrasies of one particular model system to the general, system-independent constraints that underlie protein structure, function, and evolvability.
Recent technical advances in massive-scale mutagenesis of proteins open up new strategies to
address all these issues. In the PDZ domain, a bacterial two-hybrid (BTH) assay for ligand-binding coupled to next-generation sequencing enables high-throughput, robust, quantitative measurement of many thousands of mutations in a single experiment -- a "deep mutational scan" (12-14). Parameters of the BTH assay are tuned such that the binding free energy between each PDZ variant and its cognate ligand is quantitatively reported by its enrichment relative to wild-type before and after selection (Fig. S1 and Supplementary Methods). This relationship enables extension of single mutational scanning to very large-scale double mutant cycle analyses -- a "deep coupling scan" (DCS) study (15).
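In the simplest picture, the mapping from sequencing enrichment to binding free energy is a log-ratio scaled by RT; the calibration below is a schematic assumption for illustration, not the paper's actual fit:

```python
from math import log

RT = 0.593  # kcal/mol at ~25 degrees C

def apparent_ddg(enrichment_mut: float, enrichment_wt: float) -> float:
    """Apparent change in binding free energy from enrichment relative to
    wild-type, assuming enrichment tracks the bound fraction (a schematic
    simplification of the BTH assay calibration)."""
    return -RT * log(enrichment_mut / enrichment_wt)

# A variant half as enriched as wild-type appears ~0.41 kcal/mol weaker
ddg = apparent_ddg(0.5, 1.0)
```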
Indeed, the throughput of DCS is so high that it enables the study of double mutant cycles in several homologs of a protein family in a single experiment. Thus, DCS provides a first opportunity to deeply map the pattern and evolutionary conservation of interactions between amino acid residues in proteins, a strategy to reveal the fundamental constraints contributing to protein function.
We focused on a region of the binding pocket of the PDZ domain, a protein-interaction module that has served as a powerful model system for studying protein energetics (13,16). PDZ domains are mixed αβ folds that typically recognize C-terminal peptide ligands in a binding groove formed between the α2 and β2 structural elements (Fig. 1B). We created a library of all possible single and double mutations in the nine-residue α2 helix of five sequence-diverged PDZ homologs, including PSD95 pdz3 and PSD95 pdz2 (Table S1). Thus, we can (1) analyze the distributions of double mutant cycle coupling energies for nearly all pairs of mutations in the α2 helix and (2) study the divergence and conservation of these couplings over the five homologs.
We first addressed the problem of how to estimate native coupling energies from mutant cycle data. In general, the effect of a mutation at any site in a protein is a complex perturbation of the elementary forces acting between atoms, with a net effect that depends on the residue eliminated, the residue introduced, and on any associated propagated structural effects. Thus, the distribution of thermodynamic couplings at any pair of positions over many mutation pairs could in principle be arbitrary and difficult to interpret. However, we find surprising simplicity in the histograms of coupling energies.
In general, the data follow a double-Gaussian distribution, with either both mean values centered at zero or with one of the means different from zero (15), suggesting that the binary character of couplings may be universal in proteins. A simple mechanistic model is that the observed free energy of ligand binding arises from a cooperative internal equilibrium between two functionally distinct conformational states, with mutations at some sites capable of dramatically perturbing this equilibrium (Fig. S9). Indeed, such a two-state internal equilibrium has been observed in PDZ domains, and is part of an allosteric regulatory mechanism controlling ligand binding (17). Thus, the population-weighted mean of the distribution of coupling energies for each position pair (Fig. 2, dashed lines) provides an empirical estimate, through mutagenesis, of the native interaction between amino acids.
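The population-weighted mean described above is the mixture mean of the two fitted Gaussian components; a sketch assuming the fit has already yielded component weights and means (the numbers are illustrative):

```python
def population_weighted_mean(weights, means):
    """Mean of a Gaussian mixture: sum of weight_k * mean_k.
    Expects weights normalized to sum to 1."""
    if abs(sum(weights) - 1.0) > 1e-6:
        raise ValueError("component weights must sum to 1")
    return sum(w * m for w, m in zip(weights, means))

# E.g. 70% of mutation pairs in a null component at 0 kcal/mol and
# 30% in a coupled component centered at -0.5 kcal/mol
native_estimate = population_weighted_mean([0.7, 0.3], [0.0, -0.5])  # -0.15
```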
Two technical points are worth noting. First, the spread of the distributions is large, generally exceeding the estimated magnitude of the native interactions (Fig. 2). This means (1) that traditional mutant cycle studies carried out with specific choices of mutations are more likely to just reflect the choice of mutations rather than the native interaction, and (2) that the only way to obtain good estimates of the native interaction between residues is to average over the effect of many double mutant cycles per position pair. Second, we find that the BTH/sequencing approach displays such good reproducibility that it is possible to detect coupling energies with an accuracy that is on par with the best biochemical assays.
For example, the average standard deviation in mean coupling energies for position pairs over four independent experimental replicates in PSD95 pdz3 is ~0.06 kcal/mol. Thus, we can map native amino acid interactions with high-throughput without sacrificing quality.
What do the data tell us about the pattern of amino acid interactions? Figure 3 shows that a subset of couplings is conserved across the five homologs, while the remainder vary idiosyncratically from homolog to homolog. The conserved couplings form a chain of physically contiguous residues in the tertiary structure, comprising positions that contact (1, 5, 8) and do not contact (4) the ligand (Fig. 4B). Interestingly, position 4 is part of a distributed allosteric mechanism in some PDZ domains for regulating ligand binding (17), providing a biological role for its energetic connectivity with binding pocket residues. Overall, the pattern of couplings does not just recapitulate all tertiary contacts between residues (compare Fig. 4A with 4E, white and black circles) or the pattern of internal backbone hydrogen bonds that define this secondary structure element. Instead, conserved amino acid interactions in the PDZ α2 helix are organized into a spatially inhomogeneous, cooperative network that underlies ligand binding and allosteric coupling.
This result begins to expose the complex energetic couplings underlying protein function, but also highlights the massive scale of experiments required to deduce this information for even a few amino acid positions. How can we generalize this analysis to deduce all amino acid interactions in a protein, and for many different proteins? There are potential strategies for pushing deep mutational coupling to larger scale, but quantitative assays such as the BTH are difficult to develop, mutation libraries grow exponentially with protein size, and the averaging over homologs will always be laborious, expensive, and incomplete.
A different approach is suggested by understanding the rules learned in this experimental study for discovering relevant energetic interactions within proteins. The bottom line is the need to apply two kinds of averaging. Averaging over many mutations provides an estimate of native interaction energies between positions, and averaging the mutational effects over an ensemble of homologs separates the idiosyncrasies of individual proteins from that which is conserved in the protein family. Interestingly, these same rules also comprise the philosophical basis for a class of methods for estimating amino acid couplings through statistical analysis of protein sequences. The central premise is that the relevant energetic coupling of two residues in a protein should be reflected in the correlated evolution (coevolution) of those positions in sequences comprising a protein family (16,18-20). Statistical coevolution also represents a kind of combined averaging over mutations and homologs, and if experimentally verified, would (unlike deep mutational studies) represent a scalable and general approach for learning the architecture of amino acid interactions underlying function in a protein. The data collected here provide the first benchmark to deeply test the predictive power of coevolution-based methods.
One approach for coevolution is the statistical coupling analysis (SCA), a method based on measuring the conservation-weighted correlation of positions in a multiple sequence alignment, with the idea that these represent the relevant couplings (16,21). In the PDZ domain family (~1600 sequences, pySCA6.0 (22)), SCA reveals a sparse internal organization in which most positions evolve in a nearly independent manner and a few (~20%) are engaged in a pattern of mutual coevolution (16,21,22). In this case, the coevolving positions are simply defined by the top eigenmode (or principal component) of the SCA coevolution matrix, and represent a biologically important allosteric mechanism connecting the β2-β3 loop with the α1-β4 surface through the binding pocket and the buried α1 helix (Fig. 1B, and (23)).
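The core computation of SCA, a conservation-weighted correlation matrix, can be sketched on a binarized alignment. This toy version (binary alphabet, assumed background frequency q; not the pySCA implementation) shows the two ingredients, conservation weighting and correlation:

```python
import numpy as np

def sca_matrix(msa_binary, q=0.05):
    """Toy SCA: conservation-weighted correlation of a binarized alignment.

    msa_binary: (n_seqs, n_pos) array, 1 where a sequence carries the
    consensus residue at that position, 0 otherwise.
    q: assumed background frequency of the consensus residue.
    """
    f = msa_binary.mean(axis=0)                     # positional frequencies
    f = np.clip(f, 1e-3, 1.0 - 1e-3)
    # conservation weight: gradient of the relative entropy D(f || q)
    phi = np.abs(np.log(f * (1.0 - q) / (q * (1.0 - f))))
    C = np.cov(msa_binary.T)                        # raw covariances
    return np.abs(phi[:, None] * phi[None, :] * C)  # weighted couplings
```

In this picture, a pair of positions scores highly only if the positions covary and are both conserved, which is the combination tested against the experimental couplings below.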
Extracting the coevolution pattern in the top eigenmode for just the α2 helix (Fig. 4C), we find that coevolution as defined by SCA in fact nearly quantitatively recapitulates the homolog-averaged experimental couplings collected here (R² = 0.82, p = 10⁻¹³ by F-test, Fig. 4D). The predictions also hold for individual homologs (Fig. S10A-E), consistent with the premise that the essential physical constraints underlying function are deeply conserved. This relationship is robust to alignment size and method of construction (Fig. S11A-C), but depends on both of the basic tenets that underlie the SCA method - conservation-weighting (Fig. S11D-E) and correlation (Fig. S11F-G) (22).
Another approach for amino acid coevolution is direct contact analysis (DCA, (24,25)), a method developed for the prediction of tertiary contacts in protein structures. DCA uses classical methods in statistical physics to deduce a matrix of minimal pairwise couplings between positions (J_ij, Fig. 4E) that can account for the observed correlations between amino acids in a protein alignment, with the hypothesis that the strong couplings in J_ij will be direct contacts in the tertiary structure. Indeed, studies convincingly demonstrate that the top L/2 (where L is the length of the protein) couplings are highly enriched in direct structural contacts (26). Consistent with this, this method successfully identifies direct contacts in the PDZ α2 helix (Fig. 4E, compare heat map to white and black circles) to an extent that agrees with the reported work. However, DCA-based predictions of functional energetic couplings between mutations are weakly (though significantly) related to the homolog-averaged experimental data (R² = 0.33, p = 10⁻⁴ by F-test, Fig. 4F-G). These results are similar or poorer for prediction of couplings in individual domains (Fig. S10G-K). Interestingly, a DCA model in which only the top pairwise couplings in J_ij that define tertiary contacts are retained and the weaker non-contacting couplings are randomly scrambled shows predictions that are unrelated to the experimental data (R² = 0.05, p = 0.09 by F-test, Fig. 4H). Thus, non-contact couplings in the DCA model, which represent noise from the point of view of structure prediction, contribute significantly to prediction of function.
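The distinction between raw correlations and the minimal direct couplings deduced by DCA can be illustrated with a deterministic toy example (a three-position chain with hypothetical coupling values; a mean-field-style matrix inversion, not the full DCA pipeline):

```python
import numpy as np

# A chain of direct couplings 1-2-3: positions 1 and 3 interact only
# through position 2, i.e. there is no direct 1-3 term in the model.
J_true = np.array([[ 1.5, -0.6,  0.0],
                   [-0.6,  1.5, -0.6],
                   [ 0.0, -0.6,  1.5]])   # direct-coupling (precision) matrix

# The covariance such a Gaussian model produces: transitive correlations
# appear between positions 1 and 3 even though they are not coupled.
C = np.linalg.inv(J_true)

# Inverting the observed covariance strips away the induced 1-3 correlation
# and recovers only the direct couplings.
J_inferred = np.linalg.inv(C)
```

Here C[0, 2] is clearly nonzero (an induced correlation), while J_inferred[0, 2] vanishes, which is the sense in which DCA couplings are "minimal".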
These findings clarify the current state of sequence-based inference of protein structure and function (27,28). DCA successfully predicts contacts in protein structures in the top couplings, but in its current form, does not appear to capture the cooperative constraints that underlie protein function well. In contrast, SCA does not predict direct structural contacts well, but instead seems to more accurately capture the energetic couplings that contribute to protein function(s). As explained previously, these two approaches sample different parts of the information contained in a sequence alignment (29,30), and therefore are not mutually incompatible. These results highlight the need to unify the mathematical principles of contact prediction and SCA-based energetic predictions towards a more complete model of information content in protein sequences.
In summary, the collection of functional data for some 56,000 mutations in a sampling of PDZ homologs demonstrates an evolutionarily conserved pattern of amino acid cooperativity underlying function. This pattern is well estimated by statistical-coevolution-based methods, suggesting a powerful and (given the scale of experiments necessary) uniquely practical approach for mapping the architecture of couplings between amino acids. Indeed, the remarkable conclusion is that with a sufficient ensemble of sequences comprising the evolutionary history of a protein family, the pattern of relevant amino acid interactions can be inferred without any experiments.

Figure 1 (caption fragment). (B) … and Syntrophin pdz (1Z86, blue), emphasizing the conserved αβ-fold architecture of these sequence-diverse proteins (33% average identity, Table S1). Structural elements discussed in this work are indicated. (C) The nine-amino-acid α2 helix, which forms one wall of the ligand-binding site. (D-E) The distribution of experimentally determined binding free energies, ΔG_bind, for all single mutations (D, 855/855) and nearly all double mutations (E, 56,694/64,980) in the α2 helix for the 5 PDZ homologs, with the affinity of wild-type PSD95 pdz3 indicated (wt). The red lines indicate the independently validated range of the assay (Fig. S1B); essentially all measurements fall within this range. These data comprise the basis for a deep analysis of conserved thermodynamic coupling in the PDZ family.
"year": 2017,
"sha1": "a1b19a3baa9039ad11f42116ceb271593b752a82",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.34300",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "4361dfe0ff09cf96bf300a0474cea44f622184b9",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
Red-Sequence Galaxies at High Redshift by the COMBO-17+4 Survey
We investigate the evolution of the galaxy population since redshift 2 with a focus on the colour bimodality and mass density of the red sequence. We obtain precise and reliable photometric redshifts up to z=2 by supplementing the optical survey COMBO-17 with observations in four near-infrared bands on 0.2 square degrees of the COMBO-17 A901-field. Our results are based on an H-band-selected catalogue of 10692 galaxies complete to H=21.7. We measure the rest-frame colour (U_280-V) of each galaxy, which across the redshift range of our interest requires no extrapolation and is robust against moderate redshift errors by staying clear of the 4000A-break. We measure the colour-magnitude relation of the red sequence as a function of lookback time from the peak in a colour error-weighted histogram, and thus trace the galaxy bimodality out to z~1.65. The (U_280-V) of the red sequence is found to evolve almost linearly with lookback time. At high redshift, we find massive galaxies in both the red and the blue population. Red-sequence galaxies with log M_*/M_sun>11 increase in mass density by a factor of ~4 from z~2 to 1 and remain nearly constant at z<1. However, some galaxies as massive as log M_*/M_sun=11.5 are already in place at z~2.
INTRODUCTION
The Lambda Cold Dark Matter (ΛCDM) cosmological model predicts a bottom-up hierarchical scenario of structure formation in which large structures have been formed recently (z ≲ 1) through mergers of existing smaller structures. It is still unclear when and how in this context galaxies have formed and how they have evolved into the baryonic structures made of stars we see today.
The most evolved and massive systems known in the Universe are found especially among red-sequence galaxies. They are straightforward to identify and seem to mark an end point in galaxy formation. Model predictions can thus be tested against measurements of the build-up of this population through cosmic time. To this end, the population needs to be tracked towards higher redshift. Their colour, mass and number density hold clues about their mass assembly over time.
It is still unknown when in the history of the universe the red-sequence population first emerged. Using the optical COMBO-17 data, Bell et al. (2004) have shown that the galaxy bimodality is present at all redshifts out to z = 1 and that the red sequence was already in place at z = 1, see also Weiner et al. (2005). Several spectroscopic studies have observed massive red galaxies at z ≳ 1.5 (Cimatti et al. 2004; Glazebrook et al. 2004; Daddi et al. 2005; McGrath et al. 2007; Conselice et al. 2007; Kriek et al. 2008; Cassata et al. 2008), but did not allow the study of large samples. On the other hand, photometric redshift surveys combining optical and NIR data can provide photometric redshifts for larger samples of high redshift galaxies. Recently, Taylor et al. (2009) demonstrated the persistence of the red sequence up to z ∼ 1.3 and Cirasuolo et al. (2007) as well as Franzetti et al. (2007) up to z ∼ 1.5. Using colour-colour diagrams, Williams et al. (2009) claimed the presence of quiescent galaxies analogous to a red sequence up to z ∼ 2. A persistent issue for photometric redshift surveys is an increasing uncertainty in the measurement of rest-frame colours at high redshift (z ≳ 1.5). It increases the scatter in colour-magnitude and colour-stellar mass diagrams, thus blurring the signature of a possible red sequence at high redshift. Especially, the expected small red-sequence population in field samples may be undetectable.
In this paper, we maximise the separation of red quiescent and blue star-forming galaxies by using the rest-frame colour (U 280 − V ) as a diagnostic. Wolf et al. (2009) have shown how in galaxy clusters this colour also tends to render star-forming red galaxies bluer to leave a more purified red sample. We use colour-error-weighted histograms to improve the recognition of a red sequence in the face of increasing colour uncertainties at high redshift (see Sect. 3 for all methods). In Sect. 4 we investigate the colour distribution up to z = 2 and derive a colour-magnitude relation and a bimodality separation evolving with redshift. Thus, we attempt to estimate when the red-sequence population emerges. Ultimately, we quantify the evolution of the following galaxy properties: colour, luminosity, mass and number density for the red-sequence galaxy population since z = 2 (Sect. 5). We summarize and conclude this study in Sect. 6. Throughout this paper, we use the cosmological parameters Ω m = 0.3, Ω Λ = 0.7 and H 0 = 70.7 km s −1 Mpc −1 . All magnitudes are in the Vega system.
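As a concrete aid, lookback times under the adopted cosmology can be computed by direct numerical quadrature of the standard Friedmann integral (a generic sketch, not code from the survey):

```python
import numpy as np

# Cosmological parameters adopted in this paper
H0 = 70.7                     # km s^-1 Mpc^-1
OMEGA_M, OMEGA_L = 0.3, 0.7
KM_PER_MPC = 3.0857e19        # kilometres per megaparsec
SEC_PER_GYR = 3.156e16        # seconds per gigayear

def lookback_time_gyr(z, n=20000):
    """t_L(z) = (1/H0) * integral_0^z dz' / [(1 + z') E(z')] for a flat
    Lambda-CDM universe, evaluated by trapezoidal quadrature."""
    zp = np.linspace(0.0, z, n)
    E = np.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    integrand = 1.0 / ((1.0 + zp) * E)
    integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(zp)) / 2.0
    hubble_time_s = KM_PER_MPC / H0
    return integral * hubble_time_s / SEC_PER_GYR
```

With these parameters, the survey limit z = 2 corresponds to a lookback time of roughly 10 Gyr, which is the baseline over which the colour evolution of the red sequence is traced.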
The COMBO-17+4 Survey
The COMBO-17+4 survey is the NIR extension of the optical COMBO-17 survey and is designed to probe galaxy evolution since z = 2. Near infrared data were necessary to obtain optical restframe properties for galaxies in the redshift range 1 < z < 2, where the optical rest-frame of galaxies is shifted into the NIR. The survey consists of observations in four NIR bands (λ/∆λ): the three medium-bands Y (1034/80), J 1 (1190/130), J 2 (1320/130), and the broad-band H(1650/300). In the long run, the COMBO-17+4 survey targets three independent fields (A901, A226, and S11) for a total coverage of 0.7 deg² (see Table 1 for coordinates and integration times). The results derived in this paper are based on the observations of the A901-field only. This field contains the supercluster of galaxies A901/2 located at z = 0.165 where further multi-wavelength coverage from X-ray to radio was obtained by the STAGES survey (Gray et al. 2009).
Data
The NIR data were obtained in several observing runs from December 2005 to April 2009 with the NIR wide field camera Omega2000 at the prime focus of the 3.5-m telescope at Calar Alto Observatory in Spain. The camera has a pixel size of 0.45 ′′ and a wide field of view of 15.4 ′ × 15.4 ′ , so that a half-degree COMBO-17 field can be covered with a 2 × 2-mosaic. On the A901-field, only three of four pointings could be finished in the time awarded to the project, and hence the NIR data are missing in the south-west quadrant.
Also, an area of 68″ × 76″ centred at (α, δ) J2000 = 09h 56m 32s.4, −10° 01′ 15″ was cut out to avoid spurious objects created by the halo of a very bright (K = 5.75 mag) Mira variable star, also known as the IRAS point source 09540-0946. The total NIR coverage is thus 690 arcmin² ≃ 0.19 deg². The optical data include a combination of 17 broad and medium bands centred between 365 nm and 915 nm and were obtained with the Wide Field Imager (WFI) at La Silla Observatory between February 1999 and January 2001 by the COMBO-17 survey (Wolf et al. 2003; Gray et al. 2009).
Data Analysis
The NIR data reduction and photometry were performed using the software ESO-MIDAS in combination with the MPIAPHOT package developed by Röser & Meisenheimer (1991) and the OMEGA2k data reduction pipeline developed by Faßbender (2003). The image data reduction consisted of flatfielding, dark current and sky background subtraction as well as correction for bad pixels and pixel hits by cosmic ray events. Additive stray light appearing in a ring shape in all scientific and calibration images taken with the Omega2000 camera has been subtracted at the flatfield level for images taken in the Y, J 1 , and J 2 filters, where the stray light, if not corrected, would have contributed an additive 5%, 5%, and 10% to the flux, respectively. No stray light correction has been done for the H-band images where the additive contribution is negligible at < 0.5%.
From the coadded H-band images of the three pointings we created an H-band mosaic image with a total exposure time of 11600 sec pixel⁻¹ and 1.03″ seeing on average. The summation process assigned a weight to each input image according to its transmission, background noise and PSF, so that the H-band mosaic has an optimal PSF. Using SExtractor (Bertin & Arnouts 1996) with default parameters we obtained a deep H-band source catalogue with 31747 objects. The astrometry was performed with IRAF (Tody 1993) using hundreds of bright (H ≲ 16.0) 2MASS point sources in common with our catalogue reaching an accuracy of 0.1″ in RA and DEC.
The optical photometry has been re-derived for the source positions in the H-band catalogue instead of the previous R-band catalogue. Hence, optical and NIR photometry in all 21 bands of the COMBO-17+4 survey are measured in apertures matched in location and in size. In practice, we use a Gaussian weighting function in our aperture to give more weight to the bright central parts of an object and less weight to the fainter outer parts. Using a Gaussian sampling function and assuming a Gaussian seeing PSF allows us to sample identical areas of an object independent of seeing: we adjust the width of the Gaussian aperture to counteract seeing changes such that the integral over the aperture is conserved under seeing changes. Mathematically, our brightness measurements are identical to placing apertures with a Gaussian weighting function of 1.7″ FWHM onto a seeing-free image in all bands (COMBO-17 chose 1.5″ given the slightly better seeing of its optical data). The NIR filters Y , J 1 , J 2 and H reach 10σ aperture magnitude limits of 22.1, 21.5, 21.4, and 21.0 mag, respectively.
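The seeing compensation described above amounts to shrinking the weighting function as seeing grows. A minimal sketch, assuming Gaussian PSFs so that FWHMs add in quadrature under convolution (the helper name is ours, not from MPIAPHOT):

```python
import math

TARGET_FWHM = 1.7   # arcsec: effective aperture on a seeing-free image

def weighting_fwhm(seeing_fwhm, target_fwhm=TARGET_FWHM):
    """FWHM of the Gaussian weighting function to apply on an image with
    the given seeing, so that the effective aperture (weighting function
    convolved with the seeing PSF) always has `target_fwhm`.

    Gaussians convolve by adding FWHMs in quadrature:
        target^2 = weight^2 + seeing^2
    """
    if seeing_fwhm >= target_fwhm:
        raise ValueError("seeing exceeds the target aperture width")
    return math.sqrt(target_fwhm ** 2 - seeing_fwhm ** 2)
```

For the average 1.03″ seeing of the H-band mosaic, this gives a weighting function of about 1.35″ FWHM; in better seeing the weighting function widens, keeping the sampled area of each object fixed.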
The COMBO-17 spectrophotometric stars have been used to ensure the calibration of the optical bands between each other (Wolf et al. 2001b). We extended the calibration into the NIR using the Pickles (1998) spectral library by visually matching the colours of point sources in our data to main sequence stars in the library. We estimate that the relative calibration between optical and NIR has a limited accuracy on the order of 7%. The calibration of our photometry in the H-, J 2 -, and J 1 -band has been verified by comparing the H magnitude and the average value of the J 2 and J 1 magnitude respectively to the H and J magnitudes of 340 stars with H ≲ 16 in common with the 2MASS catalogue. For the point sources, we found a mean offset in magnitude of < 1% in the H-band and < 0.4% in the averaged J 1 - and J 2 -band. No verification could be done for the Y -band magnitude since no point source catalogue currently exists in this waveband.
Photometric Classification and Redshifts
Photometric redshifts were determined using the multi-colour classification code by Wolf (1998) as it was done in COMBO-17. Objects are divided into the four classes star, white dwarf, galaxy and quasar by comparing measured colours with colour libraries calculated from spectral templates. The templates for stars and quasars are identical to those in Wolf et al. (2004), while a modified library has been built for galaxies, in order to allow reliable estimates of the stellar mass (see below). At the bright end, the redshift accuracy is limited by systematic errors in the relative calibration of the different wavebands or in a mismatch between templates and observed spectra. At the faint end, photon noise dominates.
This dataset is a superset of the COMBO-17 data with the four NIR bands added. As a result, the photo-z's have changed very little at z < 1, where the NIR bands add little constraints, but it is reasonable to assume that they have improved over the optical-only results at z > 1. In Fig. 1 we show four examples of red galaxy SEDs and their best-fit templates. At z < 1 the fit is clearly constrained by the original COMBO-17 data at λ < 1µm. At z > 1 the four NIR bands act in concert with the optical data, while for red galaxies towards z = 2 they are the sole providers of significant flux detections. The two z > 1-galaxies shown are examples of EROs with R − H ∼ 5 and 5.7, respectively. To the eye, their redshift is clearly constrained by locating the break between neighbouring pairs of filters, while the fit takes all filters into account to constrain the redshift further.
COMBO-17 photo-z's have been shown to be accurate to σ z /(1 + z) < 0.01 at R < 21, < 0.02 at R < 23 (Wolf et al. 2004), albeit on a different field (the CDFS). On the A901 and S11 field, we only have spectra for galaxies at z < 0.3. From these, the photo-z dispersion of the cluster A901/2 has been measured as 0.005 rms (Wolf et al. 2005;Gray et al. 2009). Presently, we lack the ability to confirm our photo-z's at z > 1 here. Hence, we need to rely on the plausibility of the SED fits to the photometry as shown in Fig. 1.
However, the photometric errors and the grid of templates allow the estimation of probability distributions and confidence intervals in redshift for each galaxy. These estimated redshift errors are shown in Fig. 2, where they illustrate the change in behaviour with redshift and magnitude. Obviously, photometric errors increase towards faint magnitudes and propagate into redshift errors, but at z < 1.2 the redshift error is still very much driven by the deep optical photometry. Hence, the bulk of objects has σ z /(1 + z) < 0.05 even at our adopted H-band limit. At z > 1.2, however, where the main redshift constraints are in the NIR bands, a clear upswing of redshift errors with H-band magnitude can be seen.
Galaxy Samples
After eliminating ∼ 2300 objects classified as non-galaxies, we defined our sample by first applying an H-band selection above our 5-σ detection limit of H = 21.7. However, we are concerned with the evolution of galaxy samples that need to be complete in rest-frame V -band luminosity or stellar masses. We avoid a colour bias by applying further magnitude cuts and eliminate from our colour-magnitude diagrams those faint tails of the galaxy distribution that are known to be incomplete. Across our redshift range several observed bands with different completeness limits map onto the rest-frame V -band. Hence, we opted to apply the following two further cuts: (1) At z < 0.43 the entire observed R-band is redwards of the 4000Å-break and roughly coincides with the rest-frame V -band. Here, we require galaxies to have R < 23.5, which is the completeness limit of the optical-only COMBO-17 redshifts for z < 0.43-galaxies. The presence of NIR data may have deepened our completeness in this regime, but we opt to err on the conservative side and eliminate this faint end from our analysis. Note, that optically-faint red galaxies at higher redshift (such as EROs) are by definition NIR-bright and have thus very well-constrained SED fits and (higher) redshift estimates (see Fig. 1).
(2) In the redshift range 0.43 < z < 1.4, the Y -band is redwards of the break and we require Y < 22.8 (our 5-σ limit). Again, objects that are particularly red in Y − H are expected to reside at z > 1.4.
Finally, at z > 1.4 the SED around the 4000Å break is entirely sampled by our NIR filters and the H < 21.7 selection is sufficient to ensure completeness and an accurate redshift estimate. These selections are complete in stellar mass log(M * /M ⊙ ) to 8.5, 9.5 and 10 in the redshift ranges z < 0.43, 0.43 < z < 1.4, and z > 1.4, respectively.
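The redshift-dependent cuts above can be transcribed directly (a sketch; magnitudes in the Vega system as elsewhere in the paper):

```python
def in_complete_sample(z, R, Y, H):
    """Apply the redshift-dependent magnitude cuts defining the sample.

    All galaxies must satisfy the 5-sigma H-band limit; the additional
    cuts keep only the regime complete in rest-frame V-band luminosity.
    """
    if H >= 21.7:        # global 5-sigma H-band selection
        return False
    if z < 0.43:         # rest-frame V sampled by the observed R-band
        return R < 23.5
    elif z < 1.4:        # rest-frame V sampled by the observed Y-band
        return Y < 22.8
    else:                # 4000A break fully covered by the NIR bands
        return True
```

Note that the completeness limit a galaxy must pass depends only on its photometric redshift, so the same object can enter or leave the sample as the redshift estimate changes.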
After removing galaxies with bad flags, our sample contains 10692 galaxies. Fig. 3 shows this sample and reveals already a number of large-scale structures. The dominant overdensity at z ∼ 0.16 is the original main target of this COMBO-17 field, the supercluster region A901/2 with well over 1000 galaxies. Further clusters and large-scale structures have been identified and reported in this field, partly from weak gravitational lensing (Taylor et al. 2004; Simon et al. 2010) and partly from a galaxy cluster search (Falter et al. 2010). These include localised clusters embedded in large-scale structures at z ∼ 0.26, 0.37, 0.5, 0.7, 0.8. At higher redshift and fainter magnitudes, i.e. z > 0.8 and H > 20, redshift focussing effects are possible, and structures which appear only at faint magnitudes but without a sharp tail to the bright edge are unlikely to be physically localised overdensities. In Fig. 3 the two possible unreal overdensities are the faint-end blobs at z ∼ 1 and z ∼ 1.2. Their appearance does not require redshift errors in excess of what is discussed above, but only a mild focussing within the allowed errors. At z > 1.2 no structure can be seen, as any physical contrast has been smoothed by our redshift errors.
The histogram of photometric redshifts of the complete galaxy sample is shown in Fig. 4. Here, we show first the optical-only COMBO-17 redshifts of four different fields, which demonstrate the signature of abundant large-scale structure. In the case of the CDFS, these have been reported consistently by a variety of groups. Overall, the structures include six Abell clusters as well as further clusters and rich groups, usually characterised by a pronounced red sequence. It is thus clear that we investigate a variety of environments across redshift, and only the combination of several fields will eventually suppress error propagation from field-to-field variation.
The histogram in the right panel shows new redshifts after including the NIR bands of COMBO-17+4. It is selected by combined apparent-magnitude cuts in the R-, Y - and H-bands, instead of the sole R-band in the optical-only photo-z's, and thus numbers at higher redshifts are increased. The inclusion of NIR data allows for good photo-z's at z > 1, but presently such data are only available for three quarters of the A901-field covering 0.2 deg² area. This is our first look at z > 1.2-galaxies, while completion of our project will entail three fields with a different mix of environment at each redshift.
The bulk of our sample is in the range 0.7 < z < 1.1, consistent with the z = 0.9-peak in the n(z) found in COSMOS (Ilbert et al. 2010).
Spectral Library of Galaxies
The photometric redshifts of galaxies in the optical-only COMBO-17 (Wolf et al. 2004) are based on synthetic templates produced by the population synthesis model PEGASE (Fioc & Rocca-Volmerange 1997). A single, exponentially declining burst of star formation was assumed for the star formation history (τ = 1 Gyr). The templates form a 2-dimensional grid in which one parameter is the age since the start of the burst (60 steps between 0.05 to 15 Gyr) and the second one is extinction, which was modelled as a foreground screen of dust following the Small Magellanic Cloud (SMC) law defined by Pei (1992) in six steps from A V = 0 to A V = 1.5. Here, we employ a new library based on a two-burst model following the approach of Borch et al. (2006). The motivation for a two-burst model is to reproduce better the stellar population of blue galaxies that contain both an old stellar population and ongoing star-formation activity and to provide more realistic mass-to-light ratios. However, the two-burst model by Borch et al. (2006) failed to deliver accurate redshifts.
Our new two-burst library solves this problem by including extinction again and adjusting the age and strength of the second burst. For red galaxies (ages > 3 Gyr) we leave the single-burst templates unchanged (τ 1 = 1 Gyr), and for bluer galaxies we add a recent burst (τ 2 = 0.2 Gyr) to the old population starting 2.75 Gyr after the first one. Increasingly blue galaxy templates are generated by both increasing the relative strength of the second burst and by moving the final age (at which the galaxy is observed) from 3.0 Gyr to 2.80 Gyr (i.e. towards the start of the second burst). This ensures that the templates fall into a region of the rest-frame colour-colour diagrams that is occupied by observed galaxies, and thus leads to very accurate photometric redshifts. We assume a Kroupa IMF, an initial metallicity of 0.01 (∼ 2/3 Z ⊙ ) and neither infall nor outflow. More details will be given in Meisenheimer et al. (in preparation).
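The template grid described above can be summarized in code; the spacing of the age steps and the number of blue final-age steps are assumptions not stated in the text:

```python
import numpy as np

# Parameter grid of the template library, transcribed from the text.
TAU1_GYR = 1.0                   # e-folding time of the old burst
TAU2_GYR = 0.2                   # e-folding time of the added recent burst
SECOND_BURST_START_GYR = 2.75    # onset of the second burst

ages_gyr = np.geomspace(0.05, 15.0, 60)   # 60 age steps; log spacing assumed
a_v_steps = np.linspace(0.0, 1.5, 6)      # SMC-law extinction screen, 6 steps

# Blue templates: final ages approach the second-burst onset from above;
# the number of steps here is illustrative only.
blue_final_ages_gyr = np.linspace(3.0, 2.80, 5)
```

This makes explicit that blueness is controlled jointly by the strength of the recent burst and by how close the final age sits to its onset.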
Stellar Mass Estimation
The stellar mass of a galaxy is derived from the best-fitting template SED, whose characteristics are constrained mostly by the rest-frame spectrum between 280 nm and the V -band, which is observed at all redshifts. Formally, we use data from all bands, but our masses can be seen to be derived from a galaxy's observed V -band luminosity and the V -band mass-to-light ratio of its best-fitting template.
Our use of a fixed rest-frame band to determine a galaxy's luminosity differs from that of Borch et al. (2006) who only used the reddest observed-frame band with good photometry, and thus may suffer unintended bias trends with redshift. At low redshifts z < 0.5, Borch et al. (2006) sample the rest-frame V -band as well so that our mass estimate should only differ from theirs due to the different templates. Only due to the NIR extension in COMBO-17+4 can we continue to use the rest-frame V -band out to z = 2.
Indeed, a comparison of galaxies at the redshift of the super-cluster Abell 901/2 (0.150 < z < 0.175) shows that systematic differences in the stellar masses do not exceed 0.1 dex. Beyond z = 0.5 the stellar masses derived here are superior due to the NIR photometry. We estimate a typical mass accuracy on the order of 30% across all redshifts.
We wish to assess possible biases in the stellar mass estimates arising from dust. We consider the Bell & de Jong (2001) relation between stellar mass-to-light ratio and colour,

log(M * /L V ) = a V + b V (B − V), (1)

because it allows an analytic derivation of a reddening vector in colour-mass diagrams, and our masses are consistent with Eq. 1 within ±0.1 dex. Any dust reddening can now be seen as a mean reddening plus some structure. Approximating the mean reddening by a uniform screen of dust right in front of the stellar population changes its colour by ∆(B − V ) = E B−V and its V -band luminosity by ∆V = R V E B−V . The reddening-induced overestimate of the M/L-ratio and the absorption-induced underestimate of L cancel exactly, if

b V = 0.4 R V , (2)

which is almost true for the Milky Way, LMC or SMC dust law from Pei (1992). Dust acting like a foreground screen on the stellar population will thus not bias the mass estimates at all. However, if pockets of highly absorbed stars exist as well, they will be entirely withdrawn from the optical view and not contribute to either luminosity or colour. They will thus be plainly not taken into account and the final mass estimate will be underestimated by just the stellar mass present in highly obscured pockets.
Rest-Frame Luminosities and Colours
Rest-frame luminosities are derived from the observed photometry covering the wavelength range from the U -band to the H-band. We obtained rest-frame luminosities for the V -band in the Johnson photometric system as well as for the U 280 -band, a synthetic UV continuum band with a top-hat transmission curve that is centred at λ = 280 nm and 40 nm wide. These two filters are covered by our observed bands across almost the entire range of interest. This is essential as our study does not rely on SED extrapolation except for galaxies in the small range 0.2 < z < 0.3. The rest-frame colour U 280 − V is measured robustly against redshift errors since both filters are located in smooth continuum regions of the galaxy spectra. The Johnson U -band, in contrast, partly overlaps with the 4000Å-break and is hence affected strongly by small uncertainties in the redshift determination.
The rest-frame luminosities are derived as described by Wolf et al. (2004). The best-fitting SED is placed into the aperture photometry and integrated over the redshifted rest-frame bands. We have taken into account the interstellar foreground extinction as well as a correction from aperture to total photometry, which could be biased by colour gradients. The latter is determined from the total magnitude MAG-BEST derived by SExtractor on the H-band image, and is thus reliable for any rest-frame band overlapping with the observed H-band. The magnitude errors are determined from a propagation of the photometric errors. They include a minimum error of 0.1 mag to take into account redshift errors and overall calibration uncertainties; for details see Wolf et al. (2004).
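The error budget can be sketched as follows (a minimal illustration; whether the 0.1 mag floor is applied per band or to the combined error is our assumption — here it is applied to the combined colour error, as suggested by the behaviour in Fig. 5b):

```python
import numpy as np

MIN_ERR = 0.1  # mag; floor covering redshift and calibration uncertainties

def colour_error(err_band1, err_band2):
    """Propagate two band magnitude errors into a colour error
    (quadrature sum), floored at the assumed minimum of 0.1 mag."""
    propagated = np.hypot(err_band1, err_band2)
    return np.maximum(propagated, MIN_ERR)
```

For example, two accurately measured bands with 0.03 mag errors still yield the floor value of 0.1 mag.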
Error-Weighted Colour Histograms
Generally, colour and magnitude information can be used to separate passive red galaxies from star-forming blue ones. However, at high redshift the well-known colour bimodality may be blurred by scatter from relatively large errors on the colour measurements. We address this challenge and investigate the galaxy colour bimodality through cosmic time by using error-weighted colour histograms. This method represents each galaxy by its Gaussian probability distribution in colour c, i.e. p(c) ∝ exp[−(c − c_0)²/(2σ_c²)], where σ_c is the colour error and p is normalised so that ∫ p dc = 1. Thus, a galaxy with a small error has a more peaked distribution and contributes more structure to the summed distribution than a galaxy with a large error. As a result, the structure in our histograms is driven entirely by galaxies with small colour errors, while objects with large errors lift the overall counts without producing peaks. We produced error-weighted colour histograms by summing all the Gaussian distributions within many thin redshift slices (∆z ∼ 0.1), stepping through our full redshift range of 0 < z < 2. Towards the highest redshifts, the increasing colour errors will dilute the contrast of the red-sequence peak. Redshift errors can also lead to some spill-over from a physical redshift bin into neighbouring photo-z bins and produce scatter in our measured colour evolution from bin to bin in redshift.
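A minimal numerical sketch of this construction (with toy colours and errors, not survey data):

```python
import numpy as np

def error_weighted_histogram(colours, errors, grid):
    """Sum of unit-normalised Gaussians, one per galaxy:
    p(c) ~ exp(-(c - c0)^2 / (2 sigma_c^2)) / (sigma_c sqrt(2 pi))."""
    c0 = np.asarray(colours, float)[:, None]
    sig = np.asarray(errors, float)[:, None]
    g = np.exp(-0.5 * ((np.asarray(grid)[None, :] - c0) / sig) ** 2)
    g /= sig * np.sqrt(2.0 * np.pi)  # each galaxy integrates to 1
    return g.sum(axis=0)

grid = np.linspace(-1.0, 4.0, 1001)
# toy sample: a blue and a red galaxy with small errors drive the
# structure; a large-error galaxy only lifts the baseline
hist = error_weighted_histogram([0.8, 2.3, 1.5], [0.08, 0.08, 0.6], grid)
```

The histogram integrates to the number of galaxies, and its peaks sit at the colours of the well-measured objects.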
To better disentangle the red sequence from the blue cloud, we tilt the (U_280 − V) colour in the colour-magnitude plane; see Eq. 3. The measured (U_280 − V) of each individual galaxy is projected along the slope of the red sequence, as determined in the CMD of the super-cluster A901/2 (see Fig. 8, top left), to the pivotal magnitude M_V = −20:

(U_280 − V)_{M_V=−20} = (U_280 − V) + 0.3 (M_V + 20). (3)

The derived slope of 0.3 is consistent with the slope of 0.08 in the (U − V) colour in Bell et al. (2004).
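Since the projection equation itself did not survive extraction intact, the following sketch assumes the form implied by the quoted slope: each galaxy's colour is shifted along the red-sequence slope of 0.3 to the pivot magnitude M_V = −20.

```python
import numpy as np

def tilted_colour(u280_v, M_V, slope=0.3, pivot=-20.0):
    """Project (U280 - V) along the red-sequence slope to M_V = -20.
    A galaxy lying on a CMR of slope -0.3 maps onto a constant value."""
    return np.asarray(u280_v, float) + slope * (np.asarray(M_V, float) - pivot)
```

A galaxy on the relation colour = 2.3 − 0.3(M_V + 20) projects to 2.3 regardless of its magnitude, so the red sequence collapses to a single peak in the tilted-colour histogram.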
Rest-Frame Colours and Colour Errors as a Function of Redshift
Scatter in measured galaxy colours affects the appearance of the colour bimodality in a colourmagnitude diagram and may render it invisible, especially in a sample of low density field galaxies where no clusters rich in red galaxies produce a well-defined sequence. As the scatter in colour increases with redshift, this effect can prevent us from detecting a physically present high-redshift bimodality.
For a first assessment of our ability to trace bimodality, we plot the rest-frame (U_280 − V) colour and its error δ(U_280 − V) as a function of redshift in Fig. 5. In panel (a), we see the number of red galaxies decreasing with redshift. Also, a lack of star-forming blue galaxies with (U_280 − V) < 1 at z > 1.5 is caused by a red-object bias from our H-band selected catalogue. Panel (b) shows that the colour accuracy is very good for objects located at low redshift z ≲ 0.9. The bulk of the galaxies have a colour error close to the assumed minimum of 0.1 mag. The large scatter around z ∼ 1 is caused by a locally increased magnitude error in the rest-frame V-band, which results from M_V being calculated from the relatively shallow narrow-band J_1. In contrast, the U_280 filter overlaps with the ∼ 3 mag deeper R-band at z ∼ 1. At z > 1.7, the colour uncertainties grow larger again since the rest-frame U_280-band falls into the observed I-band, which is our shallowest broad band.
Colour Bimodality and the Emergence of the Red Sequence
In Figure 6 we plot error-weighted colour histograms of galaxies in different redshift slices. Each panel shows two distinct peaks due to the bimodality, whereby the right peak represents red-sequence galaxies sitting on top of a tail of blue-cloud galaxies that reaches smoothly towards the red due to lower star-formation rates or dust reddening. Clearly, a red galaxy sample defined by a colour cut will comprise both quiescent red-sequence galaxies and dusty star-forming galaxies. At high redshift, we are unable to separate old red from dusty red galaxies, and so a red galaxy sample overestimates the space density of quiescent galaxies. However, the colour-magnitude relation should still be well constrained, as only galaxies from the proper quiescent red sequence are focussed in colour and contribute to the peak in the colour histogram, while dusty red galaxies form a smooth underlying continuum spreading across the red sequence and extending beyond it.
The super-cluster A901/2 seen in the top left panel has a particularly clear red sequence. Our results show a clear galaxy bimodality at z < 1 as already established by Bell et al. (2004). However, the depth of our survey allows us to extend the detection of the galaxy bimodality up to z ∼ 1.65, the mean redshift of the highest interval where we can still detect two distinct distribution peaks. Beyond z ∼ 1.65 it is not possible to confirm a galaxy colour bimodality since the number of objects available for our analysis drops considerably. Figure 6 bottom right shows the distribution of 423 galaxies in the redshift interval 1.7 < z < 2. It is clearly not bimodal, though this does not mean that the red sequence does not exist there. Our redshift errors are unlikely to be the main source of this disappearance, since the redshift error σ z /(1 + z) of luminous red galaxies with M V = [−23.5, −22] degrades only from a median value of ∼ 0.01 at z = 0.8 over ∼ 0.025 at z = 1.2 to ∼ 0.04 at z = 1.6. Instead, it is the combination of a relatively small number of quiescent luminous red objects due to the small area surveyed and the washing out of any red sequence signal due to increasing colour errors (see Fig. 5b) that prevents us from detecting the red sequence beyond this redshift.
Evolution of the Colour-Magnitude Relation
We use the error-weighted colour histograms in thin redshift slices of ∆z ∼ 0.1 to track the colour evolution of the red-sequence peak and plot (U_280 − V) at the pivot point M_V = −20 in Figure 7. The colour of the red-sequence peak is obtained with the MIDAS routine CENTER/GAUSS gcursor, which fits a Gaussian function to an emission line on top of a continuum. We mark the 'continuum' level on both sides of the peak interactively and obtain the peak location from the fit. We obtain errors from the uncertainty of the peak position when varying the interval in which the Gaussian is fit, but we do not include systematic calibration uncertainties in colour (see Table 2 for results). We find that the (U_280 − V)_{M_V=−20} colour of the peak evolves almost linearly with lookback time τ, and obtain a linear best fit (dashed line) of

(U_280 − V)_{M_V=−20} = 2.57 − 0.195 τ. (4)

Our data points for the peak colour scatter somewhat around this fit, which is an indication of our uncertainties in measuring the colour, as any cosmological evolution of the average galaxy population is expected to be monotonic. However, these variations may also result from us looking at different environments at different redshifts, which may be at different evolutionary stages, and not solely from our methodical uncertainties.
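The fit step can be sketched without MIDAS: fit a parabola to the logarithm of the continuum-subtracted histogram near its maximum (a stand-in for the CENTER/GAUSS Gaussian fit; the window choice plays the role of the interactive continuum marking).

```python
import numpy as np

def red_peak_colour(grid, hist, window):
    """Locate the red-sequence peak inside `window` by fitting a
    parabola to log(hist - continuum); for a Gaussian peak the
    parabola vertex is the peak colour."""
    lo, hi = window
    sel = (grid >= lo) & (grid <= hi)
    x, y = grid[sel], hist[sel]
    cont = y.min()                    # crude continuum level in the window
    z = y - cont
    core = z > 0.5 * z.max()          # fit only the top half of the peak
    a, b, _ = np.polyfit(x[core], np.log(z[core]), 2)
    return -b / (2.0 * a)             # vertex of the parabola

# toy histogram: Gaussian red peak on a flat 'blue tail' continuum
grid = np.linspace(1.0, 3.0, 401)
toy = 1.5 + 5.0 * np.exp(-0.5 * ((grid - 2.2) / 0.12) ** 2)
```

On the toy data the recovered peak colour matches the input Gaussian centre to better than the grid spacing.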
We obtain a colour-magnitude relation (CMR) as a function of redshift using the approximation for the lookback time τ ≃ 15 Gyr · z/(1 + z), and separate blue and red galaxies with a parallel relation 0.47 mag bluer than the CMR, which is

(U_280 − V)_lim = 2.10 − 0.3(M_V + 20) − 2.92 z/(1 + z). (5)

In the low-redshift regime 0 < z < 1 this cut is consistent with the one derived by Bell et al. (2004), considering an approximate colour transformation derived from our templates, which is (U − V) = 0.28 + 0.43(U_280 − V). Our CMR is slightly steeper than the one derived by Bell et al. (2004), but that translates only into a minor colour difference of < 0.1 mag for galaxies in the relevant magnitude range −24 < M_V < −18. Although we see no clear red sequence at z ≳ 1.65, we extrapolate the CMR all the way up to z = 2 to isolate red galaxies across our entire sample.
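Eq. 5 and the lookback-time approximation translate directly into a classifier (a sketch; the numerical constants are those quoted above):

```python
import numpy as np

def lookback_time(z):
    """Paper's approximation: tau ~ 15 Gyr * z / (1 + z)."""
    return 15.0 * z / (1.0 + z)

def cmr_cut(M_V, z):
    """Red/blue dividing colour of Eq. 5, 0.47 mag bluer than the CMR."""
    return 2.10 - 0.3 * (np.asarray(M_V) + 20.0) - 2.92 * z / (1.0 + z)

def is_red(u280_v, M_V, z):
    """True for galaxies redder than the evolving cut."""
    return np.asarray(u280_v) > cmr_cut(M_V, z)
```

At z = 0 a galaxy at the pivot M_V = −20 is red if (U_280 − V) > 2.10; the cut moves blueward with increasing redshift.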
The solid line in Figure 7 shows the evolution predicted by PEGASE for a single stellar population formed at a lookback time of 12 Gyr (z f = 3.7) with solar metallicity. The bulk of our data are consistent with the pure aging model. Our lowest-redshift value is based entirely on the A901/2 super-cluster at z = 0.165 (Lookback time=2.0 Gyr), which is the highest-density environment in our sample.
EVOLUTION OF THE RED-SEQUENCE GALAXY POPULATION
We now focus on the colour and the mass evolution of the whole red galaxy population, and analyse colour-magnitude diagrams (CMD) and colour-stellar-mass diagrams (CM * D) in redshift slices. We divide our galaxy sample into red and blue populations with the cut in Eq. 5. Across the entire sample at 0 < z < 2, roughly a third of the galaxies are red (3163 out of 10692), but in the high-redshift part at z > 1 less than a quarter are red (843 out of 3479 galaxies).
Colour-Magnitude Diagrams
The evolution of the red-sequence population is presented in the CMDs of Fig. 8. In each panel, the solid line indicates the bimodality separation of Eq. 5 at the mean redshift of the slice. The colour of each data point indicates its individual classification, and due to the width of the redshift intervals some galaxies scatter across the separating line. The top left panel in Fig. 8 shows the CMD of the A901/2 super-cluster centred at z = 0.165. Such a dense environment shows a clear red sequence, and this was used to derive the red-sequence slope in Eq. 3. The red sequence is not as sharp and easy to distinguish visually in CMDs beyond z = 1, especially in the redshift slice 1.07 < z < 1.19, due to the local increase in scatter caused by large colour uncertainties. This highlights the necessity of using error-weighted histograms to derive the CMR.
Altogether, we find that at a given magnitude both galaxy populations were bluer in the past; in particular, the bright end of the red sequence became ∼ 0.4 mag redder from z = 2 to z = 0.2. We also find an increasing population of bright (M_V < −22) blue ((U_280 − V) < 1) galaxies at z > 1, which has also been reported by Taylor et al. (2009) in (U − R)-vs.-M_R CMDs of a K-selected sample at 0.2 < z < 1.8 in the Extended Chandra Deep Field South.

Colour-Stellar Mass Diagrams

Fig. 9 shows the evolution of the population in CM*Ds for the same redshift slices as the CMDs in Fig. 8. We select the red sequence (black points) again with Eq. 5. We reach smaller masses than Borch et al. (2006) because our galaxy sample is primarily H-band selected instead of R-band selected. Conversely, our blue galaxy population does not reach as deep as that of Borch et al. (2006). We see a general trend whereby at fixed mass both galaxy populations were bluer in the past, just as they were at fixed magnitude. We find that massive galaxies at 0.2 ≲ z ≲ 1.0 are dominated by the red population. This was also observed by Borch et al. (2006), who derived mass functions for the red and the blue galaxy populations at z < 1 using the optical COMBO-17 data, and by many other authors (see e.g. Bundy et al. 2005). However, at z > 1 there is a growing population of massive (log M*/M⊙ ≳ 11) blue ((U_280 − V) ≲ 1) galaxies as we go back in time, a phenomenon (also observed by Williams et al. 2009; Taylor et al. 2009) that has no analogue in the local Universe.

The bottom right panel shows our highest-redshift slice (1.78 < z < 2), where our sample contains 45 red galaxies. Among them are eight very massive (log M*/M⊙ > 11.5) objects, which means that at z ∼ 2 the red sequence already contains very massive galaxies. The most massive object found in this high-redshift slice has a stellar mass of log M*/M⊙ = 12.0. Our red sequence will contain dusty star-forming galaxies besides old galaxies, but since these do not form a sequence and are not focussed in colour, they will not affect the measurement of the red-sequence colour with our tailored method.
Number Density Evolution of Massive Red Galaxies

Figure 10 shows the evolution of the number density of massive red galaxies as a function of redshift (see also Table 3). These red galaxies include both quiescent old galaxies and dust-reddened galaxies. We avoid the super-cluster A901/2 by constraining our sample to 0.2 < z < 2 and choose masses of log M*/M⊙ > 11, where we are complete at all redshifts, thus retaining 478 objects.
We find that the number density of massive red galaxies rises considerably (by a factor ∼ 4) from z ∼ 2 to z ∼ 1 and is more or less constant at z < 1. Due to the small survey area our results are affected by cosmic variance on the order of 30% as estimated using Moster et al. (2010). Nevertheless, our results are comparable within the error bars to results by the GOODS-MUSIC survey (0.04 deg 2 area) (Fontana et al. 2006) and the MUSYC survey (0.6 deg 2 area) (Taylor et al. 2009). In the highest redshift bin at z ∼ 2 the number density derived is consistent with the spectroscopic survey of Kriek et al. (2008). Since any sample of red galaxies could in principle be contaminated by some dusty red galaxies the number density derived here represents an upper limit. However, we do not expect the effect to be large at the high-mass end.
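For orientation, the conversion from galaxy counts to a comoving number density can be sketched as below (assumptions: flat ΛCDM with H0 = 70 km/s/Mpc and Ωm = 0.3 — the paper's exact cosmology is not restated in this section):

```python
import numpy as np

C_KMS, H0, OMEGA_M = 299792.458, 70.0, 0.3

def comoving_distance(z, n=4096):
    """Line-of-sight comoving distance in Mpc (flat LCDM),
    via trapezoidal integration of c/H(z)."""
    zz = np.linspace(0.0, z, n)
    ez = np.sqrt(OMEGA_M * (1.0 + zz) ** 3 + (1.0 - OMEGA_M))
    f = 1.0 / ez
    return (C_KMS / H0) * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz)))

def shell_volume(z1, z2, area_deg2):
    """Comoving volume of a survey shell in Mpc^3."""
    omega = area_deg2 * (np.pi / 180.0) ** 2   # solid angle in steradian
    vol = lambda z: (omega / 3.0) * comoving_distance(z) ** 3
    return vol(z2) - vol(z1)

# e.g. 478 massive red galaxies over 0.2 deg^2 and 0.2 < z < 2
n_density = 478.0 / shell_volume(0.2, 2.0, 0.2)   # Mpc^-3
```

This gives a density of order 10^-4 Mpc^-3, the regime probed in Figure 10.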
Stellar Mass Density Evolution of Massive Galaxies
The different evolution of the stellar mass density in red and blue galaxies is shown in Fig. 11 (see also Table 3). Again we restrict ourselves to masses of log M*/M⊙ > 11, where we are complete across 0.2 < z < 2. Like the number density, the stellar mass density of the red population increases by a factor of ∼ 4 from z ∼ 2 to z ∼ 1 and remains roughly constant at lower redshifts. In contrast, the stellar mass density of the blue population increases only by a factor of ∼ 2 from z ∼ 2 to z ∼ 1.2 and decreases by the same factor again towards low redshift. Altogether, the overall stellar mass density increases by a factor of ∼ 3 from z ∼ 2 to z ∼ 1 due to the combined contribution of red and blue galaxies, while it remains constant at lower redshift, where the red sequence dominates. Thus, the main formation epoch of the massive red galaxy population ranges over 2 > z > 1.
Comparing our stellar mass densities with the literature is not foolproof since different surveys have different selection criteria using e.g. colour, morphological types and star formation activity. They employ different methods to derive the stellar mass and they can be affected by cosmic variance.
Our results for the evolution of the mass density of the entire population between 1 > z > 0 are consistent with Conselice et al. (2007), who also found little evolution for galaxies with 11 < log M*/M⊙ < 11.5. At higher redshift, our results differ: Conselice et al. (2007) found an increase by a factor of 10.7 from z ∼ 2 to z ∼ 1, while our results show a more modest rise by a factor of ∼ 3.
Based on a 3.6 µm-selected sample of galaxies, Arnouts et al. (2007) found an increase of the mass density in the quiescent population by a factor of 2 from z ∼ 1.2 to z ∼ 0, while the star-forming population shows no evolution. Additionally, at higher redshift between 2 > z > 1.2, they found that the quiescent population increases by a factor of 10 while the star-forming population increases by a factor of 2.5. However, these results are based on a magnitude-selected sample and not a mass-selected one. Cirasuolo et al. (2007) found in a sample of M_K ≤ −23 galaxies that the space density of bright red galaxies is nearly constant over 0.5 ≲ z ≲ 1.5, while that of bright blue galaxies decreases by a factor of ∼ 2 over the same redshift interval.
Recently, Ilbert et al. (2010) found a rise in the stellar mass density of log M * /M ⊙ > 11 galaxies from z ∼ 2 to z ∼ 1 by a factor ∼ 14 for quiescent galaxies and a factor of ∼ 4.3 for red-sequence galaxies, similar to our result. This difference between quiescent and red-sequence galaxies is likely to arise from red star-forming galaxies that contaminate the red sequence more towards z ∼ 2. For both quiescent and red-sequence galaxies, they found little evolution at z < 1, as we do. Also, their highly star-forming sample compares well to our blue galaxy population. They find a rise in mass density by a factor of ∼ 1.5 from z ∼ 2 to z ∼ 1 and a decrease by a factor of ∼ 2.5 at lower redshift again.
SUMMARY AND CONCLUSION
In this work we investigated the evolution of the red sequence in terms of colour, luminosity, mass, and number density through cosmic time since z = 2. We derived an H-band catalogue of 10692 galaxies from 0.2 deg 2 of the A901-field surveyed by the deep NIR multi-wavelength survey COMBO-17+4. While deep multi-wavelength surveys provide photometric redshifts for large samples of galaxies, the measured colours suffer from large uncertainties at high redshift that wash out the contrast with which the red sequence appears on top of the tail extending from the blue cloud. Hence, we used colour histograms weighted by the colour error of each galaxy to trace even a diluted red-sequence signal. We also used the rest-frame colour (U 280 − V ) for maximum population contrast and minimum sensitivity to redshift errors.
As a result, we found a red sequence up to z ∼ 1.65, beyond which the situation is unclear. Tracking the colour evolution of the red sequence peak, we derived an evolving colour-magnitude relation up to z = 2 and used it to separate the red and blue galaxy populations. Our results show that the (U 280 − V ) colour evolution of the red sequence is consistent with pure aging. Further results are: 1. Both the red and blue galaxy population get redder by ∆(U 280 −V ) ∼0.4 mag since z = 2.
2. The population of massive blue galaxies grows from z ∼ 2 to z ∼ 1.
3. The massive end of CM * Ds is dominated by the red galaxy population at z < 1 and by both galaxy populations at z > 1.
4. Some massive red galaxies with log M * /M ⊙ ∼ 11.5 are already in place at z ∼ 2.
We investigated the number density and stellar mass density evolution of massive red galaxies and found that both increase by a factor of ∼ 4 between 2 > z > 1 and show little evolution since z = 1. This suggests that the main formation epoch of massive red galaxies is at 2 > z > 1, such that they have assembled most of their mass by z ∼ 1. Note that our masses are unlikely to be much biased by dust.
It is clear that our results are affected by cosmic variance due to the small area surveyed. Once the data are available from the two other fields (A226 and S11) targeted by the COMBO-17+4 survey, we expect to create an H-band selected catalogue of ∼ 50000 galaxies in an area of 0.7 deg², of which ∼ 12000 galaxies will be above z = 1. The more than threefold increase in area and the combination of unrelated fields on the sky will reduce the effect of cosmic variance by a factor of three and firm up our findings quantitatively.

[Fig. 6 caption: Error-weighted colour histograms. Histograms are sums of Gaussian probability distributions representing single galaxies with their individual colour errors. Structure is driven by bright objects with accurate colours, while faint, large-error objects with broad, smooth distributions do not add contrast to the peaks. The top left panel contains the super-cluster A901/2 with its prominent red sequence. The colour of the red-sequence peak, indicated by an arrow, is determined by a Gaussian fit to the localised excess above a continuum of galaxy counts. Thus, a red sequence is observed up to z ≃ 1.6. The number of galaxies, mean redshift and mean lookback time are noted.]

[Fig. 7 caption: Peak colours as listed in Table 2. The dashed line is a linear fit to the points; the solid line is an example prediction using PEGASE for a single-age stellar population with solar metallicity formed 12 Gyr ago (z_f = 3.7). The error bars account only for the colour measurement of the peak of the bright end of the red sequence; the scatter among the points illustrates further uncertainties, including systematics and cosmic variance.]

[Fig. 8 caption: Solid lines indicate the bimodality separation (see Eq. 5) at the mean redshift of each bin. The real separation is given by the colour of the data points (black = red sequence, green = blue cloud). The total number of galaxies and the red fraction are noted.]
[Further figure caption: All redshift bins enclose nearly equal comoving volumes, except for the three highest bins, which enclose 2.5 times as much volume, and the A901/2 cluster bin, with a small volume enclosed within 0.150 < z < 0.175. For reference, a reddening vector for A_V = 1 mag is shown.]
Mesoscopic p-wave superconductor near the phase transition temperature
We study the finite-size and boundary effects on a p-wave superconductor in a mesoscopic rectangular sample using Ginzburg-Landau (GL) and quasi-classical (QC) Green's function theory. Except for a square sample with parameters far away from the isotropic weak-coupling limit, the ground state near the critical temperature always prefers a time-reversal symmetric state, where the order parameter can be represented by a real vector. For large aspect ratio, this vector is parallel to the long side of the rectangle. Within a critical aspect ratio, it has instead a vortex-like structure, vanishing at the sample center.
Studies of multicomponent superfluids and superconductors have attracted attention for decades because of the diversity of textures, complex vortex structures and collective modes they exhibit. Superfluid 3He, with its spin-triplet order parameter [1,2], is a well-established example. Many studies show that superconductors with multicomponent order parameters can also be found in, for example, UPt3 [3,4] and Sr2RuO4 [5,6]. Recently, studies of multicomponent superconductors in confined geometries have drawn much attention due to advances in nanofabrication. Experiments claimed to find half-quantum vortices [7] and the Little-Parks effect [8] in Sr2RuO4 quantum rings. Surfaces are expected to have non-trivial effects on such superconductors. Some theoretical works show that surface currents are present in superconductors with broken time-reversal symmetry. [9][10][11] Considering Ru inclusions, Sigrist and his collaborators [12] have shown that a time-reversal symmetric state can be favored near the interface between Ru and Sr2RuO4 due to the boundary conditions. In our previous work [13], considering a thin circular disk with smooth boundaries and applying Ginzburg-Landau (GL) theory, we have shown that a two-component p-wave superconductor can exhibit multiple phase transitions in a confined geometry.
At zero magnetic field, we found that the superconducting transition from the normal state is always first to a time-reversal symmetric state (with an exception that occurs only far away from the isotropic weak-coupling limit), even though the bulk free energy may favor a broken time-reversal symmetry state, which can exist at a lower temperature. This time-reversal symmetric state has a vortex-like structure, with the order parameter vanishing at the center of the disk. We also argued there that these features are general, do not rely on the GL approximation, and should exist for general geometries. [14] In this paper, we investigate this question further by considering rectangular and square samples, employing both GL and quasi-classical (QC) Green's function methods. Within GL, for rectangular samples with large aspect ratios, we show that the phase transition from the normal to the superconducting state is second order and leads to a state whose order parameter is a real vector parallel to the long side of the sample. For smaller aspect ratios, the state near the transition temperature is again a time-reversal symmetric state with a vortex at the center, except for a square and only for gradient coefficients far away from the weak-coupling limit, much like what we found for the circular disk. At not too small sizes, the results from QC are qualitatively similar to those from GL, except for the precise critical sizes and aspect ratios obtained. At very small sizes, however, the QC calculation suggests that a more complicated situation can arise for some special aspect ratios: the transition can either become first order, or perhaps lead to a state with a more complicated order parameter. In this paper, we shall mostly concentrate on the parameter region where the phase transition is second order and leave the detailed investigation of the above-mentioned special case to the future.
We shall thus consider a superconductor whose orbital order parameter is given by η = η_x x̂ + η_y ŷ. We consider the dependence of η_x and η_y on the coordinates x, y, assuming that they are constant along the z direction. We take the length of the sample in the x direction to be L and the width in the y direction to be W, and assume these surfaces are smooth. The effects of a rough boundary have been discussed in Ref. [13] for the circular disk. We also limit ourselves to zero external magnetic field. Near the second-order transition temperature, the magnetic field generated by the supercurrent is negligible, hence the vector potential can always be ignored.
First, we study this system via GL theory. The GL free energy density per unit area for the bulk, F_b, can be written as

F_b = α (|η_x|² + |η_y|²) + . . . , (1)

where α = α′(t − 1) with α′ > 0, t ≡ T/T_c⁰ is the ratio of the temperature T relative to the bulk transition temperature T_c⁰, and ". . ." represents terms of higher power in the order parameter, which are irrelevant below since we are interested only in the physics at the (modified) transition temperature T_c. In the presence of spatial variations, there is an additional contribution to the free energy given by

F_g = K_1 (∂_j η_l)(∂_j η_l)* + K_2 (∂_j η_j)(∂_l η_l)* + K_3 (∂_j η_l)(∂_l η_j)* + K_4 (|∂_x η_x|² + |∂_y η_y|²), (2)

where the repeated indices j, l in the first three terms are summed over x, y, and the last term describes crystal anisotropy.
[15] Within the weak-coupling approximation, with particle-hole symmetry and an isotropic Fermi surface, K_1 = K_2 = K_3 > 0 and K_4 = 0, but we shall treat these coefficients as general parameters. The GL equations need to be accompanied by boundary conditions. The component of the order parameter perpendicular to the surface should vanish [17]; thus, at a surface point with unit normal n̂, we require n̂ · η = 0. For a smooth surface, the parallel component η_∥ should have a vanishing normal gradient [17], that is, (n̂ · ∇) η_∥ = 0 at the surface.
The GL equations for η_{x,y} can be obtained from the variational principle. Near the critical temperature, we can linearize these equations. The easiest way to match the boundary conditions is to superimpose Fourier components. Written out, the linearized GL equations for the Fourier component q become

[α + K_1 q² + K_{234} q_x²] η_{x,q} + K_{23} q_x q_y η_{y,q} = 0,
K_{23} q_x q_y η_{x,q} + [α + K_1 q² + K_{234} q_y²] η_{y,q} = 0, (3)

where q is the wavevector, K_{23} ≡ K_2 + K_3 and K_{234} ≡ K_2 + K_3 + K_4. It is easy to see that if q_x = 0 or q_y = 0, Eq. (3) decouples. We obtain either (i) η_{x,q} ≠ 0 with η_{y,q} = 0, or (ii) η_{y,q} ≠ 0 with η_{x,q} = 0. We call these solutions A phases. In case (i), we have two possibilities. One is q̂ = x̂, for which the critical temperature is determined by α′(1 − t) = K_{1234} q_x², where K_{1234} ≡ K_1 + K_2 + K_3 + K_4. Because η_x(x) is independent of y, the possible solutions are η_x = X sin(mπx/L), satisfying the boundary conditions η_x = 0 at x = 0 and L. Here X is a constant and m is an integer. The best choice is m = 1, and the critical temperature is then determined by α′(1 − t) = K_{1234}(π/L)². We call this the A_1 phase. The other possibility is q̂ = ŷ; the order parameter η_x(y) would then be independent of x, so it is not possible to satisfy the boundary conditions at x = 0 and L. In case (ii), the best solution is η_y ∝ sin(πy/W), which is just the solution in case (i) with x ↔ y. The critical temperature is determined by α′(1 − t) = K_{1234}(π/W)². We call this the A_2 phase.
If both q_x and q_y are non-zero, both η_{x,q} and η_{y,q} are finite. We call this kind of solution the B phase. To simplify the calculation, we ignore crystal anisotropy for the moment and set K_4 = 0. From Eq. (3), we find that the smallest eigenvalue is K_1 q² (for K_{23} > 0). For the component normal to the surfaces at x = 0 and L to vanish, η_x must contain the factor sin(mπx/L), where m is an integer. Because of the boundary conditions ∂η_x/∂y = 0 at y = 0 and W, η_x should be proportional to cos(nπy/W), with n also an integer. The same arguments apply to η_y. Therefore

η_x = X sin(mπx/L) cos(nπy/W), η_y = Y cos(mπx/L) sin(nπy/W). (4)

The critical temperature is highest for m = n = 1, and is thus determined by α′(1 − t) = K_1[(π/L)² + (π/W)²]. In order to satisfy Eq. (3), we need (π/L)X + (π/W)Y = 0, which means X and Y are relatively real and have opposite signs.
Comparing the transition temperatures of the A and B phases, we find that the system prefers the A_1 phase for L ≫ W, while for L ∼ W it prefers the B phase. We define the aspect ratio of the sample as ρ = L/W. The critical aspect ratio separating these two phases is

ρ_c = (K_{23}/K_1)^{1/2}. (5)

By the same reasoning, the system is in the A_2 phase for W ≫ L but prefers the B phase if ρ is larger than ρ_c⁻¹ = (K_1/K_{23})^{1/2}. The phase diagram is shown in Fig. 1(a). The cartoon pictures under the ruler specify the main characteristics of the order parameter in the corresponding phases. The solution in the middle region (B phase) is qualitatively the same as for the circular disk in [13]. Because of the relatively real coefficients in Eq. (4), we can represent η by a real vector, as done in the inset of Fig. 2(c). It is clear that the order parameter forms a vortex-like structure and vanishes at the sample center.
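The competition between the phases can be checked numerically. This sketch (K_4 = 0) uses the quadratic costs of the linearized theory: the decoupled A-phase costs K_{1234}(π/L)² and K_{1234}(π/W)², and the B-phase cost built from the smallest eigenvalue K_1 q² quoted in the text; the 2×2 matrix form is our reconstruction from those statements.

```python
import numpy as np

def grad_matrix(K1, K23, qx, qy):
    """Quadratic form of the linearized gradient energy on (eta_x, eta_y)
    for a plane wave q (K4 = 0); its smallest eigenvalue is K1*q^2."""
    q2 = qx * qx + qy * qy
    return np.array([[K1 * q2 + K23 * qx * qx, K23 * qx * qy],
                     [K23 * qx * qy, K1 * q2 + K23 * qy * qy]])

def phase_costs(K1, K23, L, W):
    """alpha'(1 - t) required to condense each candidate mode."""
    return {"A1": (K1 + K23) * (np.pi / L) ** 2,
            "A2": (K1 + K23) * (np.pi / W) ** 2,
            "B": K1 * ((np.pi / L) ** 2 + (np.pi / W) ** 2)}

def ground_state(K1, K23, L, W):
    """Phase with the highest critical temperature (lowest cost)."""
    costs = phase_costs(K1, K23, L, W)
    return min(costs, key=costs.get)
```

With the weak-coupling values K_{23}/K_1 = 2, the B (vortex) state wins for aspect ratios between 1/√2 and √2, consistent with the crossover condition above.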
When crystal anisotropy is included, the critical ratio becomes

ρ_c = [(K_{23}² + K_1 K_{234}) / (K_1 (K_1 + K_{234}))]^{1/2}. (6)

It reduces to Eq. (5) for K_4 = 0. For small K_4, the critical ratio is (2 − K_4/K_{123})^{1/2} in the weak-coupling limit, where K_{123} ≡ K_1 + K_2 + K_3. This shows that crystal anisotropy with K_4 > (<) 0 stabilizes (destabilizes) the order parameter oriented parallel to the long side of the sample. The phase diagram is then similar to Fig. 1(a), except for a smaller (larger) region of the vortex state. We ignore crystal anisotropy in the following.
With decreasing K_23/K_1, the stability region of the B phase becomes narrower. This phase diagram (Fig. 1(a)) changes qualitatively if K_1 > K_23. The B phase is then never stable (at T_c), and the system is in the A_1 phase if ρ > 1 and in the A_2 phase if ρ < 1. The square sample ρ = 1 forms a special case, where the system still has C_4 symmetry in real space, and the A_1 and A_2 phases are therefore degenerate. One can combine these two solutions with a phase difference. As a result, if the higher-order terms in Eq. (1) for the bulk free energy prefer a time-reversal-symmetry-broken state, then the system would enter such a state directly at T_c. Therefore, the phase diagram becomes Fig. 1(b) for K_23 < K_1, where the ground state at ρ = 1 should break time-reversal symmetry. [18] Our results for the square here provide further understanding of those we obtained earlier for the circular disk [13]. There, we found that, for K_{1,2,3} near the weak-coupling values (K_1 = K_2 = K_3), the phase transition from the normal state is always to the state (named n = 1 there) which preserves time-reversal symmetry but has a vortex at the center. That phase obviously has the same qualitative behavior as our B phase here. For sufficiently small K_23/K_1, we found that the system can enter a broken time-reversal-symmetry state directly. We find the same results here, though the critical value for K_23/K_1 obviously can depend on the geometry.
In order to check the validity of the phase diagram from GL theory, we employ quasi-classical (QC) Green's function theory. For simplicity, we focus on the isotropic, weak-coupling case. As shown in Ref. [13], after linearizing in the order parameter we have (2iε_n + i v_f p̂·∇)f = 2iπ (sgn ε_n) Δ, where f(p̂, ε_n, r) and Δ(p̂, r) describe respectively the off-diagonal part of the QC propagator and the pairing function, p̂ is the momentum direction, ε_n is the Matsubara frequency, and v_f is the Fermi velocity. With the pairing interaction written as V_1 p̂·p̂′, the gap equation reads as in Eq. (7), where the angular bracket denotes an angular average over p̂′ and N(0) is the density of states at the Fermi level. For our square geometry and assuming smooth surfaces, we have the boundary conditions f(θ) = f(π − θ) at x = 0 and L, and f(θ) = f(−θ) at y = 0 and W. Here θ is the angle between p̂ and x̂. Before solving the problem in the confined rectangle, we would like to mention the connection between GL theory and QC theory in the bulk. At zeroth order in gradients, one finds the relation that defines the bulk transition temperature T_c^0. The first-order contribution to f is odd in ε_n and does not contribute to the gap equation. At second order, we recover the GL theory, with K_1 given by Eq. (10) and similar expressions for K_{2,3}; in this limit K_1 = K_2 = K_3. Our equations are consistent with those in Refs. [1,2,16]. For the A_1 phase, we shall show that we can construct a self-consistent solution in QC theory with the order parameter suggested by GL theory. One finds that f is independent of y. With an ansatz satisfying the boundary conditions, solving for the coefficients C_{1,2}(θ, ε_n) via Eq. (7) and using Eq. (9), we arrive at Eq. (13). For large L, one can replace the bracket in the denominator of Eq. (13) by 1 and the left-hand side by (1 − t), recovering the GL result via Eq. (10). Hence we see that, beyond GL, one needs simply to include the extra factors in the denominator of Eq. (13) and the logarithm on the left-hand side. Now we consider the B phase.
The order parameter is Δ(p̂, r) = X sin(πx/L) cos(πy/W) cos θ + Y cos(πx/L) sin(πy/W) sin θ (15). To simplify the writing, let A = πv_f/L and B = πv_f/W. Again solving for f from Eq. (7), we obtain coupled linear equations, Eqs. (16) and (17), in X and Y. Here c_1 = (A² cos²θ + B² sin²θ)/(4ε_n²), c_2 = (A² cos²θ − B² sin²θ)²/(4ε_n²)², c_3 = (AB sin²θ cos²θ)/(2ε_n²), and D = 1 + 2c_1 + c_2. The critical temperature of the B phase is determined by the point that allows non-trivial X and Y. We note that if one keeps only the lowest orders in A² and B², so that D → 1, and replaces the logarithms on the left-hand side by (1 − t), then Eqs. (16) and (17) recover the corresponding equations (3) of GL theory. In Fig. 2(a), we compare the critical temperatures of square samples of different sizes between GL and QC theories. We use the coherence length ξ ≡ √(K_123/α′) as the unit of length (ξ = 0.199909 v_f/T_c^0 for QC). In GL theory, (1 − t), the relative suppression of the critical temperature, is inversely proportional to the square of the length scale of the system. It is therefore convenient to set the vertical axis of the phase diagram to (ξ/L)². We obtain the straight line with crosses for the critical temperature of the A phase and the line with pluses for that of the B phase, showing that the system prefers the B phase. We also present the critical temperatures calculated from QC theory: the line with squares is for the B phase and the line with circles for the A phase. As expected, the results from QC theory are consistent with those from GL theory near t = 1, corresponding to ξ/L ≪ 1. In addition to the fact that the B phase is still preferred, we see that the critical temperature is more suppressed than in GL theory as the system size decreases. [19] As the aspect ratio ρ becomes larger than 1, GL theory shows that there is a phase transition from the B phase to the A phase as ρ increases beyond the critical ratio √(K_23/K_1) (Fig. 1(a)). In Fig. 2(b), we find that this critical ratio (√2 here) still applies for large systems.
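The GL-level competition between the two phases can be illustrated numerically (a sketch in reduced units, not code from the paper: it assumes the weak-coupling values K_123 = 3K_1 and K_23 = 2K_1, sets α′ = K_1 = W = 1, and takes the A_1 pair-breaking scale to be K_123(π/L)^2 as in the derivation above; the phase with the smaller suppression of T_c wins):

```python
import math

def suppression_A(rho, k123=3.0):
    # (1 - t) for the A1 phase: alpha'(1 - t) = K123 (pi/L)^2, with W = 1, L = rho
    return k123 * (math.pi / rho) ** 2

def suppression_B(rho, k1=1.0):
    # (1 - t) for the B phase: alpha'(1 - t) = K1 [(pi/L)^2 + (pi/W)^2]
    return k1 * ((math.pi / rho) ** 2 + math.pi ** 2)

def critical_ratio(lo=1.0, hi=3.0, tol=1e-10):
    # Bisect on the difference of the two pair-breaking scales; the
    # crossing is the critical aspect ratio separating B and A1 phases.
    f = lambda r: suppression_A(r) - suppression_B(r)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rho_c = critical_ratio()
print(rho_c)  # ~1.41421, i.e. sqrt(2): B phase below, A1 phase above
```

The crossing reproduces the weak-coupling value ρ_c = √2 quoted for Fig. 2(b).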
However, we find that the B phase occupies a slightly larger ρ region when the size decreases. An example is shown in Fig. 2(b), where the system undergoes a phase transition from the A phase (thick black line) to the B phase (thin red line) for ρ = 1.6 around t = 0.67 as the system size decreases. As the aspect ratio increases further, the situation becomes more complicated because the phase boundary of the A phase is not a monotonic function of t. The shape of the curve suggests that, at lower temperatures, the transition from the normal to the superconducting state cannot be a second-order transition to the A_1 phase as described here. One possibility is a first-order phase transition between the normal state and the superconducting A_1 phase, analogous to [20]. However, a transition into a more exotic order-parameter structure cannot be ruled out [21]. We leave the detailed investigation of this question for the future. For illustrative purposes, we indicate the resulting phase diagram by postulating a first-order transition line somewhere between the two dashed lines in Fig. 2(b). The obtained phase diagram for the normal-to-superconducting transition, now with the sample area as the vertical axis, is shown in Fig. 2(c). For square samples, the stable superconducting phase is the B phase, with a vortex-like structure at the center, similar to the n = 1 state obtained in the disk geometry in [13].
In conclusion, we studied a two-component p-wave superconductor in a rectangular geometry near its transition temperature. The order parameter can behave differently depending on the aspect ratio and size. Except for some special regions in parameter space, the phase always preserves time-reversal symmetry. Our results give further support to those obtained in [13]. This work is supported by the National Science Council of Taiwan under grant number NSC 101-2112-M-001-021-MY3.
Poly(ethylene oxide)- and Polyzwitterion-Based Thermoplastic Elastomers for Solid Electrolytes
In this article, ABA triblock copolymer (tri-BCP) thermoplastic elastomers with a poly(ethylene oxide) (PEO) middle block and polyzwitterionic poly(4-vinylpyridine) propane-1-sulfonate (PVPS) outer blocks were synthesized. The PVPS-b-PEO-b-PVPS tri-BCPs were doped with lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) and used as solid polymer electrolytes (SPEs). The thermal properties and microphase separation behavior of the tri-BCP/LiTFSI hybrids were studied. Small-angle X-ray scattering (SAXS) results revealed that all tri-BCPs formed asymmetric lamellar structures in the range of PVPS volume fractions from 12.9% to 26.1%. The microphase separation strength was enhanced with increasing PVPS fraction (fPVPS) but weakened as the doping ratio increased, which affected the thermal properties of the hybrids, such as the melting temperature and glass transition temperature, to some extent. Compared with the PEO/LiTFSI hybrids, the PVPS-b-PEO-b-PVPS/LiTFSI hybrids could achieve both higher modulus and higher ionic conductivity, attributed to physical crosslinking by the PVPS blocks and to their assistance in the dissociation of Li+ ions, respectively. On the basis of their excellent electrical and mechanical performance, the PVPS-b-PEO-b-PVPS/LiTFSI hybrids can potentially be used as solid electrolytes in lithium-ion batteries.
Introduction
As lithium-ion batteries (LIBs) possess the advantages of high energy density, long lifespan, and high charging speed, they have become increasingly prevalent in recent years [1][2][3]. Despite the increase in efficiency and convenience brought by LIBs, the liquid electrolytes used commercially in LIBs may lead to potential problems such as dendrite formation, fire, and explosion [4][5][6]. Utilizing solid polymer electrolytes (SPEs) in place of liquid electrolytes could be a prospective solution. SPEs have been a focal point of research since their capability to dissolve lithium salts effectively was pioneered by Wright et al. [7], with a significant body of subsequent studies dedicated to optimizing SPE architectures [8][9][10][11]. SPEs with high shear moduli possess excellent processability and are able to inhibit dendrite growth [12][13][14]. The predominant challenges associated with SPEs revolve around harmonizing ionic conductivity with mechanical strength and enhancing the suboptimal interfacial adherence between electrode and electrolyte [15][16][17][18]. Innovations such as the integration of novel lithium salts and anions, copolymerization, graft-modified polymers, and composites with inorganic ceramic fillers have shown promise in mitigating these issues [19].
With its excellent ability to dissolve lithium salts and the chain mobility requisite for Li+ ion transport, salt-doped poly(ethylene oxide) (PEO) serves as the most representative polymer electrolyte [20,21]. There are several ways to modify homogeneous PEO materials for higher ionic conductivity and shear modulus, such as regulating the molecular weight [22], salt doping ratio [23], and temperature [24], but the trade-off between polymer chain mobility and stiffness makes it difficult to attain both excellent conductivity and mechanical strength [13,15].
Recently, block copolymers (BCPs) have been extensively studied due to their unique self-assembled structures at the nanoscale, which allow a combination of various distinctive properties [25,26]. Connecting PEO with a reinforcing block can provide SPE materials with both ion-transport channels and improved mechanical strength [27,28]. Hillmyer's group reported a BCP electrolyte material based on polymerization-induced phase separation and found that its ionic conductivity could reach 10−3 S/cm at room temperature with an elastic modulus close to 1 GPa [16]. However, among the majority of BCP materials studied, how to completely decouple conductivity from modulus remains a problem [13,29], since introducing insulating blocks with better mechanical properties into the electrolytes usually has an adverse impact on ionic conductivity. For example, Balsara et al. showed that lithium bis(trifluoromethanesulfonyl)imide (LiTFSI)-doped poly(styrene)-b-PEO (PS-b-PEO) exhibited enhanced modulus but restricted conductivity compared to PEO/LiTFSI [9,30,31]. Notably, our group found that BCPs with double conductive phases, in which both constituent blocks are able to dissolve Li+ ions, such as poly(propylene monothiocarbonate)-b-PEO (PPMTC-b-PEO), could achieve synchronous enhancement of conductivity and modulus via regulating the microphase separation behavior of the double conductive phases [32,33]. In our previous work, we systematically studied the effects of phase structure, grain size, and interphase on the conductivity of block copolymer electrolytes with double conductive phases [32][33][34][35].
In the past few years, zwitterions and polyzwitterions have been extensively applied in gel polymer electrolytes (GPEs) and SPEs for LIBs [36][37][38][39]. Zwitterions, such as sulfobetaine and carboxybetaine, contain a covalently bonded cation and anion in a single molecule, while polyzwitterions are polymers whose repeating units bear zwitterions [40,41]. Ionic conductivity studies on polyzwitterion/salt systems were carried out by Manero et al., who found that the conductivity of the systems followed the Arrhenius equation [42]. Segalman and coworkers proposed semicrystalline polymeric zwitterionic (PZI) electrolytes with superionic lithium transport [43]. Unlike the classical vehicular conduction mechanism in polyether and polymeric ionic liquid (PIL) electrolytes [44,45], lithium-ion transport in these PZI electrolytes occurred in two distinct circumstances: the amorphous phase led to vehicular motion as in traditional SPEs, while the ordered crystalline structure of the pendant ZI groups enabled superionic transport similar to that in inorganic solid-state electrolytes, where Li+ ions diffused nearly one order of magnitude faster than by vehicular motion. The decoupling of ion transport from polymer segmental rearrangement, together with size-selective motion of Li+ ions facilitated by the ordered lattices, resulted in high ionic conductivity and lithium transfer number. Macfarlane et al.
introduced zwitterions in place of ionic liquids as plasticizers in polymer electrolytes [46], and the conductivity and diffusion of lithium ions were significantly improved. They postulated that the oleophobic nature of the zwitterion could enhance the mobility of Li+ ions by providing a polar medium. Yoshizawa-Fujita designed a series of oligoether/zwitterion diblock copolymers and found that the ionic conductivity of the SPEs was maintained even at a high salt doping ratio [47]. They proposed that the aggregation of dissociated ions was hindered by the polar zwitterion structure. Further implementation of these SPEs in cathode coatings for LIBs showed that the copolymers exhibited a substantial electrochemical stability window and preserved stable discharge capacities over successive cycles.
Owing to the strong intramolecular and intermolecular interactions of zwitterions in bulk, the zwitterion phase in zwitterion-containing copolymers is strongly immiscible with the other, non-charged components [48,49]. The bulk solidity below the glass transition temperature (Tg) allows the polyzwitterion phase to serve as physical cross-linking points, thus enhancing the mechanical strength of the materials [50]. Beyer and Long compared the storage moduli of n-butyl acrylate-based zwitterionomers and ionomers [51]. They observed well-defined microphase separation and a rubbery plateau in the zwitterionomers only when the zwitterion content reached a certain amount.
In this work, ABA triblock copolymers (tri-BCPs) composed of poly(4-vinylpyridine) propane-1-sulfonate (PVPS) (A block) and PEO (B block) were used as SPEs for LIBs with the aim of simultaneously improving conductivity and mechanical performance. In addition to providing stiffness, the PVPS blocks can also dissociate and dissolve lithium salt, thus enabling the formation of double conductive phases in this type of ABA tri-BCP. The ionic conductivity and mechanical strength of the block copolymer solid electrolytes (BCPSEs), PVPS-b-PEO-b-PVPS/LiTFSI, with different compositions and doping ratios were investigated. It was observed that some of the electrolytes attained ionic conductivities and storage moduli higher than those of the PEO/LiTFSI blends. These electrically and mechanically enhanced SPEs are expected to inform the design of PEO-based electrolytes for LIBs in the future.
Synthesis of PVPS
Preparation of Polymer/LiTFSI Hybrids
A certain amount of PVPSn-b-PEO210-b-PVPSn (or PEO210) and LiTFSI were dissolved in TFEA and stirred for 12 h to obtain homogeneous solutions. After being dried under dynamic vacuum at room temperature for 24 h, the blends were placed in a vacuum oven for 48 h at 60 °C. Finally, the products were collected and stored in a glove box under a nitrogen atmosphere. The doping ratio (r) is defined as the molar ratio of Li+ ions to the sum of EO units and sulfonic acid units, both of which are able to complex with Li+ ions, i.e., r = [Li+]/([EO] + [VPS]).
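The doping-ratio definition can be made concrete with a small helper (a sketch; the monomer counts below are hypothetical illustrations, not compositions from this work):

```python
def doping_ratio(n_li, n_eo, n_vps):
    # r = [Li+] / ([EO] + [VPS]): moles of Li+ per mole of coordinating
    # units (ether oxygens plus sulfonate zwitterion units)
    return n_li / (n_eo + n_vps)

# Hypothetical example: 420 EO units and 12 VPS units per chain, with
# salt added so that there is one Li+ per 16 coordinating units
r = doping_ratio(n_li=27, n_eo=420, n_vps=12)
print(r)  # 0.0625, i.e. r = 1/16
```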
Characterizations
The composition of the P4VP-b-PEO-b-P4VP tri-BCPs was determined by 1H-NMR on a Bruker DMX 400 (400 MHz) spectrometer (Bruker Corporation, Billerica, MA, USA) using deuterated chloroform (CDCl3) as the solvent. The relative molecular weight and polydispersity of PEO and P4VP-b-PEO-b-P4VP were measured by gel permeation chromatography (GPC) on a Waters system (Waters Corporation, Milford, MA, USA) with DMF as the eluent and polystyrene (PS) standards for calibration. The flow rate of DMF was kept at 1 mL/min, and the test temperature was set at 40 °C. Fourier-transform infrared (FTIR) spectra were collected on a Nicolet 6700 spectrometer (Thermo Fisher Scientific Inc., Waltham, MA, USA) with a resolution of 1 cm−1. Differential scanning calorimetry (DSC) was carried out on a TA DSC25 instrument (TA Instruments, Wakefield, MA, USA) to characterize the crystallization behavior and glass transition temperature (Tg) of the samples. The tests were performed with heating and cooling rates of 10 °C/min under nitrogen. Temperature-variable small-angle X-ray scattering (SAXS) experiments were performed at beamline BL16B1 (Shanghai Synchrotron Radiation Facility (SSRF), Shanghai, China). Two-dimensional SAXS patterns (2D-SAXS) were recorded by a Pilatus 2M detector (Dectris Ltd., Baden, Switzerland) and transformed into one-dimensional profiles with FIT2D software (https://www.esrf.fr/computing/scientific/FIT2D/, accessed on 20 March 2024). Samples were annealed for 24 h at 120 °C and stepwise cooled to room temperature prior to SAXS testing. The wavelength of the X-ray was 1.24 Å, and the distance between the sample and the detector was 1920 mm. The average exposure time of the samples was 20 s. The scattering vector of the SAXS curves was calibrated with silver behenate. Temperature-variable rheological measurements were performed on a HAAKE RS 6000 rheometer (Thermo Fisher Scientific Inc., Waltham, MA, USA) using 20 mm diameter parallel plates with a 1 mm gap between the plates. Temperature sweeps at a frequency of 1 Hz between 30 and 100 °C were collected within the linear viscoelastic regime at a heating rate of 2 °C/min under nitrogen. Before the rheological tests, the samples were thermally pressed into films with a thickness of 1 mm and cooled to room temperature under relatively high pressure. The impedance spectroscopy of the polymer electrolyte films was recorded by an electrochemical workstation (CHI660E, CH Instruments, Inc., Bee Cave, TX, USA) under nitrogen with a frequency range of 1 MHz to 1 Hz and a fixed amplitude of 0.05 V. After being annealed at 120 °C for 24 h and cooled to room temperature, the electrolyte films were sandwiched between two stainless steel electrodes and a Teflon washer. The resistance originating from ion transport in the polymer electrolytes was determined from the local minimum in the Nyquist plot of the impedance. The conductivities (σ) of the electrolytes were calculated by the equation σ = L/(R·S), where L, R, and S represent the electrolyte thickness, resistance, and area, respectively.
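The conductivity extraction just described can be sketched as follows (hypothetical cell dimensions and resistance for illustration; in practice R is read from the local minimum of the measured Nyquist plot):

```python
def conductivity(thickness_cm, resistance_ohm, area_cm2):
    # sigma = L / (R * S), giving the ionic conductivity in S/cm
    return thickness_cm / (resistance_ohm * area_cm2)

# Hypothetical cell: 100-um-thick film (0.01 cm), 1.0 cm^2 electrode
# area, and a bulk resistance of 2.0 kOhm from the impedance spectrum
sigma = conductivity(thickness_cm=0.01, resistance_ohm=2000.0, area_cm2=1.0)
print(sigma)  # 5e-06 S/cm
```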
Microphase Separation Behavior
Figure 1 shows the SAXS profiles of the PVPSn-b-PEO210-b-PVPSn tri-BCPs and the PVPSn-b-PEO210-b-PVPSn/LiTFSI hybrids. It is found that, above the melting temperature (Tm) of PEO, all the neat tri-BCPs exhibit an ordered lamellar structure, as indicated by the relative positions of the higher-order peaks (q) and the primary peak (q*) (q : q* = 1 : 2 : 3 : ...) in the SAXS profiles (Figure 1a). Notably, for traditional non-charged diblock copolymers, a lamellar structure forms only when the volume fraction of one phase reaches 40% to 60% [53], while in our samples, the volume fractions of PVPS in the lamella-forming samples are far below 40%. This is because introducing Coulombic interactions into BCPs may shift the symmetry of the phase diagram, and the lamellar structure formed at a low content of the charged phase is a consequence of physical crosslinking among the polyzwitterion blocks [48,49]. With the increase in zwitterionic PVPS content, most samples present a reduced peak width and more higher-order peaks, indicating stronger phase separation.
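The peak-ratio criterion and the long period of the lamellar stack can be computed directly from the peak positions (a sketch; the peak values below are hypothetical, not digitized from Figure 1):

```python
import math

def is_lamellar(q_peaks, tol=0.05):
    # A lamellar phase shows reflections at q/q* = 1, 2, 3, ...
    q_star = q_peaks[0]
    ratios = [q / q_star for q in q_peaks]
    return all(abs(r - round(r)) < tol for r in ratios)

def domain_spacing(q_star):
    # Long period of the lamellar stack, d = 2*pi/q*
    return 2 * math.pi / q_star

# Hypothetical peak positions in nm^-1 (illustrative only)
peaks = [0.35, 0.70, 1.05]
print(is_lamellar(peaks))           # True -> consistent with lamellae
print(domain_spacing(peaks[0]))     # ~18 nm long period
```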
In contrast, as the doping ratio rises, the ordering degree of the microphase-separated structures becomes lower. Hybrids at different doping ratios show ill-defined structures at room temperature, as their SAXS profiles (Figure 1b-d) show no higher-order peaks, or higher-order peaks with uncharacteristic proportional position relations to the primary peaks. Homogeneous structures are even formed in the PVPS3.1-b-PEO210-b-PVPS3.1/LiTFSI hybrids at doping ratios of 1/12 and 1/6, as indicated by their flat SAXS profiles (Figure 1c,d). This result is related to the competitive distribution of lithium salt in the different blocks, as proved by the FTIR spectra shown in Figure S4. Lithium
salt is first distributed in the PEO phase, leading to a charged characteristic of the PEO block, improved similarity with the PVPS block, and decreased microphase separation strength. The shoulder peaks superposed on the primary scattering peaks observed at the doping ratio of 1/16 are attributed to PEO crystals, and they disappear upon melting of the PEO crystals (Figure S5).
Temperature-variable SAXS experiments were carried out for PVPSn-b-PEO210-b-PVPSn/LiTFSI hybrids at r = 1/16, where n corresponds to 4.5, 6.2, and 7.4, respectively. In Figure S5, it is observed that the scattering vector ratio of the primary and higher-order peaks remains 1:2 at temperatures above the melting of the PEO crystals, suggesting that these hybrids maintain a lamellar structure.
The effective Flory–Huggins parameter (χeff) between PEO and PVPS at the doping ratio of 1/16 was obtained by fitting the disordered scattering peak of the PVPS3.1-b-PEO210-b-PVPS3.1/LiTFSI hybrids in terms of the theory developed by Leibler [54][55][56]. It is found that χeff of this hybrid is 0.51 at 30 °C (Figure S6), much larger than the 0.09 reported for PS-b-PEO/LiTFSI hybrids at the same doping ratio and temperature (the detailed formula and calculation are given in the Supplementary Materials) [30] and the 0.092 for a PEO114-b-P4VP27 BCP at 60 °C [57]. The strong incompatibility between PEO and PVPS accounts for the maintenance of the ordered structure, which ensures the formation of physical crosslinking points in the thermoplastic elastomers.
Thermal Properties
The thermal behaviors of the PVPSn-b-PEO210-b-PVPSn/LiTFSI hybrids were characterized by DSC (detailed information is given in Table S2). As illustrated in Figure 2, a glass transition at low temperature and an endothermic peak are observed for each sample, while a cold crystallization process is seen in some specific samples. The endothermic peaks are ascribed to the melting of PEO. The crystallinity of PEO given in Table S2 was calculated from the melting enthalpy, and it was found to decrease with the doping ratio. Notably, when the doping ratio is raised to 1/6, the crystallinity of PEO decreases to below 5%. As a result, the impact of crystallization on the subsequent experiments performed above room temperature can be ignored for the samples with r = 1/6.
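The crystallinity calculation referenced above follows the usual enthalpy-ratio form (a sketch; the equilibrium melting enthalpy of PEO is taken here as 196.8 J/g, a commonly used literature value, and the sample numbers are hypothetical, not values from Table S2):

```python
def peo_crystallinity(dh_m, w_peo, dh_m0=196.8):
    # X_c = dH_m / (w_PEO * dH_m0), where dH_m is the measured melting
    # enthalpy (J per g of sample), w_PEO the PEO weight fraction, and
    # dH_m0 the melting enthalpy of 100% crystalline PEO (assumed value)
    return dh_m / (w_peo * dh_m0)

# Hypothetical sample: 70 J/g measured enthalpy, 80 wt% PEO
xc = peo_crystallinity(dh_m=70.0, w_peo=0.80)
print(round(xc, 3))  # 0.445, i.e. ~44.5% crystalline PEO
```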
The Tgs observed in Figure 2 are close to those of PEO, so they are assigned to the PEO-rich phase in the PVPSn-b-PEO210-b-PVPSn/LiTFSI hybrids. After introducing the PVPS block into the system, the Tgs of the PEO-rich phase are increased in most cases compared with pure PEO homopolymers. It is found that, with increasing the volume fraction of PVPS, the Tg becomes lower at first and then increases. Two aspects account for such a trend. When the content of PVPS is lower, the phase separation between PEO and PVPS is incomplete, along with a weaker spatial confinement effect on PEO segments, but there are more rigid PVPS segments in the PEO-rich phase. On the other hand, the degree of microphase separation is high at a larger volume fraction of PVPS, resulting in a lower PVPS content in the PEO-rich phase but a stronger spatial restriction on the segmental motion of PEO. As the doping ratio increases from 1/16 to 1/6, the Tg of the PEO-rich phase in PVPS7.4-b-PEO210-b-PVPS7.4/LiTFSI hybrids increases on account of suppression of PEO segmental motion by dissociated LiTFSI salt. However, the Tgs of PVPS4.5-b-PEO210-b-PVPS4.5/LiTFSI and PVPS6.2-b-PEO210-b-PVPS6.2/LiTFSI hybrids first decrease and then increase with increasing salt content,
while it takes a monotonous uptrend in PEO/LiTFSI hybrids. The phenomenon observed in the above tri-BCP/LiTFSI hybrids is ascribed to the enhancement in dissociation of LiTFSI by introducing polyzwitterion blocks, and such a downward trend in Tgs has also been reported in oligoether/zwitterion diblock copolymers doped with LiTFSI [47].
Changes in crystallization and glass transition behaviors have a significant impact on the segmental motion of PEO chains. Lower crystallinity and Tg of PEO correspond to better mobility and less stiffness of the polymer chain, which is beneficial for the increase in conductivity [20].
Rheological Properties
Figure 3 shows the variation of storage modulus (G′) with temperature for the PVPSn-b-PEO210-b-PVPSn/LiTFSI and PEO210/LiTFSI hybrids at different doping ratios. The initial G′ of PEO210/LiTFSI hybrids at 30 °C ranges from 10^0 to 10^3 Pa and decreases with increasing temperature. As the doping ratio increases, the initial storage modulus of PEO/LiTFSI hybrids drops at first and then becomes a little larger. This originates from two opposite aspects. On the one hand, salt doping restricts the crystallizability of PEO, thus impairing its mechanical properties. On the other hand, salt also serves as a crosslinking point for the homogeneous melt of PEO, which will enhance the modulus. All the PVPSn-b-PEO210-b-PVPSn/LiTFSI hybrids exhibit higher G′ values than the PEO210/LiTFSI hybrids due to the reinforcement of the PVPS phase. Hybrids with a larger fraction of the PVPS block exhibit a higher G′ at the same doping ratio. The value of G′ for the PVPS7.4-b-PEO210-b-PVPS7.4/LiTFSI hybrid at r = 1/6 can reach 3 × 10^5 Pa, which is at least 10^4 times higher than that of the PEO210/LiTFSI hybrid at the same doping ratio. It is also observed that, with increasing temperature, the modulus of some tri-BCP hybrids first maintains a plateau in the low-temperature range and then decreases gradually at higher temperatures. The turning point shifts to a higher temperature as fPVPS in the tri-BCPs increases. Since the temperature corresponding to the turning point can be higher than the Tm of PEO and no order-to-disorder transition is observed around the turning point (Figure S7), the decrease of G′ may be attributed to the pulling-out of partial PVPS blocks from the physical crosslinking domains under dynamic shearing. The physical crosslinking domains become more stable at a higher fPVPS, and as a consequence, the PVPS7.4-b-PEO210-b-PVPS7.4/LiTFSI hybrids even keep nearly constant moduli in the temperature range studied (30-100 °C). The relatively high and constant moduli of the PVPS7.4-b-PEO210-b-PVPS7.4/LiTFSI hybrids in a broad temperature range are advantageous to the processing and application of SPEs for LIBs with good safety. It is reported that lithium dendrites can be suppressed when the shear modulus of the separator (approximately 6 GPa) is 1.8 times higher than that of Li metal (4.9 GPa) [60]. Although the overall moduli of the PVPS-b-PEO-b-PVPS/LiTFSI hybrids are at the magnitude of 10^5 Pa, due to the microphase-separated structure formed in the hybrids, the growth of lithium dendrites cannot bypass the PVPS microdomains, and it will be stopped when meeting the glassy PVPS microdomains with a high modulus.
Ionic Conductivity
The electrical performances of PVPSn-b-PEO210-b-PVPSn/LiTFSI hybrids were characterized by impedance spectroscopy. Figure 4 shows that PVPS4.5-b-PEO210-b-PVPS4.5/LiTFSI exhibits ionic conductivities comparable to PEO210/LiTFSI. This may be due to the relative enrichment of LiTFSI in the PEO phase, which is frequently observed in the hybrids of salt and BCPs where both blocks can complex with salt, especially at low doping ratios [33,34,61]. With further increasing the PVPS fraction in the tri-BCPs, the ionic conductivities decrease. Table S2 shows that the minimal Tg of PEO occurs in the PVPS6.2-b-PEO210-b-PVPS6.2/LiTFSI hybrids, suggesting a lower concentration of PVPS in the PEO-rich phase due to enhanced microphase separation. This implies that at this juncture, the limitation of chain segment motion is not the predominant influence on conductivity. Furthermore, the grain sizes resulting from microphase separation for each sample set, as detailed in Table S3, do not adequately justify the reduced conductivity observed at elevated PVPS content with low salt-doping ratios [62,63]. It is postulated that with increased PVPS content, microphase separation becomes more pronounced and the phase interfaces are more defined. Consequently, at lower lithium salt concentrations, salts predominantly localize within the interstitial zones of the PEO phase, akin to the PS-b-PEO system [25]. This distribution limits the efficacy of PVPS in facilitating salt dissociation, and the morphology effect and grain boundary effect culminate in actual conductivities that are less than theoretical predictions. In the hybrids of tri-BCP, the nominal doping ratio is denoted as r = [LiTFSI]/([EO] + [VPS]). Since the PEO block has stronger association ability toward Li+ ions than the PVPS block, the salt concentration in the PEO phase of the tri-BCP hybrids is higher than that in PEO210/LiTFSI, leading to the higher conductivity of the former. We also notice that the ionic conductivities of PVPS6.2-b-PEO210-b-PVPS6.2/LiTFSI
and PVPS7.4-b-PEO210-b-PVPS7.4/LiTFSI at r = 1/16 are quite low in the low temperature range. This can be attributed to the presence of un-melted PEO crystals at the testing temperatures, as confirmed by their higher Tms of PEO (Figure 2 and Table S2). At high doping ratios, the tri-BCP hybrids with high PVPS fractions show improved conductivities. This can be attributed to the unique dipole nature of zwitterionic molecules, which probably induce the Li+ ions to dissociate from the lithium salt. In the literature, it was also reported that the introduction of zwitterion or polyzwitterion into the electrolytes by blending [64] or random copolymerization [65] could improve the electrical performance. It should be noted that the effect of helping dissociate Li+ ions becomes obvious only at high doping ratios and in tri-BCPs with high PVPS fractions, since at low doping ratios, there are few salts in the PVPS phase with its weaker association ability. This also leads to the high ionic conductivities of the PVPS6.2- and PVPS7.4-based hybrids at high doping ratios.
The Vogel-Tammann-Fulcher (VTF) equation is frequently utilized to describe the temperature dependence of the ionic conductivity of polymer electrolytes [66,67]. It can be written as follows:
ln(σT^(1/2)) = ln A − Ea / [R(T − T0)],
where T0 represents the reference temperature set as Tg − 50 °C, Ea is the apparent activation energy for ion transport, A is a pre-factor, and R is the gas constant, which equals 8.314 J/(mol·K). When ln(σT^(1/2)) is plotted versus 1/(T − T0), a linear relationship is acquired. The slope of the line corresponds to −Ea/R, and the intercept corresponds to ln A. The plots of logarithmic ionic conductivity against 1/(T − T0) for the hybrids are shown in Figure 5, and the calculated Eas are listed in Table S4. The obtained Eas in this work are comparable with the values (~10 kJ/mol) reported for other PEO-based SPEs [68,69]. We notice that at low PVPS fractions, Ea increases as the doping ratio increases, while for the hybrids with higher PVPS fractions, the trend reverses. It is counter-intuitive that Ea decreases with increasing doping ratio after r reaches 1/12, since the restriction of chain segment motion resulting from superfluous lithium salt can be verified by the increasing Tgs (Table S2). This abnormal phenomenon is attributed to different salt distribution situations. For hybrids with low PVPS contents, lithium salt is mainly located in the PEO-rich phase, but at high PVPS fractions, PVPS blocks help with the dissociation of LiTFSI, thus decreasing Ea. Table S5 summarizes the ionic conductivity and storage modulus of PVPS-b-PEO-b-PVPS/LiTFSI hybrids and other BCP-based SPEs reported in the literature. As we can see, the PVPS-b-PEO-b-PVPS/LiTFSI hybrids in this work exhibit a better balance of ionic conductivity and moduli as compared with most other BCP electrolyte materials. This shows that the integration of PVPS offers valuable insights for the development of novel SPEs for LIBs.
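The VTF linearization and slope extraction described above can be sketched numerically as follows (a pure-Python illustration on synthetic data; all parameter values are arbitrary assumptions, not measurements from this work):

```python
import math

R = 8.314            # gas constant, J/(mol K)
Ea_true = 9.0e3      # illustrative apparent activation energy, J/mol
A = 0.5              # illustrative pre-factor
T0 = 210.0           # reference temperature T0 = Tg - 50 K (illustrative)

# Synthetic VTF conductivities: sigma = A * T^(-1/2) * exp(-Ea / (R (T - T0)))
temps = [303.0, 313.0, 323.0, 333.0, 343.0, 353.0]
sigma = [A * T ** -0.5 * math.exp(-Ea_true / (R * (T - T0))) for T in temps]

# Linearize: ln(sigma * T^(1/2)) = ln A - (Ea / R) * 1/(T - T0)
xs = [1.0 / (T - T0) for T in temps]
ys = [math.log(s * T ** 0.5) for s, T in zip(sigma, temps)]

# Closed-form least-squares slope; Ea is recovered from slope = -Ea / R.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
Ea_fit = -slope * R
```

Because the synthetic data follow the VTF form exactly, the fitted Ea reproduces the assumed value up to floating-point error; with real impedance data the residuals of the same linear fit indicate how well the VTF description holds.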
Conclusions
In summary, solid electrolytes of LiTFSI-doped PVPS-b-PEO-b-PVPS ABA tri-BCP thermoplastic elastomers were successfully prepared. Asymmetric lamellar microphase-separated structures were formed in all the PVPS-b-PEO-b-PVPS tri-BCPs studied. By increasing the fPVPS in the tri-BCPs, the lamellar structure becomes more ordered, while salt doping weakens the tendency toward microphase separation. The melting temperature and glass transition temperature of the hybrids were influenced by the fPVPS, the spatial confinement of the microphase-separated structure, and the doping ratio. Due to the physical crosslinking domains formed by the glassy PVPS blocks, the moduli of the PVPS-b-PEO-b-PVPS/LiTFSI hybrids are greatly improved as compared with the PEO/LiTFSI hybrids. Moreover, the PVPS-b-PEO-b-PVPS/LiTFSI hybrids can outperform the PEO/LiTFSI hybrids in ionic conductivity due to the assistance of the PVPS blocks in the dissociation of Li+ ions. As a result, simultaneous improvement of mechanical properties and ionic conductivity can be achieved by the introduction of PVPS at both ends of PEO, which gives the PVPS-b-PEO-b-PVPS/LiTFSI hybrids potential application in LIBs as solid electrolytes.
Synthesis of PVPS-b-PEO-b-PVPS tri-BCPs
PVPS-b-PEO-b-PVPS tri-BCPs were synthesized by reversible addition-fragmentation chain transfer (RAFT) polymerization. First, esterification of PEO with TTCA was carried out in the presence of oxalyl chloride dissolved in CH2Cl2, and the CTA-PEO-CTA double-ended macromolecular chain transfer agent was prepared. Next, a series of poly(4-vinylpyridine)-b-poly(ethylene oxide)-b-poly(4-vinylpyridine) (P4VP-b-PEO-b-P4VP) ABA triblock copolymers were synthesized via RAFT polymerization of 4-VP in DMF. 1,3-Propanesultone was then added for further modification of the pyridine group in TFEA. The detailed synthesis process and characterization results of the polymers are provided in the Supplementary Materials. Scheme 1 shows the synthesis route of PVPS-b-PEO-b-PVPS tri-BCPs.
Table 1. Details for PVPS-b-PEO-b-PVPS tri-BCPs. (a) The number of repeating units was determined by 1H-NMR of P4VP-b-PEO-b-P4VP. (b) The number-average molecular weight is calculated from 1H-NMR spectra. (c) Polydispersity of P4VP-b-PEO-b-P4VP, Ð = Mw/Mn, obtained from GPC. (d) The volume fraction of PVPS is calculated on the basis of the densities of PEO (1.128 g/cm3) and PVPS (1.16 g/cm3) [48,52].
Figure 2. DSC 2nd heating curves of PEO210/LiTFSI and PVPSn-b-PEO210-b-PVPSn/LiTFSI with different doping ratios. (a) r = 1/16, (b) r = 1/12, and (c) r = 1/6. It is reported in the literature that the Tgs of PEO and PVPS are −65 °C and 226 °C [58,59], respectively.
Reversibility Problem of Multidimensional Finite Cellular Automata
While the reversibility of multidimensional cellular automata is undecidable and there exists a criterion for determining whether a multidimensional linear cellular automaton is reversible, there are only a few results about the reversibility problem of multidimensional linear cellular automata under boundary conditions. This work proposes a criterion for testing the reversibility of a multidimensional linear cellular automaton under the null boundary condition and an algorithm for the computation of its reverse, if it exists. Investigating the dynamical behavior of a multidimensional linear cellular automaton under the null boundary condition is equivalent to elucidating the properties of block Toeplitz matrices. The proposed criterion significantly reduces the computational cost whenever the number of cells or the dimension is large; the discussion also applies to cellular automata under the periodic boundary condition with a minor modification.
Introduction
A frequently used technique for studying a complex structure is dividing the system into smaller pieces that are elaborated accordingly; for most physical systems, the dynamical behavior usually depends on the interactions among their neighbors. The cellular automaton (CA), introduced by Ulam and von Neumann, is a particular class of discrete dynamical system consisting of a regular network of cells which change their states simultaneously according to the states of their neighbors under a local rule; this makes CA an appropriate approach to model systems with the above-mentioned property. CAs have found applications in simulating and modeling complex systems.
Figure 1. A three-dimensional linear cellular automaton Φ which is elucidated in Example 3.11. The 64 cells are located in a 4 × 4 × 4 cube, and the states 0, 1, 2, 3, and 4 are represented by white, red, green, blue, and gray, respectively. The pattern (b) is seen as the 30th evolution of (a).
One of the fundamental microscopic properties of nature is physical reversibility, which is a motivation for studying the reversibility problem of a dynamical system [24]. Reversible computing systems are defined as those in which each computational configuration has at most one previous configuration; this makes every computation process traceable backward uniquely. In other words, reversible computing systems are deterministic in both directions of time. Figure 1 illustrates an initial pattern and its 30th evolution under a three-dimensional linear cellular automaton (see Example 3.11 for more details). Landauer's principle asserts that an irreversible logical operation, such as the erasure of unnecessary information, inevitably causes heat generation [22]; Bennett revealed that each irreversible Turing machine can be realized by a reversible one that simulates the former and leaves no garbage information on its tape when it halts [3]; later on, Toffoli demonstrated that every irreversible d-dimensional cellular automaton can be simulated by some (d + 1)-dimensional reversible cellular automaton [33], and Morita and Harao revealed the computation-universality of one-dimensional reversible cellular automata [25]. Recently, it has been shown that a reversible linear CA is either a Bernoulli automorphism or non-ergodic [5].
While the reversibility of one-dimensional CAs has been elucidated [2,24,27], Kari indicated that the reversibility of multidimensional CAs is generally undecidable [16,17]. When restricted to linear CAs, however, the reversibility problem of multidimensional systems has been solved [15,23]; more explicitly, a necessary and sufficient condition for the reversibility of a multidimensional linear CA is given. For more information about the reversibility problem of CAs, the reader is referred to [18,24] and the references therein.
Recently, the reversibility problem of CAs under boundary conditions has been widely studied, since the number of cells is usually finite in practical applications. While the reversibility problem of one-dimensional cellular automata under periodic boundary conditions has been generally answered, there are relatively few results about one-dimensional and multidimensional cellular automata under null boundary conditions (see [7,8,10,21,28,36] and the references therein); beyond that, investigations of the reversibility problem of cellular automata with memory and of σ-automata are also seen in the literature [32,35]. Meanwhile, the reversibility problem of cellular automata on Cayley trees under periodic boundary conditions is studied in [6].
In this paper, we consider the reversibility problem of d-dimensional linear cellular automata with the prolonged η-nearest neighborhood under null boundary conditions for d ≥ 3, η ∈ N, and characterize the inverse, if it exists. Figure 2 illustrates the cells that contribute to the evolution of the centered cell for η = 1 and η = 2. After revealing the matrix representation of a d-dimensional linear cellular automaton with the prolonged η-nearest neighborhood, studying the reversibility problem is equivalent to elaborating the invertibility of the corresponding matrix. We show that the associated matrix can be decomposed into the Kronecker sum of several smaller matrices, each of which is a Toeplitz matrix, and an algorithm for its inverse matrix, if it exists, is obtained. The main contributions of this paper are to significantly reduce the computational cost 1 of determining the reversibility of a multidimensional linear cellular automaton under null boundary condition and of finding its reverse, if it exists; moreover, the dynamical behavior of a multidimensional linear cellular automaton is characterized by the properties of block Toeplitz matrices (cf. [11,12] for more details about the discussion of block Toeplitz matrices). Additionally, the study extends to CAs under periodic boundary conditions with a minor modification.
The rest of this investigation is organized as follows. Section 2 recalls definitions and fundamental results in matrix theory and number theory that are used in the later discussion. Sections 3 and 4 consider three-dimensional cellular automata with nearest neighborhood, and Section 5 extends the results to general multidimensional cellular automata with the prolonged η-nearest neighborhood. Conclusions and further discussion are given in Section 6.
1 Roughly speaking, the computational cost of inverting an n × n matrix is O(n^3), and the computational cost of our approach is O(n^3 d), where d is the dimension of the considered system.
Preliminary
This section is devoted to introducing the definition of three-dimensional linear cellular automata with nearest neighborhood over the finite field Z_p = {0, 1, . . . , p − 1} and recalling some well-established theorems in matrix theory and number theory.
Consider the three-dimensional infinite lattice Z^3, divided into regular cells, where the state of each cell takes values in Z_p. At each time step, the state of a cell changes according to a deterministic rule which depends on the states of the cells next to it; this is the so-called cellular automaton. Formally speaking, Z^3 is the set of cells and the set Z_p^{Z^3} is called the configuration space. For each X ∈ Z_p^{Z^3} and i ∈ Z^3, X_i ∈ Z_p refers to the state of X at the cell i.
A local rule φ with nearest neighborhood N = {0, ±e_1, ±e_2, ±e_3} is given by
φ(y_N) = c_0 y_0 + a y_{−e_2} + b y_{e_2} + c y_{−e_1} + d y_{e_1} + e y_{−e_3} + f y_{e_3} (mod p). (1)
A three-dimensional linear cellular automaton driven by the local rule φ with nearest neighborhood is defined as a pair (Z_p^{Z^3}, Φ), where Φ is given by (ΦX)_i = φ(X|_{i+N}) for each i ∈ Z^3. Namely, the state X_i(t + 1) of cell i at time step t + 1 is determined by X_i(t + 1) = φ((X(t))|_{i+N}). A cellular automaton under null boundary condition is one such that only finitely many cells are associated with a nonzero state. More explicitly, for n, m, s ∈ N, n, m, s ≥ 2, a three-dimensional linear cellular automaton under null boundary condition, denoted Φ_N, is obtained by restricting the lattice to an n × m × s cuboid and treating every cell outside the cuboid as being frozen in state 0. It is well-known that elaborating the behavior of a linear dynamical system is related to studying its corresponding matrix representation. Before characterizing the matrix representation of Φ_N, which is postponed to the following section, we recall some definitions and results in matrix theory first.
Definition 2.1. Let A ∈ M_{j1×j2}(R) be a j1 × j2 real matrix and B ∈ M_{k1×k2}(R) a k1 × k2 real matrix. The Kronecker product (or tensor product) of A and B is the j1k1 × j2k2 matrix A ⊗ B obtained by replacing each entry a_{ir} of A with the block a_{ir}B.
Kronecker products have many interesting properties; for example, (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) whenever the products AC and BD are defined. Suppose that μ_1, μ_2, . . . , μ_j are the eigenvalues of A and ν_1, ν_2, . . . , ν_k are the eigenvalues of B. Then {μ_i + ν_r}_{1≤i≤j, 1≤r≤k} is the set of eigenvalues of (I_k ⊗ A) + (B ⊗ I_j). Furthermore, if x is an eigenvector of A corresponding to the eigenvalue μ and y is an eigenvector of B corresponding to the eigenvalue ν, then y ⊗ x is an eigenvector of (I_k ⊗ A) + (B ⊗ I_j) corresponding to the eigenvalue μ + ν. These results hold for matrices either over R or over a finite field F_p; the proofs for the finite field case are analogous to the original ones, hence are omitted for the compactness of this paper. The reader is referred to [14,29] for more details.
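The eigenvalue property of the Kronecker sum (I_k ⊗ A) + (B ⊗ I_j) is easy to check numerically. The following sketch (using NumPy, with arbitrarily chosen small matrices) confirms that the spectrum of the sum consists of the pairwise sums μ_i + ν_r:

```python
import numpy as np

# Two small square matrices with real spectra, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # eigenvalues 2 and 3
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 4.0, 0.0],
              [0.0, 0.0, 5.0]])     # eigenvalues 1, 4, 5

j, k = A.shape[0], B.shape[0]

# Kronecker sum: (I_k ⊗ A) + (B ⊗ I_j)
K = np.kron(np.eye(k), A) + np.kron(B, np.eye(j))

# The eigenvalues of K should be exactly the pairwise sums mu_i + nu_r.
mu = np.linalg.eigvals(A)
nu = np.linalg.eigvals(B)
pairwise = sorted((m + n).real for m in mu for n in nu)
spectrum = sorted(np.linalg.eigvals(K).real)
```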
It is known that a given matrix A is reversible if and only if det A ≠ 0, and det A is the product of its eigenvalues. Whenever A is decomposed into the Kronecker sum of smaller matrices B and C, it follows that A is reversible only if either B or C is reversible. To reveal the necessary and sufficient condition for A being reversible, we aim to characterize the eigenvalues of B and C completely. Completely characterizing the eigenvalues of B and C not only reduces the computational cost, but also helps in determining the inverse matrix of A, if it exists. The related discussion is addressed later.
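Over Z_p the same criterion reads det A ≢ 0 (mod p). A minimal sketch of testing reversibility and computing the inverse over Z_p (a generic Gaussian elimination with modular pivot inverses via Python's three-argument pow, not the reduced algorithm developed in this paper):

```python
def inverse_mod_p(A, p):
    """Return the inverse of the square matrix A over Z_p, or None if A is singular mod p."""
    n = len(A)
    # Augment A with the identity matrix and row-reduce mod p.
    M = [[A[i][j] % p for j in range(n)] + [int(i == j) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p != 0), None)
        if pivot is None:
            return None                    # det A ≡ 0 (mod p): not reversible
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], -1, p)      # modular inverse of the pivot
        M[col] = [x * inv % p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                factor = M[r][col]
                M[r] = [(M[r][j] - factor * M[col][j]) % p for j in range(2 * n)]
    return [row[n:] for row in M]

# Example over Z_5: det = -2 ≡ 3 (mod 5), so the matrix is reversible.
Ainv = inverse_mod_p([[1, 2], [3, 4]], 5)
```

This naive elimination costs O(n^3) in the matrix size; the point of the decomposition pursued in this paper is that the full system matrix never has to be inverted directly.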
Next, we recall some results in number theory that will be used later for the investigation of the eigenvalues. Suppose that F_p is a finite field with characteristic p and F(x) is a polynomial over F_p. Let α be a root of F(x); the multiplicity of α is the largest positive integer n for which (x − α)^n divides F(x); α is a simple root if n = 1 and is a multiple root otherwise. When the degree of F(x) is 2, it is seen that F(x) has a multiple root if and only if its discriminant vanishes. Let F = {F_i(x)}_{i∈I} be a family of polynomials.
A splitting field for F is an extension field E of F_p satisfying: 1) each F_i(x) splits into linear factors over E; 2) E is the smallest field that contains the roots of each F_i(x).
The next theorem describes an important property of irreducible polynomials over F p .
Suppose that F(x) is an irreducible polynomial of degree k over F_p. Then F(x) has a root α in an extension field E of F_p with cardinality p^k. Moreover, all the roots of F(x) are simple and are given by α, α^p, . . . , α^{p^{k−1}}.
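This Frobenius-orbit description of the roots can be checked concretely. In the sketch below (self-contained; GF(4) is represented as F_2[ω] with the defining relation ω^2 = ω + 1), the irreducible polynomial x^2 + x + 1 over F_2 has exactly the two simple roots ω and ω^p = ω^2:

```python
# Elements of GF(4) are pairs (a, b) representing a + b*ω, with a, b in {0, 1}
# and the defining relation ω² = ω + 1 (all coefficient arithmetic is mod 2).

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    # (a + bω)(c + dω) = ac + (ad + bc)ω + bdω², and ω² = ω + 1.
    a, b = u
    c, d = v
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

ONE = (1, 0)
OMEGA = (0, 1)

def f(x):
    # f(x) = x² + x + 1 evaluated in GF(4).
    return add(add(mul(x, x), x), ONE)

# Both ω and its Frobenius conjugate ω^p (p = 2) are roots of x² + x + 1.
frobenius = mul(OMEGA, OMEGA)
assert f(OMEGA) == (0, 0)
assert f(frobenius) == (0, 0)
assert frobenius != OMEGA       # the two roots are distinct, i.e., simple
```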
For more details, the reader is referred to [26,30].
Three-Dimensional Cellular Automata: Reciprocal Cases
In the following two sections, we introduce the matrix representation of the three-dimensional linear cellular automaton Φ_N over Z_p under null boundary conditions with local rule φ defined in (1), and elaborate the reversibility of Φ_N via its matrix representation. This section considers the case where the evolution at each cell is independent of its current state (i.e., c_0 = 0) and each pair of parameters in every direction satisfies the quadratic reciprocity law (defined later). The general cases are elucidated in Section 4.
Fix n, m, s ≥ 2. Let S_j(t_1, t_2) denote the j × j tridiagonal Toeplitz matrix whose main diagonal is zero and whose two nonzero constant diagonals carry t_1 and t_2, and define
M_s = I_s ⊗ S_n(c, d) + S_s(a, b) ⊗ I_n, T_RN = I_m ⊗ M_s + S_m(e, f) ⊗ I_{ns};
herein, O_k refers to the k × k zero matrix. We remark that M_s is an ns × ns block tridiagonal Toeplitz matrix whose diagonal blocks are S_n(c, d). Let Θ : Z_p^{n×m×s} → Z_p^{nms} denote the flattening of a finite configuration X into the column vector Θ(X) = (X_1, X_2, . . . , X_{nms})′, where v′ denotes the transpose of v. Notably, Θ is a one-to-one correspondence. The following theorem indicates that T_RN is the matrix representation of the cellular automaton Φ_N with local rule φ defined in (1).
Theorem 3.1. The three-dimensional linear cellular automaton Φ_N over Z_p under null boundary condition is characterized by T_RN, and vice versa.
More explicitly, the diagram commutes; that is, Θ ∘ Φ_N = T_RN ∘ Θ over Z_p.
Proof. The verification is straightforward, thus it is omitted.
For j ≥ 0, define . . . , where ⌊·⌋ is the floor function.
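Theorem 3.1 is easy to probe computationally. The sketch below works with a one-dimensional analogue under the null boundary condition (with hypothetical coefficients c and d chosen for illustration), and checks that evolving the cells by the local rule agrees with multiplying the flattened state vector by the tridiagonal Toeplitz matrix S_n(c, d) mod p:

```python
p = 5                 # field Z_5
c, d = 2, 3           # illustrative rule coefficients (left and right neighbors)
n = 6                 # number of cells

def step_local(x):
    """One evolution step of the rule X_i <- c*X_{i-1} + d*X_{i+1} (mod p),
    with cells outside {0, ..., n-1} fixed at state 0 (null boundary)."""
    left = lambda i: x[i - 1] if i > 0 else 0
    right = lambda i: x[i + 1] if i < n - 1 else 0
    return [(c * left(i) + d * right(i)) % p for i in range(n)]

# Matrix representation: tridiagonal Toeplitz with c below and d above the diagonal.
S = [[c if j == i - 1 else d if j == i + 1 else 0 for j in range(n)]
     for i in range(n)]

def step_matrix(x):
    return [sum(S[i][j] * x[j] for j in range(n)) % p for i in range(n)]

x0 = [1, 0, 4, 2, 0, 3]
assert step_local(x0) == step_matrix(x0)
```

The three-dimensional T_RN plays the same role for Φ_N, with the flattening Θ in place of the identity ordering used here.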
For any collection of sets {S_i}_{i=1}^{k}, let S_1 + S_2 + · · · + S_k denote the Minkowski sum of the sets; that is, S_1 + S_2 + · · · + S_k = {s_1 + s_2 + · · · + s_k : s_i ∈ S_i for 1 ≤ i ≤ k}. Furthermore, we refer to S_r(1, 1) as K_r for the sake of simplicity. Theorem 3.2 characterizes all the eigenvalues of T_RN completely. Herein, we consider the general cases (i.e., real coefficients) first; the case of coefficients in Z_p is elaborated later on.
Theorem 3.2. The set of eigenvalues of T_RN is α_{c,d} R_n + α_{a,b} R_s + α_{e,f} R_m = {α_{c,d} λ_1 + α_{a,b} λ_2 + α_{e,f} λ_3 : λ_1 ∈ R_n, λ_2 ∈ R_s, λ_3 ∈ R_m}, where R_j denotes the set of roots of g_j over C and α_{t_1,t_2} denotes the geometric mean of t_1 and t_2.
Proof. Suppose that t_1, t_2 ∈ R^+ are positive real numbers, and A is an r × r real matrix. Given a qr × qr matrix of the form I_q ⊗ A + S_q(t_1, t_2) ⊗ I_r, a diagonal similarity transform rescales S_q(t_1, t_2) into α_{t_1,t_2} K_q. Define P_{T_RN}, P_{M_s}, and P_{S_n} as the corresponding diagonal matrices for T_RN, M_s, and S_n(c, d), respectively. It follows immediately that T_RN is similar to I_m ⊗ (I_s ⊗ α_{c,d} K_n + α_{a,b} K_s ⊗ I_n) + α_{e,f} K_m ⊗ I_{ns}, so the eigenvalues of T_RN are the Minkowski sums of the rescaled spectra of K_n, K_s, and K_m. Let g_j(x) = det(xI_j − K_j) be the characteristic polynomial of K_j for j ≥ 1. Let g_0(x) = 1 and g_j(x) = 0 for j < 0. It can be verified that g_j(x) satisfies the following recurrence relation: g_j(x) = x g_{j−1}(x) − g_{j−2}(x). We can conclude that the set of eigenvalues of T_RN is α_{c,d} R_n + α_{a,b} R_s + α_{e,f} R_m. This completes the proof.
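The characteristic-polynomial recurrence can be verified numerically. The sketch below (NumPy; an illustrative size) builds K_j = S_j(1, 1), generates g_j by the determinant-expansion recurrence g_j(x) = x·g_{j−1}(x) − g_{j−2}(x) for tridiagonal Toeplitz matrices, and checks that g_j vanishes at every eigenvalue of K_j:

```python
import numpy as np

def K(j):
    """Tridiagonal Toeplitz matrix S_j(1, 1): zero diagonal, ones off-diagonal."""
    M = np.zeros((j, j))
    for i in range(j - 1):
        M[i, i + 1] = M[i + 1, i] = 1.0
    return M

def g(j):
    """Coefficients (highest degree first) of g_j via g_j = x*g_{j-1} - g_{j-2}."""
    g_prev, g_curr = np.array([0.0]), np.array([1.0])   # g_{-1} = 0, g_0 = 1
    for _ in range(j):
        g_next = np.append(g_curr, 0.0)                 # multiply by x
        g_next[-len(g_prev):] -= g_prev                 # subtract g_{j-2}
        g_prev, g_curr = g_curr, g_next
    return g_curr

j = 7
eigs = np.linalg.eigvalsh(K(j))
# g_j is the characteristic polynomial of K_j, so it vanishes on the spectrum.
assert np.allclose(np.polyval(g(j), eigs), 0.0, atol=1e-8)
```

For instance, the recurrence yields g_2(x) = x^2 − 1, matching the spectrum ±1 of K_2.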
We remark that, when the discussion focuses on the finite field Z_p, some of the eigenvalues of T_RN lie in the algebraic closure of Z_p, just like the real matrices considered in Theorem 3.2. Therefore, we study the eigenvalues of T_RN in the extension field of Z_p in which the characteristic polynomial of T_RN splits. Furthermore, the discussion of Theorems 3.2 and 3.9 applies to real matrices analogously.
Let gcd(g_k(x), g_ℓ(x)) be the greatest common divisor of g_k(x) and g_ℓ(x). Then gcd(g_k(x), g_ℓ(x)) = g_{gcd(k+1, ℓ+1)−1}(x).
Proof. The proof of Theorem 3.2 reveals the generating function G(u, x) of {g_j(x)}_{j≥0}. This demonstrates that an alternative expression of g_j(x) is g_j(x) = ∏_{r=1}^{j} (x − 2 cos(rπ/(j+1))). Suppose that λ is a root of g_j(x). Then λ = 2 cos(rπ/(j+1)) for some 1 ≤ r ≤ j. Let {λ_r}_{r=1}^{k} and {λ′_q}_{q=1}^{ℓ} be the sets of roots of g_k(x) and g_ℓ(x), respectively. It is seen that λ_r = λ′_q for some q, r if and only if r/(k + 1) = q/(ℓ + 1).
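The root-sharing criterion can be observed numerically. The sketch below (NumPy; sizes k = 2 and ℓ = 5, chosen so that k + 1 divides ℓ + 1) checks that every eigenvalue of K_2 is also an eigenvalue of K_5:

```python
import numpy as np

def K(j):
    """Tridiagonal Toeplitz matrix S_j(1, 1): zero diagonal, ones off-diagonal."""
    M = np.zeros((j, j))
    for i in range(j - 1):
        M[i, i + 1] = M[i + 1, i] = 1.0
    return M

eig_small = np.linalg.eigvalsh(K(2))   # 2cos(rπ/3), i.e., the values ±1
eig_large = np.linalg.eigvalsh(K(5))   # 2cos(rπ/6) for r = 1, ..., 5

# Since k + 1 = 3 divides ℓ + 1 = 6, every root of g_2 recurs among the
# roots of g_5 (take q = 2r in r/3 = q/6).
for lam in eig_small:
    assert np.any(np.isclose(eig_large, lam))
```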
To investigate the reversibility of T_RN over Z_p, Theorem 3.2 infers that it is essential to consider the case where the geometric means of the three pairs of coefficients exist in Z_p; that is, each product t_1 t_2 is a quadratic residue modulo p, in which case k(t_1, t_2) denotes a square root of t_1 t_2 in Z_p. Notably, k(t_1, t_2) is well-defined if t_1 = t_2 or the pair {t_1, t_2} is in the same partition. For each prime p, let SR_p ⊂ Z_p × Z_p be the domain of k(t_1, t_2); we say that (t_1, t_2) ∈ Z_p × Z_p satisfies the quadratic reciprocity law if (t_1, t_2) ∈ SR_p. However, it happens that deg(gcd(g_k^{[p]}(x), g_ℓ^{[p]}(x))) differs from deg(gcd(g_k(x), g_ℓ(x))); for instance, g_4(x) and g_78(x) are relatively prime in Z[x] while gcd(g_4^{[3]}(x), g_78^{[3]}(x)) ≡ x^4 + 1 (mod 3). On the other hand, numerical experiments indicate that deg(gcd(g_k^{[p]}(x), g_ℓ^{[p]}(x))) = deg(gcd(g_k(x), g_ℓ(x))) for p < 100 and k, ℓ < 77.
For the rest of this section, we assume that (a, b), (c, d), and (e, f) satisfy the quadratic reciprocity law until stated otherwise. ii) k_1 ≡ k_2 ≡ k_3 and k_3 ≡ ±2k_1 (mod p).
where A ∼ B denotes that the matrices A and B are similar. If g_n^{[p]}(x) and g_s^{[p]}(x) are irreducible over Z_p, then . . . Since g_4^{[3]}(x) is decomposed as two relatively prime irreducible polynomials, we can conclude that the two factors of g_4(x) are separable in their splitting field Z_3(α), where α is a root of x^2 + 1 ≡ 0 (mod 3). It can be verified that k_m = k_n = k_s = 1.
Proposition. Suppose that A_1, A_2, . . . , A_k are a collection of invertible r × r matrices. Then . . .
Proof. The proof is straightforward, and thus it is omitted.
exists. Then where P 16 (a, b) = P 4 (a, b) ⊗ I 4 . Notably, let U = U ⊗ (P 4 (d, c)U ). It follows that, with U = U ⊗ U , it is seen that the following holds. To increase the readability, we assume that k c,d = k a,b = 1 and k e,f = 4. It is easily seen that an analogous calculation to the above demonstrates that g m (x), g
Lemma 3.8 illustrates that
n (x), and g s (x). For j ∈ {m, n, s}, there exists U j ∈ M j (E) such that U −1 j K j U j ≡ J j is the canonical Jordan form of K j in E. We divide the proof into several steps.
Step 2. Let P a,b = diag(k a,b −1 , k 2 a,b −1 , · · · , k s a,b −1 ) ⊗ I n . Since M s = I s ⊗ S n (c, d) + S s (a, b) ⊗ I n , we can derive that Let U Ms = P a,b · (U s ⊗ U Sn ). Then Step 3.
Let P e,f = diag(k e −1 ,f , k 2 e −1 ,f , · · · , k m e −1 ,f ) ⊗ I ns and let U T RN = P e,f · (U m ⊗ U Ms ). It follows that where A i = J Ms + k e,f λ m,i I ns for 1 ≤ i ≤ m, and m,i ∈ {0, 1} for 1 ≤ i ≤ m − 1. The desired generalized Jordan form J T RN of T RN is then obtained.
Step 4. Suppose that T RN is reversible. Lemma 3.8 asserts that where B i,j = k c,d J n + (k a,b λ s,j + k e,f λ m,i )I n for 1 ≤ j ≤ s. Lemma 3.8 demonstrates that where w i,j,ℓ = k c,d λ n,ℓ + k a,b λ s,j + k e,f λ m,i for 1 ≤ ℓ ≤ n.
Step 5. The desired algorithm for deriving the generalized Jordan form of T RN and its inverse matrix, if it exists, is as follows.
JFA1.
Find U r such that U −1 r K r U r ≡ J r is a canonical Jordan form over the splitting field for {g s }, where r = m, n, s.
This completes the proof.
For t 1 , t 2 ∈ Z p and j ∈ N, define g [p] j;t 1 ,t 2 (x) accordingly; let E denote the splitting field for g [p] n;c,d (x), g [p] s;a,b (x), and g [p] m;f,e (x), and let R j;t 1 ,t 2 be the collection of roots of g [p] j;t 1 ,t 2 (x) in E. Similar to Theorem 3.2, the reversibility of T RN is revealed once we characterize its eigenvalues.
Proof. Similar to the proof of Theorem 3.2, it suffices to show that g j;t 1 ,t 2 (x) is the characteristic polynomial of S j (t 1 , t 2 ), since the set of eigenvalues of T RN is determined by those of the matrices S j (t 1 , t 2 ). Let g j;t 1 ,t 2 (x) = det(xI j − S j (t 1 , t 2 )) be the characteristic polynomial of S j (t 1 , t 2 ). Set g 0;t 1 ,t 2 (x) = 1 and g j;t 1 ,t 2 (x) = 0 for j < 0; the three-term recurrence is easily seen. Let G(u, x) = Σ j≥0 g j;t 1 ,t 2 (x)u j be the generating function; the desired identity follows immediately. The proof is complete.
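Over the reals, the recurrence behind G(u, x) can be sanity-checked numerically. The sketch below is our own illustration, assuming (as in the tridiagonal setting of this paper) that g j satisfies g j (x) = x·g j−1 (x) − t 1 t 2 ·g j−2 (x) with g 0 = 1, g 1 = x, so that for t 1 t 2 > 0 its real roots are 2√(t 1 t 2 ) cos(rπ/(j+1)), r = 1, . . . , j:

```python
import math

def g(j, x, t1, t2):
    """Evaluate g_j at x via the recurrence g_j = x*g_{j-1} - t1*t2*g_{j-2}."""
    gm2, gm1 = 1.0, x  # g_0 = 1, g_1 = x
    if j == 0:
        return gm2
    for _ in range(2, j + 1):
        gm2, gm1 = gm1, x * gm1 - t1 * t2 * gm2
    return gm1

t1, t2, j = 2.0, 3.0, 5
# closed-form roots for t1*t2 > 0: 2*sqrt(t1*t2)*cos(r*pi/(j+1)), r = 1..j
roots = [2 * math.sqrt(t1 * t2) * math.cos(r * math.pi / (j + 1))
         for r in range(1, j + 1)]
print(all(abs(g(j, lam, t1, t2)) < 1e-9 for lam in roots))  # True
```

For j = 5 the recurrence gives g 5 (x) = x 5 − 24x 3 + 108x, whose roots 0, ±√6, ±√18 agree with the closed form above.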
Proof. The proof is similar to the proof of Proposition 3.3, thus it is omitted. Proof. The proof is similar to the discussion in the proof of Theorem 3.9, thus we only sketch the outline.
Given t 1 , t 2 ∈ Z p and k ∈ N, let U k (t 1 , t 2 ) ∈ M k (E) be the matrix consisting of the eigenvectors of S k (t 1 , t 2 ); in other words, n;c,d (x − c 0 ). Furthermore, the algorithm for the computation of the generalized Jordan form of T RN (Theorem 4.4) remains valid with a minor modification.
Reversibility for Multidimensional Cellular Automata
This section extends the results in Sections 3 and 4 to multidimensional linear cellular automata with the prolonged η-nearest neighborhood for η ∈ N. The demonstration is analogous to the discussion in the previous sections and is thus omitted. 5.1. Nearest Neighborhood. Let n ∈ N, n ≥ 2, and let Z Z n p be the n-dimensional lattice over the finite field Z p . Suppose that {e k } n k=1 is the standard basis of R n ; set N = {v ∈ Z n : v = λe k for some k ∈ {1, . . . , n} and λ ∈ {−1, 0, 1}}.
Fix c, ℓ k , r k ∈ Z p for 1 ≤ k ≤ n; define φ : Z N p → Z p as φ(y N ) = cy 0 + Σ n k=1 (ℓ k y −e k + r k y e k ) (mod p). An n-dimensional linear cellular automaton Φ : Z Z n p → Z Z n p with nearest neighborhood is defined accordingly for every i ∈ Z n . Given m 1 , m 2 , . . . , m n ∈ N with m k ≥ 2 for 1 ≤ k ≤ n, a linear cellular automaton under null boundary condition is described analogously for 1 ≤ k ≤ n and i = (i 1 , i 2 , . . . , i n ).
First we consider the case where the parameter c = 0. Let Θ : Z m 1 ×m 2 ×···×mn p → Z m 1 m 2 ···mn p denote the transformation that designates the state X = (X i ) 1≤i k ≤m k ,1≤k≤n as a column vector with respect to the anti-lexicographic order. Set T 1 = S m 1 (ℓ 1 , r 1 ) and define T n accordingly, where dim A refers to the dimension of the square matrix A. The following theorem is derived immediately. Since Θ is a one-to-one correspondence, the following statements are equivalent.
(1) Φ N is reversible; (2) T n is invertible over Z p ; (3) 0 is not an eigenvalue of T n over Z p .
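The equivalence above can be tested directly on small examples. The following is a minimal sketch (our own illustration, not the paper's implementation): it assumes S m (ℓ, r) is the m × m matrix with ℓ on the subdiagonal and r on the superdiagonal, builds the Kronecker sum T 2 = S m1 ⊗ I + I ⊗ S m2 over Z p , and decides invertibility by Gaussian elimination mod p:

```python
def S(m, lo, up, p):
    """Tridiagonal matrix with `lo` on the subdiagonal, `up` on the superdiagonal."""
    A = [[0] * m for _ in range(m)]
    for i in range(m - 1):
        A[i + 1][i] = lo % p
        A[i][i + 1] = up % p
    return A

def eye(m):
    return [[int(i == j) for j in range(m)] for i in range(m)]

def kron(A, B, p):
    n, m = len(A), len(B)
    return [[(A[i // m][j // m] * B[i % m][j % m]) % p
             for j in range(n * m)] for i in range(n * m)]

def kron_sum(A, B, p):
    """Kronecker sum A (+) B = A ⊗ I + I ⊗ B over Z_p."""
    KA, KB = kron(A, eye(len(B)), p), kron(eye(len(A)), B, p)
    return [[(x + y) % p for x, y in zip(ra, rb)] for ra, rb in zip(KA, KB)]

def invertible_mod_p(M, p):
    """Full-rank test over Z_p via Gaussian elimination (p prime)."""
    M = [row[:] for row in M]
    n = len(M)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] % p), None)
        if piv is None:
            return False
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)  # modular inverse, p prime
        for r in range(col + 1, n):
            f = (M[r][col] * inv) % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - f * M[col][c]) % p
    return True

p = 3
T_same = kron_sum(S(2, 1, 1, p), S(2, 1, 1, p), p)  # spectra {1, -1} overlap: 0 is in the Minkowski sum
T_mix = kron_sum(S(2, 1, 1, p), S(2, 1, 2, p), p)   # spectra {1, -1} and {a, -a} with a^2 = -1: no zero sum
print(invertible_mod_p(T_same, p), invertible_mod_p(T_mix, p))  # False True
```

The singular case matches statement (3): the eigenvalue 1 of the first factor and −1 of the second sum to zero, so T n cannot be invertible.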
Theorem 5.2. Let R k denote the collection of roots of g m k ;ℓ k ,r k (x) in the splitting field E for {g [p] m k ;ℓ k ,r k } 1≤k≤n , where g is defined in (4). Then the set E Tn of eigenvalues of T n is R 1 + R 2 + · · · + R n , where "+" refers to the Minkowski sum.
Similar to Theorems 3.9 and 4.4, there is an algorithm for the computation of the generalized Jordan form of T n and of T −1 n , if it exists. Rather than describing the steps of the corresponding algorithm, which is analogous to the discussion in the proofs of Theorems 3.9 and 4.4, the following theorem gives the formula for the matrix that derives the generalized Jordan form of T n . Theorem 5.3. Let U m k (ℓ k , r k ) ∈ M m k (E) be the matrix that transforms S m k (ℓ k , r k ) to its canonical Jordan form over the splitting field E for {g [p] m k ;ℓ k ,r k } 1≤k≤n . Then U Tn is invertible and U −1 Tn T n U Tn is a generalized Jordan form. Furthermore, U −1 Tn T n U Tn is a canonical Jordan form if and only if S m k (ℓ k , r k ) is diagonalizable over E for k ≥ 2, and T n is diagonalizable over E if and only if S m k (ℓ k , r k ) is diagonalizable over E for all k.
Remark 5.4. In the case where c ≠ 0, we substitute T 1 by T ′ 1 = T 1 + cI dim T 1 ; then Theorems 5.1 and 5.3 still hold. Notably, U m 1 (ℓ 1 , r 1 ) should also be substituted by the matrix U ′ m 1 which transforms T ′ 1 to its canonical Jordan form.
Furthermore, Theorem 5.2 remains true after replacing R 1 by R ′ 1 , the collection of roots of the polynomial associated with T ′ 1 .
Conclusion and Future Work
In this paper, we investigate the reversibility problem of multidimensional linear cellular automata under null boundary conditions. We show that the matrix representation of an n-dimensional linear cellular automaton with the prolonged η-nearest neighborhood is the Kronecker sum of n smaller matrices, each of which is a Toeplitz matrix. Such a cellular automaton is reversible if and only if the Minkowski sum of the sets of eigenvalues of these block Toeplitz matrices does not contain zero. When the cellular automaton is reversible, we provide an algorithm for deriving its reverse rule.
The proposed method significantly reduces the computational cost when the number of cells is large or when the dimension n is large. Furthermore, the dynamical behavior of a multidimensional linear cellular automaton under null boundary condition is revealed by elucidating the properties of block Toeplitz matrices.
We remark that the elucidation in this work can extend to the investigation of cellular automata under periodic boundary conditions with a minor modification. The discussion is analogous, hence it is omitted. Furthermore, Dennunzio et al. [9] characterize the properties, such as quasi-expansivity and the closing property, of multidimensional cellular automata by transposing them into specific one-dimensional systems. It is of interest to see how the results obtained in the present paper are reflected in the associated one-dimensional cellular automaton. The related work is under preparation.
Persistent Homology applied to location problems
Different approaches to solving location problems in transport and logistics have been developed in the literature. This article introduces a new approach using the concept of persistent homology, which has proved to be an efficient method in topological data analysis and has served as an alternative tool in many and various research areas such as image processing, material science, and biological systems. Precisely, inspired by the notions of the first homology groups and persistent homology, which mainly describe the behaviour of the connectivity relation between elements during a filtration of specific topological spaces, we develop a new method and approach for the treatment of facility location–network design problems.
Introduction
The concept of persistence was first introduced by Edelsbrunner, Letscher, and Zomorodian in [6], then refined by Carlsson and Zomorodian in [21]. It was used to provide a rigorous response to the following problem: for a parameterized family of spaces, those topological features which persist over a significant parameter range are to be considered as signal, with short-lived features as noise (see [7]). Since then, persistent homology has gathered enormous attention as a fundamental tool in Topological Data Analysis (TDA). It has proved its utility and significance in solving different problems in several domains such as biology, image processing and sensor networks (see, for instance, the papers [1,13,16,19]). For a general background on persistent homology, it is worth mentioning the book by Zomorodian [20], the survey by Edelsbrunner and Harer [5] and the survey by Ghrist [7]. The mentioned authors offer an introduction to persistent homology which can be useful for mathematical researchers with a background in topology.
The main interest of our work is related to location decision using persistent homology. Location decision is a critical element in transport planning. In general, decision makers are asked to select a site (location) among several existing sites (cities, districts, airports, ...) on a given network in order to satisfy a number of customers while considering some constraints. Several researchers have developed different mathematical models for location decision which are mainly based on classical optimisation methods. The present paper provides a new treatment of location problems using the concept of persistent homology. Therefore, for a practical understanding, the second section presents some basic notions and background related to persistent homology. Section 3 highlights the problem related to transport design. In Section 4, we focus on our contribution, based on a location decision issue and the adopted approach using persistent homology to solve the problem.
Mathematical background
In this section, we give a short presentation on some basic notions concerning simplicial complexes, simplicial homology and persistent homology (more details can be found in [5,7,20]).
Simplicial complexes
An abstract simplicial complex, or simply a simplicial complex, is a set K together with a collection S of subsets of K which satisfies the two following conditions: S contains all singletons {v} with v ∈ K, and S is closed under subsets: if τ ⊆ σ and σ ∈ S, then τ ∈ S.
An element σ of S of cardinality a positive integer n will be called an (n−1)-simplex, or simply a simplex. So, 0-simplices are the singletons, which are usually called points, and 1-simplices consist of two elements of K, called edges. There are also the 2-simplices, called triangles, and so on. In fact, any simplicial complex can be represented in a real affine space using points, edges, triangles and so on. Thus, using the induced topology on this geometric representation, one can talk about the connected components of a simplicial complex. Namely, this notion is of great interest in simplicial homology theory. The next section shows that the number of connected components of a simplicial complex is the dimension of a certain kind of vector space. Several examples of simplicial complexes can be found in the literature. The most useful ones are geometric simplicial complexes, complexes constructed from graphs, and complexes constructed from a covering of a geometric space (the nerve of a covering).
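The two defining conditions can be illustrated with a small sketch that generates a simplicial complex as the downward closure of a list of maximal simplices; the helper name `closure` is ours, not from the paper:

```python
from itertools import combinations

def closure(maximal_simplices):
    """All non-empty faces of the given simplices, as sorted tuples."""
    faces = set()
    for sigma in maximal_simplices:
        sigma = tuple(sorted(sigma))
        for k in range(1, len(sigma) + 1):
            faces.update(combinations(sigma, k))
    return faces

# a triangle {0,1,2} plus an extra edge {2,3}
K = closure([(0, 1, 2), (2, 3)])
assert {(0,), (1,), (2,), (3,)} <= K   # condition 1: all singletons are present
assert all(tau in K                    # condition 2: closed under subsets
           for sigma in K
           for tau in combinations(sigma, len(sigma) - 1) if tau)
print(len(K))  # 9 simplices: 4 vertices, 4 edges, 1 triangle
```

Both defining conditions hold by construction, since every subset of an included simplex is added along with it.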
Simplicial homology
Simplicial homology is a powerful tool for the description and characterization of some topological features such as connectivity and the existence of holes. Let K be a simplicial complex. The ℤ/2ℤ vector space generated by the p-dimensional simplices of K is denoted Cp(K). It consists of all p-chains, which are formal sums c = ∑j γj σj (1) where the γj are 0 or 1 (in ℤ/2ℤ) and the σj are p-simplices in K. For a positive integer p, consider the linear map ∂: Cp(K) → Cp−1(K), called the boundary map, defined on simplices as follows: for every p-simplex σ, ∂(σ) is the formal sum of its (p−1)-dimensional faces (i.e., subsets of σ of cardinality p). Thus, the boundary of the chain c (i.e., ∂(c)) is obtained by extending ∂ linearly, so that ∂(c) = ∑j γj ∂(σj), where we understand that the addition between coefficients is modulo 2, i.e.
1 + 1 = 0. (2) It is not difficult to check that ∂ ∘ ∂ = 0. The p-chains that have boundary 0 are called p-cycles. They form a subspace Zp of Cp. The p-chains which are boundaries of (p+1)-chains are called p-boundaries and form a subspace Bp of Cp. The fact that ∂ ∘ ∂ = 0 implies that Bp is a subspace of Zp, so the quotient Hp(K) = Zp/Bp is the p-th simplicial homology group of K with ℤ/2ℤ coefficients. The dimension of Hp(K) is called the p-th Betti number, or simply the p-Betti number of K, and it is denoted βp(K) or just βp. Overall, the homology groups describe spaces through their Betti numbers βp. Precisely, β0 refers to the number of connected components of the given simplicial complex, and β1 refers to the number of holes of the simplicial complex.
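The definitions above translate directly into linear algebra over ℤ/2ℤ: βp = dim Zp − dim Bp = (#p-simplices − rank ∂p) − rank ∂p+1. The following sketch (our own illustration, not the paper's program) encodes each boundary column as a bitmask and computes ranks by elimination over GF(2):

```python
from itertools import combinations

def boundary_columns(p_simplices, faces):
    """Each column is the mod-2 boundary of a p-simplex, packed as a bitmask
    over the listed (p-1)-simplices."""
    index = {f: i for i, f in enumerate(faces)}
    cols = []
    for s in p_simplices:
        col = 0
        for f in combinations(s, len(s) - 1):
            col ^= 1 << index[f]
        cols.append(col)
    return cols

def rank_gf2(cols):
    """Rank of a set of GF(2) column vectors via pivoting on the top set bit."""
    pivots, rank = {}, 0
    for v in cols:
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                rank += 1
                break
            v ^= pivots[h]
    return rank

V = [(0,), (1,), (2,)]
E = [(0, 1), (0, 2), (1, 2)]
T = [(0, 1, 2)]

r1 = rank_gf2(boundary_columns(E, V))   # rank of boundary_1
r2 = rank_gf2(boundary_columns(T, E))   # rank of boundary_2
b0 = len(V) - r1                        # boundary_0 = 0, so rank 0
b1_hollow = (len(E) - r1) - 0           # hollow triangle: one hole
b1_filled = (len(E) - r1) - r2          # filled triangle: the hole is killed
print(b0, b1_hollow, b1_filled)  # 1 1 0
```

As expected, the hollow triangle has β0 = 1 and β1 = 1, while filling it with the 2-simplex kills the hole.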
Persistent homology
Persistent homology is considered as a powerful tool in topological shape analysis which describes the changes in homology that occur to an object with respect to a given parameter.
For a simplicial complex K, a filtration F of K is a finite sequence of subcomplexes K0 ⊆ K1 ⊆ · · · ⊆ Kn = K. The idea of persistent homology consists of examining, instead of the homology Hi(K;F) of each individual term Ki of the filtration F, the maps induced by the iterated inclusions: h: Hi(K;F) → Hj(K;F) for all i < j. (8) These maps reveal which features persist. For a clear understanding, let us give a simple example visualised in the accompanying figures: although the two pictures P1 and P2 are different, they have identical Betti numbers. Persistent homology reveals the changes in homologies along the given filtration and proves the difference between the two objects. Usually, the parameter intervals arising from the basis for H∗(K;F) are visualised in the form of a barcode. Precisely, a barcode is a graphical representation of Hk(K;F) as a collection of horizontal line segments in a plane whose horizontal axis corresponds to the parameter and whose vertical axis represents an (arbitrary) ordering of homology generators (i.e., generators of H∗(K;F)). For example, the barcode corresponding to the filtration F1 above is of the form:
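For degree-0 persistence, the barcode can be computed with a union-find structure: every vertex is born at parameter 0, and a component dies at the parameter of the first edge that merges it into another. A minimal sketch (ours, assuming edges arrive sorted by their filtration value):

```python
def barcode0(n_vertices, edges):
    """edges: (value, u, v) triples sorted by value. Returns degree-0 bars
    (birth, death); death is None for components that never die."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    bars, merges = [], 0
    for val, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv        # one component dies at `val`
            bars.append((0, val))
            merges += 1
    bars += [(0, None)] * (n_vertices - merges)  # surviving components
    return bars

# path 0-1-2 whose edges enter the filtration at times 1.0 and 2.5
print(barcode0(3, [(1.0, 0, 1), (2.5, 1, 2)]))  # [(0, 1.0), (0, 2.5), (0, None)]
```

The number of bars alive at any parameter value equals β0 of the corresponding filtration step, which is exactly what the barcodes in Figures 4 and 5 depict for H0.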
Location decision
Location decisions have been analysed since 1909 by Weber [18], who formulated a theory to locate an industry with minimum transportation cost of raw and final products.
Mainly, the acquisition of a new facility within a network is a costly, time-sensitive undertaking. Hence, experts on transport design focus on solving the following issues: the optimal choice of nodes to locate facilities; minimizing travel cost; reducing travel time; and optimising networks, including infrastructure (roads, bridges, tunnels, ...), telecommunications (cables), water pipes and electricity cables. Hence, it can be understood that the choice of location is not arbitrary but requires strategic and expert planners, who are challenged by several factors. As an aid for making decisions, the domain of facility location provides tools and methods for finding optimal locations using mathematical models with respect to quantifiable factors. O'Kelly [11] is one of the researchers who proposed a model for hub facility location based on the optimisation method.
Persistent Homology as a tool for facility location
This paper reports a new formulation for facility location by developing algorithms inspired by persistent homology. We treat the problem given in [17] through our method, which can be applied in different contexts.
The starting point is a given network consisting of n districts denoted by (A1, A2, A3, ..., An) (nodes) in an area of a city, with information about the distances di,j between districts Ai and Aj, and the population ni of each node Ai. A chain of supermarkets has decided to open a facility within the districts, and needs to determine the optimal location to serve the whole area.
Method
The proposed method follows a procedure which is explained as follows. First, it is necessary to fix some nodes as candidates where the facility can be located. The choices are based on transport design criteria which are adopted by planning experts. Then, for a fixed candidate Ai, we weight each other node Aj by the following coefficient, which reflects the "importance" of the node Aj with respect to Ai (see below for the explanation). In fact, fix two candidates Am and An, and consider two nodes Ae and Af. We discuss the following particular situations. If we suppose that our network consists only of the three nodes Am, An and Ae, then Am is the optimal solution if the corresponding inequality between the coefficients holds. Assume, for example, de,m = de,n; then the inequality can be explained by the phrase: the bigger nm is, the more attractive Am is. On the other hand, if nm = nn, then de,m ≤ de,n explains the choice of Am as the optimal solution. Now suppose that our network consists only of the four nodes Am, An, Ae and Af. Then Am will be the optimal solution if the corresponding condition on the means holds. For example, if nm = nn and ne = nf, the means above reduce to the means of distances, so clearly the optimal solution corresponds to the smallest mean. Also, the ratios are there since the number of clients has to affect the measure in some sense.
Based on the above observations, we construct a complex Km,r associated to a candidate Am and a fixed strictly positive real r as follows (we focus only on 0-simplices and 1-simplices):
-The 0-simplices are all the points Aj which are different from Am.
-Two nodes Ai and Aj are related by an edge (i.e., {Ai, Aj} is a 1-simplex of Km,r) whenever the corresponding coefficient is at most r. Clearly, for r < r′, Km,r ⊆ Km,r′. This means that we can construct a filtration Km,r1 ⊆ ⋯ ⊆ Km,rs, where r1, ..., rs is an increasing sequence of strictly positive real numbers. We choose such a sequence in such a way that, at the last value, all complexes (for all candidates) have one connected component (i.e., all nodes are connected). In fact, passing from a complex to the next one in the filtration, we obtain more edges (the nodes become "more" connected) and hence fewer connected components (the resulting sequence of 0-Betti numbers is decreasing). Then, we compare the sequences of 0-Betti numbers. The one which reaches 1 at the earliest time is the most attractive among all the others, and the corresponding candidate can then be adopted as the solution following this approach. This can be seen easily when comparing the resulting barcodes.
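The selection rule above can be prototyped with a union-find computation of β0 for each candidate and each threshold. Since the paper's exact weighting coefficient is elided here, the sketch below uses the plain distance d(i, j) as a hypothetical stand-in for the edge weights; everything else follows the described procedure (a complex on the nodes other than the candidate, a filtration over increasing r, and the candidate whose β0 sequence reaches 1 earliest wins):

```python
def beta0(nodes, edges):
    """Number of connected components of the graph (nodes, edges) via union-find."""
    parent = {v: v for v in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    comps = len(nodes)
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps

def best_candidate(dist, candidates, thresholds):
    """dist[i][j]: symmetric distances; thresholds sorted increasingly.
    Returns the candidate whose beta_0 sequence reaches 1 at the earliest
    threshold."""
    n = len(dist)
    best, best_r = None, float("inf")
    for m in candidates:
        nodes = [v for v in range(n) if v != m]  # 0-simplices: all nodes but the candidate
        for r in thresholds:
            edges = [(i, j) for i in nodes for j in nodes
                     if i < j and dist[i][j] <= r]
            if beta0(nodes, edges) == 1 and r < best_r:
                best, best_r = m, r
                break
    return best

# toy symmetric network on 4 nodes; node 0 sits close to everyone
D = [[0, 1, 1, 1],
     [1, 0, 2, 3],
     [1, 2, 0, 2],
     [1, 3, 2, 0]]
print(best_candidate(D, candidates=[0, 1], thresholds=[1, 2, 3]))  # prints 1
```

With these toy numbers, removing node 1 leaves the remaining nodes fully connected already at r = 1 (through node 0), while the complex for candidate 0 only becomes connected at r = 2, so candidate 1 is returned under this stand-in weighting.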
Algorithms
There are software packages which deal with the computation of persistent homology in various contexts. In our context, we wrote our own programs, and all calculations are carried out in C++ by developing the following algorithms. Indeed, to determine the 0-Betti number of a complex associated to one of the candidates, we follow a three-step procedure as follows: fix a candidate Ai and a real number r.
We first need an algorithm to calculate all the coefficients.
Algorithm 1
For j = 1 to n, j ≠ i, do ... End for
Then, we construct the simplicial complex Ki,r associated to r using the following algorithm.
For j′ = 1 to n, j′ ≠ i, do ...
Example
Suppose that we have a network consisting of eight districts denoted by (A1, A2, A3, ..., A8) (eight nodes; see [17]). Applying the procedure above, one can observe clearly that A5 is the desired solution.
Conclusion
We have applied the proposed method to an example of eight districts, which has provided a successful location decision. Thus, this article is a first significant step toward solving location problems based on the concept of persistent homology. It will certainly attract the attention of several researchers. We expect further research to develop the solution for more complex situations with multiple parameters and for different types of location problems.
Figure 4: Barcode of happy face. The barcode corresponding to the filtration F2 is of the form:
Figure 5: Barcode of sad face. The two barcodes illustrate the difference clearly.
Exploring Promising Therapies for Non-Alcoholic Fatty Liver Disease: A ClinicalTrials.gov Analysis
Background: Non-alcoholic fatty liver disease (NAFLD) is a common disease and has been increasing in recent years. To date, no FDA-approved drug specifically targets NAFLD.
Methods: The terms "Non-alcoholic Fatty Liver Disease" and "NAFLD" were used in a search of ClinicalTrials.gov on August 24, 2023. Two evaluators independently examined the trials using predetermined eligibility criteria. Studies had to be interventional, NAFLD-focused, in Phase IV, and completed to be eligible for this review.
Results: The ClinicalTrials.gov database was searched for trials examining pharmacotherapeutics in NAFLD. The search revealed 1364 trials, with 31 meeting the inclusion criteria. Of these, 19 were finalized for evaluation. The dominant intervention model was parallel assignment. The most prevalent study locations were Korea (26.3%) and China (21.1%). The most common intervention was metformin (12.1%), with others like exenatide and pioglitazone accounting for 9.1% each.
Conclusion: Therapeutics used to manage NAFLD are limited. However, various medications offer potential benefits. Further investigations are definitely warranted.
Introduction
Non-alcoholic fatty liver disease (NAFLD) is a common disease today, reflecting the increasing rates of obesity, metabolic syndrome, and type 2 diabetes mellitus (T2DM).1,2 The pathophysiology of NAFLD includes multifaceted interactions between insulin resistance, abnormal lipid homeostasis, oxidative stress, and inflammation.3,4 While non-pharmacological interventions, especially dietary and weight loss strategies, are the foundational management techniques for NAFLD, persistence and adherence to such measures remain a challenge.5 Several promising therapeutic agents such as peroxisome proliferator-activated receptor agonists, farnesoid X receptor agonists, glucagon-like peptide-1 (GLP-1) receptor agonists, and sodium-glucose cotransporter 2 (SGLT2) inhibitors are undergoing further analysis for their potential efficacy in counteracting NAFLD progression.7,8 Current management emphasizes lifestyle modification, including weight management. For instance, the Mediterranean diet, abundant in whole foods and beneficial fats, has shown promise in ameliorating hepatic lipid content and insulin dynamics in NAFLD individuals.9,10 Clinical trial databases, with ClinicalTrials.gov leading the cohort, serve as indispensable repositories for gauging the efficacy and safety of innovative interventions.11 As of today, ClinicalTrials.gov alone encompasses a large range of clinical trials. Despite the growing amount of literature highlighting the potential of pharmacological interventions for NAFLD, a methodical assessment of clinical trials evaluating these remedies is conspicuously lacking. Although ClinicalTrials.gov serves as an invaluable corpus of such trials, a dedicated review encapsulating the role of pharmacotherapeutics within the NAFLD therapeutic landscape is still awaited.
Methodological Framework and Research Design
Search Strategy and Inclusion Criteria
On August 24, 2023, a thorough search of ClinicalTrials.gov was conducted using the keywords "Non-alcoholic Fatty Liver Disease" and "NAFLD". Two evaluators reviewed the trials using established eligibility standards to ensure objectivity. To qualify, studies needed to focus predominantly on NAFLD, be interventional, be in Phase IV, and be concluded.
Results
The preliminary search of the ClinicalTrials.gov database yielded 1364 trials. These were subsequently screened and filtered. Exclusion criteria were applied, eliminating trials that were incomplete (n = 762), non-interventional (n = 98), or outside of Phase IV (n = 473). Initially, 31 studies met the inclusion parameters for our review. Upon further examination, 12 trials were excluded due to their inadequate focus on NAFLD. Thus, a total of 19 clinical trials were finalized for evaluation. The stepwise selection methodology is graphically represented in Figure 1, detailing the approach used to discern the pertinent clinical trials for this study.
Characteristics of Included Studies
Table 1 shows the description of the clinical trials studied. Most studies (89.5%) were randomized, while for 10.5% allocation was not applicable. The primary intervention model used was parallel assignment (84.2%), followed by single-group (10.5%) and crossover (5.3%) designs.
In terms of masking, nearly half (47.4%) had no masking, and the remaining studies employed single (10.5%), double (5.3%), triple (15.8%), or quadruple (21.1%) masking techniques. Geographically, Korea was the most common location for these studies, hosting just over one quarter (26.3%). China followed with 21.1%, and other countries, including Egypt and Germany, each accounted for between 5.3% and 10.5% of the studies.
Various interventions were used, with metformin being the most prevalent (12.1%).Other interventions such as Exenatide and Pioglitazone were used in 9.1% of the trials, and a diversity of other treatments appeared in 3.0% to 6.1% of the studies.Further details can be found in Table 2.
Pharmacological Treatment and Management Strategies
Allopurinol
Allopurinol, a xanthine oxidase inhibitor, is used mainly for hyperuricemia and gout. Other indications include cardiovascular and kidney diseases.12,13 Allopurinol exerts its therapeutic effect by inhibiting xanthine oxidase, a critical enzyme in the metabolic pathway that transforms hypoxanthine into xanthine and subsequently into uric acid.13 Through this inhibition, allopurinol not only effectively lowers serum uric acid concentrations but also diminishes the production of reactive oxygen species.14,15 Elevated serum uric acid levels correlate independently with a heightened severity of hepatic steatosis and fibrosis.16-18 The underlying pathophysiological mechanisms that might bridge hyperuricemia with NAFLD include insulin resistance and oxidative stress.19 Notably, one specific trial incorporated allopurinol to evaluate the impact of xanthine oxidase inhibitors on MAFLD by regulating uric acid concentrations.
Empagliflozin
Empagliflozin, an SGLT2 inhibitor, is predominantly used to manage type 2 diabetes. Its mechanism of action involves inhibiting the SGLT2 protein within the proximal tubules of the kidneys, which in turn facilitates the excretion of glucose via the urine. This results in a notable reduction of blood glucose concentrations.20 Empagliflozin is also used for weight reduction, decreased blood pressure, and enhancements in cardiovascular health outcomes.21 The intricate relationship between T2DM and NAFLD has been conclusively identified; NAFLD is currently acknowledged as the hepatic representation of the metabolic syndrome.22 Given the proven efficacy of SGLT2 inhibitors in managing T2DM, there is growing interest in their potential applicability in NAFLD therapy.23,24 Empirical evidence from studies on empagliflozin indicates that its administration is linked to a substantial reduction in hepatic fat accumulation and improved liver enzymes in people with T2DM.25 Moreover, empagliflozin's potential to exert anti-inflammatory and antifibrotic effects is a subject of ongoing research, with preliminary findings pointing toward encouraging results.26 Notably, two clinical trials incorporated empagliflozin to report its influence on liver lipid content, energy metabolism, and overall body composition in recently diagnosed T2DM patients.
Evogliptin
Evogliptin, a dipeptidyl peptidase-4 (DPP-4) inhibitor, is predominantly used for managing T2DM by augmenting insulin secretion and reducing glucagon secretion.27 Research has suggested that DPP-4 inhibitors could have therapeutic implications in NAFLD due to their influence on hepatic lipid metabolism and their inherent anti-inflammatory attributes.28 In preclinical models, evogliptin has demonstrated notable benefits by mitigating hepatic steatosis, inflammation, and fibrosis.29 One trial incorporated evogliptin to evaluate its efficacy and safety for T2DM patients concurrently diagnosed with NAFLD.
Exenatide
Exenatide, a GLP-1 receptor agonist, is commonly prescribed for managing T2DM. Its mechanism of action involves enhancing insulin release, curbing glucagon secretion, and decelerating gastric emptying.30,31 Within the realm of NAFLD, exenatide, alongside other GLP-1 agonists, has attracted attention due to its potential benefits: promoting weight reduction, heightening insulin sensitivity, and exhibiting hepatoprotective properties.32,33 Notably, the weight reduction achieved via exenatide might indirectly confer advantages to NAFLD patients, considering the pronounced link between obesity and the onset of NAFLD.33 Three trials integrated exenatide into their experimental designs with varied objectives. One aimed to assess the outcomes of substituting premeal insulin with exenatide in terms of hepatic steatosis, therapeutic efficacy, insulin secretion, weight modulation, rates of hypoglycemia, and pertinent biomarkers in patients with T2DM and NAFLD. Another sought to evaluate whether a 24-week exenatide regimen ameliorates histological activity in NASH compared to dietary guidance alone. The last delved into the comparative efficacy of exenatide and insulin glargine in diminishing liver fat in patients newly diagnosed with T2DM and NAFLD over 24 weeks.
Febuxostat
Febuxostat, a non-purine selective xanthine oxidase inhibitor, is mainly prescribed for managing hyperuricemia in patients with gout.34 Numerous studies have underscored the potential influence of uric acid on the pathogenesis of NAFLD, pinpointing oxidative stress as a pivotal mechanism.14,15 Emerging evidence indicates that febuxostat has the potential to counteract oxidative stress, reduce lipid accumulation in the liver, and alleviate inflammation, factors that are fundamental to the onset and progression of NAFLD.35 One trial incorporated febuxostat in its design, aiming to evaluate the effects of xanthine oxidase inhibitors on MAFLD through the modulation of uric acid levels.
Gliclazide
Gliclazide is an oral sulfonylurea antidiabetic agent that stimulates insulin secretion from pancreatic beta cells by binding to specific receptors on these cells, causing ATP-sensitive potassium channels to close.36,37 Intriguingly, research has shed light on gliclazide's potential for mitigating liver steatosis and inflammation, especially as evidenced in rodent models.38-41 In one trial, gliclazide was incorporated to compare its effects with those of liraglutide and metformin, specifically addressing diabetes concomitant with NAFLD over 24 weeks.
Glimepiride
Glimepiride is an oral medication often used for treating T2DM. It operates similarly to gliclazide, enhancing insulin release from the pancreas by specifically interacting with the sulfonylurea receptor; this interaction leads to the closing of ATP-sensitive potassium channels.42 Notably, glimepiride exhibits a range of additional effects, including significant anti-inflammatory and antioxidative properties. Emerging research suggests that glimepiride may also protect the liver. This potential benefit is primarily due to its ability to reduce oxidative stress and inflammation, key factors in the development of NAFLD.43 Moreover, glimepiride's role in improving blood sugar control and increasing insulin sensitivity could indirectly help in alleviating NAFLD.44 In one study, the effectiveness of glimepiride was compared to that of tofogliflozin, focusing on their effects on liver health and metabolic indicators in NAFLD patients with T2DM over a period of 48 weeks.
Insulin Glargine
Insulin glargine, a long-acting insulin, is used for DM.46,47 While there is a theory that exogenous insulin administration might intensify hepatic steatosis,48 insulin glargine, through its potential to enhance glycemic control and reduce glycemic variability, might counteract these adverse effects.49,50 This aspect was investigated in two studies: one compared the efficacy of liraglutide with metformin against that of sitagliptin and insulin glargine for treating T2DM patients with NAFLD; the other evaluated whether exenatide is more effective than insulin glargine in reducing liver fat in newly diagnosed T2DM and NAFLD patients over 24 weeks.
Ipragliflozin
Ipragliflozin, part of the SGLT2 inhibitor medication class, is noted for its promising effects on liver steatosis and fibrosis.51–54 These positive effects are thought to stem from several mechanisms, such as weight reduction, increased insulin sensitivity, and direct anti-inflammatory effects on the liver.53,55 One trial incorporated ipragliflozin to investigate its impact on visceral fat and the extent of fatty liver in T2DM patients undergoing metformin and pioglitazone treatment.
Liraglutide
Liraglutide, a GLP-1 receptor agonist similar to exenatide, has garnered attention in the study of NAFLD. The LEAN trial highlighted its effectiveness, demonstrating liraglutide's ability to resolve non-alcoholic steatohepatitis more effectively than a placebo, without worsening fibrosis.56–61 Two trials incorporated liraglutide into their study design with distinct objectives. One trial sought to compare the effects of gliclazide, liraglutide, and metformin in diabetes patients with NAFLD over 24 weeks. Another aimed to compare the efficacy of combining liraglutide with metformin against that of sitagliptin and insulin glargine, specifically targeting NAFLD patients diagnosed with T2DM.
Lobeglitazone
Lobeglitazone is a member of the thiazolidinedione family, known for enhancing insulin sensitivity.62 These medications work by activating peroxisome proliferator-activated receptor gamma, leading to improved insulin response in peripheral tissues.63 Research indicates that lobeglitazone may reduce liver fat, improve the histological characteristics of NASH, and potentially have antifibrotic effects.64,65 One study evaluated the effectiveness and safety of lobeglitazone in reducing intrahepatic fat in patients with T2DM and NAFLD over 24 weeks.
Metformin
Metformin, a biguanide antihyperglycemic agent, is widely used to treat T2DM, mainly by reducing hepatic glucose production and increasing muscle insulin sensitivity.66 Its potential effectiveness in managing NAFLD has become a subject of interest. However, research outcomes have been inconsistent: while some studies suggest benefits such as reduced liver enzymes and liver fat, others find its effects comparable to those of lifestyle changes in terms of improving liver health.67,68 Recent clinical trials have further explored its efficacy, including its use in combination with other treatments such as ipragliflozin, pioglitazone, liraglutide, gliclazide, sitagliptin, and insulin glargine, in patients with coexisting T2DM and NAFLD.
Omega-3 Acid Ethyl Esters (OMACOR)
Omega-3 acid ethyl esters, which are derived from longer-chain omega-3 fatty acids such as eicosapentaenoic acid and docosahexaenoic acid, have been recognized for their heart-protective properties.69,70 Recently, there has been growing interest in these fatty acids within the scope of NAFLD, largely owing to their reported anti-inflammatory, antioxidative, and lipid-modulating effects.5,71 In one trial, omega-3 acid ethyl esters were incorporated to discern the influence of long-chain n-3 fatty acid supplementation on NAFLD biomarkers and the risk factors associated with cardiovascular disease and T2DM over an 18-month span.
Pentoxifylline
Pentoxifylline, a methylxanthine derivative, is primarily used to treat intermittent claudication resulting from peripheral artery disease.72 Its possible role in treating NAFLD arises from its significant anti-inflammatory and antifibrotic properties. A key mechanism of pentoxifylline involves inhibiting tumor necrosis factor-alpha (TNF-α), a critical cytokine involved in inflammation and the progression of NAFLD. Various clinical studies have assessed its effectiveness in patients with NAFLD/NASH, with initial results highlighting its potential benefits, particularly in reducing liver enzymes and indicators of liver damage.73 Additionally, one trial incorporated pentoxifylline to examine the liver's contribution to the onset of T2DM and its correlation with the pathogenesis of NAFLD.
Pioglitazone
Pioglitazone, a key drug in the thiazolidinedione class, primarily acts as an insulin sensitizer by targeting peroxisome proliferator-activated receptor gamma.74 The drug helps reduce liver steatosis and inflammation, possibly through improved insulin sensitivity, reduced inflammation in adipose tissue, and altered lipid metabolism.75 Recent studies also highlight its potential in managing NAFLD by reducing liver fat and inflammation, attributed to enhanced insulin sensitivity and adjustments in lipid metabolism.76 Despite its efficacy in improving NAFLD, particularly in diabetic patients, caution is necessary owing to side effects such as weight gain and the risks of bone fractures and bladder cancer, underscoring the need for a balanced risk-to-benefit assessment.8,77 Three clinical trials involving pioglitazone were conducted with distinct goals: assessing ipragliflozin's impact on visceral fat in T2DM patients on metformin and pioglitazone; evaluating the efficacy and safety of evogliptin in T2DM patients with NAFLD; and examining the effectiveness of pioglitazone hydrochloride and metformin hydrochloride combination therapy in newly diagnosed T2DM patients with NAFLD symptoms.
Rifampicin
Rifampicin is primarily recognized as a potent antibiotic targeting Mycobacterium tuberculosis. Intriguingly, it is also a robust inducer of the liver's cytochrome P450 system.78 Its connection to NAFLD is primarily underscored by its influence on bile acid metabolism and its potential therapeutic role in alleviating pruritus, a symptom often associated with liver diseases, including primary biliary cholangitis.79 While direct evidence advocating rifampicin's application in NAFLD remains scarce, its ability to regulate bile acid transport and synthesis might indirectly affect the disease's progression. However, the use of rifampicin for NAFLD must be judiciously considered because of its potent antibiotic nature and possible hepatotoxicity, particularly with extended use.80 One trial incorporated rifampicin to discern the effects of PXR activation on hepatic fat content by comparing rifampicin with a placebo in volunteers.
Salsalate
Salsalate, a non-acetylated salicylate derivative, is primarily known for its anti-inflammatory properties.81 Recently, it has attracted attention for its potential use in treating metabolic disorders, especially because of its ability to reduce inflammation, a key feature of conditions such as T2DM and NAFLD.82,83 NAFLD is characterized by chronic, low-grade inflammation, and targeting this aspect of the disease could offer significant therapeutic benefits. Studies have shown that salsalate can improve insulin sensitivity and glucose control in people with T2DM. Its effects on NAFLD are believed to stem from its ability to inhibit nuclear factor-kappa B pathways, leading to a decrease in the release of proinflammatory cytokines from the liver.84 In one trial, salsalate was included in a treatment plan for patients with osteoarthritis who also had NAFLD, to observe changes in NAFLD indicators following its administration.
Sitagliptin
Sitagliptin, a DPP-4 inhibitor, plays a crucial role in treating T2DM.85 Its primary mechanism involves prolonging the action of incretin hormones, which leads to increased insulin production and decreased glucagon release, thereby achieving a balanced metabolic effect.86 Recently, the use of DPP-4 inhibitors such as sitagliptin has shown promise in treating NAFLD, especially in patients with both T2DM and NAFLD. Studies indicate that sitagliptin can significantly improve liver function and reduce liver fat, as evidenced by advanced imaging methods such as magnetic resonance spectroscopy.87 The drug works by reducing liver cell death, decreasing hepatic fat production, and managing oxidative stress and related inflammation.88 In one clinical trial, researchers explored the effectiveness of combining sitagliptin with insulin glargine for treating NAFLD in T2DM patients, comparing it to the combination of liraglutide and metformin.
Tofogliflozin
Tofogliflozin, a key SGLT2 inhibitor, is mainly used to treat T2DM.89 It works by blocking SGLT2 in the kidney's proximal tubules; this action significantly lowers renal glucose reabsorption, leading to increased urinary glucose excretion and improved blood sugar control.90 Recent studies also highlight the potential benefits of SGLT2 inhibitors for treating NAFLD.91–93 Specific research on tofogliflozin has shown its ability to reduce liver fat and improve indicators of liver damage in patients with both NAFLD and T2DM.94 One trial compared tofogliflozin with glimepiride, focusing on their effects on liver histology and metabolic markers in NAFLD patients with T2DM over a 48-week period.
Ursodeoxycholic Acid (UDCA)
Ursodeoxycholic acid (UDCA) is a naturally occurring bile acid traditionally used to treat primary biliary cholangitis.94 Owing to its cytoprotective, immunomodulatory, and anti-apoptotic properties, there is growing interest in exploring its use in treating other liver diseases, such as NAFLD.95,96 Preliminary studies have posited that UDCA may improve liver function and reduce liver cell damage in NAFLD by stabilizing cell membranes, reducing harmful bile acids, and protecting against oxidative stress.97 One trial incorporated UDCA to understand the hepatic role in the onset of T2DM and its interrelation with NAFLD pathogenesis.
Vitamin E
Vitamin E, recognized for its robust fat-soluble antioxidant qualities, is of interest to researchers because of its possible role in addressing NAFLD and its more advanced form, NASH.98 The antioxidant effects of vitamin E show promise in potentially decelerating or reversing the progression of these conditions. However, it is important to approach its use with caution: certain studies have highlighted possible adverse effects associated with prolonged consumption, such as an increased risk of overall mortality, hemorrhagic stroke, and a greater likelihood of prostate cancer in men, particularly at therapeutic doses.98,99 One trial studied the impact of a tocotrienol-rich fraction of vitamin E on liver enzymes and DNA damage in overweight children with NAFLD over a span of 6 months.
Vitamin D
Epidemiological data reveal an inverse correlation between vitamin D levels and NAFLD prevalence, with deficiencies more prevalent in NAFLD patients.103,104 This vitamin has shown potential to inhibit NAFLD progression through anti-inflammatory and antifibrotic effects and to improve insulin sensitivity.105–108 Two trials explored vitamin D's influence on NAFLD and its related metabolic parameters in T2DM patients.
Discussion
An exploration of the therapeutic potential of various agents in managing NAFLD indicates the multifaceted nature of the disease and its intricate interconnections with other metabolic disorders, especially T2DM.This complexity is aptly mirrored in the multitude of therapeutic agents being considered.
Thiazolidinediones such as lobeglitazone and pioglitazone have traditionally been harnessed for their insulin-sensitizing capacities.61,73 Their potential utility in NAFLD hinges on enhancing insulin sensitivity in peripheral tissues, consequently diminishing liver fat accumulation and ameliorating histological indicators of the disease.62,75 Nevertheless, caution is imperative with pioglitazone, given the associated risks of bone fractures, weight gain, and potential long-term safety concerns.76

Metformin is a cornerstone of T2DM management in the same metabolic milieu, and its implications for NAFLD management cannot be sidelined.65 However, its efficacy is contested, with some studies highlighting its potential advantages in liver aminotransferase reduction and hepatic steatosis mitigation, and others debating its superiority over lifestyle modifications.67

A noteworthy entry into this therapeutic spectrum is the omega-3 acid ethyl esters (OMACOR). These derivatives, notably eicosapentaenoic acid and docosahexaenoic acid, bring cardioprotective capabilities.69 Their burgeoning role in NAFLD is rooted in their anti-inflammatory, lipid-regulating, and antioxidative attributes, elucidating their potential as a therapeutic adjunct.7,70

Agents like pentoxifylline and salsalate, despite their primary roles in managing peripheral artery disease and inflammation, respectively, have demonstrated potential benefits in NAFLD owing to their anti-inflammatory properties. Their therapeutic impact, especially that of pentoxifylline, can be attributed to their potential to suppress vital inflammatory cytokines, such as TNF-α, which are pivotal in NAFLD progression.71,83

While rifampicin's primary role as a potent antibiotic is widely acknowledged, its implications for NAFLD hinge on its influence on bile acid metabolism and its potential therapeutic role in alleviating pruritus, a symptom often associated with liver diseases.78 However, it must be prescribed with caution because of possible hepatotoxic repercussions.88

Moreover, traditional agents such as UDCA and vitamins E and D underscore the multifactorial therapeutic approach to NAFLD. While UDCA's hepatoprotective and anti-apoptotic properties offer therapeutic possibilities, vitamin E's antioxidant prowess suggests potential therapeutic benefits; still, the latter must be approached cautiously, given the potential risks associated with high doses.96,97 Vitamin D, on the other hand, bridges bone health with cellular growth, immune function, and inflammation, suggesting an intricate role in NAFLD's development and progression.101
Future Directions
As NAFLD research and therapy advance, it is evident that a multifaceted approach is crucial. Personalized treatments rooted in genomic and metabolic profiling appear to be the way forward, while long-term studies remain vital to establish the safety and efficacy of new agents. Understanding NAFLD's molecular foundations can lead to novel therapeutic targets, potentially offering more effective treatments. Given the disease's multifactorial nature, there is potential for combining various therapeutic agents and merging them with lifestyle modifications.
Conclusion
Since NAFLD has become a major health concern, more clinical trials of different pharmacological approaches are required to address the disease and its complications. Agents such as thiazolidinediones, metformin, omega-3 acid ethyl esters, and DPP-4 inhibitors, among others, highlight the diverse approaches toward NAFLD management. Each agent presents potential therapeutic benefits but also carries associated risks and limitations. Further investigations are warranted.
Figure 1 Flow diagram of trial selection process.
Table 1
Trial Characteristics
Table 2
Overview of the Clinical Trials
Interplanetary Lyman α line profiles: variations with solar activity cycle
Interplanetary Lyman α line profiles are derived from the SWAN H cell data measurements. The measurements cover a 6-year period from solar minimum (1996) to after the solar maximum of 2001, which allows us to study the variations of the line profiles with solar activity. These line profiles were used to derive line shifts and line widths in the interplanetary medium for various angles of the LOS with the interstellar flow direction. The SWAN data results were then compared to an interplanetary background upwind spectrum obtained by STIS/HST in March 2001. We find that the LOS upwind velocity associated with the mean line shift of the IP Lyman α line varies from 25.7 km/s to 21.4 km/s from solar minimum to solar maximum. Most of this change is linked with variations in the radiation pressure. LOS kinetic temperatures derived from IP line widths do not vary monotonically with the upwind angle of the LOS. This is not compatible with calculations of IP line profiles based on hot-model distributions of interplanetary hydrogen. We also find that the line profiles get narrower during solar maximum. The results obtained on the line widths (LOS temperature) show that the IP line is composed of two components scattered by two hydrogen populations with different bulk velocities and temperatures. This is a clear signature of the heliospheric interface on the line profiles seen at 1 AU from the Sun.
Introduction
The interplanetary background has been used to study the interplanetary hydrogen distribution since its discovery in the late 1960s (Thomas and Krassa 1970; Bertaux and Blamont 1970). Lists of previous experimental studies of the interplanetary Lyman α background can be found in Ajello et al. (1987, 1994) or Quémerais et al. (1994).
The SWAN instrument on the SOHO spacecraft (Bertaux et al., 1995) is able to study the IP Lyman α line profile using hydrogen absorption cells. These devices are used to scan the line profile, taking advantage of the variation of the Doppler shift between the cells and the interplanetary hydrogen as the spacecraft rotates on its orbit around the sun. Previous studies of the IP line profile with H cells were made by the Mars 6 spacecraft (Bertaux et al., 1976), the Prognoz 5/6 UV photometers (Bertaux et al. 1985), and the ALAE instrument on the ATLAS-1 shuttle mission (Quémerais et al. 1994).
IP Lyman α line profiles can also be obtained from high-resolution UV spectrometers in Earth orbit. Clarke et al. (1995, 1998) report measurements made by the GHRS (Goddard High-Resolution Spectrograph) instrument on the Hubble Space Telescope. More recently, new spectra were obtained from the STIS instrument that replaced GHRS on HST.
Studying the IP Lyman α line profiles has proved to be of interest for studying the heliospheric interface because effects imprinted on the hydrogen distribution at a large distance from the sun are still visible on Lyman α line profiles seen at 1 AU. This is due to the fact that the flow of hydrogen in the heliosphere is collisionless. Izmodenov et al. (2001) have presented model computations that predict how the hydrogen distribution is affected by the heliospheric interface. The heliospheric interface is the boundary region where the expanding solar plasma interacts with the interstellar plasma. Using the derived hydrogen distributions, Quémerais and Izmodenov (2002) calculated line profiles including a complete model of radiative transfer effects. The derived line shifts and line widths were clearly modified by the heliospheric interface.
In this paper, we continue the work presented by Quémerais et al. (1999) about the line profile reconstruction technique based on the SWAN H cell data. This early work was concentrated on the first year of data. Here we show results obtained from 6 one-year orbits of SOHO around the sun. Some of the details of the data processing are not repeated here and the interested reader should refer to the original paper. Previous analyses of the SWAN H cell data were published by Costa et al. (1999), Lallement et al. (2005), and Koutroumpa et al. (2005).
The first section briefly presents the line profile reconstruction technique and the SWAN data. The following sections give the results obtained for the line shifts and the line widths, as well as the variations during the solar cycle. The last section presents a STIS/HST upwind line profile obtained in March 2001 and compares the derived line shift and line width with the SWAN results.
SWAN H Cell Data from 1996 to 2003
The SOHO spacecraft was launched in December 1995 from Cape Canaveral. The SWAN instrument started operation in January 1996 and has been active since then, except for a few periods of time (June to October 1998 and January 1999).
Characteristics of the SWAN instrument are given in Bertaux et al. (1995). The SWAN instrument is a UV photometer with a passband between 110 nm and 160 nm. The instrument is made of two identical units placed on opposite sides of the SOHO spacecraft (+Z and -Z sides). Each unit is equipped with a periscopic scanning mechanism that allows the field of view to be pointed in any direction of the half sky facing the side the unit is attached to. The instantaneous field of view is 5° by 5°, divided into 25 pixels. Each pixel has a 1° by 1° field of view. Measurements are performed every 15 seconds, with at least 13 seconds of integration time to get a good signal-to-noise ratio.
Each unit is equipped with a hydrogen absorption cell. This cell is placed on the photon path to the detector. The cell is filled with molecular hydrogen and has MgF₂ windows. A tungsten filament passes through the cell. When a current goes through the filament, H₂ is partially dissociated into atomic hydrogen, which creates a small cloud that can absorb Lyman α photons. Typical values for the optical thickness of the active cell are around 3 to 5. The observing programmes and the various subsets of data are detailed by Bertaux et al. (1997). Here we concentrate on the hydrogen absorption cell measurements obtained between June 1996 and June 2003.
The most common observation programme of SWAN is the so-called full-sky observation, during which each sensor unit covers the complete hemisphere that is on its side by moving a two-mirror periscope mechanism. One full-sky observation is performed in one day. The data obtained by both sensors are then combined into one image of the whole sky at Lyman α. It must be noted that the areas of the sky viewed by each sensor overlap, which enables us to compare both sensors on a regular basis. SWAN performs these observations 4 times per week. During the first year, one of the four full-sky observations made each week was made with cyclic activation of the hydrogen cells. For these observations, we then obtained a full-sky image at Lyman α as well as a full-sky image of the reduction factor, which is defined below. The mechanism is kept fixed during both measurements, cell OFF and cell ON, before moving to another direction in the sky.
The reduction factor R used in what follows is a dimensionless quantity. It is the ratio of intensities measured in a given direction when the cell is on (absorption) and when it is off (no absorption). If we consider an incoming intensity expressed as I(λc), where λc is the wavelength in the cell rest frame, and if we note T(λc) the transmission function inside the cell, the reduction factor is defined by

R = ∫ T(λc) I(λc) dλc / ∫ I(λc) dλc .   (1)

Note that the quantity A = 1 − R measures the absorbed fraction of the incoming intensity. The transmission function inside the cell can be approximated with excellent accuracy for low optical thicknesses (τ < 10) by

T(x) = exp( −τ exp(−x²) ) ,   (2)

where the variable x is defined as the normalized frequency by

x = (ν − ν₀) / ∆ν_D = (λ₀ − λ) / ∆λ_D .   (3)

The ν variable represents frequency, λ wavelength. Here, λ₀ is the wavelength of the Lyman α transition at 1215.66 Å. The Doppler width of the cell, ∆λ_D, is related to the temperature and the thermal velocity in the cell by

∆λ_D = (λ₀ / c) √( 2 k T_cell / m_H ) .   (4)

The absorbing power of a hydrogen cell can be characterised by a quantity called the equivalent width, W_λ, in wavelength units:

W_λ = ∫ [ 1 − T(λ) ] dλ .   (5)
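As a concrete numerical illustration of the reduction factor (Eq. 1) and the equivalent width (Eq. 5) defined above, the short sketch below passes an illustrative Gaussian interplanetary line through a Doppler-broadened absorption cell. All parameter values (cell temperature, optical thickness, gas temperature, line shift) are assumptions chosen only to be of the right order; they are not SWAN calibration data.

```python
import numpy as np

# Sketch of the H-cell quantities: reduction factor (Eq. 1) and
# equivalent width (Eq. 5), assuming a Gaussian (Doppler) absorption
# profile in the cell. Parameter values are illustrative only.

LAMBDA0 = 1215.66e-10        # Lyman-alpha rest wavelength [m]
C = 2.998e8                  # speed of light [m/s]
K_B = 1.381e-23              # Boltzmann constant [J/K]
M_H = 1.674e-27              # hydrogen atom mass [kg]

def doppler_width(T):
    """Doppler width of a line at temperature T: (lambda0/c)*sqrt(2kT/m)."""
    return LAMBDA0 / C * np.sqrt(2.0 * K_B * T / M_H)

# Wavelength grid (cell rest frame) with a uniform step for the integrals
lam = np.linspace(LAMBDA0 - 0.4e-10, LAMBDA0 + 0.2e-10, 6001)
dlam = lam[1] - lam[0]

# Cell transmission T(x) = exp(-tau0 exp(-x^2)) with x the normalized
# frequency; assumed ~300 K cell and a typical active-cell thickness
tau0 = 4.0
dl_cell = doppler_width(300.0)
x = (LAMBDA0 - lam) / dl_cell
T_x = np.exp(-tau0 * np.exp(-x * x))

# Illustrative incoming IP line: Gaussian, ~1.2e4 K, upwind-like shift
v_shift = -25.0e3                          # LOS velocity [m/s]
lam_c = LAMBDA0 * (1.0 + v_shift / C)
I = np.exp(-((lam - lam_c) / doppler_width(1.2e4)) ** 2)

# Reduction factor (Eq. 1) and equivalent width (Eq. 5)
R = np.sum(I * T_x) / np.sum(I)
W_lambda = np.sum(1.0 - T_x) * dlam

print(f"R = {R:.3f}, absorbed fraction A = {1 - R:.3f}")
print(f"W = {W_lambda * 1e10:.4f} A = {W_lambda / LAMBDA0 * C / 1e3:.1f} km/s")
```

Because the assumed line is blueshifted by 25 km/s while the cell absorbs at the rest wavelength, only the red wing of the line is attenuated and the absorbed fraction stays small, which is why scanning the Doppler shift over a full orbit is needed to sample the whole profile.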
Typical maps of reduction factor values are shown in Bertaux et al. (1997). Quémerais et al. (1999) have shown how H cell measurements over a 1-year period can be used to reconstruct the line profiles of the IP glow. The same technique was applied here, and we refer readers to Sect. 3 of Quémerais et al. (1999). We use the same terms and notations as in that paper.
The data used in this work cover a much longer period of time than our previous work which was concentrated on the first year of SWAN data. To reconstruct the line profiles, we need a full rotation of the Earth on its orbit. For each year we selected data from early June of the year until end of May of the following year. For instance, the spectra labelled 2000 were derived from measurements starting in June 2000 and ending in May 2001. The reason for this sampling was the lack of data between June 1998 and October 1998. As a consequence the year 1998 (June 1998 to May 1999) is missing.
As done by Quémerais et al. (1999), we use Eq. (5) to calibrate the absorbing power of the cell. Hydrogen cells are known to age with time. This may be due to ageing of the tungsten filament itself, to trapping of hydrogen by the glass, or even to some leak. The result is a decreasing optical thickness of the cell for a given filament current level. Over the 6-year period, the two H cells have aged in very different ways. The H cell in the unit attached to the -Z side of the spacecraft lost all absorbing power very rapidly in 1999, which suggests that the cell lost its H₂. The H cell in the other unit (+Z side) still retains most of its H₂ cloud, as shown by the strong absorption still seen in the 2005 data. However, its optical thickness has decreased with time. This was calibrated by determining the equivalent width of the cell (Eq. 5) from the data (see Sect. 4.1 of Quémerais et al. 1999).
The values for the equivalent width of the +Z hydrogen absorption cell are given in Table 1. Note that we use the transformation into velocity units given by Eq. (11) of Quémerais et al. (1999). These values were used to determine the line profiles as presented in section 4.2 of Quémerais et al. (1999).
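The calibration idea above can be sketched numerically: for a Doppler absorption profile, the equivalent width in velocity units is W = V_D · F(τ₀), where V_D is the thermal Doppler velocity inside the cell and F(τ) the dimensionless curve of growth, so a measured W can be inverted for the cell optical thickness τ₀. The V_D value and the W values below are assumptions for illustration, not the actual SWAN numbers of Table 1.

```python
import numpy as np

# Hedged sketch: recover the cell optical thickness tau0 from a
# measured equivalent width W (velocity units) via
#   W = V_D * F(tau0),  F(tau) = int [1 - exp(-tau exp(-x^2))] dx.
# V_D and the W values are illustrative, not SWAN calibration data.

X = np.linspace(-6.0, 6.0, 4001)
DX = X[1] - X[0]

def growth(tau):
    """Dimensionless equivalent width F(tau) of a Doppler-profile cell."""
    return np.sum(1.0 - np.exp(-tau * np.exp(-X * X))) * DX

def tau_from_width(w_kms, v_d_kms=2.2):
    """Invert W = V_D * F(tau) for tau by bisection on [0.01, 20]."""
    target = w_kms / v_d_kms
    lo, hi = 0.01, 20.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if growth(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A declining measured equivalent width maps to a declining optical
# thickness, i.e. an ageing cell
for w in (5.0, 4.0, 3.0):                  # km/s, illustrative values
    print(f"W = {w:.1f} km/s  ->  tau0 ~ {tau_from_width(w):.2f}")
```

Because F(τ) grows only logarithmically once the line core saturates, a modest drop in W corresponds to a large drop in τ₀, which is why the equivalent width is a robust quantity to track cell ageing.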
Line-of-sight velocities
This section presents the results found for each of the 6 orbits analysed. As mentioned earlier, each data set starts in June and ends at the end of May of the following year. There is roughly one map per week, which means that about 50 maps are used to derive a line profile in each direction of the sky. In 1996 and 1997, both H cells were active, so we have a full-sky velocity map for both orbits. However, due to the leak in the -Z unit H cell, only the northern ecliptic hemisphere is available from 1999 to 2003. We do not show individual plots of the line profiles; examples are given in Figs. 5 to 7 of Quémerais et al. (1999).
The line-of-sight (LOS hereafter) velocities shown here correspond to the mean Doppler shifts of the line profile expressed in terms of velocity in the solar rest frame. If the line is not symmetrical, as in Quémerais and Izmodenov (2002) for example, then there is a small difference between the mean Doppler shift and the maximum of the line.
Following Quémerais et al. (1999), we express the relation between the velocity v projected on the LOS and the wavelength in the solar rest frame as

λ = λ₀ ( 1 + v / c ) ,   (6)

where v is the projection of the atom velocity V on the line of sight U. Note that the direction of the LOS is opposite to the direction of propagation of the photon. In that case, the LOS velocity ⟨v⟩ is given by

⟨v⟩ = ∫ v I(v) dv / ∫ I(v) dv .   (7)

3.1. Comparison of the first two orbits

Quémerais et al. (1999) show the velocity map derived from data obtained in 1996 and early 1997. This map corresponds to the minimum of activity of cycle 22. In the case of a flow with constant velocity, the velocity projected on the LOS is simply V∞ · cos θ, where θ is the angle of the LOS with the upwind direction and V∞ is the constant bulk velocity. The actual velocity maps observed by SWAN are more complex than this simple case. Figure 1 displays the difference between the velocity maps of 1996 and 1997. The variation in the upwind velocity is a deceleration of 0.4 km/s, with a value of -25.7 km/s in 1996 and -25.3 km/s in 1997. Considering the uncertainties given in Table 2, the velocity can be considered to be the same. However, a slight deceleration in the solar rest frame is quite possible because of the increase in radiation pressure from the Sun between 1996 and 1997. The LOS (line-of-sight) velocity in the upwind direction is the result of two antagonistic effects. First, selection effects linked to ionization processes increase the mean velocity by ionizing the slower hydrogen atoms. Second, radiation pressure tends to push the atoms away from the Sun and slow them down. During most of the solar cycle, radiation pressure is larger than the solar gravitational attraction (Pryor et al. 1998). As the cycle changes from minimum to maximum, the interplanetary hydrogen atoms feel an increasing radiation pressure that becomes more efficient at slowing them down on the upwind side of the inner heliosphere.
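Since the LOS velocities quoted in this section are intensity-weighted mean Doppler shifts, they need not coincide with the maximum of an asymmetric line. The sketch below illustrates this with a purely hypothetical two-Gaussian profile (a primary population plus a slower, hotter component of the kind expected from hydrogen processed at the heliospheric interface); the component velocities, temperatures, and amplitude ratio are assumptions, not fitted SWAN values.

```python
import numpy as np

# Hedged sketch: the mean Doppler shift <v> (first moment of the line,
# weighted by intensity) versus the line maximum, for an illustrative
# asymmetric two-component profile.

K_B, M_H = 1.381e-23, 1.674e-27

def gauss(v, v0, T):
    """Gaussian line component centred on v0 with temperature T."""
    vth = np.sqrt(2.0 * K_B * T / M_H)     # thermal speed [m/s]
    return np.exp(-((v - v0) / vth) ** 2)

v = np.linspace(-120e3, 60e3, 4001)        # uniform LOS velocity grid [m/s]

# Primary population plus a hypothetical slower, hotter component
I = gauss(v, -26e3, 1.0e4) + 0.35 * gauss(v, -18e3, 2.5e4)

v_mean = np.sum(v * I) / np.sum(I)         # mean Doppler shift <v>
v_peak = v[np.argmax(I)]                   # position of the line maximum

print(f"<v> = {v_mean / 1e3:.1f} km/s, peak at {v_peak / 1e3:.1f} km/s")
```

The hotter, slower component drags the mean shift toward lower speeds while barely moving the line maximum, which is the kind of difference between ⟨v⟩ and the peak position mentioned above.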
The downwind velocity is mostly unchanged, with a value equal to 21.6 ± 1.3 km/s, which is not surprising because the hydrogen distribution on the downwind side of the heliosphere is less affected by variations in radiation pressure. In this direction, the hydrogen is strongly depleted by ionization effects. A partial filling happens beyond a few AU from the Sun because of the relatively hot temperature of the interplanetary gas. It is therefore not surprising that radiation pressure plays a lesser role in this direction as compared to the upwind direction. Figure 2 shows the velocity in the solar rest frame as a function of the angle with the upwind direction. The upwind direction is the one defined by the isovelocity contours (Quémerais et al., 1999), with a longitude of 252.3° and a latitude of 8.7°. Changes between the two orbits are within the statistical uncertainties of the velocity values.
Variation from solar minimum to solar maximum
The SWAN instrument did not operate between the end of June 1998 and October 1998, because of the accidental loss of contact with the SOHO spacecraft. Operations resumed at the end of 1998 but were soon stopped after the failure of the last gyroscope. Normal operations resumed in spring 1999.
In this section, we present maps obtained from measurements made between June 1999 and June 2003. This period covers the solar activity maximum of 2001 and the beginning of cycle 23. There are no measurements in the southern ecliptic hemisphere because the -Z unit H cell stopped absorbing in 1999 for a reason that is still unclear, while the +Z unit hydrogen cell is still functional in 2006. Table 1 shows the variation in the equivalent width of the cell as a function of orbit year. Between 1997 and 1999, the +Z H cell shows a strong decrease in absorption, followed by a steady but slower decrease. The equivalent width in 2002 is only 60% of the post-launch value. Figure 3 shows the difference between the 1996 orbit velocity profile and the five other orbits. The differences are shown as a function of the LOS angle from the upwind direction. First, we note that the 2000 to 2002 orbits yield very similar values, within 1 km/s almost everywhere. Since the 1996 and 1997 orbits are also very similar, we see two main regimes for the velocity: one for solar minimum conditions, with an upwind LOS velocity around -25.5 km/s, and one for solar maximum conditions, with an upwind LOS velocity around -21.5 km/s. The 1999 orbit and the missing 1998 one are intermediary states. Table 2 gives the numerical values of the upwind velocity for each orbit. The values are slightly different from those quoted earlier; in this table, velocities are averaged within 20° of the upwind direction to get better statistics. The apparent variation in upwind velocity can be explained partially by effects of the radiation pressure; however, detailed calculations will be necessary to see whether models match the measured variations. The general deceleration observed in the solar rest frame corresponds to increasing values of radiation pressure, as expected from solar Lyman α flux measurements (Rottman et al. 2006).

The change between 1997 and 1999 is very abrupt (3 km/s) and seems too important for the change of radiation pressure (+10%) derived from solar flux measurements. Changes in ionization processes could also be partly responsible for this change of LOS velocity. Indeed, ionization processes favour fast atoms because slower atoms have a higher probability of being ionized. This leads to the selection of fast atoms and an increase in the bulk velocity in the inner heliosphere. However, even the fast atoms are ionized as the efficiency of the ionization processes increases. This means that the atoms contributing to the intensity are farther away from the Sun. How this will affect the line-of-sight velocity is not clear. Only time-dependent calculations of the hydrogen distribution will allow us to discriminate between the various effects involved here. Figure 4 presents the LOS velocity difference between 2001 and 1996 for all directions in the northern ecliptic hemisphere.
The change between 1997 and 1999 is very abrupt (3 km/s) and seems too large for the change in radiation pressure (+10%) derived from solar flux measurements. Changes in ionization processes could also be partly responsible for this change in LOS velocity. Indeed, ionization processes favour fast atoms, because slower atoms have a higher probability of being ionized. This leads to the selection of fast atoms and an increase in the bulk velocity in the inner heliosphere. However, even the fast atoms are ionized with increasing efficiency of the ionization processes. This means that the atoms contributing to the intensity are farther away from the sun. How this affects the line-of-sight velocity is not clear. Only time-dependent calculations of the hydrogen distribution will allow us to discriminate between the various effects involved here. Figure 4 presents the LOS velocity difference between 2001 and 1996 for all directions in the northern ecliptic hemisphere. The difference in the upwind direction is roughly 4 km/s, as seen before. The difference in the downwind direction is small, which suggests that changes in radiation pressure over the cycle are not as important for the velocity distribution of hydrogen atoms in the downwind direction. We clearly see a shift between the iso-contours of velocity for 2001 and 1996. This is explained by the changes in the velocity contours induced by anisotropies of the solar ionization fluxes (Koutroumpa et al. 2005). In 2001, when solar ionizing fluxes are almost isotropic, the constant velocity contours are well fitted by cones centered on the upwind direction. In 1996, however, due to different ionization fluxes at different heliographic latitudes, the resulting iso-velocity contours are elongated towards high latitudes. This is demonstrated in Fig. 4, where the difference of the velocity maps shows a maximum that is shifted towards higher ecliptic latitudes.
In this section, we have shown how the LOS velocity of the interplanetary Lyman α line profile changes during the solar cycle. The amplitude is large, more than 4 km/s in the upwind direction. It is also abrupt because we find a change of 3 km/s between 1997 and 1999. The main cause for this variation in the velocity is the change of radiation pressure during the solar cycle (Pryor et al. 1998). Other effects linked to ionization processes and their solar cycle variations may also be involved.
Line-of-sight Temperatures
This section presents the LOS kinetic temperature maps deduced from the line-profile reconstruction technique presented in Quémerais et al. (1999). What we call the line-of-sight (LOS) kinetic temperature, also called apparent temperature in Quémerais and Izmodenov (2002), is actually the line width converted to temperature units using the relation between the thermal velocity V_th = (2 k_B T / m_H)^{1/2} and the line width.
Using the same notation as before, the LOS temperature is obtained from the second moment of the line profile, T_LOS = (m_H / k_B) ∫ (v − ⟨v⟩)² I(v) dv / ∫ I(v) dv, where I(v) gives the intensity profile as a function of the LOS projected velocity v. Line width values are more difficult to derive from the SWAN H cell data than line shifts. Indeed, if some stellar light, noted I_star here, is added to the Lyman α data for a given LOS, the absorption factor is modified. If the value of I_star is not equal to zero, the value of the mean line shift is not affected much, but the integral of the absorption profile changes, and thus the derived temperature changes significantly. To solve this problem, we used the measurements of the so-called BaF 2 pixel of the +Z sensor. As mentioned by Bertaux et al. (1995), each detector has two active pixels on the sides of the 5 × 5 array. One of the side pixels of the +Z sensor was covered by a window made of BaF 2 . This window is opaque to Lyman α photons. Using the data recorded by this pixel over many months, we compiled a full-sky map that excludes the Lyman α background. This map can then be used to determine which areas of the sky are free of stellar contamination. This way, we created a mask that allows us to keep only the data not contaminated by starlight. Figure 5 shows the LOS temperature as a function of the upwind angle derived from the orbits of 1996 and 1997. For each orbit, there are 2 curves, one for angles larger than 30° and one for angles smaller than 30°. Because the upwind direction is very close to the galactic plane, applying a strict limit on stellar counts for each LOS removes all points within 30° of upwind. However, some lines of sight show a low stellar contamination, of a few percent of the upwind intensity. Keeping those points, we were able to determine a curve for angles between zero and 30°. This second curve shows larger uncertainties. The most striking feature of these two curves is that they are not monotonic.
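As a numerical check of this moment definition, a short Python sketch (the grid and the Gaussian test profile are illustrative assumptions, not SWAN data) recovers the temperature of a synthetic line with the solar-minimum upwind values quoted in this paper:

```python
import math

K_B, M_H = 1.380649e-23, 1.6735575e-27  # Boltzmann constant (J/K), H atom mass (kg)

def los_moments(v, I):
    """Mean LOS shift and kinetic temperature from the first and second
    moments of a sampled intensity profile I(v); T = m_H * variance / k_B."""
    norm = sum(I)
    mean = sum(vi * Ii for vi, Ii in zip(v, I)) / norm
    var = sum((vi - mean) ** 2 * Ii for vi, Ii in zip(v, I)) / norm
    return mean, M_H * var / K_B

# Synthetic Gaussian with thermal width v_th = sqrt(2 k_B T / m_H)
T_TRUE, V0 = 14000.0, -25.5e3                     # K, m/s (values from the text)
v_th = math.sqrt(2 * K_B * T_TRUE / M_H)
v_grid = [-80e3 + i * 27.5 for i in range(4001)]  # uniform grid, -80 to +30 km/s
profile = [math.exp(-((vi - V0) / v_th) ** 2) for vi in v_grid]
mean_shift, t_los = los_moments(v_grid, profile)  # recovers V0 and T_TRUE
```

The recovered mean shift and temperature match the input values to within the discretization error of the grid.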
The temperature reaches a minimum value around 11000 K for an upwind angle of 60°. This feature does not appear in hot model calculations, whether they include full radiative transfer effects on the line profile or simply compute the first-order scattering of photons (Quémerais 2000). For a hot model, the temperature dependence of line profiles from upwind to downwind is always monotonic. We also find that the upwind LOS temperature is around 14000 K and the downwind value is close to 18000 K. This departure from a monotonic variation with the upwind angle was also noted by Costa et al. (1999) using a different method. Quémerais (2000) computed LOS temperatures as seen from Earth's orbit for various models of the hydrogen distribution. What was found is that the LOS temperature of the IP line within 40° to 50° of the upwind direction is almost constant. The line width starts to increase for angles larger than 50°. This increase in the line width is due to changes of the line shape caused by acceleration and deceleration by radiation pressure, and also to selection effects of the fast atoms through ionization processes.
In that respect, we can argue that the decrease in LOS temperature between 0 and 50° is a clear signature of the existence of two hydrogen populations contributing to the total line profile. Following Izmodenov et al. (2001), we can divide the hydrogen atoms in the heliosphere into four distinct populations. First, we consider the interstellar component that has gone through the heliospheric interface without any interaction with the protons. This population, at large distances from the sun, has the distribution parameters of the interstellar gas, i.e. a bulk velocity close to 26 km/s and a temperature close to 6000 K. The second population is the one created by charge exchange with protons of the compressed interstellar plasma. Models predict a strong deceleration and heating of this population. These two populations are the main contributors to the IP line profile at 1 AU, while the other two populations, created by charge exchange with the solar wind, can be neglected here (Quémerais and Izmodenov 2002).
A simple model of the two populations is shown in Fig. 6. The profiles are represented by Gaussian functions, and the plots are made in the solar rest frame. One population has the parameters of the interstellar gas, i.e. a velocity of 30 km/s and a temperature of 6000 K. We used a velocity of 30 km/s for the primary component (26 km/s accelerated by selection effects) to account for the selection of faster atoms by ionization processes (see Quémerais and Izmodenov 2002, Table 4). The other component is decelerated in the solar rest frame and heated by the heliospheric interface; its parameters are given by a velocity of 20 km/s and a temperature of 14000 K. The resulting line profile in the upwind direction has a mean shift of -25 km/s and a temperature of a bit less than 14000 K. We have computed the line shift and line width of the sum of these two line profiles projected on an LOS with an angle from upwind between zero and 50°. The results are shown in Table 3. We find values similar to the observations. We do not claim that this simple model fits the data. It is just an example to illustrate why the LOS temperature decreases between 0 and 50°, i.e. because the Doppler shift between the two components of the line decreases as the cosine of the angle with the upwind direction. After 50° from upwind, dynamic effects on the hydrogen distribution make the line width increase again. Actual modeling is required here to correctly interpret these data. Figure 7 shows the LOS temperatures found for the four orbits from 1999 to 2002.
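The two-population illustration above can be reproduced numerically. A minimal Python sketch (assuming equal column contributions from the two components, a weighting the text does not specify) computes the mean shift and apparent temperature of the summed profile as a function of angle from upwind:

```python
import math

K_B, M_H = 1.380649e-23, 1.6735575e-27  # Boltzmann constant (J/K), H mass (kg)

def two_pop_moments(theta_deg,
                    pops=((-30e3, 6000.0), (-20e3, 14000.0))):
    """Mean shift and apparent temperature of a sum of two Gaussian
    components (bulk LOS velocity in m/s, temperature in K); each bulk
    velocity is projected on a LOS at theta_deg from upwind.  Equal-area
    (equal column) weighting is an assumption of this sketch."""
    mu = math.cos(math.radians(theta_deg))
    v = [-70e3 + i * 12.5 for i in range(8001)]  # m/s grid, -70 to +30 km/s
    I = [0.0] * len(v)
    for v_bulk, T in pops:
        v_th = math.sqrt(2 * K_B * T / M_H)
        for j, vj in enumerate(v):
            I[j] += math.exp(-((vj - v_bulk * mu) / v_th) ** 2) / v_th
    norm = sum(I)
    mean = sum(vj * Ij for vj, Ij in zip(v, I)) / norm
    var = sum((vj - mean) ** 2 * Ij for vj, Ij in zip(v, I)) / norm
    return mean, M_H * var / K_B

mean_up, t_up = two_pop_moments(0.0)   # upwind: mean near -25 km/s, T below 14000 K
mean_50, t_50 = two_pop_moments(50.0)  # the apparent temperature decreases with angle
```

With these parameters the upwind mean shift is -25 km/s and the apparent temperature a bit below 14000 K, as stated in the text, and the apparent temperature drops toward 50° because the Doppler separation of the two components shrinks as the cosine of the angle.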
The data from these orbits are noisier due to the degradation of the sensitivity of the sensor units (Quémerais and Bertaux 2002). It was not possible to recover profiles for LOS's within 20° of the upwind direction or within 30° of the downwind direction. However, two results can be seen from Fig. 7, where the two curves from 1996 and 1997 have been added for comparison. First, the temperature minimum around 60° from upwind has either shifted to lower angle values or even disappeared. This suggests that the shift between the two components is smaller than in 1996, or that one of the components has become less important relative to the other, giving more weight to the parameters of the other component in the line profile. Second, the curves show lower temperatures than what was found in 1996 and 1997. The overall decrease is around 1000 K. This feature also suggests that one of the components has become relatively less important, thus yielding an apparently cooler profile. It is unlikely that the shift between the components has decreased, because this wouldn't change the LOS temperature for crosswind lines of sight. The LOS temperature profiles presented in this section show that the IP line profile is most likely made up of two components due to distinct hydrogen populations. The first one is composed of unperturbed interstellar hydrogen atoms getting close to the sun. The second one is composed of hydrogen atoms created after charge exchange with interstellar protons compressed and heated in the heliospheric interface. The bulk velocity difference explains why the IP line profile line width decreases when the upwind angle of the LOS goes from zero to 50°. We also find that the profiles appear cooler during solar maximum than during solar minimum. This could indicate that the increase of the ionization processes, and also of radiation pressure, with the solar cycle is more effective on one population than the other. The slower population will be more ionized than the fast one, for instance. Consequently, this results in narrower line profiles. These results will be compared with model computations in a future work.
Comparison with HST profiles
Direct measurements of the Lyman α line profile require a very good spectral resolution which is not easily obtained with a space instrument. Fortunately, the STIS instrument on the Hubble Space Telescope has provided a few measurements in June 2000 and March 2001. This section presents the available upwind spectrum and a comparison with the results obtained from the SWAN H cell data.
The data
The data presented in this section were obtained with the STIS instrument on-board the Hubble Space Telescope. One measurement was obtained in March 2001 (Upwind LOS).
The main problem for the observation of the interplanetary line profile from Earth's orbit is caused by the existence of the strong emission of the geocorona. At the altitude of the Hubble Space Telescope, the geocoronal emission is 5 to 15 times brighter than the interplanetary line depending on the direction of the LOS. The best time to observe the interplanetary line is when the Doppler-shift between the two lines is at its maximum. For the upwind direction, this is in March when the Earth velocity vector is toward the upwind direction. In that case, the relative motion between the H atoms and the observer is close to 50 km/s. Crosswind observations are limited to a relative velocity of 30 km/s because their LOS is perpendicular to the velocity vector of the interstellar H atoms. This maximum value is obtained when the Earth velocity vector is perpendicular to the interstellar wind direction, i.e. either when the Earth is upwind from the Sun (early June) or downwind (early December each year).
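The observing geometry described above amounts to simple velocity projections on the line of sight. A back-of-the-envelope Python sketch (using round numbers from the text; the Earth's mean orbital speed of about 29.8 km/s is an added assumption):

```python
# LOS-projected relative speed between the H atoms and the observer:
# the interplanetary and geocoronal lines separate best when this is largest.
V_WIND_LOS_UPWIND = 21.5e3  # m/s, upwind LOS velocity at solar maximum (text)
V_EARTH = 29.8e3            # m/s, Earth's mean orbital speed (assumed here)

def relative_speed(v_wind_los, v_earth_los):
    """Both contributions add along the LOS; signs folded into magnitudes."""
    return abs(v_wind_los + v_earth_los)

# March, upwind LOS: the Earth's velocity vector points toward upwind,
# so both terms project fully -> "close to 50 km/s" in the text.
upwind_march = relative_speed(V_WIND_LOS_UPWIND, V_EARTH)
# Crosswind LOS: the wind projects to zero on the LOS; at best the
# Earth's velocity lies along the LOS -> limited to ~30 km/s.
crosswind_best = relative_speed(0.0, V_EARTH)
```

This reproduces the ~50 km/s upwind and ~30 km/s crosswind separations quoted in the text.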
The upwind measurement was obtained on March 21, 2001 when the Doppler shift between the interplanetary line and the geocorona is close to its maximum (Position 2). In that case, the two lines are well separated. Figure 8 shows both the geocorona and the interplanetary line. A symmetric line shape fitting the geocorona has been removed from the data. The residual shows the upwind interplanetary line.
We estimated the LOS values of the velocity and the temperature of the line, corrected for the Earth's motion. These apparent values are larger than the actual values because of the convolution of the actual line profile with the line spread function of the instrument. The geocoronal emission from Fig. 8 gives a good estimate of the line spread function (LSF): its LOS temperature is equal to 5300 K, whereas the actual temperature of the geocorona is around 1000 K.
We have used the LSF deduced from the geocorona to deconvolve the upwind line profile. The result is shown in Fig. 9. First, the data were fitted to a Voigt function. Then, this function was deconvolved yielding the spectrum. By assuming that the geocorona gives the LSF, we have slightly overestimated the actual width of the LSF, as reflected in the larger uncertainty in the temperature estimate. Future observations of the martian neutral H atom emission at Lyman α will better estimate the LSF because the martian emission profile has a thermal width equivalent to a few hundred K (≈ 200 K).
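For Gaussian profiles, variances add under convolution, and the LOS temperature is proportional to the variance, so the instrumental broadening subtracts linearly in temperature units. A small sketch of this correction (the 5300 K and ~1000 K geocoronal values are from the text; the 15300 K measured upwind width is a hypothetical input for illustration only):

```python
def deconvolved_temperature(t_obs, t_lsf_obs, t_lsf_true):
    """Remove the instrumental contribution, assuming Gaussian profiles:
    sigma_obs^2 = sigma_true^2 + sigma_lsf^2, and T is proportional to
    sigma^2, so equivalent temperatures subtract linearly."""
    t_instrument = t_lsf_obs - t_lsf_true  # apparent minus true geocoronal T
    return t_obs - t_instrument

# Geocorona: measured 5300 K vs an actual ~1000 K -> LSF worth ~4300 K.
# A hypothetical measured upwind width of 15300 K would then give 11000 K.
t_upwind = deconvolved_temperature(15300.0, 5300.0, 1000.0)
```

Because the geocoronal line slightly overestimates the true LSF width, a correction of this kind tends to slightly underestimate the deconvolved temperature, as noted in the text.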
The resulting parameters for the upwind LOS after deconvolution are compatible with the SWAN H cell measurements. First, the mean line shift measured in 2001 was 20.3 km/s, while the SWAN result is 21.4 km/s. Taking into account a possible bias due to the removal of the geocoronal line from the HST spectrum, we get a correct agreement, thus confirming the change in the line shift of the IP line from solar minimum to solar maximum. Note also that a previous HST measurement made by GHRS (Clarke et al. 1995, 1998) agreed with the SWAN value of -25.7 km/s for the solar minimum IP mean line shift. In Fig. 9, the diamonds show the data, the thin solid line shows a Voigt function fit to the data, and the thick solid line shows the deconvolved spectrum obtained from the Voigt function fit, assuming that the LSF is given by the geocoronal profile.
The temperature found from the profile is close to 11000 K. This value may be slightly underestimated because the LSF of the STIS instrument is not as wide as the Earth's coronal line. Comparing with the values shown in Fig. 7, we find correct agreement. This also confirms that the upwind IP line profile LOS temperature has decreased from 14000 K to 11000 K from solar minimum to solar maximum. It also shows that the temperature inflexion seen at 60° from upwind at solar minimum has more or less disappeared at solar maximum.
Conclusion and discussion
This analysis of the SWAN H cell data has allowed us to reconstruct interplanetary Lyman α line profiles between 1996 and 2003. This period covers the solar activity minimum of 1996 and the maximum of 2001.
We have found that the mean line shift changes from a LOS velocity of 25.7 km/s to 21.4 km/s in the solar rest frame. This deceleration is mainly due to changes in radiation pressure with increasing activity, although changes of the ionizing fluxes are also involved. Detailed modelling will be necessary to reproduce this large variation in the mean line shift.
A comparison with a spectrum recorded by STIS on HST yields good agreement. The STIS mean line shift corresponds to an LOS velocity of 20.3 km/s which, given the uncertainties and possible biases involved in both analyses, is quite acceptable. We should also point out that the mean line shift change seen by SWAN is very rapid: we find a variation of 3 km/s between 1997 and 1999. This seems hard to explain by changes in radiation pressure alone.
The SWAN H cell data were also used to determine line widths (LOS temperatures). It is found that the line width variation with upwind angle is not monotonic, contrary to what is usually found from hot model computations. This can be interpreted as evidence that the IP line profile is made of two distinct components scattered by populations with different bulk velocities and temperatures. These different components have been theoretically predicted by models of the hydrogen interaction at the heliospheric interface. Here we have an observable effect on the line width created by these two populations. Actual model computations of the hydrogen distribution and backscattered profiles will be made to test this explanation.
Finally, we found that the LOS temperature profiles are cooler during solar maximum than solar minimum. The line width change corresponds to a decrease in the LOS temperature by 1000 K. A tentative explanation is that the slow hydrogen population component is more effectively ionized than the fast one during solar maximum. This results in a smaller contribution to the total line shape and hence a narrower line profile.
The results obtained in this analysis are summarized by the following list:
- The mean line shift of the IP line changes from an LOS velocity of -25.7 km/s at solar minimum to -21.4 km/s at solar maximum.
- The LOS kinetic temperatures show a variation with the angle from upwind which is not monotonic but has a minimum around 60°. This suggests that the IP line is composed of two components with different mean line shifts.
- During solar maximum, the LOS kinetic temperatures decrease slightly and the minimum is less pronounced, suggesting that the ratio between the different components is changed from solar minimum conditions.
- Spectra obtained by HST in 1995 and 2001 give LOS velocities and temperatures that are compatible with the SWAN H cell measurements.
These empirical results will be confronted with model calculations in a future work.
"year": 2006,
"sha1": "b1161c6dfbe58a017c7f786d25c7eae7346bd16d",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2006/33/aa5169-06.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "bbd406551f30d43065777ca8765d6a514998de27",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Key site residues of pheromone-binding protein 1 involved in interacting with sex pheromone components of Helicoverpa armigera
Pheromone binding proteins (PBPs) are widely distributed in insect antennae, and play important roles in the perception of sex pheromones. However, the detailed mechanism of interaction between PBPs and odorants remains in a black box. Here, a predicted 3D structure of PBP1 of the serious agricultural pest Helicoverpa armigera (HarmPBP1) was constructed, and the key residues that contribute to binding with the major sex pheromone components of this pest, (Z)-11-hexadecenal (Z11-16:Ald) and (Z)-9-hexadecenal (Z9-16:Ald), were predicted by molecular docking. The results of molecular simulation suggest that hydrophobic interactions are the main linkage between HarmPBP1 and the two aldehydes, and four residues in the binding pocket (Phe12, Phe36, Trp37, and Phe119) may participate in binding with these two ligands. Site-directed mutagenesis and fluorescence binding assays were then performed, and a significant decrease in the binding ability to both Z11-16:Ald and Z9-16:Ald was observed in three mutants of HarmPBP1 (F12A, W37A, and F119A). These results revealed that Phe12, Trp37, and Phe119 are the key residues of HarmPBP1 in binding with Z11-16:Ald and Z9-16:Ald. This study provides new insights into the interactions between pheromones and PBPs, and may serve as a foundation for a better understanding of pheromone recognition in moths.
PBPs are thought to recognize distinct pheromone components and enhance the sensitivity of PRs in response to pheromones 15,16 . Because of their high sensitivity to pheromone components, PBPs often serve as molecular targets in the design of attractants for moths and other insect species 17,18 .
It is well accepted that insect PBPs play important roles in pheromone perception 7,9 . However, the detailed interaction mechanism between pheromones and PBPs is still unknown. Many three-dimensional (3-D) structures of insect PBPs have been solved in crystal form or in solution since the structure of the BmorPBP/bombykol complex was reported [19][20][21][22][23] . Most insect PBPs exhibit a series of identical structural characteristics, including six or seven α-helices, three strictly conserved disulfide bridges, and a hydrophobic binding pocket. However, structural diversity is also observed, and such differences give insect PBPs different cavity shapes and openings to accommodate distinct ligands 19,[23][24][25][26][27] . Various studies have suggested that lepidopteran PBPs undergo a pH-dependent conformational change associated with a significant decrease in affinity at low pH values 18,19,21,22,[28][29][30][31] . The C-terminals of moth PBPs fold into an additional α-helix and enter the binding pocket to occupy the corresponding pheromone-binding sites at acidic pH, whereas at neutral pH, the additional helix withdraws from the binding pocket and makes it available for pheromone binding 19,24 . Other insect PBPs with short C-terminals, such as LmaPBP in the cockroach, cannot form the additional helix but instead form a lid to cover the binding pocket, and such a 'lid' also affects the binding of PBPs to ligands 23 . All this research revealed that insect PBPs possess diverse mechanisms of ligand binding and release, and such mechanisms relate closely to the structures of the PBPs. It also suggested that structural study at the molecular level should be helpful for understanding the action mode and binding specificity between pheromones and PBPs.
In recent years, the interactions between ligands and insect PBPs have been proposed based on the diversity of key residues. Many amino acids have been identified as critical residues for ligand binding 19,25,32 . In moth species, the structure of the BmorPBP/bombykol complex revealed that Ser56 forms a specific hydrogen bond between bombykol and BmorPBP 19 , and in A. polyphemus, Asn53 has been confirmed to be the key site in the specific recognition of acetate 25 . Besides, the structure of the LUSH/cVA complex in Drosophila melanogaster showed that cVA forms two polar interactions with Ser52 and Thr57 in the binding pocket 32 .
The cotton bollworm, Helicoverpa armigera, is one of the most serious agricultural pests worldwide and causes great damage to cotton and other crops 33 . This insect utilizes Z11-16:Ald and Z9-16:Ald as the primary components of the pheromone blend 3 . Previously, three PBP genes, HarmPBP1-3, were identified, and the results of fluorescence binding assays revealed that HarmPBP1 binds the two principal pheromone components equally, with strong affinities 34,35 . HarmPBP1 may therefore play key roles in the pheromone perception of H. armigera. In the present study, we built a 3D model of the HarmPBP1 structure to predict the potential binding sites by homology modeling and molecular docking. The binding roles of these predicted residues were further investigated by site-directed mutagenesis and fluorescence binding assays. This work will help to deepen our understanding of the interaction between HarmPBP1 and sex pheromone components in H. armigera.
Results
Expression of recombinant HarmPBP1. The coding region of HarmPBP1 was sub-cloned into an E. coli expression vector pET-32a/TEV and confirmed by PCR and sequencing. Protein expression was induced for 12 h by adding IPTG (1.0 mM) to the cell culture. The induced and non-induced cells were sonicated, and the crude inclusion body and supernatant fractions were analyzed by SDS-PAGE. It was found that the recombinant HarmPBP1 was expressed in both the supernatant and the inclusion body. The supernatant was then collected and purified on His-Trap affinity columns (GE Healthcare, USA), followed by removal of the His-tag with TEV protease. SDS-PAGE analysis indicated that the molecular weight of the final purified HarmPBP1 was about 15 kDa (Fig. 1), which is consistent with the theoretical molecular weight calculated by the pI/Mw online program (http://web.expasy.org/compute_pi/).

From a BLAST search in the Protein Data Bank (PDB), four structurally determined OBPs, Bombyx mori PBP (BmorPBP), Amyelois transitella PBP (AtraPBP1), Antheraea polyphemus PBP (ApolPBP) and Bombyx mori OBP (BmorGOBP2), were found to share sequence similarities with HarmPBP1. The total sequence identity between the target protein (HarmPBP1) and the template protein (BmorPBP) is 67% (Fig. 2A). Thus, to guarantee the quality of the homology model, BmorPBP, with its high level of sequence identity, was used as a template to construct the 3D structure of HarmPBP1.
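A theoretical molecular weight like the one checked above can be computed from the sequence with standard average residue masses; a minimal, self-contained Python sketch (the short fragment used below is hypothetical, not the HarmPBP1 sequence):

```python
# Average residue masses in Daltons (amino acid minus one water), standard values.
RESIDUE_MASS = {
    'G': 57.05, 'A': 71.08, 'S': 87.08, 'P': 97.12, 'V': 99.13,
    'T': 101.10, 'C': 103.14, 'L': 113.16, 'I': 113.16, 'N': 114.10,
    'D': 115.09, 'Q': 128.13, 'K': 128.17, 'E': 129.12, 'M': 131.19,
    'H': 137.14, 'F': 147.18, 'R': 156.19, 'Y': 163.18, 'W': 186.21,
}
WATER = 18.02  # Da, one water added back for the N- and C-termini

def molecular_weight(seq):
    """Average molecular weight of a peptide or protein, in Daltons."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

mw = molecular_weight("ASQD")  # hypothetical 4-residue fragment for illustration
```

Summing the masses over a full ~130-residue PBP sequence is what the ExPASy pI/Mw tool does for the ~15 kDa estimate cited above.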
The overlap between the 3D model of HarmPBP1 and the template showed a high similarity of 0.828, which reveals that the overall conformation of the target protein is very similar to that of the template (Fig. 2B,C). The predicted 3D structure demonstrates that HarmPBP1 is a "classical PBP". Six α-helices are located between residues Ser1-Asp24 (α1), Asp27-Trp37 (α2), Asn45-Glu60 (α3), Gln64-Gly81 (α4), Asp83-Thr101 (α5), and Asp107-Asn127 (α6). Four antiparallel helices (α1, α4, α5 and α6) converge to form the hydrophobic binding pocket. The converging ends of the helices form the narrow end of the pocket, and the opposite end of the pocket is capped by α3 (Fig. 2B). Disulfide bonds and helix-helix packing reinforce the organization of the helices. Three pairs of disulfide bridges are observed between Cys19-Cys54, Cys50-Cys109, and Cys97-Cys118 (Fig. 2B). In this model, most of the amino acid residues that form the pocket are hydrophobic, such as phenylalanine, tryptophan, alanine, valine, leucine, and isoleucine.
To further investigate the potential key residues in HarmPBP1, Z11-16:Ald and Z9-16:Ald were selected to dock with the 3D model. The docking results showed that the two ligands are consistent in orientation and well overlapped in the binding pocket (Figure S1).
Site-directed mutagenesis and binding characterization of mutants.
Based on the 3-D structure modeling and molecular docking described above, combined with an X-ray structure of the HarmPBP1/Z9-16:Ald complex (unpublished data), we predicted that four residues (Phe12, Phe36, Trp37, and Phe119) may play important roles in ligand binding. To verify the importance of these residues, alanine scanning mutagenesis modeling was performed, and the binding free energies for Z11-16:Ald with the wild-type (WT) and the four mutants of HarmPBP1 were calculated (Table S1). Mutants F12A and F119A showed significant differences from the WT in binding to Z11-16:Ald. Meanwhile, W37A also showed a certain effect on the binding of Z11-16:Ald. However, the binding free energy of Z11-16:Ald with F36A changed only slightly compared to that with the WT.
All four residues were mutated to alanine, respectively, using a site-directed mutagenesis kit. In addition, Gln64, a randomly selected residue on the loop between helices α3 and α4, was mutated to alanine as a control. The recombinant mutants F12A, F36A, W37A, F119A, and Q64A were expressed and purified as described above. The purified proteins were also checked by SDS-PAGE (Fig. 1). The expression levels of the mutants were apparently the same as that of wild-type HarmPBP1.
The affinities of all mutants for Z11-16:Ald and Z9-16:Ald were also investigated by fluorescence binding assays (Fig. 4). The results showed that, compared to wild-type HarmPBP1, each of the four mutants F12A, F36A, W37A and F119A showed a different degree of decline in binding capacity to the sex pheromone compounds, whereas there was almost no change in the binding ability of Q64A with the two ligands. Three mutants, F12A, W37A and F119A, had lower affinities for both Z11-16:Ald and Z9-16:Ald than F36A.
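For reference, in competitive fluorescence assays of this kind (typically with 1-NPN as the fluorescent probe), binding is usually quantified as a dissociation constant derived from the measured half-displacement concentration with the standard competitive-binding correction. The formula and the numbers below are illustrative assumptions, not values from this study:

```python
def dissociation_constant(ic50_uM, probe_uM, probe_kd_uM):
    """Ki = IC50 / (1 + [probe]/Kd_probe): corrects the half-displacement
    concentration of the competitor for the occupancy of the fluorescent
    probe already bound to the protein."""
    return ic50_uM / (1.0 + probe_uM / probe_kd_uM)

# Hypothetical example: IC50 = 6 uM measured with 2 uM probe of Kd = 1 uM.
ki = dissociation_constant(6.0, 2.0, 1.0)
```

A mutant that binds more weakly shows a larger IC50 in the displacement curve and hence a larger Ki, which is the form in which the declines described above would typically be reported.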
Discussion
PBPs are known to bind and transport hydrophobic pheromone molecules across the sensillum lymph to PRs, and to enhance the sensitivity of PRs to sex pheromones 13,14,16,36-39 . It has also been reported that PBPs can specifically bind distinct pheromone components 11,15,40 , and such binding specificity is attributed to the spatial structures of the proteins and ligands, especially their specific interactions 41 . As a result, clarifying the structure of insect PBPs should be helpful for a better understanding of their binding mechanisms and biological roles in pheromone perception. In previous studies, some crystal structures of lepidopteran PBPs have been solved by NMR or X-ray diffraction 19,21,22 . However, the structures of H. armigera OBPs/PBPs are still lacking.
Three PBPs of H. armigera were reported in our previous study 35 . The results of fluorescence binding assays showed that HarmPBPs can specifically bind different pheromone components of H. armigera 34,35,42 . The main composition of the H. armigera pheromone blend contains two hexadecenals, Z11-16:Ald and Z9-16:Ald 43 . Both Z11-16:Ald and Z9-16:Ald have carbon chains of similar size, and HarmPBP1 showed stronger affinities for these two aldehydes than for other minor components 34,35 . Therefore, we decided to predict the structure of HarmPBP1 by 3D homology modeling, and Z11-16:Ald and Z9-16:Ald were selected as suitable ligands to dock with this structure.
From a BLAST search in the PDB, BmorPBP1 (1DQE), with the highest sequence similarity (67% identity) to HarmPBP1, was selected as the template to build a 3D homology structure of HarmPBP1. Subsequent docking results revealed that the binding cavity of HarmPBP1 is mainly formed by hydrophobic residues, and Z11-16:Ald and Z9-16:Ald are well overlapped in the binding pocket (Figure S1). Extensive hydrophobic interactions were observed to contribute to the binding between the protein and the ligands, but no hydrogen bond was found in this structure. Actually, although hydrogen bonds have been confirmed to be the primary link between proteins and ligands in several insect OBPs [44][45][46][47] , there are still some OBPs that only form hydrophobic interactions or van der Waals interactions 48,49 . In the docking structure of HarmPBP1, Phe12 and Phe119 are located on the two sides of the ligands, and the molecular plane of the ligands is sandwiched by these two residues with their aromatic rings parallel (Fig. 3). Such a sandwich-like pose helps to stabilize the binding conformation of the ligands, so we suspected that Phe12 and Phe119 should be important binding sites. Phe36 and Trp37 are close to the ligands, and may also play roles in the formation of hydrophobic interactions. Hence, we predicted that four active sites, Phe12, Phe119, Phe36 and Trp37, were possibly responsible for the ligand binding of HarmPBP1. Alanine scanning mutagenesis modeling was later performed to verify this prediction. The results showed that mutants F12A and F119A were of remarkable difference in binding to Z11-16:Ald from the wild-type of
For recombinant proteins expression
HarmPBP1-forward GGCCATGGCGTCGCAAGATGTTATTA a Table 1. Primers used in this study. a "__"represent the restriction sites, b "__" represent the mutation sites.
For site-directed mutagenesis
HarmPBP1, suggesting that these two residues of HarmPBP1 should be important on the ligand binding. W37A also showed a certain effect on the binding with Z11-16:Ald, indicating its potential contribution to the ligand binding. F36A demonstrated a slight change on the binding free energy of Z11-16:Ald, which suggested that this residue might not vital to the ligand binding. Further site-directed mutagenesis and fluorescence binding assays were performed to characterize the binding abilities of the four mutants of HarmPBP1. A random mutation, Q64A was set as one of the control. The results of binding tests revealed that Q64A had no difference in affinity to Z11-16:Ald and Z9-16:Ald compared with the wild-type protein, which suggested that non-specific mutation could not affect the interactions between proteins and ligands. Both the single amino acid mutants, F12A and F119A could not efficiently bind to Z11-16:Ald and Z9-16:Ald. A possible explanation is that ligands cannot remain in the binding cavity due to the loss of the hydrophobic interactions between ligands and residues. Ligands are sandwiched by Phe12 and Phe119 with their aromatic rings, and such stable binding conformation was broken when any of these two residues was mutated to alanine. As a result, we suggested that Phe12 and Phe119 play the key roles in the ligand-binding of HarmPBP1. Mutant W37A showed a certain decrease in affinity to Z11-16:Ald and Z9-16:Ald due to the changes of hydrophobic interaction between the mutant and ligands. Thus, W37 is also an important binding site of HarmPBP1. Another mutant F36A, however, showed nearly no change in its binding ability to Z11-16:Ald and Z9-16:Ald. Therefore, we suspected that Phe36 may not be involved in the binding with Z11-16:Ald and Z9-16:Ald, or may participate in the binding with other ligands. 
All four residues are highly conserved in lepidopteran PBPs and most GOBPs 19,25,35 , but only Phe12 and Phe119 contribute significantly to binding Z11-16:Ald and Z9-16:Ald. Interestingly, these two residues also play important roles in the binding between BmorPBP1 and bombykol 19 . Moreover, in SlitOBP1, mutation of Phe12 and Phe118 lowered the docking scores for all tested chemicals in simulated site-directed mutagenesis, and the recombinant Phe12 mutant could not bind any of the ligands that showed good affinity to the wild-type protein 50 . These results suggest that some conserved hydrophobic residues, such as Phe12 and Phe119, may be responsible for non-specific binding among different lepidopteran OBPs. On the other hand, the strictly conserved Phe36 has been confirmed as the key residue of LdisPBP1 in binding its pheromone and analogues 51 . In the current study, however, the affinity of mutant F36A to Z11-16:Ald and Z9-16:Ald showed nearly no change compared with the wild-type protein. In view of this difference, we speculate that, besides the amino acids contributing to non-specific binding, other residues act as the key sites for binding specific components in lepidopteran OBPs. Further clarifying this functional difference among the conserved residues in the binding pocket will be important and interesting.
Our data indicate that multiple hydrophobic interactions play key roles in the ligand binding of HarmPBP1. They also show that, besides NMR or X-ray diffraction of protein-ligand complexes, molecular docking combined with mutant binding assays can be an effective tool for analyzing the molecular mechanisms of ligand-protein interactions. Moreover, the results of this study may serve as a foundation for future studies on integrated pest management through manipulating the pheromone detection of target insects.
Insects. A colony of H. armigera was maintained in the laboratory of the Institute of Plant Protection, Chinese
Academy of Agricultural Sciences. Larvae were reared on an artificial diet at 26 ± 1 °C, 60% ± 5% RH, and a 14 h light: 10 h dark photoperiod. After emergence, adult moths were fed with 10% honey solution. Antennae were removed from three-day-old male moths and immediately stored in liquid nitrogen until use. RNA extraction and cDNA synthesis. Total RNA was isolated from antennal samples with the SV Total RNA Isolation System (Promega, Madison, USA) following the manufacturer's protocol. RNA integrity was checked by 1.2% agarose gel electrophoresis, and concentration was quantified with a ND-1000 spectrophotometer (NanoDrop, Wilmington, DE, USA) at OD260. The high concentration (>800 ng/μL) of total RNA indicated that the sample quality met the requirements for the reverse transcription reaction. First-strand cDNA was synthesized using the SuperScript TM III Reverse Transcriptase System (Invitrogen, Carlsbad, CA, USA).
Expression and purification of recombinant HarmPBP1. The full-length sequence of HarmPBP1, identified from an H. armigera antennal cDNA library in our previous work 42 , was amplified by PCR with gene-specific primers ( Table 1). The PCR product was purified and sub-cloned into the pGEM-T vector (Promega, Madison, USA). The target sequence was excised with Nco I and Hind III and cloned into the pET-32a/TEV vector (Novagen, Germany) with T4 DNA ligase. The correct recombinant plasmid pET/HarmPBP1 was transformed into BL21 (DE3) competent cells. Cells were incubated at 37 °C until OD 600 reached 0.6-0.8, and protein expression was induced with 0.2 mM IPTG for 12 h. Cells were harvested by centrifugation at 7000 rpm for 20 min, and the pellet was re-suspended in 1 × phosphate-buffered saline (PBS). After sonication, the lysate was centrifuged at 16000 rpm for 20 min; the inclusion bodies and supernatant were collected and checked by 15% SDS-polyacrylamide gel electrophoresis (SDS-PAGE). The supernatant was passed through a 0.22 μm filter and purified by two rounds of Ni-ion affinity chromatography (GE Healthcare, USA), and the His-tag was removed with Tobacco Etch Virus (TEV) protease (GenScript, Nanjing, China). The purified protein was desalted by extensive dialysis, and the size and purity of recombinant HarmPBP1 were confirmed by 15% SDS-PAGE.
3D structure modeling and molecular docking. Simulation of site-directed mutagenesis and expression of mutants. Alanine scanning mutagenesis modeling was performed with the AMBER 14 package 54 to verify the predicted key binding sites, and the binding free energy between each active site and Z11-16:Ald was calculated by the MM-GBSA method 55 . Four mutants of HarmPBP1, F12A (phenylalanine to alanine at position 12), F36A (phenylalanine to alanine at position 36), W37A (tryptophan to alanine at position 37) and F119A (phenylalanine to alanine at position 119), were generated using the QuikChange Lightning site-directed mutagenesis kit (Stratagene, USA), and a random mutation, Q64A (glutamine to alanine at position 64), was set as a control. The pGEM-T Easy/HarmPBP1 construct was used as the template, and the specific primers designed for the mutations are listed in Table 1. The PCR conditions were 95 °C for 30 s, followed by 18 cycles of 95 °C for 30 s, 60 °C for 1 min, and 68 °C for 4 min. Verified mutants were sub-cloned into the pGEM-T Easy vector (Promega, USA). The same expression vector and competent cells were used as for HarmPBP1, and the mutant proteins were expressed and purified as described above.
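The alanine-scan comparison described above can be sketched as a simple ΔΔG screen: each mutant's MM-GBSA binding free energy is compared with the wild type, and residues whose mutation clearly weakens binding are flagged as likely key sites. The ΔG values below are hypothetical placeholders for illustration, not data from this study.

```python
# Hypothetical wild-type and mutant MM-GBSA binding free energies (kcal/mol).
WT_DG = -8.0
mutant_dg = {"F12A": -4.5, "F36A": -7.8, "W37A": -6.5, "F119A": -4.9}

def classify(dg_mut, dg_wt, threshold=1.0):
    """ddG = dG_mut - dG_wt; a loss above `threshold` kcal/mol marks a
    likely key binding residue."""
    ddg = dg_mut - dg_wt
    return ddg, ("likely key site" if ddg > threshold else "minor role")

for name, dg in mutant_dg.items():
    ddg, verdict = classify(dg, WT_DG)
    print(f"{name}: ddG = {ddg:+.1f} kcal/mol ({verdict})")
```

With the placeholder numbers above, F12A and F119A (and W37A) would be flagged while F36A would not, mirroring the qualitative pattern reported in the text.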
Fluorescence binding assays. Fluorescence binding assays were conducted on an F-380 fluorescence spectrophotometer (Gangdong Sci. & Tech, Tianjin, China) in a 1-cm light path quartz cuvette to further investigate the binding of the principal pheromone components of H. armigera, Z11-16:Ald and Z9-16:Ald, to the mutants. The fluorescent probe N-phenyl-1-naphthylamine (1-NPN) was dissolved in methanol to yield a 1 mM stock solution. Both excitation and emission slit widths were set to 10 nm. 1-NPN fluorescence was excited at 337 nm, and emission spectra were recorded between 390 and 490 nm. Z11-16:Ald and Z9-16:Ald were purchased from Sigma-Aldrich (purity >98%). All chemicals used in this study were dissolved in HPLC-grade methanol. Fluorescence measurements were performed according to Gu et al. 11 . Dissociation constants of the competitors were calculated from the corresponding IC 50 values.
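Dissociation constants in 1-NPN competition assays are commonly derived from IC50 values via the competitive-binding relation Ki = IC50 / (1 + [1-NPN] / K(1-NPN)); a minimal sketch, with illustrative numbers rather than measured values from this study:

```python
def ki_from_ic50(ic50, npn_conc, k_npn):
    """Competitor dissociation constant from its IC50 in a 1-NPN
    displacement assay (all quantities in the same concentration unit)."""
    return ic50 / (1.0 + npn_conc / k_npn)

# Illustrative values only: IC50 = 4 uM, 2 uM 1-NPN probe, K(1-NPN) = 2 uM
print(ki_from_ic50(4.0, 2.0, 2.0))  # -> 2.0
```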
Projecting Pharmaceutical Expenditure in EU5 to 2021: Adjusting for the Impact of Discounts and Rebates
Background Within (European) healthcare systems, the predominant goal for pharmaceutical expenditure is cost containment. This is due to a general belief among healthcare policy makers that pharmaceutical expenditure—driven by high prices—will be unsustainable unless further reforms are enacted. Objective The aim of this paper is to provide more realistic expectations of pharmaceutical expenditure for all key stakeholder groups by estimating pharmaceutical expenditure at ‘net’ prices. We also aim to estimate any gaps developing between list and net pharmaceutical expenditure for the EU5 countries (i.e. France, Germany, Italy, Spain, and the UK). Methods We adjusted an established forecast of pharmaceutical expenditure for the EU5 countries, from 2017 to 2021, by reflecting discounts and rebates not previously considered, i.e. we moved from ‘list’ to ‘net’ prices, as far as data were available. Results We found an increasing divergence between expenditure measured at list and net prices. When the forecasts for the five countries were aggregated, the EU5 (unweighted) average historical growth (2010–2016) rate fell from 3.4% compound annual growth rate at list to 2.5% at net. For the forecast, the net growth rate was estimated at 1.5 versus 2.9% at list. Conclusions Our results suggest that future growth in pharmaceutical expenditure in Europe is likely to be (1) lower than previously understood from forecasts based on list prices and (2) below predicted healthcare expenditure growth in Europe and in line with long-term economic growth rates. For policy makers concerned about the sustainability of pharmaceutical expenditure, this study may provide some comfort, in that the perceived problem is not as large as expected.
Introduction
European healthcare systems are under pressure to manage rising healthcare costs associated with changing demographics, rising patient expectations, and the launch of new, premium-priced medicines (and healthcare technologies more generally) addressing areas of unmet need. A seminal paper on the drivers of healthcare costs pointed to 'innovation' as the major cost driver; this was considered more important than demographics [1]-bearing in mind that the analysis focused on the US healthcare market, which is significantly different from that in Europe. Within (European) healthcare systems, the predominant goal for pharmaceutical expenditure is cost containment, with a tendency to adopt a 'silo mentality' and separately consider expenditure on particular healthcare resources, in this case pharmaceuticals [2]. According to the most recent Organisation for Economic Co-operation and Development (OECD) data (2018), pharmaceutical expenditure accounts for between 11.4% (UK) and 19.1% (Spain) of total healthcare expenditure across the five largest European drug markets, i.e. France, Germany, Italy, Spain, and the UK (the EU5) [3]. This proportion in the EU5 has fallen slightly since 2008, largely due to both cost-containment mechanisms imposed after the global financial crash and to a wave of patent expiries [3]. However, it is worth noting the limitations in the OECD data (discussed in Sect. 4), such as excluding drugs used in inpatient settings in some countries (although Italy does include that data) and including over-the-counter products; overall, total pharmaceutical expenditure should be higher.
There is a general belief among healthcare policy makers that pharmaceutical expenditure-driven by high prices-will be unsustainable in future unless further reforms are enacted. A recent OECD initiative on access to innovative pharmaceuticals and sustainability of pharmaceutical expenditure stated that "high prices compromise patient access and put an unsustainable strain on healthcare budgets" [4,5]. The same premise underlies the recent European Council conclusions on "strengthening the balance in the pharmaceutical systems in the European Union and its Member States" [5].
While this concern is widespread, there is a lack of agreement about what constitutes a 'sustainable' rate of growth for pharmaceutical expenditure and a paucity of forecasts of future growth rates upon which to inform policy making in Europe. We were unable to identify any published forecasts from governmental bodies, at either European or member state levels, that predict pharmaceutical expenditure from 2018 onwards. However, forecasting of pharmaceutical expenditure has been undertaken in some European regions (e.g. Stockholm), and horizon scanning and budgeting activities are also increasing among European countries [6][7][8]. Some forecasts are also available for orphan medicines, where the high cost per patient does not necessarily translate into issues of 'affordability' [9].
Forecasts are available from commercial organisations, with two different fundamental methodologies utilised. Most common are predictions based on reported sales data from pharmaceutical company financial returns, which extrapolate forward based on historical trends. These forecasts tend to be global in perspective, as many companies do not report sales data split out by region. Such forecasts are not particularly informative for European policy makers, as they do not reflect the differences in drug markets between Europe, the USA, and Asia. In addition, such forecasts may have less use if they report sales at 'list' prices without considering rebates, 'basket deals', and discounts (the latter being a common feature of sales to secondary care organisations).
The other forecast methodology is that applied by IQVIA (formerly Quintiles IMS) using its proprietary audited volume data collected from representative samples of pharmacies and hospitals globally. These data are used to provide estimates of historical pharmaceutical expenditure at the country, region, and global level and to forecast future trends in market growth [10]. The most recent (2016) IQVIA forecast for European pharmaceutical expenditure growth predicts a compound annual growth rate (CAGR) of between 1 and 4% across EU5 countries between 2016 and 2021 [5,11].
While IQVIA data are considered robust and are used by commercial, governmental, and academic researchers, certain aspects of the methodology may affect policy makers' interpretations of the estimates. One consideration is that IQVIA forecasts include pharmaceutical expenditure by both public (reimbursed) and private (out of pocket, private insurance) sources, whereas policy makers are primarily interested in the former. More importantly, the IQVIA methodology (described in more detail in Sect. 2) estimates historical and future expenditure using 'list' (also referred to as 'official') prices (net of published discounts), which do not reflect confidential discounts and rebates ('discounts') provided to public healthcare systems by manufacturers, especially for new medicines used in the hospital setting (rather than dispensed by retail pharmacists) [12][13][14]. Broadly speaking, two types of discounts are most commonly applied: (1) discounts or agreements at the product level, which may be negotiated by national, regional, or local payers; and (2) rebates at the industry level, whereby manufacturers retrospectively pay back money to national payers when total pharmaceutical expenditure exceeds a certain threshold. As mentioned in the following, the levels of discounts for some agreements are publicly available, albeit in aggregate.
The existence of such discounts has potentially important implications for policy makers. Historical and future estimates of pharmaceutical expenditure that are based on list prices, rather than net prices, will overstate aggregate pharmaceutical expenditure and its proportion of overall healthcare expenditure. Equally, if the magnitude of discounts is changing over time, excluding these from forecasts will also affect the predicted growth rate of future pharmaceutical expenditure. Discounts may be applied differently in different countries or healthcare settings and may be driven by different mechanisms and incentives in different settings.
While many discounts are confidential (hence their exclusion from IQVIA estimates), their prevalence and importance are believed to have increased in Europe over the last decade [15]. Indeed, the use of such agreements may in part explain the decline in relative pharmaceutical expenditure observed by the OECD between 2008 and 2015-noting the caveats around OECD data as mentioned and that most discounting takes place at the hospital level, which is not included in OECD data. This is in addition to substantial savings made during this period when several standard medicines lost their patents, including atorvastatin, clopidogrel, and esomeprazole, as well as various angiotensin receptor blockers and atypical antipsychotics. The increase in the use of discounts has been driven by increasing price pressures and international reference pricing systems [16] that incentivise manufacturers to negotiate confidential agreements that do not affect list prices [15].
A factor that may influence pharmaceutical prices and the level of discounts is patient access agreements that are based on achievement of a mutually agreed treatment outcome (see, for instance, Jommi [17], Adamski et al. [18], Pauwels et al. [19] and Clopes et al. [20]). Whilst these agreements may be confidential, are at present restricted to certain specific treatments and health problems, and have a proportionately small impact on pharmaceutical expenditure, it is important to acknowledge that the impact could be greater if some countries increase their use of these schemes.
Against this context, the objective of this study was to estimate future pharmaceutical expenditure growth rates in France, Germany, Italy, Spain, and the UK (EU5) at net prices by adjusting the established IQVIA analysis ('list forecast') for discounts that are not currently incorporated ('net forecast'). In doing so, the paper aims to provide more realistic expectations of pharmaceutical expenditure for all key stakeholder groups. Adjustments were made to the list forecast as follows: • Historical estimates of pharmaceutical expenditure from 2010 to 2016 ('historical list estimate') were adjusted to reflect discounts not previously considered ('historical net estimate'). • Forecasts of future expenditure from 2017 to 2021 were re-run using the adjusted historical data to derive the net forecast.
The focus of the analysis was the EU5 countries individually and in aggregate, as they contribute appreciably to the overall expenditure of medicines in Europe. An example of the type of sensitivity analyses that could be performed is also provided (impact of biosimilars).
Methods
We followed a four-step approach, as depicted in Fig. 1. We started from IQVIA's historical list estimates ('1' in Fig. 1) and its list forecast ('2' in Fig. 1). We then estimated the discounts that have historically been observed in each country and that are not included in the historical list estimates, to create the historical net estimates ('3' in Fig. 1). Finally, we adjusted the list forecast for each country to reflect the historical net estimates and arrive at a net forecast ('4' in Fig. 1).
Step 1: Historical List Estimate (2010-2016)
Our starting point was IQVIA's data and forecasts, which we revised accordingly. IQVIA MIDAS ® data are volume based, tracking virtually every medicine through retail and non-retail channels, with official, non-confidential prices applied at pack level to assess value spend [11,21]. Price data are captured at different points in the supply chain by market, e.g. pharmacy selling price, wholesaler price, ex-manufacturer price. However, country-specific mark-ups are used to reflect price at the publicly available ex-manufacturer level.
Fig. 1 Methodological approach. Numbers in circles indicate steps (see main text for explanation). Rx = medicines that require a prescription.
IQVIA data capture expenditure on medicines that require a prescription (labelled as Rx), as well as those that do not (non-Rx). Rx expenditure represents the majority of sales value, given the high levels of reimbursement across EU5 markets-ranging between 87% in France and 97% in the UK (IQVIA data on file). The value split between Rx and non-Rx has remained mostly stable over the last 10 years. It should be noted that we are interested in total pharmaceutical expenditure and thus do not report the expenditure on branded medicines and generics separately. However, given the future importance of biosimilars, we report the impact of some sensitivity analysis around biosimilar uptake and price competition.
Step 2: List Forecast (2017-2021)
IQVIA's country-specific forecasts combine historical sales data, macroeconomic indicators, and expected events (e.g. new product launches) to estimate future pharmaceutical expenditure [11,21]. First, historical volume and price data are analysed and plotted. Second, baseline projections are developed using exponential smoothing techniques to represent the extrapolation of underlying conditions. Third, events are assessed, quantified, and applied to baseline projections. Events can include major new product launches (informed by IQVIA LifeCycle R&D Focus, a global database covering more than 31,000 medicines in research or development), generic competition, and legislative/policy change (among others). Macroeconomic trends are based on econometric modelling from the Economist Intelligence Unit. For each event, the date, probability of occurrence, time to impact, and level of impact are assessed and modelled, drawing on analogue analysis (i.e. based on past experience in other therapeutic areas) and interviews with market experts.
The baseline forecast is refined and adjusted with internal expertise and insight within each country. This is supplemented with extensive primary and secondary research among all key stakeholders in the industry, including government representatives, regulatory authorities, key opinion leaders, specialists, physicians, pharmacists, pharmaceutical companies and wholesalers.
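The forecasting steps above (exponential-smoothing baseline plus probability-weighted event adjustments) can be sketched minimally as follows. IQVIA's actual model is proprietary; all numbers here are invented for illustration.

```python
def ses_forecast(history, alpha=0.5):
    """Simple exponential smoothing: the smoothed level after the last
    observation serves as the flat baseline projection for future periods."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

spend = [100.0, 103.0, 105.0, 108.0, 110.0]  # hypothetical expenditure index
baseline = ses_forecast(spend)

# Events (e.g. a launch adding spend, a patent expiry removing it) are then
# layered onto the baseline, weighted by their probability of occurrence.
events = [(+2.0, 0.8), (-3.0, 0.5)]  # (impact, probability), hypothetical
forecast = baseline + sum(impact * prob for impact, prob in events)
print(round(baseline, 2), round(forecast, 2))
```

In practice the smoothing constant and event parameters would be calibrated per country, as the text describes.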
Step 3: Historical Net Estimate (2010-2016)
IQVIA data use publicly available prices. These can reflect the real cost to public payers in some cases, but further discounts also exist in many situations. As previously discussed, a range of (complex) mechanisms now impact net pharmaceutical expenditure.
Step 3 was to adjust the historical list estimate to reflect these discounts and derive the historical net estimates. Table 1 describes the most important discount mechanisms across the five countries and whether they were already accounted for within the IQVIA list forecast. These include discounts at the national level agreed between individual manufacturers (or industry collectively) and government agencies and agreements at the regional or hospital level, usually on a product basis.
At the national level, four countries have some form of national rebate, whereby a cap is set on total pharmaceutical expenditure and rebates are paid by industry collectively if the limit is exceeded. These limits may take the form of agreed growth rates for a specific period (e.g. 2014-2018 for branded medicines in the UK, via the Pharmaceutical Price Regulation Scheme [PPRS]), linking the pharmaceutical expenditure growth rate to the gross domestic product (GDP) growth rate (Spain), or allocating a maximum percentage of public healthcare expenditure (Italy). Other national-level agreements include mandatory discounts applied across a particular drug class (e.g. discounts applied to retail medicines in Germany) and product-specific confidential discounts that are negotiated with national payer agencies at the time of launch (e.g. the Italian Medicines Agency [AIFA] in Italy and the Ministry of Health in Spain). These national product-specific discounts also encompass more complex managed entry agreements (MEAs); these could be financial based, such as most of the Patient Access Schemes used by the National Institute for Health and Care Excellence (NICE), the All Wales Medicines Strategy Group (AWMSG), and the Scottish Medicines Consortium (SMC) in the UK, or outcome based, such as the payment-by-results schemes used by AIFA in Italy.
Table 1 legend: light grey shading indicates the national level; the regional/hospital level is not shaded. Adj adjusted forecast, AMNOG Arzneimittelmarkt-Neuordnungsgesetz, MEA managed entry agreement, NICE National Institute for Health and Care Excellence, PPRS Pharmaceutical Price Regulation Scheme, PVAs price-volume agreements, QI QI forecast, SHI statutory health insurance. ✓ indicates included in forecast; ✗ indicates excluded from forecast; ~ indicates that, in the UK, those factors were adjusted only for the PPRS part of the market.
At the local level, product-specific discounts are often negotiated by regional payer bodies, hospital networks, and individual hospitals. Tenders, which are sometimes used at the national level, are commonly used at the local level and tend to apply to a specific part of the market, either high-usage products, hospital-only medicines, or generics dispensed in primary care [22,23]. A significant part of the discounting takes place at the hospital level as a result of confidential contracting between companies and individual hospitals (or groups of hospitals).
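The national cap-and-payback schemes described above (PPRS-style growth caps in the UK, Spain's GDP link, Italy's expenditure ceiling) reduce, in stylised form, to a simple overshoot calculation; the figures below are hypothetical.

```python
def industry_payback(total_spend, cap):
    """Manufacturers collectively repay the amount by which aggregate
    pharmaceutical expenditure exceeds the agreed cap."""
    return max(0.0, total_spend - cap)

last_year_spend = 20_000.0  # EUR million, hypothetical
allowed_growth = 0.018      # agreed growth rate for the period, hypothetical
cap = last_year_spend * (1 + allowed_growth)
print(industry_payback(20_900.0, cap))  # overshoot triggers a rebate
```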
At both national and local levels, discounts are usually confidential, especially for product-specific agreements. It is therefore not possible to adjust the list historical estimates or list forecast on a product-by-product basis. Alternative approaches were taken to estimate the impact of these discounts, depending upon the data available in each country.
Data with which to inform the adjustments were identified through two channels: (1) a review of peer-reviewed and grey literature, including government agency websites and reports, and (2) interviews with health economic experts in each of the countries to discuss data sources. Details on the data available and the specific adjustments made for each country are summarised in Table 2.
Wherever data were available, list-to-net adjustments were made specific to a particular type of discount. For example, in Italy, aggregate data were available on the rebates paid by industry at the national level due to exceeding the expenditure cap, rebates paid as a result of product-level MEAs, and net expenditure after discounts for both retail and hospital medicines.
In other countries, such as the UK, it was not possible to obtain data on specific forms of discounts (such as savings made as a result of Patient Access Schemes agreed with NICE, SMC, and AWMSG as they are confidential [24]). In these situations, the difference between list and net expenditure was estimated based on comparing historical aggregate net expenditure data from official sources with the historical list expenditure estimates from IQVIA. For the UK, this meant using the total net expenditure returns reported by the Department of Health as part of the PPRS agreement that controls pharmaceutical expenditure for most branded medicines in the UK. In Spain, where discount-specific savings were also not available, aggregate net expenditure data were obtained from reports from the Ministry of Finance and Public Administration.
To compare IQVIA historical list estimates with net data reported by governments, it was necessary to ensure that both estimates included the same types of expenditure (e.g. whether over-the-counter medicines were included) and costs (e.g. wholesaler or pharmacy margins). Where differences existed, we adjusted the IQVIA forecast accordingly. Notwithstanding, recognising the potential for discrepancies in the absolute aggregate estimate of expenditure, the focus of this analysis was on change in the size of the list-to-net gap over time (growth rate) rather than absolute estimates of expenditure. Thus, any form of discount that has remained flat in the past will not affect the growth rate of (net) pharmaceutical expenditure relative to list expenditure, and the adjustment is not included in the analysis. This would be a conservative assumption if those 'flat' discounts increase in the future.
Step 4: Net Forecast (2017-2021)
After the derivation of historical net estimates, the IQVIA list forecast model was re-run using the revised historical data to generate a new net forecast. As described in Step 2, the IQVIA list forecast comprises two main components: a projection forward based on historical trends combined with adjustments for expected 'events' (e.g. new product launches). The new net forecast reflected the revised historical data, while keeping unchanged the adjustments for expected events.
Results
CAGRs are presented in Table 3 and Fig. 2.
The value of the adjustments (i.e. rebates and discounts) increases over time and thus also represents an increasing share of total (list) expenditure. For instance, for the EU5 in aggregate, the estimated level of adjustments was €9 billion (7% of total list pharmaceutical expenditure) in 2014. By 2021, it is estimated at €27 billion, representing 17% of total list expenditure.
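The CAGR figures reported in this section follow the usual compound-growth definition; a one-line sketch, applied as a check to the aggregate adjustment totals quoted above (€9 billion in 2014 to €27 billion in 2021):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1.0 / years) - 1.0

print(f"{cagr(9.0, 27.0, 2021 - 2014):.1%}")  # roughly 17% per year
```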
France
Historical list estimates of expenditure in France showed a low average growth rate of 0.5% CAGR between 2010 and 2016. Historical net growth estimates were derived from French government returns (net of discounts) [48][49][50][51][52] , which were further adjusted to reflect rebates and payback agreements with manufacturers. Historical net data showed a small decline in pharmaceutical expenditure of − 0.4% CAGR over this period. The impact of including discounts is thus nearly 1 percentage point over the 7 years, which is significant. Expenditure in the French retail sector fell with the implementation of cost-control measures, loss of exclusivity of major products and generic entry, and some shift towards hospital expenditure. Hospital expenditure is also largely controlled at the list price level, with growth remaining relatively slow despite the launch of high-budget-impact hepatitis C virus (HCV) products in recent years.
Health authorities in France actively manage aggregate pharmaceutical expenditure against annual targets and utilise payback agreements and price cuts to control growth. For example, paybacks by industry doubled from €520 million in 2014 to €1020 million in 2015 [25].
The list forecast for France was 1.8% CAGR between 2017 and 2021, which fell to 0.6% in the net forecast, an impact of over 1 percentage point. By 2021, the level of discounting (as a result of manufacturers' paybacks in the hospital sector) is estimated to represent 12% of total list pharmaceutical spend in that country, the lowest share across the five countries. Given that the Comité Économique des Produits de Santé (CEPS) has publicly stated a target of a €1 billion reduction in aggregate pharmaceutical expenditure in 2018, growth could in fact be lower than this [53].
Germany
The retail segment is dominant in Germany (86% of market), with many 'hospital-type' treatments delivered through office-based physicians (IQVIA data on file). According to the historical list estimates, total market and retail expenditure have both been growing historically at around 4% CAGR. Arzneimittelmarkt-Neuordnungsgesetz (AMNOG) rebates, which are publicly visible, are already captured in the historical list estimate and list forecast, but as more products become subject to them over time, the impact on total expenditure has increased (partly accounting for a lower list forecast growth rate of 3.2% compared with historical growth rate). Two mechanisms substantially reduce net expenditure estimates: the mandatory discounts applied to retail products and the sick fund clawbacks (rebates paid because of negotiated contractual agreements-noting that individual agreements are confidential, but the overall SHI impact is published yearly, which we have used in our analysis). The former (mandatory rebates) have fluctuated between 6 and 16% since 2010 [30]. The latter (clawback payments) increased threefold over 7 years, from €1310 million in 2010 to €3890 million in 2016 [28,34].
Overall, the effect in Germany is to reduce the net forecast CAGR for 2017-2021 from 3.2 to 2.0%. By 2021, the level of discounting is estimated to represent 18% of total list pharmaceutical spend in that country, which is similar to the EU5 average. The effects of the mandatory discounts and the paybacks can be separated historically; the importance of the mandatory discount was higher than the payback system, although by 2015 its weight decreased to 60% of the total adjustment. For the forecast, the effect of both adjustments is aggregated, but the increase is driven by the increased SHI clawbacks, as mentioned.
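The mechanics described above (a list expenditure series reduced by a growing share of mandatory discounts and clawbacks) can be sketched as follows; all numbers are hypothetical, and only the qualitative effect of net CAGR falling below list CAGR mirrors the German figures:

```python
def net_series(list_series, adjustment_shares):
    """Net expenditure: list expenditure minus the estimated share of
    mandatory discounts and clawbacks (as a fraction of list spend)."""
    return [x * (1.0 - s) for x, s in zip(list_series, adjustment_shares)]

# Hypothetical list series growing 3.2% per year while the adjustment
# share rises from 10% to 18% of list spend over four years.
list_exp = [100.0 * 1.032 ** t for t in range(5)]
shares = [0.10, 0.12, 0.14, 0.16, 0.18]
net_exp = net_series(list_exp, shares)
list_cagr = (list_exp[-1] / list_exp[0]) ** 0.25 - 1
net_cagr = (net_exp[-1] / net_exp[0]) ** 0.25 - 1
print(net_cagr < list_cagr)  # True
```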
Italy
Historical estimates of pharmaceutical expenditure growth in Italy from 2010 to 2016 were 4.5% (list) and 2.8% (net) CAGR. Italian retail sector pharmaceutical sales have been falling since 2010, with overall market growth attributed to the hospital sector. This is due partly to a shift towards new product launches in specialty medicines and partly to the increasing role taken by hospitals in procuring medicines used outside of hospitals [54]. These medicines are distributed either directly by hospitals (e.g. new medicines for HCV) or by community pharmacies on behalf of hospitals (e.g. new oral antidiabetic medicines). The gap between list and net estimates increased from 2014 to 2016, at a time when both were growing faster than the historical trend, as a consequence of a sharp increase in savings related to deals agreed with manufacturers of HCV medicines. The most important discounts in the Italian system that are not already captured in the historical list estimate (the historical list estimates include a 9.75% binding discount over list prices applied to all medicines, excluding the 'innovative' ones) are as follows: 1. Industry-level payback based on the level of hospital and outpatient pharmaceutical public expenditure as a proportion of total public health expenditure; the actual payback was estimated at 1.6% of total gross expenditure [55]. These apply to all class-reimbursed medicines (orphan and innovative drug manufacturers are exempt from paying). Overall, the forecast CAGR for 2017-2021 reduced from 3.2% (list) to 1.1% (net); this is the biggest decrease in percentage points of the five countries. Indeed, by 2021, the level of discounting is estimated to represent 21% of total list pharmaceutical spend in that country, which is the highest across the five countries.
Two adjustments were modelled: (1) discounts in the hospital sector from managed entry agreements and the industry-level payback where total pharmaceutical expenditure caps are exceeded, which are expected to rise; and (2) rebates for HCV medicines, which are expected to decline.
Spain
Historical list estimates of pharmaceutical expenditure growth in Spain were 2.2% CAGR between 2010 and 2016. However, growth declined between 2010 and 2014 and then increased on the back of expenditure on HCV medicines (between 2014 and 2016, growth at list prices was 9.4%).
Retail sector sales have been falling since 2010 in Spain, with most growth attributed to the hospital sector due to oncology costs and short-term expenditure on HCV medicines (Spain has particularly high HCV prevalence). In the retail setting, compulsory paybacks came into force in 2006 (Law 29/2006) [56]. Every 4 months, the pharmaceutical industry pays money back, expressed as a percentage of sales, to research institutes (via the Institute Carlos III) and to the government to fund policies encouraging healthcare cohesion across Spain and education programmes for healthcare professionals, among others [57]. Such payments are not captured in the historical list estimates, although, as they have remained flat (1.5-2%) since their introduction in 2006, they do not contribute to any divergence in the list and net forecast.
Mandatory discounts applied to invoices for all hospital medicines (7.5%; 4% for orphan medicines) have been in place since 2010 [56]. Increasing use of hospital products means these savings are forecast to increase. In addition, a dual pricing system is in place for hospital medicines: the list price ('precio notificado'), which is the official price for international price referencing, and the reimbursed price ('precio facturación'). The list price is published, but the reimbursement price is confidential.
Historical net expenditure data for hospital sales were available from 2014 to 2016 (during this period, HCV medicines were launched in Spain). These data are published by the Ministry of Finance, and the difference between net expenditure on hospital medicines versus aggregate expenditure at list prices was 22%, 28%, and 34% for 2014, 2015, and 2016, respectively [58]. The Ministry of Finance also publishes net expenditure in primary care, but we did not compare these data with IQVIA's, as we felt the IQVIA data already captured an important part of the discounts.
In 2015, a Stability Pact was signed between the Ministry of Health and the research-based pharmaceutical trade association, Farmaindustria, on behalf of industry [58]. This pact links pharmaceutical expenditure growth to GDP growth; over and above this level of growth, the industry is required to pay back the difference. These limits have not been reached since their introduction because pharmaceutical expenditure growth fell below GDP growth over this period [58]. However, this legislation could act as an upper limit on future growth.
Footnote 1 (Italy): Regulation of spending caps on drugs has changed many times. In 2001 (law n. 405/2001), a spending cap on drugs used outside hospitals, named 'Spesa farmaceutica territoriale' (retail sector and drugs procured by hospitals and used outside hospitals), was introduced. The spending cap was set at 13% of overall public health expenditure. This legislation was amended in 2008 (law n. 222/2007): the spending cap on Spesa farmaceutica territoriale (including patient co-payments set at regional level) was determined as a 14.0% ceiling of overall public health expenditure at both national and regional levels, whereas the hospital (in-patient) budget for pharmaceutical expenditure (named 'Spesa farmaceutica ospedaliera') could not exceed a 2.40% ceiling of overall public health expenditure. If the budget of the Spesa farmaceutica territoriale was overrun, the industry and the distribution would have been required to cover the full deficit. Since 2013 (law n. 135/2012), the industry was asked to cover 50% of the Spesa farmaceutica ospedaliera budget deficit (the budget was raised to 3% of overall public health expenditure, whereas the budget for the Spesa farmaceutica territoriale was lowered to 11.35%). The spending caps have changed again since 2017, set at 7.96 and 6.89% of overall public health expenditure for retail drugs and all drugs procured by hospitals, respectively.
Footnote 2 (Spain): In December 2017, the Spanish Ministry of Health started publishing net costs of medicines used in public hospitals, including discounts. However, the two datasets do not coincide, and differences are especially big for Valencia and Catalonia (https://www.diariofarma.com/2017/12/04/comparable-gasto-hospitalario-publica-sanidadhacienda). The main reason behind the differences is that they are measuring two different things and are thus not comparable. Our analysis uses the Ministry of Finance data, and it is beyond the scope of this paper to analyse the differences between the two datasets.
After adjusting the historical list estimate for the observed difference with the Ministry of Finance net data for hospital pharmaceutical expenditures, the net forecast for 2017-2021 was 1.1 versus 2.5% under the list forecast. By 2021, the level of discounting is estimated to represent 20% of total list pharmaceutical spend in Spain, where the difference in this case is due to the discounts in the hospital sector, estimated as the gap between list and net expenditure.
UK
In the UK, the historical list estimate of expenditure growth was 6.8% CAGR between 2010 and 2016. At the time of data analysis, publicly available historical net expenditure data were available from 2013 to 2017 and only included the 51% of the UK drug market covered by the PPRS [48][49][50][51][52]. The PPRS is estimated to cover 80% of branded medicines by value. No adjustments were made to account for confidential discounts and rebates in the non-PPRS market, which includes generics and HCV expenditure on products marketed by Gilead (which, as of December 2017, was not a member of the PPRS). Given that these products are likely to be subject to discounts of a similar magnitude to PPRS products, the adjustment in the UK is likely to underrepresent the true list-to-net difference and to overestimate growth rates. We are aware there is a pharmacy clawback based on rebates from manufacturers/wholesalers; this has been constant over the historical period, so it would not affect the growth rate, and this clawback is not modelled.
The unadjusted forecast is 3.8% CAGR for the period 2017-2021; the net forecast decreases to 2.3%. By 2021, the level of discounting is estimated to represent 17% of total list pharmaceutical spend in that country, which is equal to the EU5 average-as mentioned, there is only one adjustment for the UK (PPRS net sales after rebates).
EU5
When the forecasts for the five countries are aggregated, the EU5 (unweighted) average historical growth rate falls from 3.4% at list to 2.5% at net. For the forecast, the net growth rate is estimated at 1.5% CAGR versus 2.9% at list.
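The EU5 aggregate figures can be checked against the country-level forecast CAGRs quoted in the sections above. A small sketch using the rounded country figures (the paper aggregates the underlying expenditure series, so the net mean here comes out slightly below the quoted 1.5%):

```python
# Forecast CAGRs for 2017-2021 quoted in the country sections, in %.
list_cagr = {"France": 1.8, "Germany": 3.2, "Italy": 3.2, "Spain": 2.5, "UK": 3.8}
net_cagr = {"France": 0.6, "Germany": 2.0, "Italy": 1.1, "Spain": 1.1, "UK": 2.3}

def unweighted_mean(values):
    return sum(values) / len(values)

print(round(unweighted_mean(list_cagr.values()), 2))  # 2.9
print(round(unweighted_mean(net_cagr.values()), 2))   # 1.42
```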
Sensitivity Analysis: Impact of Biosimilars
We did not undertake sensitivity analyses, although we illustrate with the case of biosimilars the sort of analyses that could be done. We pick biosimilars given the number and size of biologics that will lose patent protection in the coming years, as the evolution of the market for biosimilars will be an important variable impacting future expenditure growth rates [59]. Three key variables affect biosimilar impact: speed of entry, uptake, and degree of price competition (linked to the number of biosimilars, but also with originators). The assumption in the IQVIA list forecast is that the size of price reductions at the point of loss of exclusivity in future will be of the same magnitude as those observed historically since the introduction of biosimilars. This could be an underestimate of future biologic value erosion, as the biosimilar market is still developing, and greater competition is expected as it matures. Across the EU5 countries, €30-40 billion of cumulative sales can be exposed to biosimilar competition through 2021 (IQVIA data on file). If biosimilars lead to value erosion that is closer to that seen for small molecules, overall future net growth rates in the EU5 based on this analysis would be nearer to 0.5-1% over the next 5 years, rather than the 1.5%. However, this will depend on greater uptake of biosimilars than currently seen through educational and other activities, including the collection of high-quality comprehensive outcomes data on the effectiveness and safety of biosimilars and originator products [59][60][61].
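The scenario described above can be made concrete with a back-of-the-envelope calculation; the function and all inputs below are hypothetical illustrations, not IQVIA model outputs:

```python
def growth_with_extra_erosion(base_growth, exposed_sales, extra_erosion,
                              total_market, years):
    """Subtract the annualized extra value erosion on biosimilar-exposed
    sales (beyond the baseline assumption) from the baseline net growth."""
    annual_loss = exposed_sales * extra_erosion / years
    return base_growth - annual_loss / total_market

# Hypothetical inputs: EUR 35 bn of exposed sales in a ~EUR 170 bn EU5
# market, with an additional 15% value erosion spread over five years,
# starting from the 1.5% baseline net CAGR.
g = growth_with_extra_erosion(0.015, 35.0, 0.15, 170.0, 5)
print(0.005 <= g <= 0.010)  # True: growth lands in the 0.5-1% band
```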
Discussion
The results from this study suggest that future growth in pharmaceutical expenditure in Europe is likely to be lower than previously understood from forecasts based on list prices. The growth in use of confidential discounts over the last decade (such as those badged as patient access schemes, e.g. those in the UK and those agreed locally or regionally), especially for cancer medicines, which are usually used in the hospital setting (e.g. see Pauwels et al. [13]), has led to increased divergence between list and net prices, with the associated overstatement of historical expenditure levels. One possible reason for this is the increased use of external reference pricing, where prices across countries are interdependent; thus, there are incentives to keep these discounts confidential. However, this way of regulating medicines prices was recently criticised because of its negative effects [62]. Another factor that can impact the size of the rebates is 'product events'. For example, in 2015 and 2016, the new generation of HCV medicines was introduced and, in some markets (such as Italy), was sold with very substantial rebates that temporarily boosted the rebate level and distorted the trend. However, industry-level paybacks because of drug budget overruns are expected to increase, and, incorporating this effect, the divergent trend of list and net expenditure is expected to continue. It is also possible, though perhaps less likely, that rebates/adjustments will decrease over time and, thus, net expenditure growth will be higher than list expenditure growth.
The historical and forecast net-price adjustments presented in this paper reflect that divergence as closely as the publicly available data allow, but the confidential nature of these arrangements means they are inherently difficult to quantify. Not all confidential discounts or rebates have been captured for every country. For example, in the UK, net data were only available for approximately half of the UK market by value (medicines covered by PPRS). This analysis also relied upon published net expenditure data reported at the national level. Rebates paid to hospitals and regional health authorities are not fully captured in all countries. This is certainly the main limitation of the paper; however, we are not aware of any other sources that would fill the existing information gaps on the level of discounting. One possible further avenue could be to undertake (confidential) surveys/interviews with various payers in these countries, or with pharmaceutical companies. Interaction with payers could also be used to validate our results. Moreover, unknown rebates have been assumed constant and thus do not impact the growth rate; however, they would have impacted the net value.
This analysis was based on an IQVIA forecast, which is considered one of the most established analyses of pharmaceutical expenditure and benefits from the comprehensive country-specific data collected by IQVIA. However, as with all modelling exercises, the IQVIA forecasting methodology incorporates assumptions about future events, such as new drug launches and socioeconomic developments, that are fundamentally uncertain. Nevertheless, knowing the current research and development pipeline of potential new medicines and the loss of exclusivity dates of existing medicines provides as accurate an estimate as possible of how medicines expenditure may change in the future. The adjustments to the IQVIA forecast for future years were estimated based on the trend in differences between list and net observed historically; if the level of discounts and rebates changes in future, this prediction may not be accurate. Future research could undertake sensitivity analyses to understand the impact of various events on both the expected growth rate and the increased/decreased divergence between list and net expenditure.
No other European net price forecasts were identified that could be used to validate the future estimates from this analysis. However, the historical adjustments can be compared with OECD data (Fig. 3), which shows net pharmaceutical expenditure as flat or falling as a proportion of healthcare expenditure in EU5 countries between 2010 and 2018 (most recent data available) [3]. These data provide some context to our results, but the differences between IQVIA and OECD data should be noted. Broadly speaking, these are that (1) OECD numbers are at sell-out price and IQVIA at ex-manufacturing price (the differences are wholesaler and pharmacy margins and dispensing fees); (2) OECD numbers can capture patient co-payments as part of expenditure, but these are not included for IQVIA expenditure; (3) OECD numbers are based on country reporting, and countries, for different reasons, might not always follow the OECD guidelines, which means the numbers may be over-or understated; and (4) hospital data are incomplete for some countries.
In the USA, a similar analysis was performed to explore the impact of discounts on total expenditure [63]. This study estimated that, between 2005 and 2012, manufacturers' discounts and rebates reduced expenditure on branded medicines by approximately 18% each year. However, between 2010 and 2014, the discounts and rebates increased from 18 to 28% of total expenditure on brand-name medicines. This growing divergence between list and net prices in the USA reflects the similar trend we observed in Europe. Unlike our own study, Aitken et al. [63] did not forecast the implications for future pharmaceutical expenditure growth in the USA.
It is beyond the scope of our study to explore the drivers of growth in pharmaceutical expenditure, as this is a complex issue. However, we can observe mixed positive and negative effects. On one hand, prescribed volumes for medicines to treat non-communicable diseases, such as diabetes, hypercholesterolaemia, hypertension, and acid-related stomach disorders, have appreciably increased in recent years (Organisation for Economic Co-operation and Development data on pharmaceutical expenditure), with volumes rising several-fold among countries [64][65][66]. For instance, according to AIFA estimates, most of the increase of the pharmaceutical retail market is due to an increase in volumes [67]. This is coupled with a shift towards more expensive medicines in the hospital setting as new innovative medicines are launched in areas of high unmet need. On the other hand, and as mentioned in Sect. 1, cheaper generics in many large therapeutic areas have entered the market, generating significant savings to third-party payers (e.g. see the European Assessment [68]). As mentioned, one area with more uncertainty is the impact of biosimilars in the future.
Aitken [69] analysed the evolution of pharmaceutical expenditure in five countries, including France, Germany, and the UK, over 20 years. Among other things, he showed that pharmaceutical expenditure growth has been roughly in line with increases in total health expenditure. Indeed, our projections, after taking into account adjustments, are below predicted healthcare expenditure growth in Europe and in line with long-term economic growth rates [70,71]. Understanding the dynamics of the market in the past is always an important element in driving the forecasts. For policy makers concerned about the sustainability of pharmaceutical expenditure, this study may provide some comfort, in that the perceived problem is not as large as expected. While there is debate to be had about the merits of non-transparent discounts and rebates, they appear to be playing an important role in containing the growth of real pharmaceutical expenditure whilst allowing reimbursement and funding for new medicines that would not have been possible without such schemes. The results of this analysis suggest that healthcare payers maintain considerable control over pharmaceutical expenditure and have been effective in managing growth historically. Even the introduction of new HCV medicines, which prompted a very public debate about pharmaceutical expenditure sustainability [72,73], appear to have led to only a temporary uptick in the growth rate, mitigated by negotiated discounts and rebates (especially as a result of competition between different medicines), after which the growth trajectory quickly reverted to the historical average.
Conclusion
The increasing frequency and magnitude of confidential discounts, including MEAs, rebates, and discounts, have led to a growing divergence between list and net prices for medicines in Europe. This is driven by increasing financial pressures within health systems, policies such as external reference pricing, and a shift in pharmaceutical innovation from retail to hospital settings, with most new medicines in many countries targeting immunological and cancer conditions. It is beyond the remit of our article to compare the expenditure in pharmaceuticals with the outcomes achieved from their use, as this is a complex task.
After adjusting for discounts and rebates, net expenditure growth in EU5 is predicted to be approximately 1.5% CAGR over the next 5 years. This is below predicted healthcare expenditure growth in Europe and in line with long-term economic growth rates.
Author contributions AH and AP conceived of and designed the paper and collected the data. All authors were involved in writing or critical review of the paper, with important intellectual contributions and collection of some information, and approved the final version of the article for submission. JM-F acts as guarantor that all aspects that make up the manuscript have been reviewed, discussed, and agreed among the authors in order to be exposed with maximum precision and integrity.
Compliance with Ethical Standards
Data availability statement The datasets generated during and/or analysed during the current study are not publicly available because they are owned by a third party (IQVIA). Aggregated data might be available from the corresponding author on reasonable request. The data we used on the level of rebates and discounts are available publicly (see references in text).
Conflict of interest JE, MS, BG, PA, CJ have received an honorarium from Celgene International for attending an advisory meeting. JM-F received an honorarium from Celgene International for attending an advisory meeting and to support the writing of the paper. SF and AP are employees of Celgene International. AH is employed by Dolon Ltd, a consultancy that provides services to pharmaceutical companies, including Celgene International. All authors have no conflicts of interest that are directly relevant to the content of this article.
Open Access This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Symplectic capacities of domains in C^2
We derive new estimates for the Gromov width of certain domains in C^2.
Introduction
In his paper [3] M. Gromov proved his celebrated non-squeezing theorem. We will study domains $D$ in $\mathbb{C}^2$ with standard coordinates $(z_1, z_2)$ and projections $\pi_1$ and $\pi_2$ onto the $z_1$ and $z_2$ planes respectively. The standard symplectic form on $\mathbb{C}^2$ is $\omega = \frac{i}{2} \sum_{j=1}^{2} dz_j \wedge d\bar{z}_j$ and this restricts to a symplectic form on the balls $B(r) = \{|z_1|^2 + |z_2|^2 < r^2\}$. In this notation Gromov's non-squeezing theorem states that if $\operatorname{area}(\pi_1(D)) \le C$ and there exists a symplectic embedding $B(r) \to D$ then $\pi r^2 \le C$. Nowadays this can be rephrased as saying that the Gromov width of $D$ is at most $C$. Of course this is sharp when $D$ is a cylinder. For general $D$ it is natural to ask whether we can estimate the Gromov width instead in terms of the cross-sectional areas $\operatorname{area}(D \cap \{z_2 = b\})$. But for any $\epsilon > 0$ there exists a construction of F. Schlenk [4] of a domain $D$ lying in a cylinder $\{|z_1| < 1\}$ with Gromov width at least $\pi - \epsilon$ but with all cross-sections having area less than $\epsilon$. At least if we drop the condition that the domain lie in the cylinder, the cross-sections can even be arranged to be star-shaped, see [5]. Nevertheless in this note we will obtain such an estimate in terms of the areas of the cross-sections for domains whose cross-sections are all star-shaped about the axis $\{z_1 = 0\}$.
Theorem 1 Let $D \subset \mathbb{C}^2$ be a domain whose cross-sections $D \cap \{z_2 = b\}$ are star-shaped about the center $z_1 = 0$. Define $C = \sup_b \operatorname{area}(\{z_2 = b\} \cap D)$. Then if $B(r) \to D$ is a symplectic embedding we have $\pi r^2 \le C$. In other words, $D$ has Gromov width at most $C$.
In section 2 we will establish an estimate on the Gromov width for such domains D. This is combined with a symplectic embedding construction to obtain our result in section 3.
The author would like to thank Felix Schlenk for patiently answering many questions.
Embedding estimate
Here we prove the following theorem.
Theorem 2 Fix constants $0 < K \le M$ and $0 < t < 1$. Let $D \subset \mathbb{C}^2$ be a domain of the form $D = \{r < c(\theta, z_2),\ |z_2| < M\}$, where $(r, \theta)$ are polar coordinates in the $z_1$ plane and $c(\theta, z_2)$ is a real-valued function satisfying $t \le c(\theta, z_2) \le 1$ and … . Then if $B(r) \to D$ is a symplectic embedding of the standard ball of radius $r$ in $\mathbb{C}^2$ we have $\pi r^2 < C + \frac{3M}{tK^3}$.
Its key implication for us is the following (Corollary 3). … This follows by rescaling. Note above that the volume of $(D, \omega_L)$ approaches infinity as $L \to \infty$.
Proof of Theorem 2
We consider the symplectic manifold $S^2 \times \mathbb{C}$ with a standard product symplectic form $\omega = \omega_1 \oplus \omega_2$ and still use coordinates $(z_1, z_2)$, where $z_1$ now extends from $\mathbb{C}$ to give a coordinate on the $S^2 = \mathbb{CP}^1$ factor. Still $\pi_1$ and $\pi_2$ denote the projections onto the coordinate planes. Let $F$ be the area of the first factor; we suppose that this is sufficiently large that the complement of $\{z_1 = \infty\}$ can be identified with a neighborhood of $\{|z_1| \le 1\}$ in $\mathbb{C}^2$, the identification preserving the product complex and symplectic structures. In other words, from now on we assume that $D \subset S^2 \times \mathbb{C} \setminus \{z_1 = \infty\}$ and satisfies the conditions on its cross-sections. Let $D^c$ denote the complement of $D$ in $S^2 \times \mathbb{C}$. Now let $\phi : B(r) \to D$ be a symplectic embedding. Then we consider almost-complex structures $J$ on $S^2 \times \mathbb{C}$ which are tamed by $\omega$ and coincide with the standard product structure on $D^c$. By now it is well-known, see [3], that for all such $J$ the almost-complex manifold $S^2 \times \mathbb{C}$ can be foliated by $J$-holomorphic spheres. In $\{|z_2| \ge M\}$ the foliation simply consists of the $S^2$ factors.
Let $S$ denote the image of the holomorphic curve in our foliation passing through $\phi(0)$. By positivity of intersections $S$ intersects $\{z_1 = \infty\}$ in a single point, say $\{z_2 = b\}$. As above we will use polar coordinates $(r, \theta)$ in the $z_1$ plane. We intend to obtain lower bounds for both $\int_{S \cap D^c} \omega_1$ and $\int_{S \cap D^c} \omega_2$. First of all, we will suppose that $\pi_1(S \cap D^c) = \{r \ge g(\theta)\}$ for a positive function $g$ and that $S \cap D^c$ is a graph $\{z_2 = u(z_1)\}$ over this region. We explain later how essentially the same proof applies to the general case. Recall that our … . Define a holomorphic function $f : $ … . Therefore composing $f$ with a translation we can redefine $f$ as a function $f : $ … . As $g(\theta) \le 1$ for all $\theta$ the map $f$ restricts to one from $\{|z| \le 1\}$ and so by the Schwarz Lemma, if $|z| < 1$ we have $|f'(z)| \le \frac{2M}{1 - |z|}$. On the boundary of the disk, our assumptions on the boundary of $D$ imply that $|f'|$ … , and over all such functions $|f'(z)|$ the final integral above is minimized by taking $|f'(z)|$ as large as possible for small values of $r$. We compute … for the final estimate using the fact that $0 < x < 1$.
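The derivative bound quoted above can be seen from the Cauchy estimate; a sketch, assuming (as the translation step arranges) that $f$ is holomorphic on the unit disk with $|f| \le 2M$ there:

```latex
% Cauchy's estimate on the circle of radius 1-|z| centred at z:
\[
  |f'(z)|
  = \left| \frac{1}{2\pi i} \oint_{|w-z| \,=\, 1-|z|} \frac{f(w)}{(w-z)^2}\, dw \right|
  \le \frac{\sup_{|w|<1} |f(w)|}{1-|z|}
  \le \frac{2M}{1-|z|}.
\]
```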
Next we compute … . Therefore writing $k = 16$… , … . Thus $S \cap D$ has symplectic area at most $A + \frac{\pi M}{3} \cdot \frac{(1 - e^{-1/2})^3}{tK^3} < A + \frac{3M}{tK^3}$, since $S$ itself has area $F$. We assumed above that $\pi_1(S \cap D^c)$ is star-shaped about $z_1 = 0$ and that $S \cap D^c$ is a graph over this region. If the projection $\pi_1 : S \to \pi_1(S \cap D^c)$ is a branched cover then we can define a function $f$ as before, simply choosing a suitable branch along the rays $\{\theta = \text{constant}\}$. The proof then applies as before. Now suppose that $\pi_1(S \cap D^c)$ is not star-shaped about $z_1 = 0$. Then we find the smallest possible star-shaped set $\{r \le g(\theta)\}$ containing the complement of $\pi_1(S \cap D^c)$. The defining function $g$ will then have discontinuities but this does not affect the proof, which again proceeds as before.
Finally we choose a $J$ which coincides with the push-forward of the standard complex structure on the ball $B(r)$ under $\phi$ but remains standard outside $D$.
The part of $S$ intersecting the image of $\phi$ is now a minimal surface with respect to the standard pushed-forward metric on the ball and so must have area at least $\pi r^2$, giving our inequality as required.
Proof of Theorem 1
For any domain $E \subset \mathbb{C}^2$ we will write $C(E) = \sup_b \operatorname{area}(\{z_2 = b\} \cap E)$. Again we let $C = C(D)$. Arguing by contradiction, suppose that $B(r) \to D$ is a symplectic embedding with $\pi r^2 > C + \epsilon$.
Let $B$ be the image of the ball of radius $r$ in $D$. We will prove Theorem 1 by finding a symplectic embedding of $B$ into $(D_1, \omega_L)$ for all sufficiently large $L$, where $D_1$ is a domain $C^0$-close to $D$ and with $C(D_1) < C(D) + \epsilon$. Such embeddings would contradict Corollary 3.
First we choose a lattice in the $z_2$ plane sufficiently fine that, denoting the grid squares by $G_i$, we have $\sup_i \operatorname{area}(\pi_1(D \cap \pi_2^{-1}(G_i))) < C(D) + \epsilon$. Then we let … . Proof It suffices to find a diffeomorphism $\psi$ of $\mathbb{C} \setminus \{b_j\}$ which preserves the $G_i$ and such that $\psi^*(L\omega_0) = \omega_0$, letting $\omega_0 = dz \wedge d\bar{z}$ be the standard symplectic form. It is not hard to construct such a map, and the product of this map on the $z_2$ plane with the identity map on the $z_1$ plane gives a suitable embedding.
Given Lemma 4, to find our embedding it remains to find a symplectic isotopy of $D_1$ such that the image of $B$ is disjoint from the planes $C_j = \{z_2 = b_j\}$. Equivalently we will find a symplectic isotopy of the union of the $C_j$, compactly supported in a neighborhood of $B$ and moving the $C_j$ away from $B$.
We may assume that the embedding of the ball of radius r extends to a symplectic embedding of a ball of radius s where s is slightly greater than r.
Let $U$ be the image of this ball and $J_0$ the push-forward of the standard complex structure on $\mathbb{C}^2$ to $U$ under the embedding.
Lemma 5 There exists a $C^0$-small symplectic isotopy supported near $\partial U$ which moves each $C_j$ into a $J_0$-holomorphic curve near $\partial U$.
Proof Let $(x + iy, u + iv)$ be local coordinates on $\mathbb{C}^2$. Let $C$ be one of our curves. We may assume that in these coordinates near the origin $C \cap \partial U$ is the curve $\{(x, 0, 0, 0)\}$ and therefore that nearby $C$ is the graph over the $(x, y)$ plane of a function $h(x, y) = (u, v)$. So $u = v = 0$ when $y = 0$.
There exists a constant $k$ such that $|u|$, $|v|$, $|\partial u / \partial x|$ and $|\partial v / \partial x|$ are all bounded by $k|y|$ near $y = 0$. Now, such a graph is symplectic provided … . We can make $C$ holomorphic near $\partial U$ by replacing $h$ by $(\chi u, \chi v)$ where $\chi$ is a function of $y$, equal to 0 near $y = 0$ and 1 away from a small neighborhood.
The resulting graph remains symplectic provided … . If we assume that $\left|\frac{\partial u}{\partial x}\frac{\partial v}{\partial y} - \frac{\partial v}{\partial x}\frac{\partial u}{\partial y}\right| < 1 - \delta$, the graph remains symplectic if $\chi$ is chosen such that … . Since the integral $\int_0^t \frac{\delta}{ky^2}\, dy$ diverges, a function $\chi$ satisfying this condition while being equal to 0 near 0 and 1 away from an arbitrarily small neighborhood does indeed exist, as required. The resulting surface is clearly isotopic through symplectic surfaces to the original $C$.
We now replace the C_j by their images under the isotopy from Lemma 5.
We let J be an almost-complex structure on U which is tamed by ω, coincides with J_0 near ∂U, and such that the C_j ∩ U are J-holomorphic. Now (U, J) is an (almost-complex) Stein manifold in the sense that it admits a plurisubharmonic exhaustion function φ : U → [0, R). In fact, work of Eliashberg, see [1] and [2], implies that such a plurisubharmonic exhaustion exists with a unique critical point, its minimum. Generically this will be disjoint from the C_j.
Near the boundary we can take φ to be the push-forward under the embedding of a function |z|^N C for some integer N ≥ 2 (depending perhaps on U) and (any given) constant C. The definition of a plurisubharmonic function states that ω_φ = −dd^cφ is a symplectic form on U which is compatible with J (for a function f we define d^c f := df ∘ J). We can choose C such that ω_φ|_∂U = ω|_∂U, and thus by Moser's lemma the symplectic manifolds (U, ω) and (U, ω_φ) are symplectomorphic via a symplectomorphism F fixing the boundary. In fact, adjusting the isotopy provided by Moser's method, we may assume that F fixes the C_j (since they are symplectic with respect to both ω and ω_φ). Let V denote the image of U \ B under F and suppose that {φ ≥ R_0} ⊂ V.
It now suffices to find a symplectic isotopy of the C_j in (U, ω_φ) moving the surfaces into the region {φ ≥ R_0}. Then the preimages of these surfaces under F give a symplectic isotopy moving them away from B, as required.
Let Y be the gradient of φ with respect to the Kähler metric associated to φ. Equivalently, Y is defined by Y⌋ω_φ = −d^cφ. Define χ : [0, R) → [0, 1] to have compact support but satisfy χ(t) = 1 for t ≤ R_0. Then the images of the C_j under the one-parameter group of diffeomorphisms generated by X = χ(φ)Y will eventually lie in {φ ≥ R_0}. Thus we can conclude after checking that they remain symplectic during this isotopy. We recall that the C_j are J-holomorphic and finish with the following lemma.
Lemma 6 Let G be a diffeomorphism of U generated by the flow of the vector field X. Then G^*ω_φ(Z, JZ) > 0 for all non-zero vectors Z.
Proof For any function f we compute d^c(f(φ)) = f′(φ) d^cφ. Thus G^*d^cφ = g(φ) d^cφ for some function g, and G^*ω_φ = −d(g(φ) d^cφ) = −g′(φ) dφ ∧ d^cφ + g(φ) ω_φ. The function g is certainly positive and so G^*ω_φ evaluates positively on the pairs (Z, JZ), as required.
Whole-genome sequencing across 449 samples spanning 47 ethnolinguistic groups provides insights into genetic diversity in Nigeria
Summary African populations have been drastically underrepresented in genomics research, and failure to capture the genetic diversity across the numerous ethnolinguistic groups (ELGs) found on the continent has hindered the equity of precision medicine initiatives globally. Here, we describe the whole-genome sequencing of 449 Nigerian individuals across 47 unique self-reported ELGs. Population structure analysis reveals genetic differentiation among our ELGs, consistent with previous findings. From the 36 million SNPs and insertions or deletions (indels) discovered in our dataset, we provide a high-level catalog of both novel and medically relevant variation present across the ELGs. These results emphasize the value of this resource for genomics research, with added granularity by representing multiple ELGs from Nigeria. Our results also underscore the potential of using these cohorts with larger sample sizes to improve our understanding of human ancestry and health in Africa.
In brief
Joshi et al. present a whole-genome sequencing dataset from 449 samples spanning 47 unique self-reported ethnolinguistic groups in Nigeria, briefly characterizing the genetic structure of the population and exploring novel and medically relevant variation across the groups. These findings emphasize the need for more inclusive cataloguing of human genetic variation through increased representation of African genomes.
INTRODUCTION
Recent advances in human genomics research have provided compelling insights into how genetic variation plays a role in disease predisposition and its impact on disease pathogenesis and treatment. Whole-genome sequencing (WGS), in particular, can be used to identify known and novel variation in disease-associated genes and to elucidate differences in disease prevalence across diverse geographic regions and ethnolinguistic groups. However, the lack of adequate representation of diverse, non-European, genomes in human genomics research may limit insights that can be made about variants influencing disease susceptibility and trait variability across populations.
Large-scale sequencing efforts such as the 1000 Genomes Project, 1 the HapMap Project, 2 and TOPMed 3 have contributed to our understanding of genetic variation on a global scale and have helped to narrow the gap in representation of diverse populations. In particular, these datasets have uncovered valuable insights into the distribution of novel and rare variation that exists in African populations, relative to Europeans. Despite being the most genetically diverse continent, the extent to which variation has been characterized across the numerous ethnolinguistic groups found in African countries has been limited. 4 Nigeria represents one of the most diverse and populous regions in Africa, with a population of over 200 million 5 and over 250 unique ethnolinguistic groups. 6 Genomics research involving Nigerian individuals and comprehensive cataloging of genetic variation in this diverse region can allow us to use these data as a proxy for variation on the continent.
These data can subsequently inform the development of precision medicine initiatives for non-communicable diseases (NCDs) such as type 2 diabetes, cancers, and cardiovascular disease, which are expected to be the leading cause of mortality in Africa within the next decade. 7 We established the Non-Communicable Diseases Genetic Heritage Study (NCD-GHS) consortium to assess the burden of NCDs, characterize their etiological characteristics, and catalog the human genetic variation in 100,000 adults in Nigeria. 8 We aim to contribute to prevention, treatment, and control strategies addressing NCDs through development of a resource comprising purposeful sampling, deep phenotyping, and genomic studies centered around WGS/whole-exome sequencing (WES) and genotyping with arrays. The NCD-GHS also aims to empower further genomics research initiatives in Africa through data sharing that promotes scientific reproducibility but is conscious of ethical and legal standards.
In this first study, we performed germline WGS of an initial 449 samples from the NCD-GHS. Here, we describe the methods used to generate a WGS dataset of 449 Nigerian individuals spanning 47 self-reported ethnolinguistic groups (ELGs) generated using the GATK Best Practices workflow. 9 We explore the benchmarking of variant filtering strategies used to strike a balance between sensitivity and specificity by leveraging sequenced control samples. We provide a population genetics summary of the broad patterns in the data and a high-level characterization of variants, complementing results reported previously in the 1000 Genomes Project. 10 While sample size limits our ability to make any definitive statements about the clinical actionability of variants enriched or private to specific ELGs, we do summarize the extent to which these variants differ in prevalence within our ELGs compared with global populations. Additional details of the sample collection framework are discussed elsewhere. 8 [11][12][13] Our work represents an effort to add more granularity to sample collections in Africa, specifically through representation of several distinct ELGs from Nigeria. We provide initial insights into the relative genetic distances between ELGs and the extent to which they vary in the number of rare or common variants they contain. Our findings have implications for precision medicine across global populations, such as prioritization of more at-risk groups for screening or population-specific drug dose calibration.
RESULTS
Samples were collected from several locations across Nigeria, with some of the larger collections based in cities and larger healthcare settings (Figures 1 and 2). The majority of subjects (68%) were female and had a median age of 51 (Table 1). Approximately 60% of the dataset consists of individuals referred to healthcare settings with cardiovascular disease (Table 1). A range of ELGs are represented across the 449 samples, with 68% of the dataset being described by 15 ELGs (Tables 1 and S7).
Given that current variant-calling approaches have been largely benchmarked using populations of European descent, we incorporated a non-Genome in a Bottle (GIAB) Yoruba sample (NA19238) as a control among our sequencing cohorts to evaluate whether our variant-calling pipeline is able to achieve a high rate of sensitivity and precision on a reference dataset that more closely relates to our population of study. When comparing the NA19238 control with Filter B applied to its corresponding HiFi dataset, we were able to achieve precision/recall/F1 scores of 97.9%/91.4%/94.5% for SNPs and 79.6%/57.6%/66.9% for insertions or deletions (indels) (Figure S1; Table S2). It is possible that using higher-coverage NA19238 data would improve this performance. Combined with our findings of variant counts (Table 2) across our cohort and NYGC's African (AFR) dataset, our results demonstrate that our variant-calling pipeline and post-processing filtering strategies are well suited for variant discovery in this dataset.
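The precision/recall/F1 figures above follow the standard definitions used in small-variant benchmarking, computed from counts of true-positive, false-positive, and false-negative calls against a truth set (tools such as hap.py report these same quantities). A minimal sketch with illustrative counts, not the study's actual NA19238 tallies:

```python
def benchmark_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision/recall/F1 as used in variant-call benchmarking.

    tp: variants present in both the call set and the truth set
    fp: variants called but absent from the truth set
    fn: truth-set variants missed by the call set
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "f1": f1}

# Illustrative counts only (chosen to land near the SNP figures quoted above):
m = benchmark_metrics(tp=91_400, fp=1_960, fn=8_600)
print(f"{m['precision']:.1%} / {m['recall']:.1%} / {m['f1']:.1%}")  # → 97.9% / 91.4% / 94.5%
```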
Patterns of variation across ELGs in Nigeria
We compared the properties of observed genetic diversity in our dataset of 449 individuals (hereafter referred to as the ''54gene dataset'') with the subset of 650 African-ancestry subjects from the New York Genome Center 1000 Genomes Project high-coverage dataset (Table 2). 1 We found that both the transition/transversion ratio (Ti/Tv) and the median number of variants per subject are comparable between datasets. We observed an increase in the overall count of SNPs and indels within the 54gene dataset relative to the African-ancestry subset of the 1000 Genomes Project (Table 2). This increase in overall variant counts is observed across all functional annotation categories (Table 3). We also observed an increase in counts for unknown variation (variants not present in dbsnp154) across the 54gene dataset, with 3,748,259 unobserved SNPs and indels relative to the subset of the 1000 Genomes Project with 1,446,210. We hypothesize that this effect could be driven by an increase in the abundance of rare variants from a wider range of ELGs in the 54gene dataset relative to the 1000 Genomes dataset. However, we cannot rule out that there may be other reasons for this observation due to sampling design, variant-calling strategy, or experimental noise.
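The Ti/Tv ratio used above as a dataset-level quality check is a simple tally over biallelic SNPs: A↔G and C↔T substitutions are transitions, everything else a transversion. A minimal sketch (toy variant list, not the study's call set; genome-wide WGS values typically land near 2):

```python
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def ti_tv_ratio(snps):
    """snps: iterable of (ref, alt) single-base substitutions."""
    ti = tv = 0
    for ref, alt in snps:
        if {ref, alt} <= PURINES or {ref, alt} <= PYRIMIDINES:
            ti += 1  # transition: A<->G or C<->T
        else:
            tv += 1  # transversion: purine <-> pyrimidine
    return ti / tv

# Toy call set: 4 transitions and 2 transversions
calls = [("A", "G"), ("G", "A"), ("C", "T"), ("T", "C"), ("A", "C"), ("G", "T")]
print(ti_tv_ratio(calls))  # → 2.0
```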
We examined the proportions of rare and novel variation across ELGs within our dataset, with the hypothesis that undersampled ELGs may harbor variation unobserved in broader catalogs of human genetic variation. Specifically, we compared counts of known and unobserved variants across the top 15 ELGs in the 54gene cohort (Figure 3). We observe that ELGs where the majority of samples are from southern Nigerian states (see Table S7) qualitatively have lower counts of unknown variants (e.g., Bini from Edo state, Ibibio from Akwa Ibom state, Igbo from Enugu state, Izon from Bayelsa state) relative to individuals from northern and northeastern states (e.g., Bura/Pabir from Borno, Kanuri from Gombe, Tera from Gombe), who tend to have higher numbers of novel variants (Figures 1 and 3A). However, these results remain to be corroborated by larger sample sizes across ELGs in Nigeria.
Comparing the number of rare, uncommon, and common variants across ELGs within our dataset, we see most variation in the rare category as expected. 14,15 Several ELGs show a qualitative decrease in the number of rare variants, particularly the Bura, Fulani, and Kanuri groups (Figure 3B). For the latter two groups at least, we see evidence of Northern African or European admixture (Figure 4), which we hypothesize may play a role in this observation of a decrease in rare variation overall. 16 For the NYGC data, LWK (Luhya from Kenya) had the highest number of novel variants (Figure 3C). An excess of variants common in this population but rare in other populations has been reported previously, attributed to an increased degree of population differentiation relative to other populations within the same continental grouping. 10

Population structure across ELGs in Nigeria
We applied principal-component analysis (PCA) to investigate patterns of population structure across the ELGs in Nigeria. For example, we noted three distinct groups of genetically similar ELGs (Figures 5 and S3). The first consists of colocalized groups of Yoruba, Ibibio, Bini, Igbo, and Izon. A second group consists of Ham and Atyap. A third cluster consists of Tangale, with some overlap with Waja, Bura-Pabir, and Tera. [17][18] We found specifically that the Hausa, Fulani, and Kanuri groups share a higher degree of genetic similarity with Mozabite-ancestry individuals, suggesting higher rates of North African ancestry within these populations from Northern Nigeria. For twelve ELGs sequenced by both Yale and MGO sequencing centers, we did not find a strong bias in genome-wide estimates of genetic ancestry (Figures S4 and S5).
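PCA on genotype data is conventionally run on a standardized sample-by-variant dosage matrix; a minimal NumPy sketch of that idea (random toy genotypes, and our own simplified scaling following the common Patterson-style convention, not the study's exact pipeline):

```python
import numpy as np

def genotype_pca(G, n_components=2):
    """G: (n_samples, n_variants) array of 0/1/2 alternate-allele counts."""
    p = G.mean(axis=0) / 2.0                        # per-variant alt-allele frequency
    keep = (p > 0) & (p < 1)                        # drop monomorphic variants
    G, p = G[:, keep], p[keep]
    Z = (G - 2 * p) / np.sqrt(2 * p * (1 - p))      # center and scale each variant
    U, S, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_components] * S[:n_components]   # sample coordinates on top PCs

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(20, 500))              # toy genotypes: 20 samples x 500 variants
pcs = genotype_pca(G)
print(pcs.shape)  # → (20, 2)
```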
Admixture clustering had the lowest cross-validation error between K = 1 and K = 3 (Figure S6). We found similar patterns of ancestry between the Yoruba and Esan ELGs within our dataset and between the YRI (Yoruba in Ibadan, Nigeria) and ESN (Esan in Nigeria) populations from 1000 Genomes, respectively (Figure 4). Individuals reported as Yoruba, Esan, Igbo, Ibibio, Bini, and Izon showed evidence of similar ancestral composition (Figure 4). The states of origin for individuals from these ELGs tended to be South Western (Oyo), South-South (Bayelsa, Akwa Ibom, Edo), and South Eastern (Enugu) (Figure 1; Table S7). Individuals self-reported as Nupe, Ham, and Atyap differed somewhat from the first group and reflected origins from states that were largely central or central western (Kaduna, Niger). A third group (Waja, Tangale, Bura-Pabir, and Tera) corresponded to central-western and north-western states (Gombe, Borno). Lastly, Fulani, Hausa, and Kanuri stood out as having shared ancestry with North African or European groups (using Mozabite as a proxy for this ancestry and also incorporating European populations from the 1000 Genomes Project in the admixture analysis), corroborating results from PCA (Figure S3).
Variation of clinical importance
To get a broad understanding of the relative frequencies of genetic variation that may be of clinical relevance to our cohort, we subsetted our dataset to annotated variants classified as ''pathogenic'' and having established evidence as being disease causing in the ClinVar Database. Additionally, we stratified variants by whether they belonged to genes from the American College of Medical Genetics and Genomics (ACMG)'s recommended list of 73 genes with reportable variants. 19 We identified a total of 134 variants classified as ''pathogenic'' in our cohort (Table S3). Fourteen individuals from our cohort carried at least one potential reportable ACMG variant: three carried a variant in BRCA2 (associated with breast and ovarian cancers), four carried a variant in BTD (associated with biotinidase deficiency), and two carried a variant in GAA (associated with lysosome-associated glycogen storage disease) (Table S4).
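The filtering step described above reduces to intersecting ClinVar clinical-significance annotations with the ACMG gene list. A hypothetical sketch (the record layout, function name, and the four-gene subset are ours for illustration; the study used the full ACMG list of 73 genes):

```python
ACMG_GENES = {"BRCA1", "BRCA2", "BTD", "GAA"}  # illustrative subset of the 73 ACMG genes

def pathogenic_report(variants):
    """variants: dicts with 'gene' and ClinVar-style 'clin_sig' fields.
    Returns (all pathogenic variants, the ACMG-reportable subset)."""
    pathogenic = [v for v in variants if v["clin_sig"] == "pathogenic"]
    reportable = [v for v in pathogenic if v["gene"] in ACMG_GENES]
    return pathogenic, reportable

# Toy annotated calls for one individual
calls = [
    {"gene": "BRCA2", "clin_sig": "pathogenic"},
    {"gene": "TTN", "clin_sig": "pathogenic"},
    {"gene": "GAA", "clin_sig": "benign"},
]
path, acmg = pathogenic_report(calls)
print(len(path), len(acmg))  # → 2 1
```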
Of the 134 variants identified as ''pathogenic,'' eight were found to have a minor allele frequency (MAF) >5% in at least one of the self-reported ELGs in our cohort (Table S5). These eight variants were further compared to observed allele frequencies available for global populations and African population subsets in GnomAD 14 and the 1000 Genomes Project. 10 Similar to previous comparisons performed, 11 we observed several of these variants with disease associations to rare disorders with an MAF <5% across all populations in GnomAD and the 1000 Genomes Project. Larger sample sizes across these ELGs would be helpful to better understand differences in allele frequencies of these variants across multiple regions in Nigeria. These data could inform more precise classifications of ''pathogenic'' as well as ''likely pathogenic'' variants and could increase confidence when making disease associations across global populations. These results fall within a larger effort to re-examine alleles associated with rare diseases in more comprehensive population reference datasets.
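The per-ELG screen above amounts to computing a minor allele frequency within each group and flagging variants above the 5% threshold. A minimal sketch for one biallelic variant (toy genotypes and field names of our own choosing):

```python
from collections import defaultdict

def maf_by_group(genotypes):
    """genotypes: list of (group, alt_allele_count) pairs, alt count in {0,1,2}.
    Returns {group: minor allele frequency} for one biallelic variant."""
    counts = defaultdict(lambda: [0, 0])   # group -> [alt alleles, total alleles]
    for group, alt in genotypes:
        counts[group][0] += alt
        counts[group][1] += 2              # diploid: two alleles per sample
    return {g: min(a / t, 1 - a / t) for g, (a, t) in counts.items()}

# Toy data: 3 Yoruba samples and 2 Fulani samples at one site
gts = [("Yoruba", 1), ("Yoruba", 0), ("Yoruba", 0), ("Fulani", 2), ("Fulani", 1)]
mafs = maf_by_group(gts)
flagged = {g for g, f in mafs.items() if f > 0.05}  # groups where MAF > 5%
print(round(mafs["Yoruba"], 3), sorted(flagged))  # → 0.167 ['Fulani', 'Yoruba']
```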
Allele frequencies of known variants associated with response to indicated drugs
Understanding how genetic variation impacts drug efficacy and safety across diverse population groups can improve individualized clinical utility of pharmacogenomic profiling. Variants in pharmacogenes such as CYP2C9, CYP4F2, and VKORC1 have been implicated in the efficacy of warfarin, a commonly used anticoagulant for prevention of venous thrombosis, and have been included in pharmacogenomic screens to assess interindividual variability and dosing criteria for warfarin. Common variants in these genes have been found to differ in allele frequency between African- and European-ancestry individuals. 20 To assess the value of studying underrepresented ancestries in pharmacogenomics, we surveyed the frequencies of variants in key pharmacogenes across the ELGs from the 54gene dataset. We then compared the frequencies of variants in these key pharmacogenes across ELGs to selected ancestry groups from the 1000 Genomes Project (Table S6). 21 Several polymorphisms within the CYP4F2 gene encoding for the cytochrome P450 4F2 enzyme have been implicated in altered warfarin sensitivity and metabolism. We note elevated frequencies of pharmacogenomic variants within this gene for ELGs where the majority of samples are from northern states (Hausa, Fulani) relative to other ELGs sampled from the 54gene dataset as well as ancestry groups from the 1000 Genomes Project (Table S6). For example, the variant rs3093105, designated as CYP4F2*2, has a frequency of approximately 40% in the Fulani and Hausa ELGs but is closer to 30% frequency in the Yoruba. 22

Ti/Tv is defined as the ratio of transition (Ti) to transversion (Tv) SNPs, with their interquartile range (IQR) provided (Table 2).
We also observed an elevated frequency of rs2108622 (CYP4F2*3) in the Fulani and Hausa ELGs (15%-17% vs. ~5% in YRI) (Table S6). [24][25] In African populations specifically, there have been little-to-no associations made between the CYP4F2*3 allele and the warfarin dosage response because of the typically low frequency of this allele observed in the available, but limited, data in admixed and sub-Saharan African groups. 26,27 These findings highlight the necessity for added representation of allele frequencies from diverse ELGs, which can improve our understanding of how genetic variability contributes to drug efficacy and how population-specific data may be applied to improve the predictive power of dosing algorithms for commonly indicated drugs. However, there are additional factors to consider beyond the differing allele distributions, such as socioeconomic factors, sampling strategy, and the geographic location and environment of these populations. The analysis performed here only applies to a limited subset of known variants within these genes, and further studies are needed to characterize novel variants in pharmacogenes and their effects on drug efficacy.
DISCUSSION
This report represents an initial assessment using WGS to understand variation within, and the population structure of, some of the predominant ELGs in Nigeria. This resource also demonstrates the capacity for conducting large-scale genome analyses in the region, speaking to the promise of building research capacity on the African continent. 11,28 We present results for several ELGs that have not been previously sequenced or for which there is very little existing publicly available sequence data. We demonstrate that we can observe a discernible population structure among closely related populations, even with limited sample sizes across groups. Our results are consistent with results for populations already sequenced as part of previous efforts, e.g., Yoruba from the 1000 Genomes Project. We have added to this by sampling from a wider set of ELGs across Nigeria. By using the NA19238 African control in addition to the gold-standard NA12878 to perform benchmarking using field-standard strategies, 29 we show that we were able to calibrate our variant-calling pipeline well for variant discovery and generate comparable variant counts between the 54gene dataset and NYGC's AFR sample data.
Using a broader representation of genetic diversity within Nigeria, we find several features of population structure among ELGs in Nigeria. We observe specific groups that are more genetically similar to one another within Nigeria (e.g., Yoruba, Igbo, and Izon). A specific and notable example of population structure is the gradient of North African-related ancestry (approximated using Mozabite individuals) across multiple groups in Nigeria. Previous literature has shown high North African-related ancestry in Fulani individuals, but our analysis here considers this across a much wider range of groups within Nigeria. 16 For example, we find elevation of this ancestry within the Hausa and Kanuri groups from Northern Nigeria as well. A finer-scale resolution of population structure could benefit from more detailed sampling with respect to the ELG of an individual's grandparents. We anticipate that further studies within these groups may shed light on potential trait-associated variants at higher frequencies in specific ELGs relative to the entire Nigerian population (e.g., elevated in Hausa), where each ELG consists of a sizable number of people, highlighting the importance of understanding fine-scale population structure within this region.
In order to derive tangible benefits from genomics research for global populations, making the resulting genomic data and metadata available is essential. However, the accessibility and availability of genomics data remain a persistent challenge for the field. 30 There are notable exceptions to this. The public availability of data from the 1000 Genomes Project, HGDP, and the UK Biobank, to name a few, has removed major barriers to conducting human genetics research, particularly for researchers with limited funding. 10,31,32 There are additional efforts for which a subset of the data are public (e.g., TOPMed, which offers an imputation server but limited direct access to phased data) and others that, though publicly funded, remain difficult to access (e.g., H3Africa whole-genome sequences). We note that the data presented here were not funded by major public grants or other non-profit support, unlike some of the datasets highlighted above. While the data are not completely publicly available, and some level of access control is enforced, we are hopeful that this is a step in the right direction, where both public and private initiatives make every effort to release and share data with the broader research community.
While there is a critical need to facilitate open-access sharing of high-quality genomic data, there is also a need to balance the interests of the researchers generating the data and the ethical and privacy obligations to the participants. Specifically, ensuring the data are used for non-commercial purposes and that the data producers fully benefit from their contributions in the form of formal credit and/or acknowledgment drives progress and capacity building in genomics research in regions such as Nigeria. Ethical use of genomic data requires that there are safeguards for protecting patient privacy, confidentiality, and prevention of data misuse or unauthorized access. Implementing controlled and/or restricted access to genomic data with robust but transparent governance mechanisms allows researchers to find a balance between these challenges. Repositories such as the European Genome-phenome Archive (EGA) and dbGaP can facilitate secure and structured methods of data sharing. While this framework may create barriers in the form of application procedures, documentation, and longer turnaround times from assessment committees, it remains the best current solution to address security concerns. However, the burden of enabling data sharing highlights a larger need to re-evaluate international guidelines and best practices in genomics for effective data sharing to maximize scientific discoveries and health equity.
This resource provides an approach for conducting further population genomic studies in Nigeria using WGS with larger sample sizes to provide more definitive insights into novel or rare variation in certain ELGs and to provide a high-level summary of population structure. Our results also emphasize the utility of publicly available WGS data from under-sampled African populations as a resource to enable better cataloging of genetic variation to drive initiatives in precision medicine, improvement of human reference genomes, and the elucidation of population histories.
Limitations of the study
The sample sizes across the self-reported ELGs in our cohort and their depths of coverage limit the interpretations that can be made from the discovery of clinically relevant variants and potential conclusions that can be made about the distribution of pathogenic disease-associated variants in Nigeria. This also limits our ability to make conclusions on the relative frequencies of novel or known pharmacogenetic variants that exist within the population. Nevertheless, our findings of the relative counts of ACMG-reportable variants and broad comparisons of pathogenic and pharmacogene variant frequency can serve as a template for cataloging variation at the level of ELGs. An additional limitation of the currently generated data is that the lower depth of coverage limits our ability to draw demographic insights from patterns of rare-variant sharing across ELGs. Data of higher depth and quality and increased sample sizes across lesser-represented ELGs will allow for more robust conclusions about complex genomic regions and mutations that could have significant impacts on health or disease outcomes. As more complete demographic and health data emerge for these understudied population groups, we foresee significant opportunities for health interventions that will improve the health and well-being of patients, particularly in areas such as pharmacogenomics.
Figure 1. Overview of collection locations and regional designations within Nigeria
Figure 2. Collection sites in Nigeria where individuals of the 54gene dataset were sampled. (A) States of origin for collected samples. Marker sizes are proportional to the number of individuals collected. All states are listed in Table S1. (B) Reported ethnolinguistic group and state of origin for the top 15 most prevalent groups. Marker size is in proportion to the number of individuals sampled.
Figure 5. Principal-component plot of ethnolinguistic groups listed in Table 1 in addition to Esan from 54gene and 1000 Genomes Project and Yoruba from 1000 Genomes Project and HGDP
Table 2. Counts of variants in high-level classes of functional impact for 54gene and NYGC datasets
Long-term monitoring of wildlife populations for protected area management in Southeast Asia
Long-term monitoring of biodiversity in protected areas (PAs) is critical to assess threats, link conservation action to species outcomes, and facilitate improved management. Yet, rigorous longitudinal monitoring within PAs is rare. In Southeast Asia (SEA), there is a paucity of long-term wildlife monitoring within PAs, and many threatened species lack population estimates from anywhere in their range, making global assessments difficult. Here, we present new abundance estimates and population trends for 11 species between 2010 and 2020, and spatial distributions for 7 species, based on long-term line transect distance sampling surveys in Keo Seima Wildlife Sanctuary in Cambodia. These represent the first robust population estimates for four threatened species from anywhere in their range and are among the first long-term wildlife population trend analyses from the entire SEA region. Our study revealed that arboreal primates and green peafowl (Pavo muticus) generally had either stable or increasing population trends, whereas ungulates and semiarboreal primates generally had declining trends. These results suggest that ground-based threats, such as snares and domestic dogs, are having serious negative effects on terrestrial species. These findings have important conservation implications for PAs across SEA that face similar threats yet lack reliable monitoring data.
INTRODUCTION
Biodiversity is declining worldwide as unsustainable human activities drive the degradation and loss of natural habitats and overexploitation of species (Leung et al., 2020; Mokany et al., 2020).
Global efforts to protect habitats and slow biodiversity decline are structured within the Convention on Biological Diversity (CBD; https://www.cbd.int). The Aichi Biodiversity Targets within the Strategic Plan for Biodiversity 2011-2020 identify protected areas (PAs) as key tools for improving the status of biodiversity; Target 11 outlines explicit targets for PA coverage (CBD, 2010). Historically seen as critical tools for conservation (Margules & Pressey, 2000), PAs provide the most likely refuges for biodiversity in increasingly human-dominated landscapes (Bruner et al., 2001). However, increasing PA size and coverage does not guarantee improved conservation outcomes (Armsworth et al., 2018; Bruner et al., 2004) and in some cases can have perverse consequences such as reduced management capacity across a PA network. PAs must be adequately resourced and managed in order to fulfill their potential to maintain viable biological populations in the context of increasing human pressure (Coad, Watson, et al., 2019; Geldmann et al., 2018).
Effective monitoring using appropriate biodiversity indicators is critical for PA managers to make informed decisions and assess conservation actions, thus allowing improved management over time (Dixon et al., 2019). Yet, rigorous longitudinal monitoring within PAs is often lacking (B. B. Hughes et al., 2017), hampering informed decision-making and effective deployment of resources. A lack of monitoring systems and frameworks to assess management effectiveness is a common challenge facing PAs; only 9.4% of CBD signatories have assessed half or more of their PAs for effectiveness (Secretariat of the CBD, 2020). Assessing PA performance requires well-designed monitoring regimes that provide reliable, informative, and appropriate metrics of biodiversity over time (White, 2019). The critical role PAs play in halting biodiversity decline is emphasized in the Post-2020 Global Biodiversity Framework, which is currently being negotiated to replace the 2011-2020 Strategic Plan and includes quantitative biodiversity targets (CBD, 2020a). Therefore, the ability to assess PA efficacy and link conservation action to species outcomes, for which effective long-term monitoring is essential, will become increasingly important.
Southeast Asia (SEA) is characterized by exceptional faunal diversity and endemism (A. C. Hughes, 2017) yet has the highest rate of increase in extinction risk globally (Hoffmann et al., 2010). This region has the highest percentage of the world's threatened plants, reptiles, birds, and mammals (Sodhi et al., 2010) and one of the highest rates of deforestation globally (A. C. Hughes, 2017). Hunting in particular is an urgent threat (Gray et al., 2017). Increasing demand for wild meat and wildlife products, both domestically and for international trade, is driving unsustainable levels of hunting within SEA's forests (Gray et al., 2018; Harrison et al., 2016; Heinrich et al., 2020). Despite the urgency, there is a paucity of long-term quantitative data on wildlife populations in SEA. In many cases, species of conservation interest lack even a single estimate of population size (Table 1), making it hard to assess the performance of individual PAs and national and regional conservation programs. Empirical data are needed to make evidence-based decisions on PA management, evaluate the impact of past action (Geldmann et al., 2018), and increase the accuracy and utility of global assessments of status and trends. Understanding how wildlife populations respond to anthropogenic pressure is of particular importance in PAs, given their role in safeguarding species' persistence (Watson et al., 2014).
In this paper, we present 10 years of wildlife population monitoring from Keo Seima Wildlife Sanctuary (KSWS) in Cambodia, a globally important site for several species (Nuttall et al., 2017), to help address the knowledge gap created by the lack of empirical data on wildlife populations in SEA. Many of the species in our study lack a single reliable population estimate from anywhere else in their range (Table 1). We provide abundance estimates for 11 species within KSWS between 2010 and 2020 and model their population trends over time. We also provide spatial distributions for seven of the species, for which adequate data were obtained. Our study is among the first in the literature to report long-term wildlife population trends with absolute estimates from SEA. We highlight the importance of these results for SEA and International Union for the Conservation of Nature (IUCN) Red List status assessments and for evaluating conservation action and future conservation decision-making in KSWS. Finally, we discuss the need for long-term monitoring in PAs and the implications of our results for conservation programs across SEA.
2 | METHODS

2.1 | Study site

KSWS (12.3346° N, 106.8418° E; formerly Seima Biodiversity Conservation Area and Seima Protection Forest) falls within Mondulkiri and Kratie provinces in eastern Cambodia. It has an area of 2927 km², sharing its southeastern edge with Vietnam (Figure 1). Our 1880 km² study area is the former core zone (Figure 1). KSWS is characterized by a diverse mosaic of habitats; the southeastern area extends into the Southern Annamite Mountain Range with higher altitudinal mountainous topography and dense evergreen and semievergreen forest (Evans et al., 2013). The central and western areas form the edge of the Eastern Plains Landscape, dominated by low altitudes and dry deciduous dipterocarp forests (Evans et al., 2013; O'Kelly et al., 2012). Complementing the altitudinal and habitat gradients are seminatural grasslands and seasonal and permanent water bodies that together support rich biodiversity (Griffin & Nuttall, 2020; Nuttall et al., 2017).
| Data collection
Data were collected jointly by the Wildlife Conservation Society (WCS) and the Forestry Administration of the Royal Government of Cambodia (RGC) between 2010 and 2016, and by WCS and the Ministry of Environment of the RGC in 2018 and 2020. Forty square line transects of 4 km length were arranged throughout KSWS in a systematic grid with a random start point. Field teams conducted distance sampling surveys along these line transects in 2010, 2011, 2013, 2014, 2016, 2018, and 2020. Teams recorded visual observations of a predefined set of 11 species: those listed as Threatened on the IUCN Red List, easily detected on line transects, or both. The target species were southern yellow-cheeked crested gibbon (Nomascus gabriellae, hereafter "gibbon"), black-shanked douc (Pygathrix nigripes, hereafter "douc"), Germain's silver langur (Trachypithecus germaini, hereafter "langur"), long-tailed macaque (Macaca fascicularis, hereafter "LT macaque"), northern pig-tailed macaque (Macaca leonina, hereafter "PT macaque"), stump-tailed macaque (Macaca arctoides, hereafter "ST macaque"), banteng (Bos javanicus), gaur (Bos gaurus), northern red muntjac (Muntiacus vaginalis, hereafter "muntjac"), wild pig (Sus scrofa, hereafter "pig"), and green peafowl (Pavo muticus, hereafter "peafowl"). See Table 1 for the target species' global status, threats, and existing population estimates. Surveys were conducted during the drier months of December-June. Temporal replication was achieved through multiple visits to each transect within each year. Field teams visited transects for between 1 and 8 days at a time and conducted surveys twice a day, at dawn and dusk. Teams recorded only direct visual observations of target species. Laser rangefinders and compasses were used to measure distances and angles from the line transect to detected objects, which constituted either isolated individuals or spatially aggregated individuals (clusters), and cluster sizes were recorded.
Distances were measured to the geometric center of clusters. Perpendicular distances from detected objects to the line transect were calculated prior to analysis. Additional data collected for quality assurance and covariate modeling included date, time, observer name, location of observer, and habitat type (2013 onward). Field protocols followed standard line transect methodology outlined in Buckland et al. (2001) and were consistent between years.
| Annual abundance estimates
We used the conventional distance sampling framework (Buckland et al., 2001) to obtain point estimates of individual density and abundance for each species in each survey year. Only douc had sufficient within-year observations to allow for annual detection functions to be estimated. For the remaining species, distance data from all years were pooled in order to improve the model fit for detection function estimation (Buckland et al., 2001). To account for potential heterogeneity in detection between years, a scaled continuous year variable was tested for all species except douc. We fitted detection function models using the R package distance (version 0.9.8; Miller, Rexstad, Thomas, et al., 2019; R Core Team, 2017). Distance data for all species were truncated to improve model fitting and reduce bias (Buckland et al., 2001). We explored models with uniform, half-normal, and hazard rate key functions and cosine, simple polynomial, and Hermite polynomial adjustments, both with and without observation-level covariates (Supporting Information). For further details on density, abundance, and variance estimation in distance sampling, see Buckland et al. (2001, 2004, 2015) and Fewster et al. (2009).
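The core of the detection-function step can be illustrated with a minimal, self-contained sketch. The study fitted its models in the R package distance; the Python code below (with simulated perpendicular distances, an invented true scale parameter, and the total effort figure from this study used purely for illustration) shows how a half-normal detection function is fitted by maximum likelihood within a truncation distance w, and how the resulting average detection probability p̂ feeds a density estimate. This is a hedged sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Hedged sketch: half-normal detection function fitted by maximum
# likelihood, as in conventional distance sampling (Buckland et al., 2001).
# All data are simulated; the study itself used the R package distance.

rng = np.random.default_rng(42)
w = 50.0            # truncation distance (m)
sigma_true = 25.0   # invented detection-scale parameter for the simulation

# Simulate perpendicular distances: animals uniform in [0, w], each
# detected with probability g(x) = exp(-x^2 / (2 sigma^2))
x = rng.uniform(0.0, w, 5000)
detected = rng.uniform(size=x.size) < np.exp(-x**2 / (2 * sigma_true**2))
dists = x[detected]

def neg_log_lik(sigma):
    # pdf of observed distances: f(x) = g(x) / mu, with
    # mu = integral of g over [0, w] = sigma * sqrt(2*pi) * (Phi(w/sigma) - 0.5)
    mu = sigma * np.sqrt(2 * np.pi) * (norm.cdf(w / sigma) - 0.5)
    return -np.sum(-dists**2 / (2 * sigma**2) - np.log(mu))

sigma_hat = minimize_scalar(neg_log_lik, bounds=(1.0, 200.0),
                            method="bounded").x

# Average detection probability within the strip, then a density estimate
mu_hat = sigma_hat * np.sqrt(2 * np.pi) * (norm.cdf(w / sigma_hat) - 0.5)
p_hat = mu_hat / w
L = 9460.0 * 1000.0                        # total line length in metres
D_hat = dists.size / (2 * w * L * p_hat)   # detections per m^2 of strip
```

In practice the pooled-data strategy in the text corresponds to fitting one such model per species on all years combined, with candidate key functions and covariates compared before density is estimated.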
| Temporal population trends
We used generalized additive models (GAMs) combined with bootstrapping (Hamilton et al., 2018) to estimate long-term population trends. The original systematic sampling design ensured representative coverage of habitat types, so we employed a bootstrap scheme that would preserve this property (Supporting Information). Each transect was categorized by habitat as either dense or open forest. Transects were sampled with replacement within each category until total within-category effort across all years equaled that of the original data. We fitted detection functions to each set of replicate data and fitted a GAM to the resulting annual abundance estimates to generate a temporal trend curve for each replicate. This process was repeated 2000 times per species. The 50%, 2.5%, and 97.5% quantiles from the replicate GAM curves were extracted pointwise to generate overall population trends and 95% confidence intervals (Fewster et al., 2000). The trend from a single bootstrap replicate was considered positive if the predicted estimate from 2020 was higher than that from 2010 and negative if the opposite was true. The overall trend for a given species was reported as significant if at least 95% of replicates agreed on trend direction; otherwise, the species was classified as stable. Banteng had insufficient observations to support the bootstrap procedure, precluding computation of confidence intervals and trend significance, so a single GAM was fitted to the annual abundance estimates produced from distance sampling analysis.
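The bootstrap trend-classification rule described above can be sketched as follows. This is a hedged Python illustration with simulated annual abundance estimates for a hypothetical declining species: a cubic polynomial stands in for the GAM smoother, and the resampling of transects within habitat strata and refitting of detection functions are omitted.

```python
import numpy as np

# Hedged sketch of the bootstrap trend-classification logic. Annual
# abundance estimates are simulated; the real analysis resamples
# transects within habitat strata and refits detection functions.

rng = np.random.default_rng(1)
years = np.array([2010, 2011, 2013, 2014, 2016, 2018, 2020])

def one_replicate():
    true = 1000.0 - 40.0 * (years - 2010)        # invented declining trend
    est = true * rng.lognormal(0.0, 0.1, size=years.size)
    coef = np.polyfit(years - 2010, est, deg=3)  # stand-in smoother
    pred = np.polyval(coef, years - 2010)
    return pred[-1] - pred[0]                    # 2020 minus 2010 prediction

diffs = np.array([one_replicate() for _ in range(2000)])
prop_negative = np.mean(diffs < 0)

# A trend is reported only if >=95% of replicates agree on its direction;
# otherwise the species is classified as stable
if prop_negative >= 0.95:
    trend = "negative"
elif prop_negative <= 0.05:
    trend = "positive"
else:
    trend = "stable"
```

The pointwise 50%, 2.5%, and 97.5% quantiles of the 2000 replicate curves would, in the full analysis, give the trend line and its 95% confidence ribbon.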
| Spatial analysis
We conducted spatial analyses to examine the distribution of each species across KSWS and link relative abundance to spatial covariates. The number of within-year observations for each species was generally low (Table S2), and so to support the spatial modeling, we combined data from all years into a single analysis, creating a map of relative abundance spanning the whole study period for each species. If a species had fewer than 50 observations from the whole study period, it was excluded from the spatial analysis.
Line transects were partitioned into equally sized, discrete spatial segments, and wildlife observations were allocated to the segment within which they fell. We inspected distance data for all species to identify an appropriate single truncation distance that was used to establish an effective strip width W and subsequent segment size (Buckland et al., 2004). We chose a truncation distance of 50 m, which resulted in segments of size 100 m × 100 m and between 0% and 27% of observations furthest from the line being discarded. The per-segment abundance was estimated using a Horvitz-Thompson-like estimator (Buckland et al., 2004) and adjusted for imperfect detection using the species-specific detection function selected in the abundance estimation process above. GAMs were then used to quantify the relationship between the estimated abundance in each segment and the supplied covariates (Buckland et al., 2004; Wood, 2006). For covariate data, we acquired spatial data sets for several environmental and anthropogenic variables that were hypothesized to relate to animal abundance in KSWS. These were within-segment habitat, elevation, distance to water bodies, distance to human settlements, distance to ranger stations, distance to the Vietnamese border, and latitude and longitude (Supporting Information). The distance to the Vietnam border covariate was included to capture factors such as cross-border wildlife trade and hunting (Harrison et al., 2016).
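A minimal sketch of the per-segment abundance step: detections within the truncation distance are binned into 100 m transect segments and scaled by 1/p, the Horvitz-Thompson-style correction for imperfect detection. The detection probability and observations below are invented for illustration; in the study, p comes from each species' fitted detection function.

```python
import numpy as np

# Hedged sketch of segment-level abundance for the spatial models.
# All values are invented; real p_hat comes from the fitted detection
# function for each species.

seg_len = 100.0   # segment length along the transect (m)
w = 50.0          # one-sided truncation distance (m)
p_hat = 0.6       # assumed average detection probability within w

# (distance along transect in m, perpendicular distance in m, cluster size)
obs = [(30.0, 10.0, 2), (80.0, 60.0, 5), (140.0, 45.0, 1), (160.0, 20.0, 3)]

transect_length = 4000.0
n_segments = int(transect_length // seg_len)
abundance = np.zeros(n_segments)

for along, perp, size in obs:
    if perp > w:
        continue                     # truncated: beyond the strip
    seg = int(along // seg_len)
    abundance[seg] += size / p_hat   # Horvitz-Thompson-style correction

# Each segment covers an area of seg_len * 2w; these per-segment counts
# are the response later modeled with GAMs against spatial covariates.
```

With the invented values above, the second observation is discarded by truncation, and each remaining cluster contributes its size divided by p̂ to its segment's estimated abundance.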
We ran three groups of models for each species, with each model group assuming a different response distribution (response = number of groups or individuals in a segment): quasi-Poisson, Tweedie, or negative binomial.
We conducted model selection using a combination of diagnostic plot assessment and AIC for Tweedie and negative binomial distributions and analysis of variance for the quasi-Poisson distribution. We retained the habitat variable in all models based on our knowledge of the importance of habitat for the species in this study. Each final model was tested for autocorrelation (see Supporting Information for further details on modeling approach). The selected GAM for each species and a prediction grid with 200 m × 200 m cells were used to predict relative abundance for each species over the study area. Spatial analyses were conducted in the R package dsm (Miller, Rexstad, Burt, et al., 2019).

FIGURE 2: Annual abundance estimates (gray points) and population trend (black line) for 11 species in Keo Seima Wildlife Sanctuary between 2010 and 2020. (A) Species with increasing or stable population trends; (B) species with declining population trends. Hollow points denote zero observations in that year. Error bars around the annual abundance estimates, and gray error ribbons around the trend lines, denote 95% confidence intervals. Bootstrapping was not possible for banteng, so confidence intervals were not produced. PT macaque = northern pig-tailed macaque; peafowl = green peafowl; gibbon = southern yellow-cheeked crested gibbon; douc = black-shanked douc; LT macaque = long-tailed macaque; langur = Germain's silver langur; ST macaque = stump-tailed macaque; muntjac = northern red muntjac; pig = wild pig.
| RESULTS

| Annual abundance estimates
Effort across all transects and years was 9460 km, resulting in 5056 observations across the study period.
The minimum and maximum annual effort was 1260 km (2013) and 1600 km (2010), resulting in 588 and 729 observations, respectively (Table S2). In 2020, the most abundant species among those with increasing populations was PT macaque (estimated abundance 3929 individuals, 95% CI = [2457, 6284], Table 2; encounter rate 0.18 km⁻¹, Table S3), while the least abundant species among those with declining populations was banteng, which was not observed in 2020 (Table 2). The most abundant species overall was douc (estimated abundance 24,929 individuals, 95% CI = [16,241, 38,266], Table 2; encounter rate 1.08 km⁻¹ in 2020, Table S3). Cluster size and year were the most frequently retained covariates in the detection function models (six species). Observer and habitat were retained for douc only (Table S3).
FIGURE 3: Predicted spatial distribution and relative abundance for seven species in Keo Seima Wildlife Sanctuary over the study period, 2010-2020. Relative abundance categories denote predicted species-specific abundance above the 75% quantile ("high"), between the 50% and 75% quantiles ("medium"), between the 25% and 50% quantiles ("low"), and below the 25% quantile ("very low"). See Supporting Information for corresponding maps of the coefficient of variation for the above species.
| Temporal population trends
Significant trends were detected for six species: two positive (PT macaque and peafowl) and four negative (ST macaque, gaur, muntjac, and wild pig; Table 2 and Figure 2). Trends for four species that did not reach 95% directional agreement among replicates were recorded as stable (Table 2). Trend agreement among replicates for ST macaque and muntjac (both negative) was 100% (Table 2).
| Spatial analysis
Results for banteng, gaur, and ST macaque were excluded because of too few observations. Pig results were excluded because of poor model fit (<5% deviance explained). Final models for the remaining seven species ranged in deviance explained from 16.3% (muntjac) to 66.1% (langur, Table S5). The median coefficient of variation for the spatial predictions for each species ranged from 19% (muntjac) to 125% (langur). Coefficients of variation were high in areas with few or no observations but were generally low (<40%) in areas with high predicted relative abundance (Figure S8). Distribution and relative abundance were heterogeneous among species (Figure 3). Species with known preference for evergreen and semievergreen forest (gibbon, douc, and PT macaque) had higher predicted relative abundance in the central and southeastern sections of KSWS where this habitat is dominant (Figure 3). Peafowl, muntjac, and langur had highest predicted relative abundance in mosaic habitat and open deciduous forest (peafowl and muntjac: central, north, and northwest; langur: northwest and southwest). Long-tailed macaque had highest predicted relative abundance in areas of KSWS that range from mosaic to open deciduous forest (central and northeast). Distance to the Vietnamese border was the most commonly retained spatial covariate (six species), followed by distance to water and distance to ranger station (five), elevation (four), and distance to settlement (two; Table S5).
| DISCUSSION
Long-term monitoring of biological populations is critical for conservation science and policy (B. B. Hughes et al., 2017). Multiyear data sets provide baselines against which conservation efforts can be judged (Magurran et al., 2010) and are important for monitoring PA effectiveness (Geldmann et al., 2018). We have presented population estimates and temporal trends for 11 species over one decade in a large and globally significant PA. These include the first robust estimates for one critically endangered (douc), one endangered (langur), and two vulnerable (PT and ST macaques) primates from anywhere in their ranges. We are aware of only one other study in the literature that presents long-term wildlife population trends in SEA based on absolute abundance estimates rather than uncalibrated indices (Duangchantrasiri et al., 2016; also see Groenenberg et al., 2020). Therefore, our results provide critical information for global status assessments, underpin evaluations of management effectiveness in KSWS, and inform management options in PAs with similar threats regionally.
Spatial modeling indicated that species distributions vary widely, with no clear commonality among species with declining population trends or among those with stable populations. This lack of commonality suggests that population trends are not associated with a particular habitat or area within KSWS but rather are driven by factors associated with species ecology and behavior. The exception is the border with Vietnam, which is a spatial attribute associated with declining abundance. The declining species in our study are ungulates and the single primate that is predominantly ground dwelling, whereas arboreal and semiarboreal primates and peafowl have stable or increasing populations. These results indicate that ground-based threats are likely to be the primary drivers of species decline, in particular implicating snares and free-ranging domestic dogs.
| Declining populations
Models for all species except langur showed decreased relative abundance closer to the Vietnamese border. Douc, gibbon, and PT macaque prefer evergreen and semievergreen forest (Nadler et al., 2007; Rawson et al., 2009), which dominate the border area. Long-tailed macaque is a generalist occupying a range of habitats (Hansen et al., 2019). Therefore, higher densities would be expected near the border based on habitat characteristics alone. The likely explanation for the contradictory pattern observed is that parts of KSWS in close proximity to the border have been hot spots for illegal cross-border activities throughout the study period, including illegal logging and hunting with firearms and snares (Evans et al., 2013; Ibbett et al., 2020; O'Kelly et al., 2018a). Snare density increases with proximity to the Vietnamese border (O'Kelly et al., 2018b), with high volumes of illegal incursions into KSWS driven by demand for wild meat and wildlife products from Vietnam (Shairp et al., 2016). Snaring is prevalent in Cambodian PAs more generally (Belecky & Gray, 2020; Coad, Lim, & Nuon, 2019). The scale of the snaring problem in a given area is difficult to quantify due to inherent biases in snare removal data resulting from issues with detectability and sampling, although reliable methods have recently been developed (O'Kelly et al., 2018a, 2018b). In 2015, nearly 28,000 snares were removed from Southern Cardamom National Park in southern Cambodia (Gray et al., 2018). In KSWS, 36% of survey respondents reported engaging in hunting and 20% reported laying snares to protect crops (Ibbett et al., 2020). These data suggest that snares may be a primary contributor to regional wildlife population declines.
There is substantial evidence that free-ranging and feral dogs can have negative effects on wildlife populations (J. Hughes & Macdonald, 2013; Young et al., 2011), and these effects are particularly severe in SEA (Doherty et al., 2017). Domestic dogs are commonly used by local communities in Cambodia for hunting inside PAs (Coad, Lim, & Nuon, 2019; Ibbett et al., 2020). In KSWS, 79% of households own dogs and nearly 50% of households take dogs with them into the forest (Ibbett et al., 2020). The number of domestic dogs in KSWS may be as high as 4000, corresponding to 1.36 km⁻² (Ibbett et al., 2020), which would make the density of domestic dogs several times greater than that of any monitored ungulate. Therefore, it is likely that free-ranging and feral dogs, in addition to widespread snaring, are contributing to declines in ground-based species in KSWS.
The population trend for pig, although exhibiting an overall decline, follows a fluctuating pattern that possibly reflects factors additional to the threats mentioned above. Pigs are highly fecund, and their density-dependent populations can fluctuate dramatically based on food availability and disease (Gentle et al., 2019; Sánchez-Cordón et al., 2019). African swine fever is a plausible contributing factor to pig declines, as the disease has been recorded in Cambodia and can have severe negative effects on wild pig populations (Ikeda et al., 2020; Marinov et al., 2020). Pigs are resilient to relatively high levels of hunting, so the population may be able to rebound quickly if the decline is due to disease or food shortages (Steinmetz et al., 2010).
Although the most prevalent direct causes of wildlife mortality in KSWS are likely to be snares and free-ranging dogs, the broader drivers are more complex. Food insecurity, shifting livelihood strategies, a preference for wild over domestic meat, traditional medicines, targeted hunting by outsiders, increasing debt burdens caused by agricultural and socioeconomic fluctuations, changing perceptions of law enforcement effectiveness, and increased access to local markets are all interacting factors that contribute to hunting of wildlife in KSWS (Ibbett et al., 2020).
| Stable and increasing populations
We found that gibbon, douc, PT macaque, LT macaque, langur, and peafowl showed stable or increasing population trends. Arboreal primates and birds are less vulnerable than ground-based mammals to hunting with snares and dogs but can be targeted with firearms. The number of firearms in Cambodia has reduced in recent years, and access to firearms has become more difficult (Dyke, 2006). Although some species, including langur and LT macaque, are used in traditional medicine, human consumption of primates is less common in Cambodia than in neighboring Vietnam (Alves et al., 2010). The reduction in firearms and the absence of a strong cultural propensity for primate consumption together may have allowed arboreal primate populations to remain stable. Nevertheless, hunting of primates with firearms, as well as traditional projectile weapons such as crossbows, persists in KSWS (Ibbett et al., 2020), and it is likely to increase if there is continued unregulated movement of people from Vietnam into KSWS with associated illegal hunting and logging activities. The relative scarcity of primates in adjacent Vietnamese PAs means that KSWS has the potential to become a source for the primate trade in Vietnam.
During the study period, there has been large-scale deforestation outside the study area, driven primarily by industrial-scale agriculture in the form of land concessions, and subsequent leakage of illegal land clearance around concessions. In 2010, a Reduced Emissions from Deforestation and Forest Degradation (REDD+) project was initiated in KSWS. This project has provided financial incentives to the RGC and local communities to reduce forest loss in the study area; consequently, forest cover has remained largely intact. An estimated 25,000 ha of forest loss has been avoided because of the REDD+ project (McMahon et al., 2020). Maintenance of forest cover is likely to be another factor supporting stable and increasing population trends for arboreal primates, particularly of gibbon, douc, and langur, which are forest-dependent. Our abundance estimates for douc and gibbon suggest that populations in KSWS are likely to be the largest cohesive populations of these species globally (Duc, Quyet, et al., 2020; Rawson et al., 2020), although for douc these are the first peer-reviewed abundance estimates to be published. Abundance estimates for langur suggest KSWS is also a globally important site for this species, although comparison between sites is challenging due to a lack of published population estimates (Moody, 2018).
It is not clear what is causing the apparent difference in trends between LT and PT macaque, but there are several possibilities. Widespread live capture of LT macaques to supplement so-called monkey farms (Lee, 2011) in Vietnam and China, which in turn supply the international biomedical and laboratory trade, is known to have been occurring in Cambodia since 2003 (Eudey, 2008). This practice was reported from the northeast of the country, and specifically in KSWS, from 2006 onward (Lee, 2011;Pollard et al., 2007;Rawson et al., 2007), but there has been little evidence of this practice in KSWS in recent years. A second plausible explanation is the tolerance of LT macaque to a range of habitats, including urban and agricultural areas (Eudey, 2008), which in KSWS will expose the species to a higher density of snares and dogs and opportunistic hunting in parts of its range. PT macaques, although adaptable, prefer dense evergreen and semievergreen forest where available and are therefore less exposed to anthropogenic threats. A decline in LT macaque over time may be reducing resource competition with PT macaque, thus facilitating population increase in PT macaque.
Peafowl are predominantly ground-based, yet they have experienced a population increase over the study period. Population recovery of peafowl is rarely recorded in the literature as this species is suffering from habitat loss and hunting across its range, generally leading to population declines (e.g., Sukumal et al., 2015). Nevertheless, when threats are reduced, population recovery can occur (e.g., Sukumal et al., 2017). It is unclear what has caused the increase in peafowl abundance in KSWS. The population density in KSWS is much lower than in other areas, even within Cambodia (see Loveridge et al., 2017), suggesting scope for substantial population increases under favorable conditions. Peafowl mortality resulting from ground-based human threats could be lower than that of ungulates for several reasons. They are less vulnerable to dogs, as they can retreat into trees when approached, and they prefer open deciduous habitat, which is found in the central, northern, and western regions of KSWS, out of reach of the Vietnam border and the larger human population centers in the south of KSWS.
| Implications for Keo Seima Wildlife Sanctuary
KSWS has been officially protected for nearly two decades and over the last decade has benefited from a greater level of conservation investment than most other PAs in Cambodia. KSWS has one of the largest law enforcement teams within any Cambodian PA as well as a range of other programs including indigenous land tenure, community PAs, ecotourism development, and REDD+. Despite operational budgets that are relatively high in the context of Cambodia, the resources available to KSWS managers are well below international benchmarks. For example, KSWS has less than 10% of the recommended law enforcement ratio of one ranger per 5 km² (IUCN, 2016). Our results demonstrate that charismatic and ecologically important species are heading rapidly toward local extirpation, trends that are replicated in other Cambodian PAs (Groenenberg et al., 2020). Substantially more investment, particularly into ranger staffing levels, will be required to reverse current species trends. Recent developments in the voluntary carbon markets and Cambodia's decision to support both project and national REDD+ programs suggest this may be achieved in a sustainable manner through REDD+.
Historically, law enforcement efforts in KSWS have been disproportionately focused on illegal logging of luxury timber; this trend has been seen in PAs across the country and was a result of national policies and widespread media attention targeting the economically valuable timber trade. These efforts take place at the expense of combatting wildlife crime, with less attention focused on addressing species declines. Although there have been successes in reducing deforestation compared to the without-project scenario, and an extensive indigenous community land titling program that has increased indigenous tenure within KSWS, there have been no community-engagement initiatives dedicated to reducing illegal hunting. Community-led law enforcement patrols have been operational in KSWS throughout most of the study period, but these have largely prioritized illegal logging and forest clearance.
The monitoring program in KSWS represents a long-term commitment by RGC and WCS to provide PA managers with rigorous data to inform management action. Our results suggest that, for effective conservation management to provide benefits to forests, biodiversity, and communities, interventions must be scaled up across the board and, within law enforcement, a greater focus on poaching is needed, targeting illegal hunting with snares, weapons, and dogs. Most people in KSWS hunt wildlife for subsistence, as a source of additional income, for medicinal purposes, or to protect crops (Ibbett et al., 2020). Therefore, the community-focused conservation programs within KSWS, which include community engagement and livelihood development, should explore and develop approaches to reduce the community reliance on wild meat, promote domestic sources of protein, improve food security and livelihoods more generally, and offer nonlethal crop protection strategies. Such approaches may be more effective and enduring than law enforcement alone. For detailed management recommendations for KSWS and the Eastern Plains Landscape more broadly, see Griffin and Nuttall (2020) and Groenenberg et al. (2020).
| Broader implications for SEA
Ten of the 11 species monitored in KSWS are estimated to have declining global populations (Table 1, www.iucnredlist.org), yet our results show that six of these species have stable or increasing populations in KSWS. The remaining five ground-based species have decreasing population trends in KSWS that mirror global population trends. The striking divide we have uncovered between ground-based and arboreal species has important conservation implications for these species throughout their range. Significant declines in KSWS of species such as muntjac, which are generally widespread and common, are concerning as they suggest that sustained anthropogenic pressure can lead to population collapses, even for resilient species. Equally, results for arboreal primates and peafowl from KSWS suggest that when hunting pressure remains low and forest cover is maintained, species populations within a site can remain stable.
Our findings will be valuable for future IUCN Red List assessments and regional conservation planning. We have demonstrated how robust monitoring within KSWS has provided critical information for assessing the impact of past management action, for example, reduced forest loss through the REDD+ program, by linking it to species outcomes such as stable primate populations. Our results can guide future management decisions including increased antisnare efforts and strategic, targeted deployment of resources based on species distributions.
These results also have wider implications for both species conservation and PA management. First, the species trends and potential drivers of population declines seen in KSWS are likely to be replicated in PAs across SEA. Hunting of wildlife for consumption, trophies, and trade is widespread in SEA and has resulted in species extinctions (Brook et al., 2014). Hunting with snares and free-ranging dogs (hunting and feral dogs) in particular represent two of the most serious threats to wildlife populations across SEA. Population declines in terrestrial mammals driven by snaring and free-ranging dogs are likely to be occurring in PAs across SEA where pressure from such threats is high, conservation investment and resources are low, and awareness is limited by inadequate monitoring. In PAs across the region where these threats are known to exist, this study suggests that managers should target resources at antisnare efforts and management of free-ranging dogs to protect populations of terrestrial species.
Second, monitoring biodiversity via appropriate indicators is essential to allow the attribution of species outcomes to conservation action. The establishment of a robust monitoring framework is prioritized in the Post-2020 Global Biodiversity report (CBD, 2020b). Monitoring is particularly important within PAs as their primary function is the conservation of biodiversity. Continued efforts to increase global PA coverage, driven by Aichi Target 11 (CBD, 2010), have seen some success, with over 15% of the Earth's terrestrial surface and 7% of oceans legally protected (United Nations Environment Programme World Conservation Monitoring Centre, IUCN, & National Geographic Society, 2020). Yet, evidence linking management action to biodiversity outcomes within PAs is sparse (Geldmann et al., 2018). For PAs where protection of wildlife is a primary objective, long-term data sets on wildlife populations are critical for understanding population dynamics, evaluating extinction risk, informing management action, and assessing interventions (Magurran et al., 2010; White, 2019). Despite the significant contribution that long-term data sets make to conservation research and policy, investment in the collection of such data is falling (B. B. Hughes et al., 2017). There is an urgent need for robust long-term wildlife monitoring data in SEA to understand the effects that hunting, wildlife trade, and other threats are having on already fragmented populations, to support conservation decision-making and assessment, and ultimately to avoid species extinctions.
MN was funded by the Natural Environment Research Council. OG, VS, field teams, and the fieldwork were funded by USAID, AFD, USFWS, GEF-5 (CAMPAS), and KSWS REDD+. We are grateful to E. Rextad for analytical guidance on distance sampling, S. Mahood and K. Nuttall for comments on early versions of the manuscript, and H. Washington for technical editing and proofreading. We are grateful to the Royal Government of Cambodia for support and facilitation of biodiversity monitoring in KSWS. We thank the reviewers who provided thoughtful comments that improved this paper. Final thanks go to all past and present members of the KSWS Monitoring Team and the local communities who have supported them.
CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
DATA AVAILABILITY STATEMENT
Raw wildlife data used in this study is publicly available on the Global Biodiversity Information Facility (GBIF) at https://doi.org/10.15468/37thhj. Some raw data are excluded from public access due to their sensitive nature (i.e., locations of threatened species that are vulnerable to hunting) but can be requested from the authors. The R code used for the analysis is available at https://github.com/mattnuttall00/PaperCode_LongTermMonitoringSEA.
ETHICS STATEMENT
This study received ethics approval from the University of Stirling (AWERB/1920/031).
"year": 2022,
"sha1": "3bd922cc1fc63d43d47702ed1c075bab1b494f35",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/csp2.614",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a05aed35a781d44030bd45d20f94c90f3696ce41",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
MicroRNAs regulate osteogenesis and chondrogenesis of mouse bone marrow stromal cells.
MicroRNAs (miRNAs) are non-coding RNAs that bind to target mRNA leading to translational arrest or mRNA degradation. To study miRNA-mediated regulation of osteogenesis and chondrogenesis, we compared the expression of 35 miRNAs in osteoblasts and chondroblasts derived from mouse marrow stromal cells (MSCs). Differentiation of MSCs resulted in up- or downregulation of several miRNAs, with miR-199a expression being over 10-fold higher in chondroblasts than in undifferentiated MSCs. In addition, miR-124a was strongly upregulated during chondrogenesis while the expression of miR-96 was substantially suppressed. A systems biological analysis of the potential miRNA target genes and their interaction networks was combined with promoter analysis. These studies link the differentially expressed miRNAs to collagen synthesis and hypoxia, key pathways related to bone and cartilage physiology. The global regulatory networks described here suggest for the first time how miRNAs and transcription factors are capable of fine-tuning the osteogenic and chondrogenic differentiation of mouse MSCs.
Introduction
MicroRNAs (miRNAs) are small non-coding RNA molecules that bind to the 3' untranslated region of mRNAs and, depending on their degree of complementarity with the target genes, induce translational repression or mRNA degradation. 1 Since the first identification of miRNAs in 1993, 2 hundreds of miRNAs have been identified from plants, animals and viruses. 3 Subsequently, miRNAs have proven to play essential roles in diverse biological processes including early development, 4,5 cell proliferation and cell death, 6 fat metabolism, 7 cell differentiation, 8,9 and brain development. 10 The sequences coding for miRNAs are spread around the genome, including exons, introns, 3'-UTRs and genomic repeat areas, and are situated either in the sense or antisense orientation with respect to the overlapping protein-coding gene. Studies carried out with embryonic stem (ES) cells indicate that miRNA expression profiles in stem cells are different from those of other tissues, 11 suggesting that miRNAs may play an important role in stem cell self-renewal and differentiation. The expression of Argonaute genes is restricted to specific anatomical sites in mouse embryos, suggesting that short regulatory RNAs may have physiological functions during organogenesis. 5 In addition, miRNAs are expressed in haematopoietic tissue where they participate in the regulation of haematopoietic stem cell (HSC) differentiation. 9,12 However, there is only a limited amount of information about the expression of miRNAs in mesenchymal stromal cells (MSCs) or their potential role in osteo- or chondrogenesis. 13 Mesenchymal stromal cells are multipotent cells that have the potential to differentiate to various lineages of mesenchymal tissues, including bone, cartilage, adipose, tendon, and muscle. 14 Compared to haematopoietic stem cells (HSCs), MSCs are rare in bone marrow, representing ∼1 in 10,000 nucleated cells.
Whereas HSCs are a well-characterized population of self-renewing cells that give rise to all mature blood cell lineages, 15,16 MSCs are less defined due to the limited understanding of MSC properties. As a result, terms such as "marrow stromal cells", "mesenchymal progenitor cells", "nonhaematopoietic mesenchymal stem cells" and "adult nonhaematopoietic stem cells" are used to define this cell population. Isolation and characterisation of stem cells from bone marrow rely on their immunophenotypic or functional aspects. Haematopoietic stem cells have been shown to express the surface markers CD14, CD34 and CD45. 17 MSCs, on the contrary, lack clearly defined surface markers, and thus the isolation and characterisation of MSCs is still based on the properties described by Friedenstein in the 1970s: their adherence to plastic, spindle-shaped morphology and ability to form colonies. 18,19 In addition, MSCs are commonly characterised for their differentiation capacity and for the negative expression of haematopoietic surface markers.
In calcified tissue, MSCs are needed for bone and cartilage formation. During embryogenesis, bone formation begins with mesenchymal stem cell condensation. Membranous bone (craniofacial bones and the clavicle) is derived from MSCs that differentiate in situ into bone-forming osteoblasts and produce matrix rich in Type I collagen. Endochondral bone, which is the principal type of bone in the body, is formed by MSCs that first differentiate into chondrocytes to form a cartilaginous template for the bone. Chondrocytes secrete a matrix rich in Type II collagen and Aggrecan, and go through a genetic program driven by Sox9 20 leading to cartilage enlargement. In the centre of the cartilage anlage, chondrocytes become hypertrophic and start to synthesise Type X collagen that is later degraded and replaced by bone. Although transcription factors such as Sox9 and Runx2, and signalling molecules such as Indian hedgehog (Ihh), Parathyroid hormone-related protein (PTHrP), Fibroblast growth factors (FGF), and Bone morphogenetic proteins (BMPs) are involved in the regulation of endochondral bone formation, 21 the molecular mechanisms leading to bone formation are still poorly understood. Thus, understanding the regulatory networks that control the lineage commitment and differentiation of MSCs is an important challenge.
In order to study the role of miRNAs in osteo- and chondrogenesis, miRNA expression profiles of osteoblasts and chondroblasts derived from mouse MSCs were compared. Subsequently, target prediction studies carried out with the differentially expressed miRNAs were combined with pathway analyses to gain more insight into the cellular functions potentially regulated by these miRNAs. Bioinformatics studies have shown that the promoter regions of miRNAs seem to contain similar regulatory motifs as the promoter regions of protein-coding genes. 22 In order to investigate whether the studied miRNAs could form regulatory networks with transcription factors (TFs) involved in osteo- or chondrogenesis, the promoter regions of the differentially expressed miRNAs were analysed. We present here multiple lines of evidence to suggest that in addition to haematopoietic cells, miRNAs are also involved in the regulation of lineage commitment in mesenchymal cells.
Cell culture and RNA extraction
All cell culture reagents, unless otherwise stated, were purchased from Gibco Invitrogen (U.S.A.). Total RNA was extracted from cultured cells before and after osteo- or chondrogenic induction using the mirVana miRNA Isolation Kit following the manufacturer's protocol (Ambion, U.S.A.). To remove genomic DNA contamination, total RNA samples were digested with DNase I (NEB, U.S.A.). RNA concentrations were quantified using an Eppendorf Biophotometer (Eppendorf, U.S.A.).
Bone marrow cells were isolated from 8-12-week-old male C57BL × DBA mice according to a previously described method. 23 Briefly, cells were isolated from the tibiae and femora by flushing the bone marrow cavity using a 10 ml syringe with a 25 gauge needle and medium consisting of RPMI-1640, 12% iFCS, 100 U/ml penicillin and 100 µg/ml streptomycin. A primary culture of plastic-adherent cells from mouse bone marrow is a heterogeneous population of mesenchymal and haematopoietic stem cells. 24 For the selection of mesenchymal stem cells, bone marrow cells were incubated for 2 hours at 37 ºC on a plastic culture dish containing the RPMI-1640 medium described above (12% iFCS, 100 U/ml penicillin and 100 µg/ml streptomycin) to remove rapidly adherent cells. 18,19 Unattached cells were collected and cultured in cell culture flasks at an initial density of 1 × 10⁶ cells/cm². Non-adherent cells were removed 48 hours later and adherent cells were washed with phosphate-buffered saline (PBS). Cells were further cultured with a twice-weekly medium replacement (half of the medium replaced). When confluent, cells were detached using trypsin-EDTA and re-plated at a density of 10 000 cells/cm². RPMI medium has been demonstrated to inhibit the growth of haematopoietic cells in culture 25 and cultures were therefore maintained in RPMI-1640 for 1 to 2 weeks. 26 Finally, adherent cells were detached by trypsin-EDTA treatment and expanded by plating them in DMEM supplemented with 12% iFCS, 100 U/ml penicillin and 100 µg/ml streptomycin at a density of 1 000 cells/cm². Cells were cultured in the described medium until confluent (1 to 2 weeks), thereafter trypsinized, immunophenotypically characterised and subjected to osteoblastic or chondrogenic differentiation.
Osteogenic differentiation was induced by culturing the long-term selected MSCs in osteogenic medium consisting of phenol red-free α-MEM, 12% iFCS, 10 mM Na-β-glycerophosphate (Fluka BioChemika, Switzerland), 50 µg/ml ascorbic acid 2-phosphate (Sigma-Aldrich, U.S.A.), 100 U/ml penicillin and 100 µg/ml streptomycin for 3 weeks in cell culture flasks and 24-well plates at an initial density of 10 000 cells/cm². During the first week, the culture medium was supplemented with 10 nM dexamethasone. The cultures were terminated by RNA extraction or fixation in 3% paraformaldehyde. To demonstrate osteoblastic differentiation, cells were stained for alkaline phosphatase (ALP) (Sigma-Aldrich, U.S.A.) and bone nodules were detected by von Kossa staining. 27 To induce chondrogenic differentiation, 200 000 of the cultured MSCs were placed in a 15-ml polypropylene tube and centrifuged (6 min, 500 × g) to form a micromass pellet culture. 28 Cell pellets were cultured for 21 days in chondroinductive medium consisting of high-glucose DMEM supplemented with 10 ng/ml TGF-β3 (R&D Systems, UK), 10⁻⁷ M dexamethasone, 50 µg/ml ascorbic acid 2-phosphate, 40 µg/ml L-proline (Sigma-Aldrich, U.S.A.), 100 µg/ml sodium pyruvate (Sigma-Aldrich, U.S.A.), and 50 mg/ml ITS+ Premix (BD Biosciences, U.S.A.). Media were changed every 3 to 4 days. After three weeks of culture, the pellets were either lysed for RNA extraction or fixed for 2.5 hours in 4% paraformaldehyde. To evaluate chondrogenic differentiation, pellets were embedded in paraffin, cut into 5 µm sections and stained with toluidine blue for proteoglycans. Presence of type II collagen was detected with the 6B3 monoclonal antibody raised against chicken type II collagen 29 following the method described earlier. 30
Osteogenic and chondrogenic gene expression
To further confirm the osteogenic and chondrogenic differentiation of the long-term selected MSCs, the total RNAs were analyzed with RT-PCR for the expression of selected osteogenic and chondrogenic transcripts (Table S1). The osteogenic genes included Type I collagen, Osteocalcin, Osterix (Sp7), and Runx2, whereas chondrogenic differentiation was evaluated based on the expression of Type II and X collagens and Sox9. One µg of DNase I-treated total RNA was reverse transcribed using M-MLV reverse transcriptase (Promega Corporation, UK). cDNAs were amplified using DyNAzyme II DNA Polymerase (Finnzymes, Finland) for 35 cycles and analysed on a 1.5% agarose gel. Amplification of GAPDH and L19 served as loading controls.
miRNA expression
A total of 35 miRNAs were selected for follow-up based on their expression in haematopoietic tissues 9,31,32 or mouse embryonic stem (ES) cells 11 or based on computational predictions on physiologically important genes related to bone and cartilage function (Table S2). The expression profiles of these miRNAs were detected by quantitative real-time PCR (qRT-PCR). Amplifications were performed using Taq DNA Polymerase (ABgene) and the mirVana qRT-PCR miRNA Detection Kit (Ambion, U.S.A.) following the manufacturer's instructions. For each sample, a total of 37 different reactions were performed in triplicate with mirVana qRT-PCR Primer Sets. Of these, 35 were specific for miRNAs and two, U6 snRNA and 5S rRNA, were used for normalisation. For qRT-PCR reactions, 50 ng of DNase-treated total RNA was used and a no-template reaction was performed for each primer set. At the end, a dissociation analysis (melt curve) from 56 ºC to 90 ºC was performed. Ct data were determined using default threshold settings. End-point reactions were analysed on a 15% neutral PAGE to discriminate between the correct amplification products (∼90 bp) and potential primer dimers.
MicroRNA expression data were normalised to U6 snRNA and 5S rRNA according to the manufacturer's recommendations. Relative quantification of miRNA expression was calculated with the 2^(−ΔΔCt) method 33 where undifferentiated cells were set as the calibrator sample. Standard error of the normalized expression was calculated by applying the differential equation of Gauss. 34 The miRNA expression values were compared between undifferentiated and differentiated cells, and between osteoblasts and chondroblasts. MicroRNAs whose expression changed at least 5-fold or 2-fold, respectively, were selected for target predictions and promoter analyses.
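The 2^(−ΔΔCt) calculation described above can be sketched as follows. The Ct values here are illustrative placeholders, not data from this study:

```python
def relative_expression(ct_mirna, ct_ref, ct_mirna_cal, ct_ref_cal):
    """Fold change by the 2^(-ddCt) method: the miRNA Ct value is first
    normalised to a reference RNA (e.g. U6 snRNA), then compared to the
    calibrator sample (undifferentiated MSCs)."""
    ddct = (ct_mirna - ct_ref) - (ct_mirna_cal - ct_ref_cal)
    return 2 ** -ddct

# Illustrative Ct values: a miRNA that, after normalisation, amplifies
# ~3.6 cycles earlier than in undifferentiated cells corresponds to a
# roughly 12-fold upregulation.
fold = relative_expression(21.4, 18.0, 25.0, 18.0)
```

Because each PCR cycle roughly doubles the product, a one-cycle decrease in normalised Ct corresponds to a two-fold increase in expression, which is why the fold change is exponential in −ΔΔCt.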
Target predictions
Target prediction tools TargetScan (http://www.targetscan.org), PicTar (http://pictar.bio.nyu.edu) and miRanda (http://www.microrna.org) were used to find possible target genes for the differentially expressed miRNAs. Since prediction algorithms often result in false positives, 35 the target gene lists of the three programs were combined and an intersection set, including only the target genes found with all three prediction algorithms, was created for each miRNA separately. Predicted target gene identifiers were first converted into a common nomenclature, and the results were then combined for each miRNA separately using data available from the latest Ensembl release 44.
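The intersection step can be sketched with Python set operations. The gene lists below are hypothetical stand-ins for real tool output, which is far longer (the intersections in this study contained 8-107 genes):

```python
# Hypothetical predicted target genes of a single miRNA from each tool
targetscan = {"HIF1A", "SOX5", "RFX1", "PBX1", "PGF"}
pictar = {"HIF1A", "SOX5", "RFX1", "PPARG"}
miranda = {"HIF1A", "SOX5", "RFX1", "COL1A1"}

# Intersection set: only genes predicted by all three algorithms survive,
# trading sensitivity for a lower false-positive rate
intersection_set = targetscan & pictar & miranda
```

Requiring agreement between independent algorithms is a simple consensus filter: a gene predicted by chance by one tool is unlikely to be predicted by all three.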
Pathway analysis
In order to elucidate the physiological role of differentially expressed miRNAs, their target genes were analyzed through the use of Ingenuity Pathways Analysis version 5.0 (Ingenuity ® Systems, www.ingenuity.com). The intersection sets were uploaded into the application, and each gene identifier was mapped to its corresponding gene object in the Ingenuity Pathways Knowledge Base. The genes in the intersection sets were overlaid onto a global molecular network developed from information contained in the Ingenuity Pathways Knowledge Base, and networks were then algorithmically generated based on their connectivity. The functional analysis identified the biological functions or diseases that were most significant to the intersection sets. The significance of the association between the intersection set and pathway was measured in two ways. First, a ratio of the number of genes from the intersection set that mapped to the pathway divided by the total number of genes that mapped to the pathway was calculated. Second, Fisher's exact test was used to calculate a p-value determining the probability that the association between the genes in the intersection set and the pathway is explained by chance alone. Graphical representations of the molecular relationships between genes/gene products were also produced. Genes/gene products were represented as nodes, and the biological relationship between two nodes was represented as an edge (line). All edges are supported by at least one reference from the literature. Nodes were displayed using various shapes that represent the functional class of the gene product.
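The two significance measures described above (the ratio and the one-sided Fisher's exact test) can be sketched as follows. This is an illustrative re-implementation of the idea using the hypergeometric distribution, not IPA's actual code, and the counts in the test are made up:

```python
from math import comb

def pathway_ratio(k, K):
    """Intersection-set genes mapping to the pathway (k) divided by the
    total number of genes mapped to that pathway (K)."""
    return k / K

def fisher_pvalue(k, n, K, N):
    """One-sided Fisher's exact test via the hypergeometric upper tail:
    the probability of observing >= k pathway genes when n genes are
    drawn at random from a background of N genes, K of which belong to
    the pathway."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)
```

A small p-value means that so many intersection-set genes falling in one pathway is unlikely under random sampling, i.e. the pathway is enriched for the miRNA's predicted targets.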
Promoter analysis
To examine the potential transcription factors (TFs) involved in miRNA regulation, promoter analysis was performed and conserved binding sites for known TFs were predicted for the upstream regions of the differentially expressed miRNA genes. Like target prediction algorithms, transcription factor binding site (TFBS) prediction tools often result in false positives. In order to improve the authenticity of the predictions, a technique known as phylogenetic footprinting was applied: human and mouse orthologous sequences were compared and only conserved binding sites were accepted for the analysis. Regulatory regions, 500 bp upstream and 100 bp downstream from the starting locus, of human and mouse orthologous pre-miRNAs were analysed.
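The phylogenetic footprinting filter can be sketched as a cross-species intersection. The (miRNA, TF) pairs below are hypothetical illustrations, not predictions from this study:

```python
# Hypothetical (miRNA, TF) binding-site predictions in the upstream
# regions of each species' pre-miRNA genes
human_sites = {("miR-199a", "HIF1A"), ("miR-199a", "SP1"), ("miR-101", "PBX1")}
mouse_sites = {("miR-199a", "HIF1A"), ("miR-101", "PBX1"), ("miR-101", "MYC")}

# Phylogenetic footprinting: accept only binding sites predicted in
# both the human and mouse orthologous regions
conserved_sites = human_sites & mouse_sites
```

The rationale is that functional regulatory elements tend to be conserved across species, so sites predicted in only one genome are more likely to be false positives.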
Cell culture
In order to compare miRNA expression in osteoblasts and chondroblasts, mesenchymal cells were isolated and enriched from bone marrow. To confirm the negative selection of HSCs and the positive selection of MSCs, cells were characterised for the expression of CD34, CD45 and Sca-1 surface markers (Fig. 1). After the enrichment procedure, cells were positive for the stem cell marker Sca-1 39 and negative for the haematopoietic surface antigens CD34 and CD45. 17 To compare miRNA expression in osteoblasts and chondroblasts, long-term selected MSCs were induced to differentiate into the desired cell types. For osteoblastic differentiation, cells were cultured for one to three weeks in osteogenic medium, followed by alkaline phosphatase and von Kossa stainings (Fig. 2A-D). For chondrogenic differentiation, cells were cultured in chondrogenic medium as micromass pellet cultures for 3 weeks, followed by histological evaluation. Toluidine blue staining demonstrated the presence of proteoglycans and immunohistochemical staining showed deposition of type II collagen in the cell pellets (Fig. 2E-F).
Osteogenic and chondrogenic gene expression
To further evaluate the phenotypes of the in vitro differentiated cell cultures, expression of osteogenic and chondrogenic genes was analysed with RT-PCR (Table S1). After a three-week culture in osteoinductive medium, long-term selected MSCs expressed osteoblast-related genes Type I collagen, Osteocalcin, Osterix and Runx2, but also Type X collagen and Sox9. Cells grown in chondroinductive conditions expressed Runx2, Type II collagen, Type X collagen and Sox9, and were negative for the expression of Type I collagen, Osteocalcin and Osterix (Fig. 2G).
miRNA expression
The mirVana qRT-PCR miRNA Detection Kit (Ambion) was used to compare miRNA expression in osteoblasts and chondroblasts. Undifferentiated MSCs were used as a control and data were normalized as described in Experimental procedures. Figure 3 shows the miRNA expression profiles of osteoblasts and chondroblasts derived from MSCs. A relative expression value of 1 represents the expression level of a specific miRNA in undifferentiated MSCs. Only 2 out of 35 miRNAs were differentially (≥5-fold) expressed in osteoblasts compared to the undifferentiated cells, whereas in chondroblasts the expression levels of 7 out of 35 miRNAs changed at least 5-fold during the differentiation (Table 1).
[Figure 3 caption: Relative expression of 35 miRNAs in MSCs after chondrogenic and osteogenic differentiation. 29 miRNAs were selected based on their expression in haematopoietic tissues or ES cells, and 6 miRNAs were selected based on target prediction studies. Expression value 1 (black line) represents miRNA expression in undifferentiated precursor cells; expression after osteogenic differentiation is shown in blue and after chondrogenic differentiation in red. The experiment was repeated twice with three replicates for each sample. Data are presented as mean ± SE. Differentially expressed miRNAs (marked with arrows) were selected for further analysis.]
In addition, 8 out of 35 miRNAs were differentially expressed (at least 2-fold) between osteoblasts and chondroblasts derived from long-term selected MSCs (Table 1).
Pathway analysis
Intersection sets (Fig. 4) of potential miRNA target genes were uploaded into Ingenuity Pathways Analysis version 5.0 (Ingenuity ® Systems, www.ingenuity.com), resulting in multiple interaction networks. Exceptionally high scores and low p-values, as calculated by IPA, were observed for 5 of the 11 miRNAs, and the most significant biological functions related to individual miRNA target gene pathways are shown in Table 2.
Promoter analysis
Information about eukaryotic transcription factors, their genomic binding sites and DNA-binding profiles is stored in databases such as TRANSFAC 37 and JASPAR. 40 The upstream regions of the differentially expressed miRNA genes (Table S2) were analysed in order to study the potential transcription factors (TFs) involved in miRNA regulation. When the results were combined with data obtained from target predictions and IPA analyses, it could be noted that 3 transcription factors (PBX1, PPARγ and HIF1α) that were predicted as targets for the differentially expressed miRNAs also had binding sites in the upstream regions of the same miRNAs (Table S4). PBX1 (pre-B-cell leukemia homeobox) is a potential target for miR-101, which was over 5-fold upregulated in chondroblasts. PPARγ (peroxisome proliferator-activated receptor γ) is potentially regulated by miR-130b, which came up when comparing miRNA expression between osteoblasts and chondroblasts. HIF1α was predicted to be regulated by miR-199a, which was over 12-fold upregulated in chondroblasts. When the interplay of miRNA target genes and TFs was analysed on the basis of published observations, a global regulatory network could be observed (Fig. 4B). miR-199a, which was upregulated in chondroblasts, was found to target HIF1α. miR-124a was also upregulated in chondroblasts, with RFX1 as its target. miR-96 expression was strongly suppressed in chondroblasts, and it was found to target SOX5, a transcription factor that controls chondrogenesis. The previously described cartilage-specific miR-140 target HDAC4 41 could also be explained by the regulatory network presented in Figure 4, yet the expression of miR-140 remained constant in our experiments. When the existence of genomic miRNA clusters was analysed for the miRNAs studied here, two miRNAs were found to be located in clusters, namely miR-199a/miR-214 and miR-96/miR-182/miR-183.
These clusters may strengthen the regulation of the Sox5-Sox6 axis leading to type II collagen responses, or of the HIF1α-PGF axis leading to various hypoxia responses (Fig. 4).
Discussion
Although miRNAs have been shown to play an important role in cell differentiation, their contribution to osteo- or chondrogenesis has not been previously demonstrated. We present here multiple lines of evidence to suggest that miRNAs are an integral part of the transcription factor network regulating bone marrow stem cell differentiation and proliferation. Promoter analyses and target predictions carried out with the differentially expressed miRNAs show that transcription factors may act as direct miRNA targets, but also as upstream regulators of miRNA target genes, or of the miRNAs themselves. In this way they constitute loops that strengthen or attenuate regulatory events.
Since there was no previous data about miRNA expression in mesenchymal stromal cells, we selected the miRNAs for this experiment based on their expression in haematopoietic tissue. We hypothesised that the haematopoietic miRNAs may also target genes involved in the differentiation of mesenchymal tissue within the bone marrow microenvironment. Haematopoietic miRNAs were supplemented with miRNAs found from mouse embryonic stem cells and with miRNAs selected based on target prediction studies carried out with genes involved in bone or cartilage function. In addition, miR-140 was included based on its suggested role in chondrogenesis. 41
[Figure 4 caption: A Venn diagram illustrating the production of intersection sets from three miRNA target prediction algorithms. Differentially expressed miRNAs were selected for miRNA target predictions by PicTar, TargetScan and miRBase. Intersection lists containing 8-107 predicted miRNA target genes (mean 63 genes) were then uploaded to Ingenuity Pathways Analysis (IPA) for further interaction network analysis. The miRNA target genes were overlaid onto a global molecular network developed from information contained in the Ingenuity Pathways Knowledge Base (www.ingenuity.com), and networks were then algorithmically generated based on their connectivity. Results from pathway analysis were combined with promoter analysis data to form a global regulatory network, as shown in the Figure. Composite loops (Shangi et al. 2007) were observed for three TFs marked with red nodes (PBX1, PPARγ and HIF1α); these harboured TFBSs in upstream regions of miRNAs while they were also predicted as target genes for the same miRNA. Two miRNA clusters found to target significantly interacting genes are marked with rectangles. Arrows represent miRNAs predicted to regulate a specific gene. Nodes are displayed using various shapes that represent the functional class of the gene product.]
To gain more insight into the cellular functions possibly affected by the studied miRNAs, the predicted target genes of the differentially expressed miRNAs were analysed. We hypothesised that if miRNA target genes are physiologically relevant, they should produce significant interaction networks in the pathway analysis. Extremely low p-values and extensive interactions between genes predicted as miRNA targets were observed, which cannot be explained by chance alone. Bioinformatics approaches have suggested that miRNA expression may be regulated by transcription factors. 42 When the interaction networks between miRNAs and transcription factors were computationally studied, thousands of human genes were suggested to be regulated by miRNA-TF interactions. 43 As miRNAs are known to target many TFs, the regulatory network appears to be very complex. Composite loops 43 were observed in our analysis for 3 TFs: PBX1, PPARγ and HIF1α. These harboured binding sites in the upstream region of miRNAs while they were also predicted as target genes for the same miRNA. In addition, numerous TF binding sites were observed in the upstream regions of the differentially expressed miRNAs. It became obvious from these analyses that only a fraction, if any, of the miRNA responses are such that lead to a single, easily predictable outcome in cells. Instead, accumulation of several miRNA responses eventually may lead to physiological responses (Fig. 5). An example of such a response is the downregulation of Type I collagen expression by miR-124a. The TF RFX1 (Regulatory factor X1) is a target of miR-124a, which was upregulated in chondroblasts. Another example is Sox5, which regulates Type II collagen expression during chondrogenesis but also lipase expression, raising the possibility that Sox5 also has a role in adipogenesis and lineage commitment. In MSC cultures, β-glycerophosphate and ascorbic acid induce osteoblast formation and lead to miR-96 upregulation.
During osteoblastic differentiation, adipocyte or chondrocyte genes, like Sox5-regulated lipase or Type II collagen, must be turned off, e.g. by miR-96, in favour of osteogenic differentiation. The complex regulatory networks described here indicate that miRNA-TF interactions are powerful modulators of bone marrow stromal cell differentiation. The specific biological functions related to the target genes of the differentially expressed miRNAs suggest that they are involved in developmental pathways including cellular development and movement, cell morphology, cell signalling, cell death, and connective tissue development and function.
Figure 5. Model for miRNA-TF interactions during the differentiation of bone marrow stromal cells into bone-forming osteoblasts or chondroblasts.
Multiple lines of evidence suggest that miRNAs substantially downregulated in chondroblasts targeted genes important for chondrogenic differentiation. In addition, miRNAs significantly upregulated in chondroblasts appeared to target genes important for osteogenesis, adipogenesis and myogenesis. PPARγ (involved in adipogenesis) was targeted by miR-130a and miR-130b, which were upregulated in chondroblasts and osteoblasts, respectively. PBX1 (involved in myogenesis) is a potential target for miR-101, which was over 5-fold upregulated in chondroblasts. Osteoblasts were positive for miR-96, which downregulated Type II collagen expression via Sox5. Chondroblasts were positive for miR-124a, which regulates Type I collagen expression via RFX1. HIF1α was targeted both by miR-18a (expressed in osteoblasts) and miR-199a (expressed in chondroblasts), suggesting that hypoxic signals are important regulators of MSC differentiation. At the tissue level, significant physiological responses may be seen already by affecting the balance between these miRNAs and their target genes. How these regulatory pathways, which seem to be important in MSC differentiation, can be utilised, e.g. in regenerative medicine, remains to be resolved.
"year": 2008,
"sha1": "5e908e779bb1a0e8335cd4caa4b3ffb0a0f03f8c",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.4137/GRSB.S662",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7563845fa77ab2ebde111eca98bbe3f3f216a3e0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Feel the suspense! masculine positions and emotional interpellations in Swedish sports betting commercials
ABSTRACT Gambling advertising is often permeated by stereotypical portrayals of gender, including those of male gamblers as tough and successful. Simultaneously, representations of men in other advertising have become increasingly diverse, including emotional and sexualized, heroic and muscular portrayals. This article uses both these bodies of research to discuss Swedish sports betting commercials from 2019–2020. It shows that different commercials draw on diametrically different formulations of the game, emphasizing skill and luck, rationality and emotion respectively, which is conceptualized as different, gendered emotional interpellations. These include the production of emotional or stoic masculine viewer positions, as well as the portrayal and evoking of emotions, and point to the psychic and emotional dimension of neo-liberal, consumerist culture, which strives to incorporate and exploit ever more aspects of the personal. The article furthers the theorizing of emotions in consumerist culture, contributes to gambling research by problematizing gendered ideas about skill and luck, and adds to studies of men in contemporary consumerist culture with discussions of emotionality, rationality, homosociality, and masculinized interpellations of different kinds.
especially pertinent in Sweden with its ideals of care and emotionality in men, points to possible new portrayals, whose significance needs further study (Sarah Gee 2014; Goedecke 2021b).
In this article, I discuss masculine positions in online and televised Swedish sports betting commercials broadcast during 2019-2020. Cultural representations, such as commercials, not only mirror but also produce "reality" in certain ways but not others, which connects them to power (Stuart Hall 1997). Centering what I call emotional interpellations, I show that sports betting and bettors are articulated in ways that differ along the lines of emotionality and rationality, even if they are all masculinized. In the article, I further the theorizing of emotions in consumerist culture and, using feminist and critical perspectives, I contribute to gambling research by problematizing skill, luck, and gender. I also add to studies of men in contemporary, consumerist culture with discussions of emotionality, rationality, homosociality, and masculinized interpellations.
Men and emotions in consumerist culture
While still often portrayed as actors, subjects, strong, and in control, the last 60 years have seen increased variation in representations of men in popular culture, including advertising (Kristen Barber and Tristan Bridges 2017; Elena Frank 2014; Gee 2014). Indeed, as Susan M. Alexander (2003, 536) argues, while corporations may have an interest in maintaining traditional gender roles to ensure continued consumption of their products, "they also serve as agents of social change by creating new consumer markets", as epitomized by sexualized male bodies, addressed as in need of fashion and hair-removal products (Frank 2014; Gee 2014). Characteristics that have previously been connected to women now abound in advertising directed at men. However, the meaning of these changes is not clear: Do these new portrayals of men indicate actual change in gendered power relations or are they merely ways of making the same gendered power relations more acceptable (Barber et al. 2017)?
Emotions and rationality have a long history as a gendered dichotomy (Genevieve Lloyd 1993), with emotionality being associated with women and rationality with white men. Emotionality in men has taken on a symbolic role in popular feminist thought and self-help culture, the thought being that "the male role" causes men to repress, hide, or disregard their emotional lives. Thus, men's emotionality is allegedly progressive (de Boise et al. 2017). Emotionality is comparable to other feminine characteristics that have been discussed à propos men in consumerist culture, but despite this, representations of men's emotions and emotionalities have been overlooked in this research.
However, the gendered politics of women's emotions in advertising have been discussed (Gill 2017; Gill and Akane Kanai 2019; Gill and Shani Orgad 2015). This research suggests that ever expanding neoliberal and capitalist forces appeal to us on emotional and psychic levels. These appeals draw on postfeminist discourses, resulting in postfeminist "mediated feeling rules", encouraging women to feel confident or to love their body. Postfeminist feeling rules for men have been discussed in the context of Swedish television dramas. Sweden is often said to be one of the most gender equal countries in the world, with widespread ideals centering care and emotionality in men, especially fathers, which inform feeling rules prescribing emotionality (of certain kinds) and engagement in personal development among men (Goedecke 2021b). However, in the context of Swedish consumerist culture, men's emotionalities remain to be discussed.
In this article, I propose the term emotional interpellations. Cultural texts, including advertisements, create "structures of meaning" which are "translated into statements about who we are and who we aspire to become" with advertisements urging us, interpellating us, to become that which we are addressed as (Gill 2007, 50; see also Judith Williamson 1978). Arguably, the process of meeting and decoding cultural texts always has an emotional dimension (see also Margreth Lünenborg and Tanja Maier 2018). "Mediated feeling rules" builds upon Arlie Russell Hochschild's (2003) argument that emotions are produced in certain ways; "feeling rules" structure "emotion work", which is performed both professionally and privately, and in gendered ways. Contrastingly, "emotional interpellations" points to the appeals made in cultural texts, attempting, like other interpellations, to address and thus produce viewers, a process that can be resisted, altered, or challenged (Gill 2007; Hall 1996). Simply put, emotional interpellations can, if successful, produce various feeling rules and emotional subjects.
More specifically, emotional interpellations captures both emotional flow and emotional integration in advertising (Edward Kamp and Deborah J. MacInnis 1995), that is, the patterns of emotions portrayed and their integration with the commodity in question, aimed at pulling the viewer into the circulation of emotion. Emotional interpellations also, I propose, produce (gendered) subjects as emotional in certain ways while also using emotions as part of the interpellations; they produce emotional or stoic masculine positions while also portraying and evoking various and varying emotions. In this case, emotions become even more poignant as emotional experiences arguably constitute the commodity in gambling advertising; it is not lottery tickets or betting slips that are sold, but the suspense, thrills, and dreams of gambling.
This approach rejects views of emotions as irrepressible forces from within, overflowing or exploding if restrained. Instead, emotions are seen as produced and circulated through culture, discourses, and between bodies (Jennifer Harding and Deidre Pribam 2009; Margreth Lünenborg and Tanja Maier 2018; Margaret Wetherell 2012). With this view of emotions, emotional expressivity in men should not be seen as a sign of liberation (Sam de Boise and Jeff Hearn 2017), but as an emergent form of feeling rules, which in the context of commercials must also be understood in relation to capitalist, consumerist cultures.
This article primarily focuses on the production and circulation of emotion through mediated materials, but emotions stretch across dichotomies such as mind/body, individual/collective, private/public, and inner/outer (Jennifer Harding and Deidre Pribam 2009; Wetherell 2012). Gambling is a similarly complex phenomenon, which renders emotional interpellations a suitable lens through which to study it.
Gambling research is dominated by medical and psychological perspectives and quantitative methods, often treating gender as a variable in searches for gender differences (e.g. Phillips 2009). However, a growing body of research acknowledges that gambling is a cultural and discursive phenomenon, changeable across space and time (Rebecca Cassidy, Claire Loussouarn and Andrea Pisac 2013; Virginia McGowan 2004).
In Sweden, the commercialization of gambling began in the 1980s, at which point gambling also became the subject of marketing strategies (Per Binde 2013). Previously subject to a state gambling monopoly, in 2019 the Swedish gambling market was reregulated and opened to commercial, licensed companies and to "moderate" advertising. Swedish gambling research is a small field with only a few qualitative studies (ethnographies include Binde 2011; Max Hansson 2004; Philip Lalander 2006, and analyses of gambling politics e.g. Susanna Alexius 2017). Gender perspectives are largely absent (Lalander 2006; Svensson 2013 are exceptions) and only a few studies of gambling advertising exist (e.g. Åsa Kroon 2019; 2021).
Cultural representations of gambling, including advertising, produce meanings around gambling. Research on gender in gambling advertising is dominated by quantitative methods and points to stereotypical gendered performances and genderings of various gambling practices in several cultural contexts (Emily G. Deans, Samantha L. Thomas, Mike Daube, Jeffrey Derevensky, and Ross Gordon 2016; Hibai Lopez-Gonzalez, Ana Estévez and Mark D. Griffiths 2018; Hibai Lopez-Gonzalez, Frederic Guerrero-Solé and Mark D. Griffiths 2018; Zhen Sen and Wei Luo 2016). Men are portrayed with friends, "hanging out" and watching a game, as gamblers, lucky or winners, while women are casino hostesses or sexualized rewards. Sports betting, specifically, is connected to patriotism and love for sports, but also to control (Deans et al. 2016; Lopez-Gonzalez, Estévez and Griffiths 2018).
Jouhki, in a rare qualitative study of masculinity in poker advertising, argues that "the hegemonic masculinity which is nowadays more flexible and contested in ads than ever . . . is rather stable, if not stereotypical in poker" (2017, 196). Contrastingly, Goedecke notes the discursive connection between Swedish ideologies of gender equality and postfeminist, allegedly reformed masculine positions and "soft", moderate gambling in Swedish betting commercials (2021a). Gambling advertising produces meanings around gender and articulates gambling in certain ways (Kroon 2021), a process I suggest is intimately related to gendered formulations of emotionality and rationality, and how they are used to appeal to viewers.
Sports betting, skill and emotionality
It is often repeated in gambling research that men and women prefer skill and chance games respectively, but research about gambling as a cultural practice complicates these statements. Cassidy (2014) shows that skill, rather than being a preference among male gamblers, produces gamblers as masculine and betting-shops as homosocial spaces. Among Cassidy's male research participants, elements of chance were deemphasized while mathematics, logics, control, and knowledgeability (traits described as unavailable to women) were highlighted. Hansson (2004) discusses similar tendencies of privileging skill and rationality and disregarding luck among Swedish male horse bettors but does not discuss gender.
The connection between mathematics, logics, rationality, and the masculine is familiar from feminist research (Lloyd 1993), but it is also intimately related to emotionality, as rationality and emotionality function as each other's constitutive outside (Hall 1996). As mentioned above, rationality, objectivity, and the mind are connected to white men; indeed, historically, this group has been seen as the only one capable of rational thought. Women, meanwhile, have been connected to irrationality, emotion, and the body (de Boise et al. 2017). This dichotomous thinking serves to reproduce gendered, raced, classed, and able-ist power relations, portraying not only women but also PoC, the disabled, and the working-classes as irrational, more closely connected to the body, animalistic, and impure. The discussions about skill and luck noted by Cassidy (2014) are permeated by these ideas, and just as in Cassidy's example, rational, masculine subjects are not pre-existent but produced by ideas about rationality and predictability (Judith Butler 1992, 10 f).
Together with these ideas about rationality and emotion, Cassidy's discussion suggests that different forms of gambling are variously associated with skill, competition, and mastery, and are gendered in different ways. While poker is associated with masculinity, glamour, mathematics, and skills in "reading people" (Jouhki 2017), sports betting is less glamorous, but still associated with skill. Like horse betting, sports betting is dominated by men, and connected to mathematics, rationality, and predictability. However, several researchers argue that so-called skill games should really be seen as skill and chance games (e.g. Gerhard Meyer et al. 2013). Sports betting also has an intrinsic connection to sports, which arguably gives it an air of toughness, athleticism, and maleness (see also Garry Whannel 1999; Martha Wörsching 1999), and associates it with the emotional engagement of supporter culture.
Apart from furthering gambling research by problematizing gender, skill, and luck, the study of sports betting advertising deepens the understanding of the gendered dynamic between emotionality and rationality, mathematics, sports, and embodiment. As such, the study adds to studies of contemporary, consumerist culture and how emotional appeals and masculine positions are handled within it. Below, I present the methodology, and then the analysis, which points to two different kinds of emotional interpellations, tied to different formulations of sports betting: overtly emotional appeals and discourses of chance, and discourses about control, skill, and rationality.
Methodology and material
The material consists of four audiovisual sports betting commercials broadcast during 2019-2020 from Betsson, Coolbet, Rizk, and Unibet. 1 In Sweden, the money spent on advertising by gambling companies rose steeply between 2016 and 2018 (Spelinspektionen 2019), and televised and online gambling advertising was arguably the most prevalent cultural representation of gambling in the late 2010s, shaping meanings of gambling and gamblers in the Swedish context.
The four commercials are part of a larger corpus, consisting of online and televised audiovisual gambling commercials from 2019-2020, broadcast after the reregulation of the Swedish gambling market in 2019. Online searches complemented by televised commercials, encompassing most Swedish gambling brands, resulted in a sample of 160 commercials from 40 brands (see Deans et al. 2016 for a similar sampling approach). Through a hermeneutic process where I studied the corpus together with literature from gambling and gender studies, I identified skill, luck, rationality, and emotion as analytic themes. The four commercials studied were strategically selected as they provided differing perspectives and thus enabled detailed discussions of these themes. Thus, I follow researchers who use semiotic and discourse analysis of limited ranges of materials enabling in-depth discussions which enrich broader debates about cultural representations, subject positions, and gender politics (e.g. Gee 2014; Gill 2007; Hall 1997; Williamson 1978).
Inspired by previous research, I scrutinized (gendered, racialized, classed) portrayals of protagonists, including their dress, body poses, and interaction, as well as the discursive production of the commodity (sports betting) (Williamson 1978). Following discursive approaches within cultural studies (Hall 1997), I studied what was taken for granted and what was portrayed as arguable, controversial, or new, as well as explicit and implicit appeals within the commercials, indicating the commercials' imagined viewers (Gee 2014;Williamson 1978). In order to study emotional interpellations, I focused on portrayals of emotional states and how emotions were tied to commodities and subjects, explicitly and implicitly (see also Kamp et al. 1995). My hermeneutic research approach extended to an iterative and reflexive analysis process, where I repeatedly scrutinized my assumptions, analyses, and positionality as a white, middle-class, woman academic.
"Odds are devious creatures": chance and excitement
Commercials foregrounding the thrills, fun, and emotions of sports betting can be seen as engaging in very overt emotional interpellations. These commercials simultaneously articulate sports betting as unpredictable and based on chance.
A commercial from Rizk sports (Rizk 2019) illustrates this: Set to a piano soundtrack and inside what appears to be a 1950s fair tent, with walls striped in yellow and red and with lightbulbs along the wall, two white men, lighted from above, are arm-wrestling. One is burly, with muscular, tattooed arms, bald with a dark beard and wearing a wrestler's singlet and a black leather bracelet. His opponent, smaller and younger, dressed in the retro-style visor sunhat, waistcoat, and shirt with sleeve garters of an iconic retro office clerk, seems to be losing the match. Next to them, a white man in a straw hat, checked suit, and a cervical collar meets our gaze and addresses us in American English: "Odds are devious creatures. Just when you think you've got it all figured out, a little something happens that turns everything around".
Behind him, a cheering crowd of women and men in 1950s clothes watch the match. The narrator in the straw hat grabs the clasped hands of the arm wrestlers and pushes so that the small man wins. "My advice is, change horses in the middle of the race", the narrator says as the muscular man, shocked, looks around while part of the crowd cheers, hugging the small man. "Ouch, big fellow, that's got to hurt", the narrator says, touching the loser's shoulder. He leaves the frame, the muscular man still sitting, confused and dejected, at the table. "Everything can happen when you play live at Rizk sports" a female voice-over says in Swedish, while a red and yellow banner is shown on top of the tent scene: "Play live at Rizk sports", followed by the Rizk logo.
Both athletes and the narrator, who gets to address and meet the viewer's gaze while also addressing the combatants, are white and male. The man-to-man nature of arm wrestling connects to masculine narratives about sports, where strength prevails while the lesser man loses. In contrast to commercials suggesting that online gambling can be done from the comfort and safety of one's own home, Rizk suggests that online gambling is exciting, equivalent to being at a Tivoli or watching a game live, thereby connecting online gambling with the masculinized public sphere (Svensson et al. 2011; see also Lopez-Gonzalez, Estévez, and Griffiths 2018, 48). These aspects contribute to a masculine address.
The narrator, holding our gaze and addressing us directly is the obvious protagonist of the commercial, but his interference in the arm-wrestling match, which would predictably have caused fury among the crowd and the athletes, goes almost unnoticed, and when he addresses the muscular loser, there is little reaction or eye-contact. This renders his position unclear; he does not seem part of the scene. However, similar to the female voice-over at the end, he interpellates the viewer, "you", directly, with advice and an articulation of what sports betting is all about: unpredictability. This is further illustrated by him acting as that "little something" which turns the match around and by his cervical collar, reminding bettors to not be fooled by outward signs, such as cervical collars or bodily size, but be flexible and smart in order to handle the unexpected.
The tent audience, facing us, watch the game in considerable suspense, cheering when their favorite wins and "hurt[ing]" when he loses. They point to the centrality of emotional engagement in the commercial's articulation of sports betting, connecting it to the unpredictability of the game. Together, the centering of emotion, men, and unpredictability function as an interpellation of a masculine viewer, willing to appreciate the emotional ups and downs and the risks of betting.
A similar example comes from the "Feel the moment of suspense/excitement" (Känn spänningens ögonblick) 2 campaign, issued by Betsson in 2018, and broadcast during 2019 by various Swedish TV-channels. 3 A board showing scores for a home and an away team and the roar of a sports audience indicate that we are at an ice-hockey match, at the end of the third period, i.e. at the very end of the game (Betsson 2018). Dressed in a black hockey helmet and white-and-black striped shirt, a referee slides up to two players from Swedish teams Djurgården and Frölunda for a faceoff. We see the audience, both women and men, wearing winter jackets and team scarves, and then two players crashing into each other. The face of one Djurgården player comes into focus, and in the next shot, we see him receiving and shooting the puck with his stick.
As he hits the puck, the tempo slows down considerably and an operatic basso voice starts to sing in a slow staccato. 4 We see the puck hitting the stick and the stick springing back with a deep, contorted "slow motion sound" as the shot is made. Still in slow motion, two men in the audience stand up, evidently noticing that something exciting is happening on the ice. A pair of skates, braking and sending ice fragments whirling, are hit by the puck, which bounces onwards, off the helmet of the referee. The audience is shown again, more people rising, all with half-open mouths and expressions of utter suspense on their faces. The board from the first shot is shown, indicating that there are two minutes left in the match, then a player hits the puck with his stick at waist height. The puck hurtles towards the goal at a very slow pace, and the goaltender scrambles to come into position as the music grows more intense. Just before the puck approaches the goal, the narrative is interrupted and the name of the campaign, "Feel the moment of suspense/excitement", and the Betsson logo are shown against a black background.
As in the Rizk commercial, white male athletes are engaged in sports, watched by an audience whose expressive faces and body-language are vital to the narrative, and the setting of the commercial informs the viewers that betting with Betsson is exciting and thrilling in the same way as watching a game live. As above, the commercial centers men, emotionality and, in the light of the puck's movement among the players, unpredictability.
Both commercials use direct appeals to the viewers which constitute explicit emotional interpellations, most notably in the Betsson commercial's imperative to "feel" and enjoy the excitement and unpredictability of the game. The masculinized viewer is interpellated as emotional, as willing to show feelings and seek out emotional responses through engaging with sports betting. This appeal is strengthened by the emotional expressivity in the audiences of both commercials; betting is demonstrably exciting. The masculine viewer is encouraged to partake in these emotions, that is, a social relationship is offered with the audiences of the commercials, and, by implication, with other sports bettors, who are portrayed as suffering and rejoicing together (especially notable as both commercials feature fans of both "sides" seated or standing close together 5 ). The framing of the game as unpredictable is central to the emotional interpellation; precisely because of it, excitement, suspense, thrill, and the hurt of losing can occur, which is demonstrated through the audiences' emotive responses. These invite the viewer into a communal emotional experience, and into a cultural, embodied, and physiological circulation of emotion (Wetherell 2012) by selling the emotional experience associated with betting.
Interpellations in commercials about body grooming may tell the viewer that their body is unruly and in need of control, but the emotional interpellations featured here produce the viewers' lives as boring, in need of consumption and risk to "spice it up". Boredom and the seeking of excitement is an important factor in the development of gambling problems according to psychological research (e.g. Kimberley B. Mercer and John D. Eastwood 2010), and the discursive production of boredom in commercials like these can be seen as referring to this, even, perhaps, as a cynical use of boredom as a commercial strategy. To simplify, a "yes" to such an interpellation means taking up a subject position as bored, and agreeing to the premise that life (without betting) is boring, a feeling produced by consumerist discourses centering spending and consuming. Tanja Joelsson discusses boredom in relation to space and gender, and suggests that public places are connected to danger, risk, masculinity, and the alleviation of boredom (2015, 1256). This fits in well with these commercials' ideas of online betting as equivalent to visiting a game or a Tivoli; through online betting, one symbolically leaves the feminized boredom of the home to join the thrill of a masculinized online experience.
However, the interpellation also evokes emotions. This is most notable in the Betsson commercial, which leaves the viewer breathless, literally caught in "the moment of suspense". This effect is achieved, I suggest, through the use of slow motion, which connects this commercial to sports broadcasting (Margaret Morse 2003). Representing a "spatial compression and temporal elongation and repetition" (Morse 2003, 380), slow motion entails a dreamlike quality, where speed and violent impact are turned into a dance-like beauty (Morse 2003, 381), while the virtuosity of the actors is emphasized (Vivian Sobchack 2006, 342). This evokes an emotional state of excruciating excitement where time does not exist. Similar distorted temporalities are often discussed in gambling research as "flow". As originator Mihaly Csikszentmihalyi has noted, flow is defined by an intense and focused concentration on what one is doing in the present moment as well as a distortion of temporal experience (Jeanne Nakamura and Csikszentmihalyi 2014, 240). This is often described as a profoundly positive experience, but in gambling research more sinister aspects have been noted, including "dark flow", connected to depression, excessive gambling, and gambling addiction, especially in the context of online gambling and slot machines (e.g. Mike J. Dixon et al. 2019).
The implicit reference to the emotional state of "flow" is an important aspect of the emotional interpellation of the Betsson commercial. Focusing on the breathless moments ending the match, the viewer is invited not only to engage in and appreciate the strong emotions of sports betting, but actually given a taste of these emotions during the course of the commercial. Thus, this commercial presents both dynamic emotional flow and emotional integration, that is, it portrays a changing pattern of emotions which is highly integrated with, and shown to be motivated by, its commodity, pulling the viewer into its circulation of emotion (Kamp et al. 1995).
This focus on the thrills of watching sports differs from a masculine interpellation that invites the masculine viewer to imagine that he himself is the athlete. As Morse (2003, 377ff) points out, televised sports are connected to scopophilic pleasures of watching athletic male bodies, and the appeal to emotional flows among primarily male audience members points to an unexpectedly emotional and close homosociality. Morse argues that the use of slow motion "saves" televised sports from this potential homoeroticism, as it represents a scientific view of the game, "allow[ing] the viewer to outguess the referee and see what 'really' happened" (2003, 381). Here, I suggest that the difference between watching and betting on a match is drawn upon in a similar way.
Watching a game might carry echoes of passivity, voyeurism, and erotic pleasure in athletic men, but by betting, the stakes rise; you have "skin in the game". This transforms mere watching, potentially (homo-)eroticized and passive, into a risk; the monetary investment in the game turns the viewer into a proxy-athlete, and the suspense is doubled, as both bettors and teams become possible winners and losers. The emotional interpellation to engage with the game emotionally is thus transformed and intensified into heroic risk-taking.
In drawing on emotionality in men, the commercials connect with negotiations around gender in consumerist culture (Alexander 2003; Frank 2014; Gee 2014). Such negotiations often combine stereotypical masculine features, such as athleticism and sports, with more unexpected ones, including fashion and body grooming. Emotionality and lack of control over an unpredictable game arguably constitute such features; these are "balanced" by sports and masculine risk-taking (see also Gee 2014). As de Boise et al. (2017) suggest, it would be a mistake to regard emotionality in men as in itself subversive or a protest. In this case, men's emotionality is not a protest, but called for and encouraged through emotional interpellations, and even if (perhaps) unexpected, it does not subvert gendered power relations or heteronormative expectations of men.
"Luck is no coincidence": rationality and control
Previous research has shown that rationality, skill, and control are associated with men's gambling (Cassidy 2014; Jouhki 2017; Lopez-Gonzalez, Estévez, and Griffiths 2018). However, luck is an aspect of all gambling, even skill-based games such as poker or sports betting, and the division between rationality and emotionality is similarly complex, even in commercials ostensibly expelling emotionality.
A Coolbet commercial (Coolbet 2019) portrays an odds maker, "Mikael", white and 30-40 years old. Sitting down in an old, over-snowed Volvo, he drives along a snow-covered road to an ice rink, listening to his car radio. Its male voice speaks in a northern Swedish dialect and functions as the voiceover of the commercial, describing the virtues of the Coolbet brand, which include setting their own odds and being uniquely knowledgeable about not only big, well-known teams but also small, local ones. At the ice rink, alone in the stands, "Mikael" watches the team's practice, jotting down and crossing out numbers in a small notebook. He wears a moustache, and greying hair can be seen below his woolen hat, which together with his leather jacket with a fleecy collar gives him an un-styled, rural appearance. Next, he is portrayed sitting in a small, over-snowed forest house, by a fire. He ponders the odds in his notebook, crossing out and filling in new numbers. The voice-over describes Coolbet's unique transparency and knowledgeability; bettors can find out how others have bet to further refine their betting decisions.
This commercial exclusively features men, and as opposed to portrayals of gambling as glamorous (Jouhki 2017; Sen et al. 2016), it relies heavily on masculine discourses of knowledgeability, mathematics, and skill, as well as ordinariness, rurality, and old-schoolness. The odds maker is alone, and his car, appearance, and technology are unpolished and unmodern; he is equipped with the classic book of the bookmaker rather than a computer. Kroon (2019) notes a similarly "ordinary man" portrayal in her analysis of a Svenska Spel commercial, but while her man is made trustworthy and familiar through being white, unstylish, and a "football dad", the Coolbet man is constructed as trustworthy and familiar due to his knowledgeability, precision, and white, rural, down-to-earth-ness. The interpellation in this commercial is connected with rationality rather than emotionality as it addresses its viewers as interested in precise, localized knowledgeability, and positions the brand as providing the bettors with uniquely accurate information, without frills.
While viewers were invited to partake in emotional thrills as part of an enthusiastic audience above, the bookmaker watches the players alone, with an analytical eye. However, the air of loneliness is counteracted by the possibility of seeing other bettors' bets. The sociality suggested here addresses Coolbet bettors as learning from, perhaps outwitting, other bettors.
Another example is a Unibet commercial featuring Magnus Carlsen, the chess world champion (Unibet 2020). In the commercial, a suit-clad Carlsen is engaged in a chess game watched by an elegantly attired audience. This scene is intercut with scenes of his preparations, mostly set outdoors. To the sound of dramatic drum music, a male, deep-voiced voiceover tells us that "It's all about preparation . . . Doing the same thing, over and over. Making your body ready . . . your mind . . . your soul. Hours, months, years of work. Never stops . . . Making the most of every single thing. Then, he might be lucky". Scenes of Carlsen running, playing football and basketball, reading about chess in magazines, and working on his laptop accompany this message, with similar scenes repeated to create an impression of "over and over".
Meanwhile, at the chess game, as Carlsen's chess opponent hits the clock, the camera moves towards Carlsen's eye, into the blackness of his pupil, taking us inside his mind as it were. Inside it, in a lofty hall, chess pieces hover near the ceiling and Carlsen watches them as he decides upon his next move. Next, Carlsen makes his winning move as the voice-over narrates that after much preparation, "he might get lucky". Shots of Carlsen winning the game are then interspersed with shots of a happy, younger Carlsen winning other matches, and the commercial ends with a serious Carlsen meeting our gaze, sitting in the lofty room inside his mind, accompanied by the text "Luck is no coincidence".
The protagonist, Carlsen, is a possible object of betting rather than a bookmaker as above, but the Unibet commercial, like the Coolbet one, centers knowledgeability and skill. If "luck is no coincidence" and it is "all about preparation" (Unibet 2020), preparedness and insight are possible to attain for both the object and the subject of the betting, that is, for both Carlsen and the imagined viewer, the bettor. As above, the impact of luck, coincidence, and chance is minimized; Carlsen's preparedness and control mirror the bettor's.
Both the Coolbet and Unibet commercials center men through portraying male athletes, male bookmakers, and male voice-overs. Sports betting is articulated as being about control and mathematics, which radically differs from the articulation discussed above. In the light of the masculinization of rationality, this in itself constitutes a masculine interpellation. As Lloyd (1993, 37) notes, "[i]t is not a question simply of the applicability to women of neutrally specified ideals of rationality, but rather of the genderization of the ideals themselves"; knowledge, reason, and rationality as well as the knowing subject are masculinized.
However, the articulation of betting spans not only the glamorous and the rural but also the athletic and the "brainy". We are taken inside Carlsen's mind, the loftiness of which indicates his intellectual capabilities, but we also get to see him working out and scoring goals, "saving" him and the Unibet brand from being associated with a solely intellectual, nerd-like masculine position (see also Gee 2014). As Wörsching (1999, 182, italics in original) points out à propos sports advertising: "the effort it takes to master the body and mind, the emphasis on unrivaled performance and the 'superhuman' concentration on the goal to win" is connected to how the media legitimize male superiority, a description that connects to the doubleness of sports betting.
These interpellations, in their centering of control, rationality, and leaving nothing to chance, are seemingly non-emotional, apart from the final moments of the Unibet commercial, when Carlsen is portrayed winning. However, emotions are present in several ways. As rationality and emotionality are each other's constitutive outside, emotionality constitutes the border which gives rationality its meaning, rendering it present in its absence. This applies also to skill and luck, a dichotomy explicitly addressed in the expelling of luck in the Unibet commercial. Also, the call for stoicism is an emotional interpellation, calling for a specific, gendered, way of handling emotionality (Hochschild 2003).
However, emotions are not only present in their absence. While the commercials above centered the emotional experience of betting, these center winning, at which point some emotional expressivity is called for. This differs from what Cassidy describes in the context of horse betting, where emotional displays when winning were strongly discouraged (Cassidy 2014, 180). In my material, the joy of winning is reachable only for the sufficiently rational gambler, thus, it constitutes a promise that is deferred to the end of the game; emotions should be controlled, and then released at the appropriate moment.
As de Boise et al. (2017, 786, italics in original) note, "to be rational, often, is to, quite literally, feel rational". In accordance with this, the emphasis on rationality and control constitutes an emotional interpellation centering the feeling of being in control. The centering of mastery, control, knowledge, and rationality is intimately related to the fantasy of the independent subject, acting upon and thereby controlling the world. Butler (1992, 10) notes that the instrumental actor "is itself the effect of a genealogy which is erased at the moment that the subject takes itself as the single origin of its action" while "the effects of an action always supersede the stated intention or purpose of the act". With this critique in mind, it becomes evident that rationality and control are aspects of a constructed and arbitrary masculine subject seeking to realize its own existence and stave off ontological anxiety. Interpellating viewers to feel like they are in control offers them an illusory investment in a seemingly stable, normative masculine position. However, it also subtly produces the viewers' lives as out of control, perhaps referring to a time when many markers of normative masculine positions, such as unquestioned access to women's bodies, life-long employment, and affordable housing are questioned or becoming increasingly inaccessible.
Emotionality was a prominent aspect of the offered sociality in the last section. Here, a subtle homosocial closeness is proposed between bettor and bookmaker/athlete. The relation between bettors instead has an air of competition, as the viewer is interpellated as exceptional: smarter, more rational, and more knowledgeable than other bettors. Hansson (2004) notes that luck-based, casual bettors are seen as cash cows by "serious", knowledge-based bettors, which is applicable here; rationality and knowledgeability are key to winning only if other bettors are sufficiently ignorant and irrational. This relation between bettors echoes ideas about homosociality as based on competition rather than intimacy (Sharon Bird 1996), which, as Michael S. Kimmel (1997) points out, is coupled with fear of failure or exposure. Hansson (2004, 53) points out that meetings with other horse bettors are risky, as flaws in one's knowledge might be exposed, which connects to this aspect of the emotional interpellation.
Concluding discussion
Using the lens of emotional interpellations, this article has scrutinized four betting commercials. While consistent in their centering of men and the masculine, the emotional interpellations have varied widely, which is especially notable as the commercials concerned the same form of gambling. This variation connects to other parts of contemporary consumerist culture (Alexander 2003; Frank 2014; Gee 2014; Goedecke 2021a), but the question of whether it indicates actual change in gendered power relations or is merely a way of making the same gendered power relations more acceptable remains open.
As Lopez-Gonzalez, Estévez, and Griffiths (2018) note, control and reduction of risk in betting commercials is connected to masculinity, but as I have pointed out, so are messages emphasizing risk, emotion, and lack of control. While men's politics are not addressed explicitly in the material (cf. Goedecke's (2021a) article about postfeminism in betting commercials), the variation in itself holds subversive promises as it exposes the incongruent, constructed nature of masculine positions and gender.
The centering of emotions in part of the material can be interpreted as subversive, and the emotional interpellations foregrounding emotionality and excitement in a community of bettors come across as more honest than those foregrounding rationality, arguably cynical as they exaggerate the importance of control and with it, the chance of winning. However, emotionality is made comprehensible through the connection with risk, sports, supporter culture, and consumption, and is an aspect of neo-liberal, consumerist culture striving to incorporate and exploit ever more aspects of the personal (Gill 2017). This reduces the subversive potential. The commercials emphasizing control also tie in well with individualist, capitalist ideologies; there, other gamblers are competition, not company, and the (possible) success of the individual is promoted while the necessary failure of others is obscured. Thus, the emotional interpellations vary as do the masculine positions produced, but ultimately they converge in accommodating neoliberal discourses; whether stoic or emotional, men should still bet.
The normalization of men within gambling advertising is achieved through the centering of (white) men and male bodies. Women's peripheral presence in three of the commercials, as audience, can be seen as a first step towards normalizing women as customers and bettors, but female athletes are still conspicuously absent, as are PoC and other minorities. However, it is important to note that the interpellation of men also subjects them to consumerist logics. Future research on gambling from feminist and gender perspectives must embrace this complexity.
The article has added to gambling research by putting it in dialogue with methods and theories from cultural and feminist studies. The variation between formulating sports betting as a game of skill and of chance complicates the association between certain games and skill or chance, as well as the often habitual connection between men and skill games in gambling research.
I have proposed emotional interpellations as a term for capturing cultural production of various emotions and emotionalities in contemporary consumerist culture. As previous research (Gill 2017; Hochschild 2003) suggests, gender is central to these interpellations. Importantly, interpellations can be decoded or resisted, and while I have pointed to little subversive potential, other readings are possible; for instance, closeness among men and distorted temporalities may be seen in a queer light. Additionally, the emotional interpellations discussed here have to do with the cultural production and circulation of various emotions and emotionalities, but emotions and gambling encompass both the cultural and the embodied. This points to the need for more multidisciplinary research on gambling and to future possibilities of studying emotional interpellations in consumerist culture.
"year": 2023,
"sha1": "a248f1ac174f06dcaecd7836009569920f28fd98",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/14680777.2022.2032789?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "71e9bb4032fe119308936a405f4b9af3e65e3c5a",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
The therapeutic value of bifidobacteria in cardiovascular disease
There has been an increase in cardiovascular morbidity and mortality over the past few decades, making cardiovascular disease (CVD) the leading cause of death worldwide. However, the pathogenesis of CVD is multi-factorial, complex, and not fully understood. The gut microbiome has long been recognized to play a critical role in maintaining the physiological and metabolic health of the host. Recent scientific advances have provided evidence that alterations in the gut microbiome and its metabolites have a profound influence on the development and progression of CVD. Among the trillions of microorganisms in the gut, bifidobacteria have, interestingly, been found to play a key role not only in regulating gut microbiota function and metabolism, but also in reducing classical risk factors for CVD (e.g., obesity, hyperlipidemia, diabetes) by suppressing oxidative stress, improving immunomodulation, and correcting lipid, glucose, and cholesterol metabolism. This review explores the direct and indirect effects of bifidobacteria on the development of CVD and highlights their potential therapeutic value in hypertension, atherosclerosis, myocardial infarction, and heart failure. By describing the key role of Bifidobacterium in the link between the gut microbiota and CVD, we aim to provide a theoretical basis for improving subsequent clinical applications of Bifidobacterium and for the development of Bifidobacterium nutritional products.
INTRODUCTION
Cardiovascular disease (CVD) is the leading cause of death worldwide, accounting for ~30% of global mortality 1 . According to the World Health Organization (WHO), the global death toll from CVD was 17.9 million in 2021 2 . Physiological factors such as age, gender, and genetics, as well as poor daily behaviors, are major CVD risk factors 3 . In addition, pathological states such as inflammation and oxidative stress, and underlying diseases such as obesity, hyperlipidaemia (HLP), and diabetes, have been proven to cause or exacerbate CVD 3,4 . However, a growing body of data has shown a correlation between the intestinal microbiota and CVD, and such microbiota hold great promise as an emerging therapeutic tool 5 . It has been reported that the homeostasis of the intestinal microbiota, its physiological functions, and its metabolites modulate human physiopathological processes and influence the development of CVD 6 .
Bifidobacterium is a genus in the phylum Actinobacteria of high-G+C, Gram-positive, non-motile, rod-shaped, strictly anaerobic bacteria that are widely found in the intestinal tract and other luminal tissues of humans and animals [7][8][9] . As some of the first microorganisms to colonize the human gut, accompanying humans throughout their lives, bifidobacteria have a wide range of benefits for human health 8 . Bifidobacteria are one of the most important bacterial groups in the neonatal gut, promoting the proper development of physiological functions in neonates. In the early stages of life, bifidobacteria comprise up to 90% of the infant gut microbiota 8,10 . In the gut of adults, bifidobacteria make up 2-14% of the total gut microbial community; in the elderly, this number shrinks further 11 . As probiotics, bifidobacteria are closely associated with human health and are best known for their role in promoting the health of the immune, digestive, and metabolic systems 8 .
Recent research advances have shown that among the more than 1000 microbial species in the gut, bifidobacteria show significant potential for improving cardiovascular disease 12 . Interestingly, the abundance of bifidobacteria tends to decrease when cardiovascular events occur and, conversely, is upregulated when CVD is ameliorated [13][14][15] . Emerging scientific evidence suggests that bifidobacteria not only have a mitigating effect on inflammation and oxidative stress but also improve intestinal barrier function and regulate intestinal microbial metabolites 14,[16][17][18] . Furthermore, researchers have shown that obesity, HLP, and type 2 diabetes mellitus (T2DM), which are serious risk factors for CVD, can be indirectly reduced by bifidobacteria [19][20][21] . This review mainly discusses the direct and indirect mechanisms by which bifidobacteria inhibit the development of CVD, including suppression of oxidative stress, immunomodulation, and correction of lipid, glucose, and cholesterol metabolism to reduce classical risk factors for CVD. In addition, current advances in the use of bifidobacteria in hypertension (HP), atherosclerosis (AS), myocardial infarction (MI), and heart failure (HF) are discussed, with an outlook on the future use of bifidobacteria as a cutting-edge therapeutic strategy for the prevention or treatment of CVD.
DIRECT IMPROVEMENT OF CARDIOVASCULAR DISEASE

Antioxidant properties
Oxidative stress is present in various CVDs, such as AS, HP, myocardial ischemia-reperfusion (I/R) injury, and HF, and plays a strong role in their development or exacerbation 22 . Oxidative stress occurs when there is production of excess reactive oxygen species (ROS), which include superoxide radicals, peroxyl radicals, hydroxyl radicals, and hydrogen peroxide 23 . In AS, oxidative stress first causes endothelial dysfunction, thus increasing vascular permeability 24 . Oxidative stress then modifies the low-density lipoprotein (LDL) recruited to the artery walls into oxidized low-density lipoprotein (ox-LDL), which forms the basis for foam cells 24 . Finally, the increased ROS exacerbate plaque development 24 . In HP, ROS act on vascular endothelial cell phosphorylation pathways and gene expression factors to cause abnormal vascular tone. Myocardial I/R manifests as the restoration of blood flow to hypoxic organs, a process that leads to excessive ROS production 25 . In addition, all phases of HF are accompanied by excessive production of ROS 24 . This demonstrates that oxidative stress on the heart aggravates prevailing CVD.
Many studies have demonstrated that bifidobacteria can improve CVD through their high antioxidant activity. For example, Bifidobacterium longum subsp. longum (B. longum) CCFM752 prevented HP and aortic lesions in rats 26 , and its culture supernatant prevented Ang II-induced increases in O2− and H2O2 in rat thoracic aorta smooth muscle A7R5 cells 16 . Previously, it was observed that an increase in the number of bifidobacteria in the intestine resulted in reduced cardiac risk ratios and atherosclerotic indices in broilers 27 . Numerous data on the mechanism of antioxidant action of bifidobacteria are available. First, iron-binding sites were identified in various bifidobacteria species, including B. longum NCC2705, which suggests an ability to chelate iron ions and thus inhibit their catalytic effect on oxidation reactions (Fig. 1a) 23 . Catalase (CAT), which degrades hydrogen peroxide, has been widely shown to be a critical enzyme in the antioxidant defense system. Previous data showed that CAT activity was enhanced and NADPH oxidase (NOX) activity was inhibited in A7R5 cells preincubated with culture supernatant of B. longum CCFM752 (Fig. 1a) 16 . Interestingly, the enhancement of CAT activity was not due to changes at the transcriptional level but to the upregulation of related proteins at the translational level, as cellular transcriptome sequencing revealed that the 60S ribosomal protein L7a (Rpl7a) was upregulated and its expression was positively correlated with intracellular CAT activity. The suppression of NOX activity may be due to the decreased expression of Noxa1. Other studies have demonstrated that Nrf2-Keap1-ARE is a pivotal pathway in the antioxidant defense system (Fig. 1a). Under basal conditions, Keap1 binds Nrf2 and targets it for ubiquitination; upon exposure to oxidative stress, the active cysteine residues of Keap1 are modified, inactivating Keap1 so that Nrf2 escapes degradation. The accumulated Nrf2 then acts as a master regulatory transcription factor, activating phase II detoxification enzymes, drug transporters, and antioxidant enzymes, thereby exerting intracellular antioxidant defense 28 . Previous reports have shown that bifidobacteria attenuate oxidative stress by promoting the dissociation of Nrf2 from Keap1 and facilitating the translocation of Nrf2 to the nucleus (Fig. 1a) 29 . The mechanism by which bifidobacteria induce Nrf2 expression remained unclear until researchers directed their attention toward certain intestinal metabolites. The first is indole-3-lactic acid (ILA), a major catabolic metabolite of tryptophan produced by more than 50 Bifidobacterium species 30 ; in 2020, researchers found that ILA increased the mRNA expression of GPX2, SOD2, and NQO1, target activator genes of Nrf2 31 . Second, an intestinal metabolite called sedanolide has recently been reported as a possible key effector in the activation of the Nrf2 pathway by bifidobacteria, because the increase in sedanolide abundance in the mouse intestine following administration of B. longum R0175 resulted in a significant upregulation of Nrf2 expression and Nrf2 pathway-related genes (NQO1, HO1) 32 . In addition, bifidobacteria metabolites such as glutathione (GSH) and folic acid also exert antioxidant benefits by maintaining the redox state of cells and increasing the antioxidant capacity of lipoproteins, respectively (Fig. 1a) 33 . It has also been proposed that the acid-producing effect of bifidobacteria helps maintain a low intestinal pH, limiting the proliferation of pathogenic bacteria and thus contributing to anti-oxidative stress (Fig. 1a) 33 . In summary, the anti-oxidative stress activity of bifidobacteria is realized by trapping metal ions, activating the antioxidant enzyme system, and producing antioxidant metabolites that regulate the intestinal microbiota.

Fig. 1 Direct improvement of CVD. a Bifidobacterium may function as an iron ion chelator to inhibit oxidation reactions. It can activate key genes in the antioxidant defense system, produce glutathione (GSH) to maintain the cellular redox state, and inhibit the proliferation of pathogenic bacteria through an acidic environment. b Bifidobacterium has special molecular structures such as serpins, villi, TgaA, and EPS that allow it to participate in specific immunomodulatory processes and exert anti-inflammatory activity. Its metabolite acetate can also stimulate the production of butyrate, which has anti-inflammatory effects, by serving as a cross-feeding substrate. c Bifidobacterium can act on intestinal epithelial cell junctions to maintain the integrity of the intestinal barrier and inhibit LPS translocation; it can also protect the intestinal barrier from damage by controlling the production of TMAO and SCFAs and maintaining the abundance of intestinal microbial species. (The figure does not contain any third-party material; the figure and each of its elements were created by the authors.)
Immunoregulatory properties
Previous data have shown that inflammation is critical in the development of CVD, especially AS. The activation of various inflammatory signaling pathways and increased production of pro-inflammatory cytokines reflect an intensified inflammatory response, which relates directly to CVDs such as AS, I/R, and MI 34 . Furthermore, the inflammatory response can influence the prognosis of some CVDs; for instance, a previous study demonstrated the association between the intensity of the inflammatory response and ventricular remodeling after MI 35 .
The immunoregulatory properties of bifidobacteria are well known and have thus long been used to treat several inflammatory conditions, such as allergies and necrotizing enterocolitis in premature infants. The accumulating knowledge about intestinal microbial functions has allowed researchers to focus on the relationship between bifidobacteria and CVD 36 . Liang et al. 14 reported that Bifidobacterium animalis subsp. lactis (B. lactis) F1-7 enhanced the anti-inflammatory effect of krill oil, thereby alleviating the inflammatory response in atherosclerotic mice more strongly than administration of krill oil alone. B. lactis has been reported in a randomized trial to significantly reduce the pro-inflammatory cytokines TNF-α and IL-6 in blood samples from patients with metabolic syndrome, which contributes to the reduction in cardiovascular risk in these patients 15 . Several specific molecular structures in bifidobacteria, such as secreted immunomodulatory proteins and peptides (serpins, villi, peptidoglycan hydrolase TgaA) as well as extracellular polysaccharides (EPS), contribute to their anti-inflammatory properties (Fig. 1b) 37 . These specialized structures drive specific immunomodulatory processes by interacting with intestinal microbes or host cells. Specifically, the production of serpin protects the bifidobacteria from the adverse effects of host-derived proteases through anti-inflammatory activities 38 . Some villi (sortase-dependent pili) can induce high levels of local inflammatory factors while inhibiting the systemic inflammatory response, thus facilitating initial crosstalk between bifidobacteria and host cells 38 . Peptidoglycan hydrolase TgaA has been shown to induce production of IL-2, a key cytokine, in T cells 39 . EPS affects the differentiation of T cells into T helper cells, thus regulating the levels of pro-inflammatory and anti-inflammatory cytokines. In addition, it has been shown that bifidobacteria can trigger intestinal metabolite cross-feeding mechanisms and maintain intestinal homeostasis, which contributes to the maintenance of the intestinal immune system (Fig. 1b) 37,38 . Among the intestinal metabolites, butyrate is a potent anti-inflammatory agent that maintains the immune system and is involved in immune signaling by stimulating anti-inflammatory factors (TGF-β, IL-10) and inducing T-cell differentiation. Although bifidobacteria do not directly produce butyrate, the acetate they produce can be used as a fermentation substrate for the indirect production of butyrate through cross-feeding interactions 40 . Strains of Bifidobacterium species also participate in tryptophan metabolism in the gut, and the resulting metabolite indole-3-carbaldehyde (I3C) acts as a ligand of AhR, which is involved in immune signaling, and inhibits abnormal immune responses 41,42 . Although serotonin is best known as a central nervous system neurotransmitter, there is growing evidence that peripheral serotonin affects cardiovascular disease by influencing immune cell reactivity 43 . Previous studies have shown that bifidobacteria reduce serotonin levels by modulating key components of the serotonergic system in different host tissues, including the number of enterochromaffin cells and the expression of genes involved in serotonin synthesis and reuptake (TPH1, SERT) [44][45][46][47] . In fact, these findings are far from allowing us to fully understand the role of bifidobacteria in the human immune system, as most current research remains at the cellular or animal level. Given the intricate nature of intestinal microorganisms, deciphering the immunomodulatory mechanisms of bifidobacteria within the gut presents a formidable task. It is also important to acknowledge that the immunomodulatory effects of bifidobacteria may be influenced, and even reversed, by the presence of other bifidobacteria strains.
Intestinal barrier
Intestinal barrier dysfunction is an undervalued risk factor for CVD 48 . The integrity of the intestinal barrier is maintained by the connections between intestinal epithelial cells, which consist of proteins such as tight junction (TJ) proteins, adhesion junction (AJ) proteins, gap junction proteins, and desmosomal proteins 49 . TJ proteins, located at the apical side of the intestinal epithelial cells, are essential for intercellular communication and paracellular permeability (Fig. 1c). Once the TJ proteins are impaired, bacteria and bacterial products such as lipopolysaccharide (LPS) leak from the intestine into the systemic circulation (Fig. 1c) 48 . There are accumulating reports on the association between LPS translocation and CVD. Abnormal elevation of serum LPS has been observed in AS, HP, chronic heart failure (CHF), and MI 48 . Indeed, the impaired intestinal barrier, endothelial dysfunction, and pro-inflammatory response induced by LPS translocation have been shown to seriously affect the development of CVD.
It should be emphasized that the abundance of bifidobacteria directly impacts the integrity of the intestinal barrier. A previous study using mouse models demonstrated that a high-fat diet (HFD) damages the intestinal barrier by reducing the number of bifidobacteria 50 . The effects of bifidobacteria on TJ and AJ proteins have been demonstrated in recent studies. One study reported that administration of bifidobacteria inhibited LPS translocation by increasing serum glucagon-like peptide-2 (GLP-2), which upregulated the TJ proteins ZO-1 and occludin (Fig. 1c) 17 . As metabolites of bifidobacteria, short-chain fatty acids (SCFAs) are reported to protect intestinal barrier function by promoting the assembly of TJ proteins and inhibiting the activation of inflammasomes (Fig. 1c). These data demonstrate that bifidobacteria are indeed beneficial in maintaining the intestinal barrier and thus have the potential to be used in the repair of a "leaky gut". However, the precise mechanism underlying the protective impact of bifidobacteria on the intestinal barrier remains incompletely understood, including its interplay with structures such as the villi and crypts of the intestinal epithelium, and whether changes in junction proteins are linked to the attenuation of inflammatory processes.
Gut microbiota metabolites
Intestinal microbial metabolites are novel risk factors for cardiovascular events 51 . The gastrointestinal tract is integral to digestion and absorption, and the gut microbiota creates a bridge between diet and host. The gut microbiota produces an abundance of small-molecule metabolites when involved in the co-metabolism of food or exogenous substances, some of which are critical in the transmission of information between the host gut and distant organs, such as trimethylamine N-oxide (TMAO), SCFAs, conjugated fatty acids, and secondary bile acids 52,53 . Recently, there has been much interest in the causal relationship between these small molecules and CVD, and it has been found that these small-molecule metabolites play a role in the development and progression of CVD 54 .
Trimethylamine N-oxide (TMAO). TMAO is a potential risk factor for chronic diseases, especially CVDs. The intake of red meat, milk, poultry, and eggs increases the body's choline or trimethylamine products, which are metabolized by the intestinal microbiota to produce trimethylamine (TMA). TMA undergoes intestinal absorption and portal vein transport and is then oxidized to TMAO by flavin-containing monooxygenases (FMO) 1 and 3 in the liver (Fig. 1c) 55 . TMAO has been shown to be a potential promoter of CVD, especially AS 56 . Several mechanisms by which TMAO promotes AS have been proposed, including promotion of the expression of inflammatory factors, disruption of the balance of cholesterol metabolism, and induction of thrombosis 57 . It is worth mentioning that TMAO has been proven to predict the risk of CVD in multiple clinical cohorts and has demonstrated consistent prognostic value for a variety of adverse cardiac events (including coronary artery disease, HF, MI, and death) [58][59][60][61][62] .
In addition, it has been hypothesized that bifidobacteria can downregulate TMAO levels in vivo, which has been confirmed by findings from multiple studies. In a previous study on the mechanism of resveratrol against AS, the level of bifidobacteria in the intestinal microbiota of mice increased, accompanied by a decrease in TMAO levels 63 . This negative correlation between bifidobacteria and TMAO levels led to the exploration of deeper interactions. The TMAO-lowering properties of bifidobacteria have been explained by several mechanisms (Fig. 1c). A study conducted in choline-fed mice showed that three bifidobacteria, including Bifidobacterium breve (B. breve) Bb4, B. longum BL1, and B. longum BL7, significantly decreased plasma TMAO levels and restored the abundance of some gut microbial species, while maintaining the activity of FMO 18 . This finding suggests that bifidobacteria may reduce TMAO levels by modulating gut microbial abundance. Similarly, another study reported that administration of B. lactis F1-3-2 decreased plasma TMAO independently of the regulation of FMO; whether it directly degraded TMA or adjusted the intestinal microbiota structure remains to be determined 64 . In addition, bifidobacteria may antagonize some strains responsible for synthesizing TMAO precursor molecules, such as Clostridium difficile 65 . Some probiotics have been proposed to reduce TMAO levels by altering miRNAs and modulating metabolomic profiles 66 , but data on bifidobacteria are still scarce and more targeted experimental evidence is needed.
Short-chain fatty acids (SCFAs). Until recently, data on the role of SCFAs in CVD were limited. However, growing evidence has shown that SCFAs are important in regulating cardiovascular function 67 . In HP, increases in both propionic and acetic acid induce vasodilation, and butyrate has been shown to relieve HP by inhibiting the angiotensin system in the kidney 68 . The properties of SCFAs, such as immunomodulation, anti-oxidative-stress activity, and improvement of lipid metabolism, are of great significance to the treatment of AS 69,70 . In addition, SCFAs act on the nervous system to protect the heart from injury and maintain its functions. A previous study showed that butyrate reversed autonomic imbalance in I/R rats and improved cardiovascular function by targeting the paraventricular nucleus and superior cervical ganglia 71 . It has also been established that butyrate may prevent ventricular arrhythmias after MI by inhibiting sympathetic remodeling 72 . SCFAs are thus a class of metabolites whose impact on the development of CVD is of far-reaching significance.
Human dietary fiber is degraded by the intestinal microflora into organic acids, gases, and a large number of SCFAs 73 . Acetic acid, propionic acid, and butyric acid account for 90% of the SCFAs 74 . It is important to note that acetic acid is the main end-product of the metabolism of bifidobacteria. Bifidobacteria can indirectly increase butyric acid levels in the gut through cross-feeding interactions that enhance intestinal colonization by other butyric acid-producing commensal microorganisms (Fig. 1c), such as Faecalibacterium prausnitzii 75 and Eubacterium hallii 76 . However, cross-feeding interactions between bifidobacteria and butyrate-producing bacteria vary from species to species, ranging from symbiotic to competitive relationships for energy substrates, and are largely related to the ability of bifidobacteria to degrade energy sources 77 . Currently, studies on the ability of different Bifidobacterium strains to degrade energy sources need to be deepened and continuously refined.
INDIRECT IMPROVEMENT OF CARDIOVASCULAR DISEASE
Obesity
Obesity is associated with numerous severe health consequences, with cardiovascular disease (CVD) being a primary concern. It contributes to the development of CVD and increases CVD mortality 78 . Recent data suggest that severe obesity increases the risk of cardiovascular-related events to varying degrees: it increased the risk of HF by about 4-fold and the risk of coronary heart disease and stroke by nearly 2-fold 79 .
Appetite. Recent studies have suggested that bifidobacteria are involved in energy homeostasis and appetite regulation in the central nervous system (CNS) by improving the levels of hormones such as leptin and ghrelin 80,81 .
Leptin, a peptide hormone, is associated with the CNS's perception of energy balance and food intake 82 . Leptin works by reducing dietary intake and body weight, so as body fat increases, leptin continues to rise 83 . In chronically obese people, persistently high blood leptin levels reduce the sensitivity of leptin receptors in the hypothalamus, a phenomenon widely known as leptin resistance. Leptin resistance is characterized by a strong appetite, reduced energy expenditure, and obesity. The mechanisms of leptin resistance include stimulation of inflammatory factors leading to abnormal signaling, reduced efficiency of blood-brain barrier transport, and receptor mutations. A study by Renata et al. 81 demonstrated that after gastric gavage of probiotics containing B. bifidum in HFD-fed mice, the leptin resistance of obese mice was significantly improved, reflected in significantly reduced food intake. This may be because probiotics containing B. bifidum significantly reduce mRNA levels of the pro-inflammatory molecules TLR4 and IL-6 and the expression of JNK and IKK proteins in the hypothalamus, thereby correcting abnormal leptin signaling (Fig. 2a). The molecular interaction between bifidobacteria and leptin remains to be clarified. However, it has been demonstrated that SCFAs stimulate the expression of leptin in adipocytes by activating free fatty acid receptor 3 (FFAR3) 84 , which provides a basis for future studies on the association between bifidobacteria and leptin.
Besides reducing appetite by improving impaired leptin signaling pathways, bifidobacteria also improve ghrelin signaling pathways.
Ghrelin is an endocrine hormone produced in the gastric mucosa that is responsible for the regulation of appetite and energy expenditure 85,86 . Specifically, ghrelin is secreted in large quantities during starvation; it then enters the blood circulation, reaches the brain, and acts on the hypothalamic and midbrain limbic circuits to control dietary behavior and food intake 87 . For instance, a previous study showed that within 30 minutes of eating, ghrelin in non-obese individuals was suppressed by 39.5% and remained low before returning to baseline 88 . In contrast, in obese individuals, food intake did not seem to cause the expected decrease in ghrelin 88 . Such postprandial ghrelin blunting is usually one of the reasons for excessive food intake in obese people. Previous studies using animal models showed that serum ghrelin levels were negatively correlated with the abundance of some bacterial populations in the gut microbiome, including Bifidobacterium species 89 . Some scholars have explored the effect of bifidobacteria on anti-obesity outcomes in humans. One study showed that B. longum APC1472 significantly increased ghrelin activity in obese individuals, suggesting that B. longum APC1472 improves ghrelin signaling defects (Fig. 2a) 80 .
Lipid and glucose metabolism. Beyond appetite, metabolic abnormalities directly drive the progression of obesity. It has been shown that obesity-related metabolic disorders are often accompanied by a decrease in the genus Bifidobacterium in the intestines 90 . Nevertheless, accumulating experimental evidence suggests that bifidobacteria supplementation can correct disordered lipid metabolism in obese hosts in multiple ways.
In general, in non-obese people, a dynamic balance is maintained between fat consumption and production. However, owing to long-term unhealthy lifestyles or certain disease factors, lipid homeostasis can be disrupted; when the breakdown of fat falls far short of its production, lipids accumulate excessively and eventually cause obesity 91 . A variety of bifidobacteria, such as Bifidobacterium adolescentis (B. adolescentis) and Bifidobacterium animalis subsp. animalis (B. animalis), showed a tendency to restore lipid profiles (including LDL-C, HDL-C, TC, and triglyceride (TG)) to normal levels 19,92 . It has been demonstrated that bifidobacteria regulate fat accumulation by upregulating the mRNA expression of thermogenic genes and lipolytic enzymes and by inhibiting the activation of lipogenic genes. For example, B. adolescentis was observed to upregulate mRNA expression of the Ucp-1, Pgc1-α, Ppar-γ, and Ppar-α genes in HFD-fed mice, producing fatty acids that promote lipid metabolism in brown adipose tissue (Fig. 2a) 19 . Ppar-α upregulates the expression of enzymes responsible for mitochondrial fatty acid oxidation, while Ppar-γ activation triggers brown adipocyte differentiation and adipogenesis 93 . These results suggest that acceleration of fatty acid conversion is one of the ways bifidobacteria improve fat metabolism (Fig. 2). In addition, some bifidobacteria may also improve lipid metabolism by producing EPS, a substance with potential health benefits 94 . Another study demonstrated that the expression of acyl-CoA oxidase 1 (ACOX 1), carnitine palmitoyl transferase 1A (CPT 1A), and 3-hydroxy-3-methylglutaryl-CoA reductase (Hmgcr) in diet-induced obese mice was significantly upregulated by the application of the EPS-producing strain B. animalis IPLA R1 (Fig. 2a) 92 . ACOX 1 and CPT 1A are rate-limiting enzymes in the fatty acid oxidation pathway, while Hmgcr is a rate-limiting enzyme in cholesterol synthesis 95 . These data suggest that B. animalis IPLA R1 can inhibit fat accumulation in the liver by accelerating lipid oxidation and increasing cholesterol excretion (Fig. 2a). Similarly, SCFAs, as intestinal microbial metabolites, can be directly involved in improving lipid metabolism because of their ability to increase lipid oxidation (Fig. 2a) 69,96 .
Numerous studies have indicated that the regulation of lipid metabolism by bifidobacteria is often accompanied by normalization of blood glucose levels [97][98][99] . Parallel to lipid metabolism disorders, glucose metabolism disorders are also closely related to obesity. A study by Cano PG et al. 99 found that seven weeks of continuous administration of Bifidobacterium pseudocatenulatum (B. pseudocatenulatum) CECT 7765 reduced blood glucose levels in obese HFD-fed mice by 17%, demonstrating its ability to increase glucose tolerance. The improvement of glucose metabolism parameters by bifidobacteria may be associated with activation of glucokinase (GCK) (Fig. 2a) 100 . It is worth noting that GCK tightly controls the rate of insulin secretion in pancreatic β-cells, so small changes in its activity can shift the insulin secretion threshold and affect glucose homeostasis 100 . A previous systematic analysis showed that yogurt fermented with B. longum 070103 successfully activates GCK and thus significantly reduces fasting blood glucose and improves glucose tolerance as well as insulin resistance 101 . Moreover, specific regulation of the abundance of crucial intestinal microbes and reduction of metabolites such as 3-indolyl sulfate and 4-hydroxybutyric acid to alleviate glucose metabolic disorders may also be part of the mechanism of the hypoglycemic activity of bifidobacteria 101 .
Hyperlipidaemia (HLP)
HLP is defined as excessive blood lipids (the sum of all lipids in the plasma) due to abnormal fat metabolism or transport. Abnormal levels of one or more plasma lipids often lead to HLP, including increased total cholesterol (TC), TG, and low-density lipoprotein cholesterol (LDL-C), and decreased high-density lipoprotein cholesterol (HDL-C) 102 . This explains why HLP presents as hypercholesterolemia, hypertriglyceridemia, or both (mixed HLP). Without intervention, however, any kind of HLP eventually leads to CVD. Early intervention for dyslipidemia is essential for the primary and secondary prevention of CVD, especially AS and coronary heart disease.
Hypercholesterolemia. Cholesterol is a subtype of lipid that is transported throughout the body in the form of lipoproteins. Lipoproteins can be divided into LDL-C, very low-density lipoprotein cholesterol (VLDL-C), and HDL-C, which perform distinct functions in the body. LDL-C carries cholesterol into the artery, where it binds to macrophages on the arterial wall. Macrophages absorb cholesterol and gradually develop into foam cells, sowing the seeds of AS. LDL-C is, therefore, generally considered "bad" cholesterol. In contrast, HDL-C transports cholesterol accumulated in the arteries to the liver for metabolism, thereby preventing CVD. It has been suggested that every 1.0 mmol/L reduction in LDL cholesterol reduces the risk of CVD and mortality by 22% 102 . Adverse reactions and contraindications to the statins and fibrates currently used in clinical practice continue to trouble doctors and patients 103 . The role of bifidobacteria in lowering cholesterol is increasingly emphasized, not only because of its effectiveness but also because of its diverse mechanisms of action. Cholesterol assimilation occurs when Bifidobacterium binds cholesterol to the cell surface and absorbs it into the phospholipid bilayer of the membrane (Fig. 2b) 104 . Different Bifidobacterium strains show varying degrees of assimilation ability. For example, under the same experimental conditions, the cholesterol adsorption rate of Bifidobacterium bifidum (B. bifidum) MB109 was 52%, while that of Bifidobacterium longum subsp. infantis (B. infantis) ATCC 15697 was 34% 105 . Secondly, the bile salt hydrolase (BSH) activity of bifidobacteria enables them to reduce cholesterol by participating in bile acid metabolism. More specifically, Bifidobacterium hydrolyzes conjugated bile acids into primary bile acids through its high BSH capacity, making them more easily excreted in feces (Fig. 2b). The reduction of bile salts and the loss of bile acids stimulate the liver to increase cholesterol conversion, which decreases serum cholesterol levels. These findings were confirmed by Al-Sheraji and Jiang in in vivo studies using rats 20,106 . BSH activity has also been shown to limit cholesterol absorption by reducing the solubility of bile acids (Fig. 2b) 107 . Thirdly, cholesterol conversion also contributes to the removal of cholesterol by bifidobacteria (Fig. 2b). Another study observed an increase in coprostanol in cholesterol-rich cultures of B. bifidum PRL2010, accompanied by upregulation of BBPR 0519, which is predicted to encode the aldehyde/ketone reductase that catalyzes the conversion of cholesterol to coprostanol 108 .
High triglycerides (TG). More than 30 years ago, the negative effect of high TG on CVD was considered comparable to that of high TC 109 . Although the advent of statins shifted the focus to lowering LDL-C over time, there has been renewed interest in TG 110 . It has been shown that when TG concentrations are mildly or moderately elevated (2-10 mmol/L), triglyceride-rich lipoproteins remain small enough to enter the arterial wall, thus increasing the risk of CVD 111,112 . In contrast, when TG concentrations are severely elevated (>50 mmol/L), the lipoproteins become too large to enter the arterial wall, which can confound judgments of TG-associated risk 113 . Although more than one mechanism may link TG to CVD, one has been discussed most clearly: high TG leads to increased concentrations of remnant cholesterol, which enters the arteries and promotes inflammatory episodes and foam cell formation, ultimately increasing the risk of CVD and death 114 .
Bifidobacteria are negatively correlated with TG levels. Clinical studies have further shown that supplementation with bifidobacteria or intake of functional foods containing bifidobacteria can reduce serum TG levels 115,116 . The TG-lowering effect of bifidobacteria is usually accompanied by an improvement in the overall lipid profile. The TG-lowering mechanism of bifidobacteria is less well defined than the cholesterol-lowering mechanism. However, the benefits and good tolerability of bifidobacteria in reducing TG make them worth considering in the therapy of lipid abnormalities.
Type 2 diabetes mellitus (T2DM)
Diabetes is an established risk factor for CVDs such as HF, coronary heart disease, stroke, and atrial fibrillation 117 . It is estimated that diabetic patients are 2-4 times more likely to die from CVD than non-diabetic patients, and at least 68% of diabetics over 65 years old die from various forms of CVD 118 . More than two-thirds of patients with T2DM are reported to have HP 119 . According to a meta-analysis, the risk of HF increased by a factor of 1.11 (95% CI, 1.04-1.17) for every 1 mmol/L (≈18 mg/dL) increase in fasting blood glucose, indicating a positive correlation between fasting blood glucose and HF 120 .
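The unit conversion and per-unit risk figure above can be made concrete with a short illustrative sketch. The conversion factor follows from the molar mass of glucose (about 180.16 g/mol); extrapolating the 1.11-per-mmol/L risk ratio multiplicatively assumes a log-linear dose-response, which is an assumption for illustration rather than a claim of the cited meta-analysis:

```python
# Illustrative sketch: glucose unit conversion and log-linear risk scaling.

GLUCOSE_MOLAR_MASS_G_PER_MOL = 180.16  # molar mass of glucose

def mmol_per_l_to_mg_per_dl(mmol_l: float) -> float:
    """1 mmol/L glucose = 180.16 mg/L = 18.016 mg/dL (hence the ~18 mg/dL rule)."""
    return mmol_l * GLUCOSE_MOLAR_MASS_G_PER_MOL / 10.0

def hf_risk_ratio(delta_mmol_l: float, rr_per_mmol: float = 1.11) -> float:
    """Risk ratio for HF after a fasting glucose rise of `delta_mmol_l`,
    assuming the reported 1.11 per 1 mmol/L applies multiplicatively."""
    return rr_per_mmol ** delta_mmol_l

print(round(mmol_per_l_to_mg_per_dl(1.0), 2))  # ≈ 18.02 mg/dL
print(round(hf_risk_ratio(2.0), 4))            # 1.11**2 = 1.2321
```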
Recently, the unique benefits of bifidobacteria in the treatment of T2DM have been validated in both animal studies and clinical trials. More studies have revealed the mechanisms underlying this beneficial role, and several intermediate results have been achieved. Studies have claimed that bifidobacteria have a significant effect on improving insulin resistance (IR) in patients with T2DM, and some strains are even more effective than metformin 21,29,121,122 . The experimental studies by Zhang et al. 29 showed that bifidobacteria use several pathways to improve insulin sensitivity. First, bifidobacteria may alleviate IR by targeting hepatic gluconeogenesis genes to reduce gluconeogenesis (Fig. 2c). In T2DM rats, B. animalis 01 down-regulated phosphoenolpyruvate carboxykinase (PEPCK) and glucose-6-phosphatase (G6Pase) and upregulated the expression of the Nrf2, IRS-2, PI3K, and AKT-2 genes (Fig. 2c) 29 . Researchers have likewise reported that B. longum BL12 and B. lactis HY8101 down-regulate PEPCK and G6Pase in the liver of T2DM mice 21,121 . Nrf2 is a protective factor against oxidative damage. Activation of Nrf2 inhibits IRS-2 phosphorylation and thus increases the expression of downstream signals such as PI3K and AKT (Fig. 2c). The IRS-PI3K-AKT pathway is critical in hepatic insulin signaling. Therefore, enhancement of antioxidant capacity is another effective way in which bifidobacteria restore insulin signaling and repair IR.
Inflammation is known to be associated with the induction of IR and the development of T2DM 123 . Owing to their anti-inflammatory properties, B. adolescentis strains have recently been found to alleviate IR. The diabetic state of T2DM mice improved after administration of B. adolescentis: pro-inflammatory factors including TNF-α, IL-6, and IFN-γ were significantly inhibited, and the concentrations of butyric and propionic acids were significantly increased (Fig. 2c) 122 . Based on the strong association between the elevated SCFA concentrations caused by these strains and their effects on blood glucose, Qian 122 hypothesized that the antidiabetic effects of B. adolescentis are mediated through a bifidobacteria-gut microbiota-SCFAs-inflammation axis. B. lactis GCL2505, which has been evaluated for improving diabetes, appears to regulate SCFA (particularly acetate) levels 124 . In addition, elevated SCFAs stimulate GPR43, leading to the secretion of GLP-1, which regulates β-cell growth, stimulates glucose-dependent insulin release, and inhibits glucagon secretion (Fig. 2c) 124 .
CURRENT APPLICATIONS IN CARDIOVASCULAR DISEASE
Clinical studies on applications of Bifidobacterium species in CVD are summarized in Table 1. The studies vary by subject, sample size, bacteria, product, dosage, and study design.
Hypertension (HP)
HP is a chronic disease characterized by continuously elevated arterial blood pressure. Long-term HP is the most significant risk factor for coronary artery disease, stroke, HF, atrial fibrillation, and other CVDs. A 20 mmHg increase in systolic blood pressure or a 10 mmHg increase in diastolic blood pressure is associated with a two-fold increase in the risk of death from stroke, heart disease, or other vascular diseases 125 . Based on evidence from CVD attribution analyses, it has been proposed that a rightward shift in the population blood pressure distribution accounts for a major share of cardiovascular disease in humans 126 .
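The dose-response figure above can be illustrated with a minimal sketch, assuming the association is log-linear (risk doubles for every 20 mmHg systolic or 10 mmHg diastolic elevation). The function and any extrapolation beyond a single doubling step are illustrative assumptions, not part of the cited analysis:

```python
# Illustrative log-linear model: risk doubles every `doubling_step` mmHg.

def vascular_risk_ratio(delta_mmhg: float, doubling_step: float) -> float:
    """Relative risk of vascular death for a BP elevation of `delta_mmhg`,
    assuming risk doubles every `doubling_step` mmHg."""
    return 2.0 ** (delta_mmhg / doubling_step)

print(vascular_risk_ratio(20, 20))  # systolic +20 mmHg -> 2.0
print(vascular_risk_ratio(10, 10))  # diastolic +10 mmHg -> 2.0
print(vascular_risk_ratio(40, 20))  # systolic +40 mmHg -> 4.0 (extrapolated)
```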
A link between reduced bifidobacteria abundance and elevated blood pressure in children with type 1 diabetes mellitus (T1DM) has been proposed 13 . In this study, children were assigned to three groups: a healthy control group (HC, n = 5), a T1DM group with normal blood pressure (T1DM-Normo, n = 17), and a T1DM group with elevated blood pressure (T1DM-HBP, n = 7) 13 . Analysis of the gut microbiota of each group revealed that bifidobacteria abundance in the guts of children in the T1DM-HBP group was significantly lower than in the other two groups 13 . Another trial indicated that supplementation with probiotics containing B. lactis HN019 reduced systolic blood pressure by 5 mmHg and diastolic blood pressure by 2 mmHg in women with arterial HP, with effective improvements in lipid metabolism and fasting glucose levels 127 .
In Germany, 100 grade 1 HP patients were invited to participate in probiotic intervention trials 128 . After 8 weeks of ambulatory nocturnal blood pressure monitoring and evaluation of fecal microbiome composition and immune cell phenotypes, it was proposed that the mechanisms by which bifidobacteria reduce HP involve improving immune cell homeostasis by transforming dietary metabolic components 128 . This hypothesis has been tested in rat models. In deoxycorticosterone acetate (DOCA)-salt hypertensive rat models, B. breve CECT7263 alleviated HP by increasing acetate concentrations in the gut, reducing TMA, restoring Th17/Treg immune homeostasis, and suppressing vascular NADPH oxidase activity 129 . In spontaneously hypertensive rat (SHR) models, B. breve CECT7263 suppressed elevated blood pressure by increasing the number of butyrate-producing bacteria, preventing Th17/Treg dysregulation, reducing endotoxemia, and improving endothelial dysfunction 130 .
Atherosclerosis (AS)
Clinically, AS is an intimal disease in which fatty deposits form plaques in the inner layers of arteries 131 . Plaque growth then leads to thrombosis and arterial bulging. Fibrous tissue proliferation and calcium deposition accelerate arterial wall thickening and hardening 131 . Finally, narrowing or blockage of the arterial lumen leads to ischemia and necrosis of the tissues and organs supplied by the artery 131 . Physiologically, the processes involved in AS development are complex and slow, involving many pathological changes, including hypercholesterolemia, inflammation, oxidative stress, and TMAO 66,131,132 . The cholesterol-lowering, anti-inflammatory, anti-oxidative-stress, and TMAO-modulating effects of bifidobacteria have been reported previously, and the corresponding mechanisms are described in detail in Milad's review 133 . However, more preclinical and clinical trials are needed to confirm that bifidobacteria supplementation can improve AS.
Randomized trials involving individuals with mild to moderate hypercholesterolemia revealed significant cholesterol-lowering effects of bifidobacteria 134 . Thirty-two adult males were randomized into two groups and given 3 × 100 ml/day of ordinary yogurt or fermented yogurt for 4 weeks 116 . A decrease in TC levels was established in the B. longum BL1 fermented yogurt group, and the effect was more pronounced in individuals with moderate hypercholesterolemia (TC > 240 mg/dl) 116 . Another randomized trial revealed that in mild hypercholesterolemia patients with TC levels between 180 and 220 mg/dl, 10 weeks of a formula containing B. lactis Bb12 reduced TC and LDL-C levels by 8.1% and 10.4%, respectively 135 . A clinical trial involving healthy young people showed that bifidobacteria improved lipoprotein profiles in hypercholesterolemia patients but not in those with normal cholesterol levels 136 . In addition, some valuable clinical data showed that bifidobacteria improve TC levels in T2DM patients 137,138 .
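As a worked example of the relative reductions reported for the B. lactis Bb12 trial (TC −8.1%, LDL-C −10.4%), the sketch below applies them to an assumed baseline. The 200 mg/dl TC baseline is a hypothetical mid-range value within the trial's 180-220 mg/dl inclusion window, and the LDL-C baseline is purely illustrative:

```python
# Hypothetical worked example: applying reported percent reductions to
# assumed baseline lipid values (mg/dl).

def after_reduction(baseline_mg_dl: float, percent_drop: float) -> float:
    """Lipid level after a given percent reduction from baseline."""
    return baseline_mg_dl * (1.0 - percent_drop / 100.0)

tc_baseline = 200.0   # assumed mid-range TC baseline (mg/dl)
ldl_baseline = 130.0  # illustrative LDL-C baseline (mg/dl)

print(round(after_reduction(tc_baseline, 8.1), 1))    # 200 -> 183.8 mg/dl
print(round(after_reduction(ldl_baseline, 10.4), 1))  # 130 -> 116.5 mg/dl
```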
An animal study investigating the effects of probiotics on plasma TMAO revealed that 7 of 16 Bifidobacterium strains significantly reduced plasma TMAO concentrations, and that the plasma TMAO reduction rate for B. longum BL1 was as high as 30.89% 18 . Clinical trials have begun to test whether bifidobacteria suppress TMAO levels. In a previous study, TMAO levels were found to be elevated in 40 healthy young men (20-25 years) who had been subjected to a phosphatidylcholine challenge test 139 . They were then randomized into two groups, one of which received a probiotic intervention 139 . TMAO levels decreased for most participants in the probiotic group; however, the decrease was not significant 139 . A similar randomized double-blind trial involving 27 healthy volunteers (mean age 47.1 years) obtained contrasting results 140 . In this study, participants were assigned to receive either bifidobacteria or a placebo 140 . After 12 weeks, fecal TMA concentrations and the abundance of TMA-producing bacteria were significantly lower in the bifidobacteria group than in the placebo group (p < 0.05) 140 . It is important to note that since there is a positive correlation between plasma TMAO levels and age, the age of the subjects may have been a critical factor behind the different outcomes of the two experiments 141,142 . This may be because the expression of FMO3 isoforms increases with age in clinical models, promoting TMA transformation to TMAO 143 .
Overall, further studies should elucidate the link between bifidobacteria and TMAO. The potential mechanisms by which bifidobacteria affect atherosclerotic CVD via low-grade inflammation have been discussed 144 . In the past 5 years, clinical trials have reported positive effects of bifidobacteria on inflammation and oxidative stress in diabetic patients with coronary heart disease 145,146 . One of the trials involved 60 overweight, diabetic, coronary heart disease patients aged 50-58 years and aimed to assess the effects of synbiotics (including bifidobacteria) on inflammatory biomarkers, carotid intima-media thickness, and oxidative stress 146 . After 12 weeks of treatment, bifidobacteria significantly reduced hs-CRP and plasma malondialdehyde levels and significantly increased nitric oxide (NO) levels 146 . In Poland, probiotic supplements containing various Bifidobacterium strains were shown to reduce IL-6, TNF-α, and thrombomodulin levels in postmenopausal obese women and effectively improved arterial stiffness 147 .
Myocardial infarction (MI)
Clinically, MI, also known as myocardial ischemic necrosis, involves a rapid reduction or interruption of the coronary blood supply due to coronary arterial disease (such as AS or spasm), leading to acute and persistent ischemia in the myocardium supplied by the affected coronary arteries and ultimately resulting in myocardial necrosis.
Previous studies have reported low gut abundance of bifidobacteria in rats 148 and humans 149 with MI. However, they did not conclusively determine whether bifidobacteria are cardioprotective against MI. The direct association between bifidobacteria and MI was first reported by Lam et al. in 2012. Lam et al. 150 reported that myocardial infarct sizes were reduced by 29% in MI rats fed Goodbelly for 14 days. Goodbelly is a commercially available probiotic juice containing two probiotics, Lactobacillus plantarum 299v and B. lactis Bi-07. In the absence of direct mechanistic evidence, the protective effects of this probiotic were postulated to result from a reduction in serum leptin levels. Danilo et al. 151 found that administration of B. lactis B420 for 4 weeks or 7 days significantly mitigated myocardial infarct sizes in mice following I/R. The epigenetic mechanisms of B. lactis B420 against MI were also identified. Further, Danilo et al. explained that the anti-MI effects of B. lactis B420 were achieved by suppressing the levels of inflammatory factors and accelerating the transition to M2-type macrophages via the mediating effects of anti-inflammatory regulatory T cells. Their findings are in line with those of Jafar Sadeghzadeh et al., who found that oral administration of a probiotic combination containing B. breve exerted cardioprotective effects in rats with infarct-like myocardial injuries by attenuating TNF-α and inhibiting oxidative stress 152 .
Depression-like behaviors and the development of depression are frequent in MI patients 153 . Bifidobacteria have been shown to reduce the development of depression-like behaviors after MI in animal models [154][155][156] . Since increased apoptosis was observed to varying degrees in various limbic system structures after MI, Girard et al. 157 first proposed that prophylactic supplementation with B. longum and Lactobacillus reduces apoptosis. They confirmed this hypothesis in rat models and postulated that this combination of probiotics exerts anti-inflammatory effects leading to inhibition of apoptosis in the limbic system 157 . In addition to these preventive effects, probiotic supplementation after ischemia-reperfusion has been shown to retain its benefits while improving depression-like behaviors after MI 155 . Later, scholars determined that B. longum plays the more beneficial role in probiotic combinations that improve depression after MI 156 . A randomized, double-blind, placebo-controlled clinical trial revealed beneficial effects of probiotic supplementation (Lactobacillus rhamnosus) on depressive symptoms, inflammation, and oxidative stress in MI patients 158 . Bifidobacteria, which have shown excellent antidepressive effects after MI in preclinical studies, may also hold up in clinical trials, although that link has yet to be established.
Heart failure (HF)
Since HF is not an independent disease, its definition remains unsettled in academia. The American Heart Association defines it as a complex clinical syndrome resulting from various cardiac structural or functional diseases that impair ventricular filling or ejection capacity 159 . Long-term HP increases cardiac load, leading to myocardial hypertrophy, remodeling, and HF 160 . Coronary atherosclerotic heart disease (coronary heart disease) results in long-term cardiac ischemia and hypoxia, gradually weakening cardiac contraction and thereby inducing HF 161 . Physiologically, MI is a risk factor for HF in coronary heart disease, because MI sharply decreases myocardial contractility and significantly reduces cardiac pumping volume. After infarction, the myocardium becomes fragile, aggravating HF development 162 . HF is usually the common endpoint of multiple CVDs 163 . The 5-year survival rate for HF patients is only 45%, indicating that new prevention and treatment strategies are urgently required to improve prognosis 164 .
Soluble tumor necrosis factor-like weak inducer of apoptosis (sTWEAK) is an independent predictor of mortality risk in CHF and non-ischemic HF, implying that it is a potential new cardiovascular biomarker [165][166][167] . Compared with healthy individuals, sTWEAK concentrations are suppressed in patients with congestive HF, coronary artery disease, and AS 167 . A triple-blind clinical trial randomly assigned 90 CHF patients to two groups receiving probiotic yogurt containing bifidobacteria or ordinary yogurt for 10 weeks 168 . Compared with ordinary yogurt, the probiotic yogurt increased serum sTWEAK levels in CHF patients 168 . The increase in serum sTWEAK may be because probiotic intake reduces inflammation, thereby downregulating Fn14, the only receptor for sTWEAK, and preventing sTWEAK from binding Fn14. In addition, a randomized triple-blind clinical trial showed that probiotic yogurt significantly reduced serum ox-LDL levels, which might be related to an increase in total antioxidant capacity through multiple pathways (production of antioxidant metabolites; modification of MAPK, NF-κB, and other pathways; and regulation of ROS-producing enzymes) 169 .
In addition to patients with HF alone, a previous clinical trial focused on gastric cancer patients with coronary heart disease and HF complications, because marked intestinal disorders are consistently observed in such populations and stimulate HF progression 170 . In these patients, probiotic capsules containing B. longum enhanced the intestinal barrier and corrected intestinal microbial imbalance as well as HF, thereby promoting rehabilitation 170 .
Several clinical trials have reported positive effects of bifidobacteria in HF patients. However, these trials used probiotic combinations containing bifidobacteria rather than individual Bifidobacterium strains, which blurs the role of bifidobacteria to some extent. Preclinical and clinical studies should be performed to elucidate the independent beneficial effects of bifidobacteria in HF patients.
Prospects
This review has elucidated the critical role of bifidobacteria in preventing CVD development. On the one hand, bifidobacteria directly protect cardiac function by maintaining intestinal barrier function and through their antioxidant and immunomodulatory effects. On the other hand, bifidobacteria can indirectly reduce the cardiac burden of obesity, HLP, and T2DM by affecting hormone secretion, regulating metabolism, assimilating cholesterol, and improving IR. In addition, bifidobacteria have found extensive application and demonstrated promising progress in various classic CVDs, such as HP, AS, MI, and HF. Indeed, some unique metabolic functions of Bifidobacterium may play a non-negligible role in cardiovascular disease, such as the metabolism of tryptophan, the production of its indole derivatives, and the regulation of peripheral serotonin, but the scarcity of evidence calls for more attention and in-depth exploration.
It is important to recognize that advances in molecular biology and genetic engineering techniques will pave the way for in-depth studies of all probiotics, including bifidobacteria. In recent years, several mutagenesis methods have been introduced in the bifidobacteria field, including homologous recombination systems with non-replicating or temperature-sensitive plasmids, random rotor-based mutagenesis systems, and inducible plasmid self-destruction-assisted systems [171][172][173][174]. Although these strategies have shortcomings, there are efforts to circumvent these problems through approaches such as the development of endogenous CRISPR-Cas systems 171. On the one hand, recent advances in molecular biology and genetic engineering have facilitated a deeper understanding of bifidobacteria-host interactions 175. For example, by analyzing the genome sequence and in vivo transcriptome of Bifidobacterium breve, O'Connell 176 identified a key site that influences its colonization of the mouse intestine and highlighted that the ability of different strains to colonize the intestine is highly correlated with their ability to utilize carbohydrates. On the other hand, genetic engineering holds profound significance for forthcoming therapeutic advancements and industrial applications involving bifidobacteria. By harnessing sophisticated genetic engineering methods, it may be possible to enhance stress resilience, engineer targeted delivery systems to combat pathogens, eradicate antibiotic resistance, achieve luciferase labeling, and even generate entirely novel transgenic bifidobacterial strains that surpass wild-type counterparts in both efficacy and safety 177, which can help develop novel therapeutic approaches and facilitate further clinical optimization.
In summary, addressing and controlling the bifidobacterial composition holds promise for advancing innovative therapeutic strategies for CVD. Manipulation of the Bifidobacterium genome via genetic engineering to create pertinent foods and medications is anticipated to pave the way for safer and more efficient utilization of Bifidobacterium, augmenting its potential as a treatment for CVD in human populations.
Fig. 2 Indirect Improvement of CVD. a Bifidobacterium can control appetite by affecting leptin transmission and gastric hunger signaling pathways; it can also reduce obesity by inhibiting disordered lipid metabolism and, in multiple ways, reducing impaired glucose metabolism in obese hosts. b Bifidobacterium can reduce the development of hyperlipidemia through cholesterol assimilation; high BSH activity increases cholesterol excretion in the feces and limits cholesterol absorption, thus promoting cholesterol conversion. c Bifidobacterium can improve glucose metabolism by targeting hepatic gluconeogenesis genes and restoring the insulin signaling pathway, and can produce SCFAs that stimulate GLP-1 secretion, regulate islet beta-cell growth, and modulate inflammation to improve insulin resistance and reduce diabetes. (The figure does not contain any third-party material; the figure and each of its elements were created by the authors.)
Table 1. The cases of bifidobacteria in CVD.
Designing a Personalized Health Dashboard: Interdisciplinary and Participatory Approach
Background: Within the Dutch Child Health Care (CHC), an online tool (360° CHILD-profile) is designed to enhance prevention and the transformation toward personalized health care. From a personalized preventive perspective, it is of fundamental importance to identify, in a timely manner, children with emerging health problems interrelated with multiple health determinants. While the digitalization of children's health data is now realized, the accessibility of these data remains a major challenge for CHC professionals, let alone for parents/youth. Therefore, the idea was initiated from CHC practice to develop a novel approach that makes relevant information accessible at a glance.
Objective: This paper describes the stepwise development of a dashboard, as an example of using a design model to achieve visualization of a comprehensive overview of theoretically structured health data.
Methods: The developmental process is based on the nested design model, with involvement of relevant stakeholders in a real-life context. This model considers immediate upstream validation within 4 cascading design levels: Domain Problem and Data Characterization, Operation and Data Type Abstraction, Visual Encoding and Interaction Design, and Algorithm Design. It also includes impact-oriented downstream validation, which can be initiated after delivering the prototype.
Results: A comprehensible 360° CHILD-profile was developed: an online accessible visualization of CHC data based on the theoretical concept of the International Classification of Functioning, Disability and Health. This dashboard provides caregivers and parents/youth with a holistic view of children's health and "entry points" for preventive, individualized health plans.
Conclusions: Describing this developmental process offers guidance on how to utilize the nested design model within a health care context.
Introduction
The Dutch Preventive Child Health Care (CHC), as part of public health, monitors children's health and their continuum of development, focusing on protecting and promoting health and providing context for optimal development. This implies preventing disease progression at the early stages of a "growing into deficit," when symptoms do not yet cluster into a diagnosis or are even absent [1]. It is not easy to redirect these complex dynamics underlying health in time. The Bio-Psycho-Social perspective on health (BPS) captures this complexity by conceptualizing health as the result of lifelong, multidimensional interactions between individual (biological-genetic) characteristics and contextual factors [2]. This makes prevention challenging, but it is crucial to effectively address the current burden of chronic diseases [3]. It is even a prerequisite that the current health care system, which is mostly reactive (ie, treatment after a diagnosis), transforms toward personalized health care (PHC) [4]. According to Snyderman, PHC includes the concepts of prevention, prediction, personalization, and participation; to fully adopt these concepts in practice, high-quality, holistic health information must be available [5,6].
The preventive CHC offers a unique platform to adopt these PHC concepts, as CHC digitally registers, from birth on, a broad spectrum of information about interrelated health determinants in the child and its environment [1,7]. Yet the holistic health information stored in the CHC's electronic medical dossier (EMD) is insufficiently accessible to effectively perform PHC. The actual data flow is time-consuming due to the inconsistent, nontheoretical structure of the EMD [8][9][10]. This makes it difficult for CHC professionals to gain a clear overview of relevant CHC data within the limited timeframe available during consultations with parents and other caregivers. Consequently, CHC professionals, let alone parents and youth, are hindered in obtaining integral insight into the interrelated health determinants in the child and its environment.
To acquire a better overview of meaningful data, indispensable for the interpretation of holistic health information, the idea was initiated from CHC practice to develop a novel approach for summarizing health data about the child and its environment in one image [2,11]. Visualization design offers efficient opportunities to make holistic health information accessible at a glance and in conformity with the relevant theoretical perspective [12,13].
The initial idea was first converted into rough drafts of the representation of CHC health information. To enable the generation of informal development ideas, the researchers presented these first drafts to parents, youth, and CHC professionals and asked for their reactions. Stakeholders' feedback during interviews (parents) and focus group meetings (professionals) was positive concerning comprehensibility, relevance, acceptability, and feasibility. A pilot study of an early version of the 360° CHILD-profile also showed positive results regarding reliability and validity when used by CHC medical doctors to assess child functioning [14].
The 360° CHILD-profile seemed a promising new tool, but further development was needed to deliver a suitable and functional dashboard, ready to be introduced to CHC practice. To realize meaningful visualization of complex health information with sufficient user satisfaction and essential performance in practice, it is important that such a developmental process is guided by appropriate design models.
The main aim of this paper is to offer guidance on how to utilize a design model to visualize and structure health data in a health care context with a heterogeneous target group. As an example, we describe the systematic development and immediate validation (as far as possible) of a comprehensible 360° CHILD-profile: an online accessible visualization of CHC data. The ultimate goal of this multifunctional tool for preventive CHC practice is to visualize the coherence between health domains in a way that guides the analytic thought processes of both care providers and parents/youth in line with the BPS perspective on health and PHC. This paper focuses on describing the overall development process of a visualization tool, offering clear, representative content generalizable to various subfields and disciplines in health care.
Process Development and Prototype
The developmental process of the 360° CHILD-profile is based on a nested design model, adapted from Munzner [15] ( Figure 1). This model describes different levels of design that are structured within 4 cascading levels that consider an immediate upstream validation (toward delivering a suitable prototype of the dashboard) as well as impact-oriented downstream validation of the prototype (toward the effective performance of the dashboard in daily CHC practice).
The prototype of the CHILD-profile is developed within a user-centered design process [16] and relevant stakeholders were involved during every level of design. For each design level, new participants were recruited. During this project, we collaborated in an interdisciplinary expert group including CHC professionals and researchers with expertise on CHC context, epidemiology, human-computer interaction, and information visualization in health care. This approach, combining expertise from the medical field with expertise on information visualization, is rather new but particularly useful in this health care context to increase the likelihood of the intended health outcome [17].
The Medical Ethics Committee of the Maastricht University Medical Centre approved this design process (METC azM/UM 17-4-083).
Before starting the first level of the nested design model, a literature research was performed with focus on theoretical models for health and background of the Dutch preventive CHC to identify the information needed for each design level.
Domain Problem and Data Characterization
On the first level, it was of vital importance to bridge the information asymmetry between relevant stakeholders, researchers, and designers to get a common understanding of user, domain, and task [18]. To achieve this while considering the privacy of the users, we first conducted role games, in which CHC consultations were re-enacted in a real-life situation with key stakeholders (CHC professionals, parents, and youth). A schematic approach (summative representation of data to make sense of complex, nuanced information and enable team-based analysis) was used to observe and interpret interpersonal interactions [19]. In the second step, interviews with participants of the role games as well as other CHC professionals were carried out to get a deeper understanding of the process and related requirements from the perspective of individual stakeholders. Role games and interviews were audio recorded. Recordings were summarized and, after discussion by a team of researchers, relevant findings were listed.
Finally, the resulting conclusions about user's perspectives were immediately validated in real-life by observing consultation hours. During the observations, field notes were taken. Based on the information collected within the previous steps, personas and empathy maps were created to visualize users' characteristics, goals, and skills, to become more aware of their real needs and to help the research group align on a deep understanding of end users [20,21].
In parallel, the relevant domain knowledge was discussed and summarized with all involved stakeholders to ensure that the involved researchers/designers share a common understanding of the underlying concepts and mechanisms. Furthermore, related work in the field and visual artifacts were discussed.
In summary, all our findings formed the domain-specific basis for the other levels ( Figure 1).
Operation and Data Type Abstraction
The focus of the second level was on mapping the underlying data in a more abstract description of operations, data types, and structure to form the input required for the visual encoding stage.
Different theoretical frameworks were explored to choose the most relevant framework for prioritizing and ordering data. The International Classification of Functioning, Disability and Health: Children and Youth version (ICF-CY) framework appeared to be the most appropriate to comprehensively and accurately describe individual health situations [22]. The classification systems ICD-11 (International Classification of Diseases, 11th revision) and DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, version 5), commonly used in health care, were also considered. However, these frameworks do not fit preventive CHC because they are based on a biomedical model of health and focus on diseases and diagnoses, not on prevention [23,24]. The ICF-CY framework was chosen because it represents the broad BPS perspective on health and adequately fits the preventive CHC. It makes it possible to display the broad variety of information on characteristics of a child and its environment collected by CHC. Strengths and protective factors, indispensable for the protection and promotion of health and the prevention of diseases, are included in the ICF-CY framework. Next, symptoms, diseases, and determinants that challenge health can be presented. And, last but not least, information is formulated in concrete and neutral, if not positive, terms with little to no valuation. The ICF-CY structure was customized to integrate it into a profile that fits CHC practice and its theoretical background.
During 2 review group meetings, the 360° CHILD-profile was presented and profile's content, terminology, and ordering were discussed with experienced CHC professionals. During the review meetings, field notes were taken and summarized and discussed to reach consensus.
For immediate validation, a static, adapted, early-on version of the 360° CHILD-profile was presented to parents and youth and semistructured interviews were performed to gain insight into user experience (comprehensibility and usability), requirements, and coverage of meaningful topics. Audio recordings of the interviews were transcribed, field notes were taken, and data were analyzed according to previous steps.
Findings were discussed in brainstorming sessions with the research team to verify coherence with scientific and practical purpose of the profile and generate developmental ideas. The resulting findings were not just limited to the data structure and detailed task definitions, but also included meaningful ordering of the information.
Visual Encoding and Interaction Design
The first 2 levels of design (Domain Problem Characterization and Operation and Data Type Abstraction) formed the primary input for the visual encoding and interaction design on a content level. The development of the formal level was based on 2 additional pillars: the consideration of international standards of human-computer interaction for information representation (ISO 9241-12) [25] as well as theoretical aspects of design based on prior research in this field [26,27] and the systematic integration of users within iterative validation and optimization cycles.
In early stages of the design process, prior findings were integrated into low-fidelity prototypes to conceptually visualize the relevant CHC data and test them with users.
A clear and accessible information structure appeared to be of vital importance to address requirements of the given scenario and a clear visual structure plays a major role in reducing the cognitive load and controlling the perceptual ordering [28]. Therefore, the design was developed based on a sectional grid system and information was structured into areas. The key areas were placed within the center (Figure 2, left) and to facilitate the understanding, key concepts were illustrated through icons in combination with text [29].
The resulting sketch was operationalized into a digital prototype, suitable for informal, qualitative tests with relevant stakeholders (CHC professionals and parents). Participants performed tasks within representative scenarios (to prepare for or to reflect upon a CHC consultation) while considering the profile in all its bearings. Participants were asked to express their first impression on the profile, line out the profile's structure, seek and interpret specific information, and indicate comprehensibility of information. A researcher guided and facilitated the participants during the sessions. To gain feedback on accessibility, comprehensibility, and usability of the 360°C HILD-profile for each user group, a "think aloud" procedure was conducted [30]. A second researcher observed the session and conducted interviews with the stakeholders. Audio recordings of the interviews were summarized, field notes were taken, and data were analyzed according to previous steps.
For this visualization, in accordance with the ICF-CY framework, it is crucial that it stimulates viewers to take all domains into account and to choose a routing from the center (child) toward the outside (environmental factors). Therefore, a gaze tracking evaluation was applied (Tobii X1 Light eye tracker, 30 Hz) to gain indirect feedback on which parts of the profile the stakeholders looked at and in which order. Results were discussed in research team meetings and eventually processed to deliver a digital application of the final version of the online accessible CHILD-profile.

Figure 2. The figures illustrate relevant steps of our iterative design process. The first phase resulted in a global composition design that accurately reflects the content structure (grid layout on the left). In a next step, the design was optimized through additional dimensions such as color and visual language elements such as icons and illustrations (prototype on the right).
Algorithm Design
The prototype was developed as a web application based on JavaScript and embedded within an HTML website to ensure an integration into real-life scenarios. Data parsing and mapping were realized through Data Driven Documents (D3) Version 4, while the interactions were implemented using jQuery, JavaScript, and CSS.
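The data parsing and mapping step that precedes the D3-based rendering can be illustrated with a small, dependency-free sketch: flat EMD records are grouped onto the four ICF-CY domains of the profile. The record fields (`domain`, `label`) and the example entries are hypothetical, not the actual EMD export format.

```javascript
// Dependency-free sketch of the parsing/grouping step that precedes the
// D3-based rendering: flat EMD records are mapped onto the four ICF-CY
// domains of the profile. Record fields and labels are hypothetical.
const ICF_DOMAINS = [
  "Body structures and functions",
  "Activities and participation",
  "Environment",
  "Personal factors",
];

// Group records by their `domain` field, keeping the profile's fixed
// domain order so every domain appears in the output, even if empty.
function groupByDomain(records) {
  const grouped = {};
  for (const domain of ICF_DOMAINS) grouped[domain] = [];
  for (const record of records) {
    if (grouped[record.domain]) grouped[record.domain].push(record);
  }
  return grouped;
}

// Illustrative records, as a data export might deliver them.
const profile = groupByDomain([
  { domain: "Environment", label: "Attends daycare" },
  { domain: "Body structures and functions", label: "Growth on track" },
]);
```

In the actual application, a structure of this kind would be handed to the D3 data join that renders each domain area of the profile.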
The technical implementation was immediately validated in 2 ways: the application was tested by analyzing computational complexity and content and optimized with Chrome DevTools (developer tools) as well as user tests with representative data samples.
Prototype and Downstream Validation
Downstream validation at the level of algorithm design and visual encoding and interaction design was immediately tackled within the described levels: application test and user tests (see the "Algorithm Design" section) and informal qualitative tests (see the "Visual Encoding and Interaction Design" section).
Downstream validation of the delivered prototype at the level of operation and data type abstraction is beyond the scope of this article. For this dimension of validation, a field study is planned to evaluate CHILD-profile's feasibility (usability and potential effectiveness) and feasibility of performing a randomized controlled trial (RCT) within the preventive CHC context [31]. This feasibility RCT aims at generating knowledge on how to build follow-up studies directed toward downstream validation at the level of domain problem and data characterization.
Domain Problem and Data Characterization
Across multiple age categories, a total of 3 role games were performed, and all involved CHC professionals (nurses, medical doctors, or both), parents, and in one case youth (age >12) were interviewed. For field validation, CHC consultation hours across multiple age categories were observed for 2 days.
Observations and interviews showed that CHC nurses mainly perform regular, protocoled tasks, and CHC medical doctors mostly explore indicated concerns and problems more in depth. An example of schematic description of a professional within the CHC context (an integration of empathy map and persona) is provided in Multimedia Appendix 1. One of the key challenges we could identify within this level was that the visual structure and interaction design of the current EMD did not sufficiently address the informational needs of the target group. During the interviews and observations, it became apparent that this leads to fundamental problems to fulfill several tasks in the given time due to an ineffective information and interaction structure. Both CHC nurses and medical doctors noted that data registration in the EMD is time-consuming and that they are hindered in quickly referring to registered data and gaining clear overview of health information. Discussion between researchers on visual artifacts revealed the lack of overview and theoretical ordering of data within the EMD. During consultations, CHC professionals pursue active participation of parents and youth but they indicated the need for visual support for communicating health information with parents. Parents indicated the importance of being able to decide for themselves and feeling free to make their own choices during the upbringing of their child. Related work regarding visual support on health communication and revealing parent's perspectives did not provide a holistic and structured display (in accordance with the ICF-CY framework) of the large and complex electronic CHC data sets [32].
Together with users, we developed a description of formal requirements for the CHILD-profile to be designed. The design of the CHILD-profile should be:
• lively and user-friendly, with a neutral, serene, and warm (fear-reducing) appearance, to create a positive experience;
• targeted at supporting communication between CHC professionals and parents/youth and at providing a comprehensible and accurate overview of health determinants in the child and its environment.
The pursued ordering effects were allocating the child a central position, visualizing the coherence between the multiple features of child and context (in accordance with the ICF-CY framework), and making complex health information tangible. Technical requirements for the application were suitability for desktop use (for visual support during consultations) and online accessibility; it should also be printable as a PDF (A4 format, to be used during house visits).
Operation and Data Type Abstraction
Content and data ordering for the CHILD-profile were based on the ICF-CY framework, resulting in 4 domains: "Body structures and functions," "Activities and participation," "Environment," and "Personal factors." The specific content of each domain was customized to Dutch CHC practice and is in accordance with CHC's professional framework and "toolbox" [33,34]. During 2 review group meetings, the CHC professionals (2 nurses and 2 medical doctors) indicated that the clear overview, the ordering of data, and the use of colors improved accessibility compared with the currently used EMD. They proposed even more emphasis on neutral (nuanced) and positive formulations. Second, as not all items are equally relevant across the continuum from age 0 to 18, the review group prioritized specific content for the different age groups (0-15 months, 15 months to 4 years, 4-9 years, 9-12 years, and 12-18 years). Consensus was reached by expert agreement, and adaptations were made to the prioritization per age category and toward more positive terminology of the data.
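The age-dependent prioritization can be sketched as follows; the five bands follow the review group's age categories, while the example items and their band assignments are purely illustrative, not the real CHC content set.

```javascript
// Hypothetical sketch of the age-dependent prioritization described above.
// The five age bands follow the review group's categories; the example
// items and their band assignments are illustrative, not the real CHC set.
const AGE_BANDS = ["0-15m", "15m-4y", "4-9y", "9-12y", "12-18y"];

// Map an age in months to one of the five bands.
function ageBand(ageInMonths) {
  if (ageInMonths < 15) return "0-15m";
  if (ageInMonths < 48) return "15m-4y"; // up to 4 years
  if (ageInMonths < 108) return "4-9y"; // up to 9 years
  if (ageInMonths < 144) return "9-12y"; // up to 12 years
  return "12-18y";
}

// Keep only the items flagged as relevant for the child's age band.
function prioritize(items, ageInMonths) {
  const band = ageBand(ageInMonths);
  return items.filter((item) => item.relevantBands.includes(band));
}

// Illustrative items (hypothetical names and band assignments).
const items = [
  { name: "Breastfeeding", relevantBands: ["0-15m"] },
  { name: "School performance", relevantBands: ["4-9y", "9-12y", "12-18y"] },
  { name: "Sleep pattern", relevantBands: AGE_BANDS }, // relevant at every age
];
```

A filter of this kind keeps the profile focused on the content the review group considered relevant for the child's current age group.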
Visual Encoding and Interaction Design
The visualization was designed while taking into account the CHC context, user experiences of prototypes, user's desires, formal and technical requirements, and the indicated options for improvement of this data visualization.
The qualitative tests of the prototypes (on average 30-minute sessions) showed that both target groups could handle the prototype well and performed most of the given tasks correctly (CHC professionals: 7 tasks of 9; parents: 6 tasks of 9). Most participants could link the different domains in which health facilitators and barriers are described. Stakeholders' feedback on the prototypes included mostly positive remarks, such as "nice to build up information during lifetime", "nice that not only risk factors but also protective factors are included in the overview", and "good to see coherence between health determinants". However, some parents made remarks such as "it is a lot of data, in the beginning it is hard to know where to start" and "it is important that formulations are clear". Participants indicated that in some CHILD-profiles they missed specific information about the child and that it is important to know where the data come from. As participants mentioned the importance of showing a timeline and a separate conclusion section to highlight critical information from the last consultation, these elements were incorporated in the final version of the CHILD-profile. Gaze-tracker output showed that all participants explored the profile starting at the center (child icon/image) and clearly distinguished the middle planes from the outer columns. Almost all domain titles were noticed, except for "Activities & Participation", and participating professionals often paid more attention to the "conclusion/advice" section than parents did.
Algorithm Design
This algorithm design phase resulted in an application which automatically transfers CHC health data registered in the EMD. The application is built independently from the existing EMD and can be connected to any application programming interface that provides the related EMD data. The dashboard offers a "front end" summary to be linked to the EMD systems and online parent portal. The final version of the visualization design is tested and operational in the browsers used in the specific context (the CHC organizations uses Chrome and Firefox).
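The independence from the existing EMD can be sketched as simple dependency injection: the profile builder accepts any function that delivers EMD records, so any application programming interface with the expected output shape can be connected. All names and the data shape below are hypothetical.

```javascript
// Sketch of decoupling the profile front end from a specific EMD system:
// the builder depends only on an injected `fetchEmd` function, so any
// API that delivers EMD records in the expected shape can be plugged in.
// All names and the data shape are hypothetical.
function buildProfile(fetchEmd, childId) {
  const records = fetchEmd(childId);
  // In the real application these records feed the visualization;
  // here we simply count records per domain as a stand-in.
  const counts = {};
  for (const record of records) {
    counts[record.domain] = (counts[record.domain] || 0) + 1;
  }
  return counts;
}

// A stub "API" standing in for any EMD back end.
function stubFetchEmd(childId) {
  return [
    { domain: "Environment", label: "Attends daycare" },
    { domain: "Environment", label: "Two-parent household" },
    { domain: "Personal factors", label: "Sporty, outgoing" },
  ];
}

const domainCounts = buildProfile(stubFetchEmd, "demo-child");
```

Swapping `stubFetchEmd` for a real API client is the only change needed to connect the dashboard to a different EMD system, which mirrors the "front end summary linked to the EMD systems" described above.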
Prototype and Downstream Validation
So far, the described procedure has resulted in a comprehensible 360° CHILD-profile, usable on computers and mobile devices (laptop or tablet) and printable for home visits. This visualization of CHC data at a glance has been validated for impact at the levels of algorithm design and visual encoding and interaction design and is ready to be introduced to CHC practice. A field study focused on downstream validation at the level of operation and data type abstraction is beyond the scope of this article; it will be presented separately in the feasibility RCT's protocol and results papers, which include quantitative and qualitative research.
Overview
This paper describes the stepwise development of a new dashboard, which combines visualization and theoretical ordering of health data based on the ICF-CY framework, to offer guidance on how to use the nested design model to achieve visualization of a comprehensive overview at a glance.
In this example, the practical implementation of the ICF-CY framework to summarize electronic health records is intended to display coherence between different health domains. The goal is to facilitate analytic thought processes during shared decision making toward preventive, individualized health plans directed at promoting health [26,32].
The CHILD-profile is designed to optimally display a holistic overview of data from electronic health records in line with the ICF-CY framework and enables considering multiple perspectives on child's development and health. Within the ongoing project, the dashboard itself was evaluated while taking into account several perspectives.
Strengths and Limitations
This project shows us which opportunities can arise from bringing together expertise/experience from the medical and information visualization/human-computer interaction field of knowledge. This collaboration, not yet common within health care, leads to synergy and optimal ground for realizing meaningful visualization of complex health information and sufficient adoption rate and essential performance in practice.
Additionally, the choice for a user-centered design approach, with active involvement of relevant stakeholders in every design level, increases the likelihood of usability within CHC practice and reaching the intended goals [17].
The currently experienced problems with EMD concerning accessibility of health data are avoided in this new information technology by considering international standards of human-computer interaction for information representation (ISO 9241-12 [25]) as well as theoretical aspects of design based on prior research in this field [26,27]. The nested design model is especially suitable for the context of data visualization within health care as it offers a holistic perspective on the design process [15]. For each level of design, evaluation during development (upstream validation) and after finishing the data-visualization design (downstream validation) is included. By integrating these design and evaluation methods, knowledge is generated on how to deliver a solid visualization with performance as intended as well as on how to measure actual effectiveness in practice and interpret the findings during implementation. However, it is important to note that the nested design model offers researchers a framework for structuring the design process on a rather abstract level. For each specific visualization, the choice for design and evaluation methods and the operationalization should be customized to the content and aim of the visualization and the context in which it will be implemented.
As we can only understand how people use a new tool when it exists, we could only partly tackle downstream validation within this project. Early versions of the dashboard and prototype are technically tested and qualitative tests are performed with rather limited study populations. To complete downstream validation process, studies with higher numbers of participants must be performed to reach sufficient power to evaluate if the innovation contributes to experienced needs in practice and leads to the intended health outcomes.
Opportunities and Challenges
By utilizing the ICF-CY as a framework for ordering health data, professionals are provided with an interactional structure for aggregating details of an individual's unique health reality across several dimensions. This structure makes it possible to comprehensibly display not only the multidimensionality of health but also the coherence between different health domains. Therefore, we hypothesize that the use of this new dashboard in CHC practice can:
• support the identification of strengths, challenges, needs, goals, and "entry points" for health management;
• automatically guide (mostly subconsciously) "thinking processes" toward a more predictive, personalized, and participative approach to health;
• improve health literacy and facilitate shared decision making.
The modern information technologies, used to deliver a functional profile, allow greater direct access to health information for parents and youth (during visits and at home via online portal). By providing parents/youth insight into health facilitators and barriers, we think they will be empowered to take a more proactive, leading role during decision-making processes and make preventive health plans fit their context.
To study usability, adoption rate, and performance (regarding the intended goals) in practice, a field study and other follow-up studies need to be performed with sufficient power. To complete the validation process, it is important to measure ordering effects, visual salience, and bias effects, considering variables such as educational background. It is, however, a challenge to perform effect studies with sufficient sample sizes within the multidisciplinary and heterogeneous context of preventive CHC. Therefore, the first study to be performed will be a pragmatic feasibility RCT, in which the feasibility of both the CHILD-profile and the RCT design itself will be evaluated. The RCT protocol and results will be published in separate articles [31]. Results of this field study will underpin the requirements necessary for successful follow-up effect studies with sufficient power.
After completion of downstream validation and effective implementation of this new tool in CHC, we anticipate that using the CHILD-profile within CHC will stimulate more complete and uniform data registration. This would make available standardized and theoretically structured health data (in accordance with the ICF-CY framework), which are better suited for epidemiological research and future possibilities such as automatic transformation into internationally standardized ICF codes.
Conclusions
This work is an important step toward bridging the information asymmetry between electronic health data, physicians, and patients and clients in general.
We propose the nested design model as a method to structure the design process while considering validation cycles for each level of design, both immediately during the process and impact-oriented validation after implementation, considering the effects of individual aspects on performance in practice.
We provide guidance on how to utilize the design model in a health context based on a concrete example and specific guidelines on how to address heterogeneous capabilities within preventive CHC through visual means and interaction design.
In our design study we developed a working prototype of a comprehensible 360° CHILD-profile on which CHC data are visualized at a glance. The application automatically converts CHC health data, already registered in the EMD, into a visualization which represents the continuum-based context of children's health and development.
Prioritized Multi-agent Path Finding for Differential Drive Robots
Methods for centralized planning of collision-free trajectories for a fleet of mobile robots typically solve a discretized version of the problem and rely on numerous simplifying assumptions, e.g. moves of uniform duration, cardinal-only translations, equal speed and size of the robots, etc.; thus the resultant plans cannot always be directly executed by real robotic systems. To mitigate this issue we suggest a set of modifications to the prominent prioritized planner, AA-SIPP(m), aimed at lifting the most restrictive assumptions (synchronized translation-only moves, equal size and speed of the robots) and at providing robustness to the solutions. We evaluate the suggested algorithm in simulation and on differential drive robots in a typical lab environment (indoor polygon with an external video-based navigation system). The results of the evaluation provide clear evidence that the algorithm scales well to large numbers of robots (up to hundreds in simulation) and is able to produce solutions that are safely executed by robots prone to imperfect trajectory following. The video of the experiments can be found at https://youtu.be/Fer_irn4BG0.
I. INTRODUCTION
The problem of finding feasible, collision-free trajectories for multiple robots navigating in a shared environment is a challenging one that lacks a general efficient solution. Essentially, two approaches to multi-robot navigation are common. One is to adjust the velocity profiles of the robots in a reactive fashion, taking into account current observations. Methods implementing this approach, e.g. ORCA [1], typically scale well to large numbers of robots but cannot guarantee that each robot reaches its goal. Another approach is to plan the collision-free trajectories beforehand, assuming that robots will execute them precisely or within some given tolerance that has been accounted for (see works [2]-[5] etc.). In this work we adopt the planning approach.
Multi-robot planners typically solve a discretized version of the problem, i.e. multi-robot path planning on graphs [6], also known as multi-agent path finding (MAPF) [7]. They often provide guarantees on completeness and optimality (or bounded sub-optimality) of the solutions (w.r.t. the discretization). Unfortunately, most MAPF planners, like the ones presented in [8]-[11], rely heavily on numerous simplifying assumptions, e.g. neglecting agents' size, assuming all agents move synchronously with the same speed, etc. To make the MAPF solutions applicable to real robots one can post-process them as proposed in [12] or can modify the planning algorithm itself in a way that lifts as many constraints as possible [13]-[17] and/or produces robust solutions that are more likely to be executed safely [18]-[20].

1 Konstantin Yakovlev is with the Federal Research Center for Computer Science and Control of RAS and with the Higher School of Economics. 2 Anton Andreychuk is with the Peoples' Friendship University of Russia (RUDN University). 3 Vitaly Vorobyev is with the Kurchatov Institute.

This is a preprint of the paper accepted to ECMR 2019: https://ieeexplore.ieee.org/document/8870957
Following this line of research we suggest a prioritized multi-agent planner particularly suited for differential drive robots that have the shape of (or can be modelled as) disks. It builds upon our previous work on any-angle safe interval path planning [21] and, to the best of our knowledge, it is the first multi-agent planner that in practice i) does not restrict robots' moves to synchronized translations; ii) allows moves of arbitrary durations (i.e. durations that are not strictly tied to a preliminarily discretized timeline); iii) allows planning for robots with different sizes; iv) does not require the moving speed to be the same; v) takes rotation actions into account when planning. It also preemptively tries to minimize the risk of collisions occurring due to imperfect execution. On top of that, to increase the chance of finding a solution and to decrease its cost, as it is known that prioritized planning is not complete/optimal in general, we incorporate such techniques as deterministic re-scheduling and start safe intervals into the algorithm.
The proposed algorithm is extensively tested in simulation and on real robots. Results of the evaluation provide strong evidence that the algorithm scales well to large numbers of robots (up to hundreds in simulation) and that the solutions it produces can be safely executed by robots with imperfect localization and trajectory following.
II. PROBLEM STATEMENT
Consider n robots populating a workspace that is tessellated into a grid (see Fig. 1b). Each robot can translate, rotate, or wait in place. Waiting and rotating are allowed only at the center of a grid cell, and initially all robots are waiting at their start cells. When moving, inertial effects are neglected and the speed is fixed for a move (but can vary from one move to another). The safety zone of a robot (or the robot itself) is modeled as an open disk of radius r^(i).
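As a minimal sketch of this disk model, a configuration can be checked against static obstacles represented as a set of sampled points (representing obstacles as points is an assumption made here for illustration; the paper works with grid cells):

```python
def config_valid(x, y, r, obstacle_points):
    """Valid w.r.t. static obstacles iff the distance from (x, y) to the
    closest obstacle point is greater than or equal to the radius r."""
    return all((x - ox) ** 2 + (y - oy) ** 2 >= r * r
               for ox, oy in obstacle_points)
```

Comparing squared distances avoids a square root per obstacle point.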
A state (configuration) of a robot is a tuple (x, y, θ), where x, y are the spatial coordinates and θ is the heading angle. A configuration is valid w.r.t. static obstacles if the distance between (x, y) and the closest point of an obstacle is greater than or equal to r^(i). The trajectory of robot i is a mapping from time moments to configurations, Tr^(i) : [0, C^(i)] → configurations, such that Tr^(i)(0) = start and Tr^(i)(C^(i)) = goal. Here start = (x_s, y_s, θ_s) and goal = (x_g, y_g, θ_g) are the initial and the goal states, and C^(i) is the cost (duration) of the trajectory. Two trajectories are said to be collision-free if the robots following them never collide. The problem is to find n trajectories for the robots, s.t. each pair of them is collision-free. The cost of the solution might be either the makespan, that is the maximum over the costs of individual trajectories, or the flowtime, that is the sum of costs. In this work optimal solutions are not targeted, but low-cost solutions are, obviously, preferable.
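The two solution-cost measures can be sketched directly from the individual trajectory costs C^(i):

```python
def makespan(costs):
    """Solution cost as the maximum individual trajectory duration."""
    return max(costs)

def flowtime(costs):
    """Solution cost as the sum of individual trajectory durations."""
    return sum(costs)
```

For example, for costs [4.0, 7.5, 6.0] the makespan is 7.5 and the flowtime is 17.5.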
A. Prioritized planning
Prioritized planning is a well-known approach [22] in which each robot is assigned a unique priority and individual trajectories are then planned sequentially in accordance with the imposed ordering. Prioritized planning is complete in case the individual planner avoids start locations of the lower-priority robots and the instance satisfies certain conditions [3]. In some cases prioritized planning is also optimal [23]. In general, though, it is neither optimal nor complete. One of the approaches to mitigate this issue is to re-plan with another priority ordering in case of failure. In [24] random reshuffling was proposed; in [23] a deterministic algorithm for systematic exploration of priority orderings was suggested. In this work we adopt the heuristic algorithm proposed in [25] for re-assigning priorities. In case of failure it sets the priority of the failed robot to maximum and re-plans. This approach is fast and easy to implement, and in practice it significantly raises the chances of finding a solution and outperforms random re-ordering.
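The deterministic re-scheduling scheme described above can be sketched as follows (a simplified illustration; `plan_one` stands for the individual planner and is an assumed interface, not the paper's API):

```python
def prioritized_plan(robots, plan_one, max_attempts=None):
    """Plan sequentially in priority order; on failure, raise the failed
    robot's priority to maximum and restart (deterministic re-scheduling)."""
    order = list(robots)
    attempts = len(robots) if max_attempts is None else max_attempts
    for _ in range(attempts):
        plans, failed = [], None
        for robot in order:
            plan = plan_one(robot, plans)  # must avoid higher-priority plans
            if plan is None:
                failed = robot
                break
            plans.append(plan)
        if failed is None:
            return order, plans            # success: ordering and trajectories
        order.remove(failed)
        order.insert(0, failed)            # failed robot gets top priority
    return None, None                      # give up after the attempt budget
```

Bounding the number of attempts is necessary because, in general, no ordering may admit a solution.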
Another enhancement for prioritized planning we implemented is utilizing start safe intervals [25] (SSIs). Planning with SSIs means that the start locations of the low-priority robots are considered blocked for a predefined amount of time and un-blocked afterwards. Such an ad-hoc technique contributes to increasing the chance of finding a solution, thus decreasing the number of re-planning attempts and lowering the runtime.
For individual planning we use the enhanced AA-SIPP algorithm. The original planner [21] assumed that all agents are of equal radii and move with the same speed, and planning was conducted for translations (and waits) only. We lift these assumptions, i.e. we plan for translations and rotations (and waits), assume that different robots might have different rotation/translation speeds, and take them into account when planning. We also perform the computation of the so-called earliest arrival time in a more straightforward fashion compared to the verbose algorithm described in [21]. As a result we end up with a more versatile and easy-to-implement version of AA-SIPP.
The source code of the resultant planner is open and available at https://github.com/PathPlanning/AA-SIPP-m.
B. Individual planner: enhanced AA-SIPP 1) High-level overview: AA-SIPP stands for any-angle safe interval path planning. It is a heuristic search planner that groups contiguous, collision-free time points for each element of the configuration space into intervals and uses them to define the nodes of the search space [26]. Utilizing intervals reduces the search effort, as for each configuration only a limited number of search nodes, proportional to how many dynamic obstacles hit this configuration, might be generated, while a conventional discrete planner might generate one node per time step in [1, ..., T] for a single configuration, where T is the time horizon that might be very large.
To illustrate the idea of interval planning, suppose that the robot cur needs to move to the cell (3, 2) as shown in Fig. 2, and it is known that two high-priority robots pass nearby and hit the cell. In this case 3 safe intervals for a configuration (3, 2, θ) should be considered, i.e. the robot can reach (and stay in) the cell before the first obstacle hits it, or between the moment the first obstacle moves away and the moment the second obstacle hits the configuration, or after the second obstacle moves away. SIPP-based planners, e.g. AA-SIPP, consider all three possibilities by trying to generate 3 search nodes corresponding to this configuration. Within each safe interval a paradigm of "reach the configuration as early as possible" is adopted. Such an approach coupled with the A* search strategy guarantees finding an optimal solution (w.r.t. the discretization), i.e. the trajectory that avoids all the static and dynamic obstacles and minimizes time. The detailed code of the algorithm can be found in [21].
2) Computing intervals: AA-SIPP search nodes are identified by tuples s = [cfg, interval]. cfg = (x, y, θ) accounts for the robot's position and heading. interval = [t_s, t_f] is a contiguous period of time for a configuration during which there is no collision, while there is a collision one time point prior to and one time point after the period.
When a planner considers a move to a cell it should attempt generating k successors corresponding to k distinct safe intervals. Figure 2 illustrates how safe intervals are computed. First, we draw circumferences centered at that cell of radii equal to r^(cur) + r^(i), where r^(cur) is the safety radius of the current robot (the one we are planning for) and r^(i) are the radii of the moving obstacles (high-priority robots) that pass nearby. Then, using conventional formulas of geometry, we compute the coordinates of the points at which the circumferences intersect the corresponding path segments. Knowing these points, as well as the obstacle trajectories, we may now compute the collision intervals and invert them to get the safe intervals we are interested in. Now the planner can attempt to generate the successors (one per safe interval). Whether an attempt succeeds depends on whether the move to the target cell is valid w.r.t. static and dynamic obstacles.
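Once the collision intervals for a configuration are known, inverting them into safe intervals is straightforward; a minimal sketch:

```python
def safe_intervals(collision_intervals, horizon=float('inf')):
    """Invert collision intervals [(t_hit, t_leave), ...] into the
    complementary safe intervals within [0, horizon]."""
    safe, t = [], 0.0
    for hit, leave in sorted(collision_intervals):
        if hit > t:
            safe.append((t, hit))      # gap before this collision is safe
        t = max(t, leave)              # handles overlapping collisions
    if t < horizon:
        safe.append((t, horizon))      # tail after the last collision
    return safe
```

Interval endpoints are treated here as closed for simplicity; whether boundaries count as safe is a modelling choice not fixed by this sketch.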
3) Estimating the feasibility of the move w.r.t. static obstacles: A naive approach to estimating the feasibility of a move between grid cells is to a) assume that the robot fits inside a cell, i.e. r^(cur) ≤ 0.5l, where l is the size of the cell, and b) constrain the robot to move in the 4 cardinal directions only. In that case one can simply check whether the target cell is traversable. We wish not to restrict the agent's size, e.g. to be able to handle agents that are bigger than the grid cells, and to handle moves between arbitrary grid cells. To do so we developed an original procedure for estimating the feasibility of the move w.r.t. static obstacles.
The idea behind the procedure is to identify which cells are hit by the robot moving along the line connecting the move's endpoints and to check their traversability. This is done similarly to how algorithms from computer graphics, e.g. the Bresenham algorithm [27], identify pixels that lie along the straight line between two fixed points, see Fig. 3. We iteratively process the columns of the grid and for each column compute how many cells residing up/down the line to check (these cells are marked with "+"). We additionally process the endpoints of the move to identify which cells lie inside the circumference of the given radius (these cells are marked with "#") and check them as well.
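A conservative sketch of identifying the cells hit by a translation, here by dense sampling along the segment (an assumption made for illustration; the paper's column-wise sweep, or an exact supercover traversal, is preferable in practice since sampling can miss cells touched only at a corner):

```python
import math

def cells_on_segment(x0, y0, x1, y1, step=0.1):
    """Cells of a unit grid touched by the segment (x0, y0)-(x1, y1),
    found by sampling the segment at a spacing of at most `step`."""
    n = max(1, int(math.ceil(math.hypot(x1 - x0, y1 - y0) / step)))
    cells = set()
    for i in range(n + 1):
        t = i / n
        cells.add((math.floor(x0 + t * (x1 - x0)),
                   math.floor(y0 + t * (y1 - y0))))
    return cells
```

Each returned cell would then be tested for traversability against the static map.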
Instead of estimating the feasibility of the move w.r.t. static obstacles in the described fashion, one might think of a more conventional approach: enlarge the obstacles by the radius of the agent and treat the agent as a moving point. The problem with that approach is two-fold. First, we would need to make such a transformation for each agent of a different size, thus we would end up storing and operating with multiple workspaces. Second, as the disk-robots can be of arbitrary radius, e.g. r = 0.6 cells, we may end up marking some cells as blocked only because they partially overlap the no-way zone. Thus, we may fail to find a path although it exists (because some enlarged obstacles have merged).

Fig. 3. Estimating the feasibility of the translation move for a robot of arbitrary radius.
4) Estimating the feasibility of the move w.r.t. dynamic obstacles and computing the earliest arrival time: If the translation to the destination cell is feasible, it must be performed as soon as possible, following SIPP's paradigm of reaching each node at the earliest possible time. The problem is that an immediate translation might lead to a collision with a high-priority robot passing nearby or waiting/rotating at a cell that lies on the way. We treat all cases uniformly by considering wait/rotate moves as translations with zero velocity. The only exception is when a high-priority robot is waiting at its goal position. In this case the move for the current robot cannot be performed at all and the corresponding successor is discarded.
To detect a collision between two translating disks (one corresponding to the current robot, another to the high-priority one) we rely on the closed-form formula from [28], which gives a yes/no answer to the collision query (it also computes the time to collision, but we do not use it for planning purposes). This formula takes the disks' radii and translation velocities as arguments, so we are not restricted to predefined move speeds and agent sizes anymore.
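The yes/no collision query for two disks translating at constant velocities reduces to a quadratic in time; below is a sketch of the standard closed-form test (the exact formula of [28] may differ in details):

```python
import math

def disks_collide(p1, v1, r1, p2, v2, r2, T):
    """Check whether two disks, translating at constant velocities over
    [0, T], ever get closer than r1 + r2 (standard quadratic test)."""
    px, py = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    R = r1 + r2
    c = px * px + py * py - R * R
    if c < 0:
        return True                          # already overlapping at t = 0
    a = vx * vx + vy * vy
    b = px * vx + py * vy                    # < 0 iff disks approach each other
    if a == 0 or b >= 0:
        return False                         # no relative motion / moving apart
    disc = b * b - a * c
    if disc < 0:
        return False                         # closest approach stays above R
    t_hit = (-b - math.sqrt(disc)) / a       # first time distance equals R
    return 0.0 <= t_hit <= T
```

The test solves |p_rel + v_rel·t|² = (r1 + r2)² and checks whether the earlier root falls within the move's duration.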
If a collision occurs we increment the duration of the wait action preceding the translation by some predefined value δ and repeat. In such a way we find the time when the current robot can safely start moving. This time moment is guaranteed to exist, as the dynamic obstacle will sooner or later move out of the way. The earliest arrival time is now computed based on the time spent waiting and translating. Finally, the successor is generated in case two conditions hold: 1) the departure time belongs to the safe interval of the source node; 2) the arrival time belongs to the safe interval of the target node.
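The wait-increment scheme for finding a safe departure time can be sketched as a simple loop (`move_safe` is an assumed predicate wrapping the disk-disk collision test for a given departure time):

```python
def earliest_departure(t_min, t_max, move_safe, delta=0.1):
    """Advance the departure time in steps of delta until the translation
    becomes collision-free, or the safe interval [t_min, t_max] is exhausted."""
    t = t_min
    while t <= t_max:
        if move_safe(t):
            return t
        t += delta
    return None
```

Bounding the search by the source node's safe interval ensures the loop terminates even if no safe departure exists within it.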
The proposed approach to compute the earliest arrival time is straightforward and can be easily implemented compared to the approach originally introduced in [21].
5) Handling rotations:
Commonly, multi-agent path finding solvers assume that an agent does not need to rotate before translating. In the studied domain this assumption does not hold. Fortunately, as we do not discretize the timeline into timesteps, we can naturally plan for rotations of any duration and by any angle needed, by simply assuming that before translating the robot spends |θ' − θ|/ω time units rotating, where ω is the rotation speed, θ is the current heading and θ' is the desired heading. As said before, waits and rotations are treated uniformly when checking for collisions, so, from the collision-avoidance perspective, a rotation is equivalent to a wait action. Thus, rotation actions of arbitrary duration can be seamlessly embedded into the suggested planning framework, reaffirming its versatility.
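The rotation-time term can be sketched with angle wrap-around handled explicitly (taking the shorter rotation direction is an assumption made here; the paper does not specify it):

```python
def rotation_time(theta, theta_new, omega):
    """Time to rotate from heading theta to theta_new (degrees) at angular
    speed omega (degrees per time unit), along the shorter direction."""
    d = abs(theta_new - theta) % 360.0
    return min(d, 360.0 - d) / omega
```

For example, turning from 350° to 10° covers 20° rather than 340°.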
C. Increasing robustness
When it comes to real robots one cannot expect perfect execution, which might lead to collisions even though the plan is valid. To increase the robustness of the generated solutions we suggest two approaches. First, one can inflate the robots' size, thus introducing an extra safety zone around them. We can do so by virtue of the proposed collision-checking routines, which are not tailored to a specific size. Second, one can add an additional wait of some arbitrary duration, say d, before any translation move when planning. At the execution phase, in case a robot fails to arrive at a waypoint on time, it has to wait before the next move not d but d − delay time points, where delay is the amount of time the robot is late. Thus, chances are each translation move actually starts on time and the path-following error is compensated (at least partially). We evaluated both suggested approaches on real robots and they showed convincing results. More sophisticated approaches, e.g. the one described in [18], might also be realized within the suggested framework.
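The delay compensation at execution time amounts to shortening the planned wait; a one-line sketch:

```python
def compensated_wait(planned_wait, delay):
    """A robot that is `delay` time units late shortens its planned
    pre-translation wait so the translation itself still starts on time."""
    return max(0.0, planned_wait - delay)
```

Clamping at zero covers the case where the delay already exceeds the planned wait.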
IV. EXPERIMENTAL EVALUATION

A. Simulated experiments
By conducting experiments in simulation we pursued two aims. First, we wanted to assess how the suggested algorithm scales to large numbers of robots. Second, we wanted to compare it to direct competitors. Unfortunately the second aim is hard to achieve as, to the best of the authors' knowledge, at the moment of writing this paper there existed no other centralized multi-agent path finding solver that simultaneously (and without pre- or post-processing) handles rotations and translations in arbitrary directions, supports actions not tied to a discrete timeline, and supports varying robots' sizes and speeds. At the same time, the prominent decentralized algorithm, ORCA [1], supports (almost) all of those features (ORCA does not support rotate actions explicitly but rather adjusts robots' velocity at a high rate, which can be considered analogous to rotation). Thus, we chose it for the comparison. The source code of both algorithms is publicly available. Video of the selected experiments can be found at https://youtu.be/Fer_irn4BG0.

1) Empty hall: 100 meta-instances accommodating 300 robots were generated on an empty 64 × 64 grid. Three types of robots were involved. Robots of the first type are small and fast: their radius equals 0.3 cells and their translation speed is 1.5 cells per time unit. Second-type robots translate with a speed of 1 cell per time unit and their radius is 0.5. Robots of the third type are large and slow: their radius equals 1.0 and their translation speed is 0.5. Rotation speed is 180° per time unit for all robots. Each meta-instance contained 100 robots of each type. Start and goal locations and headings were chosen randomly. During the experimental evaluation we transformed each meta-instance into instances with a specific number of robots. We started with 5 robots of each type, i.e. 15 robots on the map, and then gradually increased the number up to 300 robots. The time limit was set to 1 minute.
If the algorithm was not able to produce a solution within the allotted time, we stopped it and counted the run as a failure.
The following parameters were used for AA-SIPP(m). Robots were assigned initial priorities based on the Euclidean distance between start and goal: the lower the distance, the higher the priority. The value of the start safe interval was set to 3. We used deterministic re-scheduling in case of failure by raising the priority of the failed robot. For ORCA we set the time boundary to 10, the sight radius to 15, and the maximum number of neighbors to 15. These values were chosen empirically based on a preliminary evaluation of AA-SIPP(m) and ORCA.
The resultant metrics are shown in Fig. 4. We average across the instances that were successfully solved by both algorithms. No results for AA-SIPP(m) for 285 and 300 agents are given due to the low success rate.
Overall, AA-SIPP(m) managed to solve all the instances with up to 210 robots, while ORCA eventually failed even when 45 robots were involved. Densely populated environments posed a problem for AA-SIPP(m). When the number of robots exceeded 240, AA-SIPP(m) almost always required re-scheduling and often ran out of time. As a result AA-SIPP(m) solved less than 10% of the instances with 285 or 300 agents, while the success rate of ORCA dropped to about 50-60%.
AA-SIPP(m) always outperforms ORCA in terms of flowtime and almost always in terms of makespan. When the number of robots is low ORCA's flowtime is 10-15% worse. As the number of agents grows up and reaches 150, the difference becomes more than 50%. In terms of makespan the difference is not so significant and is about 4% on average.
2) Non-empty hall: To evaluate the algorithms' performance in non-empty environments we added 10 static obstacles to the map. Each obstacle was a rectangle formed of 20 × 2 cells. To let ORCA avoid them we used the code proposed by the algorithm's authors, which builds a visibility graph and finds a reference path for each robot on this graph using Dijkstra's algorithm.
The results are presented in Fig. 5. In contrast to the previous tests, the success rate of ORCA is not 100% even for 15 agents. In terms of solution cost AA-SIPP(m) shows much better results than ORCA. When the number of robots is low their results are rather close, but when the number of agents exceeds 60 the difference becomes significant and reaches about 2x in flowtime and almost 40% in makespan.

Fig. 4. Results on the 64x64 empty grid. OX-axis is the number of agents (on all charts). Success rate is in percent, runtime in seconds, flowtime/makespan in time units.
3) Summary: Principally, AA-SIPP(m) scales well to large number of robots in simulation. When static obstacles are present AA-SIPP(m) significantly outperforms ORCA no matter how many robots are involved. Success rate is higher and the flowtime/makespan is notably lower. The only advantage of ORCA is lower runtime, which is predictable as it is a reactive navigation algorithm based on a rather simple collision-avoidance strategy compared to deliberative planning via heuristic search performed by AA-SIPP(m). When there are no obstacles and the number of robots is moderate AA-SIPP(m) solves more instances than ORCA and provides solutions of better quality. The only case when ORCA can be considered preferable is when the map is empty and the number of robots is very high (more than 285 in our case).
B. Evaluation on the wheeled robots
We conducted experiments with 6 identical differential drive robots depicted in Fig. 1a. Each robot is 21x21 cm in size and is able to move with a maximum speed of 10 cm per second (the speed we used for planning). Rotation speed is 24° per second. Each robot is equipped with a colored marker, which is tracked by an external vision-based navigation system composed of 6 web-cameras, and with an APC220 radio module to communicate with the central computing station. The latter is a PC laptop that runs Ubuntu and ROS. We implemented ROS modules for i) retrieving, filtering, and processing the video stream from the cameras; ii) localizing the robots, i.e. computing the (x, y, θ) state of each robot; iii) communicating with the robots, i.e. sending them the next action of the plan to execute. Action execution is performed locally, i.e. using the controllers installed on the robots. A PID regulator is used to maintain near-constant translation and rotation velocities.
The polygon is depicted in Fig. 1c. It is a 6 x 4.8 m bounded rectangle containing 3 obstacles. This polygon was represented as a 108 × 72 grid (cell size was 5 cm). For planning purposes the robots were modelled as disks. The minimum radius we used was 15 cm, which corresponds to an almost-zero safety zone for a robot, as when it rotates it describes a circle of radius 14.8 cm.
Before evaluating how well AA-SIPP(m) plans are executed by the robots, we ran a preliminary series of experiments with only one robot involved, aimed at estimating the accuracy of trajectory execution. We executed 2 different trajectories 20 times each and tracked the position of the robot to compare the executed trajectory against the planned one. We discarded the heading and compared only the (x, y) components of the trajectories. The average RMSE turned out to be 30.65 cm, i.e. 1.45 times the robot's actual size.
The error is quite large, so we conducted the AA-SIPP(m) evaluation on 6 robots, setting the radius of the disk modelling each robot to 15, 25, and 35 cm. First, we planned without adding waits, then added 5 s waits before each translation (and used these waits to compensate for delays in reaching the waypoints). 10 different solvable instances were generated. Thus, in total 10 × 3 × 2 = 60 experimental runs on real robots were made.
The results are depicted in Fig. 6. Executing trajectories that were planned without inflating the safety zone and without utilizing the wait-augmentation technique led to collisions in 100% of cases. Inflating the safety zone and using wait augmentation made the produced solutions much more robust and suitable for execution by the robots. In fact, 100% of the plans were safely executed when the radius was 35 cm and 5 s waits were added before each translation. The price one has to pay for such robustness is a 2x increase in flowtime/makespan when compared to the ideal plans, i.e. the ones that were constructed without waits for 15 cm disks (see Fig. 6 on the right).

Fig. 6. Left: Percent of safely executed instances when planning for different robots' sizes and with/without the wait-augmentation technique. Right: flowtime/makespan overhead compared to the ideal plan, i.e. the one obtained for radius = 15 cm and without wait augmentation.
In general, the suggested planner proved to be a flexible and versatile tool in practice. After appropriate tuning it was capable of providing robust solutions that were safely executed by real robots in the absence of perfect localization and path following.
V. CONCLUSIONS AND FUTURE WORK
In this work we suggested an enhanced multi-agent path finding algorithm based on prioritization and safe interval path planning that lifts numerous assumptions characteristic of algorithms of this kind. The resultant planner supports varying robots' sizes, translation/rotation speeds, non-fixed move durations, etc., and is particularly suitable for differential drive wheeled robots. One direction of future research is increasing the computational efficiency of the algorithm; another is applying the proposed techniques to planners that do not rely on prioritization and guarantee completeness/optimality, e.g. CBS-based planners.
Effects of Prolonged Medical Fasting during an Inpatient, Multimodal, Nature-Based Treatment on Pain, Physical Function, and Psychometric Parameters in Patients with Fibromyalgia: An Observational Study
Fibromyalgia syndrome (FMS) is a common chronic pain disorder and often occurs as a concomitant disease in rheumatological diseases. Managing FMS takes a complex approach and often involves various non-pharmacological therapies. Fasting interventions have not been in the focus of research until recently, but preliminary data have shown effects on short- and medium-term pain as well as on physical and psychosomatic outcomes in different chronic pain disorders. This single-arm observational study investigated the effects of prolonged fasting (3–12 days, <600 kcal/d) embedded in a multimodal treatment setting on inpatients with FMS. Patients who were treated at the Department of Internal Medicine and Nature-Based Therapies of the Immanuel Hospital Berlin, Germany, between 02/2018 and 12/2020 answered questionnaires at hospital admission (V0) and discharge (V1), and then again three (V2), six (V3), and 12 (V4) months later. Selected routine blood and anthropometric parameters were also assessed during the inpatient stay. A total of 176 patients with FMS were included in the study. The Fibromyalgia Impact Questionnaire (FIQ) total score dropped by 13.7 ± 13.9 (p < 0.001) by V1, suggesting an improvement in subjective disease impact. Pain (NRS: reduction by 1.1 ± 2.5 in V1, p < 0.001) and quality of life (WHO-5: +4.9 ± 12.3 in V1, p < 0.001) improved, with a sustainable effect across follow-up visits. In contrast, mindfulness (MAAS: +0.3 ± 0.7 in V1, p < 0.001), anxiety (HADS-A: reduction by 2.9 ± 3.5 in V1, p < 0.0001), and depression (HADS-D: reduction by 2.7 ± 3.0 in V1, p < 0.0001) improved during inpatient treatment, without longer-lasting effects thereafter. During the study period, no serious adverse events were reported. The results suggest that patients with FMS can profit from a prolonged therapeutic fasting intervention integrated into a complex multimodal inpatient treatment in terms of quality of life, pain, and disease-specific functional parameters. 
ClinicalTrials.gov Identifier: NCT03785197.
Introduction
Fibromyalgia syndrome (FMS) is a musculoskeletal pain disorder which is often accompanied by fatigue, gastrointestinal symptomatology, poor sleep quality, and various other physical and psychological comorbidities [1][2][3]. Its prevalence is estimated at between 1.8% and up to 8% of the world's population, with females being predominantly affected [2][3][4].

The complexity of FMS is reflected in its diagnostic challenges. No radiological or laboratory markers can yet confirm its presence, and clinical symptomatology can fluctuate considerably or differ substantially between individuals. Diagnosis is usually based on pain scales and a history of pain in at least four of five body regions persisting for at least three months. It is not a diagnosis of exclusion, but can exist alongside other conditions, such as rheumatic diseases [4,5].

No specific causative factors have been identified or agreed upon yet, but FMS patients seem to show alterations in pain processing within the nervous system itself [4]. Further associations have been found with biosocial stressors and trauma, especially a history of physical abuse [6]. FMS is often accompanied by psychological symptomatology, including depression [7] and alexithymia [8].
To meet the complexity of the condition, the 2017 EULAR international therapeutic guidelines focus on multimodal treatment approaches, preferring non-pharmacological interventions over medication [9]. In a 2022 Delphi process, experts in the field supported aerobic exercise, education, sleep hygiene, and cognitive behavioral therapy as core treatments for FMS symptoms, while recommending mind-body exercises as core interventions for pain, fatigue, and sleep problems, and mindfulness for depression [10]. Clinical practice does not seem to reflect these recommendations adequately, as overmedication has been repeatedly reported [11,12]. Up to 90% of FMS patients have been reported to consult complementary medicine as a supportive therapeutic option [3], while satisfaction with conventional standard care was rated low in a two-year cross-sectional Spanish survey [11].

Dietary interventions have not yet been incorporated into the 2017 EULAR guidelines [9], but different approaches have proven helpful in FMS treatment: in particular, studies using high-antioxidant, high-fiber, and low-processed foods, weight loss studies, and studies applying certain nutritional supplements have shown marked effects in alleviating FMS symptomatology [13]. Weight loss has been discussed as a separate mechanism for analgesic effects, a hypothesis supported by preclinical models [14]. Obesity, alongside nutritional deficiencies and the consumption of food additives, has also been described as a possible risk factor correlated with complications in FMS symptomatology [13]. The current state of evidence suggests that dietary modifications may, at a low economic cost, improve quality of life in patients suffering from central sensitization syndromes, a complex of diseases to which FMS has been allocated [15]. Predominantly plant-based dietary patterns have shown potential in improving quality of life, sleep, pain at rest, and general health status in FMS patients [16].
Microbiome differences between patients suffering from FMS and healthy controls have been outlined in exploratory studies, but it has not yet been ascertained whether certain food components or dietary changes could positively influence FMS symptomatology in clinical settings [17,18].
Therapeutic fasting has been found to contribute to weight loss and systemic antioxidation as well as to neuroplasticity [19,20]. Additionally, improvements in mood and reduced pain perception have been observed, potentially due to increased brain availability of serotonin, endogenous opioids, and endocannabinoids [21]. Changes to the gut microbiome [22] and behavioral changes regarding dietary habits, self-efficacy, and sense of coherence and freedom [23] are being discussed, but cannot be generalized, as data are still scarce.

In the case of FMS specifically, the effect of fasting on pain perception, weight, antioxidation, and subsequent eating behavior has thus far only been investigated in one exploratory clinical study by our research group. In a controlled, non-randomized study with 48 patients, we observed significant beneficial effects of a nature-based medical approach including fasting, compared to a conventional treatment in the same hospital [24]. Fasting is one of the main nature-based therapeutic add-ons to conventional rheumatologic treatment at the Department of Internal Medicine and Nature-Based Therapies (IMNT) at the Immanuel Hospital Berlin, Germany, where the present study was conducted. Other treatments may include physiotherapy, moderate exercise, dietary counselling, acupuncture, yoga, psychosomatic counselling, and cold or hot applications, among others. This multimodal approach makes it impossible to differentiate specific fasting effects from other therapeutic effects, but presents a great opportunity to study fasting interventions under real-world conditions, all the more so as patients' realities and international guidelines suggest a multimodal approach to FMS [3,9]. Furthermore, fasting as traditionally used in German-speaking countries has always been, and still is, a complex treatment accompanied by mind-body medicine interventions, physical exercise, relaxation, nutritional counselling, warm and cold water applications, and others, as described in the respective consensus guidelines [25].
This observational study was conducted to evaluate the feasibility and effectiveness of prolonged therapeutic fasting in FMS patients in a larger cohort and with longer follow-ups than in our 2013 exploratory study.
Study Design
We designed this clinical study as an explorative, prospective, single-arm, single-center, open-label, observational study. Approval of the study protocol was granted by the Institutional Review Board of Charité-Universitätsmedizin Berlin (Charitéplatz 1, 10117 Berlin) in October 2015 (ID: EA4/005/17). The protocol was registered at ClinicalTrials.gov (ID: NCT03785197). The study was carried out in accordance with the standards of the Declaration of Helsinki. All participants were asked for written informed consent before participating in the study. The study design has already been published elsewhere [26].
Setting
Recruitment took place from February 2018 to December 2020 at the Immanuel Hospital Berlin. It is a hospital with 195 beds and 5000 inpatients annually, covering orthopedic, osteological, and rheumatological cases as well as patients treated with nature-based (NB) therapies in a separate 60-bed ward. Approximately 900 patients are treated annually on the IMNT ward. The costs for most of the patients are covered by German statutory health insurance companies, while a minority of patients have private insurance or pay themselves.

The Department of IMNT at the Immanuel Hospital Berlin is among Europe's leading institutions applying NB and traditional medical approaches [27,28], including traditional European medicine [29,30], on a large scale. It is especially experienced in using prolonged fasting as a therapeutic measure for a variety of non-communicable diseases [31,32].

For this study, we focused on the four most common diagnoses for inpatient care in 2017, namely, FMS (ICD code M79.7), osteoarthritis (knee and hip, ICD codes M17.9 and M16.9, respectively), rheumatoid arthritis (M06.9), and type 2 diabetes mellitus (E11.61). The results for osteoarthritis have been published before [26], while the results for the other diagnoses will be published elsewhere.

The main diagnosis for inpatient treatment was identified on admission during the first medical consultation on the IMNT ward. The individual treatment plan, including therapeutically relevant dietary interventions, was compiled accordingly by the responsible physician. If the main diagnosis was consistent with one of the abovementioned four diagnoses, and therapeutic fasting was prescribed, study personnel assessed whether patients were eligible for the study (see Section 2.4). If eligible, patients were informed about the observational study and invited to participate.

Study visits on the first or second day of hospitalization (V0) and on one of the last days before discharge (V1) involved electronic questionnaires on patient-reported outcomes that patients completed on tablets. Laboratory tests, which are part of the IMNT ward's routine standard procedures on admission and discharge, were also reviewed for study purposes. Data on body weight, blood pressure, the use of analgesics, symptoms, and adverse effects during fasting as well as their treatment were obtained from patient files after discharge.

Throughout inpatient treatment, patients are seen by their attending physician 4 to 5 times a week, while nurses check in with each patient daily. A blood test is routinely performed on the first day after admission. Typically, a complete blood count (excluding differential blood count), blood lipids, blood glucose, electrolytes, and routine kidney and liver function parameters are included. If the findings require monitoring, the blood test is repeated before discharge. This standard procedure was introduced before the start of the study and was not changed during the entire observation period.
Follow-ups were conducted at 3-, 6-, and 12-month intervals through questionnaires.These were sent either by e-mail, if participants had provided an e-mail address, or by mail.
Interventions
The methodology of the fast and the multimodal therapeutic program of the Department of IMNT have already been described elsewhere in detail [26]. When the physician recommends fasting within the standard inpatient treatment, the first full day of the hospital admission is allocated as a day of preparation, involving a light plant-based diet. On this day, patients consume a calorie-restricted light diet of around 1200 kcal. Additionally, bowel cleansing is applied, facilitated by the intake of laxatives like Glauber's salt (hydrated sodium sulfate) or through an enema. Fasting commences on the subsequent day, typically lasting a minimum of 5 days to a maximum of 12 days, whereby the exact duration depends on the patient's constitution and inpatient stay regulations in accordance with the diagnosis, severity of illness, and ICD-10 coding. Throughout the fasting period, only water, unsweetened teas, and natural juices are consumed, resulting in a daily caloric intake of 200 to 300 kcal. During the COVID-19 pandemic, the fasting regimen was switched to a fasting-mimicking diet (FMD) from April 2020 until the end of the study, as there was uncertainty about the effects of fasting on the immune response to SARS-CoV-2. The total daily calorie limit was raised to a maximum of 600 kcal, incorporating small amounts of solid foods in the form of vegetable soup, steamed vegetables, porridge, and cooked potatoes. Fasting therapy in our department is integrated into a series of other therapeutic measures that the physician prescribes individually for each patient. These nature-based therapies may include nutritional counselling, cryotherapy (cold chamber), water aerobics, yoga therapy, mind-body medicine, active walking, meditation, and breathing techniques as well as psychosomatic counselling, acupuncture, physiotherapy, baths, or leech therapy. Traditionally, the fast is broken on the final day with an apple. Depending on the duration of the fasting period, solid food is gradually reintegrated over the subsequent 1 to 3 days as part of a light plant-based diet. Medication paused during the fast is gradually reintroduced on these days, if necessary.
Participants
All patients who were treated as inpatients in the IMNT department at the Immanuel Hospital Berlin in the period of February 2018 to December 2020 and who were prescribed fasting as one of the therapeutic measures by their treating physician were screened for suitability for the study.

Age between 18 and 85 years and written informed consent were further inclusion criteria. Dementia or other severe cognitive impairments, pregnancy or breastfeeding, difficulties with the German language, and involvement in another study were exclusion criteria.
Variables
The feasibility and effectiveness of prolonged therapeutic fasting in FMS patients was investigated using a validated questionnaire specifically for FMS symptoms, the Fibromyalgia Impact Questionnaire (FIQ). For the FIQ, an improvement of 14 percent has been acknowledged as the minimal clinically important difference (MCID) [33]. Secondary outcomes included the following validated questionnaires: the Hospital Anxiety and Depression Scale (HADS), a self-efficacy scale (Allgemeine Selbstwirksamkeit Kurzskala, ASKU), and the Mindfulness Attention and Awareness Scale (MAAS). In addition, body weight, blood pressure, and medication as well as triglycerides, total cholesterol, LDL, and HDL were extracted from patient records.
Data Sources/Measurement
During the inpatient stay, electronic questionnaires were completed; for the follow-up examinations, digital or analog questionnaires were used depending on the patient's wishes. Blood samples were taken on the day of admission and, if considered necessary by the attending physician, on one of the last days of the hospital stay. Hospital physicians always drew blood samples between 7:30 and 8:15 a.m., prior to breakfast.
Bias
The use of fasting as an intervention cannot be blinded for either the patient or the hospital staff. The study personnel were only responsible for recruiting participants and ensuring that questionnaires were completed. The study personnel therefore had no control over the duration of fasting, modifications of treatment modalities, or the course of the patients' treatment.

To determine whether there was a reporting bias related to subjective improvement or worsening of symptoms in the follow-ups, patients were subdivided according to their improvement in the FIQ score at V1 (primary endpoint) into high (upper third), medium (central third), and low gainers (lower third). For the subsequent follow-ups, we cross-checked whether any of these subgroups was under- or over-represented in the answers we received.
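The tertile-based attrition check described above can be sketched as follows. All data here are illustrative placeholders, not study data; the grouping logic mirrors the described procedure, not the study's actual code:

```python
from statistics import quantiles

# Illustrative FIQ improvements at V1 (positive = improvement); not study data.
fiq_gain = [25, 3, 14, 8, 30, 1, 18, 6, 22, 10, 12, 27]

# Tertile cutoffs split patients into low / medium / high gainers.
low_cut, high_cut = quantiles(fiq_gain, n=3)  # 33rd and 67th percentiles

def gain_group(gain):
    if gain < low_cut:
        return "low"
    if gain < high_cut:
        return "medium"
    return "high"

groups = [gain_group(g) for g in fiq_gain]

# Hypothetical follow-up response flags (True = questionnaire returned).
responded = [True, False, True, True, True, False, True, True, True, False, True, True]

# Response rate per gain group; roughly equal rates argue against reporting bias.
rates = {}
for grp in ("low", "medium", "high"):
    idx = [i for i, g in enumerate(groups) if g == grp]
    rates[grp] = sum(responded[i] for i in idx) / len(idx)
print(rates)
```

In the study, a markedly lower response rate among low gainers would have suggested that the follow-up results were inflated by selective dropout; the actual cross-check (Section 3.4) found losses to follow-up to be fairly consistent across the three groups.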
Study Size
During the three-year duration of the study, an estimated 180 FMS patients were expected to be treated on the ward and receive fasting therapy, of whom around 150 would be able to take part in the study. For an intraindividual pre-post comparison with the t-test and standard parameters of alpha = 0.05 (uncorrected for the number of tests, as is usual in exploratory studies) and beta = 0.20 (equivalent to a power of 80%), 150 patients, including 20% dropouts, are sufficient to detect large, medium, and small effects down to a minimum effect size of Cohen's d ≥ 0.23.
No interim analyses were planned.
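The sample-size reasoning above can be reproduced with the standard normal-approximation power formula for a two-sided paired t-test; this stdlib-only sketch is ours, not the study's calculation, but dedicated tools (e.g., G*Power or statsmodels) give near-identical values:

```python
from math import sqrt
from statistics import NormalDist

def min_detectable_d(n, alpha=0.05, power=0.80):
    """Smallest Cohen's d detectable by a two-sided paired t-test
    with n pre-post pairs (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # 0.84 for power = 0.80
    return (z_alpha + z_beta) / sqrt(n)

# 150 recruited patients (the planned sample, including 20% dropouts)
print(round(min_detectable_d(150), 2))  # 0.23, matching the text
```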
Statistical Methods
In this observational, exploratory, single-arm study, participants' baseline values (V0) and vital signs were compared with those measured at subsequent visits (V1 to V4) using unadjusted t-tests. For the comparison between V0 and V1, all cases with complete data for the specific parameter were considered. As is usual in explorative studies, no imputation was applied.
Data were analyzed using custom-written procedures in Python (v 3.9).
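The study's custom Python procedures are not published; a minimal stdlib sketch of the unadjusted pre-post comparison described above (paired t-test on complete cases, plus Cohen's d for paired differences) might look like this, with purely illustrative data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Unadjusted paired t statistic and Cohen's d, complete cases only."""
    pairs = [(a, b) for a, b in zip(pre, post) if a is not None and b is not None]
    diffs = [a - b for a, b in pairs]      # positive = score fell (improvement)
    n = len(diffs)
    m, sd = mean(diffs), stdev(diffs)
    t = m / (sd / sqrt(n))                 # compare against t distribution, df = n - 1
    d = m / sd                             # Cohen's d for paired differences
    return t, d, n

# Illustrative FIQ totals at V0 and V1 (None = missing, dropped as in the study)
v0 = [58, 62, 55, 60, 57, 64]
v1 = [44, 50, 49, 48, 46, None]
t, d, n = paired_t(v0, v1)
```

Note that only complete pairs enter the test, mirroring the complete-case analysis stated above; the p-value would then be read from the t distribution with n − 1 degrees of freedom.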
Study Population
In this uncontrolled observational study, n = 176 patients (168 females and 8 males) suffering from FMS and undergoing therapeutic fasting on the ward between February 2018 and December 2020 were included (Figure 1). Baseline characteristics of the study population are displayed in Table 1.
Of the originally included 176 patients, 157 (90%) fasted. The fast lasted between 3 and 12 days, with a mean of 7.6 (SD 1.7) days (Figure 2a). The questionnaires at V1 were answered by n = 142 (82%). However, due to a failure in programming the electronic questionnaire in the early months of the study, the FIQ was not completed as a full questionnaire by the first patients, and thus shows gaps in various subscales at V1 for 21 patients (for details, see the "n" column in Table 2).
Questionnaires
Results revealed a significant improvement in fibromyalgia symptomatology as measured by the FIQ and its subscales (Figure 3a-e, Table 2). The total score fell from 58.3 ± 11.1 to 44.6 ± 15.5 (−23.5%) between the first and the last days of the stay, corresponding to a significant reduction of 13.7 ± 13.9 points (p < 0.001, d = 1.02) on the FIQ total score scale ranging from 0 to 100, with 50 points being the average patient score. This marked reduction of 23.5 percent is larger than the minimal clinically important difference (MCID) of 14 percent. The strong improvement in the total score resulted from large effects in the subscores "Overall" (15.0 ± 4.2 to 10.9 ± 5.2, p < 0.0001, d = 0.87) and "Symptoms" (39.8 ± 9.0 to 30.5 ± 11.5, p < 0.0001, d = 0.89), a slight benefit in the FIQ subscore "Function" (3.5 ± 1.7 to 3.2 ± 2.0, p = 0.0328, d = 0.17), and a clinically significant effect in the pain subscore (6.8 ± 1.9 to 5.7 ± 2.6, p < 0.0001, d = 0.49, MCID ≥ 1 point on NRS) [34]. Effects remained at a moderate level up until V2 (after three months) for the total score and the symptoms and pain subscores and were reduced to small effects thereafter (Figure 3a-e, Table 2). There was no significant correlation between individual fasting duration in days and improvement in the FIQ score (regression: r = 0.156, p = 0.094).
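The percent-change arithmetic reported above can be checked directly; the mean scores are taken from the text, and 14% is the MCID cited in [33]:

```python
# FIQ total score: mean 58.3 at admission (V0), 44.6 at discharge (V1)
baseline, discharge = 58.3, 44.6

reduction = baseline - discharge            # 13.7 points
pct_change = reduction / baseline * 100     # relative improvement in percent

mcid_pct = 14.0                             # minimal clinically important difference (FIQ)
print(round(pct_change, 1), pct_change > mcid_pct)  # 23.5 True

# NRS pain subscore: 6.8 -> 5.7; the cited MCID is >= 1 point
pain_drop = 6.8 - 5.7
```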
Physiological Parameters
The weight and blood pressure of the patients decreased during the inpatient stay, as did blood lipids. For more details on physiological, laboratory, and anthropometric outcomes, please refer to Figure 2b and Table 3. Pain medication was mainly maintained or reduced, while herbal remedies for pain alleviation were given to some patients de novo. The details on the changes in medication can be found in Table 4.

Legend (Table 4): Pain medication was categorized into opioids/opiates and non-steroidal anti-inflammatory drugs (NSAIDs), including Ibuprofen, Diclofenac, Coxibs, Paracetamol, and Metamizole, among others. Medications used to treat neuropathic pain (such as Carbamazepine or Gabapentin), biologicals, Methotrexate (MTX), or corticoids as well as herbal remedies were reported separately. We developed a scale for the changes in medication other than herbal remedies: −2 (medication was discontinued), −1 (medication was significantly reduced, including discontinuation of rescue medication or reduction from daily use to rescue medication), 0 (no substantial change in dosage), +1 (new medication or 1.5- to 2.5-fold rise in dosage), and +2 (at least 3-fold increase in medication). For herbal medicines, we rated −2 as stopping herbal medicines taken at admission, −1 as a notable dose reduction, −0.5 as a change from daily intake of herbal medicines to rescue medication, 0 as no substantial change, +0.5 as a slight increase in dosage (e.g., new herbal rescue medication), +1 as a new daily intake of herbal medicines, +1.5 as a new daily herbal medicine plus a herbal medicine as rescue medication, and +2 as two new daily herbal remedies.
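The medication-change coding described in the Table 4 legend can be expressed as a lookup table; this is an illustrative re-encoding of the scale, not the study's software, and the example patient codes are hypothetical:

```python
# Coding scheme for changes in conventional pain medication (per the Table 4 legend)
MED_CHANGE = {
    -2: "medication discontinued",
    -1: "significantly reduced (incl. daily use -> rescue medication)",
     0: "no substantial change in dosage",
    +1: "new medication or 1.5- to 2.5-fold rise in dosage",
    +2: "at least 3-fold increase in medication",
}

# Herbal remedies use a finer-grained scale with half steps
HERBAL_CHANGE = {
    -2.0: "stopped herbal medicines taken at admission",
    -1.0: "notable dose reduction",
    -0.5: "daily intake -> rescue medication",
     0.0: "no substantial change",
    +0.5: "slight increase (e.g., new herbal rescue medication)",
    +1.0: "new daily intake of herbal medicines",
    +1.5: "new daily herbal medicine plus herbal rescue medication",
    +2.0: "two new daily herbal remedies",
}

# Example: tally coded changes across a hypothetical patient list;
# a negative mean indicates a net reduction in medication.
codes = [-1, 0, 0, -2, 0, +1, 0, -1]
mean_change = sum(codes) / len(codes)
```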
Safety and Adverse Events
Self-reported side effects of fasting were recorded at the end of the inpatient stay or documented in the medical records, as presented in Table 5 and Figure 2d. No serious adverse events were recorded during the inpatient stay for any of the participants.

In a further analysis, it was investigated whether the positive long-term results could be explained by a selective loss, during the follow-up period, of patients who had not profited notably from the treatment. However, when analyzing the data of those who profited most (upper third), moderately (central third), and least (lower third), it was found that losses to follow-up were fairly consistent across all three groups. The long-term results therefore do not appear to be biased by a selective loss of those who benefited least or most from the intervention.
Discussion
Medically supervised fasting with a maximum of 600 kcal daily, applied for a mean of 7-8 days as part of a multimodal inpatient nature-based intervention, showed improvements in different parameters in the treatment of patients with FMS. These included patient-reported outcomes related to pain perception, functionality, and quality of life, along with some clinical, anthropometric, and laboratory parameters in short-, medium-, and long-term follow-ups.
One of the main symptoms of FMS is pain, accounting for frequent overmedication and overtreatment [11,12,35]. In our data set, we found a reduction in subjective and objective pain scores. The FIQ question on pain revealed a clinically significant reduction that persisted until the three-month follow-up, while patient-reported pain was still below baseline values one year after the hospital stay. Parallel to this result, the data from the patient records show that pain medication decreased slightly from baseline to discharge, while herbal remedies increased. This is especially interesting, as a current study on the use of pain medication in FMS concluded that many German patients use "on-demand" pain medication that is not in line with international guidelines, achieving only moderate pain reduction and leading to side effects [36].
Although our data are only observational in nature, the effect sizes observed (d = 1.02) compare favorably to those of other non-pharmacological treatments (d = 0.63), including supportive treatments such as aerobic exercise (d = 0.89) or multidisciplinary treatment (d = 0.41) [37]. In this context, the comparative effectiveness and cost-effectiveness of fasting should be further investigated.
The improvements in pain and the FIQ indicate a better sense of wellbeing. In line with this, quality of life, which was assessed using the WHO-5 questionnaire, also increased in our sample. It showed a clinically relevant increase, while depression and anxiety, measured by the HADS, decreased. Our FMS study population seems to profit more, and in a shorter time frame, regarding anxiety and depression than with, for example, pain neuroscience education, a program usually running over several weeks [38]. The changes we observed were most pronounced from admission to discharge, but an improvement in symptomatology (FIQ subscale on symptoms), pain (FIQ subscale on pain), and mood (WHO-5) was sustained throughout the whole study period of one year.
These findings on wellbeing are of particular interest in FMS, as many patients suffer from alexithymia and depression. Four in ten FMS patients are reported to show depressive symptomatology [7]. Depression has been linked to poorer outcomes in FMS patients [7], and alexithymia has been positively associated with pain [8]. This is why interventions enhancing mindful awareness have been proposed for integration into therapeutic concepts in FMS [8]. In our study, we saw a rise in mindfulness, measured by the MAAS questionnaire, at V1 compared to baseline. Although this change was not sustained at further visits, the findings indicate that fasting could potentially support mindfulness practice. Fasting, a dietary intervention perceived by many patients as drastic and requiring self-restraint to adhere to, inherently challenges self-management strategies [23]. This could be one of the mechanisms by which fasting may influence quality of life and mindfulness in FMS patients. This aspect is also important, as self-management has been discussed as a key element of treatment strategies for FMS [35].
In our study setting, fasting was not administered alone. The multimodal therapy offered entailed components that are known to improve FMS symptomatology. Evidence suggests that the promotion of moderate exercise [9,35,39], patient education [38], the introduction of mindfulness-based self-management strategies of mind-body medicine [39,40], physiotherapy [41], balneotherapy [42], manual therapies [43], and different traditional and nature-based health strategies like acupuncture, acupressure, and massage [44,45] can lead to symptom improvement. As mentioned before, fasting in the traditional sense is not only a reduction in caloric intake, but implies a complex therapeutic approach, which may include all of the abovementioned interventions, depending on the clinic or medical practitioner implementing it. As such, therapeutic fasting could itself be described as a multimodal approach.
Our findings are in line with those of other multi-component programs for FMS. Bruce et al. reported on a two-day multimodal treatment program embedded in routine care that yielded durable effects regarding improvement of functional status and psychological distress in a sample of 189 patients until the follow-up visit after five months [46]. Ayurvedic and conventional multimodal treatment both showed improvements on the FIQ after a two-week inpatient treatment [47]. Another multimodal program, being developed in a community setting, runs over six weeks and includes education about fibromyalgia, goal setting, pacing, sleep hygiene, and nutritional advice [48].
Other dietary interventions have also been shown to improve FMS symptomatology. A very-low-calorie diet (VLCD) of 800 kcal/d, applied in 195 obese FMS patients over three weeks, showed a minimum 30% symptom reduction in 72% of participants by week 3, while changes were not associated with the amount of weight lost [14]. The caloric restriction applied in our study was more pronounced, but shorter, and showed similar improvements. In preliminary studies from our working group, fasting with 300 kcal/d for a week ameliorated mood and wellbeing in a heterogeneous sample of chronic pain patients [49], and FIQ scores, pain, sleep, and anxiety were reduced in a prolonged fasting group compared to controls in the same hospital after two and twelve weeks. In addition to the abovementioned mechanisms, these effects could be connected to the antioxidant capacity released by caloric restriction and fasting [19]. FMS patients have been shown to produce more damaging free radicals than healthy controls, mainly affecting the nervous system, and treatments with antioxidants and vitamins have had effects on FMS symptoms [3]. In our clinical experience, fasting is often seen by patients as a potential starting point for changes in habits in general, and especially in dietary habits, as we have described previously in a study on a religious fast [23].
In our study setting at the Department of IMNT, patients are encouraged to follow a mainly plant-based diet with low-processed foods once back at home. A 2022 review of 36 studies on nutritional approaches to fibromyalgia concludes that low-processed foods containing high amounts of antioxidants and fibers, high-quality protein, and healthy fats showed beneficial effects in FMS [13]. A systematic review on the use of vegetarian and vegan diets in FMS published in 2021 analyzed four clinical trials and two cohort studies on the subject and concluded that following predominantly plant-based dietary patterns seems to improve biochemical parameters, quality of life, sleep, and pain at rest as well as general health status [16]. In a review of the literature on nutritional influences on central sensitization syndromes, including FMS, the authors conclude that dietary changes can considerably increase patients' quality of life at modest costs [15]. In fact, in our clinical experience, fasting is often used as an impulse for medium- and long-term lifestyle changes, so changes in dietary habits can also be seen as part of the effects of a fasting intervention and should be tracked as such in future studies.
Several limitations restrict the generalizability of our study results. Apart from the insufficient quality of our data due to technical programming problems at the beginning of data collection, the greatest drawback of this study lies in its observational character and the lack of a control group. The inpatient setting makes randomization difficult, so that no control group could be generated on the same ward, and comparison with patients on other wards would have entailed too many differences in the multimodal therapy to be of use in this study. Inpatients for whom fasting therapy is not indicated due to their pre-existing conditions (e.g., cachexia or an eating disorder) would also not represent a suitable comparison group. In addition, fasting, like other dietary changes, cannot be blinded, which applies to both personnel and patients and further compromises the generalizability of the results. The variations in fasting length also pose a challenge to reproducibility. Additionally, the determinants of the decision-making process between the patient and the therapeutic team concerning the length of the fast have not been well documented in our study. The multimodal treatment program on the ward, being individualized for every patient and containing numerous interventions, also impedes generalizability. On the one hand, the inpatient setting alone could have unspecific positive health effects, and on the other, effects specific to the reduction in caloric intake cannot be isolated due to the multimodality of the treatment. We also did not track three important therapeutic elements that merit attention: psychiatric medication, exercise, and dietary habits. Not having tracked dietary habits in the follow-ups, it is not possible to differentiate long-term fasting effects per se from health-promoting changes in dietary habits in our data. Exercise is not only critical for patients suffering from obesity, as weight loss has been shown to positively influence pain in FMS, but it is also the non-pharmacological treatment option with the most evidence in FMS [9]. Future studies should find ways to monitor changes in exercise, too, as less pain could contribute to more exercise and help maintain positive results after the inpatient stay.
Taking all these limitations into consideration, these data can be seen as a contribution to the discourse concerning dietary interventions in the treatment of FMS. Our data suggest the feasibility, safety, and potential advantages of medically supervised fasting for patients with FMS when embedded in a multimodal therapeutic inpatient approach. The feasibility and safety of prolonged fasting have already been shown for various other indications, including different chronic pain syndromes [21,[50][51][52].
In summary, prolonged fasting could induce multiple positive effects on the symptomatology of FMS. To generate more evidence in this field, it would be advisable to study fasting in outpatient settings, as they seem easily approachable for patients suffering from FMS [48], with control groups and fewer new treatments introduced during fasting. It would also be interesting to investigate whether effects are dose-dependent and whether fasts of shorter duration, or even intermittent fasts, could have similar effects. From an economic perspective, the cost-effectiveness of dietary interventions, and especially fasting, should be further investigated, as fasting has been shown to lower the need for medication, sparing patients potential side effects. The rise in quality of life could also have effects on the length of sick leave. In general, if a safe and feasible intervention of 5-10 days were able to lower disease burden in FMS in the medium and long term, giving it further attention seems worthwhile. We hope that these observational data will serve as a basis for subsequent prospective interventional trials, which are required to explore the concept further and ensure reproducibility of results in different settings and populations.
Conclusions
The use of prolonged modified fasting as part of a multimodal medical approach could possibly help patients with fibromyalgia regarding pain and psychosomatic symptoms.
Nutrition (Ärztegesellschaft für Heilfasten und Ernährung e.V.). All other authors declare no conflicts of interest related to this manuscript.
Legend:
The left-hand side shows descriptive statistics for each visit separately, while the right-hand side shows the mean of individual differences between the baseline visit (V0) and the respective visit (V1: at discharge, V2-V4: 3, 6, and 12 months after V0). Differences and statistics have been calculated only for complete cases for the individual parameter and visit. FIQ = Fibromyalgia Impact Questionnaire, HADS = Hospital Anxiety and Depression Scale, PSS = Perceived Stress Scale, WHO-5 = Quality of Life, ASKU = Allgemeine Selbstwirksamkeit Kurzskala (self-efficacy scale), MAAS = Mindfulness Attention Awareness Scale, M = Mean, SD = Standard Deviation, n = number of participants, T = test statistic, p = p-value of the paired t-test, d = Effect size (Cohen's d).
Figure 2.
Figure 2. (a) Fasting duration histograms (in days). Results for the physiological data, blood pressure (b), and weight (c). Here, bars indicate means and SDs. (d) Relative frequencies of side effects of fasting. Black bars indicate self-reported frequencies in the questionnaire at discharge; blue bars show frequencies of side effects reported to the staff. V0 = Baseline visit, Day 1 = 24 h after admission, V1 = visit at discharge. "Mood" = mood disturbances.
Legend:
The left-hand side shows descriptive statistics for each visit separately, while the right-hand side shows the differences between the baseline visit (V0) and V1 (visit at discharge). As patients were received at the hospital after breakfast, weight was determined on the morning after admission (day 1) and compared with V1. M = Mean, SD = Standard Deviation, n = number of participants, T = test statistic, p = p-value of the paired t-test, d = Effect size (Cohen's d).
Table 3.
Results of Physiological and Laboratory Parameters.
Table 4.
Changes in Medication.
Table 5.
Subjectively Reported Side Effects of Fasting.
Values are reported in absolute and relative frequencies (%).
Shifting students toward testing: impact of instruction and context on self-regulated learning
Much of the learning that college students engage in today occurs in unsupervised settings, making effective self-regulated learning techniques of particular importance. We examined the impact of task difficulty and supervision on whether participants would follow written instructions to use repeated testing over restudying. In Study 1, we found that when supervised, instructions to test resulted in changes in self-regulated learning behaviors such that participants tested more often than they studied, relative to participants who were unsupervised during learning. This was true regardless of the task difficulty. In Study 2, we showed that the failure to shift study strategies in unsupervised learning was likely due to participants' avoidance of testing rather than a failure to read the instructions at all. Participants who tested more frequently remembered more words later regardless of supervision or whether or not they received instructions to test, replicating the well-established testing effect (e.g., Dunlosky et al. in Psychol Sci Public Interest 14:4–58, 2013. http://doi.org/10.1177/1529100612453266). In sum, there was a benefit to testing, but instructing participants to test only increased their choice to test when they were supervised. We conclude that supervision has an impact on whether participants follow instructions to test.
Improving memory performance is a goal for many people, particularly those in academic settings. Fortunately, scientists in many fields (e.g., cognitive, education, behavioral) have identified best practices to maximize learning and remembering. Dunlosky et al. (2013) reviewed the efficacy of ten easy-to-use learning techniques, derived from basic research in cognitive psychology, for their potential to help students achieve their learning goals. Based on a review of published studies, Dunlosky et al. (2013) found that some study tools reportedly used by many have limited utility (e.g., highlighting, summarizing, mnemonics, and re-reading). Of the techniques identified as successful, repeated testing was one practice found to benefit all types of learners of different ages and abilities. Despite the fact that students may not naturally recognize the power of repeated testing (Karpicke et al., 2009; Tullis & Maddox, 2020), getting students to use this technique could be as simple as instructing them to do so. Ariel and Karpicke (2018) examined whether students would engage in repeated testing if they were informed of the utility of this learning strategy prior to studying. College-aged students were given instructions regarding the benefits of using multiple retrievals (three) to improve recall performance. Their performance was compared to that of students in a control condition who did not receive explicit instruction about study strategies. All students were tasked with learning 20 Lithuanian-English translations and allowed to choose their study strategy for each word pair. Participants were able to choose how to study the word pairs (either see both words together or be cued with the Lithuanian word and respond with the English translation). They found that participants who received instruction about the benefits of repeated testing tested themselves more often and remembered more of the translations during the cued recall task 15 min later.
In a second experiment, they demonstrated that participants who were instructed to use testing in an initial task continued to use it in a similar task (learning Swahili-English word pairs) a week later without prompting, leading the authors to conclude that instruction to test has a lasting impact.
One aspect of student learning that has changed significantly since Ariel and Karpicke published their findings in 2018 is the need for students to regulate their own learning in fully online environments and without instructor supervision. As the COVID-19 pandemic necessitated significant changes to online learning, we wondered whether the instructions participants were given about the value of repeated testing would work as effectively in that context to produce the same increases in testing behavior Ariel and Karpicke (2018) saw in their lab-based intervention.
Likely to last well beyond the pandemic, online testing may be particularly useful in helping students gain extra practice with material outside of class so that they are more prepared to participate in class discussions that require higher level thinking. Even prior to the pandemic, a number of researchers found improvements in student learning using online practice tests that could lead to higher grades in a real course (Gurung, 2015; McDaniel et al., 2012; Van Camp & Baugh, 2014; but see also Bell et al., 2015, 2016). For example, Gurung examined online packages from three different publishers and found positive correlations between time spent using these online tools and students' performance on in-class assessments, even when controlling for GPA. Van Camp and Baugh (2014) examined the use of publisher-provided online learning tools (MyPsychLab, Pearson) in their Introductory Psychology course. Students not only reported that they believed the tools helped them to learn, they also reported that they enjoyed using the online tools as part of the course. While they found that students who chose to use the tools received better grades than students who had not, there was no improvement in the overall course grade or passing rate when the tools were required for the class, and importantly, not everyone used the course tools when they were available or required (Van Camp & Baugh, 2014). One possible conclusion is that some students, perhaps those who are already more skilled in learning strategies and/or more motivated to do well, use tools when they are available, whereas less skilled or motivated students do not. We note that these publisher-provided online study tools include more features than merely quizzing, although quizzing is an important component.
It is possible, based on Ariel and Karpicke's (2018) findings, however, that regardless of motivation to do well, if all students were explicitly informed about the benefits of testing right before using the quizzing tool, this might increase the use of testing among all students and thus, improve all student performance.
We explored this phenomenon further in our first study thereby testing the effectiveness of an instruction intervention on participants' self-regulated learning across different contexts. We asked all participants to try hard to learn 20 English-Swahili word pairs across two modalities: some completed the tasks under supervision in the laboratory (thus, replicating Ariel & Karpicke, 2018), and some worked on their own, unsupervised. As noted above, Ariel and Karpicke (2018) found that students selected repeated testing when they were provided with instructions regarding the superiority of this learning strategy just prior to engaging in an opportunity to learn word pairs in a laboratory setting and a week later in a similar laboratory setting. Whether students would follow this instruction prompt in an online, unsupervised setting in the same way they did under supervision in a laboratory setting is not yet known.
A second feature that we explored was whether the difficulty of the task would change students' adherence to the instructions to use testing as a strategy to improve learning. In their first experiment, for example, Ariel and Karpicke (2018) cued participants with a Lithuanian word and then asked them to recall and type in the English pair. Recalling English words given a non-English word as a cue is an easier task compared to recalling a word in an unfamiliar language given an English word as the cue (Bangert & Heydarian, 2017; Nelson & Dunlosky, 1994). It is important to know if task difficulty influences student decisions about testing and whether providing information about the value of testing at the start of a study session has any influence over behavior in that situation. Based on the previous work cited above using publisher-provided online learning tools, we anticipate that not everyone will use the resources available to them, even with instruction. Other laboratory-based studies have shown that students will choose to test themselves but only after they have already reached a certain level of recall based on just viewing the word pairs (Kornell & Bjork, 2007), or if they are allowed to receive hints (Vaughn & Kornell, 2019) to make the learning task more "fun." Therefore, we anticipate that participants will choose the study option more often in the difficult task (recalling Swahili) than in the easier task (recalling English words).
We examined the durability of the instruction effect, i.e., whether students will continue to select testing over studying when instructed to do so, and whether that effect is constant across different contexts (online vs. in person) and under differing levels of task difficulty (easy vs. hard). We examined the effects of not only task instruction about testing effects (present or not) but also the effects of task context (supervised or unsupervised) and task difficulty (easy or hard) on participants' decisions to use testing as a strategy to learn word pairs. In each context and difficulty level, participants were randomly assigned to one of two groups. Either they received instruction about the benefit of testing, suggesting that testing themselves was the most effective strategy to learn the words (instruction group), or that students should learn the word pairs, so they could recall as many as possible later (control group). We manipulated task context by having students complete the study with supervision in a laboratory or on their own and unsupervised. We manipulated task difficulty by cuing recall with the Swahili word and asking for the English translation (easy task) or cuing recall with the English word and having participants recall the Swahili translation (hard task).
Participants and design
We used G*Power to determine the appropriate cell size based on effect sizes reported by Ariel and Karpicke (2018). This yielded a target of 30 participants in each condition. The majority of students identified as female (69%) and were mainly first and second year students (84%). See Table 1 for specifics for each condition. Participants were undergraduate students in introductory psychology courses who completed the study for course credit.
The Easy condition consisted of Swahili prompts with participants reporting the appropriate English word, while the Hard condition had English prompts for the appropriate Swahili word. The Instruction condition explicitly emphasized the benefit of testing (whereas the Control condition did not). Study conditions were conducted one at a time, and participants were randomly assigned to instruction versus control. Although random assignment was not possible for the context and difficulty manipulations, participants were not able to self-select which condition to join, decreasing the likelihood of selection bias. Participation in one condition excluded them from participating in another.
Materials
All procedures used 20 Swahili-English translations (e.g., jibini-cheese), all nouns. The Swahili words were six letters long and were of medium difficulty for recalling English when prompted with Swahili, according to previous research (Bangert & Heydarian, 2017). Task difficulty varied according to the language of the word to be retrieved following the cue, English (easy) or Swahili (hard). Context was either supervised in a laboratory setting or unsupervised on their own.
The program was created using PsychoPy (Peirce et al., 2019) for in-person testing and was converted to an online version using Pavlovia as the platform for unsupervised learning. For all participants, we collected demographic information via an online questionnaire.
Procedure
For participants in all conditions, the first screen told them they would be learning 20 Swahili-English translations. They were told that they could control how they studied the translations in the learning phase and that their goal should be to learn all of the translations so that they would recall as many as possible on the final test that would follow approximately 45 min after the start of the experiment.
Before beginning the choice block, each participant was then randomly assigned to the instruction or control condition. Participants in the instruction condition saw a second screen that included specific information about the benefits of repeated retrieval, recommending that they continue to practice the translations until they remembered each of them three times, modeled after the methods of Ariel and Karpicke as shown in their Appendix (2018, p. 56). The instructions opened thus: "Before you begin, we wanted to tell you about a strategy that is extremely effective for learning: repeatedly self-testing. Research shows that people learn more from repeated testing than from repeated studying. This is illustrated in the Figure to the right, which shows differences in final memory performance for students who repeatedly studied information vs. repeatedly retrieved information with practice tests. The best strategy to ensure [...]"

The control condition received no second screen with this information and proceeded immediately to the first phase of the experiment.
The study began with a learning phase which consisted of alternations between a choice block and a practice block (see Fig. 1). The choice block involved sorting the word pairs into Study, Test, and Done piles. Participants sorted by clicking on the blank pile to view the word pair, and then, they selected how or if they wanted to review that word pair by clicking on Study, Test, or Done on the right. Once all the word pairs were sorted into one of the three piles, the practice block began by asking if the participant would like to start with Study or Test piles first (if word pairs were placed in both piles, otherwise they proceeded directly to the pile that contained the word pairs they had selected for practice). Word pairs placed in the Study pile were presented in random order on screen until the participant pressed Enter to move to the next pair. Word pairs placed in the Test pile were randomly presented with only one word (Swahili or English, depending on whether they were in the hard or easy condition) shown and a blank area beneath it for the subject to type the appropriate translation. Once Enter was pressed, the participant was asked if they would like to receive feedback about their response. If Yes was selected, they were shown their answer and the correct answer for 2 s before the next trial began. Completion of the practice block brought the participant back to the choice block. This alternation between choice and practice blocks continued until the participant placed all word pairs in the Done pile during the choice block.
For supervised learning, participants then moved to a new computer in the same room to complete unrelated distractor tasks for 15 min. For unsupervised learning, they were directed to watch a YouTube video about visual illusions that lasted 15 min. After the distractor task, participants completed the testing phase of the study. Participants were cued with a Swahili word if they were in the easy condition and asked to type in the corresponding English word (e.g., jibini-?), whereas those in the hard condition saw the English word and were asked to type the corresponding Swahili word (e.g., cheese-?). They were allowed as much time as they needed to type the translation. Once they hit Enter, another cue appeared until all word pairs were presented. The order of presentation was random. This cued recall was identical to how the participants tested themselves during the learning phase if they placed the word pairs in the Test pile except that no feedback was given during this cued recall test. Following this test, participants completed a survey requesting information on demographics, their study habits, how hard they tried on this task, and their familiarity with Swahili. Finally, they were debriefed and thanked for their participation.
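The alternating choice/practice loop just described can be sketched as a small pure function. This is our simplified model of the procedure, not the actual PsychoPy/Pavlovia code; the `choose` callback and the pair labels below are stand-ins for the participant's clicks.

```python
def learning_phase(pairs, choose):
    """Sketch of the learning phase: on each round, the participant sorts
    every remaining pair into 'study', 'test', or 'done'; pairs placed in
    the Study or Test piles come back in the next choice block, and the
    loop ends once all pairs are in the Done pile.

    choose(pair, round_no) simulates the participant's click and must
    return 'study', 'test', or 'done'.  Returns per-pair trial counts.
    """
    remaining = list(pairs)
    counts = {p: {"study": 0, "test": 0} for p in pairs}
    round_no = 0
    while remaining:
        round_no += 1
        next_round = []
        for pair in remaining:
            action = choose(pair, round_no)  # participant's click, simulated
            if action == "done":
                continue  # pair is retired to the Done pile
            counts[pair][action] += 1
            next_round.append(pair)  # returns in the next choice block
        remaining = next_round
    return counts
```

For example, a simulated participant who tests every pair twice and then marks everything done produces two test trials and no study trials per pair.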
Study strategy
We sought to determine whether the instruction effect shown by Ariel and Karpicke (2018), namely increasing testing by telling students about the benefits of this technique, would persist if students were presented with those instructions in an unsupervised online task, and regardless of how difficult the material was to learn. Because participants chose a mix of test and study trials to complete this task, and we were interested in studying self-regulation of learning within individual participants, we examined the proportion of trials in which participants tested relative to the total number of trials they completed to create a test-study ratio measure. This test-study ratio measure takes into account how each participant chose to blend testing and studying activities in the different settings (supervised, unsupervised) and task difficulty levels (easy, hard).
We analyzed how often students used testing with a 2 (instruction vs. no instruction) × 2 (supervised vs. unsupervised) × 2 (easy vs. hard) ANOVA. To create a single measure of the degree to which students favored testing versus restudying, we used the mean proportion of test trials divided by total trials (study plus test) as our main dependent measure (see Table 2). Regardless of the total effort each participant put forth to learn, i.e., quitting after 20 trials versus 200 trials, this allowed us to see whether their study strategy "blend" favored testing. Scores closer to 1 indicate greater adherence to the recommendation to test, regardless of persistence in the task in general. This analysis yielded main effects of instruction, F(1, 250) = 8.43, p = 0.004, ηp² = 0.033, and context, F(1, 250) = 7.89, p = 0.005, ηp² = 0.031, and a three-way interaction among instruction, context, and difficulty, F(1, 250) = 5.00, p = 0.026, ηp² = 0.020. Although we did not predict a specific three-way interaction, we did expect that performance in unsupervised settings might differ from the supervised setting used by Ariel and Karpicke (2018), and specifically that students might comply with instructions less when not monitored. Therefore, to understand this result, we examined the mean proportion of test trials within each level of context separately, starting with a 2 (instruction) × 2 (difficulty) ANOVA on supervised participants only. This analysis showed a significant main effect of instruction, F(1, 133) = 6.21, p = 0.014, ηp² = 0.045, but no significant effect of task difficulty, F(1, 133) = 0.32, p = 0.574, ηp² = 0.002, nor a significant interaction, F(1, 133) = 2.12, p = 0.148, ηp² = 0.016. As seen in Table 2, when under supervision, participants favored testing over studying when given instructions to do so. In contrast, a 2 (instruction) × 2 (difficulty) ANOVA on unsupervised participants revealed no significant effects. Neither instruction, F(1, 117) = 2.89, p = 0.092, ηp² = 0.024, nor task difficulty, F(1, 117) = 0.52, p = 0.470, ηp² = 0.004, nor their interaction, F(1, 117) = 2.78, p = 0.098, ηp² = 0.023, yielded significant differences in how participants chose to study the word pairs.
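We cannot reproduce these analyses without the raw data, but as a hedged sketch, a main effect like that of instruction can be checked on simulated test-study ratios with SciPy's one-way ANOVA (for two groups, F is simply t squared; fitting the full 2 × 2 × 2 factorial would require something like statsmodels' `anova_lm`). All numbers below are invented.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

# Simulated test-study ratios in [0, 1]: instructed participants are
# assumed (for illustration only) to favor testing somewhat more.
control = np.clip(rng.normal(0.50, 0.15, 120), 0, 1)
instructed = np.clip(rng.normal(0.70, 0.15, 120), 0, 1)

# One-way ANOVA on the instruction factor, collapsing over the others.
F, p = f_oneway(control, instructed)
```

With a real dataset, the same call would be run on the observed ratios in each cell of the design rather than on draws from a random number generator.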
In summary, we found that instruction to test resulted in favoring testing for supervised participants regardless of task difficulty. Neither instruction nor task difficulty changed the behavior of unsupervised participants. Simply stated, students adhered to the recommendation to test when completing the tasks under supervision. Contrary to our prediction, task difficulty did not influence the instruction to test in supervised or unsupervised conditions.
Effort
We examined effort in two ways: the total number of trials completed during the study and self-report after the study (see Table 3). We ran a 2 (instruction) × 2 (context) × 2 (difficulty) ANOVA on the number of trials completed and found main effects of context and difficulty. Supervised participants used significantly more trials to learn the words than unsupervised participants (6.24 vs. 4.74), F(1, 250) = 10.49, p = 0.001, ηp² = 0.04. Participants given the hard task (recall Swahili) completed an average of 6.67 trials, whereas participants asked to recall English completed an average of 4.31 trials, F(1, 250) = 25.89, p < 0.001, ηp² = 0.09. Interestingly, we found that instruction to test did not increase the total number of trials participants completed (control 5.22 vs. instruction to test 5.76), F(1, 250) = 1.38, p = 0.241, ηp² = 0.005. As noted above, instruction did change the blend of participants' choices in that those with instruction to test tested more frequently and also reduced the number of study trials. There were no significant interaction effects.
To examine whether participants' self-reported effort varied across conditions, we conducted a 2 (instruction) × 2 (context) × 2 (difficulty) ANOVA on the level of effort participants reported putting forth when asked about this following the final recall task. The context in which participants completed the task (supervised vs. unsupervised) was a significant predictor of their reported effort. We found that, on a 5-point scale with 0 being the least ("no effort") and 5 being the most ("I tried as hard as possible"), supervised participants reported trying harder than unsupervised participants (M = 4.23, SD = 0.82 and M = 3.99, SD = 0.79, respectively), F(1, 250) = 5.83, p = 0.016, ηp² = 0.022. There were no effects of instruction or difficulty on this measure.
Recall accuracy
Although not a main focus of our study, we were also interested in whether those who favored testing over studying actually recalled more words. Because students in the control condition also engaged in testing, to verify that we replicated the testing effect more broadly, we examined the correlation between the proportion of times participants chose to test themselves and the proportion correct at final recall regardless of where the learning took place (supervised or unsupervised), which was significant, r(256) = 0.404, p < 0.001. Although, as noted above, instruction did not change behavior in the unsupervised context, testing helped all participants remember more regardless of supervision. This finding is consistent with the well-established benefit of the testing effect and did not depend on where students completed the task. In other words, there is a benefit to testing, even unsupervised, but instructing participants to test will increase their choice to test only when supervised.
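The correlation reported here is an ordinary Pearson r between each participant's proportion of test trials and their final recall accuracy; a sketch on invented per-participant values (the numbers below are illustrative, not the study's):

```python
from scipy.stats import pearsonr

# Hypothetical per-participant values: share of practice trials spent
# testing, and proportion of word pairs recalled on the final test.
prop_tested   = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
prop_recalled = [0.20, 0.30, 0.45, 0.55, 0.65, 0.90]

# Pearson correlation between testing frequency and recall accuracy.
r, p = pearsonr(prop_tested, prop_recalled)
```

In the study itself, the same computation would be applied to all 258 participants' ratios and recall scores pooled across contexts.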
Study 2
The methodology of Study 1 precludes us from determining whether participants were ignoring the instructions to test or if they were not reading the instructions. It is possible, for example, that unsupervised participants did not read the instructions at all, or they did read them but chose not to follow them. We designed a second study with the simple goal of determining whether participants read instructions in supervised vs. unsupervised contexts. To achieve this, our goal was to have the instructions make the task much easier (by giving them the answers within the instructions), so there would be a strong motivation to follow them if they were read.
Method

Participants
Participants (n = 114) were undergraduate students in introductory psychology courses who completed the study for course credit. As with Experiment 1, the majority of participants identified as female (77%) and were mainly first and second year students (86%). Participants completed the task either in a supervised (lab) or unsupervised location (at home, the library, etc.). In the control condition, 28 participants were supervised and 30 were unsupervised. Experimental condition participants were also supervised (n = 29) or unsupervised (n = 27).
Materials and procedure
We intended this study to be perceived by participants as challenging, so we presented 40 English-Swahili translations to participants via a Qualtrics survey and told them they would be asked to enter the Swahili translation for the English words. All participants were initially told to report to a laboratory for testing, but half the participants, chosen at random, were informed in an email that the session was over-booked and they were to complete the task on their own (at home, library, etc.). Other participants were given the task in the laboratory where the experimenter was nearby to supervise the session.
Participants in the control condition first saw a set of instructions telling them they would be attempting to learn 40 English-Swahili word pairs. The instructions asked them to do their best to learn the pairs so that they would be able to recall the Swahili word later and that they would be given the choice to practice before the final test. Next, the word pairs appeared on the screen for up to 3 min before participants were asked if they wanted to practice the translations. If they selected that option, 40 multiple choice questions followed in which the English word appeared and the participant could select its Swahili translation from four choices. Corrective feedback was given after each question. The participant was then asked if they would like to practice for a second round. Finally, a cued recall task ensued in which an English word was given and participants had to type in the correct Swahili translation.
Participants in the experimental (answer given) condition received the same sequence of tasks in the survey, but with key differences. First, the instructions told participants about the value of testing over restudying, using the same information and graphic that we used in Study 1. However, the last paragraph of the instruction screen read as follows:
Second, the real purpose of this study is to find out if you are reading the instructions. We'd actually like you to skip the practice session(s) and go directly to the final test. Instead of typing in the Swahili translation for the English word you're given, just type in the word 'bronco', in all lower case letters as it appears here, for all of the translations.
Thus, for participants in the answer given condition, a correct response, if they were reading and following the instructions, would be to say "no" to each question about whether they wanted to practice the translations, skipping those blocks entirely, and to enter "bronco" for the responses to each of the cued recall questions at the end of the survey. All participants were thanked and debriefed once they had completed the survey.
Results and discussion
We first examined whether those receiving instructions followed them. We found that no one in the control condition entered the word given ('bronco'), and so we are confident that anyone in the answer given condition who entered that word on the final test knew to do so from reading the instructions. We found that 71.4% of participants (40 of 56) did indeed enter the answer provided to them ('bronco') at least once. We are confident, then, that over 70% of participants read and followed the instructions, and conversely, that nearly 30% of the participants did not read the instructions.
A second measure of reading the instructions was whether or not participants engaged in any practice tests; those in the answer given condition were told to skip the practice tests, whereas those in the control condition were encouraged to practice. Using a Welch's two-sample t-test, we found a significant difference between the number of practice blocks completed in the two groups (out of 2 possible), with participants in the answer given condition completing fewer (M = 0.45, SD = 0.63) than those in the control condition (M = 0.97, SD = 0.46), t(100.19) = 5.02, p < 0.001, Cohen's d = 0.94, as we expected.
Recall, however, that our results suggest that 30% of the participants in the answer given condition did not read the instructions. If that is the case, then we would expect them to have engaged in more practice than those who followed our directive to skip the practice trials entirely. This is what we found. Using a Welch's two-sample t-test, we found that the 30% of participants in the answer given condition who did not follow the instructions completed significantly more practice blocks (M = 0.94, SD = 0.77) than the 70% who answered 'bronco' (M = 0.25, SD = 0.44), t(19.00) = 3.35, p = 0.003, Cohen's d = 1.10. Furthermore, we would also expect that the 30% of participants in the answer given condition who did not read the instructions, and therefore missed the directive to skip practice, would practice as much as participants who received the control instructions, which encouraged practice. This was also supported. The 30% of answer given participants who did not read the instructions completed as many practice blocks (M = 0.94, SD = 0.77) as the control group (M = 0.97, SD = 0.46), Welch's t(18.00) = 0.14, p = 0.891, Cohen's d = 0.04. We also analyzed the influence of supervision on rates of compliance in the answer given condition. We found that participants followed the instructions (i.e., used 'bronco') more often when supervised (79%) than unsupervised (63%), although not significantly so, χ²(1, N = 56) = 1.83, p = 0.176.
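The Welch tests and effect sizes above can be reproduced with SciPy; the sketch below uses invented practice-block counts rather than the study's raw data, and computes Cohen's d from the pooled standard deviation.

```python
import numpy as np
from scipy import stats

def welch_and_d(a, b):
    """Welch's two-sample t-test (unequal variances) plus Cohen's d."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch correction
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    d = (a.mean() - b.mean()) / pooled
    return t, p, d

# Hypothetical practice-block counts (0-2 possible) for two small groups.
followed = [0, 0, 0, 1, 0, 0, 0, 1]   # told to skip practice
control  = [1, 1, 1, 1, 2, 0, 1, 1]   # encouraged to practice

t, p, d = welch_and_d(control, followed)
```

On these invented counts the group means are 1.00 and 0.25, so the helper returns a positive t and a large d; with the study's actual per-participant counts the same call would yield the statistics reported above.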
We feel confident in concluding that nearly 30% of participants did not read the instructions, and we can extrapolate that these same rates would apply to the percentage of participants who read the instructions in the control condition as well. Reading instructions and following instructions are not the same, especially when there is a reason to avoid the instructions (e.g., they require the participant to do something perceived as hard or unpleasant). Using instructions that make the task much easier, this study demonstrates that while a large majority of participants do read the instructions, not all do.
Context matters
Our primary interest in this investigation was to determine whether or not we could get students to use an effective study technique, repeatedly testing themselves to learn new material, not only in a supervised setting but also when studying on their own, unsupervised. In short, we found that when students were informed about how useful testing is as a study technique, they did increase their use of it, but only when they completed tasks in a laboratory under supervision. When we gave them the same instructions about using testing, but in an unsupervised setting, they used testing about the same amount as those who saw no such information. The results of our second study show that this is likely due to participants reading but not wanting to follow the instructions, as testing is effortful, relative to studying. This outcome replicates and extends Ariel and Karpicke's (2018) finding that instructions to test can be useful in changing how students choose to learn, clarifying that there may be common situations in which such instruction will not be followed. Given the recent and dramatic increase in the need for instructors to create effective learning opportunities students can engage in on their own time, this is a useful, if disappointing, finding.
We also found that the context in which students completed the task mattered for the level of effort they put into it, regardless of what technique they picked to learn. For example, in Study 1, using the total number of trials that students completed to try to learn the word pairs as an objective measure of persistence, we found that students learning in the laboratory gave more effort than students learning on their own. This was true regardless of whether they had been given specific instructions to test.
Participants who completed the task in a supervised environment experienced a host of circumstances that likely differed in meaningful ways from those completing the task unsupervised. For example, when supervised, we asked all participants to turn off their phones and put them away. We also told them that we would be watching from another room, ensuring that they knew they were under supervision. We do not know at this point whether the mere presence of an authority figure, the academic setting itself, the lack of distractions, or even the effort invested to travel to a specific building at a specific time to participate were most influential or if each of these factors played some important role. We note, however, as shown by Study 2, that the majority of students followed the instructions regardless of supervision when the instructions made the task easier to complete.
In courses that are taken fully online without supervision, there may only be a few of these factors under an instructor's control. Trying to keep students from being distracted by other applications open on their desktops, notifications from other devices, or roommates and family members coming in and out of the room is likely a losing proposition. We suggest that instructors may have to rely on features of the task itself to keep students engaged or at least willing to go back to the task at a later time (which could be useful in and of itself due to the benefits of spacing out study sessions over time, e.g., Dunlosky et al., 2013). We consider some possibilities later in this discussion.
Students completing difficult tasks studied harder, but not smarter
We expected that telling students about the benefits of testing might be particularly useful for those who were given a more difficult task. Therefore, in Study 1, we asked participants in some conditions to recall words in a language unfamiliar to our participants, such as Swahili. We surmised that if the task was difficult, they might respond to that challenge by using a tried and true method we had just informed them about (for those in the instruction condition). They did not. Students who had to recall Swahili words did not add any more testing to the mix than students recalling English words, even if they had just read about the value of testing. This suggests that even when testing would be especially useful, as in the difficult task, students are still reluctant to do it because testing itself is effortful.
It is reasonable to think that an alternative outcome could have also occurred with the more difficult task. Students may have found test trials more aversive than study trials when trying to remember more difficult material due to greater failure rates during the learning phase. As shown by Vaughn and Kornell (2019), when participants are given only two options for learning, test or study, they typically favor studying by large margins. Vaughn and Kornell have shown that participants will choose testing, however, when there is an option to get a hint about the word's identity, presumably because this makes the testing event more likely to result in a success for the learner. Here, our participants do not appear to have gravitated more toward the study option to avoid failure. Rather, they kept the same ratio of study and test trials, and added more trials overall.
Given that students completing the difficult task were not avoiding testing by gravitating toward more study trials, it is perhaps all the more impressive that, in general, they persisted through more learning trials overall than students completing the easier task. That is, independent of what students actually did with a word pair (study it or test it), they did more of it. This suggests that students recognized they did not know the material yet and that they should keep at the task.
Students who tested more also remembered more
Regardless of instruction, context, or task difficulty, Study 1 provides further evidence for the benefit of testing: students who engaged in more testing remembered more information than those who tested less frequently.
Giving students the choice to study or test, as we did in Study 1, seems to be an important component of the benefit of testing. For example, Tullis et al. (2018) found that testing was more effective than restudying overall, but forcing participants to take a test did not enhance learning if they did not choose to be tested. Similarly, Vaughn and Kornell noted that "For one thing, what matters is how students choose to study, not what they think is best for learning" (Vaughn & Kornell, 2019, p. 2). Getting students to test more when they have the choice to study or test is trickier.
What could make students choose to test more?
How do we get students to choose to test? We surmise there are two distinct paths to get students to choose to test more. One is to persuade them to change their beliefs about testing by giving them instructions about its benefits, or even personalized feedback about how they have performed with testing versus studying in a task. For example, Tullis et al. (2018) found that students consistently judged that restudying would yield better recall than retesting, even though, across four experiments, restudying did not yield better recall in actual performance. However, students revised their beliefs about retesting following clear and specific feedback about how many restudied versus retested items they actually had remembered in previous testing sessions. So perhaps consistent feedback about performance on tested vs. studied material can help, but this requires a lot of input from instructors.
The second path does not require any metacognitive awareness of the value of testing, but relies instead on making the testing experience itself less aversive, perhaps even fun. This is the approach taken by Vaughn and Kornell (2019) in their study, conducted online using mTurk. They allowed participants to select an option to receive a hint (one or more letters) when they retrieved the answer. None of their participants elected the no-hint option, meaning that when given the option to restudy, test, or receive a hint, most people chose the less aversive hint option. In Experiment 2, when hints were not available, participants greatly preferred restudy over test trials (80% vs. 20%, respectively). So clearly, participants are motivated to test, but they also want to get the answer right. If participants can be motivated to test, e.g., through dynamic features of the publisher-provided testing tool itself, then persuading individual students of the value of testing may be unnecessary. Given our finding that instructions about the value of testing are more likely to be followed in supervised contexts, this second path offers a more "hands-off" approach for instructors who wish to help students improve their learning.
What may be needed at this point is a more direct test of how students manage various factors in self-regulated learning situations. Specifically, one could manipulate how effortful testing is (e.g., with difficult words, with hints), the social constraints of the situation (e.g., while an instructor is watching vs. not), and metacognitive information (instructions to test are given or not). We expect that instruction to test may not be helpful outside of the social constraints to do well (e.g., when alone) and may not be necessary when the task is fun (e.g., hints are used) or when the task is simple and testing is not warranted for learning. But when the task requires effort (e.g., no hints are given, the words are unfamiliar), supervision will increase the likelihood that students will follow instructions to test. We note, for example, that Ariel and Karpicke tested groups of 4-12 participants at a time in a laboratory setting and they found a significant and large effect of instruction on the decision to test. We also found a significant effect of instruction on testing when students were tested individually in a supervised lab context, but not when they were unsupervised. If participants in our unsupervised setting had been given an option to receive hints during testing, however, we expect this would have increased their use of the technique.
Limitations
One limitation of our study is that we do not really know what students were doing in the task, particularly those who were unsupervised, but even those who were supervised. For example, we observed students covering the screen on study trials, effectively making these test trials. Students may have chosen to test in this manner because it was less aversive than typing in an answer during a test trial and having to wait for feedback that it was wrong. If this "covert" testing occurred often, it would have caused us to underestimate the level of testing in which students engaged, weakening our confidence in the validity of this method for assessing study strategies, although we might be able to assume that students did this to an equal degree across all conditions. We note too that Vaughn and Kornell (2019) reported anecdotally that students engaged in this behavior in some of their laboratory experiments.
Second, we cannot determine why instructions did not work for unsupervised learners. Study 2 suggests that the majority of students, regardless of supervision, read the instructions. Following instructions depended on whether the instructions made the task easier (answer given) or harder (use testing). However, since this is exactly the situation in which quizzing tools are often used by instructors in their classes (studying outside of class, flipped classrooms, etc.), we can conclude that giving students instructions in unsupervised conditions may not be sufficient to change behavior, unless the instructions make the task easier.
Third, we were unable to conduct Study 1 using true random assignment. Study 1 participants were randomly assigned to instruction vs. control conditions; however, random assignment was not possible for the context and difficulty manipulations. Participants were not able to self-select which condition to join, decreasing the likelihood of selection bias. But because the conditions were run over a period of time, we cannot rule out the possibility of cohort effects having an influence on the results.
Fourth, we did not assess actual classroom learning. It is possible that, had students been given guidance about the value of testing from their regular instructor, in a setting where recalling the information could yield a higher course grade, they might have tested more often even when unsupervised. There is reason to doubt that this would be sufficient, however. As mentioned previously, Van Camp and Baugh (2014) asked students to use publisher-provided quizzing tools in their introductory psychology courses but found that requiring use of the tools did not increase their use. It is unclear whether more students would use these tools to test themselves if they were initially given information about the value of the testing effect in class and just prior to being asked to use the tool on their own.
Conclusion
In conclusion, our primary finding is that participants are more likely to follow instructions to test and they report trying harder only while supervised during in-person sessions. Our evidence suggests that the vast majority of participants are actually reading the instructions, but that whether instructions are followed depends at least in part on how difficult the task is they are being asked to do. If there is a single parameter to increase testing in unsupervised settings, we did not find it.
Significance
The ability to learn and remember content is a cornerstone of education, and improving this ability is important to students and their professors alike. There is ample evidence that students recall more information when they test themselves (e.g., quizzing), as opposed to rereading or restudying the information. These studies provide evidence that instructors can increase the amount of testing students do, but only under certain conditions. For example, when we tried to increase the number of times students used testing by giving them instructions about its benefits at the start of a task, students did test more, but only when their learning was supervised. When learning on their own, students who saw these same instructions did not increase their testing; they kept the same "test-study ratio" as those who were given no information about the usefulness of testing. This pattern stayed the same regardless of whether the task was easy or difficult. In a second study, we found that most participants did read and follow the instructions we gave them when these instructions provided the answer. Supervision by itself did not guarantee that participants would read or follow the instructions.
Haptic Amputation Under Endoscopic Guidance in Uveitis-Glaucoma-Hyphema Syndrome: A Case Report
Uveitis-glaucoma-hyphema (UGH) syndrome is a rare ophthalmic postoperative complication in which intraocular implants or devices like intraocular lenses (IOLs) produce chronic mechanical chafing to the adjacent uveal tissues and/or trabecular meshwork (TM), resulting in a wide spectrum of clinical ophthalmic manifestations ranging from chronic uveitis to secondary pigment dispersion, iris defects, hyphema, macular oedema, or spiked intraocular pressure (IOP). Spiked IOP is a result of direct damage to the TM, hyphema, pigment dispersion, or recurrent intraocular inflammation. UGH syndrome generally develops over a time course varying from weeks to several years postoperatively. Conservative treatment with anti-inflammatory and ocular hypotensive agents might be sufficient in mild to moderate UGH cases, but surgical intervention with implant repositioning, exchange, or explantation might be necessary in more advanced situations. Here, we report our challenge in managing a one-eyed 79-year-old male patient with UGH secondary to a migrated haptic, which was successfully managed by intraoperative IOL haptic amputation under endoscopic guidance.
Introduction
Uveitis-glaucoma-hyphema (UGH) syndrome was first coined by Ellingson in 1978 and it is a rare postoperative complication in which intraocular implants or devices with irregular placement or inappropriate design (e.g., intraocular lenses (IOLs)) produce chronic mechanical chafing to the adjacent uveal tissues (iris or ciliary body) and/or trabecular meshwork (TM), resulting in a wide spectrum of clinical manifestations ranging from chronic uveitis to secondary pigment dispersion, iris transillumination defects (TIDs), hyphema, macular oedema, or spiked intraocular pressure (IOP) [1][2][3][4].
Direct damage to TM, hyphema, pigment dispersion, or recurrent inflammation all contribute to repeated elevated IOP attacks, which may result in glaucomatous optic neuropathy [3,4].
UGH syndrome generally develops over a time course, varying from weeks to several years postoperatively [5][6][7].
The management of UGH syndrome involves a step ladder approach corresponding to the severity. Conservative treatment with anti-inflammatory and ocular hypotensive agents might be sufficient in mild to moderate cases, but surgical intervention with implant repositioning, exchange, or explantation might be necessary in more advanced situations [4,5,7,8].
In this article, we describe a unique case of UGH secondary to migrated haptic, which was successfully managed by intraoperative IOL haptic amputation under endoscopic guidance.
Case Presentation
A 79-year-old male presented to the emergency department of our hospital with complaints of mild to moderate ocular pain in the right eye (seeing eye) associated with a gradual decrease of vision over two years, which started getting worse in the last two months. These complaints were not associated with headache, nausea, or vomiting. His medical history was notable for hypertension on medications, including acetylsalicylic acid (aspirin) 81 mg once daily and valsartan (Diovan) 80 mg once daily.
His past ocular history included uneventful phacoemulsification with IOL implantation eight years ago in both eyes at a private centre. Corneal decompensation followed the cataract surgery in the left eye, and the patient then underwent two corneal graft surgeries that ultimately failed. Since then, the right eye has been considered the good-seeing eye. There was no history of previous ocular trauma, pre-existing glaucoma, or chronic use of ocular drops.
On initial examination, his uncorrected visual acuity (VA) was hand motion (HM) in both eyes with no improvement with pinhole. The IOP measured with the Goldmann applanation tonometer (GAT) was 39 mmHg and 8 mmHg in his right and left eye, respectively.
Slit lamp examination for the right eye (Figure 1) showed mildly injected conjunctiva and moderately diffuse corneal oedema with signs of anterior uveitis, including 2+ cells, flare, and pigments with 1 mm height of hyphema with no further view behind.
FIGURE 1: Slit lamp photo of the right eye at presentation
The grafted cornea in the left eye was diffusely oedematous with scarring, with no view behind. B-scan ultrasonography was unremarkable for both eyes. Ultrasound biomicroscopy (UBM) of the right eye revealed a subluxated IOL inferiorly touching the iris (Figure 2). The patient was diagnosed with right UGH syndrome secondary to a subluxated IOL. The patient was admitted to the ward and started on topical atropine sulphate 1% three times a day, prednisolone acetate 1% every four hours, and full anti-glaucoma drops, including brimonidine 0.2%, timolol maleate 0.5%, brinzolamide 1%, and bimatoprost 0.01%.
On the first day of admission, the patient noticed that ocular pain was less in severity but with a similar clinical picture and the IOP was still high at 34 mmHg.We decided to increase the frequency of steroid drops to every two hours and to add acetazolamide 250 mg tablet (Diamox) twice daily after getting medical clearance.
On the second day of admission, the patient symptomatically improved; he had less pain and his vision became more clear, with a VA of 20/400. His IOP went down to 28 mmHg, and a slit lamp exam revealed a less oedematous cornea, with findings otherwise similar to those on the day of admission. We decided to continue the same treatment plan with limited physical activity and maintaining head elevation during sleep.
On the third day of admission, the patient's vision deteriorated to HM despite a normal IOP of 18 mmHg; the hyphema height increased and the cornea became oedematous again, but no corneal staining was noticed.
We then decided to continue the same treatment plan and to prepare the patient for surgery (anterior chamber (AC) wash out +/-IOL repositioning, exchange, or removal) the next day after discussing with him all benefits, risks, and alternatives of this intervention.
Surgical technique
A temporal and nasal paracentesis was made using a microvitreoretinal (MVR) 23 blade (Alcon Laboratories, Inc., Fort Worth, TX). We injected preservative-free Mydrane (a combination of tropicamide 0.02%, phenylephrine 0.31%, and lidocaine 1%) intracamerally. A main wound was created temporally using a 2.2 mm keratome (Alcon Laboratories, Inc.). We injected an ophthalmic viscosurgical device (OVD) (dispersive viscoelastic) (Alcon Laboratories, Inc.) to maintain the AC, wash out the blood, and dilate the pupil. Using an endoscopic cyclophotocoagulation (ECP) probe (Endo Optiks, Little Silver, NJ), we went in and saw the migrated haptic at the sulcus while the other one was in the bag. Extensive fibrosis was noticed. We amputated the anterior haptic using micro-scissors (Synergetics, B+L, St. Louis, MO) and it was delivered out with McPherson forceps (Figure 3).
FIGURE 3: Amputated intraocular lens haptic
We performed endolaser treatment at 2.5 watts of power to 270 degrees of the ciliary body processes with a fair response. We washed out the OVD and closed the wounds with stromal hydration, suturing the main wound with 10-0 nylon.
On the first postoperative day (Figure 4A), the uncorrected VA in his right eye was counting fingers at one metre and the IOP was 20 mmHg on Diamox twice a day (BID). A slit lamp examination showed a less oedematous cornea with 1+ cells in the AC without hyphema, and the IOL was stable and well-centred in the bag. The patient was discharged home on atropine, steroid, timolol, and brimonidine drops in addition to Diamox tablet 250 mg BID. Two weeks postoperatively, the uncorrected VA was 20/200 with an IOP of 19 mmHg, and the cornea had become more clear, with a quiet AC and the IOL in place. One month postoperatively (Figure 4B), the uncorrected VA was 20/70 with an IOP of 11 mmHg, a clear cornea, quiet AC, and IOL in place with a good red reflex; the patient was advised to start tapering the steroid drops, stop the Diamox tablet, and continue topical anti-glaucoma drops with timolol and brimonidine. On his last examination, six months postoperatively (Figure 4C), the uncorrected VA was 20/40, the IOP was 14 mmHg, and a slit lamp exam showed a clear cornea, quiet AC, well-positioned IOL, and flat retina with 0.75 cupping.
Discussion
UGH syndrome or Ellingson syndrome was first coined by Ellingson in 1978 [1]. It is a rare postoperative complication in which intraocular implants or devices, mainly IOLs (either anterior or posterior IOLs) or iris prostheses, produce chronic mechanical chafing to the adjacent uveal tissues (iris and/or ciliary body) and/or TM, resulting in a wide spectrum of clinical findings including elevated IOP [1,3]. It is less often seen with devices such as glaucoma implants (e.g., Ahmed implant, Ex-Press shunts, iStent, and Hydrus Microstent).
Direct damage to TM, hyphema, pigment dispersion, and recurrent inflammation contribute to repeated elevated IOP attacks, which may result in glaucomatous optic neuropathy [4,5].
Factors such as irregular implant placement (e.g., IOLs implanted in the anterior chamber), inappropriate implant design (e.g., square-edged IOLs), or improper surgical technique (e.g., implanting a single-piece IOL in the sulcus) contribute to the incidence of UGH syndrome [6,7].
The incidence of UGH syndrome has sharply declined due to advancements in lens types, designs, surgical techniques, and the use of posterior chamber IOLs [4].
The diagnosis of UGH is clinical, based on patient history and slit lamp exam, and is supported by imaging.
The patients often present with blurry vision, intermittent ocular pain, redness, and photophobia, and all these symptoms fluctuate and correspond to the severity of the case [7].
UBM is a useful tool to aid in the diagnosis of UGH by visualization of IOL position and to comment on any contact with the iris or ciliary body, especially in those cases with difficulty seeing the IOL by slit lamp exam due to corneal oedema or hyphema, as in our case [12].
B-scan is also a helpful tool to rule out posterior segment pathology in cases with no view to the fundus [4].
UGH syndrome generally develops over a time course, varying from weeks to several years postoperatively and its management involves an approach corresponding to the severity, frequency of the signs and symptoms, lens type, position, and duration since primary surgery was done [6][7][8].
Topical treatment with anti-inflammatory and/or ocular hypotensive in addition to close observation might be enough in addressing mild to moderate cases, while surgical intervention is mandatory in advanced cases with longstanding inflammation, hyphema, and persistently high IOP [4,7,9,13].
UGH is more frequently encountered in cases of AC IOL implantation or sulcus implantation of single-piece lenses.
When the patient becomes symptomatic with signs and symptoms of UGH, it is recommended to either reposition the IOL or replace it with a more suitable IOL type or design (three-piece IOL) in a better position [11].
The optic of the IOL is in the capsular bag, but, occasionally, one of the haptics is unintentionally placed in the sulcus, or it migrates to the sulcus with progressive postoperative capsular fibrosis leading to UGH, as in our case [9].
In the early postoperative period, this can be managed simply by reinserting the haptic in the capsular bag after inflating the bag with OVD. Capsular fibrosis with extensive adhesion precludes haptic repositioning in longstanding cases, making either IOL removal or exchange the only solution [7,9].
Due to the risk of capsular rupture, uveal tissue injury, corneal endothelial dysfunction, or a dislocated capsular bag, especially in the absence of clear visualization of the AC, iris, and IOL, we found that simply amputating the migrated haptic at its junction with the optic under endoscopic guidance with the ECP probe was the best choice: less time-consuming, less traumatic, and with an encouraging outcome.
Conclusions
UGH is a rare ophthalmic postoperative condition with a wide spectrum of clinical manifestations that develop over a time course, varying from weeks to several years postoperatively and ranging from chronic uveitis to secondary pigment dispersion, iris TIDs, hyphema, macular oedema, or uncontrolled IOP.
The management of UGH syndrome involves a step ladder approach corresponding to the severity. Conservative treatment might be sufficient in mild to moderate cases, but surgical intervention might be necessary in more advanced situations. In cases of UGH induced by a haptic migrated into the sulcus with no perioperative view of the implanted IOL, it can be managed safely and successfully by intraoperative IOL haptic amputation under endoscopic guidance.
FIGURE 2: B-scan of the right eye showing flat retina with no vitreous haemorrhage (A). UBM of the same eye showing subluxated IOL touching the iris (B). UBM: ultrasound biomicroscopy; IOL: intraocular lens.
Characteristic waffle-like appearance of gastric linitis plastica: A case report
Linitis plastica is a gastric cancer of diffuse histotype that presents in the fundic gland area, and is characterized by thickening of the stomach wall and deformation of the stomach, resulting in a leather bottle-like appearance. A 66-year-old female was admitted to Kagawa University Hospital (Kagawa, Japan) with epigastric pain. X-ray examination revealed reduced gastric distension and deformation of the stomach, which exhibited a leather bottle-like appearance. Endoscopy indicated a depressed lesion in the gastric antrum, and abnormal folds, which crossed to form a waffle-like appearance in the upper gastric body. Analysis of biopsy specimens from the depressed lesion revealed a poorly differentiated adenocarcinoma. Morphological changes in the gastric folds indicated that the tumor had invaded the upper gastric body, therefore, a total gastrectomy was performed. Subsequent pathological findings demonstrated that the tumor had spread from the primary lesion to the upper gastric body. Therefore, the present report recommends that the diagnosis of the spread of linitis plastica-type gastric cancer should include assessments of the primary lesion, as well as evaluation of morphological changes in the gastric folds.
Introduction
Linitis plastica is defined as a gastric cancer of diffuse histotype (1) that presents in the fundic gland area, and is characterized by thickening of the stomach wall resulting in deformation of the stomach and a leather bottle-like appearance. Since the majority of linitis plastica patients are diagnosed at an advanced stage of the disease, poor clinical outcomes are commonly observed, regardless of the extent or type of primary resection that may have been performed (2). Linitis plastica-type gastric cancer tumors tend to infiltrate the submucosa and muscularis propria of the gastric wall; thus, superficial mucosal biopsies may be falsely negative, and detecting the extent of the spread and depth of linitis plastica-type gastric cancer can be difficult endoscopically. The present report describes a patient with linitis plastica-type gastric cancer, in whom the characteristic morphological changes in the folds of the gastric body facilitated the determination of the spread and depth of tumor invasion. Written informed consent was obtained from the patient.
Case report
A 66-year-old female was admitted to Kagawa University Hospital (Kagawa, Japan) with the complaint of intermittent epigastric pain that was exacerbated by fasting. The patient had a history of hypertension and obstructive sleep apnea syndrome. Physical examination upon admission revealed no anemia (as assessed by conjunctival pallor), jaundice or pulmonary abnormalities. On palpation, the abdomen of the patient was soft and flat, with no areas of tenderness. Furthermore, pretibial edema was not observed and superficial lymph nodes were not palpable. Serum concentrations of the tumor markers, carcinoembryonic antigen and carbohydrate antigen 19-9, were within the normal ranges (<5 ng/ml and 0-37 U/ml, respectively). However, X-ray examination indicated reduced gastric distension, as well as deformation of the stomach, which exhibited a leather bottle-like appearance (Fig. 1). In addition, the lower gastric body demonstrated luminal narrowing and increased rigidity, with a depressed lesion (longest diameter, 20 mm) at the posterior wall of the gastric antrum, and abdominal computed tomography revealed thickening of the antrum. No lymphadenopathy was observed. Additionally, endoscopy revealed an ulcerative lesion covered by a white necrotic substance on the posterior wall of the antrum (Fig. 2A) and severe luminal narrowing, with poor distension of the lower gastric body. The upper gastric body, however, demonstrated good extension when compared with the middle and lower gastric bodies. The folds of the gastric antrum were flexible, stretched smoothly, and crossed one another, resulting in a waffle-like appearance on the greater curvature of the upper gastric body (Fig. 2B). Analysis of biopsy specimens from the ulcerative lesion revealed a poorly differentiated adenocarcinoma containing signet ring cells; however, adenocarcinoma was absent from biopsy specimens obtained from the abnormally crossed folds.
Due to the morphological changes that occurred in the gastric folds, creating the waffle-like appearance, it was determined that cancer cell invasion of the upper gastric body was likely, and a total gastrectomy was performed. The resected specimen revealed the wall thickening and crossing folds of the gastric body (Fig. 3A) that were previously observed by endoscopy. Microscopic examination revealed that cancer cells had spread throughout the upper gastric body and had infiltrated the vessels in the submucosa, predominately into the muscularis propria, and marginally into the serosa (Fig. 3B). Immunohistochemical examination revealed positive staining for MUC5AC and MUC6 (gastric marker mucins) and negative staining for MUC2 and CD10 (intestinal marker mucins), indicating gastric-type mucin expression. The final diagnosis, according to the Japanese Classification of Gastric Carcinoma (3), was T4aN3aM0, clinical stage IIIC advanced gastric cancer. The patient was discharged 17 days after surgery without complications and commenced S-1 adjuvant chemotherapy (80 mg/day, days 1-28). Following three courses of chemotherapy over 16 weeks, treatment was terminated due to patient fatigue. The patient has survived and is without disease recurrence 14 months after surgery.
Discussion
The Japanese Classification of Gastric Carcinoma has defined Type IV diffuse infiltrative gastric cancer as tumors lacking marked ulceration or raised margins, but with thickened and indurated gastric walls, and poorly defined margins (3). Type IV carcinoma, or scirrhous gastric carcinoma, is therefore a diffuse infiltrating adenocarcinoma, which is predominantly poorly differentiated and lacking a circumscribed lesion. Tumor involvement of the entire stomach wall results in a condition termed linitis plastica (4). Despite improved treatment outcomes for other types of gastric carcinoma in Japan, the prognosis of patients with linitis plastica remains particularly poor (5,6).
Endoscopic brush cytology and biopsy techniques have an overall sensitivity of 95-98% in the detection of gastric cancer (7,8). However, the accuracy of endoscopy ranges widely, depending on the gross tumor growth pattern and the anatomic location of the tumor (9), with a sensitivity of only 33-73% observed in linitis plastica patients (9)(10)(11)(12). The predominant reason for the poor sensitivity of endoscopy in the detection of scirrhous gastric cancer is the healthy appearance of the mucosa.
Although the invasion and spread of linitis plastica-type gastric cancer is difficult to diagnose prior to surgery, the presence of depressed lesions and marginal changes in the gastric folds may be indicators for diagnosis. Endoscopic findings that are characteristic of scirrhous gastric cancer include poor distension of the gastric walls, morphological changes in the gastric folds and the presence of primary lesions (13). The differential diagnosis of thickened gastric folds includes Ménétrier's disease, hypertrophic gastritis, malignant lymphoma, rare types of aberrant pancreas, syphilis and cytomegalovirus gastritis (14). Morphological changes in the gastric folds of linitis plastica patients include the presence of giant, swollen, straight, furrowed and crossed folds. These morphological changes are important for distinguishing linitis plastica from other diseases. Further diagnostic indicators of linitis plastica include poor distention of the stomach, and a leather bottle-like appearance as observed by gastrointestinal (GI) series, indicating that an upper GI series may be superior to endoscopic examination in the diagnosis and localization of these types of lesions (15). By contrast, certain linitis plastica-type gastric cancer patients do not demonstrate the typical deformity, poor distension, or irregular folds via upper GI series, however, exhibit focal alterations in infiltrated areas (16).
Linitis plastica is also characterized by poorly differentiated tumor cells that diffusely infiltrate the gastric wall, thus leading to reactive fibrosis (17). Early stage, undifferentiated [0-IIC morphological type (3)], linitis plastica-type gastric cancer is present in the mucosa of the gastric wall, progressing by diffuse infiltration into the submucosal layer prior to ulceration of the primary lesion, with the cancer cells extending beyond the fibrous tissue. In patients with mild fibrosis, the stomach distends well upon air insufflation, with the gastric body occasionally exhibiting a waffle-like appearance; these observations may be important in the accurate diagnosis of the spread of cancer cell invasion.
Recently, we performed a bloc biopsy, using the submucosal endoscopy mucosal flap, to diagnose submucosal tumors (18)(19)(20). This technique led to histopathological diagnosis without any complications, such as hemorrhage or dissemination of the tumor. In the future, this method may prove valuable for the accurate diagnosis of linitis plastica-type gastric cancers.
In conclusion, accurately diagnosing the spread and depth of linitis plastica-type gastric cancer requires examination of the morphological changes in the gastric folds, as well as examination of the primary depressed lesion. The use of novel methods, such as bloc biopsy, may improve the accuracy of linitis plastica-type gastric cancer diagnosis.
ETAPOD: A forecast model for prediction of black pod disease outbreak in Nigeria
Food poisoning and environmental pollution are products of excessive chemical usage in agriculture. In Nigeria, cocoa farmers apply fungicides frequently to control black pod disease (BPD); this practice is life-threatening and lethal to the environment. The development of a warning system to detect BPD outbreak can help minimize excessive usage of fungicide by farmers. Eight models (MRM1-MRM8) were developed and five (MRM1-MRM5) selected for optimization and performance checks. MRM5 (ETAPOD) performed better than the other forecast models. ETAPOD had a 100% performance rating for BPD prediction in Ekiti (2009, 2010, 2011 and 2015) with model efficiency of 95-100%. The performance of the model was rated 80% in 2010 and 2015 (Ondo) with model efficiency of 85-90%, 70% in 2011 (Osun) with model efficiency of 81-84%, 60% in 2010 (Ondo and Osun) and 2015 (Osun) with model efficiency of 75-80%, 40% in 2009 (Osun) with model efficiency of 65-69%, and 0% in 2011 (Ondo) with model efficiency between 0 and 49%. ETAPOD is a simplified BPD detection device for the past, present and future.
Introduction
Global warming, food poisoning and environmental pollution are current problems emanating from excessive exposure to and combustion of chemical substances. The management of BPD is a major challenge to cocoa farmers in Nigeria, as they frequently apply fungicide to safeguard their crops without consideration for the safety of life and the environment [1]. BPD is more established in West Africa than in any other part of the world [2]. Adegbola [2], in his review of Africa, estimated the average occurrence of the disease as 40% in several parts of West Africa and up to 90% in certain places in Nigeria [3]. In Nigeria, cocoa export made over 80% of GNI before the 1960s [3]; it was reduced to 37.9% in 1997 [4] due to BPD infection and other factors, yet cocoa export remained more profitable than rubber, palm fruits, groundnut, yam, cassava, maize, millet, and sorghum [3]. Cocoa yield decline started in 1971-1972 (255,000 to 241,000 Mt) and continued through 1978-1979 to 1986 (58,700 Mt), with an increase from 165,000-180,000 Mt between 2000 and 2003 [5,6]. The increase in cocoa production was entirely due to the expansion in production area rather than increases in cocoa yield [7]. Global climate change is one of the major factors responsible for the inconsistent fluctuations in BPD outbreak experienced annually worldwide, due to its influence on the physiology of the pathogen(s), the suitability of the environment for microbial activities and the susceptibility of the host plant(s) to microbial attack [8]. The irregular rainfall pattern and inconsistent mode of BPD occurrence in Nigeria make it nearly impossible to control the disease effectively. The efficacy of the existing management strategies (cultural, physical, biological and chemical control measures) is fast declining; as such, increased fungicide dosages and frequent applications are methods devised by indigenous cocoa farmers to protect their crops from the disease.
Hence, there is an urgent need for a modern approach to the control of BPD in West Africa, so as to reduce the level of exposure of cocoa pods to chemicals and also decrease the amount of chemical residues in the environment [9].
Plant disease forecasting (an advance disease-management strategy) advocates the use of a plethora of management techniques directed by a rational decision-making system, such that indigenous cocoa farmers worldwide will be duly informed whenever a BPD outbreak is suspected, and the intensity of the outbreak quantified, to avoid excessive use of preventive chemicals. This research seeks to develop a forecast model for BPD prediction so as to provide information on its outbreak and the areas suspected to be under severe invasion. In the near future, the quantity of preventive control measures required to combat the disease will be determined using simple computer algorithms, in order to minimize fungicide misuse, reduce the level of chemical pollutants in the environment, increase cocoa productivity, and reduce the risk of chemical poisoning or death associated with consumption of toxic chemical substances.
The area of focus
The area of research focus was the Western part of Africa, with specific preference to Nigeria (Fig 1). The Southwestern region of Nigeria was used as a case study for result validation; this region is clearly described in Fig 2. The co-ordinates of the area of focus were determined using the BlackBerry mobile Global Positioning System (GPS) device (version 6.0) and a mobile satellite GPS receiver, model GARMIN Etrex 10, obtained from the Department of Botany, Faculty of Science, University of Ibadan, Ibadan, Nigeria. Cocoa-producing States in Nigeria are shown in Table 1.
Black pod disease (BPD) data
Documented reports of BPD outbreaks within Southwestern Nigeria were obtained from the Cocoa Research Institute of Nigeria (CRIN), Ìdí-Ayunrė, Ibadan, Oyo State, Nigeria and the report of Lawal and Emaku [10]. The total data collected spanned from 1985 to 2014. These served as secondary data.
Meteorological data
Weather data from 1985 to 2016 within Southwest, Nigeria were also collected from the report of Lawal and Emaku [10], National Bureau of Statistics (NBS) Ibadan, Oyo State, the Meteorological Station of Cocoa Research Institute of Nigeria (CRIN), Ìdí-Ayunrė, Ibadan, Oyo State, Nigerian Meteorological Station (Nimet), and the Department of Geography, University of Ibadan, Ibadan, Oyo State, Nigeria. These were also classified as secondary data.
Data analysis
Qualitative data were represented as charts and graphs plotted using SPSS, version 20.0 (32-bit), while the analysis of variance was carried out using COSTAT 6.451 software. The homogeneity of means was determined using Duncan's Multiple Range Test (DMRT). The proposed forecast model(s) were templates of multiple regression equation(s) developed from the meteorological and BPD data (secondary data). The models were designed using Minitab 16.0 software and programmed on a Microsoft Excel 2007 worksheet for easy access. Model selection was aided by Pearson's Product Moment of Correlation (PPMC) R-Sq, the adjusted coefficient of correlation of the developed models (R-Sq_adj), the standard error of regression (SER) and the root mean square error of prediction (RMSE_pred). The error of BPD prediction was determined using E = (Y − Ŷ)².
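To make the selection statistics concrete, the sketch below computes R-Sq, R-Sq_adj, the RMSE and the summed squared prediction error E = (Y − Ŷ)² for one candidate model. The function name `selection_metrics` and the observed/predicted incidence values are hypothetical, for illustration only.

```python
import numpy as np

def selection_metrics(y, y_hat, n_predictors):
    """R-squared, adjusted R-squared and RMSE for one candidate model."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    ss_res = np.sum((y - y_hat) ** 2)          # sum of E = (Y - Yhat)^2
    ss_tot = np.sum((y - y.mean()) ** 2)
    r_sq = 1.0 - ss_res / ss_tot
    r_sq_adj = 1.0 - (1.0 - r_sq) * (n - 1) / (n - n_predictors - 1)
    rmse = np.sqrt(ss_res / n)
    return r_sq, r_sq_adj, rmse

# Hypothetical observed vs. predicted annual BPD incidence (%)
y     = [40, 55, 62, 30, 48, 70]
y_hat = [42, 50, 60, 33, 47, 68]
r_sq, r_sq_adj, rmse = selection_metrics(y, y_hat, n_predictors=3)
```

A model with the highest R-Sq_adj and the lowest RMSE_pred would be retained, which is the role these statistics play in the selection step above.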
Model validation
The data used for confirmation of the predicted BPD outbreak by the developed forecast model was obtained from the research work of Oyekale [11], [12]. The template for validation (accuracy check) was stated in Table 2.
Results
The BPD function was developed using a simple mathematical rule. Then:

D = F(H, P, E) = a(H, P, E) + b (1)

In any case, the influence of man and of vectors (ants, termites, rodents, etc.) serves as a constant in the equation, because they influence the spread of BPD in the field, coupled with the timely combination of the key factors responsible for BPD development.
Therefore, the development of a BPD forecast system for cocoa required an equation encompassing all the predictors necessary for the disease's development. An example of such a model was given thus: In any case, the individual predictors were tested against the response variable to ascertain their role(s) in black pod disease outbreak.
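As an illustration of fitting such a multiple-regression template, the sketch below regresses a response D on three weather predictors by ordinary least squares, with the constant b entering as an intercept column. The mapping of the symbols to variables (H as humidity, P as rainfall, E as temperature) and all of the monthly values are assumptions made for demonstration; they are not the paper's data or coefficients.

```python
import numpy as np

# Hypothetical monthly predictors: H = humidity (%), P = rainfall (mm),
# E = temperature (C); D = observed BPD incidence (%).  Invented data.
H = np.array([78, 82, 85, 70, 90, 75, 88, 80], dtype=float)
P = np.array([120, 200, 240, 60, 300, 90, 260, 150], dtype=float)
E = np.array([27, 26, 25, 30, 24, 29, 25, 27], dtype=float)
D = np.array([35, 52, 60, 18, 72, 25, 66, 44], dtype=float)

# Design matrix with an intercept column playing the role of the constant b
X = np.column_stack([H, P, E, np.ones_like(H)])

# Ordinary least-squares fit: coeffs = (a_H, a_P, a_E, b)
coeffs, *_ = np.linalg.lstsq(X, D, rcond=None)
D_hat = X @ coeffs          # fitted BPD incidence
```

Testing each predictor against the response, as described above, amounts to examining the sign and magnitude of its fitted coefficient and its correlation with D.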
Rainfall and average relative humidity had a positive correlation with BPD outbreak, i.e., r = 0.445 and 0.477 (Fig 4), and r² = 0.105 (Fig 5) and 0.295 (Fig 6), respectively. The average temperature, sunshine duration and the year of observation had a negative association with BPD outbreak in Southwest Nigeria (r = −0.420, −0.364 and −0.018 (Fig 4), and r² = 0.265 (Fig 7), 0.360 (Fig 8) and 0.035 (Fig 9), respectively). It was, however, observed that there was no relationship between the locations of cocoa farms (Fig 10), the specific period (month) when the disease was observed (Fig 11) and BPD outbreak in Nigeria.
The weather pattern of Southwest, Nigeria and how it affects BPD development
The weather pattern for Southwest Nigeria in the late 1900s (20th century) showed that the height of rainfall across the four States investigated was between the months of March and October from 1991 to 1995, suggesting the possibility of infection within these periods (Table 3). Phytophthora megakarya thrives better between 20°C and 30°C; therefore, the specific periods of the year that favoured such temperature values in Ogun, Ondo, Osun and Oyo States were June, July, August, and September in 1991 to 1995 (Table 4). On the contrary, the remaining months were less favourable (Table 5). A relative humidity value of 75% and above favoured the establishment of BPD; therefore, the periods of the year that had high relative humidity were March through October, from the early morning readings taken from 1991 to 1995, suggesting the possibility of infection within these periods also (Table 6). Judging by the trend of the afternoon readings, the periods of June through September across all the years favoured BPD proliferation (Table 7). These periods possibly served as an interlude for proliferation and spread of the pathogen, leading to possible infection of predisposed cocoa plants, judging from the BPD occurrence report given by the Cocoa Research Institute of Nigeria (CRIN).
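The favourable windows described above (measurable rainfall, temperature in the 20-30°C band, and relative humidity of at least 75%) can be phrased as a simple monthly screening rule. The sketch below does exactly that; the function name and the sample weather readings are hypothetical.

```python
def bpd_risk_month(rain_mm, temp_c, rh_percent):
    """Flag a month as favourable for P. megakarya when all three
    conditions from the text hold: measurable rainfall, a temperature
    between 20 and 30 C, and relative humidity of at least 75%."""
    return rain_mm > 0.0 and 20.0 <= temp_c <= 30.0 and rh_percent >= 75.0

# Hypothetical monthly readings: (rainfall mm, mean temp C, mean RH %)
months = {
    "June":     (210.0, 25.5, 86.0),
    "December": (5.0,   33.0, 55.0),
}
risk = {name: bpd_risk_month(*vals) for name, vals in months.items()}
```

A rule of this kind only flags windows of suitability; the regression models developed below are what quantify the expected intensity of an outbreak.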
Development of prediction models for black pod disease in Nigeria
Several models were developed to predict BPD outbreak in Southwest, Nigeria.
Model selection
The post-hoc analysis conducted showed that MRM 5 was the preferred model for BPD prediction, followed by MRM 4.
Prediction of annual BPD outbreak by ETAPOD and confirmation of forecast results
The annual BPD outbreaks for Ekiti, Ondo and Osun States (Southwest, Nigeria) were used to test the developed BPD forecast model (ETAPOD); the 2009 predictions for Ekiti, Ondo and Osun States are presented in Table 9.
The overall assessment of the output quality of MRM 5 forecast model (ETAPOD)
The results are presented in Table 10.
Accuracy of ETAPOD in BPD prediction
The results are presented in Table 12.
The probability of obtaining accurate BPD predictions
The probability of obtaining accurate results for BPD prediction was very high in Ekiti and Ondo States, but it was not consistent in Osun State (Table 13). The probability range for obtaining good results in Ekiti was 0.93 ≤ P ≤ 0.95, whereas it was 0.50 ≤ P ≤ 0.83 in Ondo State. In Osun State, the value ranged disappointingly over 0.00 ≤ P ≤ 0.71 (Table 13).
Weather survey in line with BPD outbreak in Southwestern Nigeria
The weather report in the early 1990s for Southwestern Nigeria showed that there was recurrent rainfall within the months of March through October from 1991 to 1995. Also, ambient temperature was low during the day and at night, and there was much saturated water vapour in the air across the four States investigated within the same period. March to October happen to be the most productive periods for cocoa production in Southwest Nigeria; therefore, the observations noted give an indication of the possibility of infection within these periods. This favourable weather pattern for black pod disease infection was earlier reported by Akrofi [14].
BPD prediction and data validation
The MRM 5 BPD forecast model (ETAPOD), selected as the best-fitted model for the prediction of BPD outbreak in Nigeria, gave results that accurately quantified annual BPD outbreak in Ekiti and Ondo States, but inaccurately described the situation in Osun State. This is a result of the credibility of the data fed into the model. The observation made was in accordance with the findings of Luo [15], who also designed a forecast model for the prediction of foliar diseases of winter wheat caused by Septoria tritici across England and Wales, and whose predictions for the disease were seemingly not 100% accurate. The error of BPD prediction was very low in Ekiti, whereas it was on the high side in Osun. The disparity in the credibility of the predicted outcome is solely due to the quality of the data fed into the system. These lapses were indeed identified by Luo [15], who gave a few recommendations on how a forecast system can be improved in order to obtain quality forecast results. The level of prediction accuracy was defined as 0.0% ≤ Accuracy Level < 100%. This was also identified by Luo [15], who recognized the fact that no forecast system can be 100% accurate at all times and in all instances.
Recommendation
ETAPOD harnesses several potentials and possibilities that can be improved upon to obtain excellent results. The accuracy of the warning system developed for the prediction of black pod disease (ETAPOD) can be perfected if:

1. Weather parameters are obtained from meteorological stations situated on the farm;
2. Cocoa production within the locality is consistent;
3. The type of cropping system employed can be determined;
4. Cocoa is the major crop cultivated on the piece of land;
5. Advanced digital image analysis is used to improve measurement precision of disease prevalence and severity.

(Table 10 key: "+" fairly good prediction; "++" good prediction; "+++" extremely good prediction; "−" poor prediction.)
Conclusion

ETAPOD harnesses the potential to improve the functionality of other existing management strategies for the control of BPD in Nigeria by providing timely information on its outbreak and detecting areas under severe attack (AUSA), thereby discouraging fungicide misuse among local cocoa farmers. ETAPOD is unique in the sense that its primary function is not geographically restricted. Also, ETAPOD can be adapted to provide optimum results anywhere needed in Nigeria, Africa and around the world. Its ability to provide a qualitative and quantitative description of BPD pressure makes it superior to other forms of BPD control strategies in use. Therefore, ETAPOD is a pertinent tool that can effectively minimize the prevalence of BPD in Nigeria with minimal chemical application, decreasing the risk of chemical poisoning and increasing the production of healthy cocoa products nationwide. This is the surest and fastest way to ensure the sustainability of cocoa production in Nigeria and the world at large.
Radial Kick in High-Efficiency Output Structures
We have developed an analytical approach that predicts radial oscillation near the aperture of a pillbox cavity. In addition, it provides natural criteria for the design of a tapered guiding magnetic field in the output section of a relativistic klystron amplifier, as well as that of a travelling wave tube, in a method that is self-consistent with the dynamics of the electrons.
Introduction
Periodic structures play an important role in the interactions of electrons with waves, since they support harmonics of phase-velocity that are smaller than c, and with an adequate design, this velocity can be set to be equal to the average velocity of the electrons. In particular, in extraction structures, as the electrons interact with the wave and lose energy, they slip out of phase and consequently, the interaction is degraded. In order to avoid this situation, the phase velocity of the wave has to be adjusted, and the geometry change that is associated with this process should be designed for minimum reflections, otherwise the system oscillates. In a similar way, in photo-injectors, electron bunches are accelerated from zero velocity to virtually the speed of light in a relatively short distance (typically 2.5-5.5 periods) and therefore, the design needs to account for the accelerating bunch, such that the latter experiences the maximum radio frequency (rf) electric field.
At a given frequency and in single-mode operation, the electromagnetic wave is characterized by a single wave-number k, and quantities such as phase velocity, group velocity, and interaction impedance are well-defined. In principle, if the structure is no longer periodic, the field cannot be represented by a single wave-number, except when the variations are adiabatic, in which case these characteristics are assumed to be determined by the geometry of the local cell. Adiabatic perturbations in the geometry may improve the efficiency from the level of a few percent in uniform structures to a level of 30%. However, one cannot expect to achieve 60-80% efficiency by moderate variation of the structure, bearing in mind that in contrast to accelerators, where these changes occur over many wavelengths, in traveling-wave extraction structures these changes should occur within a few wavelengths.
An abrupt change of geometry dictates a wide spatial spectrum, in which case the formulation of the interaction in terms of a single wave with a varying amplitude and phase is inadequate. In fact, the electromagnetic field cannot be expressed in a simple (analytic) form if substantial geometric variations occur from one cell to another. To be more specific: in a uniform or weakly tapered disk-loaded waveguide, as an example, the beam-wave interaction is analyzed, assuming that the general functional form of the electromagnetic wave is known; e.g., A(z) cos[ωt − kz − φ(z)], and the beam affects the amplitude A(z) and the phase, φ(z). Furthermore, it is assumed that the variation due to the interaction is small, being on the scale of one wavelength of radiation. Both assumptions are not acceptable in the case of a structure that is designed for high-efficiency interaction. To emphasize this difficulty even further, we recall that a non-adiabatic local perturbation of geometry affects global electromagnetic characteristics; this is to say that a change in a given cell affects the interaction impedance or the group velocity several cells before and after the point where the geometry was altered.
To overcome these difficulties, an analytical technique has been developed in order to design and analyze quasi-periodic metallic structures. The method relies on a model consisting of a cylindrical waveguide, to which a number of pill-box cavities and radial arms are attached. In principle, the number of cavities and arms is arbitrary. The boundary condition problem is formulated in terms of the amplitudes of the electromagnetic field in the cavities. The elements of the matrix, which relates these amplitudes with the source term, are analytic functions, and no a-priori knowledge of the functional behavior of the electromagnetic field is necessary.
The study regarding travelling wave tube (TWT) output structures was triggered by research conducted at Cornell University by Nation et al. during the 1970s, 1980s, and 1990s; the references are listed chronologically, and thus, in order of development. Power levels in excess of 200 MW were generated at the X-band in a 50 MHz bandwidth. These power levels were accompanied by gradients larger than 200 MV/m, and no rf breakdown was observed experimentally; the pulse duration was less than 100 ns. However, for further increases in the power levels, it was necessary to increase the volume of the last two or three cells, in order to minimize the electric field on the metallic surface. The system then becomes quasi-periodic. To envision the process in a clearer way, let us assume that 80% efficiency is required from our source. If the initial beam is not highly relativistic, which is the case in most systems, such an efficiency implies a dramatic change in the geometry of the structure over a short distance. Specifically, for a 500 keV beam, the initial velocity is v_0 ∼ 0.86c, and an 80% efficiency would imply a phase velocity of 0.55c at the output. This corresponds to a 36% change in the phase velocity, and a similar change will be required in the geometry, which is by no means an adiabatic change when it occurs in one period of the wave.
Relying on our experience, there are several difficulties that are associated with an extraction section based on a quasi-periodic traveling-wave structure: (i) reduction of the reflections, primarily at the output end of the structure, in order to maintain a clean spectrum and to avoid oscillations; (ii) tapering of the output section to avoid breakdown; (iii) compensation for the decrease in the velocity of the electrons when maintaining a resonance condition; (iv) adaptation of the guiding magnetic field to the variation in current density of the electron beam; (v) accounting for the transverse oscillation of the electrons, in order to circumvent them from impinging upon the metallic structure. While the first three items were investigated in Chapter 6 of Reference [41], it is our goal in this study to consider the last two analytically; see also [28,31,32]. They were triggered by a new relativistic klystron amplifier (RKA) design by the first author (HH). The S-band device is driven by an 850 keV electron annular-beam (8 kA, radius 2.8 cm, and thickness 3 mm). The guiding magnetic field has a typical intensity of 1 T. In Figure 1, the top frame shows (not up to scale) the schematic of the configuration, and subsequently, we will need the cavity parameters: the drift tube radius is 3 cm, the gap width is 2.1 cm; the electric field in the gap of the output cavity is of the order of 20 MV/m. Originally, the RKA was developed over more than two decades by Moshe Friedman at the Naval Research Laboratory, and renewed interest was triggered by the possibility of designing a high-efficiency device; note that the references are presented in chronological order.
Formulation of the Model

Our goal is to examine the radial motion of the electrons in the vicinity of a cavity, and in the presence of a guiding magnetic field (B_0); the geometric parameters of the cavity and the beam are revealed in Figure 2, which illustrates the output from CHIPIC [83].

Figure 2. Schematic description (not up to scale) of a pillbox cavity attached to a waveguide. R_w is the radius of the waveguide; R_b is the annular-beam average radius, and ∆_r represents its thickness; g is the gap. The system is assumed to be azimuthally symmetric. Not shown is the guiding magnetic field of intensity B_0 along the z-direction.
In the gap of the cavity, we assume the existence of a uniform-in-space, steady-state (exp jωt) in time, longitudinal electric field of complex amplitude E_0; thus E_z(r = R_w, |z| < g/2, t) = Re[E_0 exp(jωt)]. Explicitly, in the cylindrical waveguide (r ≤ R_w), the magnetic vector potential is given by a continuous superposition over the wave-number k, wherein Γ² = k² − ω²/c², I_0(u) is the zero-order modified Bessel function of the first kind, and the amplitude A(k) is set by the field in the gap of the cavity (r = R_w); from A(k), the field components in the entire volume (r ≤ R_w, |z| < ∞) of the waveguide follow. For completeness, in the pillbox cavity (R_w < r < R_ext) the dominant field components are expressed in terms of η_0 = √(μ_0/ε_0), the vacuum wave impedance, R_ext, the external wall of the pillbox cavity, and J_ν(u), Y_ν(u), ν = 0, 1, the νth-order Bessel functions of the first and second kind, correspondingly.

Now that we have determined the rf field, we focus on the dc field, ignoring the metallic wall. Assuming a thin annular beam carrying a current I with a velocity v_z = cβ, the effective electric field E_r − v_z B_φ at the beam location follows, with γ = 1/√(1 − β²); note that these are average values. Here, we ignored the effects of the metallic wall on the static field components. With the dc and the rf components established, the radial equation of motion, ignoring the energy exchange due to the longitudinal oscillation, may be written, wherein e and m are the charge and rest mass of the electron. Due to the azimuthal symmetry of the problem, the canonical angular momentum is a constant of motion. Further, since the guiding magnetic field is uniform, the azimuthal component of the static magnetic vector potential is A_φ = rB_0/2; Ω_c = eB_0/m will represent, in what follows, the non-relativistic cyclotron frequency.
The conservation of the canonical angular momentum implies r p_φ = r(mγv_φ − eA_φ) = rm(γv_φ − Ω_c r/2) = const.; thus, assuming that the ith electron has an initially zero azimuthal velocity, and denoting by h(u) the Heaviside step function, the azimuthal velocity along the trajectory follows. It should be emphasized that the effect of the image-charge, due to the wall's presence, was tacitly neglected, since it is well known that it tends to reduce the (effective) plasma frequency, and in the framework of this analytic study, we consider only the main processes involved.
Next, we assess the equilibrium r_eq ≃ R_b in the absence of the rf, and we assume that the oscillation is small relative to the beam's radius: r_i ≃ R_b + δr_i, r_i,0 ≃ R_b. Further defining the current and the plasma frequency, I = encβ(2πR_b)∆_r and Ω_p² = e²n/mε_0, the radial equation of motion for the ith electron follows. Note that the dc space-charge force (expressed in terms of the plasma frequency) is taken for the worst-case scenario, namely all electrons concentrated in an extremely thin layer, and the radial (image-charge) force between this layer and the waveguide's wall is neglected. This can be readily corrected by replacing the plasma frequency with the effective plasma frequency, which accounts for the plasma-frequency reduction due to the grounded metallic wall.
Effective Radial Force
Our goal in this section is to simplify the expression for the radial force F_r(t) = −eE^(RF)_{r,i}, and for this purpose we employ the Cauchy residue theorem for the evaluation of the integral over k in Equation (8); thus: wherein p_s are the zeroes of the Bessel function of the first kind, J_0(p_s) = 0, and Γ_s ≡ √((p_s/R_w)² − ω²/c²). With this definition, the radial force consists of an infinite manifold of evanescent waves; thus, keeping in mind that E_0 = |E_0| exp(jψ), we get: and here G_s(z) ≡ (1/g)∫_{−g/2}^{g/2} dz′ exp[−Γ_s|z − z′|]. At this stage, we consider the transverse kick experienced by the electron. In general, momentarily ignoring all of the force components except the rf, the normalized transverse kick associated with the radial rf force is given by: or explicitly, in the framework of our model: At r = R_b, the normalized transverse kick, within an excellent approximation and for the specified parameters, is given by 1.293 cos(ψ − π/6). Now, we replace the exact rf force with the kick multiplied by a delta function, such that the momentum transferred in the process is preserved, but its profile may differ quite significantly. Explicitly, the momentum transfer is given by: or in our particular case, for the specified set of geometric parameters: and thus, after reinstating the other force components, we get: This equation may be readily solved, keeping in mind that the Green's function of: subject to the zero initial conditions G_0(t = t′, t′) = 0 and [dG_0(t, t′)/dt]_{t=t′} = 0, is given by: Thus: This is the main analytical result of this study.
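Evaluating the residue sum requires the zeros p_s of the Bessel function J_0. As a minimal, stdlib-only sketch (the helper names are ours), the first few zeros can be computed from the integral representation J_0(x) = (1/π)∫₀^π cos(x sin t) dt and bisection:

```python
import math

def j0(x, n=2000):
    # Integral representation J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt,
    # evaluated with the midpoint rule on n subintervals.
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def j0_zeros(count):
    # Scan for sign changes of J0 on a coarse grid, then refine by bisection.
    zeros, x, step = [], 0.5, 0.1
    while len(zeros) < count:
        a, b = x, x + step
        if j0(a) * j0(b) < 0:
            for _ in range(60):
                m = 0.5 * (a + b)
                if j0(a) * j0(m) <= 0:
                    b = m
                else:
                    a = m
            zeros.append(0.5 * (a + b))
        x += step
    return zeros

p = j0_zeros(3)
print(p)  # first zeros of J0: ~2.4048, 5.5201, 8.6537
```

Each zero fixes the transverse wavenumber p_s/R_w of one evanescent mode in the sum, and hence its axial decay rate Γ_s.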
Discussion
Several features are evident from the present formulation:
1. For beam stability, it is required that the relativistic cyclotron frequency be significantly larger than the relativistic plasma frequency: Ω_c²γ⁻² ≥ Ω_p²γ⁻³. In all of the expressions so far, we considered the average beam density, but we need to keep in mind that high-efficiency energy conversion is facilitated by a highly bunched beam; thus, the local density may exceed the averaged value by a factor of α = 10 or even higher. Consequently, it is more realistic to adopt a more stringent constraint on the magnetic field: Ω_c² ≥ αΩ_p²γ⁻¹.
2. The radial transverse kick, Δ(γβ_r), is proportional to the amplitude of the radial oscillation.
3. In the case of a quasi-periodic structure whereby electrons lose a significant fraction of their energy γ_i(z), the magnetic field is tapered Ω_c(z), and the bunch density varies Ω_p(z), and assuming that there are no reflected electrons, then to first order we may assume that the radial oscillation as a function of the location is:
4. According to the last result, there are several possibilities to taper the structure. Two are the most plausible: (i) keeping the radius of the waveguide constant but varying the guiding magnetic field and the external radius of the cavities (see Figure 1b), such that the phase of the radial oscillation is preserved: (ii) varying the waveguide radius, and thus the volume; the constraint on the guiding field may then be weakened: see the bottom frame in Figure 1. However, the tapering in the latter case may be accompanied by coupling to high-order (hybrid) modes, which may lead to beam break-up.
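The stability criterion in point 1 can be recast as a minimum guiding field. The sketch below is a stdlib-only illustration; the beam density n, bunching factor α, and energy γ are assumed values, not taken from the text:

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19   # C
M_E      = 9.1093837015e-31  # kg
EPS0     = 8.8541878128e-12  # F/m

def plasma_freq(n):
    # Non-relativistic plasma frequency: Omega_p = sqrt(e^2 n / (m eps0))
    return math.sqrt(E_CHARGE**2 * n / (M_E * EPS0))

def min_guiding_field(n, alpha, gamma):
    # Stability criterion Omega_c^2 >= alpha * Omega_p^2 / gamma with
    # Omega_c = e B0 / m  =>  B0 >= (m/e) * Omega_p * sqrt(alpha / gamma)
    return (M_E / E_CHARGE) * plasma_freq(n) * math.sqrt(alpha / gamma)

# Illustrative (assumed) parameters: average density 1e18 m^-3,
# bunching factor alpha = 10, beam energy gamma = 3.
B_min = min_guiding_field(n=1e18, alpha=10.0, gamma=3.0)
print(f"B0 must exceed ~{B_min:.2f} T")
```

For these assumed parameters the required field lands near the 0.6-1.0 T range explored in the PIC simulations below, which is a useful sanity check on the scaling.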
In order to obtain a flavor of the maximum radial displacement, δr_max ≃ 1.293 eE_0g/mc² (c/Ω_c) ≃ 1.7 mm / B_0[T], that the electron beam experiences as it traverses the gap, we illustrate the results of several particle-in-cell (PIC) simulations. The left frame of Figure 3 shows the configuration space, which reveals the radial oscillation very vividly. In the right frame, we show the beam transport over the cavity gap. The red curve shows the upstream current; blue, green, and magenta represent the downstream currents for B_0 = 0.6, 0.7, and 1.0 T. For the weaker magnetic field, the radial kick is larger; as a result, more electrons are intercepted by the structure, and thus, the transported current is lower.
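Since Ω_c = eB_0/m, the estimate δr_max ≈ 1.293 (eE_0g/mc²)(c/Ω_c) collapses to 1.293 E_0g/(cB_0). In the sketch below, the gap voltage E_0g ≈ 0.4 MV is an assumption chosen only so that the result reproduces the quoted 1.7 mm/B_0[T] figure:

```python
# delta_r_max ~= 1.293 * (e E0 g / m c^2) * (c / Omega_c); with Omega_c = e B0 / m
# this collapses to 1.293 * (E0 * g) / (c * B0).
C = 299_792_458.0  # speed of light, m/s

def delta_r_max(gap_voltage, b0):
    # gap_voltage = E0 * g in volts; ~0.4 MV assumed to match the paper's estimate
    return 1.293 * gap_voltage / (C * b0)

for b0 in (0.6, 0.7, 1.0):  # guiding fields used in the PIC runs
    print(f"B0 = {b0:.1f} T -> delta_r_max = {delta_r_max(4.0e5, b0) * 1e3:.2f} mm")
```

The 1/B_0 scaling makes the PIC trend plausible: at 0.6 T the excursion is nearly three millimeters, so more electrons are intercepted than at 1.0 T.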
Figure 3. Left frame: PIC simulation (see [83] for details) revealing the transverse kick exerted on the electrons by the rf field in the cavity. Right frame: beam transport over the cavity gap. The red curve shows the upstream current; blue, green, and magenta represent the downstream currents for B_0 = 0.6, 0.7, and 1.0 T. For the weaker magnetic field the radial kick is larger, and as a result, more electrons are intercepted by the structure, and thus the transported current is lower.
In conclusion, we have developed an analytical approach that, on the one hand, may predict the radial oscillation near the aperture of a pillbox cavity and, on the other hand, provides us with the natural criteria for the design of the tapered magnetic field in the output sections of a relativistic klystron amplifier, as well as of a travelling-wave tube.
Auditory category learning is robust across training regimes
Multiple lines of research have developed training approaches that foster category learning, with important translational implications for education. Increasing exemplar variability, blocking or interleaving by category-relevant dimension, and providing explicit instructions about diagnostic dimensions each have been shown to facilitate category learning and/or generalization. However, laboratory research often must distill the character of natural input regularities that define real-world categories. As a result, much of what we know about category learning has come from studies with simplifying assumptions. We challenge the implicit expectation that these studies reflect the process of category learning of real-world input by creating an auditory category learning paradigm that intentionally violates some common simplifying assumptions of category learning tasks. Across five experiments and nearly 300 adult participants, we used training regimes previously shown to facilitate category learning, but here drew from a more complex and multidimensional category space with tens of thousands of unique exemplars. Learning was equivalently robust across training regimes that changed exemplar variability, altered the blocking of category exemplars, or provided explicit instructions of the category-diagnostic dimension. Each drove essentially equivalent accuracy measures of learning generalization following 40 min of training. These findings suggest that auditory category learning across complex input is not as susceptible to training regime manipulation as previously thought.
Introduction
Is this mushroom edible? Is that a squeal of danger, or delight? Is that stranger trustworthy? Humans and other organisms readily learn complex constellations of cues that signal functionally equivalent sensory objects and events, like crying babies, for example. Cries of pain during a vaccination tend to be louder and longer, with more variable pitch and greater nonlinear acoustic characteristics compared to cries of bath-time discomfort (Helmer et al., 2020; Koutseff et al., 2018). But adults' ability to categorize pain versus discomfort based on these complex cues demands experience; adults who have spent little time with infants categorize cries no better than chance. In contrast, parents and infant caregivers are significantly more accurate in categorizing cries, their accuracy scales with how much infant experience they have, and their categorization ability generalizes to unfamiliar infants' cries (Corvin, Fauchon, Peyron, Reby, & Mathevon, 2022). Experience molds caregivers' ability to use imperfect and complex sensory input regularities and guides behavior upon encountering novel input with similar properties. The latter ability, generalization, is a signature characteristic of effective category learning.
Cognitive science has long investigated the emergence of categories. One especially productive approach has been to utilize training paradigms to teach participants categories across novel or unfamiliar exemplars. In addition to advancing theoretical accounts of category learning and generalization, these literatures have informed real-world applications in second-language acquisition (Lim & Holt, 2011; Reetzke, Xie, Llanos, & Chandrasekaran, 2018), science learning (Eglington & Kang, 2017; Goldwater, Hilton, & Davis, 2022; Nosofsky, Sanders, & McDaniel, 2018), social group recognition through faces and voices (Lavan, Burton, Scott, & McGettigan, 2019; Retter, Jiang, Webster, & Rossion, 2020), stereotyping (Hugenberg & Sacco, 2008), and approaches to building effective educational materials (Carvalho & Goldstone, 2021; Nosofsky, Slaughter, & McDaniel, 2019). Many studies of category learning have examined aspects of training that best support effective learning, informing both theory and application. We examine three such aspects in more depth in the next sections.
Explicit instruction
The provision of explicit instructions may also promote category learning. Explicitly instructing learners to focus on a category-diagnostic dimension, or to direct attention away from a category non-diagnostic dimension, can result in enhanced non-native speech category learning (Chandrasekaran, Yi, Smayda, & Maddox, 2016). Moreover, when explicit instruction draws attention to a category-diagnostic dimension, it benefits non-native speech category learning and production above and beyond what is achieved with high-variability training alone (Wiener, Chan, & Ito, 2020). More nuanced, less explicit manipulations that guide learners to category-diagnostic dimensions have also been effective in facilitating non-native speech category learning (Ingvalson, Holt, & McClelland, 2012; Iverson, Hazan, & Bannister, 2005; Jamieson & Morosan, 1986; McCandliss, Fiez, Protopapas, Conway, & McClelland, 2002; McClelland, Fiez, & McCandliss, 2002).
Summary and aim of the study
In summary, examination of category learning across novel or unfamiliar categories has been useful in understanding how category training regimes affect learning and suggests means of improving real-world categorization. Indeed, an implicit assumption of category learning research has been that laboratory training tasks with relatively simple stimuli can inform real-world category learning. Studying visual category learning across simple dimensions, for example, may reveal processes available to early-career radiographers learning to categorize the subtle patterns that differentiate a benign from a cancerous tumor (Waite et al., 2019). Correspondingly, learning a simplified category characteristic of non-native speech might suggest scenarios that would improve classroom second language learning (Wiener, Murphy, Goel, Christel, & Holt, 2019).
However, most category learning studies differ substantially from natural category learning challenges, often by design. For example, the number of unique exemplars in lab experiments vastly undersamples natural exemplar variation. Laboratory studies tend to model real-world exemplar variability with a Gaussian distribution for simplicity. Exemplars are often defined across just two sensory dimensions, and dimensions tend to be simple, easily verbalized sensory features (e.g., line orientation, acoustic frequency). Even when categories are defined by natural visual objects or spoken utterances, exemplar sampling tends not to truly reflect the full complexity of natural categories. As a result, much of what we know about category learning has come from studies with simplifying assumptions. This entirely reasonable approach nonetheless calls into question the implicit expectation that these studies reflect the process of category learning under more complex learning challenges, such as those posed by real-world input.
Here, we put this question to the test by creating an auditory category learning challenge that intentionally violates some common simplifying assumptions. We create a novel, nonspeech acoustic stimulus space comprising >36,000 tokens across four auditory categories. The categories rely upon natural acoustic variability from spoken language (Mandarin lexical tone across multiple talkers) with underlying regularities known to be learnable because they are derived from real speech. Despite their speech origins, these sounds are not familiar, do not convey talker information, and are not heard as speech. This is because we use signal processing to eliminate voice and linguistic information, leaving only the fundamental frequency (F0) contour thought to be the most diagnostic dimension for conveying Mandarin lexical tone category to native listeners (Ho, 1976; Howie, 1976). In tonal languages like Mandarin, F0 differences like these allow a syllable like "ma" to have four different meanings according to its intonation (Chao, 1965; Gandour, 1983). As noted, we can be confident the structure of these novel categories is learnable because they are drawn from natural categories. Further, prior research examining category learning among the same pool of nonspeech hums demonstrates robust category learning among non-Mandarin listeners (Liu, 2014).
We exaggerate the learning challenge in two ways. First, each category exemplar is composed of two streams of three hums, each stream spectrally filtered such that one is situated in a high frequency band and the other in a low frequency band. These two streams are played simultaneously, but only one carries information diagnostic to category decisions; the other is acoustically variable and non-diagnostic. This creates a rarely examined category learning challenge: Listeners must forage the acoustic soundscape to discover category-diagnostic information as it evolves (and dissipates) over time. By design, we build this qualitative category learning challenge into our stimulus set without modeling specific details of speech per se. Instead, our approach is to create a novel version of an important puzzle present in auditory category learning: Listeners must discover category-diagnostic acoustic dimensions in the context of non-diagnostic (or less-diagnostic) acoustic variability arising from other dimensions of the same sound source (e.g., across different bands of formant frequencies) or even across simultaneous competing sounds.
Second, the hum stream in one frequency band is a concatenation of three unique hums drawn from a single Mandarin tone category. The other is a concatenation of three unique hums, each drawn from different Mandarin tone categories. In this way, one frequency band contains tone-category-diagnostic information, and the other frequency band is category uninformative. Thus, category learning requires both discovering (at least implicitly) the category-diagnostic frequency band that contains a statistically regular pattern derived from a single Mandarin tone category, and also recognizing the category-diagnostic, but acoustically variable, pattern within this band (see Fig. 1 for a schematic depiction of the stimuli). In summary, this creates a complex high-dimensional exemplar space across which four categories are defined over multiple difficult-to-verbalize dimensions and sampling distributions.
We intentionally chose a learning challenge that would not approach ceiling in a single session so that we could better capture differences that might be apparent across training regimes; ceiling performance would make this problematic. Although this approach does not measure what learners can achieve with longer training, examining category learning across a single session has been the workhorse paradigm across both the visual and auditory category learning literatures because it tracks early, online category acquisition. Further, by limiting training to a single session, we can examine effects of online category learning without influences of offline learning or consolidation (which might be productively examined in future work).
Experiments overview
Here, we first examine whether young adult participants recruited from a diverse online sample can accomplish this complex category learning challenge in a single training session that involves overt category decisions and explicit feedback. We then examine how learning is influenced by variability in three aspects of the training regime, each of which has been shown to affect learning in simpler categorization challenges: manipulations of exemplar variability, category exemplar sequencing, and explicit instructions.
Participants
Since this was a novel categorization challenge, we conducted several pilot studies from which to estimate power. These studies revealed robust learning across ~30 participants. Here, we doubled the sample, targeting recruitment of 60 participants per experiment to improve our ability to detect subtle learning differences across learning contexts.
In total, 300 young adults aged 18-35 years participated online for monetary compensation via recruitment through Prolific.co. There were no restrictions on language background, and all participants self-reported normal hearing. Table 1 shares participant demographics. Given our relatively unrestricted recruitment of participants online, our sample is likely more representative of the general population than that of studies that recruit from a university student population (Henrich, Heine, & Norenzayan, 2010). Four participants were excluded due to an experimental error that duplicated trials, leaving 296 participants in the final analyses and a minimum of 58 participants per experiment. All participants provided informed consent approved by the Carnegie Mellon University Institutional Review Board (IRB).
Stimuli
Fig. 1 illustrates the construction of sound exemplars. Stimuli for all experiments were drawn from the same acoustic space. The building blocks for these stimuli were nonspeech hums created by extracting the fundamental frequency (F0) contour from natural speech recordings of single-syllable words, each recorded by four native Mandarin speakers (2 male, 2 female; Liu, 2014). A screen displayed both the Mandarin Chinese character and the pinyin spelling of the word frame (with tone number 1, 2, 3, 4) to prompt native speakers to utter each word twice, with self-paced progress as utterances were digitally recorded with Praat (Boersma, 2001). Each speaker produced 20 unique word-frames (pinyin spellings: can, chou, di, fa, ge, guo, huan, jie, kui, peng, pu, qian, shi, tuo, xi, xiang, xing, xue, yang, yu) in each of the four lexical tones for a total of 80 utterances per talker. A native Mandarin listener checked stimuli for clarity and representativeness of the lexical tone contour.
These speech recordings were processed in the open-source speech analysis software Praat (Boersma, 2001) to create non-speech hums by extracting the pitch contour using the Analyse periodicity: To Pitch function and converting it into hums using the To Sound (hum) function. Expert listeners removed some stimuli from the pool based on poor pitch tracking and discontinuous hum outcomes (Liu, 2014).
To make a single stimulus exemplar, three unique non-speech hums drawn from the same Mandarin talker were assigned to a higher frequency band and three to a lower frequency band. As illustrated in Fig. 1B, one of the frequency bands was designated the diagnostic band; it possessed 3 unique hums drawn from a single lexical tone category. The other band possessed 3 unique hums from any lexical tone category ("wild card").
Next, the hums were processed using the audio processing software Sound eXchange (sox.sourceforge.net), with additional processing in Adobe Audition (version 13.0.7). First, hums were padded with 50 ms of silence at the beginning and end of the sound clip, high-pass-filtered at 30 Hz to remove slow drift, and reduced in gain by 10 dB. Second, high- and low-frequency-band versions of these stimuli were created. To create the high-frequency-band components, hums were pitch-shifted +33 semitones in Audition and then high-pass filtered using a sinc Kaiser-windowed filter in SoX to preserve all frequencies at and above 1000 Hz. To create the low-frequency-band components, the same hums were pitch-shifted by −1 semitone and low-pass filtered to preserve all frequencies at and below 500 Hz. In the process of pitch shifting, hums were simultaneously normalized to be 400 ms using the iZotope algorithm in Audition, using the high precision mode with pitch coherence set to 4. The 400-ms, pitch-shifted and high/low-pass filtered hums were RMS-matched in amplitude and normalized to be −6 dB below the maximum digital range.
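The final amplitude steps can be illustrated with a minimal sketch of RMS matching and peak-referenced normalization to −6 dBFS. This is a generic illustration on plain float samples, not the authors' actual Audition/SoX pipeline:

```python
import math

def rms(samples):
    # Root-mean-square level of a list of float samples.
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize_to_dbfs(samples, target_db=-6.0):
    # Scale so the peak sits target_db below digital full scale (1.0).
    peak = max(abs(s) for s in samples)
    gain = 10 ** (target_db / 20.0) / peak
    return [s * gain for s in samples]

def rms_match(samples, reference_rms):
    # Scale one hum so its RMS equals a reference hum's RMS.
    gain = reference_rms / rms(samples)
    return [s * gain for s in samples]

# Stand-in "hum": a 100-ms 220 Hz tone at 44.1 kHz.
tone = [0.5 * math.sin(2 * math.pi * 220 * t / 44100) for t in range(4410)]
out = normalize_to_dbfs(tone)
print(max(abs(s) for s in out))  # peak = 10^(-6/20) ~= 0.501
```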
As shown in Fig. 1B, the category-diagnostic band was created by drawing from the pool of hums derived from a single talker, choosing a frequency band (high or low), randomly selecting three hums from a single hum (lexical tone) category, and concatenating the hums with 100 ms of total silence between each token. We created all permutations in both high and low frequency bands.
Similarly, the category-uninformative "distractor" band was created by drawing from a pool of hums from the same talker used to create the diagnostic-band hum sequence, with hums placed into the frequency band opposite the diagnostic band. For the non-diagnostic band, hums were randomly selected from three different hum categories (selected from any of the four hum categories) and concatenated with 100 ms silences between each hum. This was repeated for all permutations. The diagnostic band and uninformative distractor band were then added together such that the onset of each of the three hums of each frequency band was temporally aligned, and stimuli evolved across 1400 ms in all.
For counterbalancing purposes, there were two sets of four categories. Fig. 1B illustrates Set 1, in which Category A and Category B are defined by high-frequency diagnostic bands whereas Category C and D are defined by low-frequency diagnostic bands. This relationship was reversed in Set 2 (e.g., low-frequency diagnostic for Categories A and B, not shown in Fig. 1). Assignment of set was counterbalanced across participants in each experiment and analyses collapse across set assignment.
Overall, the full constellation of hum permutations resulted in a stimulus pool with over 36,000 exemplars.From this exemplar space we randomly selected 2048 total exemplars (256/category/set) for the
General procedure
Five experiments shared common procedures, differing only in their approach to training. In all experiments, training blocks alternated with generalization blocks (see Fig. 2C). Generalization was similar to training, but participants did not receive feedback. Generalization trials for all experiments were 4AFC. Each of the four generalization blocks consisted of 20 novel exemplars/category (80 total stimuli) not encountered during training. These 80 exemplars were randomly selected without replacement from the stimulus pool reserved for novel generalization prior to the experiment, and the generalization set was used consistently for each participant across each experiment. This presented the opportunity to examine cross-experiment effects of training manipulations via participants' ability to generalize category learning to novel exemplars.
Participants completed the experiment online via Gorilla, an online experiment creation and hosting website (Anwyl-Irvine, Massonnié, Flitton, Kirkham, & Evershed, 2020), on a laptop or desktop computer using the Google Chrome browser. Prior to beginning the category learning task, participants underwent a system check to ensure the auto play of sound at a comfortable listening level and a short task to ensure compliance with the use of binaural headphones (Milne et al., 2021). All sounds were presented in the lossless *.FLAC format. After the experiment, participants shared language and music training history, were invited to share notes detailing their task strategies, and received an experiment debriefing.
Approach to analyses
For each experiment, we analyzed training and generalization blocks separately, asking whether significant learning and generalization occurred with a specific training regime. For training and generalization blocks, we analyzed: (1) the overall change in performance across Blocks 1-4 using a repeated measures ANOVA and post-hoc comparison of Block 1 and Block 4; and (2) indices of early learning by examining Block 1 accuracy compared to chance. We compared training and generalization performance between select pairs of experiments using mixed model ANOVA. (Linear mixed effects modeling yielded the same results, which are available on OSF.io.) To ask whether training regime differentially affected generalization overall, a set of cross-experiment analyses (reported after Experiment 5) compared generalization progress from Block 1 to 4 as well as final generalization achievement in Block 4. We supplemented these analyses with Bayesian Equivalence Independent t-tests across all pairs of experiments, looking both at generalization progress and final generalization achievement.
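The Block 1 versus chance comparison can be sketched with a one-sample t statistic; the accuracies below are synthetic and the code is a generic stdlib illustration, not the authors' analysis pipeline:

```python
import math
import statistics

CHANCE = 0.25  # 4AFC chance accuracy

def one_sample_t(accuracies, mu=CHANCE):
    # t = (mean - mu) / (sd / sqrt(n)); in practice this would be compared
    # against a t critical value (df = n - 1) or used to compute a p-value.
    n = len(accuracies)
    return (statistics.mean(accuracies) - mu) / (statistics.stdev(accuracies) / math.sqrt(n))

# Synthetic per-participant Block 1 accuracies, for illustration only.
block1 = [0.30, 0.35, 0.25, 0.40]
t = one_sample_t(block1)
print(round(t, 3))
```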
Experiment 1: 4AFC training with full exemplar variability
Experiment 1 tested listeners' ability to learn the complex auditory categories under conditions of full acoustic variability in a four-alternative forced-choice categorization task, with feedback (Table 2).
Methods specific to experiment 1
Here, 480 exemplars (120/category) were randomly selected from the full pool of 512 training stimuli (128/category). On each trial, participants chose which of four aliens (4AFC) corresponded to the sound they had heard; as with all experiments, they received feedback after each training trial. Participant characteristics are shown in Table 1; data are shown in Fig. 3.
Experiment 2: 4AFC training with low exemplar variability
As noted above, high exemplar variability may lead to slower and initially less accurate performance in training. However, it can yield dividends in supporting better generalization (Logan et al., 1991; Lively et al., 1993). Participant demographics are shown in Table 1. Fig. 4 shows training and generalization data.
Experiment 3: 2AFC training with pairs grouped by category-diagnostic band
Recall that the auditory category exemplars confront participants with two learning challenges: (1) to identify the diagnostic frequency band in the context of a simultaneous, non-diagnostic band and (2) to learn the pattern of hums present in the diagnostic band despite their within-category acoustic variability. In Experiment 3, we block categorization decisions according to the category-diagnostic band, thereby potentially (implicitly) encouraging selective attention to the category-relevant frequency band within blocks of trials (Carvalho & Goldstone, 2017).
Methods specific to experiment 3
Here, training trials were blocked as 2AFC category decisions. Like Experiment 1, participants completed 480 training trials with feedback, where the 480 trials (120/category) were randomly selected from the full pool of 512 training exemplars (128/category). This was accomplished by dividing each training block (120 trials) into six 20-trial mini-blocks. Half of the mini-blocks were grouped by high-frequency diagnostic band and half by low-frequency diagnostic band. For example, as shown in Fig. 1B, Category A and B stimuli were presented in one half of the mini-blocks, and Category C and D were presented in the other half. Mini-blocks alternated between category pairs differentiated in either the high- and low-frequency diagnostic band, with order counterbalanced across participants. Generalization blocks mirrored Experiments 1 and 2. Participant demographics are shown in Table 1. Data are plotted in Fig. 5.
Experiment 4: 2AFC training with all category pairs
As a counterpart to Experiment 3, Experiment 4 examines whether category learning with 2AFC training is successful without category-diagnostic blocking. Here, all six possible pairs of categories were presented in separate training blocks (e.g., AB/AC/AD/BC/BD/CD). We hypothesized that without implicit direction to the diagnostic band, participants would be forced to discover the two learning challenges simultaneously and that this would, akin to interleaved presentation, exaggerate between-category differences (Carvalho & Goldstone, 2017). After Experiment 4 findings are reported, results from Experiments 3 and 4 are directly compared.
Methods specific to experiment 4
Experiment 4 used full exemplar variability (like Experiments 1 and 3) and presented 2AFC training across six 20-trial mini-blocks per training block (like Experiment 3). The order of category pair mini-blocks was randomized for each training block, for each participant. Generalization blocks mirrored previous experiments. Table 1 provides demographic information, and data are plotted in Fig. 6.
Comparison of experiments 3 and 4
We asked how training that paired categories according to diagnostic band (Experiment 3) compared to pairing categories randomly regardless of diagnostic band (Experiment 4). A mixed-model ANOVA revealed a significant effect of Training Regime (F(1, 116) = 12.130, p = 0.000701, η_G² = 0.073) but no interaction between Block and Experiment. Random pairing of categories without regard to the diagnostic band in Experiment 4 resulted in significantly better Block 1 training accuracy (t(114.75) = −5.5759, Bonferroni-adjusted p = 6.64e-7, Cohen's d = −1.027) compared to Experiment 3. However, there was no significant difference in final training achievement in Block 4.
Experiment 5: 2AFC training with pairs grouped by category-diagnostic band and explicit instructions
Experiment 3 blocked categories according to their diagnostic frequency band in a manner that might implicitly guide discovery of category-relevant dimensions. Experiment 5 takes a more explicit approach, asking whether category learning is facilitated by providing instructions about the category-relevant frequency band.
Methods specific to experiment 5
Experiment 5 used full exemplar variability (like Experiment 1) and a 2AFC training task with trials blocked according to a shared diagnostic band (like Experiment 3). In addition, participants were informed that "previous participants […] found it beneficial to listen to the higher [or lower] pitched sounds when learning which sounds go with which alien." Before each mini-block of 20 trials, participants were presented with a blank screen with the text "Listen high!" or "Listen low!" in accordance with the diagnostic frequency band of the category pairs in the mini-block. Otherwise, the procedure followed that of Experiment 3. Table 1 shows participant demographics. Data are plotted in Fig. 7.
Comparison of experiments 3 and 5
There was a significant influence of the presence of explicit instruction.
Comparing generalization across training regimes
As described above, each experiment involved generalization testing blocks comprised of the same 80 exemplars, not heard during training. This allows for direct comparison of the influence of different training regimes on generalization of category learning. To this end, we conducted a two-way mixed-model ANOVA of generalization accuracy across Block versus all five Training Regimes (Experiments). There was a significant effect of Block. Given the similarity among experimental outcomes, we also conducted Bayesian equivalence testing to examine the strength of the evidence that training regime manipulations have essentially equivalent effects. We again focus on generalization progress along with final generalization achievement, setting the equivalence region from −0.05 to 0.05 in Cohen's d units using a Bayesian Independent Samples Equivalence t-test (JASP Team, 2022).
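The equivalence logic can be made concrete: an effect is treated as "essentially equivalent" when the standardized mean difference falls inside the ±0.05 region. The sketch below illustrates that check in Python with hypothetical group summaries; the actual analysis used a Bayesian equivalence t-test in JASP, which additionally quantifies the evidence via Bayes factors rather than making a binary decision.

```python
import math

def cohens_d_from_summary(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized mean difference (Cohen's d) computed from group summaries,
    using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def within_equivalence_region(d, bound=0.05):
    """True when the standardized effect lies inside the [-bound, +bound] region."""
    return -bound <= d <= bound
```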
Fig. 9 shows Bayes Factors (BF) comparing the equivalence hypothesis (i.e., that the effect falls within our equivalence interval) versus the hypothesis that the effect lies outside this interval. For each pairwise comparison, the evidence is stronger for equivalence. Using criteria suggested by Andraszewicz et al. (2015), there is moderate evidence that generalization progress and ultimate achievement are not differentially influenced by the training regimes that manipulate exemplar variability, exemplar sequencing, or explicit instruction.
General discussion
Category learning studies have often taken the entirely reasonable approach of examining simplified category-learning challenges; one or a few often easily verbalizable diagnostic dimensions with low exemplar variability and a small number of category exemplars have been typical (e.g., Gabay, Dick, Zevin, & Holt, 2015; Lim & Holt, 2011; Maddox, Koslov, Yi, & Chandrasekaran, 2016; Roark, Lehet, Dick, & Holt, 2022). This has been as true for natural exemplars, like non-native speech sounds, as for novel objects and events. Overall, these studies have informed theories of category learning and have significantly driven our understanding of both basic processes and application. Yet we do not completely understand how factors that impact simplified category learning challenges might play out in more real-world category learning. Here, we developed a novel space of auditory categories that embodied some of the natural complexity and variability typically encountered in real-world stimuli. Within this space, categories were characterized by many unique exemplars, difficult-to-characterize dimensions, and simultaneous non-diagnostic information.
We observed strong evidence that these categories are learnable even over short-term training. Moreover, this learning generalizes readily to novel exemplars. Across five independent experiments involving 296 listeners, adult participants learned these challenging auditory categories above chance accuracy at the group level. Learning was rapid. There was evidence of learning as early as the first block across all training regimes; for most participants, categorization improved across the 40-45 min of total training. The learning curves across training are consistent with results from a wide variety of category learning studies with simpler category learning challenges. Typically, these studies show evidence of significant learning early in training followed by relatively slow, incremental increases in accuracy across subsequent blocks (e.g., Reetzke et al., 2018; Roark & Holt, 2019; Zeithamova & Maddox, 2006).
As is often the case in category learning studies, there was a substantial range of individual differences in learning outcomes (e.g., Baese-Berk, Chandrasekaran, & Roark, 2022). We informally examined two potential contributors to these individual differences across our sample of almost 300 participants: (1) experience with Mandarin or another tonal language and (2) musical expertise. Neither was predictive of generalization outcomes (supplemental information can be found at OSF.io).
With this learning and generalization as a baseline, we examined the extent to which manipulations of exemplar variability (Logan et al., 1991), exemplar blocking (Carvalho & Goldstone, 2017), and provision of explicit instruction (Chandrasekaran et al., 2016), each shown to impact category learning outcomes in prior research, modulate generalization of category learning in a more complicated stimulus space. Under the present category learning challenge, learning was surprisingly consistent across training regimes. As demonstrated by the Bayesian analyses, generalization progress and final generalization achievement were essentially equivalent. This is quite unexpected given the prior literature. Even participants left to discover diagnostic dimensions implicitly via feedback did not fare more poorly in generalization of category learning than participants provided explicit instruction about where to find category-relevant information. Next, we consider the findings from the prior literature and how they diverge from and inform our findings by examining the three training manipulations.
Exemplar variability
The expectation that training with high variability exemplars produces more robust generalization of category learning has a long history and continues to have a substantial impact on theory and application. As we noted in the introduction, the implications of high variability training have been especially well-investigated in non-native speech category learning (e.g., Logan et al., 1991). Brekelmans et al. (2022) review this literature thoroughly and make a case that evidence is mixed regarding an advantage of high versus low exemplar variability. Moreover, in their well-powered replication of Logan et al. (1991) and Lively et al. (1993), Brekelmans and colleagues observed no learning differences across high and low exemplar variability.
Other studies have shown that the benefit from high variability acoustic training interacts strongly with participants' individual characteristics and perceptual abilities. For example, Perrachione, Lee, Ha, and Wong (2011) demonstrated that high-variability training benefited only learners with already strong perceptual abilities and indeed impeded learners with weaker perceptual abilities. Several other studies have reported variation in the effectiveness of high-variability training for different learners, with some studies finding no beneficial effect of the high-variability condition, and others finding that high exemplar variability in training hinders learning (Fuhrmeister & Myers, 2017, 2020; Sadakata & McQueen, 2014). Further, another recent study has demonstrated that high variability training sets could confer an advantage or a disadvantage in voice-identity category learning, depending on stimulus type, the dimension that is varied, and the nature of the posttest (Lavan, Knight, Hazan, & Mcgettigan, 2019).
In summary, emerging evidence challenges the strength and/or consistency of effects of exemplar variability on category learning outcomes. The present results echo these concerns. Here, there was no advantage to generalization progress or ultimate achievement across training with high exemplar variability (480 unique exemplars) versus low exemplar variability (40 unique exemplars).
Exemplar sequence
A recent meta-analysis revealed that interleaved exemplar presentation tends to benefit learning (Brunmair & Richter, 2019), but vanishingly few studies have examined exemplar sequencing in the auditory modality. Studies examining learning across auditory input of non-native speech sounds, though few in number, have found benefits of blocking, rather than interleaving, category exemplars (Carpenter & Mueller, 2013; Fuhrmeister & Myers, 2020). These studies also found that participants learned to rely on the category-diagnostic dimensions and made error judgments based on category-irrelevant dimensions.
In the present study, exemplars blocked according to the category-diagnostic frequency band initially led to significantly poorer training performance than randomly paired category exemplars. Even so, by the end of training there was no difference in learning outcomes or generalization across training regimes. Any influence of blocked versus interleaved presentation of exemplars in training was ephemeral and contrary to expectations that category-diagnostic blocking would support learning. Participants left to discover category-relevant dimensions through trial-and-error tuned by explicit feedback fared no better or worse than learners who were supported by blocking according to the category-diagnostic dimension.
Explicit instruction
Explicitly instructing learners about the nature of category-diagnostic dimensions can improve categorization accuracy for non-native speech categories (Chandrasekaran et al., 2016). Other studies have more implicitly "instructed" participants via training methods that exaggerate category-relevant dimensions; these appear to enhance learning compared to control conditions (Ingvalson et al., 2012; Iverson et al., 2005; Jamieson & Morosan, 1986; McCandliss et al., 2002; McClelland et al., 2002).
In the present study, explicit instruction improved early training accuracy compared to implicit support to learning via blocking by category-diagnostic frequency band. But that advantage was fleeting. By the culmination of training, groups' learning and generalization achievements were equivalent. It is possible that simple instructions such as "Listen high!" or "Listen low!" may not be informative enough to direct listeners to the diagnostic dimension. However, we modeled our instructions after those of Chandrasekaran et al. (2016), who instructed listeners that previous participants had succeeded in listening to a specific dimension of sound, and listeners are fully capable of paying explicit attention to one of two spectrally separated dimensions in a range of tasks (Dick et al., 2017; Holt, Tierney, Guerra, Laffere, & Dick, 2018).
Conclusions
In sum, the present results underscore the robustness of auditory category learning, regardless of training regime. A large, diverse sample of online research participants exhibited the ability to acquire novel auditory categories drawn from a complex acoustic space within 40 min and to generalize this knowledge robustly to novel exemplars. At a group
Fig. 1 .
Fig. 1. Schematic of Sound Exemplars. A. Non-speech hums derived from natural utterances of four native Mandarin (2 female) speakers producing utterances varying in lexical tone, which is conveyed by fundamental frequency (F0) contours. Hums preserve only the F0 contour and do not sound like speech, yet they possess natural acoustic regularity within hum categories and distinct patterns across hum categories. Here and in subsequent panels, color conveys the hum category. B. Hums were filtered into high (≥1000 Hz) and low (≤500 Hz) frequency bands and three hums were concatenated in each band to compose a sound exemplar. For each, a diagnostic band (colored boxes) possessed within-hum-category exemplars and a non-diagnostic band had 3 hums, each drawn from a different one of the four hum categories (open "wild card" boxes). Exemplars defining the four categories were created such that listeners needed to discover the diagnostic band in the context of the simultaneous non-diagnostic band and learn the hum pattern across acoustic variability within the diagnostic band. The four aliens used to guide categorization responses are shown, as well. C. A spectrogram showing a representative exemplar drawn from Category A, for which the high-frequency band was diagnostic. Here, and in Panel D, colored rectangles indicate the lexical tone category from which the hum was created. Solid colored lines indicate the category-diagnostic frequency band; dashed lines show the category-uninformative frequency band. D. Spectrogram showing a representative exemplar drawn from Category D, for which the low-frequency band was diagnostic.
Only the nature of the training blocks varied across experiments. Generalization blocks were identical across experiments to facilitate cross-experiment outcome comparisons. All experiments involved training over 40 min. Moreover, across all experiments, training involved overt category decisions and explicit feedback (see Fig. 2 for schematics). Following a 500-ms fixation, participants heard a category exemplar and matched it to one of four novel 'alien' illustrations via keyboard response at sound offset, with immediate feedback lasting 1500 ms; the next trial commenced immediately. Across experiments, each auditory category consistently mapped to a specific alien presented on the screen. In Experiments 1 and 2 all four alien creatures were visible on the screen (4-alternative forced choice (4AFC)), whereas in Experiments 3-5, only pairs of alien creatures were visible (2AFC), with the other two aliens greyed out and unavailable for response. Each of the four training blocks consisted of 120 trials (30 trials/category), totaling 480 training trials. At the commencement of each training block, 30 exemplars/category were randomly selected without replacement from the pool of 128 category exemplars. Thus, exemplars were never repeated within a single training block, and there was a low probability of any single exemplar repeating across training blocks. Each training block was divided into either three mini-blocks of 40 trials each (Experiments 1 and 2, for 4AFC training) or six mini-blocks of 20 trials each (Experiments 3, 4, and 5, for 2AFC training), to allow for brief self-timed breaks between mini-blocks. Except for Experiment 5 (see Section 7), participants were not informed of the dual-band nature of the stimuli and were simply instructed to use the feedback during training trials to learn which sounds corresponded with which alien.
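The block-construction scheme described above (30 exemplars per category drawn without replacement from a 128-exemplar pool, then combined into a shuffled 120-trial block) can be sketched as follows. This is an illustrative reconstruction, not the authors' experiment code, and the pool labels are hypothetical.

```python
import random

def draw_training_block(pools, per_category=30, rng=None):
    """Build one training block: sample `per_category` exemplars per category
    without replacement, then shuffle the combined trial order."""
    rng = rng or random.Random()
    trials = []
    for category, pool in pools.items():
        for exemplar in rng.sample(pool, per_category):  # no within-block repeats
            trials.append((category, exemplar))
    rng.shuffle(trials)
    return trials

# Hypothetical pools: 128 training exemplars per category, as described above.
pools = {c: [f"{c}_{i}" for i in range(128)] for c in "ABCD"}
block = draw_training_block(pools, rng=random.Random(0))
```

Because each block resamples from the full 128-exemplar pool, repeats across blocks are possible but improbable, matching the description above.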
Fig. 2 .
Fig. 2. Trial and Block Structure Across Experiments. A. Training trials with overt categorization decisions and immediate feedback. B. Generalization trials with novel sound exemplars not encountered in training, with no feedback. C. Training regimes (defined by the nature of training trials) differed across experiments, but all experiments were comprised of four cycles of 120 training trials (A) followed by 20 generalization trials (B). Note that generalization trials were identical across experiments.
; see Raviv et al., 2022 for review). Conversely, small numbers of training exemplars may lead to faster and more accurate learning, but poorer generalization. We test this hypothesis in Experiment 2 with a limited set of training exemplars, but with the same set of novel generalization exemplars as in Experiment 1.

4.1. Methods specific to experiment 2

Here, training involved only 40 exemplars (10 exemplars/category) randomly selected from the training pool of 512 training exemplars prior to experimentation and consistent among participants. Each exemplar was encountered 12 times across training to arrive at the same number of 480 training trials as Experiment 1. Participant demographics are in Table 1.
Fig. 3 .
Fig. 3. Experiment 1, 4AFC Full Exemplar Variability: Training and Generalization Accuracy by Block. The top panel represents training accuracy. The bottom panel shows generalization accuracy. Dashed lines represent chance and error bars reflect standard error of the mean. Each individual gray point represents an individual participant's mean accuracy and larger, colored symbols show mean across-participant accuracy.
Fig. 4 .
Fig. 4. Experiment 2, 4AFC Low Exemplar Variability: Training and Generalization Accuracy by Block. The top panel represents training accuracy. The bottom panel shows generalization accuracy. Dashed lines represent chance and error bars reflect standard error of the mean. Each individual gray point represents an individual participant's mean accuracy and larger, colored symbols show mean across-participant accuracy.
Fig. 5 .
Fig. 5. Experiment 3, 2AFC Pairs Grouped by Category-Diagnostic Band: Training and Generalization Accuracy by Block. The top panel represents training (2AFC) accuracy. The bottom panel shows generalization (4AFC) accuracy. Dashed lines represent chance and error bars reflect standard error of the mean. Each individual gray point represents an individual participant's mean accuracy, and larger, colored symbols show mean across-participant accuracy.
Fig. 6 .
Fig. 6. Experiment 4, 2AFC All Category Pairs: Training and Generalization Accuracy by Block. Top panel shows training accuracy, and bottom panel shows generalization accuracy. Dashed lines represent chance and error bars reflect standard error of the mean. Each individual gray point represents an individual participant's mean accuracy and larger, colored symbols show mean across-participant accuracy.
Fig. 7 .
Fig. 7. Experiment 5, 2AFC Pairs Grouped by Category-Diagnostic Band + Explicit Instructions: Training and Generalization Accuracy by Block. The top panel represents training accuracy. The bottom panel shows generalization accuracy. Dashed lines represent chance and error bars reflect standard error of the mean. Each individual gray point represents an individual participant's mean accuracy and larger, colored symbols show mean across-participant accuracy.
Fig. 8 .
Fig. 8. Generalization Progress and Achievement Across Training Regimes. Generalization of category learning was very robust. Training regime manipulations across experiments did not influence generalization progress from Block 1 to Block 4 (panel A), nor did they influence ultimate generalization achievement in Block 4 (panel B). Error bars indicate standard error.
Fig. 9 .
Fig. 9. Generalization Across Training Regimes, Bayesian Equivalence Testing. Each panel shows comparison of two Bayes Factors (BF) across experiments: the top number indicates the evidence that the difference lies within the equivalence region, and the bottom number indicates the evidence that the difference lies outside the equivalence region. A. BF results from Generalization Progress (Block 4 − Block 1 accuracy). B. BF results from Generalization Achievement (Block 4). For ease of interpretation, comparisons where BF > 4 are in bold font, and BF < 1 in italics.
Table 1
Participant demographics. Half of the exemplars for each condition (128/category/set) were reserved as the training stimulus pool, whereas the other half was reserved as a pool to test generalization. The 2048 stimuli selected for the present experiments are available on OSF.io.
1 Based on self-reported languages when asked to "List language(s) spoken before age 2."

C.O. Obasih et al.
Table 2
Training Protocols.
2-{[2-(1-Methyl-2,2-dioxo-3,4-dihydro-1H-2λ6,1-benzothiazin-4-ylidene)hydrazin-1-ylidene]methyl}phenol
In the title compound, C16H15N3O3S, the dihedral angle between the aromatic rings is 8.18 (11)° and the C=N—N=C torsion angle is 178.59 (14)°. The conformation of the thiazine ring is an envelope, with the S atom displaced by 0.8157 (18) Å from the mean plane of the other five atoms (r.m.s. deviation = 0.045 Å). An intramolecular O—H⋯N hydrogen bond closes an S(6) ring. In the crystal, weak C—H⋯O interactions link the molecules, with all three O atoms acting as acceptors.
MS acknowledges the Higher Education Commission of Pakistan for supporting PhD studies and the provision of a grant to strengthen the Materials Chemistry Laboratory at GC University Lahore, Pakistan.
Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: BT5989). In the synthesis of the title compound, 4-hydrazinylidene-1-methyl-3H-2λ6,1-benzothiazine-2,2-dione was reacted with 2-hydroxybenzaldehyde according to a literature procedure (Shafiq et al., 2011). The product obtained was then recrystallized from ethyl acetate under slow evaporation to obtain single crystals suitable for X-ray diffraction.
Refinement
The O-bound H atom was located in a difference map and its position was freely refined. The C-bound H atoms were placed in calculated positions (C-H = 0.93-0.97 Å) and refined as riding. The methyl group was allowed to rotate, but not to tip, to best fit the electron density. The constraint U iso (H) = 1.2U eq (C,O) or 1.5U eq (methyl C) was applied.
Figure 1
The molecular structure of (I), showing displacement ellipsoids at the 50% probability level. The hydrogen bond is shown as a double-dashed line.

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
Role of Theophylline on Renal Dysfunction of Asphyxiated Neonates
Renal dysfunction is one of the most common complications of neonatal asphyxia; it carries a poor prognosis and results in permanent renal damage in about 40% of survivors. The aim of this study is to determine whether theophylline could prevent or ameliorate renal dysfunction in term neonates with perinatal asphyxia. We assigned 82 severely asphyxiated term infants (Apgar score ≤ 5, Thompson > 15) into 2 groups to receive intravenously a single dose of either theophylline (5 mg/kg; n=41) or placebo (n=41) during their first 60 minutes of life. We measured serum creatinine, urine creatinine and urinary sodium, and GFR was calculated using the Schwartz formula on the 1st, 3rd and 5th days of life. The results showed a significant increase in GFR and urinary creatinine on the 5th day of life in the theophylline group compared to placebo, while there was a significant decrease in the levels of serum creatinine and urinary sodium in the theophylline group on the 1st, 3rd and 5th days compared to placebo. The complications associated with asphyxia were less frequent in the theophylline group than in the placebo group. We concluded that a single dose of intravenous theophylline may be considered for all babies with severe asphyxia to ameliorate kidney dysfunction and reduce other complications.
Introduction
Perinatal asphyxia is one of the most common neonatal problems and contributes significantly to neonatal morbidity and mortality [1]. It is the second most important cause of neonatal mortality after sepsis, accounting for about 30% of neonatal mortality worldwide [2]. Hypoxic-ischemic insult commonly leads to organ dysfunction after asphyxia [3]. One of the most common complications of birth asphyxia is renal dysfunction, which carries a poor prognosis and results in permanent renal damage in about 40% of survivors [4]. Acute kidney injury (AKI) occurs commonly in critically ill neonates and is associated with adverse outcomes. Studies show that renal adenosine acts as a vasoconstrictor metabolite in the kidney after hypoxemia, causing preglomerular vasoconstriction and postglomerular vasodilatation, resulting in a fall in glomerular filtration rate (GFR), mainly within the first 5 days of life. This preglomerular vasoconstriction can be suppressed by nonspecific adenosine receptor antagonists like theophylline [5]. Many previous trials across the globe have reported that theophylline is effective in improving kidney dysfunction in asphyxial renal injury. Despite the heterogeneity in study populations and the diagnostic criteria used to define asphyxia and renal failure in neonates, results of several studies involving asphyxiated newborns consistently demonstrate a common occurrence of renal dysfunction, ranging between 17% and 61%. Furthermore, the Kidney Disease Improving Global Outcomes 2012 (KDIGO) guidelines suggest that a single dose of theophylline can be given to newborns with severe perinatal asphyxia who are at high risk of acute kidney injury [6].
Despite the tremendous advancements in neonatal care in recent years, treatment of acute renal failure has remained essentially supportive, relying on maintaining fluid balance and homeostasis, correction of electrolyte disturbances, and management of associated hypotension, acidosis and hypoxemia to reduce renal vasoconstriction and improve renal perfusion. Nevertheless, several experimental studies have examined various pharmacological agents to prevent or ameliorate kidney damage after asphyxia. Theophylline has been demonstrated in several animal and human studies to improve kidney function in conditions associated with increased renal adenosine content, such as asphyxia. Moreover, the appropriate dose of theophylline therapy requires further investigation [1]. Thus, the aim of the study is to assess the efficacy of theophylline in the treatment of kidney dysfunction in neonates after asphyxia. A total of 82 patients met the inclusion criteria of perinatal asphyxia: 41 neonates were enrolled in the theophylline group and 41 in the placebo group. Infants eligible to be included in the study were of term gestation with a weight of more than 2500 g. All neonates underwent a full general examination for severe birth asphyxia manifested by any two of the following:
Patients and Methods
(a) An Apgar score of up to three at 1 minute or up to five at 5 minutes [7].
Infants were excluded if they were preterm, had congenital anomalies, had a history of maternal drugs causing neonatal depression, required mechanical ventilation, or had culture-positive sepsis.
All 82 infants enrolled in the study were divided into two groups: (1) a theophylline-treated group, which received a single dose of intravenous theophylline (5 mg/kg, 0.25 ml/kg) [4]; and (2) a placebo group, which received an equal volume of saline over a five-minute period within the first hour of birth.
Fluid intake and urine output were recorded every 24 h on days 1, 3 and 5 of neonatal life, alongside the following investigations: creatinine levels in serum and urine, and sodium excretion. The glomerular filtration rate (GFR) was also estimated on these days using Schwartz's formula: GFR (mL/minute/1.73 m²) = 0.45 × length (cm) / plasma creatinine (mg/100 mL) [9]. In addition, possible side effects of theophylline were followed up: vomiting, tachycardia, convulsions, hypokalemia and hyperglycemia [10].
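Schwartz's formula above is a direct ratio, so a minimal sketch in Python is straightforward. The constant k = 0.45 is taken from the formula as stated for term infants; the example values below are hypothetical.

```python
def schwartz_gfr(length_cm, plasma_creatinine_mg_dl, k=0.45):
    """Estimated GFR (mL/minute/1.73 m^2) via Schwartz's formula:
    GFR = k * length (cm) / plasma creatinine (mg/100 mL)."""
    return k * length_cm / plasma_creatinine_mg_dl

# Hypothetical example: a 50 cm infant with plasma creatinine 0.9 mg/100 mL.
gfr = schwartz_gfr(50, 0.9)
```

For the hypothetical values shown, the estimate works out to 0.45 × 50 / 0.9 = 25 mL/minute/1.73 m².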
Statistical analysis
Data analysis was performed using SPSS (Statistical Package for the Social Sciences) version 20. Quantitative variables were described using their means and standard deviations. Categorical variables were described using their absolute frequencies, and the chi-square test was used to compare proportions of categorical data. Kolmogorov-Smirnov (distribution-type) and Levene (homogeneity of variances) tests were used to verify assumptions for use in parametric tests. To compare means of two groups, the independent sample t test was used. A nonparametric test (Mann-Whitney) was used to compare means when data were not normally distributed and to compare medians in categorical data. To assess change in means over time, repeated measures ANOVA was used. The level of statistical significance was set at 5% (p < 0.05). A highly significant difference was present if p ≤ 0.001.
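As an illustration of the chi-square comparison of categorical proportions described above: the study ran its tests in SPSS, while the pure-Python sketch below computes only the Pearson statistic for a 2×2 table (not the p-value), and the counts used in the usage note are hypothetical.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, observed_row in enumerate(table):
        for j, observed in enumerate(observed_row):
            expected = row_totals[i] * col_totals[j] / n  # expected count under independence
            stat += (observed - expected) ** 2 / expected
    return stat
```

When the two groups have identical complication proportions (e.g., hypothetical counts [[10, 10], [20, 20]]), the statistic is 0, reflecting no association.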
Results
The demographic and clinical characteristics of the studied asphyxiated neonates are presented in Table 1; there was no statistically significant difference between the theophylline group and the placebo group regarding clinical data (∞ independent sample t test, § Mann-Whitney test, ¥ chi-square test).
The serum creatinine levels during the first five days of life in the two study groups of asphyxiated newborns showed a non-significant difference between both groups at baseline. Later on, highly statistically significant differences (p ≤ 0.001) were found between both groups in the values of serum creatinine on the 3rd and 5th days (Table 2).
The estimated urine creatinine during the first five days of life showed a significant difference between both groups on the 1st, 3rd and 5th days. On pairwise comparison, the difference was noticed only between readings on the 3rd day and those on both the 1st and 5th days in the theophylline-treated group, while in the placebo group it was between the first readings and those on the 3rd and 5th days; there was no difference between the 3rd and 5th days in either group (Figure 1). In addition, for urinary sodium measured across the five days, highly statistically significant differences were found between both groups on the 1st and 5th days. There are highly statistically significant differences within each group between values of urinary sodium over time (Figure 2). The GFR data calculated using the Schwartz formula revealed a statistically non-significant difference between both groups at baseline; later on, highly statistically significant differences (p ≤ 0.001) were found between the theophylline-treated group and placebo (31.8 ± 7.67 vs. 21.87 ± 4.11 and 41.09 ± 10.5 vs. 29.64 ± 4.58 ml/min/1.73 m² on the 3rd and 5th days, respectively). There are highly statistically
ISSN 2386-5180
significant differences within each group between values of GFR over time (Table 3). The occurrence of the complications among the two groups was recorded in numbers and percentages (Figure 3). The complications include acute kidney injury, encephalopathy, multi-organ failure and death. There was no significant difference in the occurrence of complications between the studied groups, but it was lower in the theophylline-treated group compared to placebo [11].
Discussion
The obtained results indicate that treating term neonates with severe perinatal asphyxia with a single dose of theophylline (5 mg/kg) in the first postnatal hour was beneficial. The theophylline-treated group showed a significant decrease in serum creatinine levels, with a significant increase in creatinine clearance and urinary creatinine excretion compared to the non-treated (placebo) group. Similarly, Eslami et al. [12] reported that serum creatinine levels were not significantly different between the theophylline and control groups on the 1st day, as in our study (p = 0.082); however, these levels significantly decreased in the infants of the theophylline group and significantly increased in the controls on the 3rd day (p < 0.001), which is similar to our findings (p < 0.001). Prophylactic prescription of theophylline as a single dose of 8 mg/kg could decrease serum creatinine and urinary β2-microglobulin and increase creatinine clearance [13,14]. In contrast, no significant difference in urine sodium excretion was reported in the study of Bhat et al. [5], but in our study there was a highly statistically significant difference in urinary sodium excretion, especially on the first day, due to the natriuretic effect of theophylline. Because of this natriuresis, urinary sodium excretion on the 1st day was significantly higher in the theophylline group than in the control group (p = 0.02), as in our findings (p < 0.001) [12]. The estimated GFR was significantly decreased in the control group, from 26.25 ± 4.46 on the first day to 21.78 ± 4.11 on the third day, compared to the theophylline-treated group. Bakr [13] and Raina et al. [4] reported similar results: a decrease in GFR from 31 ± 13.03 on day one to 20.87 ± 8.5 on day three. Jenik et al.
[14] reported that the severity of asphyxia and multi-organ involvement were similar between the two studied groups, while our study the development of complications such as multiorgan failure and encephalopathy were more less in theophylline group compared to placebo.
Conclusion
We concluded that therapeutic use of theophylline has the ability to overcome renal dysfunction associated with neonatal asphyxia.
Associations between maternal social support and stressful life event with ventricular septal defect in offspring: a case-control study
Background Previous studies suggested that maternal subjective feeling of stress seemed to be involved in the incidence of congenital heart disease in offspring. To better understand these findings, our study examined the relationships of maternal exposure to stressful life events and social support, which are more objective and comprehensive indicators of stress, around the periconceptional period with the risk of ventricular septal defect (VSD), the most common subtype of congenital heart disease. Methods A hospital-based case-control study was conducted from June 2016 to December 2017. We collected maternal self-reports of 8 social support questions in 3 aspects and 8 stressful life events among mothers of 202 VSD cases and 262 controls. Social support was categorized into low, medium high, and high (higher is better), and stressful life event was indexed into low, medium low, and high (higher is worse). Logistic regression models were applied to estimate adjusted odds ratios and 95% confidence intervals (95% CI). Results The adjusted odds ratio of high stressful life event was 2.342 (95% CI: 1.348, 4.819) compared with low stressful life event. After crossover analysis, compared with low event & high support, the adjusted odds ratios of low event & low support, high event & high support, and high event & low support were 2.059 (95% CI: 1.104, 3.841), 2.699 (95% CI: 1.042, 6.988) and 2.781 (95% CI: 1.033, 7.489), respectively. Conclusions In summary, we observed an increased risk of VSD when pregnant women were exposed to stressful life events; however, social support could, to some extent, reduce the risk associated with stressful life events.
Background
Over the past 60 years, the global incidence rate of congenital heart disease (CHD) has increased about 15 times, from 0.6‰ in 1930 to 9.1‰ in 1995 [1]. Similarly, it was reported that the prevalence of CHD has reached up to 11.1‰ in China [2]. Among CHD, ventricular septal defect (VSD) is the most common subtype, accounting for more than one fifth of all CHD subtypes [3].
To date, the etiology of CHD is still unclear. Both genetic factors and environmental exposures are involved in the occurrence of CHD. Previous studies indicated that maternal stress might be an important environmental exposure that could increase the risk of CHD [4,5]. For example, a hospital-based case-control study in China showed that maternal feeling of stress during pregnancy conferred almost a 4-fold risk (OR = 3.93) of CHD in offspring [4]. Considering that maternal feeling of stress is a subjective indicator, in order to reduce recall bias, some studies used more objective indicators such as stressful life events (a source of stress) and social support (a potential beneficial buffer against stressors) as alternative evaluations of maternal stress exposure [6][7][8]. Moreover, compared to taking CHD as a whole, a single type of CHD is a better model for exploring causes of or risk exposures for CHD.
To the best of our knowledge, only three studies have explored associations of social support and stressful life events with a single type of CHD [6][7][8]. These three studies focused on severe CHD, tetralogy of Fallot (TOF) and transposition of the great arteries (TGA) [6][7][8]; a relationship between maternal exposure to stressful life events during pregnancy and the risk of either TOF or TGA was not established, but social support was found to be associated with a decreased risk of TGA [8]. By contrast, studies of other birth defects, such as neural tube defects (NTDs), orofacial clefts, gastroschisis or hypospadias, in general consistently revealed that social support could decrease, while stressful life events could increase, the risk of birth defects [6][7][8][9][10][11]. Based on these contradictory findings, we speculated that complex CHD might be more strongly influenced by genetic factors. It is therefore imperative to explore the role of maternal stress in a single type of simple CHD.
This study was specifically designed to observe the associations of maternal exposure to stressful life event and social support around periconceptional period with the risk of VSD, the most prevalent subtype of simple CHD, in which both individual and combined effect modifications were examined.
Participants
A hospital-based case-control study was conducted in Shanghai Children's Medical Center from June 2016 to December 2017. Sample size was estimated by standard formulas [12]. As reported in previous studies, 12.1-41.1% of pregnant women experienced maternal stress during pregnancy, and the odds ratio of maternal stress in CHD ranged from 2.48 to 3.93 [4,5,8]. We used 0.15 and 2.5 as the estimates of P0 and OR, respectively. In total, 152 cases vs. 152 controls was the minimum sample size required to achieve appropriate statistical power (α = 0.05, β = 0.1).
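The sample-size formulas themselves are not reproduced in the text. As a sketch only, the standard two-proportion calculation for an unmatched 1:1 case-control design, with the stated inputs (P0 = 0.15, OR = 2.5, α = 0.05, β = 0.1), yields a figure close to the reported 152 per group; the exact formula variant used by the authors is an assumption.

```python
from math import ceil, sqrt
from statistics import NormalDist

def case_control_sample_size(p0, odds_ratio, alpha=0.05, beta=0.10):
    """Per-group sample size for an unmatched 1:1 case-control study
    (standard two-proportion formula)."""
    # Expected exposure prevalence among cases, derived from OR and control prevalence
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    p_bar = (p0 + p1) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(1 - beta)        # power = 1 - beta
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

n = case_control_sample_size(0.15, 2.5)  # approximately 150 per group
```

Small differences from the published 152 would be expected from rounding or a slightly different formula variant.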
In this study, 202 children with VSD and 262 control children without any birth defects were enrolled. The children in the control group were recruited from the pediatric patients admitted into the same hospital during the same period when the cases were recruited. Among the 262 controls, 132 came from pediatric respiratory medicine, 91 from pediatric general surgery, and 39 from pediatric gastroenterology. To avoid recall bias, all the children were younger than 2 years old.
VSD was defined based on clinical diagnosis and verification by ultrasound. According to the codes of the International Classification of Diseases, Tenth Revision, Clinical Modification, the main VSD types include isolated VSD or VSD with mild complications such as secundum atrial septal defects, coarctation of the aorta, patent ductus arteriosus, aortic valve stenosis, and pulmonary stenosis [13]. In the present study, among 202 cases, 85 were isolated VSD, and the other 117 were complicated with secundum atrial septal defects (n = 99), coarctation of the aorta (n = 8), patent ductus arteriosus (n = 23), aortic valve stenosis (n = 2), or pulmonary stenosis (n = 2).
The children with any of the following conditions were excluded from the study: (1) death of the mother; (2) mother diagnosed with a mental disorder; (3) infant diagnosed with complex CHD (tetralogy of Fallot, transposition of the great arteries, hypoplastic left heart syndrome, common truncus, common ventricle); and (4) inability to locate the mother for interview.
Procedure
Information on sociodemographic characteristics and parental health-related variables was retrospectively collected through the Parental Behaviors and Environmental Exposure Questionnaire (PBEQ). The women who signed written informed consent to participate and provided consent on behalf of their children were invited to take part in an interview and to fill in the PBEQ while their children were in hospital. For cases, the interview was conducted after pediatric cardiothoracic surgeons and fetal ultrasonologists had completed their evaluation and confirmed the final diagnosis of CHD. The detailed information of this case-control study has been described elsewhere [14].
The ethical application and consent procedure of this study were approved by the Ethics Committee of Shanghai Jiao Tong University School of Medicine (Approval number: SJUPN-201717).
Maternal characteristics
Variables regarding maternal characteristics were collected through in-person interviews based on the PBEQ. Parental ethnicity was categorized as Han ethnicity vs. others; maternal age at delivery was grouped into < 35 years vs. ≥ 35 years; maternal educational level was categorized into three groups: middle school and below, high school, and college and above; marital status was grouped as married vs. unmarried/divorced/widowed; maternal residence was categorized as urban vs. suburban/rural; maternal prepregnancy obesity (defined as body mass index > 28.0 kg/m² [15], calculated as weight in kilograms divided by height in meters squared based on prepregnancy weight and height) was categorized as yes vs. no; maternal multiple births was categorized as yes vs. no; infant gender was categorized as male vs. female; family history of CHD was categorized as yes vs. no; maternal prepregnancy diabetes/hypertension was categorized as yes vs. no; maternal smoking/drinking (defined as maternal previous history of smoking and/or drinking) was categorized as yes vs. no; and maternal folic acid use was categorized as yes vs. no.
Social support and stressful life event evaluation
Social support and stressful life events were assessed by a Social Support and Stressful Life Event Questionnaire (SSSLEQ, as shown in Table 4 in the Appendix), as a part of the PBEQ. The SSSLEQ was developed based on literature review and pilot studies [6][7][8][9], and includes two sub-scales to collect information regarding social support and stressful life events, respectively. The validity and reliability of the SSSLEQ were examined in our sampled participants, comprising all cases and controls; the Cronbach's alpha coefficient was 0.870 for the total questionnaire (0.891 for the subscale of social support, and 0.932 for the subscale of stressful life events), which indicates that the internal consistency is good and acceptable. The exploratory factor analyses revealed a 5-factor model explaining 58.8% of the total variance.
In the subscale of social support, eight questions were used to collect information regarding maternal social support around the periconceptional period. The eight questions were conceptually grouped into three aspects: social relationship (three questions), emotional support (two questions), and help with daily tasks (three questions). The response was rated on a 5-point scale (1 = none, 2 = rarely, 3 = some time, 4 = often, 5 = frequently). Individual social support was defined as "yes" if the response was scored 4 or 5, and as "no" if the response was scored 1-3. The social support index was then calculated by summing the count of "yes" responses, and was categorized as low, medium high, or high when the index was 0-4, 5-7, or 8, respectively, by tertile in all participants.
In the subscale of stressful life events, eight yes/no questions were used to collect whether the mother experienced the following stressful life events (financial problems, divorce/couple separated, husband violence, lost job, illness/injury of someone close, death of someone close, social relationship difficulty, accident/natural disaster). Each question was scored 1 if the response was "yes", or 0 if the response was "no". The stress index was then calculated by counting the number of "yes" responses, and was grouped into three levels (low, medium low, and high) if the index was 0, 1, or ≥ 2, respectively, by response percentile.
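The two scoring rules described above can be sketched directly; this is a toy illustration of the categorization, with function names of our own choosing:

```python
def social_support_index(responses):
    """responses: eight scores on the 1-5 scale; an item counts as
    support ("yes") if it was rated 4 (often) or 5 (frequently)."""
    count = sum(1 for r in responses if r >= 4)
    if count <= 4:
        return "low"
    return "medium high" if count <= 7 else "high"

def stress_index(events):
    """events: eight yes/no answers (True/False); the index is the
    count of "yes" responses, grouped 0 / 1 / >= 2."""
    count = sum(events)
    if count == 0:
        return "low"
    return "medium low" if count == 1 else "high"

# Example: six of eight items rated >= 4 falls in the 5-7 band
level = social_support_index([5, 4, 4, 5, 3, 2, 4, 4])  # "medium high"
```

The same counts, dichotomized as 0-4 vs. 5-8 (support) and 0-1 vs. ≥2 (events), feed the combined analysis described later in the Methods.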
Statistical analysis
Characteristics were described using numbers and percentages for categorical variables, and the Chi-squared test was used to compare differences between groups. Logistic regression analyses were then applied to examine the crude and adjusted associations of social support and stressful life events around the periconceptional period with VSD. The adjusted model controlled for maternal ethnicity, maternal age at delivery, maternal education, marital status, residence, maternal prepregnancy obesity, multiple births, infant gender, family history of CHD, maternal smoking/drinking, maternal diabetes/hypertension, and maternal folic acid supplementation. We also examined stressful life events and social support in combination, dichotomizing the social support index score as 0-4 versus 5-8 and the stressful life event index score as 0-1 versus ≥2 to indicate "low" or "high" levels of maternal social support and stressful life events.
In order to achieve consistency in maternal characteristics between cases and controls, the present study adopted propensity score matching to balance the characteristic differences between cases and controls [16,17]. A multivariate logistic regression model was developed to estimate the propensity score, in which all potential confounding variables related to VSD were included. In this procedure, logistic regression was conducted on the group indicator, and the resulting propensity score was then used to select controls for cases. The strength of propensity-score-adjusted analysis lies in taking all covariates, along with their interactions, into account as one covariate [16,17]. In the propensity-score-matched analysis, controls were matched to cases based on a greedy nearest neighbor matching algorithm on the propensity score with a caliper of 0.05.
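The greedy nearest-neighbor step can be illustrated as follows; this is a simplified sketch operating on precomputed propensity scores, and SPSS's exact processing order and tie-breaking may differ:

```python
def greedy_match(case_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity score,
    without replacement. Returns (case_index, control_index) pairs."""
    available = dict(enumerate(control_ps))   # controls still unmatched
    pairs = []
    for i, ps in enumerate(case_ps):
        if not available:
            break
        # Greedy: each case takes its currently nearest control...
        j, cps = min(available.items(), key=lambda kv: abs(kv[1] - ps))
        # ...but only if it lies within the caliper (0.05 in the text)
        if abs(cps - ps) <= caliper:
            pairs.append((i, j))
            del available[j]                  # matched without replacement
    return pairs

pairs = greedy_match([0.30, 0.62], [0.28, 0.90, 0.60])  # [(0, 0), (1, 2)]
```

Because matching is without replacement and within a caliper, some cases can remain unmatched, which is consistent with the authors' note that only 168 controls were matched.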
A statistical significance level was set at p value < 0.05. All analyses were performed with the Statistical Package for the Social Sciences (SPSS) (IBM-SPSS Statistics v24.0, Inc. Chicago, IL).
Participant characteristics
A total of 464 participants were enrolled in this study (202 VSD cases vs. 262 controls). Characteristics of VSD cases and controls are shown in Table 1. Differences between VSD cases and controls were seen in the following three variables: residence, infant gender and folic acid supplementation (all p < 0.05). After propensity score matching, all the characteristics were balanced between VSD cases and controls.
Social support and stressful life event
The detailed information on social support and stressful life events for cases vs. controls is shown in Table 2.
With respect to social support, the control group, compared to cases, more frequently responded "yes" to the following four items: "Did anyone care about you?", "Did anyone give you emotional support?", "Did anyone give you a suggestion?", and "Did anyone help you with your housework?" (all p < 0.05). After propensity score matching, except for the response to "Did anyone care about you?", the responses to the other three items remained different. For the social support index, a difference existed between cases and controls both before and after propensity score matching (both p < 0.05). Regarding stressful life events, compared with controls, cases more often responded "yes" for financial problems and husband violence (both p < 0.05). After propensity score matching, the response for financial problems remained different. For the stressful life event index, a difference was shown both before and after propensity score matching (both p < 0.05).
The associations between social support/stressful life event and VSD
Table 3 depicts the associations of social support and stressful life events with VSD. It was shown that social support could decrease the risk of VSD: an increased level of the social support index was related to a declining risk of VSD after controlling for possible confounders (aOR = 0.523, 95% CI: 0.283, 0.967 for the social support index being medium high; aOR = 0.510, 95% CI: 0.321, 0.854 for the index being high). After propensity score matching, a similar tendency was observed, although significance remained only for the high level of the index.
By contrast, stressful life events could increase the risk of VSD: a higher level of maternal stressful life events might increase the risk of VSD in offspring. The risk gradually increased as maternal exposure to stressful life events rose from medium low (OR = 1.204, 95% CI: 0.768, 1.886) to high (OR = 2.257, 95% CI: 1.300, 3.916). In the adjusted model, a similar trend was found (aOR = 1.190, 95% CI: 0.643, 2.201 for the stressful life event index being medium low; and aOR = 2.342, 95% CI: 1.138, 4.819 for the index being high). When the analyses were repeated after propensity score matching, very similar results were obtained.
According to the crossover analysis of social support and stressful life events (as shown in Fig. 1), compared with low event & high support, the adjusted odds ratios of low event & low support, high event & high support, and high event & low support were 2.059 (95% CI: 1.104, 3.841), 2.699 (95% CI: 1.042, 6.988), and 2.781 (95% CI: 1.033, 7.489), respectively.
Discussion
Previous studies indicated that maternal stress around the periconceptional period could be involved in the incidence of CHD in offspring [4,5]. However, the findings were not always consistent and, in some cases, could not be replicated in single types of CHD [6][7][8], considering that most previous studies mainly focused on severe CHD, such as TOF and TGA, which might be caused by more complicated factors. This study is the first to analyze the role of maternal exposure to social support, stressful life events and their interactions in the risk of VSD, the most prevalent subtype of simple CHD. We found that maternal social support around the periconceptional period was associated with a decreased risk of VSD, with some dose-effect relation observed: the more social support the pregnant women received, the lower the risk of VSD their children would have. It was also demonstrated that maternal exposure to more stressful life events was associated with an increased risk of VSD. The crossover analysis further revealed that social support could reduce the risk associated with stressful life events.
To our knowledge, to date only three previous studies have explored the risk of maternal exposure to stressful life events in the incidence of TOF and TGA, and none of them established a relationship [6][7][8]. By contrast, we found that maternal exposure to two or more stressful life events could increase the risk of VSD in offspring. TOF and TGA, compared to VSD, involve more complicated cardiovascular malformations; we speculate that more severe exposures, gene defects/abnormalities, and their interactions would be implicated in triggering these defects [18]. Another potential explanation is that, compared to the three studies, this study collected different and more numerous stressful life events. We did observe a tendency toward higher risk with more exposure to stressful life events. The three studies mentioned above provide some support for our speculation. For example, one of them, using data from the National Birth Defects Prevention Study in the USA, demonstrated a high risk of TOF (OR = 3.1) among women who experienced 6-7 stressful life events, although the 95% confidence interval included one (0.8-12.2) [8]. However, maternal exposure to fewer than 6 stressful life events did not show a risk of TOF (ORs ranged from 0.7 to 1.1) [8]. Considering that only 3 women among 311 TOF cases were exposed to 6-7 stressful life events during the periconceptional period, we assume that the effect size would become statistically significant with a larger sample. Some other studies suggested that subjective mental stress around the periconceptional period was associated with an increased risk of CHD, with maternal stress dichotomized into yes or no [4,5]. We found that the impact of stressful life events was smaller than that of subjective mental stress; however, for particular stressful life events such as bereavement, the effect size appeared similar to that in our study.
A registry-based study in Denmark reported that prenatal exposure to bereavement increased the risk of CHD in offspring (OR = 2.4, 95% CI: 1.4, 4.2) [19].
The protective effect of social support on VSD in our study was quite similar to that in one previous study of TGA [8]. Both revealed that good social support and assistance around the periconceptional period would be helpful to reduce the risk of abnormal cardiac morphogenesis by approximately 50%. A number of studies conducted on other birth defects also provided evidence that more social support was associated with a reduced risk of birth defects, including NTDs, gastroschisis and hypospadias [8,9,11]. When examining the combined influence of stressful life events and social support, our findings suggested that women who were exposed to the most stressful life events and the lowest social support during the periconceptional period had the largest risk estimate for VSD, and that more social support could mildly modify the association.
Taking our findings together, pregnant women who experience stressful events during the periconceptional period should be given high priority in antenatal care, since undesirable and uncontrollable negative experiences could increase the risk of VSD in offspring. In daily life, the most important effort is to improve family and social support, helping these women receive more love, assistance and encouragement, both emotionally and financially. To support a healthy birth and good care of the newborn, it is important for obstetricians to evaluate maternal social support and stressful life events during prenatal examinations. They might alert pregnant women with high mental stress and low social support to the risk of cardiovascular malformations in their children.
The exact mechanisms underlying the role of social support or stressful life event in CHD or VSD are still unknown. Several mechanisms were proposed, including thrombotic, inflammatory, or endocrine pathways. For example, it was reported that exogenous corticosteroid use during pregnancy could pose a small increased risk of birth defects [20]. There might be the possibility that increased production of corticosteroids in response to maternal stress exposure may play a role in VSD of offspring [21]. Moreover, there is an increasing evidence for the transgenerational impact of early-life experiences and the involvement of epigenetic pathways in these effect modifications [22].
The strengths of our study include its specific case group comprising the most frequent single type of CHD, more detailed information regarding social support and stressful life events, dose-effect assessment of social support and stressful life events, stratified analysis of stress and social support in combination, adjustment for several potential confounders, and propensity score matching to balance differences between cases and controls. Our assessment of social support and stressful life events was much more comprehensive (a total of 16 questions) than that of any previous study exploring associations of stress with birth defects (up to 10 questions) [8,9]. The biggest strength of our study is that social support may provide a beneficial buffer against the negative impact of stress; few other studies took this into consideration, and their results were not as pronounced as ours [7][8][9]. Considering possible selection bias and recall bias, we limited the children to 2 years of age; most other studies did not do better, for example, one previous case-control study limited the children to 7 years old [4]. Meanwhile, we particularly chose to focus on questions related to concrete major life events rather than subjective feelings of maternal stress.
However, our study still has some limitations. We did not particularly focus on specific stressful events, and there is the possibility that some types may be more stress-inducing. Recall bias and selection bias are inevitable in a case-control study. A hospital-based study has inherent weaknesses, since hospital-based cases may not represent the total distribution of CHD occurring in the local population. Although propensity score matching balanced the residence of the enrolled families, as a result of sample restrictions we matched only 168 controls. Due to these limitations, the findings require further confirmation by studies with larger sample sizes or prospective longitudinal designs on particular stressful events.
Conclusions
This study, for the first time, observed an increased risk of VSD among mothers who reported more stressful life events and a decreased risk among mothers who received more social support around the periconceptional period. Moreover, social support could reduce the risk associated with stressful life events. The impact of maternal social support and stressful life events on risks of CHD has been studied much less than the impact on risks of other birth defects, such as NTDs [8]. Against the background that CHD has become the most prevalent birth defect, of which VSD ranks first, the findings of our study have important clinical and public health implications for the control of birth defects. Due to the retrospective design of our study, prospective longitudinal studies are needed to provide further and enriched evidence.
Abbreviations
aOR: Adjusted odds ratio; CHD: Congenital heart disease; CI: Confidence interval; NTDs: Neural tube defects; OR: Odds ratio; PBEQ: Parental behaviors and environmental exposure questionnaire; SSSLEQ: Social support and stressful life event questionnaire; TGA: Transposition of great arteries; TOF: Tetralogy of Fallot; VSD: Ventricular septal defect
Authors' contributions
The authors contributed to study concept and design, and drafting and revision of the manuscript. All authors read, corrected and approved the manuscript.
Funding Shenghui Li was funded by grants from National Natural Science Foundation of China (81874266, 81673183) in the design of the study, key project from Shanghai Municipal Science and Technology Commission (18411951600) in the analysis, interpretation of data, the Science and Technology Funds from Pudong New Area, Shanghai (PKJ2017-Y01) in the data collection, the Research Funds from Shanghai Jiao Tong University School of Medicine (20170509-1) in the data collection and the manuscript writing, and the Scientific Research Development Funds from Xinhua Hospital, Shanghai Jiao Tong University School of Medicine (HX0251) in the data collection and the writing of manuscript.
Availability of data and materials
The datasets analyzed during the current study are not publicly available due to the protection of patients' information but are available from the corresponding author on reasonable request.
Ethics approval and consent to participate
The ethical application and consent procedure of this study were approved by the Ethics Committee of Shanghai Jiao Tong University School of Medicine (Approval number: SJUPN-201717). All eligible infants and their mothers were invited to participate in the study. Only those mothers who had signed a written informed consent and given written consent for their children took part in the study and filled in the parental behaviors and environmental exposure questionnaire (PBEQ).
Consent for publication
Not applicable.
Association of atrial fibrillation with diabetic nephropathy: A meta-analysis
Background: Many studies have provided evidence for an increased risk of atrial fibrillation among diabetic patients as compared to the nondiabetic population. It is also well known that diabetes predisposes a person to an increased risk of diabetic nephropathy. A few reviews and studies have hinted at an increased risk of atrial fibrillation among diabetic nephropathy patients; however, there is no concrete evidence at present. Aim: To conduct a meta-analysis to explore whether there is an association between diabetic nephropathy and atrial fibrillation. Methods: The available literature was searched for relevant studies from the period of January 1995 to November 2020. The following quality assessment criteria were considered for study shortlisting: clearly defined comparison groups, the same outcome measured in both comparison groups, known confounders addressed, and a sufficiently long and complete (more than 80%) follow-up of patients. Two independent reviewers searched the databases, formed their search strategies, and finalized the studies. The data were analyzed to obtain a summary odds ratio along with a forest plot using Cochrane's RevMan 5.3. Results: Only four studies were found to meet the inclusion criteria for this meta-analysis (total number of study participants: 307330; diabetic nephropathy patients: 22855). Of these, two were retrospective cross-sectional studies, one was a prospective cohort study, and one was a case-control study. Three studies provided the odds ratio as the measure of effect (two retrospective cross-sectional studies and one case-control study), with the one cohort study reporting the hazard ratio as the measure of effect. Therefore, the meta-analysis was done excluding the cohort study. The summary odds ratio in the present study was 1.32 (0.80–2.18), which was not statistically significant.
Due to large heterogeneity among the included studies and their small sample sizes, it was found that the summary estimate shifted towards the null value. Conclusion: The present meta-analysis found no significant association between atrial fibrillation and diabetic nephropathy. However, more studies with large sample sizes are required to strengthen the evidence for an association.
Introduction
Atrial fibrillation (AF), a type of arrhythmia, has been recognized as a leading cause of morbidity and mortality. Around 5 million new cases of AF are estimated to occur every year according to the Global Burden of Diseases Study. [1] AF has been identified as a potential risk factor for stroke and other cardiovascular diseases (CVDs), which are well-established causes of high mortality globally. [2] Therefore, AF indirectly contributes to the morbidity and mortality caused by CVD, which calls for more attention in research.
The next important step is to control this epidemic. For this, we must understand and recognize the risk factors and predictors associated with AF. The first attempt to identify these independent risk factors was carried out three decades ago; factors such as age, hypertension, diabetes mellitus, coronary and valvular heart diseases, and congestive heart failure were identified as independent predictors of AF in the Framingham Heart Study (one of the largest cohort studies). [3] Diabetes mellitus is known to raise the risk for AF 1.24 times as compared to the general population without diabetes. [4,5] Furthermore, diabetes mellitus increases susceptibility to micro- and macrovascular complications like nephropathy, neuropathy, retinopathy, and CVDs. [6,7] The presence of microvascular complications can then increase the risk of macrovascular complications among the diabetic population, and most of these complicated diabetic patients receive care in primary health care settings and outpatient clinics. Only a few studies have thus far explored such associations, such as those documenting an association of diabetic nephropathy and retinopathy with stroke, valvular heart diseases, and AF, to name a few. [8,9] Therefore, many independent reviews and meta-analyses have recognized a link between diabetes mellitus and AF [4,5]; however, few studies have explored additional risk factors for AF among the diabetic population. Age, sex, body mass index (BMI), systolic and diastolic blood pressure, and lipid levels have commonly been identified as risk factors leading to an increased risk of AF among diabetic patients. [2,4,9,10] Only a few studies have explored the association between diabetic nephropathy and AF. [11][12][13][14][15] Hence, it is imperative to examine this association to further understand the burden of morbidity and mortality caused by AF due to nephropathy among diabetic patients.
Considering this gap in knowledge, the objective of this study was to perform a systematic review and meta-analysis to address whether diabetic nephropathy leads to an increased risk of AF compared with diabetic patients without nephropathy. The primary outcome measure was the combined odds ratio (OR) from the studies included in the meta-analysis.
Literature search
This meta-analysis was conducted as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. [16] The following data sources were searched for all cross-sectional, cohort, and case-control studies: PubMed, EMBASE/Excerpta Medica, the Cochrane Central Register of Controlled Trials, Google Scholar, and reference lists. Search strategies were independently designed and performed by two separate investigators. We used the following MeSH terms or keywords in different combinations and permutations for searching studies from January 1995 to November 2020 in advanced PubMed search: "Atrial fibrillation," "microalbuminuria," "macroalbuminuria," "overt proteinuria," "diabetic nephropathy," "diabetes mellitus," and "risk factors for atrial fibrillation."
Study selection criteria
The search strategies described above provided a list of studies. The titles and abstracts of all the retrieved studies were screened independently by two authors. Irrelevant studies were discarded in the first pass. The full-text versions of the shortlisted studies were then analyzed for the presence of a measurable outcome variable in terms of incidence, risk ratio (RR), OR, or proportions of AF among diabetic nephropathy patients and diabetic patients without nephropathy, depending upon the study design. We also took care not to overestimate the measure of effect between the exposure of interest and the outcome; therefore, we chose to include studies with a sample size of at least 100 in both groups. We did not impose any restrictions on the language of the articles, as most of the articles could be translated with the Google Translate tool and most of the journals supported language conversion.
Data extraction procedure
Data were extracted independently by the same reviewers using a standard data extraction matrix in Excel. The abstracted data pertained to the name of the author along with the year of publication, study design, study period, type of diabetes addressed in the study, age of participants, percentage of male participants, the total number of events of interest, the OR/RR of AF among diabetic patients with/without nephropathy, the measure of effect and its 95% confidence interval (CI), and potential confounders addressed in the study. In a few studies where the full text was not available and the abstract provided only the measure of effect (OR/RR), the data were entered directly into RevMan using the generic inverse variance option; such data were also utilized for this analysis. Where more than one publication of one cohort study existed, only the publication with the complete dataset was used for the meta-analysis. Disagreements in data extraction were resolved by discussion among the authors.
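For a study reporting only an OR with its 95% CI, the inputs for the generic inverse variance option (the log effect estimate and its standard error) can be recovered directly from the CI. A minimal sketch (the helper name is ours; the numbers are for illustration only):

```python
import math

def log_or_and_se(or_value, ci_low, ci_high, z=1.96):
    """Convert a reported OR with 95% CI into the log(OR) and its
    standard error, as needed for generic inverse variance data entry:
    SE = (ln(upper) - ln(lower)) / (2*z)."""
    return math.log(or_value), (math.log(ci_high) - math.log(ci_low)) / (2 * z)

# Illustrative example: an OR of 1.66 with 95% CI 1.03-2.66
log_or, se = log_or_and_se(1.66, 1.03, 2.66)
print(f"log OR = {log_or:.3f}, SE = {se:.3f}")
```

A quick consistency check on such inputs is that the reported OR should equal the geometric mean of the CI bounds; a large discrepancy suggests a transcription error in the source paper.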
Further, we assessed the bias in the studies based on Cochrane's guidelines.
Statistical analysis
The data were analyzed using RevMan 5.3. The risk of AF was summarized using either the RR with 95% CI for cohort studies or the OR with 95% CI for cross-sectional studies and case-control studies. It was decided to present the results of the cohort and case-control studies separately as they have different measures of effect.
Data from all studies comparing the RR or OR for AF among the two groups were pooled in the meta-analysis using the generic inverse variance method with the random-effects model. Heterogeneity of outcome measures between studies was examined using the Cochran Q and I² statistics.
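The pooling procedure can be sketched as follows. This is a simplified DerSimonian-Laird random-effects implementation with Cochran's Q and I², not RevMan's internal code, and the three study ORs and CIs below are hypothetical placeholders, not the values of the included studies:

```python
import math

def pooled_or_random_effects(ors, ci_los, ci_his):
    """DerSimonian-Laird random-effects pooling of odds ratios.
    Each study contributes log(OR) with variance recovered from its
    95% CI: se = (ln(hi) - ln(lo)) / (2 * 1.96)."""
    y = [math.log(o) for o in ors]
    se = [(math.log(h) - math.log(l)) / (2 * 1.96) for l, h in zip(ci_los, ci_his)]
    w = [1 / s**2 for s in se]                       # fixed-effect inverse-variance weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q and I^2 heterogeneity statistics
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    k = len(y)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    # Between-study variance tau^2 (DerSimonian-Laird estimator)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights and pooled estimate on the log scale
    wr = [1 / (s**2 + tau2) for s in se]
    mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se_mu = math.sqrt(1 / sum(wr))
    return (math.exp(mu),
            math.exp(mu - 1.96 * se_mu),
            math.exp(mu + 1.96 * se_mu),
            q, i2)

# Hypothetical ORs and 95% CIs for three studies (illustration only)
or_, lo, hi, q, i2 = pooled_or_random_effects(
    [0.9, 1.6, 1.8], [0.6, 1.1, 1.2], [1.35, 2.33, 2.7])
print(f"pooled OR {or_:.2f} ({lo:.2f}-{hi:.2f}), Q={q:.2f}, I2={i2:.0f}%")
```

With these placeholder inputs the pooled CI crosses 1, illustrating how a single small, imprecise study can pull the summary estimate toward the null.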
Study selection
The combined literature search identified 3526 studies that contained the MeSH terms either in the title or abstract. After reviewing the titles, we included 160 studies for abstract review. Finally, only four studies matched the inclusion criteria. Of these four studies, the full text was available for three, from which data could be easily extracted. [11,14,15] For the remaining study, only the abstract was available; the relevant data reported in the abstract were utilized for the analysis. [12] The excluded studies were rejected on various grounds, as shown in Figure 1.
The characteristics of the four shortlisted studies are presented in Table 1. Of these four studies, one was a longitudinal cohort study, published as part of the Swedish National Diabetes Registry and carried out by Zethelius et al. in 2015. [15] As it was a cohort study, the authors used Cox regression to estimate the hazard ratio (HR) of AF among diabetic patients with and without nephropathy. The two studies by Ananthapanyasut et al. [11] and Papa et al. [14] were retrospective cross-sectional studies where the measure of effect used was the OR. The fourth study was a case-control study carried out by Dahlqvist et al. [12] in 2017, which reported the OR of AF among the two exposure groups (raw data were not available for this study). None of the studies addressed bias adequately.
Study analysis
Keeping in view the different study designs and corresponding measures of effect presented in these studies, we decided to analyze the studies presenting the OR separately from the study where the HR was calculated as the outcome measure. Therefore, only the three studies reporting the OR could be included in the meta-analysis to estimate the summary OR. As there was only one cohort study, a meta-analysis of cohort data was not possible; thus, we report the results of this study as such.
All included studies were conducted after 2009. The two studies by Zethelius et al. [15] and Dahlqvist et al. [12] pertained to the Swedish population. The other two studies represented sample populations from the USA [11] and Italy. [14] The total diabetic population in these four studies was 127620 patients. The total number of reported AF cases in three studies was 4820 (data missing for Dahlqvist et al.). The summary results of the meta-analysis of the three studies are shown in Figure 2.
The risk of having AF among diabetic patients with nephropathy was found to be 1.21 times (1.08-1.38) greater as compared to diabetic patients without nephropathy in the study by Zethelius et al., [15] which was statistically significant.
Discussion
To the best of our knowledge, this is the first meta-analysis to assess the association between AF and diabetic nephropathy. The present study included the findings of three cross-sectional/case-control studies and one cohort study, which reported the OR/RR of AF in diabetic patients with and without nephropathy. [11,12,14,15] These four studies had a combined sample size of 127620 and included both Type 1 and Type 2 diabetes mellitus patients. As summarized in Table 1, the authors of these studies primarily examined the association of AF with nephropathy among the diabetic population after adjusting for various confounders.
The summary OR in this meta-analysis was 1.32 (0.80-2.18), which is not statistically significant. There were two reasons for this. First, we could not include the large study by Zethelius et al. [15] in this pooled estimate due to its different study design and measure of effect. Secondly, of the three included studies, the study by Ananthapanyasut et al. [11] had a small sample size, and its results were insignificant. Therefore, the pooled estimate of the present study was pulled towards the value of 1, which supports the null hypothesis. This may be considered a limitation of this meta-analysis. If the study by Ananthapanyasut et al. [11] is excluded from this meta-analysis, the pooled OR becomes 1.66 (1.03-2.66), indicating that the risk of AF among patients with diabetic nephropathy was 1.66 times higher as compared to diabetic patients without nephropathy, thereby demonstrating the effect of sample size on the summary estimate.
This study has some potential limitations. A significant limitation of this meta-analysis pertains to the limited number of studies available in this research area; thus, the overall sample size remained small for a summary measure estimation. Secondly, the study designs of the included studies differed. Of the four studies, two were retrospective cross-sectional studies, which weakened the power of the estimate, as cross-sectional studies are considered epidemiologically weak in providing evidence for associations. Ideally, we would have included only cohort or case-control studies, but due to the lack of relevant studies, we decided to include cross-sectional studies as well. The third limitation pertains to the large heterogeneity among the studies, which was attributed to the small sample sizes, varied study designs, and different populations.
Despite the various limitations, the current meta-analysis was able to highlight many research issues that are important for controlling the rising number of AF cases worldwide. Large sample prospective studies like the Framingham study, the Malmo study, and the Atherosclerosis Risk in Communities (ARIC) study have explored multiple risk factors like advancing age, hypertension, obesity, smoking, previous chronic heart failure, myocardial infarction, and valve disease. [3,9,17,18] Similar findings were reported by Zethelius et al. [15] In addition, a few studies have pointed towards a positive association of HbA1c with AF, although this is not a well-established risk factor. [15] An association between HbA1c and albuminuria as a result of diabetic microangiopathy has long been established, as has the link between the presence of both diabetes and albuminuria and the risk of CVD. [7] Therefore, there is some evidence for a relationship between HbA1c and AF, which should be further explored in prospective cohort studies. Another important observation is that, like other non-communicable diseases, AF shares many common risk factors, which can be addressed simultaneously. [9,10,18] Therefore, it is better to identify more of the risk factors for AF, such as diabetic nephropathy or the role of HbA1c in diabetic nephropathy, so that a multipronged strategy can be used to contain the rising burden of this disease.
Aside from the study by Zethelius et al., [15] we could not find any other prospective study that reported an association between AF and diabetic nephropathy. In conclusion, the results of this meta-analysis do not show an increased risk of AF among patients with diabetic nephropathy. However, taking into account the study limitations, future research should be directed at further exploring a potential association between AF and diabetic nephropathy using large sample-sized prospective cohort studies. [9,10,18]

Author contributions

Arnous MM and Al Sayed KA conceived the study aims and design, made decisions on inclusion and exclusion of the articles, and contributed to the data extraction; Al Dalbhi SK and Al Saidan AA contributed to the study design, planned the analysis, interpreted the results, drafted the final version of the manuscript, and gave final approval of the version to be published; Balghaith MA and Al Tahan TM reviewed the manuscript critically for important intellectual content and epidemiological aspects.
PRISMA 2009 checklist statement
The authors have read the PRISMA 2009 Checklist, and the manuscript was prepared and revised according to the PRISMA 2009 Checklist.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
"year": 2022,
"sha1": "2930add23d373d68ca843fa906ffbdc4f47ee922",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/jfmpc.jfmpc_577_21",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db09fe8a958df58e34d0db469fab772962f7e806",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Spontaneous Currents in Josephson Devices
The unconventional Josephson coupling in a ferromagnetic weak link between d-wave superconductors is studied theoretically. For strong ferromagnetic barrier influence, the unconventional coupling, with ground state phase difference across the link $0<\phi_{\rm gs}\leq \pi$, is obtained at small crystal misorientation of the superconducting electrodes, in contrast to the case of normal metal barrier, where it appears at large misorientations. In both cases, with decreasing temperature there is an increasing range of misorientations, where $\phi_{\rm gs}$ varies continuously between 0 and $\pi$. When the weak link is a part of a superconducting ring, this is accompanied by the flow of spontaneous supercurrent, of intensity which depends (for a given misorientation) on the reduced inductance $l=2\pi LI_c(T)/\Phi_0$, and is non-zero only for $l$ greater than a critical value. For $l\gg 1$, another consequence of the unconventional coupling is the anomalous quantization of the magnetic flux.
I. INTRODUCTION
Magnetic scattering effects and d-wave pairing in superconductors may have similar manifestations, such as π-phase states and spontaneous currents in Josephson devices. 1 The phase of the superconducting order parameter as a function of the momentum-space direction is different for d-wave and s-wave pairing states, since the latter has a single phase, whereas the d-wave state exhibits jumps of π at the (110) lines. 2 The Josephson current through a junction depends on the phase difference φ between the superconductors on either side, I ∼ sin φ, and a negative sign of the prefactor of I can be regarded as an additional phase shift of π. Superconducting rings containing an odd number of such π shifts have the spontaneous magnetization ±Φ 0 /2, and quantized flux of (n + 1/2)Φ 0 , where n is an integer, providing that the product of the critical current and the self-inductance of the ring is I c L ≫ Φ 0 /2π. 3 Besides d-wave pairing, two alternative mechanisms for π-phase shifts at a conventional junction have been proposed. One is the spin-flip scattering in the junction barrier containing paramagnetic impurities, 4 and the other is the indirect tunneling through a localized state, in which correlation effects produce a negative Josephson coupling between two superconducting grains. 5 A problem related to the first mechanism is the possibility of π-coupling in superconductor/ferromagnetic metal (S/F) weak links and multilayers, due to the presence of the exchange field in F. [6][7][8][9] The characteristic oscillations of the critical temperature T c of multilayers as a function of the ferromagnetic layer thickness d F , predicted theoretically by Radović et al. 7 were observed in Nb/Gd multilayers by Strunk et al. 10 and by Jiang et al. 11 and in Nb/CuMn (superconductor/spin-glass multilayers) by Mercaldo et al. 12 While the oscillatory T c behavior was interpreted in terms of π-coupling in Refs. 11 and 12, in Ref. 10 it was attributed to the change from paramagnetic to ferromagnetic state with increasing d F . Recently, double minimum T c oscillations were observed on Nb/Co and V/Co multilayers by Obi et al., 13 and the second minimum was found to be consistent with the appearance of the π-phase.
In connection with the second mechanism, it was predicted that the anomalous flux quantization may appear with 50% probability in rings made from either disordered superconductors, or granular superconductors doped with paramagnetic impurities. The latter possibility was tested recently on uniform Mo rings doped with Fe, but the absence of the effect of paramagnetic impurities on flux quantization was found. 14 This may be due to the fact that the phase shifts induced by the impurities do not survive averaging over different electron paths around the ring. It was suggested that a loop with nanometer size point contact, doped with magnetic impurities, might satisfy the requirement that electrons circling the loop are subject to the same exchange potential.
The Josephson effect in weak links with ferromagnetic metal barrier can be the direct probe of unconventional coupling. Tunneling properties of s-wave junctions with ferromagnetic insulator barrier have been studied theoretically by Kuplevakhskii and Fal'ko, 15 De Weert and Arnold, 16 and more recently, by Tanaka and Kashiwaya. 17 In the latter work the possibility of π− shift was obtained. Tanaka and Kashiwaya studied separately the Josephson effect in anisotropic superconductor junctions with any symmetries and with a nonmagnetic insulating barrier. 18 This reference provides a general result from which several existing theories can be derived, and contains an extensive list of relevant references. Short weak links and d-wave junctions with normal metal barrier were studied by Barash et al. 19,20 These authors find an essentially nonharmonic current-phase relation I(φ) at low temperature, and a flow of spontaneous supercurrents in superconducting ring interrupted by π− junctions. Recently, Fogelstrom et al. have considered pinhole junctions in d-wave superconductors and have shown that at low temperature the ground state phase difference at the junction may vary continuously between 0 and π. 21 In the present work, we show how new possibilities for unconventional coupling in S/F/S Josephson devices result from the combined effects of exchange field and d-wave pairing. We consider a Josephson weak link in the clean limit, with thin and short ferromagnetic (or normal) metal barrier. The interfaces between the barrier and the superconducting electrodes are assumed fully transparent. Two-dimensional (2D) superconductivity and d x 2 −y 2 symmetry of the order parameter are considered. 22 Assuming the barrier perpendicular to the a − b planes of superconducting electrodes, the influence of their relative orientation is studied. For comparison some results for 3D s-wave case are included. 
In Section II, we give a brief overview of the quasiclassical theory of superconductivity in presence of exchange field and for anisotropic pairing interaction, which we apply in the following. In Section III, we calculate the quasiclassical Green's functions for the weak link to obtain the supercurrent as a function of the phase difference at the contact. When the weak link is a part of a superconducting ring, the ground state phase difference is calculated and the conditions for the flow of spontaneous supercurrent and anomalous flux quantization are obtained. Section IV contains a discussion of the results and a brief conclusion.
II. QUASICLASSICAL EQUATIONS
A microscopic approach powerful enough to deal with superconductivity in restricted geometries is the Eilenberger quasiclassical theory of superconductivity. 23 To calculate the critical current of a d-wave S/F/S contact, we generalize the method developed for the superconductor/normal metal/superconductor (S/N/S) contact by Svidzinskii and Likharev, 24,25 applied to the S/F/S contact for the s-wave pairing case by Buzdin et al. 6 and by Demler et al. 9 The basic equations of the Eilenberger quasiclassical theory of superconductivity, in the presence of an exchange energy h, are the transport equations for ĝ, where ω n = πT (2n + 1) are the Matsubara frequencies (ℏ = k B = 1), v 0 is the Fermi velocity, and ĝ = ĝ(v 0 , R; ω n ) is the quasiclassical Gorkov Green's function integrated over energy, with normal and anomalous components g and f. The self-consistency equation relates ∆ to the angular average of V (v 0 , v ′ 0 )f , where V (v 0 , v ′ 0 ) is the pairing interaction. The supercurrent density is given by the Matsubara sum of the angular average ⟨v 0 g⟩, where ⟨· · ·⟩ denotes the angular averaging over the Fermi surface, N being the density of states at the Fermi surface.
In the following, we use the notation f ↓↑ = f , f + ↑↓ = f + , g ↓ = g for the set of Green's functions for one spin direction (down with respect to the exchange field orientation). They satisfy the corresponding scalar equations. For the opposite spin direction, the set of Green's functions is obtained by changing h → −h.
III. SOLUTIONS
We solve the quasiclassical equations for a d-wave S/F/S junction, where S is an anisotropic superconductor with d x 2 −y 2 symmetry, and F is a monodomain ferromagnetic metal with constant exchange energy h.
We assume both the S and F metals clean, with the same dispersion relations and the same Fermi velocity v 0 . Electron scattering on impurities can be neglected in S if l ≫ ξ 0 , and in F if h ≫ v 0 /l, where l is the electron mean free path and ξ 0 the BCS superconducting coherence length.
The thin and short barrier, of thickness d, is assumed perpendicular to the â axis in the â−b̂ plane of the lhs monocrystal S L , which may be misoriented with respect to the rhs one, S R , their â axes making an angle θ (Fig. 1). For anisotropic pairing, the pair potential and the shape of the quasiparticle spectra depend on the misorientation. For d_{x^2-y^2} symmetry, 22 the pairing interaction and the pair potential in S L are V (v 0 , v ′ 0 ) ∝ cos 2ϕ cos 2ϕ ′ and ∆(v 0 ) ∝ cos 2ϕ, respectively, where ϕ is the angle the quasiparticle momentum makes with the â axis. Similarly, in S R , ∆(v 0 ) ∝ cos 2(ϕ − θ). Assuming the 2D nature of HTS with d-wave pairing, we take a cylindrical Fermi surface for both metals, whereas for the 3D s-wave case a spherical Fermi surface is taken.
Far from the barrier, f and g approach their respective bulk values. Choosing the x-direction along â L , we look for the solution at the rhs, x ≥ d/2, and similarly at the lhs, x ≤ −d/2. For F, ∆ = 0, and g = const. for |x| ≤ d/2. Note that f + = f (−v 0 , ∆ * ). Assuming a transparent boundary between the two metals, we use the continuity conditions for g and f at the barrier interfaces, x = ±d/2, and find the normal Green's function in the barrier in terms of the parameter Z measuring the ferromagnetic barrier influence. The temperature dependence of ∆ 0 = ∆ 0 (T ) is quite similar to the BCS one, and can be approximated by ∆ 0 (T ) = ∆ 0 (0) tanh(1.74 √(T c /T − 1)). The only difference is that in the case of d-wave pairing ∆ 0 (0)/∆ 0 BCS (0) > 1. 22 From the above results it is easy to obtain the corresponding ones for a d-wave junction with normal metal barrier, putting h → 0, and for an s-wave junction, taking isotropic pair potential ∆ L = ∆ R = ∆ 0 , with ferromagnetic (h ≠ 0) or normal metal (h = 0) barrier.
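The quoted interpolation formula for the gap amplitude, ∆ 0 (T ) = ∆ 0 (0) tanh(1.74 √(T c /T − 1)), is easy to evaluate numerically. A minimal sketch with ∆ 0 (0) normalized to 1 (the function name is ours):

```python
import math

def gap(t_over_tc, delta0=1.0):
    """BCS-like interpolation for the gap amplitude:
    Delta(T) = Delta(0) * tanh(1.74 * sqrt(Tc/T - 1)).
    Returns 0 at and above Tc."""
    if t_over_tc >= 1.0:
        return 0.0
    return delta0 * math.tanh(1.74 * math.sqrt(1.0 / t_over_tc - 1.0))

# The gap saturates at low T and closes continuously as T -> Tc
for t in (0.1, 0.5, 0.9, 0.99):
    print(f"T/Tc={t:.2f}  Delta/Delta0={gap(t):.3f}")
```

The same interpolation is used below through the reduced temperature t = T/∆ 0 (T ), which diverges as T → T c because the gap closes.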
A. Supercurrents
In the S/N/S case, for d-wave pairing the ground state phase difference at the contact is zero, as for s-wave pairing, if the two S monocrystals have the same orientation with respect to the barrier. This may not be the case for S/F/S contacts, and we calculate the supercurrent as a function of the barrier exchange field intensity, as well as of the orientation.
Taking the current direction along â L , from Eq. (2.6) we get the 2D d-wave result, with u = 1/cos ϕ and σ = ↓, ↑. The Green's function g ↓ is given by Eq. (3.9), and g ↑ (h) = g ↓ (−h). Eq. (3.13) gives the supercurrent through the barrier of area S, I = jS, as a function of φ, θ, temperature T via t = T /∆ 0 (T ), and of the parameters measuring the influence of the F barrier, Z and d̄ = d/ξ 0 (0), where ξ 0 (0) = v 0 /π∆ 0 (0). Note that Z ∼ d̄h/∆ 0 (0). The temperature-dependent normalizing current involves the normal resistance R N , given by R N −1 = e 2 v 0 N S, N being the density of states at the Fermi surface. In the limit T → T c , numerical calculations show that Eq. (3.13) gives, as expected, the Josephson relation I = I c sin φ, (3.15) where the sign and magnitude of I c depend on T , θ and Z.
In general, the supercurrents in the Josephson junctions and weak links are carried by the Andreev bound states in the barrier. 8,18,27 In the present case, the spectrum of these states has been calculated from the analytical continuation of the above Green's functions g σ . 28
B. Magnetic flux
For a superconducting ring containing one Josephson junction, the ground state phase difference φ gs at the junction can be obtained by minimizing the reduced energy, 29 where I(φ) is calculated from Eq. (3.13). The phase φ e = 2πΦ e /Φ 0 represents the reduced external magnetic flux through the ring. In the ground state, the total magnetic flux is related to the spontaneous supercurrent I gs = I(φ gs ) through the ring inductance L. Note that φ gs now depends on φ e , and when n ≤ Φ/Φ 0 ≤ n + 1, n = 1, 2, 3 . . ., φ gs → φ gs + 2πn.
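When the current-phase relation reduces to the sinusoidal form I = I c sin φ (valid near T c , Eq. (3.15)), the ground-state minimization can be sketched numerically. The reduced energy used below, u(φ) = (φ − φ e )²/(2l) + s(1 − cos φ) with s = −1 for a π-junction, is the standard rf-SQUID-type form assumed for illustration rather than reproduced from the paper's expression; it exhibits the critical reduced inductance discussed in the text:

```python
import math

def phi_gs(l, phi_e=0.0, pi_junction=True):
    """Ground-state phase across the junction in a ring with reduced
    inductance l = 2*pi*L*Ic/Phi0, assuming a sinusoidal current-phase
    relation (sign reversed for a pi-junction).
    Reduced energy: u = (phi - phi_e)^2 / (2*l) + s*(1 - cos(phi)),
    with s = -1 for a pi-junction; minimized by a brute-force scan."""
    s = -1.0 if pi_junction else 1.0
    best, u_best = 0.0, float('inf')
    n = 20000
    for k in range(n + 1):
        phi = -2 * math.pi + 4 * math.pi * k / n   # scan phi in [-2pi, 2pi]
        u = (phi - phi_e) ** 2 / (2 * l) + s * (1 - math.cos(phi))
        if u < u_best:
            best, u_best = phi, u
    return best

# For a pi-junction at zero external flux, a spontaneous current
# (phi_gs != 0) appears only when l exceeds a critical value
# (l_c = 1 for the sinusoidal relation assumed here).
for l in (0.5, 0.9, 1.5, 5.0):
    print(f"l={l:.1f}  phi_gs={phi_gs(l):+.3f}")
```

For l ≫ 1 the minimizer approaches ±π, so the total flux through the ring tends to half-integer multiples of Φ 0 , consistent with the anomalous flux quantization described in the text.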
IV. RESULTS AND CONCLUSION
To illustrate the consequences of unconventional coupling we present the results of numerical calculations for typical cases, low temperature, t = 0.05 (T /T c ≈ 0.1) and high temperature, t = 5 (T /T c ≈ 0.9), for strong influence of the ferromagnetic barrier (Z = 3) and for the normal metal barrier (Z = 0).
For d-wave pairing, the I(φ) dependence is much more complex than in the s-wave pairing case, even in the absence of exchange field. Besides deformations of sinusoidal curves at low T , the shape and sign of I(φ) depend very much on the misorientation angle θ, both for normal and ferromagnetic metal barriers. The low-T deformations of I(φ) are reminiscent of discontinuities at T = 0. At zero temperature, for Z = 0 the current turns out to be discontinuous at φ = π for θ = 0, as in the s-wave pairing case. 25 For θ = π/2, the current jump is at φ = 0 (or 2π), and for θ = π/4 smaller jumps appear both at φ = 0 and π, similarly to the results of Ref. 19. With increasing temperature, I(φ) becomes less deformed, tending to sinusoidal variations. Similar conclusions hold for Z = 3. The dependence I(θ) persists in the presence of exchange field, as strong evidence of d-wave pairing. This is shown in Fig. 2 for the characteristic values of Z, t and θ.
In the absence of external magnetic field, for θ = 0 the ground state phase difference φ gs at the S/F/S weak link is always zero or π, both for 3D s-wave and for 2D d-wave pairing. For weak influence of the ferromagnetic barrier, Z ≲ 1, φ gs = 0, and for stronger influence, 1 ≲ Z ≲ 4, there is a π-shift, φ gs = π (Fig. 3). Larger values of Z, for which φ gs = 0 again, would correspond to the decoupling of the S electrodes. 30 For θ = π/2, φ gs = 0 → φ gs = π and vice versa. These results are temperature independent. For s-wave tunnel junctions with a ferromagnetic insulating barrier it is also found that φ gs changes from 0 to π as the exchange interaction is enhanced. 17
The appearance of spontaneous current I gs (θ) in superconducting rings for external magnetic flux Φ e = 0 is illustrated in Figs. 4(b) and 4(d) for several values of the reduced inductance. The flow of spontaneous supercurrent is the consequence of the misorientation effect, or of the exchange field influence, whenever φ gs = 0. This is a generalization of the spin-flip induced effect predicted by Bulaevskii et al. for conventional superconductor rings with junction barrier doped with paramagnetic impurities. 4 The highest values of spontaneous currents, corresponding to θ = π/2 for Z = 0, and to θ = 0 for Z = 3, strongly depend on the reduced inductance. Maximum values, equal to the critical currents, correspond to l −1 ∼ 1 at low T , and to l −1 < 1 at high T (Fig. 5). It is important to point out that the spontaneous currents appear only for sufficiently large reduced inductance, greater than some critical value l c , in contrast to the result of Barash et al. for S/N/S junctions at T = 0. 19 At high T , t > 0.5, where the current-phase relation is harmonic, we obtain an universal curve below l −1 c = 1, as in the case of tunnel junctions. 4 At low T , the spontaneous currents flow is obtained in a wider range of l −1 , the shape of the curves and l c depending on the barrier influence Z and on the type of pairing symmetry.
In the presence of external magnetic flux, Φ e = 0, the flow of spontaneous current leads to the anomalous flux quantization in the ring, both for S/N/S and S/F/S junctions. In the first case, Z = 0, this occurs in the vicinity of θ = π/2 and in the second case, Z = 3, in the vicinity of θ = 0. The effect of half magnetic flux quantization, Φ/Φ 0 = 1/2, 3/2, . . ., pronounced for l −1 small, becomes smeared out and eventually lost for larger l −1 (Fig. 6).
Since l −1 ∝ 1/I c (T ) rapidly increases with increasing temperature, the unconventional coupling effects, such as the spontaneous supercurrent flow and the anomalous flux quantization exist only below some temperature characteristic for the given device.
In conclusion, the spontaneous supercurrent flow in a HTS ring with a ferromagnetic weak link provides new possibilities for experimental observation of the unconventional Josephson coupling, 0 < φ gs ≤ π. Due to the presence of the exchange field in the barrier, it may appear, manifested by the anomalous flux quantization, without the misorientation of the crystals in the two superconducting electrodes. However, the exchange field effect does not mask the difference between s- and d-wave pairing, since the coupling depends on the misorientation. We emphasize that the unconventional coupling effects in the weak links are much more diverse than in the tunnel junctions, which display in the ground state only 0 or π phase difference.
"year": 1999,
"sha1": "5b522e0197e3db25c37406a6dbafdb52f4a57aa3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9908092",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6a04fdf4c39f97c5c81fd15ebbdf80e0704cd717",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Coronavirus and tourism: is there light at the end of the tunnel?
Tourism industry is one of the most striking examples of the COVID-19 pandemic impact on population and economy. In the previous decade the global and Russian tourism industry demonstrated sustainable development, while in 2020, due to the pandemic consequences, the situation in the industry turned out to be on the brink of disaster. The most acute problems of business in this area were largely accounted for by the significant social consequences of the pandemic. Decreased quality of life caused by the disease and its manifestations, aggravated chronic diseases, and increased temporary disability, combined with the closure of borders and the collapse of international transportation, formed a complex of factors that completely paralyzed all sectors of the tourism industry for a while. Many of these factors will retain their impact in the long term. However, the pandemic impact cannot be viewed in a negative way only. There comes an understanding that tourism may become one of the engines of economic recovery rather than a burden. This requires systematic actions of the state, primarily focused on stimulating domestic tourism, restoring and developing business ecosystems, and keeping in balance the various interests of the tourism industry stakeholders, from tourists per se and local businesses to regional and federal authorities.
week, and 25.9% higher than the average for the last four weeks. The excess average incidence was registered in 26 regions (TASS 2022).
Although there are no plans so far to reintroduce anti-pandemic measures in any of the Russian regions, this has become an additional source of tension for the domestic tourism industry. In the apt words of a well-known doctor, governments and mass media are the major «virologists» now, and business does not hesitate to follow such reports. Moreover, signals about the onset of the next COVID-19 wave in summer began to come from various parts of the world. The most visible increases in incidence in June 2022, compared to May, were registered in France (+74%), Great Britain (+58%), Germany (+25%) and the U.S. (+16%) (GlobBaro HSE 2022).
The COVID-19 crisis that paralyzed almost all sectors of tourism and hospitality in the spring of 2020 is still remembered as a nightmare by all market participants: tour operators, hotels, sanatoriums and health resorts, museums and restaurants. However, the question remains how this experience will affect further development of the industry, and how reliable are the claims of some experts that the pandemic will have a negative long-term impact on its development.
It should be noted here that many experts - both in Russia and abroad - hold a different point of view and consider the pandemic a turning point that opens up new prospects and opportunities for the industry rather than merely a source of problems (Niewiadomski 2020; Tsai 2021; Kwok and Koh 2022; Mishra 2022; Seabra and Bhatt 2022; Viana-Lora and Nel-lo Andreu 2022). Nevertheless, the impact of the pandemic on Russian tourism is undeniable, with ambiguous and diverse consequences due to the specifics of the industry itself. It is therefore important to understand the problems and development prospects of the industry and its impact on socio-economic development, the environment and the quality of life of the population, which can be the key to solving many problems of the Russian economy.
Based on the analysis of statistical information and expert interviews, this article discusses major pandemic-associated trends in the global tourism industry, as well as prospects for developing conditions to meet demand of the Russian population for domestic tourism services.
Memories of the future: pandemic waves and tourism statistics
The COVID-19 pandemic is recognized as the biggest challenge in the history of the tourism and hospitality industry (UNWTO 2021). In the spring of 2020, due to the consequences of the pandemic, the industry found itself on the brink of disaster (Table 1). According to the World Tourism Organization, two months after the onset of the pandemic, absolutely all international tourist destinations were closed. The dynamics of international arrivals went into the red (Fig. 1). In just two months, the industry, which had been considered one of the most promising and successfully developing, was on the verge of collapse (Sheresheva 2020b). All sectors of the tourism market were negatively affected by the pandemic: transport, the hotel sector, the restaurant business, the cultural and entertainment sector, insurance, etc. A dramatic decline in tourist demand put a huge number of industry enterprises at risk of bankruptcy as their working capital dropped to zero. In many countries, the first response to the situation was mass layoffs of employees (Castanho et al. 2021; Marome and Shaw 2021) 1 . During 2020, global tourism industry losses amounted to about $4.5 billion, 62 million people lost their jobs in the industry, and only 18.2 million jobs were restored in 2021 (UNWTO 2021; WTTC 2021).
Job losses in the tourism industry turned out to be devastating, particularly affecting women, older people, less educated and less skilled workers, and local residents (Lopes et al. 2021; Meegaswatta 2021; Seyfi et al. 2022; World Economic Forum 2021). This was largely due to the fact that up to 80% of tourism industry enterprises are small and medium-sized businesses, mostly local. They turned out to be the least resilient, because they did not have sufficient financial resources to cope with the impending crisis (E Simen and Sheresheva 2020; Muhamad et al. 2022). Small travel agencies are more often managed by women. It should also be noted that in many countries maids and reception desk specialists are mainly women; these positions require close interaction with customers, which became dangerous to health during the pandemic. Moreover, the COVID-19 crisis created a new type of unemployment, especially in regions with a higher concentration of people and tourist activity. In 2020, many of those who had never been unemployed before, had a stable salary and were highly motivated to work lost their jobs (Lopes et al. 2021). The exit of many small and medium-sized enterprises from the market disrupted supply chains and logistics, which are closely interconnected and often cross-border. Large companies also faced difficulties threatening their very survival. Host communities - the most important stakeholders of the tourism industry - also turned out to be vulnerable in the face of the crisis. In some cases, a sharp drop in tourist demand became a real threat to local communities that lost their livelihood (Persson-Fischer and Liu 2021; Scheyvens et al. 2021).
1 The Government of the Russian Federation managed to largely dampen this problem through special measures to support companies of the industry, provided that jobs were fully or partially preserved (Government Decree No. 976, 2020).
International organizations contributed to the support of the tourism industry during the pandemic. For example, an interactive map for tourists was developed during the COVID-19 period, containing information about all restrictions imposed by each state because of the pandemic (IATA 2020). National governments accounted for the bulk of the measures affecting all stakeholders.
But the main problem of the spring and summer of 2020 was not only the almost complete freeze of activity in the tourism market, but also the interruption of logistics chains. Uncertainty hit the market players the hardest: no one could project when outbound and domestic tourist destinations would reopen (Alexandrova 2020).
However, even now there is not a single analyst who could give an accurate forecast of the tourism sector development in the coming years.
The beginning of 2022 seemed quite optimistic for the tourism industry. According to UNWTO, the number of international tourist arrivals increased by 182% in the first quarter of 2022 compared to the same period in 2021: from 41 million to 117 million. The best results were registered in Europe and the U.S., where the number of arrivals almost doubled. However, these figures were still slightly more than 50% behind the volume of tourist flows before the pandemic. In the Middle East, in the first month of 2022, the number of arrivals increased by 90%, and in Africa by 51% (but this is still 63% and 66% less than in 2019, respectively). The Asia-Pacific region took the longest to recover, as several countries remained closed to tourists: the number of arrivals there increased by only 44%.
In Russian outbound tourism in January-March 2022, the choice of destination countries expanded compared to the first quarter of 2021, and the geographical structure of tourist flows changed significantly (Fig. 2). Experts pointed to an obvious decrease in the tourist flow to Turkey; however, they attributed this first of all not to the pandemic but to the fact that in January (except for the New Year period) and February 2022 there were no regional charter programs of tour operators to Turkey.
In January 2022, UNWTO conducted a global survey of its tourism experts on the impact of COVID-19 on tourism and the expected recovery time. The survey was conducted from January to March 2022 and covered at least 1,500 experts from around the world. 48% of the experts were positive in assessing the recovery of tourist flows to the 2019 level within a six-month period. According to the survey conducted in September 2022, the UNWTO Panel of Tourism Experts rated the period from May to August 2022 with a score of 125 (on a scale from 0 to 200). Prospects for the rest of the year are cautiously optimistic: the score is 111 points, showing a downgrade in confidence levels, with the overall indicator equal to 55.5%. According to the experts, dire economic conditions, including high inflation and soaring oil prices, exacerbated by the special military operation in Ukraine, remain the major factor hindering tourism recovery. About 61% of the experts currently predict a potential return of international arrivals to 2019 levels in 2024 or later, while the share of those who project a return to pre-pandemic levels in 2023 has decreased (27%) compared to the previous survey (48%) (UNWTO 2022).
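The 55.5% "overall indicator" mentioned above is simply the 111-point expert score normalized from the 0-200 UNWTO confidence scale to a percentage. A minimal sketch of that conversion (the function name is ours, not UNWTO's):

```python
def score_to_percent(score: float, scale_max: float = 200.0) -> float:
    """Convert a confidence-index score on a 0..scale_max scale to a percentage."""
    if not 0.0 <= score <= scale_max:
        raise ValueError("score out of range")
    return 100.0 * score / scale_max

# The 111-point outlook for late 2022 yields the 55.5% indicator cited above;
# the 125-point rating for May-August 2022 maps to 62.5% on the same scale.
print(score_to_percent(111))  # -> 55.5
print(score_to_percent(125))  # -> 62.5
```

The same normalization makes scores from different survey waves directly comparable as percentages of the scale maximum.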
As a result, a group of the main factors that can contribute to rather rapid recovery of international tourism was presented (Fig. 3).
However, it is already obvious that a complex crisis is coming, reinforced by a combination of pandemic and geopolitical factors. In mid-July 2022, Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization (WHO), called on all countries of the world to return to the mask regime because of the next wave of COVID-19. The World Bank, which in January 2022 predicted global economic growth of 4.1%, lowered its forecast to 2.9% in July 2022 and warned that many countries could fall into recession as the economy slips into a period of stagflation reminiscent of the 1970s. The 2022 economic growth estimates for Europe and the U.S. have been lowered to 2.5%. Growth in developing countries is expected to decline to 3.4% in 2022 from 6.6% in 2021, well below the annual average of 4.8% between 2011 and 2019 (World Bank 2022). There is practically no doubt that the consequences of the crisis will have a long-term impact on the economic and socio-demographic development of states.
This, in turn, puts world tourism in a rather difficult situation. Long-term forecasts are rather vague and offer little insight into how long it will take to fully restore international tourist travel. According to the Euromonitor International report, a noticeable decrease in the tourist flow from Russia will also contribute to the reduction in global tourism industry revenues in 2022: in 2021 Russians spent USD 9.1 billion abroad, while in 2022 this amount will be approximately USD 6.9 billion less (Interfax 2022). Remarkably, however, Russian tourists are largely unconcerned about the threats of the pandemic (for example, the flow of Russian tourists to Turkey in the summer of 2022 turned out to be significant, despite information about the next wave of COVID-19 in that country). The absence of Russian tourists on many international routes is explained rather by the economic sanctions imposed on Russia, which raised the cost of a number of traditional destinations, as well as by the hostile attitude towards Russians in European countries in the spring of 2022 and the increased number of Schengen visa refusals, which sharply reduced willingness to visit many previously popular European destinations. As for European tourism, the negative impact of reports on the higher incidence of new coronavirus strains in summer is also reinforced by the fact that tourists from other continents who use long-haul flights (for example, residents of North America) postpone their trips to Europe due to the conflict in Ukraine, as they are not sure that European countries are safe and suitable enough for travel and business.
Thus, not all changes in the global tourism industry were caused by the pandemic. Nevertheless, every one of the trends has somehow been influenced by this "Black Swan".
Major trends in the tourism market in the current decade: what is the impact of the pandemic
Looking at the noticeable trends in the tourism industry in the pre-pandemic period, each trend was either intensified or modified by the pandemic (Table 2).
Adjustment of business models at all levels is one of the main changes in tourism. As already noted, the COVID-19 pandemic brought global tourism activity almost completely to a halt, and in this regard one of the most acute pre-pandemic problems was resolved, at least for a while: the impact of "overtourism" on the most popular destinations (Goodwin 2017; Alexandrova et al. 2019; Dodds and Butler 2019). The market players faced the opposite problem of "zero tourism" (Kainthola et al. 2021; Mestanza and Bakhat 2021). This has led to significant, sometimes breakthrough changes in the management models of tourist destinations (Vărzaru et al. 2021). In less than four months, the focus of governments and organizations in this area was redirected from development models based on the idea of preserving and improving the quality of life of local residents in the fight against the impact of overtourism to the urgent need to support the survival of industry players, preserve jobs and ensure safe conditions for tourists, company employees and local residents (Guerreiro 2022). In these models, as in companies' business models, the issues of crisis management and strategic planning started to play a more significant role (Li et al. 2021a; Kowalczyk-Anioł et al. 2021). The pandemic has seriously increased the focus of all tourism stakeholders on such aspects as safety and quality management (Berezka et al. 2021; Chang and Wu 2021; Kristiana et al. 2021; Sharma et al. 2021; Kuščer et al. 2022).
The devastating impact of COVID-19 on all aspects of travel and tourism has exacerbated the sustainability challenges that have historically been associated with the tourism industry (Fletcher et al. 2021; Seabra and Bhatt 2022). At the same time, as many researchers note, pandemic-associated travel restrictions reduced the negative impact on the environment. In general, the pandemic brought at least short-term relief by ridding destinations of previously existing irresponsible business practices and setting trends for more responsible behavior of tourism stakeholders, including tourists themselves, and for the introduction of more "sustainable" practices (Ioannides and Gyimóthy 2020; Eichelberger et al. 2021; Tiwari and Chowdhary 2021; Sumanapala and Wolf 2022). Therefore, the pandemic can also be considered a catalyst of opportunities in sustainable tourism (Romagosa 2020; Schmidt et al. 2021).
Crucially, the pandemic has stimulated innovation in the tourism industry (Streimikiene and Korneeva 2020; Piccarozzi et al. 2021; Sheresheva et al. 2021). We can agree with the opinion that lack of innovation means lack of preparedness for challenges. If the changes in the environment and in demand that the tourism industry is forced to adapt to are as huge as in the case of the COVID-19 pandemic, then lack of adaptation and innovation "calls into question the mere existence of companies" (Montañés-Del-Río and Medina-Garrido 2020: 10). One of the main sources of potential innovations in tourism is the use of Industry 4.0 technologies, such as intelligent devices, which enable tourists to receive new benefits and impressions, and companies and destination management to receive a significant amount of information about the behavior patterns and preferences of tourists (Ardito et al.). The opinion that innovative business models should be evaluated not only from the point of view of profit prospects or quantitative increase in tourist flow, but primarily from the point of view of compliance with the Sustainable Development Goals, assuming development with minimized damage to current and future generations, is characteristic of the industry and has been significantly reinforced by the pandemic (Tomislav 2018; Chkalova et al. 2019; Rasoolimanesh et al. 2020; Van et al. 2020).

Table 2. Pre-pandemic trends in the world tourism industry

Trend: Development of initiatives on transformation of the tourism industry, its reorientation towards achieving the Sustainable Development Goals (SDGs), which implies a balance between economic, social and environmental components, as well as consideration of interests and cooperation between business, the state and society. Effect of the pandemic: The trend has transformed (certain aspects have been intensified while others weakened), with a risk of returning to "old" practices.
Trend: The growing role of digital technologies that transform the logic of travel organization and ways of getting impressions (global booking and distribution systems, sharing platforms, interactive maps, GIS technologies, virtual and augmented reality). Effect of the pandemic: A sharp increase in digitalization.
Trend: Changes in the supply and demand structure: emergence and development of new types of tourism and niche market segments, personification, an increased share of independent travelers, a request for immersion in the life of local communities, nature, crafts and local activities. Effect of the pandemic: The trend remained.
Trend: The focus has shifted to relationships between actors, joint value creation, and a stakeholder and network approach in the management of the industry in general and of individual destinations. Effect of the pandemic: The trend was enhanced.
Source: compiled by the authors based on (Brodeur et al. 2021; Sheth 2020).
Research also shows that the trend towards the development of new types of tourism and niche market segments, as well as the personification of tourist services, continues (Carlsson-Szlezak et al. 2020; Higgins-Desbiolles 2020; Twining Ward and McComb 2020; Rogerson and Rogerson 2021). Independent tourism also remains popular; before the pandemic, many modern travelers, especially young ones, unequivocally preferred independent travel arrangements. The pandemic highlighted risks of independent tourism that used to be hardly taken into account. First, in case of infection, arranging treatment abroad independently turned out to be not only more difficult and expensive but, in some cases, impossible. Second, in the spring of 2020, the vast majority of Russian tourists who were abroad through tour operators returned home rather quickly (about 160 thousand organized tourists from 43 countries in two weeks), while a large number of independent tourists had to wait to return to Russia through the efforts of the Russian Foreign Ministry (Sheresheva 2020b).
Of particular note is the emergence of new specific forms of interregional and inter-country cooperation, such as the creation of "travel bubbles" - "safe corridors" formed during the pandemic within the framework of an agreement between two countries that are confident they have coped with the pandemic wave and open their borders only for mutual trips of their residents, without imposing requirements for mandatory quarantine or COVID testing results (Chan and Haines 2021; Fusté-Forné and Michael 2021).
Russian tourism: a paradigm shift in development
International tourism, which most Russian tour operators have relied on in past decades, turned out to be the most "toxic" part of the tourism business because of the pandemic. As a result, many tourists failed to recover, even in part, their expenses for tours and vouchers purchased in advance, due to deductions under the tourist agreement. The only way out of this situation that tour operators could find was to issue a document on the transfer of trips to a later period, when borders would be opened and transport links restored.
By the summer of 2021, it became clear that at least 30% of the purchased tourist certificates would be rescheduled. Nevertheless, many Russian citizens, not fully realizing the seriousness of the situation, continued to purchase trips: most customers were confident that tourist links would soon be restored and it would be possible to travel abroad (Zemtsov and Tsareva 2020). It should be noted here that both the specifics of the Russian mentality and the so-called "inertia of the consumer" - a characteristic of consumer behavior well known to marketing specialists (Gray et al. 2017; Henderson et al. 2021) - played a big role. However, the experience that tourists gained as a result of such transfers turned out to be negative for the majority of them, making many Russians either abandon trips altogether for a while or choose domestic destinations, especially since stimulating measures in domestic tourism have expanded the range of offers.
Restrictions imposed by the Russian government in 2020 were based on finding a balance between keeping the country's economy afloat, preserving health, and preventing mass unemployment and decreased living standards. In our opinion, the measures were generally successful in achieving this balance and largely preventing mass job losses. This both mitigated the situation and de-escalated social tensions, and also made a lot of sense in terms of the prospects of returning to the pre-pandemic needs of the market. For comparison, serious disruptions in the service of air passengers in the UK in the summer of 2022 were due to the fact that a significant number of employees had been dismissed during the pandemic, and the resulting staff shortage naturally affected the organization of service.
In Russia, the crisis turning point of 2020-2021 turned out to be a transition towards intensive development of local tourism, with a growing understanding that it is the actions of the state, rather than the "invisible hand of the market", that will determine the successes and failures of countries and industries in the "post-pandemic world" (Sheresheva 2020b). It is extremely important to stimulate adequate pricing policy in the regions; measures are needed to reduce the cost of the domestic tourist product to a level affordable for the middle-income Russian population. This can be ensured within the framework of well-structured state support for the tourism industry. It is no coincidence that the Russian Federal Agency for Tourism (Rosturizm) received appeals from businesses during the pandemic with requests to create more favorable conditions in view of the crisis, which would make a number of tourist destinations more accessible and attract more tourists through affordable prices.
However, the pricing problem in Russian tourism is extremely complex, and there is no clear-cut solution so far. On the one hand, many market players, following the 2020 lockdown, found themselves in a difficult financial situation, which had already been hard even before the pandemic due to high taxes, expensive transport logistics, and Russian ruble volatility. Calculations carried out before the outbreak of the pandemic show that in 2019 the costs of the tourism sector accounted for almost 90% of all profits, while gross value added decreased to 3%, since structures were not able to fully support the entire volume of tourist services. Income in the tourism sector can remain at a stable level in the long term, regardless of the level of financial assets invested in the industry (Tolstykh 2018). The return on investment may decrease during the development and implementation of a new tourism service or product, but over a long period, the results of the innovative activities of travel agencies and investment in the development of destinations become the basis of competitive strength (Sheresheva 2018). On the other hand, in Russian domestic tourism there are traditional cases of unjustified overpricing at popular destinations, when instead of a long-term focus on the client and investments in business development, an attempt is made to "bank the profit" on shabby-quality but scarce services. In particular, such a situation was discussed with regard to prices in Crimea in 2021, when the tourist flow hit a record: for the first time in the post-Soviet period, the region hosted 9.5 million guests, which was 20% more than in 2020 (Interfax 2021). However, in 2022, the region faced severe underutilization of its recreational facilities due not to the pandemic but to the difficult situation in the region and the repeatedly extended temporary restrictions on flights to 11 Russian airports.
This turned out to be a serious lesson for local businesses: they came to realize the need to fight for clients and to use discounts and promotional offers.
It was during the pandemic that the option recognized as the most successful of all state support measures was found: tourist cashback, a state program subsidizing trips within Russia, developed by the Russian Federal Agency for Tourism, under which tourists can pay for a trip with a Mir card and receive a 20% cashback on the price (Rosturizm 2022). According to market players, the reduction in the tax burden failed to become significant support, while this program enjoyed a positive reaction from almost all market stakeholders and began to be regularly extended as well as expanded (in particular, several waves of "children's cashback" took place).
The pandemic crisis has made it necessary to combine "rapid response" measures with systematic activities to identify threats and opportunities in view of medium- and long-term development. In this regard, there is a need for (Sheresheva 2020a):
• Inventory of resources (federal, regional, municipal) and identification of profitable areas to be combined.
• Balanced development of infrastructure across the country.
• Development of a system for specialized, including managerial, skills and competencies in the field of tourism.
• Support and development of small and medium-sized businesses; development of both public-private and public-non-governmental partnerships.
• Stimulation of social activity among the population through drawing attention to local initiatives, especially those aimed at inter-municipal and interregional cooperation.
• Destination branding in the modern sense of the word: development of a positive image of the government and trust in it among the population and business, and higher attractiveness of the territory, first of all as a place to live, to invest, etc.
• Identification of the most promising areas of development and types of tourism, and development of a proper incentive system.
• Development of a unified system of indicators in the Russian Federation to balance the benefits and costs of different stakeholders at different levels, with the long-term strategic goals of the Russian Federation as a priority.
A strategy for developing the competitive advantages of the domestic tourism industry should be aimed at the development of tourism infrastructure and the organization of interregional and cross-border destinations. Effective strategies are those focused on increasing competitive strength through innovation and adaptation to external conditions, along with measures to ensure the safety of tourists and the development of year-round event tourism.
The choice of survival and competitive strategies in the tourism industry is based on studying the external conditions of business processes in order to reduce production costs. One possible tool is multiplicative analysis to assess profitability in the tourism industry and the impact of production costs on the competitive advantages of travel agencies.
Unlike in previous decades, and even compared to the 2020 situation, the development of tourism has become one of the state-level priorities. In 2021, the Russian Federation adopted the state program "Tourism Development" until 2030 and also launched implementation of the National Project "Tourism and Hospitality Industry" with three Federal Projects within its framework: FP 1 "Development of tourist infrastructure", FP 2 "Increasing affordability of tourist services", and FP 3 "Improving management in tourism". In just a year, significant positive changes have been achieved. Thanks to the activities of the National Project, there has been an impressive increase in support for the industry: tourism financing has increased many times over, from 7 billion rubles in 2019 to 74 billion rubles in 2022. The list of tourist territories receiving state support includes such locations as the Volga, Altai, Primorsky Krai, Kaliningrad Region, as well as the mountainous areas of the Irkutsk Region, the Republic of Buryatia, Baikal, and Kamchatka. The opening of new tourist destinations is not limited to these regions; a number of new tourist brands and destinations are being intensively promoted (Living Heritage 2022).
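As a back-of-the-envelope check of the funding figures cited above (our illustration, not part of the original analysis), the jump from 7 billion rubles in 2019 to 74 billion rubles in 2022 corresponds to more than a tenfold increase:

```python
def growth_multiple(before: float, after: float) -> float:
    """Return how many times a value has grown between two periods (after / before)."""
    if before <= 0:
        raise ValueError("baseline must be positive")
    return after / before

# Federal tourism financing cited in the text: 7 bn RUB (2019) -> 74 bn RUB (2022)
multiple = growth_multiple(7, 74)
print(round(multiple, 1))  # -> 10.6
```

The same helper applies to the other growth figures in the article, e.g. year-on-year changes in tourist flows.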
The Russian Federal Agency for Tourism, relying on proposals from expert and analytical organizations, social communities and direct participants in the tourism industry, who joined forces within the framework of commissions and working groups, developed directions for the gradual post-crisis recovery of the tourism industry. These developments formed the basis of the National Project "Tourism and Hospitality" (Tourism and Hospitality 2021) launched in 2021. In addition to infrastructure development, significant support measures are envisaged for small and medium-sized enterprises, investment promotion, the development of tourist macro-regions and domestic charter destinations, the development of interregional schemes (the Far East, Crimea, the "Big Golden Ring"), the promotion of special events, systematic promotion on the domestic and world markets, and the development of digital services with the support of the National Program "Digital Economy" (Digital Economy 2019). By now, more than 55 thousand objects, routes and services have been digitized.
The tourism industry is characterized by a variety of tasks related to the development of transport and logistics infrastructure, recreational resources, historical settlements and specially protected areas, and diverse businesses of different scales. The tourism industry is able to generate income in a number of related sectors of the economy, with the resulting multiplier effect significantly affecting the socio-economic development of territories acting as tourist destinations (Leonidova 2018). Therefore, the ambitious goals set for the coming years, related to the reorientation of the tourism sector towards domestic routes and inbound tourist flows, required a reboot of management approaches. In this regard, it seems natural that, in line with the Decree of the President of the Russian Federation dated October 20, 2022 (Decree of the President of the Russian Federation 2022), the functions of the Russian Federal Agency for Tourism were transferred to the Ministry of Economic Development of the Russian Federation, with the relevant Deputy Minister responsible for the comprehensive development of the industry. At the same time, tourism retains its own dedicated Deputy Prime Minister in the Government of the Russian Federation.
In general, one can say that Russian tourism sees the "light at the end of the tunnel" turned on at full power rather than as a mere flicker, and the pandemic, which could have brought the industry to disaster, eventually served as a catalyst of positive processes.
Conclusion
Already in the first half of 2020, most serious experts argued that the coronavirus pandemic would negatively affect the tourism sector for years. Now it is safe to say that this impact is long-term, while the consequences of the pandemic are mixed. The global tourism industry has witnessed a number of changes in market structure and management approaches, while in Russia the crisis turning point of 2020-2021 turned out to be a transition to intensive development of domestic tourism, the value of which was realized by all market stakeholders, from the federal government and regional administrations to small businesses and tourists themselves. The current situation contributes to the accelerated development of domestic tourism, further stimulated by growing state support measures, aimed not only, and not so much, at the most popular destinations, but deliberately at diversifying tourist routes and distributing them evenly across the country and by season. The scale of the tourism resources and wealth of the Russian Federation is becoming the main source of competitive advantage, while the creation of sufficient infrastructure and management competencies in line with the available resources is the main objective at the federal level. It is especially important that both traditional and new tourist destinations be affordable to all segments of the population, including people with relatively low incomes. It is affordable vouchers that contribute to the development of mass domestic tourism, gradually resolving the crisis situation in the industry and simultaneously improving the quality of people's lives. The regulatory legal acts related to anti-crisis support implemented at the federal level should be adapted to the regions and aimed at coordination between the authorities and enterprises of the tourist and recreational sphere.
It is equally important to understand that during a crisis such a multi-layered and complex industry as tourism can develop successfully only as a single business ecosystem. This implies both competition and mutual assistance, as well as inter-organizational, inter-municipal and interregional cooperation among all stakeholders.
Influenza Virus Infections and Cellular Kinases
Influenza A viruses (IAVs) are a major cause of respiratory illness and are responsible for yearly epidemics associated with more than 500,000 annual deaths globally. Novel IAVs may cause pandemic outbreaks and zoonotic infections with, for example, highly pathogenic avian influenza virus (HPAIV) of the H5N1 and H7N9 subtypes, which pose a threat to public health. Treatment options are limited, and the emergence of strains resistant to antiviral drugs jeopardizes this even further. Like all viruses, IAVs depend on host factors for every step of the virus replication cycle. Host kinases link multiple signaling pathways in response to a myriad of stimuli, including viral infections. Their regulation of multiple response networks has justified actively targeting cellular kinases for anti-cancer therapies and immune modulators for decades. There is a growing volume of research highlighting the significant role of cellular kinases in regulating IAV infections. Their functional role is illustrated by the required phosphorylation of several IAV proteins necessary for replication and/or evasion/suppression of the innate immune response. Kinases have been identified in the majority of host factor screens, and functional studies further support their important role and potential as host restriction factors. PKC, ERK, PI3K and FAK, to name a few, are kinases that regulate viral entry and replication. Additionally, kinases such as IKK, JNK and p38 MAPK are essential in mediating viral sensor signaling cascades that regulate expression of antiviral chemokines and cytokines. The feasibility of targeting kinases is steadily moving from bench to clinic, and already-approved cancer drugs could potentially be repurposed for treatment of severe IAV infections. In this review, we will focus on the contribution of cellular kinases to IAV infections and their value as potential therapeutic targets.
Introduction
Influenza A (IAV) and B (IBV) viruses are important causes of upper respiratory tract infections [1]. IAV can cause severe acute respiratory disease with an annual attack rate of 5-10% in adults and 20-30% in children [2,3]. The significant public health burden caused by IAV infections is exemplified by the 290,000-650,000 fatal cases globally each year [4]. Most at risk are children and the elderly, accounting for ~90% of case fatalities and/or complications [5,6]. Occasionally, novel antigenically distinct influenza A viruses emerge that may cause pandemic outbreaks, as occurred in 1918, 1957, 1968 and 2009. Unlike IAV, IBV viruses do not continuously circulate in animals and are therefore less likely to be associated with zoonotic transmission or pandemics [7]. However, they do co-circulate with IAV and can be significant contributors to influenza-related morbidity and mortality [7][8][9]. Vaccination is the preferred intervention against influenza viruses and helps to limit the impact influenza outbreaks may have. In addition, antiviral drugs are available for the treatment of influenza virus infections. The surface glycoprotein hemagglutinin (HA) is the major target for the induction of virus-neutralizing antibodies by vaccination.
Figure 1. Host kinases and known roles during IAV infections. Schematic organizing host kinases based on kinase family, signaling pathway involved, specific kinase and effect of inhibition (from innermost to outermost ring; white cone).
Phosphorylation of Influenza Virus Proteins
Phosphorylation of IAV and IBV proteins has been reported, with some conservation across influenza virus species [34]. IAV protein phosphorylation regulates different stages of the viral cycle by either promoting replication or evading/suppressing the innate immune response [32][33][34][35][36][69]. Moreover, treatment with kinase inhibitors affects influenza A virus RNA and protein synthesis, shuttling of viral proteins between the cytoplasm and nucleus, and virion release [28][29][30][31][80]. The nonstructural protein NS1 is a multifunctional immune modulator that counteracts host defenses [81,82]. NS1 phosphorylation at T215, S42 and S48 is thought to regulate the dsRNA-binding capacity of NS1, which promotes evasion of the innate immune response [33,83]. Akt, an effector kinase of both the PI3K and ERK pathways, is responsible for T215 phosphorylation; accordingly, Akt inhibitor treatment suppresses viral entry and genome replication [84,85]. Additionally, mutation of S42 eliminates the interaction of NS1 with dsRNA and attenuates viral replication [33,84]. T132 phosphorylation of the M1 protein controls its nuclear import, which is critical for viral replication. The Janus kinase 2 (JAK2) inhibitor AG490 prevents nuclear import of M1, suggesting that JAK2 might be responsible for M1 T132 phosphorylation [34,35]. Nuclear retention of the IAV nucleoprotein (NP) is largely regulated by several phosphorylation sites, including S9, Y10, S165 and Y296; inhibition of NP phosphorylation leads to its nuclear retention. Mutation of these sites results in decreased viral replication in vitro and in vivo, largely through disruption of interactions with the cellular importin-α and chromosomal maintenance 1 proteins [34,36,86].
Tyrosine Kinases
Tyrosine kinases (TKs) are a subgroup of ~90 kinases within the human kinome that phosphorylate tyrosine amino acid residues; this can lead to conformational changes in a given protein or even serve as a scaffolding site to facilitate protein-protein interactions. TKs are further classified into receptor tyrosine kinases (RTKs) and non-receptor tyrosine kinases (non-RTKs). Non-RTKs act as intracellular signal transducers, mediating the signaling of cell-surface receptors for cytokines, growth factors and other ligands [16]. Several phosphorylation sites (S, T and Y) in IAV and IBV proteins have been identified [34]. Based on additional sequence analysis of >50,000 strains (www.fludb.org), we identified highly conserved tyrosine residues in the replication complex proteins (PA, PB1, PB2) and NP proteins of all IAV subtypes. This level of conservation suggests an evolutionary importance that might be exploited in understanding conserved functions and developing broadly active therapeutics targeting TKs. Interestingly, while many of these phosphorylation sites have been previously reported and their importance demonstrated or inferred, the kinases that carry out their phosphorylation have yet to be experimentally validated.
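The conservation analysis described above reduces, in essence, to a per-column frequency count over aligned sequences. The sketch below illustrates that idea on an invented toy alignment; the sequences, positions and threshold are hypothetical and not taken from the review (whose own analysis used >50,000 strains from www.fludb.org).

```python
# Sketch: per-position residue conservation across a set of aligned
# protein sequences. Toy data; the sequences and the 0.99 threshold
# are invented for illustration only.

def conservation(alignment, residue="Y"):
    """Fraction of sequences carrying `residue` at each alignment column."""
    n = len(alignment)
    return [
        sum(seq[i] == residue for seq in alignment) / n
        for i in range(len(alignment[0]))
    ]

# Four hypothetical NP-like fragments; only position 5 is an invariant tyrosine.
alignment = ["MASQGY", "MASQGY", "MATQGY", "MASQGY"]

cons = conservation(alignment, "Y")
highly_conserved = [i for i, c in enumerate(cons) if c >= 0.99]
print(highly_conserved)  # -> [5]
```

Scaled to tens of thousands of strains, the same column-wise count identifies positions where a tyrosine is effectively invariant, which is how "highly conserved" residues are typically flagged.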
Nerve growth factor receptor (TrkA) is a receptor tyrosine kinase that was shown to play a role in IAV viral RNA synthesis, vRNP nuclear export and virion release. In vitro inhibition of TrkA has been shown to diminish IAV RNA (vRNA, mRNA and cRNA) synthesis independently of NF-κB signaling [29]. Interestingly, this reduction in RNA synthesis was largely due to direct inhibition of CRM1-mediated export and subsequent nuclear retention of IAV RNPs [29]. In addition, TrkA inhibition leads to reduced activation of the lipid biosynthesis enzyme farnesyl diphosphate synthase (FPPS), which is known to modulate virion budding [28]. However, the exact mechanism of TrkA-mediated FPPS regulation remains undefined.
Focal adhesion kinase (FAK) is a non-RTK and a component of focal adhesions that tether the actin cytoskeleton to the extracellular matrix. We previously showed that FAK links phosphatidylinositol-3 kinase (PI3K) activation and cytoskeletal reorganization required for endosomal trafficking during IAV entry [40]. Furthermore, FAK positively regulates IAV replication and polymerase activity of different IAV strains/subtypes [39,40]. Others have also reported roles for FAK during other viral infections [87][88][89][90][91][92]. FAK can modulate the cellular immune response by regulating various functions of T cells, B cells and macrophages [93][94][95]. Consistent with this, we have observed FAK dependent regulation of innate immune responses during severe IAV infection in mice [96].
Abl1 (also known as Abelson murine leukemia viral oncogene homolog 1 or c-Abl) is a cytoplasmic and nuclear non-RTK that phosphorylates CRK (also known as p38 or proto-oncogene c-CRK), an adaptor protein required for efficient replication of avian influenza viruses and subsequent JNK-mediated apoptosis [97]. The viral nonstructural protein 1 (NS1) can disrupt Abl1-CRK interactions via its Src homology binding motifs and thereby inhibit CRK phosphorylation, ultimately resulting in IAV-subtype specific pathogenicity as shown for the 1918 pandemic H1N1 virus [50,51].
Acute respiratory distress syndrome (ARDS) and acute lung injury (ALI) due to immune cell infiltration during severe IAV and ensuing secondary bacterial infections can result in respiratory failure and are the main causes of death in influenza-infected patients [98]. Bruton's tyrosine kinase (Btk) can regulate TLR4-mediated activation in human neutrophils [99]. Interestingly, chemical inhibition of Btk can alleviate IAV induced ARDS symptoms in mice [49]. This effect is likely due to limiting damaging neutrophil activity and production of pro-inflammatory chemokines and cytokines including TNF-α, IL-1β, IL-6, KC, and MCP-1 during acute lung injury [49].
The IFN receptor I- and III-associated tyrosine kinase 2 (Tyk2) has emerged as an important host factor in secondary bacterial infections. The virally induced retention of IL-1β and GM-CSF diminishes the bacteria-induced innate immune response, which may allow the establishment of a secondary bacterial infection. Specific ex vivo inhibition of Tyk2 resulted in impaired bacterial growth due to restored IL-1β and GM-CSF levels in human alveolar tissues [52].
Serine/Threonine Kinases
Serine/threonine kinases (STKs) phosphorylate proteins at either serine or threonine residues. STKs are central components of many cellular signaling pathways, including the Raf/MEK/ERK, nuclear factor kappa-B (NF-κB) and PKC pathways [100][101][102]. The Ras-dependent Raf/MEK/ERK pathway is activated by almost all cytokines and growth factors that bind to receptor tyrosine kinases, cytokine receptors and G-protein coupled receptors [43]. Accordingly, the importance of Raf/MEK/ERK signaling for effective IAV replication has previously been demonstrated [31,60].
IAVs utilize multiple mechanisms to hijack STKs to evade subsequent innate immune responses. c-Jun N-terminal kinases 1 and 2 (JNK1/JNK2) can regulate pro-inflammatory response induction and are upregulated by several IAV strains. IAV-mediated induction of JNK1/JNK2 activity triggers the Raf/MEK/ERK pathway, mediating production of chemokines and cytokines including tumor necrosis factor alpha (TNF-α), interferon β (IFN-β) and interleukin 6 (IL-6) [45]. Interestingly, recent studies suggest that JNK1-dependent phosphorylation of Bcl-2, a process normally observed as a starvation induced autophagy signal, is promoted by viral JNK1 activation resulting in virus-induced autophagy [46]. Chemical inhibition of JNK1/JNK2 resulted in reduced levels of pro-inflammatory cytokines in vivo [45]. Additionally, in vitro inhibition of JNK1/JNK2 results in impaired vRNA synthesis; however, the mechanism is yet to be defined [53].
As a member of the mitogen-activated protein kinase (MAPK) family, p38 is involved in several steps of the IAV infection cycle. IAV-infected cells expressing the antiapoptotic protein Bcl-2 show reduced viral titers due to reduced vRNP export from the nucleus, with no effect on virally induced apoptosis. The antiapoptotic effect of Bcl-2 is reduced by phosphorylation of its threonine 56 and serine 87 residues by virus-induced p38 activity. Inhibition of p38 diminished viral replication, vRNP export and apoptosis [103]. During early stages, TLR4-mediated viral activation of p38 MAPK is important for viral entry and replication [37,55]. Furthermore, in vivo inhibition of p38 MAPK directly limited excessive cytokine expression through an IFN-dependent mechanism. This regulation is mediated via phosphorylation of STAT1 and subsequent engagement of the IFN-β promoter to regulate IFN-stimulated gene (ISG) expression [54]. Influenza virus-induced perturbations of the intracellular redox balance, which increase production of reactive oxygen species (ROS), can also activate p38 [56,104]. Furthermore, NADPH oxidase 4 (NOX4)-regulated p38 and ERK activation leads to increased ROS production during IAV infections in vitro [56,105,106]. Interestingly, mouse experiments suggest that the effects of Bcl-2 and NOX4 may be gender dependent: female mice exhibited reduced clinical symptoms and viral titers, whereas higher IAV replication in male mice correlated with higher expression of NOX4 and phosphorylation of p38 [107]. The NF-κB signaling pathway is a central regulator of innate immune responses, and the IκB kinase (IKK) is a direct target of the viral NS1 protein in counteracting the NF-κB-mediated cellular antiviral response [63,108].
However, the majority of publications have shown that inhibition of NF-κB signaling diminishes viral replication in vitro and in vivo [64][65][66]; more specifically, it lowers levels of pro-inflammatory factors and reduces caspase activity, thereby impairing caspase-mediated nuclear export of vRNPs [62]. Interleukin-1 receptor-associated kinase M (IRAK-M) is an NF-κB signaling-related cellular kinase. During IAV-induced pneumonia, IRAK-M acts as a central regulator of inflammation of the mucosal tissue of the respiratory tract. IRAK-M knockout mice challenged with IAV showed a strongly increased lethality rate and decreased viral clearance [67].
The successful nuclear export of vRNPs has been shown to depend directly on viral activation of the Raf/MEK/ERK signaling pathway [31,109]. MAPK kinase (MEK) and extracellular signal-regulated kinase (ERK) belong to the group of classical mitogen-activated protein kinases (MAPKs). MEKs have been shown to regulate IAV and IBV replication [31,109]. Several MEK inhibitors resulted in vRNP retention and reduced titers of progeny virus in vitro, and also improved mouse survival in vivo [57][58][59]. During early stages of IAV infection, ERK regulates vacuolar H+-ATPase (V-ATPase) activity to mediate pH-dependent acidification of endosomes and subsequent fusion of the viral and endosomal membranes [41]. In vitro inhibition of ERK, a direct downstream mediator of MEK, impedes IAV vRNP nuclear import as well as export [41,60]. IAV activation of Raf/MEK/ERK signaling also induces the p90 ribosomal S6 kinases (RSKs), which play an important role as downstream mediators of ERK signaling [61,110]. RSK2 is involved in the regulation of cell growth and proliferation. RSK2 knockdown using shRNAs results in increased IAV and IBV replication and IAV polymerase activity [61]. Inhibition of RSK2 blocked IAV-induced phosphorylation of the double-stranded RNA-activated protein kinase (PKR), one of four known kinases (PKR, HRI, PERK and GCN2) that phosphorylate the translation-initiation factor eIF2 during stress responses, resulting in inhibition of cap-dependent translation of cellular and viral proteins [61,111]. PKR activation by influenza virus infection is well established, and the virus has evolved multiple mechanisms to suppress PKR activation. Furthermore, IAV-dependent stimulation of NF-κB and IFN-β was impaired by RSK2 inhibition, suggesting an effect on the cellular antiviral response [61]. In addition to the Raf/MEK/ERK kinases, the G protein-coupled receptor kinases (GRKs) are also implicated in the induction of innate immunity pathways.
Recent phosphoproteomic studies identified GRK2 as an important junction of cellular signaling pathways activated by IAV. In vitro and in vivo inhibition of GRK2 resulted in decreased viral replication [71], although the exact function of GRK2 remains unclear. Polo-like kinases (PLKs) likewise act as nodes of cellular signaling and are crucial regulators of cell division and the cell cycle [112]. PLK1 has been described as a pro-viral host factor for several viruses, phospho-regulating viral proteins [113,114]. A recent study shows that in vitro and ex vivo inhibition, as well as knockdown, of PLK1, PLK3 and PLK4 results in impaired IAV replication [73].
Protein kinase C (PKC) is an STK that regulates multiple cellular processes, including proliferation, differentiation, apoptosis and angiogenesis. The functional versatility of PKC depends on its various isoforms responding to different stimuli. The complexity of the eleven different PKC isoforms expressed in most tissues also limits understanding of their function within different cell types [115]. Nevertheless, Kurokawa et al. showed almost 30 years ago that general in vitro inhibition of PKC results in reduced viral protein synthesis [30]. More recent studies have further defined the function of PKC isoforms and their involvement in IAV infections. Treatment of cells with bisindolylmaleimide, a highly specific PKC inhibitor with activity against most PKC isoforms, reversibly inhibits virus entry by blocking endosomal trafficking and virion uncoating of both IAV and IBV [80]. Phosphorylation of the viral proteins PB1 and NS1, important for polymerase activity and efficient viral replication, has been shown to be PKCα dependent in vitro [68] and, for PB1, in vivo [69]. In PKCβII kinase-dead cells, IAV is retained in late endosomal compartments, suggesting PKCβII as an important modulator of IAV entry [44]. PKCδ, through interaction with the IAV polymerase subunit PB2, regulates NP oligomerization and vRNP assembly, and ablation of PKCδ impaired replication of the viral genome in vitro [70].
Lipid Kinases
Lipid kinases are key mediators of intracellular signaling, central carbon and lipid metabolism, apoptosis and cell proliferation through phosphorylation of lipid residues. Several lipid kinases have been implicated in multiple steps of IAV replication and in modulating cellular antiviral responses [38,79,[116][117][118]. One of the central lipid kinases is PI3K, which phosphorylates inositol phospholipids [119]. PI3K and its downstream effectors, Akt and the mammalian target of rapamycin (mTOR), form a key signaling nexus that regulates cell differentiation, translation and metabolism [120]. Furthermore, it cross-interacts with other cellular signaling pathways, including the Raf/MEK/ERK and NF-κB pathways [121]. Early and late PI3K activation during IAV infection are key events required for IAV replication, with distinct outcomes at different times of infection [38]. Early PI3K activity is triggered by viral attachment and mediates IAV entry [75]. Later during the infection, IAV NS1 suppresses PI3K activity via direct interactions with the p85 regulatory subunit. These interactions ultimately prevent AKT-mediated apoptosis, IRF-3 innate immune responses, vRNA synthesis and nuclear vRNP export [38,[74][75][76][77]122,123]. It should be noted that IBV only minimally induces later PI3K activation or apoptosis. Furthermore, in contrast to IAV NS1, IBV NS1 is dispensable for the antiapoptotic effects of PI3K activation, suggesting that IBV has developed NS1-independent mechanisms to suppress apoptosis [116,124].
Sphingosine kinases (SphK1 and SphK2) are lipid kinases that control conversion of sphingosine to the bioactive lipid sphingosine 1-phosphate (S1P) [125], a known modulator of the Raf/MEK/ERK, NF-κB and PI3K/AKT/mTOR signaling pathways and a regulator of apoptosis [126]. IAV upregulates SphK in infected cells in vitro, influencing cellular signaling and promoting efficient influenza virus replication [78,79]. Chemical inhibition of SphK1 results in reduced vRNA synthesis via suppression of NF-κB activity, and in reduced vRNP nuclear export due to impaired activation of ERK and AKT [78]. SphK2 knockdown has also been shown to reduce IAV replication in vitro. Moreover, in vivo inhibition of SphK1 and SphK2 resulted in prolonged survival of mice challenged with IAV [79].
Linking Metabolism and Innate Immunity
Like many pathologic conditions, IAV infection alters the metabolic landscape, and most of these alterations are mediated by kinases, resulting in direct or indirect effects on IAV replication, infection kinetics and pathogenicity. Consistently, the majority of host-cell alterations following IAV infection are in metabolic pathways [127]. Virus-regulated kinase activity can have a major influence on cellular metabolism. AMP-activated protein kinase (AMPK) is a major sensor and regulatory master switch of carbohydrate metabolism, and is directly involved in insulin signaling and lipid metabolism. It links central carbon metabolism and glucose availability with the host innate immune response [128][129][130][131]. AMPK activity is modulated by intracellular calcium levels, and this activity can regulate the stimulator of interferon genes (STING) through UNC-51-like kinase 1 (ULK1) activation. STING serves as a crucial factor of the innate immune response and an essential mediator for recognition of intracellular bacterial and viral pathogens. STING-dependent IFN-β induction is regulated by the calcium-dependent membrane potential of mitochondrial membranes. In vitro inhibition of AMPK resulted in reduced TNF-α and IFN-β secretion after activation with the STING ligand 5,6-dimethylxanthenone-4-acetic acid (DMXAA) [132][133][134]. AMPK phosphorylation of multiple sites of ULK1 leads to its dissociation from AMPK and subsequent activation. ULK1 activity promotes phosphatidylinositol-3-phosphate (PI3P) synthesis, which contributes to autophagosome formation in addition to JNK1-induced, Bcl-2-dependent autophagy during IAV infection [46,[135][136][137].
Although ER stress triggers translational shut-down through the PKR-like ER kinase (PERK), virally induced metabolic and ER stress in the context of an obese mouse model activates PKR [138]. This activation reduces cellular and viral translation and, in response, activates JNK1 and other inflammatory kinases [138]. Together, PKR and nutrient deprivation-dependent JNK1 activities lead to the subsequent activation of apoptosis signal-regulating kinase 1 (ASK1) [139]. Integration of AMPK and JNK with other Raf/MEK/ERK-related kinases allows engagement of metabolic processes via immune response components, including the NF-κB, PI3K/AKT/mTOR and PKC pathways [117,118,[140][141][142]. Accordingly, the NF-κB-regulating kinase IKK has recently been linked to glycolysis [143,144]. In addition, IKK- and PKC-dependent serine phosphorylation of the insulin receptor inhibits insulin signaling and directly regulates cellular lipid metabolism [145,146]. Furthermore, PKC has been described as being involved in fatty acid fate regulation, auto-stimulating its kinase activity [147]. PI3K/AKT/mTOR signaling mediates its effects upstream and downstream of the NF-κB, Raf/MEK/ERK and PKC pathways to regulate lipogenesis and lipid metabolism [121,131,148,149]. Recent studies suggest that inhibition of Btk leads to metabolic stress through suppression of PI3K/AKT/mTOR signaling [150], highlighting the link between metabolism and innate immunity. Interestingly, using a PI3K/mTOR inhibitor to disrupt glucose metabolism in vitro reduces virus production independently of genome replication, most likely through depletion of the lipid membranes required for viral budding [127]. It is important to note that influenza virus-induced kinase activity serves not only to evade the immune response but also to promote a pro-viral metabolic environment.
Perspectives and Future Directions
The continued threat of severe and potentially lethal influenza A virus outbreaks is highlighted by rapid viral evolution, the emergence of novel subtypes and antiviral-resistant strains, and limited vaccine efficacy. Developing virus-directed antivirals is akin to hitting a moving target. Therefore, approaches that largely mitigate the potential for drug resistance while remaining effective against multiple IAV subtypes and strains are highly desirable. Therapies that target host cell factors meet these criteria and, in addition, can dampen exuberant immune responses, which is likely to reduce disease severity and improve patient outcomes.
Kinases are ideal candidates for host-directed antiviral therapies because they link critical cellular processes utilized by most viruses. Moreover, their importance in pathologic conditions such as cancer has led to the development of small-molecule inhibitors; repurposing these clinically approved drugs to treat severe infectious diseases such as influenza should be exploited.
Several reports have recently highlighted critical roles for the focal adhesion kinase (FAK) pathway during infection by several viruses [87][88][89][90][91]. FAK is not only critical for embryonic development and the expression of several cellular proteins, but also links integrins with actin reorganization and receptor endocytosis [151][152][153][154]. Given its role in several cancers and the unique structure of its kinase domain, FAK is an attractive target of anti-cancer therapies, and several FAK inhibitors are under investigation for clinical use [155].
The FAK pathway has recently emerged as a nexus point engaging antiviral innate immune and inflammatory pathways. Accordingly, FAK is also a component of the intracellular RIG-I-like receptor antiviral pathway where it provides a link between perturbations of the cell surface receptor during viral entry and cytosolic innate immune sensors [156]. FAK modulates the cellular immune response by regulating T cells, B cells and macrophage functions [93][94][95]. FAK was also recently reported to directly phosphorylate IKKα thereby regulating canonical and non-canonical NF-κB pathways [157].
Small-molecule kinase inhibitors (SMKIs) have often been met with warranted criticism; however, much of this has stemmed from a misconception in the clinical literature and an inaccurate distinction between the in vitro/in vivo substrate (target) specificity of these SMKIs and their in vivo cell-population specificity [158][159][160][161]. Because tyrosine kinases share conserved sequences in their ATP-binding sites, ATP analogs have an increased likelihood of "off-target" effects on other kinases [162]. Therefore, new small-molecule inhibitors designed to avoid this problem directly interfere with FAK autophosphorylation by binding to Y397 instead of blocking ATP binding. One such compound is FAK Inhibitor I (also known as Compound 14 or Y15), which has been validated as a selective FAK inhibitor [163][164][165]. We found that Y15 treatment of various cells, or expression of a kinase-dead FAK mutant (FAK-KD), provided the first evidence that FAK is activated by IAV attachment and that FAK kinase activity is critical for efficient endosomal virus trafficking [40]. We also reported that inhibitor treatment or FAK-KD expression reduced polymerase activity of multiple IAV subtypes, including highly pathogenic H5N1 and H7N9. Importantly, we observed FAK interactions with the viral NP [39]; however, the significance of this interaction is still under investigation. Defactinib is an FDA approved FAK inhibitor that has dual activity against FAK and the related kinase Pyk2 and is therefore expected to have different effects than Y15 due to differences in specificity. Our published data utilizing Y15 clearly indicate a FAK-specific role in IAV replication. However, given that Pyk2 has overlapping roles in immune cell development and function [93][94][95], it is possible that inhibiting both kinases will have alternative outcomes.
While this might at first be viewed as a cause for concern, it provides the opportunity to fine-tune treatments in which either FAK or Pyk2, or both, are inhibited depending on the timing of treatment (early vs. late in infection).
Investigating repurposed cancer drugs for their antiviral properties and their potential immunomodulatory effects during infection will improve our understanding of the role of the respective kinases in the pathogenesis of IAV infections and may lead to the development of novel intervention strategies. Further research on the role of host kinases in virus-induced metabolic changes is warranted and will likely open up additional avenues of basic and translational research.
Conflicts of Interest:
The authors declare no conflict of interest.
PRECISION MEASUREMENTS WITH KAONS AT CERN
The NA62 experiment at CERN took data in 2016–2018 with the main goal of measuring the K+ → π+νν̄ decay. The high-intensity fixed-target setup and the detector performance make the NA62 experiment particularly suited to precision measurements of charged kaon decays. Preliminary results of the first observation and analysis of the K± → π0π0µ±ν (K00µ4) decay, based on NA48/2 data collected in 2003–2004, are presented. Results from studies of K+ → π0e+νγ (K0e3) decays are reported, based on a data sample of more than 10^5 K0e3 candidates recorded in 2017–2018. Preliminary results with the most precise measurements of the K0e3 branching ratios and of the T-asymmetry in the K0e3 decay are presented. The flavour-changing neutral current decay K+ → π+µ+µ− is induced at the one-loop level in the Standard Model. Preliminary results from an analysis of the K+ → π+µ+µ− decay are reported, using a large sample of about 3 × 10^12 kaon decays, with decays into two muons recorded with a downscaled dimuon trigger operating along with the main trigger. The most precise determination of the K+ → π+µ+µ− form-factor parameters a+ and b+ has been made by NA62 using data collected in 2017 and 2018.
Introduction
There is a well-established history of kaon physics in the CERN North Area, including the first measurement of ε′/ε and the discovery of direct charge-parity violation (CPV). The kaon physics programme at CERN is a series of fixed-target experiments observing kaon decays in flight. A proton beam at 400 GeV/c is extracted from the SPS in spills of 3 s effective spill length. The protons are directed onto a beryllium target, producing a secondary hadron beam that consists of about 6% kaons.
The NA48/2 experiment operated during 2003 and 2004. The experimental setup is sketched in Fig. 1 and is described in detail in [1]. The experiment had simultaneous K+ and K− beams selected to have a mean momentum P_K ≈ 60 GeV/c, with ΔP_K/P_K ≈ 3.8%. The beams passed through a kaon beam spectrometer (KABES) that measured the beam particle momentum with ≈ 1% momentum resolution and a time resolution of ≈ 600 ps. Decays of K± inside a fiducial volume were recorded. The momenta of the charged K± decay products were measured with a magnetic spectrometer equipped with 4 drift chambers, two on either side of a dipole magnet, located inside a vessel filled with helium. The spectrometer was followed by a scintillator hodoscope (NA48-CHOD), a liquid-krypton (LKr) calorimeter, a hadronic calorimeter, and a muon veto system. The NA62 experiment has been operating since 2016 and is dedicated to measuring the K+ → π+νν̄ branching fraction. The experimental setup is sketched in Fig. 2 and is described in detail in [2]. The experiment has a K+ beam only, which is selected to have a mean momentum of P_K ≈ 75 GeV/c and ΔP_K/P_K ≈ 1%. The beam particle rate is 750 MHz. The time of each K+ in the beam is measured with a precision of 70 ps by a differential Cherenkov counter (CEDAR) combined with a kaon tagger (KTAG). The momentum of each beam particle is measured by the GigaTracker (GTK) beam spectrometer. About 15% of the K+ decay inside the NA62 fiducial volume. The momenta of the charged decay products are measured with a 4-chamber straw detector (STRAW), with two chambers on either side of a dipole magnet, located inside a vacuum vessel. The spectrometer is followed by a Ring Imaging Cherenkov (RICH) detector, two scintillator hodoscopes (NA48-CHOD and CHOD), a liquid-krypton (LKr) calorimeter, an upgraded hadronic calorimeter, and an upgraded muon veto system.
2. First measurement of the K± → π0π0µ±ν decay

Four-body semi-leptonic kaon decays K → ππℓν are described by 5 kinematic variables: S_π, the dipion invariant mass squared; S_ℓ, the dilepton invariant mass squared; θ_π (θ_ℓ), the angle between the π+ (ℓ+) in the dipion (dilepton) rest frame and the dipion (dilepton) system in the kaon rest frame; and φ, the azimuthal angle between the two planes defined by the dipion and dilepton systems in the kaon rest frame. The decay amplitude depends on form factors labelled F, G, R, and H. For the K± → π0π0µ±ν decay, the s-wave state has no dependence on cos(θ_π) and φ, so only F and R contribute to the amplitude. The form of F(S_π, S_ℓ) has been determined from measurements of the K+ → π0π0e+ν decay [3]. A prediction for R(S_π, S_ℓ) has been computed in the context of Chiral Perturbation Theory (ChPT) [4]. The K± → π0π0µ±ν decay has never been observed, and there is no experimental measurement of R(S_π, S_ℓ).
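The invariant-mass variables above are ordinary Minkowski squares of summed four-momenta, and the missing-mass cut described below is the square of a four-momentum difference. As a small illustration (not NA48/2 analysis code; the four-momenta are made-up numbers in GeV, for illustration only):

```python
def mass_sq(p):
    """Minkowski square p^2 = E^2 - |p|^2 of a four-vector (E, px, py, pz)."""
    E, px, py, pz = p
    return E * E - (px * px + py * py + pz * pz)

def add4(*vectors):
    """Component-wise sum of four-vectors."""
    return tuple(sum(c) for c in zip(*vectors))

def sub4(a, b):
    """Component-wise difference of two four-vectors."""
    return tuple(x - y for x, y in zip(a, b))

# Illustrative (made-up) four-momenta in GeV:
p_pi0_1 = (0.500, 0.0, 0.1, 0.45)
p_pi0_2 = (0.400, 0.1, 0.0, 0.35)
S_pi = mass_sq(add4(p_pi0_1, p_pi0_2))   # dipion invariant mass squared

# Missing-mass squared, M2_miss = (p_K - p_pi1 - p_pi2 - p_pi3)^2,
# with the beam kaon momentum taken from the beam spectrometer:
p_K = (60.002, 0.0, 0.0, 60.0)           # ~60 GeV/c beam kaon along z
p_pi_pm = (0.600, -0.1, -0.1, 0.55)
M2_miss = mass_sq(sub4(p_K, add4(p_pi0_1, p_pi0_2, p_pi_pm)))
```

In the NA48/2 selection sketched in the next paragraph, a cut on this M2_miss (which should cluster near zero for a genuine undetected neutrino) is one of the handles against the π± → µ±ν decay-in-flight background.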
The K± → π0π0µ±ν branching fraction is measured at NA48/2 and is normalised to the K± → π±π0π0 decay. The two decays are selected using common criteria: 4 isolated photons consistent with a 2π0 signature, matched in space and time with a KABES beam track; and a track in the drift chambers with an associated response in the muon veto system. The main background is K± → π±π0π0 decays with the π± → µ±ν decay in flight upstream of the LKr. To mitigate this background, selection cuts are set on: the π±π0π0 invariant mass and transverse momentum; the missing mass squared (M²_miss), computed using the momentum of the KABES track and the combined momentum of the three pions; and cos(θ_ℓ). The remaining background contamination is measured from a fit to the data. The analysis resulted in 2437 candidates, including 354 ± 33 (stat.) background events.
This value is compatible with the theoretical prediction that includes a nonzero contribution from R(S_π, S_ℓ). The branching fraction in the full phase space must be extrapolated based on theory inputs. The experimental measurement is compared to the theoretical prediction in Fig. 3.

3. New study of the K+ → π0e+νγ decay

The K+ → π0e+νγ decay is a radiative kaon decay that can be described in ChPT. The final-state photon is produced either via direct emission (DE) or inner bremsstrahlung (IB), with the total amplitude including the interference between DE and IB. The IB term diverges as E_γ → 0 and θ_e,γ → 0, where E_γ is the photon energy in the kaon rest frame and θ_e,γ is the angle between the electron and photon in the kaon rest frame. To avoid the divergences, the branching fraction is measured in three kinematic regions labelled R_j with j = 1, 2, 3. Table 1 gives the definition of R_j, the values predicted in ChPT from [5], and the most precise measurement from [6].
Table 1. R_j definitions in terms of E_γ and θ_e,γ, ChPT predictions from [5], and results of the measurements performed by the ISTRA+ experiment [6].

The decay is sensitive to the T-odd observable ξ, accessible via the asymmetry A_ξ, defined as A_ξ = (N+ − N−)/(N+ + N−), where N+ (N−) is the number of events with a positive (negative) value of ξ. The predictions are |A_ξ| < 10⁻⁴ in the Standard Model (SM) and beyond [7][8][9][10], while the only measurement is A_ξ(R3) = 0.015 ± 0.021 [6].
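The counting asymmetry defined above is straightforward to evaluate. A minimal sketch follows, with a simple binomial counting uncertainty as an illustration (this is not the experiments' full statistical treatment):

```python
import math

def t_odd_asymmetry(n_plus, n_minus):
    """A_xi = (N+ - N-) / (N+ + N-) for event counts with xi > 0 and xi < 0.

    Also returns the simple binomial counting uncertainty
    sigma(A_xi) = sqrt((1 - A_xi^2) / N) for independent counts.
    """
    total = n_plus + n_minus
    a = (n_plus - n_minus) / total
    err = math.sqrt((1.0 - a * a) / total)
    return a, err
```

The error formula makes the statistics requirement explicit: probing a per-mille asymmetry requires on the order of 10^6 events, and the 10⁻⁴ level expected in the SM requires on the order of 10^8.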
The values of R_j and A_ξ were measured at NA62 using data collected in 2017 and 2018, with the K+ → π0e+νγ measurement normalised to the K+ → π0e+ν decay. The event selection was based on associating a K+ candidate in the KTAG and GTK with an e+ track reconstructed in the STRAW and a π0 → γγ decay reconstructed in the LKr. The radiative photon was required to be coincident with the event and isolated from other energy deposits in the LKr. Events with other activity were rejected to suppress backgrounds from K+ → π0π0e+ν, while additional cuts were applied to reject γ from bremsstrahlung and to suppress K+ → π+π0π0 and K+ → π+π0 decays. After the selection, there are about 1.3 × 10^5 candidates in the R1 signal region. The measured values of R_j and A_ξ are the most precise measurements of R_j and A_ξ(R3), and the first measurements of A_ξ(R1) and A_ξ(R2).
4. Measurement of the K+ → π+µ+µ− decay
The K+ → π+µ+µ− decay is a flavour-changing neutral current process with a branching fraction of O(10⁻⁷) in the SM. Although the decay is dominated by long-distance effects, short-distance physics can be extracted from the form-factor parameters a+ and b+. The SM predicts a+ and b+ to be identical in K+ → π+µ+µ− and K+ → π+e+e− decays, and any difference is indicative of Lepton Flavour Universality (LFU) violation. Moreover, a+ can be related to the B anomalies in models with minimal flavour violation [11]. The largest uncertainties on a+ are in the muon mode.
The values of (a+, b+) are extracted from the data using a fit to the z spectrum of the candidates, where z = M²µµ/M²K, M²µµ is the dimuon invariant mass squared, and M²K is the K+ invariant mass squared. The fit proceeds by reweighting the (a+, b+) values of simulated events until the best description of the data is achieved, evaluated as the smallest χ². The fit yielded the values a+ = −0.592 ± 0.015, b+ = −0.699 ± 0.058, and the branching fraction BF(K+ → π+µ+µ−) = (9.27 ± 0.11) × 10⁻⁸. The results are consistent with earlier measurements of a+ in the muon mode and in the electron mode. There is no discrepancy with SM predictions, and no indication of LFU violation.
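A toy version of such a fit-by-reweighting can be sketched as follows. The weight function below is a stand-in for the true differential decay rate (which in the real analysis involves phase-space and full form-factor terms), and a coarse grid scan replaces the experiment's proper χ² minimisation; all names and numbers are illustrative:

```python
def chi2(data, model):
    """Pearson chi^2 between observed and expected bin contents."""
    return sum((d - m) ** 2 / m for d, m in zip(data, model) if m > 0)

def fit_by_reweighting(z_centres, data, grid_a, grid_b):
    """Scan (a+, b+) pairs, reweight a toy spectrum by the squared linear
    form factor (a+ + b+ * z)^2, and keep the pair with the smallest chi^2."""
    best = (None, None, float("inf"))
    for a in grid_a:
        for b in grid_b:
            model = [(a + b * z) ** 2 for z in z_centres]
            c2 = chi2(data, model)
            if c2 < best[2]:
                best = (a, b, c2)
    return best
```

A real template fit would reweight individual simulated events (so detector acceptance is folded in automatically) and use a continuous minimiser rather than a grid, but the logic — build a template per (a+, b+) hypothesis, compare to the z spectrum, keep the best χ² — is the same.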
"year": 2023,
"sha1": "31b2ec71c96f5403284ff77908d974450d6cd85d",
"oa_license": null,
"oa_url": "https://doi.org/10.5506/aphyspolbsupp.16.3-a10",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5a9d874f21244e495f277a780df3b8fba8dc2668",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
QCMUQ@QALB-2015 Shared Task: Combining Character level MT and Error-tolerant Finite-State Recognition for Arabic Spelling Correction
We describe the CMU-Q and QCRI's joint efforts in building a spelling correction system for Arabic in the QALB 2015 Shared Task. Our system is based on a hybrid pipeline that combines rule-based linguistic techniques with statistical methods using language modeling and machine translation, as well as an error-tolerant finite-state automata method. We trained and tested our spelling corrector using the dataset provided by the shared task organizers. Our system outperforms the baseline system and yields better correction quality, with an F-score of 68.12 on the L1-test-2015 testset and 38.90 on the L2-test-2015. This ranks us 2nd in the L2 subtask and 5th in the L1 subtask.
Introduction
With the increased usage of computers in the processing of various languages comes the need for correcting errors introduced at different stages. Hence, the topic of text correction has seen a lot of interest in the past several years (Haddad and Yaseen, 2007; Rozovskaya et al., 2013). Numerous approaches have been explored to correct spelling errors in texts using NLP tools and resources (Kukich, 1992; Oflazer, 1996). Spelling correction for Arabic is an understudied problem in comparison to English, although a small amount of research has been done previously (Shaalan et al., 2003; Hassan et al., 2008). The reason for this is the complexity of the Arabic language and the unavailability of language resources. For example, the Arabic spell checker in Microsoft Word gives incorrect suggestions for even simple errors. The first shared task on automatic Arabic text correction was established recently. Its goal is to develop and evaluate spelling correction systems for Arabic trained either on naturally occurring errors in text written by humans or machines. Similar to the first version, in this task participants are asked to implement a system that takes as input MSA (Modern Standard Arabic) text with various spelling errors and automatically corrects it. In this year's edition, participants are asked to test their systems on two text genres: (i) a news corpus (mainly newswire extracted from Aljazeera); and (ii) a corpus of sentences written by learners of Arabic as a Second Language (ASL). Texts produced by learners of ASL generally contain a number of spelling errors. The main problem faced by them is using Arabic with vocabulary and grammar rules that are different from their native language.
In this paper, we describe our Arabic spelling correction system. Our system is based on a hybrid pipeline which combines rule-based techniques with statistical methods using language modeling and machine translation, as well as an error-tolerant finite-state automata method. We trained and tested our spelling corrector using the dataset provided by the shared task organizers (Rozovskaya et al., 2015). Our systems outperform the baseline and achieve better correction quality, with an F-score of 68.42% on the 2014 testset and 44.02% on the L2 Dev.
2 Data Resources

QALB: We trained and evaluated our system using the data provided for the shared task and the m2Scorer (Dahlmeier and Ng, 2012). These datasets are extracted from the QALB corpus of human-edited Arabic text produced by native speakers, non-native speakers and machines.

Monolingual Arabic corpus: Additionally, we used the GigaWord Arabic corpus and the News Commentary corpus, as used in a state-of-the-art English-to-Arabic machine translation system (Sajjad et al., 2013b), to build different language models (character-level and word-level LMs). The complete corpus consists of 32 million sentences and approximately 1,700 million tokens. Due to computational limitations, we were able to train our language model only on 60% of the data, which we randomly selected from the whole corpus.
Our Approach
Our automatic spelling corrector consists of a hybrid pipeline that combines four different and complementary approaches: (i) a morphology-based corrector; (ii) a rule-based corrector; (iii) an SMT (statistical machine translation)-based corrector; and (iv) an error-tolerant finite-state automata approach. (Part of the statistics reported in Table 1 is taken from Diab et al. (2014). The word list used for the finite-state approach is freely available at: http://sourceforge.net/projects/arabic-wordlist/)
Our system design is motivated by the diversity of the errors contained in our train and dev datasets (see Table 1). It was very challenging to design one system to handle all of the errors. We propose several expert systems, each tackling a different kind of spelling error. For example, we built a character-level machine translation system to handle cases of space insertion and deletion affecting non-clitics, as the clitic cases are specifically treated by the rule-based module. To cover some remaining character-level spelling mistakes, we use a Finite-State-Automata (FSA) approach. All our systems run on top of each other, gradually correcting the Arabic text in steps.
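The cascaded design above amounts to simple function composition: each module consumes the previous module's output. A minimal sketch (the stage functions here are placeholders standing in for the Morph, Rules, CBMT and EFST modules described below):

```python
def run_pipeline(sentence, correctors):
    """Apply each corrector in order; every stage sees the previous stage's output."""
    for corrector in correctors:
        sentence = corrector(sentence)
    return sentence

# Placeholder stages standing in for Morph -> Rules -> CBMT -> EFST;
# here: trim the input, then normalise runs of whitespace.
stages = [str.strip, lambda s: " ".join(s.split())]
```

One consequence of this design, worth noting, is that stage order matters: a later stage can only fix what earlier stages have left (or introduced), which is why the evaluation section explores different module combinations.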
3.1 MADAMIRA Corrections (Morph)
MADAMIRA (Pasha et al., 2014) is a tool originally designed for morphological analysis and disambiguation of MSA and dialectal Arabic texts. MADAMIRA employs different features to select, for each word in context, a proper analysis, and performs Alif and Ya spelling correction for the phenomena associated with these letters. The task organizers provided the shared task data preprocessed with MADAMIRA, including all of the features generated by the tool for every word.
Similar to Jeblee et al. (2014), we used the corrections proposed by MADAMIRA and applied them to the data. We show in Section 4 that while the correction candidate proposed by MADAMIRA may not necessarily be correct, it performs at a very high precision.
Table 2: Preparing the training, tuning and test corpus for alignment (original, source and target rows at the word and character levels; example English gloss: "which I have seen in Youtube is that").

3.2 Rule-based Corrector (Rules)

The MADAMIRA corrector described above does not handle splits and merges. In addition to that, we use the rule-based corrector described in . The rules were created through analysis of samples of the 2014 training data. We also apply a set of rules to reattach clitics that may have been split apart from the base word. After examining the train dataset, we realized that 95% of word merging cases involve /w/ ('and') attachment. Furthermore, we removed duplications and elongations by merging a sequence of two or more of the same character into a single instance.
Statistical Machine Translation Models
An SMT system translates sentences from one language into another. An alignment step learns a mapping from source to target. A phrase-based model is subsequently learned from the word alignments. The phrase-based model, along with other decoding features such as language and reordering models, is used to decode the test sentences. We use the SMT framework for spelling correction, where erroneous sentences act as our source and corrections act as the target in the training data.
Phrase-based error correction system (PBMT):
The available training data from the shared task consists of parallel sentences. We build a phrase-based machine translation system using it. Since the system learns at the phrase level, we hope to identify and correct different errors, especially the ones that were not captured by MADAMIRA.
Character-based error correction system (CBMT): There has been a lot of work in using character-based models for Arabic transliteration to English (Durrani et al., 2014c) and for conversion of Arabic dialects into MSA and vice versa (Sajjad et al., 2013a; Durrani et al., 2014a). The conversion of Arabic dialects to MSA at the character level can be seen as a spelling correction task where small character-level changes are made to convert a dialectal word into an MSA word. We also formulate our correction problem as a character-level machine translation problem, where the pre-processed incorrect Arabic text is considered as the source, and our target is the correct Arabic text provided by the shared task organizers.
The goal is to learn correspondences between errors and their corrections. All the training data is used to train our phrase-based model. We treat sentences as sequences of characters instead, as shown in Table 2. Our intuition behind using such a model is that it may capture and correct: (i) split errors, occurring due to the deletion of a space between two words; (ii) merge errors, occurring due to the insertion of a space between two words by mistake; and (iii) common spelling mistakes (hamzas, yas, etc.).
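The Table 2 transformation amounts to rewriting each sentence as space-separated characters with an explicit word-boundary symbol, so that the character-level MT system can learn space insertions and deletions like any other edit. A sketch (the "_" boundary marker is our assumption; the paper's exact symbol is not visible in this extraction):

```python
def to_char_tokens(sentence, boundary="_"):
    """'ab cd' -> 'a b _ c d': one token per character, with the word
    boundary made an ordinary token the MT system can insert or delete."""
    return " ".join(boundary if ch == " " else ch for ch in sentence)

def from_char_tokens(tokens, boundary="_"):
    """Invert the mapping after decoding, restoring ordinary spaces."""
    return "".join(" " if t == boundary else t for t in tokens.split(" "))
```

Training then proceeds exactly as for word-level SMT, only over these character tokens, which is why split and merge errors become learnable one-token edits.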
We used the Moses toolkit (Koehn et al., 2007) to create word- and character-level models built on the best pre-processed data (mainly the feat14 tokens extracted using MADAMIRA, described in 3.1). We use the standard setting of MGIZA (Gao and Vogel, 2008) and grow-diagonal-final as the symmetrization heuristic (Och and Ney, 2003) of Moses to get the character-to-character alignments. We build 5-gram word and character language models using KenLM (Heafield, 2011).
Error-tolerant FST (EFST)
We adapted the error-tolerant recognition approach developed by Oflazer (1996). It was originally designed for the analysis of the agglutinative morphology of Turkish words and used for a dictionary-based spelling corrector module. This error-tolerant finite-state recognizer identifies the strings that deviate mildly from a regular set of strings recognized by the underlying FSA. For example, suppose we have a recognizer for a regular set over {a, b} described by the regular expression (aba + bab)*, and we want to recognize inputs that are slightly corrupted: abaaaba may be matched to abaaba (correcting for a spurious a), babbb may be matched to babbab (correcting for a deletion), and ababba may be matched to either abaaba (correcting a b to an a) or to ababab (correcting the reversal of the last two symbols). This method is well suited for handling mainly transposition errors resulting from swapping two letters, or typing errors of neighboring letters on the keyboard. We use the Foma library (Hulden, 2009) to build the finite-state transducer using the Arabic word list as a dictionary. For each word, our system checks if the word is analyzed and recognized by the finite-state transducer. It then generates a list of correction candidates for the non-recognized ones. The candidates are words having an edit distance lower than a certain threshold. We score the different candidates using a LM and consider the best one as the possible correction for each word.
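The candidate-then-score logic above can be approximated without an FST by brute-force edit-distance-one generation (the real system walks the transducer directly, which scales far better for large dictionaries; the score function here is a unigram stand-in for the KenLM language model, and all names and data are illustrative):

```python
def edits1(word, alphabet):
    """All strings at edit distance one from `word`: deletions,
    transpositions, substitutions and insertions."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in alphabet]
    inserts = [L + c + R for L, R in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def correct(word, lexicon, alphabet, score):
    """Keep in-lexicon words; otherwise return the highest-scoring
    in-lexicon candidate within edit distance one (or the word itself)."""
    if word in lexicon:
        return word
    candidates = [w for w in edits1(word, alphabet) if w in lexicon]
    return max(candidates, key=score) if candidates else word
```

The edit-distance threshold mentioned in the text corresponds here to iterating edits1 (distance two would be edits of edits), at a combinatorial cost that motivates the FST formulation.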
Evaluation and Results
We experimented with different configurations to reach an optimal setting when combining the different modules. We evaluated our system for precision, recall, and F-measure (F1) against the devset reference and the 2014 test set. Results for various system configurations on the L2 dev and 2014 test sets are given in . We achieved our best F-measure value with the following configuration: using the CBMT system after applying the clitic re-attachment rules, with the output then passed through the EFST. Using this combination we are able to correct 66.79% of the errors on the 2014 test set with a precision of 70.14%. Our system outperforms the baseline for the L2 data as well, with an F-measure of 44.02% (compared to F1=20.28% when we use the Morph module).
QCMUQ@QALB-2015 Results
We present here the official results of our system (Morph+CBMT+Rules+EFST) on the 2015 QALB test set (Rozovskaya et al., 2015). The official results of our QCMUQ system are presented in Table 4. These results rank us 2nd in the L2 subtask and 5th in the L1 subtask.

6 Conclusion and Future work

We described our system for automatic Arabic text correction. Our system combines rule-based methods with statistical techniques based on the SMT framework and LM-based scoring. We additionally used finite-state automata to do corrections. Our best system outperforms the baseline with an F-score of 68.12 on the L1-test-2015 testset and 38.90 on the L2-test-2015. In the future, we want to focus on correcting punctuation errors to produce a more accurate system. We plan to experiment with different combination methods similar to the ones used for combining MT outputs.
"year": 2015,
"sha1": "ee9ec0a5d989c28892661a803bc6a197d59b0162",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/W15-3217.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "225ecc288f9ad8eb1abf6dbbc270b88ea06762fc",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Inflammatory myofibroblastic tumour: case report of a rare form of bladder tumour
Introduction and importance: Inflammatory myofibroblastic tumour (IMT) is a rare tumour with malignant potential and has been described in many major organs, the most frequent site being the lungs. However, the bladder is an extremely rare location. IMT presents a unique diagnostic challenge because of the characteristics it shares with malignant neoplasms. Case presentation: Here we report the case of a 47-year-old male who presented with storage lower urinary tract symptoms associated with non-specific lower abdominal pain of one month's duration. Contrast-enhanced computed tomography of the abdomen and pelvis revealed a 6 cm tumour at the dome and left anterior wall of the bladder. He underwent laparotomy and partial cystectomy. Histopathology results were consistent with an IMT. Clinical discussion: Even though bladder IMT is indolent in course, typical IMTs can be locally aggressive. Due to the lack of specificity in clinical symptoms, it is not easy to arrive at a precise diagnosis before surgery. Hence, the final diagnosis depends on histomorphological features and the immunohistochemical profile. Conclusion: It can be challenging to distinguish IMT from malignant neoplasms both clinically and histologically. As such, local surgical resection with close follow-up remains the mainstay of treatment for urinary tract IMT.
Introduction
Inflammatory myofibroblastic tumour (IMT) is an extremely rare clinical and pathological disease entity that has been reported in multiple anatomic locations, the most frequent site being the lungs. It can also be found in head and neck soft tissue, the abdominal cavity, omentum, retroperitoneum and other tissues and organs. However, IMT is rarely encountered in the urinary tract [1].
In 1994, the World Health Organization (WHO) defined IMT as an intermediate soft tissue tumour with a background proliferation of spindle cells associated with a variably dense polymorphic infiltrate of mononuclear inflammatory cells [2]. Children and young adults are affected very often, but IMTs may occur across the entire age range; a slight predominance of women is discussed in some studies [3], while others report a male predominance [4]. The aetiology of IMT is unknown; theories include an inflammatory reaction to an infection or an underlying low-grade malignancy. However, immunohistochemical staining of IMT reveals the presence of IgG-predominant, polyclonal plasma cells. This finding lends support to the theory that IMT is a reactive inflammatory process [5].
Even though it is indolent in nature, IMT presents a unique diagnostic challenge because of the characteristics it shares with aggressive malignant neoplasms. It is therefore essential to distinguish this tumour from other malignant spindle cell tumours, such as the sarcomatoid variant of urothelial carcinoma and leiomyosarcoma, and from benign lesions such as the postoperative spindle cell nodule of the bladder [6].
The outcome of these tumours can vary depending on anatomic location, with lung and bladder tumours typically having a more favourable outcome. However, IMTs may locally recur in 25% of patients with abdominopelvic tumours. Patients may rarely develop metastatic disease; common sites include the lung, liver, bone, and brain [7].
We report a case of IMT of the urinary bladder in a 47-year-old male treated by partial cystectomy; histopathology results were consistent with an IMT. This case report has been written according to the SCARE 2020 criteria [12].
Presentation of case
A 47-year-old male mason presented with a burning sensation while passing urine, mild lower abdominal pain, urgency and frequency of one month's duration. He denied haematuria and fever with chills. He had no history of urinary tract infection, instrumentation, trauma or any other urological problems. His past medical, surgical and family history were not significant.
The examination was essentially unremarkable other than mild abdominal tenderness in the suprapubic region.
An abdominal ultrasound scan revealed a heterogeneously hyperechoic polypoidal mass located in the left lateral wall and dome of the bladder. Urine analysis and other blood investigations, including the renal profile, were normal. Contrast-enhanced computed tomography (CECT) was performed and confirmed the presence of a tumour at the dome of the bladder and the left anterior bladder wall measuring 6 × 5.7 × 5.5 cm, abutting the sigmoid colon. There was no metastasis. Lymph nodes were not involved (Fig. 1).
A biopsy of the tumour was taken at rigid cystoscopy, which revealed a solitary extramucosal exophytic polypoidal tumour with surrounding oedema and normal surface mucosa at the dome of the bladder and the left anterior bladder wall. Colonoscopy was normal up to the caecum.
After the cystoscopy and biopsy, the patient developed continuous haematuria; his haemoglobin fell to 8.4 g/dl, prompting transfusion of one unit of whole blood the day before the surgery.
He subsequently underwent laparotomy and partial cystectomy of the urinary bladder tumour with a 1 cm margin under general anaesthesia. Intraoperative findings showed a large solid bladder mass measuring 6 × 6 × 5 cm, not infiltrating the sigmoid colon or any adjacent organs. We clinically diagnosed it as a mesenchymal tumour. The excised mass was sent for histological examination (Fig. 2).
He was discharged on the third postoperative day, and the urinary catheter was removed on day 14.
Histologic examination of the excised mass showed that the tumour infiltrated and extended from the subserosa to the mucosa. The tumour was composed of predominantly spindle and a few stellate myofibroblastic cells. There was variable cellularity. Hypocellular areas showed scattered haphazard cells in an abundant myxoid stroma. Hypercellular areas showed spindle cell fascicles and storiform patterns in a myxofibrous stroma. Focal necrosis was present. The mitotic count was 1 per h.p.f. Nuclear pleomorphism was absent. The serosal surface was intact and situated 3 mm away from the tumour at the closest point. The tumour was situated 3 mm, 2 mm and 0.5 mm away from the anterior, left lateral and posterior circumferential resection margins, respectively. The mucosa was lined by transitional epithelium associated with Von Brunn's nests and cystitis cystica. Immunohistochemical stains showed that tumour cells were strongly positive for vimentin and diffusely positive for anaplastic lymphoma kinase (ALK). The tumour was negative for epithelial membrane antigen (EMA). Therefore leiomyosarcoma, carcinosarcoma and postoperative spindle cell nodule were excluded. Histopathology results were consistent with an IMT (Fig. 3).
He was on regular clinic follow-up for six months. Post-operative flexible cystoscopy and CECT of the abdomen and pelvis were normal. The patient was satisfied with his clinical outcome.
Discussion
The most common initial manifestation of IMT of the bladder is painless gross haematuria, resulting in anaemia. Other symptoms may include frequency of urination and dysuria. Urinary tract obstruction may also occur [8]. Our patient presented with vague symptoms such as dysuria and abdominal pain; he did not have haematuria.
Inflammatory myofibroblastic tumour of the urinary bladder is a rare condition of unknown aetiology. Several predisposing factors have been described, such as recurrent cystitis and prior urinary bladder surgery, but the cause and the pathogenesis remain controversial [9]. In keeping with this, we could not find any aetiological or risk factors in our patient.
Due to the lack of specificity in clinical symptoms, it is not easy to arrive at a definite diagnosis before surgery. Therefore, the final diagnosis usually depends on histomorphological features and the immunohistochemical profile. Urologists therefore need to be aware of the possibility of rare cases of malignant bladder myofibroblastic tumours. Inflammatory myofibroblastic tumour of the urinary bladder usually follows a benign clinical course. Therefore, the optimal curative management is conservative surgery: transurethral tumour resection or partial cystectomy. Recognition of this diagnostic entity is vital to avoid unnecessary surgical management with resulting functional impairment [10]. There are no standardized schemes regarding follow-up, but it is advisable since this tumour has a 25% recurrence rate. However, given the indeterminate biological behaviour of these tumours, continued monitoring of their clinical course is strongly recommended [11].
Conclusion
This report describes a rare case of bladder IMT, which usually follows a relatively indolent course. However, it can be challenging to distinguish IMT from sarcomatoid carcinoma both clinically and histologically. As such, local surgical resection with close follow-up remains the mainstay of treatment for urinary tract IMT.
Sources of funding
No funding sources were used for this case report.
Ethical approval
Not applicable.
Informed consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Author contribution
B. Balagobi, S. Gobishankar, A. Ginige, D.S. Gamlaksha, J. Sanjeyan, and L. Suvethini have equally contributed to the concept, design, data collection, and writing of this case report.
Registration of research studies
Not applicable.
Declaration of competing interest
The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence this work.
"year": 2022,
"sha1": "677a2e8841554b5f0047c517990db22112aa0024",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ijscr.2022.106786",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e3b6d421607857b524406e527e44d84e85657ec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Germination patterns and seedling growth of congeneric native and invasive Mimosa species: Implications for risk assessment
Abstract Comparisons of plant traits between native and invasive congeners are useful approaches for identifying characteristics that promote invasiveness. We compared germination patterns and seedling growth of locally sympatric populations of native Mimosa himalayana and two varieties of invasive M. diplotricha (var. diplotricha and var. inermis) growing in southeastern Nepal. Seeds were germinated under a 12‐h photoperiod or complete dark, low (25/15°C day/night) and high (30/20°C) temperatures, different water stress levels (0, −0.1, −0.25, −0.5, −0.75 and −1.0 MPa), and soil depths (0, 2, and 4 cm). Plant height, biomass allocations, and relative growth rate (RGR) of seedlings were measured. Invasive M. diplotricha had higher germination percentage, rate, and shorter germination time compared with the native species. Germination of both congeners declined as water stress increased, but the decline was more pronounced in native species. Seedling emergence declined with increasing depth in all taxa. The seedlings of invasive species were taller with higher leaf number and allocated greater proportion of biomass to shoot, whereas the native congener allocated greater biomass to root. The RGR was nearly twice as high in invasive species as it was in the native congener. Seedling height and number of leaves were always higher in invasive than in native species, and the native–invasive differences increased over time. Better germination and higher growth performance of invasive species than the congeneric native one suggests that seed germination and seedling growth can be useful traits for the prediction of species' invasiveness in their introduced range during risk assessment process.
Keywords: biomass allocation, functional traits, Mimosa diplotricha, M. himalayana, relative growth rate, species invasiveness, Timson's index
Taxonomy classification: Evolutionary ecology
Introduction
Understanding what causes some species to become invasive has received a lot of attention because only a small percentage of introduced species become invasive (Williamson & Fitter, 1996).
Identification of the factors that contribute to successful invasion is essential for predicting the emergence and expansion of invasive species and formulating measures for their prevention (e.g., through weed risk assessment) and control (Groves et al., 2003; Kolar & Lodge, 2001; Koop, 2012). In general, plant functional traits, such as those related to physiology, biomass allocation, growth rate, size, and fitness, may determine invasiveness (Grotkopp et al., 2002; Kolar & Lodge, 2001). These functional traits can also be used to predict the performance of invasive species in ecosystems (Kaushik et al., 2022), and some of them (e.g., seed size, capacity to propagate through vegetative parts) have therefore been routinely used in weed risk assessments (Groves et al., 2003). The performance of invasive species (i.e., invasiveness) also depends on the relative position of co-occurring native and invasive species in functional trait space (Hui et al., 2021). Additionally, the use of plant functional traits as input variables in ecological niche modeling may improve the accuracy of model predictions (Wang & Wan, 2021). Several studies have revealed plant characteristics that frequently correspond with invasion success, including size (height/biomass), growth rate, and competitiveness (van Kleunen et al., 2010). However, very few generalizations can be made about the role of plant functional traits in invasiveness (i.e., the capacity of an introduced species to establish, spread, and inflict impacts on native species, ecosystems, and the environment in a reasonably short period) among alien species and across studies (Hayes & Barry, 2008), despite a massive increase in the invasion ecology literature (Pyšek et al., 2008). This is because possession of a set of plant traits per se may not enable a plant to be invasive: invasiveness also depends on a range of other biotic (e.g., presence of herbivores, pathogens, native competitors, traits of the recipient community) and abiotic (e.g., climatic suitability, nutrient and other resource availability) factors which vary across habitats and geographic regions (Catford et al., 2019; Chun et al., 2010; Hui et al., 2023; Koop, 2012). Therefore, it can be challenging to pinpoint traits that are consistently linked to invasiveness (Alpert et al., 2000), yet identification of such potential traits is an essential prerequisite for explaining and predicting invasions (Rejmánek, 1996). Research interest in these topics has increased in recent years (e.g., Catford et al., 2019; Liu et al., 2022; Milanović et al., 2020), yet the available data remain inadequate to draw any generalization globally and across taxonomic groups.
Further, understanding the differences between co-occurring native and invasive species in key functional traits has been identified as one of the major research gaps in comprehending species' performance in dynamic ecosystems (Kaushik et al., 2022). This warrants additional studies with taxon- and site-specific analyses, which can provide better insights (Hayes & Barry, 2008). However, the majority of studies of species' traits associated with invasiveness have been limited to relatively simple traits, such as plant height and growth form, that are easily available (van Kleunen et al., 2010).

Other traits, such as those related to seed germination, are considered important for predicting invasiveness (Perglová et al., 2009), but some studies have reported no significant differences in germination between successful and unsuccessful invaders (Radford & Cousens, 2000; Reichard, 2003). Therefore, additional studies comparing native-invasive congeners under a common environment are needed to further characterize and identify any potential differences in seed germination and seedling growth traits, which can subsequently provide valuable insights into species' invasiveness (Chauhan & Johnson, 2008; Graebner et al., 2012; Schlaepfer et al., 2010; van Kleunen et al., 2010). Furthermore, comparisons between congeneric native and invasive species sharing the same habitat provide additional opportunities to assess whether the invasive species can displace a native species (Munoz & Ackerman, 2011; Webb et al., 2002). Therefore, a comparison of germination patterns and seedling growth vigor between congeneric native and invasive species can be an effective approach to identify traits useful for the prediction of invasiveness.
Seed germination rate and timing of invasive species may have significant impacts on coexisting native species, as early germination of an invasive species may be advantageous for survival, growth, and subsequent establishment in competitive environments due to space preemption and greater access to resources (Byun, 2023; Guido et al., 2017). Invasive species tend to have a higher germination percentage (GP) and shorter time to germination than co-occurring or congeneric native species (Cervera & Parra-Tabla, 2009; Munoz & Ackerman, 2011; Wainwright & Cleland, 2013). For example, invasive Ardisia elliptica (Myrsinaceae) has a higher GP and shorter mean germination time (MGT) than the sympatric native A. obovata in Puerto Rico (Munoz & Ackerman, 2011). Tolerance of invasive Ruellia nudiflora (Acanthaceae) to temperature and water stress extremes during germination is higher than that of native R. pereducta in Mexico (Cervera & Parra-Tabla, 2009). After germination, invasive plants tend to grow more quickly than native species (Grotkopp & Rejmánek, 2007), or they have traits associated with higher relative growth rates (RGR) compared with native species (Hamilton et al., 2005). For example, James and Drenovsky (2007) reported a higher RGR in invasive species (6 spp.) than in co-occurring native species (6 spp.) in Oregon, USA. Similarly, the invasive species Prosopis juliflora and Leucaena leucocephala were reported to use light and nutrients more efficiently than the native species Mimosa caesalpiniifolia and Myracrodruon urundeuva, favoring growth and resource acquisition even under limited water availability (Barros et al., 2020). However, Munoz and Ackerman (2011) reported a lack of difference in growth attributes such as RGR between invasive Ardisia elliptica and native A. obovata. A meta-analysis also revealed that the difference between invasive and native species in terms of RGR is not always significant and is highly dependent on growing conditions (Daehler, 2003).
As summarized above, the empirical evidence is either inadequate (germination patterns) or inconsistent (growth rates) to draw any generalization regarding the distinction between invasive and native plant species. This necessitates additional studies, particularly in data-poor regions such as South Asia. In this context, we studied the seed germination patterns and seedling growth of sympatric populations of native Mimosa himalayana and two varieties of the recently reported invasive M. diplotricha in southeastern Nepal.
The native M. himalayana has no record of introduced populations outside of its native distribution range in South and Southeast Asia (POWO, 2023). Invasive M. diplotricha has been reported to be a major threat to forest ecosystems, agricultural land, and pastures in tropical to subtropical Asia and the Pacific regions (Sankaran & Suresh, 2013). In Nepal, the species has been reported to invade only southeastern Nepal (Sharma et al., 2020), where massive environmental impacts (e.g., smothering native plant species) and economic impacts (e.g., livestock death as a result of poisoning) have already been observed (personal observations of BBS and NK during multiple visits to southeastern Nepal in 2021 and 2022).
Additionally, it shares habitat with the native congener M. himalayana in southeastern Nepal. We hypothesize that the invasive M. diplotricha outperforms the sympatric native congener M. himalayana (which has no introduced population elsewhere) in seed germination and seedling growth. To test this hypothesis, we put forth the following questions: (1) Does invasive M. diplotricha perform better in germination than the native M. himalayana? (2) Is the tolerance of extreme water stress during germination higher in the invasive species than in the native? (3) Is seedling growth performance of the invasive species better than that of the native? Results of this study provide useful insights into the invasiveness of Mimosa species and can be helpful in detecting potentially invasive species during risk assessment.
Study species
Sympatric populations of native Mimosa himalayana Gamble and invasive M. diplotricha Wright ex Sauvalle (Fabaceae) from the Jhapa district in southeastern Nepal were selected for the present study. Mimosa himalayana is a scrambling shrub native to the Himalaya (Grierson & Long, 2001; Shrestha et al., 2022), which is not known outside of its native distribution range (POWO, 2023).
Mimosa diplotricha is a subshrub native to tropical Central and South America, which is invasive in more than 45 countries in Asia, Africa, and Oceania, including Nepal (Sharma et al., 2020; Uyi, 2020). It appears that the weed was introduced intentionally outside of its native range for soil bioengineering, as a cover crop, as green manure, or as a hedge plant. The weed produces numerous small seeds (10,000 seeds per plant per annum), which are dispersed by animals (spiny pod clusters adhering to animal fur), water, and as contaminants of agricultural produce and construction materials (e.g., sand, gravel) (Parsons & Cuthbertson, 1992). Mimosa diplotricha is known to have significant negative impacts on livelihoods through reduced forage production in rangelands, interference with human movement, reduced abundance of medicinal plants, and crop yield loss (Witt et al., 2020). Consumption of this weed by cattle may lead to their death because of nephrotoxicity (Sankaran & Suresh, 2013; Shridhar & Kumar, 2014). Two varieties of M. diplotricha have been reported in Nepal (Sharma et al., 2020). Mimosa diplotricha var. diplotricha Sauvalle has prickles on the stem surface, while M. diplotricha var. inermis (Adelbert) Veldkamp is without prickles (Wu et al., 2010) (Table 1, Figure 1). The occurrence frequency of M. diplotricha var. diplotricha in southeastern Nepal and its invasiveness (based on the extent of area invaded) are higher than those of M. diplotricha var. inermis (Sharma et al., 2020; personal observations of BBS).
Seed collection
Ripe seeds of each taxon were collected from a ca. 10 × 10 m plot in the Jhapa district in southeastern Nepal (Table 2, Figure A1). The region has a dry tropical climate with a hot and humid summer with monsoon rain (June through September) and a cold and dry winter. The collected seeds were cleaned upon returning to the laboratory, placed in air-tight plastic containers, and stored in a refrigerator (4°C) for about 5 days until they were used in germination experiments.
Seed size and mass measurements
Seed mass was measured in three lots of 50 healthy seeds each, after oven drying for 48 h at 70°C, using a digital balance (0.0001 g) (Baskin & Baskin, 2014). Seed size was measured with a trinocular stereomicroscope using the "Wave Image" software. Twenty seeds of each taxon were selected randomly, and their length (longest axis of the seed starting from the hilum) and breadth (second longest axis, perpendicular to the length axis) were measured (Figure A2).
Test for imbibition of water
To determine if the seeds were permeable to water, we measured the dry mass of 50 seeds per taxon and transferred them to a Petri dish containing water. These seeds were re-weighed after 24 and 48 h. At each measurement, seeds were removed from the Petri dish, blotted dry, and weighed using a digital balance (Baskin & Baskin, 2014). Seed mass of all taxa did not increase substantially even 48 h after placing them in water (Figure A3), suggesting that the seed coat was impermeable and the seeds had physical dormancy (Baskin & Baskin, 2014). Physical dormancy of Mimosa was further confirmed by rapid germination of seeds after rupturing the seed coats in a preliminary experiment (data not presented here).
Seed scarification
Seed scarification was required to break the physical dormancy of Mimosa species. Following the suggestion of Chauhan and Johnson (2009), hot water scarification was used because it is more convenient and environment-friendly than other methods of scarification, such as mechanical methods and the use of acids. For hot water scarification, seeds were wrapped in muslin cloth as a loose pouch and dipped in 70°C hot water for 10 minutes (Chauhan & Johnson, 2009). Soon after scarification, the pouches were removed from the hot water and opened immediately to bring the seeds to room temperature, and subsequently the seeds were transferred to Petri dishes for germination experiments.
Table 1. Taxonomic and biogeographic account of the study species.

The number of germinated seeds in each Petri dish was counted daily for 14 days. At each count, the germinated seeds were removed to reduce crowding of seedlings. Emergence of the radicle was used as the criterion for germination of seeds (Baskin & Baskin, 2014).
However, the germination of seeds maintained in the dark was recorded at the end of the experiment.
Seedling emergence
To evaluate the impact of seed sowing depth on seedling emergence, seeds were sown at predefined depths in pots (height 13 cm, diameter 11 cm) filled with a prepared substrate (sand, vermicompost, and cocopeat in a 7:2:1 ratio by volume). The pots were enamel-painted black to block out all light except from the upper surface. In each pot, 30 scarified seeds were sown at one of various depths: 0 (surface), 2, 4, 6, 8, and 10 cm.

Seeds were sown, covered with the substrate to the marked depth, and watered slowly to field capacity. Further watering (10 mL per pot) was done at 1-day intervals. The number of seedlings that emerged above the soil surface was recorded daily. Seedlings were considered to have emerged when the cotyledonary leaves spread out completely above the substrate surface. Emerged seedlings were removed daily, and the experiment was terminated 30 days after seed sowing.
Raising seedlings and biomass harvest
Some functional traits, such as height, biomass allocation, and RGR were measured in seedlings grown in a greenhouse (Figure A5).
For raising seedlings, plastic pots (height 13 cm, diameter 11 cm) were filled with a mixture of soil, sand, vermicompost, and cocopeat in the proportions 3:3:3:1 by volume. The substrate was saturated to field capacity with tap water (ca. 250 mL).
Then, three scarified seeds were sown in each pot, with 80 pots for each taxon. The pots were maintained in a greenhouse with a mean temperature of 27 ± 3°C and a light intensity of ca. 6220 lux measured at 11-12 a.m. Every alternate day, the pots were rearranged to lessen positional effects. Watering (10 mL per pot) was done daily. Seeds of both varieties of Mimosa diplotricha began to germinate 5 days after sowing (DAS), whereas the seeds of M. himalayana began to germinate at 10 DAS. Each pot had two to three seedlings depending on the number of seeds germinated. After 20 DAS, two seedlings were maintained in each pot that had had three seedlings. Finally, only one seedling was maintained in each pot after 38 DAS by removing the seemingly less vigorous seedling. While doing so, we tried to ensure for each taxon that all pots had seedlings that were nearly the same height and were healthy. Some of the seedlings died, and seedling mortality of native Mimosa himalayana was higher (20%) than that of the invasive congener (5% in M. diplotricha var. diplotricha and 6% in M. diplotricha var. inermis).
The first biomass harvest of seedlings was done at 48 DAS.
Twenty seedlings of each taxon were drawn at random by a lottery method. The number of leaves of each seedling was counted, and the leaves were subsequently excised along with the petioles to determine leaf biomass. Then, the seedlings were removed from the pots along with the soil and gently washed to remove all soil particles from the roots. Shoot and root lengths of each seedling were measured, and shoots and roots were separated to determine the respective biomass. Fresh root, stem, and leaf parts of each seedling were packed separately in paper envelopes and oven dried at 60°C for 72 h to determine the dry biomass (digital balance, 0.0001 g). Three subsequent harvests were made in the same way at 14-day intervals (i.e., 62, 76, and 90 DAS). Due to seedling mortality, the number of seedlings available at each harvest ranged from 12 to 20 per taxon (Table A1).
Data analyses
Daily records of seed germination in Petri dishes were used to calculate germination percentage (GP), Timson's index (TI), and MGT following the methods suggested by Baskin and Baskin (2014). The GP was the number of seeds germinated as a percentage of the total number of seeds sown. The TI, a measure of germination rate, was calculated as the sum of the cumulative daily GP obtained for each Petri dish. Since the germination experiment was continued for 14 days, the maximum possible value of TI was 1400. Finally, the MGT, a measure of the time it takes for most of the seeds to germinate, was calculated as MGT = Σ(n_i t_i) / Σ n_i, summing over i = 1, ..., k, where t_i was the time from the start of the experiment to the ith observation (day), n_i was the number of seeds germinated at time i, and k was the last day of the germination experiment (Baskin & Baskin, 2014). To assess the impacts of seed sowing depth on seedling emergence, the number of emerged seedlings was expressed as a percentage of the total seeds sown.
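As a rough sketch, the three germination indices described above (GP, Timson's index as the sum of cumulative daily GP, and MGT as the germination-weighted mean day) can be computed from daily counts. The function name, the 14-day default, and the example of 25 seeds per dish are illustrative assumptions, not details taken from the paper.

```python
def germination_metrics(daily_counts, seeds_sown, days=14):
    """daily_counts[i] = number of seeds germinating on day i+1."""
    gp = 100.0 * sum(daily_counts) / seeds_sown  # germination percentage

    # Timson's index: sum of cumulative daily germination percentages.
    cumulative, ti = 0.0, 0.0
    for n in daily_counts:
        cumulative += 100.0 * n / seeds_sown
        ti += cumulative
    # If counting stopped before the last day, the cumulative GP persists.
    ti += cumulative * (days - len(daily_counts))

    # MGT = sum(n_i * t_i) / sum(n_i), t_i in days from the start.
    total = sum(daily_counts)
    mgt = (sum(n * (i + 1) for i, n in enumerate(daily_counts)) / total
           if total else float("nan"))
    return gp, ti, mgt
```

With 25 seeds all germinating on day 1, TI reaches 100 × 14 = 1400, matching the maximum stated in the text.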
Biomass allocation was analyzed by expressing the biomass of root, stem, and leaf as a fraction of total seedling biomass (Poorter et al., 2012). For example, the root mass fraction (RMF) was obtained as the ratio of the root biomass of a seedling to the total biomass of the same seedling. In the same way, the stem mass fraction (SMF) and leaf mass fraction (LMF) were calculated. Similarly, the root to shoot ratio (RS) was obtained by dividing root biomass by shoot biomass (the sum of leaf and stem biomass).
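A minimal sketch of these allocation ratios, assuming per-seedling dry masses in grams (the function name and inputs are illustrative):

```python
def allocation_fractions(root_g, stem_g, leaf_g):
    """Return (RMF, SMF, LMF, RS) for one seedling's dry biomass."""
    total = root_g + stem_g + leaf_g
    rmf = root_g / total          # root mass fraction
    smf = stem_g / total          # stem mass fraction
    lmf = leaf_g / total          # leaf mass fraction
    rs = root_g / (stem_g + leaf_g)  # root:shoot; shoot = stem + leaf
    return rmf, smf, lmf, rs
```

By construction RMF + SMF + LMF = 1 for each seedling, which is a quick sanity check on harvested data.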
RGR was determined as the slope of the linear regression line obtained by plotting seedling biomass against days after seed sowing (DAS) at each harvest (Perez-Harguindeguy et al., 2013). Total biomass of each seedling was log transformed (ln biomass), and the mean value of ln biomass was calculated for each taxon and each harvest. Then, mean ln biomass was plotted against DAS to obtain the linear regression equation (Figure A6).
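The RGR calculation above amounts to an ordinary least-squares slope of mean ln biomass on DAS. A self-contained sketch (hand-rolled OLS so no external libraries are needed; the harvest days match the paper, the biomass values below are invented for illustration):

```python
def relative_growth_rate(days, mean_ln_biomass):
    """Slope of the OLS regression of mean ln(biomass) on DAS (g g^-1 day^-1)."""
    n = len(days)
    mx = sum(days) / n
    my = sum(mean_ln_biomass) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(days, mean_ln_biomass))
    sxx = sum((x - mx) ** 2 for x in days)
    return sxy / sxx
```

For example, `relative_growth_rate([48, 62, 76, 90], ln_biomass_means)` with the paper's four harvest dates returns the RGR for one taxon.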
Prior to statistical analyses, seed GP was converted to a fraction and then square root and arcsine (degrees) transformed, while fractional values of biomass allocation (RMF, SMF, LMF, and RS) of each seedling were square root and arcsine transformed to improve homoscedasticity (Baskin & Baskin, 2014; Sokal & Rohlf, 1995).
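The variance-stabilizing transform mentioned above, arcsine of the square root of a proportion, reads in one line (returned in degrees, as the text indicates; the function name is an illustrative assumption):

```python
import math

def arcsine_sqrt_deg(fraction):
    """Arcsine square-root transform of a proportion in [0, 1], in degrees."""
    return math.degrees(math.asin(math.sqrt(fraction)))
```

A proportion of 0 maps to 0°, 0.5 to 45°, and 1 to 90°, spreading out values near the boundaries where raw proportions have compressed variance.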
Other values, such as Timson's index, MGT, seedling height, and the number of leaves per seedling, met the assumption of homoscedasticity (Levene's test). We performed one-way analysis of variance (ANOVA) to analyze seed size among the three taxa, with Tukey's post hoc test. An independent-sample t-test was used to compare the means of germination parameters of each species between a 12-h photoperiod and dark, as well as between low- and high-temperature conditions. One-way ANOVA with Tukey's post hoc tests was also used to compare the means of germination parameters under different environmental conditions (light, temperature, and water stress) among taxa. To analyze the effects of water potential (or seed sowing depth in the case of the seedling emergence experiment), species, and their interactions, two-way ANOVA was done. To analyze the effect of depth of seed sowing on emergence within and among taxa, we performed one-way ANOVA with Tukey's post hoc test. One-way ANOVA was also used to analyze the differences in biomass allocation patterns (RMF, SMF, LMF, and RS), plant height, and leaf number between species at each harvest.
Seed mass and seed size
Native Mimosa himalayana had greater seed mass and larger seed size (length and breadth) compared with the invasive congeners M. diplotricha var. diplotricha and M. diplotricha var. inermis (Table 3). The seed mass of the native species was three times greater than that of the invasive species.
Effects of light and temperature
There was no significant difference (p > .05) in seed GP between high (30/20°C) and low (25/15°C) temperatures, or between a 12-h photoperiod and complete dark, for each taxon (Table 4; statistical results not shown). However, across taxa, the native taxon had a lower germination rate than the invasive taxa at both high and low temperatures under the photoperiod condition. In contrast, there was no difference in GP among the three taxa at high temperature in the dark. However, germination of M. diplotricha var. inermis was higher than that of M. himalayana and M. diplotricha var. diplotricha at low temperature in the dark (Table 4).
Timson's index (TI) was highest in M. diplotricha var. inermis, followed by M. diplotricha var. diplotricha and M. himalayana, at both high and low temperatures (Figure 2a). The MGT was longest in M. himalayana, followed by M. diplotricha var. diplotricha and M. diplotricha var. inermis, indicating that the invasive taxa reached maximum germination sooner than the native taxon (Figure 2b). Both TI and MGT were independent of the temperature regimes in M. himalayana and M. diplotricha var. diplotricha, but they differed between high and low temperatures in M. diplotricha var. inermis.
Effect of water stress
A decline in water potential (down to −0.5 MPa) resulted in a significant decrease in the GP and TI of native Mimosa himalayana but not of invasive M. diplotricha var. diplotricha and M. diplotricha var. inermis (Figure 3, Table A2). In particular, water stress down to −0.5 MPa did not affect GP and TI in M. diplotricha, but the effect was significant at −0.75 MPa. The MGT increased with declining water potential in all taxa, but the increase was more pronounced in the native than in the invasive taxa at −0.50 MPa. Only a few seeds of all taxa germinated at −0.75 MPa, and no seeds germinated at −1 MPa.

The distinction in GP between native and invasive taxa was not clear at the control and −0.10 MPa water potentials, but GP was significantly lower in the native taxon at −0.25 and −0.50 MPa (Figure 3a). For TI, the native-invasive difference was significant at −0.10, −0.25, and −0.50 MPa but not at the control (Figure 3b). The MGT was significantly longer in native M. himalayana than in the invasive taxa at all water potential treatments, including the control (Figure 3c). The two-way ANOVA also revealed that GP, TI, and MGT varied significantly with species, water potential, and their interactions. The effects of species and water potential alone were greater than their combined effect on the MGT (Table 5).
Effect of soil depths on seedling emergence
Seeds of all taxa germinated when sown up to 4 cm below the soil surface; seedlings from seeds sown at 6, 8, and 10 cm depth did not emerge. At the termination of the experiment, when the seeds were checked in the pots, we did not find any intact seeds, but we found decayed radicles and seed coats. Seedling emergence did not vary among taxa at each depth (Figure 4).

Two-way ANOVA revealed that seed sowing depth affected seedling emergence, but emergence was not affected by taxon identity or its interaction with sowing depth. There was no significant difference in seedling emergence percentage between surface sowing and 2 cm depth in each taxon, but emergence was significantly lower when seeds were sown at 4 cm depth in all three taxa (Table A3). Variation in cumulative emergence percentage showed that seedlings of the invasive taxa emerged earlier and at a faster rate than those of the native taxon (Figure A7).
Biomass allocation
There was no significant difference in the RMF, SMF, LMF, and root to shoot ratio (RS) of seedlings among the study taxa at the first harvest (48 days after sowing, DAS) (Figure 5a). However, these attributes varied across taxa in seedlings harvested at 90 DAS (Figure 5b). The RMF and LMF were higher in native Mimosa himalayana than in the invasive congener, whereas the SMF was highest in invasive M. diplotricha var. inermis. The RS was higher in the native species than in both varieties of the invasive species. The two varieties of M. diplotricha also differed in SMF but not in RMF, LMF, or RS (Figure 5b).

Biomass allocation patterns changed with seedling harvest date in all three taxa (Table A4). The RMF and RS increased, but the LMF declined, in successive harvests in all taxa. However, variation in the SMF did not show any consistent pattern and was taxon-specific.
Seedling growth and vigor
The RGR of the two varieties of the invasive species was nearly equal and about twice that of the native species (Table 6). As expected, seedling height and the number of leaves per seedling increased with harvest date in all taxa, but the increase was more pronounced in the invasive species than in the native. For example, seedling height at 90 DAS was 45% greater than at 48 DAS in native M. himalayana, but 277% and 290% greater in invasive M. diplotricha var. diplotricha and M. diplotricha var. inermis, respectively (Figure 6, Table A5). Similarly, the number of leaves at 90 DAS was 25% higher than at 48 DAS in the native species, but 160% higher in M. diplotricha var. diplotricha and 117% higher in M. diplotricha var. inermis. Seedling height and the number of leaves did not vary between the two varieties of the invasive species at the different harvest dates (except for leaf number at 48 DAS), but they were always higher in the invasive species than in the native species (Figure 6). Native-invasive differences increased with seedling age. For example, seedling height of the invasive species (mean of the two varieties) was 58% greater than that of the native species at 48 DAS but 332% greater at 90 DAS. Similarly, the number of leaves of the invasive species was 38% higher than that of the native species at 48 DAS but 160% higher at 90 DAS.
Discussion
Invasion scientists are in search of organismal traits that are explicitly associated with high invasiveness and thus can be used for the identification of potentially invasive species during risk assessments. By comparing seed germination and seedling growth traits of congeneric native (with no known introduced population) and invasive Mimosa species, we showed that the invasive species had a higher germination rate (Timson's index), shorter MGT, higher seedling RGR, and greater seedling height than the native species. These results suggest that invasive M. diplotricha shows some of the characteristic features of the "ideal weed" (sensu Baker, 1974), such as the capacity to germinate under a wide range of environmental conditions (e.g., photoperiod and continuous dark, high water stress) and adaptation to long-distance dispersal (as a result of physical dormancy). These results have direct implications for the risk assessment of invasive species, because the predictability of invasive species can be improved when these traits are taken into account during the risk assessment process.
Seed attributes
Seed attributes such as mass and size are often associated with plant life history strategies that promote invasiveness (Grotkopp et al., 2002; Maranon & Grubb, 1993). The results of the present study showed that seeds of the invasive species were smaller than those of the native congener. Species with small seeds are more likely to be invasive, as small seed mass appears to be linked with greater seed production and higher RGR, which are critical for the successful establishment of invasive species (Aarssen & Jordan, 2001; Grotkopp et al., 2002; Maranon & Grubb, 1993; Moodley et al., 2013; Reichard, 2003; Wright & Westoby, 1999). In a comparison between invasive and native species in Indonesia, Rindyastuti et al. (2021) found that the invasive species had lower seed mass compared with the native species. However, a global-level comparison could not find a significant difference in seed mass between native and invasive species, though the former tended to have smaller seeds (Mason et al., 2008). Gioria et al. (2021)
Environmental effects on seed germination
It is generally anticipated that a high seed germination rate confers high invasiveness (Baker, 1974; Goergen & Daehler, 2001; Schlaepfer et al., 2010). In the present study, the invasive species tended to have higher GP than the native species, but the pattern was inconsistent because the native-invasive distinction disappeared in some treatments (e.g., high temperature under continuous dark conditions).
However, in a comparison between native (12 spp.) and co-occurring naturalized species (12 spp.) of coastal sage scrub (California), Wainwright and Cleland (2013) showed that the naturalized species had higher seed GP than the native species. Higher seed GP has also been reported in invasive Ardisia elliptica than in the sympatric native A. obovata (Munoz & Ackerman, 2011). In the present study, the invasive species also had a significantly higher germination rate (Timson's index) and a shorter time to maximum germination than the native species at both low and high temperatures. Such early germination may give the invasive species an advantage over coexisting native species in survival and growth in competitive environments through space preemption and greater access to resources, which could increase the likelihood of successful establishment (Byun, 2023; Guido et al., 2017).
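Both indices discussed here have standard formulations: Timson's index is the sum of cumulative germination percentages recorded at each scoring interval, and MGT is the count-weighted mean time to germination. A minimal sketch in Python (the daily counts and seed number are hypothetical illustrations, not the study's data):

```python
def timson_index(cum_percent):
    """Timson's germination index: sum of cumulative germination
    percentages recorded at each (e.g., daily) scoring interval.
    Higher values indicate faster, more complete germination."""
    return sum(cum_percent)

def mean_germination_time(counts, days):
    """MGT = sum(n_i * t_i) / sum(n_i), where n_i is the number of
    seeds newly germinated (not cumulative) on day t_i."""
    total = sum(counts)
    return sum(n * t for n, t in zip(counts, days)) / total

# Hypothetical daily counts of newly germinated seeds (days 1-5, 25 seeds sown)
days = [1, 2, 3, 4, 5]
counts = [2, 8, 6, 3, 1]
cumulative, germinated = [], 0
for n in counts:
    germinated += n
    cumulative.append(100 * germinated / 25)

print(timson_index(cumulative))                 # 268.0
print(mean_germination_time(counts, days))      # 2.65 days
```

A shorter MGT and a higher Timson's index together describe the earlier, faster germination attributed to the invasive species above.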
The results of the present study revealed that seed germination of both the native and the invasive taxa was independent of light. Such seeds can therefore germinate whether buried in the soil or exposed at the surface, as long as moisture and temperature conditions are favorable (Chauhan & Johnson, 2008). Such light indifference in germination has been reported as a common phenomenon among most members of Fabaceae (Baskin et al., 1998; Baskin & Baskin, 2014; Chauhan & Johnson, 2008, 2009; Silveira & Fernandes, 2006), and in the present study it can be attributed to the relatively high seed mass of both the native and invasive taxa (Milberg et al., 2000; Pearson et al., 2002).
Water stress often has differential effects on the germination of native and invasive species (Cervera & Parra-Tabla, 2009; Perez-Fernandez et al., 2000). In the present study, the differences in GP, Timson's index, and MGT between native and invasive taxa were small but statistically significant. The higher GP and germination rate of both varieties of the invasive species relative to the native one, with larger differences at lower water potential, suggest that the invasive species has a higher tolerance to water stress than the native one. This ability to germinate better than the native species under water stress could give the invasive species a weedy advantage through earlier seedling emergence (during the pre-monsoon period in the study area) (Byun, 2023; Chauhan & Johnson, 2008).
Seedling emergence declined with increasing depth of seed sowing, independent of species, a pattern reported by several previous studies (e.g., Chauhan & Johnson, 2008, 2009; Hao et al., 2017).
The lack of germination of seeds sown below 4 cm could be attributed to the exhaustion of seed reserves before emergence (Tamado et al., 2002). However, our results differed from those of Chauhan and Johnson (2008), who reported seedling emergence of Mimosa diplotricha from seeds sown at up to 8 cm depth. This difference might be the result of differences in the substrate composition used for the seedling emergence experiments. Recovery of seed coat and radicle fragments of the deep-sown seeds might be the result of seed scarification prior to the emergence experiment, as scarification might have initiated the germination process in the deeply buried seeds even though the seedlings failed to emerge (Chauhan & Johnson, 2008, 2009).
Biomass allocation
The native-invasive distinction among the study taxa was blurred in LMF but clear in RMF, SMF, and RS. The lower RMF and RS, as well as the higher SMF, of invasive Mimosa diplotricha can be attributed to the heliophytic nature of this species (Uyi, 2020), which requires greater biomass allocation to the stem to make plants taller and more competitive (Delagrange et al., 2004). As both varieties of the invasive M. diplotricha are creeping plants, they achieve additional height growth by elongating stems through additional resource allocation (Poorter et al., 2012). Both varieties of M. diplotricha showed similar growth performance; a similar growth pattern was also reported by Wang et al. (2019) when these two varieties were grown under ambient environmental conditions. Invasive plants often exhibit greater biomass allocation to the shoot (lower RS) during early stages than native species, which may increase carbon assimilation efficiency and thereby reduce constraints on the establishment of invasive species in a community (Daehler, 2003; Grotkopp et al., 2002; Rejmánek & Richardson, 1996; Van Kleunen et al., 2010). Additionally, greater allocation to the shoot relative to the root may confer high competitiveness over slow-growing native species in their habitats, with long-term ecological consequences.
Seedling growth vigor
The higher RGR and greater height of invasive Mimosa diplotricha than of native M. himalayana suggest a close association between these traits and invasiveness. Higher RGR of invasive species have
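RGR in native-invasive comparisons is conventionally the classical relative growth rate computed from log-transformed mass at two harvests. A minimal sketch (the dry masses are hypothetical, though the 48 and 90 DAS harvest times mirror those reported here):

```python
import math

def rgr(mass_t1, mass_t2, t1, t2):
    """Classical relative growth rate:
    RGR = (ln W2 - ln W1) / (t2 - t1), in g g^-1 day^-1."""
    return (math.log(mass_t2) - math.log(mass_t1)) / (t2 - t1)

# Hypothetical seedling dry masses (g) at 48 and 90 days after sowing
print(round(rgr(0.12, 1.05, 48, 90), 4))   # -> 0.0516 g per g per day
```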
TABLE A3: Effect of seed sowing depth on seedling emergence (mean ± SE).
photoperiod (12/12 h, light/dark), and continuous dark was maintained by wrapping the Petri dishes with a double layer of aluminum foil. b. Low and high temperatures: Seeds were allowed to germinate at low (25/15°C, day/night) and high (30/20°C) temperatures. At both temperatures, seeds were exposed to either a 12-h photoperiod or continuous dark as mentioned above. c. Water stress: Water stress was induced by germinating seeds in PEG solutions of different water potentials (−0.1, −0.25, −0.5, −0.75, and −1 MPa). The stock PEG solution (−1 MPa) was prepared by dissolving 296 g of PEG in 1 L of distilled water, which was further diluted to prepare solutions of other water potentials (Michel & Kaufmann, 1973). Distilled water (0 MPa) was used as a control. The capacity of the study species to germinate under water stress provides valuable information on their reproductive phenology, because seed dispersal of both species occurred at the onset of the dry season, which lasts for 5-8 months (before the monsoon begins in June).
FIGURE: Variation of (a) Timson's index and (b) mean germination time (MGT) across species and between temperature treatments. Different letters (A-C and X-Z) above bars indicate significant differences among taxa at high and low temperatures, respectively (ANOVA); *** indicates a significant difference (p < .001, independent-sample t-test) and ns indicates no difference between high and low temperature within each taxon.

TABLE 5: Results of two-way ANOVA showing the effect of different variables on germination percentage (GP), Timson's index (TI), mean germination time (MGT), and emergence percentage.

FIGURE: Effect of depth on seedling emergence. There was no difference in seedling emergence among taxa (p > .05, ANOVA).

FIGURE 5: Root mass fraction (RMF), stem mass fraction (SMF), leaf mass fraction (LMF), and root:shoot ratio (RS) among taxa at (a) 48 DAS and (b) 90 DAS. Different letters above bars represent significant differences among taxa (p < .05, ANOVA); ns, not significant.

TABLE 6: Relative growth rate of Mimosa species.

FIGURE: Change in (a) seedling height and (b) leaf number of seedlings in successive harvests. Different letters (A-C) above the lines represent significant differences among taxa at each harvest (p < .05, ANOVA).

TABLE: Seed mass and size of Mimosa species (mean ± SD). Note: Different superscript letters (a, b) within each column represent a significant difference between means (ANOVA, p ≤ .05); there was no significant difference between high and low temperatures, or between photoperiod and dark conditions, within each taxon (independent-sample t-test).

TABLE: Number of seedlings of Mimosa species at different harvest times.

TABLE: Effect of water potentials on germination parameters (mean ± SE).

TABLE: Variation of leaf number and seedling height (mean ± SE) of the study taxa at different harvest times (DAS, days after sowing). Note: Different letters a-b, p-q, and x-y represent significant differences within Mimosa himalayana, Mimosa diplotricha var. diplotricha, and Mimosa diplotricha var. inermis, respectively, at different harvests (p ≤ .05, ANOVA).

Statistical Package for Social Science (SPSS, ver. 25) was used for all statistical analyses (IBM Corp., 2017).
Early and Long-term Consequences of Nutritional Stunting: From Childhood to Adulthood
Linear growth failure (stunting) in childhood is the most prevalent form of undernutrition globally. The debate continues as to whether children who become stunted before age 24 months can catch up in growth and cognitive function later in their lives. The potentially irreparable physical and neurocognitive damage that accompanies stunted growth is a major obstacle to human development. This review aims to evaluate and summarize the published research covering the different aspects of stunting from childhood to adulthood. (www.actabiomedica.it)
Background
Stunting is a process that can affect the development of a child from the early stages of conception until the third or fourth year of life, when the nutrition of the mother and the child are essential determinants of growth. Stunting is defined as a length/height-for-age below minus two standard deviations (moderate) or minus three standard deviations (severe) from the median of the 2006 WHO Child Growth Standards for the same age and sex (1). Wasting, sometimes called low weight-for-height, is defined as being too thin for one's height. While wasting is the result of acute, significant food shortage and/or disease, stunting represents chronic malnutrition, and its effects are largely irreversible. Underweight, or low weight-for-age, includes children under 5 with low weight-for-height (wasting) and low height-for-age (stunting) and is considered a proxy indicator for undernutrition when data on wasting are not available (2, 3) (Figure 1).
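The WHO cut-offs above amount to thresholds on a height-for-age z-score (HAZ). A minimal sketch in Python (the reference median and SD below are illustrative placeholders, not actual WHO growth-standard values):

```python
def haz(height_cm, median_cm, sd_cm):
    """Height-for-age z-score against a growth-standard median and SD
    for the child's age and sex."""
    return (height_cm - median_cm) / sd_cm

def classify_stunting(z):
    """WHO cut-offs: HAZ < -3 severe, -3 <= HAZ < -2 moderate."""
    if z < -3:
        return "severe stunting"
    if z < -2:
        return "moderate stunting"
    return "not stunted"

# Illustrative reference values for one age/sex group (not WHO data)
median, sd = 87.1, 3.2
print(classify_stunting(haz(79.0, median, sd)))   # moderate stunting
```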
Wasting and stunting are often presented as two separate forms of malnutrition requiring different interventions for prevention and/or treatment. These two forms of malnutrition, however, are closely related and often occur together in the same populations and often in the same children, and there is evidence suggesting that they share many causal factors.
The prevalence of stunting (height-for-age below −2 SD) and thinness (BMI-for-age below −2 SD) was estimated globally using data from the Global School-Based Student Health and Health Behavior in School-Aged Children surveys conducted in 57 low- and middle-income countries between 2003 and 2013, involving 129,276 adolescents aged 12-15 years. Globally, the prevalence of stunting was 10.2% and of thinness 5.5% (4).
Nutritional stunting is caused by insufficient maternal nutrition, intrauterine undernutrition, lack of breastfeeding until 6 months of age, later introduction of complementary feeding, inadequate (quantity and quality) complementary feeding, and impaired absorption of nutrients owing to infectious diseases (5,6). Stunting has long-term effects on individuals and societies, including poor cognition and educational performance, low adult wages, lost productivity and, when accompanied by excessive weight gain later in childhood, an increased risk of nutrition-related chronic diseases in adult life (7).
This review aims to evaluate and summarize the published research covering the different aspects of stunting and its early and long-term consequences.
The early and long-term consequences of nutritional stunting
The consequences of child stunting are both immediate and long term. They include increased morbidity and mortality, poor child development and learning capacity, increased risk of infections and non-communicable diseases, increased susceptibility to fat accumulation, mostly in the central region of the body, lower fat oxidation, lower energy expenditure, insulin resistance and a higher risk of developing diabetes, hypertension, dyslipidaemia, lowered working capacity, and unfavourable maternal reproductive outcomes in adulthood. Furthermore, stunted children who experience rapid weight gain after 2 years of age have an increased risk of becoming overweight or obese later in life (7,8).
Growth and windows of vulnerability
A critical window (sensitive period) represents a period during development when an organism's phenotype is responsive to intrinsic or extrinsic (environmental) factors. The intrauterine and early post-natal months are well known to be particularly critical for future health and brain development (9,10). Optimal maternal nutrition is an essential component for fetal and infant development, closely linked with maternal supply of essential nutrients, including vitamins and minerals. Furthermore, maternal anaemia, tobacco use, and indoor air pollution can restrict fetal growth and result in low birth weight.
During the first 2 years after birth, nutritional requirements to support rapid growth and development are very high, and thus adverse factors have a greater potential for causing growth retardation in early life. Frequent infections during the first 2 years of life also contribute to the high risk of becoming stunted during this period. Catch-up growth is possible in children older than 2 years, although in many low- and middle-income countries (LMICs) stunting is often well established by this age (11).
Adolescence is another critical window for growth and nutrition. Although there are ~1.8 billion adolescents worldwide, the majority clustered in low-income and middle-income countries, there is a gap in adolescent health data (12). Global estimates of stunting among adolescents vary from 52% in Guatemala and 44% in Bangladesh to 8% in Kenya and 6% in Brazil (13). The adolescent growth spurt (AGS) was studied in rural Indian boys, aged 5 years and over, with known childhood nutritional stunting. They entered puberty late, with significantly depressed intensity, but gained a similar amount of height as a result of a prolonged AGS that continued until 19.2 years. Thus, a childhood background of undernutrition did not lead to any additional height deficit during puberty; however, pre-pubertal height deficits were carried into adult height.
The potential role of the mTORC1 pathway in the pathogenesis of child stunting
The signals that control weight and food intake are complex and appear to involve multiple pathways, with central control in the hypothalamus, particularly the medial central area, and peripheral cellular control through the Mechanistic Target of Rapamycin Complex 1 (mTORC1). The hypothalamic and mTOR system responses to food deprivation provide a reversible experiment of nature that gives insight into the interactions between nutritional status, psychosocial stress (including poverty, maternal deprivation and abuse), the endocrine system, linear growth, and skeletal growth (15).
A dietary pattern of poor-quality protein associated with stunting leaves stunted children with significantly lower circulating essential amino acids than non-stunted children. These deficient intakes of essential amino acids can adversely affect growth through their effect on the master growth regulation pathway, the mechanistic target of rapamycin complex 1 (mTORC1) pathway, which is exquisitely sensitive to amino acid availability. mTORC1 integrates cues such as nutrients (mainly proteins and amino acids), growth factors, oxygen, and energy to regulate growth in the chondral plate, skeletal muscle growth, myelination of the central and peripheral nervous system, cellular growth and differentiation in the small intestine, hematopoiesis and iron metabolism, and organ size through the Hippo pathway. These organs are relevant to child stunting and its associated morbidities such as anemia, impaired cognition, environmental enteric dysfunction, and impaired immunity against infectious diseases (16).
When amino acids are deficient, mTORC1 represses protein and lipid synthesis and cell and organismal growth. At low amino acid concentrations, mTORC1 is diffusely distributed in the cytosol and becomes inactive (17). Autophagy, an adaptation to nutrient starvation, is a process by which damaged or redundant proteins and other cell components are delivered to the lysosome and degraded, releasing free amino acids into the cytoplasm. Proteins thus provide a reservoir of amino acids that are mobilized through autophagy when amino acids are scarce. In addition, other signals, such as growth factors and energy, cannot overcome the lack of amino acids to activate mTORC1 (18). Figure 2 illustrates the complex role of mTORC1 in the pathogenesis of child stunting.
CNS growth and cognitive functions in stunted children
The developing brain is particularly vulnerable to nutrient insufficiency between 24 and 42 weeks of gestation because of the rapid course of several neurologic processes, including synapse formation and myelination (19). In healthy infants, there is well-documented rapid brain growth in the first 2 years; this early period is also critical for long-term neurodevelopment (9,10).
Cognitive functions, receptive and expressive language, and socioemotional skills develop at different ages. Development in brain structure and function supporting acquisition of cognitive, language, and socioemotional skills is most rapid during early childhood, with continued development in later years for many skills. Undernutrition affects areas of the brain involved in cognition, memory, and locomotor skills. The brain has major energy demands in early childhood and most cerebral growth occurs in the first 2 years of life. However, the associations between poor linear growth and impaired neurodevelopment are not well understood (20).
Nutritional stunting is associated with both structural and functional pathology of the brain and a wide range of cognitive deficits. In the CNS, chronic malnutrition can lead to tissue damage, disorderly differentiation, reduction in synapses and synaptic neurotransmitters, delayed myelination and reduced overall development of dendritic arborization of the developing brain. There are deviations in the temporal sequences of brain maturation, which in turn disturb the formation of neuronal circuits. Long term alterations in brain function have been reported which could be related to long lasting cognitive impairments associated with malnutrition (21)(22)(23)(24).
With chronic malnutrition, cognitive delays can occur throughout infancy, childhood, and adolescence. Measurable differences in receptive language by socioeconomic group are apparent in preschool children aged three to five years; differences in cognitive ability have been observed even in the first two years (25).
Stunted children have impaired behavioral development in early life, are less likely to enroll at school, enroll late, and tend to achieve lower grades. They have poorer cognitive ability than non-stunted children. Furthermore, stunted children are more apathetic, display less exploratory behavior, and show altered physiological arousal. Malnourished children performed poorly on tests of attention, working memory, learning and memory, and visuospatial ability, except on the test of motor speed and coordination. Age-related improvement was not observed on tests of design fluency, working memory, visual construction, or learning and memory in malnourished children.
However, age-related improvement was observed on tests of attention, visual perception, and verbal comprehension in malnourished children, even though performance remained deficient compared with that of adequately nourished children (26)(27)(28)(29).
Stunted children followed longitudinally in Jamaica were found to have more anxiety and depression and lower self-esteem than non-stunted children at age 17, after adjusting for age, gender, and social background variables (29).
In addition, the brain may be susceptible to poor nutrition during its ongoing remodelling and while recovering from various forms of damage. At approximately age 10, a child's brain represents 5-10% of body mass, consumes twice the glucose and 1.5 times the oxygen per gram of tissue compared with an adult's brain, and accounts for up to 50% of the total basal metabolic rate of the body (9,10,19). Thus, nutritional deprivation during adolescence may have an undesirable impact on brain function (30).
Growth failure and hormonal implications
The prevalence of stunting, defined as height-for-age less than the 5th percentile of the NCHS/WHO 1995 reference data, ranges from 26% to 65% (31).
Growth failure in the first 2 years of life is associated with reduced stature in adulthood (32,33). The magnitude of growth deficits is considerable. Coly et al. (32) found that the age-adjusted height deficit between stunted and non-stunted children was 6.6 cm for women and 9 cm for men. Growth restriction in early life is linked not only to short adult height but also to certain metabolic disorders and chronic diseases in adulthood. The consequences of stunting in adolescence include greater risk of obstetric complications, including obstructed labour in females, and diminished physical capacity among adolescents of both sexes (31,34).
Chronic malnutrition in stunted children is associated with diminished levels of insulin-like growth factor 1 (IGF-1) synthesis. Even a transient 50% reduction in calorie or 33% reduction in protein availability can result in a reversible reduction in IGF-1 concentrations. The reduced levels of IGF-1 lead to a secondary increase in growth hormone (GH) through the negative feedback of low IGF-1 level on pituitary synthesis of GH. The end metabolic result is diversion of substrate away from growth toward metabolic homeostasis. The well-known metabolic effects of growth hormone, which are not IGF-1 dependent, would clearly be adaptive in the face of reduced substrate intake. These include increased lipolysis and mobilization of free fatty acids from adipose tissue stores and inhibition of glucose uptake by muscle tissue (35)(36)(37) (Figure 3).
Effects of protein and amino-acid supplementation on the physical growth of young children in low-income countries
High-quality proteins (e.g., milk) in complementary, supplementary, and rehabilitation food products have been found to be effective for good growth. Individual amino acids such as lysine and arginine have been linked to growth hormone release in young children via the somatotropic axis, and high intakes are inversely associated with fat mass index in pre-pubertal lean girls. Protein intake in early life is positively associated with height and weight at 10 y of age (38). The results of 18 intervention trials of supplementary protein or amino acids in children aged 6-35 months were reviewed for growth outcomes. Eight studies conducted in hospitalized children recovering from acute malnutrition found that the recommended protein intake levels for healthy children supported normal growth rates, but higher intakes were needed for accelerated rates of "catch-up" growth. Micronutrient supplementation, or lipid-based supplements with micronutrients, has little to no effect on stunting (39).
Glucose homeostasis, insulin secretion and insulin resistance related to nutritional stunting
Data from the Maternal and Child Undernutrition Study Group indicate that lower birth weight (which is strongly correlated with birth length) and undernutrition in childhood are risk factors for high glucose concentrations, high blood pressure and harmful lipid profiles in adulthood after adjusting for adult height and body mass index (BMI) (40).
Glucose homeostasis is of paramount importance in the successful adaptation to the chronic malnutrition leading to stunting. Until the brain and other obligate glucose users can adapt to ketones as a fuel source, adequate blood glucose levels must be maintained. Increased gluconeogenesis, stimulated in part by cortisol, plays a role. Diminished glucose uptake by tissues is also important. In this regard, decreased insulin secretion and/or increased insulin resistance have been reported during chronic malnutrition. High cortisol and growth hormone levels antagonize insulin and prevent hypoglycemia during malnutrition (41,42) (Figure 4).
Deleterious changes have been reported in the metabolism of glucose in children suffering from undernutrition in infancy. One study that examined the effects of undernutrition in the first year of life on glucose tolerance and plasma insulin found that early undernutrition in the extrauterine period, independent of the birth weight, was associated with hyperinsulinemia and a reduced sensitivity to insulin, which worsened as BMI increased in adult life (43).
Martins et al. (44) examined these hormonal changes in adolescence and observed that stunted boys and girls had plasma insulin levels that were significantly lower, together with a lower homeostasis model assessment-B (HOMA-B), which evaluates pancreatic β-cell function, than those of a non-stunted group. At the same time, their values for HOMA-S (an evaluation of insulin sensitivity) were significantly greater at this age. The increase in insulin sensitivity might be due to a higher number of peripheral insulin receptors, especially in adipose and muscle tissue, which may establish a counter-regulatory mechanism to compensate for the low levels of insulin (44).
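The HOMA indices referred to here are conventionally derived from fasting insulin and glucose using the original homeostasis-model formulas of Matthews et al. (1985); a minimal sketch (the fasting values are hypothetical, not Martins et al.'s data):

```python
def homa_b(insulin_uU_ml, glucose_mmol_l):
    """HOMA-B (beta-cell function, %): 20 * insulin / (glucose - 3.5),
    with fasting insulin in uU/mL and glucose in mmol/L."""
    return 20 * insulin_uU_ml / (glucose_mmol_l - 3.5)

def homa_ir(insulin_uU_ml, glucose_mmol_l):
    """HOMA-IR (insulin resistance): insulin * glucose / 22.5."""
    return insulin_uU_ml * glucose_mmol_l / 22.5

def homa_s(insulin_uU_ml, glucose_mmol_l):
    """HOMA-S (insulin sensitivity, %): reciprocal of HOMA-IR."""
    return 100 / homa_ir(insulin_uU_ml, glucose_mmol_l)

# Hypothetical fasting values: low insulin -> low HOMA-B, high HOMA-S,
# the pattern reported for stunted adolescents above
print(round(homa_b(5.0, 4.8), 1))   # beta-cell function
print(round(homa_s(5.0, 4.8), 1))   # insulin sensitivity
```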
A study carried out by Fekadu et al. (45) in adult diabetic individuals found a significant association between diabetes and a history of undernutrition and a lack of a clean water supply during childhood, emphasising the importance of adequate post-natal development for the maintenance of health in the long-term.
Apparently, the lower beta cell function is due to a lower beta cell number as a result of malnutrition. Furthermore, this could be a consequence of the increased concentration of glucocorticoids that occurs in undernutrition, as normal levels of glucocorticoids are necessary to ensure the development and maintenance of normal pancreatic architecture, as well as the expansion of the beta cell mass during critical periods of development (46). Based on the results described above it appears plausible that in adolescence a decrease in the function of the pancreatic cells is compensated by an increase in sensitivity to insulin. However, as the amount of abdominal fat and age increase, this condition begins to place more intense demands on undernourished individuals, causing an increase in pancreatic activity, which accelerates the exhaustion of the organ and the onset of diabetes (47,48).
While it is unclear whether stunting may be a risk factor for obesity per se, rapid weight gain, particularly after the age of 2-3 years among individuals born small at birth, is thought to lead to a particularly high risk of chronic disease in later life (49).
Endocrinopathies associated with nutritional stunting
The endocrinopathies associated with nutritional stunting involve multiple systems and mechanisms designed to preserve energy and protect essential organs. The changes in neuropeptides and in the hypothalamic axis that mediate these changes also receive input from neuroendocrine signals sensitive to satiety and food intake and in turn may be prepared to provide significant energy conservation (50).
Many studies have reported high cortisol levels in malnourished children. Increased cortisol levels during malnutrition represent an attempt of the organism to adapt to decreased dietary protein and/or energy supply through breakdown of muscle protein to provide the liver with the amino acids necessary for gluconeogenesis and albumin synthesis. This process protects the organism from hypoglycemia and hypoalbuminemia, respectively. The significant correlations between the percent weight deficit and muscle diameter on the one hand and serum cortisol levels on the other suggest a causal relationship between the increase in serum cortisol and the degree of muscle wasting (51)(52)(53). After 4 to 8 weeks of nutritional rehabilitation, circulating cortisol levels decreased markedly to become indistinguishable from those of normal children.
Low leptin secretion during chronic malnutrition appears to be an important signal in the process of metabolic/endocrine adaptation to prolonged nutritional deprivation. Low leptin levels reduce leptin's inhibition of NPY, which affects the regulation of the pituitary-growth and pituitary-adrenal axes (42,54). Stimulation of the hypothalamic-pituitary-adrenal (HPA) axis, and possibly the hypothalamic-pituitary-GH axis, to maintain high cortisol and GH levels appears necessary for effective gluconeogenesis and lipolysis to ensure a fuel (glucose and fatty acid) supply for the metabolism of brain and peripheral tissues during nutritional deprivation (42,54).
Most studies of thyroid hormone adaptation to malnutrition have shown low to normal total T4 and free T4 (FT4) levels. In contrast, total and free T3, the physiologically active forms of the hormone, are reduced, with a concomitant increase in rT3, which is metabolically inactive. A reduction in active thyroid hormone levels can decrease thermogenesis and oxygen consumption, leading to energy conservation when energy-producing substrate is scarce, an important adaptive response to malnutrition (55)(56)(57).
In summary, the decreased synthesis of IGF-1 and the low level of insulin and/or its diminished effect due to an insulin-resistant status in the presence of high circulating GH and cortisol levels ensure substrate diversion away from growth toward metabolic homeostasis (Figure 4).
Hypertension
A high prevalence of arterial hypertension has been found in children, adolescents, and adults with nutritional stunting.
Epidemiological studies indicate that there is a correlation between low birth weight (LBW, defined as a birth weight of a live-born infant of < 2,500 g) and hypertension in adulthood. It has been estimated that 8-26% of all births worldwide are LBW, with a higher prevalence in developing countries than in developed countries (58).
The pathological mechanisms that link LBW and hypertension are multifactorial and include a reduction in nephron number (renal mass) associated with retarded fetal growth, genetic factors, sympathetic nervous hyperactivity, endothelial dysfunction, elastin deficiency, insulin resistance, high plasma glucocorticoid concentrations, and activation of the renin-angiotensin system (59).
Franco et al. (60) reported changes in the sympatho-adrenal and renin-angiotensin systems in children small for their gestational age (SGA). They investigated the plasma levels of ACE (angiotensin-converting enzyme), angiotensin and catecholamines in 8 to 13-year-old children to determine correlations between the plasma levels and both birth weight and blood pressure (BP). Circulating noradrenaline levels were significantly elevated in SGA girls compared to girls born with a weight appropriate for their gestational age. In addition, angiotensin II (AngII) and ACE activity were higher in SGA boys. There was a significant association between the circulating levels of both angiotensin II and ACE activity and systolic BP (SBP). These findings support the link between low birth weight and overactivity of both sympatho-adrenal and renin-angiotensin systems into later childhood (60).
It has been suggested that not only intrauterine undernutrition but also its occurrence during childhood may influence the incidence of hypertension in adulthood and its persistence (61).
In conclusion, these data reinforce the important association between undernutrition and hypertension from infancy through childhood and adulthood and emphasize the need for monitoring BP in undernourished children. These alterations are amplified with time, depending on the quality of the diet and on environmental factors. Physicians and other health care professionals practicing in developing countries and in large urban centers with low-income populations should be aware of the association between early-life undernutrition and hypertension, so as to allow timely detection and treatment of hypertension and to keep monitoring these individuals throughout their lives.
Stunting and future risk of obesity
There is a fair amount of epidemiological evidence showing that nutritional stunting increases the risk of obesity. Obesity is increasing dramatically not only in developed countries but also in developing countries, such as Brazil, especially among the poor. In addition, an increasing number of studies have shown that nutritional stunting causes a series of important long-lasting changes, such as lower energy expenditure, higher susceptibility to the effects of high-fat diets, lower fat oxidation, and impaired regulation of food intake. A study from Brazil showed a high prevalence of undernutrition (low weight-for-age and/or low height-for-age) in children (30%), with a shift towards overweight and obesity (high weight-for-height and BMI) among adolescents (21% in girls and 8.8% in boys) and adults (14.6%). In addition, stunting was associated with overweight in children of four nations that are undergoing the nutrition transition (63)(64)(65)(66)(67).
Early nutrition and later physical work capacity
Malnutrition in early childhood continuing into adolescence can adversely affect work capacity by limiting body weight. Stunting has important economic consequences for both sexes. A low BMI is related to a greater number of absences from work and to lower productivity (68,69). A BMI of 17 kg/m² appears to be a critical threshold for work capacity: below this value, productivity is negatively impacted, and work capacity may be reduced even before this level is reached. In any case, work requiring the use of body mass, such as carrying loads, digging or shoveling earth or coal, pulling or cycling a rickshaw, or stone splitting, imposes a greater stress on people of low BMI (69).
These studies, combined with other measures such as cognitive functioning and reproductive performance, provide strong evidence in support of policies and programs aimed at eliminating the causes of environmental stunting in poor populations (70,71).
Conclusions
The heightened risks of potentially irreversible loss of growth and cognitive function, and the increased morbidity and mortality associated with stunting, demand further work on the etiology, prevention, and early treatment of children with stunting. Further clinical studies are needed that ensure supplementation with adequate high-quality protein in addition to sufficient energy, so that protein can be utilized for growth rather than being diverted to meet maintenance energy needs. The intervention must be provided for a sufficient time to assess linear growth. Treatment of stunted children should be regarded as a public health priority. Based on available evidence, policy makers and program planners should consider intensifying efforts to prevent stunting and promote catch-up growth over the first few years of life as a way of improving children's physical and intellectual development. Effective nutrition supplementation and follow-up can be achieved by identifying children with low weight-for-age Z-scores (WAZ) and height-for-age Z-scores (HAZ) in pediatric primary health care clinics and enrolling them in a national nutritional program operating at clinic sites and in the community.
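The WAZ/HAZ screening mentioned above compares a child's measurement against a reference distribution. A rough sketch follows; the reference median and SD are made-up illustration values, and real WHO growth standards use the more elaborate LMS method rather than this simple formula:

```python
def z_score(observed: float, ref_median: float, ref_sd: float) -> float:
    """Simplified anthropometric Z-score: (observed - reference median) / reference SD."""
    return (observed - ref_median) / ref_sd

# Hypothetical example: a child's height of 78 cm against a reference
# median of 86 cm with SD 3.2 cm (illustrative numbers, not WHO tables).
haz = z_score(78.0, 86.0, 3.2)   # -2.5
stunted = haz < -2.0             # the conventional cutoff for stunting is HAZ < -2
```

In practice, WAZ and HAZ are read from age- and sex-specific reference tables rather than computed from a single median/SD pair.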
Conflicts of interest:
Each author declares that he or she has no commercial associations (e.g., consultancies, stock ownership, equity interest, patent/licensing arrangements, etc.) that might pose a conflict of interest in connection with the submitted article.
"year": 2021,
"sha1": "f8cb0edfc3587cc18e8c19b594eec6afa104dbe3",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3c9d49aecc0d2d8a60dbd4f72b9d68bdb258f8b1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Attentional, Visual, and Emotional Mechanisms of Face Processing Proficiency in Williams Syndrome
Attentional processing activates brain regions similar to those involved in face processing (Pessoa, 2009). Pessoa (2009) suggested that the attentional network involves fronto-parietal regions, including the middle frontal gyrus, anterior cingulate cortex (ACC), inferior frontal gyrus, and anterior insula, which are also involved in the processing of faces and emotional information. These overlapping brain networks for face and attentional processing suggest a role for attention in face processing. This explanation fits with electrophysiological reports of an atypically larger N200 in WS, which was found to be correlated with performance on the Benton test (Mills et al., 2000). A larger N200 reflects increased attention to faces. In addition, the WS group showed disproportionate increases in volume and gray matter density in some attention-related brain regions (ACC, anterior insula) as well as other regions, including the amygdala, orbital prefrontal cortices, and superior temporal gyrus, which are known to participate in emotion and face processing. The larger volume and gray matter density of the attentional networks may help WS individuals attend to faces and may underlie the islands of preserved face processing skills in WS. This explanation fits with the findings of Golarai et al. (2010), who showed that WS individuals are efficient in face recognition. This neuroanatomical link between attention and face processing was not made explicit by Golarai et al. (2010), but it has important implications for the role of attentional mechanisms serving face processing in WS individuals.
Alternatively, enhancement of face processing in WS might be aided by the visual input received by primary visual cortex. For example, Galaburda and Bellugi (2000) reported that the average size of cortical neurons is greater in the primary visual cortex of WS individuals than in control brains, coupled with normal cell packing density, which could result in enhanced visual input for face processing in WS brains.
Williams syndrome (WS) is a disorder of neural development caused by a microdeletion of about 26 genes from the long arm of chromosome 7 (7q11.23) (Golarai et al., 2010). Individuals with WS were found to be as good as typically developing (TD) healthy participants on the Benton recognition test, which is used to measure face recognition proficiency (Golarai et al., 2010). Golarai et al. (2010) used functional magnetic resonance imaging to investigate the neural substrates underlying face-identity recognition in WS. They compared group differences between WS and TD adults in absolute fusiform face area (FFA) size, FFA size relative to the anatomical size of the fusiform gyrus, and response amplitudes to faces and objects.
Volumes of the FFA were larger in absolute terms in WS than in TD participants in both hemispheres. Golarai et al. (2010) raised the possibility of explaining face recognition proficiency in WS in terms of larger FFA volume, a developmental perspective on FFA volume, and genetic factors underlying the FFA's larger volume in WS. Golarai et al. (2010) and many other studies focus on the FFA, but its functional role in face recognition is far from clear. The role of the FFA in face-identity (and emotion) recognition must be viewed against the background of its connections to a number of other areas involved in attention and emotion processing. Golarai et al. (2010) did not discuss a possible role of attention or attentional networks in the processing of faces in WS individuals. This enhancement in face processing with a bigger FFA may result in better processing of faces but not of objects.
Increased activation in the amygdala was observed for the processing of happy faces compared to fearful faces in WS individuals. Higher activation for happy faces and lower activation for fearful faces in WS individuals raise the possibility that WS individuals may see a fearful face as a happy/positive or approachable face, as they cannot inhibit an approach response to fearful/negative faces. This explanation is consistent with the finding that WS individuals show less activation in the amygdala during the processing of fearful/threatening faces as compared to fearful/threatening scenes (Meyer-Lindenberg et al., 2005). The medial prefrontal cortex (MPFC) and orbitofrontal cortex (OFC) are densely interconnected with the amygdala and dorsolateral prefrontal cortex (DLPFC) and have been implicated in the regulation of amygdala function, social cognition, and the representation of social knowledge. Meyer-Lindenberg et al. (2005) found that WS individuals showed no activation in the OFC during fearful/threatening face processing. In addition, no functional connection of the OFC with the amygdala or DLPFC was observed in WS, which provides evidence of social disinhibition and impairments in adjusting behavior according to social cues in WS individuals. The MPFC has been associated with empathy, the representation of social knowledge, and the integration of emotional information about others and the self. The MPFC was found to be persistently activated in WS individuals during fearful/threatening face processing, which maps well onto phenotypic characteristics of relative social strengths in WS, such as increased empathy.
To understand the complete picture of face processing proficiency in WS, it is also important to consider the above-mentioned attentional, visual, and emotional mechanisms of face processing in WS. Future work directed toward investigating the functional consequences of FFA size in face processing, including longitudinal studies of young children with WS, will be of great importance in understanding the mechanisms of cortical specialization during normal and atypical development.
"year": 2011,
"sha1": "7bb7519f51362508c32c00a537bf60b8b1bc5e99",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnbeh.2011.00018/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7bb7519f51362508c32c00a537bf60b8b1bc5e99",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
3D Printing-Assisted versus Conventional Extracorporeal Fenestration TEVAR for Stanford Type B Aortic Dissection with Undesirable Proximal Anchoring Zone: Efficacy Analysis
Background: To compare the outcomes of two thoracic endovascular aortic repair (TEVAR) techniques for left subclavian artery (LSA) reconstruction in Stanford type B aortic dissection (TBAD) patients with an undesirable proximal anchoring zone. Methods: We retrospectively reviewed 57 patients with TBAD who underwent either three-dimensional (3D)-printing-assisted extracorporeal fenestration (n = 32) or conventional extracorporeal fenestration (n = 25) from December 2021 to January 2023. We compared their demographic characteristics, operative time, technical success rate, complication rate, secondary intervention rate, mortality rate, and aortic remodeling. Results: Compared with the conventional group, the 3D-printing-assisted group had a significantly shorter operative time (147.84 ± 33.94 min vs. 223.40 ± 65.93 min, p < 0.001), a significantly lower rate of immediate endoleak (3.1% vs. 24%, p = 0.048), and a significantly higher rate of true lumen diameter expansion in the stent-graft segment (all p < 0.05), but a significantly longer stent graft modification time (37.63 ± 2.99 min vs. 28.4 ± 2.12 min, p < 0.001). There were no significant differences in other outcomes between the two groups (p > 0.05). The degree of false lumen thrombosis was higher in the stent-graft segment than in the non-stent-graft segment in both groups, and the difference was statistically significant (χ² = 5.390, 4.878; p = 0.02, 0.027). Conclusions: Both techniques are safe and effective for TBAD with an undesirable proximal landing zone. The 3D-printing-assisted extracorporeal fenestration TEVAR technique has advantages in operative time, endoleak risk, and aortic remodeling, while the conventional extracorporeal fenestration TEVAR technique has advantages in stent modification.
Introduction
Thoracic endovascular aortic repair (TEVAR) is a preferred treatment option for Stanford type B aortic dissection (TBAD) patients because it is less invasive and has fewer adverse effects [1,2]. However, TEVAR requires a proximal landing zone of at least 15 mm [3], and an undesirable proximal landing zone may compromise its success [4]. Therefore, TBAD patients with an insufficient proximal landing zone often need coverage of the left subclavian artery (LSA) to obtain adequate anchoring, but this may cause serious complications, such as cerebral ischemia, LSA steal syndrome, spinal cord ischemia, and even death [5]. To reduce postoperative complications as much as possible, LSA revascularization has become a consensus among experts and scholars [6]. With the increasing maturity of three-dimensional (3D) printing technology, its unique advantages in simulating complex aortic anatomy and morphology have become more apparent [7]. 3D-printing-assisted extracorporeal fenestration TEVAR uses 3D printing technology to create a model of the patient's aorta and, according to the position and diameter of the LSA on the model, precisely fenestrates the stent graft ex vivo before placing the fenestrated stent graft in the patient's aorta to achieve accurate reconstruction of the LSA [8,9]. This technique can overcome the limitations of conventional extracorporeal fenestration TEVAR, such as an inaccurate fenestration position, an inappropriate fenestration diameter, and damage to the stent graft during fenestration. In addition, combined with the cinching technique, the stent graft can be positioned and adjusted multiple times within the vessel, which further improves the success rate of surgery. However, the specific advantages and disadvantages of the two techniques in treating TBAD patients with an insufficient proximal landing zone need further study.
A comparative study of clinical efficacy in the same center and an assessment of short-term and mid-term clinical outcomes are still lacking [10]. Hence, this study aims to evaluate the short-term and mid-term clinical outcomes, strengths, and weaknesses of 3D-printing-assisted extracorporeal fenestration TEVAR.
General Data
This study employed a retrospective analysis to collect the clinical data of 57 patients with TBAD involving the LSA who received TEVAR at our center from December 2021 to January 2023. The patients were categorized into two groups based on the surgical method: the 3D-printing-assisted extracorporeal pre-fenestration group (n = 32) and the conventional extracorporeal pre-fenestration group (n = 25). All patients underwent aortic computed tomography angiography (CTA) before surgery to identify aortic dissection (AD), and the proximal landing zone was <15 mm. Cases were classified following the guideline [1]: 15 hyperacute (<24 hours) and 42 acute (1-14 days). The patients' high-risk factors, which met the criteria for TEVAR, are presented in Table 1. The inclusion criteria were: (1) type B AD diagnosed from the patient's medical history and preoperative CTA, according to the Stanford classification; (2) preoperative CTA indicating that the distance between the intimal tear and the LSA was <15 mm; (3) preoperative CTA demonstrating that the retrograde dissection tear or hematoma had involved the LSA; (4) no severe liver or kidney dysfunction. The exclusion criteria were: (1) type A aortic dissection, or retrograde dissection tear or hematoma involving the ascending aorta; (2) patients who underwent only single-branch artery reconstruction of the LSA during surgery; (3) patients in whom the extracorporeal pre-fenestration TEVAR technique was not applied; (4) patients with hereditary connective tissue disease (such as Marfan syndrome). This study was approved by the Ethics Committee of the Second Affiliated Hospital of Zhengzhou University (Approval No. 2023167), and all patients signed informed consent before surgery.
Preoperative Data Collection and 3D Printing Model Fabrication
The patient's original CTA Digital Imaging and Communications in Medicine (DICOM) data were entered into Endosize software (Therenva SAS, Bretagne, France), and 3D image reconstruction was conducted to measure the following key points of the aorta: the aortic lesion (aortic aneurysm or the true and false lumen of the dissection), the planned proximal and distal landing zones, and the inner diameter and lesion length of significant branch arteries. After verifying that the patient satisfied the condition, lesion scope, and anatomical criteria for fenestrated/branched thoracic endovascular aortic repair (F/B-TEVAR), a surgical strategy was devised. First, the original data file was entered into Mimics 21.0 software (Materialise, Louvain, Belgium), and 3D reconstruction of the aortic arch region (including the proximal and distal normal aorta, the affected aorta, and the vital branch openings of the arch) was carried out. Then, the reconstructed 3D model data were entered into design software (Geomagic Studio 2014, Geomagic, Triangle Development Zone, North Carolina, USA) for further preprocessing. Using reverse engineering technology, non-parametric surface reconstruction was applied to the blood vessels to obtain a computer-aided design (CAD) mathematical model of the vessels (see Fig. 1A). Simulation analysis was then executed in the same design software. Based on the surgical strategy, the window positioning holes of the main branch arteries of the aortic arch were identified, and a 3D-printed guide plate was designed (see Fig. 1B). Finally, the guide plate was sent to a Stratasys Eden260VS 3D printer (Stratasys, Eden Prairie, Minnesota, USA), and a hollow 3D aortic model close to the patient's affected aorta was fabricated with imported photosensitive resin as the raw material. Each model cost $410, and the printed 3D aortic model (see Fig. 2A) was sterilized and sealed with ethylene oxide.
Stent Modification
The diameter of the main stent was generally chosen to be about 10% larger than the CTA measurement, and an appropriately sized Ankura (Xianjian) covered stent main body was deployed in the sterilized 3D-printed model to determine the fenestration location and diameter (see Fig. 2A). An electrode pen was used to open the graft membrane at the fenestration, and a stent lining, extension support, and similar reinforcements were sutured at the fenestration to decrease the occurrence of endoleak (see Fig. 2B). A V18 guidewire was passed through the 6 o'clock direction of the main stent (with the arch vertex as the 12 o'clock position), a hole was made in the delivery sheath, and one end of the guidewire was drawn out through it. The main stent diameter was reduced (by at least 30%-45%) with 5-0 Prolene thread (Johnson, New Brunswick, New Jersey, USA) and secured on the guidewire to complete the cinching, and the stent was then retracted into the delivery system (see Fig. 2C). The stent delivery system was pre-bent in the sterilized 3D-printed model to make it conform better to the curvature of the aortic arch, facilitating successful deployment (see Fig. 2D). In the conventional extracorporeal fenestration group, the fenestration location and diameter on the main stent (Medtronic, Minneapolis, Minnesota, USA; Xinmai, Shanghai, China) were determined from the patient's aortic CTA data and the surgeon's experience, and an electrode pen was used to open the membrane. The fenestration was marked with a spring coil sewn around its edge with 5-0 Prolene, and finally the stent was retracted into the delivery system.
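The sizing arithmetic described above (a main stent about 10% larger than the CTA-measured diameter, then a 30%-45% diameter reduction during cinching) can be sketched as follows; the function names and example diameters are illustrative assumptions, not device specifications:

```python
def oversized_diameter(cta_mm: float, oversizing: float = 0.10) -> float:
    """Main stent diameter chosen ~10% larger than the CTA-measured aortic diameter."""
    return cta_mm * (1.0 + oversizing)

def cinched_diameter(stent_mm: float, reduction: float) -> float:
    """Stent diameter while constrained; the text describes reductions of 30%-45%."""
    if not 0.30 <= reduction <= 0.45:
        raise ValueError("reduction outside the 30%-45% range described in the text")
    return stent_mm * (1.0 - reduction)

# Hypothetical example: a 30 mm CTA measurement.
stent = oversized_diameter(30.0)             # ~33.0 mm
constrained = cinched_diameter(stent, 0.40)  # ~19.8 mm while repositioning
```

The constrained diameter is what permits repeated fine-tuning of the stent position before full deployment.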
Delivery and Deployment of the Stent
A direct incision was made in the left groin, and a segment of the femoral artery was mobilized. The left femoral artery was punctured, a 6F femoral sheath was placed, and 5000 U of heparin was administered for anticoagulation. Guided by a hydrophilic guidewire, a pigtail catheter was advanced to the level of the ascending aorta, and high-pressure angiography confirmed that the catheter was in the true lumen of the dissection; the catheter was then exchanged for a super-stiff guidewire advanced to the bottom of the aortic sinus. The femoral sheath was withdrawn, and the main stent delivery system was delivered over the super-stiff guidewire. The main stent was partially deployed when it reached the aortic arch to expose the fenestration; with its front end reduced in diameter by 30%-45% in the cinched state, it could be fine-tuned to facilitate selective catheterization of the branch artery through the fenestration. The left brachial artery was punctured, a 6F radial sheath was placed, and 5000 U of heparin was administered. With guidewire-catheter cooperation, the pre-made fenestration was selectively catheterized so that its location corresponded to the branch vessel. Under blood pressure control, the main stent was deployed; after selective catheterization of the fenestration, a long sheath was introduced via the brachial route, the branch stent was introduced and deployed, the V-18 cinching guidewire was withdrawn, and the main stent was fully deployed. Intraoperative angiography again confirmed whether there was endoleak or stent stenosis/occlusion and whether branch vessel flow was patent. Patients received dual antiplatelet therapy for 3 months postoperatively and then either continued dual therapy or were switched to single antiplatelet therapy according to their condition.
Follow-Up and Evaluation Methods
Postoperative follow-up was conducted through multiple channels, such as ward rounds, telephone inquiries, and outpatient visits (7-30 days, six months, and one year after surgery). The clinical outcome measures included the operative success rate, the device deployment success rate (defined as successful positioning and release of the main stent graft during surgery and successful isolation of the aneurysm or proximal dissection tear), intraoperative and postoperative complication rates, the secondary intervention rate, and mortality. Patients underwent regular CTA examination to assess the patency of the main and fenestrated stent grafts and the occurrence of endoleak. Four aortic planes were selected for measurement (as shown in Fig. 3), and the maximum diameter perpendicular to the intimal flap was measured in each plane. The changes in true and false lumen diameters in the different aortic planes before and after surgery, and the degree of postoperative thrombosis in the false lumen, were compared to evaluate aortic remodeling. Statistical software was used for the analysis of the data. Data with a normal distribution were analyzed by the independent-samples t-test, and data with a non-normal distribution were analyzed by a non-parametric test (Mann-Whitney U test). Categorical data were compared by the chi-square test or Fisher's exact test. Tests were two-sided with a significance level of α = 0.05.
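For the categorical comparisons above, the Pearson chi-square statistic on a 2×2 table can be computed directly. The counts below are illustrative, not the study's raw data, and for small expected counts Fisher's exact test (as the authors used) is preferred:

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative endoleak yes/no counts for two groups of 32 and 25 patients.
chi2 = chi_square_2x2(1, 31, 6, 19)
significant = chi2 > 3.841  # critical value for df = 1 at alpha = 0.05
```

The statistic is then compared against the chi-square critical value for one degree of freedom at the chosen α.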
Patient Clinical Data
There was no significant difference in age (mean ± SD) between the two groups (55.14 ± 11.14 vs. 56.16 ± 13.02 years, t = 0.235, p = 0.815). The preoperative data and complications of the patients are presented in Table 1. The baseline data did not differ significantly between the two groups (p > 0.05).
Perioperative Complications Results of Both Groups of Patients
The 3D-printing-assisted group had a significantly shorter operative time than the conventional group (147.84 ± 33.94 min vs. 223.40 ± 65.93 min, p < 0.001), but a significantly longer stent graft modification time (37.63 ± 2.99 min vs. 28.4 ± 2.12 min, t = 13.054, p < 0.001). The device deployment success rate was 100% in both groups (p > 0.05). The 3D-printing-assisted group also had a significantly lower rate of postoperative endoleak than the conventional group (3.1% vs. 24%, p = 0.048), while there were no significant differences between the two groups in other complication rates, secondary intervention rates, or mortality (p > 0.05). The perioperative complication results are shown in Table 2. One case of type II endoleak occurred in the 3D-printing-assisted group; it was not treated and had resolved by the one-month angiographic follow-up. Four cases of type I endoleak occurred in the conventional group (see Fig. 4), attributed to an overly large fenestration diameter and poor alignment of the fenestration with the LSA ostium; they were managed by balloon dilation and filling the gap with coils (Cook), followed by repeat angiography. Two cases of type II endoleak occurred in the conventional group; they were not treated and had resolved by the one-month angiographic follow-up.
Follow-Up Results
The follow-up times of the two groups were 16.14 ± 3.76 and 7.97 ± 3.80 months, respectively (t = 8.086, p < 0.001). The significant difference in follow-up time arose because 3D-printing-assisted extracorporeal fenestration TEVAR was adopted at our center later than the conventional technique. During the follow-up period, there were no new endoleaks and no complications such as spinal cord or limb ischemia in either group; one new dissection occurred at the distal end of the stent in the conventional group, and one patient died of Corona Virus Disease 2019 (COVID-19) seven months after surgery. There was no significant difference in survival rate between the two groups (p = 0.439).
Comparison of Aortic Remodeling after Dissection between the Two Groups
The preoperative and one-month postoperative thoracic and abdominal aortic CTA data of the two groups were collected, and three-dimensional reconstruction and measurement were performed using Endosize. The results are shown in Table 3. In the stent-graft segment (L1, L2, and L3 planes), the 3D-printing-assisted group had a higher rate of true lumen diameter expansion than the conventional group (all p < 0.05), while there was no significant difference between the two groups in the changes in true and false lumen diameters in the non-stent-graft segment (L4 plane) (all p > 0.05). There was no statistically significant difference in thrombosis between the two groups in either the stent or the non-stent segment (all p > 0.05); these results are shown in Table 4. However, the degree of false lumen thrombosis was higher in the stent-graft segment than in the non-stent-graft segment in both groups, and the difference was statistically significant (χ² = 5.390, 4.878; p = 0.02, 0.027).
Discussion
Note (Table 3): R(DTL) is the rate of true lumen diameter expansion and R(DFL) is the rate of false lumen diameter expansion; rate of true or false lumen diameter expansion = (postoperative diameter - preoperative diameter) / preoperative diameter × 100%.
3D printing technology has broad application and excellent performance in mimicking aortic morphology, but most studies focus only on its role in reproducing aortic anatomical structure [11,12] and lack research on its role in treating TBAD patients with an inadequate proximal landing zone [13]. This study innovatively employed 3D printing technology to assist the extracorporeal LSA pre-fenestration technique, offering a new individualized treatment option for TBAD patients with an unfavorable proximal landing zone; it used the pre- and postoperative true and false lumen change rates in four planes, the false lumen thrombosis rate, patient survival, and complication incidence to assess the efficacy and clinical outcomes of this surgical method, and compared it in detail with the conventional extracorporeal fenestration technique, aiming to evaluate more realistically and specifically the benefits and suitability of the 3D-printing-assisted extracorporeal pre-fenestration technique.
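The remodeling metric defined in the table note reduces to a one-line computation; the diameters below are illustrative values only, not measurements from the study:

```python
def expansion_rate(pre_mm: float, post_mm: float) -> float:
    """Percent change in lumen diameter relative to the preoperative value:
    (postoperative - preoperative) / preoperative * 100%."""
    return (post_mm - pre_mm) / pre_mm * 100.0

# Hypothetical diameters: true lumen 18 -> 27 mm, false lumen 22 -> 11 mm.
true_lumen_change = expansion_rate(18.0, 27.0)   # +50.0%: true lumen expanded
false_lumen_change = expansion_rate(22.0, 11.0)  # -50.0%: false lumen shrank
```

A positive rate indicates expansion and a negative rate indicates shrinkage, which is why true lumen expansion and false lumen reduction together signal favorable remodeling.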
This study compared and analyzed the short- and medium-term clinical outcomes of the 3D-printing-assisted extracorporeal fenestration technique and the conventional extracorporeal fenestration technique in treating TBAD patients with an undesirable proximal anchoring zone; both have good safety and efficacy, in line with domestic and international research results [14,15]. However, the 3D-printing-assisted technique has evident advantages over the conventional technique, mainly in the following aspects. First, the operative time in the 3D-printing-assisted group (147.84 ± 33.94 min) was significantly shorter than in the conventional group (223.40 ± 65.93 min) (p < 0.001), which may be related to the following: (1) 3D printing technology can create a dissection model that conforms to the patient's aortic anatomy in advance and achieve precise extracorporeal fenestration based on this model, thus providing more intuitive, more accurate, and safer guidance for the surgery [16]; (2) combined with the cinching technique, the main stent diameter is reduced by at least 30%-45% and retracted into the delivery system, which allows a larger range of adjustment within the vascular lumen, making it easier for the branch artery guidewire to enter the fenestration selectively and shortening the time needed to align and release the fenestration against the branch artery, thereby shortening the operative time [17,18]; (3) the possibility of stent modification or secondary surgical intervention due to an inaccurate fenestration position or an unsuitable fenestration diameter is reduced.
Second, in terms of postoperative complications, the incidence of endoleak on immediate angiography after stent deployment in the 3D printing-assisted extracorporeal pre-windowing group was also significantly lower than in the conventional extracorporeal windowing group (p < 0.05). This may be because 3D printing technology can enhance the fit between the stent and the aortic wall and arch branch arteries: using a 3D-printed model, an embedded stent can be sutured at the window to form a seamless connection between it and the left subclavian artery [19,20]. Postoperative false lumen reduction and thrombosis are reliable indicators for evaluating the long-term prognosis of TBAD patients [21,22]. The true lumen diameter dilation rate in the 3D printing-assisted group was significantly higher than in the conventional group (p < 0.05), which may be because simulating stent deployment in the patient's 3D aortic model before surgery allows the appropriate deployment angle and position to be determined, so that the stent main body fits the aortic wall well, effectively sealing the dissection tear and channel, reducing the false lumen diameter, and promoting false lumen thrombosis. In both groups, the degree of postoperative false lumen thrombosis was significantly better in the stent segment than in the non-stent segment (p < 0.05), and the remodeling of the dissected aorta was better in the stent segment [23,24].
3D printing technology can accurately reproduce complex anatomical models of pathologies, providing a powerful auxiliary tool for surgeons that facilitates disease diagnosis, decision making, and treatment planning [25]. In recent years, this technology has been widely used in the treatment of Stanford type B aortic dissection (TBAD) with an insufficient proximal landing zone by in vitro fenestration of aortic stent grafts, with good results. 3D-printed aortic models have two advantages in the modification of aortic stent grafts: (1) the models are hollow and transparent, allowing accurate fenestration positioning of the aortic stent graft after it is opened inside the model, avoiding errors caused by manual measurement; (2) the precise replication of the aortic arch can simulate the position of the aortic stent graft after implantation in vivo, and the position of the branch vessel orifice is more accurate than when simply measured from CTA, thus accurately simulating the spatial relationship between the aortic stent graft and the arch branches. For aortic anatomical variations such as bovine aortic arch, aberrant right subclavian artery, and a left vertebral artery originating from the aortic arch, and for severe tortuosity or angulation of the arch such as a type III aortic arch, the complex anatomy makes it difficult for the stent graft fenestration to match the branch vessel orifice, increasing the difficulty of surgery, reducing its success rate, and prolonging the operation time. Using 3D printing technology to produce these anatomically complex aortic arch models can not only visually demonstrate the anatomical structure of the pathology but also simulate the distortion of the intraluminal stent graft, enabling more accurate fenestration design, reducing the difficulty of surgery, and ensuring its safety [26].
In addition to guiding physicians in the fenestration of aortic stent grafts, 3D-printed models can also clearly show the morphology and anatomical relationships of the aorta and its branches, helping doctors and patients understand the disease and surgical plan, enhancing doctor-patient communication, and improving the education of young doctors on aortic diseases [27].
Limitations
Although 3D printing-assisted aortic stent fenestration has higher precision than fenestration based on manual measurement, the stress interaction between the 3D-printed model and the aortic stent differs from that in the actual aorta. Because of the multiple tortuous angles from the femoral artery to the thoracic aorta, when the main stent is introduced into the aortic arch, the vessel may be distorted by the stress of the super-stiff guidewire and the stiff stent delivery device, resulting in displacement between the stent window and the arch vessel opening [28]. The current 3D-printed aortic model material is hard and cannot fully mimic the behavior of the aorta under stress [29]. With further advances in 3D printing technology, we anticipate the emergence of materials whose flexibility and elasticity are closer to the aorta's, to mimic the real condition of the aortic arch more faithfully, while minimizing the model preparation time so that critically ill patients who require emergency surgery can also benefit. We also acknowledge limitations of this study, including the small sample size and short follow-up time; medium- and long-term treatment outcomes still require further follow-up and assessment. Therefore, larger-scale, longer-term, more rigorous randomized controlled trials are needed to validate the superiority and feasibility of the 3D printing-assisted extracorporeal pre-windowing technique in treating TBAD patients with an inadequate proximal landing zone.
Conclusions
From the above case data, both techniques have good safety and efficacy in treating Stanford type B aortic dissection with an inadequate proximal landing zone. The 3D printing-assisted extracorporeal pre-fenestration technique has greater benefits in shortening operative time, lowering endoleak risk, increasing the true lumen diameter expansion rate, and facilitating aortic remodeling, while the conventional extracorporeal fenestration technique has the advantage of a shorter stent graft modification time and does not require a 3D-printed aortic model preoperatively, making it more appropriate for critically ill patients who require emergency surgery.
Collaborative Learning Among Health Care Organizations to Improve Quality and Advance Racial Equity
Background: The study examined stakeholder experiences of a statewide learning collaborative, sponsored and led by Blue Cross Blue Shield of Massachusetts (BCBSMA) and facilitated by the Institute for Healthcare Improvement (IHI) to reduce racial and ethnic disparities in quality of care. Methods: Interviews of key stakeholders (n=44) were analyzed to assess experiences of collaborative learning and interventions to reduce racial and ethnic disparities in quality of care. The interviews included BCBSMA, IHI, provider groups, and external experts. Results: Breast cancer screening, colorectal cancer screening, hypertension management, and diabetes management were focal areas for reducing disparities. Collaborative learning methods involved expert coaching, group meetings, and sharing of best practices. Interventions tested included pharmacist-led medication management, strategies to improve the collection of race, ethnicity, and language (REaL) data, transportation access improvement, and community health worker approaches. Stakeholder experiences highlighted three themes: (1) the learning collaborative enabled the testing of interventions by provider groups, (2) infrastructure and pilot funding were foundational investments, but groups needed more resources than they initially anticipated, and (3) expertise in quality improvement and health equity were critical for the testing of interventions and groups anticipated needing this expertise into the future. Conclusions: BCBSMA's learning collaborative and intervention funding supported contracted providers in enhancing REaL data collection, implementing equity-focused interventions on a small scale, and evaluating their feasibility and impact. The collaborative facilitated learning among groups on innovative approaches for reducing racial disparities in quality. Concerns about sustainability underscore the importance of expertise for implementing initiatives to reduce racial and ethnic disparities.
Background
The COVID-19 pandemic exacerbated racial and ethnic disparities in the quality of care in the United States, resulting in increased public awareness and organizational efforts to advance health equity. Blue Cross Blue Shield of Massachusetts (BCBSMA), a large statewide insurer, is making progress in reducing disparities in quality by adopting collaborative learning methods developed by the Institute for Healthcare Improvement (IHI) and implementing payment reforms that incentivize reductions in racial inequities. 1,2 Learning collaboratives allow health care organizations to create teams with the goal of using pedagogical resources to achieve similar objectives. 4-6 There is limited information about collaborative learning systems that aim to reduce racial disparities in quality of care. In 2021, BCBSMA granted over $25 million to aid contracted provider groups' efforts to reduce racial/ethnic disparities in quality of care. 1,2 Provider groups invited to participate were Alternative Quality Contract (AQC) provider groups. Provider groups using the AQC model reduced spending rates by 12% over 8 years while also improving their Health Care Effectiveness Data and Information Set (HEDIS) measures. 12,13 Incentives for reducing racial disparities were not used during this time. 16,17 As a result of the AQC's impact and the racial justice movement, BCBSMA is beginning to revise its AQC contracts to include incentives for racial equity improvement in select HEDIS measures, which can account for 20% of their total incentive pay. 12 AQC groups were also invited to participate in the Equity Action Committee (EAC), BCBSMA's learning collaborative, to develop and test interventions that addressed racial/ethnic disparities in quality of care identified within their own electronic health record data and BCBSMA's claims data. 1 With funding from BCBSMA, IHI facilitated the equity improvement collaborative of teams from 12 AQC groups to disseminate best practices and to test and assess the impact of pilot projects aimed at reducing racial/ethnic disparities in quality.
Our study examines stakeholder experiences of the EAC. We analyzed the use of pilot grant funds and stakeholder experiences of identifying quality disparities and testing interventions to reduce racial and ethnic disparities. This qualitative research study makes a new contribution to the evidence on learning collaboratives by examining how collaborative processes shape the development of strategies aimed at reducing quality of care disparities. Our findings can provide an important foundation for future insurer-led efforts to help provider groups advance racial equity.
Data and study design
A qualitative interview design was used to investigate participant experiences of BCBSMA's equity initiative. A qualitative approach allowed us to gain a comprehensive understanding of stakeholder perspectives through their own words, providing detailed data about implementation. Between October 12, 2022 and March 16, 2023, we reached out to 60 individuals and conducted 42 interviews of 44 individuals involved with or impacted by BCBSMA's equity initiatives (Table 1).
Researchers developed and finalized interview questions that assessed stakeholder roles, stakeholder familiarity with BCBSMA equity initiative components, efforts to improve race, ethnicity, and language (REaL) data collection, EAC experiences, and the use of infrastructure and pilot grant funding. Guided by past research on strategies to advance health equity, 18 a codebook of 11 codes and 16 subcodes was iteratively developed by four researchers. Participants reviewed a consent document before their 30- to 60-min recorded interviews, which were transcribed and coded after deidentification.
Coding focused on components of BCBSMA equity initiatives. We used deductive/inductive analysis methods to identify primary themes shared by stakeholders regarding the EAC process, infrastructure grants, and pilot project grants. Researchers used NVivo software to code and analyze the transcripts. To ensure reliability, multiple researchers coded 10 interviews spanning the stakeholder groups, which were reviewed by four researchers to reach consensus. The 33 remaining transcripts were coded by one of the four researchers. Coding practices were reconciled and emerging themes were identified during weekly team meetings. NVivo analysis features were used to examine all text segments associated with focal codes, and themes were documented. Supporting text and quotes were organized within these themes.
QI priorities
BCBSMA provided groups with their 2019 and 2020 HEDIS performance data, stratified by race/ethnicity, as well as infrastructure grants and pilot grant resources. The stratified data stimulated several provider groups to assess quality of care disparities using data from their own clinical/administrative information systems for the first time. Provider groups' pilot projects targeted quality measures with high potential for reducing racial and ethnic disparities, including breast cancer screening, colorectal cancer screening, hypertension management, and type 2 diabetes care management. Groups implemented targeted interventions, including pharmacist medication management, mobile clinics for hypertension, and home health visits. Black/African American and Hispanic/Latino patients were found to have significant quality of care disparities in most provider groups. As a result, most groups' interventions were targeted to these patient populations.
Learning collaborative experiences
Our analyses of EAC activities revealed three key themes: (1) the EAC enabled the testing of interventions by provider groups; (2) infrastructure and pilot funding were important, but groups needed more resources than they initially anticipated; and (3) expertise in QI and health equity was critical for testing interventions. See Table 2 for an overview of all themes and corresponding quotes and text.
Theme 1: The EAC enabled the 12 provider groups to interact, and to formulate and implement pilot interventions that addressed racial disparities in quality of care. Stakeholders indicated that the EAC was integral to BCBSMA's equity initiative because it offered expert coaching and a platform for provider groups to share ideas. Participants in the initiative utilized IHI-designed learning processes, including PDSAs, to expose potential flaws and successes. Monthly meetings and workshops with IHI coaches enabled participants to improve upon their respective interventions, yielding results beyond what stakeholders could have achieved individually. For instance, an AQC group executive described the sessions as ''a great convening of others towards a similar goal.'' The EAC emphasized group learning, which was particularly valuable for organizations in the early stages of their own equity-focused efforts. The community aspect provided a forum for newer groups to thoughtfully design their initiatives, as noted by a smaller AQC group provider. Larger AQC groups tended to have pre-established equity-focused initiatives, driven by Chief Equity Officers or other executives, commonly with limited funding. As a result, their internal initiatives often took longer to design and implement than those designed for the EAC. As one AQC group provider observed, ''BCBSMA has definitely spurred the continued focus on equity, as opposed to our own internal work that had been happening before.'' The EAC contributed to ongoing efforts to promote equity within provider groups, building on prior efforts or offering groups without their own equity initiatives a means to prioritize an equity focus. By providing a platform for AQC groups, regardless of their size or experience, the EAC enabled groups to make progress in advancing equity.
Theme 2: Infrastructure and pilot funding were important foundational investments, but groups needed more resources than they initially anticipated. A BCBSMA executive explained that the extra funding has assisted provider groups with hiring more clinical support staff to help with data collection, community outreach, and mobile outreach efforts. These efforts have resulted in substantial improvement
in REaL data for AQC provider groups. Some groups did not fully understand their quality of care disparities by race and ethnicity before engaging in the EAC. Some provider groups previously lacked REaL data, but with BCBSMA's infrastructural support they could now standardly collect and routinely access REaL data from their patients. Driven by REaL data collection processes refined through the EAC and additional infrastructure support from BCBSMA-sponsored grants, AQC groups identified geographic areas and practice sites with concentrated disparities. An IHI coach said: ''[When] a team wants to improve diabetes care, hypertension care, or the maternal experience, they may not have data that shows a disparity between groups they serve. [This] means there is often a larger upfront period where they explore their own system for data that indicates a gap exists.'' AQC groups were encouraged by BCBSMA to use a payer-agnostic approach to their infrastructure investments and pilot grant activities. Interventions were designed to have the most impact on the group's total patient population for a prioritized HEDIS measure, regardless of the BCBSMA-specific patient counts for the measure. Thus, AQC groups were encouraged to focus on their total patient population with assistance from expert coaches and peers in the EAC. BCBSMA leadership understood that their competitors' patients would benefit from these investments, but believed that it was more important for groups to build their infrastructure and experience with addressing racial disparities than to worry about potential free-riding by competitor health plans.
While AQC group leaders considered the infrastructure and pilot grant funding provided by BCBSMA as foundational investments, some raised concerns about the sustainability of engaging in QI by race/ethnicity for their organizations. These concerns were shared during EAC meetings, as expressed by an IHI coach: ''And it's just a challenge because I think taking time to do those system changes is really hard when you're already so resource strapped and time strapped. But I think that's one of the biggest areas [of] focus: how will we make these changes sustainable? And how do we not just focus on adding more but changing steps within the processes that aren't working?'' Key stakeholders stressed that continued funding for collaborative learning and interventions would be important to sustain the equity-focused interventions they implemented as part of the EAC.
Theme 3: Expertise in QI and health equity was considered very important by provider groups to support their testing of new interventions to reduce racial and ethnic disparities in quality of care. AQC group stakeholders raised concerns about patient responses to requests for self-reported REaL data. AQC group providers stressed the importance of adequate staff training for REaL data collection during IHI coaching sessions, including how to communicate the purpose of collecting race and ethnicity data and how to address patients' concerns. Although coaches understood AQC group challenges with REaL data collection, they often had to redirect efforts toward clinical interventions, relying on BCBSMA data to guide improvement. One IHI coach stated, ''We have the clinical meetings with them and then another meeting about data collection. [This] is helping people to know that [clinical interventions are] what we need to do now. And then we meet one-on-one [to] think through barriers, and start to work on those things. And we can come up with a solution to a [clinical problem] and pilot that solution to get things done.'' Some AQC group providers wanted to delay implementation of pilot interventions until after BCBSMA grant funding had been fully dispersed and coaching interactions were complete. Although understandable, groups often needed to be reminded that delays could limit the benefits of interventions for racial and ethnic minority patients and that implementing the pilot would not negatively impact their operations. As one AQC group provider highlighted, consulting with coaches led to focused use of resources: ''We are in the process of deploying the resources that BCBSMA and IHI provided to us, targeting diabetes interventions. And having the support of a consultant to discuss what we're doing in the PDSA cycles and doing process mapping and so on has been helpful while we wait to start [the pilot].'' AQC group stakeholders consistently identified access to expert coaches as
catalytic in enabling them to address racial and ethnic disparities in quality of care. As one IHI executive described the process, ''The faculty offer once-a-month coaching calls; that's when they're really able to [learn] how they can apply different change ideas and use the model for improvement to improve equity and care.'' IHI coaches had expertise in QI and interventions to address racial and ethnic disparities in care. With their guidance, AQC groups were able to identify populations to target for interventions, assess intervention impacts on quality disparities, and refine their interventions.
Discussion
BCBSMA's equity initiative is one of the first U.S. statewide payer efforts to advance racial health equity through payment reform and collaborative learning. The collaborative learning methods used in the EAC, including expert coaching and group meetings, helped groups measure and begin to address racial and ethnic disparities in quality of care. Interventions developed and tested during the pilot program included pharmacist-led medication management, REaL data collection improvement, mobile clinics, community health worker approaches, and strategies to improve transportation availability.
Our analysis found both facilitators and challenges of addressing racial and ethnic disparities through the EAC: (1) the EAC enabled the testing of interventions by provider groups; (2) infrastructure and pilot funding were important, but groups needed more resources than they initially anticipated; and (3) expertise in QI and health equity were critical for the testing of interventions.The identified themes provide insights into the facilitators and challenges of using learning collaboratives in QI efforts to address racial and ethnic disparities.
Coaches with expertise in addressing racial and ethnic disparities enabled AQC groups, some of whom were previously unfamiliar with implementing interventions for Black, Latino, and other vulnerable populations, to approach the initiative with greater ease and develop effective interventions within the EAC.In addition, coaches helped the groups identify the widest quality disparities and target interventions, which were generally Black/African American and/or Hispanic/Latino populations.This allowed groups to focus their improvement work.In addition, rapport between coaches and the teams enabled coaches to encourage provider groups to implement interventions and overcome initial hesitation among some team members.
Coaches also helped the groups identify evidence-based interventions for vulnerable populations, including mobile clinics, community-based outreach, and patient navigation by community health workers to address quality disparities. 6 Our findings can help insurers and provider groups begin to tackle racial and ethnic disparities in quality of care. The pilot grant funding was a central resource for this initiative. 5,6 BCBSMA contracted IHI to oversee the pilot grant fund allocation process, encouraging provider groups to actively engage in the IHI-facilitated collaborative learning process. The collaborative learning system allowed multiple provider groups to efficiently design and test clinical interventions.
The findings should be considered in light of some limitations. First, although our response rate was high at 73%, nonparticipants may have had different experiences that could provide insights about the challenges of collaborative learning. 20 Second, multiple informants per AQC group could have enabled an assessment of the consistency of experiences within groups. Future research should examine heterogeneity of member experiences within provider groups.
Conclusions
Our interviews of stakeholders of the EAC aimed at advancing health equity highlighted the value of peer learning and coaching.The EAC supported groups in their efforts to improve REaL data collection and to test interventions on a small scale to assess their feasibility and impact on advancing racial equity.BCBSMA grant funding enabled provider groups to thoughtfully design and implement interventions, such as pharmacist medication management, increasing clinic transportation availability, and implementing community health worker approaches.
Concerns about the sustainability of equity-focused interventions tested as part of the EAC, continued access to coaches, and the availability of patient-reported data, however, were common among provider groups. 5,6 The results underscore the importance of payer alignment, so that provider groups can continue to advance their organizational initiatives to reduce racial and ethnic disparities.
Table 1. Key Stakeholders of Blue Cross Blue Shield of Massachusetts' Equity Initiative, by Group

Provider group engagement in the EAC led to information sharing and the identification of shared quality of care priorities among the provider groups, allowing IHI to provide more support and resources that could benefit all, such as REaL data collection protocols and evidence-based interventions for managing chronic conditions and improving the provision of preventive care. BCBSMA provided infrastructure support to complement funding allocation efforts. As noted by an IHI executive, ''It's not just the money. [BCBSMA has] also matched data capability with the provider organization's data capabilities.''

Table 2. Equity Action Community Themes and Illustrative Quotes

Theme 1: ''[IHI sessions are] a great convening of others towards a similar goal. In order to drive this type of transformative change, I think you have to have something like this.'' -AQC group Executive and provider
''BCBSMA has definitely spurred the continued focus on equity, as opposed to our own internal work that had been happening before.'' -AQC group provider
''Since we're very new to this space, we want to be thoughtful of how we approached it.'' -AQC group provider

Theme 2: ''Teams initially [received] $250,000 to participate in just the [learning collaborative]. And now [more] money is getting out the door, teams are engaged and we've seen some good results from teams.'' -BCBSMA Executive
''[When] a team wants to improve diabetes care, hypertension care, or the maternal experience, they may not have data that shows a disparity between groups they serve. [This] means there is often a larger upfront period where they explore their own system for data that indicates a gap exists.'' -IHI Coach
''We'll be looking for [ways] to keep [the] programs sustainable. And then of course, keeping a very close eye on metrics that can cost us like readmissions, ED visits, et cetera. And that can take a lot of time, but the hope is that we'd be able to reduce utilization so that when the grant is no longer funded, we'll be funding it ourselves.'' -AQC group Executive and provider
''It's not just the money. [BCBSMA has] also matched data capability with the provider organization's data capabilities.'' -IHI Executive
''[W]e did a full bottoms-up budget calculation over a five year period. And you know, the grant is 2 million. But what we guesstimated is about a hundred million over a five year type of investment.'' -AQC Stakeholder
''I sat in on one meeting, where [BCBSMA and IHI stakeholders] were saying, 'You're not gonna lose money,' which is good at least for the first few years, but eventually you're gonna need to close these disparities in care.'' -AQC group provider
''And it's just a challenge because I think taking time to do those system changes is really hard when you're already so resource strapped and time strapped. But I think that's one of the biggest areas [of] focus: how will we make these changes sustainable? And how do we not just focus on adding more but changing steps within the processes that aren't working?'' -IHI Coach

Theme 3: ''We have the clinical meetings with them and then there's another meeting about data collection. [This] is helping people to know that [clinical interventions are] what we need to do now. And then we meet one-on-one [to] think through barriers, and start to work on those things. And we can come up with a solution to a [clinical problem] and pilot that solution to get things done.'' -IHI Coach
''We are in the process of deploying the resources that Blue Cross and IHI provided to us, targeting diabetes interventions. And having the support of a consultant to discuss what we're doing in the PDSA cycles and doing process mapping and so on has been helpful while we wait to start [the pilot].'' -AQC group provider

AQC, Alternative Quality Contract; ED, Emergency Department; IHI, Institute for Healthcare Improvement; PDSA, Plan-Do-Study-Act cycles; QI, quality improvement.

Copado, et al.; Health Equity 2023, 7.1 http://online.liebertpub.com/doi/10.1089/heq.2023.0098
Diagnostic value of combined prealbumin-to-fibrinogen and albumin-to-fibrinogen ratios in Hp-negative gastric cancer
Background This study aimed to investigate the diagnostic value of prealbumin-to-fibrinogen ratio (PFR) and albumin-to-fibrinogen ratio (AFR) alone or in combination in Helicobacter pylori-negative gastric cancer (Hp-NGC) patients. Methods This study included 171 healthy controls, 180 Hp-NGC patients, and 215 Helicobacter pylori-negative chronic gastritis (HpN) patients. We compared the differences of various indicators and pathological characteristics between groups with Mann–Whitney U test and Chi-square test. The diagnostic value of PFR and AFR alone or in combination for Hp-NGC patients was assessed by the receiver operating characteristic (ROC) curve. Results PFR and AFR were related to the progression and clinicopathological characteristics of Hp-NGC. As the disease progressed, PFR and AFR values gradually decreased and were negatively related to the tumor size and depth of invasion. In addition, the area under the curves (AUCs) that resulted from combining PFR and AFR to distinguish Hp-NGC patients from healthy controls and HpN patients were 0.908 and 0.654, respectively. When combined with PFR and AFR in the differential diagnosis of tumors with a maximum diameter ≥ 5 cm and the T3 + T4 stage, the AUCs were 0.949 and 0.922; the sensitivity was 86.32% and 80.74%; and the specificity was 94.74% and 92.98%, respectively. Conclusions PFR and AFR may be used as diagnostic biomarkers for Hp-NGC. The combination of PFR and AFR was more valuable than each indicator alone in the diagnosis of Hp-NGC.
Introduction
Gastric cancer (GC) is a malignant tumor that ranks fifth among the most common cancers and third among the most common causes of cancer death in the world. 1 Despite a decline in its incidence, the prognosis of GC remains poor. 2 GC is caused by many factors, including diet, genetic mutations, and Helicobacter pylori (H. pylori) infection. 3,4 Among these factors, persistent H. pylori infection can cause chronic gastritis, which may eventually lead to GC. 5 At present, serum carcinoembryonic antigen (CEA), CA724, and CA199 remain important markers for early GC screening, 6 but a small number of GC patients are negative for H. pylori; this condition is termed Hp-NGC. Although these markers have certain diagnostic value for GC, they are not recommended for the early diagnosis of Hp-NGC because of its unique clinical and pathological features. 7 In addition, because the prevalence of Hp-NGC is extremely low, there are relatively few clinical studies on markers for diagnosing it. 7,8 Although methods such as endoscopy and pathological biopsy can accurately diagnose Hp-NGC, they are invasive and unsuitable for early screening. 5 Many potential markers have been proposed for the diagnosis of GC, such as the long non-coding RNAs cancer upregulated drug resistant, long stress-induced noncoding transcript 5, phosphatase and tensin homolog pseudogene 1, and microRNAs. 9 However, these markers are expensive to detect and unsuitable for routine testing, and no useful non-invasive GC biomarkers have yet been identified. 10 Therefore, new, convenient, and economical biomarkers are needed to predict early GC, especially in Hp-NGC patients.
Inflammation is closely connected with the progression and prognosis of tumors, and abnormal blood coagulation and nutritional status may also affect the occurrence and development of malignant tumors. 11 Common inflammatory markers include prealbumin (PA), albumin (ALB), fibrinogen (Fib), and new markers composed of the ratios of these indicators, which are usually abnormal in patients with GC. 12 ALB and PA, which are produced by the liver, are often used as biomarkers to assess inflammation and nutritional status. 13 In addition, a decrease in the serum ALB level indicates malnutrition and decreased immunity and accelerates the infection and progression of tumors. 14 PA has a shorter half-life than ALB and is more sensitive to malnutrition; it is a new and feasible indicator of a poor prognosis for patients with GC. [15][16][17] Several other studies have also confirmed that ALB and PA are potential prognostic factors in patients with GC. 13 Fib is an acute-phase reactive protein that can regulate the proliferation, metastasis, and signal transduction of tumor cells. 18 Palaj et al. found that high levels of Fib are related to tumor development, metastasis, and survival in GC patients. 19 Therefore, we speculated that PFR and AFR may be related to inflammation and tumor progression and are expected to become new biomarkers to predict and diagnose patients with Hp-NGC.
Existing studies have assessed the prognostic value of PFR and AFR in GC and other cancers; however, there is a lack of research on their predictive value in patients with Hp-NGC. Therefore, we evaluated PFR and AFR, both alone and in combination, as diagnostic markers in patients with Hp-NGC and analyzed their relationship with clinicopathological characteristics.
Patients
This study was a retrospective analysis that included 171 healthy controls, 215 HpN patients, and 180 Hp-NGC patients from the First Affiliated Hospital of Guangxi Medical University between January 2013 and October 2020. The inclusion criteria for Hp-negative patients were as follows: (a) newly diagnosed with gastric adenocarcinoma via clinical pathology; (b) a diagnosis of chronic gastritis; (c) negative for H. pylori by endoscopy, pathological biopsy, urea breath test (UBT), or serum H. pylori antibody; (d) no history of radiotherapy, chemotherapy, or anti-inflammatory treatment; (e) no other malignant tumors or related infectious diseases (such as autoimmune diseases, infectious diseases, hepatitis, cirrhosis, etc.); (f) no cardio-cerebrovascular disease; and (g) complete clinical and pathological data. In addition, we included 171 healthy people with no history of cancer or gastrointestinal diseases as healthy controls. Our study was approved by the Ethics Committee of the First Affiliated Hospital of Guangxi Medical University and was in line with the Declaration of Helsinki. All participants gave informed consent.
Clinical data collection and calculation
All relevant data were obtained from patient medical records, including gender, age, PA, Fib, white blood cells (WBC), neutrophils (NEU), hemoglobin (Hb), platelets, ALB, CEA, and clinicopathological characteristics. A Beckman-Coulter LH 780 (Beckman Coulter, Brea, CA) and Roche E6000 Analyzer (Roche Basel, Switzerland) were used to carry out routine blood and CEA analyses, respectively. We also used a Hitachi 7600 automatic biochemical analyzer (Tokyo, Japan) to determine PA and ALB. Plasma Fib was detected by Sysmex CA7000 automatic coagulometer. PFR and AFR values were generated from formula calculations: PFR = PA/Fib and AFR = ALB/Fib, respectively. We obtained the optimal cutoff values for PFR and AFR using MedCalc software and divided them into a high-value group and a low-value group.
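As a minimal illustration of the ratio calculations and cutoff-based stratification described above (the patient values and the cutoff below are hypothetical placeholders, not the study's actual data or MedCalc-derived cutoffs):

```python
def pfr(pa: float, fib: float) -> float:
    """Prealbumin-to-fibrinogen ratio: PFR = PA / Fib."""
    return pa / fib

def afr(alb: float, fib: float) -> float:
    """Albumin-to-fibrinogen ratio: AFR = ALB / Fib."""
    return alb / fib

def stratify(value: float, cutoff: float) -> str:
    """Assign a patient to the high- or low-value group at a given cutoff."""
    return "high" if value >= cutoff else "low"

# Hypothetical patient: PA = 220 mg/L, ALB = 40 g/L, Fib = 3.2 g/L
print(pfr(220, 3.2))          # 68.75
print(afr(40, 3.2))           # 12.5
print(stratify(12.5, 11.4))   # "high" (11.4 is a placeholder cutoff)
```

The units follow common laboratory reporting conventions (PA in mg/L, ALB and Fib in g/L); only the ratio and its comparison with the cutoff matter for the grouping.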
Statistical analysis
The data in this study were analyzed using SPSS software (version 25.0). The data did not follow a normal distribution and were therefore represented as medians and interquartile ranges. In addition, we used the Chi-square test and the Mann-Whitney U test to analyze the differences in laboratory markers and pathological characteristics between groups. MedCalc software (version 18.1.1) was used to construct receiver operating characteristic (ROC) curves and determine the area under the curve (AUC). For all analyses, P < 0.05 indicated statistical significance.
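The ROC analysis can be sketched in plain Python. This is not the MedCalc implementation; it assumes lower ratio values flag disease (consistent with PFR and AFR decreasing as disease progresses) and selects the cutoff maximizing the Youden index J = sensitivity + specificity − 1, a common criterion for "optimal" cutoffs. The labels and scores below are synthetic.

```python
def roc_points(labels, scores):
    """(sensitivity, specificity, threshold) at each candidate cutoff.
    labels: 1 = case, 0 = control; a score <= threshold is called positive,
    since lower PFR/AFR values indicate disease."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores)):
        tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s <= t)
        tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s > t)
        points.append((tp / pos, tn / neg, t))
    return points

def auc(labels, scores):
    """AUC via its Mann-Whitney U interpretation: the probability that a
    randomly chosen case scores lower than a randomly chosen control."""
    cases = [s for s, l in zip(scores, labels) if l == 1]
    controls = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((c < h) + 0.5 * (c == h) for c in cases for h in controls)
    return wins / (len(cases) * len(controls))

def youden_cutoff(labels, scores):
    """Cutoff maximizing J = sensitivity + specificity - 1."""
    return max(roc_points(labels, scores), key=lambda p: p[0] + p[1] - 1)

# Synthetic example: cases tend to have lower ratio values than controls
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [50, 55, 60, 80, 70, 90, 100, 110]
print(auc(labels, scores))            # 0.9375
print(youden_cutoff(labels, scores))  # (0.75, 1.0, 60)
```

The same cutoff-selection step is how the high- and low-value groups used in the stratified analyses would be defined.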
In this study, the sample sizes of the healthy control group, HpN group, and Hp-NGC group were 171, 215, and 180, respectively. The current sample size had more than 99% power to detect differences in PFR and AFR between different groups (α<0.05). The sample size of 180 patients had 98.05% power to detect the difference in PFR and AFR between subgroups of tumor size or tumor invasion depth (α<0.05).
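As a rough sketch of how such power figures arise, the normal-approximation formula for a two-sided, two-sample comparison can be computed with the standard library. The effect size below (Cohen's d = 0.5) is an assumed illustration, not the effect size reported by the study, and this approximation targets a comparison of means rather than the Mann-Whitney test actually used.

```python
import math
from statistics import NormalDist

def two_sample_power(d: float, n1: int, n2: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample mean comparison
    (normal approximation, standardized effect size d)."""
    nd = NormalDist()
    se = math.sqrt(1 / n1 + 1 / n2)       # SE of the standardized difference
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value
    return nd.cdf(abs(d) / se - z_alpha)

# e.g., healthy controls (n = 171) vs Hp-NGC patients (n = 180)
print(round(two_sample_power(0.5, 171, 180), 3))  # ≈ 0.997
```

With groups of this size, even a moderate assumed effect yields power well above 99%, consistent with the figures reported above.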
Baseline data of patients
The baseline data of the study population are shown in Table 1 and Figure 1. In terms of gender, there was no statistical significance among the three groups, but Hb, ALB, PA, PFR, and AFR were all statistically significant. PFR and AFR values decreased gradually as the disease progressed. In addition, Hb, ALB, PA, PFR, and AFR values of Hp-NGC patients were lower than those of HpN patients and healthy controls, while WBC and NEU findings in Hp-NGC patients were higher than those of the other groups.
Correlation between PFR, AFR, and clinicopathological characteristics in Hp-NGC
According to Table 2, PFR and AFR were both associated with tumor size and depth of invasion (P < 0.05) but not with age, smoking, drinking, breakthrough of a serous membrane, or ulcer invasion (P > 0.05). PFR was also not associated with gender, lymph node metastasis, or tumor stage (P > 0.05) but was associated with distant metastasis (P < 0.05). In contrast, AFR was related to gender, lymph node metastasis, and tumor stage (P < 0.05) but was not associated with distant metastasis (P > 0.05).
Correlation between PFR and AFR stratification and clinicopathological characteristics
As shown in Table S1, there were 79 (60.3%) males and 52 (39.7%) females in the low-PFR group and 28 (57.1%) males and 21 (42.9%) females in the high-PFR group. There were 66 (66%) males and 34 (34%) females in the low-AFR group and 41 (51.2%) males and 39 (48.8%) females in the high-AFR group. The low-PFR group had a larger maximum tumor diameter (P < 0.001), a deeper invasion depth (P = 0.026), and more lymph node metastasis (P = 0.043) and distant metastasis (P = 0.041) than the high-PFR group. However, gender and tumor stage were not statistically significant between the high-PFR group and the low-PFR group (P > 0.05). The low-AFR group had more male patients (P = 0.045), a larger maximum tumor diameter (P = 0.006), a later tumor stage (P = 0.019), a deeper depth of invasion (P = 0.006), and more lymph node metastasis (P = 0.038) than the high-AFR group, but distant metastasis was not statistically significant between the high-AFR group and the low-AFR group (P > 0.05). Finally, there was no significant difference in age, smoking, drinking, breakthrough of a serous membrane, or ulcer infiltration between the low and the high groups of either PFR or AFR (P > 0.05).
The diagnostic value of PFR and AFR between Hp-NGC patients and other participants
The diagnostic value is shown in Table 3 and Figure 2.
First, the AUC values of PFR and AFR alone for differentiating Hp-NGC patients from the healthy control group were 0.888 and 0.765, respectively, while the combination of PFR and AFR improved the diagnostic value (AUC = 0.908), with a sensitivity of 78.33% and a specificity of 92.98%.
Discussion
In recent years, early GC has not been easily detected, and most cases are diagnosed at an advanced stage. Although the incidence has gradually decreased and surgery and adjuvant therapy have advanced, the prognosis remains poor. 20 There are many diagnostic methods for GC, but effective methods are usually invasive, such as gastroscopy and biopsy. As a common tumor marker of gastrointestinal malignant tumors, CEA is of great significance in the diagnosis of GC. 21 However, the diagnostic efficiency of CEA is not high (AUC = 0.644). 22 Therefore, it is particularly important to identify economical, efficient, and non-invasive diagnostic biomarkers. At present, many prognostic biomarkers are used to evaluate a variety of malignant tumors, such as AFR, the fibrinogen-to-prealbumin ratio (FPR), the neutrophil-to-lymphocyte ratio, and the platelet-to-lymphocyte ratio. [23][24][25][26] However, the diagnostic value of these markers for Hp-NGC remains unclear. Therefore, this study explored the clinical value of PFR and AFR in the diagnosis of Hp-NGC. Some studies have shown that Fib is a pro-inflammatory protein that is related to the clinicopathological characteristics and prognosis of many tumors, including gastrointestinal cancer, lung cancer, and prostate cancer. 27 Hyperfibrinogenemia has been significantly related to tumor enlargement, an advanced cancer stage, and a poor prognosis in GC patients. 28 Huang et al. reported that the Fib level in cancer patients was significantly increased compared with the control group. 29 This finding was in line with our results. In addition, a decrease in ALB and PA may lead to malnutrition and immune response disorders, which also may affect the prognosis of GC. 30-33 Therefore, PFR and AFR, which are composed of PA, ALB, and Fib, may be closely associated with the progression of cancer.
PFR has also been closely related to acute pancreatitis and cancer 16 and could be used to evaluate the severity of acute pancreatitis and the likelihood of complications. 34 In another study, FPR could be used as a useful biomarker for the diagnosis of colorectal cancer and was associated with tumor stage. 35 At the same time, other studies have confirmed AFR as a new tumor prognostic biomarker 36 that is related to tumor size, depth of invasion, and lymph node metastasis. 37 Huang et al. showed that AFR was related to the clinicopathological features of cervical cancer and was significantly lower in cervical cancer patients. 29 Based on these studies, PFR and AFR values gradually decrease as the disease progresses, which was also in line with our results. We found that PFR and AFR were related to Hp-NGC and that they decreased with increases in tumor size and depth of invasion. As a result, PFR and AFR could be used as indicators of tumor aggressiveness.
This study evaluated the correlation between PFR and AFR and Hp-NGC. The results showed that PFR and AFR gradually decreased as the disease progressed. We also analyzed the relationship between PFR and AFR and clinicopathological characteristics and found that they were related to the tumor size and depth of invasion. A further stratified analysis showed that the larger the maximum tumor diameter, the deeper the invasion depth and the lower the PFR and AFR levels. Such patients were more prone to neurovascular invasion, suggesting that lower levels of PFR and AFR in patients with GC were associated with a higher postoperative recurrence rate and a worse prognosis. Therefore, it is necessary to strengthen postoperative chemotherapy and perform frequent follow-up examinations.
When analyzing the diagnostic value of PFR and AFR in Hp-NGC patients and other populations, the diagnostic efficiency of combining PFR and AFR in differentiating Hp-NGC from healthy controls was higher than that of either single indicator (AUC = 0.908, sensitivity = 78.33%, specificity = 92.98%). In addition, the AUCs of the combined PFR and AFR for distinguishing HpN patients from healthy controls and HpN patients from Hp-NGC patients were 0.813 and 0.654, respectively, indicating that PFR and AFR can also be used as moderate predictors for the differential diagnosis of HpN and Hp-NGC. However, these AUCs were lower than that of the Hp-NGC versus healthy control model, which therefore remained the best diagnostic model for Hp-NGC. Based on these results, the combined application of PFR and AFR had higher diagnostic efficiency for Hp-NGC than either indicator used alone. There were some limitations to our study. First, the sample size was small, and all participants came from the same hospital, which might have caused errors in the results. Second, the study was a single-center retrospective study, and the results might not be comprehensive enough. Finally, prospective multi-center studies should be conducted in the future to verify these results more comprehensively.
Conclusions
Low PFR and AFR values were related to tumor size and depth of invasion. PFR and AFR may be used as diagnostic biomarkers for Hp-NGC. The combination of PFR and AFR had higher diagnostic efficiency for Hp-NGC than either indicator alone and might be an economical, convenient, and promising biomarker.
Author contributions
Shan Li conceived the study design, Linyan Zhang wrote the manuscript, Simeng Qin was involved in data acquisition, Liuyi Lu and Li Huang performed the analyses. All authors read and approved the final manuscript.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural Science Foundation of China (grant number 81460431).
Supplemental material
Supplemental material for this article is available online.
Exploring the impact of perceived risk and trust on tourist acceptance intentions in the post-COVID-19 era: A case study of Hainan residents
Hainan is the only free trade port in China and exudes the quintessence of Chinese culture. Tourism is one of Hainan's most lucrative industries. On the one hand, the regional economy is flourishing; on the other hand, it is facing unprecedented impacts from the COVID-19 pandemic. In response to the affected global market environment, this study investigates Hainan residents' acceptance intentions, or tolerance, of tourists. A theoretical research framework was developed based on the theory of reasoned action, incorporating "subjective norm" together with "trust," "perceived risk," and "perceived value," with "resident attitude" as a mediator. A total of 447 valid responses were collected using online and paper-copy questionnaires distributed to Hainan residents from 15 July 2021 to 10 November 2021. The questionnaire data were used for three analyses, namely, descriptive statistical analysis, measurement model verification, and structural equation modeling analysis. Findings show a positive effect of trust on residents' attitudes in Hainan; perceived value and subjective norm showed a positive effect on resident acceptance intentions for tourism; residents' perceived risk showed a negative effect on attitudes toward tourists, but the influence was small. Finally, based on the results of the study, theoretical and practical implications for a post-pandemic era are discussed.
Introduction
The devastation and crisis caused by the COVID-19 global pandemic have resulted in major socioeconomic upheaval (Prime et al., 2020). The multiplicity of pandemic pathogens and their recurrence and rebound in multiple regions is unprecedented. To date, the World Health Organization is advising countries and individuals to take steps in protecting their health and preventing the spread of the pandemic (WTO, 2021). As a result of COVID-19, the world's tourism industry is facing a disaster beyond
imagination as tourism is vulnerable to natural disasters, pandemics, conflicts, terrorism, and economic crises. For example, health measures and communication restrictions such as embargoes, travel bans, quarantine, and social distancing have halted business in tourism-related industries. COVID-19 is causing a long-term structural change in the tourism industry and challenging existing economic and tourism systems. However, tourism has shown resiliency in rebounding from major economic, political, and health crises in the past (Sigala, 2020). Even now, with the recurrence of the pandemic in individual regions, people's demand for tourism is still rising, and the market demand for cross-regional and overseas tourism is substantial. Therefore, the management of pandemic precautions in tourism is an important and long-term social issue facing the world today. According to a recent study released by the World Travel and Tourism Council (WTTC) on global tourism averages, Chinese tourism recovery is significantly ahead of other key markets in Europe and the United States. With several government initiatives underway, the tourism market is predicted to be extremely optimistic for 2022 (WTTC, 2021). According to official data from China's Hainan Tourism and Culture, Radio, Film, and Sports department, Hainan Province received 81.43 million domestic and foreign tourists in 2021, up an average of 25.5% per year since 2019 and recovering to 97.5%. Total tourism revenue was 138.434 billion RMB in 2021, up an average of 58.6 and 30.9% since 2019. Hainan has one of the best tourism recovery rates in China (QiuShi, 2022). Enthusiasm and demand for cross-regional travel are still evident. Hainan is a free trade port with quintessential Chinese cultural characteristics and a destination for tourists. The region is currently building a pilot free trade zone, and the tourism industry is an economic pillar of Hainan (SCMP, 2022).
However, with the recurrence and uncertainty of the pandemic, residents of tourist destinations have heightened perceived risk and trust concerns regarding safety associated with sightseeing tourists. Residents' wariness of foreign tourists is increasing, especially toward tourists from high-risk areas located outside China. Nonetheless, tourism companies are surviving despite the turmoil of the pandemic and are trying to seize potential tourism opportunities. For regions and countries where tourism is an important economic pillar, a safe living environment is essential for residents of tourist destinations. As an exemplary tourist destination, Hainan's community deserves a more in-depth exploration of residents' attitudes, perceived risks, and trust toward tourists.
Although the body of research on COVID-19 and tourism is growing, past research has mainly focused on the impact on the tourism industry (Baum and Hai, 2020;Lew et al., 2020;Ranasinghe et al., 2020;Sigala, 2020). Several studies have focused on the destination-image recovery processes (Castillo-Villar, 2020;Chemli et al., 2020;Rasoolimanesh et al., 2021) and estimating impacts and changes in visitor behavior (Davahli et al., 2020;McGinlay et al., 2020;Wen et al., 2020). However, because tourism is the main source of income for the long-term livelihood of residents in tourism destinations, empirical studies should consider the resident's acceptance intention, or tolerance, for tourists, especially under the norm of recurrent pandemics. This study employs the theory of reasoned action (TRA) while using the impact of residents' perceptions (e.g., perceived risk and trust) to develop a theoretical structural model of tourism in the context of COVID-19. Perceived value is considered a construct of perceived behavior control in TRA. The application of TRA to residents' acceptance intention of tourism is critical to comprehensively augment a structural framework of tourism tolerance.
Therefore, this study takes Hainan residents' intention to accept tourists as the main axis, and uses trust, perceived risk, perceived value, and subjective norm as independent variables. Attitude is considered a mediating variable to explore the mechanism of factors influencing Hainan residents' intention to accept tourist visitors. Based on domestic and international literature around tourism, this study designed questions aligning it with the current research context, used questionnaires as a means of collecting data, and conducted data analysis using statistical tools. This study's results serve to highlight practical implications and provide managerial recommendations for tourism practitioners and local tourismrelated government departments.
Theory of reasoned action
The Theory of Reasoned Action is a foundational and highly influential theory in social psychology used to study cognitive behavior and was originally developed by Fishbein (1967) and Ajzen and Fishbein (1969). The four main factors of the TRA model include attitude, subjective norm, actual behavior, and behavioral intention. The theory asserts that intention precedes behavior; an individual's intention is determined by attitudes and subjective norms (Fishbein et al., 1980). Individual behavioral attitudes and subjective norms lead to behavioral intentions, and individual behavioral intentions directly influence actual behavior (Ajzen and Fishbein, 1975). This suggests that rational human behavior satisfies both the individual's wishes and the expectations of others (Dan and Chieh, 2008). Attitude refers to the perception and evaluation of performing a specific behavior and includes consideration of the subsequent outcome (Verma and Sinha, 2018). Subjective norms are the views and opinions held about social pressures when engaging in a specific behavior, which is easier or more difficult to perform if others hold favorable or unfavorable opinions, respectively (Procter et al., 2019). Ajzen and Fishbein (1975) defined intention as the expectation of one's behavior in a given context, operationalized as the likelihood of an individual performing an act. Intention is influenced by two variables: attitudes and subjective norms. In the context of this study, attitudes refer to Hainan residents' positive or negative perceptions of tourists after COVID-19. Subjective norm refers to how residents are influenced by the perceptions of friends, neighbors, co-workers, family members, and community residents about tourists. Intention is the extent to which residents accept or tolerate tourists. Several studies have explored human behavior based on TRA. Past research based on TRA has focused on issues such as gambling (Procter et al., 2019) and purchasing and user behaviors.
For instance, Chen et al. (2020) and Loan and Quyen (2020) investigated consumer attitudes toward purchasing social insurance, using smart home devices, and purchasing environmentally sustainable goods. The theory is also widely used in health and medical-related studies to explain increases in healthy behaviors correlating with a decrease in habitual behaviors (Sheeran and Conner, 2019). For example, a study was conducted about college students and their participation in health-related extracurricular activities, where attitudes, subjective norms, and cultural exploration have significant positive effects on college students' behavioral intentions to participate in college activities (Harb et al., 2021). In the field of tourism, Song et al. (2021) examined the behavior relating to Chinese golfers' intention to revisit, which was influenced by the golfers' perceptions of staff attitudes, destination uniqueness, and place attachment.
Results from several studies suggest a positive correlation between trust and attitude. For example, Sadiq et al. (2021) studied online travel purchases and concluded that trust strongly influences consumer attitudes toward online purchases. In addition, Ng (2020) proposed that trust in colleagues is a catalyst for influencing employees' willingness to share knowledge with others; an increase in trust among colleagues positively impacts attitudes toward knowledge sharing. Informed by past studies utilizing TRA, this study expects that increased trust of Hainan residents toward tourists will correspondingly increase positive attitudes toward tourists. Therefore, this study proposes hypothesis 1.
H1: The trust of residents in Hainan toward tourists will have a significantly positive effect on attitudes toward tourists.
Trust
Trust is defined as the willingness of one party to believe or rely on the other party's attitude or willingness to have a mutually beneficial human interaction (Moorman et al., 1993; McKnight and Chervany, 2001; Kim and Tadisina, 2007). Trust is also defined as the perceived feeling of reliability and goodwill of one party toward the other party (Gefen and Straub, 2004). McCarter and Northcraft (2007) found that trust is a psychological situation in which one party is willing to believe in another party in anticipation of cooperation. Social scientists are increasingly interested in the concept of trust as a response to incomplete knowledge that leads to uncertainty (Giddens et al., 1991). Lewis and Weigert (1985) stated, "trust begins where prediction ends," and tourism, like all areas of life, is fraught with uncertainty. One focus of this study is on trust interactions between travelers and residents. Given that COVID-19 spreads from person to person, all people in the same spatial proximity should have baseline trust to protect each other (Dedeoglu and Bogan, 2021; Ukpabi et al., 2021; Quintal et al., 2022). Shin et al. (2022) indicate that future travel intentions are determined by an individual's level of trust in the COVID-19 measures at their destination. Here, this paper proposes that Hainan residents' trust in tourism flourishes when tourists choose to do the right thing for local residents. In return, local Hainan residents are willing to establish trust and create mutually beneficial interactions.
Trust includes social trust, government trust, social media trust, and online perception trust. Trust in these sectors has increased to an unprecedented level. For instance, trust leads to higher purchase intentions in online shopping environments (Jones and Kim, 2010). Additionally, Kim et al. (2012) and Pan et al. (2012) verified that online shopping intentions are influenced by consumer trust and that trust also influences consumers' perceptions of online shopping. Trust is critical in a rapidly evolving event like COVID-19, which is characterized by scientific uncertainty (Balog-Way and McComas, 2020). Travel-related businesses are now compelled to find strategies to retain customers. One basic strategy is to build customer trust and loyalty (Laparojkit and Suttipun, 2021). During times of uncertainty, trust is a key factor in sustaining society and underpins people's attitudes and behaviors (Paul et al., 2021).

Perceived risk

Bauer (1960) argued that most consumer behavior may be risky; consumers perceive risk because they are unable to predict the outcomes of their purchases or the approximate probability of various outcomes. Perceived risk in tourism is defined as an individual's perception of "behaviors that may influence travel decisions if hazards are perceived to be beyond acceptable levels" (Chew and Jahari, 2014). Risks may include physical, psychological, financial, and health risks from injuries, accidents, terrorism, natural disasters, political instability, and pandemics. Some outcomes of purchases may be negative, and once perceived as potentially negative by consumers, they are typically perceived as a risk. In the tourism industry, risk is considered a major concern for international travelers (Kozak et al., 2007). Due to tourists inherently seeking security, travel
decisions made in situations of uncertain risk can be heavily influenced by safety and security concerns (Beirman, 2002). In addition, the experiential and intangible nature of tourism often leads to higher levels of non-systematic risk perceived by tourists (Fuchs, 2013). The perceived risk in this study is Hainan residents risking their health due to coronavirus disease potentially being transmitted by tourists. Due to the risk COVID-19 poses, tourism increases the risk for Hainan residents to unknown levels. Therefore, perceived risk, as it relates to the transmission of coronavirus, is defined as the uncertainty of risk residents of Hainan face.
Since the 1990s, researchers have studied various pandemics' impacts on tourism decisions and tourist behavior (Li et al., 2020; Choe et al., 2021; Sánchez-Cañizares et al., 2021). In particular, the economic impact of pandemics on tourism and tourism intentions has been widely discussed. For instance, severe diseases such as SARS, avian influenza, and the Middle East respiratory syndrome have severely affected the tourism industry (Floyd et al., 2004; Lee et al., 2012). The global outbreak and prevalence of COVID-19 over just 3 years have led to an increase in travel-related empirical studies. Indeed, objective risks may only affect travelers' behavior when they are perceived. Similarly, residents face several risks when receiving foreign tourists into their country, such as compromised privacy, security problems in tourist destinations, and information about health protocols.
Numerous studies have been conducted on the relationship between perceived risk and attitudes. Andrews et al. (2014) investigated Australian public perception of personally controlled electronic health records; the results show that perceived risk negatively affects attitudes. Another study on online consumer purchase behavior suggested that perceived risk plays a significant role in attitude (Sadiq et al., 2021). Notably, Bae and Chang (2021) examined the effect of perceived risk from COVID-19 on willingness to travel "non-contact" during the first wave of the pandemic in Korea in March 2020. Their results suggest that emotional risk perception has a significantly positive effect on attitudes toward non-contact travel. Based on this literature, this study defines attitudes as the positive or negative perceptions of Hainan residents toward tourists after COVID-19. The subjective norm is the magnitude of influence that the perceptions of friends, neighbors, co-workers, family, and community residents regarding visitors have on Hainan residents. Intention is the degree of acceptance of tourists by residents; when residents' perceived risk of tourists increases, their attitude toward tourists is affected, and the higher the perceived risk, the more negative the attitude. Therefore, this study proposes hypothesis 2.
H2: Residents' perceived risk of tourists has a significantly negative effect on Hainan residents' attitudes toward tourists.
Perceived value
Perceived value is defined as the consumer's overall assessment of the utility of a product based on perceptions of what is received and what is given (Zeithaml, 1988). Consumer-perceived value is the consumer's perceived performance-to-price ratio, and consumers' price perception strongly influences perceived value (Varki and Colgate, 2001). Perceived value has been measured by assessing the range of consumer experiences (Sweeney and Soutar, 2001) and by measuring the difference between actual costs and perceived benefits (Gallarza and Saura, 2006). In this study, perceived value is Hainan residents' post-COVID-19 perception that tourism meets actual local needs and has a positive effect on the local economy, that is, that visitors are welcome. In other words, perceived value is defined as residents' perception that tourists bring direct and positive economic benefits to Hainan.
Many empirical studies in tourism focus on perceived value. Young tourists' perceived value of nature-based tourism experiences shapes trip outcomes, including overall satisfaction, word of mouth, and intent to revisit (Caber et al., 2020). Perceived value, destination image, and tourist satisfaction in war tourism predict the indirect and direct effects of quality tourism experiences on behavioral intentions (Ghorbanzadeh et al., 2021). The perceived value of upscale experiences affects tourists' attitudes and intentions to travel responsibly, with attitudes moderating intentions (Um and Yoon, 2021). Tourism motivation enhances the quality of experience and perceived value (Lu et al., 2021), and the value of tourism is enhanced by improving positioning strategies and promoting tourism niches (Jamal et al., 2011). One study concluded that, alongside trust and habit, perceived value is an influencing factor explaining the intention to travel by air. Um and Yoon (2021) found that the perceived condition value of tourism upscaling influences the intention to conserve, indicating the importance of conditions in tourism areas. In addition, Caber et al. (2020) identified young visitors' perceived value as an important determinant of overall satisfaction and behavioral intentions (i.e., word-of-mouth and revisit intentions). Several studies also show that individuals' performance and effort expectations indirectly influence their adoption intentions through perceived value.
Building on this literature, the present study examines perceived value as it relates to Hainan residents' perception of tourists under the influence of the COVID-19 pandemic. It is inferred that local residents' perceived value of tourists has a positive impact on acceptance intention: the more positive the perceived value, the higher the tolerance toward tourists. This study therefore proposes hypothesis 3.
H3: Hainan residents' perceived value of tourists has a significant positive effect on their acceptance intention toward tourists.
Attitude, subjective norms, and resident acceptance intention
Numerous studies have investigated the correlation and positive effects between attitudes and adoption intention. Krajaechun and Praditbatuga (2019), using Pearson's correlation coefficients, showed that the acceptability of non-life insurance conditions and subjective normative attitudes toward the product were significantly associated with the intention to purchase non-life insurance. Nomi and Sabbir (2020) examined individual-level factors to explain changes in intention to purchase life insurance, empirically demonstrating the value of the TRA, and found that attitudes and subjective norms have the greatest impact on purchase intentions. Um and Yoon (2021) argued that attitudes influence the intention to conserve and to participate in responsible tourism.
The positive correlation and effects between subjective norms and adoption intentions are also frequently studied. For example, Polat et al. (2021) showed that trust and social norms in airline services directly and indirectly affect air travel intentions by reducing perceived risks. Firouzbakht et al. (2021) found that health beliefs and subjective norms exert a considerable indirect effect on behavioral intentions (indirect beta = 0.35), with the highest intention coefficient (β = 0.626) observed for subjective norms. Drawing on these studies, hypotheses 4 and 5 examine the acceptance of tourists by residents of Hainan as impacted by the COVID-19 pandemic. It can be inferred that residents' attitude toward tourists has a positive effect on their acceptance: the more positive the attitude, the higher the acceptance of tourists. By the same reasoning, residents' perceptions of tourists are influenced by their friends and relatives, and the stronger the influence of subjective norms, the higher the acceptance of domestic and overseas tourists. Therefore, hypotheses 4 and 5 are proposed.
H4: The attitude of residents of Hainan toward tourists has a positive and significant effect on the intention to accept tourists. H5: There is a positive and significant effect of Hainan residents' subjective norms toward tourists on the intention to accept tourists.
Trust, perceived risk, attitude, and resident acceptance intention
Research on attitudes as a mediating variable is extensive. In Harb et al. (2021), attitudes were found to mediate the effects of cultural exploration and subjective norms on students' behavioral intentions. Um and Yoon (2021) likewise verified the mediating role of attitudes in the relationship between tourists' perceived experience value and intention to travel responsibly. Many studies have also investigated attitude as a mediator between trust and intention to travel. Kasri and Ramli (2019), studying Muslim donations in Indonesia, found that attitudes mediated the relationship between trust and intention to donate. Shin et al. (2022) noted that the COVID-19 pandemic forced travel-related agencies to develop effective strategies to attract travelers, and developed an integrated framework to explain the effects of pandemic-period and post-pandemic travel facilitators, constraints, and attitudinal factors on travel decisions. Their results identified the specific factors determining travel decisions (travel intentions and frequency) during and after a pandemic, and showed that attitude mediated the relationship between trust and intention to travel.
Attitudes also mediate between perceived risk and acceptance intention. Hung et al. (2006) explored e-government services and found that attitudes toward using the service mediate between perceived risk and acceptance intention. Further, Andrews et al. (2014) reported that Australian residents' attitudes toward personally controlled electronic health records mediate between perceived risk and acceptance intention. According to the above literature, residents' attitudes toward foreign tourists should mediate between their trust and acceptance intention, and simultaneously between their perceived risk and acceptance intention. Accordingly, hypotheses 6 and 7 are proposed.
H6: Residents' attitudes toward tourists have a mediating effect on trust and acceptance intention. H7: A mediating effect exists in Hainan residents' attitude toward tourists between perceived risk and acceptance intention of tourists.
This study proposes a research framework as shown in Figure 1.
Research design Research subjects and data collection
This study applied a quantitative method, using a questionnaire to collect data aimed at exploring residents' intention to accept tourists after the COVID-19 pandemic. The target group was residents of Hainan Province, and the sampling design was based on purposive sampling. The questionnaire was distributed and completed in paper and electronic form from 15 July to 10 November 2021 in the east, south, and north of Hainan Province. Data were collected from international companies, domestic enterprises and institutions, community groups, religious groups, civic associations, and social media groups.
Description of research variables
The questionnaire was divided into two major parts. The first part on basic personal information had six categories including gender, marital status, place of residence, monthly income, occupation, and education level. The second part on perceived risk and trust and acceptance intention by residents toward tourists after the COVID-19 pandemic had five variables of trust, perceived risk, perceived value, attitude, and subjective norm, in addition to resident acceptance intention.
Questionnaire design for the measurement of latent variables
The measures were designed on a 7-point Likert scale, with "1" strongly disagree, "2" disagree, "3" somewhat disagree, "4" neutral, "5" somewhat agree, "6" agree, and "7" strongly agree. The higher the score, the higher the respondent's level of agreement with the study variables. After the questionnaire was drafted, tourism industry specialists and academic tourism experts reviewed and refined its content. The questionnaire contained 6 variables and 29 items; the literature sources are shown in the Appendix.
Variable measures of acceptance intention
The construct of acceptance intention was adapted from Fichten et al. (2016), who used the theory of planned behavior to predict the graduation intentions of Canadian and Israeli postsecondary students with and without learning disabilities. The original intention items were "I intend to complete my program of studies," "I will try to complete my program of studies," "I expect to complete my program of studies," "I am determined to complete my program of studies," and "All things considered, it is possible that I might not complete my program of study." The items were adapted through translation, and a total of five questions were adopted for this study.
Variable measures of perceived value
This study referred to Liu et al. (2010) for questions on perceived value. Sample questions from the value of mobile phone services section include: "The service of the cell phone is good value for money, " "The service of mobile phone is a good buy, " "The price of mobile phone service is economical, " and "The service of the mobile phone is worthwhile." The questions were revised and four questions were finalized.
Variable measures of perceived risk
The construct of perceived risk was adapted from Chen's (2012) study, which examined the factors affecting the intention to continue using mobile banking. The original perceived risk items included "Using m-banking to pay my bills would be risky," "M-banking is dangerous to use," "Using m-banking would add great uncertainty to my bill paying," "Using m-banking exposes you to an overall risk," and "On the whole, …".
Variable measures of perceived trust
The basis of our questions for perceived trust was formulated from Lin et al. (2010), which investigated whether online team members can be trusted. The original trust items were "I consider our online team members as people who can be trusted," "I consider our online team members as people who can be counted on to do what is right," "I consider our online team members as people who can be counted on to get the job done right," and "I consider our online team members as people who are always faithful." The fifth question was designed by revising and summarizing the referenced items. A total of five questions were adapted.
Variable measures of attitudes
Questions for the attitude construct were based on Mehrad and Mohammadi (2017), which examined attitudes toward the process of using mobile banking services. The original attitude items were "Using mobile banking services is compatible with my lifestyle," "Using banking services is compatible with most banking activities," "Using mobile payment services is a wise idea," and "Using mobile payment services is beneficial." A total of five questions were set.
Variable measures of subjective norms
The questions on subjective norms were influenced by a 2018 study exploring behavioral intentions in recycling (Taufique and Vaithianathan, 2018). The items from that study included "Most of my friends think I should recycle household garbage," "Most of my neighbors think I should use environmentally friendly household products," "Most of my neighbors think I should recycle," "Most of my co-workers think I should use environmentally friendly household products," and "Most of my family members think I should use environmentally friendly products." A total of five questions were designed.
Data analysis
Data analysis was conducted in three stages: descriptive analysis, measurement model verification, and structural equation modeling analysis. Descriptive analysis in SPSS comprised two parts: frequency distributions of the demographic data, to provide a basic understanding of the sample, and the mean and standard deviation of each construct. A two-step procedure was then followed to evaluate the measurement model and the structural model (Anderson and Gerbing, 1988). Reliability was confirmed through confirmatory factor analysis (CFA), including composite reliability (which measures the internal consistency of each construct), convergent validity, and discriminant validity. In the third stage, structural equation modeling (SEM) was conducted to test model fit and the hypotheses of the research framework; the structural model analysis included factor analysis, path analysis, and mediation analysis using AMOS (SPSS Inc., Chicago, IL, USA).
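The composite reliability and average variance extracted criteria used in the CFA stage can be computed directly from standardized factor loadings. The sketch below illustrates the standard formulas; the loadings are hypothetical, not the study's actual estimates.

```python
# Sketch: composite reliability (CR) and average variance extracted (AVE)
# from standardized factor loadings. Loadings are hypothetical examples.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    sum_l = sum(loadings)
    sum_err = sum(1 - l ** 2 for l in loadings)  # error variance = 1 - loading^2
    return sum_l ** 2 / (sum_l ** 2 + sum_err)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

trust_loadings = [0.85, 0.88, 0.90, 0.82, 0.86]  # hypothetical 5-item construct
cr = composite_reliability(trust_loadings)
ave = average_variance_extracted(trust_loadings)
print(f"CR = {cr:.3f} (criterion > 0.60), AVE = {ave:.3f} (criterion > 0.50)")
```

With these illustrative loadings, both criteria are met, mirroring the thresholds cited from Fornell and Larcker (1981) and Nunnally and Bernstein (1994).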
Descriptive analysis
Sample background data statistics

The basic data surveyed comprised six items: gender, marital status, place of residence, monthly income, occupation, and education level. Among the respondents, N = 275 (61.52%) were women. A majority were married, N = 264 (59.06%), and residents of Haikou, N = 273 (61.07%). Most had a monthly income below 5,000 RMB, N = 281 (62.86%). Many worked in occupations other than those listed, N = 171 (38.26%), and the majority held a bachelor's or college degree, N = 272 (60.85%). The results are shown in Table 1. Table 2 displays the descriptive statistics for the N = 447 valid questionnaires. The minimum value was 1 and the maximum was 7, with all values within the 1-7 range, indicating no construction errors in the variables. The means were between 4.15 and 5.28, meeting the criterion that no mean be >6 or <2, which indicated that all items had discriminatory power. Skewness ranged from −1.02 to 0.21 and kurtosis from −0.91 to 0.54, meeting the criteria of absolute skewness <2 and absolute kurtosis <7 (Kline, 2005), indicating that the data conform to a normal distribution. The highest mean, 5.28, was for PVQ1 and the lowest, 4.15, for PRQ5, meaning that respondents agreed most with PVQ1 and least with PRQ5 (Table 2). Standard deviations ranged from 1.46 to 1.8, showing a consistent degree of dispersion among respondents for each item.
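The skewness and kurtosis screen cited from Kline (2005) can be sketched as follows; the item responses below are hypothetical 7-point Likert data, not the study's raw data.

```python
# Sketch: screening an item's distribution against the normality criteria
# cited in the text (|skewness| < 2, |excess kurtosis| < 7; Kline, 2005).
from statistics import mean

def skewness(x):
    m, n = mean(x), len(x)
    s = (sum((v - m) ** 2 for v in x) / n) ** 0.5
    return sum((v - m) ** 3 for v in x) / (n * s ** 3)

def excess_kurtosis(x):
    m, n = mean(x), len(x)
    s2 = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / (n * s2 ** 2) - 3

item = [5, 6, 4, 7, 5, 3, 6, 5, 4, 6, 2, 5]  # hypothetical responses
sk, ku = skewness(item), excess_kurtosis(item)
print(f"skewness = {sk:.2f}, excess kurtosis = {ku:.2f}")
assert abs(sk) < 2 and abs(ku) < 7, "item departs from approximate normality"
```

In practice this check would be run per item, as the paper does for all 29 items in Table 2.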
Reliability and validity analysis
According to Anderson and Gerbing (1988), a complete SEM evaluation consists of evaluating the measurement model and then the structural model; only when the measurement model is adequate should the structural model be assessed
(Kline, 2011). The measurement model was estimated using maximum likelihood. The estimated parameters included standardized factor loadings, squared multiple correlations (SMC), composite reliability, and average variance extracted (AVE) (Table 3). Standardized factor loadings >0.60 are acceptable and ideally should be >0.70 (Chin, 1998). Some scholars have suggested that items with standardized factor loadings below 0.45 indicate measurement error (Hair et al., 1998), i.e., the item is too broad and should be removed (Hooper et al., 2008). Others have concluded that the standardized factor loading for each indicator variable should be >0.50, composite reliability should be >0.60, and AVE should be higher than 0.50 (Fornell and Larcker, 1981; Nunnally and Bernstein, 1994); meeting these criteria supports good convergent validity of the measurement model. As shown in Table 3, the standardized factor loadings ranged from 0.758 to 0.941, indicating that each item had adequate reliability. Composite reliability (CR), which is suited to structural equation models and plays a role similar to Cronbach's alpha, was used as the reliability indicator for the constructs; CR for each construct ranged from 0.935 to 0.973. Previous studies
have suggested that composite reliability should be >0.7; thus, all constructs met the criterion for good internal consistency. AVE values ranged from 0.744 to 0.878; since all exceeded 0.5, each construct shows good convergent validity (Fornell and Larcker, 1981; Hair et al., 1998). Fornell and Larcker (1981) also suggested that discriminant validity should consider the relationship between convergent validity and inter-construct correlations: the square root of a construct's AVE should be greater than its correlation coefficients with the other constructs. In this study, the square root of the AVE for each construct on the diagonal is greater than the off-diagonal correlation coefficients (Table 4). Therefore, each construct has good discriminant validity.
Structural equation modeling analysis
Structural model analysis

Structural model analysis was performed using maximum likelihood estimation. The results include model fit, significance tests of the research hypotheses, and explained variance (R²). The null hypothesis in SEM is that the sample covariance matrix equals the model-implied covariance matrix. However, SEM is a large-sample method, so the p-value can easily fall below 0.05, which often leads to wrongly rejecting the hypothesis and
concluding that the model is deficient. Schumacker and Lomax (2010) and Kline (2011) therefore concluded that model fit should not be judged by the p-value alone; rather, a variety of goodness-of-fit indicators should be reported. Jackson et al. (2009) surveyed fit reporting in 194 international academic journal articles, providing a blueprint for model fit analysis; the present study reports the most widely used fit indices (Table 5). In principle, the lower the χ² value the better; however, since χ² is very sensitive to sample size, this study uses χ²/df to reduce that sensitivity and assist the assessment. Ideally, this value should be <3. Hu and Bentler (1999) suggested that each fit index be assessed independently and that more rigorous joint criteria be applied to control type I errors, namely standardized RMR (SRMR) < 0.08 together with CFI > 0.90 or RMSEA < 0.08. The model fit meets these criteria (Table 5), suggesting the model has a good fit.
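The fit cut-offs cited above can be collected into a single check. The fit values passed in below are hypothetical placeholders, not the estimates reported in Table 5.

```python
# Sketch: checking SEM goodness-of-fit statistics against the cut-offs cited
# in the text (chi2/df < 3, CFI > 0.90, RMSEA < 0.08, SRMR < 0.08).

def model_fit_ok(chi2, df, cfi, rmsea, srmr):
    checks = {
        "chi2/df < 3": chi2 / df < 3,
        "CFI > 0.90": cfi > 0.90,
        "RMSEA < 0.08": rmsea < 0.08,
        "SRMR < 0.08": srmr < 0.08,
    }
    for name, ok in checks.items():
        print(f"{name}: {'pass' if ok else 'fail'}")
    return all(checks.values())

# Hypothetical fit statistics for illustration only
print(model_fit_ok(chi2=812.4, df=362, cfi=0.95, rmsea=0.052, srmr=0.043))
```

Reporting several indices jointly, as here, follows the Hu and Bentler (1999) recommendation rather than relying on the χ² p-value alone.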
Path analysis
The path analysis results for the research model are shown in Table 6, and Figure 2 presents the proposed conceptual model.
Analysis of mediation effect
The statistical confidence interval for the indirect effect was generated using bootstrapping (resampling). Table 7 shows that for the indirect effect of perceived risk → resident acceptance intention, p ≥ 0.05 and the bias-corrected confidence interval contains 0 [−0.074, 0], meaning the indirect effect is not significant. In contrast, for the indirect effect of trust → resident acceptance intention, the confidence interval does not contain 0 [0.005, 0.426], indicating that the indirect (mediating) effect holds.
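The bootstrap procedure behind Table 7 can be sketched as follows. The data are simulated with assumed path coefficients (a = 0.5, b = 0.4), and the b path is deliberately simplified (not partialled for X, as a full mediation model in AMOS would be); a real analysis would use SEM software.

```python
# Sketch: bootstrapping an indirect (mediation) effect a*b by resampling
# cases, re-estimating the X->M and M->Y slopes, and taking percentile
# confidence limits. Data and coefficients are simulated, not the study's.
import random
random.seed(1)

n = 300
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]   # X -> M path (a ~ 0.5)
y = [0.4 * mi + random.gauss(0, 1) for mi in m]   # M -> Y path (b ~ 0.4)

def slope(u, v):
    """Simple OLS slope of v regressed on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = sum((ui - mu) ** 2 for ui in u)
    return num / den

boots = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]          # resample cases
    xs, ms, ys = ([x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx])
    boots.append(slope(xs, ms) * slope(ms, ys))            # indirect effect a*b
boots.sort()
lo, hi = boots[24], boots[974]                             # 95% percentile CI
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

An interval that excludes 0, as in the trust → acceptance intention path of Table 7, supports mediation; an interval containing 0 does not.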
Discussion and conclusions with implications
This study explored the main variables influencing Hainan residents' acceptance intention regarding tourists after the COVID-19 pandemic, based on the theory of reasoned action, and combining the factors of "perceived value, " "trust, " and "perceived risk" with "attitude" as mediating variables. A research model and related hypotheses were proposed. After collecting data through questionnaires, SEM was used to test the model and verify the hypotheses.
Theoretical contributions
The main aim of this study was to assess Hainan residents' intention to accept tourists in the context of the COVID-19 pandemic and to explore the influence of residents' perceived value on their acceptance intention, while also considering perceived risk and trust. This study endeavors to fill the research gap on this subject by providing essential empirical insights into the factors influencing tourist acceptance intentions in the post-COVID-19 era. First, trust affects the attitude of Hainan residents toward tourists. Trust is the most critical independent variable for tourist acceptance in this study, showing a significantly positive effect with a standardized regression coefficient of 0.784. This aligns with Sadiq et al. (2021), who identified trust as one of the most important variables in online travel purchases, strongly influencing consumers' attitudes toward online buying. Most scholars have explored the relationship between trust and intention from the consumer's or tourist's perspective on products and services; this research instead explores acceptance from the residents' perspective. Residents' high level of trust is directly reflected in their attitudes toward tourists, and when residents receive tourists with a friendly attitude, tourists naturally have a pleasant experience.
Second, perceived risk affects the attitude of Hainan residents; although the effect is significantly negative, it is small. These findings are similar to those of Andrews et al. (2014), which showed that perceived risk has a significantly negative effect on attitudes. As Polat et al. (2021) confirm, the variable with the greatest impact on explaining air travel intentions under the COVID-19 pandemic is perceived risk. Overall, this study appears consistent with previous findings. The findings also indicate that Hainan
residents are sympathetic and kind, a very important trait for a tourist city so that guests have a welcoming experience. Although tourists' perceived risk is high, its impact coefficient is very small, with a standardized regression coefficient of −0.076. China's comprehensive protection against the pandemic, especially the government's implementation of a series of effective and rapid measures, is another important factor. For example, China's national real-name health code network platform was rapidly established: alongside the national health code platform, each local government simultaneously launched a provincial health code, and the two were combined for integrated supervision. Personal information is updated synchronously, and the system's records include individual vaccination dates, nucleic acid and antibody information, departure health information, travel track records, and vaccine prevention records. Effective and fast vaccination measures are believed to be why Hainan residents show low rejection of, no panic about, and willingness to accept visitors. Public media information mechanisms, timely updates, and announcements of the latest COVID-19 case tracking and reporting reassured the public, easing the economic hardship and fear associated with the pandemic. These are potential reasons why Hainan residents' intention to accept tourists is less affected by perceived risk.
Hainan residents' perceived value of tourists has a significant positive influence on their acceptance intention. This is similar to previous studies validating the positive effect of perceived value on air travel intentions (Polat et al., 2021). Um and Yoon (2021) found that the perceived value of tourism upscaling also had a significant impact on the willingness to conserve, indicating the importance of increasing the perceived value of tourist areas. Thus, in pursuing economic value and actual perceived value, Hainan residents typically accept tourists. Effective economic recovery is very important for residents' daily livelihoods, and perceived value is one of the main considerations, especially given Hainan's geographical location and the Chinese government's national-level orientation toward the tourism market.
Based on the theory of reasoned action, Hainan residents' acceptance intention toward tourists is directly influenced by subjective norms. This study verifies the relationships among attitude, subjective norms, and intention. The findings indicate that Hainan residents comply with local norms and actively cooperate with the series of national and governmental legal measures for the COVID-19 pandemic. Residents' tourist acceptance intentions also depend strongly on local governmental regulations and on the influence of the workplace, the local community, and surrounding family and friends.
Substantive contributions
The results of this study can effectively assist various stakeholders, including local governments and the tourism, hotel, and restaurant industries. First, this study found that Hainan residents' attitude toward and acceptance of tourists are high, suggesting that the local government has been very successful in promoting and educating residents about receiving tourists. It is recommended that this friendly promotion and education continue. Furthermore, perceived value has a significantly positive effect on local residents' acceptance of tourists. It is therefore recommended that the local government and the tourism, hotel, and catering industries adopt a series of reasonable and effective strategies, product branding, and destination attachment to persuade tourists to choose higher-level products or services, and that tourism support facilities be standardized and improved to provide tourists with friendlier, better-quality services in the local market. Second, the focus should be on developing various types of services, including experiential consumption, diversified and innovative tourism consumption products, effectively targeted personalized tourism products, responsiveness to dynamic market demand, integrated marketing communications, and multi-directional information placement to stimulate tourist consumption and continuously attract tourists in a demand-oriented way. Finally, based on the comprehensive needs of residents and visitors, rich and professional practical training services should be provided, with enhanced accessibility for residents. Another major finding is that perceived risk has a direct negative influence on attitudes, but the influence is small.
Hainan residents show low perceived risk and little rejection of visitors, indicating that the local government has done well in providing comprehensive protective measures, a series of security services, and a safe and secure system during the COVID-19 pandemic, allowing for low risk perception regarding tourists. It is recommended that such precautionary measures and local safety and security services be maintained continuously; such measures are believed to be the reason residents are at ease and willing to accept tourists.
Research limitations and future developments
This study focused on Hainan residents, and only core cities in Hainan Province, such as Haikou, Sanya, Wanning, and Wenchang, were sampled; the regional selection does not cover the whole province. The proportion of respondents from Haikou is relatively high (61.07%), so the data are unevenly distributed across cities. College- and university-educated residents accounted for 60.85% of the sample. More highly educated respondents answered more readily, adapted more flexibly to emergency scenarios, and were more willing to accommodate visitors in the context of the pandemic; this group also includes a higher percentage of people who know how to use the online real-name health code effectively and how to take safety precautions in special situations such as the pandemic. The data may therefore not fully represent the attitudes and intentions of people with lower education levels. Furthermore, the construct design used a limited number of variables, and the concepts themselves could be extended; for example, perceived risk could be subcategorized, and safety SOP norms could be explored separately. In addition, this study focuses on current Hainan residents; in the future, its methods and conclusions could be applied to other regions of China or to other countries, especially tourist destinations. Expanding our understanding of residents' willingness to accept tourists in the context of the COVID-19 pandemic across countries would be useful.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants or the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
Author contributions
HZ devised the project, the main conceptual ideas, the proof outline, and all technical details of the data analysis. JI provided writing suggestions and guidelines, with help from AM. All authors contributed to the article and approved the submitted version.
Possible Roles of Sulfur-Containing Amino Acids in a Chemoautotrophic Bacterium-Mollusc Symbiosis
Invertebrate hosts of chemoautotrophic symbionts face the unique challenge of supplying their symbionts with hydrogen sulfide while avoiding its toxic effects. The sulfur-containing free amino acids taurine and thiotaurine may function in sulfide detoxification by serving as sulfur storage compounds or as transport compounds between symbiont and host. After sulfide exposure, both taurine and thiotaurine levels increased in the gill tissues of the symbiotic coastal bivalve Solemya velum. Inhibition of prokaryotic metabolism with chloramphenicol, inhibition of eukaryotic metabolism with cycloheximide, and inhibition of ammonia assimilation with methionine sulfoximine reduced levels of sulfur-containing amino acids. Chloramphenicol treatment inhibited the removal of sulfide from the medium. In the absence of metabolic inhibitors, estimated rates of sulfide incorporation into taurine and thiotaurine accounted for nearly half of the sulfide removed from the medium. In contrast, amino acid levels in the nonsymbiotic, sulfide-tolerant molluscs Geukensia demissa and Yoldia limatula did not change after sulfide exposure. These findings suggest that sulfur-containing amino acids function in sulfide detoxification in symbiotic invertebrates, and that this process depends upon ammonia assimilation and symbiont metabolic capabilities.
Introduction
Aquatic habitats such as deep-sea hydrothermal vents, mangrove swamps, eelgrass beds, and sewage outfall sites tend to be characterized by high levels of the metabolic toxin hydrogen sulfide (Fenchel and Riedl, 1970;Cavanaugh, 1983). Hydrogen sulfide diffuses freely across respiratory surfaces and therefore cannot be excluded from tissues (Denis and Reed, 1927;Julian and Arp, 1992). It also reversibly inhibits cytochrome c oxidase (Lovatt Evans, 1967;Nicholls, 1975) and decreases hemoglobin oxygen affinity (Carrico et al., 1978). Consequently, animals living in these environments require physiological mechanisms to cope with hydrogen sulfide toxicity.
Invertebrates that harbor symbiotic chemoautotrophic bacteria also require sulfide, for their symbionts utilize the chemical energy generated by hydrogen sulfide oxidation to fix carbon dioxide into carbohydrates (Ruby et al., 1981; Cavanaugh, 1983). The invertebrate host delivers hydrogen sulfide and oxygen to its symbionts and relies upon the symbiont-produced carbohydrates as a source of nutrition (Rau, 1981; Southward et al., 1981; Felbeck, 1985). This type of symbiotic relationship exists in over 100 species from at least five invertebrate phyla (Cavanaugh, 1994).
Sulfide detoxification and transport strategies have been well studied in the coastal protobranch bivalves of the genus Solemya, which harbor approximately 1 × 10⁹ intracellular chemoautotrophic symbionts per gram wet weight of gill tissue (Cavanaugh, 1983; Felbeck, 1983). These clams generally form U- or Y-shaped burrows in shallow-water reducing sediments and pump oxygenated water through their burrows (Frey, 1968), which enables them to simultaneously acquire the dissolved oxygen and sulfide needed for chemoautotrophy.
To detoxify sulfide, Solemya velum and S. reidi, two well-studied species, use several strategies apart from utilization by their symbionts. For example, S. velum has two types of cytoplasmic hemoglobins: one binds oxygen, and a second, which combines with sulfide to form ferric hemoglobin sulfide, may mediate sulfide transport to the symbionts (Doeller et al., 1988). The sulfide-binding form of cytoplasmic hemoglobin is not present in S. reidi (Kraus et al., 1992); rather, sulfide oxidation occurs in the mitochondria, hematin, and sulfide-oxidizing bodies (Powell and Somero, 1985, 1986; Powell and Arp, 1989).
The exceptionally high levels of taurine and the related amino acid thiotaurine (2-aminoethanethiosulfonic acid) in several chemoautotrophic symbioses have prompted investigators to consider additional roles for these amino acids. For example, taurine appears to be an end product of ammonia assimilation in S. reidi (Lee et al., 1997), and may be involved in sulfur cycling in S. velum (Conway and McDowell Capuzzo, 1992). Similarly, thiotaurine may function in sulfur cycling in deep-sea symbiotic bivalves (Alberic and Boulegue, 1990; Pruski et al., 2000; Pruski, 2001). Taurine and thiotaurine may serve as important sulfide storage compounds, allowing S. velum to maintain low levels of intracellular sulfide. Under conditions where sulfide is low or absent, taurine and thiotaurine may provide sulfide to mitochondrial and symbiont sulfide oxidation pathways. Several species of aerobic gram-negative soil and enteric bacteria use taurine as a sulfur, carbon, and nitrogen source (Stapley and Starkey, 1970; Smiley and Wilkinson, 1983; Seitz et al., 1993; King and Quinn, 1997; Chien et al., 1999; Cook et al., 1999; Reichenbecher and Murrell, 1999), and similar metabolic pathways may be present in the gram-negative chemoautotrophic symbionts. Additionally, taurine and thiotaurine production could facilitate sulfide detoxification under anaerobic conditions, thereby preventing deleterious effects such as the reaction of sulfide with metalloproteins. This strategy would be particularly beneficial in the burrow environment of solemyid clams, in which oxygen and sulfide levels vary.
In the present study, we tested the hypothesis that taurine and thiotaurine levels in intact gills of S. velum increase upon exposure to sulfide, which would occur if these amino acids are involved in sulfur storage or cycling. The effects of sulfide exposure on levels of hypotaurine (2-aminoethanesulfinic acid), a possible precursor to taurine and thiotaurine, were also examined. Host- and symbiont-specific metabolic inhibitors were used to distinguish the roles of the S. velum host and its symbionts in ammonia assimilation and the maintenance of taurine, thiotaurine, and hypotaurine pools. Flow-through respirometry studies were conducted with S. velum to quantify rates of ammonia flux and to determine whether sulfide consumption is related to fluctuations in amino acid pools. In parallel studies, we tested for correlations between sulfide and free amino acid levels in two sulfide-tolerant, nonsymbiotic bivalve species: the estuarine mussel Geukensia demissa and the protobranch bivalve Yoldia limatula.
Specimen collection and maintenance
Solemya velum Say, 1822, Geukensia demissa (Dillwyn, 1817), and Yoldia limatula (Say, 1831) were collected by the Marine Resources Center of the Marine Biological Laboratory (Woods Hole, MA) and shipped on the day of collection. In our laboratory the clams were maintained for up to 5 days in a flow-through respirometry system (see http://www.wsu.edu/~rlee/respirometer/respirometer.htm and Fig. 1). In each experiment, the temperature of the respirometry system was regulated to match the ambient water temperature at which the clams were collected, which ranged seasonally from 5 to 15°C over a nearly 2-year period. These temperature differences did not affect the free amino acid composition. Consumption of sulfide, oxygen, and ammonia was measured in a subset of experiments conducted in the late spring and at 15°C. Solemya velum and Y. limatula were maintained in 0.45-μm filtered, 35‰ artificial seawater (ASW; Instant Ocean, Aquarium Systems, Mentor, OH) supplemented with 50 μM ammonia. Geukensia demissa was maintained in filtered, 30‰ ASW supplemented with 50 μM ammonia. The ASW solution in the chambers was kept mixed with magnetic stir bars (60 to 100 rpm). ASW flow rates ranged from 0.18 to 0.19 ml min⁻¹.
Maintaining the clams in a flow-through respirometry system allowed for constant monitoring of experimental conditions. The oxygen concentrations in the chamber outflows were determined with a polarographic O₂ sensor (POS), whereas the concentrations of sulfide and, in some experiments, ammonia were measured by flow injection analysis (FIA). Throughout this paper, sulfide refers to ΣH₂S (primarily the sum of HS⁻ and H₂S), and ammonia refers to ΣNH₃ (the sum of NH₃ and NH₄⁺). Outflows were pumped directly through a flow cell containing a sulfide-insensitive gold microcathode POS (Orbisphere 2120, Geneva) followed by a series of FIA injection sample loops (Rheodyne Type 50 valves with Rheodyne 5701 pneumatic actuators, Rohnert Park, CA).
The FIA determination of sulfide involved derivatizing the sample stream with 1.5 mM 2,2′-dithiodipyridine, which forms a stable product with an absorbance maximum at 343 nm (Svenson, 1980). The derivatized sample was loaded into an injection loop and sent to a variable wavelength detector (Hyperquan VUV-20, Colorado Springs, CO). This method produces 343-nm absorbance peaks proportional to sulfide concentration (Svenson, 1980). Ammonia was determined by a modification of the FIA protocol of Willason et al. (1986). The sample was loaded into an injection loop and then treated with NaOH to raise the pH, converting ammonium to ammonia gas. Ammonia passes through a Teflon membrane into a carrier stream containing phenol red, raising the pH of the phenol red solution. The resulting color change (absorbance at 560 nm) is monitored with a detector fabricated from a green LED and a phototransistor. POS and FIA detector outputs were continuously recorded with a data acquisition system (Sable Systems, Datacan V, Henderson, NV), which was also used to control FIA injection valve actuation.
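Since the text states that FIA absorbance peaks are proportional to analyte concentration, peak heights can be converted to concentrations against a standard of known concentration. A minimal sketch of such a one-point linear calibration follows; the function and variable names are illustrative assumptions, not taken from the paper:

```python
def peak_to_concentration(peak_height, standard_peak_height, standard_conc_uM):
    """Convert an FIA absorbance peak height to concentration (uM),
    assuming a linear detector response through the origin."""
    return peak_height / standard_peak_height * standard_conc_uM

# e.g. a peak half the height of a 200 uM standard's peak reads as 100 uM
half_peak_conc = peak_to_concentration(0.5, 1.0, 200.0)
```

In practice a multi-point standard curve with a fitted slope and intercept would be more robust, but the proportionality stated in the text reduces to this single-ratio form.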
Three respirometer chambers containing clams, one control chamber, and two standards were connected to a six-way stream-selector valve (Fig. 1). Every 0.13 h the valve was automatically switched, allowing sampling from all six channels every 0.8 h. Peak height (for FIA) or average output (for POS) was calculated to determine concentration differences between chambers containing clams and the control chamber. These data were continuously monitored to ensure, in the case of oxygen measurements, that the clams were not exposed to hypoxic conditions. Direct measurements of sulfide in samples taken from the respirometer outflow were used to standardize sulfide data obtained by FIA. Fluxes were calculated as follows:
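The flux equation itself did not survive extraction. The following is only a plausible sketch of a standard flow-through respirometry flux calculation, built from the quantities the text does describe (the concentration difference between the control and animal chambers, the flow rate, and wet weight); the function name, variable names, and sign convention are my assumptions:

```python
def consumption_flux(c_control_uM, c_animal_uM, flow_ml_min, wet_weight_g):
    """Whole-animal consumption flux in umol g^-1 h^-1.

    Positive values mean the animal removed solute from the seawater,
    i.e. the outflow of its chamber is more dilute than the control chamber's.
    """
    delta_uM = c_control_uM - c_animal_uM        # umol/L removed from the stream
    flow_l_per_h = flow_ml_min * 60.0 / 1000.0   # ml/min -> L/h
    return delta_uM * flow_l_per_h / wet_weight_g
```

For illustration only: at a flow of about 0.19 ml min⁻¹, a 450 μM sulfide depletion by a 2-g animal would give roughly 2.6 μmol g⁻¹ h⁻¹, the same order as the whole-animal sulfide consumption reported in the Discussion.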
Experimental treatments
In all experiments, the clams were acclimated for 24 h in ASW prior to sulfide or inhibitor exposure. In experiments in which clams were exposed to sulfide, a syringe pump (Harvard Apparatus 944, South Natick, MA) was used to meter a 20 mM Na₂S solution (Fig. 1C) into the ASW solution before its entry into the respirometer chambers. Final sulfide concentration in the chambers was 0.45 ± 0.05 mM. Clams assayed for amino acid levels were exposed to sulfide for 24 h. In some experiments, a metabolic inhibitor was added to the ASW solution after the 24-h acclimation period. The final concentrations of these inhibitors were 0.9 mM chloramphenicol (Sigma, St. Louis, MO), 0.02 mM cycloheximide (Sigma), and 0.25 mM methionine sulfoximine (MSX; Sigma). In experiments in which sulfide consumption was examined, the clams were acclimated for 24 h in ASW, then exposed to sulfide for up to 100 h. During the final 24 h of the sulfide exposure, metabolic inhibitors were added to the ASW solution. The clams were weighed before and after treatment.
Amino acid analyses
After experimental treatment, the bivalves were opened by severing their adductor muscles. The gills and foot of each clam were dissected free of other tissues, blotted briefly on a paper towel, then individually frozen in liquid nitrogen and stored at −80°C. The gills and feet were individually homogenized in distilled water (1:25, tissue/dH₂O) on ice. To precipitate proteins, the homogenates were treated with 5% sulfosalicylic acid (Sigma) in a 1:10 ratio of sulfosalicylic acid/homogenate (Lee and Slocum, 1988). The solutions were centrifuged for 5 min at 16,000 × g, and the supernatants were stored at −80°C.
Total free amino acids were quantified in a Beckman 6300 amino acid analyzer following a protocol modified from Lee and Slocum (1988). The samples were diluted (1:30, by volume) with Li-S buffer (96.8% H₂O, 1% LiCl, 1% thiodiglycol, 0.7% HCl, 0.5% benzoic acid; pH 2.2; Beckman Coulter, Inc., Fullerton, CA). Of the 240-250 μl of sample or standard loaded onto the sample loops, 50 μl of each solution was analyzed. The amino acids were separated in Li-A buffer (98% H₂O, 1% Li citrate, 0.5% LiCl, 0.5% HCl; pH 2.8; Beckman Coulter) on a 10-cm ion exchange column and reacted in-line with ninhydrin solution. Absorbances were monitored at 570 nm and 440 nm. After preliminary experiments, only taurine, hypotaurine, and thiotaurine were quantified. The standards were 200 μM taurine (Sigma) and 20 μM hypotaurine (Sigma) in Li-S buffer. The thiotaurine standard was prepared by dissolving 0.0011 g hypotaurine and 0.05 g Na₂S·9H₂O (Fisher Scientific, Fair Lawn, NJ) in deionized water, heating to 100°C, acidifying the solution with 1 M HCl, and evaporating the solution (Cavallini et al., 1963).

Figure 1. Schematic of flow-through respirometry system. (A) CO₂, N₂, and O₂ gas mixture. (B) Seawater-gas equilibration column with pH regulation. (C) Syringe pump for metering sodium sulfide stocks into seawater entering the chambers. The syringe pump is also used to meter inhibitor into chamber 2. (D) Water-jacketed chambers containing clams, with at least one empty chamber to function as a control. Outflows from the chambers are pumped to a pneumatically actuated six-way stream-selector valve (E). The position of this valve determines whether the outflow goes to waste or to the analysis system. The analysis system consists of a flow injection analyzer for sulfide and ammonia determination and an O₂ sensor mounted in a flow cell (F). The analyzer and stream-selector valve are linked to a data acquisition and automated control system (G).
This standard was verified by mass spectrometry.
The temperature of the column affected the separation of thiotaurine from taurine and the reactivity of hypotaurine with ninhydrin. Thiotaurine could be detected only at 70°C. The values of hypotaurine reported here are only from analyses run at 45°C, because hypotaurine was not as reactive with ninhydrin at 70°C. The temperature did not affect taurine levels. Reported values for taurine are averages between levels detected at 45°C and 70°C.
Results are presented as the mean ± the standard error and as average rates of synthesis per gram wet weight over a 24-h period, calculated as [(amino acid level in μmol g⁻¹ wet weight in clams exposed to sulfide) − (amino acid level in clams not exposed to sulfide)] / 24 h, assuming the synthesis rate was linear over the 24-h period. Differences among means were detected using one-way ANOVA for each amino acid in S. velum samples (Statistica, Statsoft, Inc., Tulsa, OK). The appropriate comparisons were analyzed with Fisher's LSD procedure (Statistica). The G. demissa and Y. limatula amino acid data were analyzed with two-sample t tests (Statistica).
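The bracketed rate estimate can be written out directly. A small sketch, with illustrative names and assuming linearity over the exposure period as the text does:

```python
def avg_synthesis_rate(level_exposed_umol_g, level_unexposed_umol_g, hours=24.0):
    """Average rate of change of an amino acid pool, in umol g^-1 h^-1,
    from the difference between sulfide-exposed and unexposed tissue levels."""
    return (level_exposed_umol_g - level_unexposed_umol_g) / hours

# a 21.36 umol/g rise over 24 h corresponds to the 0.89 umol g^-1 h^-1
# taurine rate reported in the Results
taurine_rate = avg_synthesis_rate(21.36, 0.0)
```

Note the estimate is an average over the whole exposure; instantaneous rates could differ if synthesis is not actually linear in time.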
Sulfide exposure increased taurine and thiotaurine levels in S. velum but not in two nonsymbiotic bivalves
Specimens of Solemya velum exposed to sulfide had significantly more taurine (P = 0.0034) and thiotaurine (P < 0.0001) in their gill tissue than clams not so exposed (Table 1). These values equate to average rates of change in taurine and thiotaurine levels of 0.89 μmol g⁻¹ wet weight h⁻¹ and 0.22 μmol g⁻¹ h⁻¹ over the 24-h incubation period. Hypotaurine levels were not significantly affected (P = 0.064). Cysteine and methionine, two other sulfur-containing amino acids, were below the limits of detection in all samples. Levels of the most abundant non-sulfur-containing free amino acids (alanine, glutamate, and aspartate) were unaffected by sulfide exposure (data not shown). In preliminary experiments with sulfide-exposed clams, free amino acid profiles of S. velum foot (symbiont-free) and gill tissues were similar (data not shown), as observed previously (Conway and McDowell Capuzzo, 1992). In subsequent experiments with metabolic inhibitors, only taurine, hypotaurine, and thiotaurine levels in the symbiont-containing gill tissues of S. velum were quantified.
Gill tissue from the nonsymbiotic bivalve species Geukensia demissa and Yoldia limatula contained less taurine (P < 0.0001, non-sulfide-exposed) than S. velum (Table 1), but comparable levels of hypotaurine (P = 0.068) and thiotaurine (P = 0.648). The concentrations of these amino acids were the same whether or not the bivalves were exposed to sulfide (all comparisons, P > 0.05).
Metabolic inhibitors decreased taurine and thiotaurine levels in S. velum gills
Table 1 notes: Data are mean ± SEM in μmol amino acid/g wet weight of gill tissue. (n), Number of replicates; MSX, methionine sulfoximine. (a) Significant differences (P < 0.05) between clams exposed to sulfide (+Sulfide) and not exposed (−Sulfide). (b) Significant differences (P < 0.05) between clams treated with inhibitor and not treated.

To investigate the role of the chemoautotrophic symbionts, clams were exposed to chloramphenicol, a specific inhibitor of bacterial protein synthesis (Burnap and Trench, 1989), at a concentration previously determined to disrupt symbiont metabolism but to be nontoxic to the host (R. W. Lee, unpubl.). To examine the role of the host, clams were exposed to cycloheximide, a specific inhibitor of eukaryotic protein synthesis (Burnap and Trench, 1989), at a concentration nontoxic to the host for the duration of the treatment.
In additional experiments, the effects of the ammonia assimilation inhibitor, methionine sulfoximine (MSX; Rees, 1987) were examined; MSX inhibits glutamine synthetase, which has been detected in S. velum tissues (Lee et al., 1999). To ensure complete inhibition of ammonia assimilation, the MSX level was 10-fold higher than that utilized by Rees (1987). Taurine, hypotaurine, and thiotaurine levels in clams not exposed to sulfide were not affected by exposure to any of the inhibitors (Table 1). Additionally, the wet weights of whole clams were not altered by treatment with sulfide or metabolic inhibitors. When clams were treated with the three metabolic inhibitors, the usual sulfide-induced increase in taurine levels was not observed (P values for comparisons between −sulfide and +sulfide, in the presence of inhibitors: chloramphenicol, P = 0.552; cycloheximide, P = 0.451; MSX, P = 0.707). Hypotaurine levels were not altered by sulfide exposure or treatment with inhibitors (P = 0.064). Thiotaurine levels increased after sulfide exposure, even in the presence of metabolic inhibitors (P values for comparisons between −sulfide and +sulfide in the presence of inhibitors: chloramphenicol, P < 0.001; cycloheximide, P < 0.001; MSX, P < 0.0001). This sulfide-stimulated increase, however, was reduced by treatment with chloramphenicol (P = 0.016, sulfide-exposed control clams versus chloramphenicol-treated sulfide-exposed clams).
Discussion
This study demonstrates that taurine and thiotaurine levels in Solemya velum gills increase after sulfide exposure. These two amino acids may function as nontoxic sulfide storage compounds. Inhibition of symbiont and host protein synthesis and host ammonia assimilation blocked sulfide-stimulated taurine synthesis; in contrast, only the inhibition of symbiont metabolism decreased the sulfide-stimulated thiotaurine synthesis. The maintenance of free amino acid pools depended upon the presence of functioning symbionts, sulfide consumption, and host ammonia assimilation. Thus, sulfur-containing free amino acids may also be a link between cycling of nitrogen and sulfur in chemoautotrophic symbioses.
The magnitude of changes in taurine and thiotaurine pools observed in the present study is sufficient for these amino acids to be physiologically significant sulfide storage compounds. In experiments in which clams were exposed to sulfide for 24 h, the increases in taurine and thiotaurine levels corresponded to synthesis rates of 0.89 μmol g⁻¹ wet weight h⁻¹ and 0.22 μmol g⁻¹ h⁻¹, respectively. Since taurine contains one S atom and thiotaurine contains two S atoms, this corresponds to a potential sulfide incorporation rate of 1.33 μmol g⁻¹ h⁻¹. The average rate of whole animal sulfide consumption measured under the control (no inhibitor) conditions was 2.57 μmol g⁻¹ h⁻¹, which is similar to the sulfide consumption rate of Solemya reidi under similar experimental conditions (Anderson et al., 1987). Therefore, the contribution of taurine and thiotaurine to sulfide detoxification could account for up to 50% of the total sulfide flux.
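The sulfur budget in this paragraph is simple arithmetic; restating it as a check, using only values and stoichiometry given in the text (one S atom per taurine, two per thiotaurine):

```python
taurine_rate = 0.89       # umol g^-1 h^-1, sulfide-stimulated taurine synthesis
thiotaurine_rate = 0.22   # umol g^-1 h^-1, sulfide-stimulated thiotaurine synthesis

# potential sulfide incorporation: 1 S atom per taurine, 2 per thiotaurine
incorporation = 1 * taurine_rate + 2 * thiotaurine_rate   # 1.33 umol g^-1 h^-1

whole_animal_flux = 2.57  # umol g^-1 h^-1, control whole-animal sulfide consumption
fraction = incorporation / whole_animal_flux              # ~0.52, i.e. "up to 50%"
```

The computed fraction is about 52%, consistent with the text's statement that these amino acids could account for roughly half of the sulfide removed from the medium.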
Treatment with chloramphenicol, cycloheximide, and MSX prevented the sulfide-induced increases in taurine levels exhibited by the control clams. The lack of detectable taurine synthesis in chloramphenicol-treated clams likely can be attributed to the chloramphenicol-induced cessation of sulfide consumption. Additionally, chloramphenicol may act to prevent synthesis of mitochondrial proteins that are not nuclear encoded. Therefore, effects of chloramphenicol treatment might also be ascribed to impairment of mitochondrial metabolism. However, in preliminary experiments, S. velum tolerated treatment with 0.9 mM chloramphenicol for at least 9 days, suggesting that the effects due to chloramphenicol treatment are most likely the result of disrupted symbiont metabolism rather than toxicity to the host. Treatment with cycloheximide, which inhibits eukaryotic protein synthesis (Burnap and Trench, 1989) and is functionally analogous to chloramphenicol, decreased taurine synthesis in the presence of sulfide, but did not affect sulfide consumption. These results suggest that taurine is synthesized by the host and that the cycloheximide treatment did not affect any sulfide consumption which may occur in host tissues (Powell and Somero, 1986). Exposure to MSX blocked ammonia assimilation, probably contributing to the decreased taurine levels in MSX-treated clams. These results mirror those reported by Lee and coworkers (Lee et al., 1997), who found a direct relationship between external ammonia availability and taurine levels in S. reidi.
Hypotaurine levels were not altered by exposure to sulfide or the metabolic inhibitors. These results suggest that hypotaurine is not directly involved in sulfide detoxification, nor is it an intermediate in the taurine synthesis pathway in S. velum (Cavallini et al., 1976). Alternatively, hypotaurine could have protective functions, such as by serving as a compatible osmolyte (Yin et al., 2000) or by scavenging free radicals (Huxtable, 1992).
Thiotaurine levels were greater in the gills of sulfide-exposed clams than in non-sulfide-exposed clams, regardless of exposure to metabolic inhibitors. Treatment with chloramphenicol reduced, but did not prevent, sulfide-induced thiotaurine synthesis. This reduction likely resulted from the chloramphenicol-induced cessation of sulfide consumption. Inhibition of host metabolic activity with cycloheximide did not affect thiotaurine levels in sulfide-exposed clams. Thiotaurine may be produced abiotically in host tissues, in which case host enzymatic pathways may not be necessary. Despite the MSX-induced inhibition of ammonia assimilation, thiotaurine levels in MSX-treated clams increased following sulfide exposure. Again, these results suggest that thiotaurine may be produced abiotically from precursors already present in the gill tissue and depend less upon ammonia availability.
Taurine and thiotaurine synthesis in S. velum
It is not known whether symbiotic bivalves maintain free amino acid pools by absorbing amino acids from the environment or by synthesizing them. We do know that S. reidi can take up free amino acids from sediment interstitial water (Lee et al., 1992). However, sulfur-containing amino acids were not detected in the pore water samples from S. reidi burrows (Lee et al., 1992) and were not present in the incubation medium in the experiments presented here, suggesting that solemyid clams synthesize taurine and thiotaurine. The biosynthesis pathways of taurine and thiotaurine in solemyid clams are unknown, but the results from this study suggest that these pathways require ammonia assimilation, sulfide consumption, and active symbiont metabolism.
Ammonia is present at elevated levels in the burrow environment of solemyid clams (Lee et al., 1992;Krueger, 1996), and the clams assimilate it into amino acids, which may then serve as precursors for sulfur-containing amino acids (Lee and Childress, 1994). The present study indicates that glutamine synthetase is the primary enzyme in the assimilation pathway, since MSX treatment blocked ammonia uptake and caused ammonia excretion, similar to what was demonstrated in an algal-cnidarian symbiosis (Rees, 1987). The product of ammonia assimilation, glutamate, can then be used as a precursor in the production of taurine, which is a major product of ammonia assimilation in S. reidi tissues (Lee et al., 1997; thiotaurine production was not tested for). Therefore, in S. velum, taurine and thiotaurine production in response to sulfide, as demonstrated in this study, may be facilitated by the ability of the bacteriummollusc association to synthesize glutamate from inorganic nitrogen.
Just as glutamate likely contributes organic nitrogen to the synthesis of taurine and thiotaurine, the probable source of organic sulfur is cysteine. All of the demonstrated taurine synthesis pathways in mammalian and invertebrate tissues incorporate cysteine (Jacobsen and Smith, Jr., 1968; Bender, 1975; Bishop et al., 1983; Huxtable, 1992), which apparently cannot be synthesized by molluscs (Bishop et al., 1983). Although some intertidal molluscs utilize external cysteine sources to maintain taurine pools (Allen and Awapara, 1960; Jacobsen and Smith, Jr., 1968; Allen and Garrett, 1972; Bender, 1975), it is unlikely that solemyid clams take up cysteine from their environment (Lee et al., 1992). Therefore, the most likely source of cysteine or other taurine precursors is the symbiotic bacteria. Gram-negative bacteria cannot synthesize hypotaurine, taurine, or thiotaurine (Jacobsen and Smith, Jr., 1968; Huxtable, 1992), but can make cysteine (Kredich, 1996). Cysteine synthesis in E. coli requires sulfur in the form of sulfide or thiosulfate (Stauffer, 1996) and assimilated ammonia in the form of glutamate (Reitzer, 1996). Translocation of the essential amino acid cysteine from gram-negative symbionts to host may occur in S. velum gill tissue as modeled in Figure 3. Such translocations have been demonstrated in bacteria-aphid and algal-cnidarian associations (Wang and Douglas, 1999; Douglas et al., 2001).
The results of this study suggest that S. velum relies upon its symbionts as a source of taurine precursors, as modeled in Figure 3. The inhibition of symbiont metabolism with chloramphenicol, therefore, may equate to a loss of cysteine metabolism, thus decreasing sulfide consumption and taurine and thiotaurine synthesis by the host. Ammonia limitation, either by MSX treatment or reduced exogenous ammonia resources, may limit glutamate availability in S. velum tissues, thereby limiting cysteine production in the symbionts (Fig. 3). This could result in the lower taurine levels seen in the solemyid clams in this study and in previous work (Lee et al., 1997). Thus, taurine and thiotaurine may be a link between nitrogen and sulfur cycling in chemoautotrophic symbioses and serve as nontoxic sulfide storage and transport compounds. The absence of similar patterns in nonsymbiotic sulfide-tolerant molluscs (Geukensia demissa and Yoldia limatula) suggests these functions of sulfur-containing free amino acids may be limited to symbiotic molluscs. Figure 3. The clams extract ammonia and sulfides from the burrow environment. Ammonia is assimilated into glutamate, which is probably utilized by the symbionts in cysteine synthesis. Cysteine is translocated to the host and utilized in the synthesis of taurine and thiotaurine. | 2018-04-03T02:21:00.784Z | 2003-12-01T00:00:00.000 | {
"year": 2003,
"sha1": "e98fad823d39b41df6f849fcb514b0dfcd5f8fc2",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.biodiversitylibrary.org/part/5922",
"oa_status": "GREEN",
"pdf_src": "UChicago",
"pdf_hash": "3d46f308ed0dc70c9d9e39f2919b36504e566ef5",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
236975610 | pes2o/s2orc | v3-fos-license | Do Beijing’s Capital Controls Bind Hong Kong? Reality or Illusion
Abstract The ongoing impact of COVID-19 on global economic growth is likely to result in a retreat from financial globalization, including restrictions on capital movements. This concern arises from the experience of short-term capital control policies being implemented by countries in past financial crises. This trend, together with China’s long history of using capital controls, has further sparked fears in Hong Kong regarding the extent to which the capital control restrictions from Beijing could impact Hong Kong’s open financial policy on capital transfers. Against this background, this article evaluates situations where concerns have been raised and seeks to ascertain whether Hong Kong could be legally liable for the implementation of capital controls in Beijing.
Both advanced economies and emerging markets are being confronted with unprecedented financial shocks, and the magnitude of the collapse makes it the worst recession since the Great Depression, far worse than even the Global Financial Crisis (GFC) of 2008. 2 Government bond yields in advanced economies such as Germany and the USA have fallen sharply, and equity markets have experienced dramatic sell-offs since the advent of the crisis. While equities in advanced economies have recovered, they have remained extremely volatile, while emerging market equity prices remained about 20 per cent lower as of mid-January.
Further harming recovery prospects is the fact that currencies of several commodity-producing economies tumbled by more than 20 per cent against the US dollar in the first quarter of 2020. Amid a surge in demand for hard currency, the pandemic has also led to dramatic outflows of capital. Portfolio investments from emerging economies have been particularly affected, with outflows already surpassing the levels seen during the GFC. 3 Fluctuations in the global financial market and risk sentiment have led to significant governmental action being taken in order to stabilize the situation. Included among these are changes to fiscal, monetary, and exchange rate policies that have been introduced by numerous central banks in order to offset the tightening in financial conditions and reduce systemic stress. 4 While capital controls have not yet featured in most policy responses to COVID-19, 5 uncertainty related to the ongoing economic decline is likely to result in restrictions on capital movements in line with a policy of retreat from financial globalization. This concern is based on past experience, as many countries have implemented capital controls to forestall or mitigate the impact of financial crises. For instance, in 1991, Chile imposed a one-year unremunerated reserve requirement of 20 per cent on foreign loans, while in the Asian Financial Crisis of 1998, Malaysia banned short-selling of listed stocks and implemented a minimum stay period in portfolio investment of 12 months, subject to penalties for early withdrawal. 6 More recently, Iceland and Greece imposed a number of capital controls during the GFC in order to stabilize their economies in the wake of economic collapse. These examples have showcased the possibility of using restrictions on the withdrawal of existing foreign investment in response to a sudden reversal in capital flows during a retreat from globalization.
Such measures can aggravate the impact of financial crisis on foreign investors and exacerbate distorted incentives in the domestic financial system.
China's well-known use of long-term capital controls as a policy tool to control flows and maintain financial stability has sparked fears in Hong Kong regarding the extent to which capital control restrictions from Beijing will impact Hong Kong's financial policy on capital transfers. While these fears have always existed, they have become heightened in the midst of the COVID-19 pandemic and the circulation of online rumours of further curbs following the controversial imposition of a national security law on Hong Kong. 7 This brief article seeks to ascertain whether Hong Kong could be legally liable for the implementation of capital controls in Beijing. After introducing Hong Kong's long-standing policy of free capital movements in the first part, the article continues, in the second part, to discuss the territory's key role as a financial centre for China generally and for the Belt and Road Initiative (BRI). It is in this role that concerns have been raised regarding Hong Kong's potential liability for controls taken in Beijing. After finding little reason for genuine concern, the third part provides further comfort by describing how China is in the process of opening its financial markets. The fourth part concludes the article.
Hong Kong's policy on capital transfers
Hong Kong is a major international financial centre. Characterized by a transparent regulatory system and a laissez-faire type of openness to commerce, Hong Kong is home to many financial institutions providing a wide range of financial products and services to local and international investors. Hong Kong's financial industry features a high degree of liquidity and is widely viewed as a leading financial centre in Asia. Hong Kong's legal system is an important ingredient in the success of its financial services, with investors and businesses attracted by the security and predictability of the territory's strong adherence to the rule of law. Illustrating both of these points is Hong Kong's rating in the latest Heritage Foundation's Index of Economic Freedom (2020), which ranked the territory second among 186 economies in terms of economic freedom and fifteenth in the category 'Government Integrity', under the heading 'Rule of Law'. 8 Financial services in Hong Kong are primarily regulated by the Hong Kong Monetary Authority (HKMA), the Securities and Future Commission, and the Office of the Commissioner of Insurance. The regulatory agencies operate within the framework of the Banking Ordinance, the Exchange Fund Ordinance, the Security and Futures Ordinance, and a number of other ordinances. With its free market approach, Hong Kong seeks to keep the involvement of its agencies in the financial system to a minimum while ensuring the maintenance, stability, and integrity of the monetary and financial systems. 9 In this regard, Hong Kong does not impose any foreign exchange or capital movement controls, thus allowing capital to flow easily in and out of the territory without restriction.
Under 'one country, two systems', Hong Kong maintains its own economic, financial, and legal system. 10 Hong Kong's mini-constitution, the 'Basic Law', enshrines free market principles for the monetary and financial systems. Article 112 is unequivocal in its commitment to free capital movements and its prohibition of foreign exchange control policies from being applied in Hong Kong. Moreover, Hong Kong's policy on the free flow of capital and free convertibility of currency is also reflected in its trade and investment agreements. To date, Hong Kong has signed 22 international investment agreements (to which it refers as 'Investment Promotion and Protection Agreements' [IPPAs]) with foreign economies in order to enhance two-way investment flows, provide additional assurance to overseas investors that their investments in Hong Kong are protected, and protect the interests of Hong Kong investors overseas. 12 All 22 IPPAs provide for, inter alia, the free transfer of investments and returns across borders in any freely convertible currency. 13 Most of the IPPAs that Hong Kong has signed contain largely blanket prohibitions on capital controls. As an illustration, Article 6 of the Hong Kong-UK IPPA reads: Each Contracting Party shall in respect of investments guarantee to investors of the other Contracting Party the unrestricted transfer of their investments and returns abroad. Transfers of currency shall be effected without delay in any convertible currency. Unless otherwise agreed by the investor transfers shall be made at the rate of exchange applicable on the date of transfer. 14 Hong Kong's recent IPPAs have begun to contain General Agreement on Trade in Services-like exceptions to the above commitments, 15 such as for balance of payments difficulties and prudential measures in the form of a prudential carve-out. 16
For instance, while in the Association of Southeast Asian Nations (ASEAN)-Hong Kong IPPA Hong Kong undertakes a commitment to allow all transfers relating to its investment commitments to be made freely and without delay into and out of its territory, it includes both the balance of payments and prudential carve-outs as exceptions to be used in exceptional circumstances when movements of capital cause serious difficulties for macroeconomic management and when requested by the International Monetary Fund (IMF). Likewise, the Australia-Hong Kong IPPA provides for the implementation of temporary safeguard measures in serious balance of payments and external financial difficulties, if payments or transfers relating to capital movements cause serious difficulties for macroeconomic management. 17 Thus, in all of Hong Kong's IPPAs, investors have the unrestricted right to transfer capital across borders except in certain envisaged circumstances and under strict conditions. Hong Kong has also signed eight Free Trade Agreements (FTAs) that touch upon and impact the movement of capital. All of Hong Kong's FTAs contain a wide range of commitments relating to market access and national treatment liberalization, particularly in relation to financial services. Moreover, the starting point is that Hong Kong undertakes not to apply restrictions on capital flows in sectors where a specific liberalization commitment has been made. Among Hong Kong's FTAs, one-half include provisions on the free transfer of funds relating to covered services commitments, 18 while the remainder commit Hong Kong to allow capital movements relating to the commercial presence or cross-border supply of the services pursuant to its commitments. 19 In contrast to the open and free market in Hong Kong, Mainland China still has a long way to go before it fully liberalizes its capital market and makes the renminbi (RMB) fully convertible.
In this regard, although China has moved away from a closed economy and has been gradually liberalizing its financial market for some time, the process of opening can be characterized as 'gradual and cautious', 20 and a multitude of capital controls remain. China's current controls are generally 'direct administrative restrictions', such as authorization requirements, time requirements, and quantitative limits.
While China has not imposed restrictions suspending remittances by multinationals, more than 50 per cent of IMF members have done so over the past decades in response to global and regional financial crises. 21 There is concern, however, that amid the rising tension in USA-China relations 22 and the prolonged global economic downturn resulting from COVID-19, Beijing may become more cautious with capital flows and impose greater restrictions. This leads to the question of whether China's capital control policy (for instance, imposing additional barriers to profit repatriation and the withdrawal of existing foreign investment) could affect Hong Kong, creating a risk of retreat from its existing commitments to free capital movements and foreign exchange settlement, especially given Hong Kong's key role in the BRI in the areas of international project financing and offshore RMB business.
Hong Kong's financial position on BRI: a safe harbour from capital controls?
The increasing financial integration between Hong Kong and China has become an important determinant of capital flows into and out of Hong Kong. 23 A host of factors, including the range of financial products on offer, open access for both Chinese and international issuers and investors, and its increasing significance as the largest offshore RMB business centre, make Hong Kong one of the most important financing hubs for the BRI. Whether investors could hold Hong Kong liable for capital controls imposed by Beijing depends on the nature of the alleged breach. For instance, should the Chinese government exercise tight capital controls on the withdrawal of investments, eligible investors enrolled in Bond Connect could have a legal remedy directly against the Hong Kong intermediaries without the need to bring a claim in China.
The system operates as follows: prior to issuance and distribution, overseas investors are required to sign a distribution agreement with mainland underwriters, specifying the business mode and the rights and obligations of both parties and stating that overseas investors shall invest in the bonds through the nominee account of the HKMA's Central Moneymarkets Unit (CMU). 30 In addition to signing the distribution agreement, overseas investors are required to register and be recognized as an eligible foreign investor with the People's Bank of China and to open a trade account with the China Foreign Exchange Trade System via the Bond Connect Company. 31 This 'northbound' trading is executed through a 'request for quotation' mechanism between an eligible foreign investor and a mainland underwriter at the China Foreign Exchange Trade System (see trading flow in blue). 32 The settlement link facilitates the settlement and custody of the bonds between the CMU (offshore custodian and settlement agent for eligible offshore investors) and the Shanghai Clearing House (SCH) or the China Central Depository and Clearing Company (CCDC) (onshore clearing institutions) (see settlement flow in pink). 33 The bonds thus acquired are registered in the name of the HKMA and held in the onshore nominee accounts opened by the CMU with the SCH or the CCDC. The CMU members then settle the bond transactions on behalf of offshore investors through the CMU. 34 The clear linkages built between foreign investors and mainland Chinese financial institutions (in particular, the contractual relationship in the trading link) through the Bond Connect Company and the HKMA allow foreign investors to bring a claim challenging the imposition of capital controls from Mainland China (Figure 1).
Overseas investors therefore do not need to open Chinese settlement and custody accounts. Instead, Bond Connect allows overseas investors to deploy their existing trading and settlement practices in Hong Kong with no further restrictions such as investment quotas, lock-up periods, and repatriation limits. 35 This arrangement does, however, create some uncertainty for investors, as the bonds are acquired through the custodian (the CMU), and the rights of eligible overseas investors must be enforced through the nominee holder, the CMU of the HKMA. 36 The uncertainty stems from the loss of control in the legal process and the question as to what recourse the investor would have should the CMU of the HKMA not pursue the legal claim. This being said, it does seem possible for overseas investors to bring a legal claim in their own name, as the ultimate beneficial owners of the bonds, in the courts in China. 37 Overseas investors in this scheme, likewise, should qualify as investors making an investment under China's bilateral investment treaties (BITs) should their country have a treaty with China. For instance, the term 'investment' is defined in the Investment Agreement of the Mainland and Hong Kong Closer Economic Partnership Agreement as meaning assets (including bonds) that an investor owns or controls, directly or indirectly. 38 The eligible foreign investors in Bond Connect directly control the trading of the bonds by placing trade orders through trading links provided by access platforms and conclude a bond trade with onshore market makers, while also indirectly owning the bonds, as the investment is registered in the name of the HKMA (in its custodian role) with the SCH or the CCDC. 39 This interpretation is confirmed by the Shanghai Clearing House Detailed Operation Rules for Registration, Custody, Clearing and Settlement of Bond Connect Cooperation between the Mainland and Hong Kong SAR, which recognize investors' ultimate beneficial ownership by expressly providing for eligible investors to enjoy the rights and interests of the bonds. 40 Therefore, from the point of view of either trading or settlement practices, eligible foreign investors satisfy the definition of 'investors' in China's BITs.
Having established eligibility under a BIT, eligible foreign investors could likely succeed in a claim against China in investor-State dispute settlement proceedings should Beijing impose controls on the withdrawal of the investment, as the standard provision requires parties to permit all transfers relating to a covered investment to be made freely, and without delay, into and out of its area. 41 It is worth stressing, however, that China's capital controls are not usually in the nature of repatriation restrictions and that China is unlikely to impose such measures in the foreseeable future.
The only question that remains to be answered is whether an investor can bring a claim against Hong Kong for such measures (should they be imposed). While an overseas investor would likely qualify as an investor under Hong Kong's IPPAs, a claim is unlikely to succeed for the simple reason that the control will have been initiated by the Chinese government and not by Hong Kong. Hong Kong undertakes to protect and promote investments in its IPPAs and agrees not to take certain measures that restrict the outflow of capital; but, in our scenario, Hong Kong is not the government restricting outflow: it is the government of China. Any such investor-State dispute settlement claim would thus be better filed against the government in Beijing. At the same time, should the CMU of the HKMA refuse to enforce the claims of eligible overseas investors, it would be reasonable for an aggrieved investor to access the Hong Kong courts to enforce those claims. But, here, liability for Hong Kong is not the result of any capital control being imposed by Beijing but, rather, of the lack of action of a Hong Kong governmental entity in enforcing the claims of an investor.
China accelerates the opening up of capital markets
Since 2016, China has attempted to strengthen capital controls with enhanced supervision of capital flight, including limitations on household residual foreign exchange, outward direct investment, and portfolio investment. On the other hand, China has recently begun to accelerate the opening up of its capital markets to foreign investors. A special report from The Economist succinctly restates the situation: 'Domestic savers are still caged in, but foreign investors say they have no trouble getting money out, even during market routs.' Thus, while still restrictive, China is making moves to liberalize the market. For instance, in 2015, China launched a reform of the RMB exchange rate to allow a greater role for market forces. This change marked a major improvement to the formation of the RMB's central parity rate against the US dollar and has been recognized by the IMF as an important step, building on the progress already made. 43 Since then, China has started to accelerate the liberalization of the capital markets in several ways. First, the government relaxed the requirements for the Qualified Foreign Institutional Investors (QFII) scheme (launched in 2006), which allowed licensed foreign investors to use offshore yuan to invest in China's capital markets under a combination of multi-tier and multi-stage approval procedures and a heavily regulated quota-based system. 44 In 2019, restrictions on the foreign investment quota of QFII were abolished. The new rules also removed the three-month lock-up period and the 20 per cent repatriation limit, allowing QFII investors to repatriate the principal and profits from their securities investments in China at any time. 45
Second, the launch of the Shanghai-Hong Kong Stock Connect in November 2014 and, later, the Shenzhen-Hong Kong Stock Connect in 2016 enabled mutual market access between the mainland's stock exchanges and the Stock Exchange of Hong Kong, allowing Hong Kong and international investors to access the mainland's stock market through trading and clearing facilities in Hong Kong. 46 In this regard, international investors can seek direct Shanghai/Shenzhen market access outside of the QFII. The annual aggregate quota for Shanghai Connect has been abolished since 2016, and no annual aggregate quota was established for Shenzhen. In addition, the daily quota has been increased to allow a maximum 'net buy' value of cross-border trades of RMB 52 billion, with no limit on the withdrawal of the investment from Stock Connect. 47 Under this scheme, the Hong Kong Securities Clearing Company Limited is responsible for the clearing and delivery of shares and funds for eligible overseas investors with respect to the transactions concluded with Stock Connect. 48 Similar to the nominee holder account structure in Bond Connect, the 'nominee holder' and 'beneficial owner' in Stock Connect are recognized in Mainland China, 49 and, thus, the issues of legal liability outlined in the previous section would appear to be replicated here.
Third, in 2019, the Office of Financial Stability and Development Committee of the State Council announced the Relevant Measures for Future Opening Up of Financial Sector (11 Measures), covering a broad spectrum of financial service sectors. 50 By April 2020, foreign ownership limits for securities, fund management, and futures companies had been removed, 51 and foreign financial companies are now allowed to establish wholly owned units in China. In the midst of a global health pandemic combined with deep economic uncertainty, these opening-up measures should alleviate concerns that China will retreat from the global system and boost confidence in the financial services industry.
In addition, amid the COVID-19 pandemic in May 2020, the People's Bank of China, the China Banking and Insurance Regulatory Commission, the China Securities Regulatory Commission, and the State Administration of Foreign Exchange jointly issued a financial support guideline for the development of the Greater Bay Area (GBA). The guideline put forward specific measures for expanding the liberalization of the financial sector and promoting cross-border capital flows within the GBA, 52 notably including pilot projects for cross-border investment by private equity investment funds. This guideline promotes various pilot operations of cross-border financial services and capital flows without reference to capital controls and demonstrates Beijing's determination to further promote opening-up from a financial point of view and to deepen financial cooperation within the GBA. As a result, the region is expected to receive more support for cross-border financial services and investment through the continued relaxation of restrictions.
A final point to make is that while China's investment climate has not always been friendly and stable, it is unlikely to backtrack, even in the face of COVID-19, when banking and capital markets have faced increased volatility recently. China realizes it must seek foreign investment to continue its robust economic growth and that uncertainty arises among investors if a government even hints at using capital controls to protect domestic capital markets. More importantly, at a press conference for the National People's Congress and the Chinese People's Political Consultative Conference Sessions held in May 2020, China's Premier Li Keqiang re-emphasized China's policy of opening-up against the background of global pandemic and its associated difficulties: 'it is impossible for any country to achieve development with its door closed or retreat back to the agrarian times and China will not waver in this commitment. Instead, China will further expand cooperation with the rest of the world and introduce more opening-up'. 53 While this declaration, of course, carries no legal force, it may provide additional comfort to foreign investors seeking market access to Chinese capital markets through Hong Kong.
Conclusion
Despite increasing concerns over a retreat from financial openness worldwide, capital controls have not yet featured in the policy responses of most governments. This article has provided an overview of Hong Kong's policy on capital flows, its commitments to capital transfers in related trade and investment agreements, and its financial positioning in relation to the BRI. The article also examined China's gradual and steady policy of liberalizing its capital markets in recent years. Taking these factors together, we conclude that it is unlikely that China's capital controls will legally bind Hong Kong. Moreover, this type of control is unlikely and would be unprecedented given Beijing's historical and continuing capital control practices and the trend towards opening up its capital markets.
It is worth concluding on the point that, although China has a long history of imposing capital controls on both outward and inward investment, thus far, none have had any impact on Hong Kong's financial policy or legal position. Going forward, the same will likely be the case, and any restrictions on capital flows from Beijing, likewise, will not affect Hong Kong's legal commitments. For its part, China will likely continue with its opening-up policy throughout and following the economic decline brought about by the COVID-19 pandemic. Hong Kong stands to benefit from continued liberalization, especially with increased regional unity stemming from the GBA, and can continue to capitalize on its unique advantages to connect China with other regions. Global investors can, likewise, continue to leverage Hong Kong's open market and robust legal system for their investment activities in both Hong Kong and China's capital markets.
Evaluating the analgesic effect of the GLS inhibitor 6-diazo-5-oxo-l-norleucine in vivo
Abbreviations: AIA, adjuvant-induced arthritis; AST, aspartate aminotransferase; BSA, bovine serum albumin; DON, 6-diazo-5-oxo-l-norleucine; DRG, dorsal root ganglia; GLU, glutamate; GLS, glutaminase; ir, immunoreactivity; mRNA, messenger RNA; NSAID, non-steroidal anti-inflammatory drug; PBS, phosphate-buffered saline; PBST, phosphate-buffered saline with Triton X-100; PPP, persistent post-surgical pain; PVP, polyvinylpyrrolidone; VGluT2, vesicular glutamate transporter 2
Introduction
Surgical procedures are common in the United States, with some estimates exceeding 99 million per year. A serious clinical problem that can occur because of surgery is persistent post-surgical pain (PPP), defined as pain that lasts longer than three months after surgery. The incidence of PPP has been reported to be as high as 40%, with greater than 18% of patients reporting their pain as moderate to severe. 1 The physiological consequences of inadequately controlled post-operative pain include various complications (respiratory, cardiovascular, thromboembolic, gastrointestinal, musculoskeletal, and psychological), all of which can delay or impair patient recovery and increase the total cost of care. Inadequate post-operative pain control may also lead to the development of chronic pain after surgery. These problems highlight the overarching goal of post-operative pain management: to avoid post-operative complications and the development of chronic pain, thereby improving the comfort of the patient, facilitating recovery, reducing morbidity, and promoting quicker discharge from the hospital. 2
Pre-emptive analgesia
Pre-emptive or preventive analgesia is defined "as the administration of analgesia before surgical incision to prevent establishment of central sensitization from incision or inflammatory injury in order to achieve optimal post-operative pain control." The results of clinical trials on the efficacy of pre-emptive analgesia are controversial owing to differences in noxious stimulus intensity, difficulty verifying the direct pharmacological effect of treatment, differences in drug concentrations between study groups, and differing outcome measurements. 3
Glutaminase in primary sensory neurons
Glutamate (GLU) is the amino acid neurotransmitter used by primary sensory neurons of the dorsal root ganglia (DRG). This excitatory neurotransmitter is released from the peripheral nerve and spinal synaptic terminals during nociceptive (pain) signaling. 4 GLU is produced either by cytosolic transamination of aspartate by aspartate aminotransferase (AST) or via the hydrolytic deamination of glutamine by the mitochondrial metabolic enzyme glutaminase (GLS). In neurons, GLS is the primary producer of glutamate. 5 GLS is elevated in both the cytoplasm and mitochondria of primary afferent sensory neuronal cell bodies in the adjuvant-induced arthritis (AIA) and surgical incision models in the rat. [6][7][8] GLS protein expression, enzyme activity, and immunoreactivity (ir) are elevated in DRG neuronal cell bodies from 1-8 days of AIA. The elevation of GLS-ir is most prominent in nociceptive 'C' type neurons at 8-12 days of AIA. Following increased production in the cell soma during AIA, GLS is transported in peripheral axons, causing increased GLU synthesis in peripheral nerve terminals. 9 Furthermore, we have demonstrated that both GLS- and VGluT2-ir are elevated post-surgical incision in S1 DRG neurons for up to 72 hours, while GLS mRNA levels rapidly decrease post-incision and remain depressed for at least 96 hours. 8 Since GLS is responsible for the neuronal production of glutamate from glutamine, it is a potential therapeutic target for alleviating pain. 7,10,11 The GLS enzyme contains several regulatory sites, 12,13 and several classes of GLS inhibitors, acting at both allosteric and enzymatic sites, are effective analgesics when injected into the inflamed limb of AIA animals. 7 Miller et al. 7 examined whether inhibition of GLS in peripheral terminals would reduce elevated glutamate levels and provide pain relief during chronic inflammation.
After a single intra-plantar injection of DON in rats with 3 days of AIA, thermal and mechanical hyperalgesia responses were comparable to control animals, and glutamate-ir appeared similar to controls. 7 Other experiments have shown that pre-treatment of rat hind paws with DON, along with co-administration of DON at the start of inflammation with carrageenan, decreased paw swelling by 10% and reduced the number of activated neurons in the spinal cord, as demonstrated by c-fos-ir. 10 Based on these findings and our discovery that DRG neurons increase GLS and VGluT2 following surgical incision, similar to AIA, we investigated whether topical application of DON into a surgical incision would have an effect similar to that seen in the AIA model. We analyzed tail thermal and mechanical responses and evaluated skin ir for the glutamatergic biomarkers VGluT2, GLS, and GLU.
Animals
Harlan Sprague-Dawley rats (Oklahoma State University Center for Health Sciences breeding colony, originating from Charles River) were housed on a 12-hour light:12-hour dark cycle and given free access to food and water. Procedures were conducted according to guidelines from the National Institutes of Health 11 and were approved by the Oklahoma State University Center for Health Sciences Institutional Animal Care and Use Committee. All appropriate efforts were made to minimize the number of animals used in these studies.
Surgical incision
Harlan Sprague-Dawley rats (n = 21, male and female), weighing between 250 and 350 grams (mean weight = 300.2 grams, mean age = 252.4 days), were anesthetized with 5% isoflurane and oxygen (3 L/min) in a plastic induction chamber until mobility ceased. A nose cone was utilized to deliver maintenance anesthesia (isoflurane 2-3%, O2 1.5 L/min). The tails of the rats were cleaned using povidone iodine for disinfection of the surgical site. The experimental animals (n = 15) had a 20 mm midline incision made through skin, fascia, and muscle on the dorsal surface of the tail. The incisions were made longitudinally, along the midpoint of the proximal third of the tail, with half the incision above and half below the proximal third midpoint. The incisions were sutured using a 19 mm reverse cutting needle with two 5.0 silk sutures. Control animals (n = 6) were surgically naïve rats that underwent anesthesia and disinfection as described previously.
DON administration
Animals were anesthetized as described previously. Using a tuberculin syringe fitted with a 27-gauge needle, twenty 0.05 ml aliquots of DON (2 mM) were infiltrated into the wound margins bilaterally and down the midline of the incision. For control animals that received DON, similar injections were given by intradermal administration.
Experimental time line
All animals were acclimatized to behavior testing (e.g., thermal and mechanical) daily for 3 days prior to the initiation of the experiment. On Day 0, baseline tests were performed for both thermal and mechanical responses and the data recorded. Behavior tests were performed and data recorded at 0, 16, 40, and 64 hours after the initiation of the experiment. Animals that received surgical incisions (i.e., the incision-only and incision-with-DON groups) had behavioral testing after the surgical incision procedure. Injections were given 16 hours after the incision time point (0 hours) for both groups that received DON injections (i.e., DON injection only and incision with DON). The 16-hour post-incision time point was chosen to allow the newly produced GLS enzyme resulting from the surgical incision adequate time to travel from the associated DRG to the peripheral terminals. By timing the injection with the rise in GLS enzyme at the peripheral terminals, we could best demonstrate DON's mechanism of action (i.e., GLS inhibition). For control animals, only behavior acclimation and testing were performed. Following the final behavior testing, all animals underwent transcardial perfusion, and skin tissue was collected from the proximal portion of the tail (Figure 1).
Thermal behavior testing - tail flick
An IITC Tail Flick Apparatus (IITC Life Sciences, Inc.) was used, with the intensity set to 10 and the beam strength set to 80, prior to starting the tail flick test. The rats' tails were darkened with a black marker to normalize the background of, and response from, the tail. Animals were restrained and placed upon the apparatus with their tails placed through a hole, making them accessible to the light beam. The light beam was directed onto the proximal third of the tail, where the incision and/or injection were located. The test was complete when the animal moved its tail away from the light beam or when the humane cut-off point of 10 seconds was reached. Latencies to response were recorded. Animals were tested three times per time point.
Mechanical behavior test - von Frey
Rats were placed upon a platform with the tail incision/injection area exposed for mechanical probing using the up-down (quantal) method. 12 Four groups were analyzed: control, incision, incision with DON injection, and DON injection only. Using a 300 g von Frey monofilament (filament size 6.65), pressure was applied in a downward perpendicular motion with the tip of the monofilament onto the lateral margin of the incision until the monofilament flexed. A positive response was noted when the animal moved its tail and/or vocalized. Animals were tested three times per time point.
Immunohistochemistry
Animals were deeply anesthetized with tribromoethanol (2.5% w/v) (Sigma-Aldrich) and xylazine (100 /ml) (Lloyd Laboratories) and perfused through the ascending aorta with 75 mL of calcium-free Tyrode's solution, pH 7.3, followed by 325 mL of paraformaldehyde/picric acid fixative. 13 Tail skin tissues were collected, placed in post-fixative for four hours at room temperature, and transferred to 10% sucrose in phosphate-buffered saline (PBS), pH 7.4, overnight at 4°C. Tail skin tissue sections were cut at 20 μm with a cryostat (Leica Microsystems) and mounted onto gel-coated ProbeOn™ (Fisher Scientific) slides. The tissues were rinsed inside slide mailers (5 slides per mailer) three times with PBS, pH 7.4, and placed onto a rocker for 10 minutes after each wash. The primary antisera used were mouse anti-glutamate (1:40,000; Madl, Colorado State Univ.), rabbit anti-GLS (1:5,000; Curthoys, Colorado State Univ.) 14 and rabbit anti-VGluT2 (1:2,000; Sigma-Aldrich). The antisera were diluted in a blocking agent consisting of 0.5% polyvinylpyrrolidone (PVP) and 0.5% bovine serum albumin (BSA) in PBS with 0.3% Triton X-100 (PBST), pH 7.4, and the tissues were incubated on a rocker in a cold room for 96 hours. After the 96-hour incubation, the tissues were rinsed three times for 10 minutes each with PBS, pH 7.4, on a rocker. Tissues were then incubated in AlexaFluor 488®-conjugated goat anti-rabbit antisera (1:2,000; Invitrogen) or AlexaFluor 555®-conjugated goat anti-mouse antisera (1:1,500; Invitrogen) in PBST on a rocker for 60 minutes at room temperature. Antisera were decanted and the tissues were washed three times for 10 minutes per wash with PBS, pH 7.4, on a rocker between washes. ProLong Gold™ (Invitrogen) mounting medium was added to each section of tissue and coverslips applied. Slides were stored at room temperature for 24 hours in a dark area to allow for complete polymerization of the mounting medium.
Absorption controls
Primary antiserum absorption control and secondary antiserum control experiments were performed for the GLS and VGluT2 primary antisera and the secondary antisera, respectively. For primary antiserum absorption controls, each diluted antiserum was incubated for 24 hours at 4°C with the respective antigen at 20 mg/mL. Processing of DRG sections for immunofluorescence was carried out as mentioned above. Additional sections were incubated for four days at 4°C in either absorbed diluted antisera or non-absorbed diluted antisera. The secondary antisera control sections were incubated in PBST for four days at 4°C before continuing with routine processing (Figure 2).
Imaging
All images were acquired using an Olympus BX51 epifluorescence microscope (Olympus; Center Valley, PA, USA) equipped with a SPOT RT740 quantitative camera (Diagnostic Instruments; Sterling Heights, MI, USA). VGluT2 images were acquired using a 40X objective. To ensure proper quantitation, all images were captured using a 1200-millisecond exposure with the gain set at 1. Captured images were 1600 × 1200 pixels with 5.38 pixels per micrometer. A total of 1275 data points were analyzed. Images were taken along the stratum lucidum border. ImageJ (NIH) was used for analysis, using the grid plug-in for standard sizing and the rectangular selection tool for taking random images along the stratum lucidum border. The sample size of the rectangular selection tool was 23,408 pixels. GLS images were acquired using a 60X objective. To ensure proper quantitation, all images were captured using a 2000-millisecond exposure with the gain set at 1. Captured images were 1600 × 1200 pixels with 8.07 pixels per micrometer. A total of 218 nerve fibers were evaluated. Images were taken within the dermal layer. ImageJ (NIH) was used for analysis, utilizing the free-form drawing feature to outline nerve fibers and analyze their intensity. GLU images were acquired using a 60X objective. To ensure proper quantitation, all images were captured using a 2000-millisecond exposure with the gain set at 1. Captured images were 1600 × 1200 pixels with 8.07 pixels per micrometer. Three slides were used for the GLU antisera. A total of 106 nerve fibers were evaluated. Images were taken within the epidermal layer. ImageJ (NIH) was used for analysis, utilizing the free-form drawing feature to outline nerve fibers and analyze their intensity.
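The intensity quantitation described above reduces to averaging pixel values inside a region of interest. The sketch below is an illustrative reconstruction, not the authors' ImageJ workflow; the toy image, ROI coordinates, and grayscale values are hypothetical stand-ins.

```python
# Illustrative sketch of the mean-grayscale-intensity (MGI) quantitation
# described above. This is NOT the authors' ImageJ macro; the image and
# ROI below are hypothetical stand-ins for a 1600 x 1200 capture.

def mean_grayscale_intensity(image, roi):
    """Average pixel value inside a rectangular ROI.

    image -- 2D list of grayscale values (rows of pixels)
    roi   -- (row0, col0, row1, col1), half-open rectangle
    """
    r0, c0, r1, c1 = roi
    pixels = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(pixels) / len(pixels)

# Toy 4x4 "image": dim background on the left, bright fiber-like region on the right.
img = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
mgi_background = mean_grayscale_intensity(img, (0, 0, 4, 2))
mgi_fiber = mean_grayscale_intensity(img, (0, 2, 4, 4))
```

In the actual analysis the ROI would come from the ImageJ grid plug-in (fixed 23,408-pixel rectangles) or the free-form fiber outlines, rather than hard-coded coordinates.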
Statistics
Data from the analyses are reported as percentage change. A Student's t-test, ANOVA, or Fisher's exact test was used to determine differences between experimental and control groups (Prism version 6.0, GraphPad Software Inc., La Jolla, CA). In all analyses, p-values less than 0.05 were considered significant.
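The percentage-change metric and the two-sample comparison can be sketched in a few lines. This is a hedged reconstruction: the exact test variant and Prism settings are not specified, so a generic Welch t statistic is shown; the GLU group means are taken from the Results section.

```python
# Sketch of the reported analysis: percentage change between group means and
# a two-sample (Welch) t statistic. The exact test variant used in Prism 6.0
# is not specified in the text, so this is an illustrative stand-in.
from statistics import mean, stdev
from math import sqrt

def percent_change(reference, treated):
    """Percent decrease of `treated` relative to `reference` (group means)."""
    return (reference - treated) / reference * 100.0

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

# Reported GLU mean intensities: incision only 180.61 ru, incision + DON 139.09 ru.
glu_drop = percent_change(180.61, 139.09)  # roughly 23%, matching the text
```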
Behavior testing - mechanical
The control and DON injection-only groups had no response to the 300 g (6.65 N) monofilament. Within the incision-only group, 12 out of 15 animals responded with tail movement or vocalization to the 300 g monofilament. Six of the 12 incision-only animals that responded to mechanical stimulation received an infiltration/injection of 2 mM DON into the incision site as described previously. The DON administration occurred 16 hours post-incision and animals were tested 24 and 48 hours later. Following DON administration, no response to the 300 g monofilament was noted for five out of six animals (Figure 3). All untreated incision-only animals (n=6) responded to mechanical stimulation at 24 and 48 hours. No gender differences were noted.
Behavior testing - thermal
The control and DON injection-only groups had no response to the thermal testing. Of the animals that received an incision (n=15), only five responded to thermal stimuli. The responders were given an injection/infiltration of 2 mM DON into the incision site 16 hours post-incision. These animals were tested 24 and 48 hours later and no animal showed an analgesic response. No gender differences were noted.
Immunoreactivity changes in the glutamatergic biomarkers VGluT2, GLS and GLU after the application of the GLS inhibitor DON
To examine the effect of DON on glutamate (GLU), we infiltrated the incision area with twenty 0.05 ml aliquots of 2 mM DON 16 hours post-surgical incision.
For GLS, immunoreactivity (ir) was examined in the dermis (Figure 4). A total of 218 nerve fiber bundles were analyzed. The mean grayscale intensity (MGI) was 100.51 ± 11.76 ru for control, 101.42 ± 10.46 ru for DON injection only, 153.43 ± 3.24 ru for incision only, and 138.23 ± 3.74 ru for incision with DON (Figure 5). A significant decrease in immunofluorescence was observed in the incision with DON group compared to the incision-only group. Our analysis found that GLS-ir decreased by approximately 27.9% in comparison to the incision-only group after the administration of DON. Note that DON was administered 16 hours post-incision. All images were acquired 64 hours post-incision.
For GLU, ir was measured in the epidermis (Figure 4). A total of 106 nerve fibers were analyzed. The MGI was 133.62 ± 8.00 ru for control, 155.78 ± 5.03 ru for DON injection only, 180.61 ± 3.47 ru for incision only, and 139.09 ± 6.02 ru for incision with DON (Figure 6). A significant decrease in immunofluorescence was observed in the incision with DON group compared to the incision-only group. Our analysis found that GLU-ir decreased by approximately 23% after the administration of DON.
For VGluT2, ir was measured in the stratum lucidum of the epidermis (Figure 7). A total of 1275 areas were analyzed. The MGI was 101.31 ± 31.48 ru for control (n=319), 102.08 ± 2.71 ru for DON injection only (n=318), 120.77 ± 3.27 ru for incision only (n=319), and 114.20 ± 2.71 ru for incision with DON (n=319) (Figure 7). A significant decrease in immunofluorescence was observed in the incision with DON group compared to the incision-only group. Our analysis found that VGluT2-ir decreased by approximately 5.4% after the administration of DON.
Discussion
Current pharmacotherapy for the treatment of pain consists of topical analgesics, local anesthetics, opioids, NSAIDs, anti-epileptics, and anti-depressants. Although numerous agents exist for the treatment of pain, none have proven a panacea. Furthermore, none of these drugs act directly on the peripheral glutamatergic system, even though glutamate is a strong stimulator of primary afferent neurons. 15,16 Since glutaminase is responsible for the neuronal production of glutamate from glutamine in peripheral afferent terminals, it is a potential therapeutic target for alleviating pain. 7,10,11 The GLS enzyme contains several regulatory sites, [17][18][19] and several classes of glutaminase inhibitors, acting at both allosteric and enzymatic sites, are effective analgesics when injected into the inflamed limb of adjuvant-induced arthritis (AIA) animals. 6,7 One particular glutamine antagonist, 6-diazo-5-oxo-l-norleucine (DON), inhibits the enzyme glutaminase. 15,20 This compound was tested as a chemotherapeutic drug over 50 years ago, 21 however, toxicity limited its approval and use in humans. 22 Originally discovered from a Streptomyces strain, DON was found to have tumor inhibitory action and was introduced into human cancer clinical trials. 21,23 Its main chemotherapeutic effect is blockade of purine biosynthesis by inhibition of amido phosphoribosyl transferase (EC 2.4.2.14) and phosphoribosyl formyl glycinamidine synthase (EC 6.3.5.3), although it can block a number of amido transferases. 24 DON has been used in at least 13 clinical trials, representing over 700 cancer patients. 21,25,26 There have been anecdotal reports of pain relief following treatment with DON in cancer patients. A woman who received 480 mg/m² "reported marked decrease in pain from a destructive lesion in the lumbar spine due to a fibrosarcoma." This pain relief lasted six weeks despite progression of the disease. 26
Another report of analgesia was a fourteen-year-old male with a vertebral tumor who reported a decrease in pain after a 450 mg/m² dose. 25 These two examples of pain relief both involved tumor invasion of the spinal vertebrae. This is a highly painful condition for which adequate pain relief is difficult to provide. 27,28 DON inhibition of GLS activity in peripheral nerve terminals may have been part of the reason for pain relief in these patients.
DON mechanism of action
As mentioned previously, the GLS enzyme contains several regulatory sites. 14-17,29-31 Several GLS inhibitors acting at both allosteric and enzymatic sites have been shown to be robust analgesics when injected into the inflamed limb of AIA animals, and the most potent analgesic was DON. 7 The mechanism of action for DON is transport into nerve terminals via glutamine transporters 32 and irreversible binding to the glutamine-binding site of GLS in the peripheral terminals. 12 Replenishment of inhibited GLS in peripheral terminals takes several days, as neuronal cell bodies (e.g. DRG) must transcribe, translate, and transport the newly synthesized GLS to the peripheral terminals, which can reside up to one meter away in humans. Because this process takes one to two days, DON inhibition of GLS in peripheral terminals provides long-lasting analgesia, as seen in these animal experiments. The qualities of having a long duration of action and of producing analgesia rather than anesthesia make DON a unique therapeutic agent in the local treatment of pain.
A surgical incision is an acute painful injury. Following incision, wound inflammation causes an increase in the production of nociceptive molecules that augment both central and peripheral sensitization. 33 These primary sensory neurons are glutamatergic, releasing glutamate from peripheral and central terminals via VGluT2 synaptic vesicles. 34 Previous studies have demonstrated that glutamatergic metabolism is elevated in DRG neurons post-surgical incision and during adjuvant-induced arthritis (AIA), leading to hyperalgesia and allodynia. 7,8 Our aim was to evaluate whether DON, when injected into a surgical incision, would have a similar analgesic effect as observed in AIA animals. 7
Alterations in GLS, VGluT2, and GLU Immunoreactivity in the skin post-surgical incision
Previous work in the aseptic incision model has shown a cytoplasmic elevation of GLS in affected DRG occurring between days 1 and 2 post-incision. 8 However, the question remained as to whether this increased GLS is transported to peripheral terminals to allow for increased production of GLU. Upon examination of the skin 64 hours post-surgical incision, we found that skin GLS-ir was elevated, supporting the view that elevated DRG GLS is transported to peripheral terminals in the aseptic incision model in the rat. 7,8 The next question was whether this increased level of GLS allowed for increased production of GLU. To address this question, we examined the expression of GLU and its vesicular transporter, VGluT2, in the skin. Previous research 8 has shown that VGluT2 expression is elevated in the DRG and that this increase in both VGluT2 and GLS may mean that there is increased synthesis and release of glutamate from the peripheral terminals. The question still remained whether this increased DRG VGluT2 expression was transported to the peripheral terminals in the aseptic incision model. Our findings revealed that VGluT2-ir is elevated in the skin post-surgical incision, indicating that the increased VGluT2 synthesized in the cell body is transported to the peripheral terminals in response to an incisional injury. Finally, GLU-ir was measured in the epidermis, and our findings revealed that GLU-ir was elevated post-surgical incision. This expression profile is congruent with the role primary afferent neurons play in nociception, in both acute and chronic painful conditions. 35,36 The final question was whether the application of the GLS inhibitor, DON, would decrease the levels of GLS-ir, VGluT2-ir, and GLU-ir. We conjectured that the irreversible binding of DON to GLS would result in proteasomal degradation, thereby leading to decreased levels of GLS immunoreactivity, followed by a subsequent decrease in GLU- and VGluT2-ir.
Our analysis found that GLS-ir decreased by approximately 27.9%, VGluT2-ir by approximately 5.4%, and GLU-ir by approximately 23% after the administration of DON. Other researchers have shown similar results at other nervous system sites. For example, Conti et al. showed that when 0.25-1 mM DON was either injected intraparenchymally or applied epipially to the sensorimotor cortex of adult Sprague-Dawley rats, GLU-ir was abolished in neuronal perikarya. 20 Bradford et al. 37 showed both a decrease in GLU pool size and release from cerebrocortical synaptosomes when treated with DON (5 mM). 37 When applied intrathecally to lumbar DRGs in rats, DON decreases GLS-ir in DRG neuronal cell bodies within hours of application, indicating cellular degradation of the inhibited enzyme. 38 In summary, GLS-ir, VGluT2-ir, and GLU-ir all decreased after the administration of DON. Taken together, these results provide in vivo support for DON's mechanism of action. The GLU immunoreactivity results offer the most direct and compelling support for DON's mechanism of action, since GLU is the product of that enzyme; by inhibiting GLS, less GLU is produced and GLU immunoreactivity should decrease.
The behavioral effects of don on mechanical and thermal sensitivity post-surgical incision
Previous studies have examined whether inhibition of GLS in peripheral terminals would reduce elevated glutamate levels and provide pain relief during chronic inflammation. 6 These studies have shown that following a single intra-plantar injection of DON, 3 days post-AIA, thermal and mechanical responses were comparable to control animals and glutamate-ir appeared similar to controls. 7 Another study showed that pre-treatment of rat hind paws with DON, along with co-administration of DON at the start of inflammation, decreased paw swelling by 10% 10 and reduced the number of activated neurons in the spinal cord as demonstrated by c-fos-ir.
To test the analgesic efficacy of DON in the post-surgical incision model, mechanical and thermal latencies were analyzed. When mechanical latency was tested, the control and DON injection-only groups had no response to the 300 g (6.65 N) Von Frey monofilament. In the animals that received an incision (n=15), 12 out of 15 responded to the 300 g monofilament. Of the 12 animals that responded, six received a 1 ml injection of 2 mM DON 16 hours post-incision and were tested 24 and 48 hours after the injection. Of those tested, five out of six animals showed a significant analgesic response to the 2 mM dose, indicating analgesic efficacy.
There was no significant decrease in thermal latency between the control and incisional animals. This finding is consistent with previous work in the tail incision model. Weber et al. 39 showed no significant difference in thermal response in their surgical incision model. However, the Brennan incisional hind paw and gastrocnemius models do show thermal sensitivity. 40 These differences might well be determined by the variations in tissue composition between the tail, paw, and gastrocnemius.
Conclusion
In this study, we addressed the following hypothesis: since glutaminase and VGluT2 are elevated in affected dorsal root ganglia neuronal soma post-surgical incision, the application of a glutaminase inhibitor will result in a decrease in glutamate biomarkers in skin and provide analgesia. The results of these studies demonstrate that skin immunoreactivity of GLS, GLU and VGluT2 increases after surgical incision in this model. This elevation in immunoreactivity is attenuated by the application of the glutaminase inhibitor, DON. The decrease in these biomarkers is consistent with the mechanism of action proposed for DON. Furthermore, the application of DON post-surgical incision produced analgesia by attenuating allodynia and producing a mechanical response similar to control animals. However, the results of thermal sensitivity were not significant, similar to other reports. 39,40 Overall, our results corroborate previous findings using DON in an animal inflammatory model. In summary, these data offer an exciting insight into a novel treatment of pain through the inhibition of the enzyme that produces the principal neurotransmitter of primary afferent neurons, i.e., glutamate. This approach lays a preclinical foundation for DON and provides additional support for further investigation into this novel treatment of pain.
Temperature-controlled acoustic surface waves
Conventional approaches to the control of acoustic waves propagating along boundaries between fluids and hard grooved surfaces are limited to the manipulation of surface geometry. Here we demonstrate for the first time, through theoretical analysis, numerical simulations and experiments, that the velocity of acoustic surface waves, and consequently the direction of their propagation as well as the shape of their wave fronts, can be controlled by varying the temperature distribution over the surface. This significantly increases the versatility of applications such as sound trapping, acoustic spectral analysis and acoustic focusing, by providing a simple mechanism for modifying their behavior without any change in the geometry of the system. We further discuss that the dependence between the behavior of acoustic surface waves and the temperature of the fluid can also be exploited in reverse, which opens a way for potential application in the domain of temperature sensing.
Introduction
An important topic of recent research has been the investigation of the acoustic analog of surface electromagnetic waves, particularly surface plasmon polaritons, which occur at interfaces between highly conductive and dielectric media [1][2][3][4]. The principal difficulty in this general case is the apparent nonexistence of an acoustic analog of metals. However, it has been shown that, in the case of spoof plasmons [5][6][7][8][9][10][11], i.e. electromagnetic surface modes on highly conducting grooved surfaces, a corresponding acoustic phenomenon may exist in the form of an acoustic surface wave at the boundary between a fluid and a hard grooved surface [12][13][14][15][16]. The understanding of this analogy has opened up new ways of controlling the behavior of acoustic surface waves by tailoring the period, the width and the depth of the grooves. It has been shown that by varying the surface geometry in this way, the wave number of the propagating surface wave can be made different from its free-space value k_0, and even exceedingly large [17][18][19][20][21]. This has led to a number of applications such as sound trapping, where a gradient change of the surface texture has been introduced in order to slow down and finally 'stop' the surface wave at a desired position along the structure, or acoustic lensing, where it has been used to tailor the phase pattern of the surface wave [22]. However, the main drawback of the proposed techniques for manipulating acoustic surface waves is that the effect depends directly on the geometry of the grooves, and thus can be varied only through physical modifications of that geometry.
In this paper we demonstrate that an acoustic surface wave propagating at the interface between a fluid and a hard grooved surface can be efficiently controlled by varying only the temperature of the fluid, while the geometry of the grooved surface remains unchanged. This opens up a way for a number of potential new applications, all tunable by external means. Following the theoretical considerations, we further numerically demonstrate temperature-controlled sound trapping and its potential in acoustic spectral analysis and temperature sensing. We also present a temperature-controlled gradient refractive index (GRIN) acoustic medium and apply it to achieve temperature-controlled acoustic focusing.
Temperature controlled propagation of acoustic surface waves
A typical example of a hard grooved surface used to support acoustic surface waves is shown in figure 1, where d, a, and h represent the period, the width and the depth of the grooves, respectively. It should be noted that the only relevant property of the substrate material is mechanical hardness, which prevents the grooved surface itself from vibrating.
When the period of the grooves is much smaller than the guided wavelength of the propagating acoustic surface wave, the grooved surface can be considered as an effective medium and characterized by an effective density tensor. For an acoustic surface wave propagating along the direction x, the effective dispersion relation of such a medium is [12] = + ( ) ( ) ( ) k k hk 1 t a n , 1 x a d 0 where k 0 denotes the wavenumber in free space, considered here to be dry air and modeled as an ideal gas. To express the effective dispersion relation as a function of temperature, we first note that the wavenumber k 0 in any ideal gas is the following function of temperature where r 0 is the gas density, k is the adiabatic constant and p is the pressure. From the ideal gas state equation, the gas density can be obtained as a function of temperature where R denotes the specific gas constant, and T is the temperature in K. In case of dry air, k = 1.4, = --R 287.05 J kg K 1 1 and = p 101 325 Pa. Finally, the effective wavenumber is obtained in the temperature dependent form as t a n 4 x RT a d R T and plotted in figure 2 for a range of temperatures. It can be seen that, for a given temperature, the medium is almost non-dispersive at low frequencies (i.e. the wavenumber linearly varies with frequency). However, as frequency increases, strong dispersion occurs, the effective wavenumber asymptotically approaches infinity and finally a stop-band appears. Moreover, it can be seen that the frequency at which dispersion occurs and the position of the stop-band also depend on the temperature.
Experimental validation of concept
To validate the analytical solution given by (4), results of full-wave FEM simulations as well as of the experiment for the cases of T = 297 K and T = 313 K are also shown in figure 2. The FEM simulations were performed using COMSOL Multiphysics® 4.4 acoustic and heat transfer modules. Both 2D and 3D models were made to compare the responses of infinitely wide grooved surfaces and those with a finite width. The initial temperature in all simulations was defined as a constant value in the surrounding medium (air). The air was modeled as an ideal gas. The simulation was performed using two studies. In the first study the time domain was used and heat radiation sources radiated from a defined start time to a defined end time. The second study was performed in the frequency domain, with the acoustic plane wave source radiating at the defined frequency, while the temperature distribution of the medium was equal to the one observed in the first simulation study. Perfectly matched layers were assigned to the boundaries of the simulation domain in order to prevent any radiation from a reflected acoustic wave. The largest mesh element size was set to a value below 1/10 of the smallest wavelength. As for the experimental setup, the prototype, with dimensions d = 8 mm, a = 5 mm, h = 24 mm, length L = 162 mm and width w = 40 mm, was fabricated in 3D printing technology using a Felix 3.0 3D printer and PLA as filament material. Measurements were performed at controlled temperatures and pressures. The excitation signal was a 3000 s long linear chirp signal with instantaneous frequency ranging from 1 to 4 kHz, and the response of the system was recorded by two G.R.A.S. 46DP 1/8″ microphones at a fixed distance of Δx = 8 mm, using a PC with a sound card and Audacity software. The sampling frequency was 48 kHz, and the obtained signals were post-processed in MATLAB to recover their amplitudes and phases.
The microphones were mounted through small holes in the structure from its reverse side, and aligned with the bottom surface of the grooves.
It can be seen that a very good agreement exists between the analytical, FEM and experimental results. Although (4) is strictly valid for infinitely wide grooved surfaces only, it has also been shown to hold for surfaces of finite width, as long as the width is sufficiently greater than the period of the grooves d [10,11]. Namely, experimental results for a grooved surface with finite overall width equal to 5d = 40 mm, shown in figure 2, are in very good agreement with the results obtained for the infinite-width case.
The temperature dependence of the wavenumber k x and the phase velocity for the frequency of the propagating surface wave equal to 3400 Hz is shown in figure 3. This frequency has been chosen arbitrarily within the range in which prominent dispersion occurs.
From figure 3 it can be seen that the wavenumber increases while the wave velocity decreases with decreasing temperature, until the temperature reaches a critical value T_c, when the wavenumber rapidly increases, theoretically to infinity. Below T_c no propagation can occur, due to the fact that the group velocity is equal to zero. The critical temperature T_c can be obtained from (4) as the following function of the operating frequency f and the depth of the grooves h:

$$T_c = \frac{(4 f h)^2}{\kappa R}. \qquad (5)$$

For the operating frequency and geometrical parameters given above, T_c equals 265 K. It should be noted that the discussion above does not hold in the close vicinity of T_c, where the wavenumber becomes too large and the effective medium concept is inadequate, since the guided wavelength becomes comparable with the period d. The effective medium concept can be applied as long as the guided wavelength is larger than approximately 4d, i.e. k_x d < π/2.
This sets the actual critical temperature T'_c to a value somewhat above the theoretical value T_c, namely to the value at which k_x d = π/2.
Although k_x depends only on the ratio of d and a, and not on their individual values, as shown in (4), T'_c does depend on d. Figure 4, which compares three structures with a constant d/a but with a different d, shows that although T_c is the same for all three structures, the corresponding values of T'_c are different and can be obtained from the condition k_x d = π/2. Since the critical temperature depends on the operating frequency, for fixed geometry and temperature a theoretical maximal frequency f_c can be obtained from (5), above which waves cannot propagate. However, the actual critical frequency f'_c is always below this value, since the period of the grooves is not infinitesimally small and the maximum obtainable wavenumber is π/(2d) rather than infinity. The dependence of the critical frequency f'_c on temperature is experimentally verified in figure 5, which shows the normalized sound pressure at an arbitrarily chosen point along the grooved surface, at two different temperatures.
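The quoted critical temperature of 265 K can be checked numerically. The check below assumes the cutoff occurs where the tangent argument in the dispersion relation reaches π/2, i.e. 2πfh/√(κRT_c) = π/2, which gives T_c = (4fh)²/(κR); the inverse relation gives the theoretical cutoff frequency at a fixed temperature.

```python
# Numeric check of the critical temperature: propagation stops when the
# tangent argument in the dispersion relation reaches pi/2, i.e.
#   2*pi*f*h / sqrt(kappa*R*T_c) = pi/2   =>   T_c = (4*f*h)**2 / (kappa*R).
from math import sqrt

KAPPA = 1.4    # adiabatic constant of dry air
R = 287.05     # specific gas constant of dry air, J kg^-1 K^-1

def critical_temperature(f, h):
    """Theoretical cutoff temperature T_c (K) for frequency f (Hz) and groove depth h (m)."""
    return (4 * f * h) ** 2 / (KAPPA * R)

def critical_frequency(T, h):
    """Theoretical cutoff frequency f_c (Hz) at temperature T (K): inverse of the above."""
    return sqrt(KAPPA * R * T) / (4 * h)

T_c = critical_temperature(3400.0, 0.024)   # roughly 265 K, as quoted in the text
```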
The results presented above indicate that a good control of acoustic surface waves can be obtained by varying the temperature alone, while the grooved surface remains unchanged. This is illustrated in figure 6, where the acoustic pressure field distribution over a grooved surface is shown for four different values of the temperature of the surrounding medium. Even for relatively small temperature variations, the wavelength of the acoustic surface wave varies significantly for the same applied frequency (taken here to be 3400 Hz). Namely, it changes from 40 to 14 mm when the temperature decreases from 303 to 283 K.
Acoustic surface wave trapping
The described phenomenon can also be considered from an alternative point of view: if the temperature of the surrounding medium decreases along the propagation direction, the acoustic surface wave slows down and 'stops' when the temperature reaches T'_c. In this way, an acoustic wave of a given frequency is trapped at any desired point along the surface, simply by controlling the temperature of the surrounding medium, instead of varying the surface geometry as in [19]. This is illustrated in figure 7, where three cases are shown, each obtained by using a different linear temperature gradient along the propagation direction (indicated by the right y-axis).
In each case, the trapping occurs at a different position along the grooved surface, determined solely by the applied temperature gradient. We note here that temperature distributions other than linear can also be used. The concept presented above lends itself to a number of potential applications. For example, spatial spectral analysis of acoustic surface waves can be realized, since for a fixed temperature gradient, surface waves of different frequencies will be trapped at different points along the surface, as demonstrated in figure 8. The resolution of such a spectrum analyzer is quite high: a frequency shift of only 1.5% (i.e. 50 Hz) corresponds to a shift of as much as 82 mm in the position of the sound intensity peak. In practical applications, if a point microphone such as a G.R.A.S. 46DP 1/8″ with a diameter of 3.175 mm is used to measure normalized sound intensity, it will be possible to obtain a frequency resolution of only 1.9 Hz.
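The quoted 1.9 Hz resolution follows from simple proportionality between frequency shift and peak position: if 50 Hz corresponds to 82 mm of travel, a 3.175 mm microphone diameter maps to about 50 × 3.175 / 82 Hz.

```python
# Spatial-spectral resolution estimate from the figures quoted above:
# a 50 Hz frequency shift moves the trapping position by 82 mm, so a
# microphone of 3.175 mm diameter resolves roughly 50 * 3.175 / 82 Hz.
hz_per_mm = 50.0 / 82.0
resolution_hz = hz_per_mm * 3.175   # approximately 1.9 Hz, as stated in the text
```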
Temperature sensing using acoustic surface waves
Another application of the proposed concept would be temperature sensing (i.e. temperature mapping), where the temperature is estimated from the distribution of acoustic pressure on the grooved surface. To illustrate this numerically, let us consider a grooved surface with two radiation heat sources arbitrarily positioned along the surface. The corresponding temperature distribution in the stationary regime is shown in figure 9, together with the consequent distribution of the acoustic pressure at 3400 Hz. It can be seen that the guided wavelength increases with the temperature, due to the decreased wavenumber in warmer regions. Under the assumption that the temperature in a certain area is approximately constant, it can be uniquely estimated from the guided wavelength, which can in practice be readily determined from phase measurements at two close locations along the structure. To avoid the case where the phase difference includes a non-zero integer multiple of 2π, the distance between measurement locations has to be smaller than the guided wavelength. For a given excitation frequency, the described temperature sensor thus functions in a certain temperature range, sufficient for some applications. However, by combining different excitation frequencies and measuring corresponding phase differences, this range can be expanded.
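The sensing step, recovering T from a measured guided wavelength, amounts to inverting the dispersion relation. A hedged sketch using simple bisection follows; the dispersion form, prototype geometry, frequency, and the bracketing interval (chosen above the cutoff temperature, where the wavenumber decreases monotonically with T) are all assumptions for illustration.

```python
# Sketch of temperature estimation from a measured guided wavelength:
# invert the dispersion relation k_x(T) = 2*pi/lambda_g by bisection.
# Geometry, frequency and the search bracket are illustrative assumptions.
from math import pi, sqrt, tan

KAPPA, R = 1.4, 287.05                      # dry-air constants
A, D, H, F = 0.005, 0.008, 0.024, 3400.0    # prototype geometry (m) and frequency (Hz)

def k_x(T):
    """Effective wavenumber (rad/m) at temperature T (K), from the assumed dispersion form."""
    k0 = 2 * pi * F / sqrt(KAPPA * R * T)
    return k0 * sqrt(1 + (A / D) ** 2 * tan(k0 * H) ** 2)

def temperature_from_wavelength(lambda_g, T_lo=270.0, T_hi=350.0, iters=60):
    """Bisection solve of k_x(T) = 2*pi/lambda_g; k_x decreases with T on this bracket."""
    target = 2 * pi / lambda_g
    for _ in range(iters):
        mid = 0.5 * (T_lo + T_hi)
        if k_x(mid) > target:
            T_lo = mid       # too cold: wavenumber too large, so the true T is warmer
        else:
            T_hi = mid
    return 0.5 * (T_lo + T_hi)

# Round trip: synthesize a guided wavelength at 300 K, then recover the temperature.
lam = 2 * pi / k_x(300.0)
T_est = temperature_from_wavelength(lam)
```

In practice the guided wavelength would come from the phase difference measured between two closely spaced microphones, as described in the text.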
Acoustic surface wave bending and focusing
In all preceding applications, the temperature was varied along the direction of propagation of the surface waves. However, a temperature variation can also be applied in a transverse direction, resulting in an acoustic graded-index (GRIN) medium. Let us consider a host medium composed of a w = 500 mm wide grooved surface, with the dimensions d, a and h as given above, and with a constant temperature of the surrounding medium of T_0 = 306 K. One section of length l = 100 mm of this medium is divided into 25 channels with individual widths of 20 mm, to each of which a different temperature is applied (figure 10). Since the width of each channel is 20 times larger than the surface period (d = 1 mm), the dispersion relation (1) is still usable. By applying different temperature distributions T(y) in the channels, different profiles of the refractive index of the GRIN medium can be achieved, resulting in a range of effects, such as steering (i.e. bending) and focusing of acoustic waves.
To bend incoming acoustic waves by an arbitrary angle, the temperature distribution T(y) is calculated by combining (4) and the corresponding refractive index profile given in [23]. The resulting temperature profiles are shown in figure 11 for two arbitrarily selected bending angles of 15° and 30°.
The entire structure has been simulated, and ideal thermal isolation has been assumed between its different regions. The resulting acoustic pressure is shown in figure 12, demonstrating the bending of acoustic waves for a specified angle.
To achieve focusing of acoustic waves, a hyperbolic secant refractive index profile is used [24], and the refractive index in the center of the structure is arbitrarily selected to be n₀ = 1.8.
The calculated required temperature distribution across the GRIN medium is shown in figure 13. The resulting acoustic pressure distribution, shown in figure 14, clearly demonstrates that the focusing of acoustic waves has been achieved. Namely, a flat slab of GRIN medium behaves as an acoustic lens, controlled only by temperature.
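The hyperbolic-secant profile itself is easy to generate. In the Python sketch below, the scaling that brings the index down to the host value (n = 1) at the slab edges is our assumption, and the mapping from index to channel temperature (the paper's relation (4)) is not reproduced here:

```python
import math

n0 = 1.8   # index at the centre of the slab (from the text)
w = 0.5    # slab width in metres (500 mm)

# scale chosen so the index decays to ~1 (the host value) at the edges
alpha = math.acosh(n0) / (w / 2)

def n_profile(y):
    """Hyperbolic-secant refractive-index profile n(y) = n0 * sech(alpha*y),
    with y measured from the centre of the slab."""
    return n0 / math.cosh(alpha * y)
```

Sampling `n_profile` at the 25 channel centres would give the per-channel index targets that the temperature distribution of figure 13 is chosen to realize; a wave entering such a slab bends toward the high-index centre region, which is what produces the lens-like focusing.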
Conclusion
We have numerically and experimentally shown that the temperature of the surrounding medium can be used to manipulate acoustic surface waves at the interface of a fluid and grooved hard surface in a variety of ways, through applications such as acoustic wave trapping, spatial spectral analysis of wideband acoustic waves, as well as acoustic wave steering and focusing. Unlike standard techniques for manipulation of acoustic surface waves, which require careful design of the surface geometry, this approach offers a simple solution, applicable to surfaces with uniform geometries and easily adaptable to any change in system specifications, thus allowing wave steering over variable paths or wave focusing with a variable focal length. Conversely, the same phenomenon can be used in the opposite direction, i.e., for sensing the temperature of the fluid through the behavior of the acoustic wave, which extends the field of its potential application even further.
Validation of the Combined Biomarker for Prediction of Response to Checkpoint Inhibitor in Patients with Advanced Cancer
Simple Summary
The IMAGiC model combines a four-gene signature and PD-L1 expression levels to predict immunotherapy response. Here, the IMAGiC model's predictive performance was validated in patients with several advanced tumor types. PFS and OS differed significantly between the dichotomous IMAGiC groups, and the IMAGiC group could be utilized as a binary biomarker for predicting response to immunotherapy regardless of TMB level or MSI status.
Abstract
Although immune checkpoint inhibitors can induce durable responses in patients with multiple types of advanced cancer, only a limited number of patients have a known reliable biomarker. This study aimed to validate the IMmunotherapy Against GastrIc Cancer (IMAGiC) model, developed in a previous study from a four-gene signature and PD-L1 level, for predicting immunotherapy response. We developed a clinical assay for formalin-fixed paraffin-embedded samples using quantitative real-time polymerase chain reaction to measure the expression levels of the previously published four-gene set. The predictive performance was validated in a cohort of 89 patients with several advanced tumor types. The IMAGiC score was derived from tumor samples of 89 patients covering eight cancer types, and the 73 of 89 patients with available clinical response data were analyzed together with clinicopathological factors. The IMAGiC group (responder vs. non-responder) was determined using a specific IMAGiC score as the cutoff, set by log-rank statistics for progression-free survival (PFS), dividing the patients into 56 (76.7%) non-responders and 17 (23.3%) responders. Clinical responders (complete response/partial response) were more frequent in the IMAGiC responder group than in the non-responder group (70.6 vs. 21.4%). The median PFS of the IMAGiC responder and non-responder groups was 20.8 months (95% CI 9.1-not reached) and 6.7 months (95% CI 4.9-11.1, p = 0.007), respectively.
Among the 17 IMAGiC responders, 11 patients had tumor mutation burden-low and microsatellite-stable tumors. This study validated a predictive model based on a four-gene expression signature. Along with conventional biomarkers, our model could be useful for predicting response to immunotherapy in patients with advanced cancer.
Introduction
Immunotherapy, represented by immune checkpoint blockade, has demonstrated robust antitumor effects in treating various cancer types [1]. Since the initial approval of the cytotoxic T-lymphocyte-associated antigen 4 inhibitor ipilimumab in 2011, multiple checkpoint inhibitors have been developed and approved for multiple cancer types. Unlike conventional chemotherapeutic drugs, checkpoint inhibitors enhance the immune system to destroy cancer cells by blocking negative regulators expressed on the surface of immune or tumor cells [2]. Immunotherapy could induce a more durable response through these modes of action and had relatively fewer adverse events than conventional chemotherapy [3].
However, since the overall response rate to checkpoint blockade monotherapy has been reported to be only about 10-30% in most types of cancer [4], substantial efforts are ongoing to define more reliable predictors of response and to understand the biology of resistance to immunotherapy. Several biomarkers, such as microsatellite instability (MSI) status, programmed death-ligand 1 (PD-L1) expression, and tumor mutation burden (TMB) levels, have been extensively investigated. However, even these modalities do not fully predict the response to immunotherapy, and the proportion of patients positive for these markers is reported to be low in most tumor types [5,6]. Additionally, the optimal cutoff for each modality is somewhat controversial, with various cutoff values used across trials and studies. Therefore, there is still an unmet need for another functional assay that could offer clearer binary discrimination of responsiveness to immunotherapy.
One of the most recently studied biomarkers is analyzing transcriptomic features of tumors. Gene expression profiling could assess the simultaneous changes in the mRNA transcript levels of related genes. Several transcriptomic signatures have been developed to examine and predict the sensitivity or resistance to immunotherapy [7][8][9], most of which were based on inflammation or immune checkpoint pathway signature as cornerstones of each assay.
Recently, using a cohort of 21 patients with metastatic gastric cancer from the Samsung Medical Center, we developed a model named IMmunotherapy Against GastrIc Cancer (IMAGiC) score, based on the expression of a four-gene signature and PD-L1 combined positive score (CPS), that predicts response to pembrolizumab [10]. As a validation of our previous study, we analyzed the performance of the IMAGiC score by applying the model to another independent patient set of various tumor types in the current study.
Patients and Samples
From June 2019 to November 2020, tumor samples with clinicopathological factors were analyzed in patients who had previously received at least one immune checkpoint inhibitor at the Samsung Medical Center. Total RNAs were extracted from 10 (4-µm-thick) sections cut from each formalin-fixed paraffin-embedded tissue. From the tumor-rich areas (>20% tumor volume), RNA was isolated using the RNeasy FFPE kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions.
In our previous study, we developed the IMAGiC score model using the NanoString platform (NanoString Technologies Inc., Seattle, WA, USA); the score was calculated from the gene expression levels of ubiquitin C-terminal hydrolase L1 (UCHL1), tyrosine kinase 2 (TYK2), protein kinase D1 (PRKD1), and armadillo repeat-containing X-linked 1 (ARMCX1) together with the PD-L1 CPS. In this study, quantitative real-time polymerase chain reaction (qRT-PCR) was performed to validate the IMAGiC score model using cDNAs synthesized from total RNAs. PCR amplifications were performed in triplicate wells on a 7900 HT Sequence Detection System (Applied Biosystems, Foster City, CA, USA) under the following conditions: 2 min at 50 °C and 10 min at 94 °C, followed by 40 two-temperature cycles of 95 °C for 15 s and 60 °C for 60 s. PD-L1 CPS was calculated by summing the number of PD-L1-stained cells (tumor cells, lymphocytes, macrophages), dividing the result by the total number of viable tumor cells, and multiplying by 100. Because we changed the platform from NanoString to qRT-PCR, the linear regression model was reconstructed using the mRNA expression levels of these four genes and the PD-L1 CPS of cancer tissues. IMAGiC scores are obtained by multiplying each of the four gene expression levels and the PD-L1 CPS by its weight from the linear regression model and summing the products, with lower scores predicting higher chances of response to immunotherapy.
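Schematically, the score is just a weighted linear combination. The Python sketch below uses invented weights, since the fitted regression coefficients are not reported in this excerpt; the CPS formula follows the definition given above:

```python
# Hypothetical weights: the fitted regression coefficients are not given
# in the text, so these numbers are purely illustrative.
WEIGHTS = {"UCHL1": 0.8, "TYK2": -0.5, "PRKD1": -0.3,
           "ARMCX1": -0.4, "PDL1_CPS": -0.01}

def pd_l1_cps(stained_cells, viable_tumor_cells):
    """PD-L1 combined positive score as defined in the text."""
    return 100.0 * stained_cells / viable_tumor_cells

def imagic_score(expr, cps, weights=WEIGHTS):
    """Weighted sum of the four gene expression levels and the PD-L1 CPS;
    lower scores predict a higher chance of response."""
    genes = ("UCHL1", "TYK2", "PRKD1", "ARMCX1")
    return sum(weights[g] * expr[g] for g in genes) + weights["PDL1_CPS"] * cps

cps = pd_l1_cps(30, 200)                     # 30 stained cells / 200 tumor cells
score = imagic_score({"UCHL1": 1.2, "TYK2": 0.9,
                      "PRKD1": 1.1, "ARMCX1": 0.7}, cps)
responder = score < -0.18    # cutoff reported later in the paper
```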
Medical records of patients were retrospectively gathered for age; sex; cancer type; treatment line, regimen, and number of cycles of immunotherapy; TMB; MSI status; PD-L1 CPS; expression level of each gene for calculating the IMAGiC score; and response to treatment.
Statistical Analysis
Statistical tests included Fisher's exact test for two-sample tests of proportions and the Wilcoxon rank-sum test for two-sample tests of continuous variables that did not follow a normal distribution. Pearson's correlation was used to examine the association between the IMAGiC score and groups defined by several immunotherapy biomarkers and to calculate correlation coefficients. Maximally selected log-rank statistics (maxstat package for R) were used to assess the cutoff point of the IMAGiC score for dividing patients into two categories, IMAGiC responder and non-responder. The Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 were used to assess treatment response. Progression-free survival (PFS) and overall survival (OS) were calculated from the start of treatment to the date of disease progression or death, and to the date of death, respectively. The Kaplan-Meier method and the log-rank test (R package "survival") were used to compare PFS and OS between the IMAGiC responder and non-responder groups. Receiver operating characteristic (ROC) curve analysis was performed, and the area under the curve (AUC) was calculated to evaluate the predictive performance of the IMAGiC model for checkpoint inhibitor response. Two-sided p values of 0.05 or lower were considered significant. R Studio software (version 1.2.1335) was used for statistical analysis.
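The survival comparisons above rest on the Kaplan-Meier product-limit estimator. A minimal, dependency-free Python version (the example follow-up data are invented) behaves as follows:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up duration (e.g. months); events: 1 = progression/death
    observed, 0 = censored.  Returns (event_times, survival_probabilities)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    out_t, out_s = [], []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n = 0
        # group all subjects sharing this follow-up time
        while i < len(order) and times[order[i]] == t:
            n += 1
            d += events[order[i]]
            i += 1
        if d:  # survival only drops at observed event times
            surv *= 1.0 - d / at_risk
            out_t.append(t)
            out_s.append(surv)
        at_risk -= n
    return out_t, out_s

# toy cohort: events at 2, 3 and 5 months; censoring at 3 and 8 months
t, s = kaplan_meier([2, 3, 3, 5, 8], [1, 0, 1, 1, 0])
```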
Validation in Another Cohort
RNA sequencing data of patients with advanced non-small cell lung cancer treated with anti-PD-1/PD-L1 (GSE135222, n = 27) were selected to validate the IMAGiC score [11]. The ComBat function was used for adjusting gene expression data using the sva package because the validation data and test data had different platforms. Since there was no separate report of PD-L1 CPS in this study, we utilized the expression value of the CD274 gene, which encodes PD-L1 protein, as a substitute for PD-L1 CPS, as a measurement of PD-L1 mRNA expression using RNA sequencing is equivalent to PD-L1 expression by immunohistochemistry both analytically and clinically in predicting response to immune checkpoint inhibitor [12].
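ComBat performs an empirical-Bayes location/scale adjustment across batches. The snippet below is a deliberately simplified illustration of the underlying idea (per-gene, per-batch standardization), not the actual ComBat algorithm:

```python
from statistics import mean, pstdev

def standardize_per_batch(values, batches):
    """Per-batch standardization of one gene's expression values:
    subtract the batch mean and divide by the batch s.d., putting
    measurements from different platforms on a comparable scale."""
    out = [0.0] * len(values)
    for b in set(batches):
        idx = [i for i, bb in enumerate(batches) if bb == b]
        m = mean(values[i] for i in idx)
        s = pstdev(values[i] for i in idx) or 1.0  # guard against zero s.d.
        for i in idx:
            out[i] = (values[i] - m) / s
    return out

# same gene measured on two 'platforms' with very different scales
values = [1.0, 2.0, 3.0, 10.0, 20.0, 30.0]
batches = [0, 0, 0, 1, 1, 1]
adjusted = standardize_per_batch(values, batches)
```

After adjustment, samples occupying the same rank within their batch land on the same value, which is the behavior batch correction is after.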
Patient Clinicopathologic Characteristics
Tumor samples were extracted from formalin-fixed paraffin-embedded tissue from a total of 89 patients. Of these, 73 patients were available for the evaluation of response to immunotherapy. Baseline characteristics of patients with available response data, according to treatment response (complete response (CR)/partial response (PR) versus stable disease (SD)/progressive disease (PD)), and of all included patients are shown in Table 1. Overall, two patients had CR, 22 had PR, 31 had SD, and 18 had PD. The median age was 61 years, and 37 (50.7%) patients were men. Eight types of cancer were included in the analysis: cervical cancer, cholangiocarcinoma, colorectal cancer, gastric cancer, hepatocellular carcinoma, melanoma, sarcoma, and urothelial carcinoma. Among gastric cancers, there were no Epstein-Barr virus-positive tumors. Immunotherapy was used alone or in combination with other chemotherapeutic agents. Atezolizumab, avelumab, durvalumab, nivolumab, and pembrolizumab were included, and pembrolizumab-containing regimens were most frequently administered (41.1%). Among known biomarkers for checkpoint blockade treatment, MSI status and PD-L1 CPS were significantly different between the clinical responder (CR/PR) and non-responder (SD/PD) groups.
IMAGiC Score/Group and Treatment Outcome
Initially, the optimal cutoff point of the IMAGiC score that divided patients into two groups, IMAGiC responders and non-responders, was determined by log-rank statistics.
Clinical response to immunotherapy based on the RECIST criteria was not available in 14 (20%) out of 70 IMAGiC non-responders and two (11%) out of 19 responders. The proportion of the IMAGiC responder was significantly higher in the CR/PR group than in the SD/PD group (50.0% vs. 10.2%, p ≤ 0.001) ( Table 1). Conversely, the number of clinical responders (CR/PR) was higher in the IMAGiC responder group than in the nonresponder group (70.6% versus 21.4%, Figure 1A). The IMAGiC score was also significantly different between CR/PR and SD/PD groups ( Figure 1B). The treatment durations and best overall responses based on the immunotherapy regimen are shown in Figure 2. Of the 17 IMAGiC responders, two patients had MSI-high and TMB-high tumors, four others had TMB-high tumors, and the remaining 11 had TMB-low and microsatellite stable (MSS) tumors. Figure 3 demonstrates the PFS and OS data of the IMAGiC responder and non-responder groups. Kaplan-Meier survival curve analysis demonstrated that the IMAGiC responder was significantly associated with longer PFS and OS. The median PFS of responders and non-responders was 20.8 months (95% confidence interval [CI] 9.1-not reached) and 6.7 months (95% CI 4.9-11.1, p = 0.007), respectively. The median OS of each group was not reached but showed a clear separation between the two groups. Most events of OS analysis were censored due to a relatively short median follow-up duration of 6.9 months.
Association between IMAGiC Score/Group and Other Immunotherapy Biomarkers
The relationships between each biomarker were analyzed in a total of 89 tumor samples. In the current study, the cutoff value of the IMAGiC score was determined to be −0.18; patients with scores below this value were classified as IMAGiC responders, and those with scores above this value were classified as non-responders. IMAGiC scores were lower in TMB-high and MSI-high tumors than in TMB-low and MSS tumors (online supplemental Figure S1). However, correlation plots demonstrated a low association between the IMAGiC score and MSI status (r = 0.14) and the IMAGiC score and TMB level (r = 0.11) (online supplemental Figure S2). The PD-L1 CPS and TMB values (as a continuous variable) were significantly higher in the IMAGiC responder group than in the non-responder group. However, the TMB group (high and low, the cutoff of 10 mutations/megabase) and MSI status were not significantly different between the two groups (online supplemental Table S1).
The usefulness of the IMAGiC score and other conventional assays (TMB, MSI status, and PD-L1 CPS) as predictive biomarkers for immunotherapy was further evaluated by AUC analyses based on clinical response (CR/PR versus SD/PD) to immunotherapy (Figure 4). The AUC value of the IMAGiC score was 0.704, and the highest value was obtained for the combination of the IMAGiC score and TMB level (0.76). Additionally, the ROC curves of the PD-L1 CPS and IMAGiC score were compared using DeLong's test for two correlated ROC curves. There was no significant difference between the two models (p = 0.539, Figure 5).
To validate the IMAGiC score model, we applied the IMAGiC score model for another cohort of patients with advanced non-small cell lung cancer who received immune checkpoint inhibitor treatment with published RNA sequencing data (GSE135222, n = 27) [11]. The AUC value for predicting clinical response to immunotherapy of the IMAGiC score, IMAGiC group, and mRNA expression of PD-L1 was 0.76, 0.69, and 0.61, respectively (online supplemental Figure S3). These results support that the IMAGiC score model might be a strong predictive biomarker to predict response for immunotherapy.
Discussion
The success of checkpoint blockade treatment in several metastatic tumors has opened a promising new opportunity for cancer therapeutics. However, only a small subset of patients respond to checkpoint inhibitors, making it crucial to identify patients who could benefit from these therapies. Current molecular testing to predict the response to immunotherapies includes panel or immunohistochemistry-based individual methods (e.g., MMR status or PD-L1 expression). However, the immune response and biology complexity renders it implausible that a single ideal biomarker could sufficiently predict immunotherapy response. In this context, we developed and evaluated the efficacy of the IMAGiC model as a predictive biomarker for advanced pan-cancer patients who received checkpoint blockade treatment. This study identified that the IMAGiC group could be utilized as a binary marker for response to immunotherapy, regardless of other conventional assays such as TMB level or MSI status.
The IMAGiC score is the value that was calculated by assigning different weights to the expression levels of each of the four genes and the PD-L1 CPS. In our study, the cutoff score, which divided patients into two categorical groups, was the point at which the separation of the survival curves for PFS in the two groups was maximized. The cutoff of the IMAGiC score was −0.18, which corresponded to the value of the highest Youden index (sensitivity plus specificity minus 1) on the ROC curve of the IMAGiC score for the prediction of clinical response (CR/PR versus SD/PD). With this cutoff value, the IMAGiC group had a predictive power with a corresponding specificity of 0.918, a sensitivity of 0.500, a positive predictive value of 0.750, and a negative predictive value of 0.790 for predicting clinical response. Therefore, it seems reasonable to set the cutoff of the IMAGiC score as −0.18 even when considering not only the PFS but also the aspect of predicting the clinical response. By employing the ROC curve, the IMAGiC score could predict the immunotherapy response (CR/PR) with an AUC of 0.704, the IMAGiC group with an AUC of 0.699, and the combination of IMAGiC group and TMB level with an AUC of 0.758, which could indicate that the conjunction of the two modalities could demonstrate better performance.
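The Youden-index cutoff selection described here can be sketched as an exhaustive scan over candidate cutoffs (Python; the scores and labels below are toy data, and we follow the paper's convention that lower scores predict response):

```python
def best_cutoff_by_youden(scores, labels):
    """Return the cutoff maximizing J = sensitivity + specificity - 1.
    labels: 1 = clinical responder (CR/PR), 0 = non-responder (SD/PD).
    A case is called a predicted responder when score < cutoff.
    Assumes both classes are present."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_j, best_c = -1.0, None
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s < c and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s < c and y == 0)
        sens = tp / pos
        spec = 1.0 - fp / neg
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

cutoff, j = best_cutoff_by_youden([-0.9, -0.5, 0.1, 0.4], [1, 1, 0, 0])
```

On this toy data the scan settles on the cutoff that separates the classes perfectly (J = 1); on real data, as in the paper, the maximizing cutoff trades sensitivity against specificity.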
UCHL1, TYK2, PRKD1, and ARMCX1 were significantly differentially expressed between the response and no-response groups to pembrolizumab in our previous study [10]. UCHL1 was reported to promote the expression of PD-L1 through the Akt-P65 signaling pathway, and a high UCHL1 expression level inhibited T cell activity [13]. TYK2 is required for the immune response to cancer and the development of inflammation [14]. PKD1 is involved in several biological processes, including cell proliferation, migration, invasion, angiogenesis, and immune regulation [15]. The function of ARMCX1 is relatively unknown, but a recent study reported that ARMCX1 is associated with RNA modification and damage response and could be a potential prognostic marker of gastric cancer [16]. Considering the function and character of each gene, it seems reasonable that these genes constitute the IMAGiC score for predicting the efficacy of immunotherapy.
The results of this analysis suggest that the IMAGiC model and other modalities, including TMB level and MSI status, are only moderately correlated. Although the IMAGiC score was significantly lower in the MSI-high group than in the MSS group, the number of patients with high MSI did not differ between the two IMAGiC groups. Similarly, the TMB level and the IMAGiC model showed a discrepant association. According to previous data, MSI status and TMB level, assessed separately or in combination, could predict the clinical efficacy of immunotherapy across multiple tumor types [17][18][19]. We found several IMAGiC responders whose tumors were TMB-low and MSS, which suggests that multiple approaches using several assays might be necessary for appropriate selection of patients to be treated; our biomarker may help identify small populations predicted by conventional assays to show a low response rate for which immunotherapy might still be beneficial.
Recent studies have established and validated a predictive model for immunotherapy using the gene expression profiling method. Analyzing transcriptomic features of tumors could assess the simultaneous changes in the mRNA transcript levels of related genes. Several transcriptomic signatures have been developed in various tumor types to examine and predict the sensitivity or resistance to immunotherapy [7][8][9]20,21], most of which were based on inflammation or immune checkpoint pathway signature as cornerstones of each assay. In the current study, we built a model with the composition of four genes selected through the differentially expressed gene analysis and PD-L1 CPS, which were not limited to a specific pathway.
With the development of high-throughput technologies, a variety of biomarker strategies have been developed, and multifactorial synergistic predictive markers have been found to be superior to single markers. Comprehensive predictive biomarker models developed by integrating different types of data on different components of tumor-host interactions are the direction of future research and will have a great impact on the field of precision immuno-oncology. Given that the IMAGiC platform correlates well with high TMB in various tumor types and predicted response to checkpoint inhibitors, including in patients with MSI-high tumors who did not respond to immunotherapy, the predictive function of the IMAGiC is thought to be at least equal or superior to that of PD-L1 expression. Although the superior performance of the IMAGiC was not proven in the present study, possibly due to the small number of cases, the IMAGiC can be considered one such multifactorial synergistic biomarker, and we are planning a prospective clinical trial to prove this in a larger cohort.
This study has some inherent limitations. First, as we tried to construct a simplified predictive biomarker by analyzing multiple genomically heterogeneous tumors at once, it might be difficult to generalize the results due to differences in the characteristics of each cancer type. Second, the statistically determined cutoff point for the IMAGiC score in the current study would need to be validated in a generalized and larger dataset. Third, a relatively short follow-up period prevented the maturation of survival data (PFS and OS). Nevertheless, despite these drawbacks, our dichotomized IMAGiC score was identified as a robust biomarker for better predicting treatment response and improved PFS in patients treated with checkpoint blockade treatment.
Conclusions
In conclusion, our data showed the IMAGiC model's potential as a predictive modality for immunotherapy in multiple types of advanced cancer. An appropriate cutoff for the IMAGiC responder should be determined in further large-scale studies. The application of the model to each advanced cancer type will also be further conducted.
Supplementary Materials:
The following are available online at https://www.mdpi.com/article/ 10.3390/cancers13102316/s1, Figure S1: IMAGiC score according to (A) TMB and (B) MSI status; median [quartile range]. Figure S2: Correlations between the IMAGiC model, TMB, MSI status, and PD-L1 CPS. Figure S3: ROC curve and AUC of each biomarker based on response to immunotherapy in advanced non-small cell lung cancer cohort. Table S1: Clinical characteristics and biomarker profile according to the IMAGiC responder/non-responder group.
Author Contributions: All authors had full access to all study data and take responsibility for the integrity of the used data and accuracy of the data analysis. J.L. and K. Institutional Review Board Statement: This study was approved by the Institutional Review Board of Samsung Medical Center (approval number: 2020-04-225-001) and the requirement for informed consent was waived.
Understanding the Functional Interplay between Mammalian Mitochondrial Hsp70 Chaperone Machine Components*
Mitochondrial biogenesis requires the import of several precursor proteins that are synthesized in the cytosol. The mitochondrial heat shock protein 70 (mtHsp70) machinery components are highly conserved among eukaryotes, including humans. However, the functional properties of human mtHsp70 machinery components have not been characterized among all eukaryotic families. To study the functional interactions, we have reconstituted the components of the mtHsp70 chaperone machine (Hsp70/J-protein/GrpE/Hep) and systematically analyzed in vitro conditions for their biochemical functions. We observed that the sequence-specific interaction of human mtHsp70 toward mitochondrial client proteins differs significantly from that of its yeast counterpart Ssc1. Interestingly, the helical lid of human mtHsp70 was found dispensable for binding of the P5 peptide, in contrast to other Hsp70s. We observed that the two human mitochondrial matrix J-protein splice variants differentially regulate the mtHsp70 chaperone cycle. Strikingly, our results demonstrated that the human Hsp70 escort protein (Hep) possesses a unique ability to stimulate the ATPase activity of mtHsp70 as well as to prevent the aggregation of unfolded client proteins, similar to J-proteins. We observed that Hep binds to the C terminus of mtHsp70 in a full-length context and that this interaction is distinctly different from unfolded client-specific or J-protein binding. In addition, we found that the interaction of Hep at the C terminus of mtHsp70 is regulated by the helical lid region. However, the interaction of Hep at the ATPase domain of human mtHsp70 is mutually exclusive with J-proteins, promoting a similar conformational change that leads to ATPase stimulation. Additionally, we highlight the biochemical defects of the mtHsp70 mutant (G489E) associated with a myelodysplastic syndrome.
Mitochondria are ubiquitous, complex, and essential organelles of eukaryotes. Several biochemical reactions in iron metabolism, amino acid biosynthesis, urea metabolism, nucleotide biosynthesis, fatty acid metabolism, and oxidative phosphorylation are carried out within this organelle (1). More than 98% of the proteins that comprise mitochondria are encoded by the nuclear genome (2-4). Because a vast majority of proteins in the mitochondrial matrix are synthesized on cytosolic ribosomes, efficient import of proteins is critical for mitochondrial function (2,3,5,6). This is assisted by mitochondrial chaperone machinery consisting of the 70-kDa family (mtHsp70), J-proteins, and nucleotide exchange factors, which are critical for several cellular functions, including preprotein translocation and their subsequent folding inside the mitochondrial matrix (5,7).
The mtHsp70 chaperone machinery components are highly conserved across species, including in mammalian mitochondria (8). The mtHsp70s are the most abundant and are usually present as one or two copies in the mammalian mitochondrial matrix compartment (9,10). In the mammalian system, human mtHsp70 is commonly referred to as "mortalin" or glucose-regulated protein (GRP75) (11,12). The mouse mitochondrion contains two isoforms of mtHsp70, termed "mot-1" and "mot-2." Both isoforms are involved in several important cellular processes. In contrast to the mouse, human mitochondria have only one mtHsp70, encoded by the HSPA9 gene, which functionally corresponds to mouse mot-2 (9,10). Similarly, two type-I Hsp40 isoforms have been reported in human mitochondria, usually referred to as hTid-1 L (larger isoform) and hTid-1 S (smaller isoform). However, the physiological importance of these two J-proteins is still unclear (13). The different nucleotide states of human mtHsp70 are regulated by the nucleotide exchange factor human GrpEL1 (14,15). The mammalian mitochondrion also possesses an ortholog of the recently discovered yeast Zim17, termed the Hsp70 escort protein "Hep" (16,17). The physiological role of Hep in the human mitochondrial matrix compartment is poorly understood. In vivo, the functional cycle of interaction with an unfolded mitochondrial client protein is initiated in the ATP-bound state of mtHsp70. Binding of the client protein in the peptide binding cleft and simultaneous interaction of the J-domains of J-proteins with the ATPase domain of Hsp70 result in ATP hydrolysis, which stabilizes the interaction with the client polypeptide by converting Hsp70 to the ADP-bound state (18). Nucleotide release factors cause exchange of ADP for ATP, resulting in dissociation of bound clients, and prime Hsp70 for a second cycle of interaction (19-21).
In the mammalian system, the mtHsp70 machinery plays a very critical functional role in protein quality control in the matrix compartment, thereby regulating mitochondria biogenesis. Importantly, altered expression of and specific mutations in the mtHsp70 chaperone machinery lead to severe mitochondrial disorders. In humans, the levels of mtHsp70 are highly up-regulated in all types of tumors and have been used as biomarkers to detect tumor invasiveness (22). Reduced expression of mtHsp70 is commonly observed in Parkinson disease and other age-related diseases, leading to mitochondrial dysfunction (23). In zebrafish, a single point mutation in the C-terminal region of mtHsp70 (G492E, referred to as the MDS mutant) is associated with a hematopoietic defect similar to the myelodysplastic syndrome, presumably due to loss of chaperone function (24). Recently, it has been shown that altered expression of the mammalian Hsp40 isoforms (hTid-1 L and hTid-1 S ) leads to several cellular phenotypes, including apoptosis, malignancy, and cardiac-related disorders such as dilated cardiomyopathic syndrome (13,25). Despite these critical cellular roles, little functional information at the biochemical level is available for the human mtHsp70 chaperone machinery.
To address the functional importance of mammalian mtHsp70 chaperone machinery components in mitochondria biogenesis, we have undertaken the reconstitution of the human mtHsp70 machine to analyze its unique properties at the biochemical level, as compared with the well explored model organism Saccharomyces cerevisiae. Our studies identified several novel biochemical features that are distinct from other Hsp70 systems and are critical for mammalian mitochondrial function. Although mtHsp70 is highly conserved across species, we observed significant differences in sequence-specific binding toward mitochondrial client proteins and in the mechanism of regulation through co-chaperones such as J-proteins. From our results, it is evident that such changes are mainly attributable to overall variations in the architecture of the substrate binding domain acquired during evolution. Additionally, we have unraveled several novel properties of Hep, such as its unique ability to stimulate the ATPase activity of mtHsp70 as well as to prevent the aggregation of non-native unfolded proteins. By reconstitution analysis, we have uncovered several biochemical defects associated with the MDS mutant to understand the functional relevance of the mtHsp70 machinery in disease conditions.
EXPERIMENTAL PROCEDURES
Plasmid Construction and Mutagenesis-Human MOT2, DNAJA3 isoforms 1 and 2, and GRPEL1 open reading frames were PCR-amplified from a HeLa cell cDNA library (Stratagene) using sequence-specific primers. For purification analysis, the His6 tag was introduced at the N terminus of human mtHsp70 and at the C terminus for the other proteins. For solubilization of human mtHsp70 in the bacterial system, its open reading frame was cloned into the pRSFDuet-1 vector along with yeast Zim17 (26). DNAJA3 isoforms 1 and 2 and GRPEL1 open reading frames were cloned into the pET-3a vector. The human HEP clone (pOTB7) was obtained from Open Biosystems and subcloned into the pET-3a vector. The GST fusion constructs of HEP and the substrate binding domain (SBD) of HSPA9 were generated by introducing the respective coding sequences downstream of the GST tag in the pGEX-KG vector (27). Yeast Ssc1 was purified from Escherichia coli BL21(DE3) according to the procedure previously described (28).
For generating deletion mutants of human mtHsp70, appropriate reverse primers were designed and cloned into the pRSFDuet-1 dual expression plasmid. Point mutants of human mtHsp70 and H/Q mutants of hTid-1 S were generated by QuikChange site-directed mutagenesis using high fidelity Pfu Turbo DNA Polymerase from Stratagene. All the clones were verified by DNA sequencing reactions carried out at Eurofins Inc. and Macrogen Inc. All the clones used for purification and analysis were devoid of mitochondrial leader sequence based on the reported mature forms and MITOPROT prediction software.
Expression and Purification of Human Mitochondrial Chaperone Machinery Components-For purification of His-tagged human mtHsp70, coexpression was carried out with yeast Zim17 in the E. coli BL21(DE3) strain by allowing growth at 30°C to an A600 of 0.6, followed by induction with 1 mM isopropyl 1-thio-β-D-galactopyranoside for 8 h. Human mtHsp70 was purified by standard affinity chromatography using nickel-nitrilotriacetic acid fast flow Sepharose. The cell pellet from 500 ml of culture was resuspended in 5 ml of lysis buffer A (25 mM HEPES-KOH, pH 7.5, 20 mM imidazole, 100 mM KCl, 10% glycerol) containing 2.5 mM magnesium acetate, 0.2 mg/ml lysozyme, and protease inhibitor mixture, followed by incubation at 4°C for 1 h. The sample was gently lysed with 0.2% deoxycholate followed by DNase I (10 µg/ml) treatment for 15 min at 4°C. The cell lysate was clarified by centrifuging at 28,000 × g for 30 min at 4°C. The soluble supernatant was incubated with 500 µl of nickel-nitrilotriacetic acid-Sepharose (bed volume) for 2 h at 4°C. Unbound proteins were removed by extensive washing with buffer A, followed by an additional wash with buffer A containing 0.5% Triton X-100. To remove nonspecific contaminants, the resin was extensively washed again with buffer B (25 mM HEPES-KOH, pH 7.5, 20 mM imidazole, 100 mM KCl, 10 mM magnesium acetate, 10% glycerol, and 2 mM ATP) at 4°C, followed by two washes with high salt buffer C (25 mM HEPES-KOH, pH 7.5, 20 mM imidazole, 1 M KCl, 10 mM magnesium acetate, 10% glycerol). Nonspecific impurities were further removed by a buffer A wash containing 40 mM imidazole. The bound proteins were eluted with buffer D (25 mM HEPES-KOH, pH 7.5, 250 mM imidazole, 100 mM KCl, 10 mM magnesium acetate, 10% glycerol) and the samples were dialyzed against appropriate buffers for use in particular experiments. All mtHsp70 deletion and point mutants were purified similarly to the wild type protein, unless otherwise specified.
Histidine-tagged hTid-1 L and hTid-1 S proteins were purified from the insoluble pellet fraction obtained by expressing them in the BL21(DE3) dnaKJ− E. coli strain, using a protocol similar to that previously described for yeast Mdj1 (29). GrpEL1 and human Hep purifications were done using similar procedures as described (29,30). The full-length GST-Hep and GST alone proteins were purified according to the published protocols with minor modifications (28). The GST tag from the SBD of human mtHsp70 was cleaved by thrombin treatment according to the manufacturer's instructions (Novagen). Greater than 95% purity was obtained for the preparations of human mtHsp70 and its mutants, hTid-1 L , hTid-1 S , human GrpEL1, and Hep, as analyzed by SDS-PAGE (supplemental Fig. S1, A and B), size exclusion chromatography, and mass spectrometry (data not shown).
Fluorescence Anisotropy Peptide Binding Assays-25 nM fluorescein-labeled P5 peptide (CALLLSAPRR) was incubated with increasing concentrations of wild type, deletion, or MDS mutant human mtHsp70, or yeast Ssc1, at 25°C in buffer (25 mM HEPES-KOH, pH 7.5, 100 mM KCl, 10 mM magnesium acetate, 10% glycerol). After binding reached equilibrium, anisotropy measurements were recorded with the Beacon 2000 fluorescence polarization system (Invitrogen Corp.) at 25°C with excitation at 490 nm and emission at 535 nm. The data were fitted to a quadratic single-site binding equation using Prism 4 (GraphPad) to calculate the equilibrium dissociation constant (Kd). Similarly, a peptide corresponding to the presequence of cytochrome c oxidase (Cox4: MLSLRQSIRFFKPTRRLC) was also used for binding measurements. For calculating koff rates, excess unlabeled P5 peptide was added and measurements were recorded. The values were fitted to a one-phase exponential dissociation equation using Prism 4.0.
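For readers who wish to reproduce this kind of analysis, the quadratic (ligand-depletion) single-site binding model underlying these fits can be sketched in a few lines of Python. This is only an illustrative reimplementation, not the Prism workflow used in the study; the protein concentrations and anisotropy endpoints below are invented for demonstration, and only the Kd of 14.9 µM comes from the text.

```python
import math

def fraction_bound(p_tot, l_tot, kd):
    """Quadratic single-site binding: fraction of labeled peptide bound
    when ligand depletion is not negligible. Concentrations in uM."""
    b = p_tot + l_tot + kd
    return (b - math.sqrt(b * b - 4.0 * p_tot * l_tot)) / (2.0 * l_tot)

def anisotropy(p_tot, l_tot, kd, r_free, r_bound):
    """Observed anisotropy as a population-weighted average of the
    free and bound species."""
    fb = fraction_bound(p_tot, l_tot, kd)
    return r_free + (r_bound - r_free) * fb

def fit_kd(p_concs, r_obs, l_tot, r_free, r_bound):
    """Crude 1-D geometric grid search for Kd minimizing squared error
    (Prism uses true nonlinear regression; this only illustrates the model)."""
    best_kd, best_sse = None, float("inf")
    kd = 0.01
    while kd < 100.0:
        sse = sum((anisotropy(p, l_tot, kd, r_free, r_bound) - r) ** 2
                  for p, r in zip(p_concs, r_obs))
        if sse < best_sse:
            best_kd, best_sse = kd, sse
        kd *= 1.01
    return best_kd

# Synthetic titration built around the Kd reported for human mtHsp70
# with P5 (14.9 uM); labeled peptide at 0.025 uM, hypothetical anisotropies.
true_kd, l_tot, r_free, r_bound = 14.9, 0.025, 0.05, 0.25
p_concs = [0.5, 1, 2, 5, 10, 20, 40, 80]
r_obs = [anisotropy(p, l_tot, true_kd, r_free, r_bound) for p in p_concs]
print(round(fit_kd(p_concs, r_obs, l_tot, r_free, r_bound), 1))
```

On this noiseless synthetic titration the grid search recovers a Kd close to the input value of 14.9 µM; real data would require proper nonlinear regression with error estimates.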
Single Turnover ATPase Assays-Human mtHsp70-ATP complexes were prepared according to the procedure previously described (31). Briefly, 100 µg of mtHsp70 protein (wild type, deletion, or MDS mutant) was incubated with 50 µCi of [α-32P]ATP (BRIT, 10 mCi/ml) in 100 µl of buffer X (25 mM HEPES-KOH, pH 7.5, 100 mM KCl, and 10 mM magnesium acetate) on ice for 3 min. The complex was immediately isolated on a NICK column (GE Healthcare). Aliquots of 1 µM mtHsp70-[α-32P]ATP complexes containing 10% glycerol were frozen in liquid nitrogen and stored at −80°C. Single turnover experiments were performed in buffer A at 25°C in the presence of various concentrations of proteins (hTid-1 S , hTid-1 L , Hep, and GrpEL1) and P5 peptide as previously described (32). The reaction was stopped at various time intervals and the mixture was separated on thin layer chromatography (TLC) plates and exposed to phosphorimager cassettes (33). The percent conversion of ATP to ADP was determined and the rate of ATP hydrolysis was fitted to a first-order rate equation by nonlinear regression analysis using Prism 4.0.
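The first-order rate equation used for these single turnover fits can likewise be sketched in Python. This is a stand-in for the nonlinear regression done in Prism, and the time points are invented for illustration; only the basal rate constant of 0.032 min−1 is taken from the Results.

```python
import math

def frac_hydrolyzed(t, k, amp=1.0):
    """First-order single-turnover kinetics: fraction of prebound
    ATP converted to ADP at time t (min), rate constant k (min^-1)."""
    return amp * (1.0 - math.exp(-k * t))

def fit_rate(times, fracs, amp=1.0):
    """Estimate k by linearizing ln(1 - f/amp) = -k*t and taking the
    least-squares slope through the origin (a sketch, not Prism's
    nonlinear regression)."""
    num = sum(t * -math.log(1.0 - f / amp) for t, f in zip(times, fracs))
    den = sum(t * t for t in times)
    return num / den

# Synthetic time course at the basal rate reported for human mtHsp70
# (0.032 min^-1); the sampling times are hypothetical.
k_true = 0.032
times = [2, 5, 10, 20, 40, 60]
fracs = [frac_hydrolyzed(t, k_true) for t in times]
print(round(fit_rate(times, fracs), 3))  # 0.032
```

Because the synthetic data are noiseless, the linearized fit returns the input rate constant exactly; fold-stimulation by a co-chaperone would then be the ratio of the fitted k values with and without it.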
Rhodanese Aggregation Assays-Bovine liver rhodanese (Sigma) was used as a model substrate for analyzing the aggregation prevention activity of human mtHsp70 machine components using the same procedure as previously described (34). The prevention of rhodanese aggregation by each specific chaperone (mtHsp70, hTid-1 S , hTid-1 L , and Hep) was analyzed independently or in combination using the appropriate buffer as described (34). The percentage of aggregation was calculated by setting the total value in the absence of chaperones as 100%.
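The normalization described above (aggregation without chaperones set to 100%) amounts to a one-line calculation; the sketch below uses hypothetical light-scattering plateau values purely for illustration.

```python
def percent_aggregation(scatter_signal, scatter_no_chaperone):
    """Normalize a light-scattering reading so that rhodanese
    aggregation in the absence of chaperones defines 100%."""
    return 100.0 * scatter_signal / scatter_no_chaperone

# Hypothetical scattering plateaus (arbitrary units).
no_chap = 0.80     # rhodanese alone
with_chaperone = 0.28
print(round(percent_aggregation(with_chaperone, no_chap), 1))  # 35.0
```

A reading of 35% would correspond to the chaperone suppressing roughly two-thirds of the aggregation seen with rhodanese alone.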
In Vitro GST Pulldown Analysis-Purified GST-Hep (1 µM) was incubated with a 10-µl bed volume of glutathione-agarose beads in 150 µl of GST buffer (20 mM HEPES-KOH, pH 7.5, 150 mM KCl, 10 mM magnesium acetate, 0.2% Triton X-100). After washing to remove unbound proteins, the beads were blocked with 0.1% bovine serum albumin for 20 min at 23°C. The beads were washed twice in GST buffer to remove excess unbound bovine serum albumin. The GST-bound beads were then resuspended in 200 µl of binding buffer and incubated with 2 µM human mtHsp70 (wild type, deletion, or SBD mutants) for 30 min at 23°C. The incubation was continued for an additional 5 min with the addition of 2 mM ATP or ADP to the same reaction mixture. After washing the beads 3 times in GST buffer, bound proteins were resolved by SDS-PAGE followed by Coomassie dye staining.
Conserved Hsp70 Shows a Difference in Sequence Specificity Across the Species-The yeast mitochondrial matrix has three Hsp70 members (Ssc1, Ssq1, and Ecm10) dedicated to several specialized functions (35). To date, our major understanding of the functional interplay between the various matrix chaperone components derives primarily from studies conducted in yeast. In contrast, higher eukaryotic systems such as human mitochondria contain only one mtHsp70 dedicated to all of these diverse cellular functions. The unique biochemical features acquired by human mtHsp70 during higher eukaryotic evolution that are critical to its myriad cellular functions have not been investigated. Therefore, to understand the molecular interactions between human mtHsp70 chaperone components, we have attempted for the first time to purify the major components of the human mtHsp70 chaperone machine and have functionally reconstituted them in vitro.
To explore the functional versatility of human mtHsp70 at the biochemical level, we have analyzed several parameters of the human mitochondrial Hsp70/J-protein/GrpE/Hep system, comparing them with yeast Ssc1. As one of the parameters, the difference in mitochondrial client protein binding properties between human mtHsp70 and yeast Ssc1 was investigated. The client protein binding affinities of human mtHsp70 and yeast Ssc1 were analyzed using a fluorescence anisotropy-based peptide binding assay. We utilized two model mitochondrial targeting sequence-derived peptides: 1) the P5 peptide (CALLLSAPRR), comprising a portion of the mitochondrial targeting sequence of aspartate aminotransferase from chicken, and 2) a portion of the yeast cytochrome oxidase 4 mitochondrial targeting sequence (MLSLRQSIRFFKPTRRLC), named Cox4. Both peptides were labeled with a fluorescein fluorophore covalently attached to the cysteine residue. The underlying principle behind the fluorescence anisotropy assay is a change in the relative tumbling rates of the fluorescently labeled peptides in the free and Hsp70-bound forms in solution. The kinetic parameters obtained are used to measure relative affinities for the client proteins in different nucleotide-bound states of mtHsp70.
Human mtHsp70 yielded a very high dissociation constant (Kd) of 14.9 µM for the P5 peptide (Fig. 1A and supplemental Table S1). Interestingly, the affinity of human mtHsp70 for the P5 peptide was 50-fold lower than that of its yeast ortholog Ssc1 (Kd of 0.3 µM) (Fig. 1A and supplemental Table S1). To further assess the difference in affinity for the P5 peptide between yeast Ssc1 and human mtHsp70, we measured the release rate (koff) of the bound labeled P5 peptide complex upon addition of an excess of unlabeled P5 peptide. Human mtHsp70 revealed an ~2.7-fold greater koff of 0.41 (±0.007) min−1 for the P5 peptide compared with yeast Ssc1, suggesting that the inherent lower affinity of wild type human mtHsp70 toward the P5 peptide is likely due to enhanced release and on rates for peptide binding (supplemental Table S1).
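The one-phase exponential dissociation model used to extract koff from the chase experiment can be sketched as follows. The koff of 0.41 min−1 is the value reported above for human mtHsp70; the chase time points and anisotropy endpoints are invented for demonstration, and the linearized estimator is a stand-in for the nonlinear fit done in Prism.

```python
import math

def dissociation_trace(t, k_off, r0, r_inf):
    """One-phase exponential dissociation of labeled P5 after chase with
    excess unlabeled peptide: r(t) = r_inf + (r0 - r_inf) * exp(-k_off * t)."""
    return r_inf + (r0 - r_inf) * math.exp(-k_off * t)

def estimate_koff(times, rs, r0, r_inf):
    """Average the per-point linearized estimates
    k_off = -ln((r - r_inf)/(r0 - r_inf)) / t (illustration only)."""
    ks = [-math.log((r - r_inf) / (r0 - r_inf)) / t
          for t, r in zip(times, rs)]
    return sum(ks) / len(ks)

# Synthetic chase at the k_off reported for human mtHsp70 and P5
# (0.41 min^-1); anisotropy endpoints are hypothetical.
k_true, r0, r_inf = 0.41, 0.22, 0.06
times = [0.5, 1, 2, 4, 8]
rs = [dissociation_trace(t, k_true, r0, r_inf) for t in times]
print(round(estimate_koff(times, rs, r0, r_inf), 2))  # 0.41
```

With noiseless data the estimator returns the input rate constant; an ~2.7-fold smaller koff for Ssc1 would correspond to roughly 0.15 min−1.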
To identify the client specificity of human mtHsp70 and its relative affinities for different peptide substrates, we also performed peptide binding analysis using the larger (18-mer) hydrophobic Cox4 peptide. Surprisingly, wild type human mtHsp70 showed a higher affinity, with a Kd of 1.49 µM for the Cox4 peptide in the ADP-bound state (Fig. 1B and supplemental Table S1). Thus, human mtHsp70 showed 10-fold greater affinity toward larger peptides containing more hydrophobic sequences, such as Cox4, in comparison to the conventional P5 peptide. However, yeast Ssc1 showed an affinity with a Kd of 23.6 µM for the Cox4 peptide, 15.8-fold lower than that of human mtHsp70 (Fig. 1B and supplemental Table S1). Interestingly, this affinity of yeast Ssc1 for the Cox4 peptide was significantly lower (~78-fold) than its affinity toward the P5 peptide (Fig. 1B and supplemental Table S1). These observations suggest that, although the functional specificity of mtHsp70 across species is highly conserved, significant differences exist at the biochemical level with respect to affinities for client protein interaction.
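The fold-differences quoted in this section follow directly from the measured dissociation constants; as a quick arithmetic check using the Kd values reported here (the dictionary keys are merely labels chosen for this sketch):

```python
# Reported Kd values (uM) from the anisotropy experiments above.
kd = {
    ("hs_mtHsp70", "P5"): 14.9,
    ("Ssc1", "P5"): 0.3,
    ("hs_mtHsp70", "Cox4"): 1.49,
    ("Ssc1", "Cox4"): 23.6,
}

# Human mtHsp70 binds P5 ~50-fold more weakly than Ssc1 ...
print(round(kd[("hs_mtHsp70", "P5")] / kd[("Ssc1", "P5")]))           # 50
# ... and binds Cox4 ~10-fold more tightly than it binds P5 ...
print(round(kd[("hs_mtHsp70", "P5")] / kd[("hs_mtHsp70", "Cox4")]))   # 10
# ... while binding Cox4 ~15.8-fold more tightly than Ssc1 does,
print(round(kd[("Ssc1", "Cox4")] / kd[("hs_mtHsp70", "Cox4")], 1))    # 15.8
# and Ssc1 prefers P5 over Cox4 by roughly the ~78-fold quoted above.
print(round(kd[("Ssc1", "Cox4")] / kd[("Ssc1", "P5")], 1))            # 78.7
```

All four ratios reproduce the fold-changes stated in the text, confirming the internal consistency of the reported constants.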
C-terminal 10-kDa Helical Lid of Human mtHsp70 Is Dispensable for Smaller P5 Peptide Interaction-The deletion of the complete C-terminal helical lid in yeast Ssc1 resulted in a lethal phenotype (36). Also, the lidless variant of yeast Ssc1 showed a severely reduced affinity for peptide binding (36). To probe the potential role of the helical lid in the observed peptide binding affinities of human mtHsp70, we generated three C-terminal deletion mutants by removing helices C to E (M600), truncating at the middle of helix B (M584), and deleting the entire helical region (M555) (supplemental Fig. S1, B and C). The purified deletion mutants of mtHsp70 were assessed for peptide interaction by fluorescence anisotropy measurements using labeled P5. In contrast to yeast Ssc1, the dissociation equilibrium constants for the C-terminal human mtHsp70 truncations (M600, M584, and M555) were not significantly altered, except that the M600 mutant displayed a 2-fold lower affinity for P5 binding (Fig. 1C and supplemental Table S2). We speculate that the less drastic influence of the helical lid on the P5 peptide interaction is due either to an overall differential helical fold of the variable domain or to an altered orientation of the helical lid over the peptide binding cleft, as observed in rat Hsc70 (37,38). Additionally, the higher koff observed for the wild type protein further supports the idea that the helical lid region of human mtHsp70 is not involved in regulating the half-life of the substrate complex in the ADP-bound state, possibly due to a partially open conformation of the SBD.
Binding of client proteins to the SBD brings about a global conformational change in the ATPase domain, thus enhancing the rate of ATP hydrolysis of Hsp70 (39). To analyze the effect of the helical lid domain on interdomain communication, we tested the ability of the deletion mutants to stimulate ATPase activity upon P5 binding using single turnover experiments. We observed that the fold-stimulation of ATPase activity of the deletion mutants correlated well with their peptide affinities, indicating that the deletion mutants retain a normal level of interdomain communication (Fig. 1D). Similarly, the rate of ATP-dependent substrate release for the deletion mutants was similar to that of the wild type, as observed by fluorescence anisotropy measurements (data not shown). In contrast, a partial loss of interdomain communication was observed for yeast Ssc1. These observations further support the view that the helical lid is dispensable for interdomain communication in human mtHsp70. Taken together, we conclude that the C-terminal 10-kDa helical lid region of human mtHsp70 does not play a critical role in determining affinities for P5 peptide binding, in contrast to other well explored Hsp70 systems.
Two Human Mitochondrial J-protein Isoforms, hTid-1 L and hTid-1 S , and Hep Differentially Regulate the ATPase Activity of Human mtHsp70-As a second functional parameter, we investigated the mechanism of interaction of human mtHsp70 with its multiple co-chaperones, which is critical for regulation of the chaperone cycle in mitochondria biogenesis. The functional Hsp70 cycle is initiated in the ATP-bound form, followed by ATP hydrolysis stimulated by the concerted action of J-proteins and locking of the client proteins in the ADP state (18,40,41). Interestingly, the human mitochondrial matrix contains two members of the type-I DnaJ homologs, namely hTid-1 L and hTid-1 S , in contrast to the well studied yeast mitochondria. Human Tid-1 S is a truncated splice variant of hTid-1 L with an insertion of 6 new amino acids but lacking 33 C-terminal amino acids of the larger isoform (13). To assess the functional interaction of the matrix J-proteins with the wild type mtHsp70 protein, we again utilized well established single turnover ATPase experiments. The wild type human mtHsp70 had a basal rate constant of 0.032 min−1. However, this basal rate is 2-fold slower than that of yeast Ssc1 and E. coli DnaK (42). To address the functional differences between these two matrix J-protein splice variants, their basic property of stimulating the ATPase activity of human mtHsp70 was analyzed. At a 1:1 molar ratio of mtHsp70 to J-protein, a 3.2-fold stimulation was observed with hTid-1 L for the wild type protein (Fig. 2A). Interestingly, under similar conditions hTid-1 S showed a robust 11-fold stimulation for wild type mtHsp70 (Fig. 2A). Based on these results, it is tempting to speculate that the difference in stimulatory activities between the hTid-1 J-variants might contribute to their reported in vivo phenotypic differences (13).
Hep belongs to a newly discovered class of zinc-binding proteins implicated in maintaining the functional status of mitochondrial Hsp70 by actively modulating its conformations in different nucleotide-bound states. The function of Hep1/Zim17 was found to be essential in yeast mitochondria. A similar ortholog exists in mammalian mitochondria, including in humans, with a predicted analogous function. The function of Hep is the least studied among the co-chaperones, and how Hep modulates the conformations of Hsp70s is still elusive. Previously, it was speculated that Hep may interact with human mtHsp70 and function as a nucleotide exchange factor (26). Nucleotide exchange factors are critical components of chaperone machinery that accelerate the rate of exchange of ADP for ATP by several orders of magnitude in a typical folding reaction. GrpEL1 is the proposed nucleotide exchange factor of the human mtHsp70 machine, analogous to yeast Mge1 and E. coli GrpE. To assess the nucleotide release activity of human GrpEL1, we used the single turnover ATPase assay to monitor hydrolysis of the prebound mtHsp70-ATP complex of the wild type protein in the presence of GrpEL1. As indicated in Fig. 2B, upper panel, human GrpEL1 showed robust nucleotide exchange activity in the presence of excess unlabeled ATP, thus inhibiting the maximal ATP hydrolysis of wild type mtHsp70. Our results establish the true nature of GrpEL1 function as a nucleotide exchange factor of human mtHsp70. In contrast, even in the presence of an excess of unlabeled ATP, Hep retained the ability to stimulate the ATPase activity of human mtHsp70 (Fig. 2B, lower panel). Surprisingly, in contrast to its yeast counterpart Zim17, human Hep showed a significant ability to stimulate ATP hydrolysis by mtHsp70 under single turnover conditions. At a 1:4 molar ratio of mtHsp70 to Hep, a 4.3-fold stimulation was obtained for wild type mtHsp70 (Fig. 2A).
A significant increase in fold-stimulation was observed at higher concentrations of Hep (data not shown). In conclusion, the ability of Hep to stimulate the ATPase activity of mtHsp70, in contrast to GrpEL1, convincingly rules out the speculation that Hep acts as a nucleotide exchange factor.
Nature of Hep Interaction with mtHsp70 Is Distinctly Different from a Substrate-specific Interaction-The unique ability of Hep to stimulate the ATPase activity, in contrast to yeast Zim17, raises two intriguing possibilities: first, that the interaction between Hep and mtHsp70 is substrate-specific; or second, that Hep functions similarly to the canonical matrix J-proteins. To gain further insights into the true functional nature of Hep in the human mtHsp70 chaperone machine, we analyzed the physical interaction between Hep and mtHsp70 in different nucleotide states using GST pulldown analysis. To investigate the stability of the interaction, we briefly incubated the preformed GST-bound Hep-mtHsp70 complex in the presence or absence of nucleotides (ATP/ADP). As shown in Fig. 3A, lane 1, a stronger interaction was observed between Hep and mtHsp70 in the absence of nucleotides. However, the stability of the complex was reduced 2-fold in the presence of ATP or ADP, indicating that the affinity is compromised in the nucleotide-bound states of mtHsp70 (Fig. 3A, lanes 2 and 3). As a control, GST alone did not interact with wild type mtHsp70 (Fig. 3A, lanes 4-6).
To address the specificity of interaction between Hep and mtHsp70, the preformed complex was incubated in the presence of excess peptide substrates with different nucleotide bound forms of mtHsp70. The P5 peptide did not affect the stability of the Hep-mtHsp70 complex in the presence or absence of bound nucleotides (Fig. 3B, compare lanes 1-2 with 3-4). Our results confirm that the nature of interaction between mtHsp70 and Hep is not a Hsp70-substrate interaction. Recently, it was shown that Hep stably interacts with the isolated ATPase domain of human mtHsp70 by a gel filtration analysis (26). To address whether the substrate binding domain of human mtHsp70 can independently interact with Hep in vitro, we incubated various concentrations of His-tagged SBD of mtHsp70 with GST-Hep and subjected to pulldown analysis. No detectable levels of interaction between SBD and Hep were observed (Fig. 3C, lanes 1-4). However, these results did not rule out the possibility of interaction between Hep and SBD of mtHsp70 in a full-length context.
Hep Binds to SBD of mtHsp70 in a Full-length Context and Its Interaction at the C Terminus Is Different from J-proteins-To analyze the Hep interaction in greater detail in a full-length context, we subjected deletion mutants of mtHsp70 to GST pulldown analysis under similar experimental conditions in different nucleotide-bound states. Surprisingly, greater than 2-fold enhanced interaction was observed for the M600 deletion mutant in the non-nucleotide and ATP-bound states (Fig. 4A, compare top two rows, lanes 1-2 and 3-4). Interestingly, the interaction was restored to wild type levels in the M584 and M555 deletion mutants (Fig. 4A, compare top two rows, lanes 1-2 and 5-8). As a control, none of the deletion mutants showed any detectable interaction with GST alone at the concentrations used for monitoring the Hep interaction (Fig. 4A, bottom two rows, lanes 1-8). The results of this study clearly highlight the importance of the helical lid region in regulating the Hep interaction with the SBD.
To test the influence of the peptide-binding β-sandwich region in the Hep interaction, we generated additional site-specific point mutants of mtHsp70 in the well conserved SBD pocket. These include arch mutants (L450A, A475W), the hydrophobic pocket mutant (V482F), and a double mutant combining the arch and hydrophobic pocket mutations (A475W/V482F) (supplemental Fig. S1, B and C). To validate their functional properties, we purified the mutant proteins and subjected them to peptide binding and J-protein stimulation analysis. Similar to E. coli DnaK, these mutants were found defective in peptide binding and failed to show stimulation by J-proteins (supplemental Fig. S2). Importantly, these mutants were tested for their ability to interact with Hep by GST pulldown analysis. Interestingly, all the mutants showed significantly enhanced interaction with Hep in the non-nucleotide as well as the ATP-bound state in comparison to the wild type protein (Fig. 4B, top two rows, compare lanes 1-12). As a control, GST alone did not show binding of detectable trace amounts of the mtHsp70 mutants (Fig. 4B, bottom two rows, lanes 1-12). Based on these findings, we propose that Hep interacts with both domains of human mtHsp70 in a full-length context and that the C-terminal helical lid is involved in regulating its interaction with the SBD. Type-I J-proteins are known to interact with the SBD of Hsp70s in a full-length context. Based on biochemical and genetic data, it has been speculated that the J-protein interaction with the SBD in the full-length context is similar to that of a typical substrate (42). Therefore, to differentiate Hep binding from J-protein interaction at the C terminus of mtHsp70, we compared their specific ATPase stimulating activities in the deletion and point mutants. Interestingly, all mutants significantly retained Hep-dependent stimulating activity, in contrast to stimulation by the matrix J-proteins (hTid-1 L and hTid-1 S ) (Fig. 5A and supplemental Fig. S2, D and E). These novel findings further strengthen our understanding that the Hep interaction at the C terminus of wild type mtHsp70 is not substrate specific and that the interaction sites are not mutually exclusive with the J-protein.
[Fig. 4 legend fragment: bound proteins were analyzed by SDS-PAGE followed by Coomassie dye staining; GST alone was used as a negative control, and 25% input of wild type human mtHsp70 and mutants was used as a loading control.]

Binding Sites of Hep and J-proteins with the ATPase Domain of Human mtHsp70 Are Mutually Exclusive-Hep and J-proteins are known to interact with the ATPase domain of mtHsp70 and stimulate its ATPase activity (26,43). To test whether the binding sites of Hep and the J-domain of the J-proteins
on the ATPase domain of mtHsp70 are similar, we performed competition experiments using single turnover ATPase assays. For this experiment, we utilized a mutant form of hTid-1 S where histidine from the J-domain "HPD" signature sequence was replaced by glutamine (QPD). Previously, it was reported for J-proteins that the H/Q mutation in the J-domain abolishes its ability to stimulate ATPase activity despite binding to Hsp70 in the ATP-bound state (43). Similarly, we observed that the hTid-1 S QPD mutant failed to stimulate the ATPase activity of human mtHsp70 even at higher concentrations (data not shown). To identify the binding sites, different molar concentrations of the hTid-1 S QPD mutant were titrated against a constant ratio (4:1) of Hep to the mtHsp70-ATP complex. At a 4:1 ratio, Hep alone showed a 4.3-fold stimulatory activity against the preformed mtHsp70-ATP complex (Fig. 5B, left panel, first pair of bars). Interestingly, increasing concentrations of the hTid-1 S QPD mutant showed a robust decline in the ability of Hep to stimulate the ATPase activity of mtHsp70 (Fig. 5B, left panel, second to fourth pair of bars). Conversely, a fixed ratio of hTid-1 S QPD to the mtHsp70-ATP complex was competed by increasing concentrations of Hep, thus showing enhancement in ATPase stimulation of mtHsp70 (Fig. 5B, right panel, compare the first and second bars to the rest). These novel findings suggested that Hep and J-protein interaction at the ATPase domain of human mtHsp70 is mutually exclusive or brings similar conformational changes upon binding, thus stimulating the ATPase activity of mtHsp70.
To identify the critical residues important for Hep interaction at the ATPase domain of human mtHsp70, we created a triple substitution mutant in the ATPase domain of mtHsp70 by replacing the amino acids at positions 196, 198, and 199 with alanines (YND to AAA). The human mtHsp70 YND mutant showed 3-fold elevated basal activity in comparison to wild type (Fig. 5C, first panel). A similar mutation at the corresponding amino acid sequences in E. coli DnaK resulted in decreased ATPase stimulation by J-proteins. Similarly, the YND mutant showed decreased stimulation by hTid-1S in comparison to wild type (Fig. 5C, second panel). Interestingly, the YND mtHsp70 mutant also showed a significant decrease in Hep stimulation in comparison to the wild type protein, further supporting the mutually exclusive nature of Hep and J-protein interaction at the ATPase domain of human mtHsp70 (Fig. 5C, third panel).
Human Mitochondrial Chaperone Machinery Components Prevent Aggregation of Unfolded Client Proteins-As a third functional parameter, we assessed the ability of the matrix chaperone machine components to prevent the aggregation of client proteins. Molecular chaperones are known to prevent aggregation of client proteins in response to various types of physiological stress and play a critical role in the maintenance of matrix protein quality control. Different members of the Hsp70 family have been shown to prevent aggregation to various extents depending on the nature of the substrate as well as the robustness of the chaperone machinery. To understand this function, we monitored the in vitro aggregation of rhodanese as a model client protein. As shown in Fig. 6A, at a 10-fold molar excess of human mtHsp70, ~70% protection was observed with denatured rhodanese. At similar molar ratios, bovine serum albumin as a control did not protect denatured rhodanese against aggregation, indicating that human mtHsp70 can specifically interact with an exposed hydrophobic core of the substrate to prevent formation of non-native conformations (Fig. 6A). Greater than 80% protection against aggregation was observed with higher molar ratios (1:20) of human mtHsp70 (Fig. 6A). Compared with DnaK, human mtHsp70 was less robust in providing protection against the aggregation of rhodanese at the lower concentrations tested, probably due to weaker affinity toward substrates (34).
Similarly, both mitochondrial J-proteins, hTid-1L and hTid-1S, showed an ability to prevent aggregation of denatured rhodanese over a range of concentrations. Comparatively, hTid-1S showed better protection against aggregation at a similar concentration than hTid-1L (Fig. 6, B and C). Because Hep can efficiently stimulate the ATP hydrolysis of mtHsp70, we then tested its ability to interact with unfolded clients over a range of concentrations, similar to the J-proteins. Surprisingly, as indicated in Fig. 6E, greater than 60% protection against aggregation of rhodanese was observed in the presence of a 4-fold molar excess of Hep. These results provide the first evidence that Hep can independently prevent aggregation of client proteins besides its Hsp70 escort activity. To test the aggregation prevention efficiency of mtHsp70 together with J-proteins or Hep as a unit, we performed the experiments using substoichiometric concentrations of hTid-1S, hTid-1L, and Hep in combination with mtHsp70. As indicated in Fig. 6, D and E, better protection was observed when they acted as a unit, indicating their central role in the prevention of aggregation of client proteins in the mitochondrial matrix.
Multiple Functional Defects Associated with Human mtHsp70 MDS Mutant-One of the important goals of our reconstitution analysis was to set a platform to dissect specific functional defects associated with physiologically relevant chaperone mutant phenotypes linked to various mitochondrial disorders in a mammalian system. Recently, several point mutants have been reported in human mtHsp70 that are associated with pathological disorders such as Parkinson disease and myelodysplastic syndrome. To explore the connection between chaperone function and the diseased state in myelodysplastic syndrome, we generated a novel G489E (MDS mutant) point mutant of human mtHsp70, located within the predicted loop (L4,5) of the β-sandwich region, that is associated with this syndrome (supplemental Fig. S1, B and C).
The MDS mutant showed a 5-fold elevated basal ATPase activity and a 1.6-fold larger Kd value for P5 peptide binding as compared with wild type (Fig. 7A and supplemental Table S2). However, in contrast to its peptide affinity, the MDS mutant exhibited a severe defect in P5 stimulation even at higher concentrations of substrate (Fig. 7B and supplemental Table S2). These results indicated a possible interdomain communication defect associated with this novel mutant, which was not explored previously in other Hsp70s. Similarly, this mutant failed to show stimulation by both J-proteins, hTid-1S and hTid-1L (Fig. 7C, first and second panels). Moreover, human GrpEL1 showed a reduced rate of nucleotide exchange activity with G489E, as inhibition plateaued at 10% hydrolysis in comparison to wild type (compare Fig. 7D with Fig. 2B, upper panel). Interestingly, however, it retained the ability to be stimulated by Hep and showed an enhanced interaction with Hep in both nucleotide states (Fig. 7C, third panel, and Fig. 4B, lanes 11 and 12). In conclusion, we hypothesize that the multiple chaperone-specific biochemical defects associated with the genetic G489E mutant impair the chaperone cycle, thus leading to MDS.
DISCUSSION
Our major goal in this study was to understand the molecular mechanism of action of various components of the mitochondrial chaperone machine in human mitochondria. To gain insights into the molecular mechanism of mammalian mitochondrial Hsp70 chaperone function, we reconstituted and analyzed the chaperone properties of the human mtHsp70 chaperone machine components utilizing well established in vitro biochemical tests. Our analysis reveals four distinct and novel biochemical aspects that are important for understanding the chaperone function of human mtHsp70 and its co-chaperones.
The first aspect reveals a sequence-specific interaction of human mtHsp70 with peptide substrates derived from mitochondrial targeting sequences of client proteins. Importantly, human mtHsp70 shows very weak affinity toward shorter and less hydrophobic peptide substrates such as P5, whereas it shows higher affinity toward larger peptides that contain more hydrophobic sequences, such as Cox4, when compared with yeast Ssc1. Notably, C-terminal helical lid deletion has been shown to destabilize peptide binding in other Hsp70s (44). However, our data indicate that the lidless variant of human mtHsp70 exhibits a ~2.5-fold lower koff for P5 as compared with wild type. Therefore, we speculate that the overall fold and relative orientation of the 6-amino acid shorter helical lid over the SBD of human mtHsp70 significantly differs when compared with other Hsp70s.
The second aspect demonstrates the mechanism of regulation of the chaperone activity of human mtHsp70 by the J-protein splice variants hTid-1L and hTid-1S. Our analysis shows that hTid-1S is more efficient in regulating the ATPase cycle of mtHsp70 due to its robust stimulating activity as compared with hTid-1L. This raises an intriguing question about the involvement of the 33 amino acids at the C-terminal end of hTid-1L in negatively regulating its ability to stimulate the ATPase activity of mtHsp70. On the other hand, it is possible that the insertion of 6 new amino acids at the C terminus of hTid-1S leads to a gain of function, thus stimulating more efficiently. Also, both J-protein isoforms displayed differential abilities in preventing aggregation of denatured rhodanese. The functional differences between these two J-protein variants may be the primary reason for the opposite phenotypes seen at the cellular level, as reported (13).
Third, our analysis uncovers a detailed novel mechanism by which the Hep protein modulates the chaperone function of human mtHsp70. Our observations provide the first direct evidence showing that the stability of the Hep interaction is also dependent on the C-terminal region of human mtHsp70. Based on our GST pulldown analysis, we hypothesize that the C-terminal domain α-helices C, D, and E are directly involved in negatively regulating the Hep interaction in the wild type protein. This is supported by two important observations. First, the truncation of the C to E α-helices enhances the interaction of Hep in the case of the M600 deletion mutant. Second, the restoration of the wild type level of interaction in the M584 and M555 deletion mutants suggests that amino acids 584 to 600 might be critical for Hep binding at the C terminus of human mtHsp70. Similarly, an enhanced interaction with Hep was also observed in arch or SBD cleft mutants of human mtHsp70, due to retention of the C-terminal contact sites comprised of amino acids 584 to 600 in these mutants. However, the negative regulation by the C-terminal C, D, and E α-helices may largely be ineffective in the mutant proteins due to alteration in the positioning of these helices relative to the β-sandwich domain.
On the other hand, we do not rule out the possibility that deletion of the α-helices (C, D, and E) may influence the overall relative orientation of the SBD and ATPase domain of human mtHsp70 by altering the position of the interdomain linker region, thus promoting a conformation favorable for better Hep binding in the M600 and arch/cleft point mutants. Recent experimental evidence favors this hypothesis, wherein the interdomain linker region has been shown to play a critical role in the binding of Zim17 to the ATPase domain of mtHsp70 in yeast (45). Such rearrangements in the domain interface of the mutants may promote a higher propensity to generate self-aggregation-prone conformers in the non-nucleotide or ATP-bound state, thereby enhancing their binding with the Hep protein. Also, the functional significance of the C-terminal D and E α-helices is well established in bacterial DnaK (46). Our results emphasize the importance of the C, D, and E α-helices in regulating the interaction of Hep with mtHsp70 in human mitochondria.
The nature of the Hep interaction at the C terminus of wild type human mtHsp70 is unique and distinct from that of substrates or J-proteins. Two lines of biochemical evidence presented here support our hypothesis. 1) The preformed Hep-human mtHsp70 complex in the presence or absence of nucleotides is not destabilized by excess levels of P5 peptide, indicating a different nature of interaction at the C terminus. 2) All deletion and point mutants of human mtHsp70 significantly retained their ability to stimulate ATPase activity, in contrast to the J-proteins. However, the increased Hep interaction with the human mtHsp70 mutants did not produce a significant enhancement in stimulation of ATPase activity. We speculate that the physical interaction through the C terminus of human mtHsp70 is dispensable for modulating the conformational changes necessary for activation of the ATPase domain to hydrolyze ATP.
Interestingly, despite the absence of a J-domain, Hep showed a unique ability to stimulate the ATPase activity of human mtHsp70. However, we conclusively rule out the possibility of Hep functioning as a nucleotide exchange factor, as speculated earlier based on the stimulatory activity of Hep observed in single turnover experiments (26). Furthermore, our biochemical data demonstrate that the mechanism of Hep action closely resembles type-I J-protein function. To further confirm the specificity of stimulation, we identified critical amino acid residues at the N terminus of human mtHsp70 that are essential for inducing the conformational changes during ATP hydrolysis. Surprisingly, these residues were found to be canonical with those essential for J-protein stimulation of E. coli DnaK by DnaJ and of human mtHsp70 by hTid-1S. Moreover, binding of hTid-1S can be competed out by Hep, and vice versa, indicating that the interaction sites of these two proteins on the ATPase domain of human mtHsp70 are mutually exclusive. The overlap in binding sites between Hep and hTid-1S indicates that they may induce similar conformational changes in the ATPase domain that result in acceleration of ATP hydrolysis. However, the rate at which they couple conformational changes to ATP hydrolysis in the chaperone cycle might differ, owing to the differences in their ATPase stimulation activities.
Besides interacting with Hsp70 partner proteins, our in vitro biochemical experiments reveal that Hep also possesses human mtHsp70-independent functions. For example, Hep exhibits bona fide chaperone activity, because it can bind unfolded substrates such as rhodanese, preventing their aggregation even in the absence of the Hsp70 partner protein, similar to type-I J-proteins. Based on these chaperone-specific functions performed by Hep, we propose that Hep represents a member of a new class of Hsp70 co-chaperones evolved for mitochondrial biogenesis in higher eukaryotes. However, the specific mitochondrial client proteins other than Hsp70 that require the assistance of Hep to prevent aggregation in response to various physiological stress stimuli remain to be elucidated.
The fourth aspect focuses on understanding the chaperone-specific functional defects associated with the MDS mutant. The mutant shows significant defects in interacting with the J-protein co-chaperones (hTid-1S and hTid-1L) as well as a reduced rate of nucleotide exchange by GrpEL1. The basal ATPase activity is significantly elevated, which, together with the loss of stimulation by client peptides, indicates an interdomain communication defect associated with this novel loop mutant. Therefore, we hypothesize that the loss of mtHsp70 activity in the MDS mutant impedes the import of many precursor proteins and their subsequent folding in the matrix, leading to mitochondrial dysfunction. Our results establish that the loss of chaperone function may be the leading cause of myelodysplastic syndrome. To evaluate the importance of this residue in other Hsp70s, we made a similar mutation at the corresponding position in yeast Ssc1, which resulted in a lethal phenotype, signifying the importance of this residue for the proper functioning of mtHsp70.3 Our biochemical insights will provide further understanding of this syndrome at the physiological level.
In summary, our results establish and highlight several unique and distinct biochemical features of the human mitochondrial chaperone machine (mtHsp70/J-protein/GrpE/Hep) that are critically required for protein quality control in the mitochondrial matrix. Additionally, they confirm the need for multiple co-chaperones for the proper mitochondrial biogenesis required to fulfill cellular demands in the mammalian system. Our investigation also provides key insights connecting chaperone function to a diseased state such as myelodysplastic syndrome. Together, our results provide a better platform for future investigations of mtHsp70-based therapeutic design for treating various mitochondrial disorders.
A Peri-Ictal EEG-Based Biomarker for Sudden Unexpected Death in Epilepsy (SUDEP) Derived From Brain Network Analysis
Sudden unexpected death in epilepsy (SUDEP) is the leading seizure-related cause of death in epilepsy patients. There are no validated biomarkers of SUDEP risk. Here, we explored peri-ictal differences in topological brain network properties from scalp EEG recordings of SUDEP victims. Functional connectivity networks were constructed and examined as directed graphs derived from undirected delta and high frequency oscillation (HFO) EEG coherence networks in eight SUDEP and 14 non-SUDEP epileptic patients. These networks were proxies for information flow at different spatiotemporal scales, where low frequency oscillations coordinate large-scale activity driving local HFOs. The clustering coefficient and global efficiency of the network were higher in the SUDEP group pre-ictally, ictally and post-ictally (p < 0.0001 to p < 0.001), with features characteristic of small-world networks. These results suggest that cross-frequency functional connectivity network topology may be a non-invasive biomarker of SUDEP risk.
INTRODUCTION
Sudden unexpected death in epilepsy (SUDEP) is the leading cause of epilepsy-related mortality, however, the etiology remains poorly understood (Thurman et al., 2014;Devinsky et al., 2016;Sveinsson et al., 2017). Fear of SUDEP can decrease quality of life for patients and family members. There are no validated SUDEP risk biomarkers, which are needed to develop and assess interventions and prevention strategies for individuals and more broadly (Devinsky et al., 2016;Odom and Bateman 2018).
Several clinical factors correlate with SUDEP risk. A few of these, such as the occurrence and frequency of generalized tonic-clonic and other types of seizures over the preceding year, duration of epilepsy, and use of multiple anti-seizure medications, among others (Novak et al., 2015), have been combined to form the SUDEP Risk Inventory (SUDEP-7), which provides a total score suggestive of overall SUDEP risk (Hesdorffer et al., 2011). Studies of SUDEP biomarkers have focused mainly on predicting SUDEP risk through findings correlating with SUDEP-7, other clinical risk factor algorithms, or heart rate variability (Jha et al., 2021; Sivathamboo et al., 2021). Many of these biomarkers have only an indirect association with SUDEP (Odom and Bateman 2018; Ryvlin et al., 2019). Few studies have prospectively assessed their predictive power (Novak et al., 2015; Ryvlin et al., 2019). SUDEP biomarkers with a more direct association may include peri-ictal cardiorespiratory dysfunction and prolonged post-ictal generalized electroencephalography (EEG) suppression (PGES), although findings are contradictory (Kang et al., 2017; Odom and Bateman 2018; Ryvlin et al., 2019). EEG-based biomarkers, including a prolonged electroclinical tonic phase and the dynamics of seizure termination, are correlated with PGES duration but have not been tested in a SUDEP patient cohort (Tao et al., 2013; Alexandre et al., 2015; Bauer et al., 2017; Grigorovsky et al., 2020). Delta-gamma cross-frequency interactions are a potential surrogate of PGES (Grigorovsky et al., 2020) and were found to persist during the peri-ictal period of a SUDEP patient. To date, studies exploring EEG SUDEP biomarkers have neglected measures targeting functionally aberrant connections in brain networks, which are characteristic of the epileptic brain. Functional brain networks reflect the complex interactions in the brain and may distinguish pathological from normally functioning brain.
Brain rhythms such as delta and high frequency oscillations (HFOs) are important in information processing in the brain, with low frequencies being more spatially distributed and responsible for coordinating local high frequency activity. In this study, we aim to compare peri-ictal network differences in SUDEP patients using graph theory measures of directed functional connectivity. Specifically, we construct novel directed graphs combining delta and HFO functional connectivity to capture the cross-frequency interactions between different brain regions. We use these directed graphs and their topologies to discern between SUDEP and non-SUDEP epileptic patients.
Data Acquisition
Scalp EEG recordings for 14 non-SUDEP (with 77 peri-ictal segments) and 8 SUDEP (with 25 peri-ictal segments) patients were provided through a consortium formed by the Toronto Western Hospital, the NYU Comprehensive Epilepsy Center, and the Phramongkutklao Royal Army Hospital (Table 1). Non-SUDEP patients had focal (temporal or extratemporal lobe) epilepsy, were resistant to anti-seizure medications and were undergoing presurgical evaluation. Ictal segments were marked by board-certified neurologists and electroencephalographers. The institutional review boards of the consortium approved the study protocol and all patients gave informed consent.
Patient data were originally filtered with a 0.1 Hz high pass filter during acquisition and were later pre-processed by removing power line interference using a finite impulse response (FIR) notch filter at 50 Hz or 60 Hz (data centre location dependent) and associated harmonics. Recordings used an acquisition reference at FCz, grounded at Fpz. Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund -Research Excellence; and the University of Toronto (Loken et al., 2010;Ponce et al., 2019).
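The power-line removal step described above can be sketched as follows. This is a minimal illustration using SciPy's FIR design tools; the tap count and stop-band width are hypothetical choices, since the paper reports only that an FIR notch at 50/60 Hz and its harmonics was used.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def notch_powerline(x, fs, line_freq=60.0, width=4.0, numtaps=401):
    """Suppress the power-line frequency and its harmonics with FIR band-stop
    filters applied in zero phase (filtfilt).

    `width` (half-width of each stop band, Hz) and `numtaps` are illustrative
    parameters, not values from the paper.
    """
    y = np.asarray(x, dtype=float)
    nyq = fs / 2.0
    for k in range(1, int(nyq // line_freq) + 1):
        f0 = k * line_freq
        lo, hi = f0 - width, f0 + width
        if hi >= nyq:
            break
        # Two cutoffs with pass_zero=True yield a band-stop FIR filter.
        taps = firwin(numtaps, [lo, hi], fs=fs, pass_zero=True)
        y = filtfilt(taps, 1.0, y)  # forward-backward filtering, zero phase lag
    return y
```

Zero-phase filtering is used here so that the notch does not distort the phase estimates that the subsequent wavelet phase coherence analysis depends on.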
Wavelet Phase Coherence
Wavelet phase coherence (WPC) was computed between pairs of scalp EEG electrodes. The phase of different frequency bands was extracted through the complex wavelet transform (Cotic et al., 2015). The Morlet complex wavelet transform was used with a mother wavelet of central frequency 0.8125 Hz and bandwidth 5 Hz, as previously used on EEG data (Grigorovsky et al., 2020). The relative phase difference between two electrodes was obtained from

$$\Delta\phi_{ij}(s,\tau) = \arg\left[W_i(s,\tau)\,W_j^{*}(s,\tau)\right],$$

where $W$ is the wavelet coefficient, $W^{*}$ is its complex conjugate, $s$ is the scaling coefficient and $\tau$ is the time shift. The phase coherence between two electrodes is computed over a time window ($N \cdot \Delta t$) expressed as an integer multiple $N$ of the sampling period $\Delta t$, and is defined as

$$\rho_{ij}(s,\tau) = \frac{1}{N}\left|\sum_{n=1}^{N}\exp\left[i\,\Delta\phi_{ij}(s,\tau + n\Delta t)\right]\right|.$$

WPC was applied to each wavelet central frequency in the delta (0.5-2 Hz) and HFO (80-120 Hz) ranges, in increments on a logarithmic base-two scale. Wavelet frequency scales took the form $2^{k}$, where $k$ ranged from −2.0 to 1.0 for the delta range and from 6.3 to 6.9 for the HFO range in 0.1 incremental steps. The WPC window size was proportional to 8 cycles for each frequency.
The undirected connectivity matrices were computed at 1 s intervals by assigning each edge corresponding to a pair of scalp electrodes the WPC averaged over the frequency range (delta or HFO) and over 1 s temporal windows. The edges between electrodes are undirected, yielding symmetric connectivity matrices.
$$e_{ij}(t) = \left\langle \rho_{ij}(s,\tau) \right\rangle_{s,\;\tau \in [t,\,t+1\,\mathrm{s})}, \qquad e_{ij} \in E,\; i, j \in V,$$

where $e_{ij}(t)$ and $\rho_{ij}(s,\tau)$ are the edge weight and the WPC between nodes $i$ and $j$, and $E$ and $V$ are the set of all edges and the set of vertices, respectively, in the graph.
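A minimal numpy sketch of the WPC computation at a single wavelet frequency is given below. The 8-cycle wavelet width and windowed averaging follow the description above, but the wavelet normalization, the function names, and the single-frequency simplification (rather than averaging over the full scale range) are assumptions for illustration.

```python
import numpy as np

def morlet(fs, freq, n_cycles=8.0):
    # Complex Morlet wavelet; temporal width set by the number of cycles at `freq`.
    sigma_t = n_cycles / (2.0 * np.pi * freq)
    t = np.arange(-5.0 * sigma_t, 5.0 * sigma_t, 1.0 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2.0 * sigma_t**2))

def wavelet_phase(x, fs, freq):
    # Instantaneous phase of x at `freq` via convolution with the complex wavelet.
    return np.angle(np.convolve(x, morlet(fs, freq), mode="same"))

def wavelet_phase_coherence(x1, x2, fs, freq, win):
    # One WPC value per non-overlapping window of `win` samples:
    # the magnitude of the mean unit phasor exp(i * delta-phi).
    dphi = wavelet_phase(x1, fs, freq) - wavelet_phase(x2, fs, freq)
    n = (len(dphi) // win) * win
    return np.abs(np.exp(1j * dphi[:n]).reshape(-1, win).mean(axis=1))
```

Phase-locked channels give values near 1; channels with drifting relative phase give lower values.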
Directed Low to High Frequency Connectivity
The undirected connectivity matrices were derived from delta-HFO WPC. The degree of each node was computed for both the delta and HFO WPC connectivity adjacency matrices to construct the directed delta-HFO network. Edges between pairs of nodes i and j were computed from the product of the delta degree of node i and the HFO degree of node j. This measure represents information flow across the brain pertaining to the coexistence of simultaneous cohered delta and cohered HFO regions, which are two frequency ranges associated with seizure activity (Guirgis et al., 2015; Grigorovsky et al., 2020). The nature of this simultaneous coexistence may or may not be associated (Figure 2) with classical cross-frequency coupling (Tort et al., 2008; Canolty and Knight 2010; Stankovski et al., 2017).
Additionally, this measure can identify phase-amplitude cross-frequency coupling (Supplementary Figure S1), where low frequency oscillations coordinate large-scale activity driving local HFOs.
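The degree-product construction of the directed delta-HFO graph can be illustrated in a few lines of numpy; the function name and the zeroed diagonal (no self-loops) are assumptions not stated in the paper.

```python
import numpy as np

def directed_delta_hfo(delta_adj, hfo_adj):
    """Directed cross-frequency graph: edge (i -> j) is the delta degree of
    node i times the HFO degree of node j.

    `delta_adj` and `hfo_adj` are symmetric undirected WPC adjacency matrices
    with zero diagonals.
    """
    delta_deg = delta_adj.sum(axis=1)  # weighted degree in the delta network
    hfo_deg = hfo_adj.sum(axis=1)      # weighted degree in the HFO network
    directed = np.outer(delta_deg, hfo_deg)
    np.fill_diagonal(directed, 0.0)    # assume no self-loops
    return directed
```

Note that the resulting matrix is generally asymmetric: a node that is a strong delta hub sends strong edges, while a strong HFO hub receives them.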
Graph Theory Measures
Global coherence was computed as a measure for the undirected delta and HFO connectivity networks. First, the eigenvalues of the connectivity matrix were computed and sorted. The global coherence is the ratio of the largest eigenvalue to the sum of all eigenvalues and has previously been used to analyze spatiotemporal EEG dynamics (Cimenser et al., 2011).
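As a sketch, the eigenvalue-ratio computation of global coherence might look like the following, assuming a symmetric coherence matrix with unit self-coherence on the diagonal (so that it is positive semi-definite and the eigenvalue sum is positive):

```python
import numpy as np

def global_coherence(adj):
    """Largest eigenvalue of a symmetric connectivity matrix divided by the
    sum of all eigenvalues. Assumes unit self-coherence on the diagonal."""
    evals = np.linalg.eigvalsh(adj)  # ascending order for symmetric matrices
    return evals[-1] / evals.sum()
```

A fully coherent network (all pairwise coherences equal to 1) gives a value of 1, while a network with no off-diagonal coherence gives 1/n for n electrodes.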
The temporal mean of the global coherence was used to assess group differences. Two network measures were computed: the clustering coefficient and the global efficiency. Together, these measures provide a description of the connection topology in the brain network.
The clustering coefficient $C_i^{\rightarrow}$ of each node/electrode $i$ in the directed graph is defined as the fraction of directed edges that exist between the neighbours of node $i$ out of the maximum possible number of such directed edges, $k_i(k_i - 1)$, where $k_i$ is the degree of node $i$ (Watts and Strogatz 1998; Fagiolo 2007; Liao et al., 2011). The clustering coefficient of the network is the mean clustering coefficient over all nodes.
The global efficiency of the network is a measure which quantifies how information flows throughout the network. Graphs with a high global efficiency have on average shorter paths connecting any two nodes within the network. The global efficiency was used instead of the characteristic path length as the shortest path length is not defined when a network contains two nodes that are not connected by any path. The global efficiency is the average efficiency, defined as the inverse of the shortest path between two nodes, over all electrode pairs (Latora and Marchiori 2001;Mitsis et al., 2020).
$$E_{\mathrm{glob}} = \frac{1}{n(n-1)} \sum_{i \neq j} \frac{1}{d_{ij}},$$

where $d_{ij}$ is the shortest path between nodes $i$ and $j$ in the directed graph and $n$ is the number of nodes. Graph theory measures were computed using the Brain Connectivity Toolbox in Python (Rubinov and Sporns 2010). Graph visualization was created using the circular layout graph of the MNE-Python software package (Gramfort et al., 2013).
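The paper computes these measures with the Brain Connectivity Toolbox; as an illustrative alternative, the sketch below obtains the mean directed clustering coefficient via networkx (which implements the Fagiolo 2007 definition for directed graphs) and the global efficiency via SciPy shortest paths. Converting connection strengths to path lengths as 1/weight is a common convention but an assumption here, since the paper does not state its mapping.

```python
import numpy as np
import networkx as nx
from scipy.sparse.csgraph import shortest_path

def global_efficiency(adj):
    """Mean inverse shortest path length over all ordered node pairs of a
    directed weighted graph; disconnected pairs contribute zero."""
    n = adj.shape[0]
    with np.errstate(divide="ignore"):
        lengths = np.where(adj > 0, 1.0 / adj, np.inf)  # strength -> length
    d = shortest_path(lengths, method="D", directed=True)  # Dijkstra
    off_diag = ~np.eye(n, dtype=bool)
    return np.mean(1.0 / d[off_diag])  # 1/inf -> 0 for unreachable pairs

def mean_directed_clustering(adj):
    """Mean directed clustering coefficient (Fagiolo, 2007) via networkx."""
    G = nx.from_numpy_array(adj, create_using=nx.DiGraph)
    return float(np.mean(list(nx.clustering(G).values())))
```

For a fully connected directed graph both measures equal 1, which provides a quick sanity check.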
Statistical Analysis
The DABEST Python toolbox was used for two-group comparisons of graph measures, generating Gardner-Altman estimation plots for independent group mean differences (Ho et al., 2019). Bootstrapping was used to obtain the distribution and confidence intervals for the difference between groups. The Wilcoxon rank sum test was also used to test for significance between the two groups, as a separate test from the bootstrapping confidence intervals.

Frontiers in Network Physiology | www.frontiersin.org | April 2022 | Volume 2 | Article 866540
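A minimal numpy sketch of the bootstrap mean-difference estimate, in the spirit of the Gardner-Altman plots produced by DABEST, might look like this; the resample count and percentile confidence interval are illustrative choices:

```python
import numpy as np

def bootstrap_mean_diff(a, b, n_boot=5000, ci=95, seed=0):
    """Observed difference in group means (b - a) plus a bootstrap
    percentile confidence interval for that difference."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = np.array([
        rng.choice(b, size=b.size, replace=True).mean()
        - rng.choice(a, size=a.size, replace=True).mean()
        for _ in range(n_boot)  # resample each group with replacement
    ])
    lo, hi = np.percentile(diffs, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return b.mean() - a.mean(), (lo, hi)
```

If the confidence interval excludes zero, the group difference is unlikely to be a sampling artifact, which complements the rank-sum test used in the paper.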
RESULTS
Abnormalities of brain networks have been implicated in different brain disorders, including epilepsy (Liao et al., 2010). In this study, we explore network properties as a biomarker of SUDEP. 1) We constructed functional connectivity networks from delta and HFO WPC. 2) These networks were combined to create cross-frequency directed graphs as a proxy for information flow at different spatiotemporal scales of the brain. The directed graphs were validated using simulated data and seizure examples from a SUDEP and a non-SUDEP epileptic patient. 3) We compared the topological differences in the directed networks between the two groups, yielding a biomarker for SUDEP.
Functional Connectivity Network
FCN gives insight into disease-induced changes in synaptic plasticity and the efficiency of communication within neural networks in the brain (Bettus et al., 2008). WPC has previously been used as a measure for FCN in the brain, depicting coupling of different brain regions by way of specific brain rhythms (Cotic et al., 2015). Figure 1 shows scalp EEG traces of representative seizures in a non-SUDEP (P2) and a SUDEP patient (P19), and examples of the corresponding WPC between two electrodes. The chosen electrodes showed the highest closeness centrality during seizure. Differences in the WPC distributions reaffirmed the choice of the two frequency ranges. The analysis was repeated for each pair of nodes and averaged over the frequency range and temporal windows to provide the connectivity strength between the two nodes.

Figure caption (fragment): Functional connectivity graphs with adjacency matrices computed by averaging the wavelet phase coherence between pairs of electrodes over the entire traces and over the delta and HFO frequency ranges, respectively. (E) Directed connectivity graph computed from the connectivity graphs in C and D. Edges between pairs of electrodes i and j (nodes) are computed from the product of the delta degree of electrode i and the HFO degree of electrode j. The directed graph accurately represents the connection between the low frequency hub and the high frequency hub and the direction of information flow.
Validation of the Cross-Frequency Directed Graph Connecting Low and High Frequency Hubs
Complex information flow involves multi-frequency large-scale organization in the brain (Buzsaki, 2006; Jirsa and Müller, 2013). We used directed networks to provide information about the coupling directionality between low and high frequency rhythms. The directed graph is constructed from the low frequency and high frequency graphs, with edge weights corresponding to the product of the low frequency graph degree of one electrode and the high frequency graph degree of another electrode. To validate our cross-frequency directed network, we simulated EEG rhythms and low/high frequency hubs. Starting with an empty set of EEG signals, we began populating specific channels with chosen delta and HFO rhythms. All channels contained Gaussian white noise. We aimed to create one low frequency hub and one high frequency hub and confirm that the directed connectivity network showed a directed edge between them. The low frequency hub was chosen to be electrode P7. This hub was designed to contain two different delta rhythms which then would each spread to another electrode (P3 and O1). The high frequency hub was chosen to be electrode F8. This hub was designed to contain two different HFO rhythms which then would each spread to a nearby electrode (Fp2 and C4). As expected, the low and high frequency networks shown in Figures 2C,D showed the desired connections. The directed connectivity network correctly identified the connection between the low and high frequency hubs as per our network design. Furthermore, the network analysis proved to be consistent when applied to recorded EEG data from a SUDEP (P19) and non-SUDEP patient (P1) (Figure 3). Taken together, these results validate and demonstrate how the directed graph captures cross-frequency interactions within the brain.

FIGURE 4 | Dynamic changes in the ictal delta-HFO network not found in SUDEP patients. (A,B) Undirected functional connectivity computed from average wavelet phase coherence between each pair of electrodes using a 10 s sliding window throughout a seizure event in a non-SUDEP patient. (C) Directed graph computed using the delta and HFO graphs, connecting nodes with strong low-frequency degrees to nodes with strong high-frequency degrees. (E,F) Undirected functional connectivity computed from average wavelet phase coherence between each pair of electrodes using a 10 s sliding window throughout a seizure event in a SUDEP patient. (G) Directed graph computed using the delta and HFO graphs. Note the difference in seizure network dynamics of the directed graphs in the non-SUDEP patient in (C) compared to the SUDEP patient in (G).

Frontiers in Network Physiology | www.frontiersin.org April 2022 | Volume 2 | Article 866540
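The directed-graph construction described above (edge i→j weighted by the product of node i's delta degree and node j's HFO degree) reduces to an outer product of the two degree vectors. A minimal sketch with numpy; the 4-channel adjacency matrices are toy numbers, not patient data:

```python
import numpy as np

def degree(adj):
    """Weighted degree of each node of an undirected connectivity graph."""
    return adj.sum(axis=1)

def cross_frequency_directed(adj_delta, adj_hfo):
    """Directed delta->HFO graph: the weight of edge i->j is the product of
    the delta degree of node i and the HFO degree of node j."""
    return np.outer(degree(adj_delta), degree(adj_hfo))

# Toy example: node 0 is the low-frequency (delta) hub, node 3 the HFO hub.
A_delta = np.array([[0, 1, 1, 0],
                    [1, 0, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 0, 0]], float)
A_hfo = np.array([[0, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 1],
                  [0, 1, 1, 0]], float)
W = cross_frequency_directed(A_delta, A_hfo)
```

The strongest directed edge then runs from the delta hub (node 0) to the HFO hub (node 3), mirroring the hub-to-hub edge recovered in the simulation above.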
Temporal Changes of the Delta-HFO Directed Network During Seizure
We constructed the delta-HFO directed network for consecutive 10-s duration time windows during peri-ictal regions in representative SUDEP (P19) and non-SUDEP (P1) patients ( Figure 4). The graph visualizations indicated differences in network seizure dynamics in a non-SUDEP patient that were not observed in a SUDEP patient (Figure 4). The directed graphs ( Figures 4D-G) further highlighted the seemingly unchanging network of the non-SUDEP patient during the ictus. Topological measurements of the directed graphs were used to compare the networks between the two patients. The pre-ictal, ictal, and post-ictal mean clustering coefficient and global efficiency were higher in the SUDEP than in the non-SUDEP patient ( Figure 4D).
Peri-Ictal Topological Network Changes as Biomarker for SUDEP
The clustering coefficient and global efficiency measures were used to compare group differences between non-SUDEP and SUDEP epileptic patients ( Figure 5). The clustering coefficient is an average measure of how node triples are connected within the network and specifies the tendency for nodes to cluster together. The clustering coefficient was significantly higher in the SUDEP group during the pre-ictal, ictal, and post-ictal segments (preictal: p = 0.00100, ictal: p = 0.00001, post-ictal: p = 0.00100). The global efficiency, which indicates how efficiently information is transferred between nodes, was shown to be significantly higher in the SUDEP group pre-ictally, ictally, and post-ictally (pre-ictal: p = 0.00012, ictal: p = 0.00001, post-ictal: p = 0.00071). These results show that network topology is a potential biomarker in assessing SUDEP risk.
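The two topological measures used for the group comparison are standard and are available in graph libraries such as networkx (`average_clustering`, `global_efficiency`); a dependency-free sketch for small binary undirected graphs, for illustration only:

```python
from itertools import combinations

def clustering_coefficient(adj):
    """Mean local clustering coefficient of an undirected graph given as a
    0/1 adjacency matrix (list of lists): how often a node's neighbours are
    themselves connected."""
    n = len(adj)
    total = 0.0
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]]
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(adj[u][v] for u, v in combinations(nbrs, 2))
        total += 2.0 * links / (k * (k - 1))
    return total / n

def global_efficiency(adj):
    """Average inverse shortest-path length over all ordered node pairs,
    with shortest paths found by breadth-first search."""
    n = len(adj)
    eff = 0.0
    for s in range(n):
        dist = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in range(n):
                    if adj[u][v] and v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        eff += sum(1.0 / d for v, d in dist.items() if v != s)
    return eff / (n * (n - 1))
```

A fully connected triangle gives both measures equal to 1, while a 3-node chain gives clustering 0 and efficiency 5/6, illustrating how the two measures capture local versus global integration.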
DISCUSSION
We have found topological differences in the peri-ictal delta-HFO directed networks of epileptic patients with SUDEP exhibiting significantly higher pre-ictal, ictal, and post-ictal clustering coefficient and global efficiency in the delta-HFO directed networks. These data suggest a higher connectivity and more efficient flow of information in seizure networks of SUDEP patients. Both high clustering coefficient and high global efficiency are features that resemble a small-world organized network as first described by Watts and Strogatz (Watts and Strogatz 1998). These networks are both locally and globally efficient, combining high clustering and short characteristic path length features (Latora and Marchiori 2001). The observed network changes suggest that cross-frequency network topology is a possible SUDEP biomarker. The delta-HFO directed networks captured the complexity of the seizure networks and differences between SUDEP and non-SUDEP groups. The importance of these rhythms is consistent with previous studies localizing seizure networks (Cotic et al., 2015). A recent study from our group described delta-gamma cross-frequency coupling as a biomarker of PGES (Grigorovsky et al., 2020).
Proposed mechanisms of SUDEP involve ictal-related cardiorespiratory dysfunction, which may be caused by epileptiform activity spreading to the brainstem. A crucial element of SUDEP is brainstem dysfunction, for which PGES might be a biomarker (Lhatoo et al., 2010;Devinsky et al., 2016). The MORTEMUS study, which examined SUDEP cases that occurred in epilepsy monitoring units, found the cause of death to be due to postictal respiratory impairment and bradycardia (Ryvlin et al., 2013). The cross-frequency network differences that were observed in our study may suggest that the network is more efficient in the spread of seizure activity, reaching central autonomic structures more easily. This may increase the likelihood of ictal associated bradycardia and asystole. Spread to the brainstem may also affect respiration centers, inducing hypoxia and hypercapnia. This is consistent with findings where electrical stimulation of the amygdala induced respiratory arrest (So 2008).
Further research needs to be done using intracranial EEG in both patient groups to gain a deeper understanding of how these topological changes relate to seizure spread and brainstem dysfunction. A limitation of this study is the low number of patients in the SUDEP group; more patients need to be added to the sample to examine the predictive power of this biomarker. Although the network dynamics throughout representative seizures appeared to differ in SUDEP patients, further exploration is needed in comparing group differences. Regarding patient selection criteria, EEG recordings in this study were obtained from patients monitored in the EMU. While most patients were weaned off anti-seizure medications in order to provoke seizures, changes in their medication regimen were not annotated in the EEG recordings; this should be taken into consideration in future studies. Also, all non-SUDEP patients were medically refractory, and this may limit the generalizability of this biomarker to the broader epilepsy population. In conclusion, there is an unmet need for non-invasive biomarkers to identify those patients at high risk for seizure-associated SUDEP. Our study describes such a biomarker for SUDEP using scalp EEG signals to construct functional connectivity networks of the brain.
DATA AVAILABILITY STATEMENT
The anonymized datasets used in this study are available upon request. They are not publicly available due to institutional restrictions associated with original data acquisition protocols.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. (The Ontario Brain Institute is an independent non-profit corporation, funded partially by the Ontario government. The opinions, results and conclusions are those of the authors and no endorsement by the Ontario Brain Institute is intended or should be inferred), and c) The SciNet HPC Consortium which is funded by the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto.
Effect of Soil Fertilizer Application on Soil Nutrient Migration and Tea Quality of Plateau-Puerh Tea
In order to address the problem of balanced chemical fertilizer application for terrace Pu'er tea, the effects of different fertilization treatments on soil nutrient migration and tea quality were explored, providing a basis for rational fertilization of tea gardens. In this study, three different fertilization treatments were set up in a tea garden: conventional fertilization (T1), slow-release fertilizer reduced by 20% (T2), and slow-release fertilizer reduced by 30% (T3). The quality indexes of tea under the different treatments, as well as the alkali-hydrolyzable nitrogen and available phosphorus in the soil and the total nitrogen and total phosphorus content in surface water, were measured and analyzed. The results showed that: 1) compared with T1, soil available phosphorus in T3 decreased by 23.5%, while alkali-hydrolyzable nitrogen increased by 20.5%; 2) the total nitrogen and total phosphorus concentrations in surface water under T2 and T3 were lower than under T1; at the late fertilization stage, total nitrogen decreased by 71.4% in T2 and 68.6% in T3 relative to T1; 3) T3 maintained the amino acid, tea polyphenol and soluble sugar quality indexes of the tea in a high and stable range. Therefore, under conditions of reduced fertilization, a 30% reduction in slow-release fertilizer is currently the more suitable fertilization technology for tea gardens in Menghai County.
Introduction
Tea, one of the three most popular non-alcoholic drinks in the world (Wang, 2020), has been drunk since ancient times. China is the origin of tea and one of the important cash crops in China, which plays an important role in agricultural cash crops (Zhang & Su, 2007). According to statistics, the planting area of tea trees in China reached over 2.6 million hectares in 2015, and the annual tea output reached over 2 million tons, both ranking first in the world (Ni et al., 2019). With the increasing popularity of tea planting, many countries have begun to export tea products and tea culture, and China's tea market is facing such a huge challenge (Shi et al., 2021). In order to enhance the international competitiveness of Chinese tea, it is necessary to develop tea products with local characteristics, and form a green and harmonious tea planting environment while improving the quality of tea.
Menghai County of Yunnan Province is a famous tea town in China, recognized as one of the places of origin of tea trees and one of the birthplaces of Pu'er tea (Jiang et al., 2019). Because of its excellent natural geographical environment and tea-making techniques inherited over thousands of years, Menghai County has become the leading tea-producing county in Yunnan Province. In recent years, with the rapid development of the Pu'er tea industry, the one-sided pursuit of economic benefits has resulted in uneven quality of Pu'er tea on the market, which has seriously affected the competitiveness of Pu'er tea (Geng & Ma, 2021). During this rapid development, long-term irrational fertilization has caused losses of nitrogen and phosphorus from the soil, resulting in eutrophication of water bodies and acidification and compaction of the land (Shi et al., 2018; Duan et al., 2005; Dong et al., 2006). In addition, long-term heavy and one-sided fertilization makes it difficult for plants to absorb the nutrients, which instead remain in the soil in large quantities, seriously affecting the quality of local tea (Li & Huo, 2019). Scientific and rational fertilization and reduced use of chemical fertilizer are important measures to improve the environment, ensure tea quality, realize fertilizer reduction and efficiency improvement in Chinese tea gardens, and promote the sustainable development of the tea industry.
At present, most experiments mainly focus on the effects of fertilization methods on the yield and quality of tea, and there are few studies on the relationship between different fertilization methods, soil nutrient migration in tea gardens, and tea quality. This experiment took terrace Pu-erh tea in Menghai County as the research object, studied the characteristics of surface nutrient migration and the changes in tea quality indexes under different fertilization conditions, explored economical and efficient ways of fertilizing tea gardens, screened out the nitrogen fertilizer application rate suitable for the environment of Menghai County tea gardens, and provides a basis for rational fertilization of tea gardens.
Overview of the Study Area
Menghai County is the hometown of the famous "Pu'er tea" at home and abroad and the earliest place of tea production in China. There are wild "tea king" trees 1,700 years old and ancient tea trees scattered throughout the county. The climate in this region is a subtropical plateau monsoon climate, with an annual average temperature of 15.5°C and annual precipitation of 1065 mm, mainly concentrated from June to October. The soil type of the tea garden is red soil. The monitoring test site is a concentrated, contiguous hilly tea garden in Manzhen Village at the intersection of the upper reaches of the Mengpang Reservoir area, at 100.337471°E, 22.184598°N (Figure 1). The tea garden covers an area of 0.65 mu (about 433 m²). The tested tea tree variety is "large-leaf tree tea" and the planting age is 20 years.
Test Design
The experiment was carried out in the field from July to November 2020. Three treatments were set up at the test site, each repeated three times, for a total of 9 plots. Treatments were randomly distributed, and the area of each plot was 48 m² (6 m × 8 m). Plots were separated by cement baffles buried to a depth of 0.5 m to prevent water from running between them. The fertilizer reduction treatments were designed relative to a baseline application in the tea garden of 300 kg/hm² of pure nitrogen per year, with an N, P and K application ratio of 3:1:1. The fertilizers used in the experiment were urea (N content ≥ 46.4%, purity ≥ 99.4%); calcium superphosphate (effective P2O5 content ≥ 12%); potassium sulfate (K2O content ≥ 50.0%); compound fertilizer (N-P2O5-K2O = 15-15-15, total nutrients ≥ 45%); and controlled-release fertilizer (N-P2O5-K2O = 28-5-5).
Following the fertilization habits of local tea farmers, base fertilizer was applied in early July and top dressing at the end of September, with 60% applied as base fertilizer and 40% as top dressing. Each time, fertilizer was applied in a furrow directly below the edge of the tea canopy at a depth of 10-20 cm, and the furrow was covered with soil promptly after fertilization.
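The plot-level arithmetic implied by this design can be sketched as follows. This is illustrative only: it assumes the 3:1:1 ratio applies to N, P2O5 and K2O, and computes straight-fertilizer equivalents from the nutrient contents quoted above, whereas the actual treatments blended compound and controlled-release fertilizers whose proportions are not fully specified here.

```python
# Per-plot nutrient arithmetic for the baseline of 300 kg N/hm^2 on 48 m^2
# plots, with optional fractional reduction (0.2 and 0.3 for T2 and T3).

PLOT_M2 = 48.0          # plot area from the text
M2_PER_HM2 = 10_000.0   # square metres per hectare

def per_plot_nutrients(n_rate_kg_hm2=300.0, ratio=(3, 1, 1), reduction=0.0):
    """kg of N, P2O5 and K2O per plot; reduction=0.3 models the
    '30% less slow-release fertilizer' treatment."""
    n = n_rate_kg_hm2 * PLOT_M2 / M2_PER_HM2 * (1.0 - reduction)
    return (n, n * ratio[1] / ratio[0], n * ratio[2] / ratio[0])

def straight_fertilizer_kg(n, p2o5, k2o):
    """Straight-fertilizer masses from the nutrient contents quoted above:
    urea >= 46.4% N, superphosphate >= 12% P2O5, potassium sulfate >= 50% K2O."""
    return {"urea": n / 0.464,
            "superphosphate": p2o5 / 0.12,
            "potassium_sulfate": k2o / 0.50}
```

With these assumptions, each full-rate plot needs 1.44 kg N per year (about 3.1 kg of urea), and the 30% reduction brings the nitrogen to about 1.0 kg per plot.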
Surface Water Collection
Tea garden runoff was sampled from August to September 2020, when Menghai County enters the rainy season. After each runoff event, surface runoff samples were collected as follows: litter was first removed from the runoff collection barrel, the runoff water was transferred to a clean plastic bucket and mixed thoroughly, and a 500 ml sample was taken in a polyethylene bottle. All remaining water was then emptied from the runoff barrel and its inner wall rinsed with clean water in preparation for the next collection. After collection, water samples were transported to the laboratory promptly and stored in a refrigerator at 4°C; all indicators were determined within three days of collection.
Soil Samples Collected
The tea garden soil is acid red soil. Soil samples were collected four times between August 2020 and November 2020. About 1 kg of soil, of uniform thickness, width and depth, was collected each time according to the "five-point mixed sampling method" (Bao, 2013). After mixing evenly, the soil samples were air-dried, ground, and passed through 2 mm and 0.149 mm sieves for analysis and determination.
Tea Samples Collected
Tea was collected in September 2020, picked three times on the 10th, 20th and 30th of that month. One bud and two leaves were picked each time. After picking, the fresh leaves were fixed by steaming; the dry weight of the fixed tea was then recorded, and part of the fixed tea was dried and ground for laboratory analysis.
Determination Method
Soil samples: pH was determined with a pH meter; alkali-hydrolyzable nitrogen by the alkaline hydrolysis diffusion method; and available phosphorus by molybdenum-antimony anti-chromogenic spectrophotometry. Water samples: pH was determined with a pH meter; total nitrogen by alkaline potassium persulfate digestion UV spectrophotometry (HJ636-
Statistical Analysis
Qualitative analysis: basic calculations and standard error processing of the raw data were performed in Excel 2016 and, combined with chart comparison, used to qualitatively analyze the impact of different fertilization treatments on surface nutrient migration.
Effects of Different Fertilization Treatments on Soil Available Phosphorus
It can be seen from Figure 2 that different fertilization treatments have different effects on the content of available phosphorus in soil. One month after the application of base fertilizer, the soil available phosphorus content was highest under T1 and lowest under T3, which was 31.96% lower than T1; the content under T2 was intermediate, 20.31% lower than T1. After topdressing in September, the soil available phosphorus content decreased significantly under T1, while it increased significantly under T2 and T3, especially under T3. As fertilization time passed, the soil available phosphorus content under T2 and T3 gradually decreased; compared with the content after topdressing, T2 decreased by 14.4% and T3 by 23.5%. The results suggest that reducing slow-release fertilizer application by 30% can sustain the supply of soil available phosphorus while limiting its accumulation and loss.

Journal of Geoscience and Environment Protection
Effects of Different Fertilization Treatments on Soil Alkali-Hydrolyzable Nitrogen
The figure for soil alkali-hydrolyzable nitrogen shows that the reduced application of slow-release fertilizer significantly increased the content of soil alkali-hydrolyzable nitrogen, and that the alkali-hydrolyzable nitrogen provided by slow-release fertilizer was more persistent and less prone to loss.
Effects of Different Fertilization Treatments on Total Phosphorus in Surface Water
As can be seen from Figure 4, the total phosphorus content of surface water changed continuously over a period after fertilizer application. At the initial stage, the total phosphorus content of surface water showed T1 (7.4 mg/L) > T2 > T3. Compared with the initial stage, the total phosphorus contents of T1, T2 and T3 decreased by 91.9%, 90.5% and 89.4%, respectively. The results showed that in this fertilization period, surface loss could be reduced by reducing slow-release fertilizer application, and the effect of a 30% reduction was more obvious.
Effects of Different Fertilization Treatments on Total Nitrogen in Surface Water
As can be seen from Figure 5, the total nitrogen content of surface water is significantly correlated with fertilization time. From the whole sampling period, the total nitrogen content of surface water in the three groups showed a high level at the beginning of fertilizer application. With the passage of fertilization time, the total nitrogen content of surface water decreased gradually. Compared with the initial stage of fertilizer application, the total nitrogen content of surface water decreased by 96.4% in T1, 98.7% in T2 and 94.8% in T3. In this experiment, the surface water content of the three groups at the beginning of fertilization showed that T1 (19.5 mg/L) > T2 (15.9 mg/L) > T3 (11.6 mg/L). Compared with T1, T2 decreased by 18.5% and T3 decreased by 40.5%. At the late fertilization stage, the total nitrogen content of surface water treated by the three groups was T1 (0.7 mg/L) > T3 (0.22 mg/L) > T2 (0.2 mg/L). Compared with T1, T2 decreased by 71.4% and T3 decreased by 68.6%. The results indicated that the reduction of slow release fertilizer application could reduce the total nitrogen concentration in surface water, that is, reduce the migration of surface nutrients.
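The percentage changes quoted above follow directly from the reported total nitrogen concentrations; a quick arithmetic check:

```python
def pct_decrease(ref, val):
    """Percent decrease of val relative to ref."""
    return 100.0 * (ref - val) / ref

# Total nitrogen in surface water (mg/L), from the values reported above.
early = {"T1": 19.5, "T2": 15.9, "T3": 11.6}  # start of fertilization
late = {"T1": 0.7, "T2": 0.2, "T3": 0.22}     # late fertilization stage
```

Evaluating these reproduces the quoted 18.5% and 40.5% early-stage reductions and the 71.4% and 68.6% late-stage reductions relative to T1.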
Effects of Different Fertilization Treatments on Soluble Sugar in Tea
The influence of different fertilization treatments on soluble sugar content in tea is shown in Figure 6. During the tea collection cycle, the soluble sugar content of tea leaves decreased gradually in all three treatments, with the magnitude of the decrease being T1 (26.9%) > T2 (11.9%) > T3 (6.3%). Compared with T1, the soluble sugar content of tea under T2 and T3 was relatively stable and remained at a relatively high level, indicating that slow-release fertilizer can, to a certain extent, continuously provide nutrients for tea growth.
Effects of Different Fertilization Treatments on Tea Polyphenols
The influence of different fertilization treatments on tea polyphenol content is shown in Figure 7. At the early stage of tea collection, tea polyphenol content was highest, with the three treatments showing T2 (289.8 mg/g) > T3 (284.5 mg/g) > T1 (241 mg/g). Over time, the tea polyphenol content gradually decreased, and at the late collection stage the contents of the three groups levelled off. Over the whole cycle, the tea polyphenol content in T2 and T3 remained higher than in T1. In each time period of this study, application of slow-release fertilizer maintained a high level of tea polyphenol content.
Effects of Different Fertilization Treatments on Theine of Tea Leaves
The quality reversal threshold of tea caffeine ranges from 38 mg/g to 45 mg/g. Comparing the caffeine content in tea under different fertilization treatments (Figure 8), during the test period the caffeine content under T1 was significantly higher than under T2 and T3 and showed a gradually declining trend, but remained above the caffeine quality reversal threshold throughout. The caffeine content under T2 and T3 stayed within the quality reversal threshold, with T3 showing the smallest fluctuation and the most stable behaviour. The results showed that reduced application of slow-release fertilizer can keep the caffeine content stable within the quality reversal threshold, which is beneficial to the improvement of tea quality.
Effects of Different Fertilization Treatments on Phenol/Ammonia Ratio of Tea
The phenol/ammonia ratio of tea under different fertilization treatments is shown in Figure 9. The phenol/ammonia ratio of tea is related to the fertilization method. During the experimental period, the phenol/ammonia ratios were 7.15, 6.99 and 6.81 for T1; 7.54, 6.94 and 6.03 for T2; and 7.40, 7.23 and 6.18 for T3. At the initial stage there was no significant difference in the phenol/ammonia ratio among the three groups, while over time the ratios under T2 and T3 fell to the lowest levels. Generally speaking, tea quality is higher when the phenol/ammonia ratio is low. Therefore, fertilization with reduced slow-release fertilizer helps to coordinate the amino acid and tea polyphenol contents of the tea plants and has a certain effect on improving tea quality.
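The phenol/ammonia ratio is the ratio of tea polyphenols to free amino acids (dimensionless when both are in the same units). As an illustration only, assuming the early-stage T1 polyphenol content (241 mg/g, Figure 7) and the early-stage T1 ratio (7.15) refer to the same samples, the implied free amino acid content can be back-computed:

```python
def phenol_ammonia_ratio(tea_polyphenols_mg_g, amino_acids_mg_g):
    """Phenol/ammonia ratio: tea polyphenols divided by free amino acids
    (dimensionless when both are in mg/g)."""
    return tea_polyphenols_mg_g / amino_acids_mg_g

# Back-of-envelope check using the early-stage T1 values reported above.
implied_aa_t1 = 241.0 / 7.15  # implied free amino acid content, mg/g
```

This kind of consistency check is useful when, as here, the amino acid values themselves are read off a figure.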
Effects of Different Fertilization Treatments on Amino Acids of Tea
As can be seen from Figure 10, different fertilization treatments affected the amino acid content in tea, which changed with fertilization time during the test period.

Figure 9. The phenol/ammonia ratio of tea under different fertilization treatments.
Effects of Different Fertilization Measures on Soil Nutrient Migration
Fertilization is an important factor affecting soil nutrient migration. Studies have shown that the main causes of nitrogen loss in China are excessive application of nitrogen fertilizer and a utilization rate far below the world average (Gao, 2018). Duan Yonghui et al. considered excessive fertilization to be the main cause of soil nitrogen loss, and our experiment showed consistent results. Studies have shown that reduced chemical fertilizer application can effectively reduce the proportion of nutrients carried away by surface runoff (Tian et al., 2020; Tang, 2016). Application of slow-release fertilizer can significantly reduce the loss of nitrogen and phosphorus in surface runoff from tea gardens and reduce the risk of eutrophication of surrounding water bodies. The results of this study are consistent with this.
Different fertilization measures will affect the migration, effectiveness and persistence of nitrogen and phosphorus nutrients in the soil, thus affecting the nutrient absorption rate and effect of plants (Li et al., 2017). Relevant studies have shown that slow-release fertilizer has the characteristics of slow nutrient release, long duration and high utilization rate (Ma et al., 2020;Huang et al., 2010). The results of this study showed that compared with conventional fertilization, the application of slow release fertilizer could significantly increase the content of nitrogen and phosphorus in soil, and reduce the migration and loss of nitrogen and phosphorus in soil, and the experiment effect of 30% slow release fertilizer was more obvious.
Effects of Different Fertilization Measures on Tea Quality
Soluble sugar, tea polyphenols, theine, phenol/ammonia ratio and amino acid content are important indexes affecting tea quality. Studies have shown that a 20% reduction in conventional fertilization combined with the application of a fulvic acid biological agent improves the free amino acid and tea polyphenol quality indexes of tea (Quan et al., 2020). Wang Ziteng et al. showed that reduced fertilizer application combined with organic fertilizer can effectively improve soil quality, reduce nitrogen and phosphorus runoff losses, and improve tea yield and quality (Wang et al., 2018). Gao Shuangwu (Gao, 2019) studied the replacement of chemical fertilizer by the combined application of formula fertilizer and organic fertilizer, and concluded that this combination can meet the requirements of tea growth, give high tea quality, and achieve the purpose of reducing fertilizer application while increasing efficiency. Kong Xiaojun et al. (Kong et al., 2019) compared the effects of six different fertilization modes on the quality and economic effects of tea and concluded that the combination of organic fertilizer and controlled-release fertilizer was the optimal fertilization measure, improving the quality and economic benefits of tea to the greatest extent. Han Wenyan et al. (Han et al., 2007) conducted pot and field experiments on the influence of controlled-release nitrogen fertilizer on tea quality, and concluded that controlled-release nitrogen fertilizer significantly improves tea quality. The results of this experiment showed that, compared with conventional fertilization, the soluble sugar, tea polyphenol, theine, phenol/ammonia ratio and amino acid quality indexes of tea were significantly improved by reduced application of slow-release fertilizer.
In general, the improvement of tea quality was more obvious by reducing application of slow release fertilizer by 30%.
Conclusion
Under the three different fertilization treatments in this experiment, the 30% reduction of slow-release fertilizer reduced the amount of chemical fertilizer applied, increased the contents of available P and alkali-hydrolyzable N in tea garden soil, and reduced the concentrations of total N and total P in surface water.
Reasonable fertilizer reduction is of great importance to tea quality. Compared with conventional fertilizer application, a 20% or 30% reduction in slow-release fertilizer application can significantly increase the contents of amino acids, tea polyphenols, soluble sugar and other quality indexes in tea, and is a more reasonable way of applying fertilizer.
In conclusion, considering soil fertility, surface water environment and tea quality, 30% reduction of slow-release fertilizer application is a reasonable fertilization mode at present.
In this study, when the soil fertility and water environment status of the tea garden were investigated, the density of the sampling points was not enough to reflect the current soil and surface water status in Menghai County. Therefore, in subsequent studies, sampling points should be added to improve the credibility of the trial.
"year": 2022,
"sha1": "4c0babcfb8971cb4f40f2325dfc98382069c7856",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=116888",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f02fa5702ee166fdd76dfb09df55a6381dcac0d8",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
Calibration Techniques for VERITAS
VERITAS is an array of four identical telescopes designed for detecting and measuring astrophysical gamma rays with energies in excess of 100 GeV. Each telescope uses a 12 m diameter reflector to collect Cherenkov light from air showers initiated by incident gamma rays and direct it onto a `camera' comprising 499 photomultiplier tubes read out by flash ADCs. We describe here calibration methods used for determining the values of the parameters which are necessary for converting the digitized PMT pulses to gamma-ray energies and directions. Use of laser pulses to determine and monitor PMT gains is discussed, as are measurements of the absolute throughput of the telescopes using muon rings.
Introduction
Like all gamma-ray detectors which use the atmospheric Cherenkov technique, the VERITAS instrument is fundamentally quite simple. Each of its four telescopes consists of a 12 m reflector which directs Cherenkov light from air showers onto a matrix of 499 photomultiplier tubes (PMTs) which are read out using 500 MSample/s flash-analog-to-digital converters (FADCs). In order to translate the digital information emerging from the FADCs into a form which can be used to select gamma-initiated showers from background and determine the energy and direction of the incident gamma ray, one needs calibration constants. These 'constants' (which are not, strictly speaking, constant) need to be determined when commissioning the detector and monitored and adjusted periodically during the lifetime of the project. In this paper we describe techniques employed by the VERITAS collaboration to accomplish this task; two techniques use a laser to determine the absolute gains of the PMTs and one uses Cherenkov images, generated by isolated muons, for intertelescope calibration and determination of absolute throughput. Some of these issues are addressed independently with a remote LIDAR-like system described elsewhere in these proceedings [1].
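As a rough illustration of the kind of conversion these calibration constants feed into, the sketch below shows a common first step in Cherenkov-telescope FADC analysis: estimating the pedestal (baseline) from pre-pulse samples and integrating a fixed window around the pulse. The trace values, window length and peak-anchored window placement are invented for illustration and do not represent the actual VERITAS analysis chain.

```python
def pulse_charge(trace, n_pedestal=5, window=4):
    """Pedestal-subtracted integrated charge (FADC counts x samples) in a
    fixed-length window starting at the peak sample."""
    pedestal = sum(trace[:n_pedestal]) / float(n_pedestal)
    peak = max(range(len(trace)), key=lambda i: trace[i])
    # A real analysis would place the integration window more carefully
    # (e.g. using a trace-timing algorithm); the peak sample suffices here.
    return sum(trace[peak:peak + window]) - window * pedestal

# A toy 2-ns-sampled trace: flat baseline of 20 counts plus a short pulse.
trace = [20, 20, 20, 20, 20, 25, 60, 45, 30, 22, 20, 20]
```

The integrated charge this returns is what the gain constants discussed below convert into a photoelectron count.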
The VERITAS Laser System
For flat-fielding and gain monitoring, VERI-TAS uses a nitrogen laser (λ = 337 nm, pulse energy 300 µJ, pulse length 4 ns). The beam is sent through neutral density filters arranged in two sequential wheels, with 6 filters each, such that transmissions ranging from less than 0.02% to 100% may be chosen. It is then divided, approximately equally, among 10 optical fibres, four of which are routed to opal diffusers located on the optical axes of the telescopes, 4 metres from the PMTs in the cameras. A fifth fibre supplies light to a PIN photodiode to provide a fast external trigger for FADC readout; self-triggers using only PMT information are also used for some applications. There is, at present, no independent monitor for measuring the pulse-to-pulse fluctuations in the laser intensity (typically 10%); these are monitored using a sum over a large number of PMTs in each camera.
arXiv:0709.4479v1 [astro-ph] 27 Sep 2007
A five-minute, 10 Hz laser run at nominal intensity is taken at the beginning of each observing night. The data obtained are used primarily for monitoring gain evolution and checking for problems. Other tests, described below, are done less frequently. Since the opal diffuser spreads the laser light uniformly (to better than 1%) over the face of the camera, the pulses can be used for flat-fielding the response of the channels. The high voltages of the individual PMTs are adjusted so that the average pulse size in each channel is the same for all channels. A PMT's average pulse size depends on the product of its photocathode's quantum efficiency and the efficiency for photoelectrons to be collected by its first dynode, as well as on the gain in the electron multiplier stage. To a lesser extent it depends on the reflectivity of the Winston-cone light concentrator in front of each PMT. The average pulse sizes are calculated and written to a database for use in off-line analysis. The gain of the electron multiplier can be tracked separately from the daily laser data using the method of photostatistics. In this method, laser fluctuations are removed using a sum-over-PMTs monitor, and the effects of electronics noise and night-sky background are measured in runs with zero laser intensity and unfolded. Then, to first order, the mean charge in a laser pulse is given by µ = G N_pe, with G the gain and N_pe the mean number of photoelectrons arriving at the first dynode. Assuming that only Poisson fluctuations in N_pe determine the width, σ, of the charge distribution, we have σ = G √N_pe. Thus we can solve for the gain as G = σ²/µ. Taking into account statistics at the other dynodes, which are in general described by a Polya distribution, leads to a correction factor such that G = σ²/(µ(1 + α²)), where α is the width parameter which would result from injecting only single photoelectrons into the dynode chain.
For our PMTs and their associated dynode voltages we simulate α = 0.47, which results in a revised estimate for multiplier gain of G = 0.82 σ²/µ.
As a check on this model, note that µ = G N_pe implies N_pe = µ/G. This quantity is plotted for a representative PMT in figure 1 as a function of the applied high voltage, in steps away from the nominal HV. Except perhaps for a small increase in first-dynode collection efficiency at higher HV, we do not expect N_pe to change, and the plot shows that it is indeed constant over the range of voltages explored.
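The gain estimate described above reduces to a few lines of arithmetic. In this sketch, µ and σ are hypothetical example values in digital counts; only α = 0.47 and the resulting 1/(1 + α²) ≈ 0.82 correction come from the text.

```python
# Sketch of the photostatistics gain estimate: G = sigma^2 / mu, corrected
# by 1/(1 + alpha^2) for dynode (Polya) statistics.  mu and sigma below are
# hypothetical example values; alpha = 0.47 is the simulated value quoted.
alpha = 0.47

def multiplier_gain(mu, sigma, alpha=alpha):
    """Electron-multiplier gain in digital counts per photoelectron."""
    return sigma**2 / mu / (1.0 + alpha**2)

mu, sigma = 1200.0, 150.0            # hypothetical laser-pulse mean and width
G = multiplier_gain(mu, sigma)       # equivalent to ~0.82 * sigma^2 / mu
N_pe = mu / G                        # consistency check: mean photoelectrons
print(f"G = {G:.2f} d.c./p.e., N_pe = {N_pe:.1f}")
```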
Single Photoelectrons
An alternative method for determining PMT gain is to directly measure the position of the single photoelectron peak in a pulse size spectrum. Again, this gives the gain of the electron multiplier structure (and any downstream electronics) and does not include effects of the photocathode. To resolve the single photoelectron peak, we take special laser runs at very low intensity where the average number of photoelectrons resulting from each laser pulse is less than 1.0. The resulting spectrum consists of a pedestal, the single photoelectron peak, and small admixtures of two-, three-, etc. photoelectron peaks, with the relative sizes of each component prescribed by Poisson statistics. We also rely on the constraint that the multi-photoelectron signals can be fit with the same parameters (mean and width) as the single photoelectron peak (up to multiplicative factors). Allowing only a small number of free parameters results in robust fits to the spectra, an example of which is shown in figure 2. In this example the relative width of the single photoelectron peak is found to be 0.48. The average from a group of channels is 0.47. The data for this study were obtained using twice the normal gain, in order to resolve more clearly the single photoelectron peaks. With such a gain the simulation predicts a relative width of 0.44.

Figure 2: A pulse size spectrum made with highly attenuated laser pulses and raised high voltage. The single photoelectron peak is clearly visible as the structure next to the pedestal, which is the dominant feature. The data are fit with a sum of Gaussians as described in the text.
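The constrained fit described above can be illustrated with a toy spectrum model: Poisson-weighted n-photoelectron Gaussians whose means and widths are tied to the single-photoelectron parameters. All numeric values below are illustrative, not fit results from the paper.

```python
# Sketch of the fit model: a pedestal plus Poisson-weighted n-p.e. peaks,
# where the n-p.e. peak is forced to have mean n*q1 and variance
# ped^2 + n*s1^2 (same q1, s1 as the single-p.e. peak).  Values illustrative.
import math

def gauss(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def spectrum(x, lam, q1, s1, ped_sigma, n_max=5):
    """Model pulse-size density at charge x (pedestal-subtracted scale)."""
    total = 0.0
    for n in range(n_max + 1):
        p_n = math.exp(-lam) * lam**n / math.factorial(n)   # Poisson weight
        sigma_n = math.sqrt(ped_sigma**2 + n * s1**2)
        total += p_n * gauss(x, n * q1, sigma_n)
    return total

# Illustrative parameters: lam < 1 so the pedestal dominates, as in the text;
# s1/q1 = 0.47, the quoted relative width.
lam, q1, s1, ped = 0.5, 10.0, 4.7, 1.0
density_at_pedestal = spectrum(0.0, lam, q1, s1, ped)
density_at_1pe = spectrum(q1, lam, q1, s1, ped)
```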
In order to maintain good signal-to-noise for this measurement, we cover the camera with a thin aluminum plate with a 3 mm hole drilled at the location of the centre of each PMT. This reduces the night sky background to the point where it is negligible compared with the laser light. Indeed, with the telescope in stow position, one can perform single photoelectron laser runs in the presence of moonlight. This is an important consideration given that sufficient statistics (∼50000 shots) require nearly an hour of running.
The gain values which result from the method of photostatistics and from the single photoelectron fitting are in units of digital counts per photoelectron. A comparison of the two methods, for telescope 1, is shown in figure 3.

Figure 3: A comparison of gains determined using photostatistics (abscissa) with those determined from single photoelectron fitting (ordinate). The slope of the correlation is approximately 1.1. It should be 1.0; the discrepancy is an indication of the present scale of the systematic error of the gain-measuring procedures.

The data points in this figure highlight the difference between the 'multiplier gain', which includes everything starting from the first dynode, and the 'overall gain', which also includes the light concentrator cones and the photocathodes. Since the PMTs have all been flat-fielded according to the overall gain, the dispersion seen along the correlation line in figure 3 is due to channel-to-channel differences in these 'front-end' components.
Muon Rings
Local muons are normally a nuisance for Cherenkov telescopes but they can be useful in providing a measurement of the optical throughput of the detector [3,2]. Muons passing through the centre of the telescope with trajectories parallel to its optical axis will produce azimuthally symmetric rings in the camera. The rings will have radii given by the Cherenkov angle of the muons (maximum value about 1.3 degrees) and the total number of photons expected in the ring can be calculated from the measured value of this angle. Muons with non-zero impact parameters will produce arcs with an azimuthally dependent photon density and muons arriving at an angle with respect to the telescope's axis will give rise to arcs with centres that are offset from the centre of the camera. Muon ring images can be obtained from normal data where they occur as part of hadronic showers. The images are cleaned (channels are required to have a minimum pulse size and to be next to other channels with non-zero charge, otherwise they are set to zero) and a ring is fit to the image. Further cleaning of the images, where charge deposits far from the fitted ring are suppressed, removes light from other components of the shower of which the muon was a member. After this second cleaning the ring parameters are re-calculated. A cleaned image of a complete muon ring is shown in figure 4. Since the morphology and location of the muon ring allow the muon's trajectory to be calculated, it is possible to predict with precision the number of Cherenkov photons that should be collected by the camera. This requires knowing the reflectivity of the mirror facets, shadowing effects due to the camera support structure, etc., so the detector response to local muons is a good check on our understanding of the instrument.

Figure 5: Detected charge in muon arcs, normalized to their lengths, for VERITAS telescopes 1 (histogram) and 2 (data points), showing that they are well matched.
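The ring-fitting step mentioned above can be sketched with a simple algebraic (Kåsa-style) least-squares circle fit. The synthetic "pixel" positions below stand in for cleaned camera channels; the actual VERITAS reconstruction is more involved.

```python
# Algebraic circle fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c in least squares;
# centre is (a, b) and radius is sqrt(c + a^2 + b^2).  The ring below is
# synthetic (radius 1.2 deg, offset centre), not real camera data.
import numpy as np

def fit_ring(x, y):
    """Least-squares circle fit: returns (xc, yc, radius)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return xc, yc, np.sqrt(c + xc**2 + yc**2)

rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, 200)
x = 0.10 + 1.2 * np.cos(phi) + rng.normal(0, 0.01, phi.size)   # degrees
y = -0.05 + 1.2 * np.sin(phi) + rng.normal(0, 0.01, phi.size)
xc, yc, r = fit_ring(x, y)
print(f"centre = ({xc:.3f}, {yc:.3f}) deg, radius = {r:.3f} deg")
```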
Absolute calculations are still in progress but certain relative measurements have already been implemented, such as intertelescope calibration and month-to-month stability checks. An example is shown in figure 5 where we histogram the summed charge in each muon arc, divided by its length, for two telescopes in the array. The overlap of the two histograms, normalized by the number of entries, shows that the telescopes are well balanced.
Antioxidant and antifungal activities in vitro of essential oils and extracts of twelve Algerian species of Thymus against some mycotoxigenic Aspergillus genera
The aim of the study was to determine the phenolic and flavonoid content of essential oils (EOs), chloroform and ethanolic extracts of 12 Algerian Thymus species and evaluate their antioxidant and antifungal activities. EOs (1.73 ± 0.30–15.00 ± 1.24 μg/mg), chloroform extracts (33.8 ± 2.42–160.93 ± 3.88 μg/mg) and ethanolic extracts (27.01 ± 3.56–148.46 ± 4.40 μg/mg) showed considerable phenolic content. Flavonoid values of chloroform extracts ranged between 3.39 ± 0.17 and 20.27 ± 0.29 μg/mg, while ethanolic extract values ranged between 2.81 ± 0.11 and 26.64 ± 0.18 μg/mg. DPPH results showed that EOs, chloroform and ethanolic extracts exhibited strong radical scavenging activity (IC50 = 21.75 ± 6.54–338.22 ± 2.99 μg/mL, 22.91 ± 5.59–90.93 ± 1.36 μg/mL, and 33.51 ± 5.72–103.80 ± 4.54 μg/mL, respectively). Inhibition of β-carotene bleaching was strong for all EOs (66.48 ± 2.41–94.06 ± 2.68%), chloroform extracts (68.98 ± 1.58–95.30 ± 1.99%), and ethanolic extracts (62.15 ± 2.51–92.36 ± 1.15%). The antifungal activity of EOs and extracts was tested using the minimum inhibitory concentration (MIC) and minimum fungicidal concentration (MFC). The EOs (0.1 ± 0.00–1.06 ± 0.46 mg/mL), chloroform extracts (0.1 ± 0.00–1.06 ± 0.46 mg/mL) and ethanolic extracts (0.1 ± 0.00–1.6 ± 0.00 mg/mL) showed remarkable antifungal activity against mycotoxigenic Aspergillus genera. The MFC values of EOs (1.0 ± 0.34 to > 4.8 mg/mL), chloroform extracts (0.26 ± 0.11 to > 1.6 mg/mL) and ethanolic extracts (0.2 ± 0.00 to > 1.6 mg/mL) were higher than the corresponding MICs, indicating fungicidal activity. The findings of the study indicated that Thymus spp. EOs and extracts could be used as natural alternatives for the food industry.
Introduction
Aspergillus spp. are widespread in nature and especially in the soil where they contribute to the biodegradation and recycling of organic matter. They are also used in several fields by performing beneficial roles such as the production of useful metabolites. Given its extreme economic importance linked to its useful and harmful effects, several works have been devoted to the genus Aspergillus in general 1 .
Mycotoxins produced by these fungi cause food spoilage: aflatoxins (AF), ochratoxin A (OA), nidulotoxin, sterigmatocystin (STC), emodin, ventilacton, etc. Susceptibility to mycotoxins can vary greatly from one individual to another depending on breed, physiological state, or the stress to which it is subjected. Likewise, different mycotoxins induce different effects: some exert hepatotoxic or carcinogenic effects, while others prove to be estrogenic, immunotoxic, nephrotoxic or neurotoxic. Unlike bacterial toxins, whose effects are immediate, mycotoxins have insidious effects, which manifest themselves over the longer term.
Several strategies have been used to control fungal growth and mycotoxin biosynthesis in foods. Compared to synthetic food additives, the changing consumer preference for more natural products and the reduction in the use of salt and sugar in foods for dietary reasons have stimulated the use of spices and/or aromatic plants, whose sodium and calorie contents are low. Therefore, in recent years, much research has focused on using essential oils, extracts and oleoresins extracted from spices and aromatic herbs as alternative food preservatives 2 .
Thymus is considered to be one of the eight largest genera of the Lamiaceae and includes several species distributed along the coasts as well as in inland regions and arid zones. Many studies have shown that Thymus species exhibit antibacterial, antifungal, and antioxidant activities 3 . These activities are mainly attributed to the richness of these species in essential oils (EOs) and in non-volatile compounds comprising polyphenols and flavonoids.
In Algeria, 12 species of Thymus colonize the territory of the country. Some of them are endemic to Algeria, such as Thymus pallescens de Noé, Thymus dreatensis Batt., Thymus guyonii de Noé and Thymus lanceolatus Desf.; others are endemic to North Africa, such as Thymus ciliatus Desf., Thymus fontanesii Boiss. and Reut., Thymus numidicus Poiret, Thymus munbyanus Boiss. and Reut. and Thymus algeriensis Boiss. and Reut. Thymus pallescens is common and endemic to northern Algeria, while Thymus dreatensis is rare and endemic to the Aures mountains (Batna region) and the Djurdjura mountains (Kabylie region) 4 .
The aim of the study was to investigate the antioxidant activity and effect of Thymus spp. EOs and extract on the growth of Aspergillus species, isolated from stored wheat and corn.
Plant material
Thymus species were collected from different regions of Algeria in 2018 and then dried in the shade at ambient temperature for seven days. The plants were identified in the laboratory of Ethnobotany and Natural Substances, Department of Natural Science, Ecole Normale Supérieure (ENS) (Table 1).

Fungal strains

Aspergillus strains were isolated from stored wheat and corn. Fungal strains were transferred to fresh potato dextrose agar (PDA) medium every two months in order to avoid a decline in strain viability and stored at 4 °C.
Extraction of EO
The EO from leaves of different species was extracted by hydrodistillation. The process consists of immersing the plant material (200 g) in a bath of distilled water for three hours.
The obtained organic EO solution was dried with anhydrous sodium sulfate (Na2SO4), weighed, and stored in opaque brown vials, hermetically sealed, at 4 °C to avoid any degradation.
Preparation of extracts
Leaves were prepared and left to dry in the shade for 2 weeks, then crushed in a blender to turn them into powder. The extraction was carried out by maceration using two solvents: chloroform and ethanol. The extracts were prepared by adding 10 mL of the extraction solvent to 1 g of each species' powder. After stirring for 30 minutes, the mixture was kept standing for 24 h at 4 °C. The extracts were filtered using Whatman No. 1 filter paper, then the filtrates were evaporated in a rotary evaporator.
Preparation of the inoculum
Spores from 7-day cultures of each fungus were recovered by washing the Petri dishes with 10 mL of a sterile 0.1% (v/v) Tween-80 solution. The spore concentration was determined (1 × 10⁶ spores/mL) by counting with a Malassez cell (depth 0.2 mm, 1/400 mm²) under a light microscope.
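The conversion from a haemocytometer count to spores per millilitre is straightforward arithmetic, using the chamber dimensions quoted above (depth 0.2 mm, square area 1/400 mm²). The count per square used in the example is hypothetical.

```python
# Worked arithmetic for the counting-chamber step above.  Only the chamber
# dimensions come from the text; the mean count per square is an example.
def spores_per_ml(mean_count_per_square, area_mm2=1/400, depth_mm=0.2):
    volume_ml = area_mm2 * depth_mm / 1000.0   # 1 mL = 1000 mm^3
    return mean_count_per_square / volume_ml

# e.g. an average of 0.5 spores per small square:
conc = spores_per_ml(0.5)
print(f"{conc:.2e} spores/mL")   # prints 1.00e+06, the target inoculum
```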
A c c e p t e d a r t i c l e
Determination of the content of total phenolic compounds
The determination of total polyphenols was performed with the Folin-Ciocalteu colorimetric reagent according to the method of Dewanto et al. 5 . A volume of 125 µL of each EO and extract was dissolved in 500 µL of distilled water and 125 µL of 10-times-diluted Folin-Ciocalteu reagent. The solutions were mixed and incubated for 3 minutes. After the incubation, 1.25 mL of sodium carbonate (Na2CO3, 7%) solution was added. The final mixture was shaken and incubated for 2 h in the dark at room temperature. The absorbance of each mixture was measured with a spectrophotometer at 760 nm.
The concentration of total phenolics was calculated from the equation generated with the gallic acid standard and expressed in µg of gallic acid equivalents per mg of extract.
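The calibration step described above amounts to fitting a standard line and inverting it. The standard-curve absorbances below are hypothetical illustrations, not measured values.

```python
# Sketch: fit a gallic-acid standard line (A760 vs concentration) and convert
# a sample absorbance to ug gallic-acid equivalents per mg of extract.
# The standard-curve numbers are hypothetical.
import numpy as np

std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # ug/mL gallic acid
std_abs = np.array([0.02, 0.15, 0.28, 0.54, 1.06])     # hypothetical A760

slope, intercept = np.polyfit(std_conc, std_abs, 1)

def total_phenolics(sample_abs, mg_extract_per_ml):
    """ug gallic-acid equivalents per mg of extract."""
    conc = (sample_abs - intercept) / slope             # ug/mL in the assay
    return conc / mg_extract_per_ml

gae = total_phenolics(sample_abs=0.54, mg_extract_per_ml=1.0)
print(f"{gae:.1f} ug GAE/mg extract")
```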
Determination of the content of flavonoids
The content of flavonoids of the EOs and extracts was determined following the method described by Pękal and Pyrzynska 6 . One milliliter of each extract at a suitable dilution was added to the same volume of AlCl3·6H2O (2%, dissolved in methanol). The mixture was vigorously shaken and incubated for 10 min at room temperature. The absorbance was measured at 440 nm. The contents were expressed as quercetin equivalents per dry weight (µg QE/mg DW).
Evaluation of the free radical scavenging activity by the DPPH method
The protocol followed was that described by Nikhat et al. 7 . In dry test tubes, 2.9 mL of each EO, extract, or butylhydroxytoluene (BHT) at different concentrations was mixed with 100 µL of a 0.004% methanolic DPPH• solution.
After shaking, the tubes were placed in the dark at room temperature for 30 minutes. The results were expressed as anti-free-radical activity, where the percentage inhibition of the DPPH• radical (I%) was calculated by the following formula: I% = [(A_blank − A_sample) / A_blank] × 100, where A_blank is the absorbance of the control reaction containing all reagents except the EO/extract and A_sample is the absorbance of the sample containing the tested dose of extract.
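The inhibition percentage defined above, together with a simple interpolation-based IC50 estimate, can be sketched as follows; the example absorbances and doses are hypothetical.

```python
# I% = (A_blank - A_sample) / A_blank * 100, plus an IC50 estimate by linear
# interpolation between the two doses bracketing 50% inhibition.
def inhibition_pct(a_blank, a_sample):
    return (a_blank - a_sample) / a_blank * 100.0

def ic50(doses, inhibitions):
    """Dose giving 50% inhibition (doses ascending, inhibitions increasing)."""
    for (d0, i0), (d1, i1) in zip(zip(doses, inhibitions),
                                  zip(doses[1:], inhibitions[1:])):
        if i0 <= 50.0 <= i1:
            return d0 + (50.0 - i0) * (d1 - d0) / (i1 - i0)
    raise ValueError("50% not bracketed by the tested doses")

doses = [12.5, 25.0, 50.0, 100.0]                     # ug/mL, hypothetical
inh = [inhibition_pct(0.80, a) for a in (0.66, 0.52, 0.30, 0.12)]
print(f"IC50 ~ {ic50(doses, inh):.1f} ug/mL")
```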
Evaluation of antioxidant activity by the β-carotene / linoleic acid method
The method used was that of Miraliakbari and Shahidi 8 . A stock was prepared where 0.5 mg of β-carotene crystals was dissolved in 1 mL of chloroform, then 1 mL of the solution was
Determination of minimum inhibitory (MIC) and fungicidal (MFC) concentrations
The minimum inhibitory (MIC) and fungicidal (MFC) concentrations of each crude extract were determined using the liquid dilution method reported by Prakash et al. 9 . Ten (10) µL of the fungal suspension (1 × 10⁶ spores/mL) were inoculated into test tubes containing 10 mL of SMKY liquid medium at different concentrations of EOs (0.3, 0.6, 1.2, 2.4 and 4.8 mg/mL) and extracts (0.05, 0.1, 0.2, 0.4, 0.8 and 1.6 mg/mL). SMKY tubes containing DMSO were used as controls. The tubes were homogenized and incubated at 28 ± 2 °C for 7 days. After incubation, inspection of the dilution series gives the MIC, which corresponds to the lowest concentration capable of inhibiting the growth of the microorganism. Tubes which showed complete inhibition were subcultured into Petri dishes containing 10 mL of PDA culture medium. When mycelial growth resumes, the concentration is called fungistatic (MFCs); if growth does not resume because inhibition is permanent, it is called fungicidal (MFCc).
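Reading the MIC and MFC off a dilution series, as described above, can be expressed as a small helper; the tube observations below are hypothetical.

```python
# `inhibited` marks tubes with no visible growth after incubation; `regrew`
# marks subcultures that resumed growth on PDA.  Observations are examples.
def mic(concs, inhibited):
    """Lowest concentration with complete inhibition (concs ascending)."""
    for c, inh in zip(concs, inhibited):
        if inh:
            return c
    return None   # no inhibition at any tested dose

def mfc(concs, inhibited, regrew):
    """Lowest inhibiting concentration with no regrowth on subculture."""
    for c, inh, rg in zip(concs, inhibited, regrew):
        if inh and not rg:
            return c
    return None

concs = [0.05, 0.1, 0.2, 0.4, 0.8, 1.6]        # mg/mL, the extract series
inhibited = [False, False, True, True, True, True]
regrew = [None, None, True, False, False, False]
print(mic(concs, inhibited), mfc(concs, inhibited, regrew))  # 0.2 0.4
```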
Statistical analysis
All bioassays were performed in triplicate. Data are reported as mean ± SD and were subjected to one-way ANOVA using STATISTICA version 6 software. Means were separated by Tukey's post hoc test when the ANOVA was significant (p < 0.05).
Phenolic content and flavonoids
Phenolic compounds such as phenolic acids and flavonoids are considered to be the major contributors to the antioxidant capacity of plants. Therefore, assays for phenolics and flavonoids from the EOs and extracts of Thymus species were performed in this study. The results are presented in Table 2.
The level of flavonoids of chloroform and ethanolic extracts was relatively comparable.
Such appreciable contents were revealed in the ethanolic extracts of TC1 (26.64 ± 0.18 µg/g), TF (10.23 ± 0.13 µg/g), TN (9.54 ± 0.22 µg/g), and TO (8.41 ± 0.21 µg/g); on the other hand, the lowest flavonoid content recorded for the chloroform extracts was that of TP (3.39 ± 0.17 µg/g) and TL (3.68 ± 0.13 µg/g), and for the ethanolic extracts was that of TP (2.81 ± 0.11 µg/g), TI (3.19 ± 0.08 µg/g), and TD (3.50 ± 0.08 µg/g). However, the amounts of flavonoids in ethanolic extracts were low in comparison with those obtained with the chloroform extracts.
The only exception was TC1, whose ethanolic extract contained significantly more flavonoids than its chloroform extract (p < 0.05). The content varied widely with the species and the fraction.
Phenolic compounds are one of the most widely distributed and ubiquitous groups of compounds in plants.
The antioxidants extracted from plants are mainly phenolic compounds. Phenolic compounds are good candidates for antioxidant activity owing to their numerous hydroxyl groups, which can react with free radicals. According to previous work, species of Thymus are rich in flavonoids 10 ; these data are consistent with our results, since the assays revealed the presence of flavonoids in significant quantities.
Scavenger effect of the radical DPPH
The antioxidant activity was determined by the decrease in the absorbance of an alcoholic solution of DPPH at 517 nm. Various studies have experimentally determined the abilities of natural extracts to scavenge free radicals.
The IC50 values for the EOs and extracts of Thymus spp., as well as for the reference BHT, are presented in Table 2. Based on this test, the EOs and extracts showed significant differences in their scavenging ability of the free radical.
Studies conducted by some researchers have indicated that the antioxidant activity of EOs may be greater than that of the majority of compounds tested separately. It has been established that the activity of an EO is related to possible synergistic effects between minor constituents. Ballester-Costa et al. 11 showed that the strong antioxidant activity of an EO could be explained by the existence of hydroxylated constituents such as terpenes in its chemical profile. According to Amarti et al. 12 , the antioxidant activity of an EO can be linked to its phenolic content. In a study on the EO of Thymus vulgaris, Jukic and Milos 13 showed that the phenolic (thymol and carvacrol) and non-phenolic (linalool) chemotypes are able to reduce the 2,2-diphenyl-1-picrylhydrazyl radical, with a higher effect than that recorded for phenolic chemotypes.
On the other hand, much of the research conducted has established that Thymus species are rich and promising sources of phenolics and flavonoids. The study by Kulšic et al. 14 showed that the aqueous extract of the leaves of Thymus vulgaris exhibited significant antioxidant activity; in this extract, phenolics, rosmarinic acid, and caffeic acid may explain the exhibited activity 14 . However, plant extracts containing a mixture of these compounds have given more or less satisfactory results, leading to the conclusion that the antioxidant activity depends not only on the phenolic content but also on whether the phenolic compounds act synergistically, antagonistically or independently on the whole activity of the mixture. Amič et al. 15 Table 2).
The results obtained showed that the EOs and extracts were able to reduce lipid peroxidation in the β-carotene-linoleic acid system. These results suggest that they have a considerable capacity to react with free radicals, converting them into non-reactive species and interrupting the chain of radical reactions. Compounds that possess this characteristic can be used in food systems. Maggi et al. 16 found that EOs possess lipid peroxidation inhibitory activity due to the presence of a high level of oxygenated compounds. The same authors also report that the antioxidant activity of a compound is very often linked to the presence of easily oxidizable moieties such as a hydroxyl group on a hydrocarbon. In addition, Deba et al. 17 suggested that the antioxidant activity of phenolics lies in their capacity to donate hydrogen atoms to free radicals (hydroperoxides of the reaction medium) resulting from the oxidation of linoleic acid and, consequently, to stop the attack of these radicals on β-carotene. Sandhar et al. 18 indicated that flavonoids inhibit lipid peroxidation at an early stage through scavenging of peroxide radicals, as they can interrupt a chain of radical reactions through hydrogen donation.
Antifungal activity of EOs and extracts
The results of MIC and MFC are summarized in Tables 3, 4 and 5. There are no confirmed criteria for MIC endpoints for in vitro antimicrobial bioassays. Nevertheless, according to Aligiannis et al. 19 , the antimicrobial activity is considered strong when MIC values are between 0.05 mg/mL and 0.50 mg/mL, moderate when they are between 0.6 mg/mL and 1.5 mg/mL, and low when greater than 1.50 mg/mL.
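The Aligiannis et al. 19 grading quoted above can be written as a tiny classifier. Note that the source leaves the 0.50–0.6 mg/mL interval unassigned; the sketch below treats ≤ 0.5 as strong and ≤ 1.5 as moderate, which is a choice on our part, not part of the source.

```python
# MIC strength grading after Aligiannis et al.; boundary handling at exactly
# 0.5 and 1.5 mg/mL is an assumption (the source quotes open ranges).
def grade_mic(mic_mg_per_ml):
    if mic_mg_per_ml <= 0.5:
        return "strong"
    if mic_mg_per_ml <= 1.5:
        return "moderate"
    return "low"

# e.g. the extremes reported in this study:
print(grade_mic(0.1), grade_mic(1.06), grade_mic(4.8))  # strong moderate low
```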
The EOs and extracts of the 12 species were significantly effective towards the target fungi (p < 0.05). Generally, MIC values varied between 0.6 ± 0.00 mg/mL and 4.8 ± 0.00 mg/mL for EOs, between 0.1 ± 0.00 mg/mL and 1.06 ± 0.46 mg/mL for chloroform extracts and between 0.1 ± 0.00 mg/mL and 1.6 ± 0.00 mg/mL for ethanolic extracts. According to this evaluation system, the EOs had moderate to low activity with respect to the tested fungi. The chloroform extracts exhibited good activity and the ethanolic extracts presented the same pattern of inhibitory activity.
The MFCs were higher than the corresponding MICs for all fungal strains, indicating that the EOs and extracts are fungicidal at concentrations above their MICs. The MFCs were found to be between 1.0 ± 0.34 mg/mL and > 4.8 mg/mL for EOs, 0.26 ± 0.11 mg/mL and > 1.6 mg/mL for chloroform extracts, and 0.2 ± 0.00 mg/mL and > 1.6 mg/mL for ethanolic extracts.
Several chemical components recognized for their antifungal activities are present in Thymus spp. EOs as major or minor constituents. Generally, EOs show powerful antifungal potential against food-spoilage fungi owing to their abundance of phenolic compounds, mainly thymol, carvacrol, linalool, γ-terpinene, and p-cymene, indicating that phenolic compounds have the highest antifungal potential among the terpene components of EOs 20 .
The effect of thymol seems to be similar to that of carvacrol, because the different position of the hydroxyl group on their phenolic ring does not influence the degree of antimicrobial activity. Dikbas et al. 21 confirmed that the inhibition of fungi by thymol and carvacrol was similar to or greater than that of the whole EOs.
Overall, the observed antifungal effect would then be attributable to one or more active molecules, present in high or low proportion in EOs. In this way, several active compounds may have high inhibition potential or they have synergistic/antagonistic effects, which could affect the inhibition capacity.
The results also indicated that all fungal strains were affected by both extracts. As with the antifungal activity of EOs, the antifungal activity of the extracts can be related to bioactive compounds such as phenolics and flavonoids. Lahmar et al. 22 showed that the activity of phenolics against fungi can be attributed to enzyme inhibition by oxidized phenols and to inhibition of protein synthesis in the cell. In other studies, Zaïri et al. 23 confirmed that the hydrophobicity of phenolics such as flavonols is also a criterion of toxicity, allowing them to intercalate into membrane phospholipids and exert their antifungal effects inside the cell.
Conclusion
The results of the present study showed that the EOs, chloroform and ethanolic extracts of the 12 Algerian Thymus species contain considerable amounts of phenolics and flavonoids, exhibit strong antioxidant activity, and are markedly active against mycotoxigenic Aspergillus strains. Thymus spp. EOs and extracts could therefore be used as natural alternatives for the food industry.
Utilizing Artificial Neural Network for Load Prediction Caused by Fluid Sloshing in Tanks
In this research, neural network models were used to predict sloshing phenomena in a tank containing fluid under harmonic excitation. A new methodology is proposed in this analysis to test and simulate fluid sloshing behavior in the tank. The sloshing behavior was first modeled using the smoothed particle hydrodynamics (SPH) method. Two multilayer feed-forward neural networks and a recurrent neural network were then trained using the error-backpropagation algorithm. The results of the SPH simulations were employed in training and testing the neural networks. The network inputs are the tank position, velocity, and acceleration; the output is the position of the sloshing free-surface wave. The results of the neural networks were compared with the experimental evidence reported in the literature. The findings revealed that neural networks can be used to predict fluid sloshing.
Introduction
Fluid sloshing is relevant to a wide variety of engineering applications, for example, the design of fuel tanks for automobiles and of containers that carry liquid on roads, ships, and space vessels. As a result, studying fluid sloshing inside a partly filled container is important [1,2]. Sloshing is the movement of liquid inside a partly filled container as a result of external excitations. The liquid may undergo violent oscillations under critical circumstances, such as major container movement or the presence of resonance, where the excitation frequency is similar to the natural frequency of the liquid sloshing mechanism. As a result, the container system is subjected to substantial structural load due to the induced high impact effect [3]. For a long time, fluid-filled storage tank systems have piqued researchers' attention because of the peculiar properties that result from the fluid's contact with the system. Sloshing is caused by the interaction between the fluid and the structure, which may be a serious concern for vehicle stability and control [4]. The resonant state in sloshing will create high structural loads on the tank frame when the frequency of tank motion is similar to the natural frequency of the fluid within it. This resonance effect may be linked to complex movements of the filled liquid, which could couple with structure motions, posing a threat to the tank structure and its stability [5][6][7].
The behavior of the free surface motion of the liquid within the tank is determined by the form of excitation, the frequency ratio to the natural frequency, and the amplitude. Excitation can take many types, including impulsive, sinusoidal, and random. The tank will sway, rotate, pitch/yaw, or a combination of these motions. In moderate sloshing, the resulting free surface profile may be a mix of various wave modes, such as hydraulic jump and traveling waves, or standing and breaking waves in extreme sloshing [6][7][8][9][10]. For the last few decades, researchers have been very interested in predicting the free surface motion of liquids, and some of the experiments on liquid sloshing are mentioned. Mechanical models of the phenomena were used at the beginning, with terms in the harmonic equation of motion being modified [10,11]. There are several similar articles on reducing sloshing in the literature, and several researchers have published studies on sloshing using different methods [11][12][13]. Sloshing has been simulated in several previous studies using a generalized computational approach based on a pendulum or spring-mass model [12]. Nevertheless, because of the vast number of assumptions and simplifications used in the sloshing modeling stage, this approach has several flaws and weaknesses in implementation.
Numerical simulations, in addition to the above methods, were presented as commonly used techniques for investigating fluid sloshing problems. For studying extremely nonlinear sloshing problems, numerical techniques have offered an alternative tool. Many reports [14,15] include detailed analyses of numerical techniques for liquid sloshing problems. Grid-based approaches, such as the finite difference method (FDM) [16,17], finite element method (FEM) [18,19], and boundary element method (BEM) [20][21][22][23], were mainly used in computational experiments during the last decade. The traditional grid-based approach has several issues with liquid motion discontinuity or breaking waves. Although some free surface monitoring strategies, such as volume-of-fluid (VOF) [24,25] and level set [26], have been adopted to address these shortcomings and increase the efficiency of traditional mesh-based systems, they still cannot accommodate large fluid deformation, necessitating mesh adjustment or rezoning to resolve liquid sloshing [27]. The use of a Lagrangian formulation to describe both fluid and structure motion has received a lot of interest as an alternate class of numerical simulation methods. This is due in part to the method's ease of execution and in part to the method's independence from grid data. In reality, the term meshless refers to the lack of an intrinsic dependence on a certain mesh topology in these approaches. Using Lagrangian techniques on both the solid and fluid parts of the problem has the advantage of allowing one to follow the motion of the fluid-solid interface and model the fluid's free surface without any special care.
The SPH approach is a meshless methodology that Chen and Nokes [16] and Chen and Wu et al. [17,18] first introduced in 1977. The technique is purely Lagrangian and has been applied to a variety of problems, including astrophysics [28][29][30][31], fluid mechanics [31,32], solid mechanics [33], and fluid-structure interaction [28]. Mesh-free and particle methods have recently been introduced as alternatives to mesh-based methods for studying nonlinear free-surface flows [28-30, 34, 35]. SPH is a meshless approach that was initially designed for compressible flow before being extended to the incompressible case [30]. This study is aimed at improving the SPH method's ability to correctly simulate sloshing flow and calculate impact pressure by utilizing a more precise time-stepping integration and a simplified boundary-state treatment. The theoretical approach is the most pragmatic way of studying sloshing physics, and both experimental and theoretical methods have been considered here. Small-scale studies have mostly been employed in manufacturing fields because of the sophistication and stochastic nature of sloshing problems [33].
Even though laboratory tests have been conducted, the factors that should be studied are still up for debate. Sloshing loads under a given test environment are difficult to estimate due to their irregularity and nonparametric complexity. A few studies have been carried out [36,37], but no definitive estimate can be made. As a result, a nonlinear intelligent approach must be used to model this behavior, and an artificial neural network is a good method for modeling nonlinear processes. Ahn et al. [38] conducted a large number of experiments, and an artificial neural network was trained on the resulting database to simulate the magnitude of sloshing loads. In practical terms, neural networks are nonlinear mathematical tools for data processing and decision making. They can be employed to model complicated input-output relationships or to identify patterns in data.
Artificial neural networks have two basic features: learning, or mapping based on the presentation of experimental data (with the power and ability to generalize), and a parallel structure. They are among the most important artificial intelligence methods inspired by the human brain; during the training process, the information in the data is stored within the network weights.
Because of their strength, flexibility, and ease of use, neural networks are powerful tools in many predictive applications based on data analysis. When a process exhibits nonlinear behavior and its mathematical equations are difficult to formulate, an artificial neural network is one of the best available methods.
The advantage of a neural network is direct learning from data without the need to estimate their statistical characteristics. The neural network is able to find the relationship between the set of inputs and outputs to predict each output corresponding to the desired input, without considering any initial hypotheses and prior knowledge of the relationships between the studied parameters.
Because most previous studies have examined fluid sloshing behavior in tanks using numerical methods and equivalent mechanical models (pendulum and mass-spring models), this study instead uses a neural network as a suitable technique for predicting and modeling fluid sloshing behavior in the tank.
One of the achievements of this study is to determine and investigate the sloshing behavior of fluid in a reservoir using neural network tools to predict sloshing in reservoirs. The sloshing phenomenon is nonlinear, and its behavior changes when the initial conditions or the simulation time change, which makes other modeling methods difficult to apply and justifies the use of neural network tools. In fact, the work done in this study is behavior modeling based on the input and output results of a sloshing phenomenon, using data obtained from the SPH method. As a result, a neural network-based methodology is proposed in this research to model the sloshing behavior of liquid in a rectangular tank under harmonic excitation. The experimental results of [38] and a numerical SPH study of sloshing effects in the tank are discussed first, and then a method for predicting fluid sloshing in the tank is suggested. There are four steps to this article. In the first stage, the sloshing effect was numerically modeled using the SPH equations, and the collected SPH findings were compared to the experimental results. Multilayer feed-forward (MLFF) neural networks are generated in the second stage. The Elman neural network is built in the third stage. The inputs to the neural networks were the wave curve direction, velocity, and acceleration, and the outputs were the wave curve position. In the final step, the computed findings from the neural networks were compared to the real data, and they were in good agreement.
Model Description
The sloshing tests are carried out at atmospheric pressure and room temperature in a rectangular tank partly filled with water. This paper's case study is based on previous research on lateral sloshing effects under periodic harmonic excitation. The tank's dimensions are 1.3 m × 0.9 m × 0.1 m, corresponding to the tank's length, height, and width, respectively (see Figure 1).
Based on the experimental work of Rafiee et al. [31], the schematic diagram for liquid sloshing in a tank was adopted, with a low filling-depth ratio (d = 0.2H) and a sinusoidal motion excitation, x = A sin(ωt). The motion was imposed with a high amplitude (A = 0.1 m) and a resonance frequency of ω = 3.116. Figure 1 shows the precise dimensions of the rectangular tank in this model.
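As a rough illustration of this excitation, the displacement and its time derivatives (which later serve as the neural network inputs) can be generated as follows. This is a sketch of the stated harmonic motion; the assumption that ω is given in rad/s is ours:

```python
import numpy as np

# Harmonic tank excitation from the model description: x(t) = A*sin(w*t),
# with amplitude A = 0.1 m; w = 3.116 is assumed to be in rad/s.
A = 0.1
w = 3.116

def excitation(t):
    """Tank displacement, velocity and acceleration at time t."""
    x = A * np.sin(w * t)            # displacement
    v = A * w * np.cos(w * t)        # velocity  = dx/dt
    a = -A * w ** 2 * np.sin(w * t)  # acceleration = dv/dt
    return x, v, a

# Sampled every 0.01 s, matching the datasets used later for the networks.
t = np.arange(0.0, 10.0, 0.01)
x, v, a = excitation(t)
```

The identity a = −ω²x holds for any simple harmonic excitation and provides a convenient consistency check on sampled input data.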
Numerical Analysis
3.1. SPH Theory. The SPH procedure is used to numerically simulate the sloshing effect in this article. A short overview of the approach is provided below, along with several key implementation issues. For a more detailed summary, the reader is directed to [38]. The interpolation principle underpins the SPH method: using a kernel function, any function may be represented in terms of its values at a series of disordered points describing particle locations. The kernel function is a weighting function that determines the contribution of a common field variable, A(r), at a specific point in space, r. The kernel estimate of A(r) is specified as [24,28]

A(r) = ∫_V A(r′) W(r − r′, h) dV′, (1)
where r denotes the position vector, V denotes the solution space, and h denotes the kernel's effective distance (smoothing length). By discretizing Equation (1), the particle approximation of the function at a particle i, A(r_i), can be written as

A(r_i) = Σ_j (m_j/ρ_j) A(r_j) W_ij. (2)
The summation must include all particles inside the kernel function's compact support area. The weighting function or kernel is W_ij = W(r_i − r_j, h), and the mass and density are m_j and ρ_j, respectively. The kernel function proposed by Faltinsen and Timokha [23] is one of the kernel functions used in this work, where α_d is 10/(7πh²) in 2D, q = r/h, and r is the distance between two points a and b.
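The stated normalization α_d = 10/(7πh²) matches the standard two-dimensional cubic-spline kernel, so a sketch of that kernel is given below. The piecewise polynomial form itself is an assumption, since the paper's explicit expression is not reproduced in the text:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """2D cubic-spline SPH kernel consistent with alpha_d = 10/(7*pi*h^2);
    the piecewise form is the standard one and is assumed here, since the
    paper's explicit expression is not reproduced in the text."""
    alpha_d = 10.0 / (7.0 * np.pi * h ** 2)
    q = np.asarray(r, dtype=float) / h
    w = np.where(
        q <= 1.0,
        1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q <= 2.0, 0.25 * (2.0 - q) ** 3, 0.0),
    )
    return alpha_d * w

# A valid SPH kernel must integrate to 1 over the plane:
# integral of W(r) * 2*pi*r dr over r in [0, 2h].
h = 0.02
r = np.linspace(0.0, 2.0 * h, 20001)
dr = r[1] - r[0]
integral = float(np.sum(cubic_spline_kernel(r, h) * 2.0 * np.pi * r) * dr)
```

The unit-integral property is exactly what fixes the normalization constant to 10/(7πh²) in two dimensions.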
Discretization of Governing Equations in SPH
Formulation. To model fluid motions, the SPH formalism is applied to the Navier-Stokes equations. This approach is described briefly here. The governing equations are the continuity and Navier-Stokes equations:

dρ/dt = −ρ∇·v,

dv/dt = −(1/ρ)∇p + g + ν∇²v,
where v denotes the particle velocity vector, t is the time, ρ is the fluid density, p is the pressure, g is the gravitational acceleration vector, and ν denotes the laminar kinematic viscosity. If the actual equation of state were applied, the time step would be very limited due to the very low compressibility of real liquids. As a result, in the actual computation the fluid is typically treated as weakly compressible, and the pressure field is obtained by solving the equation p = p(ρ, e). In addition, the entropy effect on the pressure field can be ignored when the fluid pressure is less than 1 GPa, so the density of the fluid is determined solely by its pressure. The SPH equations comprise the mass conservation equation, the energy conservation equation, and the momentum conservation equation. When the pressure in the flow field is low, the fluid is considered barotropic and energy has no impact on the pressure field; as a result, the energy equation is not solved. Using the SPH method's kernel approximation and particle approximation, the density equation and the momentum equation are written as

dρ_i/dt = Σ_j m_j (v_i − v_j)·∇_i W_ij,

dv_i/dt = −Σ_j m_j (p_i/ρ_i² + p_j/ρ_j²) ∇_i W_ij + g,

where p, m, ρ, c, v, r, and g represent the pressure, mass, density, speed of sound, velocity, coordinates, and acceleration of gravity, respectively, and r_ij = r_i − r_j. For the pressure, the system is closed with a stiff equation of state.
p = B[(ρ/ρ_0)^γ − 1], B = c_s²ρ_0/γ,

where ρ_0 denotes the fluid's nominal density (1000 kg/m³), γ is a constant set to 7, and c_s denotes the numerical sound speed used in the computation. In SPH, the sound speed is normally set to 10 times the predicted maximum velocity of the fluid (V_max). Because the density variation scales with the square of the Mach number, it is then limited to about 1% of the fluid's nominal density. The numerical sound speed is therefore kept low enough to allow for reasonable time steps.
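The stiff equation of state described here is commonly written in the Tait form; the following sketch assumes that standard form, since the paper's explicit formula is not reproduced in the text:

```python
def tait_pressure(rho, rho0=1000.0, gamma=7.0, v_max=1.0):
    """Weakly compressible (Tait-type) equation of state commonly used in SPH,
    p = B*[(rho/rho0)**gamma - 1] with B = rho0*c_s**2/gamma. This standard
    form is an assumption, as the paper's explicit formula is not reproduced.
    The numerical sound speed c_s is set to 10*V_max, as described above."""
    c_s = 10.0 * v_max
    B = rho0 * c_s ** 2 / gamma
    return B * ((rho / rho0) ** gamma - 1.0)
```

With γ = 7, a 1% density increase gives (1.01)⁷ − 1 ≈ 0.07, i.e. a pressure of roughly 0.07·B; the stiff exponent makes pressure very sensitive to density, which keeps density fluctuations near the 1% level.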
Results of SPH Modelling
A Fortran code based on the SPH method is employed in the simulation. A simulated model is depicted in Figure 2. These findings can be checked based on the pressure results collected from the SPH process and experiments, as seen in Figures 2 and 3. The need for simulation of the SPH process, on the other hand, arises from the fact that it will be used in the next segment for neural network training, and the more data available, the closer the sloshing behavior prediction using a qualified neural network will be to actual performance.
Data Preprocessing.
Preprocessing data is a crucial part of the data mining process. Data collection approaches that are not tightly controlled may produce out-of-range values, implausible data combinations, missing values, and other issues, and analyzing data that has not been thoroughly screened for such problems can lead to false conclusions. As a result, before running an analysis, the representation and consistency of the data must come first [1]. Data preprocessing is often the most crucial stage of a machine learning project [2]. Cleaning, instance filtering, normalization, transformation, feature extraction and selection, and so on are all examples of data preprocessing, and the final training set is its product. Data preprocessing may also affect how the results of the final data processing are interpreted [3]; when the interpretation of the findings is critical, this aspect should be carefully considered. As previously stated, the simulation employs the SPH method, so the tank model and the fluid inside it are represented by a large number of particles. Each particle has many attributes at each time step of the simulation, including position, velocity, pressure, mass, and density. As a result, a large amount of data becomes available over the course of the simulation. Because of the small time steps, the amount of data collected is large even though the simulated time is less than 10 seconds. As an example, Figure 4 shows a picture of the simulated model of fluid in the tank. As can be observed, a large number of particles make up the fluid volume, so the simulation's output contains a very large amount of numerical data (see Figure 4). The next step is to prepare the data for use in neural networks; before this stage, however, the size of the results must be reduced, which is accomplished with MATLAB code.
The model designed according to the SPH method consists of a significant number of particles in both the vertical and horizontal directions. We are looking for the waveform produced in the tank, from which the sloshing effect, pressure values, and generated forces can be calculated. Obtaining the location of the fluid surface, in other words the position of the generated wave, is therefore necessary to determine the shape of the fluid at any given time, so the positions of the particles on the fluid's surface are critical in determining the final form of the fluid in the tank. This is shown in Figures 5 and 6.
According to Figure 5, the goal of particle removal is to decrease the computational cost. As a consequence, using the wave's curve coordinates, we can calculate the sloshing results under the same conditions as before.
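The particle-filtering step described above (implemented by the authors in MATLAB) can be sketched as follows: bin the particles into vertical columns and keep the topmost particle of each column as the free-surface wave profile. The binning strategy and function names here are illustrative assumptions, not the paper's exact code:

```python
import numpy as np

def free_surface_profile(x, y, n_bins=127):
    """Reduce a full SPH particle cloud to a free-surface wave profile by
    keeping the highest particle in each of n_bins vertical columns
    (n_bins = 127 matches the 127-point wave curve used as network output)."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    surface_y = np.full(n_bins, np.nan)
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.any():
            surface_y[b] = y[in_bin].max()  # topmost particle = free surface
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, surface_y

# Synthetic example: a block of particles whose upper envelope follows a sine.
rng = np.random.default_rng(0)
xp = rng.uniform(0.0, 1.3, 20000)
yp = rng.uniform(0.0, 1.0, 20000) * (0.18 + 0.02 * np.sin(2 * np.pi * xp / 1.3))
xc, ys = free_surface_profile(xp, yp)
```

This kind of reduction discards the interior particles while preserving exactly the information (the wave curve position) that the neural network is later trained to predict.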
Artificial Neural Network.
Artificial neural networks are used in a variety of applications, including control, manufacturing, and optimization [39]. Several different kinds of networks can be employed; the network type determines various parameters, such as the type of neurons present in each layer and the way the network layers are interconnected. The following two forms are considered in this research: (1) MLFF neural networks, made up of two or three layers of neurons. Feed-forward networks get their name from the fact that the output of each layer is simply fed into the next layer; each layer may have a different size and transfer function. (2) Elman networks, a form of recurrent network consisting of two feed-forward layers with feedback from the first layer's output to the first layer's input.
The hidden layer's neurons have a tan-sigmoid transfer function, while the output layer's neurons have a linear transfer function [40].

4.3. Data Collection. The gathering of data is an important step in the creation of neural network models. The data for this analysis come from simulations of a complex SPH model. Throughout the simulation, the data are sampled every 0.01 second, yielding a total of 1000 datasets. Time, tank displacement, velocity, and acceleration are the model's input parameters; the waveform location is the model's output parameter.
The main technique in this study is the use of fluid sloshing wave profile curve data in the reservoir shown in Figure 5. The results obtained from the SPH method are based on particle modeling, so a large amount of fluid information is generated for modeling, and to use it, a large part of it must be filtered. Therefore, a programming code has been written in MATLAB software that extracts useful information from the simulation data of SPH method and transmits it for use in a neural network.
Geofluids
The next step is to define the input and output data for the neural network. As mentioned earlier, the input data of the network are the displacement, velocity, and acceleration applied to the fluid, and the output data are the position of the fluid sloshing wave profile. For example, a view of the fluid sloshing wave curve after filtration is shown in Figure 6. As a result, the most important part of this research is the extraction of simulation data and the selection of appropriate data for use in the neural network.
Data Normalization.
Since each input sample has a different physical meaning and magnitude, the inputs must be normalized so that each one carries equivalent weight and so that the weights are not driven into the flat region of the error surface. The ANN's normalized inputs are generated using the following normalization function:

X_n = (X − X_min)/(X_max − X_min),

where X_max and X_min are the maximum and minimum values of X, respectively, and X_n is the normalized value of X.
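The min-max normalization described above can be sketched as:

```python
import numpy as np

def minmax_normalize(X):
    """Min-max normalization X_n = (X - X_min) / (X_max - X_min), mapping
    each input channel (column) to [0, 1] so that no channel dominates
    training because of its physical units."""
    X = np.asarray(X, dtype=float)
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

# Example: columns with very different physical scales (e.g. displacement [m],
# velocity [m/s], acceleration [m/s^2]) end up on a common [0, 1] scale.
samples = np.array([[0.00, -0.31, -0.97],
                    [0.05,  0.00,  0.52],
                    [0.10,  0.31,  0.97]])
normed = minmax_normalize(samples)
```

After normalization, every column spans exactly [0, 1], regardless of its original units.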
Determination of Neuron and Layer Numbers in ANN.
A two-layer feed-forward network configuration is used in this paper's neural network prediction model, comprising a hidden layer and an output layer. The number of hidden neurons also affects the network: the predictive capacity of the model depends on the number of neurons in the hidden layer.
The MLFF network is shown in Figure 7. The neural network used in this study is a multilayer perceptron trained with the backpropagation algorithm. Backpropagation is a method for training artificial neural networks with one or more hidden layers: it computes the gradient of the cost function with respect to the network weights and updates the weights of the neurons by gradient descent, thereby minimizing the cost function.
For the MLFF network, the input data are X, V, and a, and the output data are [y_1, y_2, ⋯, y_127]. W_i1 denotes the connection weights between the input layer and the hidden layer, W_i2 denotes the connection weights between the hidden layer and the output layer, and the sigmoid activation function has the form

f(z) = 1/(1 + e^(−z)),

whose derivative form is

f′(z) = f(z)(1 − f(z)).

As a result, the estimated network outputs ŷ_t and ŷ_(t+1) are obtained according to

ŷ_t = W_i2 f(W_i1 x(t) + b_1), (12)

ŷ_(t+1) = W_i2 f(W_i1 x(t+1) + b_1), (13)

where x(t) = [x_t^T v_t^T a_t^T]^T, x(t+1) = [x_(t+1)^T v_(t+1)^T a_(t+1)^T]^T, and b_1 is the activation threshold. Formulas (12) and (13) transform the solving of y_t and y_(t+1) into the learning of the connection weights W_i1 and W_i2. The main purpose of the network calculations is to reduce the difference between the network's predicted value and the actual model; therefore, a cost function is defined according to which the difference between y_t and ŷ_t is minimized.
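A sketch of this forward pass (sigmoid hidden layer, linear output) with layer sizes matching the 3-input, 5-hidden-neuron, 127-output network described in this paper; the weight values are random placeholders, not trained values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_deriv(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # f'(z) = f(z) * (1 - f(z))

def forward(x, W1, b1, W2):
    """Two-layer forward pass: sigmoid hidden layer, linear output layer."""
    g = sigmoid(W1 @ x + b1)  # hidden activations g_i(t)
    return W2 @ g, g          # predicted wave-profile vector and activations

rng = np.random.default_rng(1)
W1 = rng.normal(0.0, 0.1, (5, 3))    # input (x, v, a) -> 5 hidden neurons
b1 = np.zeros(5)                     # activation threshold b_1
W2 = rng.normal(0.0, 0.1, (127, 5))  # hidden -> 127 wave-profile outputs
y_hat, g = forward(np.array([0.05, 0.1, -0.9]), W1, b1, W2)
```

The self-similar derivative f′ = f(1 − f) is what makes the sigmoid convenient for backpropagation: the gradient can be evaluated from the already-computed activations.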
The error δ_i between the estimated ŷ_i(t) and the target y_i(t) is obtained by

δ_i = y_i(t) − ŷ_i(t).
We seek the minimum estimation error; therefore, the weight corrections ΔW_i1 and ΔW_i2 at time t are

ΔW_i2 = μ δ_i g_i(t),

ΔW_i1 = μ δ_i W_i2 g_i(t)(1 − g_i(t)) X(t),

where μ is the learning rate, X(t) is the input of the neural network, g_i(t) is the output of the sigmoid function of the hidden layer, and W_i2 is the connection weight between the hidden layer and the output layer. g_i and W_i2 are obtained from the neural network; see [41,42] for a more detailed discussion.
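A minimal sketch of this update rule: one forward pass, the output error δ, and gradient-descent corrections to both weight matrices. The bias term is omitted and the layer sizes are illustrative, so this is an outline of the scheme rather than the paper's exact implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, W1, W2, mu=0.1):
    """One backpropagation update with learning rate mu: forward pass,
    output error delta, then gradient-descent corrections dW1 and dW2."""
    g = sigmoid(W1 @ x)                                      # hidden activations g_i(t)
    y_hat = W2 @ g                                           # linear output
    delta = y - y_hat                                        # output error delta_i
    dW2 = mu * np.outer(delta, g)                            # hidden -> output correction
    dW1 = mu * np.outer((W2.T @ delta) * g * (1.0 - g), x)   # input -> hidden correction
    return W1 + dW1, W2 + dW2, float(np.mean(delta ** 2))

# Fit a single illustrative sample; the loss should shrink toward zero.
rng = np.random.default_rng(2)
x = np.array([0.2, -0.4, 0.7])   # displacement, velocity, acceleration
y = np.array([0.3, -0.1])        # target outputs
W1 = rng.normal(0.0, 0.3, (4, 3))
W2 = rng.normal(0.0, 0.3, (2, 4))
losses = []
for _ in range(500):
    W1, W2, mse = train_step(x, y, W1, W2)
    losses.append(mse)
```

The factor g(1 − g) in the input-to-hidden correction is the sigmoid derivative evaluated at the hidden activations, which is how the output error is propagated back through the hidden layer.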
4.6. Training and Testing of Network. The data extracted from the SPH modeling have been randomly divided into three groups for training the networks: 70% is used for training, 15% is used to verify that the network is generalizing and to stop training before it overfits, and the remaining 15% is employed as a completely independent test of network generalization. ANN training is the process of finding optimum values for the adjustable parameters, weights, and biases that give the best match between input and output data. It is a nonlinear optimization problem to minimize the mean squared error (MSE) and root-mean-square error (RMSE). The Levenberg-Marquardt optimization algorithm was used to complete the training phase.

Results of Neural Network Modeling. For predicting sloshing behavior, a multilayer feed-forward neural network was implemented in the first phase. The neural network inputs are the tank position, velocity, and acceleration, and the network output is the position of the fluid sloshing wave curve. As previously mentioned, the wave curve coordinates are obtained by the data mining procedure applied to the simulation results of the SPH method. The number of particles in the waveform is 127, which is used as the neural network's output based on the wave curve data (Figure 6). In the hidden layer, the number of neurons was set to 5. The network accuracy is also validated using regression (r) plots. The regression plots in Figure 8 indicate that the fit is strong for all datasets, with regression values of 0.9993 or higher in each case.
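The 70/15/15 split and the error measures used here can be sketched as:

```python
import numpy as np

def split_70_15_15(n_samples, seed=0):
    """Random 70/15/15 split into training, validation and test indices,
    mirroring the division of the 1000 SPH datasets described above."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(0.70 * n_samples)
    n_val = int(0.15 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def mse(y_true, y_pred):
    d = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(d ** 2))

def rmse(y_true, y_pred):
    return mse(y_true, y_pred) ** 0.5

train_idx, val_idx, test_idx = split_70_15_15(1000)
```

The validation subset is the one used for early stopping; the test subset is never seen during training, so its RMSE estimates true generalization.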
Figure 8 shows a comparison between the values predicted by the network and the real values; the error values for each input vector are also shown in this figure. The estimated values are very close to the real values, and the majority of the errors are close to zero, as seen in Figures 9-13.
The Levenberg-Marquardt algorithm is one of the fastest training methods for backpropagation networks and is highly efficient for medium-sized networks. The main drawback of this method is the need to store large matrices in memory, which requires a lot of space [43][44][45][46][47]; see reference [48] for more details. Another method used is the Bayesian regularization algorithm, in which the weights and biases of the network are assumed to be random variables with specific distributions; see reference [49] for more details. The third method is the scaled conjugate gradient algorithm. This algorithm works well for solving a wide range of problems; in particular, for problems with a large number of network parameters, it is nearly as fast as the Levenberg-Marquardt algorithm in estimating functions; see reference [50] for more details. The results of evaluating the algorithms used are shown in Table 1.
The Elman neural network was employed in the second stage to model the sloshing behavior. This network is a dynamic neural network in which there is a feedback loop with a single delay around each layer of the network. This connection helps the network to identify and generate time-varying patterns. The simplest form of this structure consists of only two layers, whose hidden and output transfer functions are tan-sigmoid and purelin, respectively; see reference [51] for more details. The results for the Elman neural network are shown in Table 2. Although our neural networks predict with high accuracy, they can sometimes encounter issues such as becoming stuck in poor local minima, or under- or overfitting during training. A nonlinear network's error surface is more complicated than a linear network's error surface: the nonlinear transfer functions in multilayer networks introduce several local minima in the error surface. Since gradient descent is conducted on the error surface, it is common for the network solution to become trapped in one of these local minima, depending on the initial conditions. Depending on how close the local minimum is to the global minimum and how small an error is acceptable, settling in a local minimum may be acceptable or detrimental. As a result, while a multilayer backpropagation network with enough neurons can represent almost any function, backpropagation does not always find the optimal weights. To ensure that the right solution has been found, the network may need to be reinitialized and retrained multiple times.
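A minimal sketch of an Elman step as described above, with a tan-sigmoid hidden layer receiving single-delay feedback from its own previous output (context units) and a linear (purelin) output layer; the sizes and random weights are illustrative, not the trained network's values:

```python
import numpy as np

def tansig(z):
    return np.tanh(z)  # tan-sigmoid transfer function of the hidden layer

class ElmanCell:
    """Minimal Elman recurrent layer: the hidden state from the previous
    time step is fed back alongside the current input (context units)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W_ctx = rng.normal(0.0, 0.5, (n_hidden, n_hidden))  # delayed feedback
        self.W_out = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)  # context units (previous hidden state)

    def step(self, x):
        self.h = tansig(self.W_in @ x + self.W_ctx @ self.h)  # recurrent update
        return self.W_out @ self.h                            # linear (purelin) output

net = ElmanCell(n_in=3, n_hidden=5, n_out=1)
y1 = net.step(np.array([0.1, 0.2, 0.3]))
y2 = net.step(np.array([0.1, 0.2, 0.3]))  # same input, different output: memory
```

The internal state is what lets the network represent time-varying patterns: identical inputs at different time steps generally produce different outputs.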
In nonlinear systems, the slightest change in the boundary conditions can change the behavior of the system, and the phenomenon of fluid sloshing in reservoirs belongs to this class of nonlinear systems. Therefore, modeling and ultimately predicting such behavior with other methods, such as numerical, experimental, and mathematical methods, causes many problems. The ease of using input and output data for network training has made artificial neural networks superior to other methods for fluid sloshing modeling, although without optimization algorithms such as genetic algorithms and PSO, the results can be poor. On the other hand, once such models are implemented by the neural network, the resulting dynamic model can easily be used in control systems.
Conclusion
In this paper, the fluid sloshing model in the reservoir was first simulated based on the SPH method. The results obtained from the SPH method are based on particle modeling, so a large amount of fluid information is generated, and a large part of it must be filtered before use. Therefore, a programming code was written in MATLAB that extracts useful information from the SPH simulation data and passes it on for use in the neural network. Then, the neural network was used to predict the sloshing behavior. Two types of neural networks, the MLFF neural network and the Elman neural network, were compared based on different training functions. The results obtained from the training data and test data show that the neural network modeling achieves good accuracy. Table 1 also compares the training algorithms in terms of training time, MSE, and RMSE.
Data Availability
The physical properties and data are extracted from [31].
Disclosure
The funding sources had no involvement in the study design, collection, analysis, or interpretation of data, writing of the manuscript, or in the decision to submit the manuscript for publication.
Non-invasive in vivo determination of viable islet graft volume by 111In-exendin-3

Pancreatic islet transplantation is a promising therapy for patients with type 1 diabetes. However, the duration of long-term graft survival is limited due to inflammatory as well as non-inflammatory processes and routine clinical tests are not suitable to monitor islet survival. 111In-exendin-SPECT (single photon emission computed tomography) is a promising method to non-invasively image islets after transplantation and has the potential to help improve the clinical outcome. Whether 111In-exendin-SPECT allows detecting small differences in beta-cell mass (BCM) and measuring the actual volume of islets that were successfully engrafted has yet to be demonstrated. Here, we evaluated the performance of 111In-exendin-SPECT using an intramuscular islet transplantation model in C3H mice. In vivo imaging of animals transplanted with 50, 100, 200, 400 and 800 islets revealed an excellent linear correlation between SPECT quantification of 111In-exendin uptake and insulin-positive area of islet transplants, demonstrating that 111In-exendin-SPECT specifically and accurately measures BCM. The high sensitivity of the method allowed measuring small differences in graft volumes, including grafts that contained less than 50 islets. The presented method is reliable, convenient and holds great potential for non-invasive monitoring of BCM after islet transplantation in humans.
Pancreatic islet transplantation is a promising therapy for patients with type 1 diabetes. However, the duration of long-term graft survival is limited due to inflammatory as well as non-inflammatory processes and routine clinical tests are not suitable to monitor islet survival. 111In-exendin-SPECT (single photon emission computed tomography) is a promising method to non-invasively image islets after transplantation and has the potential to help improve the clinical outcome. Whether 111In-exendin-SPECT allows detecting small differences in beta-cell mass (BCM) and measuring the actual volume of islets that were successfully engrafted has yet to be demonstrated. Here, we evaluated the performance of 111In-exendin-SPECT using an intramuscular islet transplantation model in C3H mice. In vivo imaging of animals transplanted with 50, 100, 200, 400 and 800 islets revealed an excellent linear correlation between SPECT quantification of 111In-exendin uptake and insulin-positive area of islet transplants, demonstrating that 111In-exendin-SPECT specifically and accurately measures BCM. The high sensitivity of the method allowed measuring small differences in graft volumes, including grafts that contained less than 50 islets. The presented method is reliable, convenient and holds great potential for non-invasive monitoring of BCM after islet transplantation in humans.
in islet grafts 15 . In another experiment, islets were pre-labeled with 18 F-fluorodeoxyglucose ( 18 F-FDG) and post-transplantation events could be monitored for up to 6 hours after transplantation 16 .
More recently, specific targeting of beta-cells after in vivo injection of radiolabeled exendin followed by SPECT imaging was reported as a promising strategy to non-invasively visualize and quantify BCM in the pancreas of rodents, as well as in healthy and diabetic individuals 17,18 . Similar exendin-based radiotracers were applied for non-invasive imaging of islet grafts in rodent transplantation models as well as in human skeletal muscle [19][20][21] . Although the use of such tracers in a clinical setting of islet transplantation is highly warranted, establishing the correlation between true BCM and the uptake of the radiotracer in vivo is the essential validation step before such studies should be conducted in humans. Such decisive studies have not been performed for GLP-1R imaging of islet transplants.
In the present study, we measured the uptake of 111 In-exendin-3 in transplants consisting of different amounts of islets in the calf muscles of C3H mice by non-invasive SPECT imaging in vivo and validated 111 In-exendin as a quantitative biomarker for assessment of transplanted beta-cell volume.
111In-exendin accumulation in the muscle co-localizes with islet transplants. To check whether 111 In-exendin-3 signal originates from transplanted islets, C3H mice were transplanted with 800 islets in the calf muscle and were injected with 111 In-exendin-3 four weeks after transplantation, when accumulation of the radiotracer in the islets becomes reproducible 22 . Autoradiographical analysis of muscle sections showed tracer accumulation in well-localized regions of the tissue (Fig. 1A) and immunostaining for insulin confirmed that the radioactive signal originated from the islets (Fig. 1B).
111 In-exendin uptake by transplanted islets correlates linearly with BCM. 111 In-exendin-3 uptake in grafts containing various amounts of islets was detected and clearly delineated by the SPECT signal (Fig. 2A). Quantitative analysis of the SPECT signal originating from the transplant revealed differences in 111 In-exendin-3 accumulation depending on the number of initially transplanted islets: the uptake was 5.9 ± 2.4 kBq, 22.9 ± 4.8 kBq, 30.1 ± 10.1 kBq, 60.9 ± 9.8 kBq and 88.7 ± 11.5 kBq in muscles transplanted with 50, 100, 200, 400 and 800 islets, respectively (Fig. 2B). Immunohistochemical determination of graft volume was performed in all groups of mice (Fig. 2C). Plotting the SPECT data against the insulin-staining volume revealed an excellent linear correlation between 111 In-exendin uptake and transplant size (Pearson r = 0.89).
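As an arithmetic illustration, the group-mean uptake values reported above also correlate tightly with the number of transplanted islets. Note that the paper's r = 0.89 was computed per animal against the histological insulin-positive volume, not from these group means:

```python
import numpy as np

# Group-mean 111In-exendin-3 uptake (kBq) reported for grafts of 50-800 islets.
islets = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
uptake_kbq = np.array([5.9, 22.9, 30.1, 60.9, 88.7])

def pearson_r(x, y):
    """Pearson correlation coefficient, the measure used to relate tracer
    uptake to graft volume."""
    dx, dy = x - x.mean(), y - y.mean()
    return float((dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum()))

r = pearson_r(islets, uptake_kbq)
```

A correlation this close to 1 on the group means is consistent with the claim that tracer uptake scales linearly with the amount of transplanted beta-cell tissue.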
Discussion
The aim of this study was to evaluate the relation between 111 In-exendin uptake in islet transplants and the actual BCM in a muscle model of islet transplantation. Tracer uptake and beta-cell volume showed an excellent linear correlation. In addition, the reliable visualization of transplants consisting of low numbers of islets indicates the high sensitivity of this method. This high sensitivity, in combination with the excellent correlation between BCM and tracer uptake, demonstrates the potential of this method to quantify small differences in viable beta-cell volume.
Previously, imaging of liver islet transplants using 64 Cu-labelled exendin has been demonstrated in NOD/SCID mice 19 . A twelve-day follow-up of the grafts revealed significantly higher uptake of 64 Cu-labeled exendin-4 in the liver of transplanted animals compared to the control group. Here, we used skeletal muscle transplantation as a highly controlled model for histological verification of the insulin-positive area (which is challenging in the liver transplantation model, given that islets are distributed throughout a large part of the organ), a prerequisite that enabled us, for the first time, to reveal an excellent linear correlation between the actual number of beta-cells that were successfully engrafted and the uptake of the tracer in the islet grafts.
Recently, it was reported that injection of 123 I-IBZM enables the quantification of BCM in islet grafts, where tracer uptake could be linearly correlated with graft volume 23,24 . However, a far lower correlation was observed between 123 I-IBZM uptake and insulin positive graft volumes when compared to 111 In-exendin, indicating that GLP-1R could allow more accurate assessments of islet graft survival. Moreover, our data indicate that 111 In-exendin has a much higher sensitivity for detection of islet grafts when compared to 123 I-IBZM as we could easily visualize islet grafts after transplantation of 50 islets while 1000 islets were needed for in vivo visualization by IBZM. In fact, more than 50% of the islets could be lost in the first days after transplantation 25 , indicating that 111 In-exendin-SPECT was able to detect grafts containing far less than the 50 islets being initially transplanted. The superior detection sensitivity of 111 In-exendin-SPECT could be explained by the higher abundance of the GLP-1R on the surface of the beta-cells when compared to the dopamine 2 receptor 21,23 . Hence, 111 In-exendin-SPECT has the potential to detect small grafts even after post transplantation beta-cell loss. This enables the possibility to evaluate and optimize treatment to preserve the remaining islets in patients that became insulin dependent after islet transplantation, helping to preserve their positive effect on glucose homeostasis. The skeletal muscle was selected as a transplantation site to evaluate whether 111 In-exendin-3 can be used to monitor beta-cell volume after transplantation. So far, this site has been used for islet transplantation in an experimental setting in small animal models and in a few patients 20 . In the clinical setting, intra-portal islet transplantation still predominates 26 . The ability to detect islet transplants in the liver with exendin-3 has previously been demonstrated in a proof of concept study in mice 19 . 
Further studies are warranted to assess whether our finding that 111 In-exendin-3 uptake corresponds to beta-cell volume after transplantation also holds after intrahepatic islet transplantation.
In conclusion, we have demonstrated for the first time that 111 In-exendin-3 can be used to determine beta-cell volume after intramuscular islet transplantation. 111 In-exendin-3 uptake linearly correlates with the number of living beta-cells and allows the detection of islet grafts consisting of fewer than 50 islets by SPECT imaging. Our data indicate that this approach holds great potential for accurate and sensitive quantification of viable beta-cells in a transplantation setting. Clinical studies evaluating the potential of this promising radiotracer for imaging of islet grafts in humans are under preparation.
Research Design and Methods
Animals. Female C3H/HeNCrl mice (22-30 g) were purchased from Charles River (Calco, Italy). All experiments were conducted in accordance with Radboud University guidelines on the humane care and use of laboratory animals and were approved by the Animal Ethical Committee of the Radboud University, Nijmegen, The Netherlands.
Pancreatic islet isolation and transplantation. Pancreatic islets were isolated from 6-8 weeks old mice by a collagenase digestion method. Briefly, mice were euthanized and 2 ml of cold RPMI 1640 (Invitrogen, Camarillo, CA, USA) containing collagenase type V (1 mg/ml; Sigma Aldrich, St Louis, MO, USA) were infused into the pancreatic duct in situ. Perfused pancreata were collected in serum-free RPMI medium and kept on ice until enzymatic digestion at 37 °C for 12 min. Islets were purified on a discontinuous Ficoll gradient of the following densities: 1.118, 1.096 and 1.037 g/ml (Cellgro by Mediatech Inc., Manassas, VA, USA), and islets were collected between the second and the third fraction. Islets were cultured overnight in a humidified 5% CO 2 atmosphere at 37 °C in RPMI 1640 medium supplemented with L-glutamine (Sigma Aldrich, St. Louis, MO, USA), penicillin-streptomycin (10 mg/ml; Sigma Aldrich) and 10% (v/v) fetal calf serum (HyClone, Celbio, Logan, UT, USA). Islets were counted and hand-picked under a bright-field microscope, and 50, 100, 200, 400 or 800 islets were transplanted into the calf muscle (n = 5 per group), parallel to the fibula, using needles with a 0.8 mm diameter. The exact number of transplanted islets was determined by subtracting the islets remaining in the tube from the initially counted number.
Radiolabeling of exendin-3. [Lys 40 (DTPA)]-exendin-3 was purchased from Peptide Specialty Laboratories (Heidelberg, Germany). Tracer labeling with In-111 was performed as previously described 17 . Radiochemical purity was determined by ITLC, and radiolabeled exendin-3 was purified by solid-phase extraction using an HLB column as previously reported 17 .
SPECT acquisition. Mice with 50, 100, 200, 400 and 800 islets were scanned after 4 weeks. All mice (n = 4-5) were injected with approximately 15 MBq of 111 In-exendin-3 (peptide dose 0.1 µg in 200 µl PBS, 0.5% BSA) in the tail vein. SPECT scans were acquired 1 h post-injection on a U-SPECT-II/CT dedicated small-animal scanner (MILabs, Utrecht, The Netherlands) for 50 min with a high-sensitivity mouse collimator (1.0 mm pinholes). Computed tomography (CT) was performed subsequently for anatomical reference. Standards of 74 kBq, 55 kBq, 37 kBq and 18 kBq, in 50 µl volume each, were scanned under the same parameters as a reference for quantification. Images were reconstructed with a voxel size of 0.4 mm, 3 iterations and 16 subsets, using U-SPECT-II reconstruction software (MILabs, Utrecht, The Netherlands). A VOI was drawn over the islet transplant region, and the total voxel intensity registered in the islet graft was corrected by the mean of three measurements of the contralateral control muscle to subtract the background signal originating from muscle tissue. The absolute activity (in kBq) was calculated by multiplying the corrected voxel intensity value with the calibration factor determined by quantitative analysis of standards with known radioactivity, and data were normalized to the injected dose.
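The quantification chain described above — background correction against contralateral muscle, conversion to absolute activity via standards of known radioactivity, and normalization to the injected dose — can be sketched as follows (function and variable names are illustrative, not taken from the study):

```python
def graft_uptake_percent_id(graft_voxel_sum, control_muscle_sums,
                            calibration_kbq_per_count, injected_dose_kbq):
    """Estimate islet-graft tracer uptake as a percentage of injected dose.

    graft_voxel_sum: total voxel intensity in the VOI drawn over the graft
    control_muscle_sums: voxel intensities of the 3 contralateral-muscle VOIs
    calibration_kbq_per_count: factor derived from scanned activity standards
    injected_dose_kbq: administered 111In-exendin-3 activity in kBq
    """
    # Background = mean of the three contralateral-muscle measurements
    background = sum(control_muscle_sums) / len(control_muscle_sums)
    corrected = graft_voxel_sum - background
    # Convert corrected counts to absolute activity via the standards
    activity_kbq = corrected * calibration_kbq_per_count
    # Normalize to the injected dose
    return 100.0 * activity_kbq / injected_dose_kbq
```

With, say, a graft VOI sum of 12000 counts, muscle backgrounds of 2000/1900/2100 counts, a calibration factor of 0.005 kBq per count, and 15 MBq (15000 kBq) injected, this yields 50 kBq in the graft, i.e. about 0.33 %ID.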
Morphometric analysis of the transplant. Immediately after SPECT acquisitions, mice were euthanized; muscles were fixed in 4% paraformaldehyde, embedded in paraffin, and sectioned into 4 µm slices for autoradiography analysis or for determination of insulin volume by immunohistochemistry. For autoradiography analysis, muscle sections were exposed to an imaging plate (Fuji Film BAS-SE 2025, Raytest, Straubenhardt, Germany) for 7 days, images were visualized with a radioluminography laser imager (Fuji Film BAS 1800 II system, Raytest, Straubenhardt, Germany), and sections were finally stained with hematoxylin-eosin (HE) to confirm the presence of islets. For determination of insulin volume, insulin staining was performed on muscle sections. Antigen retrieval was done using 10 mM sodium citrate buffer, pH 6.0, for 10 min (Thermo Scientific PT module, Lab Vision, USA). Blocking of endogenous peroxidase activity was performed by incubation with 0.6% H 2 O 2 in 40% methanol/60% PBS for 30 min at RT in the dark. An additional blocking step was done with 5% swine serum in PBS for 30 min at RT. Primary anti-insulin antibody (cat. sc 9168, Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) was applied at a dilution of 1:50 (in PBS containing 1% BSA w/v). Subsequently, sections were washed with PBS and incubated with secondary horseradish peroxidase-conjugated swine-anti-rabbit IgG (1:50) (cat. P0217, Dakopatts, Copenhagen, Denmark) in PBS containing 1% BSA w/v for 30 min at RT. The staining was visualized with diaminobenzidine (PowerVision TM DAB substrate system, Immunologic, Duiven, The Netherlands), and nuclei were counterstained with hematoxylin. To determine the volume of the transplant, sections were scanned with a Pannoramic250 Flash II scanner (3D Histech, Budapest, Hungary), and the beta-cell surface was manually drawn around the insulin-positive region using Photoshop CS6.
Finally, the volume was determined by multiplying the insulin-positive surface per section by the inter-section distance of 40 µm.
Statistical analysis. Statistical analyses were performed using GraphPad Prism 5 (San Diego, CA, USA). Results are presented as mean ± SEM. Correlations were assessed using a two-tailed Pearson's correlation coefficient. The level of significance was set at P < 0.05.
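The volume estimate described above is a simple Cavalieri-type sum: insulin-positive area per analyzed section multiplied by the 40 µm inter-section distance. A minimal sketch (function name and units are illustrative):

```python
SECTION_SPACING_UM = 40.0  # inter-section distance used in the study

def graft_volume_mm3(insulin_areas_mm2):
    """Cavalieri-style volume estimate: sum of insulin-positive areas
    (mm^2, one value per analyzed section) times the section spacing."""
    spacing_mm = SECTION_SPACING_UM / 1000.0  # 40 um -> 0.04 mm
    return sum(insulin_areas_mm2) * spacing_mm
```

For example, three analyzed sections with insulin-positive areas of 1.0, 2.0 and 2.0 mm² would give an estimated graft volume of 0.2 mm³.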
Pressurized intraperitoneal aerosol chemotherapy (PIPAC) in combination with standard of care chemotherapy in primarily untreated chemo-naïve upper GI-adenocarcinomas with peritoneal seeding – a phase II/III trial of the AIO/CAOGI/ACO
Abstract
Background: Peritoneal metastasis is a common and dismal evolution of several gastrointestinal (GI) tumors, including gastric, colorectal, hepatobiliary, pancreatic, and other cancers. The therapy of peritoneal metastasis is largely palliative, with the aim of prolonging life and preserving its quality. In the meantime, a significant pharmacological advantage of intraperitoneal chemotherapy was documented in the preclinical model, and numerous clinical studies have delivered promising clinical results.
Methods: This is a prospective, open, randomized multicenter phase III clinical study with two arms that aims to evaluate the effects of pressurized intraperitoneal aerosol chemotherapy (PIPAC) combined with systemic chemotherapy vs. intravenous systemic chemotherapy alone on patients with metastatic upper GI tumors with peritoneal seeding. Upper GI-adenocarcinomas originating from the biliary tract, pancreas, stomach, or esophago-gastric junction are eligible. Patients in the study are treated with standard of care systemic palliative chemotherapy (mFOLFOX6) vs. PIPAC with intravenous (i.v.) chemotherapy (mFOLFOX6). Patients in first line with first diagnosed peritoneal seeding are eligible. The primary outcome is progression free survival (PFS).
Conclusions: The PIPAC procedure is explicitly a palliative method, but it delivers cytotoxic therapy, as in the hyperthermic intraperitoneal chemotherapy (HIPEC) procedure, directly to the tumor via a minimally invasive technique, without the need for consideration of the peritoneal-plasma barrier. The PIPAC technique is minimally invasive and very gentle, and the complete procedure takes only about 45 min; it is therefore optimal in a clearly palliative situation where cure is not the goal. It is also ideal for use in a first-line situation, where the deepest response should be achieved.
The symbiosis of systemic therapy and potentially effective surgery has to be well planned, without deterioration of the patient due to an aggressive surgical approach such as cytoreductive surgery (CRS) + HIPEC.
Trial registration: EudraCT: 2018-001035-40.
Introduction
Peritoneal metastasis is a common and dismal evolution of several GI tumors, including gastric, colorectal, hepatobiliary, pancreatic, and other cancers [1]. The therapy of peritoneal metastasis is largely palliative, with the aim of prolonging life and preserving its quality. Most patients receive platinum-based combination systemic chemotherapy. Despite this guideline-recommended therapy, they die within months after diagnosis of peritoneal dissemination [2]. Almost 70 years ago, intraperitoneal chemotherapy was discovered as an alternative therapeutic option for peritoneal metastasis [3]. In the meantime, a significant pharmacological advantage of intraperitoneal chemotherapy was documented in the preclinical model, and numerous clinical studies have delivered promising clinical results [4]. In the last 30 years, cytoreductive surgery (CRS) combined with hyperthermic intraperitoneal chemotherapy (HIPEC) has been increasingly used. On the basis of long-term survivors, some authors see a curative role for this combined therapy [5]. However, the level of evidence for CRS and HIPEC is still relatively low, and the complication rate remains significant, so this therapy is not accepted by all oncologists [6].
In spite of the above controversies, there is broad agreement that CRS and HIPEC should only be offered to highly selected patients, taking into consideration the tumor type, the extent of disease, and the general condition of the patient [7]. In particular, diffuse invasion of the small bowel represents a contraindication for CRS and HIPEC because of the dilemma between complete cytoreduction and extensive resection of the small bowel, which is not compatible with life [8]. Thus, there is an urgent need for novel therapies for the majority of peritoneal metastasis patients, especially those not eligible for CRS and HIPEC.
PIPAC is an innovative approach delivering chemotherapy into the peritoneal cavity with little collateral damage. It is easy to handle, and several applications via laparoscopy (minor surgery) are possible without the need for major surgical manipulation [9][10][11][12][13][14][15][16]. Intraoperatively, at the time of diagnostic laparoscopy to confirm peritoneal seeding, patients will be randomized after written consent for participation has been given preoperatively.
Subjects and methods
The scope of the trial is to evaluate the efficacy as well as the safety and tolerability of PIPAC combined with systemic therapy vs. the same i.v. chemotherapy alone. The primary endpoint will be progression free survival (PFS) from randomization (the first PIPAC application or diagnostic laparoscopy, respectively) until disease progression or death of any cause. Secondary endpoints will be overall survival (OS), site of recurrence, morbidity, and quality of life (QoL).
Patients with peritoneal seeding of an upper GI adenocarcinoma (for the definition, see above) can be included in the trial if they fulfill the inclusion criteria after a central review.
All enrolled patients will receive a standard of care chemotherapy (mFOLFOX6) ± PIPAC.
Randomization: At the time of diagnostic laparoscopy to verify clinically or radiologically suspected peritoneal seeding, if the patient has given written informed consent and meets the inclusion criteria, the patient will be randomized using an interactive Web response system. Randomization will be balanced and stratified according to the stratification criteria defined in the protocol.
Pre-therapeutic work-up: Patients eligible for the study (clinical and radiological evidence of peritoneal seeding) will be seen in clinics to check the inclusion and exclusion criteria. The patient will be required to give written informed consent to participate in this clinical study before any nonroutine screening tests or evaluations are conducted and before the explorative laparoscopy. The following assessments should be performed: performance status; thoraco-abdomino-pelvic CT scan; PET scan (optional); laboratory exams: serum CEA, CA19.9, and CA72.4 (optional marker according to tumor origin); hematology and serum chemistry; quality of life assessment (EORTC QLQ-C30). Staging video-laparoscopy of the abdominal cavity will be performed after written informed consent. Patients with no macroscopic peritoneal carcinomatosis visible during the laparoscopic examination, or patients in whom laparoscopic access failed during surgery, will be excluded from the study and treated as screening failures.
Patient fulfilling the inclusion criteria, with written informed consent and visible proven peritoneal seeding according to the laparoscopy will be treated according to randomization result as Arm A or Arm B.
Arm A (mFOLFOX6): Patients with clinical and radiological signs of peritoneal seeding undergo a laparoscopic examination; if peritoneal seeding is confirmed, patients will be randomized. After randomization to Arm A, the laparoscopy will be concluded without PIPAC, because patients in Arm A receive intravenous therapy only. Intravenous mFOLFOX6, the standard systemic chemotherapy for upper GI-cancers (SOC), will be administered until completion of 12 mFOLFOX6 doses.
Arm B (mFOLFOX6 + PIPAC): Patients with clinical and radiological signs of peritoneal seeding undergo a laparoscopic examination; if peritoneal seeding is confirmed, patients will be randomized. After randomization to Arm B, patients will receive a combination of i.v. chemotherapy with mFOLFOX6 plus PIPAC. Systemic i.v. chemotherapy (mFOLFOX6) will be administered on the ward, independent of the PIPAC. Three days after PIPAC, patients are able to leave the hospital if there are no signs of medical or surgical complications. Patients will be evaluated with daily clinical examination. Laboratory exams will be performed in order to assess hematological, renal, and hepatic function. Locoregional and systemic toxicity will be evaluated according to the Common Terminology Criteria for Adverse Events (CTC-AE v4.0) from the National Cancer Institute. The PIPAC procedure is repeated every 6 weeks for three times, together with nine mFOLFOX6 doses.
In both of the arms, tumor assessments (CT or MRI) are performed prior (max. 28 days) to randomization and then every 8 weeks thereafter until progression/relapse, death or end of follow-up. During treatment, clinical visits (blood cell counts, detection of toxicity) occur prior to every treatment dose. Safety will be monitored continuously by careful monitoring of all adverse events (AEs) and serious adverse events (SAEs) reported.
QoL will be measured via the EORTC QLQ-C30 v3.0 questionnaire in both arms, after written informed consent, before randomization and then every 8 weeks at the time of radiological tumor assessments.
Endpoints: Primary outcome: PFS will be measured from randomization (the first PIPAC application or diagnostic laparoscopy, respectively) until disease progression or death of any cause. Secondary outcomes: efficacy (1-year PFS, 1-year and 2-year OS); pathological response rates and localization of recurrence; morbidity; and QoL. OS and PFS according to different subgroups, such as tumor entity (to be defined in the study analysis plan). Disease control rate (DCR), defined as the percentage of patients who have achieved complete response, partial response, or stable disease in response to a therapeutic intervention.
Main inclusion criteria: Subjects with histologically confirmed unresectable locally advanced or metastatic upper GI-adenocarcinoma (originating from the biliary tract, pancreas, stomach, or esophago-gastric junction) with peritoneal seeding. No prior chemotherapy in the palliative indication. Proven peritoneal carcinomatosis by CT/MRI and laparoscopy. Medically operable and fit for laparoscopy, ECOG ≤ 1.
Main exclusion criteria: Concurrent anticancer treatment (for example, cytoreductive therapy, radiotherapy [with the exception of palliative bone-directed radiotherapy], immune therapy, or cytokine therapy, except for erythropoietin) including irradiation. Prior chemotherapy for unresectable locally advanced or metastatic adenocarcinoma of the stomach or gastroesophageal junction (GEJ), biliary tract, or pancreas.
Measures of outcomes and assessments
Treatments
PIPAC procedure: Briefly, after insufflation of a 12 mmHg CO 2 pneumoperitoneum with open access or with a Veress needle, two balloon safety trocars (5 and 12 mm, Applied Medical, Düsseldorf, Germany) are inserted into the abdominal wall. The extent of peritoneal carcinomatosis (PCI score) is determined based on lesion size and distribution [17]. Peritoneal biopsies are taken in all four quadrants for histological examination, and a local partial peritonectomy of several square centimeters is performed routinely to improve the accuracy of anatomopathology.
A patented nebulizer device (Capnopen ® ) is then inserted via a 12 mm trocar into the abdominal cavity. The nebulizer unit is then connected with a high-pressure line to a high-pressure injector. The liquid chemotherapeutic drugs (Cisplatin 7.5 mg/m2 body surface in a total of 150 mL NaCl 0.9%; Doxorubicin 1.5 mg/m2 body surface in a total of 50 mL NaCl 0.9%) are then injected with a flow rate of 30 mL/min into the constant capnoperitoneum of 12 mmHg. After an aerosol exposure phase of 30 min, the aerosol is evacuated via a closed aerosol waste system. Prior to the application of chemotherapy, peritoneal biopsies are routinely taken from all four abdominal quadrants (if possible), both for conventional histological analysis and for gene expression testing. The laboratory team will be blinded to the clinical outcome. If present, ascites will be removed at the same time and the volume documented. PIPAC and PC sampling will be repeated every 6 weeks for three times or stopped earlier in cases of progression, death, or unacceptable toxicity [10][11][12][13][14] (Figure 1).
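The doses and injection times above follow directly from the body surface area and the fixed 30 mL/min flow rate; for instance, the 150 mL cisplatin solution takes 5 min to aerosolize before the 30 min exposure phase. A hedged sketch of this arithmetic (helper name and return structure are illustrative, not part of the protocol; actual dosing must follow the trial protocol):

```python
def pipac_doses_and_times(bsa_m2):
    """Per-protocol PIPAC doses (cisplatin 7.5 mg/m^2 in 150 mL NaCl 0.9%,
    doxorubicin 1.5 mg/m^2 in 50 mL) and injection times at 30 mL/min.
    Illustrative arithmetic only, not clinical guidance."""
    flow_ml_min = 30.0
    cisplatin = {"dose_mg": 7.5 * bsa_m2, "volume_ml": 150.0}
    doxorubicin = {"dose_mg": 1.5 * bsa_m2, "volume_ml": 50.0}
    for drug in (cisplatin, doxorubicin):
        # Time to aerosolize the full volume at the fixed injector flow rate
        drug["injection_min"] = drug["volume_ml"] / flow_ml_min
    return cisplatin, doxorubicin
```

For a hypothetical patient with a BSA of 2.0 m², this gives 15 mg cisplatin injected over 5 min and 3 mg doxorubicin over about 1.7 min.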
Sample size calculation:
The present trial is designed as a randomized phase III study which aims at estimating the therapeutic efficacy in terms of OS and PFS of the PIPAC-therapy including systemic therapy in relation to the standard systemic therapy. The assumptions derived from the historical data on patients in the described entity with a pronounced peritoneal seeding are verified by a randomized reference group without PIPAC.
The phase II part is exploratory. The primary endpoint of the phase II part of the trial is PFS as calculated by the hazard ratio for survival.
The assumptions are as follows: Median PFS with mFOLFOX6 is 4 months [18][19][20] in this population. The expected median PFS for the PIPAC arm is 5.5 months. The recruitment duration is 2 years and the total duration of the phase II part of the study is 30 months (i.e., 24 months enrolment time plus 6 months follow-up after last patient in). Based on these assumptions, a total of n = 206 patients (phase II) is required to detect the improvement in PFS mentioned above with a power of 80% and a significance level of 0.2 (two-sided) using a log-rank test. A 5% drop-out rate is included in the sample size. The software used for sample size calculation is SAS v9.3. Other secondary endpoints such as 5-year PFS and 5-year OS rates will be evaluated based on time-to-event outcomes using Kaplan-Meier (KM) rates at 5 years over all patients. A sample size calculation for the phase III part will be performed based on the most current data available [21][22][23][24][25][26][27][28].
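The event count implied by these design assumptions can be cross-checked approximately with Schoenfeld's formula for the log-rank test. The trial's own calculation was done in SAS v9.3; this Python sketch is only an illustrative check, and the function name is ours. Assuming exponential PFS, the hazard ratio from medians of 4 vs. 5.5 months is HR = 4/5.5 ≈ 0.73:

```python
from math import log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.2, power=0.8, ratio=1.0):
    """Approximate number of events needed for a two-sided log-rank test
    to detect hazard ratio `hr` (Schoenfeld's formula).
    ratio: allocation ratio between the two arms (1.0 means 1:1)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)
    p = ratio / (1 + ratio)                        # fraction allocated to one arm
    return (z_alpha + z_beta) ** 2 / (p * (1 - p) * log(hr) ** 2)

hr = 4.0 / 5.5                     # median PFS 4 (control) vs. 5.5 months (PIPAC)
events = schoenfeld_events(hr)     # roughly 178 required events
```

With around 178 required events, plus patients still event-free at analysis and the stated 5% drop-out allowance, a total in the vicinity of the stated n = 206 is plausible, although the exact figure depends on the accrual and follow-up model used.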
Study duration: Recruitment period will last 2 years (approximately 100 patients per year). Total study duration is 2.5-3 years (2 years recruitment plus 6 months follow-up after last patient in). The study can be analyzed earlier or later depending on the number of events observed.
Ethical considerations, information giving, and written informed consent: The authors state that they have obtained appropriate Institutional Review Board approval and have followed the principles outlined in the Declaration of Helsinki for all human experimental investigations. In addition, informed consent has been obtained from the participants involved.
Discussion
There is an urgent need for novel therapies for most peritoneal metastasis patients not eligible for CRS and HIPEC therapy. CRS and HIPEC is a possible option for some colorectal cancers, even though the level of evidence even in CRC is low; however, CRS combined with HIPEC is not established in upper GI cancer types, because of the more aggressive nature of the disease in most kinds of upper GI-cancers. Pre-existing peritoneal seeding in upper GI-cancers is a clearly palliative situation, with an OS of less than 1 year in most cases [22]. There is an urgent need for improvement of PFS and OS in these patients without compromising QoL.
The PIPAC procedure is explicitly a palliative method, but it delivers cytotoxic therapy, as in the HIPEC procedure, directly to the tumor via a minimally invasive technique, without the need for consideration of the peritoneal-plasma barrier.
The peritoneal-plasma barrier is a pharmacologic entity of importance for treatment planning in patients with malignant tumors confined to the abdominal cavity. This physiologic barrier limits the resorption of drugs from the peritoneal cavity into the blood. The sequestration of chemotherapeutic agents improves their locoregional cytotoxicity and reduces their systemic toxicity.
The PIPAC technique is minimally invasive and very gentle, and the complete procedure takes only about 45 min; it is therefore optimal in a clearly palliative situation where cure is not the goal. It is also ideal for use in a first-line situation, where the deepest response should be achieved. The symbiosis of systemic therapy and potentially effective surgery must be well planned, without deterioration of the patient due to an aggressive surgical approach such as CRS + HIPEC.
The centers participating in the current trial are pioneers in exploring the potential fields of application of PIPAC, including defining indications and contraindications, chances and risks, as well as successes and failures of this therapy.
They have observed repeatedly that some patients who were primarily not eligible for CRS and HIPEC, most often because of small bowel involvement, could be treated after repeated PIPAC application with CRS and HIPEC.
Struller et al. showed that PIPAC with cisplatin and doxorubicin in patients with gastric cancer is well tolerated and active, and concluded that randomized controlled trials should now be designed [29]. According to Khomyakov et al., a combination of systemic chemotherapy with XELOX and PIPAC with cisplatin and doxorubicin can induce objective tumor regression and is associated with promising survival [30].
There is an unmet need among upper GI cancer patients with predominant peritoneal carcinomatosis for improved therapy using the most direct route of application. According to the literature, there are only publications of individual cases and small cohorts of patients describing a benefit of PIPAC, but to the knowledge of the authors there are no randomized phase III data comparing PIPAC combined with systemic therapy versus the SOC of systemic therapy alone in this cancer population.
The general aim of this trial is to improve progression-free as well as overall survival of these patients receiving this sequential therapy in association with systemic standard of care palliative chemotherapy.
Acknowledgments: The authors thank the AIO: Arbeitsgemeinschaft Internistische Onkologie (Working Group of Medical Oncologists), CAOGI: Chirurgische Arbeitsgemeinschaft für den Oberen Gastrointestinaltrakt (Surgical Working Group of the Upper Gastrointestinal Tract), and ACO: Assoziation Chirurgische Onkologie (Working Group of Surgical Oncologists) for study support. Author contributions: TG is the study coordinator and is responsible for the present paper. TG was involved in drafting the manuscript; TG, AB, and PP were involved in the study conception and design, assisted in writing the manuscript, and have given final approval of the version to be published. All authors read and approved the final manuscript. All authors of the manuscript made substantial contributions to the acquisition of data and were involved in revising the manuscript critically for important intellectual content. Each of the authors has given final approval to the version to be published, has participated sufficiently in the work to take public responsibility for appropriate portions of the content, and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.
Research funding: The study is currently under evaluation for funding by the DKH - Deutsche Krebshilfe (German Cancer Aid). Employment or leadership: None declared. Honorarium: None declared.
Role of oligosaccharides in the structure and function of respiratory syncytial virus glycoproteins
The contribution of oligosaccharides to the structural and functional make-up of respiratory syncytial (RS) virus G and F proteins was investigated by observing the effects of various oligosaccharide-specific enzymes on their molecular size as well as on virus infectivity. The N-linked oligosaccharides of the F protein were completely removed by endoglycosidase F and N-glycanase. Addition of oligosaccharides to the F protein during synthesis was completely inhibited by tunicamycin (TM), whereas glycosylation of the G protein was partially resistant to TM, resulting in an 80-kDa form designated GTM. The G protein was estimated to contain approximately 3% N-linked and 55% O-linked carbohydrates, based on the migration of G and GTM in polyacrylamide gels. Furthermore, treatment of detergent-extracted G protein with endoglycosidase F and endo-α-N-acetylgalactosaminidase, enzymes that specifically cleave N-linked and O-linked oligosaccharides, respectively, generated a variety of partially unglycosylated species ranging in molecular weight from approximately 80 to 40 kDa. Virus infectivity was sensitive to limited removal of N-linked or O-linked oligosaccharides by endoglycosidases under conditions which did not greatly alter the molecular weight of the G protein. Thus, G and F protein oligosaccharides readily accessible to enzymatic removal are presumed to play an important role in the infectious process.
INTRODUCTION
Respiratory syncytial (RS) virions and infected cells contain seven major proteins (Lambert and Pons, 1983; Pringle et al., 1981; Levine, 1977; Wunner and Pringle, 1976) and three small minor nonstructural polypeptides, totaling 10 unique viral proteins (Huang et al., 1985). The two glycoproteins, designated G and F, are responsible for cell attachment (Levine et al., 1987) and cell fusion, respectively (Walsh and Hruska, 1983). The unglycosylated G (36 kDa) and F (59 kDa) polypeptides synthesized during in vitro translation have been identified (Collins et al., 1984). The in vivo synthesized G (84-90 kDa) and F (68 kDa) glycoproteins have been identified by two-dimensional nonreducing and reducing SDS-PAGE of purified virions and immunoprecipitates of infected cells (Lambert and Pons, 1983). RS virus F protein consists of two smaller disulfide-bonded subunits, F1 (48 kDa) and F2 (19-21 kDa) (Fernie and Gerin, 1982; Gruber and Levine, 1983; Lambert and Pons, 1983). Monoclonal antibody to F inhibits cell fusion and partially neutralizes virus infectivity (Walsh and Hruska, 1983). Polyclonal monospecific antibody raised to G protein neutralizes RS virus (Walsh et al., 1984) and inhibits virus attachment to cells (Levine et al., 1987). Thus, F and G proteins appear to be functionally similar to the F (fusion) and HN or H (attachment) proteins of other paramyxoviruses, except that RS virus has no hemagglutinin or neuraminidase activity (Richman et al., 1971).
Tunicamycin (TM) inhibits lipid-dolichol-mediated N-linked glycosylation of asparagine residues in nascent proteins (Takatsuki et al., 1971, 1975). The carbohydrate moieties (either "high mannose" or "complex") of viral glycoproteins are most commonly N-linked to asparagine residues via the dolichol phosphate pathway and are thus sensitive to the inhibitory effects of TM (Leavitt et al., 1977). TM completely inhibits glycosylation of the HN and F proteins of a number of paramyxoviruses and has allowed identification of unglycosylated forms of viral glycoproteins (Nakamura et al., 1982). However, it has been shown that several animal viruses contain tunicamycin-resistant O-linked carbohydrates on their envelope glycoproteins (Holmes et al., 1981; Johnson and Spear, 1983; Niemann et al., 1982, 1984; Olofsson et al., 1981; Shida and Dales, 1981). O-linked oligosaccharides are less structurally complex than N-linked oligosaccharides and are found linked to serine, threonine, hydroxylysine, or hydroxyproline residues in proteins (Sharon and Lis, 1981). We have found that glycosylation of RS virus G protein was partially resistant to tunicamycin treatment, resulting in an 80-kDa form of this protein (Lambert and Pons, 1984). The G protein has been shown to contain O-linked oligosaccharides, and a high number of potential O-linked glycosylation sites are present in its sequence (Gruber and Levine, 1985; Wertz et al., 1985).
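The O-linked carbohydrate estimate quoted in the abstract is consistent with simple arithmetic on the apparent gel masses: the TM-resistant GTM form (~80 kDa, retaining O-linked sugars) versus the ~36 kDa unglycosylated backbone. A back-of-envelope check (the gel masses are approximations, so this is only a rough estimate, and the function name is ours):

```python
def carbohydrate_fraction(glycosylated_kda, backbone_kda):
    """Fraction of apparent molecular mass attributable to carbohydrate,
    inferred from SDS-PAGE mobilities (a rough estimate only)."""
    return (glycosylated_kda - backbone_kda) / glycosylated_kda

# GTM (~80 kDa, O-linked sugars retained under TM) vs. unglycosylated G (~36 kDa)
o_linked = carbohydrate_fraction(80.0, 36.0)
```

This gives 0.55, consistent with the ~55% O-linked carbohydrate figure reported for the G protein.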
Several laboratories have investigated the effects of both tunicamycin and monensin on synthesis and processing of RS virus glycoproteins (Lambert and Pons, 1983; Gruber and Levine, 1985; Wertz et al., 1985; Satake et al., 1985; Fernie et al., 1985). Monensin, an ionophore which inhibits membrane flow through the Golgi apparatus, and thus nonspecifically inhibits O-linked glycosylation (Griffiths et al., 1983), did not inhibit transport of either the G or the F glycoprotein of RS virus to the cell surface (Satake et al., 1985). Fernie et al. (1985) demonstrated that several N-linked glycosylated precursor species (29 kDa to a predominant 45 kDa) of the G protein are processed into mature G protein (VP84). No intermediate forms of G protein were observed between 45 and 84 kDa, leading these investigators to suggest that the G protein might be a dimer of the 45-kDa intermediate form.
Because the immune response to the G and F proteins appears to be important in protection against and recovery from RS virus infection, the structure and function of these proteins must be examined further so that their roles in the pathology and immunology of this virus can be better understood.
In this report, the contribution of N-linked and O-linked oligosaccharides to the structural and functional make-up of the G and F proteins was investigated, using endoglycosidases specific for these carbohydrate side chains. These experiments allowed determination of the role these oligosaccharides play both in the structural integrity of G and F proteins, as determined by SDS-polyacrylamide gel electrophoresis (SDS-PAGE), and in virus infectivity.
Cells and virus
HEp2 and CV-1 cells were used in this study. Cells were grown in minimum essential medium (MEM) supplemented with heat-inactivated fetal bovine serum. The Long strain of RS virus was prepared as previously described (Lambert and Pons, 1983). Virus infectivity assays were carried out in HEp2 cell monolayers using the methylcellulose overlay method as previously described (Lambert et al., 1980).

SDS-PAGE purification of RS virus G, F, and N proteins

RS virions, grown in CV-1 cells labeled with either [3H]glucosamine ([3H]Glu) or 3H-amino acids (3H-aa) at 50 µCi/ml, were purified as previously described (Lambert and Pons, 1983). Virus pellets were resuspended in SDS-PAGE sample buffer (Laemmli, 1970) without 2-mercaptoethanol and run on preparative 10% polyacrylamide gels (Laemmli, 1970). Recovery of proteins from gels was done essentially as described by Shida and Dales (1981). Briefly, 1-cm-wide longitudinal strips removed from gels were cut into 2-mm sections and counted in a liquid scintillation counter to locate labeled protein bands. Protein bands cut from the remainder of the gel were eluted into 0.1% SDS in dH2O for 18 hr at room temperature and precipitated with 6 vol of cold 100% acetone at -20° overnight. Purified proteins were pelleted at 20,000 g, air dried, and dissolved in dH2O containing 0.5% NP-40.
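The band-location step described above (counting 2-mm gel slices and finding runs of elevated radioactivity) can be sketched as a simple peak-grouping routine; the `locate_bands` helper and the cpm profile below are hypothetical, for illustration only.

```python
def locate_bands(counts, background=None, threshold=3.0):
    """Return the peak slice index of each labeled band in a gel strip.

    counts: cpm per consecutive 2-mm slice; background defaults to the
    median count; a slice is 'hot' if it exceeds threshold * background,
    and adjacent hot slices are grouped into one band."""
    if background is None:
        background = sorted(counts)[len(counts) // 2]  # median as baseline
    hot = [i for i, c in enumerate(counts) if c > threshold * background]
    bands = []
    for i in hot:
        if bands and i == bands[-1][-1] + 1:
            bands[-1].append(i)        # extend the current band
        else:
            bands.append([i])          # start a new band
    return [max(b, key=lambda i: counts[i]) for b in bands]

# Hypothetical cpm profile containing two labeled bands
profile = [40, 38, 45, 900, 1500, 800, 42, 39, 41, 600, 700, 44, 40]
print(locate_bands(profile))  # → [4, 10]
```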
As designated in the text, either O-Glycanase™ (33 mU/µl, Genzyme) or endo-α-N-acetylgalactosaminidase (1.9 U/ml, Boehringer-Mannheim) was used to digest O-linked oligosaccharides (Umemoto et al., 1977). Concentrations and combinations of enzymes are also indicated in the figure legends.
Western blot analysis of RS glycoproteins

SDS-PAGE of enzyme-treated proteins and Coomassie blue prestained protein molecular weight markers (Diversified Biotech, Newton Centre, MA, or Bethesda Research Laboratories, BRL, Bethesda, MD) was carried out in 10% gels (Laemmli, 1970). Western blots were performed essentially as described by Towbin et al. (1979) except that nonfat dried milk (5% solution in PBS) was used to block binding sites on nitrocellulose membranes (1 hr at room temperature) after electroblotting.
Blots were probed with monoclonal antibody (MAb) directed against G or F proteins. MAb L7, L9 (anti-RS virus G protein), and L4 (anti-RS virus F protein) were kindly provided by Dr. Edward Walsh (Rochester University Medical School, Rochester, NY). Antibodies were used at a dilution of 1:2000 in 5% dried milk dissolved in PBS. MAb-viral protein bands were detected using a Western blot kit containing horseradish peroxidase-labeled goat anti-mouse IgG (Bio-Rad).
Endoglycosidase digestion of gel-purified virion glycoproteins
To accurately determine the effects of enzymes specific for N-linked or O-linked oligosaccharides on the structure of radiolabeled RS virus G and F proteins, it was necessary first to gel purify virion proteins. Analysis of gel-purified proteins demonstrated that they were not contaminated with other proteins and that they comigrated with virion proteins (Fig. 1). Data presented in Fig. 2 show that purified G protein was relatively unaffected by neuraminidase treatment (3H-aa G and [3H]Glu G, lanes 2). Both endoglycosidase F and N-glycanase treatments (lanes 3 and 4, respectively), which remove N-linked oligosaccharides, decreased the size of G protein from 84 to approximately 80 kDa (3H-aa and [3H]Glu). After removal of N-linked oligosaccharides, the 3H-amino acid-labeled F2 polypeptide (3H-aa F, lanes 3 and 4) migrated more rapidly, whereas the [3H]glucosamine-labeled F2 polypeptide was not detectable ([3H]Glu F, lanes 3 and 4). The migration of the F1 subunit was unaffected by these enzyme treatments, suggesting that the F2 subunit contained the bulk of the oligosaccharides present on the fusion protein.

Fig. 1 legend: [3H]Glucosamine- and 3H-amino acid-labeled proteins of purified RS virions were separated by electrophoresis in preparative 10% polyacrylamide-SDS-containing gels. G, F, or N proteins were isolated from gel fractions as described under Materials and Methods.
To test purity and condition of extracted proteins, aliquots of isolated proteins were electrophoresed in 10% gels. Because of the difficulties in identifying protein components of labeled G protein digests (Fig. 2), Western blot analysis of purified virion proteins was used (1) to further define the contribution made by oligosaccharides to the structure of G and F proteins and (2) to compare the susceptibility of TM-treated virion glycoproteins to endoglycosidase treatments. Nonreducing 10% polyacrylamide gels were used so that unreduced G and F proteins could be compared by Western blots probed with monoclonal antibody (MAb) directed against the G (Fig. 3A) or the F protein (Fig. 3B).
Results presented in Fig. 3A show that in the untreated sample (lane 1), two protein bands of approximately 175 kDa (Gag) and 90 kDa (G) reacted with MAb against G protein, suggesting that dimeric aggregates of the 90-kDa G protein were resolved in these nonreducing gels. Dimers of G or GTM proteins were not observed in reducing gels. TM-treated virus contained two sets of doublet bands of approximately 175/155 and 86/80 kDa which reacted with anti-G MAb (Fig. 3A). O-Glycanase hydrolyzed normal G protein (lane 6) to a mixture of smaller species of partially unglycosylated proteins. Digestion of GTM with O-glycanase resulted in smaller species of partially unglycosylated G proteins migrating between 43 and 60 kDa.
In Fig. 3B, identical blots of nonreducing gels probed with MAb directed against the RS virus fusion (F) protein showed that Endo F (lane 3) digested the bulk of the glycosylated form of F1,2 (68 kDa) to its unglycosylated form, which migrated at approximately 59 kDa, similar to TM-treated F protein (FTM). F protein treated with Endo F (-PNGase F) (lane 4) was not affected. Migration of FTM in these gels was not affected by any of the endoglycosidase treatments.

Fig. 3 legend: Ten-microliter aliquots of purified virions, grown in untreated (-TM, lanes 1-7) or TM-treated (TM, 1 µg/ml; lanes 8-14) HEp2 cells, were incubated for 24 hr at 37° in PBS (lanes 1 and 8). Virions were treated with various enzymes in PBS: lanes 2 and 9 with Endo F (NEN, 8 U/ml); lanes 3 and 10 with Endo F (Boehringer-Mannheim, 60 U/ml); lanes 4 and 11 with Endo F (-PNGase) (Boehringer-Mannheim, 60 U/ml); lanes 5 and 12 with N-glycanase (0.28 U/ml); lanes 6 and 13 with O-glycanase (1.15 U/ml); and lanes 7 and 14 with neuraminidase (Calbiochem, 0.1 U/ml). After enzyme digestions, virions were disrupted in 2X SDS-PAGE sample buffer without 2-mercaptoethanol and electrophoresed in 10% polyacrylamide gels at 30 mA/gel. Proteins were then blotted onto nitrocellulose for 17 hr at 20 V. Blots were blocked with 5% dried milk in PBS for 1 hr and probed with MAb against either the G protein (A) or the F protein (B). Monoclonal antibodies were detected with peroxidase-labeled rabbit anti-mouse IgG.
O-Glycanase treatment reduced the apparent molecular weight of F from 68 to approximately 65 kDa (Fig. 3B, lane 6), indicating that a small amount of some contaminating enzyme was present in the O-glycanase preparation used, since this was not seen in other experiments using different preparations of O-glycanase. Dimeric aggregates of F1,2 (Fag) were resolved in these gels in untreated samples (Fig. 3B, lanes 1 and 2) but not in enzyme-treated samples (lanes 3-7) or in FTM (lanes 8-14).
Analysis of the oligosaccharide structure of detergent-extracted G and GTM proteins

Detergent-extracted glycoproteins were used to determine whether complete removal of oligosaccharides on the G protein could be accomplished. RS virus glycoproteins extracted from purified virions with 20 mM octylglucoside were adjusted to 1% NP-40 and 0.1% SDS and digested with endoglycosidase F and with endo-α-N-acetylgalactosaminidase (O-glycan peptide hydrolase, Boehringer-Mannheim Biochemicals) in the presence of neuraminidase to facilitate O-linked oligosaccharide removal (Fig. 4). Reducing gels were used to decrease the potential for aggregation and to more completely denature unglycosylated G protein species for optimal separation.
Western blots, probed with monoclonal antibody L9 specific for RS virus G protein, showed that neuraminidase digestion alone (Fig. 4, lane 1) generated a more diffuse (82-90 kDa) G protein band compared to untreated G protein (lane 5), which migrated at approximately 85 kDa. Removal of N-linked oligosaccharides with endoglycosidase F alone (lane 2) generated at least five bands migrating in the range of 75-85 kDa. Endo-α-N-acetylgalactosaminidase treatment, which specifically removes O-linked oligosaccharides, generated at least seven bands in the molecular weight range of 55-85 kDa (lane 3). Treatment with a mixture of Endo F and endo-α-N-acetylgalactosaminidase generated at least eight detectable bands having molecular weights of 40, 44, 48, 51, 54, 55, 60, and 81 kDa (lane 4), with the 55-kDa band being the most abundant. Faint bands representing partially unglycosylated forms of G protein were also detected between 60 and 80 kDa (lane 4). Similar treatment of GTM protein generated five glycoprotein bands having molecular weights of approximately 40, 45, 48, 50, and 55 kDa (lanes 8 and 9). Control experiments with F protein and, more importantly, G36 demonstrated that no protease activity was detectable in these enzyme preparations (Figs. 3A, B). Thus, the smaller bands generated by glycosidase digestions in Fig. 4 must represent partially to almost totally unglycosylated species of RS virus G protein.
Effects of Endoglycosidase Treatments on Virus Infectivity
The importance of oligosaccharide side chains to the function of RS virus glycoproteins was evaluated by investigating the effects of several endoglycosidase enzymes on RS virus infectivity. The effects of the specific endoglycosidases were compared by incubation of virions in PBS at room temperature for 3 hr either without enzyme or with neuraminidase, N-glycanase, O-glycanase, or a mixture of both N- and O-glycosidases. These conditions were chosen as a compromise between efficient enzyme digestion and thermal inactivation of virus infectivity. Corresponding aliquots of virus, disrupted in 0.5% NP-40, were diluted in appropriate buffers optimized for the individual enzymes (see Materials and Methods) and incubated under the same conditions to ensure that the enzyme preparations were active.
Proteins of virus samples treated with endoglycosidases, either in disruption buffers (Fig. 5, lanes 1-6) or in PBS (Fig. 5, lanes 7-12), were run in polyacrylamide gels, electroblotted onto nitrocellulose filters, and probed with a mixture of two monoclonal antibodies (L7 and L9) specific for RS virus G protein (Fig. 5). Lanes 6 and 12 represent the G protein of disrupted or whole virus, respectively, which was incubated on ice rather than at room temperature. Disrupted virus samples digested under denaturing conditions showed extensive changes in molecular weight as a result of oligosaccharide removal (Fig. 5, lanes 2-5). In addition, decreased MAb binding observed for several samples may indicate either protein denaturation or loss of protein (lanes 1 and 2). However, less dramatic alterations in electrophoretic mobility were seen for whole-virus G protein digested in PBS (Fig. 5, lanes 8-11). As one might predict, these results demonstrate that removal of oligosaccharides from G protein was more efficient, during the 3-hr incubation period used, with detergent-disrupted virus glycoproteins than with whole virus, as evidenced by the differences in G protein migration rates.
Aliquots of each virus sample treated with endoglycosidases in PBS (Fig. 5, lanes 7-12) were taken, immediately after treatment, for virus titration on HEp2 cell monolayers (Fig. 6). Treatment with neuraminidase increased the infectivity of RS virions by about 143% (Neur.), possibly by disaggregation of infectious virus particles. However, virus infectivity was reduced by 76% after treatment with N-glycanase (Ngly), which cleaves N-linked oligosaccharides.
This indicates that removal of readily accessible N-linked oligosaccharides from the G and F proteins had deleterious effects on virus infectivity. Treatment with O-glycanase alone (Ogly) or a combination of N-glycanase and O-glycanase (N/Ogly) caused reductions of 97-98% of RS virus infectivity. Thus, both N-linked and, to a greater extent, O-linked oligosaccharides appear to be necessary for RS virus infectivity. These are presumed to be only a few readily accessible oligosaccharide side chains because enzyme digestion of whole virions at room temperature caused relatively little alteration in migration rates of the Western blotted G proteins (Fig. 5, lanes 7-12).
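The percent changes quoted in this section are plain ratios of treated to control titers; a minimal sketch (the titer values are hypothetical, chosen only to reproduce the reported percentages):

```python
def percent_change(control_pfu, treated_pfu):
    """Percent change in titer relative to the untreated control;
    positive values are increases, negative values are reductions."""
    return 100.0 * (treated_pfu - control_pfu) / control_pfu

# Hypothetical titers (PFU/ml) matching the reported infectivity changes
control = 1.0e6
print(percent_change(control, 2.43e6))  # ~ +143% (neuraminidase-like)
print(percent_change(control, 0.24e6))  # ~ -76%  (N-glycanase-like)
print(percent_change(control, 0.03e6))  # ~ -97%  (O-glycanase-like)
```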
DISCUSSION
In this report it was shown that RS virus G protein contained both N-linked and O-linked carbohydrate moieties which could be removed from the protein backbone with specific endoglycosidases, generating discrete smaller molecular weight forms of G. These endoglycosidase studies support previous findings with tunicamycin and monensin, which showed that the G protein contains both N-linked and O-linked oligosaccharides (Lambert and Pons, 1984; Gruber and Levine, 1985; Wertz et al., 1985; Satake et al., 1985; Fernie et al., 1985). The migration rate of the protein moiety of RS virus G protein in SDS-PAGE corresponds to approximately 36,000 Da (Collins et al., 1984). Based on calculated molecular weights determined by SDS-PAGE, the TM and endoglycosidase data presented here indicate that the G protein contained approximately 58% carbohydrate (55% O-linked and 3% N-linked). Although subject to error due to potential SDS-PAGE migration anomalies, these calculations are consistent with nucleotide sequence analysis of the G protein gene (A2 strain), which indicates that there are four potential glycosylation sites for N-linked oligosaccharides and 91 serine and threonine residues which could serve as possible O-linked glycosylation sites (Wertz et al., 1985).
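As a consistency check, the reported carbohydrate fractions can be back-calculated against the observed size of mature G protein: if carbohydrate is 58% of the mature molecule, the 36-kDa protein moiety is the remaining 42%, and the implied mature molecular weight falls inside the observed 84-90 kDa range. A minimal sketch of the arithmetic:

```python
backbone_kda = 36.0   # unglycosylated G protein (Collins et al., 1984)
carb_fraction = 0.58  # total carbohydrate: 55% O-linked + 3% N-linked

# If carbohydrate is 58% of the mature protein, the protein moiety is 42%,
# so mature MW = backbone / (1 - carb_fraction).
mature_kda = backbone_kda / (1.0 - carb_fraction)
o_linked_kda = 0.55 * mature_kda
n_linked_kda = 0.03 * mature_kda
print(round(mature_kda, 1), round(o_linked_kda, 1), round(n_linked_kda, 1))
```

The implied mature size (~85.7 kDa) sits comfortably within the 84-90 kDa band observed by SDS-PAGE.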
Because of difficulties in labeling the protein moiety of RS virus G protein to high specific activity (Pringle et al., 1981;Dubovi, 1982), Western blot analysis with monoclonal antibody was used in most of the experiments reported here in order to follow the migration of G protein endoglycosidase digestion products.
Results demonstrated that discrete, smaller molecular weight forms of the G protein were obtained after endo-α-N-acetylgalactosaminidase (O-glycanase) treatments.
Treatment with a mixture of Endo F and endo-α-N-acetylgalactosaminidase produced eight partially unglycosylated G protein bands having molecular weights of 40, 45, 48, 51, 54, 55, 60, and 81 kDa, and faint bands of G protein were also detected between 60 and 81 kDa. Similar treatment of GTM generated five detectable glycoprotein bands having molecular weights of approximately 40, 45, 48, 50, and 55 kDa. Since control experiments with F protein demonstrated that no protease activity was detectable in these enzyme preparations, the smaller bands generated must represent various partially to almost totally unglycosylated species of RS virus G protein. Endoglycosidase-generated intermediate forms of G protein (40-85 kDa) suggest that the 85- to 90-kDa G protein is not a dimer of the intracellular 45-kDa intermediate form, as suggested by Fernie et al. (1985), but more likely represents a highly glycosylated monomer bearing mostly O-linked oligosaccharides.
Results reported here strongly suggest that sufficient O-linked and N-linked oligosaccharides are added to the 36-kDa G protein during cellular processing to account for its 85-90 kDa fully glycosylated form. Attempts to dissociate possible 90-kDa dimeric forms of 45-kDa G protein using strong denaturing agents, such as 4 M urea and 6 M guanidine-HCl, had no effect on G protein migration in SDS-PAGE (data not shown). The G protein O-linked oligosaccharides, released by endo-α-N-acetylgalactosaminidase and analyzed on BioGel P6 columns, had an average molecular weight of about 1 kDa (Lambert, in preparation). Thus, on average, only about half of the possible O-linked glycosylation sites need be used to increase the molecular weight of G protein from the 45-kDa (N-linked intermediate) form to the 84- to 90-kDa "mature" form. Complete conversion of 85- to 90-kDa G protein to totally unglycosylated G protein (G36) was not efficiently produced by endoglycosidase treatments of normal G protein used in this study, although G36 protein was identified in some TM-treated virion preparations.
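The "about half of the possible O-linked sites" estimate follows from dividing the mass added between the 45-kDa intermediate and the mature form by the ~1-kDa average chain size. A sketch of that arithmetic (taking the midpoint of the 84-90 kDa range as an assumption):

```python
intermediate_kda = 45.0  # N-linked precursor form (Fernie et al., 1985)
mature_kda = 87.0        # assumed midpoint of the 84-90 kDa mature range
chain_kda = 1.0          # average O-linked chain (BioGel P6 analysis)
potential_sites = 91     # Ser/Thr residues in the G sequence (Wertz et al., 1985)

chains_added = (mature_kda - intermediate_kda) / chain_kda
fraction_used = chains_added / potential_sites
print(round(chains_added), round(fraction_used, 2))  # → 42 0.46
```

Roughly 42 of the 91 candidate sites, i.e. about half, suffice to account for the observed mass increase.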
A possible explanation could be that "blocked" oligosaccharide cleavage sites or protein conformational constraints inhibited complete oligosaccharide removal under the conditions employed here. Interestingly, a faint band representing a 38-kDa form of G protein was observed in whole virions digested with Endo F (-PNGase F), and 40-kDa forms of G were also observed in Endo F-treated and N-glycanase-treated samples (Fig. 3). During these experiments, two forms of O-linked glycosylated G protein were resolved in TM-treated virions, as evidenced by a doublet band of GTM. This suggests heterogeneity in either size or number of the O-linked oligosaccharide side chains.
In agreement with others (Gruber and Levine, 1985; Fernie et al., 1985), it was demonstrated here that glycosylation of the F protein was completely sensitive to TM treatment, since no further digestion of FTM was accomplished by N-linked oligosaccharide-specific endoglycosidases.
Endo F and N-glycanase digestion of purified labeled fusion protein demonstrated that the F2 subunit was more highly glycosylated than F1 (Fig. 2), and essentially all the N-linked oligosaccharides could be removed from the F protein by Endo F treatment.
Comparison of the molecular weights of F and FTM proteins and of Endo F-digested samples indicates that the F protein contained approximately 13% N-linked oligosaccharide.
The importance of oligosaccharide side chains in the biologic and antigenic properties of viral glycoproteins has been reported. Neuraminic acid residues on the hemagglutinin protein are involved in adsorption of measles virus to the surface of cells (Dore-Duffy and Howe, 1978). Loss of a specific oligosaccharide side chain through mutation at a single glycosylation site on the hemagglutinin protein is responsible for increased virulence of avian (H5N2) influenza virus (Kawaoka et al., 1984). Oligosaccharides might also contribute, either directly or indirectly, to antigenic sites on viral glycoproteins.
For example, carbohydrate-dependent epitopes have been shown to exist in the gC protein of herpes simplex virus type 1 (Sjoblom et al., 1987).
Infectivity assays of RS virus, after treatment with enzymes which remove N-linked or O-linked oligosaccharides, demonstrated that although little alteration of the molecular weight of the G (attachment) protein was observed, virus infectivity was greatly reduced. Thus, limited removal of both N-linked and O-linked oligosaccharides greatly reduced virus infectivity, suggesting that the few readily accessible N-linked or O-linked oligosaccharides on the G and F proteins may exert considerable influence on the attachment and/or penetration functions of RS virus glycoproteins.
Studies to clarify the specific role of oligosaccharide-mediated virus-host cell interactions for RS virus are currently under way. It will also be of interest to determine whether these oligosaccharides play an important role in immunity to RS virus infections.

Polynomial decay of correlations for flows, including Lorentz gas examples
We prove sharp results on polynomial decay of correlations for nonuniformly hyperbolic flows. Applications include intermittent solenoidal flows and various Lorentz gas models including the infinite horizon Lorentz gas.
Of interest is the rate of decay of correlations, or rate of mixing, namely the rate at which ρ v,w converges to zero. Dolgopyat [17] showed that geodesic flows on compact surfaces of negative curvature with volume measure µ Λ are exponentially mixing for Hölder observables v, w. Liverani [22] extended this result to arbitrary dimensional geodesic flows in negative curvature and more generally to contact Anosov flows. However, exponential mixing remains poorly understood in general.
Dolgopyat [18] considered the weaker notion of rapid mixing (superpolynomial decay of correlations) where ρ v,w (t) = O(t −q ) for sufficiently regular observables for any fixed q ≥ 1, and showed that rapid mixing is 'prevalent' for Axiom A flows: it suffices that the flow contains two periodic solutions with periods whose ratio is Diophantine. Field et al. [19] introduced the notion of good asymptotics and used this to prove that amongst C r Axiom A flows, r ≥ 2, an open and dense set of flows is rapid mixing.
In [24], results on rapid mixing were obtained for nonuniformly hyperbolic semiflows, combining the rapid mixing method of Dolgopyat [18] with advances by Young [30,31] in the discrete time setting. First results on polynomial mixing for nonuniformly hyperbolic semiflows (ρ v,w (t) = O(t −q ) for some fixed q > 0) were obtained in [25]. Under certain assumptions the results in [24,25] were established also for nonuniformly hyperbolic flows. However, for polynomially mixing flows, the assumptions in [25] are overly restrictive and exclude many examples including infinite horizon Lorentz gases.
In this paper, we develop the tools required to cover systematically large classes of nonuniformly hyperbolic flows. The recent review article [26] describes the current state of the art for rapid and polynomial decay of correlations for nonuniformly hyperbolic semiflows and flows and gives a complete self-contained proof in the case of semiflows. Here we provide the arguments required to deal with flows. Our results cover all of the examples in [26].
By [24], rapid mixing holds (at least typically) for nonuniformly hyperbolic flows that are modelled as suspensions over Young towers with exponential tails [30]. See also Remark 8.5. Here we give a different proof that has a number of advantages as discussed in the introduction to [26]. Flows are modelled as suspensions over a uniformly hyperbolic map with an unbounded roof function (rather than as suspensions over a nonuniformly hyperbolic map with a bounded roof function). It then suffices to consider twisted transfer operators with one complex parameter rather than two as in [24], reducing from four to three the number of periodic orbits that need to be considered in Proposition 6.6. Also, the proof of rapid mixing only uses superpolynomial tails for the roof function, whereas [24] requires exponential tails.
Examples covered by our results on rapid mixing include finite Lorentz gases (including those with cusps, corner points, and external forcing), Lorenz attractors, and Hénon-like attractors. We refer to [26] for references and further details.
Examples discussed in [25,26] for which polynomial mixing holds include nonuniformly hyperbolic flows that are modelled as suspensions over Young towers with polynomial tails [31]. This includes intermittent solenoidal flows, see also Remark 8.6.
The key example of continuous time planar periodic infinite horizon Lorentz gases is considered at length in Section 9. In the finite horizon case, exponential decay of correlations for the flow was proved in [4]. In the infinite horizon case it has been conjectured [20,23] that the decay rate for the flow is O(t −1 ). (An elementary argument in [5] shows that this rate is optimal; the argument is reproduced in the current context in Proposition 9.14.) We obtain the conjectured decay rate O(t −1 ) for planar infinite horizon Lorentz flows in Theorem 9.1. [25], the decay rate O(t −1 ) was proved for infinite horizon Lorentz gases at the semiflow level (after passing to a suspension over a Markov extension and quotienting out stable leaves as in Sections 3 and 6). It was claimed in [25] that this result held also in certain special cases for the Lorentz flow, and that the decay rate O(t −(1−ǫ) ) held for all ǫ > 0 in complete generality. The spurious factor of t ǫ was then removed in an unpublished preprint "Decay of correlations for flows with unbounded roof function, including the infinite horizon planar periodic Lorentz gas" by the first and third authors. Unfortunately these results for flows do not apply to Lorentz gases since hypothesis (P1) in [25] is not satisfied. The situation is rectified in the current paper. (The unpublished preprint also contained correct results on statistical limit laws such as the central limit theorem for flows with unbounded roof functions. These aspects are completed and extended in [7].) (b) A drawback of the method in this paper, already present in [18] and inherited by [24,25,26], is that at least one of the observables v or w is required to be C m in the flow direction. Here m can be estimated, with difficulty, but is likely to be quite large. In the case of the infinite horizon Lorentz gas, this excludes certain physically important observables such as velocity. 
A reasonable project is to attempt to combine methods in this paper with the methods for (stretched) exponential decay in [4,12] to obtain the decay rate O(t −1 ) for Hölder observables v and w (cf. the second open question in [26,Section 9]).
Remark 1.1 (a) In
In Part I of this paper, we consider results on rapid mixing and polynomial mixing for a class of suspension flows over infinite branch uniformly hyperbolic transformations [30]. In Part II, we show how these results apply to important classes of nonuniformly hyperbolic flows including those mentioned in this introduction. The methods of proof in this paper, especially those in Part I, are fairly straightforward adaptations of those in [26]. The main new contribution of the paper (Section 6 together with Part II) is to develop a general framework whereby large classes of nonuniformly hyperbolic flows, including fundamental examples such as the infinite horizon Lorentz gas, are covered by these methods.
Remark 1.2 The paper has been structured to be as self-contained as possible. It does not seem possible to reduce the results on flows in Part I of this paper to the results on semiflows in [26]. Instead, it is necessary to start from scratch and to emulate, rather than apply directly, the methods in [26]. Some of the more basic estimates in [26] are applicable and are collected together at the beginning of Section 4 (Lemma 4.1 to Proposition 4.9) and Section 5 (Propositions 5.1 to 5.3), as well as in Section 5.2 (Propositions 5.7, 5.11 and 5.12). Also, results on nonexistence of approximate eigenfunctions in [26] are recalled in Section 6.2 and Section 8.4.
Notation We use the "big O" and ≪ notation interchangeably, writing a n = O(b n ) or a n ≪ b n if there is a constant C > 0 such that a n ≤ Cb n for all n ≥ 1. There are various "universal" constants C 1 , . . . , C 5 ≥ 1 depending only on the flow that do not change throughout.
Mixing rates for Gibbs-Markov flows
In this part of the paper, we state and prove results on rapid and polynomial mixing for a class of suspension flows that we call Gibbs-Markov flows. These are suspensions over infinite branch uniformly hyperbolic transformations [30]. In Section 2, we recall material on the noninvertible version, Gibbs-Markov semiflows (suspensions over infinite branch uniformly expanding maps). In Section 3, we consider skew product Gibbs-Markov flows where the roof function is constant along stable leaves and state our main theorems for such flows, namely Theorem 3.1 (rapid mixing) and Theorem 3.2 (polynomial mixing). These are proved in Sections 4 and 5 respectively. In Section 6, we consider an enlarged class of Gibbs-Markov flows that can be reduced to skew products and for which Theorems 3.1 and 3.2 remain valid.
We quickly review notation associated with suspension semiflows and suspension flows. Let (Y, µ) be a probability space and let F : Y → Y be a measure-preserving transformation. Let ϕ : Y → R + be an integrable roof function. Define the suspension semiflow/flow F t : Y ϕ → Y ϕ , where Y ϕ = {(y, u) ∈ Y × [0, ∞) : 0 ≤ u ≤ ϕ(y)} with (y, ϕ(y)) ∼ (F y, 0), and F t (y, u) = (y, u + t) computed modulo identifications. An F t -invariant probability measure on Y ϕ is given by µ ϕ = (µ × Lebesgue)/ ∫ Y ϕ dµ.
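Computationally, flowing a point (y, u) in the suspension amounts to adding t to u and applying the identification whenever u crosses the roof. A minimal sketch; the doubling map and the roof function below are placeholders, not taken from the paper:

```python
def suspension_flow(F, phi, y, u, t):
    """Advance (y, u) in the suspension Y^phi for time t, applying the
    identification (y, phi(y)) ~ (F(y), 0) each time the roof is reached."""
    u += t
    while u >= phi(y):      # cross the roof as many times as needed
        u -= phi(y)
        y = F(y)
    return y, u

# Placeholder base map and roof function (illustrative only)
F = lambda y: (2.0 * y) % 1.0   # doubling map on [0, 1)
phi = lambda y: 1.0 + y         # roof bounded away from zero (inf phi = 1)

print(suspension_flow(F, phi, 0.2, 0.0, 1.2))  # → (0.4, 0.0)
```

Flowing for exactly the roof time phi(y) = 1.2 carries (0.2, 0) to (F(0.2), 0), illustrating the identification.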
Gibbs-Markov maps and semiflows
In this section, we review definitions and notation from [26, Section 3.1] for a class of Gibbs-Markov semiflows built as suspensions over Gibbs-Markov maps. Standard references for background material on Gibbs-Markov maps are [1,Chapter 4] and [2]. Suppose that (Y ,μ) is a probability space with an at most countable measurable partition {Y j , j ≥ 1} and let F : Y → Y be a measure-preserving transformation. For θ ∈ (0, 1), define d θ (y, y ′ ) = θ s(y,y ′ ) where the separation time s(y, y ′ ) is the least integer n ≥ 0 such that F n y and F n y ′ lie in distinct partition elements in {Y j }. It is assumed that the partition {Y j } separates trajectories, so s(y, y ′ ) = ∞ if and only if y = y ′ . Then d θ is a metric, called a symbolic metric.
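On itineraries through the partition {Y j }, the separation time and the metric d θ can be computed directly; a minimal sketch (the tuples of partition indices are illustrative):

```python
def separation_time(itin1, itin2):
    """Least n with itin1[n] != itin2[n]; returns len(itin1) when the
    (finite) itineraries agree everywhere, standing in for s = infinity."""
    for n, (a, b) in enumerate(zip(itin1, itin2)):
        if a != b:
            return n
    return len(itin1)

def d_theta(itin1, itin2, theta=0.5):
    """Symbolic metric d_theta(y, y') = theta ** s(y, y')."""
    return theta ** separation_time(itin1, itin2)

x = (1, 1, 2, 3, 1)
y = (1, 1, 2, 5, 2)
print(separation_time(x, y), d_theta(x, y))  # → 3 0.125
```

Points whose orbits stay in the same partition elements for longer are closer in d θ, which is what makes Lipschitz regularity with respect to d θ a natural function class here.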
More generally (and with a slight abuse of notation), we say that a function v : Y → R is piecewise d θ -Lipschitz if |1 Y j v| θ < ∞ for all j, where |w| θ = sup y≠y′ |w(y) − w(y′)|/d θ (y, y′). If in addition sup j |1 Y j v| θ < ∞, then we say that v is uniformly piecewise d θ -Lipschitz. Note that such a function v is bounded on partition elements but need not be bounded on Y .
For b ∈ R, we define the twisted transfer operators M b : F θ (Y ) → F θ (Y ), M b v = e ibϕ v ◦ F . A finite subsystem of Y is a set Z 0 = ∩ n≥0 F −n Z, where Z is the union of finitely many elements from the partition {Y j }. (Note that F | Z 0 : Z 0 → Z 0 is a full one-sided shift on finitely many symbols.)

Definition 2.3 We say that M b has approximate eigenfunctions on Z 0 if for any α 0 > 0, there exist constants α, ξ > α 0 and C > 0, and sequences |b k | → ∞, ψ k ∈ [0, 2π), u k ∈ F θ (Y ) with |u k | ≡ 1 and |u k | θ ≤ C|b k |, such that setting n k = [ξ ln |b k |], |(M b k n k u k )(y) − e iψ k u k (y)| ≤ C|b k | −α for all y ∈ Z 0 , k ≥ 1.

Remark 2.4 For brevity, the statement "Assume absence of approximate eigenfunctions" is the assumption that there exists at least one finite subsystem Z 0 such that M b does not have approximate eigenfunctions on Z 0 .
Skew product Gibbs-Markov flows
In this section, we recall the notion of skew product Gibbs-Markov flow [26,Section 4.1] and state our main results on mixing for such flows. Let (Y, d) be a metric space with diam Y ≤ 1, and let F : Y → Y be a piecewise continuous map with ergodic F -invariant probability measure µ. Let W s be a cover of Y by disjoint measurable subsets of Y called stable leaves. For each y ∈ Y , let W s (y) denote the stable leaf containing y. We require that F (W s (y)) ⊂ W s (F y) for all y ∈ Y .
Let Y denote the space obtained from Y after quotienting by W s , with natural projection π : Y → Y . We assume that the quotient map F : Y → Y is a Gibbs-Markov map as in Definition 2.1, with partition {Y j }, separation time s(y, y ′ ), and ergodic invariant probability measureμ =π * µ.
Let Y j =π −1 Y j ; these form a partition of Y and each Y j is a union of stable leaves. The separation time extends to Y , setting s(y, y ′ ) = s(πy,πy ′ ) for y, y ′ ∈ Y .
Next, we require that there is a measurable subset Ỹ ⊂ Y such that for every y ∈ Y there is a unique ỹ ∈ Ỹ ∩ W^s(y). Let π : Y → Ỹ denote the associated projection πy = ỹ. (Note that Ỹ can be identified with Ȳ, but in general π_*µ ≠ μ̄.) We assume that there are constants C_2 ≥ 1, γ ∈ (0, 1) such that for all n ≥ 0,

d(F^n y, F^n y′) ≤ C_2 γ^n d(y, y′) for all y′ ∈ W^s(y),  (3.1)
d(F^n y, F^n y′) ≤ C_2 γ^{s(y,y′)−n} for all y, y′ ∈ Ỹ.  (3.2)

Let ϕ : Y → R⁺ be an integrable roof function with inf ϕ > 0, and define the suspension flow F_t : Y^ϕ → Y^ϕ as in (1.1) with ergodic invariant probability measure µ^ϕ.
In this subsection, we suppose that ϕ is constant along stable leaves and hence projects to a well-defined roof function ϕ : Ȳ → R⁺. It follows that the suspension flow F_t projects to a quotient suspension semiflow F̄_t : Ȳ^ϕ → Ȳ^ϕ. We assume that F̄_t is a Gibbs-Markov semiflow (Definition 2.2). In particular, increasing γ ∈ (0, 1) if necessary, (2.1) is satisfied in the form |ϕ(y) − ϕ(y′)| ≤ C_1 (inf_{Y_j} ϕ) γ^{s(y,y′)} for all y, y′ ∈ Y_j, j ≥ 1.
We call F t a skew product Gibbs-Markov flow, and we say that F t has approximate eigenfunctions if F t has approximate eigenfunctions (Definition 2.3).
We can now state the main theoretical results for skew product Gibbs-Markov flows.
Theorem 3.1 Suppose that F_t : Y^ϕ → Y^ϕ is a skew product Gibbs-Markov flow such that ϕ ∈ L^q(Y) for all q ∈ N. Assume absence of approximate eigenfunctions. Then for any q ∈ N, there exist m ≥ 1 and C > 0 such that |ρ_{v,w}(t)| ≤ C ‖v‖_{γ,η} ‖w‖_{γ,m} t^{−q} for all admissible observables v, w and all t > 0.

Theorem 3.2 Suppose that F_t : Y^ϕ → Y^ϕ is a skew product Gibbs-Markov flow such that µ(ϕ > t) = O(t^{−β}) for some β > 1. Assume absence of approximate eigenfunctions. Then there exist m ≥ 1 and C > 0 such that |ρ_{v,w}(t)| ≤ C ‖v‖_{γ,η} ‖w‖_{γ,m} t^{−(β−1)} for all admissible observables v, w and all t > 0.
Remark 3.3
Our result on polynomial mixing, Theorem 3.2, implies the result on rapid mixing, Theorem 3.1 (for a slightly more restricted class of observables). However, the proof of Theorem 3.1 plays a crucial role in the proof of Theorem 3.2, justifying the movement of certain contours of integration to the imaginary axis after the truncation step in Section 5.2. Hence, it is not possible to bypass Theorem 3.1 even when only polynomial mixing is of interest.
These results are proved in Sections 4 and 5 respectively. For future reference, we mention the following estimates. Define ϕ_n = Σ_{j=0}^{n−1} ϕ ∘ F^j.
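For later use it may help to record the cocycle property of the Birkhoff sums ϕ_n (a standard identity, spelled out here for convenience): ϕ_n(y) is precisely the time at which the suspension flow started at (y, 0) has completed n passes through the roof, and

```latex
\varphi_{n+m} \;=\; \varphi_n + \varphi_m \circ F^{\,n},
\qquad\text{since}\qquad
\sum_{j=0}^{n+m-1} \varphi \circ F^{\,j}
\;=\; \sum_{j=0}^{n-1} \varphi \circ F^{\,j}
\;+\; \sum_{j=0}^{m-1} \varphi \circ F^{\,j} \circ F^{\,n}.
% Consequently F_{\varphi_n(y)}(y, 0) = (F^n y, 0) in Y^\varphi.
```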
Rapid mixing for skew product Gibbs-Markov flows
In this section, we consider skew product Gibbs-Markov flows F t : Y ϕ → Y ϕ for which the roof function ϕ : Y → R + lies in L q (Y ) for all q ≥ 1. For such flows, we prove Theorem 3.1, namely that absence of approximate eigenfunctions is a sufficient condition for rapid mixing.
Some notation and results from [26]
Let H = {s ∈ C : Re s > 0} and H̄ = {s ∈ C : Re s ≥ 0}. The Laplace transform ρ̂_{v,w}(s) = ∫_0^∞ e^{−st} ρ_{v,w}(t) dt of the correlation function ρ_{v,w} is analytic on H.
(ii) There exist constants C, α > 0 such that Let m = ⌈α⌉ + 2. Then there exists a constant C ′ > 0 depending only on r and α, such that
Remark 4.2
Since ρ̂_{v,w} is not a priori well-defined on H̄, the conditions in this lemma should be interpreted in the usual way, namely that ρ̂_{v,w} : H → C extends to a function g : H̄ → C satisfying the desired conditions (i) and (ii). The conclusion for ρ_{v,w} then follows from a standard uniqueness argument.
For completeness, we provide the uniqueness argument. By [26, Corollary 6.1], the inverse Laplace transform of ρ̂_{v,w} can be computed by integrating along a contour in H. Since g ≡ ρ̂_{v,w} on H, we can compute the inverse Laplace transform f of g using the same contour, and we obtain ρ_{v,w} ≡ f. Hence ρ̂_{v,w} ≡ g is well-defined on H̄ and satisfies conditions (i) and (ii), so the conclusion follows from [26, Lemma 6.2].
for all s ∈ H, n ≥ 1.
Also, for s ∈ H̄, define the twisted transfer operators R̂(s)v = R(e^{−sϕ} v).

Proof This follows from [26, Corollary 7.2].
Remark 4.5 Restricting to q as above enables us to obtain estimates for the rapidly mixing and polynomially mixing situations simultaneously, thereby avoiding a certain amount of repetition. The trade-off is that the proof of Theorem 3.1 is considerably more difficult. The reader interested only in the rapid mixing case can restrict to integer values of q, with greatly simplified arguments [26, Section 7] (see also version 3 of our preprint on arXiv).
Proof It is shown in [26, Proposition 8.7] that ‖R̂^{(q)}(s)‖_θ ≤ C(|b| + 1). Using the definition of ‖·‖_b, the desired estimate follows by exactly the same argument.
Remark 4.7 Estimates such as those for R̂^{(q)} in Proposition 4.6 hold equally for R̂^{(q′)} for all q′ < q. We use this observation without comment throughout.
We have the key Dolgopyat estimate: Proof For the region 0 ≤ a ≤ 1, |b| ≥ δ, this is explicit in [
Approximation of v s and w(s)
The first step is to approximate v_s, w(s) : Y → C by functions that are constant on stable leaves and hence well-defined on Ȳ.

Proposition 4.10 Let w ∈ L^∞(Y). Then (a) ∆_k w is constant along stable leaves.
Proof Part (a) is immediate from the definition and part (b) follows by induction.
By Proposition 4.10(a), these can be regarded as functions Then All of these series are absolutely convergent exponentially quickly, pointwise on H.
Proof Since this result is set in the right-half complex plane, the final statement is elementary. We sketch the arguments. Let s ∈ C with a = Re s > 0. It is clear that Also, by Proposition 4.10(b), for each n ≥ 1, 0 ≤ k ≤ n − 1, This completes the proof.
For w ∈ L ∞ (Y ϕ ), we define the approximation operators Proof (a) Clearly | ∆ 0 w(y, u)| ≤ |w| ∞ . By (3.1), for k ≥ 1, This proves the estimate for ∆ k w, and the estimate for E k w is similar.
The case k = 0 is the same with one term omitted.
(c) For k ≥ 1, The case k = 0 is the same with one term omitted.
We end this subsection by noting for all k ≥ 0 the identities
Estimates for A n and B n,k
We continue to suppose that µ(ϕ > t) = O(t −β ) where β > 1, and that q, η, γ 1 , θ are as in Subsection 4.1. Let c ′ = 1/(2C 1 ). As shown in the proofs of Propositions 4.14 and 4.15 below, A n and B n,k are Laplace transforms of L ∞ functions A n , B n,k : [0, ∞) → R. In this subsection, we obtain estimates for these functions A n , B n,k .
Proposition 4.13
There is a constant C > 0 such that For j = n, Finally for j = 0, completing the proof.
Proposition 4.14 There is a constant C > 0 such that The result follows from Propositions 3.4(b) (with η = 1) and 4.13.
Proposition 4.15
There is a constant C > 0 such that Proof We compute that The result follows from Propositions 3.4(b) and 4.13.
Estimates for C j,k
For the moment, we suppose that µ(ϕ > t) = O(t −β ) where β > 1, and that q, η, γ 1 , θ are as in Subsection 4.1. First, we estimate the inverse Laplace transform Proof For all k ≥ 0, by Proposition 3.4.
Proposition 4.17
There exists C > 0 such that Proof By Proposition 4.6, there exists a constant M > 0 such that From now on, we specialize to the rapid mixing case, so q and β are arbitrarily large and all functions previously regarded as C q are now C ∞ . Note that Hence by Proposition 4.12(a,b), At the same time, the supnorm estimate (4.3) yields Combining these estimates and using (4.2) we obtain that
Using this and (4.3), it follows by Proposition 4.4 that
completing the proof.
Next, suppose that s ∈ H_δ. By Proposition 4.9, R̂ = λP + R̂Q, where P(s) is the spectral projection corresponding to λ(s) and Q(s) = I − P(s). By Proposition 4.9, λ(s) is a C^∞ family of isolated eigenvalues with λ(0) = 1, λ′(0) ≠ 0 and |λ(s)| ≤ 1, and It follows from the estimates for R̂^{j+1}V_j and R̂^ℓ that Since |λ(s)| ≤ 1, the proof of Proposition 4.17 applies equally to λ^ℓ, so |Q By Lemma 4.11, Ĉ = Σ_{j,k=0}^∞ Ĉ_{j,k} is analytic on H. As shown in the next result, Ĉ extends smoothly to H̄.
Corollary 4.21
Assume absence of eigenfunctions, and let r ∈ N. There exists α, C > 0 such that and the proof for |b| ≥ δ is complete.
For |b| ≤ δ, we use Proposition 4.18 to write Proposition 4.20(b) takes care of the first term on the right-hand side, and it remains to Proof of Theorem 3.1 Recall that β and q can be taken arbitrarily large. Hence it follows from Proposition 4.
Similarly, by Propositions 4.14 and 4.15, sup_{H̄} |Â_n^{(r)}|, sup_{H̄} |B̂_{n,k}^{(r)}| ≪ n^{r+3} γ_1^n |v|_γ |w|_∞. Combining these with Corollary 4.21 and substituting into Lemma 4.11, we have shown that ρ̂_{v,w} : H → C extends to ρ̂_{v,w} : H̄ → C. Moreover, we have shown that for every r ∈ N there exist C, α > 0 such that the conditions of Lemma 4.1 hold. The result now follows from Lemma 4.1 and Remark 4.2.
Polynomial mixing for skew product Gibbs-Markov flows
In this section, we consider skew product Gibbs-Markov flows F_t : Y^ϕ → Y^ϕ for which µ(ϕ > t) = O(t^{−β}) for some β > 1. For such flows, we prove Theorem 3.2, namely that absence of approximate eigenfunctions is a sufficient condition to obtain the mixing rate O(t^{−(β−1)}).
Proof This is contained in the proof of [26,Proposition 8.13].
Modified estimate for
Let d ∈ Y be a j-cylinder and let y, y ′ ∈ d. Then the arguments in the proof of Proposition 4.12(a,b) show that On the other hand,
Using (3.3), it follows that
and similarly, By Proposition 4.4, Hence by Proposition 5.3, Applying Proposition 4.4 once more and using (5.1), as required.
Truncation
We proceed in a manner analogous to [26,Section 8.4], replacing ϕ by a bounded roof function.
Consider the suspension semiflows F_t and F_{N,t} on Y^ϕ and Y^{ϕ(N)} respectively. (Here, ϕ(N) denotes the truncated roof function.) We make the following abuse of notation regarding norms of observables v; a similar convention applies to observables w ∈ H_γ(Y^{ϕ(N)}). However, restricting w ∈ H_{γ,0,m}(Y^ϕ) to Y^{ϕ(N)} need not preserve smoothness in the flow direction. Below we prove:

Lemma 5.8 Assume absence of approximate eigenfunctions. In particular, there is a finite union Z ⊂ Y of partition elements such that the corresponding finite subsystem Z_0 does not support approximate eigenfunctions. Choose N_1 ≥ |1_Z ϕ|_∞ + 3.
There exist m ≥ 1, C > 0 such that

Proof of Theorem 3.2 Let m ≥ 1, N_1 ≥ 3 be as in Lemma 5.8. As discussed above, restriction to Y^{ϕ(N)} need not preserve smoothness in the flow direction. To circumvent this, following [25, 26] we define an approximating observable w_N, where the d_{N,j}(y) are linear combinations of ∂_t^j w(y, N − 2) and ∂_t^j w(y, ϕ(y) − 1), j = 0, . . . , m, with coefficients independent of y and N, uniquely specified by the stated requirements. The result then follows directly from Proposition 5.7.
It remains to verify the claim.
, where y, y′ lie in the same partition element. Then where C is a constant independent of N. Also, by (3.3), for 0 ≤ j ≤ m. Hence

Our strategy for proving Lemma 5.8 is identical to that for [26, Lemma 8.20]. The first step is to show that the inverse Laplace transform of ρ^{trunc}_{v,w} can be computed using the imaginary axis as the contour of integration.
Proof In this proof, the constant C is not required to be uniform in N . Consequently, the estimates are very straightforward compared to other estimates in this section.
The desired properties for ρ^{trunc}_{v,w} will hold provided they are verified for all the constituent parts in Lemma 4.11. Note that if f is integrable on [0, ∞), then f̂ satisfies the required properties with α = 0. Hence the estimate in Proposition 4.16 already suffices for W_k. Also, the proof of Proposition 4.19 suffices after truncation since ϕ^{r+3} becomes (2C_1 N)^{r+2} ϕ. (Actually, the factor ϕ^{r+3} is easily improved to ϕ^{r+1+2η}, which is integrable when r = 0, so truncation is not absolutely necessary for the term R̂^{j+1}V_j.) By definition of N_1, the truncated roof function ϕ(N) coincides with ϕ on the subsystem Z_0, so absence of approximate eigenfunctions passes over to the truncated flow for each N ≥ N_1. Since ϕ(N) ≤ 2C_1 N, all estimates related to R̂ and T̂ in Section 4 now hold for q arbitrarily large. Hence the arguments in Section 4 yield the desired properties for Σ_{0≤j,k<∞} Ĉ_{j,k}. Also, it is immediate from the proof of [26, Proposition 6.3] that |J_0(t)| ≪ N µ(ϕ(N) > t), so J_0(t) = 0 for t > N and hence J_0 is integrable.
It remains to consider the terms A_n and B_n. Here, we must take into account that the factor of ϕ in the definition of ‖·‖_γ is not truncated. Starting from the end of the proof of Proposition 4.14, we obtain A simplified version of Proposition 4.13 combined with Proposition 3.4(b) yields. Hence Σ_{n≥1} A_n and Σ_{0≤k<n<∞} B_{n,k} are integrable, completing the proof.
Proof As in [26, Section 6.1], we can suppose without loss of generality that ρ^{trunc}_{v,w} vanishes for t near zero, so that By Proposition 5.9, it follows as in the proof of [26, Lemma 6.2] that By Proposition 5.9, equation (5.5) extends to H̄ \ {0} and the result follows.
From now on we suppress the superscript "trunc" for the sake of readability. The notation R̂, T̂ and so on refers to the operators obtained using ϕ(N) instead of ϕ. We end this subsection by recalling some further estimates from [26]. The first is a uniform version of Proposition 4.8.
Proof of Lemma 5.8
Let ψ and κ_m be as in Corollary 5.10, with the extra property that supp ψ ⊂ (−δ, δ). By Proposition 5.1, ψ, κ_m ∈ R(t^{−p}) for all p > 0, m ≥ 2. (5.6) By Corollary 5.10, we need to show that ψ(b)ρ̂_{v,w}(ib) ∈ R(‖v‖_{γ,η} ‖w‖_γ t^{−(β−1)}), together with the analogous inclusion for κ_m(b)ρ̂_{v,w}(ib). (Estimates such as these that hold even before truncation are clearly independent of N.) By (5.6) and Proposition 5.2, uniformly in N ≥ 1, Hence it remains to estimate ψĈ and κ_m Ĉ. The next lemma provides the desired estimates and completes the proof of Lemma 5.8 (recall that q > β − 1).
Lemma 5.13
Assume absence of approximate eigenfunctions. There exist N_1 ≥ 1, m ≥ 2, such that after truncation, uniformly in N ≥ N_1, Proof (a) Let ℓ = max{j − k − 1, 0} and recall that By Proposition 5.11, we can choose m ≥ 2 such that κ_{m−5} T̂_θ ∈ R(t^{−q}) uniformly in N ≥ N_1. Write κ_m = κ_3 κ_{m−5} κ_2, where each κ_i is C^∞, vanishes in a neighborhood of zero, and is O(|b|^{−i}). Then The estimates for R̂^ℓ and R̂^{j+1}V_j in Proposition 4.17 and Corollary 5.6 hold even before truncation and hence are uniform in N ≥ 1. Using (5.6) and Propositions 5.1 and 5.2, uniformly in N ≥ 1. Since q > 1, it follows from Proposition 5.2 that, uniformly in N ≥ N_1, Also, |W_k|_1 ∈ R((k + 1)^{β+1} γ_1^k ‖w‖_γ t^{−q}) by Proposition 4.16, and this is uniform in N ≥ 1. Applying Proposition 5.2 once more, uniformly in N ≥ N_1, κ_m Ĉ_{j,k} ∈ R((j + 1)^β γ_1^{j/3} and part (a) follows.
General Gibbs-Markov flows
In this section, we assume the setup from Section 3 but we drop the requirement that ϕ is constant along stable leaves.
In Subsection 6.1, we introduce a criterion, condition (H), that enables us to reduce to the skew product Gibbs-Markov maps studied in Sections 3, 4 and 5. This leads to an enlarged class of Gibbs-Markov flows for which we can prove results on mixing rates (Theorem 6.4 below). In Subsection 6.2, we recall criteria for absence of approximate eigenfunctions based on periodic data.
Condition (H)
Let F : Y → Y be a map as in Section 3 with quotient Gibbs-Markov map F : Y → Y , and define Y j = Y j ∩ Y . Let ϕ : Y → R + be an integrable roof function with inf ϕ > 1 and associated suspension flow F t : Y ϕ → Y ϕ .
for all y ∈ Y such that the series converges absolutely. We assume:

(H)(a) The series converges almost surely on Y and χ ∈ L^∞(Y).
In the remainder of this section, we prove that F̃_t is a skew product Gibbs-Markov flow (and hence F̄_t is a Gibbs-Markov semiflow), and show that (super)polynomial decay of correlations for F̃_t is inherited by F_t. Proposition 6.1 Let F_t : Y^ϕ → Y^ϕ be a Gibbs-Markov flow. Then F̃_t : Ỹ^{φ̃} → Ỹ^{φ̃} is a skew product Gibbs-Markov flow.
Proof We verify that the setup in Section 3 holds. All the conditions on the map F : Y → Y are satisfied by assumption. Hence it suffices to check that φ̃ satisfies condition (3.3).
We say that a Gibbs-Markov flow has approximate eigenfunctions if this is the case for F̃_t (equivalently, F̄_t).
Periodic data and absence of approximate eigenfunctions
In this subsection, we recall the relationship between periodic data and approximate eigenfunctions, and review two sufficient conditions that rule out the existence of approximate eigenfunctions. We continue to assume that F_t is a Gibbs-Markov flow as in Subsection 6.1. Define ϕ_n = Σ_{j=0}^{n−1} ϕ ∘ F^j. Similarly, define φ̃_n and φ̄_n. If y is a periodic point of period p for F (that is, F^p y = y), then y is periodic of period L = ϕ_p(y) for F_t (that is, F_L y = y). Recall that π̄ : Y → Ȳ is the quotient projection.

Proposition 6.5 Suppose that there exist approximate eigenfunctions on Z_0 ⊂ Ȳ. Let α, C, b_k, n_k be as in Definition 2.3. If y ∈ π̄^{−1} Z_0 is a periodic point with F^p y = y and F_L y = y where L = ϕ_p(y), then

Proof Define ȳ = π̄y ∈ Z_0 and note that F̄^p ȳ = F̄^p π̄y = π̄F^p y = ȳ. By (6.2), φ̄_p(ȳ) = φ̃_p(y) = ϕ_p(y) + χ(y) − χ(F^p y) = L.
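To see why F^p y = y forces a flow-periodic orbit of period L = ϕ_p(y) (a one-line verification, included for the reader's convenience): flowing for time ϕ_p(y) from (y, 0) passes through the roof exactly p times, so by the identifications in the suspension,

```latex
F_{L}(y, 0)
\;=\; F_{\varphi_p(y)}(y, 0)
\;=\; (F^p y,\, 0)
\;=\; (y,\, 0),
\qquad
L \;=\; \varphi_p(y) \;=\; \sum_{j=0}^{p-1} \varphi(F^j y).
```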
The following Diophantine condition is based on [18, Section 13]. (Unlike in [18], we have to consider periods corresponding to three periodic points instead of two.) Proposition 6.6 Let y_1, y_2, y_3 ∈ Y_j be fixed points for F, and let L_i = ϕ(y_i), i = 1, 2, 3, be the corresponding periods for F_t. Let Z_0 ⊂ Ȳ be the finite subsystem corresponding to the three partition elements containing π̄y_1, π̄y_2, π̄y_3. If (L_1 − L_3)/(L_2 − L_3) is Diophantine, then there do not exist approximate eigenfunctions on Z_0.
Proof Using Proposition 6.5, the proof is identical to that of [26,Proposition 5.3].
The condition in Proposition 6.6 is satisfied with probability one but is not robust. Using the notion of good asymptotics [19], we obtain an open and dense condition. Proposition 6.7 Let Z 0 ⊂ Y be a finite subsystem. Let y 0 ∈π −1 Z 0 be a fixed point for F with period L 0 = ϕ(y 0 ) for the flow. Let y N ∈π −1 Z 0 , N ≥ 1, be a sequence of periodic points with F N y N = y N such that their periods L N = ϕ N (y N ) for the flow F t satisfy where κ ∈ R, γ ∈ (0, 1) are constants, E N ∈ R is a bounded sequence with lim inf N →∞ |E N | > 0, and either (i) ω = 0 and ω N ≡ 0, or (ii) ω ∈ (0, π) and ω N ∈ (ω 0 − π/12, ω 0 + π/12) for some ω 0 . (Such a sequence of periodic points is said to have good asymptotics.) Then there do not exist approximate eigenfunctions on Z 0 .
Proof Using Proposition 6.5, the proof is identical to that of [26,Proposition 5.5].
By [19], for any finite subsystem Z_0, the existence of periodic points with good asymptotics in π̄^{−1} Z_0 is a C^2-open and C^∞-dense condition. Although [19] is set in the uniformly hyperbolic setting, the construction applies directly to the current setup, as we now explain. Assume that (Y, d) is a Riemannian manifold. Let Z̄_1 and Z̄_2 be two of the partition elements in Z and set Z_j = Int π̄^{−1} Z̄_j for j = 1, 2. Assume that Z_1, Z_2 are submanifolds of Y and that F and ϕ are C^r when restricted to Z_1 ∪ Z_2 for some r ≥ 2.
Let y 0 ∈ Z 1 be a fixed point for F and choose a transverse homoclinic point in Z 2 . Following [19], we construct a sequence of N -periodic points y N , N ≥ 1, for F with orbits lying in Z 1 ∪ Z 2 . The sequence automatically has good asymptotics except that in exceptional cases it may be that lim inf N →∞ |E N | = 0. By [19], the liminf is positive for a C 2 open and C r dense set of roof functions ϕ.
Combining this construction with Proposition 6.7, it follows that nonexistence of approximate eigenfunctions holds for an open and dense set of smooth Gibbs-Markov flows.
Mixing rates for nonuniformly hyperbolic flows
In this part of the paper, we show how the results for suspension flows in Part I can be translated into results for nonuniformly hyperbolic flows defined on an ambient manifold. In Section 7, we show how this is done under the assumption that condition (H) from Section 6 is valid. In Section 8, we describe a number of situations where condition (H) is satisfied. This includes all the examples considered here and in [26]. In Section 9, we consider in detail the planar infinite horizon Lorentz gas.
Nonuniformly hyperbolic flows and suspension flows
In this section, we describe a class of nonuniformly hyperbolic flows T t : M → M that have most of the properties required for T t to be modelled by a Gibbs-Markov flow. (The remaining property, condition (H) from Section 6, is considered in Section 8.) In Subsection 7.1, we consider a class of nonuniformly hyperbolic transformations f : X → X modelled by a Young tower [30,31], making explicit the conditions from [30] that are needed for this paper. In Subsection 7.2, we consider flows that are Hölder suspensions over such a map f and show how to model them, subject to condition (H), by a Gibbs-Markov flow. In Subsection 7.3, we generalise the Hölder structures in Subsection 7.2 to ones that are dynamically Hölder.
In applications, f is typically a first-hit Poincaré map for the flow T t and hence is invertible. Invertibility is used in Proposition A.1 but not elsewhere, so many of our results do not rely on injectivity of f .
7.1
Nonuniformly hyperbolic transformations f : X → X Let f : X → X be a measurable transformation defined on a metric space (X, d) with diam X ≤ 1. We suppose that f is nonuniformly hyperbolic in the sense that it is modelled by a Young tower [30,31]. We recall the metric parts of the theory; the differential geometry part leading to an SRB or physical measure does not play an important role here.
Product structure Let Y be a measurable subset of X. Let W s be a collection of disjoint measurable subsets of X (called "stable leaves") and let W u be a collection of disjoint measurable subsets of X (called "unstable leaves") such that each collection covers Y . Given y ∈ Y , let W s (y) and W u (y) denote the stable and unstable leaves containing y.
We assume that for all y, y ′ ∈ Y , the intersection W s (y) ∩ W u (y ′ ) consists of precisely one point, denoted z = W s (y) ∩ W u (y ′ ), and that z ∈ Y . Also we suppose there is a constant C 4 ≥ 1 such that Induced map Next, let {Y j } be an at most countable measurable partition of Y such that Y j = y∈Y j W s (y) ∩ Y for all j ≥ 1. Also, fix τ : Y → Z + constant on partition elements such that f τ (y) y ∈ Y for all y ∈ Y . Define F : Y → Y by F y = f τ (y) y. Let µ be an ergodic F -invariant probability measure on Y and suppose that τ is integrable. (It is not assumed that τ is the first return time to Y .) As in Section 3, we suppose that F (W s (y)) ⊂ W s (F y) for all y ∈ Y . Let Y denote the space obtained from Y after quotienting by W s , with natural projectionπ : Y → Y . We assume that the quotient map F : Y → Y is a Gibbs-Markov map as in Definition 2.1, with partition {Y j } and ergodic invariant probability measureμ =π * µ. Let s(y, y ′ ) denote the separation time on Y .
Contraction/expansion Let Y j =π −1 Y j ; these form a partition of Y and each Y j is a union of stable leaves. The separation time extends to Y , setting s(y, y ′ ) = s(πy,πy ′ ) for y, y ′ ∈ Y .
We assume that there are constants C_2 ≥ 1, γ ∈ (0, 1) such that for all n ≥ 0, y, y′ ∈ Y,

d(f^n y, f^n y′) ≤ C_2 γ^{ψ_n(y)} d(y, y′) for all y′ ∈ W^s(y),  (7.2)
d(f^n y, f^n y′) ≤ C_2 γ^{s(y,y′)−ψ_n(y)} for all y′ ∈ W^u(y),  (7.3)

where ψ_n(y) = #{j = 1, . . . , n : f^j y ∈ Y} is the number of returns of y to Y by time n. Note that conditions (3.1) and (3.2) are special cases of (7.2) and (7.3), where Ỹ can be chosen to be any fixed unstable leaf. In particular, all the conditions on F in Sections 3 and 6 are satisfied. In Sections 7.3, 8.4 and 9, we make use of the condition (7.4).

Remark 7.1 Further hypotheses in [30] ensure the existence of SRB measures on Y, Ȳ and X. These assumptions are not required here, and no special properties of µ and μ̄ (other than the properties mentioned above) are used.
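To unpack the role of ψ_n (an illustrative computation, not in the original): write τ_n = Σ_{j=0}^{n−1} τ ∘ F^j for the time of the n-th return to Y. For y ∈ Y one has f^{τ_n(y)} y = F^n y and ψ_{τ_n(y)}(y) = n, so evaluating (7.2) at time τ_n(y) recovers the induced-map contraction (3.1):

```latex
d(F^n y, F^n y')
\;=\; d\bigl( f^{\tau_n(y)} y,\; f^{\tau_n(y)} y' \bigr)
\;\le\; C_2\, \gamma^{\psi_{\tau_n(y)}(y)}\, d(y, y')
\;=\; C_2\, \gamma^{\,n}\, d(y, y'),
\qquad y' \in W^s(y).
% Here y' returns to Y at the same times as y: \tau is constant on partition
% elements, the partition elements are unions of stable leaves, and
% F(W^s(y)) \subset W^s(Fy).
```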
Remark 7.2
The abstract setup in [30] essentially satisfies all of the assumptions above. However, condition (7.2) is stated there in the slightly weaker form d(f^n y, f^n y′) ≤ C_2 γ^{ψ_n(y)}. As pointed out in [16], the stronger form (7.2) is satisfied in all known examples where the weaker form holds. Condition (7.4) is not stated explicitly in [30] but is an automatic consequence of the setup therein provided f : X → X is injective. We provide the details in Proposition A.1.
In the examples considered in this paper and in [26], the map f is a first return map for a flow and hence is injective, so condition (7.4) is not very restrictive. Condition (7.4) is also used in [26, Section 5.2], but is stated there in a slightly different form. In [26], the subspace X is not needed (and hence not mentioned) and the stable and unstable disks W^s(y), W^u(y) are replaced by their intersections with Y. Hence the condition in [26] translates into our present notation and holds by (7.4).
Hölder flows and observables
Given We say that w : Let X ⊂ M be a Borel subset and define C η (X) using the metric d restricted to X. We suppose that T h(x) x ∈ X for all x ∈ X, where h : X → R + lies in C η (X) and inf h > 0. In addition, we suppose that for any D 1 > 0 there exists D 2 > 0 such that We suppose that f is a nonuniformly hyperbolic transformation as in Subsection 7.1, with induced map F = f τ : Y → Y and so on.
Proposition 7.5 Suppose that the function χ : Y → R satisfies condition (H).
Dynamically Hölder flows and observables
The Hölder assumptions in Subsection 7.2 can be replaced by dynamically Hölder as follows. We continue to assume that inf h > 0.
Definition 7.6 The roof function h, the flow T t and the observable v are dynamically Hölder if v ∈ C 0,η (M ) for some η ∈ (0, 1] and there is a constant C ≥ 1 such that for all y, y ′ ∈ Y j , j ≥ 1, Also, we replace the assumption w ∈ C η,m (M ) by the condition that ∂ k t w lies in C 0,η (M ) and satisfies (b) for all k = 0, . . . , m.
It is easily verified that condition (6.1) remains valid under the more relaxed assumption on h in Definition 7.6(a). Also, it follows as in the proof of Proposition 7.5 that |ṽ(y, Next we estimate |ṽ(y, u)−ṽ(y ′ , u)| and |ṽ ) as in the proof of Proposition 7.5. Hence we can suppose that y, y ′ ∈ Y j for some j ≥ 1. Set z = W s (y) ∩ W u (y ′ ) and choose t, t ′ as in Definition 7.6(b). Then ).
Condition (H) for nonuniformly hyperbolic flows
In this section, we consider various classes of nonuniformly hyperbolic flows for which condition (H) in Section 6 can be verified. We are then able to apply Theorem 6.4 to obtain superpolynomial and polynomial mixing for such flows, as follows: (b) Suppose that µ(ϕ > t) = O(t^{−β}) for some β > 1 and assume absence of approximate eigenfunctions for F_t. Then there exist m ≥ 1 and C > 0 such that Proof Part (a) follows from the discussion in Section 7.2 (so ingredient (i) is automatic and ingredient (ii) is now assumed).
As described in Section 6.1, there is a measure-preserving conjugacy from F t to T t , so part (b) is immediate from Theorem 6.4 combined with Proposition 7.5.
The analogous result holds for nonuniformly hyperbolic flows and observables satisfying the dynamically Hölder conditions in Section 7.3.
We verify condition (H) for three classes of flows. In Subsection 8.1, we consider roof functions with bounded Hölder constants. In Subsection 8.2, we consider flows for which there is exponential contraction along stable leaves. In Subsection 8.3, we consider flows with an invariant Hölder stable foliation. These correspond to the situations mentioned in [26,Section 4.2].
Also, in Subsection 8.4, we briefly review the temporal distance function and a criterion for absence of approximate eigenfunctions.
Remark 8.5
In cases where h lies in C^η(X) and the dynamics on X is modelled by a Young tower with exponential tails (so µ_X(τ > n) = O(e^{−cn}) for some c > 0), it is immediate that ϕ ∈ L^q(Y) for all q and that condition (8.3) is satisfied. Assuming absence of approximate eigenfunctions, we obtain rapid mixing for such flows.
Flows with an invariant Hölder stable foliation
Let T_t : M → M be a Hölder nonuniformly hyperbolic flow as in Section 7.2. For simplicity, we suppose that (M, d) is a Riemannian manifold and that Y is a smoothly embedded cross-section for the flow. We assume that the flow possesses a T_t-invariant Hölder stable foliation W^{ss} in a neighbourhood of Λ. (A sufficient condition for this to hold is that Λ is a partially hyperbolic attracting set with a DT_t-invariant dominated splitting T_Λ M = E^{ss} ⊕ E^{cu}, see [3].) We also assume that diam Y can be chosen arbitrarily small. In this subsection, we show how to use the stable foliation W^{ss} for the flow to prove that χ is Hölder, hence verifying the hypotheses in Section 6.1.
First, we show that if W s (y) and W ss (y) coincide for all y ∈ Y , then F t : Y ϕ → Y ϕ is already a skew product (so χ = 0). Proposition 8.7 Suppose that W s (y) and W ss (y) coincide for all y ∈ Y . Then ϕ is constant along stable leaves W s (y), y ∈ Y .
Hence ϕ|_{W^{ss}(y_0)} ≡ ϕ(y_0). Let Ỹ = W^u(y_0) for some fixed y_0 ∈ Y and define the new cross-section to the flow Y^* = ⋃_{y∈Ỹ} W^{ss}(y). Shrinking Y if necessary, there exists a unique continuous function r : Y → R with |r| ≤ ½ inf ϕ such that r|_{Ỹ} ≡ 0 and {T_{r(y)}(y) : y ∈ Y} ⊂ Y^*. Moreover, r is Hölder since Y is smoothly embedded in M and Y^* is Hölder by the assumption on the regularity of the stable foliation W^{ss}. Define the new roof function ϕ^* : Y^* → R⁺, ϕ^*(T_{r(y)} y) = ϕ(y) + r(F y) − r(y).
We observe that ϕ * is the return time for the flow T t to the cross-section Y * .
Lemma 8.8 Under the above assumption on W ss , condition (H) holds.
Temporal distance function
Dolgopyat [18,Appendix] showed that for Axiom A flows a sufficient condition for absence of approximate eigenfunctions is that the range of the temporal distance function has positive lower box dimension. This was extended to nonuniformly hyperbolic flows in [25,26]. Here we recall the main definitions and result. We assume that condition (H) holds, so that the suspension flow Y ϕ → Y ϕ is a Gibbs-Markov flow (and hence conjugate to a skew product flow). We also assume the dynamically Hölder setup from Section 7.3. In particular, the Poincaré map f : X → X is nonuniformly hyperbolic as in Section 7.1 and Y has a local product structure. Also we assume that the roof function ϕ has bounded Hölder constants along unstable leaves, so condition (8.2) is satisfied.
Let y_1, y_4 ∈ Y and set y_2 = W^s(y_1) ∩ W^u(y_4), y_3 = W^u(y_1) ∩ W^s(y_4). Define the temporal distance function D : Y × Y → R,

D(y_1, y_4) = Σ_{n=−∞}^{∞} (ϕ(F^n y_1) − ϕ(F^n y_2) − ϕ(F^n y_3) + ϕ(F^n y_4)).

It follows from the construction in [26, Section 5.3] (which uses (7.4) and (8.2)) that inverse branches F^n y_i for n ≤ −1 can be chosen so that D is well-defined. Remark 8.10 For Axiom A attractors, Z_0 can be taken to be connected and D is continuous, so absence of approximate eigenfunctions is ensured whenever D is not identically zero. For nonuniformly hyperbolic flows, where the partition {Y_j} is countably infinite, Z_0 is a Cantor set of positive Hausdorff dimension [25, Example 5.7]. In general it is not clear how to use this property since D is generally at best Hölder. However, for flows with a contact structure, a formula for D in [21, Lemma 3.2] can be exploited and the lower box dimension of D(Z_0 × Z_0) is indeed positive, see [25, Example 5.7]. The arguments in [25, Example 5.7] apply to general Gibbs-Markov flows with a contact structure. A special case of this is the Lorentz gas examples considered in Section 9.
Billiard flows associated to infinite horizon Lorentz gases
In this section we show that billiard flows associated to planar infinite horizon Lorentz gases satisfy the assumptions of Section 8.1. In particular, we prove decay of correlations with decay rate O(t −1 ).
Background material on infinite horizon Lorentz gases is recalled in Subsection 9.1 and the decay rate O(t −1 ) is proved in Subsection 9.2. In Subsection 9.3, we show that the same decay rate holds for semidispersing Lorentz flows and stadia. In Subsection 9.4, we show that the decay rate is optimal for the examples considered in this section.
Background on the infinite horizon Lorentz gas
We begin by recalling some background on billiard flows; for further details we refer to the monograph [13].
Let T^2 denote the two-dimensional flat torus, and let us fix finitely many disjoint convex scatterers S_k ⊂ T^2 with C^3 boundaries of nonvanishing curvature. The complement Q = T^2 \ ∪_k S_k is the billiard domain, and the billiard dynamics are those of a point particle that performs uniform motion with unit speed inside Q and specular reflections (angle of reflection equals angle of incidence) off the scatterers, that is, at the boundary ∂Q. The resulting billiard flow is T_t : M → M, where the phase space M = Q × S^1 is a Riemannian manifold, and T_t preserves the (normalized) Lebesgue measure µ_M (often called the Liouville measure in the literature).
There is a natural Poincaré section X = ∂Q × [−π/2, π/2] ⊂ M corresponding to collisions (with outgoing velocities), which gives rise to the billiard map f : X → X, with absolutely continuous invariant probability measure µ_X. The time until the next collision, the free flight function h : X → R^+, is defined to be h(x) = inf{t > 0 : T_t x ∈ X}. The Lorentz gas has finite horizon if h ∈ L^∞(X) and infinite horizon if h is unbounded.
In the finite horizon case, [4] recently proved exponential decay of correlations. In this section, we prove the following.

Theorem 9.1 Let η ∈ (0, 1]. In the infinite horizon case, there exists m ≥ 1 such that ρ_{v,w}(t) = O(t^{−1}) for all v ∈ C^η(M) ∩ C^{0,η}(M) and w ∈ C^{η,m}(M) (and more generally for the class of observables defined in Corollary 9.6 below).
Let us fix some terminology and notations. The billiard map f : X → X is discontinuous, with singularity set S corresponding to the preimages of grazing collisions. Here, S is the closure of a countable union of smooth curves, X \ S consists of countably many connected components X m , m ≥ 1, and f | Xm is C 2 . If x, x ′ ∈ X m for some m ≥ 1, then, in particular, x, x ′ and f x, f x ′ lie on the same scatterer (even when the configuration is unfolded to the plane). Throughout our exposition, d(x, x ′ ) denotes the Euclidean distance of the two points, i.e. the distance that is generated by the Riemannian metric on X (or M ).
It follows from geometric considerations in the infinite horizon case that µ_X(h > t) = O(t^{−2}). Moreover, as the trajectories are straight lines, we have (9.1) and (9.2). The billiard maps considered here (both finite and infinite horizon) have uniform contraction and expansion, even for the map f itself. There exist stable and unstable manifolds of positive length for almost every x ∈ X, which we denote by W^s(x) and W^u(x) respectively, and there exist constants C_2 ≥ 1, γ ∈ (0, 1) such that (9.3) and (9.4) hold for all x, x′ ∈ X, n ≥ 0. This follows from the uniform hyperbolicity properties of f, see in particular [13, Formula (4.19)]. Furthermore, there is a constant C_5 ≥ 1 such that (9.5) and (9.6) hold for x, x′ ∈ X. To verify (9.5), note that d(x, x′) consists of a position and a velocity component. In the course of the free flight, the velocities do not change, while for x′ ∈ W^s(x) the position component can only shrink, as stable manifolds correspond to converging wavefronts. A similar argument applies to (9.6).
Remark 9.2 (a) In the remainder of the section -and in particular in the proof of Proposition 9.5 below -we apply (9.1) repeatedly, but always in the case when either x ′ ∈ W s (x), or f x ′ ∈ W u (f x). As all iterates f n , n ≥ 0 are smooth on local stable manifolds (while all iterates f −n , n ≥ 0 are smooth on local unstable manifolds), both of these conditions imply x, x ′ ∈ X m for some m ≥ 1.
(b) For larger values of t than those in (9.5), we note that d(T t x, T t x ′ ) may grow large temporarily: it can happen that one of the trajectories has already collided with some scatterer, while the other has not, hence even though the two points are close in position, the velocities differ substantially. Similar comments apply to (9.6). This phenomenon is the main reason why we require the notion of dynamically Hölder flows T t in Definition 7.6.
It follows from Lemma 8.3 and Corollary 9.4 that condition (H) is satisfied. Hence by Corollary 8.1(a), the suspension flow F t : Y ϕ → Y ϕ is a Gibbs-Markov flow as defined in Section 6. By Proposition 9.7, µ(ϕ > t) = O(t −2 ). By Corollary 9.6, the flows and observables are dynamically Hölder (Definition 7.6). Hence it follows from Corollary 8.1(b) that absence of approximate eigenfunctions implies decay rate O(t −1 ).
Finally, we exclude approximate eigenfunctions. By Corollary 9.4, condition (8.2) holds and hence the temporal distance function D : Y × Y → R is defined as in Section 8.4. Let Z_0 ⊂ Y be a finite subsystem and let Z̄_0 = π̄^{−1} Z_0. The presence of a contact structure implies by Remark 8.10 that the lower box dimension of D(Z̄_0 × Z̄_0) is positive. Hence absence of approximate eigenfunctions follows from Lemma 8.9.
Semi-dispersing Lorentz flows and stadia
In this subsection we discuss two further classes of billiard flows and show that the scheme presented above can be adapted to cover these examples, resulting in Theorem 9.13.
Semi-dispersing Lorentz flows are billiard flows in the planar domain obtained as R \ ∪_k S_k, where R is a rectangle and the S_k ⊂ R are finitely many disjoint convex scatterers with C^3 boundaries of nonvanishing curvature. By the unfolding process (tiling the plane with identical copies of R, and reflecting the scatterers S_k across the sides of all these rectangles) an infinite periodic configuration is obtained, which can be regarded as an infinite horizon Lorentz gas.
Bunimovich stadia are convex billiard domains enclosed by two semicircular arcs (of equal radii) connected by two parallel line segments. An unfolding process could reduce the bounces on the parallel line segments to long flights in an unbounded domain, however, there is another quasi-integrable effect here corresponding to sequences of consecutive collisions on the same semi-circular arc.
Both of these examples have been extensively studied in the literature, see for instance [9,13,14,25,6] and references therein. A common feature of the two examples is that the billiard map itself is not uniformly hyperbolic; however, there is a geometrically defined first return map which has uniform expansion rates. As before, the billiard domain is denoted by Q, and the billiard flow is T_t : M → M where M = Q × S^1. However, this time we prefer to denote the natural Poincaré section ∂Q × [−π/2, π/2] ⊂ M by X̃, the corresponding billiard map as f̃ : X̃ → X̃, and the free flight function as h̃ : X̃ → R^+ where h̃(x) = inf{t > 0 : T_t x ∈ X̃}. Then, as mentioned above, there is a subset X ⊂ X̃ such that the first return map of f̃ to X has good hyperbolic properties. We denote this first return map by f : X → X. The corresponding free flight function h : X → R^+ is given by h(x) = inf{t > 0 : T_t x ∈ X}. Let us, furthermore, introduce the discrete return time r̃ : X → Z^+ given by r̃(x) = min{n ≥ 1 : f̃^n x ∈ X}.
In the case of the semi-dispersing Lorentz flow, X corresponds to collisions on the scatterers S_k. In the case of the stadium, X corresponds to first bounces on semi-circular arcs, that is, x ∈ X if x is on one of the semi-circular arcs, but f̃^{−1} x is on another boundary component (on the other semi-circular arc, or on one of the line segments).
The following properties hold. Unless otherwise stated, standard references are [13, Chapter 8] and [14]. As in Section 9.1, d(x, x′) always denotes the Euclidean distance between the two points, generated by the Riemannian metric.
• There is a countable partition X \ S = ∪_{m=1}^∞ X_m such that f|_{X_m} is C^2 and r̃|_{X_m} is constant for any m ≥ 1. We refer to the partition elements X_m with r̃|_{X_m} ≥ 2 as cells; these are of two different types:

- Bouncing cells are present both in the semi-dispersing billiard examples and in stadia. For these, one iteration of f|_{X_m} consists of several consecutive reflections on the flat boundary components, that is, the line segments. By the above mentioned unfolding process, these reflections reduce to trajectories along straight lines in the associated unbounded table.

- Sliding cells are present only in stadia. For these, one iteration of f|_{X_m} consists of several consecutive collisions on the same semi-circular arc.
• inf h > 0 and sup h̃ < ∞; however, there is no uniform upper bound on h, and no uniform lower bound for h̃.
• f : X → X is uniformly hyperbolic in the sense that stable and unstable manifolds exist for almost every x, and Formulas (9.3) and (9.4) hold. This follows from the uniform expansion rates of f , see [13,Formula (8.22)].
• If x, x′ ∈ X_m where X_m is a bouncing cell, in the associated unfolded table the flow trajectories until the first return to X are straight lines, hence (9.1) follows. If x, x′ ∈ X_m and X_m is a sliding cell, the induced roof function is uniformly Hölder continuous with exponent 1/4, as established in the proof of [6, Theorem 3.1]. The same geometric reasoning applies to h̃_k(x) = h̃(x) + h̃(f̃x) + · · · + h̃(f̃^{k−1}x) as long as k ≤ r̃(x). Summarizing, we have, for x, x′ ∈ X_m, m ≥ 1 and k ≤ r̃(x) − 1: in particular, |h(x) − h(x′)| ≪ d(x, x′)^{1/4} + d(fx, fx′)^{1/4}.
• (9.2) has to be relaxed to d(T_t x̃, T_{t′} x̃) ≤ |t − t′| for all x̃ ∈ X̃ and t, t′ ∈ [0, h̃(x̃)). (9.12)

• (9.5) has to be relaxed to the following two formulas, (9.13) and (9.14); similarly, (9.6) has to be relaxed to (9.15) and (9.16).

To verify (9.16), let us note first that d(x, x′) consists of a position and a velocity component, and in the course of a free flight the velocities do not change. Now the mechanism of hyperbolicity for stadia is defocusing, see for instance [13, Figure 8.1], which guarantees that for x′ ∈ W^u(x) the position component of d(x, x′) in the course of the free flight is dominated by the position component at the end of the free flight. (9.14) holds for analogous reasons. To verify (9.15), by uniform hyperbolicity of f (in particular Formula (9.4), see above), it is enough to consider how f̃ evolves unstable vectors between two consecutive applications of f, i.e. within a series of sliding or bouncing collisions. On the one hand, again by the defocusing mechanism, f̃ does not contract the p-length of unstable vectors, see [13, Section 8.2]. On the other hand, for an unstable vector, the ratio of the Euclidean and the p-length is √(1 + V²)/cos ϕ, where V is the slope of the unstable vector in the standard billiard coordinates and ϕ is the collision angle, see [13, Formula (8.21)]. Now |V| is uniformly bounded, see [13, Formula (8.18)], while cos ϕ is constant in the course of a sequence of consecutive sliding or bouncing collisions. (9.13) holds by an analogous argument.
• The map f : X → X can be modeled by a Young tower with exponential tails. In particular, there exists a subset Y ⊂ X and an induced map F = f^τ : Y → Y that possesses the properties discussed in Section 7.1, including (7.4). The tails of the return time τ : Y → Z^+ are exponential, i.e. µ(τ > n) = O(e^{−cn}) for some c > 0. Moreover, the construction can be carried out so that diam Y is as small as desired.

The existence of the Young tower satisfying these properties is established in [14]. As in Subsection 9.1, we introduce the induced roof function ϕ = Σ_{ℓ=0}^{τ−1} h ∘ f^ℓ.

• By construction, for y, y′ ∈ Y_j, j ≥ 1 and ℓ ≤ τ fixed, f^ℓ y and f^ℓ y′ always belong to the same cell of X.
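The inducing construction used here (a first return map F = f^τ to a subset Y, together with the induced roof function ϕ obtained by summing h along the excursion) can be illustrated on a toy one-dimensional system. The doubling map, the roof 1 + x, and the set Y = [0, 1/2) below are our own illustrative choices, standing in for the billiard objects, not the actual maps of the paper.

```python
def f(x):
    return (2.0 * x) % 1.0   # toy base map (doubling map), a stand-in for the billiard map

def h(x):
    return 1.0 + x           # toy roof / free flight function

Y_MAX = 0.5                  # inducing set Y = [0, 1/2)

def induced(x):
    """First return time tau(x) to Y and induced roof phi(x) = sum_{l < tau} h(f^l x)."""
    tau, phi = 0, 0.0
    while True:
        phi += h(x)          # accumulate the roof along the excursion
        x = f(x)
        tau += 1
        if x < Y_MAX:        # returned to Y: x is now F(x0) = f^tau(x0)
            return tau, phi, x

tau, phi, x1 = induced(0.3)  # 0.3 -> 0.6 (outside Y) -> 0.2 (back in Y), so tau = 2
```

The same bookkeeping, with f the billiard map and h the free flight, produces the induced roof ϕ = Σ_{ℓ=0}^{τ−1} h ∘ f^ℓ of the text.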
The adapted version of Proposition 9.5 reads as follows.
Finite Element Analysis of 3D Printed Steel-fiber RC Beam for Mechanical Performance
In this study, referring to the existing test data and numerical simulation parameters of Bos, and using the compression constitutive relationship of concrete proposed by Thorenfeldt and the tensile constitutive model proposed by Petersson, numerical simulations of the flexural toughness of cast and 3D printed steel fiber notched beams are carried out with the finite element software ABAQUS. The resulting load-deflection curves are compared with the test curves of Bos to verify the rationality of the model. Based on the verified model parameters, ABAQUS is then used to simulate four-point bending tests of steel fiber reinforced 3D printed concrete and plain concrete, focusing on the influence of steel fiber content and notch depth on the load-deflection curve.
Introduction
3D printing technology builds a real object from electronic data by continuously superimposing extruded material, transforming a digital 3D model into a physical one [1]. 3D printing is one of the core technologies in the development of advanced manufacturing and has broad prospects in many fields. At present, many research institutions are trying to apply 3D printing technology to the construction industry, in order to promote its modernization and accelerate its transformation and upgrading. So far, the application of 3D printed concrete in construction has been largely limited to unreinforced structures, because current printing technology cannot print reinforcement and concrete materials at the same time [2]; moreover, the strength, rheology and setting time of traditional cement-based materials are difficult to match to the requirements of 3D printed buildings.
With continued exploration, researchers have found that adding a certain amount of steel fiber to cement-based materials can restrain the deformation of concrete components, effectively reduce brittleness under compression, improve the tensile strength and elastic modulus of printed concrete, improve the failure mode under compression, and greatly improve the seismic performance and safety of buildings [3][4]. At the same time, adding steel fiber to concrete makes it possible to print the reinforcement and the concrete at the same time [5]. Therefore, research on steel fiber reinforced cement-based materials has broad development prospects in the field of 3D printed concrete.
In this study, short straight steel fibers are added to the cement-based material, which allows the reinforcement and the concrete to be printed at the same time and can significantly improve the mechanical properties of concrete beams. Finite element software is first used to simulate the three-point bending tests of F. P. Bos [6]; after the rationality of the model is demonstrated, four-point bending tests are simulated to explore the mechanical properties of steel fiber reinforced 3D printed concrete beams.

IOP Conf. Series: Earth and Environmental Science 643 (2021) 012009, IOP Publishing, doi:10.1088/1755-1315/643/1/012009

Literature Review

3D printing technology, which adopts the concept of additive manufacturing, has been widely promoted thanks to its low processing cost, high production efficiency and high-precision processing. In the construction field, 3D printing can achieve mould-free, rapid and fine forming, and so has come to play a key role: at first it was used to design and make conceptual architectural models, and now it can realize the integrated construction of building structures. The rapid development of the technology also promotes the progress of the whole construction industry. Many scholars have done a great deal of research on the selection of 3D printing concrete materials and on the working performance and mechanical properties of printed components.
Development of 3D printing technology
The idea of 3D printing first appeared at the end of the 19th century. In 1981, Householder, R. F. [7] and others first proposed the concept of selective laser sintering, that is, making solids by layered manufacturing and layer-by-layer accumulation. In 1980, Hideo Kodama [8] proposed a rapid prototyping system similar to the later stereolithography 3D printing technology. In 1986, Charles Hull [9] first proposed the stereolithography 3D printing process and invented the world's first 3D printer: a 3D digital model is created with modeling software or solid scanning, sliced layer by layer with dedicated software into a slice file, and the solid structure is then printed layer by layer under the guidance of the data file, realizing the transformation from two dimensions to three. In the same year, Carl Deckard [10] invented selective laser sintering technology; in 1987 he built a 3D printer using it, and a utility patent was first filed in 1989 [11]. In 1989, S. Scott Crump and Lisa Crump invented fused deposition modeling, in which thermoplastics are deposited onto the build platform during the 3D printing process; this remains a principal 3D printing method [12].
In 1997, the American scholar Joseph Pegna [13] proposed a construction method for free-form components based on layer-by-layer accumulation and selective solidification of cement materials, applying 3D printing technology to the construction field for the first time. At present, the main additive construction technologies applied in the construction field include contour crafting, proposed by Behrokh Khoshnevis [14]; the D-Shape process invented by Enrico Dini of Monolite, UK; concrete printing, proposed by Richard Buswell of Loughborough University; and the digital construction technology proposed by Professor Gershenfeld of the Massachusetts Institute of Technology.
Development of steel fiber reinforced concrete technology
Steel fiber reinforced concrete (SFRC) is a concrete with cement as binder, reinforced by a certain amount of randomly distributed steel fibers. The random distribution of short steel fibers has significant anti-cracking, strengthening and toughening effects on concrete. In 1910, H. F. Porter put forward the concept of "steel fiber reinforced concrete" and carried out research on uniformly mixing steel fiber into concrete as a reinforcement material. The results showed that the strength and stability of concrete can be improved by adding steel fiber to ordinary reinforced concrete. In 1963, J. P. Romualdi and G. B. Batson published a series of research reports on the strengthening mechanism of steel fiber reinforced concrete. After the theory of average steel fiber spacing was put forward, the development, testing and application of steel fiber reinforced concrete advanced rapidly. Ni Kun et al. studied the flexural behavior of concrete beams from three aspects: steel fiber content, fiber aspect ratio and water-binder ratio. Zhang Jingcai proposed the double-K criterion for steel fiber reinforced concrete.
Development of fiber reinforced 3D printing concrete tests
Since the 1980s, a large amount of research has been carried out on the theory and engineering application of fiber reinforced concrete. The application of fiber reinforced concrete materials in engineering is expanding, and the types of fiber added are increasingly rich: steel fiber, polypropylene fiber, glass fiber, basalt fiber, carbon fiber, polyvinyl alcohol fiber, etc. In 2006, Markovic verified that 6 mm short straight fibers can pass through the nozzle smoothly during printing, without causing blockage or significantly affecting the workability of the concrete. In 2015, Ma Yihe added chopped glass fiber and hydroxypropyl methylcellulose to concrete to increase its cohesion, making it suitable for rapid-prototyping 3D printed buildings. In 2017, Hambach and Volkmer first studied 3D printed composites of Portland cement paste and reinforcing fibers (3-6 mm basalt, glass and carbon fibers), producing composites with high bending and compressive strength. In the same year, Biranchi Panda et al. explored the influence of glass fibers of different contents and lengths on the tensile properties and fiber orientation of 3D printed concrete. In 2018, F. P. Bos et al. carried out three-point bending tests and numerical simulations of steel fiber reinforced 3D printed concrete beams, compared the mechanical properties of plain and fiber reinforced concrete, and verified the accuracy of the numerical simulations.
Although 3D printing has developed rapidly in the construction field in recent years, research on 3D printed concrete mainly focuses on the mix proportion of cement-based materials and on the basic mechanical properties and working performance of the concrete. There is little research on printing reinforcement and concrete at the same time, few numerical simulations or experimental studies of fiber reinforced 3D printed concrete, and the application of steel fiber in the 3D printing process lacks sufficient data support.
Establishment of finite element model
In this paper, the concrete damage plasticity (CDP) model proposed by Lubliner et al. [15] is used to describe the concrete both with and without fibers, and the fibers, when present, are assumed to be uniformly distributed. In this simulation the steel fibers are therefore smeared into the concrete and form a whole with it: the steel fiber concrete is treated as a continuous, homogenized material, and the effect of the fibers is reflected in the material constitutive relationship used to build the model.
Constitutive relation of materials
The constitutive relationship of steel fiber reinforced concrete can be divided into compression constitutive relation and tensile constitutive relation. However, a large number of studies show that the reinforcement effect of steel fiber on the compressive performance of concrete is not obvious, and the finite element model is controlled by tensile failure, so the influence of steel fiber is not considered in the compression constitutive relationship.
Compression constitutive relation
In this paper, the constitutive relation of concrete under compression proposed by Thorenfeldt et al. is adopted:

σ_c = f_cm · n(ε_c/ε_c1) / (n − 1 + (ε_c/ε_c1)^{nk}) (1)

where n = 0.80 + f_cm/17, k = 1 for ε_c ≤ ε_c1 and k = 0.67 + f_cm/62 for ε_c > ε_c1 (f_cm in MPa); f_cm is the peak stress, ε_c1 is the strain corresponding to the peak stress, and f_ck is the standard value of the axial compressive strength.
The constitutive relation proposed by Thorenfeldt et al. consists of two stages: before and after concrete cracking. However, the CDP model in ABAQUS has a linear elastic stage before concrete cracking, which is not consistent with the Thorenfeldt model. Therefore, the compressive constitutive relationship of concrete in this project is divided into three stages, as shown in Figure 1: (I) linear elastic stage; (II) hardening stage before peak stress (corresponding to the Thorenfeldt model before cracking); (III) softening stage after reaching peak stress (corresponding to the Thorenfeldt model after cracking).
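The compression response described above can be evaluated numerically. The sketch below implements the Thorenfeldt expression in its commonly quoted Collins-Mitchell form (n = 0.80 + f_cm/17, descending-branch k = 0.67 + f_cm/62, f_cm in MPa); the values f_cm = 40 MPa and ε_c1 = 0.002 are illustrative assumptions, not parameters taken from this study.

```python
def thorenfeldt_stress(eps_c, f_cm=40.0, eps_c1=0.002):
    """Thorenfeldt et al. compression curve:
    sigma = f_cm * n*eta / (n - 1 + eta**(n*k)), with eta = eps_c/eps_c1,
    n = 0.80 + f_cm/17, and k = 1 on the ascending branch,
    k = 0.67 + f_cm/62 on the descending branch (f_cm in MPa)."""
    n = 0.80 + f_cm / 17.0
    eta = eps_c / eps_c1
    k = 1.0 if eta <= 1.0 else 0.67 + f_cm / 62.0
    return f_cm * n * eta / (n - 1.0 + eta ** (n * k))

# at eps_c1 we have eta = 1, so sigma = f_cm * n / (n - 1 + 1) = f_cm: the peak
curve = [(e, thorenfeldt_stress(e)) for e in (0.0005, 0.001, 0.002, 0.003, 0.004)]
```

The curve rises monotonically to the peak stress f_cm at ε_c1 and then softens, matching stages (II) and (III) above; stage (I) would replace the initial portion with the linear elastic branch required by the CDP model.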
Tensile constitutive relation
For the behavior of concrete under tension, many researchers have proposed different constitutive relations, but these basically require two ingredients, namely the fracture energy G_F and a strain softening curve. The fracture energy is defined as

G_F = ∫ σ dw (2)

where σ is the normal tensile stress and w is the normal crack width. In order to avoid mesh dependence of the results, Rots et al. pointed out that the fracture energy must be released within a certain crack width, that is, G_F is constant and independent of the mesh size. Therefore, in the CDP model established in this project, the crack development width is set as a fixed value, where w_c is the crack opening displacement at which the stress is completely released, and ε_f is the corresponding strain at which the stress is completely released. For the strain softening curve, the constitutive relation of plain concrete adopts the uniaxial tensile stress-strain curve proposed by Hordijk, D. A., as shown in Fig. 2. However, the fracture energy of SFRC is much larger than that of plain concrete, and there is no generally accepted formula for SFRC after cracking. Therefore, the post-cracking behavior of SFRC is expressed by interpolating stress-strain values in the CDP model. Here w_c is taken equal to the grid size and l_c is the average cell area.
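To make the fracture energy definition concrete, the sketch below evaluates the Hordijk softening curve and checks numerically that integrating the stress over the crack opening up to w_c recovers G_F. The constants c1 = 3, c2 = 6.93 and the relation w_c = 5.14 G_F/f_t are the commonly quoted Hordijk values; the numbers chosen for f_t and G_F are illustrative assumptions, not material data from this study.

```python
import math

C1, C2 = 3.0, 6.93  # commonly quoted Hordijk softening constants

def hordijk_stress(w, f_t, w_c):
    """Tensile stress as a function of crack opening w (Hordijk-type softening)."""
    s = w / w_c
    return f_t * ((1.0 + (C1 * s) ** 3) * math.exp(-C2 * s)
                  - s * (1.0 + C1 ** 3) * math.exp(-C2))

f_t = 3.0                 # tensile strength [MPa], illustrative
G_F = 0.12                # fracture energy [N/mm], illustrative
w_c = 5.14 * G_F / f_t    # crack opening at complete stress release

# G_F = integral of sigma dw over [0, w_c], here via the trapezoidal rule
N = 2000
h = w_c / N
area = sum(0.5 * h * (hordijk_stress(i * h, f_t, w_c)
                      + hordijk_stress((i + 1) * h, f_t, w_c))
           for i in range(N))
```

The stress starts at f_t, vanishes at w_c, and the enclosed area reproduces G_F to within a fraction of a percent, which is exactly the mesh-independence property attributed to Rots et al. above.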
The residual stress f_R,j is determined by formulas (4) and (5), where F_L is the load corresponding to the limit of proportionality, j = 1, 2, 3, 4 corresponds to crack opening widths of 0.5, 1.5, 2.5 and 3.5 mm respectively, F_j is the load at the corresponding crack opening width, and the values of h_sp and l are given in Table 2.
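The residual-strength formulas are not reproduced in the text above. In the EN 14651 notched-beam test on which this kind of evaluation is usually based, the residual flexural strength takes the form f_R,j = 3 F_j l / (2 b h_sp²); the standard specimen dimensions used below (span l = 500 mm, width b = 150 mm, ligament height h_sp = 125 mm) are our assumption, since Table 2 is not reproduced here.

```python
def residual_flexural_strength(F_j, l=500.0, b=150.0, h_sp=125.0):
    """EN 14651-style residual flexural strength f_R,j [MPa] from the load F_j [N]
    at CMOD_j, for a notched beam of span l, width b and ligament height h_sp [mm]."""
    return 3.0 * F_j * l / (2.0 * b * h_sp ** 2)

# e.g. a 10 kN load recorded at CMOD_1 = 0.5 mm
f_R1 = residual_flexural_strength(10_000.0)
```

With the assumed geometry, a 10 kN load corresponds to a residual strength of 3.2 MPa; the same function evaluated at F_2, F_3, F_4 gives the remaining f_R,j values.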
Fig. 4 FE model
The left end of the concrete beam is restrained vertically and horizontally, and the right end is restrained vertically. The load is applied above the crack opening.
Four point bending test simulation
Comparison of the ABAQUS simulation results with the test results of F. P. Bos [6] shows that the three-point bending simulations for different notch opening widths follow the same trend as the load-displacement curves measured in the tests. The simulated damage initiation and damage evolution points are in good agreement with the experimental data, which verifies the feasibility of the modeling process and the reliability of the simulation results. The same modeling method is therefore used in the following numerical simulations.
Conclusion
The mechanical properties of 3D printed steel fiber reinforced concrete beam and plain concrete beam models were analyzed by finite element simulation. For the 3D printed steel fiber reinforced concrete beam model, fiber incorporation can significantly improve the flexural strength. Although the fiber orientation is strongly aligned in the longitudinal direction, it has no significant effect on the performance in the tested direction. Based on Thorenfeldt's compressive constitutive model and a customized tensile constitutive law, the stress and strain values determined from the CMOD tests and the mesh geometry fit the experimental results reasonably well. However, for large crack separations, an accurate model needs to be calibrated against uniaxial tensile tests, because the stress values cannot be determined unambiguously from CMOD tests.
Medium Modification of Charm Production in Relativistic Heavy Ion Collisions due to Pre-equilibrium Dynamics of Partons at $\sqrt{s_{\textrm{NN}}}$ = 0.2--5.02 TeV
We study the production dynamics of charm quarks in the parton cascade model for relativistic heavy ion collisions at RHIC and LHC energies. The model is eminently suited for a study of the pre-equilibrium dynamics of charm quarks at modest transverse momenta. The treatment is motivated by QCD parton picture and describes the dynamics of an ultra-relativistic heavy-ion collision in terms of cascading partons which undergo scattering and multiplication while propagating. We find considerable suppression of charm quarks production in $AA$ collisions compared to those for $pp$ collisions at the same $\sqrt{s_{\text {NN}}}$ scaled by number of collisions. This may be important for an accurate determination of energy loss suffered by charm quarks while traversing the thermalized quark gluon plasma.
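The suppression described in the abstract, AA yields compared to pp yields at the same energy scaled by the number of binary collisions, is conventionally quantified by the nuclear modification factor R_AA = (dN_AA/dp_T) / (N_coll · dN_pp/dp_T). The toy spectra and the N_coll value below are purely illustrative numbers, not results from this paper.

```python
def r_aa(dn_aa, dn_pp, n_coll):
    """Nuclear modification factor per p_T bin:
    R_AA = (dN_AA/dp_T) / (N_coll * dN_pp/dp_T); R_AA < 1 signals suppression."""
    return [aa / (n_coll * pp) for aa, pp in zip(dn_aa, dn_pp)]

# toy, purely illustrative p_T spectra (arbitrary units)
pp_spectrum = [100.0, 40.0, 10.0, 2.0]
aa_spectrum = [x * 50.0 * 0.6 for x in pp_spectrum]  # N_coll = 50, uniform 40% suppression
ratios = r_aa(aa_spectrum, pp_spectrum, 50.0)
```

In the constructed example every bin gives R_AA = 0.6; in a real measurement the deviation of R_AA from unity, bin by bin, carries the p_T dependence of the medium modification.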
I. INTRODUCTION
The Quark Gluon Plasma (QGP), a deconfined strongly interacting matter whose existence was predicted by lattice Quantum Chromodynamics calculations (see, e.g., Ref. [1] for a recent review), is now routinely produced in relativistic heavy ion collisions at the BNL Relativistic Heavy Ion Collider and the CERN Large Hadron Collider. It is believed to have filled the early universe until a few microseconds after the Big Bang. The study of the QGP has remained one of the most rewarding disciplines of modern nuclear physics for more than three decades.
The observation of a large elliptic flow of hadrons [2,3] and of jet quenching [4,5], due to the energy loss of high energy partons in the hot and dense medium, are the most prominent early signatures of the formation of the QGP in these collisions. Additional confirmation has been provided by the detection of thermal radiation of photons from these collisions [6][7][8]. Unexpected surprises have been provided by parton recombination as a mechanism of hadronization [9] and by the very low viscosity (see, e.g., Refs. [10][11][12]).
The coming years will explore its properties with a great deal of accuracy, and once the Facility for Antiproton and Ion Research, Darmstadt (FAIR) and the Nuclotron-based Ion Collider Facility, Dubna (NICA) start operating, the next frontier of this fascinating field, namely the QGP at high baryon density and low temperature, which perhaps forms the core of large neutron stars [13], will be open for exploration. The Future Circular Collider (FCC) will provide an opportunity to study pp and AA collisions at unprecedentedly high centre of mass energies [14][15][16].
Charm quarks have emerged as a valuable probe of the evolution dynamics of the quark gluon plasma, a fact realized quite early in the literature. The large mass of charm quarks ensures that they are produced only in processes involving a sufficiently large Q², which makes these interactions amenable to perturbative QCD calculations. Later interactions conserve charm, and hadrons containing charm are easily identified. More than three decades ago, Svetitsky [17] obtained drag and diffusion coefficients for charm quarks by considering that they execute a Brownian motion in the QGP. A first estimate of the radiative energy loss of heavy quarks was also obtained by the authors of Ref. [18] using some simplifying assumptions. These early studies have by now been brought to a very high degree of sophistication. The energy loss suffered by heavy quarks due to scatterings and the radiation of gluons has been estimated, and its consequences have been explored in detail (see, e.g., [19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38]). The temperature dependence of the drag coefficient has also been calculated using lattice QCD [39][40][41]. A phenomenological extension of the Wang-Huang-Sarcevic model [42,43] was used by the authors of Ref. [44] to extract the energy loss of charm quarks in the quark gluon plasma and their azimuthal correlations [45], and a comparative study of different energy loss mechanisms for heavy quarks at central and forward rapidities was performed by the authors of Ref. [46]. However, all the above studies mostly start with the assumption of a thermally and chemically equilibrated plasma at some formation time τ_0 ≈ 0.2-1.0 fm/c. The assumption of chemical equilibration has been relaxed in some studies for the determination of the drag and diffusion coefficients [19].

(* Electronic address: dinesh@vecc.gov.in; † Electronic address: rupa@vecc.gov.in)
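The Brownian-motion picture referred to above can be illustrated with a schematic one-dimensional, nonrelativistic Langevin equation dp = −γ p dt + √(2D dt) ξ, whose stationary momentum variance is D/γ (the fluctuation-dissipation relation). The values of γ and D below are arbitrary illustrations, not the transport coefficients computed in the works cited, and the names evolve and ensemble are ours.

```python
import random

def evolve(p, gamma, D, dt, steps, rng):
    """Euler-Maruyama integration of dp = -gamma*p*dt + sqrt(2*D*dt)*xi (1D)."""
    amp = (2.0 * D * dt) ** 0.5
    for _ in range(steps):
        p += -gamma * p * dt + amp * rng.gauss(0.0, 1.0)
    return p

rng = random.Random(7)
gamma, D, dt = 1.0, 0.5, 0.01
# evolve an ensemble of charm-like test particles, all starting at p = 2.0,
# for ~20 relaxation times, so the ensemble forgets its initial condition
ensemble = [evolve(2.0, gamma, D, dt, 2000, rng) for _ in range(2000)]
var = sum(p * p for p in ensemble) / len(ensemble)  # should approach D/gamma
```

The drift term is the drag, the noise term the momentum diffusion; the equilibrated variance D/γ is the toy analogue of the thermalized momentum spread that the drag and diffusion coefficients control in the cited studies.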
The consequences of this relaxation for interactions [47,48] following their initial production in prompt collisions [49] have also been studied. The drag and diffusion coefficients for heavy quarks in the pre-equilibrium phase have been studied by replacing the distributions of quarks and gluons with distributions inspired by the Colour Glass Condensate model [50].
The thermalization of heavy quarks by assuming that they perform Brownian motion in a thermal bath of gluons was studied quite some time ago [51]. Heavy quark thermalization and flow has been studied in considerable detail within a partonic transport model BAMPS (Boltzmann Approach of Multi Parton Scatterings) by the authors of Ref. [52,53], where the initial distribution of charm quarks was sampled from PYTHIA. The parton cascade model proposed by Geiger and Muller [54] has been refined by Bass, Muller, and Srivastava [55] with improved implementation of the dynamics of the collision with several interesting insights and results. It was further extended to implement the production of heavy quarks [56]. A box-mode implementation was shown to provide an accurate description of energy loss suffered by charm quarks in QGP at a given temperature due to collisions and radiations [57].
Recently it was employed to study the transport dynamics of parton interactions in pp collisions at the LHC energies [58]. These studies are of interest because of the QGP like features observed in high multiplicity events of these collisions. The studies reported in Refs. [56,58] were performed by neglecting the Landau Pomeranchuk Migdal (LPM) effect, which results in enhanced parton multiplication.
The authors of Ref. [59] have reported results for charm production in pp collisions with the LPM effect taken into account. Their results indicate that pp collisions at the higher LHC energies may lead to the formation of a dense medium. This, in turn, triggers a suppression of radiations (and parton multiplication) due to the LPM effect. However, it was reported that, even after these suppressions, multiple scatterings occur among the partons and the transverse momentum distribution of charm quarks is rather sensitive to such scatterings. These calculations also provided a reasonably good description of the charm distribution measured at LHC energies. The bottom quarks, on the other hand, due to their very large mass, are not likely to be produced in multiple scatterings after the initial collisions and were not affected by this suppression [60], at least for pp collisions.
These considerations presage a considerable influence of the LPM effect in AA collisions. In the present work, we aim to study this pre-equilibrium dynamics of charm production in AA collisions.
We briefly discuss the details of our formulation in the next section, give our results in Sec. III, and conclusions in Sec. IV.
II. FORMULATION
First of all, let us state the limited objective of our work in a little more detail. We realize that charm quarks, after their initial production in hard partonic scatterings will be a witness to the emergence of a thermally and possibly chemically equilibrated QGP followed by its expansion and cooling. These will see the beginning of the flow and hadronization. The heavy mesons containing charm quarks may also undergo scatterings during the hadronic phase. Thus these are valuable chroniclers of the system.
As mentioned earlier, the drag, diffusion, energy loss and flow experienced by them need to be understood in quantitative detail so that we can use them to determine the properties of the medium precisely. We realize that the charm quarks will experience considerable turmoil before the thermally and chemically equilibrated plasma sets in at some formation time τ0 of the order of 0.2-1.0 fm/c. This suggests that we understand their dynamics before the system enters the so-called QGP phase, as some amount of medium modification of their momentum distribution could already happen by then. In the absence of this, the medium modification already sustained during the pre-equilibrium phase will have to be accounted for, perforce, by adjusting the values of the drag, diffusion and radiative energy loss during the QGP phase and later.
The parton cascade model [54,55] is eminently suited for this study for the following reasons. It starts from experimentally measured parton distribution functions and proceeds to pursue a Monte Carlo implementation of the Boltzmann equation to study the time evolution of the parton density, due to semi-hard perturbative Quantum Chromodynamics (pQCD) interactions including scatterings and radiations, within a leading log approximation [61]. The 2 → 2 scatterings among massless partons use the well-known matrix elements (see, e.g. [62]) at leading order pQCD. The singularities present in the matrix elements are regularized by introducing a transverse momentum cut-off (p_T^cut-off, fixed at 2 GeV in the present work).
The radiation processes (g → gg and q → qg) are, in turn, regularized by introducing a virtuality cut-off µ_i0 = (µ_0² + m_i²)^(1/2), where m_i is the current mass of the quark (zero for gluons) and µ_0 is taken as 1 GeV. This is implemented using the well tested procedure implemented in PYTHIA [63]. It has been reported earlier that the introduction of the LPM effect minimizes the dependence of the results on the precise value of µ_0 [59,65].
The matrix elements for the gg → QQ and qq → QQ processes do not have a singularity, and the minimum Q² for them is 4M_Q², which for charm quarks is more than 7 GeV² and amenable to calculations using pQCD. The qQ → qQ and gQ → gQ processes would require a p_T^cut-off to avoid becoming singular, and it is taken as 2.0 GeV as before. The matrix elements for these are taken from Combridge [64]. For more details, the readers are referred to earlier publications [56].
The scatterings among partons and radiation of gluons will lead to a rapid formation of a dense partonic medium in AA collisions, even though the PCM involves only those partons which participate in collisions leading to momentum transfers larger than p_T^cut-off, and partons are radiated only until the virtuality of the mother parton drops to µ_i0 (see above). This necessitates the introduction of the LPM effect. We have already noted its importance for pp collisions [59].
We have implemented the LPM effect by assigning a formation time τ = ω/k_T² to the radiated particle, where ω is its energy and k_T is its transverse momentum with respect to the emitter. We further implement a scheme such that, during the formation time, the radiated particle does not interact. The emitter, though, continues to interact; if that happens, the radiated particle is removed from the list of participants and is thus excluded from the later evolution of the system [65] (see [59] for more detail). These aspects are incorporated in the Monte Carlo implementation of the parton cascade model, VNI/BMS, which we use for the results given in the following. Before proceeding, we note that the PCM does not include the soft scatterings which lead to flow etc. We have already mentioned that a good description of charm production at LHC energies, using this procedure, was reported earlier [59].
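The formation-time bookkeeping described above can be sketched as follows. This is a toy illustration, not the actual VNI/BMS code: the radiated parton is assigned the uncertainty-relation estimate τ ≈ ω/k_T² in natural units (the exact prefactor used in VNI/BMS is an assumption here), and it is dropped from the event if its emitter rescatters before τ elapses.

```python
# Toy sketch of the LPM formation-time veto (not the actual VNI/BMS code).
# Assumption: tau = omega / k_T**2 in natural units (hbar = c = 1); the
# precise prefactor in the cascade code may differ.

def formation_time(omega, k_t):
    """Formation time of a radiated parton (natural units)."""
    return omega / k_t ** 2

def surviving_radiations(radiations, emitter_rescatter_time):
    """Keep only radiations already formed when the emitter rescatters.

    radiations: list of (birth_time, omega, k_t) tuples.
    """
    return [(t, w, kt) for t, w, kt in radiations
            if t + formation_time(w, kt) <= emitter_rescatter_time]

# A gluon with omega = 4, k_T = 2 forms at tau = 1 and survives a
# rescattering of its emitter at t = 2; one with omega = 9, k_T = 1
# (tau = 9) is still forming and is discarded.
print(surviving_radiations([(0.0, 4.0, 2.0), (0.0, 9.0, 1.0)], 2.0))
```

The scheme thus suppresses radiation precisely in dense environments, where emitters rescatter on time scales shorter than the formation time.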
III. RESULTS
We have calculated the production of charm quarks for Au + Au collisions at 200 AGeV and for Pb + Pb collisions at 2.76 ATeV and 5.02 ATeV at zero impact parameter. Results for pp collisions at the same centre of mass energies have also been included for comparison and for estimating the medium modification factor R_AA, defined as R_AA(p_T) = [dN_AA/(dp_T dy)] / [N_coll × dN_pp/(dp_T dy)], where N_coll is the number of binary nucleon-nucleon collisions for the given centrality. We shall also use the ratio of the p_T and y integrated results, R = N_AA / (N_coll × N_pp), to denote the possible deviation of the production of charm quarks from N_coll times the pp values. We expect the final results for the medium modification to deviate substantially from what is reported here, which is due only to the pre-equilibrium dynamics of the system.
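The bin-by-bin construction of the medium modification factor defined above is straightforward; as a minimal illustration, the spectra and N_coll value below are invented toy numbers, not results of this work.

```python
# Minimal illustration of R_AA(p_T) = (dN_AA/dp_T) / (N_coll * dN_pp/dp_T),
# evaluated bin by bin. All numbers are made-up toy values.

def r_aa(dn_aa, dn_pp, n_coll):
    """Bin-by-bin medium modification factor."""
    return [aa / (n_coll * pp) for aa, pp in zip(dn_aa, dn_pp)]

dn_pp = [10.0, 4.0, 1.0]          # toy charm p_T spectrum in pp
dn_aa = [9000.0, 4200.0, 1000.0]  # toy spectrum in central AA
n_coll = 1000                     # toy number of binary collisions

print(r_aa(dn_aa, dn_pp, n_coll))  # [0.9, 1.05, 1.0]
```

Values below unity indicate suppression relative to binary-collision scaling; values above unity indicate enhancement.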
Charm will be conserved during the later evolution of the system. Thus, a rich structure should emerge for the final modification, once the energy loss suffered by the charm quarks and the consequence of the collective flow is accounted for as they traverse the quark gluon plasma. The depletion of charm quark at larger momenta should be accompanied by an enhancement at lower momenta. This enhancement may depend strongly on the transverse momentum as the p T spectrum falls rapidly with increase in p T . Further, as charm quarks participate in the collective expansion of the medium, their flow would lead to a depletion of charm having very low momenta which would result in an enhancement at intermediate momenta.
In Fig. 1 we plot the p_T distribution of charm quarks for central AA collisions at y = 0, along with the same for pp collisions at the corresponding √s_NN scaled by the number of collisions, as appropriate for production mechanisms involving hard interactions. We notice a p_T dependent suppression of charm production in AA collisions, increasing with the centre of mass energy of the collisions.
The p_T integrated rapidity distributions shown in Fig. 2 bring out this fact even more clearly. They additionally suggest that these modifications are limited to central rapidities at the lowest centre of mass energy (200A GeV) considered here, but extend to more forward (backward) rapidities as the energy rises to those available at the LHC.
The medium modification of total charm production, R, is shown in Fig. 3 as a function of the centre of mass energy per nucleon. We note that the suppression increases with √s_NN and tends to saturate at LHC energies. We are not sure that the importance of this has been sufficiently emphasized in the literature.
Let us discuss this in a little more detail. The experimentally measured R_AA at 200 AGeV [66], 2.76 ATeV [68] and 5.02 ATeV [69] for central rapidity is always less than one. We know that charm production during the thermally equilibrated phase is rather negligible. This trend should persist at larger rapidities.
The authors of Ref. [58] reported the emergence of the LPM effect already in pp collisions, signalling the formation of a dense medium. As stated earlier, it was found that even with this suppression of scatterings and parton multiplication, there were multiple parton scatterings beyond the primary-primary collisions, followed by fragmentations, which provided a reasonable explanation of the experimental data. In AA collisions the LPM effect should be quite strong. This will result in a large scale suppression of partonic collisions, as parton multiplication is arrested due to the delayed fragmentations of off-shell partons following scatterings. This should then lead to an overall suppression of charm production beyond that expected from a straightforward scaling of the results of pp collisions by the number of collisions, as seen here. It is also expected that this effect would get stronger as the centre of mass energy rises. We recall that this was not seen in calculations performed with the neglect of the LPM effect [56], where R_AA ≥ 1 was seen at low p_T (recall also that the p_T distributions drop rapidly with increasing p_T). This implies that in the absence of the LPM effect, parton multiplication and multiple scatterings would lead to a charm production in AA collisions well beyond that obtained from a scaling of the corresponding results for pp collisions. We have verified that it is indeed so for all three energies considered here. This has one interesting and important consequence. While the final R_AA will result from a rich interplay of the collective flow and the energy loss of the charm quarks, its value at lower p_T would already be well below unity. An effort to attribute this entire suppression to energy loss during the QGP phase alone would necessarily require a larger value for dE/dx.
We give our results for the medium modification of charm production at y = 0 for most central collisions in Fig. 4. We emphasize that we neither intend nor expect to explain the experimental R_AA. These are shown only to indicate how this rich input of medium modification of the charm phase space distribution, due to the pre-equilibrium dynamics, provides the platform for the launch of their journey through the hydrodynamic evolution of the system. During this period they will be subjected to the collective flow and further energy loss.
We do note one interesting and possibly valuable result of these studies. The distribution of charm quarks having large p_T (≥ 6 GeV or so) does not seem to be affected strongly by the pre-equilibrium dynamics discussed here.
Thus we feel that a charm distribution whose momenta are sampled from those for pp collisions, with the points of production distributed according to the nuclear thickness function T_AA(x, y), is not quite adequate as an input for the study of charm propagation during the QGP phase of the system produced in relativistic collisions of nuclei.
IV. SUMMARY AND DISCUSSION
We have calculated the dynamics of the production of charm quarks using the parton cascade model, which should provide a reasonable description of the parton dynamics during the pre-equilibrium phase, albeit limited to p_T ≥ p_T^cut-off and a reasonably modest virtuality ≥ µ_0, defined earlier. The LPM effect provides a rich structure to the so-called medium modification factor for charm quarks, defined as the ratio of the results for AA collisions to the results for pp collisions (at the same √s_NN) scaled by the number of collisions. We noticed an overall suppression of charm production, which we attribute to the LPM effect.
The medium modification factor as a function of p_T shows a rich structure which evolves with energy, deviating from unity (a suppression) by about 10% at low p_T and approaching unity at intermediate p_T at √s_NN = 200 GeV. This deviation (suppression) is seen to rise to about 40% at LHC energies. An interesting result seems to be the suppression of large p_T charm at 2.76 TeV, but not at 5.02 TeV, which we are unable to understand.
Realizing that this should form an input to calculations using hydrodynamics, with the collisional and radiative energy loss of charm quarks used to determine dE/dx, one expects some interesting deviations from calculations that neglect these suppressions.
A future study, which is under way, will use more modern structure functions (we have used GRV HO [70] in these preliminary studies) and account for straightforward corrections like nuclear shadowing, which will further suppress the production of charm quarks beyond what is reported here. The results for the phase space distribution of charm quarks at the end of the pre-equilibrium phase will then be used as inputs to hydrodynamics based calculations, as indicated above.
In brief, we see that the pre-equilibrium dynamics of parton scattering and fragmentation, along with the LPM effect, provides a rich structure to the production of charm quarks. We suggest that this effect should be taken into account to get a precise value for the energy loss suffered by charm quarks and the modification of their p_T distributions due to the flow.

Acknowledgments

DKS gratefully acknowledges the grant of the Raja Ramanna Fellowship by the Department of Atomic Energy. We thankfully acknowledge the High Performance Computing Facility of the Variable Energy Cyclotron Centre, Kolkata, for all the help. We thank S. A. Bass for a valuable discussion which triggered this study.
"year": 2019,
"sha1": "256fc69c94255f3e1943875be3997486a263396e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "256fc69c94255f3e1943875be3997486a263396e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Multilocus sequence typing of Cryptosporidium hominis from northern India
Background & objectives: Human cryptosporidiosis is endemic worldwide, and at least eight species have been reported in humans; the most common being Cryptosporidium hominis and C. parvum. Detailed understanding of the epidemiology of Cryptosporidium is increasingly facilitated using standardized universal technique for species differentiation and subtyping. In this study micro- and minisatellite targets in chromosome 6 were used to assess genetic diversity of C. hominis by sequence length polymorphisms along with single nucleotide polymorphisms (SNPs). Methods: A total of 84 Cryptosporidium positive stool specimens were subjected to speciation and genotyping using small subunit (SSU) ribosomal RNA (rRNA) as the target gene. Genetic heterogeneity amongst C. hominis isolates was assessed by sequencing minisatellites, microsatellites and polymorphic markers including genes encoding the 60 kDa glycoprotein (GP60), a 47 kDa protein (CP47), a mucin-like protein (Mucin-1), a serine repeat antigen (MSC6-7) and a 56 kDa transmembrane protein (CP56). Results: Of the 84 Cryptosporidium positive stool specimens, 77 (92%) were positive by SSU rRNA gene polymerase chain reaction (PCR) assay. Of these 77 isolates, 54 were identified as C. hominis and 23 as C. parvum. Of all the loci studied by multilocus sequence typing (MLST), GP60 gene could reveal the highest genetic diversity. Population substructure analysis of C. hominis performed by combined sequence length and nucleotide polymorphism showed nine multilocus subtypes, all of which were distinct groups in the study population. Interpretation & conclusions: MLST, a powerful discriminatory test, demonstrated both variations and distribution pattern of Cryptosporidium species and its subtypes.
and C. hominis are 9.11 and 9.16 Mb, respectively. Genome comparison has shown that C. hominis and C. parvum are very similar and exhibit only 3-5 per cent sequence divergence, with no large insertions, deletions or rearrangements 6 . The genomic data for the genome representatives are available online (http://cryptodb.org/cryptodb/).
Various molecular tools are available to distinguish the different Cryptosporidium spp. that are pathogenic to humans 2 . Molecular typing studies based on a single locus are limited in their ability to capture the structure of a population, which is better explored using a multilocus approach. Hence, for a better understanding of the transmission dynamics, genetic diversity and population structure of Cryptosporidium spp., high-resolution markers need to be studied. Multilocus sequence typing (MLST), which has been extensively used for population biology studies of many microorganisms 7,8 , has been used for subtyping primarily of C. hominis and to a lesser extent of C. parvum 9,10 . Studies from Scotland and the United States have indicated that C. hominis shows a clonal population structure, as it displays a lower number of subtypes and much stronger linkage disequilibrium (LD) compared to C. parvum 10 . In this study, microsatellite and minisatellite targets in chromosome 6 were used to study genetic diversity, with special reference to C. hominis, by studying sequence length polymorphisms along with single nucleotide polymorphisms (SNPs).
Material & Methods
This study was conducted in the department of Microbiology, All India Institute of Medical Sciences, a tertiary care referral and teaching hospital in New Delhi, India. The study protocol was reviewed and approved by the Institutional Ethics Committee. All participants provided written informed consent before participating in the study. Stool specimens from 84 patients, comprising 82 diagnosed as positive for Cryptosporidium species by microscopy 11 and an additional two by polymerase chain reaction (PCR) assay using the 18S ribosomal RNA (rRNA) gene 12 , were collected for the study. The clinical samples that were routinely submitted for relevant examination were obtained from both inpatient and outpatient departments of the hospital from June 2010 to November 2012. Clinical data such as age, sex and major clinical symptoms were recorded in a structured questionnaire for each patient and/or guardian in the case of children.
DNA extraction and species identification: DNA was extracted from all positive clinical specimens using the QIAamp DNA stool kit (QIAGEN Inc., USA) following the manufacturer's protocol, except for one modification: the initial lysis duration was increased from 10 to 45 min at 95°C. Speciation and genotyping were performed for each specimen by nested PCR assay and restriction fragment length polymorphism (RFLP) analysis, which amplifies an approximately (~) 830 bp fragment of the small subunit ribosomal RNA (SSU rRNA) gene 13 . Restriction analysis of PCR products was done with the enzymes SspI and AseI (New England Bio Labs, Beverly, MA, USA) 13 .
Targets for multilocus sequence typing (MLST):
Five gene loci in chromosome 6 were targeted for the subtype analysis of C. hominis. The genetic loci included the 60 kDa glycoprotein (GP60) (accession no. JF495139), a 47 kDa protein (CP47) (accession no. AAM46174), a serine repeat antigen (MSC6-7) (XM_660698), a mucin-like protein (Mucin-1) (accession no. XM_661200) and a 56 kDa transmembrane protein (CP56) (accession no. XM_661161). GP60 and CP47 are microsatellites, Mucin-1 and MSC6-7 are minisatellites and CP56 is a SNP marker. The GP60 gene encodes the glycoproteins gp15 and gp45, which are implicated in attachment to and invasion of host cells 14 . Microsatellites and minisatellites are DNA sequences that consist of tandemly repeated sequence motifs of 1-4 base pairs (microsatellites) or more (minisatellites). These markers evolve at higher rates than nuclear genes because they show variation both in the number of tandem repeats and in the form of SNPs. This makes them ideal targets for use in the study of population genetics 15 . Nested PCR assay was performed for all gene targets. The sequences of primary and secondary primers, the annealing temperatures used and the sizes of the expected PCR products are given in Table I.
Multilocus sequence typing (MLST) polymerase chain reaction (PCR) assays:
For the PCR assay, the total volume of the PCR mixture was 100 µl, which contained 2 µl of DNA (for primary PCR) or 1 µl of the primary PCR product (for secondary PCR), primer pair at a concentration of 0.4 µM (for both the primary and secondary PCR) (Integrated DNA Technologies, USA), 0.2 mM deoxyribonucleotide triphosphate mix (Fermentas, USA), 3 mM MgCl2, 1x PCR buffer and 2.5 U of Taq DNA polymerase (Bangalore Genei, India). The primary PCR reaction also contained 400 ng/ml of non-acetylated bovine serum albumin (Sigma-Aldrich, USA). PCR amplification was carried out with an initial denaturation at 94°C for five minutes; 35 cycles of 94°C for 45 sec, the specific annealing temperature for each target gene (Table I) for 45 sec and 72°C for one minute; and a final extension of the PCR product at 72°C for 10 min 15 . PCR products were visualized under ultraviolet light by ethidium bromide staining of a two per cent agarose gel (Sigma-Aldrich, USA).
DNA sequence analysis:
Gel-based extraction of the PCR products was performed as per the manufacturer's instructions using the MinElute Gel Extraction Kit (Qiagen, USA). Sequencing of the study isolates was performed in both the forward and reverse directions on an ABI 3500xL Genetic Analyzer from Chromous Biotech, India. Nucleotide sequences were read using the software ChromasPro (www.technelysium.com.au/ChromasPro.html). Alignment of the consensus sequences obtained and those from the GenBank database was done using ClustalX (http://bips.ustrasbg.fr/fr/Documentation/ClustalX/) after manual editing of the alignments using the BioEdit program version 7.0.4 (http://www.mbio.ncsu.edu/BioEdit/bioedit.html).

Phylogenetic analysis: Phylogeny was inferred by constructing an unweighed pair group method with arithmetic mean (UPGMA) tree for length polymorphism using ClustalW (www.ebi.ac.uk/Tools/msa/clustalw2/) and the MEGA5 software (www.megasoftware.net). The evolutionary distances were also computed by the MEGA5 software using the Kimura 2-parameter method. The rate variation amongst sites was modelled with a gamma distribution. The phylogram for the MLST loci of C. hominis was inferred by constructing a neighbour-joining tree using the MEGA5 software. The evolutionary distances were computed using the Kimura 2-parameter method.
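The Kimura 2-parameter correction used for the distances above can be sketched in a few lines. The sequences below are toy examples, not study isolates, and MEGA5 additionally handles gaps, gamma-distributed rate variation and bootstrapping, which this sketch omits.

```python
import math

# Toy sketch of the Kimura 2-parameter (K2P) distance used for the trees in
# this study (MEGA5 computes it internally). Transitions (A<->G, C<->T) and
# transversions are counted separately and corrected for multiple hits:
#   d = -0.5 * ln((1 - 2P - Q) * sqrt(1 - 2Q)),
# with P and Q the transition and transversion fractions.

PURINES = {"A", "G"}

def kimura_2p(seq1, seq2):
    """K2P distance between two aligned, gap-free sequences of equal length."""
    assert len(seq1) == len(seq2) and seq1
    ts = tv = 0
    for a, b in zip(seq1, seq2):
        if a == b:
            continue
        if (a in PURINES) == (b in PURINES):
            ts += 1  # transition: within purines or within pyrimidines
        else:
            tv += 1  # transversion: purine <-> pyrimidine
    p, q = ts / len(seq1), tv / len(seq1)
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))
```

For identical sequences the distance is zero; a single transition among four sites gives d = 0.5 ln 2 ≈ 0.347 substitutions per site.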
Results
Eighty-four patients with cryptosporidiosis presenting with chronic diarrhoea comprised 40 children (26 males) and 44 adults (28 males). The mean age of children was 1.5 ± 1.1 yr and of adults (Table II)
At the GP60 locus, seven subtypes were observed based on length polymorphism, at a subtype frequency of 12.5 per cent (n=3), 4.2 per cent (n=1), 16.7 per cent (n=4), 4.2 per cent (n=1), 50 per cent (n=12), 8.3 per cent (n=2) and 4.2 per cent (n=1) for subtypes 1 (825 bp), 2 (834 bp), 3 (847 bp), 4 (864 bp), 5 (884 bp), 6 (890 bp) and 7 (971 bp), respectively, for all 24 isolates. At the GP60 locus, subtype classification was based on a combination of the number of trinucleotide TCA, TCG or TCT repeats in the microsatellite region and the SNPs in the rest of the sequence. Using this method, seven subtypes were identified; some of the reference subtypes are listed in Table III. In addition, three subtypes were initially identified at the MSC6-7 locus, at a frequency of 37.5 per cent (n=9), 58.3 per cent (n=14) and 4.2 per cent (n=1) for subtypes 1 (471 bp), 2 (498 bp) and 3 (483 bp), respectively. The MSC6-7 locus had a 15 bp TGATGATGAT(G)GAACC(T) minisatellite repeat region from position 91-105 bp and an additional insertion or deletion of a 12 bp fragment of TTCATCTTCATT between positions 202 and 213 amongst all the sequences (Fig. 1). One extra subtype was identified, resulting in a total of four subtypes, due to the presence of SNPs outside the repeat region in two isolates. The isolate CH1 had an insertion of one base pair at the 24th position upstream of the repeat region, whereas the isolate CH4 had a deletion of one base pair at the 422nd position downstream of the repeat region (Fig. 2). Subtype frequencies for subtypes 1-4 were, therefore, 33.3 per cent (n=8), 8.3 per cent (n=2), 54.2 per cent (n=13) and 4.2 per cent (n=1), respectively.

Fig. 2. The boxed area indicates the 15 bp repeat region and the circled area refers to the single nucleotide polymorphisms amongst MSC6-7 gene isolates.
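The repeat-counting rule just described for GP60 can be sketched as below. The repeat strings are illustrative, not sequences from this study, and the family label ("Id") is supplied by the caller; this sketch counts only the trinucleotide codons and ignores the SNP component of the classification.

```python
# Hedged sketch of GP60 microsatellite subtype naming (e.g. IdA15G1):
# within a subtype family, 'A' gives the number of TCA codons, 'G' the
# number of TCG codons and 'T' the number of TCT codons in the repeat
# region. Toy repeat strings only; SNP-based distinctions are not modelled.

def gp60_repeat_code(repeat_region, family="Id"):
    """Name a GP60 subtype from its trinucleotide repeat region."""
    codons = [repeat_region[i:i + 3] for i in range(0, len(repeat_region), 3)]
    code = family + "A%d" % codons.count("TCA")
    if codons.count("TCG"):
        code += "G%d" % codons.count("TCG")
    if codons.count("TCT"):
        code += "T%d" % codons.count("TCT")
    return code

print(gp60_repeat_code("TCA" * 15 + "TCG"))  # IdA15G1
```

Under this convention the two equal-length (884 bp) Id sequences in the study resolve into distinct names once their repeat compositions differ.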
The Mucin-1 locus has a 63 bp minisatellite repeat region (CCAAAACCTGAAAAAGATTCTAAGTCATCATGTGCTTGCTCTAAATCAGATAAA) in addition to SNPs within the minisatellite region of the gene fragment. There are 9-11 copies of the repeat in the C. hominis Mucin-1 gene sequence. All the Mucin-1 gene sequences in our study had 11 copies of the minisatellite repeat. Diversity was observed due to SNPs, which resulted in the formation of three subtypes at a frequency of 87.5 per cent (n=21), 8.3 per cent (n=2) and 4.2 per cent (n=1). The CP56 locus is a highly polymorphic subtelomeric target in chromosome 6. Polymorphisms at this locus occur due to SNPs across an approximately 785 bp fragment. Of the known three subtypes, subtype 3 was present at a frequency of 83.3 per cent (n=20), followed by subtype 2 at 16.7 per cent (n=4) (Table II). Subtype 1 was not observed in the present study.
Phylogenetic analysis:
As sequence length polymorphisms were detected only at the GP60, CP47 and MSC6-7 loci, the relationship between multilocus subtypes based on length polymorphisms was inferred by constructing a UPGMA tree for the three loci using the MEGA5 software. The evolutionary distances were computed using the Kimura 2-parameter method. Multilocus length polymorphism data from 12 isolates at the three loci, inferred by the UPGMA method, resulted in the generation of nine multilocus subtypes (Fig. 3). Likewise, a phylogenetic tree was constructed by concatenating all five MLST loci for the 12 C. hominis isolates and was inferred by the neighbour-joining method. The isolates showed nine major subtype groups (Fig. 4).
Discussion
The present study confirmed that cryptosporidiosis was endemic in India, as also evident in other studies from India 16,17 . In Delhi, a prevalence of 12-20 per cent of Cryptosporidium spp. by microscopy has been documented in HIV seropositive patients [18][19][20][21] . In a multisite study conducted in India in children with cryptosporidiosis from Delhi, Trichy and Vellore, C. hominis was the commonly identified species (88%, 59/67), followed by C. parvum and C. meleagridis 22 . In another study from south India, 81 per cent (47/58) based on sequence length polymorphism at each locus and subsequently based on the combined length polymorphism and SNP by concatenating all sequences, and the data were analyzed as a single multilocus gene. Sequence length polymorphism was analyzed at the GP60, CP47 and MSC6-7 loci, the CP56 and Mucin-1 loci being monomorphic in size. Amongst all the markers studied, GP60 showed the highest gene diversity, as shown earlier 26,27 . Many subtypes in the GP60 family had the same length even though the sequences were different. The most common C. hominis subtype family in the present study was Id, which had two subtypes of the same length (884 bp); DNA sequencing revealed that these were two different subtypes, namely IdA15G1 (41.7%) and IdA17G1 (4.2%).

Fig. 3. Unweighed pair group method with arithmetic mean analysis showing the relationship between multilocus subtypes based on length polymorphism. The optimal tree with the sum of branch length = 0.65904592 is shown. The percentages of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) are shown next to the branches. The tree is drawn to scale, with branch lengths in the same units as those of the evolutionary distances used to infer the phylogenetic tree. The evolutionary distances were computed using the Kimura 2-parameter method and are in units of the number of base substitutions per site. The rate variation amongst sites was modelled with a gamma distribution (shape parameter = 1). The analysis involved 12 nucleotide sequences. Evolutionary analyses were conducted in MEGA5.

Fig. 4. Phylogram for multilocus sequence typing genes for Cryptosporidium hominis isolates (n=12). The evolutionary history was inferred using the neighbour-joining method. The percentages of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) are shown next to the branches. The tree is drawn to scale, with branch lengths in the same units as those of the evolutionary distances used to infer the phylogenetic tree. The evolutionary distances were computed using the Kimura 2-parameter method and are in units of the number of base substitutions per site. The rate variation amongst sites was modelled with a gamma distribution (shape parameter = 1). The analysis involved 12 nucleotide sequences. Codon positions included were 1st+2nd+3rd+Noncoding. All ambiguous positions were removed for each sequence pair. Evolutionary analyses were conducted in MEGA5.
MLST analysis of isolates from our study differed in many aspects from isolates analyzed from the eastern part of the country, i.e., the National Institute of Cholera and Enteric Diseases, Kolkata. MLST based on both length polymorphism and SNPs in our study showed that only the CP47 gene was monomorphic, whereas the study conducted by Gatei et al 15 reported all other loci to be polymorphic except HSP70. Six CP47 subtypes were identified by Gatei et al 15 , but the present study could identify only five CP47 subtypes. CP47 subtype 2, corresponding to 472 bp, was not found in our study isolates. Six GP60 subtypes were reported in their study based on length polymorphism; however, we observed seven GP60 subtypes, and all except subtype 847 bp were different. Based on SNP, two different subtypes were identified, which were similar in ours as well as their study. Although three subtypes were observed based on length polymorphism at the MSC6-7 locus in both studies, these were different from each other. In addition, one extra subtype was identified due to the presence of SNP outside the 15 bp repeat region in two of our isolates (1 deletion and 1 insertion). An insertion or deletion of a 12 bp fragment of TTCATCTTCATT was reported by Gatei et al 15 between positions 268 and 280 in the C. hominis sequence; we, however, observed the 12 bp deletion between positions 202 and 213 in one of our isolates. The Mucin-1 locus had 11 copies of the 63 bp minisatellite repeat in both studies. Three subtypes were observed based on SNP in both studies. Based on this analysis, the UPGMA tree of the Kolkata isolates generated twenty multilocus subtypes and the MLST phylogram inferred by the Bayesian method showed four major subtype groups. The UPGMA tree in the present study generated nine multilocus subtypes and the MLST phylogram inferred by the neighbour-joining method also showed nine subtype groups (Table IV).
In conclusion, our study demonstrated the population structure of C. hominis by MLST typing. MLST is a better tool to study the transmission dynamics of Cryptosporidium subtypes because of its high resolution. However, to further understand the association of distinct subtypes with clinical manifestations of the disease and transmission risks, it will be necessary to analyze a large number of samples, if possible from different geographical areas. A limitation of this study was that the population structure of C. hominis based on MLST could not be assessed by measuring intragenic and intergenic LD and recombination rates, owing to the inability to use some bioinformatics software.
Fig. 1 .
Fig. 1. The boxed area indicates the deletion of the 12 bp region between positions 201-213 bp of the MSC-6 gene amongst Cryptosporidium hominis isolates.
Table I .
Primers used in the multilocus sequence typing and annealing temperatures used in the polymerase chain reaction assay

± 15.5 yr. Of the total 84 isolates, 72 (92%) were positive by the SSU rRNA gene PCR assay. The seven isolates could not be amplified, probably because of the presence of inhibitors in the stool specimens, insufficient DNA content in the clinical sample and/or degradation or shearing of DNA due to repeated freezing and thawing. Of the total 77 isolates, 54 were identified as C. hominis and 23 as C. parvum.
Classification of subtypes: Based on the length polymorphism of 24 isolates of C. hominis, five subtypes were identified at the CP47 locus, at a frequency of 8.3 per cent (n=2) for subtype 1 (406 bp), 8.3 per cent (n=2) for subtype 2 (484 bp), 8.3 per cent (n=2) for subtype 3 (490 bp), 12.5 per cent (n=3) for subtype 4 (496 bp) and 62.5 per cent (n=15) for subtype 5 (526 bp). To identify the subtypes, the trinucleotide TAA was coded as 'A' while TAG/TGA was coded as 'G', with the following digit showing the number of each trinucleotide repeat. The
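The repeat-coding rule quoted above lends itself to a small helper. The sketch below is illustrative only: the function name and the example repeat counts are invented, not taken from the study. It turns an ordered list of trinucleotide repeat runs into a GP60-style subtype suffix such as the "A15G1" in IdA15G1.

```python
# Illustrative sketch of the subtype-coding rule described above:
# TAA is coded as 'A', TAG/TGA as 'G', each letter followed by the
# number of consecutive repeats. Names and data are hypothetical.

def subtype_suffix(repeat_runs):
    """repeat_runs: ordered (trinucleotide, count) pairs along the sequence."""
    code = {"TAA": "A", "TAG": "G", "TGA": "G"}
    return "".join(f"{code[tri]}{n}" for tri, n in repeat_runs)

# 15 TAA repeats followed by 1 TAG repeat -> the suffix of IdA15G1
print(subtype_suffix([("TAA", 15), ("TAG", 1)]))  # -> A15G1
```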
Table II .
Length polymorphism (in base pairs) and frequencies (%) of multilocus sequence typing-based subtypes at each locus (n=24)
Table III .
60 kDa glycoprotein gene subtypes with their | 2017-10-19T16:26:47.919Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "e6b2a8f6bb652886abf753deb94c78df0e205b65",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijmr.ijmr_1064_14",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e6b2a8f6bb652886abf753deb94c78df0e205b65",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
33925403 | pes2o/s2orc | v3-fos-license | Xmn1-158 γG Variant in β-Thalassemia Intermediate Patients in South-East of Iran.
Background: Xmn-1 polymorphism of the γG globin gene (HBG2) is a prominent quantitative trait locus (QTL) in β-thalassemia intermediate (β-TI). In the current study, we evaluated the frequency of the Xmn-1 polymorphism and its association with β-globin gene (HBB) alleles and Hb F level in β-TI patients in Sistan and Balouchestan province, south-east of Iran. Subjects and Methods: 45 β-TI patients were enrolled. HBB gene mutations and the Xmn-1 polymorphism were determined by the amplification-refractory mutation system (ARMS) PCR method. The hemoglobin profile was determined using capillary electrophoresis. Results: The study participants consisted of 26 (58%) males and 19 (42%) females. Mean age of the patients was 10.7±3.1 years. Overall, the Xmn-1 polymorphism was observed in 28 (62%) patients. Homozygous (TT) and heterozygous (CT) genotypes of the polymorphism were represented with frequencies of 12 (26%) and 16 (35%), respectively. The main recognized HBB gene mutation was IVSI-5(G>C), with a homozygous frequency of 44%. Non-zero (β+) alleles of the HBB gene constituted 11.1% (4 patients with heterozygous β+ and one with homozygous β+ genotype). Hb F level was significantly higher in patients with at least one Xmn-1 allele (67.9±17.9%) than in those without the polymorphism (19.5±20.3%, P<0.0001). Also, patients with the homozygous genotype demonstrated significantly higher Hb F compared to heterozygous (CT) cases (respective percentages of 85±6.8 and 54.7±10.5, p<0.0001). Conclusion: Our results highlighted the role of the Xmn-1 polymorphism as the main phenotypic modifier in β-TI patients in Sistan and Balouchestan province.
INTRODUCTION
β-Thalassemia intermediate (β-TI) represents a highly heterogeneous entity lying between the two extreme forms of β-thalassemia syndromes: β-thalassemia minor and β-thalassemia major. 1,2 The clinical picture of β-TI ranges from non-symptomatic to severe transfusion-dependent forms. The wide-spectrum phenotypic appearance of β-TI can be partly attributed to its great genetic diversity. 3,4 Accordingly, multiple genetic loci are present inside and outside of the β-globin gene (HBB) cluster which can modulate the clinical severity of β-TI. 5 However, the main pathophysiological factor determining the severity of β-TI is the ratio of α-globin/non-α-globin chains within erythroid precursors. 5,6 The majority of the known phenotype modifiers of β-TI act through counterbalancing the above-mentioned ratio. Multiple genetic polymorphisms within HBB-like genes, specific erythroid transcription factors and genes involved in oxido-reductase reactions have been introduced as quantitative trait loci (QTLs) modulating the β-thalassemia clinical appearance. 5 Although the mechanisms exploited by these genetic modifiers are largely obscure, induction of Hb F is considered an established contributor. The Xmn-1 polymorphism results from a C > T base substitution at the -158 position of the γG globin (HBG2) gene, and is a well-known Hb F inducer ameliorating β-TI severity. 7 This polymorphism resides in close proximity to the locus control region of the β-globin gene (β-LCR), which controls differential expression of β-like globin genes throughout life. 8 Actually, the "T" allele of the Xmn-1 polymorphism leads to weaker binding of transcription inhibitors to the β-LCR, and subsequently results in persistent activation of the HBG2 gene beyond the infancy period. 8,9 Studies indicated a substantial impact of the Xmn-1 polymorphism on improvement of β-thalassemia clinical severity.
[10][11][12] Also, there are reports suggesting a role for the Xmn-1 polymorphism in predicting the response rate to Hb F inducer therapeutics in β-thalassemia major. 8,13,14 Nevertheless, the Xmn-1 polymorphism has demonstrated variable penetrance among different populations. 15,16 In Iranian β-TI patients, this polymorphism has been characterized as the main genetic contributor to a compromised phenotype in β-thalassemia patients. 17,18 Despite this, there has been no study on the frequency of this polymorphism in Sistan and Balouchestan province in south-east of Iran. Considering that the province is one of the primordial locations of β-thalassemia in the country (with an estimated frequency of 2500 registered β-thalassemia major cases), [19][20][21] we aimed to evaluate the frequency and clinical significance of the Xmn-1 polymorphism in β-TI patients in this region.
MATERIALS AND METHODS
The patients (45 cases presenting with β-TI) were selected from Ali-Asghar Children Hospital, Zahedan, Sistan and Balouchestan province. These patients had been seeking medical care since their diagnosis in this center. Inclusion criteria were mild symptoms of anemia, intermittent transfusion requirements, and age of starting transfusion >2 years old. Our study was approved by the Research Deputy of Azad University, as well as the Medical Ethics Committee of the Pasteur Institute of Iran. Furthermore, informed consent was acquired from the patients or their parents. Routine hematological indices were measured by a Sysmex K1000 (Japan) blood auto-analyzer. Capillary electrophoresis was performed for quantification of Hb A2 and Hb F. DNA extraction was carried out using the proteinase K method with a standard protocol previously described. 22 Amplification-refractory mutation system (ARMS)-PCR (dNTP cat. No. DN7604C and Taq DNA polymerase cat. No. TA8109C, CinnaGen Company, Karaj, Iran) was conducted to determine the Xmn-1 polymorphism and common HBB gene mutations, as previously reported in the east of Iran. 20,23,24 Furthermore, mutations identified in β-TI patients were further confirmed in the patients' parents. The sequences of the primers used (Biolegio Company, Nijmegen, the Netherlands) are presented in Table 1.
International Journal of Hematology Oncology and Stem Cell Research ijhoscr.tums.ac.ir

8) shows the TT genotype, cases 5 and 6 (lanes 9 to 12) represent the CT genotype and case 7 (lanes 13, 14) reveals the CC genotype. Lanes 15 and 16 are negative controls.
The 100 base pair (bp) ladder line has been depicted by "L".
DISCUSSION
The Xmn-1 polymorphism is a prominent mediator ameliorating the β-thalassemia phenotype through inducing fetal hemoglobin expression. 25,26 This polymorphism exhibited a 62.3% frequency in the present study, with 35.6% heterozygous and 26.7% homozygous genotypes. In a recent study on 51 Iranian β-TI patients, 68.6% showed the CT genotype of the Xmn-1 polymorphism, while the TT genotype was identified in none of the cases. 8 In other studies in Iran, Arab et al. 17 and Akbari et al. 18 reported respective Xmn-1 frequencies of 76.9% and 60% in β-TI patients. In the study of Karimi et al. in our neighboring province, Fars, the Xmn-1 variant was detected in 40.6% of 48 β-TI patients and 14% of 50 healthy subjects. 15 In another study in the western province, Kermanshah, 16.3% and 22.3% of patients with a severe form of β-thalassemia demonstrated homozygous and heterozygous genotypes of the Xmn-1 variant, respectively. 27 In studies conducted in Iraq 28 and Kuwait, 29 the Xmn-1 polymorphism was described in 47% and 75% of β-TI patients, respectively. We observed that the Hb F level was significantly higher in patients who had at least one Xmn-1 variant allele than in patients without this polymorphism (67% vs. 19%). This is consistent with results obtained by Motovali et al. and Galanello et al. 8,30 In addition, we found that patients who were homozygous for the Xmn-1 polymorphism had a significantly higher mean Hb F (85.5%) compared to heterozygous subjects (54.7%), which is consistent with findings from prior works. 8,31,32 In parallel, Nemati et al. also reported a higher level of Hb F in β-thalassemia patients with the homozygous genotype of the Xmn-1 polymorphism than in those without this genetic combination. 27 From a molecular perspective, the "T" base substitution at the Xmn-1 polymorphic site is supposed to interfere with the interaction of specific transcription inhibitors with regulatory sequences at the β-LCR.
8 This may be suggestive of possible effects of the Xmn-1 polymorphism in bypassing the attachment of specific transcriptional inhibitors to the regulatory sequences of the HBG2 gene. This idea is further supported by studies indicating that polymorphisms in two main suppressive mediators of HBG2 expression, BCL11A and MYB, are associated with a moderate clinical picture in β-thalassemia major. [33][34][35][36] These findings conclusively indicate that the main QTLs of the β-thalassemia phenotype, including the Xmn-1 polymorphism, potentially interfere with binding of the inhibitory transcription factors responsible for silencing of Hb F expression. This is particularly important for consideration of targeted therapies interfering with the interaction of these transcription inhibitors with the β-LCR. Co-inheritance of the Xmn-1 polymorphism with specific β-thalassemia alleles has been suggested in β-TI patients. In this regard, a significant association has been described between the homozygous state of the IVS-II-I (G>A) mutation and the Xmn-1 polymorphism by Karimi et al. 15 In line with this finding, we also detected the presence of the Xmn-1 polymorphism in all six patients who had at least one IVS-II-I (G>A) allele (Table 2). Along with this, of 20 patients with the homozygous IVSI-5(G>C) genotype, 8 (40%) had at least one Xmn-1 allele, which may be in part indicative of a relationship between this allele and co-inheritance of the Xmn-1 polymorphism. Furthermore, the Xmn-1 polymorphism was observed in both patients homozygous for the FSC8/9 (+G) allele. Nevertheless, the number of our patients with either the IVS-II-I (G>A), IVSI-5(G>C) or FSC8/9(+G) mutation was not adequate for establishing a definite association. Larger population-based studies are recommended to confirm a potential link between certain HBB alleles and the Xmn-1 polymorphism. It has been suggested that the Xmn-1 polymorphism may be restricted to specific β-TI genotypic combinations.
Reportedly, the main genetic signature harboring the Xmn-1 polymorphism in β-TI patients has been inherited β0 alleles. 12 In parallel, we identified the Xmn-1 polymorphism in 17/31 (54%) and 3/4 (75%) of our patients who had β0/β0 and β0/β+ signatures, respectively. In accordance with our results, the Xmn-1 polymorphism has also been associated with β0-thalassemia mutations in 55%-60% of intermediate patients in earlier reports from Iran. 1,17 Likewise, the association of the Xmn-1 polymorphism with β0 mutations reached as high as 80% in an Iraqi study. 28 This association was also proposed in the study of Adekile et al., in which the Xmn-1 polymorphism co-inherited with β0 alleles was more frequent than with β+ alleles in β-TI patients. 29 To sum up, the proposed relationship between inheritance of β0 alleles and the Xmn-1 polymorphism highlights the role of this polymorphism as a strong modifying factor in severe β0-thalassemia cases.
There are some reports that are not in accordance with the defined role of the Xmn-1 polymorphism in lessening the clinical presentation or boosting the Hb F level in β-thalassemia patients. 31,[37][38][39] This notion can be understood from the identification of some patients harboring the Xmn-1 polymorphism together with the phenotypic picture of β-thalassemia major. 37,40 These observations may highlight the impact of some unidentified genetic determinants acting upstream of the Xmn-1 polymorphism. On the other hand, neither the Xmn-1 polymorphism nor mild β-globin mutations were detected in 13 (27%) of our patients, indicating the possible contribution of other QTLs such as polymorphisms in the BCL11A and HBS1L-MYB transcription factors. 7,30,41 More studies on the molecular aspects of β-TI patients can provide us with a wider view of genetic contributors to the phenotype of β-thalassemia syndromes. Besides, there may also be a possible role for other unrecognized factors acting independently of Hb F induction to alleviate the β-thalassemia phenotype. Since therapeutic strategies aiming at induction of Hb F have largely yielded inconsistent results in β-thalassemia syndromes, identification of Hb F-independent mechanisms provides a new promising field of research in this area.
CONCLUSION
Our results revealed the Xmn-1 polymorphism as the most prominent molecular basis of β-TI in Sistan and Balouchestan province. However, further studies are recommended to elucidate the possible roles of other known QTLs for a better understanding of the molecular basis of β-TI in our region.
ACKNOWLEDGEMENT
Special thanks to the patients and their families for their kind contribution to the study.
CONFLICT OF INTEREST:
Authors declare that they have no conflict of interests. | 2018-04-03T02:31:40.929Z | 2017-04-01T00:00:00.000 | {
"year": 2017,
"sha1": "461dddb0ff471c421849350a64de41e30ddba342",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "81e25569c1a594b8bb72e879498664ab150377d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33785077 | pes2o/s2orc | v3-fos-license | Evaluation of peripheral binocular visual field in patients with glaucoma: a pilot study.
OBJECTIVE
The objective of this study was to evaluate the peripheral binocular visual field (PBVF) in patients with glaucoma using the threshold strategy of Humphrey Field Analyzer.
METHODS
We conducted a case-control pilot study in which we enrolled 59 patients with glaucoma and 20 controls. All participants were evaluated using a custom PBVF test and central 24 degrees monocular visual field tests for each eye using the threshold strategy. The central binocular visual field (CBVF) was predicted from the monocular tests using the most sensitive point at each field location. The glaucoma patients were grouped according to Hodapp classification and age. The PBVF was compared to controls and the relationship between the PBVF and CBVF was tested.
RESULTS
The areas of frame-induced artefacts were determined (over 50 degrees in each temporal field, 24 degrees superiorly and 45 degrees inferiorly) and excluded from interpretation. The patients presented a statistically significant generalized decrease of the peripheral retinal sensitivity compared to controls for Hodapp initial stage--groups aged 50-59 (t = 11.93 > 2.06; p < 0.05) and 60-69 (t = 7.55 > 2.06; p < 0.05). For the initial Hodapp stage there was no significant relationship between PBVF and CBVF (r = 0.39). For the moderate and advanced Hodapp stages, the interpretation of data was done separately for each patient.
CONCLUSIONS
This pilot study suggests that glaucoma patients present a decrease of PBVF compared to controls and CBVF cannot predict the PBVF in glaucoma.
Introduction
The amount of binocular visual field loss in glaucoma was extensively investigated, considering its influence on the quality of life and the activities of daily living [1][2][3][4][5]. Both central and peripheral visions were investigated, but the algorithm to evaluate the binocular visual field is still to be determined as there is no common protocol on the strategy of testing or on the extent of the evaluated visual field [6][7][8][9][10][11][12].
The purpose of our study was to evaluate the peripheral binocular visual field (PBVF) in patients with glaucoma using the threshold strategy of Humphrey Field Analyzer. We designed a reproducible custom test and compared its results with controls and with the central binocular visual field (CBVF) test results of the patients themselves.
Participants
We conducted a case-control pilot study in which we enrolled 59 patients with various degrees of glaucomatous damage and 20 nonglaucomatous patients, who presented in the outpatient department of Ophthalmology of a tertiary care hospital. Informed consent was obtained from all 79 participants prior to testing. Our research adhered to the tenets of the Declaration of Helsinki.
The inclusion criteria for the cases were: confirmed diagnosis of glaucoma (based on the presence of an optic nerve head cup/disc ratio above 0.3; intraocular pressure above 21 mmHg measured with Goldmann applanation tonometry; visual field results of the Glaucoma Hemifield Test 'outside normal limits' and a minimum of three clustered points with significantly depressed sensitivity, of which one with p<1%); absence of other ocular disease (e.g., corneal opacity, active uveitis, moderate/dense cataract, vitreous deposits, retinal detachment, age-related macular degeneration, hypertensive retinopathy, diabetic retinopathy, retinal laser treatment, optic neuropathy other than glaucoma, amblyopia); absence of stroke or other known brain injuries (that may influence the results of visual field testing); at least 3 central visual field tests performed in the past; all of the visual field tests with false positive and false negative errors less than 10%; spherical refractive errors less than 6 diopters; cylindrical refractive errors less than 3 diopters. Patients with incipient cataract or an intraocular lens were not excluded.
The non-glaucomatous patients were considered controls and were enrolled in our study if they did not present any ocular finding except for incipient cataract, an intraocular lens, spherical refractive errors less than 6 diopters or cylindrical errors less than 3 diopters. The optic nerve head appearance was not suggestive of glaucoma, intraocular pressure was below 21 mmHg measured with Goldmann applanation tonometry in the absence of ocular hypotensive treatment, and visual field results of the Glaucoma Hemifield Test were 'within normal limits'.
Visual field testing
All visual field tests were performed for both cases and controls using the Humphrey Field Analyzer (HFA II, Carl Zeiss Meditec, Dublin, CA), as follows: one monocular Central 24-2 Threshold Test for each eye, Swedish Interactive Threshold Algorithm -Fast strategy, and one peripheral binocular custom test. We established the reliability criteria for the central monocular tests as: fixation losses ≤ 25%; false positive errors ≤ 10%; false negative errors ≤10%.
The CBVF was obtained from the results of the two monocular central tests using the best location model, which states that for corresponding visual field locations, the binocular sensitivity is given by the most sensitive location between the two eyes [7]. Monocular tests were performed using the lens correction indicated by the Field Analyzer based on the patient's refraction. The lens was placed in front of the tested eye into the lens holder. An eye patch was placed over the non-tested eye. The scores of the retinal sensitivities as given on the printout for each point of the monocular test were manually introduced into a spreadsheet (Microsoft Excel; Microsoft Corporation, Redmond, WA) and combined using an algorithm which selected for each binocular point the highest value of the two corresponding monocular points. The values of the retinal sensitivities were expressed in decibels, as the values are given in decibels by the Field Analyzer.
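The best location combination described above is a simple point-wise maximum over corresponding field locations. As a hedged sketch (the point labels and sensitivity values below are invented, not taken from the study), the spreadsheet step could equivalently be expressed as:

```python
# Sketch of the "best location" model: for each corresponding visual field
# location, the predicted binocular sensitivity (in dB) is the higher of the
# two monocular sensitivities. Locations and values here are hypothetical.

def best_location(left_db, right_db):
    """left_db, right_db: dicts mapping field location -> sensitivity (dB)."""
    return {loc: max(left_db[loc], right_db[loc]) for loc in left_db}

left = {"p1": 28, "p2": 0, "p3": 31}    # e.g. p2 not seen by the left eye
right = {"p1": 25, "p2": 27, "p3": 30}
print(best_location(left, right))  # -> {'p1': 28, 'p2': 27, 'p3': 31}
```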
We created the peripheral custom test by selecting System Setup from the Main Menu, then Additional Setup, then Custom Test and Create Threshold Test. Our peripheral test evaluated 54 points that extended from 30° to 75° in each temporal field, from 30° to 60° inferiorly and to 45° superiorly, as seen in Fig. 1. These points correspond to the region located at more than 30° from the fixation point of the Esterman test pattern.
Fig. 1 The peripheral binocular custom test pattern
Because the custom test was performed using the threshold strategy, appropriate lens correction was needed [13]. The perimeter's lens holder is designed for testing one eye at a time, so a custom-made trial frame was built in order to decrease to a minimum the frame-induced artefacts by using the minimum thickness of the frame components. The test was performed using the lens correction indicated by the Field Analyzer based on the patient's refraction for each eye; standard trial lenses were used. The Fixation Monitoring selection was 'Off' and the video eye monitor was aligned to the bridge of the nose. The participants were monitored throughout the test and were instructed to maintain the central fixation with both eyes. The reliability criteria for the peripheral test were: false positive errors ≤ 25%; false negative errors ≤ 25%. The printout contained only the numeric values (expressed in decibels) for each tested point, without gray scale, defect depth or other analysis available for a standard visual field test.
Statistical analysis
Based on the central monocular visual fields, the results were sorted according to Hodapp classification [14] and then by age. Briefly, the Hodapp classification stages the visual field loss as early, moderate and advanced based on the mean deviation (MD) and the number and location of points with different values of depressed retinal sensitivity.
For each age group and Hodapp stage, the PBVF was compared to controls using two-tailed paired t test and Pearson correlation coefficient. The output of statistical analysis was expressed as t-value and Pearson coefficient, considering a p-value less than 0.05.
For each Hodapp stage the relationship between the PBVF and CBVF was tested using the correlation coefficient (r). The parameter used for PBVF was the sum of the peripheral retinal sensitivities expressed in decibels. The parameters used for CBVF were the maximum MD between the eyes and the sum of the central retinal sensitivities expressed in decibels.
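As a hedged illustration of the analysis above (all numbers below are invented, not the study's data), the paired t statistic and Pearson coefficient can be computed directly from the paired sensitivity sums:

```python
# Minimal sketch of the statistics used above: a two-tailed paired t test and
# Pearson correlation between patient and control sensitivity sums (dB).
# The six paired values below are hypothetical.
import math

def paired_t(x, y):
    d = [a - b for a, b in zip(x, y)]
    n, m = len(d), sum(d) / len(d)
    var = sum((v - m) ** 2 for v in d) / (n - 1)
    return m / math.sqrt(var / n)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

patients = [310, 295, 280, 330, 300, 290]
controls = [420, 410, 395, 450, 415, 400]
# |t| above the critical value indicates a significant difference; a high r
# indicates the decrease is generalized rather than localized.
print(round(paired_t(patients, controls), 2), round(pearson(patients, controls), 2))
```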
Results
The baseline characteristics of the participants are summarized in Table 1. Examining the PBVF test results, we noticed areas with no retinal sensitivity for both cases and controls: there were at least 3 points without retinal sensitivity for 88% of the cases (52 cases out of 59) and for 80% of controls (16 controls out of 20). These points were located in the regions represented in Fig. 2. As they were surrounded by points with retinal sensitivity, we considered these peripheral deficits as frame-induced artefacts and decided to eliminate the entire region from statistical interpretation. The resultant binocular visual field extends to 50° in each temporal field, 24° superiorly and 45° inferiorly. The distribution of patients according to Hodapp stage and decade of age is presented in Table 2. The results of the tested correlation between the sums of peripheral retinal sensitivities obtained for the patients and for the controls from the corresponding decades are presented in Table 3. There is a statistically significant difference for the groups of 50-59 and 60-69 years of age, meaning that the peripheral retinal sensitivity is lower in patients with glaucoma compared to normal participants. For the same age groups, the Pearson coefficient is high, meaning a high correlation between the variation of peripheral retinal sensitivities in patients and normal participants; in other words, the decrease of the peripheral retinal sensitivity in patients with glaucoma is generalized.
Because of the size of the samples, the results derived from the other age groups and Hodapp stages were analyzed separately, by comparison with the results of the controls. For all 8 cases, we observed a general decrease of the peripheral retinal sensitivity.
For the decades 50-59, 60-69 and 70-79 included in the initial Hodapp stage, the correlation coefficient between the maximum MD and the sum of peripheral retinal sensitivities was r = 0.32 and the correlation coefficient between the sum of the central and peripheral retinal sensitivities was r = 0.39, indicating the absence of correlation between the parameters of CBVF and PBVF.
Discussion
The visual field loss in glaucoma has been extensively investigated, given its impact on the quality of life [1][2][3][4][5]. Owen et al. [2] focused their research on the binocular visual field as a measure for predicting visual loss to a level below the legal standard for driving. Kulkarni et al. [5] compared eight methods of staging visual field damage in glaucoma with a performance-based measure of the activities of daily living and self-reported quality of life; their conclusion was that the most accurate predictors of functional ability and quality of life in glaucoma were the amount of binocular visual field loss and the status of the better eye.
Regarding the central visual field, Crabb and Viswanathan [9] described a method of merging the results from monocular fields to obtain the integrated visual field. Nelson-Quigg et al. [7] compared four models of prediction of CBVF from the monocular results and concluded that the binocular summation and best location models provided the best predictions. All of these tests were performed using the threshold strategy of the Humphrey Field Analyzer; the latter corresponds to the method described by Crabb and Viswanathan [9] and is the method we used in our study to obtain the CBVF.
The peripheral binocular visual field has been investigated by two methods of computerized perimetry: the Esterman binocular test [8][9][10][15] and peripheral custom tests [8,15]. The Esterman binocular test is the only standard binocular test available on the Humphrey Field Analyzer [16] and it uses a non-adjustable high level of stimulus brightness that is unable to detect subtle defects of the visual field. A 10 decibel stimulus is presented at 120 points of the visual field over an extent of 150° of bilateral horizontal field width, with more points being tested in the inferior field than superiorly [16]. The results of the Esterman binocular test have been compared to the results of custom tests which also used a non-threshold strategy: Jampel et al. [8] designed two custom peripheral binocular visual field tests using non-adjustable levels of stimulus brightness, but with a decreased intensity compared to the intensity used for the Esterman test (20 and 22 decibels, respectively). Their results indicate that the custom tests provide a wider range of responses compared to the Esterman binocular test, but do not correlate better with patient assessment of vision, suggesting the need for a better method of testing, such as the threshold strategy [8]. The objective of threshold testing is to determine the differential sensitivity for each retinal point tested; the stimuli are either dimmed or made brighter in steps until the patient marks the seen stimulus [16].
Morescalchi et al. [15] designed another custom binocular program in order to quantify peripheral visual impairment. It used the screening 3-zone strategy, which provides only symbols for seen stimuli, relative defects and absolute defects, without the retinal sensitivity values [13]. However, the authors mention that the new test was proposed for evaluation of visual impairment only for legal purposes; its results correlated better with patient-reported assessment of vision in comparison with the binocular Esterman test [15].
According to European Glaucoma Society's guidelines [17], threshold strategy of computerized perimetry is the recommended standard for evaluation of glaucoma patients. The central 30° visual field is most investigated because this central area corresponds to the location of the great majority of retinal ganglion cells [17]. The PBVF is evaluated in most cases for legal purposes. However, the peripheral visual tests available on computerized perimetry can detect only the advanced defects [8,10,15,16] and most of the information about the peripheral visual field in glaucoma was obtained using kinetic perimetry [17].
We designed this pilot study to test the PBVF of patients with glaucoma with a reproducible custom binocular test using the threshold strategy of Humphrey Field Analyzer. We did not perform peripheral monocular visual field tests in order to integrate them into one PBVF as to the best of our knowledge, there is no such model for the peripheral visual field tested with computerized perimetry.
The results of our study suggest that glaucoma patients present a decrease of PBVF compared to controls for the patients aged 50-59 and 60-69 included in the Hodapp initial stage. Moreover, the pattern of this decrease is generalized. This result indicates the need for peripheral visual field evaluation in patients with glaucoma, not only for the advanced cases, but also for the Hodapp initial stage, according to the classification based on the results of the central monocular tests.

The fact that we did not find a correlation between the parameters of PBVF and CBVF suggests that the status of the central visual field cannot predict the status of the peripheral visual field, indicating again the need for peripheral visual field evaluation in patients with glaucoma.
The presence of frame-induced artefacts we observed in our study raises the question of the functional impact of eyeglasses in everyday life. From the patient's point of view, wearing eyeglasses may be a part of the functional binocular visual field, but from the researcher's point of view, it is difficult to assess the role of the frames on the visual field, as the manufacture of the trial frame is restricted by the size and shape of the available trial lenses. Nelson-Quigg et al. [7] used in their study a modified pediatric trial frame for binocular testing, but their test examined only the central 30° visual field and no frame-induced artefacts were reported [7]. According to the Humphrey Field Analyzer User Manual [13], the patient's glasses must be used to perform the Esterman binocular test if the patient requires glasses for the activities of daily living. Among the studies we found that used the Esterman binocular test [5,8,10], no information is provided about optical correction or the potential frame-induced artefacts for this test.
One limitation of our study may be the non-standardized trial frame we used for the evaluation of the PBVF, although its manufacture was constrained by the diameter of the standard trial lenses.
The small number of patients included in our study in moderate and advanced Hodapp stages is a consequence of the restrictive inclusion criteria, as these patients usually have other ocular findings that can influence the test results. We established restrictive inclusion criteria because we decided to investigate the effects of glaucoma alone on binocular visual field.
The difficulty in enrolling participants was even greater for the controls than for the cases. We found it challenging to exclude glaucoma and other ocular or brain conditions that may alter the visual field test results in a person aged over 50. Moreover, a glaucoma patient has learnt during numerous follow-up visits how a visual field test is performed and why this examination matters, whereas a non-glaucomatous participant may lack the motivation to complete the test.
In conclusion, our pilot study suggests that glaucoma patients show a generally depressed PBVF compared with controls and that CBVF cannot predict PBVF in glaucoma. These results indicate the need for peripheral visual field assessment in glaucoma using the threshold strategy.
Generic-reference and generic-generic bioequivalence of forty-two, randomly-selected, on-market generic products of fourteen immediate-release oral drugs
The extents of generic-reference and generic-generic average bioequivalence and intra-subject variation of on-market drug products have not been prospectively studied on a large scale. We assessed bioequivalence of 42 generic products of 14 immediate-release oral drugs with the highest number of generic products on the Saudi market. We conducted 14 four-sequence, randomized, crossover studies on the reference and three randomly-selected generic products of amlodipine, amoxicillin, atenolol, cephalexin, ciprofloxacin, clarithromycin, diclofenac, ibuprofen, fluconazole, metformin, metronidazole, paracetamol, omeprazole, and ranitidine. Geometric mean ratios of maximum concentration (Cmax) and area-under-the-concentration-time-curve, to last measured concentration (AUCT), extrapolated to infinity (AUCI), or truncated to Cmax time of reference product (AUCReftmax) were calculated using non-compartmental method and their 90% confidence intervals (CI) were compared to the 80.00%–125.00% bioequivalence range. Percentages of individual ratios falling outside the ±25% range were also determined. Mean (SD) age and body-mass-index of 700 healthy volunteers (28–80/study) were 32.2 (6.2) years and 24.4 (3.2) kg/m2, respectively. In 42 generic-reference comparisons, 100% of AUCT and AUCI CIs showed bioequivalence, 9.5% of Cmax CIs barely failed to show bioequivalence, and 66.7% of AUCReftmax CIs failed to show bioequivalence/showed bioinequivalence. Adjusting for 6 comparisons, 2.4% of AUCT and AUCI CIs and 21.4% of Cmax CIs failed to show bioequivalence. In 42 generic-generic comparisons, 2.4% of AUCT, AUCI, and Cmax CIs failed to show bioequivalence, and 66.7% of AUCReftmax CIs failed to show bioequivalence/showed bioinequivalence. Adjusting for 6 comparisons, 2.4% of AUCT and AUCI CIs and 14.3% of Cmax CIs failed to show bioequivalence. 
Average geometric mean ratio deviation from 100% was ≤3.2 and ≤5.4 percentage points for AUCI and Cmax, respectively, in both generic-reference and generic-generic comparisons. Individual generic/reference and generic/generic ratios, respectively, were within the ±25% range in >75% of individuals in 79% and 71% of the 14 drugs for AUCT and 36% and 29% for Cmax. On-market generic drug products continue to be reference-bioequivalent and are bioequivalent to each other based on AUCT, AUCI, and Cmax but not AUCReftmax. Average deviation of geometric mean ratios and intra-subject variations are similar between reference-generic and generic-generic comparisons. ClinicalTrials.gov identifier: NCT01344070 (registered April 3, 2011).
Background
One of the causes of economic inefficiency in healthcare is underuse of generic drug products [1], which is due, in part, to mistrust by healthcare professionals [2] and patients [3] and may be related to information availability [4], educational level [3], and healthcare system maturity [2,5,6].
An application for marketing approval of a generic drug product must provide evidence of its bioequivalence (BE) to a reference product that was approved based on clinical trials [7][8][9]. Although there are some differences among regulatory agencies worldwide [7][8][9], for immediate-release drugs, average BE testing is commonly performed in a single-dose, crossover study on healthy volunteers under fasting conditions, with measurement of parent drug blood concentration, non-compartmental analysis of logarithmically transformed area-under-the-concentration-time curve (AUC) and maximum concentration (C max ) data, and computation of the 90% confidence interval (CI) on the test/reference geometric mean ratio, which should generally fall within the 80-125% BE range [10,11].
Under current regulations, BE studies among on-market, reference-bioequivalent, generic products are not required, which raises the theoretical concern that a generic product at one end of the BE range might not be equivalent to another at the other end [23][24][25]. Few studies have addressed this issue, using retrospective analysis of reference-normalized data [26][27][28], simulation [29,30], or a prospective but restricted approach [31].
A one-size-fits-all BE approach may not adequately take intra-subject variability and therapeutic windows into account [32][33][34]. Intra-subject variability can be due to intra-drug variability (physiological metabolic variability), intra-product variability (unit to unit or batch to batch), or subject-by-product interaction. Generic intra-product variability and subject-by-product interaction are especially important for narrow therapeutic index (NTI) drugs, for which the 75/75 rule (75% of individual ratios are within ±25%), among other methods of analysis, has been proposed [10,35]. A simulation study was reassuring [25], and a few studies specific to antiepileptic medications [17,28,31] provided further support for the applicability of current BE standards to NTI drugs and led to revision of the American Epilepsy Society's guidelines concerning reference-to-generic and generic-to-generic switching [36]. However, there are still concerns that the results may not apply to countries with less stringent control over pharmaceutical quality [37].
In Saudi Arabia, the Saudi FDA requires demonstration of BE (applying the 80.00-125.00% BE limits on C max and AUC 90% CIs) before registering generic drug products; registered products are listed in the Saudi National Formulary, generic substitution for non-NTI drugs by pharmacists is permitted with the patient's consent, and generic prescribing is encouraged [38]. Although the Saudi FDA has a policy of reexamining products for which it receives complaints, it does not systematically assess the BE of on-market generic products. A 2015 study of a random sample of 178 physicians in 2 hospitals in Riyadh showed that although 52% supported substitution by local generic products, only 22% believed that Saudi FDA-approved local generic products are therapeutically equivalent to reference products [39].
Given the tremendous cost-saving and potential improvement in healthcare accessibility provided by generic drug products, the serious clinical implications of prescribing products with unacceptable bioavailability or switching between products that are not bioequivalent, the need to alleviate patients' and healthcare professionals' mistrust, and the paucity of empirical data worldwide, we designed the present study as a field test of the current BE standards. Our main aim was to determine the extent of BE between on-market generic and reference products and among reference-bioequivalent generic products. We also examined the percentages of individual generic/reference and generic/generic pharmacokinetic parameter ratios that fall outside the ±25% range.
Design
We identified the 15 oral, immediate-release, non-combinational drugs with the highest number of generic products on the Saudi National Formulary. We studied 14 of the 15 drugs because the reference (R) product of one of them (enalapril) was not available on the Saudi market. For each drug, we conducted a four-product, four-sequence, four-period, sequence-randomized, crossover BE study using the R product and 3 randomly-selected generic products (Ga, Gb, and Gc). The four sequences, namely, Ga-Gb-Gc-R, Gb-R-Ga-Gc, Gc-Ga-R-Gb, and R-Gc-Gb-Ga, were designed so that every product appears the same number of times within each period and each sequence, and every product follows every other product the same number of times. Washout periods and blood sampling frames were drug-specific (Table 1) and extended to about 7 and 5 drug plasma half-lives, respectively.
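The balance properties described above (each product appearing once per period, and each product following every other product exactly once, a Williams-type Latin square) can be checked mechanically. A short Python sketch over the four sequences as listed:

```python
from collections import Counter

# The four sequences from the study design (Ga, Gb, Gc, R)
sequences = [
    ["Ga", "Gb", "Gc", "R"],
    ["Gb", "R", "Ga", "Gc"],
    ["Gc", "Ga", "R", "Gb"],
    ["R", "Gc", "Gb", "Ga"],
]

# Each product appears exactly once in every period (column)
for period in zip(*sequences):
    assert sorted(period) == ["Ga", "Gb", "Gc", "R"]

# Every product follows every other product exactly once:
# 12 distinct ordered pairs, each with count 1
pairs = Counter((s[i], s[i + 1]) for s in sequences for i in range(3))
assert len(pairs) == 12 and all(v == 1 for v in pairs.values())
```

Both checks pass, confirming the design is balanced across periods and for first-order carryover.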
Participants
We enrolled healthy, non-pregnant adults (age 18-60 years) with a body mass index (BMI) ≤35 kg/m 2 who agreed to abstain from taking any medication for ≥2 weeks before and during the study, and from smoking, alcohol, and xanthine-containing beverages or food for ≥48 h before and during each of the four study periods. Volunteers were screened by medical history, physical exam, and laboratory tests that included complete blood count, renal profile, and liver profile. Subjects with a history of hypersensitivity to the drug to be tested, recent acute illness, or clinically important laboratory test abnormality were excluded. For menstruating women, the study was conducted 5 to 19 days after the last menstrual period and after obtaining a negative urine pregnancy test.
(Table 1 notes: the study could not distinguish product failure from failure to take the drug; all adverse events were minor and resolved spontaneously. HPLC, high performance liquid chromatography; LC-MS, liquid chromatography-mass spectrometry; BP, blood pressure; Flu-like, influenza-like.) Each volunteer gave informed consent at enrolment and was compensated based on the Wage-Payment model [40] in a prorated manner.
Procedures and interventions
Reference and generic drug products were purchased from retail pharmacies in Riyadh, Saudi Arabia. After volunteers had fasted for 10 h, drug products were administered with 240 ml of water at room temperature. Fasting from food and beverages continued for 4 h post-dosing; however, volunteers were allowed 120 ml of water every hour, except for 1 h before and 1 h after drug administration. A standardized breakfast and a standardized dinner were given 4 and 10 h after drug administration. Meal plans were identical in the four study periods. Volunteers remained ambulatory or seated upright (unless deemed medically necessary) for 4 h after drug administration. Strenuous physical activity was not permitted during study periods.
During each study period, in addition to a baseline blood sample, 17 blood samples were drawn (Additional file 1). Sampling schedules were drug-specific and were designed to collect an adequate number of samples before and around the expected C max and across 5 half-lives of the drug. Blood samples were collected in vacutainer tubes and centrifuged for 10 min at room temperature within 15 min of collection. Plasma samples were harvested into clean polypropylene tubes and placed immediately at −80 °C until analysed.
Compliance with study protocol was checked before drug administration in each study period. Volunteers were under continuous observation regarding occurrence of adverse events and compliance with study protocol during the first day of each period. In addition, they were asked about experiencing adverse events at the time of last blood collection of each period and at the beginning of subsequent periods.
Drug concentrations were blindly measured by in-house, locally-validated, reversed-phase high performance liquid chromatography (HPLC) [41][42][43][44][45][46][47][48][49][50][51][52] or liquid chromatography-mass spectrometry (LC-MS) [53,54]. Lower limits of quantification are listed in Table 1. Across assays, intra-assay coefficient of variation (standard deviation/mean × 100) was ≤3.1 to ≤14.4 and bias (measured concentration/nominal concentration × 100) was ≤5.0 to ≤17.0. A typical assay run included a series of 10 calibrators and several sets of four quality control samples (at 1 and 3 times the lower quantification limit and at 0.5 and 0.8-0.9 times the upper quantification limit). Samples from the four periods of each volunteer were analyzed in the same run. Samples with drug concentration greater than the upper quantification limit were reassayed after dilution. Samples with drug concentration below the lower quantification limit were assigned zero concentration. Drug concentrations of missing samples were assigned the average concentration of the two flanking samples in the same period.
Random sampling of generic drug products and randomization
For each of the 14 drugs, all of the Saudi formulary-listed generic products were assigned sequential numbers, the numbers were arranged randomly (by MMH) using an online random number generator [55], and the three generic products corresponding to the first three randomly-arranged numbers were selected and labeled Ga, Gb, and Gc, respectively.
For each of the 14 studies, blocked (block size = 4) randomization sequences were generated (by MMH) using an online program [55]. Randomization sequences were concealed from recruiting study coordinators and from potential participants.
Sample size
Sample size for each study was estimated using an online program [56], assuming an AUC I and C max ratio of generic to reference product of 1.10, a power of 0.9, a left equivalence limit of 0.80, a right equivalence limit of 1.25, and a two one-sided type I error of 0.05, Bonferroni-adjusted for 6 comparisons (i.e., α = 0.0083). Sample size was rounded and inflated by 3-8 subjects to allow for potential withdrawals/dropouts. Intra-subject coefficient of variation (CV) was estimated from published studies as 50% of reported total CV (Additional file 2).
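As a rough cross-check of such calculations, the two one-sided tests (TOST) sample size can be approximated in a few lines of Python. This is only a sketch: it uses the normal quantile rather than the exact t-based computation of the study's online program [56], applies an unadjusted α, and the function name and illustrative planning values below are assumptions, not the study's own inputs.

```python
from math import ceil, log
from statistics import NormalDist

def be_sample_size(cv, gmr=1.10, power=0.9, alpha=0.05,
                   lo=0.80, hi=1.25):
    """Approximate total N for an average-BE crossover study
    (normal approximation to the two one-sided tests).
    cv: intra-subject coefficient of variation (fraction);
    gmr: assumed true geometric mean ratio."""
    z = NormalDist().inv_cdf
    s2 = log(cv * cv + 1.0)                    # log-scale intra-subject variance
    delta = min(log(hi / gmr), log(gmr / lo))  # distance to the nearer BE limit
    n = 2.0 * s2 * (z(1 - alpha) + z(power)) ** 2 / delta ** 2
    return max(4, ceil(n))

# Illustrative planning values: CV 25%, true ratio 0.95, 80% power
n = be_sample_size(0.25, gmr=0.95, power=0.8)
```

The exact t-based calculation and the Bonferroni-adjusted α = 0.0083 used in the study yield somewhat larger numbers than this normal approximation.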
Outcome measures and analysis
The following pharmacokinetic parameters were determined using standard non-compartmental methods: AUC T (area-under-the-concentration-time curve from time zero to time of last measured concentration), calculated by the linear trapezoidal method; AUC I (area-under-the-concentration-time curve from time zero to infinity), calculated as AUC T plus the ratio of the last measured concentration to the elimination rate constant; AUC T /AUC I ; C max (maximum concentration) and T max (first time of maximum concentration), determined directly from the observed data; λ (apparent first-order elimination rate constant), calculated by linear least-squares regression from the last 4-8 quantifiable concentrations of a plot of natural log-transformed concentration versus time; t ½ (terminal elimination half-life), calculated as ln 2/λ; AUC 72 (area-under-the-concentration-time curve truncated to 72 h), calculated by the linear trapezoidal method; and AUC Reftmax (area-under-the-concentration-time curve to T max of the reference product, calculated for each subject), calculated by the linear trapezoidal method. When λ was not calculable in a given study period, the average of λs in the other periods of the same volunteer was used to calculate AUC I for that period. AUC Reftmax was not calculated when data for the reference product were missing. Each generic AUC Reftmax with a zero value was assigned 0.001 in order to perform log-transformation. Pharmacokinetic and statistical analyses included all evaluable data of all volunteers.
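A minimal sketch of the core non-compartmental calculations described above (linear trapezoidal AUC, terminal log-linear regression for λ, and extrapolation to infinity). The function name, the fixed number of terminal points, and the mono-exponential test profile are illustrative assumptions; the study's own handling of below-quantification samples and missing periods is omitted.

```python
import math

def nca(times, conc, n_terminal=4):
    """Non-compartmental sketch: times sorted ascending (h),
    conc are measured plasma concentrations (0 = below limit)."""
    cmax = max(conc)
    tmax = times[conc.index(cmax)]          # first time of Cmax
    # AUC to last measured concentration, linear trapezoidal method
    auc_t = sum((t2 - t1) * (c1 + c2) / 2.0
                for t1, t2, c1, c2 in
                zip(times, times[1:], conc, conc[1:]))
    # terminal elimination: least-squares slope of ln(C) versus t
    tail = [(t, c) for t, c in zip(times, conc) if c > 0][-n_terminal:]
    xs = [t for t, _ in tail]
    ys = [math.log(c) for _, c in tail]
    n = len(tail)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) /
             sum((x - xbar) ** 2 for x in xs))
    lam = -slope                             # apparent elimination rate constant
    t_half = math.log(2.0) / lam
    auc_i = auc_t + tail[-1][1] / lam        # extrapolate to infinity
    return {"Cmax": cmax, "Tmax": tmax, "AUC_T": auc_t,
            "lambda": lam, "t_half": t_half, "AUC_I": auc_i}

# Illustrative mono-exponential profile, C(t) = 100 * exp(-0.1 t)
times = list(range(25))
conc = [100.0 * math.exp(-0.1 * t) for t in times]
res = nca(times, conc)
```

For this synthetic profile the recovered λ equals the simulated 0.1/h and AUC I approaches the analytic value C(0)/λ = 1000, with a small positive trapezoidal bias.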
Primary outcome measures were C max , AUC T , and AUC I . Secondary outcome measures were T max , AUC Reftmax , and AUC 72 . The four products of each drug were compared by analysis of variance (ANOVA). The ANOVA model included product, period, sequence, and subjects nested in sequence. Mean square residual (MSR) was used to test the significance of period and product effects. The subjects-nested-in-sequence mean square was used to test the significance of the sequence effect. For each pharmacokinetic parameter (except T max ), six pairwise (Ga-R, Gb-R, Gc-R, Ga-Gb, Gb-Gc, and Ga-Gc) 90% CIs on the difference between means of log-transformed values (i.e., on the geometric mean ratio) were determined using MSR, without and with Bonferroni adjustment for 3 or 6 comparisons, and the antilogs of the 90% CI limits were compared to the BE limits of 80.00% and 125.00%. The null hypothesis (lack of bioequivalence) was rejected if the 90% CI was completely within 80.00% to 125.00%. If the null hypothesis was not rejected, the analysis would indicate either failure to show bioequivalence (the 90% CI crosses the BE limits) or bioinequivalence (the 90% CI is completely outside the BE limits). The following were also calculated: the percentage of generic products that are not bioequivalent to their reference product or not bioequivalent to each other based on C max , AUC T , AUC I , or AUC Reftmax ; the mean (SD) deviation of AUC T , AUC I , and C max generic-reference and generic-generic point estimates from 100%, and the percentages of the deviations that were <6, <10, or >13 percentage points; the percentage of individual C max , AUC T , AUC I , AUC 72 , T max , and AUC Reftmax generic/reference and generic/generic ratios that are <75% or >125%; and the percentage of drugs that failed to fulfil the 75/75 rule (i.e., 75% of individual ratios are within ±25%) for each of the pharmacokinetic parameters.
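The pairwise decision rule and the 75/75 tally described above can be sketched in a few lines of Python. This is an illustrative sketch, not the study's SPSS analysis: the normal quantile is used in place of the t quantile (so the CI is slightly narrow), and the MSR, sample sizes, and ratios below are made-up inputs.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def be_decision(gmr, msr, n1, n2, alpha=0.05, lo=0.80, hi=1.25):
    """90% CI on the geometric mean ratio from the ANOVA MSR
    (log scale), classified against the BE limits.
    Normal quantile used here in place of the t quantile."""
    se = sqrt(msr * (1.0 / n1 + 1.0 / n2))
    z = NormalDist().inv_cdf(1.0 - alpha)
    ci_lo, ci_hi = exp(log(gmr) - z * se), exp(log(gmr) + z * se)
    if lo <= ci_lo and ci_hi <= hi:
        status = "bioequivalent"              # CI entirely within limits
    elif ci_hi < lo or ci_lo > hi:
        status = "bioinequivalent"            # CI entirely outside limits
    else:
        status = "failed to show bioequivalence"  # CI crosses a limit
    return ci_lo, ci_hi, status

def passes_75_75(ratios, band=0.25, frac=0.75):
    """75/75 rule: at least 75% of individual test/reference
    ratios fall within 100% +/- 25% (ratios as fractions of 1)."""
    within = sum(1 for r in ratios if 1.0 - band <= r <= 1.0 + band)
    return within >= frac * len(ratios)

# Illustrative values: GMR 1.00, MSR 0.04 (CV ~ 20%), 40 subjects per product
lo_ci, hi_ci, status = be_decision(1.00, 0.04, 40, 40)
```

With these inputs the CI lies entirely inside 80.00%-125.00%, so the comparison is classified as bioequivalent; shrinking the variance while moving the ratio outside the limits flips the classification to bioinequivalent.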
Pharmacokinetic and statistical analyses were performed (by MMH) on a personal computer using Microsoft Excel (Version 2010) with add-ins (PK Functions for Microsoft Excel, JI Usansky, A Desai, and D Tang-liu, Department of pharmacokinetics and Drug Metabolism, Allergan Irvine, CA, USA) and IBM SPSS Statistics version 21 software, respectively.
Results
The 14 immediate-release, non-combinational, oral drugs with the highest number of generic products on the Saudi National Formulary that were assessed were, in descending order, ciprofloxacin (18 generic products), ranitidine, amoxicillin, paracetamol, atenolol, cephalexin, ibuprofen, diclofenac, metformin, omeprazole, metronidazole, clarithromycin, amlodipine, and fluconazole (7 generic products). Commercial name, manufacturer name, formulation, strength, lot/ batch number, manufacture date, and expiry date for the reference and the 3 randomly-selected generic products as well as the number of listed generic products are presented in Additional file 3. About 52% of the 42 generic products were manufactured in Saudi Arabia, 14% in other Gulf States, 31% in Arabic non-Gulf States, and 2% in Portugal.
Seven hundred healthy volunteers participated in 14 four-product, four-sequence, four-period, sequence-randomized, crossover BE studies. As shown in Table 1, the number of volunteers per study ranged from 28 to 80. The volunteers were 100% male in all but 3 studies, which had 3-6% females. Mean (SD) age ranged from 30.5 (5.0) to 36.9 (8.7) years and mean BMI from 23.0 (2.3) to 26.1 (3.7) kg/m 2 per study (grand mean age and BMI, 32.2 (6.2) years and 24.4 (3.2) kg/m 2 , respectively). Withdrawal from at least one period ranged from 0% to 19% per drug, with a total of 145 missed periods (out of 2800). Withdrawal reasons were mostly personal but also included inadequate venous access, skin rash, vomiting, high blood pressure, and influenza-like symptoms, as well as incompliance (Table 1). Adverse events occurred in 0% (paracetamol) to 7% (fluconazole and metronidazole) of volunteers (Table 1); all were minor and resolved spontaneously.
Baseline drug concentration was not detectable in any period for any of the 14 drugs, indicating adequate washout periods. There were 12 missed blood samples (2 for clarithromycin, 5 for fluconazole, and 5 for ranitidine) out of the 47,790 scheduled samples (excluding withdrawals); these samples were assigned the average concentration of the two flanking samples of the same volunteer in the same period. In all samples of one volunteer, there was a plasma peak that interfered with the diclofenac assay; this volunteer was excluded from further analysis. In four volunteers, there was no measurable drug concentration in any sample from one study period only (amlodipine, R, 3rd period; ibuprofen, R, 2nd period; metronidazole, Gb, 1st period; and paracetamol, Gb, 2nd period). The unmeasurable concentrations could be due to product failure, as the drugs were administered by one of the investigators and the volunteers denied incompliance when confronted; however, incompliance cannot be ruled out. Mean concentration-time and log-concentration-time curves of the reference and the three generic products of each of the 14 drugs are presented in Additional files 4 and 5, respectively. We were not able to calculate λ in a total of 27 (1%) of the 2647 pharmacokinetic analyses (clarithromycin: (1) Ga, (3) Gb, and (1) Gc; diclofenac: (4) Ga, (4) Gb, (3) Gc, and (7) R; omeprazole: (1) Gb, (2) Gc, and (1) R). The average of λs in the other periods of the same volunteer was used to calculate AUC I for these 27 analyses.
No outlier values for any of the pharmacokinetic parameters were identified or removed from analysis. AUC T , AUC I , C max , T max , λ, t 1/2 , C max /AUC I , AUC T /AUC I , AUC Reftmax , and AUC 72 of the reference and the three randomly-selected generic products of each drug are summarized in Additional file 6. AUC T /AUC I ranged from 90% (ciprofloxacin) to 98% (clarithromycin), indicating adequate sampling frames.
MSR from the ANOVA analysis and the calculated intra-subject CV for AUC T , AUC I , and C max of each drug are presented in Table 2. Significant product, period, and sequence effects on AUC T , AUC I , and C max of the 14 drugs are summarized in Additional file 7. MSR and intra-subject CV for AUC Reftmax and AUC 72 are presented in Additional files 8 and 9, respectively.
Average bioequivalence of 3 on-market generic products to the reference product of 14 drugs
Table 2 summarizes the results of the 42 predetermined BE analyses comparing three randomly-selected generic products to the corresponding reference product of each of the 14 drugs. The results are also depicted in Fig. 1. None of the AUC T or AUC I 90% CIs failed to show bioequivalence, and 9.5% of C max 90% CIs only barely failed to show bioequivalence. When analyses were adjusted for 3 comparisons, 2.4% of AUC T 90% CIs, 0% of AUC I 90% CIs, and 11.9% of C max 90% CIs failed to show bioequivalence, and none showed bioinequivalence. When analyses were adjusted for 6 comparisons, 2.4% of AUC T 90% CIs (clarithromycin Gc vs. R), 2.4% of AUC I 90% CIs (clarithromycin Gc vs. R), and 21.4% of C max 90% CIs (clarithromycin Ga and Gc vs. R; diclofenac Ga, Gb, and Gc vs. R; ibuprofen Gb and Gc vs. R; omeprazole Gb and Gc vs. R) failed to show bioequivalence, and none showed bioinequivalence.
Mean absolute (SD) deviation of point estimates from 100% in the 42 comparisons was 3.2 (1.8), 3.2 (1.4), and 5.4 (3.3) percentage points for AUC T , AUC I , and C max , respectively. Further, the deviation was <10 percentage points in 95.2%, 95.2%, and 81.0% of the AUC T , AUC I , and C max comparisons, respectively. Furthermore, 0% of the AUC T and AUC I and 9.5% of the C max deviations were >13 percentage points and 78.6%, 81.0%, and 50.0%, respectively, were <6 percentage points. Figure 2 (a) depicts BE analysis of AUC Reftmax between the three generic products and the corresponding reference product of each of the 14 drugs. The data are also summarized in Additional file 8. Twenty-two (52.4%) of the 90% CIs failed to show bioequivalence. In addition, 6 (14.3%) showed bioinequivalence. Figure 2 (b) depicts BE analysis of AUC 72 between the three generic products and the corresponding reference product of the two drugs with long half-life (amlodipine and fluconazole). BE was demonstrated by all of the six 90% CIs. The data are also summarized in Additional file 9.
Individual pharmacokinetic parameter ratios of 3 on-market generic products to the reference product of 14 drugs
There were 1950 individual generic-reference comparisons. The percentages of individual AUC T , AUC I , and C max ratios that were outside the ±25% range are presented in Fig. 3. On average, 16% of the AUC T ratios (ranging from 2% for cephalexin to 35% for atenolol and clarithromycin), 15% of the AUC I ratios (ranging from 2% for cephalexin to 34% for clarithromycin), and 32% of the C max ratios (ranging from 8% for metronidazole to 57% for diclofenac) were outside the ±25% range. Further, individual AUC T , AUC I , and C max ratios were within the ±25% range in >75% of individuals (i.e., fulfilled the 75/75 rule) for 79%, 79%, and 36% of the 14 drugs, respectively.
Out of 161 and 76 AUC 72 individual ratios for amlodipine and fluconazole, 16% and 1%, respectively, were outside the ±25% range (compared to 18% and 3%, respectively, for AUC T ). Figure 4 depicts the percentages of individual generic/reference T max and AUC Reftmax ratios that were outside the ±25% range. On average, 60% of the T max ratios (ranging from 43% for amoxicillin to 72% for ibuprofen) and 58% of the AUC Reftmax ratios (ranging from 27% for metformin to 89% for omeprazole) were outside the ±25% range. Individual T max and AUC Reftmax ratios were within the ±25% range in >75% of individuals for none of the 14 drugs.
Average bioequivalence among 3 on-market generic products of 14 drugs
Table 2 also summarizes the results of the 42 predetermined BE analyses among the three randomly-selected generic products of each of the 14 drugs. The results are also depicted in Fig. 5. Only one (2.4%) of each of the AUC T , AUC I , and C max 90% CIs failed to show bioequivalence. When analyses were adjusted for 3 comparisons, 2.4% of AUC T and AUC I 90% CIs and 9.5% of C max 90% CIs failed to show bioequivalence, and none showed bioinequivalence. When analyses were adjusted for 6 comparisons, 2.4% of AUC T and AUC I (clarithromycin Gb vs. Gc) and 14.3% of C max 90% CIs (cephalexin Ga vs. Gb and Gb vs. Gc; clarithromycin Gb vs. Gc and Ga vs. Gc; ibuprofen Gb vs. Gc and Ga vs. Gc) failed to show bioequivalence, and none showed bioinequivalence.
Mean absolute (SD) deviation of point estimates from 100% in the 42 comparisons was 2.5 (2.3), 2.6 (2.2), and 3.3 (3.1) percentage points for AUC T , AUC I , and C max , respectively. Further, the deviation was <10 percentage points in 95.2%, 95.2%, and 88.1% of the AUC T , AUC I , and C max comparisons, respectively. Furthermore, only 2.4% of the AUC T and AUC I and 7.1% of the C max deviations were >13 percentage points and 81.0%, 81.0%, and 59.5%, respectively, were <6 percentage points. Figure 6 (a) depicts BE analysis of AUC Reftmax among the three generic products of each of the 14 drugs. The data are also summarized in Additional file 8. Twenty-three (54.8%) of the 90% CIs failed to show bioequivalence. In addition, 5 (11.9%) showed bioinequivalence. Figure 6 (b) depicts BE analysis of AUC 72 among the three generic products of the two drugs with long half-life. BE was demonstrated by all of the six 90% CIs. The data are also summarized in Additional file 9.
Individual pharmacokinetic parameter ratios among 3 on-market generic products of 14 drugs
There were 1952 individual generic-generic comparisons. The percentages of individual AUC T , AUC I , and C max ratios that were outside the ±25% range are presented in Fig. 7. On average, 17% of the AUC T ratios (ranging from 1% for metronidazole and fluconazole to 40% for clarithromycin), 16% of the AUC I ratios (ranging from 1% for metronidazole and fluconazole to 38% for clarithromycin), and 32% of the C max ratios (ranging from 5% for fluconazole to 59% for diclofenac) were outside the ±25% range. Further, individual AUC T , AUC I , and C max ratios were within the ±25% range in >75% of individuals for 71%, 71%, and 29% of the 14 drugs, respectively. Out of 161 and 76 AUC 72 individual ratios for amlodipine and fluconazole, 19% and 1%, respectively, were outside the ±25% range (compared to 25% and 1%, respectively, for AUC T ). Figure 8 depicts the percentages of individual generic/generic T max and AUC Reftmax ratios that were outside the ±25% range. On average, 58% of the T max ratios (ranging from 42% for amlodipine to 73% for fluconazole) and 52% of the AUC Reftmax ratios (ranging from 18% for fluconazole to 82% for omeprazole) were outside the ±25% range. Individual T max and AUC Reftmax ratios were within the ±25% range in >75% of individuals for 0% and 7% of the 14 drugs, respectively.
Discussion
We assessed the adequacy of the commonly-used BE standards and of their application in a developing country by determining the extent of BE between on-market generic and reference drug products and among reference-bioequivalent generic drug products. We studied 42 generic products of 14 immediate-release, non-combinational, oral drugs with the highest number of generic products on the Saudi market. We conducted a four-product, four-period, four-sequence, sequence-randomized, crossover BE study with a planned power of 0.9 on the reference and three randomly-selected generic products of each of the 14 drugs. For each drug, we computed six pairwise 90% CIs on geometric mean ratios of AUC T , AUC I , C max , AUC Reftmax , and AUC 72 , without and with adjustment for multiple comparisons, and determined the percentages of individual untransformed ratios that fell outside the ±25% range. We found that: 1) On-market generic drug products continue to be reference-bioequivalent. 2) Reference-bioequivalent generic products are bioequivalent to each other. 3) Reference-generic and generic-generic average deviations are small and similar. 4) Reference-generic and generic-generic C max intra-subject variations are large but similar. 5) Two thirds of generic-reference and generic-generic AUC Reftmax comparisons failed to show average bioequivalence/showed bioinequivalence. (Table 2 caption: data represent geometric mean ratios and unadjusted 90% confidence intervals; the number of subjects analyzed in each comparison is given in parentheses in the first column; MSR is the mean square residual from analysis of variance (ANOVA); CV is the intra-subject coefficient of variation, calculated as 100 × (exp(MSR) − 1) 0.5 ; confidence intervals that cross the 80.00%-125.00% bioequivalence limits are bolded.)
The number of generic products for an off-patent drug is usually related to its market size. Therefore, it is reasonable to assume that the 14 drugs that we studied are among the commonly prescribed drugs in Saudi Arabia. They happened to include drugs for which rapid onset of action is clinically relevant (paracetamol, ibuprofen, diclofenac), drugs that are used chronically and for which the concept of switchability is relevant (metformin, amlodipine), drugs with a long half-life (fluconazole, amlodipine), and highly variable drugs (clarithromycin, diclofenac), but not NTI drugs. Almost all of the generic products were manufactured in Saudi Arabia or in a Middle Eastern state.
Marketed generic products of immediate-release, non-combinational, oral drugs continue to be bioequivalent to their corresponding reference products
A generic drug product is commonly approved for continued marketing based on a single pre-marketing study demonstrating BE to its reference product; retesting of BE post-marketing is not routinely required. Our results confirm the validity of such practice. Using the 80.00-125.00% BE range, we found that 100% of the AUC T and AUC I generic-reference 90% CIs showed BE and only 9.5% of the C max 90% CIs barely failed to show BE. Even after adjusting for 6 comparisons, only 2.4% of the AUC T and AUC I 90% CIs and 21.4% of the C max 90% CIs failed to show BE. Our results are in line with some [17,22] but not all [15,16] published studies. (Fig. 1 caption: Average bioequivalence of randomly-selected generic products to the reference product of 14 immediate-release, non-combinational, oral drugs. Each reference product (R) was compared to 3 generic products (Ga, Gb, Gc). Data represent generic/reference geometric mean ratios and unadjusted 90% confidence intervals; the shaded area indicates the area of bioequivalence (80.00%-125.00%). Panels: (a) AUC T ; (b) AUC I ; (c) C max .) Previous studies evaluated generic products on other national markets, examined only one [17] or two [16,22] generic products of a single drug, or were not performed in vivo [15].
The outcome of a crossover BE study is affected by its sample size and intra-subject variability [57]. We estimated intra-subject CVs from published studies and planned each of the 14 studies to have a power of 0.9. It is of note that for the 4 drugs that failed to show BE in some of the comparisons (clarithromycin, diclofenac, ibuprofen, and omeprazole), current study intra-subject CVs were larger than estimated (Additional file 2). Intra-subject variability can be related to inter-product variability; however, it can be also attributed to the drug substance itself (being readily affected by intra-subject physiological variability), intraproduct variability, analytical variability, or unexplained random variability [57]. In fact, in a separate study [58] that compared the reference ibuprofen product used in this study to itself, using the same settings and a larger sample size, the C max 90% CI also failed to show BE. This suggests that at least some of the failures to show BE in the current study may not be due to real genericreference (inter-product) differences.
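The dependence of a crossover BE study's outcome on sample size and intra-subject CV noted above can be illustrated with a small Monte Carlo sketch. The simplifying assumptions are ours: log-normal intra-subject variability, a normal-approximation CI on the mean log-ratio, and a hypothetical `be_power` helper rather than the study's actual power calculation.

```python
import math
import random
from statistics import NormalDist, mean, stdev

def be_power(n, cv, gmr=1.0, sims=2000, seed=1):
    """Monte Carlo power of an average-BE study: the fraction of
    simulated trials whose 90% CI of the geometric mean ratio lies
    within 80.00-125.00%.  Assumes n subjects in a 2x2 crossover,
    intra-subject coefficient of variation `cv`, and true geometric
    mean ratio `gmr`."""
    rng = random.Random(seed)
    sigma_w = math.sqrt(math.log(1.0 + cv * cv))  # within-subject log SD
    sd_diff = math.sqrt(2.0) * sigma_w            # SD of one subject's period difference
    z = NormalDist().inv_cdf(0.95)
    log_lo, log_hi = math.log(0.80), math.log(1.25)
    hits = 0
    for _ in range(sims):
        d = [rng.gauss(math.log(gmr), sd_diff) for _ in range(n)]
        m, se = mean(d), stdev(d) / math.sqrt(n)
        if m - z * se >= log_lo and m + z * se <= log_hi:
            hits += 1
    return hits / sims
```

Running the sketch shows the pattern the text describes: a study powered for a 20% CV loses most of its power if the realized CV turns out much larger, which is one way a truly bioequivalent product can fail to show BE.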
We found that the mean deviation of the generic/reference ratio from 100% was 3.2%, 3.2%, and 5.4% for AUC T, AUC I, and C max, respectively, and that the deviation was <10 percentage points in 95.2%, 95.2%, and 81.0% of the 42 comparisons. Similarly, the US FDA found a mean deviation of 3.47% for AUC T and 4.29% for C max in one retrospective study [59] and 3.56% for AUC T and 4.35% for C max in another [60], and that in about 98% of the studies, the AUC T difference was <10% [60]. Further, a reanalysis of 141 US FDA-approved antiepileptic generic products found that generic and reference AUC T and C max differed by <15% in 99% and 89% of BE studies, respectively [28]. Consistent with these BE findings, several meta-analyses and reviews have shown that there is no evidence that cardiovascular [18,19], antiepileptic [20], or immunosuppressive [21] reference drug products are superior to their generic counterparts in terms of efficacy or side effects.
Reference-bioequivalent generic drug products continue to be underused world-wide, mainly due to mistrust by healthcare professionals [2] and patients [3], in a way that may be dependent on maturity of the country's healthcare system [2,5,6]. The misbelief that generic medicines are counterfeits and the placebo effect of packaging and price differential are important to consider [61]. Further, prescribing a generic product by its brand name rather than its non-proprietary name (generic prescribing) may better convey the impression of individuality and improve patients' acceptance [62,63]. Importantly, information availability to healthcare professionals and patients has been identified as a facilitator of generic products uptake [4,39]. Our results provide strong supporting evidence of the post-marketing quality of generic products and of the adequacy of the current BE standards.

Fig. 2 Average bioequivalence of randomly-selected generic products to the reference product of 14 immediate-release, non-combinational, oral drugs. Each reference product (R) was compared to 3 generic products (Ga, Gb, Gc). Data represent generic/reference geometric mean ratios and unadjusted 90% confidence intervals. The shaded area indicates the area of bioequivalence (80.00%-125.00%). a Evaluation of area-under-the-concentration-time curve to time of maximum concentration of reference product, calculated for each subject (AUC Reftmax). b Evaluation of area-under-the-concentration-time curve truncated to 72 h (AUC 72). Only 2 drugs (amlodipine and fluconazole) in this study have terminal half-life >72 h
Marketed, reference-bioequivalent, generic products of immediate-release, non-combinational, oral drugs are bioequivalent to each other

Commonly, there are several same-market drug products that are linked by a chain of reference; theoretical concerns have been raised that reference-bioequivalent generic products may not be bioequivalent to each other if their BE point estimates were on the opposite sides within the BE range [23,24]. Simulation studies predicted that two reference-bioequivalent generic products are likely to be equivalent to each other only under relatively restricted conditions [29,30]. However, using reference-normalized data to indirectly estimate 90% CIs, analysis of 19 BE studies on 2 anti-epileptic drugs showed generic-generic BE in almost all cases [26], and analysis of 120 BE studies on three immunosuppressants as well as six selected drugs showed BE in 90% of AUC T and 87% of C max comparisons, with mean absolute deviation from 100% of 4.5% for AUC T and 5.1% for C max [27]. Further, a similar analysis of US FDA-approved antiepileptic generic products found that AUC T and C max differed by >15% in 17% and 39% of simulated generic-generic switches, respectively [28]. Nevertheless, there is little direct empirical evidence regarding the extent of BE among reference-bioequivalent generic products; two amoxicillin generic products did not show BE [16], whereas two metformin generic products [22] and the two most disparate generic lamotrigine products [31] did. In our prospective study of 42 direct generic-generic BE comparisons, only one (2.4%) comparison failed to show BE because of C max and one because of AUC T and AUC I. After adjusting for 6 comparisons, the percentages were 2.4% and 14.3%, respectively. Further, mean deviation of the generic/generic ratio from 100% was only 2.5%, 2.6%, and 3.3% for AUC T, AUC I, and C max, respectively, and the deviation was <10 percentage points in 95.2%, 95.2%, and 88.1% of the 42 comparisons.
Our results provide strong empirical evidence that it is very unlikely for two reference-bioequivalent generic products not to be bioequivalent to each other. Interestingly, in our study, the mean deviation of generic/reference ratios from 100% was in the 6-13 percentage points range in 21.4%, 19%, and 40.5% of the AUC T, AUC I, and C max comparisons, respectively. This suggests that, contrary to the result of a previous simulation study [29], even when the bioavailability difference between generic and reference products is in the 6-13 percentage points range, reference-bioequivalent generic products are still likely to be bioequivalent.
Theoretically, the change in drug exposure resulting from generic-generic substitution might be expected to be more pronounced than the change resulting from generic-reference substitution [23,24]. However, our results indicate that the two changes in exposure are similar. Mean absolute deviation of point estimates in percentage points was 3.2 vs. 2.5 for AUC T , 3.2 vs. 2.6 for AUC I , and 5.4 vs. 3.3 for C max in the genericreference and generic-generic comparisons, respectively. Further, the deviations were <10 percentage points in similar proportions of the two types of comparisons.
Generic-reference and generic-generic intra-subject variability of bioequivalent drug products
Since average BE focuses on mean difference rather than difference between variances or subject-by-product interaction, it is possible that a patient on a reference-bioequivalent but low-quality generic product may be sometimes overdosed and sometimes underdosed, and that a patient using two bioequivalent products may have the highest drug exposure with one product and the lowest with another [64]. Such possibilities may be of particular concern when switching patients from one NTI drug product to another [24] and are usually reflected in individual ratios of the pharmacokinetic parameters. Few published studies have addressed BE at the individual level [17,24,25]. Despite having 90% CIs within the 80-125% limits, 18% and 38% of individual cyclosporine generic/reference AUC and C max ratios, respectively, were <0.80 [24]; 0% of individual lamotrigine generic/reference AUC and C max ratios, and 3% and 18% of same-product generic/generic AUC and C max ratios, respectively, were outside the ±25% range [17]. A simulation study (assuming 20% inter-subject variability and 10% intra-subject variability) predicted that when the mean generic product's AUC is 80% to 123.5% of the reference product's AUC, 3-4.6% and 9-12% of individual generic/reference and generic/generic AUC ratios, respectively, would fall outside the 0.67-1.5 range [25].
We found that 16% and 17% of individual generic/reference and generic/generic ratios, respectively, were outside the ±25% range for AUC T, 15% and 16% for AUC I, and 32% and 32% for C max. Further, individual generic/reference and generic/generic AUC T, AUC I, and C max ratios fulfilled the 75/75 rule for 79% and 71%, 79% and 71%, and 36% and 29% of the 14 drugs, respectively. Based on a relatively large number of drug products, our results document the extent of intra-subject variability that would be expected despite fulfilment of average BE criteria and strongly suggest that the extents of generic-generic switchability and generic-reference switchability are similar.
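The ±25% range and the 75/75 rule applied to individual ratios above are simple counting criteria; a minimal sketch (the helper names are ours, not from the study):

```python
def fraction_outside(ratios, lo=0.75, hi=1.25):
    """Fraction of individual test/reference ratios outside the ±25% range."""
    return sum(1 for r in ratios if r < lo or r > hi) / len(ratios)

def meets_75_75(ratios):
    """The 75/75 rule as described in the text: at least 75% of
    individual ratios must lie within 0.75-1.25."""
    return (1.0 - fraction_outside(ratios)) >= 0.75
```

Unlike the average-BE criterion, which looks only at the mean, these individual-level criteria are sensitive to spread, which is why a drug can pass average BE while failing the 75/75 rule for C max.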
It is not clear how much of the observed intra-subject variability is due to inter-product rather than intra-product variability. In the simulation study, 11.1% of the reference/reference AUC ratios were predicted to fall outside the 0.8-1.25 range [25]. Further, 3% and 9% of individual lamotrigine reference/reference AUC and C max ratios [17] and 23%, 30%, and 30% of individual caffeine AUC T, AUC I, and C max ratios [65], respectively, were outside the ±25% range. Furthermore, when the cephalexin, ibuprofen, and paracetamol reference products used in this study were compared to themselves, respectively, 2%, 17%, and 2% of the individual ratios were outside the ±25% range for AUC T (compared to 2%, 8%, and 8% of the generic-reference ratios in the current study), 4%, 3%, and 2% for AUC I (compared to 2%, 8%, and 9% of the generic-reference ratios in the current study), and 25%, 33%, and 45% for C max (compared to 39%, 22%, and 26% of the generic-reference ratios in the current study) [58]. Together, the data strongly indicate that a major part of the intra-subject variability seen in average BE studies may not be related to comparing two products but rather to factors such as study setting, drug assay, and random variations in subject's physiologic status (for example, gastric emptying, intestinal transit speed, and luminal pH).

Fig. 5 Average bioequivalence among randomly-selected, reference-bioequivalent generic products of 14 immediate-release, non-combinational, oral drugs. Three generic products (Ga, Gb, Gc) were compared. Data represent generic/generic geometric mean ratios and unadjusted 90% confidence intervals. The shaded area indicates the area of bioequivalence (80.00%-125.00%). a Evaluation of area-under-the-concentration-time curve to last measured concentration (AUC T). b Evaluation of area-under-the-concentration-time curve extrapolated to infinity (AUC I). c Evaluation of maximum concentration (C max)
Large variability in AUC Reftmax and T max despite average bioequivalence
When time of onset of drug effect is important because of therapeutic or toxicity issues, it is recommended to perform non-parametric analysis of non-transformed T max values and/or evaluate the 90% CI of AUC truncated at the reference T max median or at the reference T max calculated for each subject (AUC Reftmax) [7,8]. Onset of effect may be important for only a few drugs in the current study; however, we used the data on all 14 drugs to examine the behaviour of T max and AUC Reftmax in general.
We found that two thirds of generic-reference and generic-generic AUC Reftmax comparisons failed to show BE or showed bioinequivalence. Further, on average, 60% and 58% of generic/reference and 58% and 52% of generic/generic individual T max and AUC Reftmax ratios, respectively, were outside the ±25% range. Moreover, generic/reference and generic/generic individual T max and AUC Reftmax ratios fulfilled the 75/75 rule in only 0-7% of the 14 drugs. The results confirm that average BE testing using AUC T , AUC I , and C max is insensitive to variability in T max and AUC Reftmax and suggest that intra-subject variabilities of the two parameters are similar and do not depend on whether a generic product is compared to a reference product or to another generic product.
Some patients' bad impression of generic products may be theoretically related to their different onset of effect as compared to reference products. However, this is not likely because onset of effect is mostly related to pharmacodynamic rather than pharmacokinetic characteristics. Further, since T max values are based on C max, which is, in turn, based on a single measurement of drug concentration, T max values are also very sensitive to study setting, subject's physiological status, assay variability, and random error. In fact, when the cephalexin, ibuprofen, and paracetamol reference products used in this study were compared to themselves [58], respectively, 46%, 63% and 71% of individual ratios were outside the ±25% range for T max (compared to 54%, 72% and 69% of the generic-reference ratios in the current study) and 71%, 77% and 67% for AUC Reftmax (compared to 75%, 76% and 68% of the generic-reference ratios in the current study). This strongly indicates that most of the observed generic-reference and generic-generic intra-subject variability in T max and AUC Reftmax is not due to inter-product differences and that the usefulness of T max and AUC Reftmax in BE evaluation may be very limited.

Fig. 6 Average bioequivalence among randomly-selected, reference-bioequivalent generic products of 14 immediate-release, non-combinational, oral drugs. Three generic products (Ga, Gb, Gc) were compared. Data represent generic/generic geometric mean ratios and unadjusted 90% confidence intervals. The shaded area indicates the area of bioequivalence (80.00%-125.00%). a Evaluation of area-under-the-concentration-time curve to time of maximum concentration of reference product, calculated for each subject (AUC Reftmax). b Evaluation of area-under-the-concentration-time curve truncated to 72 h (AUC 72). Only 2 drugs (amlodipine and fluconazole) in this study have terminal half-life >72 h
AUC 72 is as informative as AUC T

Two drugs in this study have long plasma half-life (around 49 and 29 h); the half-life of the other 12 drugs was <10 h. We were able to demonstrate average BE in all generic-reference and all generic-generic AUC T and AUC 72 comparisons. Further, similar percentages of generic/reference and generic/generic individual AUC T and AUC 72 ratios were outside the ±25% range. The results lend further support to using AUC 72 instead of AUC T for drugs with long plasma half-life [7-9].
Limitations
The interpretation of the results of this study may be limited by the following. 1) We only studied non-combinational drug products. However, BE standards for combinational and non-combinational products are the same, and it can be assumed that the results apply to combinational products. 2) We only studied solid immediate-release drug products, thus our results may not apply to liquid or modified-release products. 3) Our results may not be generalizable to other solid immediate-release drugs on the Saudi market since the drugs we studied were not randomly selected. Short of more relevant statistics, the number of on-market generic products is a reasonable reflection of the extent of drug utilization. Further, the generic products in our study were randomly selected. Thus, it would be expected that the results apply to an important portion of drug products on the Saudi market. 4) Although Saudi Arabia's BE regulations are very similar to most BE regulations worldwide, our results may not apply to similar drugs on other national markets. 5) Our study was not designed to partition intra-subject variability into its various components. Thus, it is not clear how much of the observed intra-subject variability is related to the generic products themselves (generic product quality variability or subject-by-product variability) and how much to methodological issues.

Fig. 7 Individual pharmacokinetic ratios among randomly-selected, reference-bioequivalent generic products of 14 immediate-release, non-combinational, oral drugs. Three generic products (Ga, Gb, Gc) were compared. Data represent percentage of individual generic/generic ratios that are <0.75 (closed bars) or >1.25 (open bars). a Evaluation of area-under-the-concentration-time curve to last measured concentration (AUC T). b Evaluation of area-under-the-concentration-time curve extrapolated to infinity (AUC I). c Evaluation of maximum concentration (C max)
6) We observed significant (unadjusted) period and sequence effects in 6 and 2 of the 14 studies, respectively. It is likely that the apparent significance is due in large part to multiple comparisons and relatively large sample sizes, since we also observed a significant product effect in 8 of the 14 studies. The presence of a period or sequence effect does not influence BE conclusions. Sequence and period effects may indicate unequal carryover, which is not likely given the length of the washout periods and the fact that baseline drug concentrations were undetectable in all periods for all 14 drugs. A sequence effect may also indicate that the groups (the 4 sequences) are different, which is also not likely because of randomization. However, it may also be due to a product-by-period effect, which cannot be ruled out. Finally, a period effect may indicate temporal changes, such as changes in patients' comfort level, familiarization with the study, compliance, venous access, and drug stability. The latter is not likely because analysis of all drugs was performed well within each drug's pre-established stability period. 7) We had loss to follow-up for one or more periods in 13 of the 14 studies; however, this resulted in negligible imbalance among the 4 sequences and negligible loss of power. 8) Finally, in retrospect, a few of the 14 studies did not have adequate power to show BE for C max; however, this would, if anything, strengthen the main conclusions of the study.
Conclusions
Based on studying 42 randomly-selected generic products of 14 immediate-release, non-combinational, oral drugs with the highest number of generic products on the Saudi market, we can conclude that: 1) On-market generic products continue to be reference-bioequivalent. 2) Reference-bioequivalent generic products are bioequivalent to each other, despite the presence of some generic-reference deviations that are >6 percentage points. 3) Reference-generic and generic-generic average deviations are small (on average 3-5 percentage points) and similar. 4)

Fig. 8 Individual pharmacokinetic ratios among randomly-selected, reference-bioequivalent generic products of 14 immediate-release, non-combinational, oral drugs. Three generic products (Ga, Gb, Gc) were compared. Data represent percentage of individual generic/generic ratios that are <0.75 (closed bars) or >1.25 (open bars). a Evaluation of time of maximum concentration (T max). b Evaluation of area-under-the-concentration-time curve to time of maximum concentration of reference product, calculated for each subject (AUC Reftmax)
Success and safety of endoscopic treatments for concomitant biliary and duodenal malignant stenosis: A review of the literature
Synchronous biliary and duodenal malignant obstruction is a challenging endoscopic scenario in patients affected with ampullary, peri-ampullary, and pancreatic head neoplasia. Surgical bypass is no longer the gold-standard therapy for these patients, as simultaneous endoscopic biliary and duodenal stenting is currently a feasible and widely used technique, with a high technical success in expert hands. In recent years, endoscopic ultrasonography (EUS) has evolved from a diagnostic to a therapeutic procedure, and is now increasingly used to guide biliary drainage, especially in cases of failed endoscopic retrograde cholangiopancreatography (ERCP). The advent of lumen-apposing metal stents (LAMS) has expanded EUS therapeutic options, and changed the management of synchronous bilioduodenal stenosis. The most recent literature regarding endoscopic treatments for synchronous biliary and duodenal malignant stenosis has been reviewed to determine the best endoscopic approach, also considering the advent of an interventional EUS approach using LAMS.
INTRODUCTION
Ampullary and periampullary malignant diseases, such as pancreatic cancer, cholangiocarcinoma, gallbladder cancer, and peripancreatic metastatic lesions, are usually diagnosed at an advanced stage in which surgery is no longer indicated or the patients are unfit for surgical resection. Therefore, the treatments these patients can undergo are only palliative and, in some cases, chemotherapy is not indicated due to end-stage disease. The survival of these patients is often not longer than 6 mo [1,2]. Ampullary and periampullary malignant disease can cause biliary or duodenal obstruction, and in previous case series between 6% and 9% of patients, following the placement of plastic stents for malignant biliary obstruction, developed a duodenal obstruction requiring surgical palliation with a gastrojejunostomy (GJS) [3]. Today, in the presence of duodenal stenosis, endoscopic stenting is preferred to GJS for palliation of gastric outlet obstruction (GOO), partly because of lower procedural costs and shorter hospital stay [4,5], even though readmission and mortality rates can be similar [6]. The advent of the self-expandable metal stent (SEMS) has widened the therapeutic options, increasing the quality of life for these patients. The same consideration can be made for malignant biliary obstructions, for which hepaticojejunostomy has been supplanted by biliary SEMS placement. The clinical success rate of duodenal SEMS placement in patients affected by GOO ranges from 84% to 93%, with a technical success rate between 93% and 97% [7-9]. Tissue overgrowth and ingrowth, SEMS displacement, and impaction of solid food are possible adverse events after self-expandable stent placement; these require further endoscopic intervention in 20%-25% of patients [10].
The treatment can be even more challenging when biliary and duodenal obstruction arise simultaneously. We aimed to systematically evaluate the published literature on the endoscopic approaches to bilioduodenal stenosis, also taking into account the advent of the EUS approach to the biliary tree using the lumen-apposing metal stents (LAMS).
LITERATURE SEARCH
A search of the literature was done in order to identify studies including patients with synchronous biliary and duodenal stenosis, published from January 1st, 2000 until June 2018, using the main electronic databases (PubMed, Scopus, Google Scholar, and the Cochrane Library). The medical literature was searched using the following keywords: biliary stenosis, duodenal stenosis, stenting, self-expanding metallic stent, SEMS, lumen-apposing metal stent, and LAMS. Only studies in English were evaluated. Studies considering outcomes of non-synchronous biliary and duodenal stenosis were excluded.
Technique
A classification of synchronous malignant bilioduodenal stenosis was proposed by Mutignani et al [11] in 2007. Three different types of synchronous bilioduodenal stenosis have been described based on clinical scenarios: type I, in which duodenal strictures are present in the duodenal bulb or in the duodenal genu; type II, in which the duodenal stenosis involves the papilla; and type III, in which duodenal stenosis occurs distally from the papilla, without its involvement. On the basis of this classification, the type of synchronous biliary stenosis determines the endoscopic palliative approach.
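As a quick reference, the three scenarios of the Mutignani classification can be encoded in a small hypothetical helper (the location labels and function name are our own shorthand, not from the original paper):

```python
def mutignani_type(location, involves_papilla):
    """Classify a synchronous bilioduodenal stenosis per the scheme
    described in the text.  `location` is the duodenal stricture site:
    'bulb' or 'genu' (proximal to the papilla) or 'distal' (below it)."""
    if involves_papilla:
        return "II"   # type II: the stenosis involves the papilla
    if location in ("bulb", "genu"):
        return "I"    # type I: stricture in the duodenal bulb or genu
    if location == "distal":
        return "III"  # type III: distal stricture, papilla spared
    raise ValueError(f"unknown location: {location!r}")
```

The classification matters operationally: types I and II are the scenarios in which reaching or cannulating the papilla through a duodenal stent is hardest, as the following paragraphs describe.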
The most difficult scenarios for draining the biliary tree usually occur in the presence of the type I or II synchronous duodenal stricture. Nevertheless, if the duodenoscope passes through the duodenal stricture, endoscopic retrograde cholangiopancreatography (ERCP) can be performed, whereas if the duodenoscope does not pass across the stricture a duodenal uncovered metal stent has to be deployed. The common bile duct (CBD) is cannulated through the mesh of the duodenal stent and, after the sphincterotomy, the duodenal mesh can be dilated by pneumatic dilation. If the papilla is "jailed" by the enteral stent, argon plasma coagulation or rat-tooth forceps can be used to trim the enteral mesh to gain access to the ampulla.
Evidence
There are currently no published studies suggesting that biliary stenting should not be attempted because of duodenal stenosis. The reported technical success of duodenal and biliary stent insertion in synchronous bilioduodenal stenosis ranges from 82.1% to 94.4%. The literature search found three prospective studies and eight retrospective studies regarding the efficacy of combined biliary and duodenal stenting during the same session (Table 1) [12-20]. The prospective study by Mutignani et al [11], published in 2007, comprised a consecutive series of 64 patients, of whom 14 had concurrent biliary and duodenal obstruction. After concomitant bilioduodenal stenting, duodenal SEMS occlusion is not independently associated with a higher risk of biliary SEMS occlusion; moreover, the majority of patients do not require further re-intervention for stent occlusion.
At present, the largest series of patients with synchronous bilio-duodenal malignant strictures comes from the Japanese group of Hori et al [21] , published in 2018. They retrospectively evaluated a total of 109 patients. The authors reported a technical success for resolution of synchronous bilioduodenal strictures of 99.1%, with an improvement of symptoms for biliary and duodenal obstruction of 81.7%. The rate of recurring biliary obstruction was 22.9%, and that of recurring duodenal obstruction was 11.9%, with a median time of 87 and 76 d, respectively. In the multivariable analysis, the significant data that emerged from this study was that duodenal uncovered SEMS was significantly associated with recurrent biliary obstruction. On the other hand, no predictive factors for recurrent duodenal obstruction were found, and the type of the duodenal SEMS was not associated with the duodenal obstruction time.
Synchronous bilioduodenal stenting was first reported in 1994 [22]. Duodenal fully-covered SEMS (FCSEMS) carry a risk of obstructive jaundice or pancreatitis, because the stent covering may occlude the papilla. Although the effectiveness and safety of placement of an FCSEMS across the major papilla have been reported [23], to our knowledge no study comparing the clinical outcomes of duodenal uncovered SEMS vs FCSEMS in patients affected by synchronous bilioduodenal malignant strictures has been published. Hamada et al [24] showed that the placement of a duodenal stent is a risk factor for the dysfunction of a biliary SEMS, likely caused by increased duodeno-biliary reflux.
ROLE OF EUS IN THE MANAGEMENT OF SYNCHRONOUS BILIARY AND DUODENAL STENOSIS: EUS AS RESCUE THERAPY WHEN ERCP FAILS
In recent years, endoscopic ultrasonography (EUS) has evolved from a diagnostic to a therapeutic tool, and is now increasingly performed for endoscopic biliary drainage (BD) in cases of a failed attempt at ERCP [25,26].
Technique
For EUS-guided drainage, a linear-array echoendoscope with a 3.8 mm working channel must be used because it allows the passage of large accessories. Two possible puncture routes for EUS-BD can be used: trans-gastric, for left intrahepatic bile duct drainage, or trans-duodenal (from the bulb), for drainage of the extrahepatic bile duct.
Two major EUS-guided approaches have been used: the transgastric intrahepatic approach and the transduodenal extrahepatic approach, the latter with three different techniques: (1) EUS-guided choledochoduodenostomy; (2) the EUS-guided rendez-vous technique (EUS-RV); and (3) EUS-guided antegrade biliary stenting. EUS-RV is indicated in patients with a previously failed attempt at ERCP but good endoscopic access to the Vater's papilla or to the anastomotic site. Unlike trans-luminal stenting, EUS-RV preserves the anatomical integrity of the biliary ducts, without creating a fistula between the biliary duct and the duodenal lumen.
When performing EUS drainage, the use of color Doppler is mandatory to identify possible interposed vessels between the lumen wall and the selected duct. The selected duct can be punctured for drainage with a 19- or 22-gauge (G) needle. The 19 G needle is preferable because it allows the passage of a 0.035-inch guide-wire, which provides more stiffness. The 22 G needle lodges only a 0.018-inch guide-wire, which carries a higher risk of displacement during accessory exchanges. After accessing the selected duct with the 19 G or the 22 G needle, injection of a contrast medium can be useful to perform a cholangiogram to confirm the correct position of the needle inside the duct and to clearly identify the stricture. Thereafter, using X-ray guidance, the guide-wire is advanced in the duct through the needle [27-31].
If the chosen drainage is transmural through the gastric wall, the intrahepatic ducts of the left liver lobe can be drained [hepaticogastrostomy (HGS)], while if the chosen duct is the CBD, the drainage can be performed from the bulb [choledochoduodenostomy (CDS)]. CDS can be performed using LAMS, which do not necessarily require the placement of a guidewire, obtaining direct access into the CBD when it is dilated. If the guidewire exits the ampulla, ERCP can be done to complete the drainage using the rendez-vous technique. When the LAMS is released through the puncture route, or across the stenosis or the papilla in an antegrade way, different accessories can be used to enlarge the puncture site, such as bougies (6 or 7 Fr), balloons for pneumatic dilation (4 or 6 mm), or a cystotome (8.5 Fr). However, the use of LAMS has currently supplanted this route and has become the main technique for BD. Both plastic and metal stents are used for HGS or choledochoduodenostomy, though partially-covered and fully-covered SEMS (FCSEMS) are most often used to prevent stent migration and bile leakage.
LAMS have recently changed the management of synchronous bilioduodenal stenosis. EUS biliary drainage is a salvage therapy for type I and type II bilioduodenal stenosis when ERCP fails, or a primary modality, especially in the presence of synchronous GOO and in patients with distorted anatomy. In malignant biliary obstruction (MBO) with synchronous GOO, ERCP may not be possible because the papilla cannot be reached [32].
EUS-BD is generally performed using a direct transluminal approach. Less frequently, the antegrade approach is used. If enteral stenting is needed for synchronous GOO to allow the passage of a duodenoscope, ERCP is the preferred way to approach the CBD, despite high failure rates [33] . In these cases, EUS-BD can be considered a primary approach (Figure 1). The two possible EUS approaches are the CDS and HGS. Literature data on EUS-BD report an acceptable technical and clinical success. In a systematic review involving 1192 patients in 42 studies, EUS-BD was shown to have a technical and clinical success rate of 94.7% and 91.7%, respectively [34] . These data were recently confirmed in an international multicenter prospective series, where technical and clinical success rates were 95.8% and 89.5%, respectively, with an adverse event rate of 10.5% [35] . However, in consideration of the significant rates of adverse events with EUS-BD, ERCP remains the standard of care for the management of biliary obstruction, with EUS-BD as a rescue modality when ERCP fails. In the presence of malignant biliary obstruction with synchronous GOO, EUS-BD or percutaneous transhepatic biliary drainage (PTBD) can be considered the first-line treatments. In these cases, the majority of centers prefer PTBD to EUS-BD because of the higher expertise and experience of the radiologist in performing the procedure compared with the endoscopist performing EUS-BD.
Evidence
Literature data comparing EUS-BD with PTBD in patients with MBO have shown comparable technical success rates (94.1% for EUS-BD vs 96.9% for PTBD) and clinical success (87.5% for EUS-BD and 87.1% for PTBD), but with fewer adverse events for EUS-BD (8.8% for EUS-BD vs 31.2% for PTBD, P = 0.022) [36] . Nevertheless, overall comparative studies of the two modalities appear to favor EUS-BD [37,38] . Moreover, the major advantage of EUS-BD compared with PTBD is the possibility of performing the procedure during the same session of the failed ERCP [39] . Overall, EUS-BD appears to be an important therapeutic option in the management of MBO in the presence of synchronous GOO, and the major limitation of the implementation of EUS-BD is a lack of expertise. Recent developments, such as the one-step LAMS for EUS-BD, make the procedure easier and safer. In a systematic review of prospective and retrospective series, including series in which the EUS-BD was performed in two steps, the adverse event rate was 23.3%, including peritonitis 1.3%, bleeding 4%, cholangitis 2.4%, pneumoperitoneum 3%, bile leakage 4%, stent migration 2.7% and abdominal pain 1.5% [31] . The recent advent of LAMS and the one-step EUS-BD stent system has increased the safety of EUS-BD, with an overall rate of adverse events reported as ranging from 7% to 10.5% [40] . Results of the studies in which EUS for the treatment of bilio-duodenal malignant stenosis was performed are summarized in Table 2 [41][42][43][44][45][46][47][48][49][50][51] .
Stent migration is another potential serious adverse event of EUS-BD, especially in the setting of HGS. This risk can be minimized by ensuring appropriate stent length and avoiding the placement of partially covered metal stents. If stent migration occurs, any collection should be drained via an interventional radiology approach. Finally, patients with cholangitis or bleeding following EUS-BD should also be managed by a multidisciplinary team, including a radiologist performing PTBD for cholangitis and for embolization, with surgical backup for refractory bleeding.
CONCLUSION
Synchronous biliary and duodenal malignant obstruction is a challenging endoscopic scenario in patients affected by periampullary neoplasia. Surgical bypass has long been the gold-standard therapy for these patients. Synchronous endoscopic biliary and duodenal stenting is a feasible technique with a high rate of technical success. ERCP plus duodenal stenting is currently the preferred endoscopic therapy for these patients. We suggest performing endoscopic transpapillary biliary drainage before duodenal stent insertion if the duodenoscope can pass through the duodenal stricture; if the stricture cannot be passed, deploying an uncovered duodenal metal stent across it before performing ERCP is recommended. EUS-BD should be performed by expert operators in cases of type I and type II bilioduodenal stenosis according to the Mutignani classification, when ERCP fails or as the primary modality in patients with distorted anatomy. The optimal clinical results and the low number of patients reported in the published series discussed in this paper may reflect a possible bias. The future development of dedicated accessories and instruments, supported by further data, can contribute to the continual evolution of EUS-BD, which could become the first-line treatment option in patients with MBO in the near future. | 2019-03-11T17:24:59.994Z | 2019-02-27T00:00:00.000 | {
"year": 2019,
"sha1": "9fdc08be500ef84512ec64de345978a9d93f9f6b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4240/wjgs.v11.i2.53",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b70f3b7f05913ee3681519f194bbb6d12feab329",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
116107312 | pes2o/s2orc | v3-fos-license | Generator on Arcadyev-Marx scheme with peaking of the pulse front in its cascades for food disinfecting
Purpose. To obtain experimentally, on a pulse voltage generator load of less than 50 ohms formed by more than two working chambers containing a water-based product, a high-voltage pulse front duration of less than 1.5 nanoseconds, which increases the efficiency of disinfection of the treated products. Methodology. To obtain high-voltage pulses in the working chambers (the generator load), the pulse generation method according to the Arkadyev-Marx scheme was used. The pulses on the load were measured with a low-ohm resistive voltage divider, transmitted over a broadband coaxial cable, and recorded with a C7-19 oscilloscope with a 5 GHz bandwidth. The working chambers were filled with water and consisted of an annular PTFE body and metal electrodes forming the bottom and cover of the chamber, with flat linings of food-grade stainless steel for contact with the food product inside the chamber. Results. The high-voltage pulses on a generator load of about 50 ohms or less have a trapezoidal shape with a rounded apex and a base duration of no more than 80 ns. The experimentally obtained pulse amplitudes on the generator load are up to 18 kV. As the load resistance decreases, the pulse amplitude decreases, and the front and overall pulse durations shorten because of the accelerated discharge of the cascade capacitive storages. Originality. For the first time, we have experimentally obtained a pulse front duration t_f ≈ 1 ns on a generator load in the form of three parallel working chambers with water, the active resistance of each being less than 50 ohms. In addition, we have experimentally obtained a stable 9-10 channel triggering mode of the trigatron-type spark gap in a five-cascade pulse voltage generator with step-by-step sharpening of the pulse front in its cascades (GPVCP). Practical value. 
We have experimentally obtained a nanosecond pulse front duration on the GPVCP load, which opens the prospect of industrial application of such generators for the microbiologically disinfecting treatment (inactivation of microorganisms) of water-containing food products. References 6, figures 8.
Introduction. Generators based on the Arkadyev-Marx scheme are widely used in high-voltage pulse technology [1]. Due to the ability to obtain nanosecond fronts at voltages of 100 kV and more on the load of such generators [2], and repetition rates of 200 pulses per second or more, they are promising for the decontaminating treatment of liquid water-containing products.
The processing of products by pulsed electric fields (PEF) with nanosecond fronts makes it possible to preserve the initial quality of food products, unlike traditional thermal methods, while reducing the specific energy consumption for inactivating the microorganisms in them [3,4]. The PEF method, or a complex of high-voltage impulse actions (CHIA), uses short electric pulses, which can be obtained with the help of Arkadyev-Marx generators. Decontamination treatment is carried out in working chambers with the processed product, which form the load for the high-voltage pulse generator. The typical duration of pulses of strong pulsed electric field in the working chambers can vary from 50 ns to 1 μs, with amplitudes ranging from 5 kV/cm to 200 kV/cm without breakdowns. Several working chambers with a water-containing product connected to the generator output form a low-impedance load for the generator, which may not exceed 50 Ω and can lead to undesired elongation of the voltage pulse front. In [5], a method for treating liquids and flowing products in several working chambers is proposed, which makes it possible to avoid the undesirable elongation of the front through the use of pulse-front sharpeners. Arkadyev-Marx pulse voltage generators operating in the regime of step-by-step sharpening of the pulse front (GPVCP) make it possible in practice to solve the problem of undesirable front elongation. In this paper, an experimental check of the operation of the GPVCP on a load of not more than 50 Ω, in the form of three working chambers with water connected in parallel, without elongation of the front of the pulses on the load, was carried out.
The goal of the work is to obtain experimentally, on a generator load of less than 50 Ω formed by more than two working chambers with a water-containing product, a high-voltage pulse front duration of less than 1.5 nanoseconds, which increases the efficiency of disinfection of the processed products.
Experimental installation. The installation circuit is shown in Fig. 1. In Fig. 1, the sections of the power line and the capacitive storages pre-charged to voltage U_main are shaded; N is the number of cascades; 1 - capacitive storage of a cascade with capacitance C_st, which can be a long line with distributed parameters; 2 - power line, a broadband homogeneous long line with distributed parameters, with distance h_e between the forward and reverse current conductors and wave impedance Z_e; 3 - cascade discharger; 4 - capacitance C_gap between the electrodes of discharge gap 3; 5 - starting discharger of the GPVCP; 6 - capacitance C_gap.0 between the electrodes of discharge gap 5; 7 - starting system (device); 8 - long transmission line with wave impedance Z_p = Z_e between device 7 and starting discharger 5; 9 - load with impedance Z_load; t_tr.p and t_tr.k are the travel times of the electromagnetic wave along line 8 and between two adjacent cascade dischargers, respectively; k is the number of the cascade (k = 1, 2, …, N); h_c is the length of the discontinuity in the live current conductor into which the capacitive storage of the k-th cascade is connected.
Both the CH1 and CH2 chargers are assembled according to the Cockcroft-Walton voltage multiplication scheme [6] and are powered by step-up transformers fed with an adjustable AC voltage from the CS control system. Starting device 7 contains a ceramic capacitor K15-10 with a capacitance of 10 nF (C_ss) and a two-electrode spark gap S_ss triggered by overvoltage (self-breakdown).
The GPVCP on which the experiments were conducted has 5 cascades. The capacitive storages of the cascades, C_st = 3×10⁻⁹ F, are made in the form of low-resistance strip lines (which can be considered flat capacitors when charged) from foil-clad glass-fiber laminate, with plate height and width of 0.45 m and dielectric thickness h_c = 5 mm.
The power line of this GPVCP is made in the form of a strip line with a distance between the forward and reverse current conductors of h_e = 50 mm [2]. The return current conductor is a brass sheet 1 m long, 0.4 m wide and 1 mm thick, on which lies a sheet of plexiglass 8 mm thick. The remaining space between the forward and reverse conductors is filled with air at atmospheric pressure.
The general view of a five-cascade pulse voltage generator with a step-like exacerbation of the front of the generated pulses, on which experiments are performed, is shown in Fig. 2.
The cascade dischargers are of the trigatron type, air-filled at atmospheric pressure. Each of the two electrodes of a discharger is made in the form of a metal plate fixed to a plexiglass support 5 mm thick, in which 10 holes are made at equal distances from each other. Into these holes are inserted 10 needle electrodes, connected directly to the corresponding capacitive storage plate of the GPVCP cascade and, through the inductance L_d ≈ 0.5 μH, to the plate.
The inter-electrode gaps in the dischargers are adjustable in length. This design of the dischargers provides a uniform electric field in them while the GPVCP cascades are charged and a sharply inhomogeneous field during discharge. Therefore, when the GPVCP is discharged, spark channels form only between corresponding pairs of needle electrodes located on the same axis; from 1 to 10 spark channels can form in each cascade discharger during a discharge. The load 9 (Fig. 1) was varied during the experiments: it took the form of 10 TBO-10 resistors with a nominal resistance of 560 Ω each (measured values from 580 to 630 Ω), of one working chamber with water, or of three working chambers with water. The load resistors and working chambers were connected to the corresponding tip electrodes of the 10-channel output gap of the GPVCP.
The experimental setup works as follows. With the help of the CS control system, the capacitive storages of the GPVCP cascades are charged through CH1, and then the capacitive storage C_ss of the starting system is charged through CH2 up to the self-breakdown of S_ss. The charge level is monitored with C-196 kilovoltmeters.
Preliminarily, the capacitive storages 1 of the generator cascades are charged to the voltage U_main (Fig. 1). In general, charging can be performed with either a rectified or an impulse voltage. After preliminary charging, the only discharger not held «on standby» at the voltage U_main is the spark gap at the output of the GPVCP, which is also the discharger of the last, N-th cascade. After charging, a voltage pulse with amplitude U_ivp is delivered from the starting device 7 along line 8 to the starting discharger 5 of the generator, ensuring a switching time t_sw.0 and a front duration t_f.0 of the generated pulse shorter than the time 2t_tr.0 of the double travel of the electromagnetic wave between the starting discharger and the first cascade. The dischargers of the GPVCP are triggered sequentially, starting from the starting discharger, triggered by the control pulse from the startup system, and ending with the output discharger, which has the shortest switching time.
The impulse voltage on the GPVCP load begins to fall immediately after rising to the value A_N [2], which depends on the load impedance Z_load (the expression for A_N is given in [2]). This behavior is ensured by the fact that possible reflections from trigger device 7, which could lead to a slow increase of the voltage amplitude on the load up to 2A_N, are compensated by the discharge of the cascade capacitors, and also by the fact that the starting device is separated from the GPVCP proper by transmission line 8 with the corresponding travel time of the electromagnetic wave along it. The number of cascades in this GPVCP is N = 5.
Results of investigations. Investigation of the pulse characteristics at different GPVCP loads was carried out using a low-resistance resistive voltage divider connected to the generator load, a recording C7-19 oscilloscope with a 5 GHz bandwidth, and a broadband coaxial cable with impedance Z_c = 50 Ω connecting the shielded low-voltage divider arm to the oscilloscope input through a 20 dB attenuator. The oscilloscope was located in a measuring cabin, which protects it from electromagnetic interference.
The resistance of the high-voltage divider arm is R_1 = 560 Ω (one of the TBO-10 load resistors in the GPVCP); the resistance of the low-voltage arm, R_2 = 3 Ω, is assembled from parallel-connected TBO-0.5 resistors (Fig. 3). The low-voltage divider arm and the matching resistor R_3 = 50 Ω are located in a shielding metal case of cylindrical shape with a flange connected directly to the generator's return current conductor. The cable is connected to the low-voltage divider arm using a coaxial connector.
Fig. 3. Circuit of the resistive divider at the GPVCP output (the pulse from the GPVCP output enters the divider; the output goes to the oscilloscope). Taking into account that the input impedance of the C7-19 oscilloscope is 50 Ω, the divider division factor is K_d = [(R_1+R_2)/R_2]×(R_3+Z_c)/Z_c = [(560+3)/3]×(50+50)/50 ≈ 375. Between the input of the C7-19 oscilloscope and the end of the cable with the connector, a 20 dB attenuator was inserted, which attenuates the signal arriving over the cable by a factor of 10. Therefore, the total division factor is K_d.total ≈ 3750. The sensitivity of the C7-19 oscilloscope is 1.6 V/div (1.6 V/cm).
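The divider arithmetic above can be cross-checked with a short calculation (all values are those quoted in the text; the 20 dB attenuator is modeled as a voltage factor of 10):

```python
# Division factor of the resistive divider feeding the C7-19 oscilloscope.
# R1: high-voltage arm; R2: low-voltage arm; R3: matching resistor;
# Zc: cable wave impedance = oscilloscope input impedance.
R1, R2, R3, Zc = 560.0, 3.0, 50.0, 50.0

# Arm ratio times the loading of the R3/Zc matching network.
K_d = ((R1 + R2) / R2) * ((R3 + Zc) / Zc)

# The 20 dB attenuator reduces the voltage by a further factor of 10.
K_d_total = K_d * 10.0

print(round(K_d), round(K_d_total))  # the paper quotes ~375 and ~3750
```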
The output multichannel discharger of the GPVCP stably operates in the 9-10 channel mode (10 being the maximum possible number of discharge channels in the discharger) when the amplitude of the charging voltage of the high-voltage capacitive storage of the control system exceeds twice the amplitude of the charging voltage of the main storages of the GPVCP cascades. This mode, with the GPVCP operating on a resistive load in the form of ten TBO-10 resistors with a nominal resistance of 560 Ω each, is illustrated in Fig. 4. After the formation of ten channels in the output discharger, all ten load resistors are connected in parallel. We note that the brightness of the discharge channels is approximately the same, which indicates that the current is uniformly distributed over the discharge channels. Oscillograms of pulses with nanosecond fronts on the GPVCP load in the form of TBO-10 resistors, one working chamber, and three working chambers were obtained. The shape of the pulses on the load is close to trapezoidal, as illustrated in Fig. 5.
From the oscillograms in Fig. 5 it can be seen that the pulse front contains two parts: a first (initial) steep part and a second (closer to the top) more sloping one. This indicates that in this particular regime the GPVCP spark gaps provide not complete but only partial sharpening of the front of the formed pulses. Because of the sloping part, the total front duration is approximately t_f ≈ 20 ns. The sloping part of the pulse front also arises from reflections of electromagnetic waves, caused by the triggering of the dischargers, from various inhomogeneities in the GPVCP power line and in the launch system. The pulse duration along the base is approximately 80 ns, and the amplitude is 18 kV. This is 1.5 times less than the calculated amplitude mentioned above.
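As a rough cross-check not given in the paper, the measured 18 kV amplitude into the ten parallel 560 Ω resistors implies the following peak power and per-pulse energy; the effective flat-top width of the trapezoid is my assumption (~50 ns of the 80 ns base):

```python
# Back-of-the-envelope peak power and per-pulse energy on the resistive load.
V_peak = 18e3           # measured pulse amplitude, V
R_load = 560.0 / 10     # ten nominal 560-ohm TBO-10 resistors in parallel
t_eff = 50e-9           # assumed effective width of the trapezoidal pulse, s

P_peak = V_peak ** 2 / R_load   # peak power delivered to the load, W
E_pulse = P_peak * t_eff        # approximate energy per pulse, J

print(f"~{P_peak / 1e6:.1f} MW peak, ~{E_pulse:.2f} J per pulse")
```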
The smaller experimentally obtained amplitude, in comparison with the calculated one, is explained by the lengthening of the front due to its incomplete sharpening by the cascade dischargers, by the imperfect matching of the wave impedance of the GPVCP power line with its resistive load and the resulting undesirable voltage reflections in the GPVCP, and by the fairly rapid discharge of the GPVCP capacitive storages. The oscillogram in Fig. 6 illustrates the shape of the voltage on a load in the form of one working chamber and five TBO-10 resistors of 560 Ω.
From the oscillogram in Fig. 6 it follows that the pulse front duration is approximately 2.5 ns and the amplitude is 12 kV. The amplitude decreased because the load became lower-impedance after connecting the working chamber with water (see also formula (1)). The ring-shaped body of the working chamber is made of PTFE, and the metal electrodes forming the bottom and cover of the chamber have flat linings of food-grade stainless steel for contact with the food product inside the chamber.
The working volume of the chamber filled with water has a disk shape with diameter D = 90 mm and height h = 15 mm. With a specific volume resistivity of water ρ = 10 Ω·m, the active resistance of the water in the working chamber is R_w = ρh/(πD²/4) = 10×0.015/(3.14×0.09²/4) ≈ 23.6 Ω. Owing to the decrease in load resistance, the cascade capacitive storages discharge faster, which in turn reduces the amplitude. At the same time, the contribution of the slow part to the front duration on the load decreased significantly, and the front shortened to ≈2.5 ns.
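The chamber resistance follows directly from the disk geometry; the short calculation below (values from the text) also gives the combined resistance when three identical chambers are connected in parallel, as in the experiment of Fig. 7 (the parallel value itself is not stated in the paper):

```python
import math

# Resistance of the water-filled, disk-shaped working volume: R = rho*h/A.
rho = 10.0    # specific volume resistivity of the water, ohm*m
D = 0.090     # disk diameter, m
h = 0.015     # disk height (electrode gap), m

A = math.pi * D ** 2 / 4   # electrode (disk) area, m^2
R_w = rho * h / A          # resistance of one chamber, ohm
R_three = R_w / 3          # three identical chambers in parallel, ohm

print(f"R_w = {R_w:.1f} ohm, three in parallel = {R_three:.1f} ohm")
```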
When three working chambers are connected as a load (see Fig. 7), the voltage amplitude on them becomes even smaller (see Fig. 8) than on one working chamber. From the oscillogram in Fig. 8 it follows that the duration of the pulse front on the load is about 1 ns, and the amplitude is about 8 kV.
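With roughly 8 kV across the 15 mm electrode gap of a chamber (Fig. 8), the nominal uniform-field strength falls at the lower end of the 5-200 kV/cm PEF range quoted in the introduction; a one-line check (the uniform-field assumption is mine):

```python
V = 8e3      # pulse amplitude on the three-chamber load, V (Fig. 8)
h = 0.015    # electrode gap (height of the water disk), m

E = V / h    # nominal field strength in the chamber, V/m
print(f"E = {E / 1e5:.1f} kV/cm")  # 1 kV/cm = 1e5 V/m
```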
To increase the intensity of the pulsed electric field in the working chambers and the voltage across them without lengthening the pulse front, it is necessary to raise the charging voltages of the capacitive storages from the charging devices CH1 and CH2, increasing the gaps in the GPVCP dischargers accordingly.
The experimentally demonstrated possibility (see Fig. 8) of obtaining, in several working chambers connected in parallel, voltages, and hence pulsed electric field strengths, with a record-short front (about 1 ns) opens up the prospect of reducing the specific energy consumption of the microbiologically disinfecting treatment of water-containing food products and of increasing the shelf life of these products without impairing their consumer value, and consequently the prospect of industrial application of the GPVCP.
Conclusions. 1. A method is proposed for shortening the front of pulses in working chambers used for food processing (inactivation of microorganisms) by employing pulse voltage generators based on the Arkadyev-Marx scheme in the regime of step-by-step sharpening of the pulse front.
2. A front duration of t_f ≈ 1 ns was experimentally obtained for pulses on a GPVCP load in the form of three parallel working chambers with water, the active resistance of each being less than 50 Ω. Such a short pulse front confirms that GPVCP generators are promising for the microbiologically disinfecting treatment of water-containing food products (inactivation of microorganisms in products).
3. A stable 9-10 channel mode of the output discharger of the five-cascade GPVCP generator has been achieved.
4. The pulses on the load were measured with a low-resistance resistive voltage divider; a broadband coaxial cable was used as the transmission line, connected to the recording device, a C7-19 oscilloscope with a 5 GHz bandwidth.
5. Working chambers in the form of an annular fluoroplastic (PTFE) body with metal electrodes forming the bottom and cover, having flat linings of food-grade stainless steel for contact with the food product inside the working volume, were used. The chambers were filled with water.
6. High-voltage pulses on a GPVCP load of about 50 Ω or less have a trapezoidal shape with a base duration of no more than 80 ns; experimentally obtained pulse amplitudes on the load reach 18 kV. As the load resistance decreases, the pulse amplitude decreases and the front and overall pulse durations shorten. The front shortens because the sloping (slow) part of the pulse front, which is due to reflections of electromagnetic waves (caused by the operation of the dischargers) from various inhomogeneities in the GPVCP power line and in the launch system, is partially or fully removed by the accelerated discharge of the cascade capacitive storages into a load with reduced resistance. | 2019-04-16T13:25:09.191Z | 2017-08-16T00:00:00.000 | {
"year": 2017,
"sha1": "05dc396375ea1a6d0f7bfb61f6e748c9a2a6cc2c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.20998/2074-272x.2017.4.08",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2c3e7842edb32812cf77e201deacc5f441ef75f2",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Engineering"
]
} |
55243478 | pes2o/s2orc | v3-fos-license | Strategical Approach for VoLTE Performance Improvement
Voice over long-term evolution is referred to as VoLTE. It is the youngest evolution in IP technology and is evolving very fast. The main drivers for VoLTE have been the fact that LTE is a packet-only architecture and that LTE has no built-in voice+SMS service engine. A solution for voice over LTE was needed as soon as mobile terminals started to use LTE access. VoLTE provides various advantages for operators as well as end users by improving spectral and network efficiency. Deployments of VoLTE services are increasing at a very fast rate, and they require a number of troubleshooting and optimization methods to improve the high-definition voice quality of service. This paper presents the various aspects used to optimize the VoLTE network.
Introduction
Today, Voice over LTE (VoLTE) is the most talked-about application of the entire LTE mobile network in the global telecom industry. With voice being the largest revenue source of all services provided in 2G and 3G systems, there was for quite some time a debate in the industry on what the voice solution for LTE should be, and different alternatives popped up, threatening fragmentation. The solution and architecture of choice should offer the benefits of voice service evolution [1][2][3][4][5] while securing today's telephony service; it should allow the service offering to evolve, making use of broadband capabilities, and easily add further revenue-generating services.
There are various motivations for adopting VoLTE technology over other OTT-based VoIP services, as explained in Table 1.
As VoLTE is a new technology, it requires various methods to improve its voice and radio quality. In this paper we compare OTT-VoIP and VoLTE services based on voice efficiency and quality of service. The rest of the paper is organized as follows: Section 2 discusses related work on optimizing the VoLTE network. Section 3 presents factors for optimizing the VoLTE network by comparing results with other services. Section 4 describes the performance comparison of VoLTE and other CS services. Finally, conclusions are drawn in Section 5.
Existing Performance Improvement Solutions
VoLTE specifications are referred to as GSM Association IR.92 specifications. Early VoLTE deployment was expected in 2012, and a full nationwide VoLTE network was deployed by AT&T in 2013. In 2012, T. Koshimizu, I. Tanaka, and K. Nishida [6] presented an idea in the WTC (Wireless Communication Technical Committee) focusing on performance improvement of VoLTE and eSRVCC networks using a domain handover function. In 2013, Ozcan Ozturk [7] highlighted the performance of the VoLTE network and suggested that HetNet with pico-cell expansion improves VoLTE capacity. Later, Mike Hibberd published an article showing how VoLTE will change the ways of communication, with emphasis on data handling.
In 2014, Jo et al. [8] proposed a solution for resource allocation based on inter-cell interference separation, which increases VoLTE capacity by 20%. They focused on packet prioritization and frequency allocation prioritization [8]. In 2015, Sumit Gautam and Durga Parsad Sharma [9] proposed a solution to reduce the voice call interruption time during mobility. Their performance target for the average voice interruption time is less than 0.3 s, achieved by using ATCF (Access Transfer Control Function) and ATGW (Access Transfer Gateway) entities in the network.
Currently, 111 operators are investing in VoLTE in 52 countries, and 30 operators (Telstra, Orange, AT&T, M1, Viva, etc.) have already launched VoLTE high-definition services in 21 countries, according to the report presented by the GSA in October 2015 [10][11][12][13][14].
Most of the previously proposed solutions [6][7][8] and studies focus on the mobility and capacity of the network, so these parameters are taken into account when designing the optimization methodology in Section 3.
Optimization Areas
Deploying any technology requires various optimization and troubleshooting steps to take full advantage of its potential. We divide these optimization and troubleshooting areas into three major parts, described below. In this paper we compare three voice possibilities for excellence and efficiency (Figure 1): • Native VoLTE client: a VoLTE-supported smartphone.
• Non-native VoLTE clients: applications that can register to the IMS and establish calls using QCI 1.
• Over-the-Top (OTT): VoIP applications, for example Skype.
Area I: VoLTE voice quality. The voice codec sampling rate and audio bandwidth are responsible for voice quality. For this analysis we used a narrowband audio bandwidth of 80-3700 Hz and a wideband bandwidth of 50-7000 Hz. A regular call connection uses either narrowband or wideband, but VoLTE by default uses wideband. We performed absolute category rating and mouth-to-ear delay testing for VoLTE, OTT, and SIP services. Scores are given on a 1-to-10 scale, i.e., a mean opinion score. Figure 2 presents the voice quality scores for the different voice applications. VoLTE secured a score of 8.1 with wideband at 24 kbps. A regular 2G/3G connection scored 5.6. Other OTT applications (e.g., Viber, Skype) scored 8.15, and SIP calling scored 6.7. The overall OTT average score is 8.15, which is very close to the VoLTE score, as shown in Figure 2.
Hence we can say that the voice codec helps VoLTE match the voice quality of over-the-top services. Next, we tested the voice quality and mouth-to-ear delay of OTT services and VoLTE using the MAC priority scheduler [5,15] in both good and bad RF conditions at different cell loads. The cell load percentage is measured as the number of users divided by the maximum limit of the eNodeB. Considerations for the test: the voice quality and mouth-to-ear delay for VoLTE remain constant at all loads, as shown in Figure 3. Hence we can say that quality of service and a smart MAC priority scheduler [4,5,11,12] in the eNodeB help to optimize the network. Secondly, it helps to reduce the paging response delay, which results in a shorter call setup time for VoLTE. The average call setup time for a VoLTE call in these tests is between 1.5 and 2 seconds, while for other calls it is nearly 3-5 seconds. With TTI bundling enabled, the uplink becomes more robust and coverage increases by 5-6 dB. In bad RF conditions, BLER and CSSR degraded without TTI bundling, as shown in Figure 4. Similarly, we tested the eNodeB with and without the robust header compression (RoHC) feature. RoHC basically runs between the UE and the eNodeB. For small chunks of data, RoHC helps to increase network capacity. For example, if the actual data size is 20 bytes and the IP overhead [12][13][14] is 60 bytes, then more resources are consumed by the header than by the data. In such a scenario the RoHC feature plays an important role and increases capacity. In VoLTE, SIP signaling and voice traffic packets are the two main kinds of packets. RoHC does not apply to SIP signaling [12][13][14] and is only applicable to voice data packets, because for SIP signaling the packet size is relatively large compared with the header size. RoHC functionality is described in Figure 5.
We tested the capacity of the eNodeB with a 40-byte IP header (without RoHC) and with a 5-byte IP header (with RoHC). Network capacity increased by 5% with RoHC, allowing more calls to be admitted at a given time. Figure 6 shows the capacity score card with and without the RoHC feature in the eNodeB.
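The header-overhead argument can be made concrete with the sizes quoted in the text (20-byte payload; 40-byte header without RoHC, about 5 bytes with it). Note the per-packet saving computed here is only an upper bound on the capacity gain; the measured figure (about 5%) is smaller because scheduling, signaling, and non-VoIP traffic also consume resources:

```python
# Fraction of each VoIP packet that is useful payload, with and without RoHC.
def efficiency(payload_bytes: int, header_bytes: int) -> float:
    """Payload share of the total over-the-air packet size."""
    return payload_bytes / (payload_bytes + header_bytes)

payload = 20        # voice payload, bytes (example from the text)
hdr_plain = 40      # uncompressed IP/UDP/RTP header, bytes
hdr_rohc = 5        # RoHC-compressed header, bytes

print(f"without RoHC: {efficiency(payload, hdr_plain):.0%}")
print(f"with RoHC:    {efficiency(payload, hdr_rohc):.0%}")
```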
Area III: Battery saving in the UE. Phone power consumption is a very important factor for smartphone users. UE battery life can be extended either by handset architecture modification or by an efficient DRX feature in the network. DRX is discontinuous reception; it puts the handset into sleep mode between receptions of voice packets (which arrive every 20 ms). DRX can be configured so that two voice packets are transmitted together, which increases the packet arrival interval by 30-40 ms [16]. During our analysis we found that with DRX activation, device power consumption was reduced by 30-40%, as shown in Figure 7 [17].
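A toy duty-cycle model shows why bundling two voice packets per wake-up saves receiver energy; the awake time per wake-up is a hypothetical value of mine, and the real device-level saving (30-40% in the tests above) is smaller because other components also draw power:

```python
# Toy DRX model: the radio wakes for t_on each cycle to receive voice packets.
t_on = 4.0          # assumed awake time per wake-up, ms (hypothetical)
cycle_one = 20.0    # one packet per wake-up: packets arrive every 20 ms
cycle_two = 40.0    # two packets bundled per wake-up: 40 ms cycle

duty_one = t_on / cycle_one     # receiver-on fraction, no bundling
duty_two = t_on / cycle_two     # bundling halves the wake-up rate

saving = 1 - duty_two / duty_one
print(f"receiver on-time reduced by {saving:.0%}")  # prints 50%
```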
Performance Comparison
The score cards for all optimization scenarios are given below for voice quality. They show that with a smart MAC priority scheduler, the efficiency and capacity (10-20%) of the VoLTE network can be enhanced (Table 2). The measurements in Table 3 show that, with the help of the TTI bundling and RoHC features, we can control the accessibility and interference of the network, resulting in strong key performance indicators. Table 4 shows that by using efficient DRX in the network, we can easily enhance battery life by 30-40%.
Conclusion
VoLTE is a growing technology supported by various new VoLTE-capable devices. It enhances the operator's overall OPEX, CAPEX, and spectral efficiency and also provides a high quality of service to the end user. This paper demonstrates various aspects of optimizing voice quality and radio network quality, and methods to save UE battery life. It also describes the related features required to optimize the network. It is observed that VoLTE provides higher speech/voice quality, excellent network performance even in poor RF conditions, and lower power consumption than OTT services. Hence, network reliability can be achieved by activating various VoLTE-related QoS features in the network. | 2019-04-15T13:06:13.274Z | 2016-07-03T00:00:00.000 | {
"year": 2016,
"sha1": "71d8b8272c97ff47def4fecd39fce7d4fb45537b",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/strategical-approach-for-volte-performance-improvement-2167-0919-1000134.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c572ae86bee09167dc69051990534bcd8e74a922",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
80660480 | pes2o/s2orc | v3-fos-license | Experience of the spouse of a woman with breast cancer undergoing chemotherapy : a qualitative case study
Objective: To identify the meaning attributed to the experience of a spouse of a woman with breast cancer undergoing chemotherapy. Methods: Descriptive study, with a theoretical-methodological orientation based on medical anthropology and utilizing an ethnographic case study strategy. Data were collected through semi-structured interviews and participant observation. Results: The meanings revealed that the diagnosis caused suffering. Chemotherapy was seen as giving hope of healing the wife's cancer. During this process, the spouse had to deal with the strong adverse effects of the treatment and subordinate himself to his wife to reduce the conflict experienced by the couple, which violated the rules of his masculinity. Religion and family were important support networks on this path. Final considerations and implications for practice: The results showed the importance of considering the cultural aspects of spouses when they are faced with disease in their wives. The way spouses deal with breast cancer will depend on their cultural systems. Nursing care must be comprehensive and extend to spouses whose wives have breast cancer.
INTRODUCTION
It is estimated that there will be 59,700 new cases of breast cancer in each year of the 2018-2019 biennium in Brazil, with an estimated risk of 56.33 cases per 100,000 women. Excluding non-melanoma skin tumors, this type of cancer is the most frequent in women living in the South, Southeast, Central-West, and Northeast regions of Brazil. 1 Among the therapies for breast cancer, because of its systemic nature, chemotherapy is the one which causes more adverse reactions, and a consequent decrease in the quality of life of patients. The impacts of the disease extend to the lives of families and couples, with repercussions in all aspects of their routines. 2,3 In this context, families assume greater importance, because they play a relevant collaborative role in coping with the disease, treatments, and their effects. Studies have indicated that families are considered the main source of psychological, emotional, and social support in this situation. Thus, cancer can be considered a family disease, which, when diagnosed in one member, has the power to change the life experience of the other family members, including spouses. 4,5 The repercussions of cancer in the context of patients' families include financial concerns, problems at work, and relationship issues. In addition, cancer can interfere with marriage, and couples may divorce as a consequence of this additional burden and changes in the family. 6 Being the spouse of a woman with cancer has been described in the literature as living in the shadow of the disease. Recent investigations have shown that one year after the cancer diagnosis of their wives, spouses had a significantly higher rate of mood disorders, reactions to severe stress, and ischemic cardiac disease. Anxiety levels become high, and uncertainty about the future and the possible recurrence of cancer unsettle the lives of these spouses. They feel vulnerable in the face of the disease and cannot handle the suffering of their wives.
4,5 An integrative literature review focused on the perceptions and experiences of spouses of women with breast cancer and the ways in which they cope with difficulties revealed that few Brazilian or international studies have addressed the proposed objective, pointing to a gap in knowledge of this area. The findings of this review stressed the need to invest in research with a stronger focus on this topic. 5 Considering that sociocultural influences have a significant impact on knowledge, behavior, ways to deal with different situations in life, and the meanings that spouses associate with the disease experience, the objective of the present study was to answer the following question: How do spouses of women with breast cancer define the meanings of the experience of chemotherapy of their wives? The goal of the present study was to identify the meanings associated with the experience of spouses of women with breast cancer undergoing chemotherapy.
METHODS
This was a descriptive and qualitative study with a theoretical-methodological orientation based on medical anthropology and utilizing an ethnographic case study strategy.
Medical anthropology aims to integrate health, disease, and culture. It considers health as the result of interaction between biological and cultural factors. According to the interpretative concept, culture is a symbolic and public system centered on individuals, who use it to interpret their world and actions. It consists of beliefs, values, symbols, standards, and practices. Interaction between biological and cultural aspects is construed as the structuring element of the experience. 7 The instrumental case study method was employed to develop the present qualitative investigation, because it provides an extended observation of cases under analysis. The details of this method allow studies to consider various dimensions and pay attention to the most central and intimate aspects, using researchers' skills to understand certain experiences. The focus of a case study is analysis of how and why certain characteristics of the case are shaped by the context. This method allows the participation of multiple cases or just one, and examination at different moments of the experience.
8,9 The present study examined the case of the spouse of a woman with breast cancer undergoing chemotherapy. The spouse was the person who most frequently accompanied the patient, who was being treated in an oncology service, and he was invited to participate in the investigation. The initial contact occurred in the oncology outpatient facility of a teaching hospital in the countryside of the state of Minas Gerais, Brazil, after a nursing appointment. The subject was selected based on the following inclusion criteria: being the spouse of a woman with breast cancer undergoing the initial phases of chemotherapy, regardless of the chemotherapy scheme and cancer stage; being 18 years old or older; being married to or in a common-law marriage with the woman; having lived with the wife in the same house for at least one year; and agreeing to participate in the study and have the interviews recorded. After accepting the invitation to participate in the investigation, the individual signed a free and informed consent form. He was given a fictitious name to guarantee his anonymity.
The present study followed ethical principles for research with human subjects and was approved by the research ethics committee (CAAE Nº 40672614.5.0000.5152).
Data were collected by one of the researchers, in the oncology outpatient facility of the institution and the family household, from March to September 2015. Six interviews were carried out, with an average duration of 30 minutes, at different points of the experience, to give the study greater depth.
The techniques used in data collection were observation, notes in a field journal, and recorded semi-structured interviews. These were guided by the following questions: How does it feel to share a life with a wife with cancer who is undergoing therapy? How do you handle the situations that arise? Tell me about difficulties you have experienced and what you have done to overcome them. Have you received help from anybody? Data were analyzed according to the inductive thematic analysis technique. Initially, the transcriptions of the interviews and the records in the field journal were grouped into a single document by the researchers. After reading all of the material, categorization of the relevant aspects of the data and classification into subjects were carried out, with a focus on the objective of the study. The subject is a level of standard answer, which interprets the meaning within the data set, regarding the research questions. 10 Explanatory models were used to describe how the spouse of a woman with breast cancer builds his experience during chemotherapy. These patterns are construed as cultural models into which people fit the disease experience, giving it a meaning for both themselves and members of their cultural context. The meanings are the descriptions of the process experienced by participants, with their ideas and actions regarding the disease and chemotherapy, explained through motivations and justifications extracted from their cultural history. 11,12
Context of the case
The context is presented in order to foster understanding of the case. The participant, whom we will call Francisco, was 41 years old, married, white, and had incomplete higher education. He worked as a public employee in hydrometry, with an eight-hour working day. His job generated an income of approximately five minimum wages, with which he supported his family. He was a practicing Mormon in a local church, where he held a leadership position and was assiduous in carrying out his responsibilities.
His wife, whom we will call Aline, was 39 years old and a housewife. She was responsible for managing and performing the tasks involving the household and children, including their school activities, before she became ill. In January 2014 she noticed a nodule in her left breast and immediately sought medical care, but a biopsy was not carried out until December 2014. The anatomopathological result was compatible with a Grade 2 invasive ductal carcinoma. Because of its stage (T3N0M0), the patient's young age, and other factors identified in the examinations, neoadjuvant chemotherapy was proposed, with a protocol including four cycles of doxorubicin + cyclophosphamide (a combination known as AC chemotherapy) and four cycles of docetaxel, with a 21-day interval between cycles. The chemotherapy treatment was carried out between February and June 2015, and was suspended because of medullary toxicity in the penultimate cycle.
The couple had been married for 21 years and had four children, whose ages ranged from 5 to 16 years old.The whole family lived in the house they owned.
Meaning units
Four meaning units were identified from the collected data. These units are depicted in Figure 1.
When someone says "cancer" we get scared
In the explanatory model of Francisco, the first sign of his wife's breast cancer was the appearance of a lump.The search for medical care began immediately after the perception of the breast change, but according to the information provided by him, there was a long period until the diagnosis, given that his wife's condition was evaluated by several healthcare professionals, who did not associate the symptoms with the possibility of breast cancer:
One of the doctors thought that it could have been from a blow to her breast, it looked like a trauma [...] Another doctor just told us to apply hot pads because it could be a swollen gland. (1 st interview)
Several examinations were carried out to define the diagnosis:
She had appointments with a gynecologist and a mastologist, and unfortunately, I do not know why, they could not identify the problem. The fifth doctor was the one who requested a biopsy and found out it was cancer. (his eyes teared up) (1 st interview)
The diagnosis was received by the participant with sadness, and he experienced "a shock" in his life reality, given that cancer has a cultural connotation of being overwhelming, severe, and incurable:
Experience of the spouse of a woman with cancer
Neris RR, Zago MMF, Ribeiro MA, Porto JP, Anjos ACY
When someone says "cancer" we get scared, we know it is serious, really serious. It is a disease that has no cure, it is hard to treat. (2 nd interview)
After the diagnosis, he looked for an explanation, which allowed him to create a new meaning for the experience of finding out about the cancer.
It is a shocking situation, I know that everything in life has a purpose. Even if it is something painful, something hard to live with, we have to learn something from it.
(1 st interview) These excerpts reveal that the meaning associated with the cancer diagnosis was one of shock and sadness, because the disease is considered incurable and thought to involve suffering and the possibility of death. The experiences of relatives and people from his social circle reinforced the common sense of this meaning.
The chemotherapy process
After the impact of the diagnosis, considering the need for systemic treatment, the medical team suggested neoadjuvant chemotherapy. According to the lay explanatory model of the participant, chemotherapy: Is a treatment that aims to cure this disease, cancer. That's the main objective of using this medicine, this drug that chemotherapy is. (5 th interview) The news about chemotherapy was received with hope for the "chance of cure," and provided motivation for the strength to go on: We are happy because there is a treatment. When I heard that she would undergo chemotherapy I was relieved. We can see results, the tumor is shrinking, and we are very hopeful about the cure. (5 th interview) In Francisco's narrative, the acceptance and valorization of the proposed medical treatment are patent. For him, this was the best option, even if it caused changes and discomfort in his and his wife's lives: Her treatment has been difficult. Initially, she had nausea and now she has strong pain. It is complicated, but it is what I say to her, she has to do it, no matter what it can cause. We have to be thankful that there is a treatment. (5 th interview) During the chemotherapy process, he mentioned difficulties dealing with the adverse effects and repercussions of the treatment. He also reported lack of information from healthcare professionals. Thus, every new chemotherapy cycle was an "unknown situation," a "new surprise." All the problems they faced, mainly those resulting from adverse reactions and their levels of intensity, together with lack of guidance, made the participant question whether they really originated from the treatment: We wondered: Is it really chemotherapy that is causing this pain? I do not remember her being oriented about that. Everything that happens to her is a surprise, it is becoming an unknown situation. (3 rd interview) His previous cultural knowledge of the therapy led him to expect that "white chemotherapy" (based on the administration of docetaxel) would have mild adverse reactions: Some people said that this white chemotherapy attacks the body less, is simpler, is not going to cause nausea, and will be an easier process, but for Aline it was worse. (4 th interview) The common sense of patients and relatives divides chemotherapy drugs for breast cancer treatment into "red," with a connotation of being stronger and having more severe adverse effects, and "white," which according to Francisco would be "less aggressive." (4 th interview) There were moments of desperation and insecurity when they were faced with the severe pain presented by the wife: We were terrified of the pain. We went to the emergency care unit and they gave her morphine. What do you think in a situation like this? She is a step from death! Morphine, for lay people like me, is for those on the verge of death.
(3 rd interview) In the popular culture, the use of the analgesic morphine is associated with patients who are "on the verge of death," usually terminal cancer patients with chronic pain.
During the treatment, faced with adverse reactions stronger than expected and constant hospital admissions, it was necessary to interrupt chemotherapy after the penultimate cycle of treatment. This medical action was received with relief: Things are better because she is not having as many reactions as she did before, pain in her body, all those things that chemotherapy caused. (6 th interview)
Dealing with the wife's stress
During the chemotherapy process, Francisco reported that there were changes in his marital relationship: Our marriage got more complicated after the diagnosis. (4 th interview) The relationship became troubled as a consequence of the "stress" of the wife after the beginning of chemotherapy, with more acute manifestations in the days leading up to the medicine infusion: "In the days leading up to the session her stress gets worse." He uses the metaphor "I keep walking on eggshells with her" to refer to the experienced situation. (4 th interview) In his perception, his wife experiences constant feelings of dissatisfaction: "there was no pleasing her." According to him, he prefers to be alone and "stay cool"; he adopted the strategy of "constantly counting to ten" to avoid conflict (4 th interview). Thinking of the suffering experienced by the wife as a result of the disease and therapy, he said that he always "gives up" during the couple's arguments, "because I know that she is undergoing a treatment," but he does not know how long he will be able to continue this behavior: I have not reached my limit, but how long? The other partner keeps giving in, again and again, and gets sick of it. (6 th interview) These conflicts in his marital relationship can be illustrated by an excerpt from the field journal:
Today I witnessed an argument by the couple. It occurred in front of me and a secretary at the hospital. The wife got angry, raised her voice, and spoke impolitely to the participant in the study. The reason was the scheduling of the appointment with the odontology team, but I could not understand exactly what triggered the discussion. He was very embarrassed about the episode, but tried not to take it seriously and smiled apathetically to minimize the problem. (Field journal, 08/04/2015).
The repercussions in Francisco's life extended to his role as a male head of the household, which demanded that he begin to "look after the kids, take them to school and pick them up, shower and feed them," often taking over the "household chores and preparation of the family meals," tasks which were assigned to the wife before she got ill. (4 th interview) In order to accompany his wife to medical appointments, chemotherapy sessions, and examinations, the participant needed "to be frequently absent from work," which according to him "did not happen as often as it does now." In order to handle the care of his wife and children and manage household chores, in addition to his daily work, he reduced his participation in religious activities ("often, I set the work at church aside to stay with her"). This was a social activity in which he used to be very assiduous before his wife's cancer diagnosis (6 th interview).
Support networks in the therapy process
During the whole chemotherapy process, the participant had the help of the family group in overcoming situations and difficulties, whether they involved transport to the hospital on chemotherapy days or help with the care of the children: "I get help from the family, especially her mother." (2 nd interview) Thus, the family was fundamental as a source of support and incentive to continue along his path. He also sometimes counted on the help of people from the religious community for care to his wife and children and household chores: Her sisters from the church set a calendar to come here. We are getting help from them from the church and my wife's family. (2 nd interview) Spirituality was an important source of support in his life, helping to maintain the hope for a cure: In the beginning, the doctor said "Think positive, have faith, believe in God, and everything will be fine. If you begin thinking negatively now, things may get worse." (5 th interview) In Francisco's case, spirituality was also considered a strategy to help overcome marital conflict. This element, practiced through faith and fulfillment of the promises made in the rituals of his religion, was a supporting pillar in maintaining his marriage during conflict:
DISCUSSION
The recognition of something different in the body, the "lump," led the patient and her spouse to seek help in the health system. Confirmation of a cancer diagnosis is not always efficient and quick. Structural, organizational, and bureaucratic problems in the Brazilian health system make it difficult to obtain an early cancer diagnosis. The delayed diagnosis, in this case, may also be associated with the young age of the wife, which decreased the likelihood that doctors would initially suspect breast cancer. This is because of the low incidence of this disease in this age group. Therefore, different factors contributed to this delay, involving social, cultural, and economic aspects of the patient, her relatives, and the health system. 13 When Francisco learned of his wife's cancer diagnosis, he recognized it as a threat to her life, given that in his social context, cancer is a disease that is "hard to heal." It is a socially stigmatized illness, described as "overwhelming," and is associated with suffering and death. Considered an incurable disease, its name evokes fear. 14,15 When the patient and her relatives received the news of the diagnosis of a disease as "serious" as cancer, their lives became disorganized. Their routines became unsettled, which was reported as a feeling of "shock." Faced with the diagnosis, the patient and her relatives looked for an explanation, and tried to create a new meaning for the experienced process, described as "to learn something from it." The strategies adopted to deal with the wife's disease are individual characteristics, directly influenced by the spouse's cultural system.
11 In his definition of the meaning of the chemotherapy treatment, the participant associated a "chance of cure" with the cancer that affected his wife, although in his social context he recognized the disease as something to be feared, because it causes many reactions. Trust in chemotherapy is based on the sociocultural recognition that professional knowledge and the resources used in treatment can offer the longed-for "cure." In lay understanding, medical professionals do not make mistakes, because they can make the best decisions at the appropriate time. Patients and relatives consider themselves unable to make such decisions, which they consider too complex for their social context; in addition, they feel anguished and anxious because of the uncertainty of the results. 12 It makes sense that perceptions of chemotherapy are pervaded by the belief that it can heal or kill. According to Francisco, "she has to do it, no matter what it can cause." Therefore, in his understanding, reactions must be endured, because they are less serious than the disease. There is a moral obligation to overcome adverse reactions and their limitations, as well as to strive to restore the normality of the body (cure). From Francisco's perspective, chemotherapy is a fundamental step toward reaching the cure. 12 In the initial stages of chemotherapy, he needed to deal with the "surprises" of the adverse reactions that his wife presented, given that he had not been oriented about them. These reactions intensified his suffering and brought insecurity to his life. He often did not know how to deal with the situation. Francisco reported that it was expected that "white chemotherapy" would have milder effects than "red chemotherapy." This expectation resulted from his previous knowledge of the treatment, which he had acquired by observing his relatives and other cancer patients from his social circle. This situation illustrates the interaction of biomedical and cultural knowledge.
12 In the social organization of masculinity, there are four ways to define what being a man is: hegemonic, subordinate, complicit, and marginalized. They are determined by behaviors and ways of acting and thinking when faced with different contexts of life. Hegemonic masculinity refers to patriarchy, which involves separation between behaviors considered masculine and feminine, and is characterized by the dominance of men as bosses and the subordination of women to men.
Complicit masculinity is defined as linked to hegemonic masculinity, but does not show complete incorporation of its elements. In complicit masculinity, there is interaction between men, women, and their social environment, without a need to establish who plays the dominant and subordinate roles. It is a more "complaisant" version of hegemonic masculinity, in which married men develop relationships of co-participation with their wives, rather than ruling them or displaying their authority. Marginalized masculinity is related to the supremacy of dominant classes or ethnic groups over subordinate categories, such as the rich and the poor, and employed and unemployed people. It is always associated with the authority of the hegemonic masculinity of dominant groups. Subordinate masculinity pertains to domination and subordination between groups of men, for instance the domination of heterosexual men over homosexual ones, given that the latter are considered the inferior part of the male hierarchy. 16 These types of masculinity are involved in constant conflict, persuasion, and changes of standards, symbols, and references among themselves to reach and practice "hegemony," with the remaining groups being labeled as competitors or supporters of the leaders. This is what makes the types of masculinity not only multiple at all times in multicultural societies, but also changeable over time and in different contexts of social power relationships. Consequently, depending on the type of male hegemony currently in practice, men will deal differently with caring for wives with cancer undergoing chemotherapy.
16,18 When observing the sociocultural characteristics of Francisco as a male, it was clear that hegemonic masculinity prevailed in his social group. He was the provider for his family, and had a job that supported this set of people. The wife was responsible for the care of their children and household chores. When the wife became ill, Francisco went through a redefinition of his masculinity, assuming behaviors typical of complicit masculinity. This was illustrated by the fact that he engaged in household duties such as "looking after the children and house" and was responsible for "preparing the meals for the family," given that these skills are not described as being characteristic of males, according to hegemonic masculinity. 16 According to stereotypes, males are culturally conceived of as powerful, active, strong, self-confident, athletic, self-employed workers, public people, and resistant to household chores. Behaviors such as looking after the children and the wife and carrying out household duties are attempts to escape stereotypes, which assume that men do not take care of other people. Within the scope of masculinity, attitudes like these are construed as hypo-masculine and a problem for the maintenance of this gender category. Faced with his wife's disease and the new needs of the family, he assumed this new identity as a caretaker. The behaviors Francisco adopted during his wife's disease process are a cultural product, influenced by the way masculinity and gender are experienced in his social context. 16,18 When he reported that he "gives in" during the couple's arguments to maintain their relationship and reduce his wife's stress, he violated the social principles that rule hegemonic masculinity, considering that this category is characterized by domination by men and subordination of women.
16 This "give in" behavior depreciates the hegemonic masculine identity, putting Francisco in a situation of inferiority, submission, and femininity. Detachment from this masculinity leads to psychic suffering, severe stress, and even ischemic cardiac disease, according to the literature. 4 This suffering may have been enhanced by his absence from his customary social activities, such as "the work at the church," in which he demonstrates his authority and supremacy over other men, given that he occupies a very high position in his religious system. Looking after his wife is an activity that is seen as feminine in the context of hegemonic masculinity. 16 When analyzing the masculinity displayed by Francisco, it is possible to realize that it is not static, because there is continuous definition of the social being and his cultural structure. Masculinity is a culturally learned concept, so it is not a hereditary characteristic, but a social code that standardizes the conduct of people as men. 17 Consequently, masculinity is not a set entity embedded in the bodies or personality traits of individuals, but a configuration of practices surrounding the position of men, which are expressed in their social actions. 18 Spirituality is an important support network for patients and relatives during the disease process. 21 Studies have shown that the experience of spouses of women with breast cancer is emotionally difficult and shocking. The disease contributes to significant changes in their routines after the diagnosis, in both the couple's relationship and family dynamics.
22,23 The results of an integrative review, whose objective was to analyze experiences of spouses of women with breast cancer, pointed to changes in the relationships after the disease and impaired sexual activity. The spouses began to perform household chores and take over the care of the children. Their real feelings of sadness and fear of losing their wives were hidden, and support from religion was relevant during the process of coping with the illness. 5 One study reported that psychological stress was higher among partners of patients with cancer than among patients themselves, and that partners were more likely to develop depression. In some cases, partners experienced fatigue, sleep problems, eating disorders, and a frequency of pain episodes higher than that observed in patients. One year after the cancer diagnosis, partners of patients with cancer had significantly more mood disorders, reactions to severe stress, and ischemic cardiac disease in comparison with the number of episodes registered in the year before the diagnosis. 4
FINAL CONSIDERATIONS AND IMPLICATIONS FOR PRACTICE
Explanatory models allowed understanding of the experience of the spouse of a woman with breast cancer undergoing chemotherapy. The meanings found in the present study corresponded to suffering caused by the diagnosis, and fear in the face of this severe disease. Once the diagnosis period was over, chemotherapy was seen as giving hope for a cure of the illness that was incurable, according to the spouse's cultural knowledge at that time. The treatment process involves many difficulties with multiple aspects, such as physical adverse reactions, bringing insecurity to the spouse's life. In this process, in order to be able to manage the wife's stress and reduce the couple's conflict, he needed to violate the rules of hegemonic masculinity, the predominant form in his social group, by subordinating himself to her and becoming complicit in household chores. He relied on religion and family as support networks to help him in this process.
The contributions of the present study to nursing practice are related to the rich knowledge produced about the experience of being the spouse of a woman with breast cancer undergoing chemotherapy. Qualitative studies do not propose interventions in health practices, but raise issues that encourage change and contribute to improvements in nursing care. The present study revealed that partners also go through intense suffering during the disease process of their wives, and should receive care and support from healthcare teams. Nursing teams that act in the context of care for women with breast cancer must involve spouses in their interventions, offering guidance oriented toward responding to doubts and wishes concerning chemotherapy, adverse effects, and how to handle them.
It is important to stress that no studies before the present research had the objective of examining the meanings of this experience from an anthropological perspective, exploring the questions of masculinity that surround spouses. The results reported in other investigations have been limited to presenting the identified themes, without exploring the meanings associated with the experience.
One limitation of the present study was that the participant did not mention the sexual aspect of the relationship during the interviews, which could have been explored to expand the comprehension of the meaning of being the spouse of a woman with breast cancer. The authors emphasize the need to design new investigations with more participants, using different approaches, to expand the perceptions of healthcare professionals regarding the experiences of partners in different contexts.
Figure 1. Meanings representative of the experience of the spouse of a woman with breast cancer undergoing chemotherapy. Source: designed by the authors.
Experience of the spouse of a woman with cancer Neris
RR, Zago MMF, Ribeiro MA, Porto JP, Anjos ACY

therapy? How do you handle the situations that arise? Tell me about difficulties you have experienced and what you have done to overcome them. Have you received help from anybody?
The Comparative Study on Pediatric Triage Decision-Making Power in Nurses and Nursing Students: A Cross Sectional Study
Background: Triage nurses are the first people in the emergency department providing care to patients. Their knowledge is very important in efficient triage. Given the few studies on the factors affecting triage, the current study investigated nurses' and nursing students' knowledge about the triage of children. Objectives: Accordingly, the current study aimed at determining the level of knowledge of nursing students and nurses about pediatric triage, and the impact of knowledge on triage performance, in Guilan University of Medical Sciences, Iran. Methods: The current descriptive, cross sectional study was conducted on 88 nurses and 88 nursing students selected through a census sampling from a selected hospital. The data were collected over one month in 2017 by means of a researcher-made questionnaire that included demographic characteristics (age, gender, degree, etc.) and the knowledge level of staff. The validity of the questionnaire was determined by content validity, and its reliability was measured by a test-retest method. After transferring the data into SPSS, statistical analysis was performed with descriptive and inferential statistical tests such as the Wilcoxon and Kruskal-Wallis tests. The level of significance was P < 0.05. Results: A total of 176 questionnaires were completed. A review of the responses given in the knowledge section revealed that 94.3% of the nurses and students were within the weak range. There was no significant relationship between demographic characteristics and triage knowledge in nurses and nursing students (P > 0.05). Conclusions: According to the current study results, the knowledge of nurses and nursing students should be reinforced to better accomplish patient triage. Since emergency nurses are among the most important staff in prioritizing triage, nursing education programs should include triage courses that build mastery in this scope.
Background
Triage is a continuous decision making cycle that determines the needs of persons for medical attention when entering an emergency ward (1). It is defined as prioritizing patients to provide services according to the injury severity and provide the appropriate treatment in the shortest time (2)(3)(4)(5)(6)(7)(8)(9). In the emergency department, triage classifies and prioritizes patients' needs based on the type and acuity of the disease or injury (10).
The purpose of triage is to identify a process of injury or illness and prevent or minimize potential detrimental effects through rapid assessment and decision-making. The goal of an effective triage system is to provide appropriate and rapid therapies for life-threatening conditions and ensure that all patients receive emergency check-ups based on the severity of their clinical conditions (9). Correct triage and prioritizing of the patients are essential skills in nursing care, and sickness, and illness severity of the patient doubles the importance of this process (11)(12)(13).
Nurses are the first people in the triage department who take care of patients. The knowledge and experience of emergency nurses are highly valuable for appropriate decision-making (14)(15)(16). Triage nurses should be able to make appropriate decisions in a relatively short time, usually in an unknown and emotional situation (10). Nurses are one of the most important health care provider groups for children (17)(18)(19)(20)(21)(22)(23). Studies revealed several barriers to accurate triage, especially for children (24). In pediatric triage, identification of critically ill children, assignment of an appropriate triage level, and suitable care are essential (25).
Prediction and rapid identification of potential injuries are components of the assessment stage in the nursing process. This assessment includes a set of information about the patient's main complaint and clinical examination. Therefore, decision-making in triage is a complicated and hard process, since it is usually made under uncertain conditions using incomplete and vague information provided by patients in limited time (26)(27)(28)(29). Triage nurses should detect useful keys to perform triage and make decisions based on these keys using available information (30).
Copyright © 2019, Journal of Comprehensive Pediatrics. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits copying and redistribution of the material for noncommercial use, provided the original work is properly cited.
According to the studies, many factors are related to the triage decision-making of nurses, such as training courses, nurses' workload, continuous interruptions of their work due to crowding, appropriate use of visual keys, personal experience, the social setting and work environment, communication, feedback, leadership and teamwork in the unit, and personal traits and personality, all of which affect the process and potential outcomes of triage (31). Based on studies pertaining to triage decision-making and its effective factors, correct triage requires a high level of cognitive and metacognitive processes such as knowledge, skill, experience, readiness, and mastery. In general, appropriate decision-making for the triage of patients is attributed to the skill, knowledge, excellence, and performance of nurses, and it can lead to the improvement of therapeutic outcomes such as patient health conditions, duration of hospital stay, patient satisfaction, and the general quality of the triage system (26,(32)(33)(34)(35)(36). The current study was conducted due to the few studies on knowledge about children's triage.
Objectives
The current study aimed at investigating the knowledge of nurses and nursing students regarding triage in children and the impact of knowledge on the triage performance.
Methods
The study population consisted of nurses working in different departments of the selected children's hospital, including emergency, internal medicine, infectious diseases, neonatal, and oncology, as well as nursing students in the 7th or 8th semester. They were selected by a census sampling method; since nurses in other departments of the hospital assisted the emergency department when it was crowded, all the nurses participated in the current study.
Inclusion criteria for nurses were nursing degree (Bachelor of Science) and at least one year of work experience in clinical settings. Inclusion criterion for nursing students was studying in the 7th or 8th semester.
In the current study, a researcher-made questionnaire was used to assess the knowledge of nurses and nursing students about pediatric triage on the basis of a review of internal and external studies (37). To create the questionnaire, the Canadian five-level triage tool and other tools used in Iran were reviewed (37)(38)(39). To assess the validity and reliability of the questionnaire, a panel of faculty members commented on its items and the content validity ratio (CVR) was calculated. Then, the content validity index (CVI) was calculated, which was appropriate. To test the reliability of the questionnaire, a test-retest method was used (r = 0.9). The questionnaire had two parts: the first part consisted of demographic information (13 items for nurses and 14 for students). The second part comprised 15 items to assess the knowledge of nurses and nursing students about performing hospital triage (correct, wrong, and I do not know). Since there was not enough information about the method of scoring in the related articles, a linear transformation scoring method was employed. For each item, a correct answer scored 1 point, while an incorrect or "do not know" answer scored 0 (40).
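The scoring rule and the content validity ratio named above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual computation: the answer key, responses, and panel sizes below are invented, and the CVR formula used is Lawshe's standard definition.

```python
# Sketch of the linear-transformation scoring and Lawshe's CVR described
# in the methods; all data below are illustrative, not from the study.

def knowledge_score(answers, answer_key):
    """1 point per correct answer, 0 for wrong or 'do not know',
    rescaled to a 0-100 range (linear transformation)."""
    raw = sum(1 for a, k in zip(answers, answer_key) if a == k)
    return 100.0 * raw / len(answer_key)

def lawshe_cvr(n_essential, n_panelists):
    """Lawshe's content validity ratio: CVR = (ne - N/2) / (N/2)."""
    half = n_panelists / 2.0
    return (n_essential - half) / half

# A respondent answering 9 of 15 items correctly:
key = ["correct"] * 15
responses = ["correct"] * 9 + ["wrong"] * 4 + ["do not know"] * 2
print(knowledge_score(responses, key))   # 60.0

# An item rated essential by 9 of 10 panelists:
print(lawshe_cvr(9, 10))                 # 0.8
```

The 0-100 rescaling is one plausible reading of "linear transformation scoring"; the paper does not state the exact output range.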
In the current study, 176 questionnaires were distributed among the nurses and nursing students. The questionnaires were completed from September to October 2017 by qualified nurses and nursing students. The researchers distributed the questionnaires among the nurses at the end of their shifts and among the students after completion of their classes. Completion of the questionnaire took about 20 minutes, and the whole sampling took one month to complete. After data collection, the information was transferred into SPSS version 16. Descriptive statistics (i.e., standard deviation, mean, minimum, and maximum) were used to summarize demographic characteristics. To investigate the correlation of personal characteristics of the nurses and students with their level of knowledge, the Wilcoxon and Kruskal-Wallis tests were used. The significance level of the tests was set to P < 0.05.
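The Kruskal-Wallis test run in SPSS can be sketched as a pure-Python computation of the H statistic (without the tie-correction factor SPSS applies). The sample scores are invented for illustration; a real analysis would also convert H to a p-value against a chi-square distribution.

```python
# Minimal Kruskal-Wallis H statistic (no tie-correction factor), as a
# sketch of the test the authors ran in SPSS; scores are invented.

def kruskal_wallis_h(*groups):
    pooled = sorted(x for g in groups for x in g)
    # Assign midranks: tied values get the average of their rank span.
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0   # average of ranks i+1 .. j
        i = j
    n = len(pooled)
    h = 0.0
    for g in groups:
        r = sum(ranks[x] for x in g)           # rank sum of this group
        h += r * r / len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Knowledge scores for two hypothetical subgroups:
print(round(kruskal_wallis_h([1, 2, 3], [4, 5, 6]), 3))  # 3.857
```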
To consider the ethical principles, the researchers initially obtained permission from the Deputy of Research and Technology of Guilan University of Medical Sciences and hospital matron; then described the research objectives to nurses and students. All the participants also signed informed consent forms. Participation in the study was completely voluntary and optional. All the participants were assured about the confidentiality of information obtained and their characteristics. The current study was approved by the Ethics Committee of Guilan University of Medical Sciences (ethical code: IR.GUMS.REC.1396.276).
Results
All the nurses participating in the current study were female. The demographic information of the nursing students and nurses is presented in Tables 1 and 2 in accordance with their level of knowledge. A comparison of the nurses' and nursing students' levels of knowledge is presented in Table 3. The results showed that 94.3% of the respondents were in the weak range. The chi-square test showed no significant difference in knowledge level between the two groups (P = 0.193).
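The group comparison reported above is a Pearson chi-square test on a contingency table of knowledge level by group. A minimal sketch of the statistic follows; the cell counts are invented for illustration and are not the study's data.

```python
# Pearson chi-square statistic for a contingency table; counts invented.

def chi_square(table):
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# rows: nurses / students; columns: weak / adequate knowledge
print(round(chi_square([[10, 2], [8, 4]]), 3))  # 0.889
```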
Discussion
The current study was conducted on the role of nurses and nursing research in health conditions (21,41,42). The participants' knowledge was assessed by quantitative methods. The study results showed that most of the nurses and nursing students did not have enough knowledge. In another study, Goransson et al. reported that nurses' average knowledge of triage was 57.7% (39). The study conducted by Mirhaghi and Roudbari in Zahedan, Iran, with a similar goal, found that 39.4% of the subjects' responses were correct. In the current study, 22% of the respondents had no familiarity with the topic of triage (37). In the study by Dadashzadeh et al., 38.2% of the participants had not passed a special training course in triage (43). In the current study, during the statistical analysis of the data, a significant relationship was observed between lack of familiarity with hospital triage and providing correct answers to the questionnaire; this calls for more education and for holding specialized courses on triage. The study by Haghdoust et al., entitled "determining the effect of teaching triage on knowledge, attitude, and practice of nurses working in the emergency department", conducted in Rasht, found that the average score of knowledge on triage was 16.25 before training, which increased to 30.25 after intensive training (44). A study by Hammad et al. found a huge difference in the training, skills, and experience of triage staff members in a hospital in China; this calls for integrated training for personnel (45). The results of the study by Ponsiglion et al. revealed that many factors, such as training courses and nurses' knowledge, nurses' workload and continuous interruptions of their work due to crowding, appropriate use of visual keys, personal experience of nurses, the social setting and work environment, communication, feedback, leadership and teamwork of the unit, and personal traits and personality, which affect the process and potential outcomes of triage, can affect nurses' triage decision-making power (31). In contrast, according to a study by Martin et al., the amount of triage experience did not contribute to the accuracy of triage in emergency departments (46). A further study by Fry and Burr in Australia found that although 57% of the participants had a graduate degree in nursing and had received other training in emergency, trauma, critical care, and acute care, they did not have enough knowledge about triage (28). In 2007, Considine et al. also argued that knowledge had an important and efficient role in nurses' decision-making for triage (9). However, in the study by Taheri et al., there was a high correlation between clinical experience in the emergency department and the level of knowledge about triage (47). Mirhaghi and Roudbari did not find any significant relationship between work experience and triage knowledge (37). A study by Chen et al. found that the level of knowledge and triage training of staff, work experience, triage level, and hospital grade were effective factors in triage (48). The current study did not find a significant relationship between work experience or other demographic characteristics of the subjects and the score obtained from the questionnaire. Considering traumatic damage and the importance of triage management by nurses, it is necessary to intervene in this regard (42,49,50).
Conclusions
The current study showed that the knowledge of nurses and nursing students was lower than the average level; more than half of them did not get a good score in terms of knowledge level. The awareness level of nurses involved in the triage of patients is important; therefore, proper training of human resources and adequate triage equipment are recommended in emergency departments. Since performing triage is one of the main tasks of nurses, and its related knowledge is first taught to nursing students at the university and then in clinics, nurses should be retrained on healthcare practices and critical thinking in the form of short-term courses. A nurse should not be convinced that his/her level of experience alone guarantees competence in triage; however, based on the evidence from the current study, nurses with both clinical experience and triage knowledge can be assured of their effective role in improving the quality of triage in emergency departments. The results of the current study should be considered in policy making, in compiling triage guidelines, and in continuous training in hospitals.
Limitations
The first limitation of the study was that, on the basis of ethical considerations, the knowledge levels of the nurses and students were evaluated in different situations; the evaluation was performed on paper instead of in the real environment. In addition, the results of the current study were obtained from only one children's hospital in Guilan province, Iran, and the number of samples was limited.
However, since the current study was the first of its kind, it provides useful knowledge about appropriate interventions for health policymakers.
Acknowledgments
The authors express their gratitude to the Deputy of Research and Technology of Guilan University of Medical Sciences for supporting this project, and to all the nurses, authorities, and students who contributed to the completion of the research.
Conflict of Interests:
Authors declared no conflict of interest.
Ethical Considerations:
To consider ethical principles in the current study, the researchers initially obtained permission from the Deputy of Research and Technology of Guilan University of Medical Sciences and the hospital matron, then described the research objectives to the nurses and students, and written informed consent was obtained from the interested ones. Participation in the study was completely voluntary and optional. All the participants were assured that the information obtained and their characteristics would be kept confidential.
"year": 2019,
"sha1": "d07d6292fd57e2259b9f19c2df196c5f1dea288b",
"oa_license": "CCBYNC",
"oa_url": "https://jcp.kowsarpub.com/cdn/dl/47601c70-3a56-11e9-b366-2f658e9bed01",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "7ba7a3af6f86e5a1e08a24ddab1ba9a226863709",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evaluation of a Lecithin Supplementation on Growth Performance, Meat Quality, Lipid Metabolism, and Cecum Microbiota of Broilers
Simple Summary
Lecithin can not only provide energy to animals but also serves as an emulsifier and has the potential to enhance the utilization of dietary fat by animals. Thus, there is a need to elucidate the underlying mechanism of its positive effect in broilers. The present feeding trial aims to evaluate the effect of lecithin on broilers' performance, meat quality, lipid metabolism, and cecum microbiota. The obtained results revealed significant improvements in broiler meat quality resulting from the lipid metabolism and microbiota changes induced by lecithin treatment. Consequently, lecithin could be used in broilers' diets with the aim of meat quality improvement.
Abstract
The present study was conducted to evaluate the effects of lecithin on the performance, meat quality, lipid metabolism, and cecum microbiota of broilers. One hundred and ninety-two one-day-old AA broilers with similar body weights (38 ± 1.0 g) were randomly assigned to two groups with six replicates of sixteen birds each and were supplemented with 0 or 1 g/kg of lecithin for forty-two days. Performance and clinical observations were measured and recorded throughout the study. Relative organ weight, meat quality, and lipid-related biochemical parameters and enzyme activities were also measured. Compared with broilers in the control group, broilers in the lecithin treatment group showed a significant increase in L* value and tenderness (p < 0.05). Meanwhile, the abdominal adipose index of broilers was markedly decreased by lecithin treatment after 42 days (p < 0.05). Regarding lipid metabolism, broilers in the lecithin treatment group showed significantly higher hepatic lipase and general esterase values at 21 days compared with the control group (p < 0.05). Lower Firmicutes and higher Bacteroidetes levels at the phylum level were observed in the lecithin treatment group after 21 and 42 days.
The relative abundances of Lactobacillus, Clostridia, and Rikenella at the genus level were higher in the lecithin treatment group after 21 and 42 days. No statistically significant changes were observed in performance, relative organ weight, or other serum parameters (p > 0.05). These results indicate that supplementation with lecithin significantly influenced lipid metabolism in broilers at 21 and 42 days, which resulted in a positive effect on meat color, tenderness, and abdominal adipose in broilers.
Introduction
Phospholipids, which are found in thousands of organisms, are key components of the cell membrane. Dietary intake of exogenous phospholipids provides the majority of phospholipids organisms need. Lecithin, which is mostly obtained from soybean, oilseed rape, and sunflower seed in a normal diet, is widely known to be an important transporter of lipids in organisms [1]. Commercially available products are mostly extracted from soybeans, and they are widely used in healthcare for humans and animals [2]. Lecithin can not only provide energy to animals but also serves as an emulsifier and has the potential to enhance utilization of dietary fat by animals [3]. The physiological effects of lecithin have been studied extensively in recent years, although public health recommendations regarding lecithin intake currently have no limit. Even so, lecithin has become a popular animal dietary supplement for increasing performance and nutrient utilization. The effect of lecithin on cholesterol reduction was validated in monkeys, hamsters, and many other species [4]. It was reported in a previous study that a diet supplemented with lecithin could increase the daily gain of nutrients and nutrient digestibility in animals [5]. Moreover, lecithin treatment in intestinal cell membranes alters the permeability of cell bilayers and resulted in the greater influx of micro-and macro-molecules across the cell membrane [6]. The positive effect of lecithin is also attributed to healthy gut improvement [7]. Therefore, supplementing exogenous lecithin emulsifiers in the diets has become very popular in poultry production.
To date, few studies have been well conducted with exogenous lecithin or emulsifiers, and inconsistent responses in broilers have been noted. Only a few studies on broilers reported that lecithin can maintain broiler performance with low energy diets [8]. Several studies reported that supplementation of broiler diets with lecithin improved growth performance [9]. Meanwhile, many studies of lecithin treatment in broilers show a positive effect on apparent energy and nutrient utilization [10]. Contrarily, it has been reported that emulsifiers have no significant impact on the growth performance of broilers [11,12]. The differing effects of lecithin or emulsifiers on broilers are attributed to the ingredients and test conditions. However, these studies have barely noticed the effect of lecithin on the gut microbial community of broilers. As a pivotal component of the intestinal barrier, the composition and function of the gut microbiota are dynamic and affected by diet properties. Meanwhile, the gut microbiota has been shown to affect lipid metabolism and lipid levels in blood and tissues [13]. Hence, we hypothesized that lecithin supplementation might have unknown effects on microbial communities in broilers, which might further affect lipid digestibility and utilization. Therefore, the present study was conducted to evaluate the effects of lecithin on the performance, meat quality, lipid metabolism, and microbial community of broilers.
Birds, Diets, and Management
A total of one hundred and ninety-two one-day-old male Arbor Acres (AA) broilers were randomly assigned to two groups (control and treatment) with six replicates of sixteen birds each. The birds of each replicate were reared in a single cage (2.4 × 0.6 × 0.6 m) with a wire screen floor. Water and feed were provided ad libitum, with the photoperiod set at 23 L:1 D throughout the study. The temperature in the broiler house during the first week was 32 to 35 °C, after which it was lowered by 1 °C every other day until it reached 27 °C. The study was conducted according to the Regulations of the Experimental Animal Administration issued by the State Committee of Science and Technology of the People's Republic of China. The animal use protocol was approved by the Animal Care and Use Committee of the Poultry Institute at the Chinese Academy of Agriculture Science.
Lecithin was derived from soybeans (PHOSPHOLIPON 90 g, Lipoid Co. Ltd., Ludwigshafen, Germany) with 94.3% purity, 1.3% nonpolar lipids, 1.2% lysophosphatidylcholine, 0.2% water, 0.17% tocopherol, and other sterols. The diet of the birds was formulated to meet or slightly exceed all nutrient requirements (Table 1) (NRC, 1994) [14], and it was provided in mash form to avoid degradation of lecithin. All of the birds were fed diets supplemented with 0 (control) and 1 g/kg (treatment) lecithin for 42 days.
Growth Performance and Sample Collection
Cage-side observations, which included recording changes in clinical condition or behavior, were made at least twice daily throughout the study. All macroscopic abnormalities in the birds and any deaths throughout the experiment were recorded after necropsy. The body weights of the birds from the different replicates were determined at the beginning of the study and at 21 and 42 days. Feed consumption was recorded on a replicate basis at 21 and 42 days. Feed conversion was expressed as grams of feed consumed per gram of weight gain. The average daily gain (ADG), average daily intake (ADI), and FCR were calculated for 1 to 21 days of age, 22 to 42 days of age, and 1 to 42 days of age.
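The per-replicate performance metrics defined above reduce to a few ratios, sketched below. The weights and feed amounts are made-up example numbers, not data from the trial, and the sketch assumes no mortality within the replicate.

```python
# ADG, ADI, and FCR for one replicate, as defined in the methods;
# the inputs are invented example numbers.

def performance(start_wt_g, end_wt_g, feed_g, days, n_birds):
    gain = end_wt_g - start_wt_g        # gain per bird (g)
    adg = gain / days                   # average daily gain per bird (g/day)
    adi = feed_g / (days * n_birds)     # average daily intake per bird (g/day)
    fcr = feed_g / (gain * n_birds)     # g feed consumed per g of gain
    return adg, adi, fcr

# One replicate of 16 birds over days 1-21, growing from 38 g to 800 g
# on 16,000 g of feed:
adg, adi, fcr = performance(38, 800, 16_000, 21, 16)
print(round(adg, 1), round(adi, 1), round(fcr, 2))  # 36.3 47.6 1.31
```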
At days 21 and 42, 1 bird was randomly selected from each replicate (6 birds from each group) and weighed after 8 h of feed deprivation. Before the necropsy, 1.5 mL of wing blood was centrifuged at 3500 g for 10 min so that serum could be collected for clinical blood chemistry and enzyme activity detection. After that, the birds were fed for 4 h and sacrificed by jugular bleeding, while the organs (liver, heart, spleen, and thymus) were removed and weighed. The abdominal adipose tissue of the broilers was removed and weighed at 42 days. Breast muscle (both the pectoralis major and minor included) was collected from the right pectorals and stored at 4 °C for later analysis. The contents from both ceca were thoroughly mixed and stored at −80 °C for 16S rDNA amplicon sequencing analysis.
Meat Quality
The meat samples were stored at 4 °C for 24 h before testing. Indices of meat quality, including pH, color, shear force, and drip loss, were determined with methods described previously [15,16].
The pH value was measured with a portable pH meter (HI8424, Beijing Hanna Instruments Science & Technology Co. Ltd., Beijing, China) equipped with an insertion glass electrode calibrated in buffers at pH 4.01 and 7.00 at ambient temperatures. The measurements were made at the same location on individual breast and thigh muscle samples. The average pH value was calculated from 3 readings taken on the same muscle sample.
Meat color was assessed with a chroma meter (CR-10, Minolta Co. Ltd., Suitashi, Osaka, Japan) to measure CIELAB values (L* indicates relative lightness, a* relative redness, and b* relative yellowness). The tip of the colorimeter measuring head was placed flat against the surface of the muscle. Higher L* values indicate lighter meat, higher a* values redder meat, and higher b* values yellower meat. Drip loss was estimated by determining expressible juice using a modification of the filter paper press method. A raw meat sample weighing 1.0 g was placed between 18 pieces of 11 cm diameter filter paper and pressed at 35 kg for 5 min at 25 °C. The expressed fluid was determined as the change in the weight of the original sample. The water-holding capacity was calculated as the ratio of expressible fluid to total moisture content.
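The expressible-juice calculation at the end of that paragraph can be sketched as a one-line ratio. The sample weights and moisture fraction below are invented for illustration.

```python
# Water-holding ratio from the filter-paper press method described above:
# expressible fluid (weight lost on pressing) over total moisture content.
# All inputs are invented example numbers.

def water_holding(sample_wt_g, pressed_wt_g, moisture_fraction):
    expressible = sample_wt_g - pressed_wt_g          # fluid pressed out (g)
    total_moisture = sample_wt_g * moisture_fraction  # total water in sample (g)
    return expressible / total_moisture

# A 1.0 g sample weighing 0.85 g after pressing, with 75% moisture content:
print(round(water_holding(1.0, 0.85, 0.75), 3))  # 0.2
```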
A shear force test was done on the breast fillets using the razor blade method with an Instron Universal Mechanical Machine (Instron model 4411, Instron Corp., Canton, MA, USA). Meat samples were stored at 4 °C for 24 h and were then individually cooked in plastic bags in a water bath at 80 °C to an internal temperature of 70 °C. The samples were then removed and chilled to room temperature. Strips (1.0 cm (width) × 0.5 cm (thickness) × 2.5 cm (length)) parallel to the muscle fibers were prepared from the medial portion of the muscle and sheared vertically. Shear force was expressed in kilograms. Three values were recorded for each replicate sample and averaged.
Lipoprotein lipase (LPL) and hepatic lipase (HL) in serum were measured with colorimetric enzymatic methods using commercially available kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, Jiangsu, China). General esterase (GE) activity was calculated as the sum of LPL and HL.
HL detection in liver was performed on liver supernatant. Approximately 0.1 g of liver sample was transferred into a 1.5 mL precooled centrifuge tube with 0.9 mL physiological saline and homogenized into 10% homogenates (50 Hz for 3 min using a tissue grinder, SCIENZT-48, Scientz Biotechnology Co., Ltd., Ningbo, Zhejiang Province, China). The homogenate was then centrifuged in a low-speed centrifuge (2500-3000 rpm for 10 min, DL-5M, Xiangyi Power Testing Instrument Co. Ltd., Changsha, Hunan, China). The same test kit as for serum was used.
DNA Extraction, PCR Amplification of 16S rDNA, Amplicon Sequence, and Sequence Data Processing
Microbial genomic DNA was extracted from 220 mg of cecal contents sample using a QIAamp DNA Stool Mini Kit (Tiangen Biotech Company Limited, Beijing, China) following the manufacturer's instructions. Successful DNA isolation was confirmed by an A260/280 ratio ranging between 1.8 and 2.0 and by agarose gel electrophoresis.
Based on previous comparisons, the V4 hypervariable regions of 16S rDNA were PCR amplified from the microbial genomic DNA harvested from samples and were used for the remainder of the study. PCR primers flanking the V4 hypervariable region of bacterial 16S rDNA were designed. The barcoded fusion forward primer was 520F (5′-barcode + GCACCTAAYTGGGYDTAAAGNG-3′), and the reverse primer was 802R (5′-TACNVGGGTATCTAATCC-3′). The PCR conditions were as follows: one pre-denaturation cycle at 98 °C for 30 s; 25 cycles of denaturation at 98 °C for 15 s, annealing at 50 °C for 30 s, and elongation at 72 °C for 30 s; and one post-elongation cycle at 72 °C for 5 min. The PCR amplicon products were separated on 0.8% agarose gels and extracted from the gels. Only PCR products without primer dimers and contaminant bands were collected for sequencing by synthesis (Axygen AxyPrep DNA Gel Extraction kit, New York, NY, USA). Barcoded V4 amplicons were sequenced using the paired-end method by Illumina MiSeq (Sangon Biotech Company Limited, Shanghai, China) with a 600-cycle index read. Only sequences with an overlap longer than 10 bp and without any mismatch were assembled according to their overlap sequence. Reads that could not be assembled were discarded. Barcode and sequencing primers were trimmed from the assembled sequences [17,18].
Statistical Analysis
In this study, operational taxonomic unit (OTU) cluster analysis was used to classify the OTU sequences based on a 97% similarity criterion. The OTU abundance of each sample was generated at the genus level. Bacterial diversity is shown by the number of OTUs. The mean length of all effective bacterial sequences without primers was 280 bp. The abundance and diversity of the microbiota were compared among samples by calculating OTUs.
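Once an OTU abundance table exists, a common way to summarize per-sample diversity is the Shannon index; the sketch below illustrates that computation. The paper itself compares OTU counts rather than naming Shannon diversity, and the abundance vector here is invented.

```python
import math

# Shannon alpha diversity from one sample's OTU abundance vector;
# a standard summary of such tables, with invented counts.

def shannon(otu_counts):
    total = sum(otu_counts)
    h = 0.0
    for c in otu_counts:
        if c > 0:
            p = c / total        # relative abundance of this OTU
            h -= p * math.log(p)
    return h

# Four OTUs with equal abundance give maximal diversity, ln(4):
print(round(shannon([25, 25, 25, 25]), 4))  # 1.3863
```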
A pen of birds was the experimental unit for performance parameters. For all other measurements, individual birds from each replicate were used. All data are presented as means ± SEM. Statistical analyses were carried out with SPSS 18.0 for Windows (SPSS Inc., Chicago, IL, USA). Differences between groups were tested with a t-test for independent samples. A p value less than 0.05 was considered to indicate statistical significance, and a trend was considered present at p < 0.10.
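As a minimal sketch of the analysis described above (with made-up per-pen FCR values, not the study's data), the independent-samples t-test can be computed from scratch. The pooled-variance form below is the classic Student t-test; for df = 10, the two-tailed 5% critical value is about 2.228.

```python
# Minimal independent-samples t-test sketch (hypothetical FCR per pen).
from statistics import mean, variance
from math import sqrt

control  = [1.62, 1.58, 1.65, 1.60, 1.63, 1.59]
lecithin = [1.55, 1.52, 1.58, 1.54, 1.56, 1.53]

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

t = pooled_t(control, lecithin)
# Two-tailed critical value for df = 10 at alpha = 0.05 is about 2.228,
# so |t| > 2.228 corresponds to p < 0.05.
print(f"t = {t:.2f}, significant at p < 0.05: {abs(t) > 2.228}")
```

In practice a statistics package reports the exact p value directly; the hand computation is shown only to make the p < 0.05 / p < 0.10 decision rule explicit.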
Performance
The mean performance (ADG, ADI, and FCR) and mortality are shown in Table 2. No statistically significant differences in performance parameters or mortality were found between the treated and control groups throughout the whole period (p > 0.05). The relative organ weight results are shown in Table 3. As shown in Table 3, no statistically significant changes were observed in the relative organ weights of broilers at 21 or 42 days (p > 0.05). The abdominal adipose tissue of the control group at 42 days was significantly higher than that of the lecithin treatment groups (p < 0.05).
Meat Quality
The meat quality parameter results are shown in Table 4. The L values were markedly higher in the lecithin treatment groups than in the control group (p < 0.05). Drip loss and the b value were higher in the lecithin treatment group than in the control group, with p values of 0.062 and 0.078, respectively. The tenderness value (shear force) was markedly lower in the lecithin treatment groups than in the control group (p < 0.05) at 42 days. No other parameters were significantly affected by dietary lecithin supplementation (p > 0.05) at 21 or 42 days.
Lipid Metabolism
The lipid-related biochemical parameters in serum are shown in Table 5, and the lipid-related enzyme activities in serum are shown in Table 6. As shown in Table 5, the GLU value at 42 days was significantly higher in the lecithin-supplemented groups than in the control group (p < 0.05), suggesting that a high exogenous lipid intake increases glucose metabolism in vivo. Meanwhile, the serum cholesterol level of broilers at 21 days decreased significantly with lecithin supplementation, but no difference was observed at 42 days.
16S rDNA Analysis of Bacterial Communities
Shifts of the cecal microbial community along with body development: Figures 1 and 2 show the distribution of DNA sequences into phyla after 21 and 42 days. Firmicutes was the most dominant phylum in the intestinal content at all development stages. However, the abundance of Firmicutes in the control group (88.10%) was significantly higher than that in the lecithin treatment group (78.83%) after 21 days. Conversely, the Bacteroidetes level in the control group (5.34%) was significantly lower than that in the lecithin treatment groups (11.99%) after 21 days, and Proteobacteria were observed at a higher level in the lecithin treatment groups after 21 days. The results after 42 days followed the same pattern: the abundance of Firmicutes in the control group (82.26%) was significantly higher than that in the lecithin treatment group (70.10%), and the Bacteroidetes level in the control group (12.39%) was significantly lower than that in the lecithin treatment groups (26.19%). Principal component analysis (PCA) plots of samples after 21 (A) and 42 (B) days (Figure 3) illustrate the differences in the distribution of the microbial community. Based on the PCA, the control group and the lecithin-supplemented group were clearly divided into two clusters. Combined with the data above, this result suggests that the lecithin treatment group had a significantly different gut microflora profile.
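To illustrate how the kind of PCA separation seen in Figure 3 arises, the sketch below runs a two-dimensional PCA from scratch on made-up phylum fractions (Firmicutes, Bacteroidetes) for two groups of three samples. All numbers are invented for illustration and are not the study's OTU table.

```python
# Toy 2D PCA: project samples onto the first principal component and
# check whether the two treatment groups separate.
from math import sqrt

control  = [(0.88, 0.05), (0.87, 0.06), (0.89, 0.05)]   # (Firmicutes, Bacteroidetes)
lecithin = [(0.79, 0.12), (0.78, 0.13), (0.80, 0.12)]
data = control + lecithin

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
centered = [(x - mx, y - my) for x, y in data]

# Sample covariance matrix [[sxx, sxy], [sxy, syy]]
sxx = sum(x * x for x, _ in centered) / (n - 1)
syy = sum(y * y for _, y in centered) / (n - 1)
sxy = sum(x * y for x, y in centered) / (n - 1)

# Leading eigenvalue/eigenvector of a symmetric 2x2 matrix, closed form
lam = (sxx + syy) / 2 + sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
vx, vy = sxy, lam - sxx            # (unnormalized eigenvector)
norm = sqrt(vx * vx + vy * vy)
vx, vy = vx / norm, vy / norm

pc1 = [x * vx + y * vy for x, y in centered]
ctrl_scores, lec_scores = pc1[:3], pc1[3:]
print(ctrl_scores, lec_scores)     # the two groups fall on opposite sides
```

Because the between-group difference dominates the within-group noise, PC1 aligns with the treatment axis and the two groups' scores do not overlap, which is the pattern the PCA plots report.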
Genus-level significance analysis of the cecal microbial community after 21 and 42 days: Tables 7 and 8 present the abundance of selected genera (>0.1% in at least one sample) across all samples. There were apparent differences in genus distribution between the control and lecithin treatment groups. The proportions of Lactobacillus, Lactobacillus agilis, Clostridia, and Rikenellaceae were higher in the lecithin treatment group, whereas the proportions of prausnitzii, Erysipelotrichi, Lachnospiraceae, and alactolyticus were higher in the control group after 21 days; the pattern at 42 days was the same.
Growth Performance and Organ Weight
The results of this trial revealed that direct addition of high-purity lecithin to the broiler diet had no significant effect on performance or relative organ weight throughout the experimental period, in line with most previous studies. The energy requirement of commercial broilers is very high, while fat sources are limited in traditional diets, so exogenous lipid supplementation has become an inevitable trend in broiler production for better performance. However, the reported effects of lipid and emulsifier additives remain inconsistent. Through exogenous supplementation of phospholipids, lecithin has shown varying effects on performance in several reports. Some studies found that lecithin supplementation in low-energy, low-protein diets improved both performance and digestibility parameters in broilers [19], while others reported no significant effect [11]. Likewise, one study showed that the body weights of lecithin-supplemented groups at 21 and 35 days were not significantly different from the control group [20]. Moreover, in some reports lecithin supplementation suppressed gastric emptying and caused vomiting and diarrhea [21]. Meanwhile, lecithin has been used as an exogenous emulsifier to improve the utilization of fat and energy in weaning piglets [22] and broilers [19,23]. Fats, as hydrophobic components, must aggregate into micelles to be absorbed; emulsifiers in the digestive tract (mainly bile salts) naturally mediate this process and improve micelle formation [24]. However, most studies reported no significant influence on N or energy digestibility. Because of the short digestive tract and unstable digestibility of broilers, the effect of lecithin in increasing fat digestibility appears limited [25].
This may explain the non-significant effect of lecithin on the growth performance and organ weights of broilers.
Overall, these results may be affected by the source of dietary lipids, the form and inclusion level of the phospholipid product, the breed of the chickens, and the duration of the trial. The improving effect of lecithin on performance and organ weight may be more consistent and pronounced in low-energy, low-protein broiler diets.
Meat Quality
In this study, dietary supplementation with lecithin had a positive effect on meat color, water-holding capacity, and tenderness at different periods, which may be due to the effect of lecithin supplementation on lipid metabolism. Zhao et al. [10] reported that lecithin supplementation could increase the transport of lipids in the body and improve fat deposition in the muscle, which is consistent with our results. Meat characteristics, including tenderness, water-holding capacity, pH, and color, are important indices for the evaluation of meat quality. Meat color is a key attribute in consumers' final evaluation and acceptance of a meat product [26]. In this study, the color value of breast meat was increased by lecithin supplementation; similar reports indicated that the color value of meat also improved with increasing emulsifier dosage [25,27]. As the most important textural characteristic of meat, tenderness has a great impact on the consumer experience of broiler meat. The shear force value in this trial was significantly reduced by lecithin treatment, which is also related to the fat content of the meat [28]. The pH value showed no difference in this trial, indicating that lecithin treatment had no effect on lactic acid content.
Lipid Metabolism
The serum cholesterol level of broilers at 21 days changed significantly with lecithin supplementation, while this trial provides evidence that the serum cholesterol level of broilers remained unchanged at 42 days. Similar results were reported by Zhao et al. [10]. The reason may be the immature development of the digestive tract in chicks: previous studies reported that lecithin supplementation could promote duodenal development [29], with the effect being more obvious at the early stage. Those studies suggested that the effect of lecithin on the serum profile of broilers may be more pronounced in the starter period [9]. Previous research on cholesterol metabolism using isotope tracer techniques indicated that the net balance of cholesterol homeostasis in chickens is relatively stable and resistant to plant sterol supplementation [30].
Exogenous emulsifiers can accelerate the emulsification of lipids in the small intestine and promote the activation of lipase. In addition, lecithin was reported to promote the secretion of endogenous bile acids and further improve the utilization of fat [25]. This trial showed that the enzyme activities of HL and GE were significantly increased by lecithin supplementation at 21 days; no other parameters were significantly affected by dietary lecithin supplementation at 21 or 42 days. Previous research reported a positive correlation between lipase activity and lipid deposition in both broiler and layer strains [31]. As a main component of lipoproteins, lecithin plays an important role in lipid metabolism, such as lipid transport. Exogenous phospholipids can elevate the HDL level and decrease the serum cholesterol level, resulting in the regulation of lipid deposition [19,23]. The lipid metabolism results of this study indicated that lipid transport and deposition were altered by lecithin supplementation: exogenous lecithin accelerated lipolysis in the liver and reduced lipid transport in serum, resulting in less lipid deposition in the abdomen. Boontiam et al. [32] reported that lecithin supplementation increased the proportion of lipids used in muscle formation, which may explain the meat quality results of this experiment. As visual indicators of lipid deposition in broilers, tenderness and abdominal adipose tissue were affected by lecithin addition in this trial, supporting the prediction that lipid metabolism is affected by lecithin.
Microbial Community
The composition of the intestinal microbiota is important for maintaining homeostasis of the gastrointestinal tract and the health of the host [33]. The intestinal microbial community has been recognized as a strong determinant of host physiology, especially through its critical role in the digestion of feed [34]. The 16S rDNA gene sequencing of cecal contents showed that the addition of lecithin altered the microbiota composition at the phylum level in broilers, and the PCA plots confirmed these results. In our study, Bacteroidetes, Firmicutes, Tenericutes, and Proteobacteria were the main bacteria in the broiler intestinal flora, consistent with previous research [35]. More specifically, lower Firmicutes and higher Bacteroidetes levels were observed in the lecithin treatment groups, whereas higher Firmicutes levels were observed in the control group. Studies comparing the gut microbiota between obese and lean animals showed that lower Firmicutes and higher Bacteroidetes levels were associated with the lean phenotype [36], and many studies suggest that lower Firmicutes and higher Bacteroidetes levels lead to less lipid deposition in animals. Lactobacillus has been reported as a genus that can alleviate enteritis, and Clostridia were reported to act synergistically with Lactobacillus [37]. The higher abundance of Lactobacillus and Clostridia in the lecithin treatment group suggests that the genus-level bacterial distribution may assist lipid absorption in birds. Therefore, the altered cecal microbiota profiles after 21 and 42 days were involved in the effect of lecithin on broiler lipid metabolism.
Conclusions
In summary, taking advantage of 16S rDNA sequencing, this study revealed lower Firmicutes and higher Bacteroidetes levels in broiler ceca in response to lecithin treatment. Lecithin supplementation improved the enzyme activities of HL and GE in serum and reduced the abdominal adipose tissue of broilers. Finally, this study provides evidence that lecithin supplementation had a positive effect on the meat color and tenderness of broilers.
Spinal Cord Injury Incurred by Neck Massage
Massage is generally accepted as a safe and widely used modality for various conditions, such as pain, lymphedema, and facial palsy. However, several complications, some with devastating results, have been reported. We introduce a case of a 43-year-old man who suffered from tetraplegia after a neck massage. Imaging studies revealed compressive myelopathy at the C6 level, ossification of the posterior longitudinal ligament (OPLL), and a herniated nucleus pulposus (HNP) at the C5-6 level. After 3 years of rehabilitation, his motor power improved, and he is able to walk and drive with adaptation. OPLL is a well-known predisposing factor for myelopathy in minor trauma, and it increases the risk of HNP when associated with a degenerative disc. Our case emphasizes the need for additional caution in applying manipulation, including massage, in patients with OPLL; patients who are relatively young (i.e., in the fifth decade of life) are not immune to minor trauma.
INTRODUCTION
Spinal cord injury may cause diverse problems, such as motor impairment, sensory impairment, respiratory dysfunction, deep vein thrombosis, chronic pain, and spasticity. 1 Most spinal cord injuries occur because of major trauma; however, in the presence of risk factors, they can also result from minor trauma. Ossification of the posterior longitudinal ligament (OPLL) may cause spinal stenosis and is known to trigger spinal cord injury even after a minor injury, such as a whiplash injury or a low-height fall. 2 Massage therapy is performed for various purposes, such as alleviating musculoskeletal and neurogenic pain and improving blood and lymphatic circulation. Although objective evidence for its treatment effect is limited, it has long been practiced in countries with various cultural backgrounds. 3,4 Reported complications of massage therapy include vascular injury, stroke, ulcer, and pulmonary artery embolism. Regarding the safety of massage, Ernst reported 20 cases of complications of massage therapy in 2003 and concluded that massage therapy is not always safe. 5 We experienced a case of spinal cord injury incurred by a neck massage, which has not been reported before. Here, we report our case with a review of the literature.
CASE REPORT
A 43-year-old man with no notable past medical history worked for a moving company. Because of pain in the neck and shoulder, the patient received a cervical massage from a massage therapist with a private certificate at a massage center in Kyounggi province in March 2008. The massage was performed with the patient in a prone position: the therapist pressed the patient's back with the palm and then pushed it upward from the thoracic to the cervical region. During the massage, the patient felt paralysis in the right upper and lower extremities. Because the weakness was not severe, however, the patient returned home without any specific treatment. On the evening of that day, the patient felt weakness in all extremities and visited the emergency care center of another hospital. OPLL at the fifth cervical spine was revealed on cervical computed tomography performed at the emergency care center (Fig. 1-A, B). Cervical magnetic resonance imaging showed an acute herniated nucleus pulposus (HNP) between C5 and C6, and T2-weighted images showed high signal intensity, indicative of myelopathy, around the sixth cervical spine (Fig. 2-A, B). The patient underwent a sixth cervical corpectomy, anterior cervical fusion with device fixation from the fifth to seventh cervical spine, posterior cervical fusion with device fixation from the third to seventh cervical spine, and posterior laminectomy from the fourth to sixth cervical spine (Fig. 3-A, B).
Four months after the onset of the trauma, the patient was referred to the department of rehabilitation medicine of our hospital for comprehensive rehabilitative treatment. The results of the manual muscle test and sensation test are shown in Table 1. Based on a neurologic level of injury of C4 and an ASIA Impairment Scale of D according to the criteria of the American Spinal Injury Association (ASIA), the patient was diagnosed with incomplete spinal cord injury (Table 1). An electrophysiologic test was also performed four months after the onset of the trauma. The sensory and motor nerve conduction studies were normal, with no significant difference between sides. On needle electromyography, abnormal spontaneous activities were found in the right C5-C7 and left C4-C5 and C6-C7 paraspinal muscles, in both triceps brachii, and in the flexor carpi radialis, extensor digitorum communis, and first dorsal interosseous muscles. In addition, the recruitment patterns of the motor unit action potentials were decreased in both triceps brachii and in the flexor carpi radialis and extensor digitorum communis muscles. These findings led to the diagnosis of bilateral lower cervical radiculopathy. The muscle stretch reflexes were increased in the upper and lower extremities on both sides, and Hoffmann's sign and ankle clonus were observed bilaterally. The patient was able to walk indoors using a walker and needed minimal assistance to perform activities of daily living, such as grooming, feeding, dressing, transfer from bed to wheelchair, and gait, but moderate assistance for toileting and bathing. The Korean Spinal Cord Independence Measure (KSCIM) score was therefore 67/100. After 3 weeks of comprehensive rehabilitative therapy, the patient was discharged able to walk independently indoors with a right ankle-foot orthosis.
The patient received physical and occupational therapy, and his clinical course was monitored at the outpatient clinic of rehabilitation medicine at our institution. Three years after the trauma, a manual muscle test showed increased muscle strength in the left elbow extensor, both finger abductors, the right hip flexor, the right ankle dorsiflexor, the right ankle plantar flexor, and the right great toe extensor muscles. Pain and temperature sensation were normal up to the T8/T4 level (right/left), and proprioception improved to the normal range at all spinal levels (detailed motor grades and sensory levels are given in Table 1). This led to a change in the neurologic level of injury to C6 (Table 1). Furthermore, the patient was able to perform most activities of daily living, including outdoor walking and driving, without an orthosis, and the KSCIM score was 97/100. However, the patient needed to hold a balustrade while going up and down stairs, used a fork because of difficulty in using chopsticks when eating, and usually wore clothes without buttons because of difficulty manipulating them. That is, the patient remained restricted in stair climbing and fine motor coordination.
DISCUSSION
Massage therapy has a long history, first recorded in Chinese history in the second century BC, and refers to manual manipulation of the soft tissue for the purposes of treating musculoskeletal pain, correcting body alignment, and improving blood and lymphatic circulation. 3,4 Despite this long history and widespread use, however, there are almost no systematic studies of its effectiveness. According to Tsao, massage therapy was effective for the treatment of non-specific low back pain when combined with exercise and education in patients with chronic non-malignant pain. This author noted, however, that the effect of massage therapy was relatively lower than that of manipulation or transcutaneous electrical nerve stimulation; there was a moderate level of evidence for a treatment effect on shoulder pain and headache; and there was a modest level of evidence for a treatment effect on neck pain, carpal tunnel syndrome, fibromyalgia, and mixed chronic pain conditions. Massage therapy has not commonly been considered a primary therapy but rather a supplement to physical or pharmacological treatments. Nevertheless, it has been increasingly demanded by patients. 4 This may be not only because patients have less fear of side effects from massage treatment, but also because of its accessibility, as the number of facilities and institutions offering massage therapy has been increasing.
There are almost no guidelines on the safety and contraindications of massage therapy, but the traditional contraindications include infections, malignant tumors, dermatologic diseases, burns, and thrombophlebitis. 6 Ernst reviewed the literature concerning complications of massage therapy that occurred between 1995 and 2001, describing a total of 20 cases. The complications with the highest incidence were related to vascular injuries: embolism in the lung, kidney, and retina, pseudoaneurysm of the popliteal artery, stenosis of the internal carotid artery, and giant hematoma. There were also cases of death from asphyxia, rupture of the uterus during abdominal massage of a pregnant woman, and intracranial hemorrhage of the fetus. 5 Spinal cord injury may cause various problems, such as motor and sensory impairment, respiratory dysfunction, deep vein thrombosis, pain, and spasticity, eventually leading to a shortened lifespan and a decreased quality of life. 1 According to the 2011 statistics of the National Spinal Cord Injury Statistical Center, the causes of spinal cord injury in the US include traffic accidents (33.8%), the most prevalent, falls (20.9%), firearm accidents (15.8%), diving (6.3%), bike accidents (5.9%), falling objects (2.9%), and medical and surgical complications (2.5%). In most cases, trauma is a major cause of spinal cord injury, and the great majority of injuries originate from major traumas with large high-speed impacts. Because spinal stenosis can occur in the presence of OPLL, however, irreversible spinal cord injury can also occur from minor traumas, such as a fall, a whiplash injury from a mild traffic accident, a collision with a flying object with no direct impact on the neck, or an object falling from a height of less than 2 m. Katoh et al. conducted a follow-up study over a mean period of 5.5 years in 118 patients with OPLL. According to these authors, 27 patients had minor traumas. Of these 27 patients, 19 had no past history of myelitis, and of these, 13 (68.4%) were newly diagnosed with myelitis; of the eight patients who already had myelitis, seven (87.5%) had aggravation of the pre-existing lesion. 2 A relationship between spinal cord injury and spinal stenosis in patients with OPLL is known, but few cases have been reported in association with non-traumatic acute HNP. Spinal cord injury due to non-traumatic acute HNP in a 68-year-old man 7 and spinal cord injury due to acute HNP following spinal manipulation therapy in a 61-year-old woman 8 were reported in patients with OPLL in 2006 and 2010, respectively. However, no case of spinal cord injury associated with massage therapy has been reported.
In the present case, the spinal cord injury occurred because of a massage performed by the human hand, a physical impact generally considered relatively harmless. Presumably, the current case originated from OPLL and a traumatic acute HNP that had not been diagnosed previously, both of which are risk factors for the development of spinal stenosis.
According to studies conducted from 2002 to 2005 in Korean adults aged 16 years or older, the prevalence of OPLL was 0.6%, relatively low compared with figures reported in Japan or Western countries. 9 The prevalence of OPLL increased with patient age and was 1.36% in Korean patients in their sixth decade. Given a mean prevalence of about 1/200 in Korea (approximately 1/100 among patients in their sixth decade), patients receiving massage therapy may have OPLL, whether diagnosed or undiagnosed. Moreover, considering that massage therapy is commonly performed on various patient groups, often at private institutions in non-medical environments, the risk of spinal cord injury may be higher than usually expected.
In our case, compressive myelopathy occurred after massage therapy, inducing tetraplegia and neuropathic pain. This restricted some activities of daily living as well as work and social life and ultimately had a severely detrimental effect on the patient's quality of life. Therefore, it is necessary to identify risk factors, including OPLL, that make patients vulnerable even to the mild traumas commonly encountered in an outpatient setting. Patients with OPLL should be alert to mild trauma, though it is commonly overlooked, because it can be a risk factor for developing spinal cord injury. In addition, massage therapy has been performed at various private institutions because it is considered a relatively safe method. However, it should be widely known that massage therapy deserves special attention, as it may induce acute HNP and spinal cord injury.
Aerogel‐Functionalized Thermoplastic Polyurethane as Waterproof, Breathable Freestanding Films and Coatings for Passive Daytime Radiative Cooling
Abstract Passive daytime radiative cooling (PDRC) is an emerging sustainable technology that can spontaneously radiate heat to outer space through an atmospheric transparency window to achieve self‐cooling. PDRC has attracted considerable attention and shows great potential for personal thermal management (PTM). However, PDRC polymers are limited to polyethylene, polyvinylidene fluoride, and their derivatives. In this study, a series of polymer films based on thermoplastic polyurethane (TPU) and their composite films with silica aerogels (aerogel‐functionalized TPU (AFTPU)) are prepared using a simple and scalable non‐solvent‐phase‐separation strategy. The TPU and AFTPU films are freestanding, mechanically strong, show high solar reflection up to 94%, and emit strongly in the atmospheric transparency window, thereby achieving subambient cooling of 10.0 and 7.7 °C on a hot summer day for the TPU and AFTPU film (10 wt%), respectively. The AFTPU films can be used as waterproof and moisture permeable coatings for traditional textiles, such as cotton, polyester, and nylon, and the highest temperature drop of 17.6 °C is achieved with respect to pristine nylon fabric, in which both the cooling performance and waterproof properties are highly desirable for the PTM applications. This study opens up a promising route for designing common polymers for highly efficient PDRC.
Introduction
In the last two decades, a huge quantity of energy has been consumed with the rapid development of industry and the economy, leading to an imbalance between fossil resource supply and demand. Compared to building heating, ventilation, and air conditioning (HVAC) systems, passive radiative cooling does not require external energy to achieve high-efficiency cooling; thus, it has attracted increasing attention in recent years. To achieve strong and effective cooling during the daytime, especially on hot summer days, a material must meet the following two conditions: [10,11,16,17] 1) a high solar reflectance (R solar ≈ 1) in the wavelength range of 0.2-2.5 μm to avoid solar absorption, which converts to heat and raises the temperature significantly, and 2) a high emissivity (≈1) in the long-wavelength infrared (LWIR) atmospheric transparency window (8-13 μm) for radiating heat to cold space.
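For a rough sense of scale, a back-of-the-envelope sketch (all numbers assumed for illustration: 1000 W m−2 peak solar irradiance and a 300 K gray body) compares the absorbed solar load at R solar = 0.94 with the total gray-body emission at high emissivity:

```python
# Order-of-magnitude sketch of the two PDRC conditions above.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
I_SOLAR = 1000.0          # assumed peak solar irradiance, W m^-2

def absorbed_solar(r_solar, irradiance=I_SOLAR):
    """Solar heat load on the surface: whatever is not reflected is absorbed."""
    return (1.0 - r_solar) * irradiance

def emitted_power(emissivity, temp_k):
    """Total hemispherical emission of a gray body; an upper bound on the
    fraction that actually escapes through the 8-13 um window."""
    return emissivity * SIGMA * temp_k ** 4

load = absorbed_solar(0.94)        # 60 W m^-2 absorbed at R_solar = 0.94
emit = emitted_power(0.95, 300.0)  # ~436 W m^-2 total emission at 300 K
print(load, emit)
```

Only part of the total emission escapes through the atmospheric window while the atmosphere radiates back in the rest of the spectrum, so achievable net cooling powers are far smaller than the gray-body total; the comparison simply shows why even a few percent of solar absorption (tens of W m−2) can erase the radiative budget.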
To date, various types of radiative cooling materials have been developed, including multilayered structures, [18][19][20] metamaterials, [21] randomly distributed particle structures, [22][23][24] and porous structures. [25][26][27][28][29][30] The first three types generally include high-reflectivity metallic materials at the bottom to reflect sunlight, which are brittle and airtight; thus, they are unsuitable for PTM. Porous structures based on polymers, either in the form of textiles or films, show great potential for application in PTM. Human skin has a high IR emissivity of ≈0.98 in the range of 7-14 μm, which overlaps with the atmospheric transparency window. Therefore, materials with extremely high IR transmittance can help the heat dissipation of the human body. [11] Thus, polymers with high emissivity or transmittance are promising passive daytime radiative cooling (PDRC) materials. Nevertheless, these polymers are limited to polyethylene (PE) [31][32][33][34][35] for high transmittance and polyethylene oxide (PEO), [36] polyvinylidene fluoride (PVDF), and its derivatives [37][38][39][40] for high emittance. Thermoplastic polyurethane (TPU) is a linear block copolymer elastomer consisting of alternating coil-rod segments, which can be processed using various techniques, such as extrusion, injection molding, blow molding, compression molding, or solution coating. TPUs have been widely used in textiles owing to their high transparency, elasticity, tensile strength, wear resistance, and corrosion resistance. [41] In this study, TPU films were designed and confirmed to be a highly efficient PDRC material using a scalable non-solvent-phase-separation (NSPS) strategy. The NSPS method transformed the highly transparent TPU film into a highly reflective white film with an average R solar of >94% and an IR emittance of >95% that could be tailored by the incorporation of superhydrophobic silica aerogels (SSA).
A significant cooling performance of 10 °C and a cooling power of 40 W m −2 were demonstrated by the TPU film during a hot daytime. Aerogel-functionalized TPU (AFTPU) films were also prepared by the NSPS strategy; although their cooling performances were relatively lower than that of TPU, the AFTPU films possessed high contact angles and were not easily wetted by water. Therefore, AFTPU films may be more attractive for water-resistant wearable use for passive cooling. Apart from serving as a self-supporting breathable film, the material can also be used as a coating on traditional textiles, which can be easily scaled up and is waterproof and breathable. The results suggest that, by careful structural design, other types of polymers besides PE, PEO, and PVDF may also be used for PDRC, and they may even outperform the reported PDRC polymers in both cooling performance and wearing comfort (flexibility, waterproofness, breathability, and corrosion resistance).
Preparation of AFTPU Films via NSPS
TPU films prepared using traditional methods, such as solution casting (SC), are highly transparent (Figure S1a, Supporting Information), [42,43] even with the incorporation of SSA up to 25 wt% (Figure S1b-d, Supporting Information). The R solar of the TPU (SC) film was <20% (Figure S2, Supporting Information). Thus, the TPU did not exhibit any PDRC performance. To solve this problem, the NSPS method was developed in this study. [44,45] As shown in Figure 1a,b, dimethylformamide (DMF) solutions of TPU (15 wt%) (or containing different amounts of SSA) were blade-coated on a clean glass substrate. The films were then exposed to the air for 5 min and gradually changed from transparent to translucent. The phase-separated films were solvent-exchanged with water, which turned them white and opaque. Finally, the films were oven-dried and denoted as AFTPU-n, where n indicates the weight content of the SSA. The term "TPU film" in the following sections denotes pure TPU film prepared by the NSPS method without the presence of SSA unless specified.
The AFTPU films were white, opaque, freestanding, and mechanically strong. Figure 1c shows that the AFTPU films could be folded, rolled, and completely restored to their original shapes. They could be further formed into various shapes (circle, rectangle, triangle, etc.) simply by cutting, which may find interesting wearable applications. Additionally, the AFTPU films were hydrophobic and could not be wetted by water. Owing to these properties, the AFTPU films may be used in PTM. As shown in Figure 1d, [11,14] the body's heat input is mainly derived from the sun and metabolic heat, whereas the heat output includes conduction, convection, evaporation, and radiation. The AFTPU films and coatings used for PDRC can block the heat input from the sun and dissipate heat via radiation without any energy consumption. [36,46] Their properties and performances are discussed in the following sections.
Characterization of the AFTPU Films
Figure 2a-d shows the scanning electron microscope (SEM) images of the AFTPU films, revealing porous structures with randomly distributed microscale pores. Adjusting the SSA content did not significantly impact the porous structure, suggesting that these micropores must have resulted from the NSPS process. The energy dispersive X-ray spectroscopy (EDS) mapping, shown in Figure S2 (Supporting Information), further suggested that the SSA were homogeneously dispersed in the TPU matrix, and rough surfaces could be clearly observed from the carbon element map. By contrast, the films prepared by the SC method were nonporous with smooth surfaces, even with the presence of 25 wt% SSA ( Figure S3, Supporting Information). Nevertheless, the hydrophobicity of the AFTPU films increased with increasing SSA content. As shown in Figure 2e, the contact angles of the TPU, AFTPU-10, AFTPU-15, and AFTPU-25 films were 101°, 115°, 126°, and 135°, respectively. In addition, the average moisture permeabilities of the TPU, AFTPU-10, AFTPU-15, and AFTPU-25 films were 861, 1014, 353, and 778 g m −2 for 24 h, respectively. The moisture permeabilities of the films were relatively low, not directly correlated with the SSA content, and smaller than those of traditional textiles. The results indicated that the addition of SSA not only improved the hydrophobicity but also preserved the moisture permeability, making the films promising waterproof coatings for textiles.
The mechanical properties of the AFTPU films are shown in Figure 2f. The SSA-free TPU film exhibited excellent flexibility and could be stretched by more than 500%. The increase in the SSA content of the AFTPU films resulted in decreased mechanical strength. Nevertheless, the AFTPU-25 film could still be stretched by 250% and behaved as a thermoplastic elastomer. The Fourier transform infrared (FT-IR) spectra of AFTPU (Figure S4, Supporting Information) indicated that no chemical reaction occurred between SSA and TPU. Thus, SSA must be physically dispersed and embedded in the TPU matrix. [47,48] The soft and hard segments of TPU caused microscopic phase separation, and the hard segments acted as physical crosslinkers in the elastic chains. However, owing to the low interfacial bonding between the SSA and TPU, the mutual aggregation of particles contributed to local defects. The tensile force separated the interface between SSA and TPU, and the mechanical properties deteriorated. Thermogravimetric analysis (TGA) curves of the TPU and AFTPU films are shown in Figure 2g. The thermogram of the TPU film shows a two-step degradation. [42][43][44][45] The degradation steps started at 247 and 350 °C, corresponding to the decomposition of the soft and hard segments, respectively. The residual mass was 4.8 wt%. The decomposition behaviors of the AFTPU films were almost identical to that of pure TPU; however, the residual mass significantly increased: 11.4, 13.7, and 21.4 wt% for AFTPU-10, AFTPU-15, and AFTPU-25, respectively. The increase in the residual mass can be ascribed to the SSA, which exhibit high thermal stability. [49][50][51] Figure 3a shows the spectral reflectance and emissivity of the films together with the normalized ASTM G173 global solar spectrum and the LWIR atmospheric transparency window.
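The reported TGA residues scale roughly with the SSA loading. As a purely illustrative check (the ~71 wt% SSA char yield below is an assumed fitting value, not a quantity measured in this study), a rule-of-mixtures estimate can be sketched as:

```python
def residual_mass(w_ssa, residue_tpu=4.8, residue_ssa=71.0):
    """Rule-of-mixtures TGA residue (wt%):
    residue = w_ssa * residue_ssa + (1 - w_ssa) * residue_tpu.
    residue_ssa = 71 wt% is an ASSUMED char yield chosen only to
    illustrate the trend; it is not reported in this study."""
    return w_ssa * residue_ssa + (1.0 - w_ssa) * residue_tpu

# Estimates for 10, 15, and 25 wt% SSA roughly track the reported
# residues (11.4, 13.7, and 21.4 wt%):
for w in (0.10, 0.15, 0.25):
    print(round(residual_mass(w), 1))
```

The linear estimate matches the 10 and 25 wt% residues closely and overshoots the 15 wt% value slightly, consistent with a simple physical mixture of the two components.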
The average R solar values of the TPU, AFTPU-10, AFTPU-15, and AFTPU-25 films were 0.89, 0.84, 0.71, and 0.69, respectively, which were significantly higher than those of the corresponding films prepared by the SC process (average R solar < 0.2; the transmittance values of TPU (SC), AFTPU-10 (SC), AFTPU-15 (SC), and AFTPU-25 (SC) were 0.803, 0.836, 0.843, and 0.831, respectively) ( Figure S5, Supporting Information). These results confirmed that the NSPS strategy was effective in improving the reflection of the TPU film. However, the average R solar values decreased with increasing SSA; the reason may be that the SSA are highly transparent (91%), [52] as shown in Figure S5 (Supporting Information), and there was no phase separation between the TPU matrix and the SSA, meaning no extra interfaces that could facilitate light reflection were formed with increasing SSA. A similar tendency was also observed for the films prepared by the SC method. Additionally, the porosities of the films were calculated [52] to be 69.2%, 67.6%, 58.5%, and 49.7% for TPU, AFTPU-10, AFTPU-15, and AFTPU-25, respectively. The solar reflectance of the films increased with increasing porosity, and a similar tendency has been observed in other porous films. [26] Although the increase in SSA content reduced the R solar values, which were relatively lower than that of PVDF, [37][38][39][40] the emissivity increased from 0.93 (TPU) to 0.96 (AFTPU-15 and AFTPU-25), which makes the AFTPU films potential candidates for PDRC. The increase in emissivity of the AFTPU films may be due to the vibrational absorption of Si-O-Si bonds in the SSA. As illustrated by the FT-IR spectra of the SSA ( Figure S6, Supporting Information), the fingerprint area of the SSA ranged from 1300 to 600 cm −1 , which coincides with the atmospheric transparency window (8-13 μm).
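The average R solar values quoted above are weighted by the solar spectrum. The following sketch shows how such a spectrum-weighted average is computed from measured spectra; the spectra below are synthetic stand-ins, not the measured data of this study:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (kept explicit to avoid NumPy API differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def average_solar_reflectance(wavelength, reflectance, irradiance):
    """Solar-spectrum-weighted average reflectance:
    R_solar = integral(R(lam) * I(lam) dlam) / integral(I(lam) dlam)."""
    return trapz(reflectance * irradiance, wavelength) / trapz(irradiance, wavelength)

# Illustrative synthetic spectra (NOT the measured data of this study):
wl = np.linspace(0.3, 2.5, 500)            # wavelength, um
irr = np.exp(-((wl - 0.55) / 0.45) ** 2)   # crude stand-in for the AM1.5 shape
refl = np.where(wl < 1.0, 0.95, 0.88)      # hypothetical film reflectance
print(round(average_solar_reflectance(wl, refl, irr), 3))
```

In practice, the irradiance array would be the tabulated ASTM G173 global spectrum and the reflectance the measured UV-vis-NIR spectrum of the film.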
The strong and highly selective emissivity of SSA may have significantly contributed to the high emissivity of the AFTPU films when the content was higher than 10 wt%. Figure 3b shows the setup used to evaluate the PDRC performance of the films in the outdoor environment. The TPU, AFTPU-10, AFTPU-15, and AFTPU-25 films were used as passive radiative coolers. The black substrate and ambient temperatures of the equipment were also measured. Five thermocouples were placed at the bottom of each sample and on the surface of the black substrate. Another thermocouple was suspended in the cavity to measure the ambient temperature. Considering the characteristics of solar radiation intensity on 18 September 2021, the time for the experiments was chosen to be from 9:30 a.m. to 3:30 p.m. The solar radiation intensity in this interval was highest during the day, and the maximum was greater than 800 W m −2 . The PDRC results are shown in Figure 3c. The solar irradiance reached its peak at noon, and the power density exceeded 820 W m −2 . Correspondingly, the ambient temperature and black substrate temperature increased to 45 and 75°C, respectively. Notably, all the AFTPU films exhibited lower temperatures than the ambient temperature, and the temperature drop was directly proportional to the R solar , as discussed earlier. To clearly understand the cooling performance of the film, the average cooling temperature difference for all films ( Figure S7, Supporting Information) was determined. The average temperature drops (10:50-12:50: sunlight intensity = 800 W m −2 ) of the TPU, AFTPU-10, AFTPU-15, and AFTPU-25 films were 9.99, 7.68, 5.82, and 3.80°C, respectively. Although the month was September, the weather was still hot, with the highest atmospheric temperature of 38°C when the study was conducted. These results indicate that the TPU and AFTPU films are powerful PDRC materials, even in extremely hot weather conditions.
Characterization and PDRC Performance of AFTPU as Coatings
TPU can be used not only as a self-supporting PDRC material but also as a coating for traditional fabrics. Considering the waterproofness and moisture permeability required for wearing, AFTPU-10 was used for the coating because it showed both high cooling (7.7 °C) and good water-resistance performance. The cooling performance and mechanical properties of AFTPU-25 were relatively poor, while the waterproof property of AFTPU-5 was only slightly improved compared to that of TPU and it could be wetted by water ( Figure S8, Supporting Information); therefore, AFTPU-25 and AFTPU-5 were not used for the coating experiments in this work. The coated fabric was denoted as nylon/AFTPU-10. As depicted in Figure 4a, the nylon fabric has a distinct warp and weft structure. After coating, the surface was similar to that of AFTPU-10 (Figure 4b), whereas the other side of the nylon fabric remained unchanged (Figure 4c; Figure S9, Supporting Information). Figure 4d shows a photograph of nylon/AFTPU-10, which was white and could be scaled up for production. The inset image shows that the composite fabric nylon/AFTPU-10 still had excellent breathability and waterproof function, whereas its moisture permeability was 1026 g m −2 for 24 h. Moreover, nylon/AFTPU-10 had a high water contact angle of 123° ( Figure S10, Supporting Information), and its FT-IR spectrum was comparable to that of AFTPU-10 ( Figure S11, Supporting Information). The mechanical properties of the fabric also improved after the coating. The stress-strain curve, as shown in Figure 4e, indicates that the elongation at break was 25% for the nylon fabric, which increased to 60% for nylon/AFTPU-10. The stress also improved slightly, suggesting that the tensile strength of nylon/AFTPU-10 was higher compared to that of the AFTPU-10 film. Because AFTPU-10 was physically coated on nylon fabrics, the mechanical stability of nylon/AFTPU-10 is critical for wearable use.
Impressively, there were no observable changes for nylon/AFTPU-10 after being bent 500 and 1000 times ( Figure S12, Supporting Information). The SEM images of nylon/AFTPU-10 before and after different numbers of bending cycles also exhibited similar morphologies. The AFTPU-10 mainly filled the pores between the nylon fibers to form an interlocked structure rather than being chemically and closely coated on the nylon fibers ( Figure S12, Supporting Information). On the other hand, the AFTPU-10 was highly flexible and mainly showed elastomer behavior similar to that of the TPU matrix ( Figure 2f). Therefore, nylon/AFTPU-10 was mechanically stable and could undergo a thousand bending cycles without observable changes at both the macro- and microscale. Figure 4f and Figure S13 (Supporting Information) show the spectral properties of nylon/AFTPU-10 and commercial fabrics. The average R solar of nylon/AFTPU-10 was 0.84, which was the same as that of the AFTPU-10 film but higher than that of nylon (0.63), cotton (0.71), and polyester (0.77) fabrics. Interestingly, the average emissivity of nylon/AFTPU-10 was 0.97, which was higher than that of the AFTPU-10 film. This could be due to the vibration of the C-N bonds in nylon. [17,36,46] The average emissivity of nylon was 0.86, with a lowest emissivity of 0.71 in the range of 8-13 μm, while those of the cotton and polyester fabrics were 0.91 and 0.87, respectively.
The PDRC performances of the AFTPU-coated fabrics were evaluated on 26 September 2021, using the same setup (Figure 3b). Nylon fabric, AFTPU-10, nylon/AFTPU-10, cotton, and polyester fabrics were tested as potential PDRC materials, and their cooling performances were compared. Figure 4g shows the daytime temperature measurements with the accompanying solar irradiance. Under strong sunlight (12:00 pm), the temperatures of the nylon fabric, AFTPU-10, nylon/AFTPU-10, cotton fabric, polyester fabric, and ambient atmosphere were 57.8, 40.2, 45, 56.1, 53.5, and 49.7°C, respectively. The temperatures of the traditional fabrics were much higher than the ambient temperature (up to 8.1°C), and they did not exhibit any cooling effect. However, significant temperature drops were observed for AFTPU-10 film (9.5°C) and nylon/AFTPU-10 (4.7°C). Moreover, when compared to traditional fabrics, the nylon/AFTPU-10 fabric exhibited a much lower temperature, i.e., 11.1 and 8.5°C lower than that of cotton and polyester fabrics, respectively. The AFTPU-10 film showed impressive temperature drops of 17.6, 15.9, and 13.3°C as compared to nylon, cotton, and polyester fabrics, respectively. These results confirmed that the AFTPU films and coatings are promising PDRC materials for PTM on extremely hot days with sun exposure.
Practical Characterization of the Nylon/AFTPU-10 with Sun Exposure
The application of the AFTPU films as a wearable PDRC material was demonstrated by outdoor tests. The AFTPU-10 and nylon/AFTPU-10 samples were sewn onto a black cotton shirt. The shirt was worn by a person who sat on a bench under direct sunlight for 70 min, and the temperature changes were monitored. With increasing exposure time to the sun, the surface temperature of the cloth increased. Four temperatures were measured: the ambient temperature (blue background) and the temperatures underneath AFTPU-10, nylon/AFTPU-10, and the black shirt. It is noteworthy that the experiment was conducted on a cloudy day, and the sunlight was blocked by clouds intermittently; consequently, temperature fluctuations were observed (Figure 5a). Nevertheless, the temperature of AFTPU-10 was ≈11 °C lower than that of the cloth and 7.5 °C lower than that of nylon/AFTPU-10. Impressively, the temperature of AFTPU-10 was lower than the ambient temperature, confirming its radiative cooling performance during actual wear. The lower temperature can also be clearly observed in Figure 5b, where the shirt appeared deep red after 70 min in sunlight (40.5 °C), whereas the color of AFTPU-10 was still blue, corresponding to a temperature of only 22.7 °C. It is noteworthy that both the wearing experiment and the outdoor setup experiment confirmed a better cooling performance of AFTPU-10 than nylon/AFTPU-10, possibly due to the bilayer structure of nylon/AFTPU-10 (only one side of the nylon fabric was coated with AFTPU-10; Figure 4b,c), where the uncoated nylon side may slightly affect the cooling performance of nylon/AFTPU-10.
As shown in Figure 1d, the heat input to the outdoor environment mainly results from solar radiation. [11] The AFTPU films prepared in this study possess a high R solar and, thus, the heat gain from the sun can be significantly reduced (Figure 5c). In addition, the high emissivity of the AFTPU films and coatings overlapped with the atmospheric transparency window, through which heat could efficiently radiate to the extremely cold outer space (3 K). Thus, efficient cooling performance was demonstrated by the flexible TPU and AFTPU films. [25,27,32,36]
Conclusion
Highly flexible, robust, waterproof, and breathable TPU and AFTPU films were designed and prepared via a scalable NSPS strategy. The films were white and opaque and exhibited high solar reflectance (0.69-0.89) and IR emissivity (0.90-0.96). Thus, the films showed excellent PDRC performance in outdoor environments, with an impressive temperature drop of ≈10 °C and a cooling power of 40 W m −2 under a solar radiation of 820 W m −2 . In addition, the AFTPU film could be used as a coating for traditional textiles to achieve an impressive PDRC performance, exemplified by temperatures 4.7 °C lower than the ambient temperature and 15.5 and 13.3 °C lower than those of cotton and polyester, respectively. Compared to the reported passive radiative cooling structures, the AFTPU and TPU films reported in this study can be adapted to complicated shapes by cutting, folding, trenching, etc., which makes them suitable for PTM.
Experimental Section
Materials: DMF was purchased from Sinopharm Chemical Reagent Co., Ltd. TPU (Pellethane 2363-80AE) was purchased from Lubrizol Advanced Materials, Inc. SSA were obtained from Shenzhen Yidahui Co., Ltd. The average diameter of the aerogels was 12 μm ( Figure S14, Supporting Information). The specific surface area ( Figure S15, Supporting Information), average pore size ( Figure S16, Supporting Information), and contact angle ( Figure S17, Supporting Information) of the aerogels were 1057 m 2 g −1 , 17 nm, and 137.5°, respectively. The anhydrous ethanol was obtained from Kunshan Chengxin Chemical Co., Ltd. All other solvents and reagents were of analytical grade and were used as received.
Preparation of the AFTPU Films via the NSPS Method: The AFTPU and TPU films were prepared via the NSPS process as follows: taking AFTPU-10 as an example, a TPU solution with a concentration of 15 wt% was first prepared by dissolving Pellethane 2363-80AE in DMF. Then the SSA (20 wt% with respect to the TPU) were immersed in anhydrous ethanol so that the pores of the SSA were filled with ethanol. Finally, the SSA were added to the TPU solution, followed by blade coating. A primitive TPU film was formed upon exposure to the open air for 5 min. The primitive film was then immersed in a water bath for complete phase separation. The wet AFTPU films thus obtained were solvent-exchanged with water to replace the DMF and oven-dried at 40 °C for 18 h. The thicknesses of the AFTPU films prepared using this method were ≈200 μm. Significant shrinkage would occur if the bladed film did not undergo NSPS for 5 min and was directly immersed in water for solvent exchange ( Figure S18, Supporting Information).
Characterizations: The morphologies of the TPU and AFTPU films were characterized by a field emission scanning electron microscope (Quanta FEG 250, FEI) with an acceleration voltage of 10 kV. The contact angles were measured using an optical angle meter system (OCA 15EC, Data Physics Instruments GmbH). The wearing comfort of the fabrics was determined by the water vapor transmission (according to GB/T 12704.2-2009, YG601H, Ningbo Textile Instrument Factory, Zhejiang, China). The tensile stress-strain curves were recorded using an Instron 3365 tensile testing machine with a stretching rate of 10 mm min −1 . Thermogravimetric analysis (TG 209F1 Libra, NETZSCH) was carried out to measure the decomposition temperature profile with a heating rate of 10 °C min −1 in a nitrogen atmosphere. The optical reflectivity of the films was measured using a UV-vis-NIR spectrophotometer (UV3600, Shimadzu Corporation). The infrared emissivity was determined using an FT-IR spectrometer (Bruker INVENIO) with an integrating sphere (PIKE INTERRATIR). Infrared thermal images were taken with an IR camera (TiX580, Fluke). Porosities were calculated by the equation P = 1 − ρ_film/ρ_skeleton, [52] where ρ_film is the density of the film and ρ_skeleton is the density of the TPU polymer (1.12 g cm −3 ). The densities of the films were calculated from the volume and weight of the films.
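The porosity relation can be evaluated directly. In the following minimal sketch, the skeleton density is the 1.12 g cm−3 stated above, while the film density is a hypothetical input chosen to reproduce the ~69.2% porosity reported for the pure TPU film:

```python
def porosity(rho_film, rho_skeleton=1.12):
    """Porosity P = 1 - rho_film / rho_skeleton (densities in g cm^-3)."""
    return 1.0 - rho_film / rho_skeleton

# A hypothetical film density of 0.345 g cm^-3 (not a value reported in
# the text) reproduces the ~69.2% porosity quoted for the pure TPU film:
print(round(porosity(0.345) * 100, 1))  # -> 69.2
```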
Cooling Performance Evaluation: The radiative cooler and various reference fabrics were tested on the roof of a five-storied building to ensure full access to the open sky and to exclude thermal radiation from surrounding buildings. The experimental setup was prepared according to the literature and mainly consisted of a polystyrene foam box, aluminum foil, low-density polyethylene (LDPE) film, the radiative cooler, high-temperature polyimide tape, thermocouples, and a solar power meter. [40,[53][54][55] During the outdoor experiments, the relative humidity was 40-80%, and the setup was studied under sunlight on sunny, noncloudy days. As shown in Figure 3b, to reduce heating of other areas of the foam box owing to heat absorption, the foam box was wrapped with aluminum foil. A piece of transparent 0.013 mm thick LDPE film was applied on top of the thermal isolation box to reduce heat convection and conduction between the cavity and the environment. The size of the cavity of the isolation box was 44 × 38 × 4.5 cm (length × width × height). Each sample measured 60 × 60 mm. Furthermore, 0.063 mm thick tape was used to cover the edges of the samples and prevent heat loss. Temperatures were measured using thermocouples placed between the films and the black substrate (Figure 3b); the thermocouples were held in close contact with the films by adhesive tape, whereas the temperatures of the chamber and the black substrate were monitored by the thermocouple suspended in the cavity and the one placed on the surface of the black substrate, respectively. Temperature data were stored every 10 s on a USB flash drive using a handheld multichannel thermometer (JK808). Simultaneously, the solar irradiance was recorded by a solar power meter (TES-1333).
Cooling Power Calculations: To calculate the cooling power of the films, COMSOL was used to perform heat transfer simulations. When the films in this study were exposed to a cloudless and clear sky, they could reflect most of the sunlight. Simultaneously, due to the temperature differences between the radiative cooler and the surrounding environment, there will be heat exchange between the environment and the cooling material through convection and conduction. [18,28,56,57] Here, the radiative cooling power, P_cool, is defined as

P_cool(T) = P_rad(T) − P_atm(T_amb) − P_sun − P_cond+conv (1)

In Equation (1), the power radiated by the structure is given by

P_rad(T) = A ∫ dΩ cos θ ∫_0^∞ dλ I_BB(T, λ) ε(λ, θ) (2)

In Equation (2), ∫ dΩ = 2π ∫_0^{π/2} dθ sin θ is the angular integral over a hemisphere, and I_BB(T, λ) = (2hc²/λ⁵) · 1/(e^{hc/(λ k_B T)} − 1) is the spectral distribution of the thermal energy radiated by a blackbody at temperature T, where h is Planck's constant, k_B is the Boltzmann constant, c is the speed of light, and λ is the wavelength; ε(λ, θ) is the spectral and angular emissivity of the radiative cooler of surface area A. The absorbed power due to incident atmospheric thermal radiation is

P_atm(T_amb) = A ∫ dΩ cos θ ∫_0^∞ dλ I_BB(T_amb, λ) ε(λ, θ) ε_atm(λ, θ) (3)

According to Kirchhoff's law of thermal radiation, under the condition of thermodynamic equilibrium a material's absorptivity is equal to its emissivity, and the atmospheric emissivity is given by ε_atm(λ, θ) = 1 − t(λ)^{1/cos θ}, where t(λ) is the atmospheric transmittance in the zenith direction. P_sun, the power absorbed by the films from the incoming solar irradiance, is defined as

P_sun = A ∫_0^∞ dλ ε(λ, θ_sun) I_AM1.5(λ) (4)

In Equation (4), I_AM1.5(λ) is the standard solar irradiance; the structure is assumed to face the sun at a fixed angle θ_sun, so the term P_sun does not have an angular integral, and the structure's emissivity is represented by its value at θ_sun. Finally,

P_cond+conv(T, T_amb) = A h_c (T_amb − T) (5)

In Equation (5), h_c is the nonradiative heat-transfer coefficient, which combines conduction and convection.
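Equations (1)-(5) can be evaluated numerically for a simplified gray emitter. The sketch below is an assumption-laden illustration, not the authors' COMSOL model: ε = 0.95, ε_atm = 0.30, α_solar = 0.06, and h_c = 6 W m−2 K−1 are illustrative values, not measured parameters of these films:

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(lam, T):
    """Blackbody spectral radiance I_BB(T, lam), W m^-2 sr^-1 m^-1."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def trapz(y, x):
    """Trapezoidal integration (explicit, for NumPy version independence)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def cooling_power(T, T_amb, eps=0.95, eps_atm=0.30,
                  alpha_solar=0.06, I_solar=820.0, h_c=6.0):
    """Per-area P_cool = P_rad - P_atm - P_sun - P_cond+conv for a gray,
    angle-independent emitter; the hemispherical cos(theta) integral is pi."""
    lam = np.linspace(2e-6, 50e-6, 5000)      # thermal wavelengths, m
    P_rad = eps * np.pi * trapz(planck(lam, T), lam)
    P_atm = eps * eps_atm * np.pi * trapz(planck(lam, T_amb), lam)
    P_sun = alpha_solar * I_solar             # absorbed solar power
    P_cc = h_c * (T_amb - T)                  # conduction + convection
    return P_rad - P_atm - P_sun - P_cc

print(round(cooling_power(300.0, 300.0), 1))  # W m^-2, positive net cooling
```

With these assumed parameters the net cooling power at ambient temperature is positive, and it decreases as the cooler drops below ambient, which is what sets the steady-state subambient temperature.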
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
SIMULATOR ASSISTED ENGINEERING – APPLICATIONS IN NUCLEAR ENGINEERING EDUCATION AT KHALIFA UNIVERSITY
The Generic Pressurized Water Reactor (GPWR) simulator has been used in the Nuclear I&C Laboratory at Khalifa University (KU) since 2013 to improve student performance in nuclear engineering, a multidisciplinary field involving nuclear reactor physics, thermodynamics, fluid mechanics, thermal hydraulics, radiation, etc. The simulator, developed by Western Service Corporation, has been integrated as a teaching and educational tool in different engineering programs at KU (Mechanical and Nuclear Engineering). This lab is used in an undergraduate course where students apply knowledge taught in different courses, such as nuclear systems, fuel cycle, thermal hydraulics, safety principles, and control functions, through a virtual operating NPP simulator. This real-time, full-scope, high-fidelity simulator allows simulating different operating conditions such as plant startups, shutdowns, and load maneuvers, as well as normal and abnormal plant transients and critical scenarios and accidents. Since its installation in the Nuclear I&C Laboratory at KU in 2013, thirty students have benefited from this learning simulator. The main skills and learning outcomes expected to be achieved by students through the use of this tool are (i) the ability to describe different NPP components and understand the processes occurring in different subsystems, (ii) to explain and apply safety principles and protective protocols, and (iii) to analyze and interpret the plant behavior during transient operations and when severe accidents happen.
INTRODUCTION
Nuclear Engineering (NEng) is a multidisciplinary field involving nuclear reactor physics, thermodynamics, fluid mechanics, radiation, etc. The complexity and diversity of problems and concepts related to NEng in general, and Nuclear Power Plant (NPP) operation in particular, make nuclear engineering education a real academic challenge. The difficulty of problem representation, combined with the complex mathematical and conceptual analysis encountered in NEng, makes it extremely difficult for students to learn. To date, numerous educational tools and methods have been developed to improve student performance in nuclear engineering. One of these advanced learning methods is Simulator Assisted Engineering (SAE). For this purpose, a learning simulator, the Generic Pressurized Water Reactor (GPWR), was installed in the Nuclear I&C Laboratory at Khalifa University (KU) in 2013, with additional upgrades in 2015 and 2019. The simulator, developed by Western Service Corporation, has been integrated as a teaching and educational tool in different engineering programs at KU (Mechanical and Nuclear Engineering). It is used in an undergraduate course where students apply knowledge taught in different courses, such as nuclear systems, fuel cycle, thermal hydraulics, safety principles, and control functions, through a virtual operating NPP simulator. It is also used by MSc students to conduct their research projects. This real-time, full-scope, high-fidelity simulator allows simulating different operating conditions such as plant startups, shutdowns, and load maneuvers, as well as normal and abnormal plant transients and critical scenarios and accidents. Since its installation in the Nuclear I&C Laboratory at KU in 2013, thirty students have benefited from this learning simulator.
The main skills and learning outcomes expected to be achieved by students through the use of this tool are (i) the ability to describe different NPP components and understand the processes occurring in different subsystems, (ii) to explain and apply safety principles and protective protocols, and (iii) to analyze and interpret the plant behavior during transient operations and when severe accidents happen. Class hands-on assignments using the simulator include:
- Manual reactor trip
- Maximum-rate power ramp from 100% down to ~75% and back up to 100%
- Maximum-size reactor coolant system rupture combined with loss of all offsite power
- Maximum-size unisolable main steam line rupture
In the present paper, we present some examples of the studied scenarios and the relevant outputs that students can handle and analyze.
Manual reactor trip
The GPWR simulator allows studying the case of a manual reactor trip. Using a defined scenario, students can monitor different operating parameters during the test and verify the status of all components. They verify mainly the position of the control rods, which should be at the bottom of the reactor core, the turbine and steam generator valves, and the generator output. At the end of the process, a file containing data on the behavior of different parameters is written and saved. The students can later analyze this output to understand the effect of the manual trip on the reactor behavior. Some of the parameters that can be analyzed are: i-the neutron flux: it should respond with an immediate decrease from 100% to the delayed-neutron flux level of such a reactor. This behavior can be verified by the students using the temporal evolution of the neutron flux, as illustrated by Figure 1. ii-average temperature: during this transient, it should decrease following the neutron flux behavior. In case of unexpected evolution, steam dump will be automatically activated to maintain the average temperature at its no-load value.
iii- Pressurizer level: as the reactor cools down, the specific volume of the circulating coolant decreases, causing a pressure decrease in the pressurizer. To maintain this pressure at its normal operating level, the charging flow control valve is automatically opened. Students can verify this behavior by analyzing data such as those shown in Figure 2.
iv- Steam generator pressure: this is an important parameter to control. It should rise rapidly following the main turbine trip. In order to keep it at no-load conditions, the steam dumps first open, causing an immediate decrease in steam pressure, and then gradually close to bring the pressure to the desired level.
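The immediate flux drop described in item i can be estimated by hand with the textbook prompt-jump approximation. The sketch below is illustrative only: it is not the GPWR simulator's neutronics model, and the delayed neutron fraction and trip reactivity values are assumed typical-PWR figures, not values from the scenario.

```python
# Illustrative prompt-jump estimate of the post-trip neutron flux level.
# NOTE: this is a textbook approximation, not the GPWR simulator's model;
# beta and rho below are assumed typical-PWR values, not scenario data.

def prompt_jump(n0_pct, beta, rho):
    """Flux level (percent) immediately after a step reactivity insertion rho."""
    return n0_pct * beta / (beta - rho)

beta = 0.0065      # effective delayed neutron fraction (assumed)
rho_trip = -0.10   # net negative reactivity from full rod insertion (assumed)

n1 = prompt_jump(100.0, beta, rho_trip)
print(f"flux just after trip: {n1:.1f}% of full power")
```

With these assumed numbers the flux settles at a few percent of full power, consistent with the "delayed neutron flux level" students observe in Figure 1 before the slower decay governed by delayed neutron precursors.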
Simulation of a simultaneous trip of main or all feedwater pumps
During this test, the reactor operates at 100% power for a short period and then the malfunctions are inserted. The simulator runs for 10 minutes, and different parameters can be monitored and saved for later analysis. Students can verify various operating parameters such as the generator power, the pressure on the secondary side of the steam generator, and the turbine-driven and auxiliary motor-driven feed pumps. i- Feedwater flow: as shown in Figure 3, the total feedwater flow for all steam generators decreases immediately to zero and remains there until the end of the test. ii- Steam generator pressure: the behavior of the pressure during this event can be divided into three regions. First, it increases due to the loss of feedwater and the diversion of steam from the feedwater pumps to the main turbine. Second, the pressure decreases after the steam dumps open. Finally, the steam dumps gradually close and the steam generator pressure is brought to no-load conditions. The temporal evolution of the steam generator pressure is illustrated in Figure 4.
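As a simple illustration of how the saved simulator output can be post-processed, the sketch below parses a small synthetic data set and checks that the feedwater flow collapses to zero after the pump trip, as in item i. The file layout and column names are assumptions for illustration; the actual GPWR export format may differ.

```python
# Post-processing sketch for a saved simulator run (synthetic data).
# The CSV layout and column names are assumed; the real GPWR export may differ.
import csv
import io

raw = io.StringIO(
    "time_s,feedwater_flow_pct,sg_pressure_psia\n"
    "0,100.0,920\n"
    "10,100.0,921\n"
    "20,0.0,975\n"    # pumps trip between t=10 and t=20
    "30,0.0,940\n"
    "40,0.0,925\n"
)
rows = list(csv.DictReader(raw))

# Feedwater flow should drop to zero and stay there after the trip.
flows = [float(r["feedwater_flow_pct"]) for r in rows]
trip_index = next(i for i, f in enumerate(flows) if f == 0.0)
post_trip_flows = flows[trip_index:]
print("all post-trip flows zero:", all(f == 0.0 for f in post_trip_flows))
```

The same pattern extends to the steam generator pressure column, where students would look for the three-region behavior (rise, dump-driven drop, return to no-load) described above.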
Simulation of a simultaneous trip of all reactor coolant pumps
In this event, students are asked to monitor and verify the reactor behavior when all reactor coolant pumps are stopped. The duration of the test is 10 minutes, during which data are recorded. During the test, the turbine main stop valves close and the turbine trip is activated by the reactor protection system. The generator power decreases immediately to a negative value and, after the generator breakers trip open, it remains at 0 MW. The steam dump controller demand increases to 100% and the steam dump valves are operated to maintain the average temperature at the no-load value. Later, analysis of different parameters is requested, such as: i- Neutron flux: due to the trip of the coolant pumps, the moderator temperature and fuel temperature begin to increase, providing net negative reactivity. Therefore, the reactor trip occurs and the neutron flux decreases from 100% to the delayed neutron flux level of a subcritical reactor.
ii- Pressurizer pressure: it decreases immediately after the coolant pumps stop, due to the increase in density of the reactor coolant. It then starts to increase after the establishment of natural circulation, in addition to the activation of the pressurizer backup heaters. It continues increasing until reaching the set-point of the pressurizer power-operated relief valves, as shown in Figure 5.
Simulation of the maximum size reactor coolant system rupture combined with loss of all offsite power
During this scenario, students can follow the behavior of the different components in response to a rupture of the coolant system. They can verify that all control rods drop to the bottom of the reactor core, the turbine main stop valves close, and the pressurizer level decreases to zero in response to these malfunctions. As primary parameters of interest, students can analyze: i- Containment temperature and pressure: they increase rapidly when the event starts and then decrease due to the effects of quench spray and recirculation spray cooler operation. The temporal evolution of the temperature is shown in Figure 6. ii- Steam generator level: its response broadly follows that of a reactor trip. As shown in Figure 7, it is characterized by fluctuations during part of the test. It stabilizes as the decay heat decreases and is then maintained by manually throttling the auxiliary feedwater.
Conclusions
According to the encouraging feedback of students and instructors involved in the use of the GPWR simulator, this learning method is found to be more efficient and attractive: students feel more comfortable with the direct application of fundamental knowledge to real engineering problems. Therefore, and in order to enhance the efficiency of this educational tool, KU keeps updating and upgrading the simulator by integrating advanced tools allowing better use for education, training and R&D.
For example, the RELAP5-3D engineering-grade thermal-hydraulic model, developed by Idaho National Laboratory for real light water reactor analysis, was recently integrated and is used to simulate the real-time thermal-hydraulic behavior of different components.
Outcome reporting bias in trials: a methodological approach for assessment and adjustment in systematic reviews
Systematic reviews of clinical trials aim to include all relevant studies conducted on a particular topic and to provide an unbiased summary of their results, producing the best evidence about the benefits and harms of medical treatments. Relevant studies, however, may not provide the results for all measured outcomes or may selectively report only some of the analyses undertaken, leading to unnecessary waste in the production and reporting of research, and potentially biasing the conclusions to systematic reviews. In this article, Kirkham and colleagues provide a methodological approach, with an example of how to identify missing outcome data and how to assess and adjust for outcome reporting bias in systematic reviews.
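The trial-by-trial tables that follow assign each review outcome a short classification code (NA, E, R1, S1, S2, T1, Q, C, G) with a justification. As a rough illustration of the logic behind the most common codes, the sketch below encodes a simplified decision rule; it is not the full ORB classification system described in the paper, which has further categories and relies on reviewer judgement.

```python
# Simplified sketch of the outcome-classification logic used in the tables
# below. This is NOT the full ORB classification system from the paper:
# the real system has more categories and involves reviewer judgement.

def classify(reported_in_meta_analysis, measured, below_reporting_threshold):
    """Return an ORB-style code for one trial outcome (simplified rule)."""
    if reported_in_meta_analysis:
        return "NA"  # outcome data available to the review meta-analysis
    if measured and below_reporting_threshold:
        return "R1"  # measured, but not reported (under the trial's AE threshold)
    if measured:
        return "E"   # measured and likely analysed, but not reported
    return "G"       # no evidence it was measured; clinical judgement needed

# Example rows in the spirit of the tables below.
print(classify(True, True, False))    # reported outcome
print(classify(False, True, True))    # harm below reporting threshold
print(classify(False, True, False))   # measured but unreported efficacy outcome
```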
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom E 50% reduction in seizure frequency was reported in this trial and therefore the trial must have measured seizure freedom. As the five studies reporting on seizure freedom reported non-significant results, it is likely that seizure freedom was analysed in this trial but not reported because of a non-significant result, especially as the 50% reduction in seizure frequency result was reported to be significant favouring topiramate.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harms data were collected during patient interviews at each visit.
Dizziness NA Outcome data reported in review meta-analysis.
Headache NA Outcome data reported in review meta-analysis.
Nausea/vomiting R1
The AEs reported in the trials were only those that occurred in >=15% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Paraesthesias NA Outcome data reported in review meta-analysis.
Weight loss/decrease NA Outcome data reported in review meta-analysis.
Fatigue NA Outcome data reported in review meta-analysis.
Somnolence R1
The AEs reported in the trials were only those that occurred in >=15% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Concentration impairment NA Outcome data reported in review meta-analysis.
Speech difficulty R1
The AEs reported in the trials were only those that occurred in >=15% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Thinking abnormally R1
The AEs reported in the trials were only those that occurred in >=15% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Ataxia R1
The AEs reported in the trials were only those that occurred in >=15% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Etherman 1999
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom NA Outcome data reported in review meta-analysis.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harms data were collected by interviewing patients (or parents/guardians) in a non-directed manner.
Dizziness R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Headache R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Nausea/vomiting R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Paraesthesias R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Weight loss/decrease NA Outcome data reported in review meta-analysis.
Fatigue NA Outcome data reported in review meta-analysis.
Somnolence NA Outcome data reported in review meta-analysis.
Concentration impairment NA Outcome data reported in review meta-analysis.
Speech difficulty R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Thinking abnormally R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Ataxia R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Faught 1996
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom E 50% reduction in seizure frequency was reported in this trial and therefore the trial must have measured seizure freedom. As the five studies reporting on seizure freedom reported non-significant results, it is likely that seizure freedom was analysed in this trial but not reported because of a non-significant result, especially as the 50% reduction in seizure frequency result was reported to be significant favouring topiramate.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harms data were recorded in the subject's diary and reviewed.
Dizziness NA Outcome data reported in review meta-analysis.
Headache NA Outcome data reported in review meta-analysis.
Nausea/vomiting R1
The AEs reported in the trials were only those that occurred in >=20% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Paraesthesias NA Outcome data reported in review meta-analysis.
Weight loss/decrease R1
The AEs reported in the trials were only those that occurred in >=20% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Fatigue NA Outcome data reported in review meta-analysis.
Somnolence NA Outcome data reported in review meta-analysis.
Concentration impairment R1
The AEs reported in the trials were only those that occurred in >=20% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Speech difficulty R1
The AEs reported in the trials were only those that occurred in >=20% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Thinking abnormally NA Outcome data reported in review meta-analysis.
Ataxia NA Outcome data reported in review meta-analysis.
Guberman 2002
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom NA Outcome data reported in review meta-analysis.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harms data collection method was not recorded.
Dizziness NA Outcome data reported in review meta-analysis.
Headache R1
The AEs reported in the trials were only those that occurred in >=5% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Nausea/vomiting R1
The AEs reported in the trials were only those that occurred in >=5% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Paraesthesias NA Outcome data reported in review meta-analysis.
Weight loss/decrease NA Outcome data reported in review meta-analysis.
Fatigue NA Outcome data reported in review meta-analysis.
Somnolence NA Outcome data reported in review meta-analysis.
Concentration impairment NA Outcome data reported in review meta-analysis.
Speech difficulty R1
The AEs reported in the trials were only those that occurred in >=5% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Thinking abnormally R1
The AEs reported in the trials were only those that occurred in >=5% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Ataxia R1
The AEs reported in the trials were only those that occurred in >=5% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Korean 1999
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom NA Outcome data reported in review meta-analysis.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harms data were assessed by physicians from the patient diaries.
Dizziness NA Outcome data reported in review meta-analysis.
Headache NA Outcome data reported in review meta-analysis.
Nausea/vomiting NA Outcome data reported in review meta-analysis.
Paraesthesias R1
The AEs reported in the trials were only those that occurred in >=5% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Weight loss/decrease NA Outcome data reported in review meta-analysis.
Fatigue R1
The AEs reported in the trials were only those that occurred in >=5% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Somnolence NA Outcome data reported in review meta-analysis.
Concentration impairment R1
The AEs reported in the trials were only those that occurred in >=5% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Speech difficulty NA Outcome data reported in review meta-analysis.
Thinking abnormally R1
The AEs reported in the trials were only those that occurred in >=5% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Ataxia NA Outcome data reported in review meta-analysis.
Privitera 1996
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom E
"A 75 to 100% reduction in seizure was not experienced by any patient in the placebo group but was observed in 23% of patients who received topiramate 600mg/day and 13% of patients treated with topiramate 800mg/day or 1000mg/day". Clearly no seizure free events on placebo but the possibility of some events on treatment. Outcome clearly measured but not reported in full.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harms data were collected and evaluated at each patient visit.
Dizziness NA Outcome data reported in review meta-analysis.
Headache NA Outcome data reported in review meta-analysis.
Nausea/vomiting R1
The AEs reported in the trials were only those that occurred in >=20% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Paraesthesias NA Outcome data reported in review meta-analysis.
Weight loss/decrease Q "Weight loss was present in some patients with anorexia; however, weight was not measured at each visit, and a quantitative assessment of weight change was not available". Clear outcome measurement not taken routinely for all patients.
Fatigue NA Outcome data reported in review meta-analysis.
Somnolence NA Outcome data reported in review meta-analysis.
Concentration impairment NA Outcome data reported in review meta-analysis.
Speech difficulty R1
The AEs reported in the trials were only those that occurred in >=20% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Thinking abnormally NA Outcome data reported in review meta-analysis.
Ataxia NA Outcome data reported in review meta-analysis.
Rosenfeld 1996 (this was an abstract only)
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom C
"Twenty-five percent of topiramate patients (placebo 5%) had >=75% reduction in total seizure frequency and 6% (placebo, none) became seizure free." Clearly no patients became seizure free in the placebo group; percentage data given for the treatment group but no reliable numerator/denominator presented.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harms data were assessed by physicians from the patient diaries.
Dizziness NA Outcome data reported in review meta-analysis.
Headache R1
Most common harms were listed only (no threshold specified). Clear outcome was measured but not reported as likely to have been uncommon.
Nausea/vomiting NA Outcome data reported in review meta-analysis.
Paraesthesias R1
Most common harms were listed only (no threshold specified). Clear outcome was measured but not reported as likely to have been uncommon.
Weight loss/decrease R1
Most common harms were listed only (no threshold specified). Clear outcome was measured but not reported as likely to have been uncommon.
Fatigue NA Outcome data reported in review meta-analysis.
Somnolence NA Outcome data reported in review meta-analysis.
Concentration impairment R1
Most common harms were listed only (no threshold specified). Clear outcome was measured but not reported as likely to have been uncommon.
Speech difficulty R1
Most common harms were listed only (no threshold specified). Clear outcome was measured but not reported as likely to have been uncommon.
Thinking abnormally NA Outcome data reported in review meta-analysis.
Ataxia NA Outcome data reported in review meta-analysis.
Sharief 1996
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom NA Outcome data reported in review meta-analysis.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Dizziness R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Headache NA Outcome data reported in review meta-analysis.
Nausea/vomiting R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Paraesthesias R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Weight loss/decrease NA Outcome data reported in review meta-analysis.
Fatigue NA Outcome data reported in review meta-analysis.
Somnolence NA Outcome data reported in review meta-analysis.
Concentration impairment NA Outcome data reported in review meta-analysis.
Speech difficulty NA Outcome data reported in review meta-analysis.
Thinking abnormally R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Ataxia R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Tassinari 1996
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom NA Outcome data reported in review meta-analysis.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harms data collected by interviewing patients in a non-directed manner.
Dizziness NA Outcome data reported in review meta-analysis.
Headache NA Outcome data reported in review meta-analysis.
Nausea/vomiting NA Outcome data reported in review meta-analysis.
Paraesthesias R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Weight loss/decrease NA Outcome data reported in review meta-analysis.
Fatigue NA Outcome data reported in review meta-analysis.
Somnolence NA Outcome data reported in review meta-analysis.
Concentration impairment NA Outcome data reported in review meta-analysis.
Speech difficulty R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Thinking abnormally NA Outcome data reported in review meta-analysis.
Ataxia R1
The AEs reported in the trials were only those that occurred in >=10% in either treatment group. Clear outcome was measured but not reported as unlikely to have met the reporting threshold.
Yen 2000
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom E 50% reduction in seizure frequency was reported in this trial and therefore the trial must have measured seizure freedom. As the five studies reporting on seizure freedom reported non-significant results, it is likely that seizure freedom was analysed in this trial but not reported because of a non-significant result, especially as the 50% reduction in seizure frequency result was reported to be significant favouring topiramate.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harms data were inscribed into a diary or reported to the physician by phone. No questionnaires for AEs were used.
Dizziness S1 Dizziness was combined with somnolence in the reporting. Clearly measured for both treatment arms.
Headache NA Outcome data reported in review meta-analysis.
Nausea/vomiting NA Outcome data reported in review meta-analysis.
Paraesthesias NA Outcome data reported in review meta-analysis.
Weight loss/decrease NA Outcome data reported in review meta-analysis.
Fatigue T1
All harms appear to be reported with no reporting restrictions. Likely no events.
Somnolence S1 Dizziness was combined with somnolence in the reporting. Clearly measured for both treatment arms.
Concentration impairment T1
All harms appear to be reported with no reporting restrictions. Likely no events.
Speech difficulty T1
All harms appear to be reported with no reporting restrictions. Likely no events.
Thinking abnormally T1 All harms appear to be reported with no reporting restrictions. Likely no events.
Ataxia T1
All harms appear to be reported with no reporting restrictions. Likely no events.
Zhang 2011
Review outcome Classification Justification for classification
50% reduction in seizure frequency NA Outcome data reported in review meta-analysis.
Seizure freedom NA
Not reported in review meta-analysis but noted that there were no seizure free events in either treatment group.
Treatment withdrawal NA Outcome data reported in review meta-analysis.
Harm was assessed by the attending physician at the end of each 2-week interval. Data collection came from patient-held diaries.
Dizziness S1 Dizziness was combined with somnolence in the reporting. Clearly measured for both treatment arms.
Headache NA Outcome data reported in review meta-analysis.
Nausea/vomiting T1 All harms appear to be reported with no reporting restrictions. Likely no events.
Paraesthesias NA Outcome data reported in review meta-analysis.
Weight loss/decrease NA Outcome data reported in review meta-analysis.
Fatigue NA Outcome data reported in review meta-analysis.
Somnolence T1
All harms appear to be reported with no reporting restrictions. Likely no events.
Concentration impairment T1
All harms appear to be reported with no reporting restrictions. Likely no events.
Speech difficulty NA Outcome data reported in review meta-analysis.
Thinking abnormally T1 All harms appear to be reported with no reporting restrictions. Likely no events.
Ataxia T1
All harms appear to be reported with no reporting restrictions. Likely no events.
Coles 1999 (this was an abstract only)
Review outcome Classification Justification for classification
50% reduction in seizure frequency E "Seizure severity was measured using the Liverpool Scale (LS) and National Hospital Seizure Severity Scale (NHS3). Seizure frequency was recorded throughout by diary".
Clear outcome measured. Likely analysed from data collected in diary.
Seizure freedom E From above -clear outcome measured and likely analysed from data collected in diary.
Treatment withdrawal G
This outcome was not mentioned in the abstract. However, this is an important outcome in this context and was measured and reported in all other studies. Clinical judgement suggests it was likely measured.
No data reported on harms.
Dizziness S2
No data on harms presented, perhaps due to space limitations (this was reported as an abstract only). Judgment suggests that it is likely that any harm was measured. The decision was based on both clinical judgment and what was reported in all other studies.
Headache S2
As for dizziness.
Nausea/vomiting S2 As for dizziness.
Weight loss/decrease S2 As for dizziness.
Fatigue S2
As for dizziness.
Somnolence S2
As for dizziness.
Concentration impairment S2
As for dizziness.
Speech difficulty S2
As for dizziness.
Thinking abnormally S2 As for dizziness.
Ataxia S2
As for dizziness.
Bounds on Geometric Eigenvalues of Graphs
The smallest nonzero eigenvalue of the normalized Laplacian matrix of a graph has been extensively studied and shown to have many connections to properties of the graph. We here study a generalization of this eigenvalue, denoted $\lambda(G, X)$, introduced by Mendel and Naor in 2010 and obtained by embedding the vertices of the graph $G$ into a metric space $X$. We consider general bounds on $\lambda(G, X)$ and $\lambda(G, H)$, where $H$ is a graph under the standard distance metric, generalizing some existing results for the standard eigenvalue. We consider how $\lambda(G, H)$ is affected by changes to $G$ or $H$, and show $\lambda(G, H)$ is not monotone in either $G$ or $H$.
Introduction
The use of eigenvalues to study graphs has a long history in graph theory. Since at least the 1980s, the eigenvalues of various matrices have been used to study properties of a graph, including many connectivity features, distance and diameter properties, automorphisms, random walks, and a litany of graph invariants. Results involving spectra of a graph have been catalogued in many surveys and books, such as [1, 3-5, 17, 18], for example.
Of particular interest in the study of graph theory is the first nonzero eigenvalue of the normalized Laplacian matrix. This single quantity has ties to connectivity, the rate of convergence of a random walk over the graph, the diameter, discrepancy bounds on the number of edges between sets, and many other important properties. In addition, the first nonzero eigenvalue is used to classify expander graphs, applications for which have been found in many facets of computer science, group theory, geometry, topology, and other areas. There are many surveys available on the properties of expanders and the first eigenvalue, such as [7,10,11].
Recently, work has begun on generalizing the notion of the first nonzero eigenvalue of the normalized Laplacian matrix in geometric terms [6,[14][15][16]. This is related to the study of the distortion of embeddings between metric spaces, and this perspective has also appeared in the literature; see, for example [8,9,13]. We here build upon this literature by studying this embedding constant in a general setting.
To begin, let us examine the desired generalization. We start with the standard definition of the normalized Laplacian. Any notation not explicitly defined will be given in Section 2 below.
For a given graph $G$, we define the adjacency matrix $A$ to be the $\{0, 1\}$-valued matrix indexed by $V(G)$ such that $A_{uv} = 1$ if $u \sim v$ and $0$ otherwise. Define the diagonal degree matrix $D$ to have $D_{vv}$ equal to $d_v$. The normalized Laplacian matrix is defined to be $\mathcal{L} = I - D^{-1/2} A D^{-1/2}$, where we take the convention that if $D_{vv} = 0$, then $D^{-1/2}_{vv} = 0$. It is well-known that the smallest eigenvalue of $\mathcal{L}$ is $\lambda_0 = 0$, with corresponding eigenvector $D^{1/2}\mathbf{1}$. Hence, by the Courant-Fischer theorem, we have that $\lambda_1 = \inf_{f \perp D^{1/2}\mathbf{1}} \frac{f^T \mathcal{L} f}{f^T f}$.
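As a quick numerical check of these definitions, the sketch below builds the normalized Laplacian of the path on three vertices and reads off its spectrum; the graph is chosen only for illustration.

```python
# Compute the normalized Laplacian spectrum of the path P3 (illustrative).
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix of P3
d = A.sum(axis=1)                         # degrees (1, 2, 1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # all degrees are nonzero here
L = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt

eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals)   # the smallest eigenvalue is 0; lambda_1 is the next one
```

For $P_3$ this yields the spectrum $0, 1, 2$, so $\lambda_1(P_3) = 1$.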
One can view the vector $f$ as a function from $V(G)$ to $\mathbb{R}$, where $f(v) = f_v$. From this perspective, some basic manipulations provide the following equivalent form for $\lambda_1$ (see, for example, [3]):
(1) $\lambda_1 = \inf_{f} \frac{\mathrm{Vol}(G) \sum_{u \sim v} |f(u) - f(v)|^2}{\sum_{u, v \in V(G)} |f(u) - f(v)|^2 \, d_u d_v}$,
where the infimum is over nonconstant $f$ and both sums range over ordered pairs of vertices. Hence, one can view $\lambda_1$ as an attempt to compare the average distance between the embedding values at adjacent vertices to the average distance between the embedding values of an arbitrary pair of vertices. Roughly speaking, a small value of $\lambda_1$ indicates that adjacent vertices can be mapped quite close together, even as the vertices themselves are spread out. Intuitively (and actually) this would indicate poor connectivity of $G$, with the extreme case that $\lambda_1 = 0$ indicating that the graph is in fact disconnected.
In [14], the following geometrically based generalization was proposed. In Equation (1), one can view the quantity $|f(u) - f(v)|^2$ in terms of the distance between $f(u)$ and $f(v)$; that is, $|f(u) - f(v)|^2 = \ell_2(f(u), f(v))^2$. Hence, we can extend this definition to an arbitrary metric space $(X, d)$ by replacing $\ell_2(f(u), f(v))^2$ by $d(f(u), f(v))^2$, and taking the infimum over all functions from $V(G)$ to $X$. Specifically, we define
(2) $\lambda(G, X) = \inf_{f : V(G) \to X} \frac{\mathrm{Vol}(G) \sum_{u \sim v} d(f(u), f(v))^2}{\sum_{u, v \in V(G)} d(f(u), f(v))^2 \, d_u d_v}$.
Previous work on this constant has primarily been focused on regular graphs, and more specifically random regular graphs, and the ties between $\lambda(G, X)$ and expansion in a graph [6, 14-16]. We here provide bounds on $\lambda(G, X)$ in the case that the metric space $X$ is itself a graph under the standard distance metric, and provide analogs to some classical theorems in spectral graph theory in this case. Specifically, we prove the following analogs to standard results in spectral graph theory.
• $\lambda(G, H) \le \frac{n}{n-1}$, and equality is achieved if and only if $G = K_n$.
We also show that for a given graph family, if $\lambda \to 0$, we must have that $\lambda \in O\!\left(\frac{1}{n^2}\right)$, and show by example that this bound is asymptotically tight. We also provide some general bounds on the constant $\lambda(G, H)$ as part of the proof of Theorem 1.1.

M. Radcliffe, University of Washington, Seattle. C. Williamson, The Chinese University of Hong Kong, supported by Hong Kong RGC GRF grant CUHK410112.
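For small graphs, $\lambda(G, H)$ can be evaluated by brute force over all maps $f : V(G) \to V(H)$. The sketch below does this under one common normalization (the one that recovers $\lambda_1$ when the target is the real line); normalizations in the literature can differ by constant factors, so treat the exact values as illustrative.

```python
# Brute-force evaluation of lambda(G, H) for small graphs (illustrative).
# Uses the ratio
#   Vol(G) * sum_{u~v} d_H(f(u),f(v))^2 / sum_{u,v} d_H(f(u),f(v))^2 d_u d_v,
# with both sums over ordered pairs; other normalizations in the literature
# may differ by a constant factor.
from itertools import product

def graph_distances(adj):
    """All-pairs shortest-path distances via BFS (adj: list of neighbor lists)."""
    n = len(adj)
    dist = [[None] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        queue = [s]
        while queue:
            u = queue.pop(0)
            for w in adj[u]:
                if dist[s][w] is None:
                    dist[s][w] = dist[s][u] + 1
                    queue.append(w)
    return dist

def lam(G_adj, H_adj):
    dH = graph_distances(H_adj)
    deg = [len(nbrs) for nbrs in G_adj]
    vol = sum(deg)
    nG, nH = len(G_adj), len(H_adj)
    best = float("inf")
    for f in product(range(nH), repeat=nG):
        num = vol * sum(dH[f[u]][f[v]] ** 2 for u in range(nG) for v in G_adj[u])
        den = sum(dH[f[u]][f[v]] ** 2 * deg[u] * deg[v]
                  for u in range(nG) for v in range(nG))
        if den > 0:                      # skip constant maps
            best = min(best, num / den)
    return best

P3 = [[1], [0, 2], [1]]   # path on 3 vertices
K2 = [[1], [0]]           # a single edge
print(lam(P3, K2))
```

For instance, mapping $P_3$ into a single edge gives a value strictly larger than $\lambda_1(P_3) = 1$, showing how restricting the target space can only raise the infimum under this normalization.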
In [14], a comparison between λ(G, X) and the classical eigenvalue λ(G, R) is noted for arbitrary metric spaces X. We prove here a lower bound for λ(G, X) in terms of λ(G, R) when X is finite, namely the following.

Theorem 1.2. Let X be a finite metric space. Then there exists an absolute constant C such that, for every connected graph G, λ(G, X) ≥ C · λ(G, R) / log²(|X|).

Finally, we consider how modifications to the graphs G or H can impact λ(G, H).
We provide examples showing that adding an edge to G can both increase and decrease the value of λ when H is held constant, so that λ is not monotone in G, and we provide bounds on the ratio of the two eigenvalues. Similarly, we provide examples for which taking H′ a subgraph of H both increases and decreases the value of λ when G is held constant, so that λ is also not monotone in H. However, we do have the following theorem.

Theorem 1.3. Let G be a connected graph and H any connected graph. Then λ(G, H) ≤ λ(G, K_2).

Hence, the single edge provides an extreme case for calculating λ. To avoid confusion, throughout the remainder of this work, we shall refer to the classical first eigenvalue λ1 as λ(G, R). We also take any graph used as a metric space to be connected, as otherwise the distance d(f(u), f(v)), and hence the ratio R_f, may be undefined. Since λ(G, H) for a disconnected graph H is equal to the minimum of λ(G, H_j) over the connected components H_j of H, it suffices to assume H is connected.
Notation
Throughout, we shall use standard graph-theoretic notation, as follows. For G a graph, let V(G) denote the vertex set of G and E(G) its edge set; we write n = |V(G)| and m = |E(G)|. The degree of a vertex v, the number of edges incident to v, is denoted d_v; if needed, for clarification we will use d_v(G) to denote the degree in G. The maximum degree in G will be denoted by Δ, and the minimum degree by δ. The distance between two vertices, d_G(u, v), is the number of edges in a shortest path between u and v. The diameter of G is the maximum distance between two vertices, and will be denoted by D_G. For a collection S of vertices in G, write Vol(S) = Σ_{u∈S} d_u. For simplicity, we write Vol(G) to denote Vol(V(G)). For two sets of vertices S, T ⊂ V(G), let e(S, T) denote the number of edges with one endpoint in S and the other in T.
Throughout we will view graphs also as metric spaces, using the distance function defined above. More specifically, we will consider the quantity λ(G, H), where (H, d H ) is a metric space over a graph H. We shall typically write |V (H)| = k. As there are two graphs involved, for clarity we shall typically use letters u, v to indicate vertices in V (G) and i, j to indicate vertices in V (H).
The complete graph G = K_n is the graph whose edge set consists of all 2-element subsets of V(G); that is, every pair of vertices forms an edge of K_n. The complete bipartite graph G = K_{n1,n2} has vertex set V(G) = V_1 ∪ V_2, where |V_1| = n_1, |V_2| = n_2, and {u, v} ∈ E(G) if and only if one of u, v is a member of V_1 and the other is a member of V_2. Given a graph G, we define the density of G to be ρ = m / (n choose 2); that is, ρ is the proportion of possible edges that are present in G.
To compute λ(G, X), one must minimize the fraction given in Equation (2). For a given function f : V(G) → X, we denote this fraction by R_f(G, X). When the metric space and graph are understood, we write R_f in place of R_f(G, X), for simplicity. As the embedding constant λ(G, X) is related to metric embeddings, we shall make use of Bourgain's Embedding Theorem [2] to prove Theorem 1.2. Although this theorem takes many forms, the specific version we shall use is as follows (see, for example, [12]).
Theorem 2.1. There exist constants c, C such that, for every finite metric space X, there exists a function g : X → R^K, where K = Θ(log²|X|), such that, for all x, y ∈ X,

c · d(x, y) ≤ ‖g(x) − g(y)‖₂ ≤ C · log(|X|) · d(x, y).

We note that the constants c, C are independent of the metric space X. Let φ_K : R^K → R be the projection onto a single coordinate. We then have the following immediate corollary to Bourgain's Embedding Theorem.

Corollary 2.2. There exist absolute constants c, C such that, for every finite metric space X, there exists a function f : X → R such that, for all x, y ∈ X, the quantity |f(x) − f(y)| is bounded above and below by multiples of d(x, y) depending only on c, C, and log|X|.
Bounds on R_f
One useful tool for providing simple bounds on λ(G, H) is to bound R_f simultaneously for all f. We present here some basic bounds that will appear throughout the remainder of this work. We begin with the following optimization, which will be useful in bounding the denominator of R_f.

Lemma 3.1. Let x ∈ R^k, with k ≥ 2, be a vector whose entries are nonnegative integers summing to C, at least two of which are nonzero. Then ‖x‖₂² ≤ C² − 2C + 2.

Proof. First, if k = 2, this becomes an optimization problem in only one variable. If we set x_1 = j, then we need only determine the maximum of j² + (C − j)² over integers j with 1 ≤ j ≤ C − 1. Basic calculus shows that the maximum occurs at the endpoints of the interval, namely where j = 1 or j = C − 1, obtaining a maximum value of (C − 1)² + 1 = C² − 2C + 2, as desired. Now, suppose that x ∈ R^k has at least three nonzero entries. Merging two of these entries into one produces a vector y with one fewer nonzero entry; note that y is also a feasible vector for the optimization, and that ‖y‖₂² ≥ ‖x‖₂². Hence, the optimum must occur at a vector with precisely two nonzero entries, and we may use the above argument for the case k = 2 to obtain the desired result.
We can immediately use this result to provide the following simple lower bound on the denominator in R f .
As f is a nonconstant function, x is a feasible vector for the optimization problem in Lemma 3.1 with C = Vol(G), and the lower bound follows. Similarly, as ‖x‖₂ ≥ (1/√k)·‖x‖₁ for all x ∈ R^k, we have the following simple upper bound on the denominator in R_f.
Proof. Noting that for i ≠ j ∈ V(H) we have d(i, j) ≤ D_H, and following the technique and notation in the proof of Theorem 3.2, we obtain the stated bound.
Bounds on λ(G, H)
We begin by proving Theorem 1.1, via the following four theorems.

Theorem 4.1. Let X be any metric space. Then λ(G, X) = 0 if and only if G is disconnected.

Proof. First, suppose that G is disconnected, so that there exists a partition of V into sets V_1 and V_2 such that e(V_1, V_2) = 0 and |V_1|, |V_2| ≥ 1. Let f map V_1 and V_2 to two distinct points of X. Clearly, by definition, R_f = 0, and hence 0 ≤ λ(G, X) ≤ R_f = 0. For the other direction, suppose that G is connected, with diameter D. Let f : V → X be a nonconstant function. As any pair of vertices is joined by a path of at most D edges, each term of the denominator of R_f is controlled by terms of the numerator, and hence R_f is bounded away from 0 uniformly in f. Therefore λ(G, X) > 0 for any connected graph G.
Note moreover that the proof technique yields the following immediate corollary.
If G is a connected graph with diameter D, and X is any metric space, then λ(G, X) > 0, with a lower bound depending only on G. We note that if G is the complete graph, we obtain equality in the above bound. Indeed, if G = K_n, then every pair of vertices is adjacent and every degree equals n − 1, so for any nonconstant function f the numerator of R_f is Σ_pairs d(f(u), f(v))² and the denominator is (n − 1)² Σ_pairs d(f(u), f(v))²; hence R_f = n(n − 1)/(n − 1)² = n/(n − 1), regardless of the metric space into which we embed. In fact, this is the largest possible value that λ(G, X) can take.
Theorem 4.3. For any graph G on n vertices and any metric space X, λ(G, X) ≤ n/(n − 1).

Proof. Suppose, toward a contradiction, that λ(G, X) > n/(n − 1). Fix an arbitrary vertex w of G of minimal degree δ and define f : V(G) → X by mapping every vertex except w to a ∈ X and mapping w to b ∈ X, where d_X(a, b) = ε. Plugging this function into the inequality R_f ≥ λ(G, X) > n/(n − 1) yields a contradiction.
We have seen already that the complete graphs achieve this bound. Next, we see what can be learned about G from knowing that λ(G, X) = n/(n−1).

Theorem 4.4. Suppose that for some G, λ(G, X) = n/(n−1). Then G is complete.

Proof. The assumption means that inf_f R_f = n/(n−1). Plugging in the same function used in the proof of Theorem 4.3 shows that G is δ-regular, since δ is the smallest degree.
By assumption, we know that R_f ≥ n/(n−1) for every nonconstant f; combined with regularity, this forces every pair of vertices to be adjacent, so G is complete. We now turn to the proof of Theorem 1.2.
Proof of Theorem 1.2. Suppose that X is a finite metric space. Let c, C be the constants guaranteed by Corollary 2.2, and let f : X → R be the function guaranteed by the same corollary. Take g : G → X to be any nonconstant function. Then f ∘ g : V(G) → R is nonconstant, and the distortion bounds of Corollary 2.2 show that R_g(G, X) is bounded below by a multiple (depending only on c, C, and |X|) of R_{f∘g}(G, R) ≥ λ(G, R). As this bound holds for all g : G → X, taking the infimum yields the result.
Asymptotic lower bounds on λ.
Here, we investigate how quickly λ can decrease to 0. We first prove a naive lower bound, and show that asymptotically it is best possible.

Theorem 4.5. Let G be a connected graph, and let H be a graph with diameter D_H. Then λ(G, H) ≥ 4/(Vol(G)·D_H²).

Proof. Using this together with the bound found in Theorem 3.3, we obtain the stated bound for any nonconstant f.
Note that as Vol (G) ≤ n 2 , this result implies that for any graph family G and fixed graph H, the eigenvalues of G n ∈ G with respect to H decay no more rapidly than order 1/n 2 . As the next example shows, this is the optimal order of decay.
Example. Construct a dumbbell graph G = G_n by taking two copies of K_{n/2} and attaching a single edge between them. Let H be K_2. Define a function f : V(G) → V(H) by mapping the vertices in the two dumbbells to opposite vertices in H. Then we obtain R_f = Vol(G)/((n/2)(n/2−1)+1)². On the other hand, the lower bound given by Theorem 4.5 in this case is 4/(n(n/2−1)+2). Note that both bounds here are of order 1/n², and indeed the constant is also the same; that is, both bounds decay as 8/n². Thus, the bound given in Theorem 4.5 is asymptotically best possible.

Now we turn our attention to regular graphs. We first note the following naive bound on λ(G, H) for regular graphs.

Theorem 4.6. Let G be a d-regular graph on n vertices, and let H be a graph with diameter D_H. Then λ(G, H) ≥ 2/(n d D_H²).

Proof. As G is regular, note that the denominator of R_f may be written as d² Σ_{u,v} d(f(u), f(v))². Hence, the stated bound holds for any nonconstant function f.

Example. Construct a "regularized dumbbell" graph as follows. First, take two copies of K_{n/2}. In each copy, select two vertices and delete the edge between them. Add two new edges between the two copies of K_{n/2}\{e} by connecting each endpoint of the deleted edge in one copy to an endpoint of the deleted edge in the other copy.
Let H = K_2. Then, map the vertices in the two dumbbells to opposite vertices in H as before; the resulting R_f is of order 1/(nd). Note that the estimate given in Theorem 4.6 is 2/nd, and hence, asymptotically, λ(G, H) decays to 0 as quickly as possible.
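The dumbbell computation can be checked exactly for small n. For H = K_2, the quotient R_f depends only on the cut S = f⁻¹(v_1) and reduces to Vol(G)·e(S, S̄)/(Vol(S)·Vol(S̄)); the following sketch (illustrative code, assuming that cut form, which is not spelled out in the text) confirms that the dumbbell on n = 6 vertices has λ(G, K_2) = 2/7, exactly the value Vol(G)/((n/2)(n/2−1)+1)².

```python
from fractions import Fraction
from itertools import combinations

def lambda_K2(adj):
    """lambda(G, K2): minimise Vol(G)*e(S, S~)/(Vol(S)*Vol(S~)) over cuts S."""
    n = len(adj)
    deg = [len(nbrs) for nbrs in adj]
    vol = sum(deg)
    best = None
    for r in range(1, n):
        for S in map(set, combinations(range(n), r)):
            cut = sum(1 for u in S for v in adj[u] if v not in S)
            vol_s = sum(deg[u] for u in S)
            quotient = Fraction(vol * cut, vol_s * (vol - vol_s))
            if best is None or quotient < best:
                best = quotient
    return best

def dumbbell(half):
    """Two copies of K_half joined by a single bridge edge."""
    adj = [[] for _ in range(2 * half)]
    for shift in (0, half):
        for i in range(half):
            for j in range(i + 1, half):
                adj[shift + i].append(shift + j)
                adj[shift + j].append(shift + i)
    adj[half - 1].append(half)   # the bridge
    adj[half].append(half - 1)
    return adj

print(lambda_K2(dumbbell(3)))  # -> 2/7 for n = 6, of order 8/n^2
```

The optimal cut is the one separating the two cliques, as only the bridge edge contributes to the numerator.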
Bounds relating λ(G, H) to λ(G, H ′ )
Throughout this section, we will take the underlying metric space to be a graph H. We shall consider the effect to λ(G, H) when changes are made to the graph H.
Corollary 5.2. Let G be a connected graph, and let H, H′ be two connected graphs on the same vertex set V(H). Let all notation be as in Theorem 5.1. Then a bound analogous to that of Theorem 5.1 holds.
Corollary 5.3. Let G be a connected, d-regular graph with diameter D, and let H, H ′ be connected graphs on vertex set V (H). Then In a similar way, we have the following bound.
Here, we have used the facts that, for a fixed function f, any term in the sum Σ_{u∼v} d(f(u), f(v))² can increase by at most a factor of D²_{H′}; that 1 − 1/k ≤ 1; and that Vol(G)/(Vol(G) − 1) ≤ 6/5. As before, we obtain the following immediate corollary.
In comparing the bounds found in Theorems 5.1 and 5.4, it seems that the stronger bound is decided by the density of G. Indeed, as S_G = ((n choose 2) − m)·D², we note that if m is quite large, Theorem 5.1 will give the stronger bound, whereas if m is quite small, the bound in Theorem 5.4 will likely be stronger. Finally, we can improve the bound if a further constraint is placed on H.
Theorem 5.6. Take H to be a complete graph and obtain H ′ by removing one edge from H. Let λ = λ(G, H) and λ ′ = λ(G, H ′ ). Then, Proof.
So, we have that λ′ ≤ (4Δ²/δ²)·λ, and arguing in the opposite direction, we obtain the stated result.
We now turn to the question of whether λ is monotone in H.
Theorem 5.7. Let H′ be a connected subgraph of H. Then λ(G, H′) ≥ (1/D²_{H′})·λ(G, H).

Proof. Note that for any pair of distinct vertices i, j ∈ V(H′), we have d_H(i, j) ≤ d_{H′}(i, j) ≤ D_{H′} ≤ D_{H′}·d_H(i, j). Applying these inequalities to the numerator and denominator of R_f for any f : V(G) → V(H′) yields the result, as desired.
However, we do obtain Theorem 1.3 as a corollary: Proof. Note that as H is a connected graph, then H ′ = K 2 is a subgraph of H. Moreover, D H ′ = 1, and hence by Theorem 5.7, we have λ(G, H) ≤ λ(G, H ′ ).
Hence, although we do not have monotonicity in H, we have that H = K 2 always provides an extreme value for λ.
On the other hand, as every graph on k vertices is a subgraph of K_j for all j ≥ k, we have the following corollary.

Corollary 5.9. Let H be a connected graph on k vertices, and let j ≥ k. Then λ(G, K_j) ≤ D²_H·λ(G, H). In particular, λ(G, K_j) ≤ λ(G, K_2) for all j ≥ 2.
Bounds relating λ(G, H) to λ(G ′ , H)
Here we consider the impact on λ of adding or deleting edges of the graph G. We first consider what happens to λ when one edge is added to G.

Theorem 6.1. Let G be a connected graph with Vol(G) ≥ 6, and suppose that G′ is obtained from G by adding one edge not in E(G). Let H be a graph with diameter D_H and |V(H)| = k. Then upper and lower bounds hold relating λ(G′, H) to λ(G, H).

Proof. We first consider the upper bound (5).
Note that e(G′) = e(G) + 1 and Vol(G′) = Vol(G) + 2; comparing the numerators and denominators of R_f for G and G′ term by term then yields the upper bound. For the lower bound, we prove the two bounds separately. The first bound follows from Theorem 3.2. For the second bound, we simply note that each degree product changes by at most the factor (d_u + 1)(d_v + 1)/(d_u d_v) ≤ 4, which gives the bound. We note that there are cases in which each of the two terms in the lower bound is larger, and hence both bounds can be useful.
From the above theorem, it is unclear whether adding an edge to a graph G will increase or decrease λ(G, X). In fact, both possibilities can occur. To illustrate, we turn to the complete multipartite graph K n,j . Here, K n,j represents the j-partite graph where each partition set V 1 , ..., V j has exactly n vertices.
We shall consider the geometric eigenvalue λ(K_{n,j}, K_2). Write v_1, v_2 for the vertices of K_2. Note that the only relevant pieces of information needed to evaluate R_f(K_{n,j}, K_2) for a function f are the numbers of vertices in each partition set V_i that f maps to v_1. Indeed, if we denote these values by x_i, then R_f may be written entirely in terms of x_1, ..., x_j.

Theorem 6.2. λ(K_{n,j}, K_2) = 1 for all n, j.
Proof. Using the notation above, define a function f such that x_i = 1 for all i ∈ [j]. Then, by the above, R_f = 1. Moreover, if (x_1 + ... + x_j)² ≥ (j/(j−1)) Σ_{i≠i′} x_i x_{i′} for any real numbers x_1, ..., x_j, then R_f ≥ 1 for all f. Therefore, it must be that the given function f achieves the infimum, and λ(K_{n,j}, K_2) = 1. So, we only need to show that (x_1 + ... + x_j)² ≥ (j/(j−1)) Σ_{i≠i′} x_i x_{i′} holds, which, by expanding the square, is equivalent to demonstrating that (j − 1) Σ_i x_i² ≥ Σ_{i≠i′} x_i x_{i′}. This is true since the left-hand side minus the right-hand side equals Σ_{i<i′} (x_i − x_{i′})², which is nonnegative as a sum of squares.
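Theorem 6.2 can be verified exhaustively for small parameters. The sketch below is illustrative (it assumes the cut form of R_f for H = K_2, namely Vol(G)·e(S, S̄)/(Vol(S)·Vol(S̄)), which is not spelled out in the text) and confirms λ(K_{n,j}, K_2) = 1 on a few small complete multipartite graphs.

```python
from fractions import Fraction
from itertools import combinations

def lambda_K2(adj):
    """lambda(G, K2) = min over vertex cuts S of Vol(G)*e(S,S~)/(Vol(S)*Vol(S~))."""
    n = len(adj)
    deg = [len(nbrs) for nbrs in adj]
    vol = sum(deg)
    best = None
    for r in range(1, n):
        for S in map(set, combinations(range(n), r)):
            cut = sum(1 for u in S for v in adj[u] if v not in S)
            vol_s = sum(deg[u] for u in S)
            quotient = Fraction(vol * cut, vol_s * (vol - vol_s))
            if best is None or quotient < best:
                best = quotient
    return best

def complete_multipartite(n, j):
    """K_{n,j}: j parts of n vertices each, all edges between distinct parts."""
    total = n * j
    part = lambda v: v // n
    return [[u for u in range(total) if part(u) != part(v)]
            for v in range(total)]

for n, j in [(2, 2), (3, 2), (2, 3)]:
    print((n, j), lambda_K2(complete_multipartite(n, j)))  # all equal 1
```

The minimizing cut takes exactly one vertex from each part, mirroring the choice x_i = 1 in the proof.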
Using this result, we immediately have the following. Theorem 6.3. There exists a pair of graphs G, G ′ , such that G ′ is obtained from G by adding one edge, and λ(G, K 2 ) < λ(G ′ , K 2 ).
Proof. Let G_1 = K_{n,n}, so, as seen above, λ(G_1, K_2) = 1. Moreover, λ(K_n, K_2) = n/(n−1) > 1, and hence if we add the nonedges of G_1 sequentially, we will encounter a pair of graphs satisfying the condition.

Theorem 6.4. There exists a pair of graphs G, G′, such that G′ is obtained from G by adding one edge, and λ(G, K_2) > λ(G′, K_2).
The two results above suffice to show that λ(G, H) is not monotone in G when H = K 2 . However, the proof also works for λ(G, R) (instead of mapping into K 2 , map to the real numbers 0 and 1). Lemma 6.5. λ(K n,n , K k ) = 1 for all n, k.
Proof. Note that by Theorem 6.2 and Corollary 5.9, it suffices to show that λ(K n,n , K k ) ≥ 1 for all n, k.
For a function f : V(K_{n,n}) → K_k and each i ∈ [k], let x_i and y_i denote the numbers of vertices in the two partition sets, respectively, that f maps to vertex i of K_k. We consider x and y to be the vectors of these numbers, and we know that ‖x‖₁ = ‖y‖₁ = n. For a fixed x, ‖x‖₂² − xᵀy is minimized when y is a multiple of x. Since y cannot equal cx for any c ≠ 1 (due to the 1-norm constraint on y), we know that ‖x‖₂² − xᵀy is minimized when y = x, and this quantity is bounded below by 0. Combining these observations, for any f : V(G) → K_k we obtain R_f ≥ 1. So now we can extend our proof of non-monotonicity to H = K_k.
Corollary 6.6. λ(G, H) is not monotonic in adding an edge to G, whenever H is a complete graph.
Proof. By Lemma 6.5, we know that λ(K_{n,n}, K_k) = 1. The same example as in Theorem 6.4 then also disproves monotonicity in the generalized case, as λ(G′, K_k) ≤ λ(G′, K_2) < 1.
We note that a graph being bipartite is not equivalent to λ(G, H) equaling 1. Of course, a disconnected graph can be bipartite, yet λ will equal 0. Similarly, if one wants a connected counterexample, it is easy to check that λ(P_3, K_2) = 4/3, where P_3 is the path on three vertices. Conversely, our computer simulation tells us that λ(G, K_2) = 1 where G is the nonbipartite graph formed by taking K_4 and deleting two edges that touch a common vertex. We have established that adding an edge to G can decrease λ(G, H), despite the improvement in connectivity. In fact, something stronger holds: given a k-regular graph, we can add enough edges to make it (k+1)-regular and still obtain a decrease in lambda. A variant of this has been considered in [15], and a similar bound developed.

Theorem 6.7. Let G′ be a (k+1)-regular supergraph of a k-regular graph G. Then λ(G′, K_2) ≥ (k/(k+1))·λ(G, K_2).

Proof. For any cut S, we have Vol_{G′}(S) = ((k+1)/k)·Vol_G(S) and e_{G′}(S, S̄) ≥ e_G(S, S̄); comparing the resulting quotients yields the bound.

Example. For n even, let G = K_{n,n}, the complete, balanced, bipartite graph on 2n vertices. We already know that this is an n-regular graph with λ(G, K_2) = 1. Denote the vertices in the left partition by v_1, v_2, ..., v_n and the vertices on the right by v_{n+1}, ..., v_{2n}. Then, create G′ by adding the edges v_1∼v_2, v_3∼v_4, ..., v_{n−1}∼v_n, v_{n+1}∼v_{n+2}, ..., v_{2n−1}∼v_{2n}. Then G′ is (n+1)-regular, and applying the function that maps v_1, v_2, v_{n+1}, v_{n+2} to one vertex of K_2 and all remaining vertices to the other vertex of K_2, we obtain λ(G′, K_2) ≤ n/(n+1), which meets the lower bound in Theorem 6.7. Perhaps a natural question to ask at this point is how many of the nonedges can be added to a graph so that λ(G, K_2) still decreases. We consider this problem by trying to maximize the number of added edges divided by the total number of nonedges in G, for some graph family.
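The two small cases cited above — λ(P_3, K_2) = 4/3, and λ(G, K_2) = 1 for K_4 minus two edges at a common vertex — are easy to re-verify by exhausting all cuts. An illustrative sketch, again assuming the cut form R_f = Vol(G)·e(S, S̄)/(Vol(S)·Vol(S̄)) for H = K_2:

```python
from fractions import Fraction
from itertools import combinations

def lambda_K2(adj):
    """lambda(G, K2) = min over vertex cuts S of Vol(G)*e(S,S~)/(Vol(S)*Vol(S~))."""
    n = len(adj)
    deg = [len(nbrs) for nbrs in adj]
    vol = sum(deg)
    best = None
    for r in range(1, n):
        for S in map(set, combinations(range(n), r)):
            cut = sum(1 for u in S for v in adj[u] if v not in S)
            vol_s = sum(deg[u] for u in S)
            quotient = Fraction(vol * cut, vol_s * (vol - vol_s))
            if best is None or quotient < best:
                best = quotient
    return best

P3 = [[1], [0, 2], [1]]                 # path on three vertices
# K4 with the two edges 0-1 and 0-2 deleted (both touch vertex 0):
K4_minus = [[3], [2, 3], [1, 3], [0, 1, 2]]
print(lambda_K2(P3))        # -> 4/3
print(lambda_K2(K4_minus))  # -> 1, although the graph is not bipartite
```

The second graph contains the triangle 1-2-3, so it is indeed nonbipartite despite achieving λ(G, K_2) = 1.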
One goal is to create a graph family so that, for each member of the family, some constant fraction of the missing edges can be added in such a way that lambda decreases. The following is the closest we could come to that.

Theorem 6.8. There exists a graph family {G_n} such that for each G ∈ {G_n}, there exists a graph G′ with λ(G′, K_2) < λ(G, K_2), where G′ is obtained from G by adding a set of edges of size O(n^{−ε}·|E|), for any ε > 0.
Proof. The construction is to take a complete, balanced, bipartite graph G = K_{n,n}, so that λ(G, K_2) = 1. On each side of the partition V_1, V_2, designate a set of n^{1−ε/2} red vertices and a set of n − n^{1−ε/2} blue vertices. Create G′ by adding edges to G so that the red vertices in V_1 form a clique and the red vertices in V_2 form another clique. We have added n^{1−ε/2}(n^{1−ε/2} − 1) edges. Thus, we have added a fraction n^{1−ε/2}(n^{1−ε/2} − 1)/(n(n − 1)) = O(n^{−ε}) of the missing edges.
Finally, recall from Theorem 1.1 that λ(G, H) < λ(K_n, H) = n/(n−1) for all non-complete graphs G on n vertices. We end with an examination of the "nearly" complete graph K_n\{e}, a graph with exactly one nonadjacent pair of vertices.

Theorem 6.9. For n ≥ 3, λ(K_n\{e}, K_2) ≥ 1.
Proof. Denote the vertices of K n \{e} as v 0 , ..., v n−1 and let there be no edge between v 0 and v 1 . Write V (K 2 ) = {0, 1}. It suffices to consider two cases, depending on whether v 0 and v 1 map to the same vertex in K 2 .
The second equality follows from the fact that x + y = n − 2. Dividing the numerator and denominator by (n − 1)², it is clear that this ratio is at least 1, concluding the proof.

Corollary 6.10. lim_{n→∞} λ(K_n\{e}, K_2) = 1.

Proof. Combine the previous theorem with the result from Theorem 1.1, so that 1 ≤ λ(K_n\{e}, K_2) < λ(K_n, K_2) = n/(n − 1) → 1.
Note that since λ(K_3\{e}, K_2) = 4/3 > 1, transforming a graph from K_n\{e} to K_{n+1}\{e} can decrease lambda. This means that taking a graph and adding a new vertex connected to all previous vertices can decrease lambda, although intuitively it might appear that this should produce a graph that is "better" connected than the original. We note that this operation can also increase λ; if G = K_{2,2}, for example, then adding a vertex adjacent to all four original vertices produces a graph with a larger λ value with respect to K_2.
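The closing remark — that attaching a dominating vertex to K_{2,2} raises λ(·, K_2) above 1 — can likewise be confirmed by brute force over cuts (illustrative sketch, again assuming the cut form Vol(G)·e(S, S̄)/(Vol(S)·Vol(S̄)) of R_f for H = K_2):

```python
from fractions import Fraction
from itertools import combinations

def lambda_K2(adj):
    """lambda(G, K2) = min over vertex cuts S of Vol(G)*e(S,S~)/(Vol(S)*Vol(S~))."""
    n = len(adj)
    deg = [len(nbrs) for nbrs in adj]
    vol = sum(deg)
    best = None
    for r in range(1, n):
        for S in map(set, combinations(range(n), r)):
            cut = sum(1 for u in S for v in adj[u] if v not in S)
            vol_s = sum(deg[u] for u in S)
            quotient = Fraction(vol * cut, vol_s * (vol - vol_s))
            if best is None or quotient < best:
                best = quotient
    return best

# K_{2,2} on vertices 0..3 (parts {0,1} and {2,3}), then a fifth vertex
# adjacent to all four original vertices.
K22 = [[2, 3], [2, 3], [0, 1], [0, 1]]
apex = [[2, 3, 4], [2, 3, 4], [0, 1, 4], [0, 1, 4], [0, 1, 2, 3]]
print(lambda_K2(K22))   # -> 1
print(lambda_K2(apex))  # -> 16/15, strictly larger than 1
```

So adding the dominating vertex moves the constant from 1 up to 16/15 in this instance.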
"year": 2015,
"sha1": "c4a12ed4b722a0af73f59554ac1727f1f858d7d7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9777a55aeb3dc262c8d703425b478de326d35240",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Effect of procyanidins on lipid metabolism and inflammation in rats exposed to alcohol and iron
Background Lifestyle involving uncontrolled alcohol consumption coupled regularly with red meat and other iron sources has detrimental effects on the liver, which, in the long term, results in Alcoholic Liver Disease (ALD). Procyanidin has lately garnered increasing attention and has become the focus of research owing to its antioxidant properties. This study explores the anti-inflammatory effects of procyanidins in preventing ALD by analyzing the biological activities of the compound on liver injury caused by excessive alcohol and iron. Method Male SPF Wistar rats were placed in 4 groups: the control Group A (basic diet); the model Group B (excess alcohol 8–12 mL/kg/d and iron 1000 mg/kg diet); the low dose procyanidin Group C (model group diet plus 60 mg/kg/d of procyanidin); and the high dose procyanidin Group D (model group diet plus 120 mg/kg/d of procyanidin). Serum biochemical markers for liver damage were measured spectrophotometrically. The NFκB and IκB mRNA expression levels were determined using RT-PCR; the NFκB p65 and IκB protein expression levels were assessed via western blotting, while ELISA was used to detect serum inflammatory factors. Results The pathological scores of the model Group B and the low and high dose procyanidin Groups C and D were 6.58 ± 0.90, 4.69 ± 0.70, and 2.00 ± 0.73, respectively (P < 0.05). The results showed that high alcohol and iron contents in the model group led to significant damage of liver structure, increased low-density lipoproteins (LDLs), steatosis, and increased levels of inflammatory cytokines. High amounts of procyanidins led to the preservation of the liver structure, production of high-density lipoproteins, and reduction in serum inflammatory cytokines while also significantly decreasing the expression levels of NFκB p65. Conclusion The results prove that procyanidins have hepatoprotective potential and could be effective in reversing histopathology, possibly by alleviating inflammation and improving lipid metabolism.
Introduction
Jacques Masquelier was the first person to begin intensive research on procyanidins, in the 1940s, when he investigated the pine bark brew used by Native Americans to treat scurvy. He determined that monomeric proanthocyanidins were the main components that gave the brew its healing properties, while also being safe for consumption [1,2]. Procyanidins, molecular formula C30H26O13, are polyphenols that are homo-oligomeric (epi)catechins having two B-ring hydroxyl groups and are built from the flavan-3-ols (−)-epicatechin and (+)-catechin [1]. The compound can be categorized into A-type, which has an interflavan bond and an ether linkage between the carbon-2 and the hydroxyl group of the A-ring, while B-type has only a single interflavan bond connecting the carbon-4 of the B-ring to the carbon-6 or carbon-8 of the C-ring [3].
Procyanidin, a natural health food, is commonly found in plants such as vegetables, fruits, legumes, grains, and nuts. Foods such as red wine, grapes, berries, chocolate, cranberry juice, and certain varieties of apples are known to contain large amounts of procyanidins [4,5,6]. Procyanidins, also known as condensed tannins, have drawn much attention for their hepatoprotective effect at the microscopic level [7] and potent activities like inhibiting levels of NO, PGE2, TNF-α, and ROS, and altering NFκB and activating IκB [8,9]. They have been under intense investigation by researchers as they exhibit cardioprotective, anti-oxidant, anti-cancer, anti-inflammatory, and anti-diabetic properties [6,10,11]. Their medicinal properties have seen their increased use in dietary supplements and alternative medicines [12].
The harmful effects of alcoholism account for three million global casualties annually, representing 5.3% of all deaths [13]. As the liver is the primary site of alcohol metabolism, it sustains the highest degree of damage [14]. Chronic alcoholism leads to alcohol-related liver disease, which is characterized by the development of hepatic steatosis, alcoholic hepatitis, fibrosis, and cirrhosis, and may eventually lead to end-stage liver disease [15]. The early sign of heavy drinking is the build-up of fat in the liver, known as steatosis [15]. This simple steatosis leads to inflammation followed by fibrosis, which may eventually result in further damage as cirrhosis [16]. Alcoholic fatty liver is a reversible condition and fibrosis is variable, but cirrhosis is in most cases a sign of permanent damage [17].
The liver also plays a noteworthy role in iron homeostasis, a process that maintains plasma iron levels within a specific range [18]. High iron intake causes the activation of hepatocytes, which produce hepcidin that ensures homeostasis [19]. More than one-half of the cases with advanced ALD and one-third of the alcohol-dependent subjects exhibit high liver iron content [20], as alcohol-induced oxidative stress down-regulates hepcidin production [21,22]. This increased iron buildup in the liver is associated with greater mortality from alcoholic cirrhosis, indicating a pathogenic role for iron in alcohol-related liver disease [22].
In liver tissue, alcohol metabolism and high iron deposits are involved in the generation of reactive oxygen species (ROS) and the development of oxidative stress [23,24], causing changes in lipid metabolism and inflammatory factors [25,26]. Thus, drinking alcohol while eating iron-rich foods can have detrimental effects on one's health, as the two substances mediate and promote oxidative stress and hepatic fibrosis, revealing the close relation between hemochromatosis and excessive alcohol consumption [27]; together, these two substances facilitate ALD development.
To determine whether procyanidin, a powerful antioxidant, can be used to rectify liver damage in patients with ALD, and to assess its effectiveness in preventing ALD, our study aimed to explore its positive effects on lipid metabolism and inflammation in rats with alcohol- and iron-induced hepatic injury.
Reagents
All reagents used in our research were of analytical purity. Grape seed procyanidin extract (GSPE) was obtained from Xi'an Tianxingjian Natural Biological Products Co., Ltd., China, while iron(II) sulfate heptahydrate (FeSO4·7H2O) was purchased from Shanghai Maclean Biochemical Technology Co., Ltd., China, and ethanol was obtained from Sinopharm Chemical Reagent Beijing Co., Ltd., China.
Animal treatment and experimental design
Male SPF Wistar rats (n = 51, weighing 150-160 g) were bought from Shandong Lukang Pharmaceutical Co., Ltd. The animals (Certificate of Quality No. SCXK (Lu) 20140007) were provided humane care in compliance with international standards for the care and management of Laboratory Animals (Laboratory Animal Resources Institute, Life Science Commission, National Research Council, 1996). The research was approved by the Animal Ethics Committee of Qingdao Medical University. The animals (6 weeks old) were housed in a standard environment (25 ± 1 °C temperature, 55 ± 5% humidity, and 12/12 h natural light/dark cycle) with unlimited rodent chow and water access.
After acclimatizing the rats for three weeks, they were assigned randomly to four groups: A, Control Group (n = 11), provided a basic diet (iron content 50 mg/kg) and normal saline; B, Model Group (n = 12), provided iron-rich food (1000 mg/kg) while being administered 50% v/v ethanol (8 ml kg−1 d−1 for 2 weeks, then 12 ml kg−1 d−1 by gavage for 10 weeks); C, Low-dose Procyanidin Treatment Group (n = 14); and D, High-dose Procyanidin Treatment Group (n = 14), given the same alcohol- and iron-rich model-group diet while also being administered aqueous procyanidin solution at 60 mg kg−1 d−1 and 120 mg kg−1 d−1, respectively. They were kept under strict observation.
After 12 weeks under sterile conditions, the animals were anesthetized with 30 mg/kg sodium pentobarbital. For serum lipid analysis, blood samples were drawn from the abdominal aorta. Liver biopsies were taken, and the tissue samples were divided into two portions: one treated for pathological and the other for biochemical analysis. The remaining samples were frozen in liquid nitrogen and stored at −80 °C.
Liver index calculation
Formula for liver index calculation: Liver index (%) = liver mass / body mass × 100.
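As a trivial illustration of the formula (hypothetical masses, not data from this study):

```python
def liver_index(liver_mass_g: float, body_mass_g: float) -> float:
    """Liver index (%) = liver mass / body mass * 100."""
    return liver_mass_g / body_mass_g * 100

print(liver_index(8.0, 200.0))  # -> 4.0 (%)
```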
Preparation of liver tissue for pathological examination
Formalin-fixed liver tissues (0.9 × 0.9 × 0.5 cm) were paraffin-embedded and stained with hematoxylin-eosin (HE) to assess inflammation and steatosis, as previously described [28,29]. Pathological observation and photography were performed using an Olympus BX60 multi-function microscope. Liver pathology scoring was done as previously described by Yin et al. [30].
Preparation of liver tissue for ultrastructural observation with transmission electron microscopy (TEM)
For ultrastructural observation, as previously described [28], a portion of tissue (1 × 1 × 3 mm) was fixed with 2.5% glutaraldehyde for 24 h at 4 °C for pathological analysis and then washed with 0.1 M cacodylate buffer, dehydrated with ethanol, fixed with 1% citric acid, dehydrated with acetone, and embedded in epoxy resin (EPON812). Ultra-thin sections (50-70 nm) were then placed on grids, stained with uranyl acetate and lead citrate, and observed in a TEM.
Preparation of serum and liver tissue for detection of serum enzyme indices and liver metabolic indicators
Blood samples were collected into sodium citrate monovettes for measuring biochemical markers such as serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), and gamma-glutamyl transpeptidase (GGT). These markers, along with serum and liver tissue TG, TC, LDL, and HDL, were measured spectrophotometrically by an automatic biochemical analyzer (AU5400, Beckman, USA) according to the kit instructions, as described previously [31]. The primer sequences are listed in Table 1.
2.7.1.2. Extraction of sample total RNA. Tissue was homogenized with liquid nitrogen, then incubated with Trizol and allowed to stand for 5-10 min at room temperature. Then 0.2 ml chloroform was added and mixed vigorously for 15-30 s. The mixture was allowed to stand for 3 min and centrifuged at 12,000 rpm for 15 min at 4 °C. The aqueous phase containing the RNA was transferred to a new EP tube, 0.5 ml isopropanol was added and mixed, and the tube was kept in an ice bath for 10 min and centrifuged at 4 °C, 12,000 rpm for 10 min. The aqueous phase was transferred to a fresh EP tube again, precipitated with 75% ethanol on ice, and centrifuged for 5 min at 12,000 rpm at 4 °C. The pellet was then dried and dissolved in DEPC-treated water, and the RNA concentration was measured before storage at −80 °C. Electrophoresis of 5 μL of RNA was done on a 1% agarose gel to check RNA integrity.
2.7.1.3. Reverse transcription synthesis of cDNA. TIANScript RT kit instructions were followed strictly. The RT reaction was prepared by combining the following components in a sterile microcentrifuge tube: 1 μg total RNA, 2 μL Oligo(dT)15, 2 μL Super Pure dNTP, and RNase-Free ddH2O to a volume of 14.5 μL. The mixture was heated at 70 °C for 5 min and cooled rapidly on ice for 2 min. After brief centrifugation, the following components were added: 0.5 μL 5X First-Strand Buffer (including DTT), 0.5 μL RNasin, and 1 μL TIANScript M-MLV. The mixture was mixed gently and incubated at 25 °C for 10 min and 42 °C for 50 min, followed by heating at 95 °C for 5 min. The reaction system was diluted to 50 μL with RNase-Free ddH2O and stored at −20 °C.
2.7.1.4. Real-time PCR reaction. The NFκB and IκB mRNA were amplified with the target-gene and reference-gene primers by fluorescence quantitative PCR, and the dissolution (melting) curve was analyzed at 60-95 °C. The experiment was carried out in strict accordance with the product specifications. The reaction system was prepared by mixing 10 μL of SuperReal PreMix Plus, 0.6 μL of upstream primer (10 μM), 0.6 μL of downstream primer (10 μM), 100 ng of cDNA, 0.4 μL of ROX Reference Dye, and RNase-Free ddH2O to a volume of 20 μL. The PCR amplification procedure included initial denaturation at 95 °C for 15 min, followed by 40-50 cycles of 95 °C for 10 s, 58 °C for 30 s, and 72 °C for 30 s. The samples were heated gradually from 72 °C to 95 °C to obtain melting curves and the fusion temperature of the amplicons. See Supplementary Figure 1 for amplification and dissolution curves.
Relative quantitative analysis of the data was performed using the 2^−ΔΔCt method: the fold change of the target gene in the intervention group relative to the control group is 2^−ΔΔCt, where ΔΔCt = (Ct,target − Ct,reference)intervention − (Ct,target − Ct,reference)control.
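The Livak calculation described above can be sketched in a few lines of Python; the Ct values in the example are hypothetical and serve only to illustrate the arithmetic, not the study's actual measurements.

```python
# Sketch of the 2^-DeltaDeltaCt (Livak) relative-quantification method.
# All Ct values below are hypothetical, for illustration only.

def relative_expression(ct_target_treat, ct_ref_treat,
                        ct_target_ctrl, ct_ref_ctrl):
    """Return the fold change of the target gene (treatment vs. control)."""
    delta_ct_treat = ct_target_treat - ct_ref_treat  # normalize to reference gene
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    delta_delta_ct = delta_ct_treat - delta_ct_ctrl
    return 2 ** (-delta_delta_ct)

# Hypothetical NFkB Ct values: a higher Ct means less transcript.
fold = relative_expression(24.0, 18.0, 22.0, 18.0)
print(fold)  # 0.25 -> four-fold lower expression in the treated group
```

Because PCR product roughly doubles each cycle, a ΔΔCt of +2 corresponds to a 2^−2 = 0.25-fold (four-fold lower) expression level.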
Western blot analysis
The western blot analysis was done according to a published method [33]. Proteins extracted from liver homogenates were separated by SDS-polyacrylamide gel electrophoresis and then transferred onto a PVDF membrane (Millipore, Bedford, MA) for 1 h at room temperature.
The membranes were blocked with TBST containing 10% skimmed milk for 2 h at room temperature and then probed with the primary antibody against NFκB p65 (1:1000) (Santa Cruz Biotechnology, California, US), IκB (1:1000) (Santa Cruz Biotechnology, California, US), β-actin (1:5000) (Easybio, Beijing, China), and Histone H3 (1:5000) (Easybio, Beijing, China) overnight at 4 °C. After three 10-minute washes in TBST, the membranes were incubated with the horseradish peroxidase-conjugated secondary antibody (1:10000) at room temperature for 60 min. The membranes were washed with TBST three times. The protein bands were then visualized with an enhanced chemiluminescence detection kit (Beyotime, Shanghai, China). The blots were analyzed by scanning densitometry with a GS-700 imaging densitometer and Quantity One software. The optical density ratio of each target protein to its internal reference protein was used as the relative expression of the target protein.
Statistical analysis
Statistical analysis was performed with SPSS 17.0 software. ANOVA was used to compare multiple groups; data are expressed as mean ± standard deviation (x̄ ± s), with a test level of α = 0.05. P < 0.05 was considered statistically significant.
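As an illustration of the multi-group comparison used above, the one-way ANOVA F statistic can be computed directly from group data; the group values below are hypothetical and are not the study's measurements.

```python
# Minimal one-way ANOVA F statistic for k independent groups,
# mirroring the multi-group comparison described above.
# Group data are hypothetical, for illustration only.

def one_way_anova_f(*groups):
    """Return the F statistic (between-group MS / within-group MS)."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical pathology-score-like data for three groups.
f = one_way_anova_f([6.1, 6.8, 7.0], [4.5, 4.8, 4.7], [1.9, 2.1, 2.0])
```

The F statistic would then be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the P value; in practice a package such as SPSS or `scipy.stats.f_oneway` handles this step.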
Liver index of rats in each group
There was a statistically significant difference in the liver index when comparing the high dose procyanidin Group D to the model Group B and low dose procyanidin Group C (P < 0.05) (Supplementary Table 1).
Effect of procyanidins on pathology (H&E) and ultrastructure (TEM) of rat liver tissue
H&E staining of the control Group A showed normal lobular structure, orderly hepatic cords, and absence of steatosis. In the model Group B, alcohol and iron produced liver damage such as Mallory bodies, fat vacuoles, inflammatory infiltration, hepatocyte hyaline degeneration, and microvesicular steatosis, and significantly increased the liver pathology score compared to the control Group A. In the procyanidin Groups C and D, the liver pathology score improved; the structure of the hepatic lobules was still damaged, but the degree of hepatocyte necrosis was significantly reduced in comparison with the model Group B. In the low dose procyanidin Group C, fat droplets were found in the cytoplasm of the hepatocytes, and the hepatic cords were irregularly arranged with inflammatory cell infiltration. In the high dose procyanidin Group D, steatosis was reduced, hepatic histopathological changes improved significantly, and hepatic cord arrangement and tissue structure were normal. High dose procyanidin improved the liver pathology score and steatosis markedly as compared to low dose procyanidin. The pathological scores of the model Group B and the low and high dose procyanidin Groups C and D were 6.58 ± 0.90, 4.69 ± 0.70, and 2.00 ± 0.73, respectively (P < 0.05) (Figure 1A) (Supplementary Table 2). In contrast to the model Group B, the hepatic ultrastructure of the control Group A showed hepatocytes without fat droplets, an intact nuclear membrane, oval or round nuclei, clear nucleoli, clear mitochondrial cristae and normal mitochondrial morphology, normal rough endoplasmic reticulum with abundant ribosomes, and capillary bile ducts with tight junctions at the cell boundaries.
The model Group B showed hepatic cells with irregular shape and increased fat droplets; the mitochondria were swollen and deformed, the nuclear membrane was fuzzy or irregular, the cristae were indistinct, lysosomes increased, and the rough endoplasmic reticulum showed disorganization, swelling, and fracture. In the low dose procyanidin Group C, the mitochondrial lesions decreased and the number of mitochondria increased, the breakage and disorder of the rough endoplasmic reticulum improved, and the number of lysosomes and lipid droplets decreased. In the high dose procyanidin Group D, compared to the model Group B, the morphology of liver cells was normal, with absence of fat droplets in the cytoplasm; the nuclear membrane remained intact, the mitochondria were nearly normal with reduced pathological changes, and the disordered and degraded rough endoplasmic reticulum improved (Figure 1B) (Supplementary Tables 3 and 4).
3.4. Effect of procyanidins on NFκB, IκB protein and mRNA expression levels in rat liver tissue

Compared to the model Group B, the procyanidin treated Groups C and D had significantly reduced mRNA expression levels of NFκB (P < 0.05). The expression level of IκB mRNA in the procyanidin Groups C and D significantly increased (P < 0.05) compared to the model Group B. The western blotting results revealed that NFκB p65 protein expression was decreased and IκB protein expression was increased in the procyanidin Groups C and D compared to the model Group B (P < 0.05). Procyanidins affected these protein and mRNA expression levels in a dose-dependent manner (Figure 3) (Supplementary Tables 5 and 6).
Effect of procyanidins on serum inflammatory factor levels in rats
The concentrations of the cytokines TNF-α, IL-6, IL-4, and IL-10 were measured and found to be elevated in the model Group B. TNF-α and IL-4 concentrations increased in the low dose procyanidin Group C compared to the control Group A, but showed no significant difference from the model Group B. The concentrations of IL-6 and IL-10 in the low dose Group C significantly increased compared to the control Group A and decreased compared to the model Group B. The high dose procyanidin treated Group D showed significant improvement in cytokine levels compared to the model Group B, signifying the anti-inflammatory effect of procyanidins (P < 0.05) (Figure 4) (Supplementary Table 7).
Discussion
The outcome of the study highlights that procyanidins, a class of flavonoids, are effective in reversing the adverse effects of high alcohol consumption and dangerous levels of iron in the liver, which are responsible for causing ALD. These beneficial effects of procyanidin are evident from the following improvements: decreased histopathological damage, suppressed serum biochemical indices, improved serum lipid profiles, attenuated serum inflammatory cytokines, down-regulated NFκB mRNA and NFκB p65 expression, and upregulated IκB mRNA expression. Procyanidin's hepatoprotective effect could result from its ROS-scavenging nature, since ROS overproduction and buildup are sparked by oxidative stress; scavenging them thereby alleviates hepatic injury. These findings strongly suggest that procyanidin can effectively prevent the progression of hepatic damage caused by high alcohol and iron. The results of the study correlate with previous findings on similar subjects that evaluated the effect of procyanidins on liver function. A study by Blumberg et al. also indicated that procyanidins in cranberry juice resulted in better serum lipid profiles, glucoregulation, and improvement in the activities of serum biomarkers for inflammation control [35]. In another study, researchers confirmed the protective ability of procyanidin against hepatotoxicity in rats at the mitochondrial level, owing to its antioxidant, metal-chelating, and free-radical-scavenging properties [7]. Previous studies have shown that alcohol-iron exposure can cause histopathological changes in the liver, such as hepatocyte swelling, the appearance of fat droplets, inflammatory cell infiltration, and hepatic cord derangement, reflecting the functional and morphological damage in alcohol-related liver injury [20].
Typical ALD characteristics demonstrated in Group B included the existence of Mallory bodies, fat vacuoles, inflammatory cell infiltration, hepatocyte hyaline degeneration, and microvesicular steatosis. Although there was evidence of liver damage in the procyanidin groups C and D, the degree of hepatocyte necrosis was significantly lower compared to the model Group B, as indicated by HE. However, a better recovery was observed in the high-dose procyanidin Group D, with animals exhibiting less steatosis, inflammation, and necrosis, showing that the effect of procyanidin in addressing liver damage is dose-dependent. Our analysis revealed that procyanidin intervention alleviated inflammation and fat accumulation in the liver and improved pathological injuries, the result being consistent with a previous report [36].
The high presence of iron in the liver is a major cause of oxidative stress, as research shows that it induces and amplifies the Fenton reaction, resulting in the production of free radicals in large amounts [37,38]. The free radical-antioxidant interaction is a normal process that occurs in hepatocytes, where the liver tries to limit free-radical production and accumulation in the body through antioxidants [39,40]. When ROS production surpasses the ability of the antioxidants to neutralize them, the condition is referred to as oxidative stress [41]. ROS such as hydroxyl radicals, singlet oxygen, superoxide radicals, and hydrogen peroxide are usually produced by the mitochondria of inflammatory and immune cells such as Kupffer cells, and in other processes such as arachidonic acid metabolism [41]. Cells use superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx) as antioxidants to defend themselves against increased ROS, but these are not effective against the high amounts of ROS produced when hepatic iron is high [38,41]. The high production of ROS affects the other cell organelles, which was noted in the model Group B, where the liver cells presented irregular shapes with increasing fat droplets characterized by a fuzzy or irregular nuclear membrane, swollen and deformed mitochondria, enlarged lysosomes, indistinct cristae, and a swollen, fractured, and disorganized rough endoplasmic reticulum, as indicated by TEM.
Enzyme markers, such as ALT, AST, and GGT, are widely used in animal research as indices of liver function and injury [42,43]. The liver TG level is known to be an indicator of hepatic lipid accumulation [44]. In the current study, elevated serum ALT, AST, GGT, and liver TG levels verified the hepatic injury in rats treated with alcohol and iron. The results demonstrated that procyanidin intervention could decrease the activity levels of serum ALT, AST, GGT, and liver TG. In conclusion, procyanidin induced hepatoprotective effects in alcohol-iron induced liver injury, as indicated by liver histopathological examinations and liver function-enzyme assays.
Ioannou et al. had previously stated that chronic alcohol consumption causes increased ferritin concentration and serum transferrin saturation, as well as elevated hepatic iron stores [45]. Alcohol also increases cholesterol metabolism in the liver by elevating its synthesis, absorption, and excretion while limiting the absorption of cholesterol in the intestines [31,46]. Silva et al. found that high iron intake could cause lipid metabolism disorder in rats [47]. Alcohol and iron also cause high serum LDL concentration and proliferation of serum markers, which interfere with hepatocyte functions, making alcohol and iron immediate culprits of hepatosteatosis [48,49]. In contrast, high levels of procyanidins are associated with the prevention of hepatosteatosis through downregulation of TG, TC, and LDL production, which resulted in the preservation of liver structure and function as found in this study.
The result of the experiment also showed that there was a marked increase in serum inflammatory cytokines TNF-α, IL-6, IL-4, and IL-10 in the model Group B. The high dose procyanidin Group D showed significant improvement in cytokine levels as compared to the model Group B signifying the anti-inflammatory effect of procyanidins [1,8]. Research shows that ethanol-mediated microbial colonization and increase in the gut opens up the tight junctions in the intestines, facilitating the release of large amounts of endotoxins into the intestinal lumen and subsequently transported to the liver [50]. The inability of the liver to clear these high amounts of endotoxins in the blood causes them to accumulate, resulting in the activation of the immune system [51,52]. This activation is mainly by Kupffer cells, which release large amounts of chemokines and pro-inflammatory cytokines in response to the metabolic and functional deficiencies brought about by the high presence of endotoxins [53]. Lipopolysaccharide (LPS) and other endotoxins affect the biological functions of the non-immune, immune, and parenchymal cells, and the observed inflammatory response in the experiment is usually a primary occurrence in the development of ALD.
Another observation in the experiment that can be explained using research findings is the increased level of protein expression of NFκB p65 and subsequent inhibition of IκB in the model Group B. The occurrence of this phenomenon is related to the ethanol-mediated proliferation of gram-negative bacteria in the gut, which results in the production of endotoxins. The IκB helps in the cellular reaction to inflammation and is usually phosphorylated by LPS, antigen receptors, growth factors, and cytokines through activation of the IKK complex [54]. The activity of the IκB is affected by the upregulation of NFκB p65 in the presence of high amounts of intravenous endotoxins because the latter participates in the production of cytokines, which explains why the serum inflammatory cytokines TNF-α, IL-6, IL-4, and IL-10 were observed in high amounts in the model Group B [54]. The NFκB dimer is usually kept inactive by IκB in the cytoplasm of normal cells. But when the body is invaded by pathogens, the NFκB p65 signaling pathway is activated, causing an immune response [54]. The activation of the NFκB helps in the production of antibodies that try to limit the number of endotoxins in the bloodstream. It is usually a crucial stage in ALD development as it signifies the inability of the liver to eliminate numerous endotoxins in the blood, requiring the innate immune response to help in ameliorating the conditions. In this study, high dose procyanidin Group D showed a lower NFκB activity in the liver, which is a sign of a reduction in the expression of hepatic inflammatory markers, underlining the ability of procyanidins to control liver inflammation by inhibiting the production of proinflammatory cytokines.
A study conducted by Terra et al. compared rats fed a high-fat diet with those fed a high-fat diet together with procyanidins from grape seeds. The researchers found that rats fed only the high-fat diet had high production of C-reactive protein (CRP), while those that consumed procyanidins had lower plasma CRP levels, owing to the downregulation of CRP mRNA expression, especially in the mesenteric white adipose tissue and the liver [55]. The researchers also found that procyanidin downregulated the expression of the pro-inflammatory cytokines TNF-α and IL-6 while increasing adiponectin mRNA levels in the mesenteric white adipose tissue [55]. The study shows that procyanidin controls CRP at the level of its synthesis and, together with the inhibition of the proinflammatory cytokines TNF-α and IL-6 and the upregulation of the anti-inflammatory cytokine adiponectin, the compound can be useful in treating inflammatory illnesses [55]. The findings from this study and from the literature underline the potential of procyanidins in lowering obesity-related adipokine dysregulation, which can be useful in managing metabolic and cardiovascular risk factors.
Researchers have also investigated the effectiveness of procyanidins in preventing oxidative stress and preserving liver integrity. In a study by Decorde et al., the researchers examined the effectiveness of polyphenolic grape seed extract (GSE) in overcoming obesity by reducing oxidative stress and addressing adipokine imbalance [56]. The study showed that procyanidins in the GSE reduced abdominal fat, insulinemia, elevated plasma glucose, and leptinemia by more than 16.5% and increased the level of adiponectin by 61% [56]. Rodríguez-Ramiro et al. determined that cocoa polyphenolic extract, together with procyanidin, is effective in preventing oxidative stress caused by dietary acrylamide, by enhancing the redox status of cells and by obstructing the apoptotic pathways triggered by acrylamide [57]. Polyphenols such as procyanidin and epicatechin can thus contribute greatly to improving personal health.
The effectiveness of polyphenols such as procyanidins, epicatechin, phenolic acids, carotenoids, and flavonoids in preventing a myriad of illnesses such as cancers, tumors, diabetes, heart problems, liver disease, and neurodegenerative disorders is due to their activity as antioxidants [56]. Antioxidants are free-radical scavengers, helping to control the production of the reactive oxygen species (ROS) that damage small blood vessels and release inflammatory cytokines, resulting in organ damage [58]. When ROS production overwhelms the body's ability to regulate them, oxidative stress ensues, a condition that adversely alters proteins, lipids, and DNA [59]. Therefore, the ability of antioxidants such as procyanidins to control the production of ROS and reduce oxidative stress is significant in preventing disease.
Patients with chronic ALD are usually at risk of liver failure, which can put their lives in danger when the situation is not addressed. The present study highlighted the potential of procyanidins to attenuate the debilitating effects of excess alcohol consumption and iron overload, which are the major risk factors of acquiring ALD [60,61]. The compound can be used by individuals who are at risk of getting ALD, and in alleviating situations where prevention of liver inflammation is required to ensure the liver disease does not worsen. Procyanidins are highly abundant in plants and can be a cheaper alternative to enhancing personal health as supplements instead of relying on expensive medications. This study shows that there is great potential in using the compound to treat and prevent other ailments emanating from oxidative stress in the liver, such as diabetes and associated illnesses.
Declarations
Author contribution statement

Funding statement
Bioinspired Adaptive, Elastic, and Conductive Graphene Structured Thin-Films Achieving High-Efficiency Underwater Detection and Vibration Perception
The lateral-line-like underwater mechanical sensor (LUMS) realizes the imitation of the structure and function of the lateral line. The detection range of water depth can be controlled by adjusting the size of the graphene/Ecoflex Janus film on LUMS. The maximum measured depth is 1.8 m. Similar to the fish, the mechanical stimuli from land and water can be sensitively captured by LUMS in real time. Underwater exploration has been an attractive topic for understanding the very nature of lakes and even deep oceans. In recent years, extensive efforts have been devoted to developing functional materials and their integrated devices for underwater information capturing. However, there still remains a great challenge for water depth detection and vibration monitoring in a high-efficiency, controllable, and scalable way. Inspired by the lateral line of fish, which can sensitively sense the water depth and environmental stimuli, an ultrathin, elastic, and adaptive underwater sensor based on an Ecoflex matrix with embedded assembled graphene sheets is fabricated. The graphene structured thin film is endowed with favourable adaptive and morphable features, which can conformally adhere to the structural surface and transform to a bulged state driven by water pressure. Owing to the introduction of the graphene-based layer, the integrated sensing system can actively detect the water depth over a wide range of 0.3–1.8 m. Furthermore, similar to the fish, the mechanical stimuli from land (e.g. knocking, stomping) and water (e.g.
wind blowing, raining, fishing) can also be sensitively captured in real time. This graphene structured thin-film system is expected to demonstrate significant potentials in underwater monitoring, communication, and risk avoidance.
Introduction
Exploration in extreme environments, especially underwater monitoring and communication, is of great significance to understand the unknown and/or unmapped underwater world. Precisely and stably capturing underwater signals of depth and dramatic/tiny vibration can provide abundant and essential information for underwater forewarning, creature tracking, and environmental considerations [1][2][3][4][5][6]. To date, extensive efforts have been devoted to developing underwater sensing materials and their integrated devices [7][8][9][10][11][12]. Soft materials featuring anti-corrosion and flexibility are considered a promising candidate, enabling piezoresistive [13][14][15], capacitive [16,17], and piezoelectric [7,18] mechanisms for underwater sensing. Since the water environment may adversely affect the conductivity of the sensors, some strategies have been developed to tackle this problem, including superhydrophobic sealing [19][20][21][22][23] and polymer encapsulation [24][25][26]. Among them, the superhydrophobic sealing method heavily relies on micro-nano-structures [15,19], whose long-term stability may suffer from wettability failure, and the encapsulation approach may reduce device sensitivity. In addition, eye-catching forms of materials and integrated devices may be easily attacked by underwater creatures, which should also be considered. Thus, the design of underwater sensors with flexible, environmentally stable, and imperceptible properties for broad water depth detection and tiny vibration perception in one integrated system still remains a challenge.
In nature, fish can actively sense the external environment (e.g. water depth and mechanical vibration) and further adapt their behaviours to the surroundings. In the fish sensing systems, the lateral line has played vital roles in perceiving the external water pressure, flow movement and diverse vibrations (Fig. 1a-c). In water, the pressure change can go through the pores on the scales to the lateral line canal, which can act on the nerves for real-time and high-sensitive underwater sensing (Fig. 1d) [27][28][29][30][31]. Inspired by the fish sensing system, herein, we developed an artificial lateral line system enabled by a flexible and imperceptible thin-film in a self-supported state to simultaneously perceive wide-range underwater depth and large/tiny vibration. To achieve stable underwater sensing, a Janus film structure was designed and further integrated as a self-supported sensor with Ecoflex layer exposed to water. As a proof of concept, an artificial fish lateral line system was constructed to imitate the underwater sensing functions of fish. Owing to the ultrathin and elastic features, the fabricated film-based sensor can experience a remarkable deformation to achieve reversible resistance change for real-time water depth detection with a maximum depth of 1.8 m. Moreover, similar to fish, it can sensitively perceive the tiny vibration above and below the water surface, such as wind blowing, raining, underwater creatures' attacking, etc. The novel and simple design of this work is expected to demonstrate significant potentials in applications of underwater monitoring, communication and rescue.
Materials
Graphene slurry (5 wt%) with surface defects (Supplementary Fig. S1) obtained by mechanical exfoliation was acquired from Ningbo Morsh Technology Co., Ltd, and the size range of graphene flakes was ~ 2.5-7 μm (Supplementary Fig. S2). Owing to the multi-layered structure, the thickness of graphene flakes was ~ 10 nm ( Supplementary Fig. S3). Silicon rubber (Ecoflex™ 00-50) was obtained from Smooth-On, Inc. N-heptane (AR, 98%) was acquired from Shanghai Aladdin Biochemical Technology Co., Ltd. Ethyl alcohol (AR, ≥ 99.7%) was purchased from Sinopharm Chemical Reagent Co., Ltd. Deionized water was used as the substrate to prepare single graphene film and graphene/ Ecoflex Janus film at the air/water interface.
Fabrication of Graphene Film
Firstly, graphene slurry (5 g) was dispersed in 250 mL anhydrous ethanol, followed by strong ultrasonication (250 W) for about 6 h to obtain a stable dispersion. Then, the graphene dispersion (35 mL) was sprayed equably on the water surface. After stabilization for about 30 min, the microporous sponge was put on one side of the water/air interface to siphon water, while the graphene nanosheets were tightly stacked toward the opposite direction of the siphon. Finally, the graphene film with a closely packed structure was formed at the water/air interface until the pre-assembled film could not be compressed further.
Fabrication of Graphene/Ecoflex Janus Film
The Ecoflex part A and part B (1:1, w/w) were dissolved in n-heptane at a weight ratio of 7.71% and then ultrasonicated for 5 min to obtain a homogeneous solution. After that, the Ecoflex solution (35 mL) was added dropwise onto the graphene/air interface along the container wall, followed by the evaporation of n-heptane and curing of Ecoflex at room temperature for 6 h. Subsequently, the graphene/Ecoflex ultrathin Janus hybrid film was obtained at the water/air interface.
Preparation of Lateral-Line-Like Underwater Mechanical Sensor
At first, a hole with an appropriate diameter was cut out in the middle of the petri dish, and then an ultrathin layer of Ecoflex prepolymer was scraped on its surface after rinsing the petri dish with anhydrous ethanol. The prepared graphene/Ecoflex Janus film was transferred onto the surface of the petri dish with part of the film attached to the substrate and the other part self-supported. After the curing of the Ecoflex prepolymer layer at 60 °C for 1 h, the film was closely attached to the substrate. Then, the aluminium wire was connected to the graphene side with silver paste cured in a 60 °C oven. After that, a polymethyl methacrylate (PMMA) lid (diameter: 36 mm, height: 15 mm) was placed over the graphene side and sealed with Ecoflex. Finally, the lateral-line-like underwater mechanical sensor, whose graphene sensing layer was encapsulated and whose Ecoflex elastic layer was exposed to water, was successfully fabricated.
Characterization and Measurements
Surface and cross-section morphology of the film was observed with a Hitachi S-4800 field-emission scanning electron microscope. The microscopic images of fish scales and tapes were captured by an OLYMPUS BX51 polarizing microscope. The Raman scattering measurements were carried out with an R-3000HR spectrometer (Raman Systems, Inc., R-3000 series) excited by a solid-state diode laser (532 nm), over a frequency range of 3500–200 cm−1. A Z1 Zwick/Roell Universal Testing System was used to test the stress-strain characteristics of the films. When the LUMS detected underwater depth or external stimuli, the electrodes at both ends of the sensor were connected to the electrochemical workstation through wires, and the electrochemical workstation was connected to the computer (Supplementary Fig. S4). The electrochemical workstation (CH Instruments CHI660E, Chenhua Co., Shanghai, China) was used to record the real-time current (I) under a constant voltage (V0) of 1 V, and the real-time resistance (R) was calculated as R = V0/I. The water contact angle was measured using a Dataphysics OCA25 instrument by carefully dropping a 3-µL water droplet on the surface of the film. AFM measurements were conducted using a Dimension ICON SPM (Bruker, USA) in Peak Force tapping mode.
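The conversion from the recorded current trace to resistance described above is a direct application of Ohm's law; a minimal sketch follows, in which the current readings are hypothetical values chosen only to illustrate the R = V0/I arithmetic.

```python
# Converting a recorded real-time current trace to resistance via R = V0 / I.
# The current readings below are hypothetical, for illustration only.

V0 = 1.0  # constant applied voltage, in volts (as used in the measurement setup)

currents_mA = [0.143, 0.140, 0.131, 0.120]  # example current readings, mA

# R = V0 / I, with mA converted to A, then expressed in kOhm.
resistances_kOhm = [V0 / (i * 1e-3) / 1e3 for i in currents_mA]
```

A decreasing current at fixed voltage corresponds to an increasing film resistance, which is the signal the sensor uses to track deformation in real time.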
Fabrication and Application of the Janus Films
Inspired by the structure and sensing mechanism of the lateral line, a Janus film composed of graphene and Ecoflex was designed via interfacial functionalization strategy. The assembled graphene film functioned as a sensing layer of the Janus film. It was fabricated via Marangoni effect induced self-assembly and the capillary force driving compression at the air/water interface [32][33][34]. And the thickness of the assembled graphene film was about 200 nm (Supplementary Fig. S5). Subsequently, Ecoflex prepolymer elastomer dispersed in heptane solution was gradually dropped onto the surface of graphene film, followed by a curing procedure. The interfacial functionalization strategy enables the formation of ultrathin Janus film at the air/water interface ( Supplementary Fig. S6a), and the initial resistance of the Janus film (59.5 μm × 1.5 cm × 3.9 cm) was 6.98 kΩ. In order to explore the hydrophilicity and hydrophobicity of the Janus film, water contact angle was measured by carefully dropping a 3-µL water droplet on the surface of films. In comparison with the pure graphene film with a water contact angle of 20.2°, the graphene side of the Janus film displayed a bigger water contact angle of 88.9° ( Supplementary Fig. S7). In addition, the water contact angle of Ecoflex side of the Janus film is 109.4°, similar to that of the pure Ecoflex film ( Supplementary Fig. S7). The hydrophobic property of Ecoflex layer ensures the hydrophobicity of the Janus film and makes it possible to be used underwater.
Owing to the ultrathin and conductive features of the Janus film, it can be driven to deform by tiny water vibration and low/high water pressure for real-time electrical signal output. As an analogue to the lateral line of fish, it can actively detect the mechanical stimuli from the surrounding environments and sense the change of water pressure decided by the water depth (Fig. 1e). Therefore, by mimicking the structure of the real lateral line, the acquired Janus film can be conformally transferred onto a model fish to achieve a lateral-line-like underwater mechanical sensor (LUMS). In our system, a model with a hollow structure was designed to assemble with the Janus film for a self-supported structure, in which the Ecoflex layer was exposed to water and the graphene layer was sealed in the air chamber of the model. Besides, the electrodes were applied on the surface of the graphene layer to form an LUMS (Fig. 1f). With the increase in water depth, the Janus film can experience a simultaneous deformation, accompanied by a corresponding current change for real-time depth detection. As shown in Fig. 1g, LUMS was located at a series of water depths, such as 0, 10, 20, 30, and 40 cm. It can be clearly observed that the film can be actuated by the water pressure and actively experience a gradual increase in the degree of deformation. The real-time mechanical deformation is expected to induce the corresponding electrical signal change for efficient water depth detection.
Structural Characterization and Properties of the Janus Film
Since the interface functionalization strategy enables the formation of an ultrathin and transferrable film, the achieved Janus film can be easily transferred onto diverse targets for conformal and robust adhesion. As shown in Fig. 2a, hollow models with different shapes were assembled with the Janus film into self-supported ones. It is observed that the film can adapt smoothly to various sophisticated shapes (e.g. triangle, circle, square, and pentacle). When functioning as a self-supported film, it can even support objects such as an iron ball with a weight of 8.34 g, which suggests robust mechanical strength (Fig. 2b). In addition, the ultrathin Janus film also shows an excellent conformal characteristic, adapting conformally to complex surfaces with complicated embossment.
Here, a fish model with a fine structure of scales and fins was used. Figure 2c shows that the film can spread smoothly along the structured embossment of the fish model, on which the scales and fins can be entirely and clearly observed. Similarly, models such as a starfish and seaweed were also selected as target substrates, demonstrating the good conformal ability of the film (Supplementary Figs. S8 and S9). Moreover, the interface functionalization method also allows the formation of a stable interface between the graphene and Ecoflex layers. As shown in Fig. 2d, adhesive tape was tightly adhered onto the graphene side of the film and then peeled off without visible residues. As shown in Supplementary Fig. S10, although there are some black residues on the surface of the tape, they are unevenly distributed and correspond to bits of weakly bonded graphene on the surface; most of the graphene is embedded in the Ecoflex and stays stable on the film. To demonstrate the advantage of this interface functionalization method, two control samples were fabricated in our experiments: one by the conventional casting method (Supplementary Fig. S6b) and one by the transferring method (Supplementary Fig. S6c). In detail, for the conventional casting method, the assembled graphene film at the water/air interface was transferred onto a glass surface, followed by casting Ecoflex prepolymer solution on its surface. However, the interaction between the graphene film and the glass substrate is too strong to allow intact peeling-off of the double-layer film (Supplementary Fig. S11a). For the transferring method, a pure Ecoflex film was first fabricated at the water/air interface, then cut and transferred onto the glass substrate. The prepared graphene film was then transferred onto and laid over the Ecoflex film at the air/water interface. After a natural air-drying procedure, the double-layered film was obtained. The results show that the cured Ecoflex film cannot effectively interact with the graphene nanosheets to form a Janus film, owing to its weak penetration into the closely packed sheets. The sample made by the transferring method presents poor stability of the graphene layer, which can be easily peeled off using adhesive tape (Supplementary Fig. S11b).

Fig. 2 a Digital photographs of the self-supported graphene/Ecoflex Janus film on frames with different shapes. b Digital photographs of the graphene/Ecoflex Janus film holding an iron ball with a weight of 8.34 g. c Digital photographs of the graphene/Ecoflex Janus film attached to the surface of the model fish. d Peeling-off of the adhesive tape from the graphene side of the Janus film. e, f SEM images of the graphene side of the graphene/Ecoflex Janus film. g Raman spectra of the pure Ecoflex film, the pure graphene film, and both sides of the graphene/Ecoflex Janus film. h Representative tensile stress-strain curves of the pure Ecoflex film and the Janus film. i Current of three kinds of films before and after applying 20% tensile strain.
Generally, compared with the conventional casting and transferring methods, the Janus film fabricated by the interface functionalization method demonstrates a stable interface between the graphene layer and the Ecoflex layer. Furthermore, SEM characterization was conducted to investigate the micro-scale morphology of the Janus film. Figure 2e, f and Supplementary Fig. S12 clearly show that the graphene nanosheets are partially embedded into the elastomeric matrix and that the exposed graphene sheets provide contact sites for the formation of a conductive pathway.
Only a wrinkled elastomer structure was observed on the Ecoflex side (Supplementary Fig. S13).
To further investigate the structural information of the Janus film, Raman spectroscopy was used to characterize the asymmetric structure. As illustrated in Fig. 2g, the Ecoflex side of the Janus film has the same Raman curve as pure Ecoflex, with characteristic peaks mainly in the low-frequency (490 and 710 cm−1) and high-frequency (2906 and 2965 cm−1) regions. The graphene side not only shows the same D (1348 cm−1), G (1579 cm−1) and 2D (2718 cm−1) characteristic peaks as the pure graphene film, but also the characteristic peaks of pure Ecoflex. This further indicates that the graphene layer is partially wrapped by Ecoflex to form a semi-embedded structure, which may result from the infiltration of the Ecoflex solution into the graphene layer during preparation. In addition, Raman mapping clearly illustrates the Janus structure of the film (Supplementary Fig. S14). More importantly, compared with the pure Ecoflex film, the asymmetric introduction of the graphene layer into the elastomer system does not remarkably affect the mechanical strength (Fig. 2h). Additionally, the electrical behavior of the three samples was measured (Fig. 2i). When a 20% strain was applied and then released, only the current of the Janus film recovered to its initial value, representing good electrical stability and repeatability (Supplementary Fig. S15).
Moreover, the strain sensing performance of the Janus film-based strain sensor was investigated, including sensitivity, stability, and the responses to different strains and stretching frequencies. The response is represented by the relative resistance change

ΔR/R0 = (R − R0)/R0, (1)

where R is the instantaneous resistance in the stretched state and R0 is the initial resistance in the relaxed state. The tensile strain (ε) and gauge factor (GF) are calculated by Eqs. (2) and (3), respectively:

ε = (l − l0)/l0, (2)

GF = (ΔR/R0)/ε, (3)

where l and l0 are the stretched and initial lengths of the film. As shown in Supplementary Fig. S16, with increasing concentration of the graphene dispersion, the tensile strain response range of the Janus film expanded, while its sensitivity gradually decreased over the same tensile strain range. To ensure excellent conductivity and high sensitivity of the Janus film under large deformation, a graphene dispersion of 1 mg mL−1 was chosen to prepare the assembled graphene film and the Janus film. As shown in Fig. 3a, the Janus film-based strain sensor displays a large GF over the full sensing range. The corresponding GFs are 36 (ε: 0-10%), 90 (ε: 15-20%), and 1070 (ε: 30-35%), which demonstrates that the sensor can sense external stimuli with high sensitivity. The relative resistance increases with increasing strain under cyclic tensile strains of 1%, 5%, 10%, 15%, and 20% (Fig. 3b), showing good resolution at each strain. The stability of the Janus film was also studied. As shown in Fig. 3c, the relative resistance change is almost independent of frequency at a tensile strain of 20% within a range of 0.1-3 Hz. Besides, the relative resistance change varies periodically with no obvious fluctuation or drift in each cycle, presenting excellent stability and long-term durability during cyclic tensile tests between 0 and 20% tensile strain for 5000 cycles (Fig. 3d).
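The definitions in Eqs. (1)-(3) amount to simple arithmetic. The sketch below uses hypothetical resistance values (chosen only so that the result matches the GF ≈ 36 reported for the 0-10% interval; they are not measured data) and computes the gauge factor as the slope of ΔR/R0 versus ε over an interval:

```python
# Sketch of the gauge-factor arithmetic in Eqs. (1)-(3).
# Resistance values are hypothetical, not measured data.

def relative_resistance_change(r, r0):
    """Eq. (1): Delta R / R0 = (R - R0) / R0."""
    return (r - r0) / r0

def gauge_factor(strain_a, r_a, strain_b, r_b, r0):
    """Eq. (3) over an interval: slope of (Delta R / R0) versus strain."""
    d_rel = relative_resistance_change(r_b, r0) - relative_resistance_change(r_a, r0)
    return d_rel / (strain_b - strain_a)

if __name__ == "__main__":
    r0 = 6.98e3  # initial film resistance in ohms, taken from the text
    # hypothetical resistance at 10% strain, chosen to reproduce GF = 36
    gf_low = gauge_factor(0.00, r0, 0.10, 4.6 * r0, r0)
    print(f"GF over 0-10% strain: {gf_low:.1f}")  # -> 36.0
```

The same two functions apply to any other interval of the strain sweep, e.g. the 15-20% and 30-35% ranges quoted above.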
Water Depth Detection of the LUMS
Based on the sensitive and stable electrical performance of the Janus film, it was further integrated into the LUMS to achieve water-depth detection. The sensitivity of a conductive material might be negatively affected when assembled into underwater sensors fabricated by a polymer encapsulation method: during encapsulation, the polymer penetrates into the gaps of the conductive material, increasing its initial resistance. As shown in Supplementary Fig. S17, compared with a sensor made from the Janus film encapsulated in Ecoflex, the LUMS based on the bare Janus film possesses good sensing sensitivity and electrical stability. As displayed in Supplementary Fig. S18, a model with a hollow circular structure was designed and the Janus film was transferred onto the model surface with integrated electrodes. Similar to the lateral line of fish, which senses water pressure mediated by pores in the scales, the LUMS leaves the Ecoflex side exposed to water while the graphene side is encapsulated in the chamber. As a result, with increasing water depth, the generated water pressure drives the film to deform to a balanced state. Note that the water pressure (P1) applied on the Janus film was calculated according to Eq. (4):

P1 = ρgh, (4)

and the relationship between the gas pressure (P2) and volume (V) in the sealed system is

P2V = nRT, (5)

where ρ represents the density of water, g is the acceleration of gravity, h is the height from the LUMS to the water surface, n is the amount of gas substance, T is the absolute temperature, and R is a constant of about 8.314 J K−1 mol−1. When the LUMS was employed to detect water depth, the film experienced inward deformation driven by the water pressure. This deformation reduces the volume of the air chamber and thereby increases the inner air pressure.
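Equations (4) and (5) together fix the equilibrium state of the membrane. The sketch below works out that balance under the simplifying assumptions (ours, not stated in the paper) that the sealed chamber starts at atmospheric pressure P0 with volume V0 and compresses isothermally, so nRT = P0·V0 and equilibrium is reached when the absolute outer pressure P0 + ρgh equals the inner pressure P0·V0/V:

```python
# Minimal sketch of the pressure balance behind Eqs. (4)-(5), assuming an
# isothermal sealed chamber initially at atmospheric pressure P0 with volume V0.
# Equilibrium: P0 + rho*g*h (outside, absolute) = P0 * V0 / V (inside, Boyle's law).

RHO = 1000.0    # water density, kg m^-3
G = 9.81        # gravitational acceleration, m s^-2
P0 = 101325.0   # atmospheric pressure, Pa

def water_gauge_pressure(h):
    """Eq. (4): P1 = rho * g * h, with depth h in metres."""
    return RHO * G * h

def equilibrium_volume_ratio(h):
    """V / V0 at the depth where inner and outer pressures balance."""
    return P0 / (P0 + water_gauge_pressure(h))

if __name__ == "__main__":
    for h in (0.0, 0.5, 1.8):  # depths within the paper's detection range
        print(f"h = {h:4.1f} m: P1 = {water_gauge_pressure(h):8.0f} Pa, "
              f"V/V0 = {equilibrium_volume_ratio(h):.4f}")
```

The monotonic fall of V/V0 with depth is what produces the terraced, depth-dependent current platforms described below.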
When P1 was in equilibrium with P2, the dynamic deformation of the film reached an equilibrium state. In our system, with increasing water depth, an apparent film deformation appeared together with a simultaneous terraced current platform at each depth. Taking the self-supported film with a diameter of 10 mm as an example, the LUMS responds sensitively at a depth of 3 cm and can detect depths of up to 50 cm (Fig. 4a). When the depth was further increased to 60 cm, the resulting current no longer changed, so the maximum detectable depth had been reached (Supplementary Fig. S19). More interestingly, it was found that the degree of deformation at a given depth is also related to the diameter of the self-supported film: as the diameter of the hollow circle increases, the degree of deformation decreases gradually. Thus, devices with four diameters were designed: 10, 15, 20, and 25 mm. As illustrated in Fig. 4b-d, the three larger samples also demonstrate step-like changes in current with increasing water depth. It was found that the maximum detection depth is positively related to the diameter; for example, the sample with a diameter of 25 mm realizes a good electrical response at depths up to 1.8 m. However, owing to the decreased film deformation at a given depth, the relative resistance of these samples decreases at the same depth as the diameter increases (Fig. 4e). Furthermore, Fig. 4f and Supplementary Fig. S20 clearly show the positive linear relationship between diameter and maximum water depth. To further explore the influence of film diameter on the sensing performance of the LUMS, the response time of sensors of different sizes to the same stimulus was measured. The response time is defined as the time required to reach a relative resistance variation equal to 90% of the peak value. As shown in Supplementary Fig. S21, the response time of the sensors with film diameters of 10, 15, 20, and 25 mm was 0.032, 0.046, 0.050, and 0.076 s, respectively. In general, the response time of the sensor increases with the film diameter. Moreover, a repeatability test was conducted in which the device was moved back and forth between the liquid surface and the maximum measurable depth, demonstrating good stability of the electrical performance (Fig. 4g). Considering that the LUMS is used for underwater sensing, the influence of water temperature on its sensing performance was also studied. As shown in Supplementary Fig. S22, when the temperature suddenly increased by 3 °C, the relative resistance change kept increasing and had still not reached a stable state after 200 s, in sharp contrast with the fast response of the LUMS to mechanical stimuli. Therefore, when detecting water depth or perceiving external stimuli over short timescales, the effect of water temperature changes on the sensing performance of the LUMS is almost negligible.
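The 90%-of-peak response-time definition is easy to state in code. The sketch below applies it to a synthetic first-order rise whose time constant is an assumption of ours, chosen so that the answer lands near the 0.032 s reported for the 10-mm film; the function itself works on any sampled trace:

```python
# Sketch of the response-time definition above: the time needed for the
# relative resistance change to reach 90% of its peak value. The trace is a
# synthetic first-order rise, not measured data.
import math

def response_time(times, signal):
    """First time (with linear interpolation) at which signal reaches 90% of its peak."""
    threshold = 0.9 * max(signal)
    for i in range(1, len(signal)):
        if signal[i] >= threshold:
            t0, t1 = times[i - 1], times[i]
            s0, s1 = signal[i - 1], signal[i]
            return t0 + (threshold - s0) * (t1 - t0) / (s1 - s0)
    return None

if __name__ == "__main__":
    # A first-order rise reaches 90% of its plateau at t = tau * ln(10);
    # tau is chosen so that this happens at roughly 0.032 s.
    tau = 0.032 / math.log(10)
    ts = [k * 1e-4 for k in range(2000)]
    trace = [1.0 - math.exp(-t / tau) for t in ts]
    print(f"response time: {response_time(ts, trace):.3f} s")
```

On measured data the same function would be applied directly to the sampled ΔR/R0 trace.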
Multifunctional LUMS for Underwater and Waterside Monitoring
It is well known that fish can not only sense the water depth but also sensitively capture signals from above the water to avoid potential dangers. Similarly, the LUMS was employed to sense mechanical stimuli from the external environment. As revealed in Fig. 5a, a LUMS with a diameter of 10 mm was placed into water to perceive the vibrations produced by an iron ball dropped onto the table. As shown in Fig. 5b, the device could clearly record the process in which the ball hit and bounced off the table before coming to rest. Moreover, the relative resistance was almost linearly related to the falling height of the iron ball (Fig. 5c), and the number of rebounds was precisely distinguished by the LUMS, consistent with the count obtained from video (Fig. 5c).
In addition, the time interval between peaks could be used to infer the bounce height of the iron ball (Fig. 5d). When the LUMS was placed at distances of 0, 2, and 4 cm below the water surface, the maximum relative resistance changes detected by the sensor were 0.05361, 0.11281, and 0.14284, respectively (Supplementary Fig. S23), while the response time of the sensor showed no significant difference with the underwater position. In general, the intensity of the relative resistance change generated by mechanical vibration increased as the underwater depth of the LUMS increased. Besides, the LUMS could detect signals with different frequencies produced by hitting the ground with a hammer (Fig. 5e), as well as the signal produced by stomping (Fig. 5f). Tests were also carried out to evaluate the LUMS's perception of objects falling at different distances (20, 30, and 40 cm) from the water (Fig. 5g), where the relative resistance decreased with increasing distance, demonstrating good sensitivity. Therefore, our well-designed LUMS is expected to demonstrate potential applications in underwater and waterside monitoring.
To further simulate the excellent underwater perception capability of fish, analogous to the function of the lateral line, the LUMS can also sensitively capture tiny and intense mechanical stimuli from the water surface and from beneath it. For instance, when simulated wind with varied velocities was applied to the water surface, the LUMS could rapidly perceive the fluctuation induced by the wind; note the remarkable differences in the normalized resistance between velocities of 3.4 and 4.2 m s−1 (Fig. 6a). In addition, a water droplet released from different heights can also be sensitively detected. As shown in Fig. 6b, droplets falling from heights of 20, 40, and 60 cm above the water surface could be effectively recognized. Moreover, beyond single droplets, raining conditions were also simulated in our system: Fig. 6c illustrates that the LUMS responds regularly to simulated rain with different intensities of 60, 144, and 490 mL min−1. More interestingly, similar to a fish, the LUMS can actively perceive the water fluctuation induced by a falling fishhook (Fig. 6d). In addition, naturally falling objects such as a leaf, a branch, and a stone, which lead to tiny or drastic vibration/fluctuation of the water surface, could be clearly detected (Fig. 6e). Furthermore, to efficiently tackle potential dangers beneath the water surface, the sensor also needs to realize sensitive perception of underwater stimuli. As a proof of concept, a robotic shark swinging its tail was made to move around the sensor. The LUMS could effectively monitor the distinct movement states of the shark tail with small or large amplitudes, demonstrating significant potential for underwater monitoring applications (Fig. 6f).
Conclusions
In summary, enlightened by the fish sensing system, an artificial fish lateral line enabled by a graphene-based thin film was designed to simultaneously detect water depth over a wide range and sensitively perceive tiny vibrations. The Janus-structured graphene/Ecoflex ultrathin film was endowed with conformal, elastic, conductive, and deformable properties, allowing it to adapt smoothly to structured surfaces. When integrated into a self-supported sensor, it achieves efficient and stable detection of water depth up to 1.8 m. Furthermore, tiny vibrations (e.g. wind blowing, raining, leaf falling, and underwater attacking) from the water surface or underwater can also be sensitively captured. The bioinspired Janus film-enabled underwater sensor shows significant potential in water-depth detection and vibration perception in aquatic environments.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Limit distributions for asymptotically linear statistics with spherical error
The aim of this work is to obtain general results for the limit distributions of asymptotically linear statistics when the error is spherical and the non-centrality increases. These results apply directly to homoscedastic normal error and thus to high-precision measurements. We present a numerical example on cylinder volume to illustrate the usefulness of our approach.
Introduction
Let r(u) be the spectral radius of the Hessian matrix of g(u), and let ∇g(.) denote the gradient of g(.). If, whatever d > 0, a suitable bound on r(u) holds in a neighbourhood of a as ||a|| grows, the function g(.) will be asymptotically linear; see [6], [7] and [8].
In this paper we intend to obtain limit distributions for statistics

Y = (g(a + e) − g(a)) / ||∇g(a)||,

where g(.) is asymptotically linear and the error e has a spherical density, when ||a|| → ∞. Numerical methods may be used to obtain a lower bound for ||a|| such that the distribution of Y is sufficiently near the limit distribution for the latter to be used. Namely, this approach was applied in [6] and [8], leading to the establishment of application domains for the limit distributions. We point out that those domains are defined from lower bounds for ||a|| and not from minimum sample sizes. Besides this, an observation X = μ + e with mean value μ and variance σ² will have non-centrality μ²/σ², which decreases as σ² increases. In this way, high non-centrality will be associated with great precision. We may thus associate the application of these limit distributions with high-precision observations.
In the next section we will present the required results on spherical densities. This will be followed by the presentation of the key result that the limit density will be the marginal density of e, whose components have identical densities. The case in which e is normal is singled out in Section 4; namely, we will show how to use additional information to handle the case in which e has variance-covariance matrix σ²I_k with unknown σ². In Section 5 we apply our results to a numerical study of the cylinder volume. Finally we present some concluding remarks.
Lemma 1. Let X have spherical density f(.), with marginal density f̄(.). Then: 1. PX has the same density as X, for every orthogonal matrix P; 2. all the marginal densities of f(.) are identical; 3. a′X will have density f̄(.| ||a||γ), whenever X has density f(.|γ); 4. the marginal densities f̄(.) and f̄(.|γ) are symmetrical.
Proof
Let X have spherical density f(.). Then, with P orthogonal, PX will have the same density as X, since the jacobian of this transformation is equal to one, so 1. is established. Let P_i be the orthogonal matrix whose first row has all null elements except the i-th, which is equal to 1, i = 1, ..., k. Then X•_i = P_i X will have the same density as X, and its first marginal density will be the i-th marginal of f(.), i = 1, ..., k. Thus all the marginals of f(.) will be identical, and 2. is established.
Next, let P(a) be the orthogonal matrix whose first row vector is (1/||a||) a. Thus a′e will be the product by ||a|| of the first component of P(a)e. This first component has density f̄(.|γ), the marginal density of f(.|γ). Since γ is a dispersion parameter, the density of a′X will be f̄(.| ||a||γ).
The last part of the thesis follows from −I_k being an orthogonal matrix.
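The lemma can be checked numerically in the Gaussian spherical case: whatever the direction a, the projection a′e/||a|| has the same one-dimensional law N(0, σ²). A minimal standard-library Monte Carlo sketch (the directions and parameter values are illustrative):

```python
# Monte Carlo check of the lemma for the Gaussian spherical case:
# for e ~ N(0, sigma^2 I_k), the projection a'e/||a|| has marginal law
# N(0, sigma^2) whatever the direction a.
import math
import random
import statistics

def projected_samples(a, sigma, n, rng):
    """Draw n values of a'e/||a|| with e ~ N(0, sigma^2 I_k)."""
    norm_a = math.sqrt(sum(x * x for x in a))
    out = []
    for _ in range(n):
        e = [rng.gauss(0.0, sigma) for _ in a]
        out.append(sum(ai * ei for ai, ei in zip(a, e)) / norm_a)
    return out

if __name__ == "__main__":
    rng = random.Random(0)
    sigma, n = 2.0, 100000
    for a in ([1.0, 0.0, 0.0], [3.0, -1.0, 2.0]):  # two arbitrary directions
        s = projected_samples(a, sigma, n, rng)
        print(f"a = {a}: mean = {statistics.fmean(s):+.3f}, "
              f"var = {statistics.pvariance(s):.3f}")
```

Both directions give sample variance close to σ² = 4, illustrating the rotational invariance that drives items 1-3.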
Limit distributions
We will take the statistics

Y = (g(a + e) − g(a)) / ||∇g(a)|| and Z = (∇g(a))′e / ||∇g(a)||,

whatever the random vector e. With F_L the distribution of the statistic L, we have

F_Y −→u F_Z, when ||a|| → ∞,

where −→u stands for uniform convergence, whenever F_Z does not depend on the unit vector ∇g(a)/||∇g(a)|| (as long as it has norm 1), see [6].
As we saw in the previous section, if e has a spherical density, the density f_Z of Z will be f̄(.), which corresponds to the marginal density of f. If there is a dispersion parameter, the density will be f̄(.|γ).
We thus establish the following theorem.
Theorem 1. If g(.) is asymptotically linear and e has a spherical density, the limit density of Y, when ||a|| → ∞, will be the density of the components of e.
Normal case
If e ∼ N(0, σ²I_k), its components will have distribution N(0, σ²), so, from Theorem 1, we can conclude that N(0, σ²) will also be the limit distribution of Y, whatever the asymptotically linear function g(.). Let us consider an example. We will take a = μ, assuming that X = μ + e ∼ N(μ, σ²I_k), and the asymptotically linear function g(u) = ||u||².
We obtain

∇g(u) = 2u and ∇²g(u) = 2I_k,

and, according to Theorem 1, the limit density of

Y = (||X||² − ||μ||²) / (2||μ||),

when ||μ|| → ∞, will be the density of the components of e. So, for large values of ||μ||,

Y ∼o N(0, σ²),

where ∼o indicates "approximately distributed". With y a value taken by Y and ||x||² the value taken by ||X||², we have the equation ||x||² − ||μ||² = 2||μ||y in ||μ||, whose solution is

||μ|| = (y² + ||x||²)^{1/2} − y.

If we have additional information, for instance that σ² = σ̂², we can generate samples y_1, ..., y_n iid N(0, σ̂²), where iid indicates independent and identically distributed, and from these obtain the samples ||μ||_i = (y_i² + ||x||²)^{1/2} − y_i, i = 1, ..., n. According to the reverse Glivenko-Cantelli theorem, in whatever interval [q, 1 − q], with q ≤ p ≤ 1 − p,

sup{|u_{n,p} − u_p|} → 0, as n → ∞,

where u_{n,p} [u_p] is the p-th empirical [exact] quantile for ||μ||, see [4] and [5].
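This example is easy to simulate. With X ~ N(μ, σ²I_k) and high non-centrality, Y = (||X||² − ||μ||²)/(2||μ||) should be approximately N(0, σ²), and the quadratic equation inverts exactly to ||μ|| = √(y² + ||x||²) − y. A standard-library sketch (the values of μ and σ are illustrative):

```python
# Monte Carlo sketch for g(u) = ||u||^2: with X ~ N(mu, sigma^2 I_k) and large
# ||mu||, Y = (||X||^2 - ||mu||^2) / (2||mu||) is approximately N(0, sigma^2);
# ||mu|| = sqrt(y^2 + ||x||^2) - y inverts the quadratic equation exactly.
import math
import random
import statistics

def y_statistic(x, mu):
    nx2 = sum(v * v for v in x)
    nmu = math.sqrt(sum(v * v for v in mu))
    return (nx2 - nmu * nmu) / (2.0 * nmu)

def recover_norm_mu(y, nx2):
    return math.sqrt(y * y + nx2) - y

if __name__ == "__main__":
    rng = random.Random(1)
    mu, sigma = [30.0, 40.0], 0.5  # ||mu|| = 50: high non-centrality
    ys = []
    for _ in range(50000):
        x = [m + rng.gauss(0.0, sigma) for m in mu]
        ys.append(y_statistic(x, mu))
    print(f"var(Y) = {statistics.pvariance(ys):.3f}  (sigma^2 = {sigma ** 2})")
    # inversion check: one observed pair (y, ||x||^2) returns ||mu|| exactly
    x = [m + rng.gauss(0.0, sigma) for m in mu]
    y = y_statistic(x, mu)
    print(f"recovered ||mu|| = {recover_norm_mu(y, sum(v * v for v in x)):.3f}")
```

The inversion is exact by construction, since (||μ|| + y)² = y² + ||x||² when y solves the quadratic equation; the variance check shows the quality of the normal approximation at this non-centrality.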
Another interesting situation is when, instead of additional information, we have X independent of S, where S is the product by σ² of a central chi-square with r degrees of freedom, S ∼ σ²χ²_r. Then, see [4], with s the value taken by S, the q-th quantile for the distribution induced by s/χ²_r for σ² is

s / χ²_{r,1−q},

where χ²_{r,1−q} denotes the (1 − q)-th quantile of the distribution of χ²_r. Moreover, in the expression for ||μ|| we can replace σ² by s/w, where w is a value taken by a χ²_r; δ will then be a simulated value for the non-centrality parameter δ = ||μ||²/σ². The q-th quantile of δ^{1/2} is obtained from the q-th quantile (s/w)_q for σ², so we can conclude that δ_q^{1/2} decreases with s/w. We can also use the reverse Glivenko-Cantelli theorem to obtain confidence intervals for δ^{1/2} and δ. These intervals can be used to test, through duality, the hypothesis H₀: δ ≤ δ₀ against H₁: δ > δ₀. Namely, in certain applications of the STATIS methodology, see e.g. [10], we may be interested in testing H₀ against H₁: δ > δ₀, since only when H₀ is rejected can we be confident that a certain model formulation applies.
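The quantile s/χ²_{r,1−q} can be computed without a chi-square table by drawing χ²_r values directly, which is the approach sketched below (a simulation-based estimate, not the paper's computation; s and r are taken from the cylinder example that follows):

```python
# Sketch of the quantile computation above: given an observed s with
# S ~ sigma^2 * chi2_r, the q-th quantile for sigma^2 is s / chi2_{r,1-q}.
# The chi-square quantile is estimated empirically from simulated draws.
import random

def chi2_sample(r, rng):
    """One draw from chi-square with r degrees of freedom."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(r))

def empirical_quantile(values, p):
    v = sorted(values)
    return v[min(len(v) - 1, int(p * len(v)))]

def sigma2_quantile(s, r, q, rng, n=20000):
    """q-th quantile for sigma^2: s divided by the (1-q)-th chi2_r quantile."""
    w = [chi2_sample(r, rng) for _ in range(n)]
    return s / empirical_quantile(w, 1.0 - q)

if __name__ == "__main__":
    rng = random.Random(2)
    s, r = 0.005, 58  # values from the cylinder-volume example
    for q in (0.025, 0.5, 0.975):
        print(f"q = {q:5.3f}: sigma^2 quantile ~ {sigma2_quantile(s, r, q, rng):.2e}")
```

The same simulated χ²_r draws can feed the construction of δ^{1/2} quantiles and the duality-based test described above.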
Numerical example: Cylinder volume
In this section we will apply the proposed methodology to the cylinder volume, see [2] and [8]. Now the asymptotically linear function involved is

g(u) = (π/4) u1² u2,

corresponding to the volume of a cylinder with diameter u1 and height u2, so its gradient is

∇g(u) = ((π/2) u1 u2, (π/4) u1²)′.

Considering e ∼ N(0, σ²I₂), we obtain X = μ + e ∼ N(μ, σ²I₂) and, for large values of ||μ||,

Y = (g(X) − g(μ)) / ||∇g(μ)|| ∼o N(0, σ²),

where X1, X2 are the components of X and μ1, μ2 the components of μ. We will consider the data used in Nunes et al. [8]. In that research the authors generated samples of size 30 using the R software, assuming the diameters and heights to be normally distributed with mean values 2 and 4, respectively, and standard deviation 0.01. The results are presented in Tables 1 and 2, the corresponding volumes in Table 3, and the values of Y in Table 4. In this case, S ∼ σ²χ²_58 and s = 0.005. The quantiles δ_q^{1/2} of δ^{1/2} are presented in Table 5. The high values obtained for these quantiles are due to the fact that we worked with a small variance. So we can conclude that we are in a non-central situation, in which the limit distributions obtained through asymptotic linearity apply.
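The setup of this example can be replicated in simulation: with μ = (2, 4) and σ = 0.01, ∇g(μ) = (4π, π) and ||∇g(μ)|| = π√17, so Y should be approximately N(0, σ²). A standard-library sketch (the random seed and sample size are ours, and no attempt is made to reproduce the exact tables of [8]):

```python
# Simulation of the cylinder-volume example: diameters ~ N(2, 0.01^2) and
# heights ~ N(4, 0.01^2), so mu = (2, 4), grad g(mu) = (4*pi, pi) and
# ||grad g(mu)|| = pi * sqrt(17). Y should be approximately N(0, sigma^2).
import math
import random
import statistics

def volume(d, h):
    """g(u) = (pi/4) * d^2 * h."""
    return math.pi / 4.0 * d * d * h

def y_statistic(d, h, mu1=2.0, mu2=4.0):
    grad_norm = math.pi * math.sqrt((mu1 * mu2 / 2.0) ** 2 + (mu1 * mu1 / 4.0) ** 2)
    return (volume(d, h) - volume(mu1, mu2)) / grad_norm

if __name__ == "__main__":
    rng = random.Random(3)
    sigma = 0.01
    ys = [y_statistic(2.0 + rng.gauss(0.0, sigma), 4.0 + rng.gauss(0.0, sigma))
          for _ in range(50000)]
    print(f"mean(Y) = {statistics.fmean(ys):+.5f}, "
          f"sd(Y) = {statistics.pstdev(ys):.5f}  (sigma = {sigma})")
```

The sample standard deviation of Y lands close to σ = 0.01, confirming that this small-variance setting is well inside the application domain of the limit distribution.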
Final Remarks
With this research it was shown that the general results on limit distributions apply when the error has a spherical density, namely when it is normal. The numerical application to the cylinder volume illustrates the usefulness of our approach. Moreover, the approach presented for the normal case can be applied to Wishart matrices; namely, we intend to publish results on limit distributions for these matrices, their traces and determinants. Other applications may be found in [2], [6] and [8].
Quantum fields with topological defects
Domain walls, strings and monopoles are extended objects, or defects, of quantum origin with topologically non-trivial properties and macroscopic behavior. They are described in Quantum Field Theory in terms of inhomogeneous condensates. We review the related formalism in the framework of the spontaneous breakdown of symmetry.
I. INTRODUCTION
The ordered patterns we observe in condensed matter and in high energy physics are created by the quantum dynamics. Macroscopic systems exhibiting some kind of ordering, such as superconductors, ferromagnets, crystals, are described by the underlying quantum dynamics. Even the large scale structures in the Universe, as well as the ordering in the biological systems appear to be the manifestation of the microscopic dynamics ruling the elementary components of these systems. Thus we talk of macroscopic quantum systems: these are quantum systems in the sense that, although they behave classically, nevertheless some of their macroscopic features cannot be understood without recourse to quantum theory.
The question then arises of how the quantum dynamics generates the observed macroscopic properties. In other words, how it happens that the macroscopic scale characterizing those systems is dynamically generated out of the microscopic scale of the quantum elementary components [1].
Moreover, we also observe a variety of phenomena where quantum particles coexist and interact with extended macroscopic objects which show a classical behavior, e.g. vortices in superconductors and superfluids, magnetic domains in ferromagnets, dislocations and other topological defects (grain boundaries, point defects, etc.) in crystals, and so on.
We are thus also faced with the question of the quantum origin of topological defects and of their interaction with quanta [1]. This is a crucial issue for the understanding of symmetry-breaking phase transitions and structure formation in a wide range of systems, from condensed matter to cosmology [2][3][4].
Here, we will review how the generation of ordered structures and of extended objects is explained in Quantum Field Theory (QFT). We follow refs. [1] in our presentation. We will consider systems in which spontaneous symmetry breaking (SSB) occurs and show that topological defects originate by inhomogeneous (localized) condensation of quanta. The approach followed here is alternative to the usual one [5], in which one starts from the classical soliton solutions and then "quantizes" them, as well as to the QFT method based on dual (disorder) fields [6].
In Section 2 we first introduce some general features of QFT useful for our discussion and we then treat some aspects of SSB and the rearrangement of symmetry. In Section 3 we discuss the boson transformation theorem and the topological singularities of the boson condensate. Section 4 contains as an example a model with U (1) gauge invariance in which SSB, rearrangement of symmetry and topological defects are present [7]. There we show how macroscopic fields and currents are obtained from the microscopic quantum dynamics. The Nielsen-Olesen vortex solution is explicitly obtained as an example. Section 5 is devoted to conclusions.
II. SYMMETRY AND ORDER IN QFT: A DYNAMICAL PROBLEM
QFT deals with systems with infinitely many degrees of freedom. The fields used for their description are operator fields whose mathematical significance is fully specified only when the state space where they operate is also assigned. This is the space of the states, or physical phase, of the system under given boundary conditions. A change in the boundary conditions may result in the transition of the system from one phase to another one. For example, a change of the temperature from above to below the critical temperature may induce the transition from the normal to the superconducting phase in a metal. The identification of the state space where the field operators have to be realized is thus a physically non trivial problem in QFT. In this respect, the QFT structure is drastically different from the one of Quantum Mechanics (QM). The reason is the following.
The von Neumann theorem in QM [8] states that for systems with a finite number of degrees of freedom all the irreducible representations of the canonical commutation relations are unitarily equivalent. Therefore in QM the physical system can only live in one single physical phase: unitary equivalence means indeed physical equivalence and thus there is no room (no representations) for physically different phases. Such a situation drastically changes in QFT where systems with infinitely many degrees of freedom are treated. In such a case the von Neumann theorem does not hold and infinitely many unitarily inequivalent representations of the canonical commutation relations do in fact exist [9,10]. It is such a richness of QFT which allows the description of different physical phases.
A. QFT as a two-level theory
In the perturbative approach, any quantum experiment or observation can be schematized as a scattering process where one prepares a set of free (non-interacting) particles (incoming particles or in-fields) which are then made to collide at some later time in some space region (space-time region of interaction). The products of the collision are expected to emerge out of the interaction region as free particles (outgoing particles or out-fields). Correspondingly, one has the in-field and the out-field state space. The interaction region is where the dynamics operates: given the in-fields and the in-states, the dynamics determines the out-fields and the out-states.
The incoming particles and the outgoing ones (also called quasi-particles in solid state physics) are well distinguishable and localizable particles only far away from the interaction region, at a time much before (t = −∞) and much after (t = +∞) the interaction time: in- and out-fields are thus said to be asymptotic fields, and for them the interaction forces are assumed not to operate (switched off).
The only regions accessible to observations are those far away (in space and in time) from the interaction region, i.e. the asymptotic regions (the in- and out-regions). This is so since, at the quantum level, observations performed in the interaction region, or vacuum fluctuations occurring there, may drastically interfere with the interacting objects, thus changing their nature. Besides the asymptotic fields, one then also introduces dynamical or Heisenberg fields, i.e. the fields in terms of which the dynamics is given. Since the interaction region is precluded from observation, we do not observe Heisenberg fields. Observables are thus solely described in terms of asymptotic fields.
Summing up, QFT is a "two-level" theory: one level is the interaction level where the dynamics is specified by assigning the equations for the Heisenberg fields. The other level is the physical level, the one of the asymptotic fields and of the physical state space directly accessible to observations. The equations for the physical fields are equations for free fields, describing the observed incoming/outgoing particles.
To be specific, let the Heisenberg operator fields be generically denoted by ψ_H(x) and the physical operator fields by φ_in(x). They are both assumed to satisfy equal-time canonical (anti-)commutation relations.
For shortness, we omit considerations on the renormalization procedure, which are not essential for the conclusions we will reach. The Heisenberg field equations and the free field equations are written as

Λ(∂) ψ_H(x) = J[ψ_H](x) ,   (1)

Λ(∂) φ_in(x) = 0 ,   (2)

where Λ(∂) is a differential operator, x ≡ (t, x) and J is some functional of the ψ_H fields, describing the interaction. Eq.(1) can be formally recast in the following integral form (Yang-Feldman equation):

ψ_H(x) = φ_in(x) + Λ⁻¹(∂) * J[ψ_H](x) ,   (3)

where * denotes convolution. The symbol Λ⁻¹(∂) denotes formally the Green function for φ_in(x). The precise form of the Green function is specified by the boundary conditions. Eq.(3) can be solved by iteration, thus giving an expression for the Heisenberg fields ψ_H(x) in terms of powers of the φ_in(x) fields; this is the Haag expansion in the LSZ formalism [9,11] (or "dynamical map" in the language of refs. [1]), which might be formally written as

ψ_H(x) = F[x; φ_in] .   (4)

We stress that the equality in the dynamical map (4) is a "weak" equality, which means that it must be understood as an equality among matrix elements computed in the Hilbert space of the physical particles.
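The iterative solution of the Yang-Feldman equation mentioned above can be sketched as follows (a formal expansion, with renormalization factors omitted):

```latex
% zeroth order: the free in-field itself
\psi_H^{(0)}(x) = \varphi_{in}(x)
% n-th order: reinsert the previous approximation into the source term
\psi_H^{(n)}(x) = \varphi_{in}(x)
  + \Lambda^{-1}(\partial) * \mathcal{J}\big[\psi_H^{(n-1)}\big](x)
% collecting terms yields the Haag expansion (dynamical map), a weak
% equality valid between matrix elements on the physical state space:
\psi_H(x) = \varphi_{in}(x)
  + \Lambda^{-1}(\partial) * \mathcal{J}[\varphi_{in}](x)
  + \mathcal{O}(\mathcal{J}^2)
```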
We observe that mathematical consistency in the above procedure requires that the set of φ_in fields be an irreducible set; however, it may happen that not all the elements of the set are known from the beginning. For example, there might be composite (bound state) fields, or even elementary quanta whose existence is ignored in a first recognition. Then the computation of the matrix elements in physical states will lead to the detection of unexpected poles in the Green's functions, which signal the existence of the ignored quanta. One thus introduces the fields corresponding to these quanta and repeats the computation. This way of proceeding is called the self-consistent method [1]. We remark that it is not necessary to have a one-to-one correspondence between the sets {ψ_H^j} and {φ_in^i}, as happens whenever the set {φ_in^i} includes composite particles.
B. The dynamical rearrangement of symmetry

As already mentioned, in QFT the Fock space for the physical states is not unique, since one may have several physical phases, e.g. for a metal the normal phase and the superconducting phase, and so on. Fock spaces describing different phases are unitarily inequivalent spaces, and correspondingly we have different expectation values for certain observables and even different irreducible sets of physical quanta. Thus, finding the dynamical map involves singling out the Fock space where the dynamics has to be realized.
Let us now suppose that the Heisenberg field equations are invariant under some group G of transformations of ψ_H:

ψ_H(x) → ψ′_H(x) = g[ψ_H(x)] ,   (5)

with g ∈ G. The symmetry is spontaneously broken when the vacuum state in the Fock space H is not invariant under the group G but only under one of its subgroups [1,9,11].
On the other hand, Eq.(4) implies that when ψ_H is transformed as in (5), then

φ_in(x) → φ′_in(x) = g′[φ_in(x)] ,   (6)

with g′ belonging to some group of transformations G′ and such that

g[F[x; φ_in]] = F[x; g′[φ_in]] .   (7)

When symmetry is spontaneously broken it is G′ ≠ G, with G′ the group contraction of G [13]; when symmetry is not broken, G′ = G.
Since G is the invariance group of the dynamics, Eq.(4) requires that G′ is the group under which the free field equations are invariant, i.e. φ′_in is also a solution of (2). Since Eq.(4) is a weak equality, G′ depends on the choice of the Fock space H among the physically realizable unitarily inequivalent state spaces. Thus we see that the (same) original invariance of the dynamics may manifest itself in different symmetry groups for the φ_in fields according to different choices of the physical state space. Since this process is constrained by the dynamical equations (1), it is called the dynamical rearrangement of symmetry [1].
In conclusion, different ordering patterns appear to be different manifestations of the same basic dynamical invariance. The discovery of the process of the dynamical rearrangement of symmetry leads to a unified understanding of the dynamical generation of many observable ordered patterns. This is the phenomenon of the dynamical generation of order. The contraction of the symmetry group is the mathematical structure controlling the dynamical rearrangement of the symmetry [13]. For a qualitative presentation see Ref. [14].
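A standard example is the ferromagnet, where the SU(2) spin-rotation invariance of the Heisenberg fields rearranges, at the level of the in-fields (magnons), into the contracted group E(2) [1,13]; schematically (our labels for the generators):

```latex
% SU(2) algebra of the Heisenberg-field spin generators:
[S_+, S_-] = 2 S_3 , \qquad [S_3, S_\pm] = \pm S_\pm
% after SSB, the in-field (magnon) symmetry closes on the contracted
% algebra E(2): the ladder operators become commuting "translations",
% i.e. boson condensation operations on the NG modes:
[T_+, T_-] = 0 , \qquad [S_3, T_\pm] = \pm T_\pm
```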
One can now ask which are the carriers of the ordering information among the system's elementary constituents, and how the long range correlations and the coherence observed in ordered patterns are generated and sustained. The answer lies in the fact that SSB implies the appearance of boson particles [15,16], the so-called Nambu-Goldstone (NG) modes or quanta. They manifest as long range correlations and thus they are responsible for the above mentioned change of scale, from microscopic to macroscopic. The coherent boson condensation of NG modes turns out to be the mechanism by which order is generated, as we will see in an explicit example in Section 4.
III. THE "BOSON TRANSFORMATION" METHOD
We now discuss the quantum origin of extended objects (defects) and show how they naturally emerge as macroscopic objects (inhomogeneous condensates) from the quantum dynamics. At zero temperature, the classical soliton solutions are then recovered in the Born approximation. This approach is known as the "boson transformation" method [1].
A. The boson transformation theorem
Let us consider, for simplicity, the case of a dynamical model involving one scalar field ψ H and one asymptotic field ϕ in satisfying Eqs. (1) and (2), respectively.
As already remarked, the dynamical map is valid only in a weak sense, i.e. as a relation among matrix elements. This implies that Eq.(4) is not unique, since different sets of asymptotic fields and the corresponding Hilbert spaces can be used in its construction. Let us indeed consider a c-number function f(x) satisfying the φ_in equations of motion (2):

Λ(∂) f(x) = 0 .   (8)

The boson transformation theorem [1] states that the field

ψ_H^f(x) = F[x; φ_in + f]   (9)

is also a solution of the Heisenberg equation (1). The corresponding Yang-Feldman equation takes the form

ψ_H^f(x) = φ_in(x) + f(x) + Λ⁻¹(∂) * J[ψ_H^f](x) .   (10)

The difference between the two solutions ψ_H and ψ_H^f is only in the boundary conditions. An important point is that the expansion Eq.(9) is obtained from that in Eq.(4) by the space-time dependent translation

φ_in(x) → φ_in(x) + f(x) .   (11)

The essence of the boson transformation theorem is that the dynamics embodied in Eq.(1) contains an internal freedom, represented by the possible choices of the function f(x) satisfying the free field equation (8).
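The consistency of the translation (11) with the free-field dynamics follows from linearity; schematically:

```latex
% f(x) is a c-number solution of the free field equation (2),
% so the shifted in-field obeys the same equation:
\Lambda(\partial)\,\big[\varphi_{in}(x) + f(x)\big]
  = \Lambda(\partial)\,\varphi_{in}(x) + \Lambda(\partial)\,f(x) = 0 ,
% hence F[x; \varphi_{in} + f] solves the Heisenberg equation with the
% same formal expansion, but with different boundary conditions.
```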
We also observe that the transformation (11) is a canonical transformation since it leaves invariant the canonical form of commutation relations.
Let |0⟩ denote the vacuum for the free field φ_in. The vacuum expectation value of Eq.(10) gives:

φ^f(x) ≡ ⟨0|ψ_H^f(x)|0⟩ .   (12)

The c-number field φ^f(x) is the order parameter. We remark that it is fully determined by the quantum dynamics. In the classical or Born approximation, which consists in taking ⟨0|J[ψ_H^f]|0⟩ = J[φ^f], i.e. neglecting all the contractions of the physical fields, we define φ^f_cl(x) ≡ lim_{ħ→0} φ^f(x). In this limit we have

Λ(∂) φ^f_cl(x) = J[φ^f_cl](x) ,   (13)

i.e. φ^f_cl(x) provides the solution of the classical Euler-Lagrange equation. Beyond the classical level, in general, the form of this equation changes. The Yang-Feldman equation (10) gives not only the equation for the order parameter, Eq.(13), but also, at higher orders in ħ, the dynamics of the physical quanta in the potential generated by the "macroscopic object" φ^f(x) [1].
One can show [1] that the class of solutions of Eq.(8) which lead to topologically non-trivial (i.e. carrying a non-zero topological charge) solutions of Eq.(13) are those which have some sort of singularity with respect to the Fourier transform. These can be either divergent singularities or topological singularities. The former are associated with a divergence of f(x) for |x| = ∞, at least in some direction. Topological singularities are instead present when f(x) is not single-valued, i.e. it is path dependent. In both cases, the macroscopic object described by the order parameter carries a non-zero topological charge.
B. Topological singularities and massless bosons
An important result is that the boson transformation functions carrying topological singularities are only allowed for massless bosons [1].
Consider a generic boson field χ_in satisfying the equation

(∂² + m²) χ_in(x) = 0 ,   (14)

and suppose that the function f(x) for the boson transformation χ_in(x) → χ_in(x) + f(x) carries a topological singularity. It is then not single-valued and thus path-dependent:

G⁺_µν(x) ≡ [∂_µ, ∂_ν] f(x) ≠ 0 , for certain µ, ν, x .   (15)

On the other hand, ∂_µ f(x), which is related with observables, is single-valued, i.e. [∂_ρ, ∂_ν] ∂_µ f(x) = 0. Recall that f(x) is a solution of the χ_in equation:

(∂² + m²) f(x) = 0 .   (16)

From the definition of G⁺_µν(x) and the regularity of ∂_µ f(x) it follows, by computing ∂^µ G⁺_µν(x), that

∂^µ G⁺_µν(x) = m² ∂_ν f(x) .   (17)

This equation and the antisymmetric nature of G⁺_µν(x) then lead to ∂² f(x) = 0, which in turn implies m = 0. Thus we conclude that (15) is only compatible with a massless equation for χ_in.
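The chain of implications stated above can be summarized as follows:

```latex
% antisymmetry of G^{+}_{\mu\nu} makes the double divergence vanish:
\partial^\nu \partial^\mu G^{+}_{\mu\nu}(x) = 0
  \;\Rightarrow\; m^2\,\partial^2 f(x) = 0 .
% if m \neq 0 this forces \partial^2 f(x) = 0, while the free field
% equation (\partial^2 + m^2) f(x) = 0 then gives m^2 f(x) = 0,
% contradicting f \neq 0; hence a topologically singular boson
% transformation requires a massless field:
m = 0 .
```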
The topological charge is defined as

N_T = (1/2π) ∮_C dl^µ ∂_µ f(x) = (1/4π) ∫_S dS^µν G⁺_µν(x) .   (18)

Here C is a contour enclosing the singularity and S a surface with C as boundary. N_T does not depend on the path C provided this does not cross the singularity. The dual tensor

G̃⁺_µν(x) = (1/2) ε_µνρσ G⁺^ρσ(x)   (19)

satisfies the continuity equation

∂^µ G̃⁺_µν(x) = 0 .   (20)

Eq.(20) completely characterizes the topological singularity [1].
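As a concrete illustration (anticipating the vortex of Section 4), one may take for f(x) the azimuthal angle around the x₃-axis, a standard choice:

```latex
% the azimuthal angle is path-dependent (multivalued by 2\pi n),
% but its gradient is single-valued:
f(x) = \theta(x) = \arctan\!\big(x_2/x_1\big) ,
% the commutator of derivatives localizes on the line x_1 = x_2 = 0:
[\partial_1, \partial_2]\,\theta(x) = 2\pi\,\delta(x_1)\,\delta(x_2) ,
% so a contour C encircling the line once carries unit winding:
N_T = \frac{1}{2\pi}\oint_C dl^\mu\,\partial_\mu \theta(x) = 1 .
```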
IV. AN EXAMPLE: THE ANDERSON-HIGGS-KIBBLE MECHANISM AND THE VORTEX SOLUTION
We consider a model of a complex scalar field φ(x) interacting with a gauge field A_µ(x) [17][18][19]. The lagrangian density L[φ(x), φ*(x), A_µ(x)] is invariant under the global and the local U(1) gauge transformations:

φ(x) → e^{iθ} φ(x) ,   (21)

φ(x) → e^{ie₀λ(x)} φ(x) ,  A_µ(x) → A_µ(x) + ∂_µ λ(x) ,   (22)

respectively, where λ(x) → 0 for |x₀| → ∞ and/or |x| → ∞ and e₀ is the coupling constant. We work in the Lorentz gauge ∂^µ A_µ(x) = 0. The generating functional, including the gauge constraint, is given by Eq.(23) of Ref. [7]; B(x) is an auxiliary field which implements the gauge fixing condition [7,20]. Notice the ε-term appearing there, where v is a complex number: its rôle is to specify the condition of symmetry breaking under which we want to compute the functional integral, and it may be given the physical meaning of a small external field triggering the symmetry breaking [7]. The limit ε → 0 must be made at the end of the computations. We will use the notation ⟨···⟩_{ε,J,K} for functional averages computed in the presence of the ε-term and of the sources J, K. The fields φ, A_µ and B appearing in the generating functional are c-number fields. In the following, the Heisenberg operator fields corresponding to them will be denoted by φ_H, A_H^µ and B_H, respectively. Thus the spontaneous symmetry breaking condition is expressed by ⟨0|φ_H(x)|0⟩ ≡ ṽ ≠ 0, with ṽ constant.
In the functional integral formalism, the functional average of a given c-number field gives the vacuum expectation value of the corresponding operator field, so expectation values of the Heisenberg fields can be computed directly from the generating functional.
Let us introduce the decomposition of the field φ into its real components, φ(x) = (1/√2)[ψ(x) + iχ(x)], and define ρ(x) as the fluctuation of ψ(x) around its vacuum expectation value; χ(x) and ρ(x) will play the rôle of the NG mode and of the massive matter field, respectively.
A. The Goldstone theorem
Since the functional integral (23) is invariant under the global transformation (21), we have that ∂Z[J, K]/∂θ = 0, and subsequent derivatives with respect to K₁ and K₂ lead to a Ward-Takahashi identity, Eq.(25), relating ṽ to the two-point function of the field χ. In momentum space the propagator for the field χ has the general form of a pole term Z_χ/(p² + iε a_χ) plus continuum contributions, where Z_χ and a_χ are renormalization constants. The integration in Eq.(25) picks up the pole contribution at p² = 0, and leads to a nonvanishing ṽ. The Goldstone theorem [15] is thus proved: if the symmetry is spontaneously broken (ṽ ≠ 0), a massless mode must exist, whose field is χ(x), i.e. the NG boson mode. Since it is massless, it manifests as a long range correlation mode.
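Schematically, the argument runs as follows (our schematic notation for the propagator parameters):

```latex
% with the \epsilon-term switched on, the WT identity gives, roughly,
\tilde v \;\propto\; \epsilon\, \Delta_\chi(p = 0) ;
% only a massless pole can compensate the vanishing factor \epsilon:
\Delta_\chi(p) = \frac{Z_\chi}{p^2 + i\epsilon\, a_\chi} + (\text{continuum})
\;\Rightarrow\; \epsilon\,\Delta_\chi(0) = \frac{Z_\chi}{i\, a_\chi} \neq 0 ,
% hence \tilde v \neq 0 requires a massless (NG) mode \chi, and
% \tilde v comes out independent of the strength |v| of the trigger.
```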
(Notice that in the present case of a complex scalar field model the NG mode is an elementary field. In other models it may appear as a bound state, e.g. the magnon in (anti-)ferromagnets.) Note that, because m_ρ ≠ 0, the r.h.s. of the corresponding identity for the massive field vanishes in the limit ε → 0; therefore ṽ is independent of |v|, although the phase of v determines the one of ṽ (from Eq.(25)): as in ferromagnets, once an external magnetic field is switched on, the system is magnetized independently of the strength of the external field.
B. The dynamical map and the field equations
Observing that the change of variables (21) (and/or (22)) does not affect the generating functional, we may obtain the Ward-Takahashi identities. Also, using B(x) → B(x) + λ(x) in (23) gives ⟨∂^µ A_µ(x)⟩_{ε,J,K} = 0. One then finds the two-point function pole structures of Eqs.(29)-(31) of Ref. [7]. The absence of branch cut singularities in the propagators (29)-(31) suggests that B(x) obeys a free field equation. In addition, Eq.(31) indicates that the model contains a massless negative norm state (ghost) besides the NG massless mode χ. Moreover, it can be shown [7] that a massive vector field U^µ_in also exists in the theory. Note that, because of the invariance (χ, A_µ, B) → (−χ, −A_µ, −B), all the other two-point functions must vanish.
The dynamical maps expressing the Heisenberg operator fields in terms of the asymptotic operator fields are found in Ref. [7] (Eqs.(32)-(34)), where : ... : denotes the normal ordering and the functionals F and F_µ are to be determined within a particular model. In Eqs.(32)-(34), χ_in denotes the NG mode, b_in the ghost mode, U^µ_in the massive vector field and ρ_in the massive matter field. In Eq.(34), c is a c-number constant, whose value is irrelevant since only derivatives of B appear in the field equations (see below). Z₃ represents the wave function renormalization for U^µ_in. The corresponding field equations are Eqs.(35)-(37), with m_V² = (Z₃/Z_χ)(e₀ṽ)². The field equations for B_H and A_H^µ read as in Ref. [7], with j_H^µ(x) = δL(x)/δA^H_µ(x). One may then require that the current j_H^µ is the only source of the gauge field A_H^µ in any observable process. This amounts to imposing the condition _p⟨b|∂^µ B_H(x)|a⟩_p = 0, i.e.
where |a⟩_p and |b⟩_p denote two generic physical states. Eqs.(38) are the classical Maxwell equations. The condition _p⟨b|∂^µ B_H(x)|a⟩_p = 0 leads to the Gupta-Bleuler-like condition

[χ_in^(−)(x) − b_in^(−)(x)] |a⟩_p = 0 ,   (39)

where χ_in^(−) and b_in^(−) are the positive-frequency parts of the corresponding fields. Thus we see that χ_in and b_in cannot participate in any observable reaction. This is confirmed by the fact that they are present in the S matrix in the combination (χ_in − b_in) [7]. It is to be remarked, however, that the NG boson does not disappear from the theory: we shall see below that there are situations in which the NG fields do have observable effects.
C. Boson transformations and the vacuum current

The local gauge transformations of the Heisenberg fields, with gauge functions λ(x) satisfying ∂² λ(x) = 0, are induced by corresponding transformations of the in-fields (Eq.(40) of Ref. [7]). On the other hand, the global phase transformation φ_H(x) → e^{iθ} φ_H(x) is induced by a shift of the NG field,

χ_in(x) → χ_in(x) + ṽ Z_χ^{−1/2} θ f(x) ,   (41)

with ∂² f(x) = 0 and the limit f(x) → 1 to be performed at the end of computations. Note that under the above transformations the in-field equations and the S matrix are invariant, and that B_H is changed by an irrelevant c-number (in the limit f → 1).

Consider now the boson transformation χ_in(x) → χ_in(x) + α(x). In local gauge theories the boson transformation must be compatible with the Heisenberg field equations but also with the physical state condition (39). Under the boson transformation with α(x) = ṽ Z_χ^{−1/2} θ f(x) and ∂² f(x) = 0, B_H undergoes a c-number shift (Eq.(43) of Ref. [7]), and Eq.(38) is thus violated when the Gupta-Bleuler-like condition is imposed. In order to restore it, the shift in B_H must be compensated by means of a transformation of U^µ_in,

U^µ_in(x) → U^µ_in(x) + a^µ(x) ,   (44)

with a convenient c-number function a_µ(x). The dynamical maps of the various Heisenberg operators are not affected by (44), since they contain U^µ_in and B_H in a combination such that the changes of B_H and of U^µ_in compensate each other, provided

(∂² + m_V²) a_µ(x) = (m_V²/e₀) ∂_µ f(x) .   (45)

Eq.(45) thus obtained is the Maxwell equation for the massive potential vector a_µ [7]. The classical ground state current j_µ turns out to be

j_µ(x) = m_V² a_µ(x) − (m_V²/e₀) ∂_µ f(x) .   (46)

The term m_V² a_µ(x) is the Meissner current, while −(m_V²/e₀) ∂_µ f(x) is the boson current. The key point here is that both the macroscopic field and current are given in terms of the boson condensation function f(x).
Two remarks are in order. First, note that the terms proportional to ∂_µ f(x) are related to observable effects, e.g. the boson current which acts as the source of the classical field. Second, note that the macroscopic ground state effects do not occur for regular f(x) (G⁺_µν(x) = 0). In fact, from (45) we obtain a_µ(x) = (1/e₀) ∂_µ f(x) for regular f(x), which implies zero classical current (j_µ = 0) and zero classical field (F_µν = ∂_µ a_ν − ∂_ν a_µ = 0), since the Meissner and the boson current cancel each other.
In conclusion, the vacuum current appears only when f (x) has topological singularities and these can be created only by condensation of massless bosons, i.e. when SSB occurs. This explains why topological defects appear in the process of phase transitions, where NG modes are present and gradients in their condensate densities are nonzero [2,3].
On the other hand, the appearance of a space-time dependent order parameter is no guarantee that persistent ground state currents (and fields) will exist: if f(x) is a regular function, the space-time dependence of ṽ can be gauged away by an appropriate gauge transformation.
Since, as said, the boson transformation with regular f(x) does not affect observable quantities, the S matrix is actually given in terms of the combination of in-fields left invariant by such transformations (Eqs.(47)-(48) of Ref. [7]). This S matrix, S′, is indeed independent of the boson transformation with regular f(x), since a_µ(x) = (1/e₀) ∂_µ f(x) in that case. However, S′ ≠ S for singular f(x): S′ includes the interaction of the quanta U^µ_in and φ_in with the classically behaving macroscopic defects [1].
D. The vortex solution
Below we consider the example of the Nielsen-Olesen vortex string solution. We show which boson transformation function f(x) controls the non-homogeneous NG boson condensation in terms of which the string solution is described. For shortness, we only report the results of the computations. The detailed derivation, as well as the discussion of further examples, can be found in Ref. [1].
In the present U(1) problem, the electromagnetic tensor and the vacuum current are obtained from the boson transformation function (Eqs.(49)-(50) of Refs. [1,7]), and satisfy ∂^µ F_µν(x) = −j_ν(x). For a straight vortex line along the x₃-axis, the boson transformation function is the cylindrical angle, f(x) = θ(x) = arctan(x₂/x₁), whose topological singularity is expressed by [∂₁, ∂₂] θ(x) = 2π δ(x₁) δ(x₂). The only non-vanishing component of F_µν is then the magnetic field along the vortex line, and the vacuum current Eq.(50) circulates around it. We observe that these results are the same as those of the Nielsen-Olesen vortex solution [22]. Notice that we did not specify the potential in our model, but only the invariance properties. Thus, the invariance properties of the dynamics determine the characteristics of the topological solutions. The vortex solution manifests the original U(1) symmetry through the cylindrical angle θ, which is the parameter of the U(1) representation in the coordinate space.
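The flux carried by the vortex follows directly from Eq.(45) and the winding of the cylindrical angle (a standard consequence; the normalization is ours):

```latex
% far from the core, a_\mu approaches the pure-gradient configuration,
a_\mu(x) \;\xrightarrow{\; r \to \infty \;}\; \frac{1}{e_0}\,\partial_\mu \theta(x) ,
% so the magnetic flux through a large loop C around the line is quantized:
\Phi = \oint_C a_\mu\, dl^\mu
     = \frac{1}{e_0} \oint_C \partial_\mu \theta\, dl^\mu
     = \frac{2\pi}{e_0}\, N_T ,
% reproducing the quantized flux of the Nielsen-Olesen vortex.
```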
V. CONCLUSIONS
We have discussed how topological defects arise as inhomogeneous condensates in Quantum Field Theory. Topological defects are shown to have a genuine quantum nature. The approach reviewed here goes under the name of "boson transformation method" and relies on the existence of unitarily inequivalent representations of the field algebra in QFT.
Describing quantum fields with topological defects amounts then to properly choose the physical Fock space for representing the Heisenberg field operators. Once the boundary conditions corresponding to a particular soliton sector are found, then the Heisenberg field operators embodied with such conditions contain the full information about the defects, the quanta and their mutual interaction. One can thus calculate Green's functions for particles in the presence of defects. The extension to finite temperature is discussed in Refs. [12,21].
As an example we have discussed a model with U (1) gauge invariance and SSB and we have obtained the Nielsen-Olesen vortex solution [22] in terms of localized condensation of Goldstone bosons. These thus appear to play a physical role, although, in the presence of gauge fields, they do not show up in the physical spectrum as excitation quanta. The function f (x) controlling the condensation of the NG bosons must be singular in order to produce observable effects. Boson transformations with regular f (x) only amount to gauge transformations. For the treatment of topological defects in non-abelian gauge theories, see Ref. [21].
Finally, when there are no NG modes, as in the case of the kink solution or the sine-Gordon solution, the boson transformation function has to carry a divergence singularity at spatial infinity [1,12]. In Ref. [23] the boson transformation has also been discussed in connection with the Bäcklund transformation at the classical level and the confinement of the constituent quanta in the coherent condensation domain.
For further reading on quantum fields with topological defects, see Ref. [24]. We thank MIUR, INFN, INFM and the ESF network COSLAB for partial financial support.
Use of RE-AIM to develop a multi-media facilitation tool for the patient-centered medical home
Background

Much has been written about how the medical home model can enhance patient-centeredness, care continuity, and follow-up, but few comprehensive aids or resources exist to help practices accomplish these aims. The complexity of primary care can overwhelm those concerned with quality improvement.

Methods

The RE-AIM planning and evaluation model was used to develop a multimedia, multiple-health behavior tool with psychosocial assessment and feedback features to facilitate and guide patient-centered communication, care, and follow-up related to prevention and self-management of the most common adult chronic illnesses seen in primary care.

Results

The Connection to Health Patient Self-Management System, a web-based patient assessment and support resource, was developed using the RE-AIM factors of reach (e.g., allowing input and output via choice of different modalities), effectiveness (e.g., using evidence-based intervention strategies), adoption (e.g., assistance in integrating the system into practice workflows and permitting customization of the website and feedback materials by practice teams), implementation (e.g., identifying and targeting actionable priority behavioral and psychosocial issues for patients and teams), and maintenance/sustainability (e.g., integration with current National Committee for Quality Assurance recommendations and clinical pathways of care). Connection to Health can work on a variety of input and output platforms, and assesses and provides feedback on multiple health behaviors and multiple chronic conditions frequently managed in adult primary care. As such, it should help to make patient-healthcare team encounters more informed and patient-centered. Formative research with clinicians indicated that the program addressed a number of practical concerns, and that they appreciated the flexibility and how the Connection to Health program could be customized to their office.
Conclusions

This primary care practice tool based on an implementation science model has the potential to guide patients to more healthful behaviors and improved self-management of chronic conditions, while fostering effective and efficient communication between patients and their healthcare team. RE-AIM and similar models can help clinicians and media developers create practical products more likely to be widely adopted, feasible in busy medical practices, and able to produce public health impact.
Background
The Institute of Medicine [1] outlined six criteria as the basis for preventive and chronic disease care: patient centered, effective, safe, timely, efficient, and equitable. One way of achieving these aims in primary care is by implementing the core criteria of the Patient-Centered Medical Home (PCMH), which has gained considerable traction as an important part of healthcare reform [2][3][4].
Achieving the aims of the PCMH, however, can be challenging due to the complexity and multiple competing demands on primary care. The PCMH model includes an emphasis on patient self-management support strategies that provide patients with the information, tools, and support they need to adopt healthy behaviors and take care of their health problems in their daily lives. However, primary care clinicians and staff often lack training in identifying and addressing health behavior and self-management support issues. Stange et al. [5] concluded that the average amount of time that primary care physicians can devote to prevention in a typical visit is one minute. Data documenting the routine adoption of these changes into primary care practice have been disappointing [6][7][8][9][10][11][12][13][14][15][16][17]; a large chasm remains between what is possible and what has been achieved [1]. To address this challenge, we describe an approach based on interactive behavior change technology (IBCT) as a vehicle for facilitating the adoption of PCMH strategies into primary care. The reach, effectiveness, adoption, implementation, maintenance/sustainability (RE-AIM) model [18,19] was used to develop the IBCT program to enhance its chances of successful adoption, implementation, and sustainability in primary care.
Addressing primary care challenges

IBCT can provide efficient methods for achieving the goals of the PCMH. In a review of the literature, members of our team concluded that 'if constructed to draw on the strengths of primary care and to use patient-centered principles, IBCT can inform, leverage, and support patient-provider communication and enhance behavior change [20].' Integration of self-management support, a major component of the PCMH, into primary care practices can be facilitated through an easy-to-use, time-efficient IBCT system that addresses the most important behavioral and psychosocial challenges, especially if focused on the needs of patients with the most common chronic conditions. The major goals of IBCT, which fit well with the PCMH, are to: detect and then monitor patient needs for self-management support over time; prompt clinician/patient discussions to engage patients in behavior change; establish individualized priorities for identified problems; provide guidance and options for intervention at the point of care; and monitor success over time and prompt follow-ups [20,21]. However, to our knowledge no comprehensive system exists that includes prevention and multiple chronic disease monitoring and intervention that is based on practical, well-documented measures and directly tied to actionable resources and recommendations for clinicians and patients [22][23][24][25][26][27][28][29][30][31][32]. To date, IBCTs have not been widely adopted in real world primary care settings. We posit that one reason for this may be that implementation science concerns and approaches like RE-AIM have not been integrated into the development and testing of the majority of IBCTs. In this article, we summarize key points of the RE-AIM implementation science model, and then describe how it was used to develop an IBCT for the PCMH [33,34].
The purposes of this article are to: describe the characteristics and design of the IBCT-based Connection to Health self-management support system to support the PCMH; illustrate the use of the RE-AIM model to guide development of Connection to Health; present qualitative results from a focus group discussion of Connection to Health with clinicians and staff members; and discuss practical implications and directions for future research and practice.
RE-AIM planning and evaluation framework
RE-AIM was developed to help health planners and evaluators to attend to specific implementation factors essential for success in the real and complex world of healthcare and community settings [18,34]. It is an acronym that focuses attention on five key issues related to successful impact and can help design interventions that can: reach a broad and representative proportion of the target population; effectively lead to positive changes in patient self-management and quality of life that are robust across diverse groups; be adopted across a broad and representative proportion of settings; lead to consistent implementation of strategies at a reasonable cost; and lead to maintained self-management in patients and sustained delivery within primary care clinics [19,35,36].
RE-AIM can be a valuable planning tool for implementing self-management support and IBCT programs, especially considering the Institute of Medicine aims to provide efficient, patient-centered, equitable care and reduce health disparities. For example, a focus on the representativeness (i.e., reach) of those who engage with the technology and the robustness of the program's effect is critical. With this in mind, developers of an IBCT for self-management support should design features to ensure that appropriate audio and visual aids are in place to assist all patients, particularly low literacy, minority, less acculturated, older, poorer, or less educated patients who may feel overwhelmed with the healthcare system and confused by complex forms and procedures.
A focus on the RE-AIM factors of adoption, implementation, and sustainability of an IBCT self-management support system also addresses the larger issue of actionable information. With primary care already stretched beyond capacity to deal with care recommendations [5,37,38], adding additional assessment information will not solve the problem. Any additional information will need to be customized in ways that are compatible and integrated with practice flow, styles, priorities, and preferences to yield feasible, actionable outcomes. RE-AIM has previously been successfully applied to evaluate the impact of interactive technology approaches and clinic changes, providing an assessment of potential public health impact [20,39,40].
Complexity
Many patients with chronic conditions experience major barriers to change related to ongoing co-morbid depression or disease-related distress, distinct conditions with different implications for care [41,42]. For example, depression is about twice as prevalent among patients with diabetes compared to community samples, and ongoing distress related to managing a demanding chronic disease like diabetes has an average prevalence rate of 18% to 35% [43]. Often, clinicians make recommendations for patients, only to see them not enacted because of feelings of hopelessness or being overwhelmed with the ongoing demands of chronic disease management. The delivery of actionable information must be tailored to the patient's capacity for change and the presence of emotional and distress-related barriers [41][42][43].
Characteristics of the Connection to Health system
The Connection to Health Patient Self-Management System is designed to deliver an array of tools to assist patients and providers in the assessment, monitoring, and management of a variety of health behaviors, psychosocial concerns, and chronic disease problems. The automated, web-based system uses engaging graphics, multimedia, and educational design techniques, and database-driven responses to provide three primary modules that address patient interaction and self-management: ongoing patient assessment, summary self-management support reports, and recommendations for patients and healthcare teams. The assessment module uses brief evidence-based screening scales to assess behaviors (including diet, tobacco use, risky drinking, physical activity, and medication adherence) and chronic conditions (including obesity, diabetes, coronary heart disease, hypertension, hyperlipidemia, asthma, stress, and depression). The reporting module offers summary reports to both clinicians and patients that include assessment results, areas of concern, discussion options, and patient trends over time. The recommendations module provides clinician and patient with patient-tailored and prioritized suggestions for action, including development of goals and action plans in a variety of health behavior and psychosocial domains. Clinics or practices that adopt the system can customize the Connection to Health website through an administrative portal to reflect their local identity and resources (Figure 1). The system is adaptable for integration with electronic health records (EHRs) so that the results can be shared easily across clinical team members, and patient self-management support status can be monitored over time.
Welcome
1. The clinic uses the administrative portal to enter initial patient contact information into the Connection to Health database. The system then sends an e-mail or letter to the patient with an embedded link to the secure, Health Insurance Portability and Accountability Act (HIPAA)-compliant website. The patient clicks on the link and is presented with a multimedia (audio and/or video) welcome message designed to engage the user and encourage participation, including a message from the practice to indicate that the program is part of the care provided by their clinician.
Assessment
Prior to each regularly scheduled chronic disease or preventive healthcare office visit, patients are prompted to complete a brief online assessment through the Connection to Health system. This assessment can be conducted through a patient portal to the website from a home computer, a practice computer kiosk, or a pen tablet computer, or via a paper-and-pencil application that can be scanned into the system.
Reporting
Once the patient has navigated through and completed the assessment module, the Connection to Health system uses validated algorithms to quickly score the assessments and display reports for both the patient and provider. The one-page patient report (example in Figure 2) can be viewed immediately through the patient portal or printed as a hardcopy. It displays assessment results (including a history of recent assessments), areas of medical concern, and possible treatment options to discuss with the healthcare team. If the Connection to Health website is integrated with an EHR or laboratory reporting system, the patient report can also display selected, relevant laboratory results. The patient is encouraged to review the report, add their own notes or comments, and then send it ahead or bring it to the next office visit or discussion with their clinician.
The physician report ( Figure 3) contains much of the same information, but includes more details related to patient complexity, cardiovascular risk, health literacy and numeracy, and guideline concordant action recommendations. The goal of both reports is to provide an immediate, straightforward understanding of the patient's current health status; the self-management, psychosocial, and biologic areas of greatest patient concern; a prioritized list of items to discuss at the office visit; and an actionable set of self-management options and recommendations for flagged issues.
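The score-and-flag step described above can be sketched in a few lines of code. This is a minimal illustration only: the domain names, item counts, and cut-off values below are hypothetical assumptions, not the validated scoring algorithms actually used by Connection to Health.

```python
# Illustrative sketch of screening-scale scoring with threshold flags.
# Domain names and cut-off values are hypothetical examples, not the
# actual validated algorithms used by Connection to Health.

# Hypothetical cut-offs: a total score at or above the threshold flags
# the domain as an "area of concern" on the patient/physician report.
CUTOFFS = {
    "depression": 10,        # e.g., a PHQ-9-style summed scale
    "physical_activity": 4,  # higher = less active in this sketch
    "medication_adherence": 3,
}

def score_assessment(responses):
    """Sum item responses per domain and flag domains at/over cut-off."""
    report = {}
    for domain, items in responses.items():
        total = sum(items)
        report[domain] = {
            "score": total,
            "flagged": total >= CUTOFFS[domain],
        }
    return report

# One hypothetical patient's item responses per domain.
patient = {
    "depression": [2, 1, 3, 2, 2, 1],
    "physical_activity": [1, 1],
    "medication_adherence": [0, 1, 1],
}
report = score_assessment(patient)
# depression total = 11 -> flagged as an area of concern
```

In the real system, flagged domains would then drive the prioritized discussion items and tailored recommendations on both reports.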
Recommendations
Tailored recommendations for action, based on the results of the assessments, are included in the patient and provider reports. For example, if the patient scored low on physical activity, consumed many high-fat foods, and had a high low-density lipoprotein (LDL) reading, the recommendations might include tips for beginning a conversation about eating patterns and a Connection to Health action plan for healthful eating and physical activity. The primary care team can review the patient and physician reports prior to the office visit, providing the primary care physician (PCP) with a concise set of assessment results and treatment options and tips for guiding the discussion with the patient.
The Connection to Health action plan module, available through the patient portal, provides a strategy for patient self-management that can be selected for use with patients who would respond to an interactive web-based action planning program and/or in situations where the practice does not have the time or appropriate staffing to complete the action planning process. This area of the website is derived from our series of successful interventions based upon problem-solving theory [44,45]. This section offers engaging multimedia modules that guide the user through an action planning process for selected key health behaviors, including diet, exercise, medication adherence, smoking cessation, alcohol use, and depression/distress. These interactive modules facilitate patient selection of goals in any of these areas, and identification of benefits, barriers to success, and strategies for overcoming these barriers. The Connection to Health action plan module stores patient action plans and provides ongoing access to the plans by the healthcare team and the patient for self-monitoring and follow-up. Alternatively, the healthcare team may decide to provide intervention resources in person in the clinic or to refer the patient to a community resource (e.g., YMCA programs, voluntary associations, telephone help lines, or smoking cessation resources).
Follow-up
The Connection to Health System provides ongoing monitoring and prompts follow-up by both the patient and the practitioner. The self-monitoring component allows the patient to track their progress over time. Shortly before the patient is scheduled for another visit to the clinic or practice, he or she can be prompted to complete another set of brief assessments in advance of that visit and to review their history and progress.
Current Connection to Health measures
In choosing areas for screening and more in-depth assessment, we selected measures that address prevalent conditions or problems with large public health impact and lead to actionable outcomes, while considering participant burden. Congruent with the recent policy recommendation from the Society of Behavioral Medicine http://www.sbm.org/policy/patient-reported_measures.pdf, we emphasized brief scales that were reliable, sensitive to change, appropriate for repeated administration, and age appropriate [46]. As can be seen in Figures 1 and 2, Connection to Health currently includes assessments for depression, disease-related distress, medication adherence, smoking, physical activity, risky drinking, eating patterns, current stressors, and health literacy and numeracy. In addition, questions related to the patient's chronic diseases assess aspects of their management of those conditions. Additional file 1, Appendix 1 provides a brief summary of each instrument included in the Connection to Health assessment package.
Use of RE-AIM for Connection to Health development
We used the RE-AIM model [19,33,35] in developing the Connection to Health tool, by applying it to the goals of the PCMH. Table 1 summarizes how we addressed each of the RE-AIM elements.
Reach
Connection to Health is designed to have high reach through several design features, including multiple modalities for data input and output. Patients can be provided with their choice of entry modality, and systems can be created to ensure that the entire patient panel of the practice is screened. Future iterations of Connection to Health will be designed with the capability to also accept data from automated telephone calls, cell phone data entry, a personal health record or EHR, and future data entry modalities.

Effectiveness

Effectiveness is enhanced in multiple ways: use of practical, validated scales and measures [46][47][48][49]; links to evidence-based electronic and community resources; and patient choice at multiple steps in the process [50]. Patient choice has been shown to be related to enhanced perceptions of autonomy support and improved outcomes [50]. We also use expert system tailoring [51,52] to select tailored intervention strategies based upon key behavioral and psychosocial factors. The system can easily be enhanced or modified over time by adding additional relevant local self-management support resources or other evidence-based links or information.
Adoption
Connection to Health offers practices numerous incentives for adoption, providing techniques and options to assist practices in goals related to enhancing patient-centeredness, a primary goal of PCMH. Assessments can be completed before or after office visits, thus not taking any office time or interfering with patient flow. It addresses psychosocial issues such as distress and depression/anxiety, includes an efficient method for helping patients to prioritize their goals and questions, helps patients attend office visits well-prepared and engaged, and by doing so, saves practices time and increases efficiency. The use of Connection to Health also could assist the practice in meeting the standards for recognition as a PCMH and improve quality measures.
Implementation
Being automated, Connection to Health ensures consistent delivery, accurate scoring, and immediate reporting of results. The administrative report feature enhances implementation by providing regular patient and panel-level reports at intervals specified by the practice and documents improvement over time.
Maintenance
Helping practices achieve, and be reimbursed for, higher performance on PCMH and quality measures should enhance maintenance. Maintenance at the patient level is enhanced by increased goal accomplishment, regular follow-up and feedback, and self-monitoring of individually targeted behaviors [53][54][55].
Initial provider reactions to Connection to Health
The initial version of the Connection to Health Patient self-management support was presented to a focus group of clinicians and staff from 10 family medicine practices working on implementation of the PCMH model. Field notes were taken by the two facilitators, and the participants also provided written comments using a structured format.
Feedback was very positive, providing important input regarding the assessment, the practice reports, and the potential implementation of the system in their practices. Comments highlighted the following issues:

1. Clinicians particularly liked that this system is designed to assist in focusing discussions of self-management issues between clinicians and patients and not to be a stand-alone system. They indicated that if the system was automated outside the practice, they believed that it would not be successful due to lack of reinforcement by the primary care clinicians.
2. Clinicians could be resistant because the system might cause them to feel separated from their patients. However, if the system is to be well-integrated within the practice, it will need to be done in a manner that minimizes the time commitment.
3. The flexibility and ability to customize Connection to Health to fit the needs, patient flows, and preferences of local clinics should aid adoption. Practices will have varying personnel and workflow that will necessitate different strategies for implementing the Connection to Health system at different points in patient flow and using different modalities in different practices.
4. Clinicians that have an EHR would like a seamless interface of the Connection to Health system with their EHR.
Discussion
Most self-management support programs address a single disease or single behavior, and few are designed for primary care practices [51,56]. In contrast, Connection to Health has broad applicability across diseases, prevention, multiple behaviors, and varied primary care settings for a wide range of adult patients. It can be accessed through several modalities and is appropriate for patients with diverse socioeconomic and educational backgrounds. It is designed to be integrated into primary care, creating efficiency while prompting informed provider-patient communication. Connection to Health should support the PCMH, create more informed and efficient office visits, and prompt and promote critical but often not completed follow-up support.
The primary purposes of this paper were to describe the Connection to Health system and how the RE-AIM framework was used proactively to develop it. Although controlled and comparative effectiveness studies are needed to determine the ultimate impact of the Connection to Health, use of implementation science models such as RE-AIM or other dissemination frameworks at the design stage [57,58] should facilitate greater uptake, implementation success, and long-term results. The Connection to Health is intentionally a work in progress, with iterative improvements to be made in the selection of measurement items and domains, patient and provider interfaces, and data input and output modalities.
Connection to Health is to our knowledge the only tool for addressing a wide variety of prevalent behavioral, psychosocial, and disease management problems managed in primary care. Time-efficient tools such as Connection to Health can help both patients and healthcare team members come to interactions more informed and prepared. This, in turn, should improve both outcomes and satisfaction [21,25,26,59]. Finally, the panel management features of the Connection to Health should facilitate continuity of care and consistent follow-up, which is the element of care recommendations least often accomplished [60,61].
Potential limitations include that the Connection to Health system is likely only appropriate for adult primary care patients, and not for children and adolescents (different measures would be needed). Currently, it is available only in English. Although computer administered, including automated skip patterns and individualized tailoring, it does not employ item response theory or formal computer-assisted testing procedures http://www.nihpromis.org/default.aspx. It is also possible that, with repeated use over time, patients would begin to find the assessment process burdensome, and a Connection to Health quick-scan form may need to be developed for prevalent, well-defined subgroups of patients (e.g., overweight diabetes patients). The degree to which active follow-up with a patient within the PCMH model could overcome this limitation is an area ripe for investigation. Finally, although we found the RE-AIM model useful for planning and developing Connection to Health, other implementation science models could also have been used, and RE-AIM does not explicitly address some issues such as stakeholder engagement. Readers interested in applying RE-AIM for program development and planning purposes should find the resources listed in Table 2 helpful for gaining a more complete understanding of the model and its implications. Future research should evaluate and document the actual use, time efficiency, multifaceted impact, reach (the percentage and characteristics of patients who can be assessed with it), and actual implementation in primary care, using RE-AIM [34] or other implementation science models. In particular, comparative effectiveness research studies are indicated to determine, for example, if the Connection to Health is more cost-effective than alternatives, such as simple paper-and-pencil assessments followed by more traditional face-to-face interventions.
Practical implications are that implementation science models, such as RE-AIM, should be employed throughout the design process to maximize impact.
Funding
The Colorado Health Foundation and Robert Wood Johnson Foundation provided funding that supported a portion of the planning and development of the Connection to Health Patient Self-management System.
Disclosure
Dr. Glasgow is now employed at the National Cancer Institute (NCI). This work was completed before he transitioned to the NCI and the opinions expressed do not necessarily reflect those of the NCI.
"year": 2011,
"sha1": "b5ee0899fcd56d92ca27d652705ce42eefe2d470",
"oa_license": "CCBY",
"oa_url": "https://implementationscience.biomedcentral.com/track/pdf/10.1186/1748-5908-6-118",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b5ee0899fcd56d92ca27d652705ce42eefe2d470",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The anti-parasitic drug miltefosine suppresses activation of human eosinophils and ameliorates allergic inflammation in mice
Background and Purpose: Miltefosine is an alkylphosphocholine drug with proven effectiveness against various types of parasites and cancer cells. Miltefosine is not only able to induce direct parasite killing but also modulates host immunity, for example by reducing the severity of allergies in patients. To date, there are no reports on the effect of miltefosine on eosinophils, central effector cells involved in allergic inflammation.

Experimental Approach: We tested the effect of miltefosine on the activation of human eosinophils and their effector responses in vitro and in mouse models of eosinophilic migration and ovalbumin-induced allergic lung inflammation.

Key Results: The addition of miltefosine suppressed several eosinophilic effector reactions such as CD11b up-regulation, degranulation, chemotaxis and downstream signalling. Miltefosine significantly reduced the infiltration of immune cells into the respiratory tract of mice in an allergic cell recruitment model. Finally, in a model of allergic inflammation, treatment with miltefosine resulted in an improvement of lung function parameters.

Conclusion and Implications: Our observations suggest a strong modulatory activity of miltefosine in the regulation of eosinophilic inflammation in vitro and in vivo. Our data underline the potential efficacy of miltefosine in the treatment of allergic diseases and other eosinophil-associated disorders and may raise important questions regarding the immunomodulatory effect of miltefosine in patients treated for leishmania infections.
INTRODUCTION
To date, miltefosine (Impavido®) is the only oral drug approved for the treatment of leishmaniasis with limited mild or moderate side effects (Pijpers et al., 2019). The development of miltefosine is a success story of public-private partnership, a breakthrough in medicine affordability and patient drug adherence, landing it on the World Health Organization (WHO)'s List of Essential Medicines (Berger et al., 2017;Sunyoto et al., 2018). Miltefosine disrupts membrane structures and affects phosphatidylcholine synthesis in susceptible promastigote cells (Pinto-Martinez et al., 2018;Rakotomanga et al., 2007). Due to its detergent-like properties, miltefosine is thought to interact with the mucosa of the gastrointestinal tract during oral use and cause its most commonly listed side effects: nausea, vomiting and diarrhoea (Bhattacharya et al., 2007). During prolonged treatment, the severity of the side effects was reported to decrease over time (8.2% during Week 1 to 3.2% during Week 4) (Bhattacharya et al., 2007).
Miltefosine exerts immunomodulatory effects on human cancer cells by inhibiting the PI3K/Akt signalling pathway (Ruiter et al., 2003), induces IL-12-dependent Th1 responses (Wadhone et al., 2009) and shows anti-inflammatory effects in endothelial cells, suppressing vascular inflammation (Fang et al., 2019). However, the immunomodulatory effects of miltefosine on primary human cells have so far only been described for T cells (Bäumer et al., 2010) and mast cells (Weller et al., 2009).
Miltefosine increases membrane fluidity (Moreira et al., 2014), modulates lipid raft-dependent signalling (Weller et al., 2009) and could therefore be an attractive drug candidate for the treatment of diseases characterized by abundant lipid raft activation, such as allergic diseases (Dölle et al., 2010). Miltefosine attenuates allergic inflammation in T cell-dependent mouse models of dermal inflammation (Bäumer et al., 2010), improves local dermatitis in patients with atopic dermatitis (Dölle et al., 2010), inhibits activation and degranulation of mast cells, and significantly reduces allergic disease manifestation in patients Maurer et al., 2013;Rubíková et al., 2018).
Surprisingly, there are no reports on the effects of miltefosine on eosinophils, a key cell type involved in the initiation and propagation of immune responses in allergic diseases (Stone et al., 2010). Here, we studied in detail whether miltefosine exerts immunomodulatory effects on eosinophils in vitro and in mouse models of allergic lung inflammation. Fixative solution was prepared by adding 9 ml of distilled water and 30 ml of FACS sheath fluid (BD Biosciences) to 1 ml of CellFix (BD Biosciences, Vienna, Austria) as described previously (Knuplez, Curcic, et al., 2020).
What is already known
• Miltefosine is an orphan drug marketed for the treatment of leishmaniasis.
• Miltefosine reduces the severity of allergies in patients.
What this study adds
• Miltefosine inhibits activation of human eosinophils and suppresses human eosinophil effector responses.
• Miltefosine inhibits the infiltration of immune cells in the airways and improves animal lung function.
What is the clinical significance

• Miltefosine may serve as a potential candidate for the treatment of eosinophil-related diseases.
• Miltefosine treatment may influence eosinophil host responses in leishmania-infected patients.
Mice
Animal studies are reported in compliance with the ARRIVE guidelines (Percie du Sert et al., 2020) and with the recommendations made by the British Journal of Pharmacology (Lilley et al., 2020). Mice were randomly divided into three groups: a negative control (vehicle) group; a positive control (ovalbumin- or eotaxin-stimulated) group; and a miltefosine-pretreated, ovalbumin- or eotaxin-stimulated group. Experiments in which bronchoalveolar lavage fluid was collected could not be performed blinded, because the investigator who treated the mice also collected the fluid. Lung function testing was performed blinded: Investigator 1 treated the mice and Investigator 2 independently performed lung function testing on the mice in random order.
For all animal experiments, at least five mice were included in each group and at least two repeat experiments were carried out.
Experiments were designed to make sample sizes relatively equal and randomized among comparison groups. Sample sizes were determined according to previous studies with similar analyses (Knuplez, Curcic, et al., 2020;Theiler et al., 2019).
Blood sampling and eosinophil isolation
Blood sampling from healthy volunteers was approved by the Institutional Review Board of the Medical University of Graz (17-291 ex 05/06). All participants signed a written informed consent.
Firstly, platelet-rich plasma was removed by centrifugation. Next, red blood cells and platelets were removed by dextran sedimentation and polymorphonuclear leukocytes preparations were obtained by density gradient separation. Eosinophils were isolated from polymorphonuclear leukocytes by negative magnetic selection using a cocktail of biotin-conjugated antibodies against CD2, CD14, CD16, CD19, CD56 (neural cell adhesion molecule 1), CD123 (interleukin 3 receptor, α subunit) and CD235a (glycophorin A) as well as Anti-Biotin Micro-Beads from Miltenyi Biotec (Bergisch Gladbach, Germany). Eosinophil purity was determined by morphological analysis of Kimura-stained cells and was typically greater than 97%.
Flow cytometric analysis of intracellular kinase phosphorylation
Isolated eosinophils were pretreated with either vehicle or miltefosine (20 μM) (15 min, RT). Following the pretreatment, cells were incubated with 10 nM eotaxin-1 (CCL11) (3 min, 37°C). Subsequently, cells were fixed, permeabilized and stained as described previously (Knuplez, Curcic, et al., 2020). Phosphorylation of Akt residues in fixed eosinophils was quantified as the increase of fluorescence in the FITC fluorescence channel from unstimulated control.
In vivo chemotaxis
In vivo eosinophil migration was induced by intranasal application of 4 μg of eotaxin-2 (CCL24) in 8-week-old male and female heterozygous IL-5 transgenic (IL-5Tg) mice (BALB/c background). The mice and their littermate controls received oral gavages of miltefosine (20 mg·kg−1 in 0.9% NaCl) or vehicle for three consecutive days before CCL24 application. Bronchoalveolar lavage fluid was collected 4 h after the start of the experiment. Migration of eosinophils was evaluated by flow cytometric counting of highly granular (high side scatter) CD11c−/Siglec-F+ cells, as described previously (Knuplez, Curcic, et al., 2020).
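The gating step described here (high side scatter, CD11c-negative, Siglec-F-positive) amounts to combining boolean masks over per-event intensities. A toy sketch follows; the threshold values are arbitrary illustrations for synthetic events, not real cytometer settings.

```python
import numpy as np

# Toy sketch of the gating logic: count eosinophils as events that are
# highly granular (high side scatter), CD11c-negative and Siglec-F-positive.
# The threshold values below are hypothetical, not real cytometer settings.
SSC_HIGH = 500.0      # hypothetical side-scatter gate
CD11C_NEG = 100.0     # intensities below this count as CD11c-negative
SIGLECF_POS = 300.0   # intensities above this count as Siglec-F-positive

def count_eosinophils(ssc, cd11c, siglec_f):
    """Return the number of events falling inside the eosinophil gate."""
    gate = (ssc > SSC_HIGH) & (cd11c < CD11C_NEG) & (siglec_f > SIGLECF_POS)
    return int(np.count_nonzero(gate))

# Five synthetic events; only the first two satisfy all three criteria.
ssc      = np.array([800.0, 900.0, 200.0, 850.0, 700.0])
cd11c    = np.array([ 50.0,  20.0,  10.0, 400.0,  30.0])
siglec_f = np.array([600.0, 500.0, 700.0, 650.0, 100.0])
n_eos = count_eosinophils(ssc, cd11c, siglec_f)  # -> 2
```

In practice, such masks are applied to events exported from the cytometer (e.g., FCS files), and the gated count per lavage sample is the migration readout.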
Corticosterone measurement in plasma
Corticosterone levels were assessed in plasma of BALB/c mice treated with oral gavages of miltefosine (20 mg·kg−1) once daily for 3 days. A blood sample was collected via cheek bleed 5 h after the first miltefosine application on Day 1, as well as 4 h after the last treatment on Day 3. Corticosterone levels were determined with a specific enzyme immunoassay kit (Assay Designs, Ann Arbor, MI, USA) with a sensitivity of 0.027 ng·ml−1 as previously described (Farzi et al., 2015).
Statistical analysis
The data and statistical analysis comply with the recommendations of the British Journal of Pharmacology on experimental design and analysis in pharmacology (Curtis et al., 2018). Statistical analysis was performed using the GraphPad Prism™ 6 software (GraphPad Software, Inc., CA, USA). Data were normalized to baseline (1 or 100%) of the means of negative control in experiments performed with eosinophils isolated from human donors to reduce interindividual source of variation.
Statistical analysis was only performed for groups where n ≥ 5.
Additional preliminary data (n = 3) on p-Akt phosphorylation in eosinophils were included in the manuscript to suggest a mechanism previously shown for other cell types (Chugh et al., 2008;Ruiter et al., 2003). The group size given for each experiment is the number of independent values (individual human eosinophil donors or mice). Statistical analysis was performed using these independent values.
Data were tested for normality using D'Agostino and Pearson omnibus normality test. If normality was assumed, comparisons among multiple groups were performed with one-way ANOVA or two-way ANOVA. For these analyses, post hoc pairwise comparisons were performed using Bonferroni's multiple comparison test (or Dunnett's multiple comparison test, when comparing samples to the control group), only if a main effect for at least one factor or the interaction between two factors showed statistical significance and if there was no significant variance in homogeneity. Cytokine levels were compared using Mann-Whitney U test. Significance level for the analyses was set to α = 0.05 and significant differences are indicated with the corresponding P value, *P ≤ 0.05.
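The decision pipeline above (normality test, then parametric ANOVA with Bonferroni-corrected pairwise comparisons, or a non-parametric test otherwise) can be sketched with SciPy. The study used GraphPad Prism; the data below are synthetic and the workflow is deliberately simplified.

```python
import numpy as np
from scipy import stats

# Synthetic, fixed example data, normalized so the negative-control mean
# is about 100%; these values are illustrations, not study data.
vehicle  = np.array([ 98, 102, 101,  99, 100, 103,  97, 100, 101,  99,
                      98, 102, 100, 101,  99, 100, 102,  98, 101, 100], float)
ova      = vehicle + 50.0   # stimulated group, shifted upward
ova_milt = vehicle + 20.0   # miltefosine-pretreated, partially suppressed

# 1) D'Agostino & Pearson omnibus normality test (per group).
p_norm = stats.normaltest(vehicle).pvalue

# 2) If normality is assumed: one-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(vehicle, ova, ova_milt)

# 3) Bonferroni-corrected pairwise comparisons, to be run only when the
#    ANOVA main effect is significant: multiply each pairwise p value by
#    the number of comparisons and cap at 1.
pairs = [(vehicle, ova), (vehicle, ova_milt), (ova, ova_milt)]
p_adj = [min(stats.ttest_ind(a, b).pvalue * len(pairs), 1.0)
         for a, b in pairs]

# 4) Outcomes not assumed normal (e.g., cytokine levels): Mann-Whitney U.
p_mw = stats.mannwhitneyu(vehicle, ova).pvalue
```

Note that Prism's Bonferroni and Dunnett procedures use pooled-variance multiple-comparison tests rather than plain pairwise t tests, so the sketch approximates, not reproduces, the published analysis.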
Miltefosine suppresses eosinophil activation in vitro
First, we tested the viability of eosinophils after pretreatment with different concentrations of miltefosine. Importantly, miltefosine (up to 20 μM, in the presence of 1 mg·ml−1 bovine serum albumin) showed no toxic effects on eosinophils (Figure S1).
During the state of allergic inflammation, elevated concentrations of cytokines and chemoattractants in the blood activate eosinophils, which leads to a rearrangement of their actin filaments (the so-called "shape change") (Willetts et al., 2014) and results in an up-regulation of the adhesion molecules integrins (e.g., CD11b/CD18 and Mac-1) on the cell surface (Jia et al., 1999). When human eosinophils were pretreated with miltefosine, we could observe a statistically significant inhibition of their shape change (by approx. 50%) induced by CCL11 stimulation (Figure 1a,b) when using the highest concentration of miltefosine (20 μM). Miltefosine addition did not alter eosinophil shape change in the absence of external stimuli ( Figure S2). When isolated eosinophils were pretreated with 20-μM miltefosine, up-regulation of CD11b was reduced by about 50% (Figure 1c,d).
To determine whether miltefosine has an effect on the chemotaxis of human eosinophils, we performed in vitro chemotaxis assays
Miltefosine ameliorates ovalbumin-induced lung inflammation
Next, we investigated whether the in vitro results obtained with isolated human eosinophils are also relevant in vivo. We first performed Ca 2+ flux assays using mouse bone marrow-derived eosinophils to test whether mouse eosinophils behave similar to human-isolated eosinophils (Figure 4a). Next, we performed an in vivo eosinophilic migration test using IL-5Tg mice. This strain of mice is characterized by eosinophilia due to increased production of IL-5. Together, intranasal eotaxin application in IL-5-primed eosinophils results in abundant eosinophil accumulation in the bronchoalveolar lavage fluid and lungs of animals (Ochkur et al., 2007). We treated IL-5Tg mice for three consecutive days perorally with miltefosine (20 mg·kg−1) (Figure 4c). We used a dosing regimen comparable with other studies in mice testing miltefosine (Figure S5A); however, when BALB/c mice were treated with miltefosine, no increase in neutrophils was observed (Figure S5B). By testing plasma of BALB/c mice for their corticosterone levels, we observed no significant differences at either of the two tested time points (Figure S6A,B).
We next tested the efficacy of miltefosine in an acute model of allergic lung inflammation. Ovalbumin was used as a model allergen to reproduce key features of clinical asthma, such as airway hyperresponsiveness to methacholine (Kumar et al., 2008). The treatment protocol of the model is shown in Figure 5a. We observed that daily peroral treatment with miltefosine markedly reduced the infiltration of several immune cell types into the airways of ovalbumin-challenged wild-type mice. Flow cytometric analysis of the composition of immune cells showed that the number of eosinophils as well as infiltrating T cells, B cells and dendritic cells was reduced by 50% upon miltefosine treatment (Figure 5b). Of note, mice treated with miltefosine showed significantly improved lung resistance and a trend towards improved lung compliance (Figure 5c). In order to test whether a decrease in eosinophil numbers was responsible for the reduction of other immune cells, eosinophil-deficient (Δdbl GATA-1) mice were exposed to the same ovalbumin-induced allergic model. In
DISCUSSION
In the present study, we show for the first time that the Food and Drug Administration (FDA)-approved drug miltefosine inhibits effector functions of human eosinophils. The effects of miltefosine have previously been studied on some other immune cells. Notably, miltefosine was found to inhibit degranulation and antigen-induced chemotaxis of mast cells by modulating lipid rafts and by inhibiting cytosolic PKC (Rubíková et al., 2018). In contrast to our findings with eosinophils, calcium flux in mast cells was apparently not affected by miltefosine pretreatment, indicating cell type-specific differences. However, similar to mast cells, miltefosine led to an inhibition of effector functions and mediator release in eosinophils. In macrophages, miltefosine was also found to modulate cell function (Iacano et al., 2019). Given the fact that TLR-4 stimulation on eosinophils can help polarize macrophages towards pro- or anti-inflammatory phenotypes (Yoon et al., 2019), this finding further supports the evidence that miltefosine may influence the interplay and balance between various immune cell types during the state of inflammation.
It is noteworthy that in all our in vitro experiments, non-toxic concentrations of miltefosine were used to distinguish our results from the non-specific cytolytic effects of the drug. In particular, since homeostatic functions such as tissue remodelling and plasma cell survival (Jacobsen et al., 2012) have recently been attributed to eosinophils, we were mainly interested in inhibiting eosinophil overactivation, to prevent their potential tissue-damaging effector functions. For our in vivo experiments, we used a dosage regimen comparable with other studies testing miltefosine in mice (Bäumer et al., 2010). To test whether the reduced infiltration of other immune cells was secondary to the loss of eosinophils, the ovalbumin model was repeated in eosinophil-deficient Δdbl GATA-1 mice. We discovered that the decreased infiltration of most immune cells was at least partially due to the decreased eosinophil numbers. This is not unexpected, since activated eosinophils are known to attract and activate other immune cell types such as neutrophils (Yousefi et al., 1995) or B cells (Chu et al., 2011). Moreover, eosinophil-derived CCL17 and CCL22 have proven to be crucial in attracting effector T cells in localized allergic inflammation (Jacobsen et al., 2008). Interestingly, we also observed a decrease in dendritic cell numbers. Of further note, eosinophil-derived IFN-γ was discovered to induce airway hyperresponsiveness and lung inflammation even in the absence of lymphocytes (Kanda et al., 2009). Interestingly, IFN-γ was also found to up-regulate several eosinophil effector functions (Ishihara et al., 1997; Takaku et al., 2011) and promote their survival (Fujisawa et al., 1994).
When we examined the composition of immune cells in mouse blood, miltefosine-treated and CCL24-stimulated IL-5Tg animals showed an increased neutrophil count, yet miltefosine-treated BALB/c animals showed no altered neutrophil numbers at baseline. A previous study showed that patients treated with miltefosine exhibited increased levels of the neutrophilic chemokine IL-8 (CXCL8) (Mukhopadhyay et al., 2011). This finding remains to be confirmed in mice. Increased corticosterone levels in mice induced by miltefosine could be another plausible explanation for both increased neutrophil numbers (Liles et al., 1995) and decreased airway inflammation (Suqin et al., 2009). Furthermore, an inverse association between endogenous glucocorticoid and IFN-γ levels was observed in allergic lung inflammation (Suqin et al., 2009). Nonetheless, we observed no significant alterations in corticosterone levels in miltefosine-treated mice. Since eosinophils are one of the primary cells recruited to the sites of leishmania infection (de Oliveira Cardoso et al., 2010) and have been shown to help control parasite load in mice (Watanabe et al., 2004), it might be of interest to further investigate this issue in patients treated with miltefosine. In line with the present study, we have previously shown that saturated lysophosphatidylcholines, which are structurally similar to miltefosine, inhibit eosinophil effector responses (Knuplez, Curcic, et al., 2020; Knuplez, Krier-Burris, et al., 2020; Trieb et al., 2019).
A limitation of our work needs to be noted. Ovalbumin was used as a model allergen in our in vivo studies, although this model fails to completely reflect the aetiology of human asthma and its multi-step developmental process, including environmental factors associated with the disease. Further experiments with other physiologically relevant antigens are needed to validate the relevance of our data in the human disease setting.
In summary, we demonstrate the inhibitory effect of the orphan drug miltefosine on human eosinophils and its anti-inflammatory effect in vivo in a model of allergic inflammation. Our data highlight the potential efficacy of miltefosine or related molecules in the treatment of allergic diseases and other eosinophil-associated disorders.
FUNDING INFORMATION
This study was supported by the Austrian Science Fund (FWF Grants
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. Some data may not be made available because of privacy or ethical restrictions.
Study of Spatio-Temporal Evolution, Integrated Prevention, and Control Measures of COVID-19 in the Yangtze River Delta
Objective: The study analyzes the spatial characteristics of the epidemic. It evaluates the effectiveness of its differentiated prevention and control policies implemented at different stages of the epidemic in the Yangtze River Delta. Methods: The study divided the epidemic into 2 stages and analyzed the spatial evolution characteristics of the COVID-19 epidemic in the region by using Anselin Local Moran’s I and standard deviation ellipse. Results: In the first stage, the high value of confirmed cases was concentrated in the eastern and southern cities. The trajectory of the barycenter showed a V-shaped change characterized by a southward shift followed by a northward fluctuation. In contrast, the second stage was mainly concentrated in Jiangsu Province and Shanghai, and the Barycenter did not change over time. The diversified prevention and control measures enabled ‘zero new cases’ in the Yangtze River Delta within a month. Conclusion: The prevention and control policy implemented in the Yangtze River Delta has worked well. With the global pandemic of COVID-19, it is recommended that other countries follow the example of the Yangtze River Delta, tighten prevention policies, and speed up vaccination to avoid a rebound of the epidemic.
Introduction
The global COVID-19 epidemic is currently at a high level, 1 and many countries must continue to rely on personal health measures to control the disease while promoting COVID-19 vaccination. 2 However, vaccination and control measures are currently not being successfully implemented. The World Health Organization estimates that 500,000 more people would die from COVID-19 in Europe by March 2022 if no action were taken. 3 In contrast, China has now entered a normalized phase of domestic epidemic prevention and control, with nearly 2.4 billion vaccine doses administered, but the continued risk of epidemic importation from abroad, combined with fall and winter influenza and other respiratory infectious disease epidemics, has greatly increased the complexity and difficulty of domestic epidemic prevention and control. 4 The Yangtze River Delta is 1 of China's most economically active regions. After the 2020 COVID-19 epidemic, the region's economy rebounded rapidly, surpassing the pre-epidemic level. Multiple outbreaks occurred in the Yangtze River Delta between December 2019 and November 2021. The most recent outbreak centered on Lukou Airport in Nanjing; its main strain was the Delta variant, which affected several provinces and cities both within and outside the Yangtze River Delta. Because the epidemic's spread and transmission speed varied at different times in the Yangtze River Delta, the region implemented various prevention and control policies at different stages of the outbreak, ranging from city closures to small-scale control, and from vaccination to vaccine booster shots.
This study uses spatial analysis to examine the epidemic's geographical distribution in the Yangtze River Delta from January 2020 to November 2021, and the epidemic policies undertaken in the region during various stages of the epidemic.
Methods
Basic geographic information data served as the base map for our spatial analysis in ArcGIS software (Redlands, California, USA). The city-level administrative division boundaries of the Yangtze River Delta were preprocessed using the WGS-84 coordinate system and obtained via the National Geomatics Center of China.
This study used news from Lilac Garden, Sina News, and Tencent News to analyze policies and problems in the Yangtze River Delta.
Epidemic statistical data were used for spatial analysis. The epidemic data from Dingxiangyuan were gathered using a Python crawling strategy and compared to data from Harvard University's China Data Lab. 5 Based on the characteristics of the epidemic in the Yangtze River Delta, the study divided the outbreak into 2 phases. The first phase, from January 22, 2020, to March 31, 2020, was the outbreak phase when the COVID-19 epidemic was first detected, and the second phase, from July 1, 2021, to October 1, 2021, was the outbreak phase of the Delta strain in the Yangtze River Delta. These 2 phases are the 2 periods of virus transmission in the Yangtze River Delta with many infections. There were no other significant outbreaks in the Yangtze River Delta during the remaining periods; therefore, only the outbreaks during these 2 time periods were considered in this study.
We collected epidemic statistics from January 10, 2020, to November 1, 2021. These statistics include the daily numbers of new cases and deaths in the 2 stages. Our purpose was to analyze the clustering characteristics of the data to reflect the clustering characteristics of the epidemic.
Spatial autocorrelation is a spatial analysis method that can be used to analyze the spatial clustering characteristics of epidemics in a certain region. It can be divided into global autocorrelation and local autocorrelation. 6,7 Local autocorrelation includes local spatial positive correlation and local spatial negative correlation. 8 This study used Anselin Local Moran's I to indicate the spatial variation characteristics of COVID-19 in the Yangtze River Delta. The main parameters of Anselin Local Moran's I include the conceptual spatial model, the measurement construct, and the standardization method.
Inverse distances are best for continuous data: the closer 2 features are in space, the more likely they are to influence each other. Therefore, we chose inverse distance as the conceptual spatial model. Euclidean and Manhattan distances are generally used when measuring spatial distance, but Manhattan distance is more suitable when the dataset has discrete or binary properties. Therefore, we chose Euclidean distance as the measurement construct. The epidemic statistics directly reflect the spatial characteristics of the epidemic; hence, no standardization method was required.
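As a minimal sketch of the computation described above (illustrative only — the study used ArcGIS, and the coordinates and case counts below are made up), Local Moran's I with row-standardized inverse-distance weights can be written as:

```python
import numpy as np

def local_morans_i(values, coords):
    """Anselin Local Moran's I with inverse-distance spatial weights.

    values: case counts per city; coords: (n, 2) Euclidean coordinates.
    Positive I_i marks 'high-high' or 'low-low' clustering around city i,
    negative I_i marks 'high-low' or 'low-high' outliers.
    """
    values = np.asarray(values, dtype=float)
    z = values - values.mean()
    # Inverse-distance weights, zero on the diagonal, row-standardized.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.where(d > 0, 1.0 / np.where(d > 0, d, 1.0), 0.0)
    w /= w.sum(axis=1, keepdims=True)
    m2 = (z ** 2).sum() / len(values)
    return (z / m2) * (w @ z)   # spatial lag of z, scaled by z_i / m2
```

In GIS software, the significance of each local statistic is additionally assessed with a permutation test before labeling a city as a cluster or outlier; that step is omitted in this sketch.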
Anselin Local Moran's I can output a LISA map, which reflects the local indicators of spatial association (LISA). In the LISA map, agglomeration is divided into 4 cases ('high-high,' 'high-low,' 'low-high,' and 'low-low'), each of which identifies a region and its relationship to its neighbors (e.g., 'high-low' means that the center of the area is a high-value cluster, and the surrounding area is a low-value area).
In the meantime, to understand the direction of the central shift of COVID-19, we introduced the standard deviation ellipse (SDE), which can be used to identify the spatial directional characteristics and spread trend of the epidemic and to discuss whether there are spatial shift characteristics. 9 Usually, the SDE has 2 main parameters: weight and ellipse size. The SDE can be generated at 3 levels, with the resulting ellipse containing 68%, 95%, or 99% of the data. We wanted to consider the main areas where the outbreak occurred, so the largest SDE was required, and the weights were confirmed cases and deaths.
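A standard deviation ellipse of this kind can be sketched as follows (a simplified eigendecomposition version; GIS implementations such as ArcGIS apply additional small-sample corrections, and the coordinates and weights here are illustrative):

```python
import numpy as np

def std_dev_ellipse(coords, weights):
    """Weighted standard deviation ellipse (1-SD level).

    coords: (n, 2) projected x/y positions of cities; weights: case counts.
    Returns (center, semi_axes, rotation_deg): the barycenter of the
    distribution, the two ellipse semi-axis lengths (major first), and the
    major-axis orientation in degrees from the x-axis, in [0, 180).
    """
    w = np.asarray(weights, float)
    xy = np.asarray(coords, float)
    center = (w[:, None] * xy).sum(0) / w.sum()
    d = xy - center
    # Weighted covariance of the deviations; its eigenvectors give the
    # ellipse orientation and its eigenvalues the squared semi-axes.
    cov = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(0) / w.sum()
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    semi_axes = np.sqrt(evals[order])
    major = evecs[:, order[0]]
    rotation_deg = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return center, semi_axes, rotation_deg
```

Tracking the returned center across dates reproduces the barycenter trajectories discussed in the Results, while the axis ratio and rotation describe the 'northwest-southeast' elongation of the case distribution.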
Spatial distribution characteristics of COVID-19 in the Yangtze River Delta
The study found that in the first outbreak (Figure 1), confirmed cases in the Yangtze River Delta had spatially clustered distribution characteristics with significant local autocorrelation. Clusters of high confirmed-case counts were primarily distributed in the eastern and southern cities, like Shanghai and Wenzhou, and clusters of low counts in the northern cities, like Xuzhou and Lianyungang. Spatial analysis showed that the high-risk cities in the first round were all related to cases imported from Hubei Province. The cities with high concentrations of confirmed cases have good railway connections to Hubei Province. Figure 2 shows that the regional variance of confirmed cases at different dates in the second epidemic is significant. The spatial clustering area is primarily in Jiangsu Province and Shanghai, indicating significant local autocorrelation. On July 28, Nanjing reported 47 additional confirmed cases, making it the first epicenter of the epidemic, while Zhenjiang reported no new cases, indicating 'high-low' and 'low-high' distributions, respectively. The low-value area is in Lianyungang City. Between August 6 and August 13, 348 new cases were reported in Yangzhou City, making it the epidemic's second epicenter after Nanjing. Other cities in the Yangtze River Delta had no new confirmed cases on August 27, while Yangzhou and Shanghai had a few new cases, making them the 'high-low' gathering regions. Spatial analysis shows that the outbreak mainly affected 3 cities: Nanjing, Yangzhou, and Shanghai. The cases in the Yangtze River Delta region were all related to imported cases, and the main transmission chain involved Lukou Airport.
Gravity shift feature of COVID-19 in the Yangtze River Delta
The study created the standard deviation ellipse, whose center reflects the barycenter of the epidemic's spatial distribution. 10 The trajectory of the barycenter of confirmed cases in the first round of the epidemic showed a V-shaped characteristic of moving southward and then northward, and, according to the trend of the ellipse, the spatial distribution of the number of cases tended to move southward and westward (Figure 1f). The confirmed-case distribution shows an evident regional clustering phenomenon with a 'northwest-southeast' spatial distribution pattern. In general, confirmed cases were more concentrated in the central and southern cities of the Yangtze River Delta region, whereas northern cities fared better; most confirmed cases were imported from outside the municipality, with few intra-city outbreaks and inter-city transfers. Most of the strains involved in this cycle of epidemics were the original strain.
In the second stage, the barycenter of confirmed cases mainly remained constant (Figure 2f). The confirmed cases were grouped in Jiangsu and Shanghai, with no cases in Zhejiang Province, indicating a 'northwest-southeast' spatial distribution pattern. The number of confirmed cases in the Yangtze River Delta was relatively concentrated, with the epidemic's epicenter in Nanjing in July and Yangzhou in August, while Zhejiang Province had virtually no cases. The majority of confirmed and associated cases were imported from other countries. The Delta strain made up most cases in this cycle of outbreaks.
Discussion
According to the characteristics of the epidemic in the Yangtze River Delta, we divided the COVID epidemic into 2 stages. First, we used Anselin Local Moran's I to analyze the aggregation of confirmed cases in the 2 stages. Clusters of high confirmed cases are primarily distributed in the eastern and southern cities, like Shanghai and Wenzhou, with clusters of low confirmed cases in the northern cities, like Xuzhou and Lianyungang. In the second stage, spatial analysis shows that the outbreak had mainly affected 3 cities: Nanjing, Yangzhou, and Shanghai.
This study showed that places with the highest risk were those with high population inflow from Hubei Province in the first outbreak. Similar studies show results consistent with this observation.
A study conducted by Lei et al. showed that the places with the highest risk are those with high population inflow from Wuhan and Hubei Province. 11 The majority of the confirmed and related cases in the second epidemic were imported from outside China, and the epidemic strain was Delta; the National Health Commission of China has confirmed this. 12 The spatial clustering area is primarily in Jiangsu Province and Shanghai.
Second, we used the SDE to analyze the distribution direction of confirmed cases at different times to infer the transfer direction of the epidemic center. The trajectory of the barycenter of confirmed cases in the first stage showed a V-shaped characteristic of moving southward and then northward. In the second stage, the barycenter of confirmed cases mainly remained constant.
Disaster Medicine and Public Health Preparedness
Finally, the epidemic prevention and control policies in the Yangtze River Delta were as follows. In the first outbreak, the Yangtze River Delta established a joint prevention and control mechanism among cities during the epidemic. 13 Lockdown and social distancing were common measures. In the second outbreak, the policies were different. Immunization was hastened, and vaccine booster injections were allowed. Small-scale controls, such as street or neighborhood closures and quarantines, were implemented according to risk levels to stop the disease from spreading. For epidemic areas, a differentiated nucleic acid testing policy was introduced, with testing every 3 days for epidemic areas inside the city until the requirement of no new cases for 21 days was met, and weekly testing for non-epidemic areas within the city, with a minimum of 3 tests. These policies brought the outbreak under control within months.
In this study, we compared the different policies applied to the 2 stages of the epidemic in the Yangtze River Delta. Our comparison may provide new thinking on epidemic prevention and control for other countries or regions.
Conclusion
L Yang and H Ren

The study found that the combined prevention, control, and zero-transmission policies established in the Yangtze River Delta region were effective, based on the spatial distribution features and prevention and control policies of the 2 rounds of epidemic in the Yangtze River Delta. Given the current global crisis involving the COVID-19 pandemic, we recommend that other countries follow the example of the Yangtze River Delta region and implement the zero-COVID policy.
Diffuse Optical Tomography Provides a High Sensitivity at the Sensory-Motor Gyri: A Functional Region of Interest Approach
Diffuse optical tomography (DOT) technology enables a differentiation between oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) in the sensory and motor cerebral gyri, resulting in greater sensitivity for cerebral activation compared to functional magnetic resonance imaging (fMRI). Here, we introduce a novel approach where functional regions of interest (ROIs) are created based on the specific signal behavior observed in DOT measurements, in contrast to the conventional use of structural ROIs obtained from anatomical information. The generation of cerebral activation maps involves using the general linear model (GLM) to compare the outcomes obtained from both the functional- and structural-ROI approaches. DOT-derived maps are then compared with maps derived from fMRI datasets, which are considered the gold standard for assessing functional brain activity. The results obtained demonstrate the effectiveness of employing functional ROIs to improve the spatial localization of functional activations in the sensory and motor cerebral gyri by leveraging the neural synchronization data provided by DOT. Furthermore, this methodology simplifies data processing, where anatomical differences can pose challenges. By incorporating functional ROIs prior to GLM application, this study offers enhancements to DOT analysis techniques and broadens its applicability.
Introduction
Optical measurements play a crucial role in examining brain physiology, particularly in exploring the connection between neural activity and changes in hemodynamics. In neuroimaging studies, diffuse optical tomography (DOT) has been extensively utilized to detect functional alterations related to visual, motor, somatosensory, or cognitive stimuli [1]. Previous research has introduced a novel approach to processing DOT data, treating them as functional magnetic resonance imaging (fMRI) volumes. This approach was successfully validated in the prefrontal cortex using a cognitive paradigm [2] and in sensory and motor areas [3]. The DOT technique also enables the precise measurement of hemodynamic changes during intricate brain processes, such as motor imagery, which reveals more subtle cerebral activations compared to motor execution [4].
The presence of noise from non-target anatomical regions, such as the scalp or the external layers of the skull, poses a significant challenge in DOT measurements. These signals introduce short-term variability that affects spatial and temporal changes throughout the brain and scalp [5]. To overcome this challenge, various methods have been employed that do not rely on assumptions about the hemodynamic model. Approaches like principal component analysis (PCA) or independent component analysis (ICA) have been utilized to generate cerebral activation maps [6] and reduce noise originating from these external layers. Similar to common practices in neuroimaging studies, a widely used method is region of interest (ROI) analysis, which relies on the structural anatomy of the brain. This approach involves selecting specific regions within the brain based on their anatomical landmarks or predefined brain atlases. By focusing on these regions of interest, researchers can analyze the DOT data specifically within these areas, allowing for a more targeted and accurate examination of the functional activations. This structural-ROI-based analysis in DOT is an approach commonly employed in neuroimaging studies, where researchers define specific regions of interest based on structural anatomy to investigate brain activity and connectivity patterns [7], particularly when a magnetic resonance (MR) device is available. However, a notable drawback of using DOT in human brain studies is the reliance on a structural MRI image for the accurate localization of functional activations within the anatomy. This requirement for an MRI scan limits the usability of DOT techniques in situations where an MRI scan is not easily accessible.
Here, we investigate whether an ROI analysis can be constructed based on intrinsic signal/brain characteristics across voxels rather than relying on a structural ROI. To test this hypothesis, we analyzed DOT data recorded from the well-studied sensory-motor area, where the spatial locations of activations are known. The data were collected from a group of healthy subjects while they performed finger movements with their right hands. The cerebral maps obtained from DOT were compared with the maps derived from fMRI data, providing a basis for comparison and validation.
Subjects and Experimental Design
A total of nine participants who were right-handed and had no history of neurological disease were included in the study. Prior to the experiment, the participants were given a thorough explanation of the procedures and purpose of the study, and they provided written consent to participate. The study received ethical approval from the local ethics committee at the University of La Laguna with approval code CEIBA2015-0153 and was conducted in accordance with the guidelines stated in the Declaration of Helsinki.
A block design began with a 16 s resting condition, which consisted of the observation of a static word (Stop) in the center of the screen. The instructions appeared on the screen during all periods of each task block. The execution condition consisted of the performance of the opposition movement between the thumb and the rest of the fingers for 16 s, paced by a metronome set at 3 Hz. In order to stabilize physiological fluctuations and ensure steady-state magnetization of the tissue during fMRI, a 20 s dummy time was incorporated prior to each motor execution. The visual presentation of the experimental paradigm was facilitated through the use of Presentation software, developed by Neurobehavioral Systems, Inc., based in Albany, California. A total of twenty-four task blocks for each condition were performed with both the DOT and fMRI devices. All instruments and facilities belonged to the Magnetic Resonance Service for Biomedical Research (Servicio de Resonancia Magnética para Investigaciones Biomédicas-SRMIB SEGAI) at the University of La Laguna, Santa Cruz de Tenerife, Spain.
Data Acquisition and Preprocessing in MRI
Functional magnetic resonance images were collected using a 3.0 T Signa Excite HD scanner manufactured by General Electric (GE) Healthcare. For precise anatomical localization, a T1-weighted volume was acquired with the following parameters: repetition time (TR) of 6 ms, echo time (TE) of 1 ms, flip angle of 12°, matrix size of 256 × 256 pixels, in-plane resolution of 0.98 × 0.98 mm, spacing between slices of 1 mm, slice thickness of 1 mm, and no interslice gap. The acquired anatomical slices encompassed the entire brain and were obtained parallel to the anterior-posterior commissure. During the motor paradigm, a series of 385 T2-weighted echo-planar imaging (EPI) volumes were obtained. The EPI sequence parameters included 36 axial slices covering the entire head, with a field of view of 25.6 mm, slice thickness of 4 mm, interslice gap of 1 mm, matrix size of 64 × 64, flip angle of 90°, TR of 2 s, and TE of 22.1 ms.
The preprocessing of fMRI volumes was conducted using Statistical Parametric Mapping (SPM12) software, developed by The Wellcome Trust Centre for Neuroimaging at University College London. The following steps were applied: realignment to correct for motion artifacts, slice timing correction, registration with the T1-weighted structural image, and transformation into the standard anatomical space of the Montreal Neurological Institute (MNI). To suppress noise and account for residual differences in functional and gyral anatomy, the EPI images were subjected to isotropic smoothing with an 8 mm full-width half-maximum kernel. Additionally, a high-pass filter with a cutoff period of 64 s was used to eliminate low-frequency noise associated with breathing and pulse signals [8].
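The high-pass step can be sketched with a discrete cosine basis, which is the same idea SPM uses for drift removal (a simplified sketch; SPM's exact basis construction and regressor count may differ, and the TR and cutoff values come from the protocol above):

```python
import numpy as np

def dct_highpass(y, tr, cutoff_s=64.0):
    """Remove slow drifts with period >= cutoff_s from one voxel's time
    series by regressing out a discrete cosine basis and returning the
    residual (sketch of an SPM-style high-pass filter)."""
    y = np.asarray(y, float)
    n = len(y)
    # Regressor j has period 2*n*tr/j seconds; keep every j down to the cutoff.
    k = int(np.floor(2.0 * n * tr / cutoff_s))
    t = np.arange(n)
    X = np.cos(np.pi * np.outer(t + 0.5, np.arange(1, k + 1)) / n)
    X = np.column_stack([np.ones(n), X])   # intercept + drift regressors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta
```

With TR = 2 s and a 64 s cutoff, slow scanner drifts are removed while task-frequency fluctuations (16 s blocks) pass through largely untouched.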
Data Acquisition and Preprocessing for DOT Measurements
To acquire the DOT data, a DYNOT 232 instrument manufactured by NIRx Medizintechnik GmbH in Berlin, Germany, was utilized. The instrument employs continuous-wave measurements and operates with two laser sources at wavelengths of 760 nm and 830 nm. The measurements were performed in a time-multiplexed scanning fashion with a sampling rate of 1.8 Hz. For light transmission and detection, optical fibers (optodes) were used, allowing near-infrared (NIR) light to travel to and from the DOT device. In this study, a total of thirty-two optical fibers were employed to measure hemodynamic changes in the contralateral cerebral hemisphere, specifically the left side. These optical fibers were configured as source-detector pairs, enabling the establishment of 1024 optical channels to capture and analyze the changes in the measured light intensity. The optodes were arranged in a rectangular grid at a distance of 1 between each of them, covering the C3 position referring to the EEG 10-20 system [9] (Figure 1a).
Optical channels were preprocessed following the approach of prior studies, which includes the following: (1) optical channels were filtered using Bayesian filtering to remove physiological noise [10]; (2) optical fiber grid positions on individual heads were marked for a posterior translocation from individual space to the precalculated forward-model space from the BrainModeler tool of NIRx NAVI imaging. The BrainModeler tool utilizes finite element models (FEMs) [11] to describe uniform optical properties (µa = 0.06 cm−1, µs′ = 10 cm−1). This tool defines the source-detector pair placement on the head's surface and the internal optical properties, creating a forward model that characterizes changes in boundary data due to absorption variations in the tissue for each channel-node combination. A submesh was selected for the cerebral hemisphere, depending on the positions of the fiber grid. The left hemisphere's submesh comprised 4518 nodes and 19,573 tetrahedrons, with dimensions of 7.65 cm (width) × 7.05 cm (height) × 7.20 cm (thickness), as depicted in Figure 1b. (3) DOT volume reconstruction employed the normalized difference method [12], which establishes a connection between surface head measurements and changes in the internal optical properties relative to a reference medium, utilizing the perturbation approach [13]. Alterations in absorption at two distinct wavelengths were leveraged to generate reconstructed images representing relative concentrations of oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) with the aid of extinction coefficients for both wavelengths [14]. Reconstructing DOT images presents a challenge, as it involves the inversion of the weight matrix J, which becomes an ill-posed problem due to the significant attenuation of NIR light with increasing depth. To address this, the weight matrix is inverted using a truncated singular value decomposition algorithm, with the truncated number of singular values determined by the minimum description length as a selection criterion. A total of 1397 DOT volumes, each of size 64 × 64 × 64 voxels, were reconstructed for each hemoglobin state, HbO and HbR.
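The two reconstruction steps named above — truncated-SVD inversion of the weight matrix J and conversion of dual-wavelength absorption changes into HbO/HbR — can be sketched as follows (the extinction coefficients are placeholder values for illustration, not those used by the NAVI software, and the MDL-based rank selection is replaced by an explicit rank argument):

```python
import numpy as np

def truncated_svd_pinv(J, k):
    """Pseudo-inverse of the sensitivity (weight) matrix J, keeping only
    the k largest singular values to regularize the ill-posed inversion."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    inv_s = np.zeros_like(s)
    inv_s[:k] = 1.0 / s[:k]
    return Vt.T @ (inv_s[:, None] * U.T)   # V diag(1/s_trunc) U^T

# Hypothetical extinction coefficients [eps_HbO, eps_HbR] at 760 and 830 nm
# (placeholder values for illustration only).
E = np.array([[1.4, 3.8],   # 760 nm
              [2.3, 1.8]])  # 830 nm

def unmix_hemoglobin(dmua_760, dmua_830):
    """Convert absorption changes at the two wavelengths into relative
    HbO/HbR concentration changes by inverting the 2x2 extinction matrix."""
    dmua = np.stack([dmua_760, dmua_830])   # shape (2, n_voxels)
    return np.linalg.solve(E, dmua)         # row 0: dHbO, row 1: dHbR
```

Truncating the singular spectrum discards the poorly-determined deep-tissue components that would otherwise amplify noise, at the cost of some spatial resolution.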
Functional Region of Interest (Functional-ROI)
Once the DOT and fMRI data were preprocessed and normalized to standard MNI space, the functional-ROI was calculated. This calculation is based on the assumption that brain regions involved in a particular function tend to exhibit similar temporal fluctuations in neural activity [15]. This implies that neighboring voxels within a brain region should exhibit similar patterns of activity over time. Therefore, it provides information about the coherence of neural activity within a specific brain region. The time series for each voxel within the DOT volume is obtained by extracting the hemodynamic (BOLD, HbO and HbR) signals, which reflect hemodynamic changes related to neural activity. Thus, for a voxel time series V_i, i = 1, ..., n, the rank R(t) is defined as: where C is the number of neighbors (7 voxels) of the voxel, and n_i is the mean across its neighbors at the ith time point. Then, Kendall's coefficient is used to measure the similarity of the time series among neighboring voxels within the ROI. The Kendall coefficient of concordance (KCC) is a statistical measure of how well the time-series data within a cluster of neighboring voxels are synchronized or correlated. In other words, the KCC helps assess the homogeneity or regularity of neural activity within a particular region of the brain [16]. A KCC value of 0 indicates no concordance among the time series of neighboring voxels. This implies a lack of synchronization in neural activity in that region, suggesting that the activity patterns are dissimilar and irregular. Meanwhile, a KCC value of 1 indicates perfect concordance among the time series of the neighboring voxels. This signifies that the time-series data are highly synchronized, indicating a highly regular and coordinated pattern of neural activity in that brain region. Finally, the obtained KCC values are transformed into Z-scores to allow for statistical comparisons based on a one-sample t-test (family-wise error rate, FWER, threshold p < 0.001). As a result, the statistical maps of the group analysis, which spatially represent the synchronization of the HbO, HbR, and BOLD signals within the reconstructed DOT volume, serve as functional-ROIs during the general linear model (GLM) calculation.
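The KCC computation described above can be sketched in a few lines. The following is a hypothetical illustration (not the authors' implementation): each voxel's time series is ranked over time, and Kendall's W is computed over a voxel and its neighbors, so that perfectly synchronized series yield 1 and fully discordant series yield 0; tie correction is omitted for brevity.

```python
def rank_series(series):
    # Rank the time points of one voxel's time series from 1..n
    # (ties are ignored for simplicity in this sketch).
    order = sorted(range(len(series)), key=lambda i: series[i])
    r = [0] * len(series)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def kendalls_w(voxel_timeseries):
    """Kendall's coefficient of concordance (KCC) over a voxel and its
    neighbors: 1 = perfectly synchronized time series, 0 = no concordance."""
    m = len(voxel_timeseries)            # number of voxels (centre + neighbors)
    n = len(voxel_timeseries[0])         # number of time points
    rank_sums = [0.0] * n
    for ts in voxel_timeseries:
        for i, r in enumerate(rank_series(ts)):
            rank_sums[i] += r
    mean_rank = sum(rank_sums) / n
    s = sum((r - mean_rank) ** 2 for r in rank_sums)
    return 12.0 * s / (m * m * (n ** 3 - n))

# Three perfectly synchronized series give a KCC of exactly 1.0;
# two perfectly opposed series give a KCC of 0.0.
synced = kendalls_w([[1, 2, 3, 4], [10, 20, 30, 40], [2, 4, 6, 8]])
opposed = kendalls_w([[1, 2, 3, 4], [4, 3, 2, 1]])
```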
General Linear Model (GLM) for Both fMRI and DOT Data Sets
The approach developed by the authors, treating DOT volumes as if they were fMRI volumes using the canonical SPM 12 software, was used. In order to improve the signal-to-noise ratio, both the fMRI and DOT volumes underwent filtering by applying a high-pass filter based on the discrete cosine transformation. The filter had a cutoff period of 64 s. The design matrix utilized in the analysis included two regressors: one for the rest condition and another for the task condition, defined as x (Equation (2)). These regressors were convolved with the canonical hemodynamic response function (HRF). It is worth noting that the convolutions were reversed to visualize the negative response corresponding to the HbR signal. Then, the estimation using the GLM, y = xβ + ε, was performed, where y represents the hemodynamic response, which can be BOLD, HbO or HbR, x denotes the regressors that explain the data, β represents the coefficients indicating the extent to which each regressor explains the data, and ε signifies the variance in the data that remains unexplained by the regressors (i.e., noise). After completing the estimation, cerebral activation maps were created through a fixed-effects analysis. It is important to note that a fixed-effects analysis does not facilitate making general population-level conclusions but is well suited for this specific sample. To specifically examine the motor execution condition compared to the rest condition, a contrast was computed. This resulted in the production of t-contrast maps for each dataset in both the DOT and fMRI modalities.
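The GLM estimation step above amounts to ordinary least squares. The following minimal sketch (hypothetical data; an illustrative stand-in for SPM's estimation, not the authors' code) fits a two-regressor design, rest and task box-cars, by solving the normal equations directly:

```python
def glm_fit(y, X):
    """OLS for a two-column design matrix X (list of [x_rest, x_task] rows):
    solves the normal equations (X'X) beta = X'y by explicit 2x2 inversion,
    estimating how much each regressor explains the response y."""
    a = sum(r[0] * r[0] for r in X)
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X)
    g0 = sum(r[0] * yi for r, yi in zip(X, y))
    g1 = sum(r[1] * yi for r, yi in zip(X, y))
    det = a * d - b * b
    return (d * g0 - b * g1) / det, (a * g1 - b * g0) / det

# Alternating rest/task box-car regressors and a noise-free response
# built from known coefficients, so the fit should recover them exactly.
X = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 0], [0, 1]]
true_beta = (0.5, 2.0)
y = [true_beta[0] * r[0] + true_beta[1] * r[1] for r in X]
beta_rest, beta_task = glm_fit(y, X)
```

With noise added to y, the recovered betas would only approximate the true ones, and their uncertainty is what the t-contrast maps quantify.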
Results
Cerebral maps in standard (MNI) space, generated through GLM analysis incorporating both structural and functional-ROIs, were visualized using XjView 8.1 for the measurements obtained from DOT and fMRI.
GLM Analysis for the fMRI and DOT Datasets Using Structural-ROI
Previous research has revealed that during finger-to-thumb opposition movements, the primary sensorimotor cortex (SMC) exhibits contralateral activation, which extends throughout the pre- and postcentral gyri. A GLM analysis using a structural-ROI (see Figure A1) was calculated to validate the results of the following analysis steps using the same dataset. As expected, Figure 2 shows the cerebral activations across the left pre- and postcentral gyri, covering the following Brodmann areas (BAs): BA2 and BA3, which correspond to the primary somatosensory cortex, and BA4, related to the motor cortex (M1).
GLM Analysis for the fMRI and DOT Datasets Using the Functional-ROI
The resulting t-maps robustly reveal cerebral activations in the contralateral pre- and postcentral gyri, specifically in Brodmann areas 1, 2, 3, 4, and 6, for both HbO and HbR (see Figure A2). Compared to the GLM based on a structural-ROI, the GLM using a functional-ROI seems to be less sensitive to constant event-related hemodynamic responses. However, its major advantage is the ability to detect unpredicted (event-unrelated) hemodynamic responses that the GLM structural-ROI method itself failed to identify (see Figure A2).
Unpredicted hemodynamic responses combined with event-related hemodynamic responses help in understanding the high complexity of human brain function, reaching the highest spatial resolution for loci at the gyri level (Figure 3).
Frequency Domain Analysis
The functional-ROI analysis relies on assessing the temporal homogeneity of a voxel and its neighboring voxels, independent of the hemodynamic model employed by the GLM. Consequently, the signal dynamics of these voxels were analyzed using both structural and functional-ROIs for all signals. Figure 4 shows a frequency analysis revealing a main peak at 0.03 Hz (corresponding to the task, which lasted 16 s) that is well represented by the GLM analysis. However, the magnitude at the task frequency (0.03 Hz) is also increased in the GLM analysis when the functional-ROI is applied (max 0.35) (Figure 4b), in contrast to the structural-ROI GLM analysis (max 0.2) (Figure 4a). A higher magnitude indicates stronger neural activity in the area being measured. A structural-ROI covers a broader anatomical region in the brain and does not distinguish between specific functional areas, encompassing all the voxels within that anatomical zone. In contrast, a functional-ROI is more targeted, containing only the voxels actively engaged in the task. The strong correlation between the functional-ROI and neural activity indicates its precise representation and capture of neural processes.
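The frequency analysis above can be reproduced with a plain discrete Fourier transform. The sketch below uses a synthetic, hypothetical signal (the sampling rate and length are assumptions, not the study's data) and recovers the dominant frequency of a slow oscillation comparable to the ~0.03 Hz task peak:

```python
import cmath
import math

def power_spectrum(x, fs):
    """Power of each positive-frequency DFT bin of a demeaned signal x
    sampled at fs Hz; returns (frequency_hz, power) pairs."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]                      # drop the DC component
    spec = []
    for k in range(1, n // 2 + 1):
        coeff = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        spec.append((k * fs / n, abs(coeff) ** 2 / n))
    return spec

# Synthetic block-design-like signal: one cycle every 32 s sampled at 1 Hz,
# i.e. a 0.03125 Hz oscillation, close to the 0.03 Hz task peak reported.
fs, n = 1.0, 128
signal = [math.sin(2 * math.pi * (4 / n) * t) for t in range(n)]
peak_freq = max(power_spectrum(signal, fs), key=lambda p: p[1])[0]
```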
Images Analysis
Figure 5 shows that optical imaging combined with the utilization of a functional-ROI is capable of detecting subtle variations in the lateral distribution of activation foci. Moreover, these brain regions are situated within deep sulcal structures, posing challenges for optical approaches to access them. The loci distributions for both the GLM using a structural-ROI and the GLM using a functional-ROI lie along the sensorimotor and motor cortices.
Although the activation patterns show the strongest qualitative similarities in both methods, consistent differences worth mentioning were also observed. The GLM using a functional-ROI shows the majority of voxels distributed across the postcentral gyri (BA1, BA2 and BA3), which cover the primary somatosensory cortex, for both the DOT and fMRI measurements. This fact matches prior fMRI studies of finger tapping, which showed higher regional homogeneity in the sensory cortex than in the motor cortex itself [17]. The premovement sensory cortex shows a slight delay in encoding information about the impending activity of the forelimb muscles compared to the motor cortex. As a result, the sensory cortex receives information regarding motor output before the arrival of sensory feedback signals. This suggests that sensory processing engages in the real-time processing of somatosensory signals through interactions with anticipatory information [18].
Discussion
Finger-tapping tasks are widely utilized in neuroimaging studies to investigate somatosensory and motor functions. This choice is primarily due to the higher amplitude of cerebral activations observed during finger tapping compared to other paradigms, such as cognitive tasks. Additionally, the spatial distribution of cerebral activations elicited by finger tapping is well established and reproducible.
Firstly, creating a functional-ROI, in contrast to using a structural-ROI, is not based on anatomy to mask the background signals that are mixed with the external layers (skull and scalp). The results presented here demonstrate that applying a functional-ROI to DOT volumes enables the attainment of precise cerebral activation information at the level of the gyri. This approach effectively eliminates extracerebral signals and does not rely on structural anatomy, which can be particularly challenging in situations where MRI devices are unavailable. A functional-ROI analysis may reveal shared neurovascular coupling mechanisms involved in both synchronized spontaneous and task-related activities. Here, we demonstrate the strong reliability of functional-ROI analysis when applied to DOT volumes by comparing it with fMRI volumes.
As we expected, the outcomes obtained from the GLM analysis using a structural-ROI (Figure 1) align with previous research conducted on both fMRI and DOT datasets [4]. These studies have specifically focused on the sensory-motor cortices, namely the pre- and postcentral gyri. Furthermore, the functional-ROI analysis (Figure A2) also aligns with previous fMRI studies [17,19] that have investigated sensorimotor areas during finger movements.
Secondly, the results obtained from the GLM analysis using a functional-ROI reveal detailed cortical activation patterns in the pericentral motor and somatosensory cortices during task performance (Figure 5). Previous fMRI studies have demonstrated that these functional maps bear a striking similarity to Penfield's electrophysiological maps, suggesting that non-invasive techniques can access the homuncular organization in healthy adults [20,21]. The findings presented in this study validate similar approaches utilizing high-density optical imaging [18]. By combining a functional-ROI approach with a model-driven analysis (GLM), it becomes possible to accurately localize the major cerebral activations in the pre- and postcentral gyri across a group of subjects while they perform contralateral movements, using both neuroimaging technologies. This methodology yields a spatial distribution of activations that aligns with the anatomical homuncular organization in the pre- and postcentral gyri. It is important to note that in this study, individual finger movements cannot be distinguished. However, the results demonstrate the spatial localization of general finger movements, which is consistent with a recent study presenting an integrated-isolate model of action and motor control [22].
A notable inconsistency in this study is the dissimilarity in spatial distribution patterns between DOT and fMRI, which exhibit differences in voxel activation. These inconsistencies in hemodynamic behavior are explained by neurovascular coupling. Initially, when neuronal activity begins, there is an increase in HbR due to the initial consumption of oxygen [23]. If neuronal activity persists, it triggers a vasodilation process [24], during which HbR is washed out by the arrival of oxygenated blood from the broader vasodilation occurring in neighboring regions. It is worth noting that both DOT and fMRI techniques measure hemodynamic changes but do so using different signals and sampling rates. Moreover, in this scenario, the issue might arise from the utilization of a standardized finite element (FE) mesh. The employment of a generic head model for the forward light modeling could necessitate the alignment of subject-specific MR anatomy with the head model's MR anatomy to pinpoint the optodes' positions accurately. This process may result in diminished spatial accuracy of the derived activation foci due to variations in brain structures and in the thickness of extracerebral tissue.
Finally, DOT is an imaging technique specialized in providing detailed insights into tissue composition, including parameters such as oxygenation and variations in chromophore concentrations like HbO and HbR. However, it should be noted that DOT does not inherently offer direct information about blood flow dynamics, which is a capability of Speckle Contrast Optical Tomography (SCOT) [25]. Although both SCOT and DOT fall within the realm of optical imaging, they serve distinct purposes. SCOT is engineered for the real-time monitoring of blood flow and perfusion, making it the method of choice for applications where dynamic vascular assessments are paramount [26]. Conversely, DOT is a versatile tool that excels in the comprehensive study of tissue composition, oxygenation levels, and a broad spectrum of physiological and pathological processes.
Conclusions
The fine-grained resolution of functional cortical activations presented here, along with several advantages such as non-invasiveness and safety, makes this approach suitable for specific clinical applications where other imaging methods may not be feasible. These applications include patients with medullary lesions, brain injuries, or muscle atrophy, allowing them to interact with their environment, especially during movements. Additionally, the potential applications of optical technology in monitoring hemodynamic changes for controlling robotic structures, particularly in the context of brain-computer interfaces (BCIs) and exoskeletons, are noteworthy. Moreover, this approach can provide valuable insights, particularly when animal models are employed to test and develop new technologies. Animal models may have limited access to neuroimaging tools for creating structural anatomical ROIs, making a functional-ROI based on neural synchronization a more accessible and convenient option when working with DOT technology. Considering the less invasive nature of optical technology compared to electrophysiological microarray implants, future research should prioritize the design of brain implants based on optical technology for recording inputs from the sensorimotor cortex.
Figure 1.
Figure 1. Finite element model (FEM) selection. (a) Atlas with an FEM (blue) covering the sensorimotor cerebral area C3. (b) Localizations of optical fibers (circles) on the boundary. Red dots correspond to source-detector pairs. Yellow dots correspond to the EEG 10-20 system.
Figure 2.
Figure 2. The t-contrast maps of cerebral activation for the motor execution > rest contrast measured by the fMRI and DOT devices in a subject group (N = 9) for (a) BOLD, (b) HbO and (c) HbR signals in coronal and axial views. All results were mapped onto a standard space (MNI). FWER threshold p < 0.05 at the voxel level for all signals.
Figure 3.
Figure 3. The t-contrast maps of cerebral activation using the functional-ROI for the motor execution > rest contrast measured by the fMRI and DOT devices in a subject group (N = 9) for (a) BOLD, (b) HbO and (c) HbR signals in coronal and axial views. All results were mapped onto a standard space (MNI). FWER threshold p < 0.05 at the voxel level for all signals. The color bar depicts the signal changes.
Figure 4.
Figure 4. Power spectrum for the time series of the BOLD (green lines), HbO (red lines) and HbR (blue lines) signals in the cerebral activation maps generated by (a) GLM using a structural-ROI, (b) GLM using a functional-ROI. Vertical axis: normalized power spectrum; horizontal axis: frequency (Hz).
Figure 5.
Figure 5. T-contrast maps measured by the DOT and fMRI devices in a subject group (N = 9) on the left hemisphere, localized on the postcentral gyri (upper figure) and the precentral gyri (bottom figure). All results were mapped onto a standard space (MNI) and statistically analyzed by GLM using a structural-ROI (left images) and GLM using a functional-ROI (right images). FWER threshold p < 0.05 at the voxel level for the motor execution > rest contrast. BOLD (green), HbR (blue) and HbO (red) signals are displayed in coronal view.
Figure A2.
Figure A2. Unpredicted hemodynamic response. Statistical maps indicating t-test (FWER p < 0.001) results of the z-score of the amplitude of fluctuation differences for (a) BOLD, (b) HbO and (c) HbR signals for the group of subjects (N = 9) during the contralateral finger movements in coronal and axial views. All results were mapped onto standard space (MNI). | 2023-11-29T16:17:32.962Z | 2023-11-27T00:00:00.000 | {
"year": 2023,
"sha1": "c0801f7c5e591cfeb7ceb6077414583a498c7e71",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/13/23/12686/pdf?version=1701051172",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ec0e1ca5603e6199c091516c76237fcf827138a8",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
55337032 | pes2o/s2orc | v3-fos-license | Low Osmolar Oral Rehydration Solution (ORS) for Treating Diarrhea in Children: A Systematic Review and Meta-Analysis
Context: Standard WHO-ORS reduces dehydration but does not reduce stool volume or the duration of diarrhea. Low osmolar ORS produces maximal water absorption. This meta-analysis was conducted to evaluate the efficacy of low osmolar ORS in comparison to standard WHO-ORS. Evidence acquisition: A systematic review and meta-analysis of Randomized Controlled Trials (RCTs) comparing the efficacy of low osmolar ORS and standard WHO-ORS in childhood diarrhea was carried out. RCTs were searched in PubMed, Cochrane CENTRAL, DOAJ, Google Scholar and Google. The data were extracted in Excel and entered in Review Manager 5.3 for calculation of effect sizes. Results: The outcome of stool output was reported in 9 trials. Reduced osmolarity ORS resulted in significantly reduced stool output as compared with standard WHO-ORS (pooled standardized mean difference -0.44, 95% CI -0.72 to -0.15). Information for the outcome of duration of diarrhea was available from 6 trials. The pooled standardized mean difference was -0.21 (95% CI -0.79 to 0.37), suggesting that reduced osmolarity ORS did not have a significant effect on the duration of diarrhea as compared to standard WHO-ORS. The outcome of need for intravenous fluid therapy was reported in 8 trials. The meta-analysis revealed that reduced osmolarity ORS, when compared to WHO standard ORS, was associated with fewer unscheduled intravenous infusions (Odds Ratio 0.62, 95% CI 0.47 to 0.83). The meta-analysis for the outcome of vomiting, reported in 5 clinical trials, showed that children treated with low osmolar ORS were less likely to vomit than children treated with standard WHO-ORS (Odds Ratio 0.74, 95% CI 0.57 to 0.97). Conclusion: Low osmolar ORS, when compared to standard WHO-ORS, is associated with reduced stool output, a reduced need for unscheduled intravenous infusion and fewer episodes of vomiting. However, there was no significant difference in the duration of diarrhea.
stool output, vomiting episodes and duration of diarrhea through meta-analysis of randomized controlled trials.
Evidence Acquisition
Criteria for considering studies for review: Studies: Randomized controlled trials were included in this review (quasi-randomized trials were excluded). Participants: Children presenting with a complaint of diarrhea were included in this review. Adults were excluded from this review, as the authors opined that ORS distribution in public health facilities is mainly concentrated on children. Interventions: Trials comparing low osmolar ORS (osmolarity 250 mmol/L or less) with standard WHO-ORS (90 mmol/L sodium, 111 mmol/L glucose, total osmolarity 311 mmol/L). Outcomes: Pre-defined outcomes were stool output, duration of diarrhea, episodes of vomiting and intravenous fluid therapy during the course of treatment. Search strategy for documentation of studies: Clinical trials were searched in PubMed, Cochrane CENTRAL, DOAJ, Google Scholar and Google. The detailed search strategy used in PubMed and Cochrane CENTRAL is mentioned in Table 1. The review is updated till September 2013.
Data collection and analysis
Selection of studies:
The search was done by the first author. The first and second authors examined the abstracts independently for inclusion in and exclusion from the review. Any differences were resolved by discussion with the third author. Assessment of risk of bias was carried out as per the guidelines of the Cochrane Handbook for Systematic Reviews of Interventions [5]. Statistics: The data were extracted by the first and second authors and maintained in Microsoft Excel software. The continuous data were log-transformed into converted means as described in the Cochrane Handbook for Systematic Reviews of Interventions [5]. The dichotomous data were entered as is. Data were then entered in Review Manager software version 5.3 and effect sizes were calculated. Dichotomous data (episodes of vomiting and need for intravenous infusion) were pooled in the form of Odds Ratios (with 95% CI) and continuous data (stool output and duration of diarrhea) were pooled in the form of standardized mean differences (with 95% CI).
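For the dichotomous outcomes, pooling odds ratios can be sketched as inverse-variance weighting of the log odds ratios. The example below is a generic fixed-effect sketch with hypothetical 2x2 tables; Review Manager's exact defaults (e.g., Mantel-Haenszel weighting) may differ.

```python
import math

def pooled_odds_ratio(tables):
    """Fixed-effect inverse-variance pooling of log odds ratios.
    Each table is (events_treatment, n_treatment, events_control, n_control)."""
    wsum = wlog = 0.0
    for a, n1, c, n2 in tables:
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))
        weight = 1.0 / (1/a + 1/b + 1/c + 1/d)   # inverse of Woolf's variance
        wlog += weight * log_or
        wsum += weight
    pooled = wlog / wsum
    se = math.sqrt(1.0 / wsum)
    return (math.exp(pooled),
            (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)))

# Two hypothetical trials with identical 2x2 tables: the pooled OR must
# equal the single-trial OR, (10*80)/(90*20), with a narrower CI.
or_hat, (lo, hi) = pooled_odds_ratio([(10, 100, 20, 100), (10, 100, 20, 100)])
```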
Results
A total of 12 RCTs were included in the meta-analysis. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram recording the selection of trials and reasons for exclusion is presented in Figure 1 [6].
Figure 1: Flow diagram recording the selection of trials and reasons for exclusion
The characteristics of the included studies are elaborated in Table 2. The characteristics of the excluded studies, with the reasons for exclusion, are illustrated in Table 3.
Table 3: Characteristics of excluded studies (reasons for exclusion of full texts from the meta-analysis)
Trial: Reason for exclusion
Yang 2007 [19]: Not in English
Shornikova 1997 [20]: Comparison group was not standard WHO-ORS
Velásquez-Jones L 1990 [21]: Duplicate study of Moreno-Sanchez H 1990; Moreno-Sanchez H 1990 was considered for the meta-analysis as it had the maximum sample size
Rautanen T 1993 [22]: Comparison group was not standard WHO-ORS
Valentiner-Branth 1999 [23]: No data relevant to the current meta-analysis
Mallet E 1990 [24]: Comparison group was not standard WHO-ORS
Bhattacharya MK 1998 [25]: Comparison group was not standard WHO-ORS, but low osmolar rice-based ORS
Rautanen T 1998 [26]: Compared 2 low osmolar ORS (not WHO standard ORS)
Rautanen T 1997 [27]: Comparison group was not standard WHO-ORS
Pulungsih SP 2006 [28]: Included subjects aged 12-60 years (adults also)
The risk of bias in each clinical trial included in the meta-analysis is elucidated in Table 4. A funnel plot to assess publication bias was not plotted, as the number of clinical trials for each outcome was less than 10. Information for the outcome of stool output was available from 9 trials (n=2261) [Figure 2]. The meta-analysis revealed that a statistically significant reduction in the need for unscheduled intravenous infusion for participants receiving reduced osmolarity oral rehydration solution (ORS), when compared with WHO standard ORS, was demonstrated (odds ratio 0.62, 95% CI 0.47 to 0.83).
For the outcome of children vomiting during diarrhea, data were available from 5 clinical trials (n=1267) [Figure 5]. The meta-analysis showed that children in the low osmolar ORS group were less likely to vomit than children in the WHO standard ORS group (odds ratio 0.74, 95% CI 0.57 to 0.97).
Discussion
The present systematic review and meta-analysis demonstrates the superiority of low osmolar ORS in comparison to WHO standard ORS in children suffering from diarrhea. The low osmolar ORS was observed to have a beneficial effect in reducing stool output, unscheduled intravenous infusion and episodes of vomiting, although no difference was observed in the duration of diarrhea between children treated with either ORS formulation. We intended to do a subgroup analysis between patients with cholera and non-cholera diarrhea, but there were insufficient data to do so. We could not assess publication bias through funnel plots, as the number of clinical trials was too small to arrive at an opinion on that. Placebo-controlled, double-blind randomized controlled trials are required to study the use of reduced osmolarity ORS in diarrhea due to cholera, as diarrhea due to cholera is secretory in nature.
Oral Rehydration Solution (ORS), along with zinc tablets, is now used as the first line of therapy to prevent dehydration due to diarrhea. ORS has significant public health importance due to its very wide reach, in terms of its use down to the village health worker level. Thereby, it becomes necessary to document the effectiveness of the different forms of ORS currently in use. The current review reflects, supplements and updates the existing knowledge on low osmolar ORS. Policy makers have now shifted to the reduced osmolarity ORS for preventing dehydration due to diarrhea, even in areas where cholera coexists with other diarrheas. WHO and UNICEF now recommend the use of low osmolar ORS (245 mOsm/L) for preventing dehydration due to diarrhea.
Conclusion
Low osmolar ORS, when compared to standard WHO-ORS, is associated with reduced stool output, a reduced need for unscheduled intravenous infusion and fewer episodes of vomiting. However, there was no significant difference in the duration of diarrhea.
Key Messages
Low osmolar ORS, instead of WHO standard ORS, should be used to prevent dehydration in children suffering from diarrhea.
Figure 2:
Figure 2: Forest plot for the outcome of stool output. The outcome of stool output was measured with different units in different trials. Therefore, the standardized mean difference was used to analyze these data. As the outcome of stool output had a skewed distribution, we took a log-normal approximation for the same. The pooled standardized mean difference on the log scale is -0.44 (95% CI -0.72 to -0.15), suggesting that reduced osmolarity ORS resulted in significantly reduced stool output as compared with WHO standard ORS. Information for the outcome of duration of diarrhea was available from 6 trials (n=1889) [Figure 3].
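The standardized mean difference used here can be sketched as Cohen's d: the difference in group means divided by a pooled standard deviation, which is what makes stool output measured in different units comparable across trials. The numbers below are hypothetical, not the review's data.

```python
import math

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d: the difference in group means divided by the pooled SD,
    making outcomes reported in different units comparable across trials."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical trial: low osmolar ORS arm vs WHO-ORS arm with equal SDs,
# so the pooled SD equals the common SD and the SMD is (120 - 160) / 80.
smd = standardized_mean_difference(120.0, 80.0, 50, 160.0, 80.0, 50)
```

A negative SMD here favors the first (low osmolar) arm, matching the sign convention of the pooled estimate above.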
Figure 3: Forest plot for the outcome of duration of diarrhea. The pooled standardized mean difference in the log scale is -0.21 (95% CI -0.79 to 0.37), suggesting that reduced osmolarity ORS did not have a significant effect on the duration of diarrhea as compared to WHO standard ORS. Information for the outcome of need for intravenous fluid therapy was available from 8 trials (n=1775) [Figure 4].
Figure 4: Forest plot for the outcome of need for intravenous fluid therapy
Figure 5: Forest plot for the outcome of vomiting during diarrhea.
"year": 2015,
"sha1": "a6f20efbee06eac1513202cf5323dc2437a2cb05",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=35416",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a6f20efbee06eac1513202cf5323dc2437a2cb05",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25553416 | pes2o/s2orc | v3-fos-license | MMCR4NLP: Multilingual Multiway Corpora Repository for Natural Language Processing
Multilinguality is gradually becoming ubiquitous in the sense that more and more researchers have successfully shown that using additional languages helps improve results on many Natural Language Processing tasks. Multilingual Multiway Corpora (MMC) contain the same sentence in multiple languages. Such corpora have been primarily used for multi-source and pivot-language Machine Translation, but are also useful for developing multilingual sequence taggers through transfer learning. While these corpora are available, they are not organized for multilingual experiments, and researchers need to write boilerplate code every time they want to use them. Moreover, because there is no official MMC collection, it is difficult to compare against existing approaches. We therefore present our work on creating a unified and systematically organized repository of MMC spanning a large number of languages. We also provide training, development and test splits for corpora where official splits are unavailable. We hope that this will help speed up the pace of multilingual NLP research and ensure that results are more trustworthy, since they can be compared easily. We indicate corpora sources, extraction procedures where applicable, and relevant statistics. We also make our collection public for research purposes.
Introduction
Text corpora form the backbone of data-driven Natural Language Processing tasks, ranging from automatic text segmentation to syntactic and semantic analysis to discourse. Bilingual parallel corpora, which contain the same sentences in two languages, are not only useful for Machine Translation but also enable an analysis tool developed for one language to be applied to another via transfer learning. Multilingual Multiway corpora (also known as N-lingual corpora; terms which we will use interchangeably) are special corpora where the same sentence is available in multiple languages. Formally speaking, an N-way corpus is one in which the same sentence is present in N languages. Large trilingual corpora are quite common, since most countries maintain transcripts of various meetings (business, legal etc.) in English, the native language and an additional language. The choice of this additional language can depend on factors such as geographical proximity or diplomatic and economic relations. The ASPEC corpus, a trilingual Japanese-Chinese-English corpus, is a product of joint collaboration between Japan and China intended to boost relations and promote research. One of the most attractive features of an N-lingual corpus is that adding a language automatically creates direct links to each of the other N languages. This is a very desirable property, since it makes transfer learning from one language to another possible. For example, it should be possible to transfer part-of-speech tagging or parsing information from a language which has annotated data and parsers to a language that has none. In this paper we list various N-lingual corpora which are either publicly available or have been extracted by us. We describe the extraction procedures along with various corpus-level statistics. We also group corpora by language families wherever possible, so that it becomes easier to study linguistic phenomena for related languages.
We make our collection public so that researchers can work with it directly instead of spending time searching for and extracting corpora. Most of these corpora are available online, but they are not organized for multilingual experiments, and researchers end up spending a significant amount of time writing boilerplate code to organize and use them. Moreover, because there is no official MMC collection, researchers tend to use their own splits of the datasets, which makes it difficult to compare against their proposed approaches. Thus, in cases where official development and test sets are unavailable, we have created our own training, development and test splits, which we hope will be used by everyone to ensure fair comparison of methodologies and their results.
Our key contributions are as follows: • We have systematically collected, organized (or identified) and made available N lingual corpora from Europarl, TED talks, ILCI, Bible and UN corpora.
• We have also made available some of the scripts we used to organize our corpus collection, thereby eliminating the need to write boilerplate code.
• Our collection spans 5 domains and can be used for studies on domain adaptation and transfer learning.
• We have also organized some of the corpora by grouping languages according to language families, to facilitate NLP for related languages and to investigate how language relatedness impacts various transfer learning tasks.
Related Work
Our work revolves around accumulating and organizing existing N-lingual corpora. The most popular examples are the United Nations (Ziemski et al., 2016), Europarl (Koehn, 2005), TED Talks (Cettolo et al., 2012), ILCI (Jha, 2010) and Bible (Christodouloupoulos and Steedman, 2015) corpora. These corpora have been used mainly for machine translation (Zoph and Knight, 2016; Och and Ney, 2001), for studies on language relatedness (Asgari and Mofrad, 2016), for cross-lingual part-of-speech tagging (Agić et al., 2015) and for cross-lingual parsing (Agic et al., 2016). To the best of our knowledge there has been no active work on creating a single compilation of multilingual corpora.
Corpus Extraction Mechanism
Most corpora are not directly available as N-way corpora but as N-1 bilingual corpora where one of the languages is always English. As such, we extract the N-way corpus by retaining the N-1 sentences which have the same English translation. We use the following procedure:
• target-sentences = hashmap(hashset()), mapping each language pair to the set of English sentences it covers
• all-corpora = hashmap(hashmap()), mapping each language pair to a dictionary from English sentences to their translations
• for each language-pair, corpus in corpora: populate target-sentences[language-pair] and all-corpora[language-pair]
• common-sentences = intersection([target-sentences[key] for key in target-sentences.keys()])
• for sentence in common-sentences: write the English sentence to the corresponding file; then, for each language-pair in the language-pair list, look up source-sentence = all-corpora[language-pair][sentence] and write it to the corresponding file
We load all the corpora into dictionaries to ensure quick extraction. Although this might seem memory intensive, our method works well in practice, since few corpora are large enough to cause out-of-memory issues. Additionally, we will make the scripts for extracting the N-lingual corpora from text and XML files publicly available.
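A minimal Python sketch of this pivot-based extraction (the in-memory layout and toy sentences are our own illustration, not the authors' released script):

```python
def extract_nway(corpora):
    """corpora: dict mapping a language-pair name to a list of
    (english, source) sentence pairs. Returns one aligned record per
    English sentence shared by every bilingual corpus."""
    # Index each bilingual corpus by its English side.
    all_corpora = {
        pair: {en: src for en, src in sent_pairs}
        for pair, sent_pairs in corpora.items()
    }
    # English sentences present in every bilingual corpus.
    common = set.intersection(*(set(d) for d in all_corpora.values()))
    # Emit the aligned N-way records in a stable order.
    return {
        en: {pair: d[en] for pair, d in all_corpora.items()}
        for en in sorted(common)
    }

# Toy bilingual corpora with English as the pivot side.
corpora = {
    "en-fr": [("hello", "bonjour"), ("thanks", "merci")],
    "en-de": [("hello", "hallo"), ("bye", "tschuess")],
}
print(extract_nway(corpora))
# {'hello': {'en-fr': 'bonjour', 'en-de': 'hallo'}}
```

Sentences whose English side is missing from any bilingual corpus are dropped, which is exactly why the extracted N-lingual corpora shrink as more languages are added.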
Multilingual Multiway Corpora
In this section we list all the multilingual multiway corpora we have managed to acquire/identify along with relevant statistics. Our collection is available here: http://lotus.kuee.kyoto-u.ac.jp/ raj/mmcr4nlp/ 1
UN corpus
The UN corpus 2 spans 6 languages (ar, fr, ru, en, zh and es) and is directly available in its N-lingual form. It contains roughly 11.36M lines, and the average sentence lengths vary from 23 for Arabic to 30 for Spanish. Additionally, there are 6-lingual development and test sets of 4K lines each. Since the UN corpus is directly available in its 6-lingual form, we do not include it in our (downloadable) collection.
Spoken Language and Subtitles corpus
The Spoken Language and Subtitles corpus is an excellent source of parallel sentences in the spoken language domain. We have three different sources of TED talks corpora, two 3 of which come from the IWSLT 2016 and 2017 shared tasks and the third which was crawled from the TED website 4 .
IWSLT 2016 corpus
The IWSLT 2016 task focused on 5 languages (fr, de, ar, cz and en) where English was the target language. One development set (dev2010) and 4 test sets (tst2010 to tst2013) are available. This corpus is not directly 5-lingual, and thus, using English as the pivot, we extracted 3, 4 and 5 lingual versions 5 of the parallel corpus. We found that the test sets for 2011 and 2012 are not 5-lingual and thus exclude them from our collection.
IWSLT 2017 corpus
The IWSLT 2017 task focused on 5 languages (de, nl, it, ro and en), but the objective was a single multilingual system. One development set (dev2010) and one test set (tst2010) were provided. Like the IWSLT 2016 corpus, this corpus is not directly 5-lingual either. We extracted 3 lingual (de, nl and en) and 5 lingual (de, nl, it, ro and en) versions 6 of the corpus. We do not list the 4 lingual version since it is of roughly the same size as the 5 lingual corpus. Refer to Table 2 for details.
Generic Ted Talks Corpus
We found an unofficial 13 lingual (ar, de, es, fr, he, it, ja, ko, nl, pt (Brazilian Portuguese), ru, zh (Mainland Chinese) and tw (Taiwanese Chinese)) TED talks corpus of 349049 lines which was crawled 7 from the TED talks site. This repository also contains many pairs of bilingual corpora, but one unusual aspect of this corpus is that it does not contain English as either a source or a target language. A 4 lingual version of this corpus, which spans only ja, ko, zh and tw, contains an additional 40K lines for a total of 389764 lines. Since there is no specific development or test set for this 13 lingual corpus 8, we created our own splits. For both the 4 and 13 lingual corpora we remove the last 4000 sentences (from the end of the corpus) and split them into development and test sets of 2000 sentences each. We believe that this corpus should be useful for future IWSLT tasks which focus on multilinguality.
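The split construction described above is straightforward to reproduce; a sketch (function name and toy data are ours, not part of the released scripts):

```python
def make_splits(lines, held_out=4000):
    """Hold out the last `held_out` lines and split them in half into
    development and test sets, keeping the rest for training."""
    assert len(lines) > held_out, "corpus too small for the requested split"
    train, tail = lines[:-held_out], lines[-held_out:]
    return train, tail[: held_out // 2], tail[held_out // 2:]

# Toy corpus of 10000 aligned lines.
lines = [f"sent-{i}" for i in range(10000)]
train, dev, test = make_splits(lines)
print(len(train), len(dev), len(test))  # 6000 2000 2000
```

For an N-lingual corpus the same index-based split is applied to every language file, so the training, development and test sets stay aligned across languages.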
Bible corpus
The Bible corpus is probably the only corpus which has been translated into over 100 languages. However, the corpus available online is in XML format and needs to be preprocessed. We developed a simple XML parsing script that can produce an N-lingual version given the XML files for each language. While inspecting the corpus we discovered that the Bible is not fully translated into about 40 of the languages, and thus we exclude them from our collection. The English version of the Bible contains about 31102 verses, and we only considered the languages which contain 30000 or more translated verses. Another problem is that some verses (each of which has a unique id in the XML file) are missing from some of the XML files, leading to a smaller number of N-lingual entries. In order to make it easier for researchers to work on languages belonging to the same language family, we extracted N-lingual corpora for the following 8 language families: Slavic, Uralic, Indo-Aryan, Dravidian, Germanic, East Asian, South-East Asian and Romance. For all these corpus groups we remove the last 2000 sentences (from the end of the corpus) and split them into development and test sets of 1000 sentences each. We extract the following N-lingual versions 9 of the corpus, where English is always one of the languages:
• 55 lingual, spanning all the languages mentioned in Section 3.2. except Punjabi, Tamil, Taiwanese and Latvian. We chose these 55 languages since they are the only ones that are completely translated (with the exception of a few accidental omissions). This 55 lingual corpus contains 26121 lines; the missing 5000 lines are a result of randomly missing translations for a number of verses.
• 8 lingual Romance languages corpus of 30133 lines which includes en, esp, fr, it, la, pt, ro and es.
• 5 lingual Indo Aryan languages corpus of 30049 lines which includes en, hi, mr, my and ne.
• 8 lingual Germanic languages corpus of 28854 lines which includes af, da, nl, en, de, ic, no and sw.
• 4 lingual Dravidian languages corpus of 30651 lines which includes en, kn, ml and te.
• 4 lingual East Asian languages corpus of 31063 lines which includes en, zh, ja and ko.
• 7 lingual South-east Asian languages corpus of 29621 lines which includes en, ce, id, ma, ta, th and vi.
• 4 lingual Uralic languages corpus of 30885 lines which includes en, et, fi and hu.
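The verse-id alignment that handles the randomly missing verses can be sketched as follows; the `<seg id="...">` layout and the toy snippets are assumptions about the corpus XML, which may differ from the actual files:

```python
import xml.etree.ElementTree as ET

def verses_by_id(xml_text):
    """Map verse id -> text for one language's XML file.
    Assumes <seg id="..."> elements; the real schema may differ."""
    root = ET.fromstring(xml_text)
    return {seg.get("id"): (seg.text or "").strip() for seg in root.iter("seg")}

def align(xml_by_lang):
    """Keep only verse ids present in every language,
    so missing verses simply drop out of the N-lingual corpus."""
    tables = {lang: verses_by_id(x) for lang, x in xml_by_lang.items()}
    shared = set.intersection(*(set(t) for t in tables.values()))
    return {vid: {lang: t[vid] for lang, t in tables.items()}
            for vid in sorted(shared)}

# Toy two-language example; the German file is missing one verse.
en = '<doc><seg id="b.GEN.1.1">In the beginning</seg><seg id="b.GEN.1.2">And the earth</seg></doc>'
de = '<doc><seg id="b.GEN.1.1">Am Anfang</seg></doc>'
print(align({"en": en, "de": de}))
# {'b.GEN.1.1': {'en': 'In the beginning', 'de': 'Am Anfang'}}
```

Intersecting on verse ids rather than line positions is what keeps the extracted verses aligned even when individual translations are incomplete.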
ILCI corpus
The ILCI corpus has been used frequently in the Indian Languages Machine Translation shared tasks at ICON 2014 10 and 2015 11. The ILCI corpus is a 6 lingual corpus spanning Hindi, English, Tamil, Telugu, Marathi and Bengali, and was provided as a part of the task. The training, development and test sets 12 contain 45600, 1000 and 2400 6-lingual sentences respectively. Half of the corpus (train/dev/test) belongs to the tourism domain and the other half to the health domain. There is work on a 12 lingual equivalent of the ILCI corpus 13, but it is not publicly available. Most of the sentences in the corpus are short and simple in terms of grammatical complexity. The average sentence length varies from 12 for Tamil (morphologically rich) to 17 for English (morphologically poor), indicating that most of the sentences are intended to be used in survival situations. This is quite different from the UN corpus, where the average sentence length for English is around 25.
Europarl corpus
The Europarl corpus covers the following 21 European languages: bg, cz, da, de, gr, en, es, et, fi, fr, hu, it, li, lt, nl, po, pt, sl, sv, and sw. Just as we did for the Bible corpus, we also extract N-lingual corpora for the following language families: Germanic, Slavic, Uralic, Baltic and Romance. Since English is the pivot language used for extraction, we include it in all the collections. As in the case of the TED corpus, we remove the last 4000 sentences (from the end of the corpus for each group) and split them into development and test sets of 2000 sentences each. The details of the corpora 15 are as follows:
• 21 lingual, spanning all the languages. This corpus contains 189310 lines.
• 10 lingual spanning en, da, de, es, fi, fr, it, nl, pt and sv. These 10 languages are the largest in the collection. This corpus is of 1.071M lines.
• 6 lingual Slavic languages corpus of 342845 lines which includes bg, cz, en, po, sl and sv. We also extract a 4 lingual Slavic languages corpus of 569962 lines which includes sl, sv, cz and en. This increase in the number of lines is due to the exclusion of Polish and Bulgarian, which have significantly fewer entries in their respective monolingual corpora.
• 6 lingual Romance languages corpus of 294192 lines which includes en, fr, it, pt, ro and es. We also extract a 5 lingual version of 1.454M lines by excluding Romanian.
• 5 lingual Germanic languages corpus of 1.408M lines which includes da, nl, en, de, and sw.
Conclusion
In this paper we have described our collection of Multilingual Multiway corpora. Our collection, which we believe to be the first of its kind, spans 59 languages and 5 domains. We have extracted N-lingual development and test sets from existing bilingual development and test sets for the IWSLT corpora. For the Bible, Europarl and TED corpora, where official N-lingual development and test sets are unavailable, we defined our own training, development and test splits and encourage other researchers to use these splits for ease of comparison. Our collection can be used for NLP research in low (Bible and ILCI), medium (IWSLT and TED) as well as high resource (Europarl and UN) scenarios. In the case of the Bible and Europarl corpora we have extracted N-lingual corpora for various language families to facilitate research on how language relatedness affects the final results of NLP tasks. Due to page limit restrictions we do not give statistics such as word count and average sentence length for each instance of the N-lingual corpora we extracted, but plan to include them later. We make our collection available to the public 16 and also plan on expanding it in the future to improve coverage in terms of domains, number of lines and number of languages. Some additional and promising sources of N-lingual corpora are the BTEC corpus 17 (Paul et al., 2013) and the QED corpus 18. We also plan to extract bilingual parallel corpora from Wikipedia and then extract multilingual multiway versions of those corpora.
"year": 2017,
"sha1": "399aa1bdbe4c544cfe12be08b6adab48fc01167d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "399aa1bdbe4c544cfe12be08b6adab48fc01167d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
266994837 | pes2o/s2orc | v3-fos-license | The influence of prenatal dexamethasone administration before scheduled full-term cesarean delivery on short-term adverse neonatal outcomes: a retrospective single-center cohort study
Objective
There has been a gradual increase in the prevalence of cesarean section deliveries, and more healthcare professionals are considering the prophylactic use of corticosteroids before planned full-term cesarean sections. However, the association between dexamethasone administration before full-term cesarean delivery and short-term adverse neonatal outcomes is unclear. This study analyzed the disparities in short-term adverse neonatal effects in neonates born via full-term elective cesarean delivery with or without antenatal dexamethasone treatment.
Study design
This single-center retrospective cohort study involved neonates aged 37-39 weeks. The primary neonatal outcomes included various short-term adverse events, including neonatal admission to the neonatal intensive care unit, neonatal access to the special care baby unit, transient neonatal respiratory distress, respiratory distress syndrome, and the requirement of intravenous antibiotics or ventilatory support. Multiple logistic regression analysis was used to assess the association between these outcomes and dexamethasone exposure while adjusting for covariates.
Results
Of the 543 neonates included in the study, 121 (22.2%) had been exposed to prenatal dexamethasone. When compared with the control group, the dexamethasone-exposed group exhibited significantly higher rates of transient neonatal respiratory distress, respiratory distress syndrome, administration of intravenous antibiotics, the need for ventilatory support, and longer duration of neonatal hospitalization (P < 0.05). The association between dexamethasone exposure and short-term adverse neonatal outcomes remained significant after adjusting for potential confounders (odds ratio: 12.76, 95% confidence interval: 6.9-23.62, P < 0.001).
Conclusion
The dexamethasone-exposed group had a higher likelihood of experiencing short-term adverse outcomes when compared with non-exposed neonates, suggesting that dexamethasone may have detrimental effects on infants delivered at full term. This implies the importance of exercising caution when contemplating the use of antenatal corticosteroids.
Introduction
In recent decades, the rate of cesarean section deliveries has markedly increased worldwide, especially in high- and middle-income countries (1)(2)(3). However, cesarean delivery independently contributes to the risk of neonatal respiratory complications, mainly respiratory distress syndrome and transient neonatal tachypnea (4,5). Because the risk associated with elective cesarean delivery diminishes as gestational age progresses (6), it is recommended to postpone the procedure until the pregnancy reaches 39 weeks (7)(8)(9)(10). However, approximately 10%-15% of women who choose cesarean section may deliver before the recommended gestational age (11). The Antenatal Steroid Trial for Full-Term Elective Cesarean Sections (12) and the Cochrane Review on the use of corticosteroids to prevent respiratory disorders in neonates after full-term elective cesarean deliveries (13) have led to widespread clinical acceptance of the use of prophylactic corticosteroids before scheduled full-term cesarean sections as a mitigation for potential neonatal respiratory system risks.
The Royal College of Obstetricians and Gynaecologists (RCOG) Green Top Guideline No. 7 (RCOG 2010) (14) recommended the prophylactic use of corticosteroids before full-term planned cesarean sections. However, this recommendation is not included in the current guidelines by the National Institute for Health and Care Excellence (NICE) for cesarean sections (NICE 2021) (15).
Dexamethasone, a synthetic glucocorticoid, is widely used to manage preterm labor and promote fetal pulmonary maturation, especially in low-resource settings (16,17). The administration of prenatal corticosteroids promotes the maturation of the fetal pulmonary system in premature infants, thereby decreasing the occurrence of respiratory distress syndrome, the need for respiratory support, the duration of intensive care hospitalization, and the prevalence of various premature neonate complications, such as intraventricular hemorrhage, necrotizing enterocolitis, and neonatal mortality (18). The Maternal and Child Health Survey, a comprehensive investigation by the World Health Organization involving 359 institutions in 29 countries, found that the prevalence of synthetic glucocorticoid administration was 54%, although some countries have rates of up to 91% (19).
The use of dexamethasone during late preterm birth is controversial, especially in low-resource settings (17). A thorough assessment was conducted to determine the efficacy of administering corticosteroids before full-term scheduled cesarean deliveries and whether it provides significant advantages while avoiding unnecessary harm. Such harm is attributable to a limited understanding of the cellular and molecular mechanisms governing fetal lung maturation (20), coupled with inherent limitations of currently available markers (21). Evidence indicates that neonates exposed to prenatal corticosteroids may have a higher risk of unfavorable outcomes (22)(23)(24)(25). Thus, the use of prenatal corticosteroids in the context of term deliveries warrants careful consideration.
Data source and study cohort
The data underlying this study are from the Dryad Digital Repository (https://doi.org/10.5061/dryad.g79cnp5qs). This retrospective single-center cohort study was carried out from December 2016 to February 2019 at the Professorial Unit, Department of Obstetrics and Gynecology, Colombo South Teaching Hospital University. The study population was described previously (26). The study participants were divided into the experimental and control groups. The experimental group comprised mothers who were administered two intramuscular injections of dexamethasone (12 mg) at 12-hour intervals, commencing from one week to 24 h prior to delivery. The control group was made up of mothers who did not receive corticosteroid treatment before delivery.
This study involved maternal-fetal dyads who underwent elective cesarean sections at the gestational age of 37-39 weeks and who met the inclusion criteria. The cesarean sections analyzed in this study were categorized as elective, defined as cesarean sections scheduled in advance and not conducted under emergency circumstances. Information on cesarean section indications, such as elective factors like maternal request, large fetus, malpresentation/breech, and repeat cesarean section, was obtained from medical records. Neonatal care was provided within a single unit, ensuring uniform diagnostic and admission prerequisites.
The study cohort was subjected to exclusionary criteria, which included symptoms of severe maternal hypertension, severe fetal rhesus alloimmunization, or intrauterine infection characterized by maternal pyrexia, tachycardia, fetal distress, and meconium-stained liquor at delivery. Pregnant women who were concurrently receiving steroids for reasons unrelated to the study protocol, those with multiple gestations, those with emergency conditions necessitating mandated cesarean section, and cases with insufficient covariate data were also excluded.
The requirement for ethical approval was waived because all data were meticulously anonymized and the study strictly adhered to the protocols and regulations established by the Dryad Digital Repository.
Data acquisition
Relevant patient bedside records were retrieved from archival records and subsequently transcribed onto a designated data collection template by an assistant researcher. To mitigate observer bias, a second assistant researcher, not involved in developing the study protocol or in patient management, compiled the maternal demographic data, including data on maternal age, diversity of gestation, previous and current medical conditions, gestational age at the time of cesarean section, intricate surgical particulars, and postoperative complications. Moreover, data on the administration of corticosteroids to the mothers were meticulously documented. Relevant neonatal data, including birth weight and Apgar scores, were recorded. The primary outcomes under investigation included neonatal admission to the neonatal intensive care unit (NICU), assignment to the special care baby unit, transient neonatal tachypnea, respiratory distress syndrome, intravenous antibiotic administration, need for ventilatory support, and duration of neonatal hospitalization. Any of the first six abovementioned criteria indicated short-term adverse neonatal effects.
Statistical analysis
First, we summarized the study cohort's baseline characteristics and categorized them based on dexamethasone exposure status. For continuous data, descriptive statistics involved the use of either the mean and standard deviation or the median and interquartile range, depending on data distribution. Categorical data were presented as frequencies and corresponding percentages. Categorical and non-normally distributed continuous data were analyzed using the Pearson χ² test, the Fisher exact test, or the Kruskal-Wallis test, as deemed appropriate. P < 0.05 indicated statistically significant differences. Covariate adjustments were applied where any of the following criteria were met: (1) confounders reported in the literature, (2) univariate analysis yielding a P-value of <0.1, or (3) a change in effect size of >10% upon covariate inclusion. Comprehensive univariate regression analysis was conducted on all variables to ascertain the potential factors that predict primary outcomes (adverse short-term neonatal effects). Univariate analysis was used to reveal the trends associated with adverse short-term neonatal outcomes. Logistic regression analysis was used to assess the independent association between dexamethasone exposure and adverse short-term neonatal effects. Subgroup analyses by age, parity, gravidity, and common comorbidities, such as pregnancy-induced hypertension (PIH) and gestational diabetes mellitus (GDM), were used to examine the stability of the association between dexamethasone exposure and adverse short-term neonatal effects. Smooth curve-fitting and threshold saturation effect analyses were used to assess the probability of short-term adverse neonatal effects. The likelihood of these effects was quantified using odds ratios (OR) and standard errors with 95% confidence intervals (CI). Statistical analyses were done in R (http://www.R-project.org) and Free Statistics software version 1.8. A two-tailed test was employed, with P < 0.05 indicating statistically significant differences.
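The odds-ratio arithmetic underlying such an analysis can be illustrated for the unadjusted 2x2 case (the counts below are invented for illustration and are not the study's data; the published OR of 12.76 came from an adjusted multivariable model):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a/b = outcome yes/no in the exposed group, c/d in the unexposed group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Illustrative counts only, not the cohort's actual cross-tabulation.
or_, (lo, hi) = odds_ratio_ci(30, 91, 20, 402)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A confidence interval whose lower bound stays above 1, as reported in this study, is what marks the association as statistically significant.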
Baseline characteristics
The baseline characteristics of the study's participants based on dexamethasone treatment status are shown in Table 1. From the original dataset of 560 observations, 17 entries were excluded because of a lack of crucial covariate information. Of these, 12 were excluded because of missing Apgar score data at one minute, one because of missing Apgar score data at 10 min, two because of missing gravidity data, and two because of missing GDM and PIH data. Hence, the final analysis involved 543 women and their neonates. The women had an average age of 32.3 ± 4.5 years, and the majority (95.4%) identified as Sinhalese. Of the participants, 121 (22.2%) underwent planned cesarean section and received dexamethasone before the procedure (Table 1). The fetal growth restriction (FGR) rate was higher in the non-dexamethasone-treated group (5%) than in the dexamethasone-treated group (0.8%) (P = 0.038). However, various factors, such as reproductive history (gravidity, parity, children), pregnancy comorbidities (GDM, polyhydramnios, and PIH), gestational age at cesarean section, and important neonatal characteristics (such as neonatal weight and Apgar scores at 1, 5, and 10 min), did not differ significantly between the two groups (P > 0.05).
The dexamethasone-treated cohort had significantly higher rates of primary outcome measures, such as transient neonatal tachypnea, respiratory distress syndrome, intravenous antibiotic administration, ventilatory support, and duration of neonatal hospitalization, when compared with the control group (P < 0.05). Moreover, in the dexamethasone group, the probabilities of neonatal admission into the NICU or placement in the special care baby unit were 1.7% and 3.3%, respectively, which were higher than in the control group.
This study observed a notable disparity in the utilization of antibiotics between the groups exposed to dexamethasone and those not exposed. A review of existing literature indicated varying rates of antibiotic usage for newborns admitted to neonatal intensive care units (NICUs), ranging from 2.4% to 97.1% (27). In order to address this issue, logistic single-factor analysis was conducted, and adjustments were made to the effect size if it exceeded a 10% change upon the inclusion of covariates. Potential confounding factors that were taken into consideration included variations in admission to the neonatal intensive care unit, documented transient tachypnea of the newborn, and documented respiratory distress syndrome between the two groups (Supplementary Table S3). Furthermore, owing to the higher occurrence of short-term adverse outcomes in the dexamethasone-treated group, there was an increased likelihood of admission to the neonatal intensive care unit and of diagnoses of transient tachypnea of the newborn and respiratory distress syndrome, thereby contributing to the escalated usage of antibiotics in the dexamethasone group.
Sensitivity and subgroup analysis
Stratified and interaction analyses of the association between dexamethasone and short-term adverse neonatal outcomes were conducted in subgroups of key factors. The stratified analysis showed that PIH, age, parity, and gravidity were not statistically significant after stratification (P > 0.05), indicating that the effect of dexamethasone on short-term adverse neonatal outcomes was stable and was not affected by changes in covariates (Figure 1). The association between GDM and dexamethasone was examined for short-term adverse neonatal outcomes (P = 0.003). There is an increased likelihood that dexamethasone is linked to short-term adverse outcomes in neonates whose mothers did not have gestational diabetes before pregnancy when compared with the control group.
A comprehensive analysis of the correlation between the duration of neonatal hospitalization and dexamethasone, focusing on immediate negative consequences, revealed a significant increase in the average length of hospitalization for neonates in the dexamethasone group when compared with the control group, indicating a comparatively more critical condition. Notably, the difference in the mean hospital stay between the two groups was only 0.9 days, with the maximum hospital stay being 12 days, and there were no neonatal mortalities. This suggests that dexamethasone may have a transient effect on short-term adverse neonatal outcomes with a better overall prognosis (Figure 2).
Discussion
This study explored the association between dexamethasone usage during full-term elective cesarean deliveries and short-term adverse neonatal outcomes. This single-center retrospective cohort study made the following key findings: (1) there was a significantly elevated likelihood of short-term adverse neonatal outcomes in the cohort that received dexamethasone before full-term planned cesarean sections, (2) dexamethasone was significantly associated with short-term adverse neonatal outcomes, even after adjusting for baseline characteristics and other comorbidities, and (3) the possible effects of dexamethasone on short-term adverse neonatal outcomes are transient. Prenatal corticosteroid therapy, a fundamental component of perinatal care, warrants comprehensive evaluation because of its potential impact on fetal development and programming, given its influence on up to 20% of the transcriptome (28,29). To avert adverse short-term neonatal outcomes, it is important to minimize fetal exposure to such medications. Notably, full-term and late preterm infants are inherently exposed to elevated levels of endogenous steroids and may additionally receive prenatal corticosteroids (30). A recent meta-analysis involving 1.6 million infants found that early prenatal exposure to corticosteroids, when compared with no exposure, was associated with an increased risk of neonatal intensive care unit admission among full-term infants (OR: 1.49, 95% CI: 1.19-1.86) (23). This highlights the pressing need for informed, evidence-based use of corticosteroids to minimize the potential for over-treatment and subsequent neonatal mortality.
Furthermore, previous studies indicate that dexamethasone administration during pregnancy has potential adverse consequences, including an increased risk of cardiovascular disease in the offspring and neurotoxicity (31)(32)(33)(34)(35).
Our study revealed a significant correlation between dexamethasone usage and short-term adverse neonatal outcomes in full-term elective cesarean deliveries, which persisted even after adjusting for covariates linked to short-term neonatal adverse effects. Specifically, neonates in the dexamethasone-treated group exhibited a higher risk of various unfavorable outcomes, including respiratory distress syndrome, intravenous antibiotic use, ventilatory support, and a greater likelihood of admission to neonatal intensive care. These findings raise concerns about the potential risks associated with dexamethasone, particularly during full-term elective cesarean deliveries. Countries must exercise prudence when incorporating interventions into healthcare policies and ensure that decisions are rooted in evidence of efficacy and a thorough risk assessment (36).
However, several studies with similar aims found that prenatal corticosteroid treatment did not clearly establish a significant association with adverse neonatal outcomes (11,12,37). Although these studies are relevant to our topic, they differ in their primary outcome indicators. Specifically, Stutchfield et al. (12) focused on the incidence of neonates admitted to the intensive care nursery for respiratory distress, whereas our study used neonates requiring specialized care as the outcome indicator. This disparity could be a significant determinant of the divergent findings observed in our study. It is also worth acknowledging that the systematic review conducted by Sotiriadis et al. (11) exhibits certain uncertainties regarding the precision of its findings, owing to the limitations in the certainty of evidence available in the existing literature. Conversely, our study primarily classified unfavorable outcomes based on the requirement for specialized medical attention, with a specific emphasis on the potential detrimental impacts of medications on neonatal outcomes. While acknowledging the potential divergence from the existing literature, we believe that our study contributes significant insights into distinct neonatal outcomes.
Our study found that the effect of dexamethasone on short-term adverse neonatal outcomes was transient. Specifically, although the dexamethasone group was associated with an increased risk of adverse neonatal outcomes, this effect was mainly manifested in the short term and did not persist over an extended period. No neonatal deaths were observed despite a prolonged duration of hospitalization (in days), and the mean difference was relatively small. A retrospective cohort study involving data from 588,077 live births found that antenatal corticosteroid exposure was associated with significantly lower odds of neonatal mortality and 5-minute Apgar scores of <7 in 121,151 women (38). However, there was an increased incidence of some adverse neonatal outcomes, such as surfactant replacement therapy, prolonged mechanical ventilation, antibiotics for suspected neonatal sepsis, and NICU admission. The causes may include lung maturation and the anti-inflammatory effects of prenatal corticosteroids, which may enhance alveolar complexity (39), and the immunosuppressive effects of corticosteroids, which may cause or worsen infections (40). However, further studies are needed to confirm the long-term neuropsychiatric and cardiac risks.
Limitations of the study and suggestions for future research
(1) Data source and confounders: the present study relied on secondary data obtained from previous research, imposing inherent constraints on the availability of information about potential confounding variables. (2) Study design: being retrospective, this investigation is susceptible to inherent limitations, such as the potential existence of unmeasured confounders. Furthermore, the observational nature of the study does not establish causality but only offers evidence of association, and the single-center nature of the study may limit the generalizability of its findings. (3) Based on the admission records, this retrospective cohort study identified one case of neonatal hypoglycemia among the hospitalized newborns; however, postnatal neonatal blood glucose data were not collected.
To address these limitations and advance our understanding of dexamethasone usage during full-term elective cesarean deliveries, future research should consider the following: (1) multicenter prospective studies, which can enable a more comprehensive and robust examination of the role of dexamethasone in full-term elective cesarean deliveries; these studies should include a wider range of patient characteristics and factors to better evaluate the impact of dexamethasone. (2) Future research should delve deeper into the timing of dexamethasone administration and how patient characteristics may influence its outcomes; this will help elucidate the nuances of dexamethasone use in different clinical scenarios. (3) The impact of prenatal dexamethasone on maternal well-being, including conditions such as hyperglycemia and hypertension, also warrants investigation. Furthermore, it is important to examine the potential enduring consequences of prenatal steroid exposure on childhood development (41), including neurodevelopmental outcomes and related factors.
Addressing these limitations and conducting further research can improve our understanding of the benefits and potential risks of dexamethasone in the context of full-term elective cesarean deliveries.
Conclusion
Despite our study's limitations, our findings indicate a possible correlation between dexamethasone administration during elective cesarean delivery at full term and negative neonatal outcomes in the short term. However, these findings require further validation through thorough investigation. Nonetheless, healthcare professionals should exercise caution when considering dexamethasone therapy and carefully evaluate the trade-off between potential advantages and risks, while engaging in shared decision-making with patients to determine the most appropriate treatment strategy.
FIGURE 2 Bar chart of the duration of hospitalization in the dexamethasone and control groups.
TABLE 1
Baseline characteristics of study patients.
TABLE 3
Multivariate regression analysis of the association between dexamethasone and short-term adverse neonatal outcomes. Model 3 + birth weight + Apgar score at 1 min + Apgar score at 5 min + days of hospital stay + gestational age at cesarean section.
TABLE 2
Association of covariates, dexamethasone, and short-term adverse neonatal outcomes risk.
Cosmological constant influence on cosmic string spacetime
We investigate the line element of spacetime around a linear cosmic string in the presence of a cosmological constant. We obtain the metric and argue that it should be discarded because of asymptotic considerations. Then a time dependent and consistent form of the metric is obtained and its properties are discussed.
I. INTRODUCTION
The most reliable measurement of the redshift-magnitude relation uses supernovae of Type Ia [1]. Two groups, the High-z Supernova Search Team [2] and the Supernova Cosmology Project [3], working independently and each using different methods of analysis, have found evidence for accelerated expansion of the universe. Type Ia supernovae are characterized by the absence of hydrogen lines in the spectra and are thought to be the result of thermonuclear disruption of white dwarf stars [4]. The data require Λ > 0 at two or three standard deviations, depending on the choice of data and method of analysis [5,6]. The measurements agree with the relativistic cosmological model with Ω_k0 = 0, meaning no space curvature, and Ω_Λ0 ∼ 0.7, meaning we are living in a cosmological constant dominated universe [7,8,9]. Also, the latest data of the Wilkinson Microwave Anisotropy Probe are in favour of a flat Λ-dominated universe [10]. These observationally confirmed results and other theoretical arguments for a non-zero cosmological constant [11] lead us to consider the effects of this parameter on different parts of our studies in cosmology. In this article we intend to find out its effect on the solution of the field equations of a cosmic string. Cosmic strings are topologically stable objects which may have formed during the breaking of a local U(1) gauge symmetry in the very early universe [12,13,14,15]. First we find the static solutions of Einstein's field equations with non-zero cosmological constant for a straight cosmic string. Then we show that in the limiting case when µ → 0 the static form of the solutions does not satisfy the required prediction of the observed slope 5 in the magnitude-redshift relation for low redshifts (z ≤ 0.2). Finally, time dependent forms of the solutions which have consistent asymptotic behaviour are obtained.
II. THE STATIC LINE-ELEMENT
In this study we consider an infinitely long, thin, straight, static string lying along the z-axis with the following stress-energy tensor: where µ is the mass per unit length of the string in the z-direction.
For such a gravitating cosmic string the spacetime possesses the same symmetry and is invariant under time translations, spatial translations in the z-direction, rotations around the z-axis, and Lorentz boosts in the z-direction. These special symmetries of the problem, in accordance with the special form of the stress-energy tensor introduced by Eq. (1), guide us to choose the following form of the line-element in the cylindrical coordinate system (ρ, φ, z): For the case Λ = 0, the solutions for a(ρ) and b(ρ) are well known [13,16,17]: Einstein's field equations with cosmological constant Λ, in units of c = 1, are: In the case of Λ > 0, a(ρ) and b(ρ) must satisfy the following equations: where prime stands for derivation with respect to ρ. These are the ρρ, θθ and zz components of the field equations for the exterior part of the string, where the stress-energy tensor and its trace are zero. Actually, the tt component yields the same equation as the zz component.
Combination of Eqs. (5), (6) and (7) yields the result: The general solutions for a(ρ) and b(ρ) have the form where α, β and γ are constants. To fix these constants it is sufficient to impose the condition that the metric (2) should match the form (3) in the limiting case Λ → 0. Doing this, the consistent form of the metric (2) is: When µ → 0, i.e., in the absence of the string, Eq. (12) gives the form of the Schwarzschild-de Sitter spacetime with m = 0 in cylindrical coordinates. We have previously shown that in the case of Λ ≠ 0 the so-called static isometry of the de Sitter solution, i.e.
does not fulfill the requirement of predicting the observed magnitude-redshift relation for low redshifts z ≤ 0.2 [18]. Now let us check this for the metric (12). Evidently, in a static spacetime like Eq. (12), the gravitational redshift of a source located at a point with coordinates (ρ, θ, 0) is proportional to the 00-component of the metric at that point [19]. In this case, calculation of the difference between apparent and absolute magnitudes m − M as a function of redshift z yields: Inspecting the logarithmic slope of Eq. (14) for small values of z, it turns out to be 2.5.
A comparison of Eq. (14) with the redshift-magnitude relation in a FRW model then reveals the shortcoming of the static metric (12) in predicting the experimentally tested value of slope 5 [18]. This provides enough motivation to seek a nonstatic solution that overcomes this deficiency. We now continue to find non-static solutions of the field equations for cosmic strings.
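The slope-5 expectation at low redshift can be verified numerically. For the low-z Hubble-law luminosity distance d_L ≈ cz/H0, the distance modulus m − M = 5 log10(d_L / 10 pc) increases with slope 5 in log10 z, the behaviour the static metric's slope of 2.5 fails to reproduce. The sketch below is a generic FRW low-redshift illustration (H0 = 70 km/s/Mpc is an assumed value), not a reconstruction of the paper's Eq. (14):

```python
import math

C_KM_S = 299792.458      # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s/Mpc (assumed value)

def distance_modulus(z):
    # low-redshift Hubble law: d_L ~ c z / H0 (in Mpc), converted to parsecs
    d_l_pc = (C_KM_S / H0) * z * 1.0e6
    return 5.0 * math.log10(d_l_pc / 10.0)

# logarithmic slope of m - M with respect to log10(z)
z1, z2 = 0.01, 0.1
slope = (distance_modulus(z2) - distance_modulus(z1)) / (math.log10(z2) - math.log10(z1))
# slope is ~5, since d_L is proportional to z at low redshift
```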
III. THE NON-STATIC LINE-ELEMENT
To investigate the nonstatic solution for cosmic strings we may choose the metric Eq. (2) multiplied by a scaling factor, i.e.
To find the unknown functions of the metric, it proves simpler to rescale the time coordinate and write the line-element in the form It remains to solve the field equations to determine the functions a, b and R. Direct calculation of the Ricci tensor leads to the following field equations: where prime and dot indicate differentiation with respect to ρ and τ, respectively. A physically nontrivial solution of Eq. (21) is obtained by taking a to be constant. To be consistent with Eq. (3) when Λ → 0, this constant should be equal to zero. In this case Eq. (17) yields the following result for R: So that This in turn implies: Evidently the solution b = const. satisfies Eq. (24). If we accept that the metric (3) should be recovered in the limiting case Λ → 0, this constant should be equal to (1 − 4Gµ)². Therefore the nonstatic line-element of the cosmic string is: By a transformation of the polar angle, θ → (1 − 4Gµ)θ, the metric becomes the flat-space de Sitter metric. As expected, spacetime around a cosmic string is that of empty space.
However, the range of the flat-space polar angle θ is only 0 ≤ θ ≤ 2π(1 − 4Gµ) rather than 0 ≤ θ ≤ 2π. Due to this and the nonstatic nature of (25), the observer will see two images of the source, with the angular separation δα between the two images defined by Here l (d) is the distance from the string to the source (observer) at the observation epoch. This is valid in the approximation that √(Λ/3) (l + d) is small compared to 1. Thus the existence of a cosmological constant will weaken the double images of objects located behind the string. The other important point to mention concerning (25) is that, like de Sitter spacetime
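The double-image geometry of Section III can be sketched in the flat-space (Λ = 0) limit, where a string with mass per unit length µ removes a wedge of angle 8πGµ, splitting a source directly behind the string into two images separated by δα = 8πGµ · l/(l + d). The Λ-dependent weakening described above is the paper's correction to this limit and is not reproduced here; all numerical values are illustrative:

```python
import math

def image_separation(g_mu, l, d):
    # flat-space cosmic-string lensing: deficit angle 8*pi*G*mu,
    # shared between the two images in proportion l / (l + d)
    # (source at distance l behind the string, observer at distance d)
    return 8.0 * math.pi * g_mu * l / (l + d)

# illustrative values: G*mu = 1e-6, source and observer equidistant
delta_alpha = image_separation(1.0e-6, 1.0, 1.0)  # radians
```

For G·µ = 1e-6 and l = d, the separation is 4π × 1e-6 rad, roughly 2.6 arcseconds.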
Localization of SOX2-positive stem/progenitor cells in the anterior lobe of the common marmoset (Callithrix jacchus) pituitary
Studies on mouse and rat pituitaries reported that Sox2-expressing cells play roles as stem/progenitor cells in the adult pituitary gland. The presence of cells with stem cell-like properties in the pituitary adenoma and SOX2-positive cells has been demonstrated in the human pituitary. However, considering the difficulty in fully examining the stem/progenitor cell properties in the human pituitary, in the present study, we analyzed the SOX2-positive cells in the pituitary of the adult common marmoset (Callithrix jacchus), which is used as a non-human primate model. Immunohistochemistry demonstrated that localization pattern of SOX2-positive cells in the common marmoset pituitary was similar to that observed in the rodent pituitary, i.e., in the two types of niches (marginal cell layer and parenchymal-niche) and as scattered single cells in the parenchyma of the anterior lobe. Furthermore, most of the SOX2-positive cells express S100 and were located in the center or interior of LAMININ-positive micro-lobular structures. Collectively, the present study reveals properties of SOX2-positive cells in the common marmoset pituitary and suggests that the common marmoset proves to be a useful tool for analyzing pituitary stem/progenitor cells in a non-human primate model.
Accumulating evidence from studies on mouse and rat pituitaries demonstrated that Sex-determining region Y-box 2 (Sox2)-expressing cells play roles as stem/progenitor cells in the adult pituitary gland [3,4]. These SOX2-positive cells form two types of niches (stem/progenitor cell microenvironments): the marginal cell layer (MCL)-niche facing the residual lumen of Rathke's pouch (Rathke's cleft) and the dense SOX2-positive cell clusters scattered in the parenchyma of the adult rodent anterior lobe (parenchymal-niche), in addition to being singly scattered in the parenchyma [2,5,6]. Moreover, SOX2-positive stem/progenitor cells in the adult rodent anterior pituitary are composed of sub-populations based on the expression of a calcium-binding protein, S100β; approximately 82% of SOX2-positive cells in the adult rat anterior lobe [7] and 60% in the mouse anterior lobe [3] express S100β.
Several studies addressing human pituitary stem/progenitor cells have been reported, including demonstrations of their stem cell-like properties in human pituitary adenoma [8,9]. Immunohistochemical analysis also demonstrated that SOX2-positive cells exist in the human pituitary gland [10]. However, considering the limitations in conducting a sufficient examination of the properties of stem/progenitor cells in the human pituitary, studies using a non-human primate model would be preferable.
In recent decades, the common marmoset (Callithrix jacchus), a New World monkey, has been increasingly employed as a non-human primate model. The common marmoset has an early onset of puberty (at 1.5 years of age), a relatively short gestation period (145-148 days), high reproductive efficiency (an approximately 28-day ovarian cycle, similar to humans), and a relatively high frequency of deliveries (twice a year), along with its small size [11]. Therefore, the common marmoset is an important primate model in various areas of biomedical research, such as neuroscience, toxicology, reproductive biology, and regenerative medicine, to bridge the gap between rodent studies and clinical applications [12].
In the present study, we analyzed the stem/progenitor cells in the adult common marmoset pituitary, focusing on SOX2-positive cells. Taken together, we identified the localization of SOX2-positive cells and their niches, and observed that most of the SOX2-positive cells express S100 in the adult common marmoset pituitary.
Animals
Five-year-old common marmosets (Callithrix jacchus), both female and male, were obtained from CLEA Japan (Tokyo, Japan). The animals were housed in pairs, in stainless steel cages, in a conditioned animal room maintained at 26-28°C and 40-60% humidity with a 12:12 h light/dark cycle. The animals were fed a commercial New World primate diet (CMS-1M, CLEA Japan) with added vitamins, and tap water was available ad libitum; the food was moistened with hot water to vary the texture. The marmosets were also fed supplemental food, such as sponge cake, apple jelly, or biscuits. The animals were exsanguinated under anesthesia with intramuscular injection of 50 mg/kg of ketamine (Fujita Pharmaceutical, Tokyo, Japan) and 4 mg/kg xylazine (Bayer, Leverkusen, Germany), and inhalation of isoflurane (Pfizer Japan, Tokyo, Japan). The animal experimental protocol was approved by the CIEA Institutional Animal Care and Use Committee (approval no. 17029A).
Wistar-Imamichi rats (10-week-old males) were housed individually in a temperature-controlled room under a 12:12 h light/dark cycle. Rats were sacrificed by cervical dislocation under anesthesia.
Immunohistochemistry
The pituitary glands of the female and male common marmosets (parts of the intermediate lobes were missing upon removal) and those of the male rats were fixed with 4% paraformaldehyde in 20 mM phosphate buffer (pH 7.5) overnight at 4°C, followed by immersion in 30% trehalose in 20 mM HEPES to cryoprotect the tissues. They were embedded in Tissue-Tek O.C.T. Compound (Sakura Finetek Japan, Tokyo, Japan) and frozen immediately. Frozen sections (6 μm thick) were prepared from the coronal planes of the pituitaries. Depending on the antibody, the sections were subjected to antigen retrieval with ImmunoSaver (0.05% citraconic anhydride solution, pH 7.4; Nisshin EM, Tokyo, Japan) (Table 1) for 60 min at 80°C. The sections were incubated with 10% (v/v) fetal bovine serum and 0.4% (v/v) Triton X-100 in HEPES buffer (blocking buffer) for 60 min at room temperature. After washing, the sections were incubated with primary antibodies (Table 1) in blocking buffer at 4°C overnight. After the immunoreaction, the sections were incubated with secondary antibodies: Cy3-, Cy5-, or FITC-conjugated AffiniPure donkey anti-goat, anti-guinea pig, and anti-rabbit IgG (1:500; Jackson ImmunoResearch, West Grove, PA, USA). The sections were washed and mounted in VECTASHIELD Mounting Medium (Vector Laboratories, Burlingame, CA, USA) with 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI). Immunofluorescence was observed under a BZ-8000 fluorescence microscope (KEYENCE, Osaka, Japan). For multi-staining with rabbit IgG against rat LAMININ and rabbit IgG against human PRL, we labeled the rabbit IgG against rat LAMININ with the Zenon Alexa Fluor 488 Rabbit IgG Labeling Kit (Thermo Fisher Scientific, Waltham, MA, USA).
The proportion of SOX2-positive cells, and of S100-positive cells within the SOX2-positive population, in the parenchyma of the anterior lobe was measured by counting three areas (1,001-1,178 cells counted in each area of 0.16 mm²) in independent sections prepared from each of the single female and male common marmosets. The data are presented as means ± SE for three sections.

Table 1. Primary antibodies and dilutions:
Guinea pig antiserum against rat FSHβ, 1:5,000
Guinea pig antiserum against rat LHβ, 1:5,000
Guinea pig antiserum against rat TSHβ, 1:30,000
Guinea pig antiserum against human ACTH, 1:10,000
Guinea pig antiserum against human GH (kindly provided by Dr. S. Tanaka at Shizuoka University, Shizuoka, Japan), 1:6,000
Rabbit IgG against human PRL (Dako, Troy, MI, USA)
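The mean ± SE over the three counted sections can be computed as below; the per-section proportions are hypothetical placeholders, not the study's actual counts:

```python
import statistics

def mean_se(values):
    # mean and standard error (sample standard deviation / sqrt(n))
    return statistics.mean(values), statistics.stdev(values) / len(values) ** 0.5

# hypothetical SOX2-positive proportions (%) from three sections
proportions = [14.0, 15.0, 15.7]
mean_prop, se_prop = mean_se(proportions)  # reported as mean +/- SE
```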
Localization of SOX2-positive cells in the anterior pituitary of adult common marmosets
We first analyzed the localization of SOX2-positive cells in the adult common marmoset pituitary. Immunohistochemistry for SOX2 clearly demonstrated that SOX2-positive cells exist in the MCL of both the anterior and intermediate lobes (Fig. 1, dotted lines). In the parenchyma of the anterior lobe, although most of the SOX2-positive cells were singly scattered, dense SOX2-positive cell clusters were also detected (Fig. 1, arrowhead). The proportion of SOX2-positive cells in the parenchyma of the anterior lobe was approximately 14.9 ± 0.9% and 15.4 ± 0.3% in the female and male animals, respectively. In the mouse pituitary, but not that of the rat, immuno-positive signals for SOX2 in the cytoplasm (cytoplasmic SOX2) have also been observed [13]. However, such cytoplasmic SOX2 signals were not observed in the adult common marmoset pituitaries (Fig. 1).
Co-localization of SOX2 and S100 in the anterior lobe of adult common marmosets
We performed double immunohistochemistry for SOX2 and S100 in the adult common marmoset pituitary, which demonstrated that most of the SOX2-positive cells were positive for S100 in the MCL (Fig. 2A). In the parenchyma of the anterior lobe, SOX2 mostly co-localized with S100 at a high frequency, while SOX2-positive/S100-negative cells (Fig. 2A and 2B, closed arrowheads) and SOX2-negative/S100-positive cells (Fig. 2C, open arrowheads) were also detected. The proportion of S100-positive cells among SOX2-positive cells in the anterior lobe was approximately 88.7 ± 1.2% and 89.2 ± 2.1% in the female and male animals, respectively.
Localization of SOX2-positive cells in the micro-lobular structure of the anterior lobe of the adult common marmoset pituitary
In the anterior lobe of the adult common marmoset pituitary, to analyze the localization of SOX2-positive cells within the micro-lobular structures composed of basement membranes [14], we performed immunohistochemistry using antibodies against SOX2 and pan-LAMININ, a major component of basement membranes. Immunostaining demonstrated that LAMININ-positive micro-lobular structures, including blood vessels, exist as round or elliptical structures in the pituitary section (Fig. 3). In the parenchyma, immunohistochemical staining for LAMININ and SOX2 demonstrated that most of the SOX2-positive cells were located either in the center or interior of the LAMININ-positive micro-lobular structures, and very few SOX2-positive cells were attached to the LAMININ-positive basement membranes (Fig. 3A). In addition, although most of the LAMININ-positive micro-lobular structures in the parenchyma of the pituitary sections included SOX2-positive cells, a few micro-lobular structures without SOX2-positive cells were also detected in the sections (Fig. 3A, asterisks). In contrast, in the MCL-niche, LAMININ-positive micro-lobular structures were not observed, and SOX2-positive cells were hardly attached to the LAMININ-positive cells (Fig. 3B).
Localization of hormone-positive cells and SOX2-positive cells in the micro-lobular structure of the anterior lobe of the adult common marmoset pituitary
Finally, we performed immunohistochemistry for each pituitary hormone, SOX2, and LAMININ. Immunostaining demonstrated that SOX2-positive cells exist as non-hormone-producing cells in the common marmoset pituitary, as in rodents (Fig. 4 and Supplementary Fig. 1: online only). Furthermore, most of the hormone-positive cells were attached to the LAMININ-positive basement membranes in the common marmoset pituitary of both female (Fig. 4) and male animals (Supplementary Fig. 1). In addition, a few hormone-positive cells were located in the center of the LAMININ-positive micro-lobular structures (Fig. 4 and Supplementary Fig. 1). In particular, GH-positive cells, but not PRL-positive cells, tended to remain attached to the LAMININ-positive basement membranes (Fig. 4 and Supplementary Fig. 1).
Discussion
Accumulating evidence from rodent models shows that SOX2-positive cells exist as stem/progenitor cells in the anterior lobe of the adult pituitary. In the present study, we performed immunohistochemical analysis of SOX2-positive cells in the pituitary of the common marmoset, a non-human primate model.
In the adult rodent pituitary, SOX2-positive stem/progenitor cells show three localization patterns: lining the MCL (the MCL-niche, also known as the primary niche), clustering in the parenchyma (parenchymal-niches, also known as secondary niches), and singly scattered in the parenchyma [2,5,6]. In the human pituitary, although immunohistochemistry demonstrated that SOX2-positive cells exist in the MCL around Rathke's cleft and in the parenchyma of the anterior lobe, the existence of parenchymal-niches has not yet been shown. The present study demonstrated that the localization pattern of SOX2-positive cells in the common marmoset pituitary is similar to that in rodents. With regard to the niches of the rodent pituitary, the formation of the parenchymal-niche occurs during the neonatal period by migration of SOX2-positive cells from the MCL-niche, which is formed during the early embryonic period [5]. These data suggest that the parenchymal-niche might have important roles in pituitary function, especially in the postnatal pituitary, across species. While there are some conserved features of pituitary SOX2-positive cells between rodents and the common marmoset, the present study also showed a distinct cellular localization pattern of SOX2: unlike in the mouse pituitary, cytoplasmic SOX2 was not observed in the adult common marmoset pituitaries, as in rats [7,13]. Vankelecom and colleagues reported that cytoplasmic SOX2 in the mouse pituitary tends to be co-localized with hormones, and hypothesized that this cytoplasmic localization, achieved by post-translational regulation, promotes the initiation of differentiation [2,13]. Although SOX2 must be a common marker of undifferentiated cells in the anterior lobe of the pituitary of the common marmoset as well as rodents, further analyses of the post-translational regulation and function of SOX2 are needed.
Several studies using rodent pituitary demonstrated that SOX2-positive stem/progenitor cells are composed of sub-populations based on the expression of several genes [2]. Among them, S100 is known to partially exist in the SOX2-positive cells of remnants of Rathke's cleft cysts in the human pituitary [10]. In the common marmoset pituitary, characterization with S100 clearly showed that SOX2-positive cells were positive for S100 in the MCL-niche and parenchyma of the anterior lobe. Notably, the main population of SOX2-positive cells exists as S100-positive cells, while SOX2-positive/S100-negative and S100-positive/SOX2-negative cells also exist, similar to the rat pituitary [7]. Recently, we reported that SOX2-positive cells isolated from the parenchymal-niches of the rat pituitary show different properties based on the difference in S100β expression, where S100β-expressing SOX2-positive cells exhibit higher proliferation and differentiation activities than non-S100β-expressing cells [15]. Although we analyzed an insufficient number of common marmoset pituitaries (one female and one male) by immunohistochemical analysis only, the present study suggests that a sub-population of SOX2-positive cells and their properties may be conserved across rodent and common marmoset pituitaries.

Fig. 2. Immunohistochemistry for SOX2 and S100 in the anterior lobe of the adult female common marmoset pituitary. Immunohistochemistry for SOX2 and S100 was performed using 4% paraformaldehyde-fixed frozen sections of the anterior lobe of an adult female common marmoset. SOX2 was visualized with Cy3 (red) and S100 with Cy5 (green); the merged image shows both SOX2 and S100 with nuclear DAPI staining.
In the rat pituitary, scanning electron microscopy images demonstrated that micro-lobular-like structures (though not true micro-lobules) composed of basement membranes exist in the anterior lobe [14]. Indeed, it is difficult to identify the micro-lobular-like structure by immunohistochemistry for LAMININ using rat pituitary sections (Supplementary Fig. 2: online only). In the present study using common marmoset pituitary sections, LAMININ-positive cells outlined the micro-lobular structures more clearly than in rat sections. Notably, most of the SOX2-positive cells were located in the center or interior of the LAMININ-positive basement membranes, whereas most hormone-positive cells were attached to the LAMININ-positive basement membranes. Since a few LAMININ-positive micro-lobular structures without SOX2-positive cells exist in the sections, the present study, lacking a three-dimensional analysis, could not conclude whether SOX2-positive cells exist in all micro-lobular structures. However, considering that most of the LAMININ-positive micro-lobular structures include SOX2-positive cells in the pituitary sections, these data suggest that SOX2/S100 double-positive cells might generally exist in the micro-lobular structures of the anterior lobe. Moreover, the central localization of SOX2/S100 double-positive cells might suggest the migration of some of the SOX2-positive cells to the outside for differentiation into hormone-producing cells.
In summary, immunohistochemistry of the common marmoset pituitary demonstrated that the localization of SOX2-positive stem/progenitor cells in the common marmoset pituitary is similar to that in the rodent pituitary, and has some commonalities with the human pituitary as well. Therefore, the common marmoset might prove to be a useful experimental animal to analyze pituitary stem/progenitor cells and their functions in non-human primate models.
Effects of Ovariectomy in an hSOD1-G93A Transgenic Mouse Model of Amyotrophic Lateral Sclerosis (ALS)
Background Amyotrophic lateral sclerosis (ALS) is a devastating neurodegenerative disorder characterized by progressive muscular atrophy and paralysis; most ALS patients die from respiratory failure within 3 to 5 years, and there is currently no effective treatment. Some studies have indicated sex differences in the incidence of ALS, and evidence suggests a neuroprotective role for estrogen. Material/Methods We used human Cu/Zn superoxide dismutase (hSOD1-G93A) transgenic mice to determine the effects of ovariectomy on the onset of disease and behavior; we also used Western blotting to measure the expression of aromatase and estrogen receptors, as well as inflammatory cytokines and apoptosis markers, in the lumbar spinal cord to determine the mechanism of estrogen-mediated neuroprotection. Results Ovariectomy advanced the onset of disease, down-regulated aromatase and estrogen receptor alpha (ER-α) expression, and inhibited expression of the anti-inflammatory factor arginase-1 and the anti-apoptotic factor B-cell lymphoma-2 (Bcl-2) in the lumbar spinal cord of hSOD1-G93A transgenic mice. Conclusions Ovariectomy resulted in earlier disease onset and attenuated the anti-inflammatory and anti-apoptotic actions of estrogen in hSOD1-G93A transgenic mice. Therefore, estrogen may play an important role in protecting spinal cord motor neurons.
Background
Amyotrophic lateral sclerosis (ALS) is a devastating neurodegenerative disorder characterized by progressive muscular atrophy and paralysis caused by the progressive and selective loss of motor neurons in the cerebral cortex, brain stem, and spinal cord; most ALS patients die of respiratory failure within 3 to 5 years. Unfortunately, ALS has no cure, and its exact pathogenesis remains unclear [1]. Approximately 2% of ALS cases are attributed to a mutation in the Cu/Zn superoxide dismutase (SOD1) gene [2], and transgenic mice carrying the human SOD1-G93A (hSOD1-G93A) mutant gene display clinical and histopathological features similar to those of ALS patients [3]; consequently, hSOD1-G93A transgenic mice are commonly used as an animal model to study ALS pathogenesis and treatment.
Some studies have identified potential sex differences in the incidence of ALS [4][5][6]. These sex differences prompted us to consider the role of estrogen in ALS. Indeed, increasing evidence supports a neuroprotective role for estrogen in a variety of neurodegenerative disease models, including ALS [7][8][9][10]. In the central nervous system (CNS), circulating estrogen derived from the ovaries, as well as local estrogen synthesized via aromatase, may exhibit neuroprotective effects, which include increasing the survival of motor neurons [9] and increasing neurotrophic factors [11,12]. Classically, neuroprotective female sex hormones activate 2 nuclear receptors: estrogen receptor alpha (ER-a) and estrogen receptor-beta (ER-b) [13,14]. In addition, G protein-coupled receptor 30 (GPR30), a novel membrane-bound G-proteincoupled receptor expressed in the CNS [15,16], has been found to show a high affinity for estrogen. Therefore, GPR30 may participate in the neuroprotective mechanism of estrogen [17,18].
Although numerous studies have investigated the effects of ovariectomy in hSOD1-G93A transgenic mice, there are only a few reports that describe the effects on the age of disease onset [7,8]. The literature also contains few reports describing the mechanisms underlying the effects of ovariectomy in this ALS animal model. Therefore, in the present study, we determined the effects of ovariectomy on the onset of disease and behavior in hSOD1-G93A transgenic mice; we also used Western blotting to measure the expression of aromatase and estrogen receptors, as well as inflammatory cytokines and apoptosis markers, in the lumbar spinal cord to determine the mechanism of estrogen-mediated neuroprotection.
Animal model
Female hSOD1-G93A transgenic mice, the offspring of a transgenic male and a B6/SJL F1 female, were bred in a temperature-controlled room with a 12: 12 h light/dark schedule. The mice received sterilized specific pathogen-free (SPF) rodent food and sterile water. All animal experiments were performed in accordance with the Laboratory Animal Management Guidelines established by the Ministry of Science and Technology of the People's Republic of China, as well as the internationally recognized guidelines issued by the National Institutes of Health.
Identification of hSOD1-G93A transgenic mice
The ends of the tails from approximately 30-day-old offspring were cut and placed in sterile centrifuge tubes containing 150 μl of 50 mmol/l NaOH in a 95°C water bath for 30 min. Then, 12.5 μl of 1 mol/l Tris-HCl (pH 8.0) was added to each tube, followed by centrifugation for 2 min; the supernatant contained the genomic DNA. Next, polymerase chain reaction (PCR) assays were performed to identify hSOD1-G93A gene expression. Specific primers for the hSOD1-G93A gene were used in the PCR amplification (forward: 5'-CAT CAG CCC TAA TCC ATC TGA-3'; reverse: 5'-CGC GAC TAA CAA TCA AAG TGA-3'). The PCR amplification conditions were as follows: initial denaturation at 95°C for 3 min; 35 cycles of denaturation at 95°C for 45 s, annealing at 60°C for 30 s, and extension at 72°C for 30 s; and a final extension at 72°C for 5 min. The PCR amplification products were separated on a 1.5% agarose gel at 80 V for 45 min, and the gel was photographed on a GBOX-HR fully automated gel imaging system. Mice with bands at 200-300 bp (236 bp) were identified as hSOD1-G93A-positive transgenic mice; mice without these bands were identified as non-hSOD1-G93A transgenic mice.
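The genotyping rule above (an expected 236 bp product, with any band in the 200-300 bp window counted as positive) can be encoded in a few lines. This is an illustrative sketch, not part of the published protocol; the primer sequences are copied from the paper, but the helper names are invented for the example.

```python
# Illustrative sketch of the band-size genotyping rule described above.
# Primer sequences are from the paper; functions are hypothetical helpers.

FORWARD = "CATCAGCCCTAATCCATCTGA"   # 5'-3' forward primer for hSOD1-G93A
REVERSE = "CGCGACTAACAATCAAAGTGA"   # 5'-3' reverse primer for hSOD1-G93A

def gc_fraction(seq: str) -> float:
    """Fraction of G/C bases in a primer (a routine primer sanity check)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def genotype_from_band(band_bp: int) -> str:
    """Classify a mouse from its PCR band size, per the 200-300 bp rule."""
    return "hSOD1-G93A transgenic" if 200 <= band_bp <= 300 else "non-transgenic"

print(len(FORWARD), round(gc_fraction(FORWARD), 2))  # primer length and GC content
print(genotype_from_band(236))  # the expected 236 bp product is scored positive
```

The 200-300 bp window, rather than an exact 236 bp match, reflects the resolution of bands on a 1.5% agarose gel.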
Experimental groups
We randomly divided 45 five-week-old hSOD1-G93A mice into 3 experimental groups (n ≥ 12): the ovariectomized group (OVX), the sham operation group (Sham), and the positive control group (NO-OVX). The negative control group (CON) included non-transgenic mice that were the same age as the experimental mice. These mice were used in behavioral studies.
Similarly, another 9 hSOD1-G93A mice of the same age were divided into 3 groups (OVX, Sham, and NO-OVX) of 3 mice per group. These and another 3 non-transgenic mice were used for Western blotting.
Ovariectomy and sham operation
At 5 weeks of age, the mice in the OVX group were anesthetized using 10% chloral hydrate, and a longitudinal incision was made 0.5 cm above where the upper border line of the 2 hind limbs connect with the back. The skin incision was pulled approximately 0.5 cm horizontally to the left or the right; then, we were able to visualize 2 soft, white, shiny fat masses next to the lower pole of the kidneys. The left and right ovaries of the mouse could be found in the respective fat masses, and they were bi-laterally ovariectomized. After the skin incision was sutured, the mice were placed in a heated chamber to recover.
Mice in the Sham group underwent a similar procedure as the OVX group, except the ovaries were left intact and only some adipose tissue near the ovaries was removed.
Determination of symptom onset of mice
Mice were assessed using Vercelli's 1-5 score as the reference standard [19]: the appearance of a score of 4, indicating hind limb tremors when the mouse was suspended by the tail, was defined as the time of onset.
Behavior assessment
Motor function was evaluated by assessing changes in the weights and step lengths of the mice, as well as their performance on the rotarod test and hanging-wire test.
Changes in animal weight were measured twice a week from the eighth week and 3 times a week from the eleventh week to the end-point.
Mouse footprints were collected once a week from the ninth week. The hind feet of the mice were dyed with different colored ink. Mice were placed at one end of the groove and driven forward toward the other end. Footprints from 4 continuous runs of each mouse were collected and the average of the values of 3 separate distances was taken on each side. Then, the average of the 2 sides was recorded.
Rotarod tests were administered starting from the ninth week. The mice were trained for 5 days to adapt them to the experiment, and measurements were then conducted once per week. The rollers started at 1 revolution per minute (rpm) and accelerated to 15 rpm over 3 min. The time that the mouse continued to move on the roller was measured (counted in seconds), and the longest duration was recorded.
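The acceleration profile above (1 rpm to 15 rpm over 3 min) can be expressed as a simple ramp. The sketch below assumes the acceleration is linear and the speed then holds at 15 rpm, which is an interpretation, not a detail stated in the protocol.

```python
# Hypothetical helper (not from the paper): roller speed under the assumption
# of a linear ramp from 1 rpm to 15 rpm over the 180 s acceleration window.

def rotarod_rpm(t_seconds: float) -> float:
    """Roller speed at time t, clamped to the 0-180 s acceleration window."""
    t = min(max(t_seconds, 0.0), 180.0)
    return 1.0 + (15.0 - 1.0) * t / 180.0

print(rotarod_rpm(0), rotarod_rpm(90), rotarod_rpm(180))  # 1.0 8.0 15.0
```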
The hanging-wire test was used to assess the muscle strength of the mice 1 day after the rotarod test. The mice were suspended upside down from the cage cover, with the bottom of the cage covered with a soft cushion. Three trials were administered for each animal, and the longest latency until falling was analyzed.
Western blotting
The lumbar spinal cords of the mice were dissected at disease onset and frozen at -80°C. Total protein was then extracted using a protein extraction kit (Beijing Applygen Technologies Inc., P1250). Forty micrograms of protein from each sample was denatured and separated via 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). Then, the proteins were transferred to PVDF membranes (Millipore, Billerica, MA, USA) at 100 V for 120 min. Afterwards, the membranes were blocked for 1 h with 5% non-fat milk and incubated overnight with primary antibodies at 4°C. The next day, the membranes were incubated for 1 h with fluorescence-conjugated secondary antibodies at room temperature. Then, the bands of interest on the membrane were detected using an Odyssey Infrared Imaging System (LI-COR, Lincoln, NE, USA).
Statistical analysis
Onset age was calculated using the Kaplan-Meier method. The behavioral assessments were analyzed by two-way analysis of variance (ANOVA), and other statistical analyses were performed using one-way ANOVA followed by least significant difference (LSD) tests with SPSS 21.0 statistical software. All values are expressed as the mean ± standard deviation (SD). Differences were considered statistically significant when P<0.05.
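The group comparisons above were run in SPSS. As a rough illustration of what a one-way ANOVA computes, here is a pure-Python sketch of the F statistic on made-up toy data (the numbers below are not the study's measurements):

```python
# Toy illustration of the one-way ANOVA used for group comparisons.
# The study used SPSS 21.0; this pure-Python version is only a sketch,
# and the data below are invented, not the experimental values.

def one_way_anova_f(groups):
    """Return the F statistic for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares (df = n - k)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(round(f, 3))  # -> 3.0
```

The F statistic is then compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p-value; SPSS performs that last step internally.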
Results
Ovariectomy accelerates disease onset in hSOD1-G93A transgenic mice compared with NO-OVX mice.
The onset time for the OVX group was 78.07±3.091 days, which was 14 days earlier than that for the NO-OVX group (92.50±3.907 days, P=0.008). The onset time for the Sham group was 83.50±3.598 days, which was not significantly different from either the NO-OVX group or the OVX group (Table 1, Figure 1A).
Behavior
Significant weight loss relative to the CON group was observed 4 weeks earlier in the OVX group than in the other positive groups (P<0.05). From the eleventh week to the fourteenth week, the weights of only the mice in the OVX group decreased significantly compared with those of mice in the CON group. However, at the fifteenth week, the weights of mice in all 3 positive groups (OVX, Sham, and NO-OVX groups) were significantly different compared with those of mice in the CON group (Figure 1B).
From the eleventh week to the twentieth week, the results of the hanging-wire test for the 3 positive groups were significantly different compared with those for the CON group ( Figure 1C).
A significantly decreased rotarod test time relative to the CON group was observed 1 week earlier in the OVX group than in the other positive groups (P<0.05). On the fifteenth week, the rotarod test time of the OVX group differed significantly from that of the CON group. From the sixteenth to the twentieth week, the rotarod test times of all 3 positive groups were significantly different compared with that of the CON group ( Figure 1D).
From the sixteenth week to the twentieth week, the step lengths of the 3 positive groups were significantly different compared with those of the CON group ( Figure 1E).
Ovariectomy reduced aromatase expression in the lumbar spinal cord
Western blotting revealed that ovariectomy reduced the expression of aromatase in the lumbar spinal cord and that the level of aromatase in the OVX group was the lowest among all the groups (F=8.853, P=0.006, Figure 2A, 2B).
In the lumbar spinal cord, ovariectomy decreased ER-α expression but not ER-β expression, while both ovariectomy and the sham operation decreased GPR30 expression

There were obvious differences in the levels of ER-α among the 4 experimental groups (F=8.742, P=0.002; Figure 2C, 2D). ER-α levels in the OVX group were the lowest among all 4 groups (P<0.05); ER-α levels were lower in the NO-OVX group and the Sham group than in the CON group (P<0.05), but did not differ significantly from each other (P>0.05).
The largest difference in ER-β protein levels was between the OVX and CON groups (Figure 2D, 2E); however, no difference was detected between the OVX and NO-OVX groups (P>0.05).
Western blotting indicated that GPR30 protein expression in the OVX group was significantly lower than that in the CON and NO-OVX groups (P<0.05, Figure 2D, 2F), but we found no significant difference between the OVX and Sham groups (P>0.05).

Ovariectomy decreased the expression of the anti-inflammatory factor arginase-1

Western blotting revealed significant differences in arginase-1 levels among the 4 groups (F=36.141, P=0.000, Figure 3A, 3B), with arginase-1 levels in the OVX group being the lowest among all groups (P<0.05). Arginase-1 levels in the NO-OVX and Sham groups were also lower than those in the CON group (P<0.05); however, arginase-1 levels were not significantly different between these 2 transgenic groups (P>0.05). TGF-β expression in all experimental groups followed a pattern similar to that of arginase-1 (Figure 3A, 3C), although we found no significant difference between the OVX group and the Sham group. TNF-α levels were significantly up-regulated in the OVX group (Figure 3A, 3D) compared with the CON and NO-OVX groups (P<0.05) but were not different compared with the Sham group (P>0.05).
Ovariectomy promoted apoptosis by down-regulating the expression of the anti-apoptotic protein Bcl-2

Bcl-2 expression was the lowest in the OVX group (F=9.799, P=0.002; P=0.000 vs. CON; P=0.011 vs. NO-OVX; P=0.032 vs. Sham, Figure 4A, 4B). Furthermore, Bcl-2 levels in the NO-OVX and Sham groups were lower than those in the CON group (P<0.05), but no significant differences were detected between the NO-OVX and Sham groups (P>0.05). In contrast, Bax expression was higher in the OVX group than in the CON or NO-OVX group (P<0.05, Figure 4C, 4D).
Discussion
In the present study, significant weight loss and decreased rotarod test times relative to the CON group were observed 4 weeks earlier and 1 week earlier, respectively, in the OVX group compared with the other hSOD1-G93A groups. However, the hanging-wire test and the step length test showed no significant differences among the 3 positive groups. Therefore, weight changes and rotarod test results are more sensitive for evaluating the behavior of hSOD1-G93A transgenic mice when assessing the effect of ovariectomy.
As stated earlier, circulating estrogen and local estrogen both exert neuroprotective effects. In the present study, ovariectomy accelerated ALS onset in hSOD1-G93A transgenic mice; this earlier onset was accompanied by weight loss 4 weeks earlier and decreased rotarod test times 1 week earlier than observed in non-ovariectomized mice. Therefore, we inferred that circulating estrogen, whose levels decrease after ovariectomy, likely plays a neuroprotective role in this ALS animal model.
Aromatase catalyzes the rate-limiting step in estrogen synthesis and is widely expressed in the nervous system; aromatase exerts neuroprotective effects through its role in producing estrogen [20,21]. The present study showed that aromatase expression was reduced in the spinal cords of mice after ovariectomy; consequently, we inferred that estrogen synthesis in the spinal cord was decreased and that the decrease in locally synthesized estrogen likely contributed to the earlier disease onset in the OVX group.
The neuroprotective effects of estrogen are well known to be mediated through the activation of estrogen receptors, including ER-α, ER-β, and GPR30 [11,20]. Therefore, as the targets of estrogen, estrogen receptor levels in the lumbar spinal cord were assessed in the present study. Ovariectomy down-regulated ER-α expression but had no effect on that of ER-β, while ovariectomy and the sham operation both decreased GPR30 in the lumbar spinal cord at the onset stage. Furthermore, these results indicated that reduced estrogen levels resulted in decreased neuroprotection via reduced expression of ER-α, and probably GPR30. Studies have demonstrated that the protective effects of 17β-estradiol were mediated through ER-α rather than ER-β [22,23], consistent with our findings. Furthermore, GPR30 might be another important component of estrogen-mediated neuroprotection [24,25], possibly through combined signaling via both ER-α and GPR30, but further study is needed to determine the exact mechanisms.
Previous studies have shown that inflammation, apoptosis, and oxidative stress participate in the pathogenesis of ALS [26][27][28] and that the activation of microglia is involved in the mechanism of this disease [29]. Moreover, previous studies have shown that estrogen exerts neuroprotective effects via its anti-inflammatory [30] and anti-apoptotic [31] functions. Our study agrees with these previous studies. Microglia are known to adopt 2 different phenotypes: the classically activated phenotype (M1) and the alternatively activated phenotype (M2). M1 microglia secrete pro-inflammatory cytokines, such as TNF-α, nitric oxide (NO), and interleukin-1 (IL-1) [32]. M2 microglia are likely involved in neuroprotection and repair following injury because they express the cytokines IL-4, IL-10, chitinase-like 3 (Ym1), TGF-β, and arginase-1, which may exert anti-inflammatory effects and trigger the production of neurotrophic factors [30,33]. Our findings revealed that ovariectomy promoted inflammation by decreasing the expression of the anti-inflammatory cytokines arginase-1 and TGF-β, the latter also being inhibited by the sham operation, and by increasing the expression of the pro-inflammatory factor TNF-α. Furthermore, ovariectomy increased apoptosis by down-regulating the expression of the anti-apoptotic protein Bcl-2, while both ovariectomy and the sham operation up-regulated the expression of the pro-apoptotic protein Bax. These results show that ovariectomy decreased the anti-inflammatory and anti-apoptotic effects of estrogen and also demonstrate that estrogen can prevent inflammation and apoptosis.
Thus, we speculate that estrogen exerts its neuroprotective effects by inhibiting inflammation and apoptosis in hSOD1-G93A transgenic mice. The neuroprotective effects of estrogen were also confirmed by many previous animal experiments [7][8][9][10]. However, in contrast to our study, previous clinical studies did not demonstrate that estrogen replacement was effective in delaying the onset of ALS [34]. At present, the protective effect of estrogen on ALS is controversial, as it is for other neurodegenerative diseases, such as Alzheimer's disease (AD) [35,36]. The reasons for these inconsistent results are uncertain and may be related to differences in the duration of estrogen deprivation, the type of hormone deprivation, or the window for estrogenic intervention. Further analysis is necessary.
In our study, we used a Sham operation group as one of the control groups. Unexpectedly, although the onset time was 5 days later in the Sham group than in the OVX group, this difference was not significant. Moreover, GPR30, TGF-β, TNF-α, and Bax expression levels were not significantly different between the Sham and OVX groups. Mice in the Sham group underwent the sham operation, which may be considered a trauma. Prior studies have reported that trauma is likely associated with ALS risk [37,38]. Based on these reports and our experimental results, we inferred that trauma might contribute to the onset of ALS in hSOD1-G93A transgenic mice. In our study, trauma may have triggered ALS through a mechanism partly similar to that of estrogen deprivation, but this hypothesis needs further study.
Conclusions
Ovariectomy accelerated the onset of ALS and down-regulated the expression of aromatase, ER-α, and GPR30 (the last of which was also inhibited by the sham operation) in the spinal cord of hSOD1-G93A transgenic mice. Moreover, ovariectomy promoted inflammation and apoptosis. Therefore, we speculate that the anti-inflammatory and anti-apoptotic effects of estrogen likely occur via its binding to ER-α and probably GPR30; these effects might be responsible for the neuroprotective properties of estrogen in hSOD1-G93A transgenic mice. In addition, trauma may contribute to the onset of ALS in hSOD1-G93A transgenic mice.
Risk Predictors of Advanced Fibrosis in Non-Alcoholic Fatty Liver Disease
The assessment of fibrosis in chronic liver diseases using non-invasive methods is an important topic in hepatology. The aim of this study is to identify patients with non-alcoholic fatty liver disease (NAFLD) and advanced liver fibrosis by establishing correlations between biological/ultrasound markers and non-invasively measured liver stiffness. This study enrolled 116 patients with non-alcoholic fatty liver disease, who were evaluated clinically, biologically, and by ultrasound. Liver fibrosis was quantified by measuring liver stiffness with shear wave elastography (SWE). Multiple correlation analysis of predictors of liver fibrosis identified a number of clinical, biological, and ultrasound parameters (BMI, blood glucose, albumin, platelet count, portal vein diameter, bipolar spleen diameter) that are associated with advanced liver fibrosis in patients with non-alcoholic fatty liver disease. The correlations between the degree of liver fibrosis and the risk values of some serological and ultrasound markers obtained in our study could be useful in clinical practice for the identification of advanced fibrosis in patients with NAFLD.
Introduction
Approximately 25% of the global population is affected by various forms of non-alcoholic fatty liver disease (NAFLD), with the active form of the disease, non-alcoholic steatohepatitis (NASH), reaching a prevalence of up to 6.5% [1]. The international scientific and professional community makes sustained efforts to stratify the risk of disease progression and establish a follow-up program for patients with non-alcoholic fatty liver disease [2]. The variability, invasiveness, and cost of the methods used to assess the grade of fibrosis in patients with NAFLD are the elements that have prompted numerous research activities aimed at identifying and implementing realistic, feasible, and reproducible fibrosis risk prediction strategies for this category of patients [2][3][4][5].
Research groups recommend that, in selected cases, non-invasive diagnostic methods, including imaging techniques and laboratory test markers, should be favored in determining the risk of progression of liver fibrosis [6][7][8][9][10]. We conducted a study to identify patients with NAFLD and advanced liver fibrosis by establishing correlations between laboratory test/ultrasound markers and liver stiffness measured non-invasively by 2D-SWE.GE elastography.
Materials and Methods
We conducted a prospective diagnostic study in a tertiary gastroenterology and hepatology center from January 2020 to June 2021. The research was conducted in a group of 116 consecutive patients aged over 18 years, diagnosed with NAFLD (disease history of at least 6 months), who agreed to participate in the study and undergo the proposed investigations.
Patients with previously known chronic liver disease of other etiologies, patients with previous splenectomy, pregnant women, and patients with hepato-portal encephalopathy were excluded.
The study was conducted after obtaining the approval of the Ethics Committee of 'Grigore T. Popa' University of Medicine and Pharmacy Iasi. Each patient included in the study signed an informed consent.
Study Protocol
For each subject included in the study, personal pathological history, chronic medication, and details of alcohol consumption, smoking, and dietary habits were noted. The clinical evaluation included physical examination and determination of anthropometric indices. The paraclinical evaluation consisted of laboratory tests, ultrasound, and 2D-SWE.GE elastography.
Laboratory tests parameters included complete blood count, liver function tests, markers to exclude other etiologies of liver disease, cardiovascular risk parameters, lipid, carbohydrate, and protein metabolism balance.
All patients were examined fasting by abdominal ultrasound followed by 2D-SWE.GE elastography. Imaging explorations were performed by a single operator with a General Electric Logic 9 system, as shown in Figures 1 and 2.
The ultrasound parameters assessed were: craniocaudal diameter of both the right lobe and the left lobe of the liver (RLD and LLD), portal vein caliber (PV), and bipolar splenic diameter (BSD). The grade of steatosis was determined using a 4-grade classification system (grade 0, normal; grade 1, mild steatosis; grade 2, moderate steatosis; grade 3, severe steatosis) [11].
Subsequently, based on the threshold values of some serological markers, a logistic regression equation was determined to identify advanced liver fibrosis.
The statistical analysis of the data was performed in SPSS 27.0, using descriptive and inferential studies. Quantitative variables were characterized by descriptive statistics parameters and categorical variables by calculating the frequency distributions. The Shapiro-Wilk normality test was applied to determine which of the quantitative variables analyzed followed the law of normal distribution. In the case of variables that followed the normal distribution law, t-Student and ANOVA tests were used for comparisons, and the Mann-Whitney and Kruskal-Wallis tests were used for variables that did not follow the normal distribution law. In comparisons between more than two groups, we applied post hoc tests to locate statistically significant differences identified (LSD and Tamhane tests). The association between pairs of quantitative variables was assessed by calculating Pearson correlation coefficients and their associated level of significance. For the comparative study of the categorical variables, we used the chi-square test and estimated the risk factors by calculating the OR coefficients and associated confidence intervals, 95% CI. Risk factors were entered into a binary logistic regression model (forward LR method) to identify their relationship with the presence of advanced fibrosis diagnosis. The results obtained were considered statistically significant at p < 0.05.
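The odds ratios and 95% confidence intervals mentioned above are standard 2×2-table quantities. As an illustration of the calculation (the study itself used SPSS), here is a minimal sketch using the log-odds (Woolf) method; the counts below are hypothetical, not taken from the study's data.

```python
import math

# Sketch of the OR and 95% CI calculation described above (Woolf method).
# The 2x2 counts in the example call are invented for illustration.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR = (a*d)/(b*c) with a CI from the standard error of the log-odds.
    a, b: exposed with/without outcome; c, d: unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # OR = 4.0 with its 95% CI
```

An OR whose confidence interval excludes 1 is what the study reports as a statistically significant risk factor.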
Results
Mean values of the parameters investigated in the group of patients diagnosed with NAFLD are presented in Table 1.
Analysis of the Grade of Steatosis and Fibrosis in the Study Group
The distribution of steatosis grades in the NAFLD population was as follows: mild steatosis (S1) 5.4%, moderate steatosis (S2) 36.9%, severe steatosis (S3) 57.7%.
The following are risk values for patients with NAFLD in relation to fibrosis grade (F0-F1 vs. F2-F4).
Determination of a Logistic Regression Equation
The final objective of the study was to determine a logistic regression equation to diagnose advanced liver fibrosis in relation to the threshold values of non-invasive markers. In a first step, the risk values associated with each of the laboratory tests markers investigated for advanced liver fibrosis (F3-F4) were determined (Table 4).
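As a hedged illustration of how such risk estimates are derived, an odds ratio with its 95% confidence interval (log method) can be computed from a 2×2 table; the counts below are synthetic, not the study's data:

```python
# Odds ratio (OR) and 95% CI from a 2x2 contingency table (log method).
# a, b: patients with / without advanced fibrosis among those with the
# risk value; c, d: the same among those without the risk value.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(30, 20, 10, 40)  # synthetic counts
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 6.0 2.45 14.68
```

An OR is considered statistically significant at this level when the 95% CI excludes 1.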
Table 5 shows the predictors identified for advanced fibrosis by multivariate analysis (binary logistic regression):
• Platelet counts below the risk threshold, with an associated risk of 10.874;
• Dilated PV above the risk threshold, with an associated risk of 8.234.
In the last step, we determined the logistic regression equation: Logit(p) = ln(p/(1 − p)) = −1.362 + 2.386 × Platelet counts + 2.108 × Modified PV (above 13 mm), where p = probability of developing F3-F4 fibrosis.
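Under the assumption that the two predictors enter the model as binary indicators (platelet count below the risk threshold; PV above 13 mm), the equation converts to a probability as in this minimal sketch:

```python
# Probability of advanced (F3-F4) fibrosis from the reported logistic model.
# Assumes both predictors are 0/1 indicators, as the equation suggests.
import math

def advanced_fibrosis_probability(low_platelets: bool, dilated_pv: bool) -> float:
    logit = -1.362 + 2.386 * int(low_platelets) + 2.108 * int(dilated_pv)
    return 1.0 / (1.0 + math.exp(-logit))  # inverse of logit(p) = ln(p/(1-p))

print(round(advanced_fibrosis_probability(True, True), 3))   # → 0.958
print(round(advanced_fibrosis_probability(False, False), 3)) # → 0.204
```

With both risk factors present the predicted probability of F3-F4 fibrosis is high; with neither present it stays low, matching the direction of the reported ORs.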
Discussion
Analyzing the results obtained, it is found that for most patients with NAFLD in the study group, a grade of severe steatosis S3 (57.7%) and significant fibrosis (54.9% moderate and severe fibrosis F2-F4) was documented. This distribution, with the prevalence of cases of increased severity, can be explained by the fact that the study was conducted in a group of patients who addressed a tertiary center. Most commonly, patients with mild steatosis and low fibrosis are seen on an outpatient basis in primary care.
Taking into account the demographic parameters, Leonardo et al. [13] conducted a meta-analysis which concluded that age and gender are major physiological risk factors for developing NAFLD, along with race and genetic factors. In our study, a prevalence of male patients (73.8%) was found among patients diagnosed with non-alcoholic fatty liver disease, a result similar to that reported by Camhi et al. [14] and most studies in the literature. While in our study a higher number of male patients with NAFLD was recorded in the whole group (regardless of the degree of fibrosis), there were no statistically significant differences between male and female patients in the subset of patients with severe steatosis (S3) and significant fibrosis (F2, F3, and F4), in contrast to Ciecko-Michalska [15], who reported a significantly higher prevalence of advanced fibrosis in male patients compared to female patients.
Another risk factor involved in the development and progression of NAFLD is age, with advanced fibrosis being significantly more common in older patients [1]. A number of studies that have analyzed groups of patients with non-alcoholic fatty liver disease reported a mean age between 51 years [16] and 63 years [17]. The mean age of patients with non-alcoholic fatty liver disease in our study was 53.56 years, which is within the range of values presented by the literature data.
Regarding the distribution of fibrosis grades by age category, there were statistically significant differences: in patients under 40 years of age, cases with normal liver stiffness values are more common, while in patients over 60 years of age, the incidence of cases with advanced fibrosis increases. These results are consistent with data in the literature [1,16].
BMI and metabolic parameters, along with demographic data, are major factors incriminated in the development of hepatic steatosis and progression to fibrosis. In our group of patients with NAFLD, 60.7% of patients were obese (BMI above 30 kg/m²). In addition, 59.5% of patients had risk values for serum cholesterol, 47.7% had risk values for triglycerides, and 37.8% had hyperglycemia. In this regard, insulin resistance is currently considered to be a major mechanism in the development and evolution of non-alcoholic fatty liver disease towards steatohepatitis and advanced fibrosis [18,19]. In support of this hypothesis, a number of results reported in the literature show that the prevalence of non-alcoholic fatty liver disease can reach up to 80% in diabetic patients and that advanced fibrosis is identified in a higher percentage of diabetic patients than of patients without diabetes [16,17,20,21].
A large number of serological markers have been proposed for the non-invasive assessment of liver fibrosis. In the particular case of NAFLD, it is considered necessary to differentiate tests based on serological markers that can be used in the diagnosis of steatosis from those that can be used in the prediction of severe fibrosis [10].
Analyzing the parameters measured from the perspective of frequency of risk values, in the subgroup of patients with NAFLD and early or moderate fibrosis, the most frequently recorded risk values (in more than one-third of patients) were found for parameters: BMI, cholesterol, glycemia, GGT, triglycerides, and ultrasound changes, respectively, RLD and BSD. For patients with NAFLD and advanced fibrosis, the same parameters identified in patients with mild but significantly higher fibrosis (over 50%) are found with increased frequency. In addition, thrombocytopenia and PV diameter are added. These findings are consistent with data in the literature [6][7][8][22][23][24].
The correlations between the values of the parameters followed and the numerical values obtained by elastometry were analyzed. In this context, the statistical analysis performed revealed the existence of directly proportional, statistically significant correlations of elastometric values with: BMI (moderate correlation), PV caliber (moderate correlation), and RLD (weak correlation). There was also a statistically significant inversely proportional correlation between platelet count and elastometric values. These results are fully consistent with data in the literature [25].
In relation to the relationship of cytolysis enzymes to elastometric values, no statistically significant correlations were identified. Results are contradictory in the literature: some studies have shown that elevated liver cytolysis enzymes are considered predictive for advanced fibrosis in NAFLD [26][27][28], and other studies have also shown that liver cytolysis levels do not correlate with the grade of steatosis or fibrosis [29].
Other parameters analyzed to identify correlations with liver stiffness were GGT, serum albumin, blood glucose, cholesterol, and triglycerides. For GGT values, the statistical analysis revealed a directly proportional, statistically significant correlation with elastometric values. Literature data show that GGT may be increased in patients with non-alcoholic fatty liver disease, with values of this parameter correlating with both advanced fibrosis and increased mortality [28]. For serum albumin values, statistical analysis revealed a statistically significant, moderate, inversely proportional correlation. These results are supported by the literature [30].
Serum triglyceride values showed a weak but statistically significant direct correlation with elastometric values, while serum cholesterol values showed an inverse but statistically non-significant correlation with elastometric values.
In the studied group, glycemia (G) values were directly proportionally correlated with elastometric values. In most studies in the literature, diabetes mellitus is considered an important risk factor for hepatic steatosis, advanced fibrosis, and cirrhosis [17,20,29].
The study also aimed to identify a logistic regression equation to take into account the predictors for advanced fibrosis and to be easily used in the detection of advanced fibrosis.
Two predictors were identified for advanced fibrosis: the first predictor was platelet counts below the risk threshold and the second predictor was the PV diameter. Taking into account these identified predictors, using statistical analysis methods we obtained a logistic regression equation for advanced fibrosis exposed above.
Considering the specific progression of liver disease, a prospective study could not be carried out, because long monitoring intervals would have been necessary to evaluate the progression of liver fibrosis and the occurrence of clinically significant portal hypertension.
The non-invasive character of the investigative methods used provides objective and sensitive data; a real limitation, however, is the lack of liver histopathological examination as a reference standard with certain diagnostic value regarding the true levels of hepatic fat load and liver fibrosis.
The real advantage for daily clinical practice is the stratification of the risk of developing liver fibrosis and clinically significant portal hypertension.
Conclusions
Following the implementation of mathematical modeling methods, a number of parameters can be used as tools to estimate the severity of chronic liver disease. Obesity, hyperglycemia, hypoalbuminemia, thrombocytopenia, portal vein dilation, and splenomegaly are predictive factors for advanced fibrosis, with the highest predictive power being platelet count and portal vein diameter.
The results obtained in our study on the correlations between the degree of liver fibrosis and the risk values of serological and ultrasound markers are supported by the data in the literature and could be applied in clinical practice.
The strength of the study is the use of non-invasive tests to evaluate liver fibrosis, considering the fact that there are few data in the literature about 2D-SWE.GE elastography. The limitation of the study is the reduced number of patients included and the absence of correlation with histological findings.
In order for the logistic regression equation resulting from the statistical analysis to prove its practical usefulness for the non-invasive and easy identification of advanced fibrosis, further studies are needed on a larger number of patients.
Inheritance of Some Traits in Crosses between Hybrid Tea Roses and Old Garden Roses
The limited knowledge about the inheritance of traits in roses makes the efficient development of rose varieties challenging. In order to achieve breeding goals, the inheritance of traits needs to be explored. Additionally, for a trait like scent, whose inheritance remains a mystery, it is crucial to know how successfully parents transmit their traits to the next generation. Understanding this allows for accurate parental selection, ensuring sustainability in meeting market demand and providing convenience to breeders. The aim of this study was to assess the success of cross-combinations between scented old garden roses and the hybrid tea roses used as cut roses in transferring their existing traits, with the objective of achieving scented cut roses. The evaluated traits included recurrent blooming, flower stem length, flower diameter, petal number, scent, and bud length of both parents and progenies. The inheritance of these traits was evaluated through theoretical evaluations, including calculating heterosis and heterobeltiosis and determining narrow-sense heritability. The combinations and examined traits were assessed using a hierarchical clustering heat map. The results of this study indicated that the flower stem length, flower diameter, petal number, and bud length traits had a moderate degree of narrow-sense heritability, suggesting the influence of non-additive genes on these traits. This study observed a low success rate in obtaining scented progenies from cross combinations between cut roses and old garden roses, indicating the challenge of obtaining scented genotypes. The discrepancy between the observed phenotypic rates and the phenotypic and genotypic rates expected according to Punnett squares suggests that the examined traits could be controlled by polygenic genes. The progenies were observed to exhibit a greater resemblance to old garden roses than to hybrid tea roses and did not meet the commercial quality standards for cut flowers.
The significant negative heterosis observed in 65.12% (petal number) and 99.61% (flower diameter) of the progenies provides strong evidence of resemblance to old garden roses. Considering these findings, it is recommended to consider old garden roses as parents, taking into account their suitability for other breeding objectives.
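The heterosis and heterobeltiosis measures used in the study are simple percentage deviations of the F1 from the parental values; a minimal sketch (with illustrative values, not the study's measurements):

```python
# Mid-parent heterosis and heterobeltiosis (better-parent heterosis),
# expressed as percentage deviations of the F1 value. A negative value
# means the F1 falls below the parental reference, as reported for
# petal number and flower diameter. Values below are illustrative only.
def heterosis(f1: float, p1: float, p2: float) -> float:
    mid_parent = (p1 + p2) / 2
    return (f1 - mid_parent) / mid_parent * 100

def heterobeltiosis(f1: float, p1: float, p2: float) -> float:
    better_parent = max(p1, p2)  # here "better" = higher trait value
    return (f1 - better_parent) / better_parent * 100

# e.g. petal number: F1 = 30, parents = 40 and 60
print(heterosis(30, 40, 60), heterobeltiosis(30, 40, 60))  # → -40.0 -50.0
```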
Introduction
The rose, which belongs to the Rosa genus in the Rosaceae family, is naturally distributed in the Northern Hemisphere, including Asia, Europe, the Middle East, and North America. It is one of the plant species widely used in the ornamental plant industry, cosmetic industry, food, and medicine sectors [1]. There are more than 100 to 250 species of roses identified worldwide [2]. The haploid chromosome number of roses is x = 7, and chromosome numbers vary depending on the ploidy level, ranging from 2n = 2x = 14 in diploids to 2n = 8x = 56 in octoploids. In a recent study, an endemic wild rose species (Rosa praelucens Byhouwer) with a chromosome number of 2n = 10x = 70 has also been reported [3].
The rose is the most traded cut flower in the world [4]. Through breeding studies, thousands of new varieties have been developed to meet consumer expectations. More than 37,000 rose varieties have been introduced to the sector [5]. Crossbreeding is preferred in the development of new rose varieties because it generates greater genetic variation and leads to improved outcomes [6]. The cost is also lower compared to the use of molecular methods [7].
In the breeding of cut roses, certain characteristics such as flower stem length and thickness, bud size, yield, disease tolerance, flowering time, and vase life gained prominence [8,9]. However, consumer preferences evolve over time, and there has been a growing interest in scented cut roses in recent years. Although the natural scent of roses is considered economically significant, the scent has been lost in cut roses due to intensive breeding efforts focused on other traits [10]. Most commercial cut rose varieties have no scent since scent was not among the desired selection criteria for many years [11].
Due to the increase in consumer demand for scented cut roses, breeding companies have planned programs to incorporate the scent trait into cut roses by including scent as one of the selection criteria [11,12]. Although a limited number of scented cut rose varieties have been developed, rose breeding is primarily carried out by highly competitive commercial companies, resulting in the genetic control of traits being treated as proprietary information [13,14]. Additionally, there is still a lack of sufficient scientific literature on the breeding of scented cut roses that meet commercial quality standards, and the inheritance of these traits has not yet been fully elucidated [10].
Most of the old garden and wild rose species that have spread worldwide are scented, such as R. alba L., R. damascena Mill., R. centifolia L., R. odorata L. cv. Louis XIV, R. gallica L., and R. moschata Herrm [15][16][17]. However, these rose species do not have commercial use in the cut flower industry because their morphological characteristics do not meet the criteria for commercial quality. Moreover, it is not clearly known how successful old garden roses are in developing varieties that meet the cut flower quality criteria. The lack of sufficient information on breeding scented and commercial-quality cut roses poses a significant challenge for researchers and amateur breeders. To develop new scented varieties suitable for commercial quality criteria in cut roses, it is essential to understand the ability to transfer the characteristics of these species to the next generation and have information about the inheritance of the desired traits [18].
It is believed that the rate of developing new varieties that are scented and meet commercial quality criteria may be high through crosses between old garden roses and commercial cut rose varieties. This study aimed to assess the ability of old garden roses to transfer certain plant and flower characteristics to the next generation and to gather information about the inheritance of these traits through crosses between cut roses and old garden roses. The results of this study are expected to aid in identifying suitable materials for breeding new high-quality scented cut flower varieties.
Qualitative and Quantitative Traits of F1 Progenies
In the combinations where Damask rose was used as the pollen parent, an evaluation of the traits could not be made, since either no progeny was obtained (Avalanche × Damask rose and Sweet Avalanche × Damask rose) or recurrent blooming was not observed and there was a long period of juvenile sterility (Layla × Damask rose, Samourai × Damask rose, Magnum × Damask rose, and First Red × Damask rose). Therefore, measurements were conducted on 258 progenies.
Recurrent Blooming
Among all F1 progenies, 83.06% showed recurrent blooming. In all cross combinations in which the Black rose was used as the pollen parent, all genotypes had the recurrent blooming trait. Some F1 progenies of cross combinations in which Damask rose was used as a pollen parent did not show recurrent blooming. Moreover, some of them did not show any blooms for a year. Juvenile sterility has been observed. In the F1 progenies of cross combinations in which Cabbage rose was used as a pollen parent, the rate of progenies showing recurrent blooming, depending on the seed parent and excepting Sweet Avalanche, varied between 20% and 100% (Figure 1).
In previous studies conducted to investigate the inheritance of the recurrent blooming trait in roses, it was reported that recurrent blooming was controlled by a homozygous recessive gene [19][20][21]. However, Shupert [22] found that the observed rates of recurrent blooming did not align with the expected rates, challenging the hypothesis of a homozygous recessive gene controlling the trait. Jones [14] determined that among 11 different cross combinations, one combination deviated from the expected rate, with several genotypes blooming only once a year. Shubin et al. [23] discovered that none of the 296 hybrid progenies obtained from cross combinations between R. chinensis 'Old Blush' (recurrent blooming) and R. wichuriana Basye's Thornless (blooming once a year) exhibited recurrent blooming. Additionally, they reported that out of 300 hybrid progenies obtained from backcrossing F1 progenies with R. chinensis 'Old Blush', only 83 displayed recurrent blooming, which was inconsistent with the expected rate.
The results of this study are in line with the findings reported by Semeniuk [19,20], de Vries and Dubois [24], and Debener [21], suggesting that in combinations using Black rose and Damask rose as pollen parents, recurrent blooming is controlled by a homozygous recessive gene. All progenies obtained from crosses between recurrently blooming Black roses and recurrently blooming hybrid tea roses exhibited recurrent blooming. Similarly, the progenies obtained from crosses between once-blooming Damask roses and recurrently blooming hybrid tea roses bloomed once a year. When assuming that recurrent blooming is controlled by a homozygous recessive gene, the scenario in which hybrid tea roses and Black roses are homozygous recessive for the recurrent blooming trait and Damask roses are homozygous dominant is consistent.
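The single-locus model discussed here can be checked by enumerating Punnett squares; a minimal sketch (the gene symbols R/r are hypothetical labels for the once-blooming/recurrent alleles):

```python
# Punnett-square enumeration for one diploid locus, with recurrent
# blooming assumed homozygous recessive (rr) and once-blooming R_.
from collections import Counter
from itertools import product

def punnett(parent1: str, parent2: str) -> Counter:
    """Offspring genotype counts from two single-locus genotypes."""
    return Counter(''.join(sorted(a + b)) for a, b in product(parent1, parent2))

# Once-blooming Damask rose assumed RR x recurrent hybrid tea rr:
print(punnett('RR', 'rr'))  # → Counter({'Rr': 4}): every F1 blooms once a year
# Heterozygous x heterozygous gives the familiar 1:2:1 genotypic ratio:
print(punnett('Rr', 'Rr'))  # → Counter({'Rr': 2, 'RR': 1, 'rr': 1})
```

Comparing such expected ratios against the observed progeny counts is what exposes the deviations discussed for the Cabbage rose combinations.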
It is expected that all progenies should display recurrent blooming in crosses between recurrent blooming hybrid tea roses and Cabbage roses, where the Cabbage rose is presumed to be homozygous recessive, similar to the Black rose, for exhibiting recurrent blooming. However, both recurrent blooming and once-blooming progenies were observed. When Cabbage rose was assumed to be heterozygous, whether hybrid tea roses were considered heterozygous or homozygous, the segregation ratios did not match the expected ratios. These findings are not in line with the notion that the recurrent blooming trait is controlled by a single gene. For instance, in the case where both parents are assumed to be heterozygous, the number of progenies not exhibiting recurrent blooming in the Samourai × Cabbage rose combination was much higher than expected. Similar results have been reported by Jones [14], where the number of recurrent blooming progenies was found to be less than the number of non-recurrent blooming progenies.
Based on these findings, the recurrent blooming trait may be controlled by multiple genes. There are other studies suggesting that the recurrent blooming trait in diploid roses is controlled by two recessive genes, as reported by Shubin et al. [23] and Smulders et al. [25].
Another significant finding of this study is the occurrence of juvenile sterility in progenies resulting from crosses involving Damask rose. According to Zlesak [26], juvenile sterility in once-a-year-blooming roses can persist for one to several years. It is possible that the traits of recurrent blooming and juvenile sterility are influenced by the same genetic mechanisms.
Scent
The highest rate of scented progenies was determined in the Samourai × Cabbage rose and Sweet Avalanche × Cabbage rose combinations. However, only one F1 progeny survived in both cross combinations. Since an adequate number of hybrid progenies could not be obtained to demonstrate segregation, neither of the cross combinations was evaluated individually. Among the F1 progenies, 78.93% were determined to be scentless or had a barely perceptible scent, whereas 21.07% were scented. Among the scented hybrid progenies, only 5.66% were identified as having a strong scent. These results were obtained when Layla and Sweet Avalanche were used as seed parents (Figure 2).
In studies on the inheritance of scent in roses, it has been reported that scent is a homozygous recessive trait controlled by polygenic genes [27,28]. In a hybridization study conducted by Cherri-Martin et al. [11] using two different rose varieties known for their distinct scent compositions, it was found that the concentration and diversity of volatile compounds in the hybrid progenies were lower compared to those in the parents. The researchers observed that while the parents contained 30 different volatile compounds, only 11 of these compounds were in the progenies. Furthermore, they observed that the variation in scent quality correlated with the levels of monoterpenes, indicating the influence of these compounds on the overall scent characteristics, and that the hybrid genotypes possessed scent metabolism characteristics from both parent plants. The researchers reported a limited occurrence of hybrid progenies with a pleasant scent, suggesting the complexity of scent inheritance in hybrid roses. Spiller et al.
[29] conducted a study to investigate the inheritance patterns of specific scent components in diploid rose genotypes. It was found that the scent components in hybrid progenies exhibited two different segregation rates. In a separate study by Nadeem et al. [30], hybridization experiments were performed using modern rose varieties with different scent profiles, and the resulting progenies exhibited varying degrees of scent intensity. Strong-scented progenies were obtained from crosses between roses with a strong scent, while crosses between roses with a moderate scent and roses with a strong scent resulted in progenies with a moderate scent. Combinations of strong × moderate scents resulted in progenies with a moderate scent, while combinations of strong × weak scents produced progenies with a weak scent.
It is evident from these studies that the scent trait possessed by the parents is not observed in the progenies as expected, both in terms of intensity and compound composition, and obtaining scented progenies is challenging. Similarly, in this study, it was determined that the expected ratios between scentless × strongly scented combinations differed from each other. None of the genotypic predictions according to the Punnett square matched the phenotypic segregation. The rate of scented progeny in all combinations was much lower than expected, regardless of scent intensity, and it was difficult to obtain scented progeny. All these findings indicate that the scent trait may be controlled by polygenic genes and that the genes responsible for its inheritance may be recessive. It may also suggest that recessive alleles could be more prevalent than dominant ones. Since breeding studies have focused more intensively on characteristics such as flower shape, flower stem length, and vase life, obtaining scented varieties with long vase lives using conventional breeding methods has been challenging. Consequently, scent has not been considered a prioritized selection criterion for many years [11,[31][32][33]. This may be attributed to the elimination of ethylene-sensitive progeny to enhance postharvest durability and the negative selection of scented progenies due to their shorter postharvest life.
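A deviation like the one described above can be quantified with a chi-square goodness-of-fit test against the 3:1 ratio expected for a single recessive gene; a minimal sketch with synthetic counts (not the study's data):

```python
# Chi-square goodness-of-fit of an observed scentless:scented split
# against the 3:1 ratio expected for a single recessive scent gene.
# The counts are synthetic, chosen only to illustrate a clear deviation.
from scipy.stats import chisquare

observed = [230, 28]                       # scentless, scented progenies
total = sum(observed)
expected = [total * 3 / 4, total * 1 / 4]  # 3:1 Mendelian expectation
stat, p = chisquare(observed, f_exp=expected)
print(p < 0.05)  # True → the observed ratio deviates significantly from 3:1
```

A significant result here argues against the single-gene model, consistent with the polygenic interpretation above.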
Petal Number and Flower Doubleness
The number of petals varied between 6.67 and 155.56 among the F1 progenies, regardless of the cross combinations. The percentage of F1 progenies with single petals in all combinations was 1.63%, while the percentage of progenies with 41 or more petals was 21.14%. There was no F1 progeny in the single-petal group in the cross combinations where Cabbage rose was used as the pollen parent. At least 50% of the F1 progenies obtained from all combinations, except for First Red × Black rose, fell into the full and very full groups. F1 progenies with more than 100 petals were obtained from Layla × Black rose, Layla × Cabbage rose, and First Red × Cabbage rose. There was only one F1 progeny each in Samourai × Cabbage rose and Sweet Avalanche × Cabbage rose (Figure 3).
In studies on the inheritance of petal number and/or the flower doubleness trait in roses, it has been reported that the double flower trait may be controlled by a single dominant gene, while the single flower trait may be homozygous recessive [13,14,21]. Debener [21] conducted a crossbreeding study using various hybrid rose genotypes that possessed double flowers. As a result, progeny with single flowers were obtained. In the same study, it was observed that out of 109 hybrid progenies, 79 had double flowers (>7 petals) and 30 had single flowers (<7 petals). The number of petals in double-flowered genotypes ranged from 15 to 82, and it was noted that the number of male organs decreased as the number of petals increased. Shupert [22] stated that, as a result of hybridization between rose parents with single and double flowers, 8 out of 19 hybrid progenies had double flowers, while 11 had single flowers. The researcher also stated that the parent with double flowers might be heterozygous in terms of flower form. Jones [14] created different cross combinations with single and double flowers in roses and found that the double flower trait was controlled together with the additive genes that determine the number of petals, except in two cross combinations. The researcher reported that the deviation in segregation rates in the cross combinations that did not conform to this hypothesis was likely due to chance, with a probability of 30%. According to Nadeem et al. [30], as a result of morphological observations and measurements made on hybrid progenies obtained from 30 different combinations of 9 different hybrid rose varieties, the number of petals varied between 16 and 40 according to the combination, and some cross combinations showed positive heterosis while others showed negative heterosis.
In the results of this study, similar to the study by Debener [21], it was observed that the majority of F1 progenies exhibited the double flower trait, while a few progenies had single flowers. The production of progeny with single flowers, despite all parental genotypes being double-flowered (a range of 27 to 97 petals, classified as full or very full blooms according to ARS standards), suggests that the parents may be heterozygous for this trait, assuming that the trait is controlled by a single gene and the single-flower trait is homozygous recessive. However, several observations suggest gene interactions: the variation in segregation ratios among the combinations, the variation in the number of petals within the same progeny during both flowering periods, the presence of individuals with fewer petals than their parental genotypes as well as individuals with significantly higher numbers of petals among the F1 progenies (the heterosis and heterobeltiosis values, indicating that 65.12% of the progenies exhibited negative heterobeltiosis and 58.91% negative mid-parent heterosis, are provided in the Supplementary Files), and the deviation of the observed phenotypic segregation ratios from the phenotypic distribution predicted by the Punnett square and the genotypic segregation ratios. Similarly, Jones [14] reported that two genes play a role in determining the flower form of roses. While one gene controls the double flower trait, the other, additive, gene influences the number of petals in genotypes exhibiting the double flower trait. However, the results of this study suggest that genetic interactions in petal number and flower doubleness exhibit a more complex mechanism than simple additive genetic effects. The presence of heterosis in the number of petals suggests the involvement of non-additive gene effects. Various studies have indicated a potential correlation between dominance, other non-additive genetic effects, and the formation of heterosis [34–36].
Flower Stem Length
Among the F1 progenies, the flower stem length ranged from 6.30 cm to 87.20 cm. Notably, all F1 progenies with very short stem lengths were obtained from cross combinations using Black rose as the pollen parent. The majority of the F1 progenies exhibited flower stem lengths within the 30–49 cm range. The flower stem length of 3.84% of F1 progenies was shorter than 15 cm; 6.16% were between 16 cm and 29 cm; 80.39% were between 30 cm and 49 cm; 8.45% were between 50 cm and 69 cm; and 1.16% were over 70 cm (Figure 4).
Cut roses traded commercially are propagated through cuttings and/or grafting. Therefore, measuring the flower stem length in clonally propagated plants of the hybrid progeny can provide more reliable results regarding the flower stem length of the progenies and their evaluation against cut flower quality criteria. In breeding studies, measuring the flower stem length of F1 progenies propagated from seeds is considered to contribute significantly to the preliminary "short–medium–long" classification of the measured hybrid progeny's flower stem length. Indeed, our F1 progenies, which were grown under the same conditions and clonally propagated, also exhibited flower stem lengths falling within the same class, albeit with varying range intervals.
The F1 progenies demonstrated variations in mid-parent and better-parent heterosis. However, 96.51% of them displayed negative heterobeltiosis, indicating a performance decrease compared to the better parents, which were hybrid tea roses. Additionally, 58.91% of the progenies exhibited negative mid-parent heterosis (Supplementary Files). A significant percentage of the F1 progenies obtained in the study were classified as having a medium flower stem length. Furthermore, the collected data revealed continuous variation among the F1 progenies. When considering the overall average rates of the F1 progenies representing all cross combinations, they are distributed around a general mean in a manner consistent with a normal distribution curve. This, in conjunction with the presence of heterosis, supports Byrne's [37] hypothesis that traits related to growth type, such as flower stem length, branch number, and plant height, are controlled by polygenic genes.
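The mid-parent and better-parent comparisons used throughout this section follow the standard definitions, sketched below with made-up stem-length values (the numbers are illustrative only, not taken from the study):

```python
def mid_parent_heterosis(f1_mean, parent1, parent2):
    """Heterosis (%): deviation of the F1 mean from the mid-parent value,
    expressed relative to the mid-parent value."""
    mid_parent = (parent1 + parent2) / 2
    return 100 * (f1_mean - mid_parent) / mid_parent

def heterobeltiosis(f1_mean, parent1, parent2):
    """Heterobeltiosis (%): deviation of the F1 mean from the better
    (here, higher-valued) parent, relative to that parent."""
    better_parent = max(parent1, parent2)
    return 100 * (f1_mean - better_parent) / better_parent

# Illustrative values: F1 mean stem length 40 cm, parents 50 cm and 30 cm.
mph = mid_parent_heterosis(40, 50, 30)   # 0.0  -> no mid-parent heterosis
bph = heterobeltiosis(40, 50, 30)        # -20.0 -> negative heterobeltiosis
```

Because the better parent sets a higher bar than the mid-parent, any progeny that is negative against the mid-parent is necessarily negative against the better parent as well.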
Flower Bud Length
The average bud length of the F1 progenies obtained from the different combinations was 3.42 cm, with bud lengths ranging from 1.48 cm to 5.97 cm depending on the specific cross combination. The Magnum × Cabbage rose combination produced the longest average bud length, 4.69 cm. Some combinations involving Black rose as the pollen parent resulted in buds smaller than 2 cm. Conversely, the Layla × Black rose combination produced F1 progenies with bud lengths approaching 6 cm. In terms of bud length, more than 50% of the F1 progenies were classified as large or very large (Figure 5).
Although bud length is related to bud size and petal length in roses, limited research has been conducted specifically on the inheritance of bud length. However, there is existing information regarding the inheritance of petal length. It has been reported that petal length in roses is controlled by polygenic genes [38]. Shupert [22] investigated petal length in the F1 progeny obtained from crosses between R. wichuraiana 'Basyes Thornless' and R. chinensis 'Old Blush,' as well as in the F2 progeny resulting from backcrossing three hybrid progenies with R. chinensis 'Old Blush.' This study revealed wide variation in petal length among both the F1 and F2 progenies, with the average petal length of the F1 progenies being lower than that of the parents, leading to the conclusion that additive genes play a significant role in the inheritance of petal length. In another hybridization study, conducted by Nadeem et al. [30] on modern rose varieties, petal length exhibited positive heterosis in certain cross combinations and negative heterosis in others.
The findings obtained in this study suggest that, like petal length, bud length may be controlled by polygenic genes. The bud lengths of the F1 progeny exhibited negative transgressive segregation in some cross combinations. For example, in the Samourai × Black rose combination, while the average bud length of the parents was 4.65 cm (5.8 cm × 3.5 cm), the average bud length (3.36 cm) and the longest bud length (3.65 cm) in the F1 progeny remained below this value. Similarly, in the Magnum × Cabbage rose combination, the average bud length in the F1 progeny was 4.69 cm, whereas the average bud length of the parents was 5.1 cm (5.7 cm × 4.5 cm). In terms of mid-parent and better-parent heterosis in the F1 progeny, 99.61% of the progenies exhibited negative heterobeltiosis, indicating a decrease in performance compared with the better parent, and 92.25% displayed negative mid-parent heterosis, indicating a significant reduction even when compared to the mid-parent (Supplementary Files).
Flower Diameters
The flower diameters of the F1 progenies ranged from 3.78 cm to 12.81 cm, with a mean diameter of 7.54 cm. Across the different cross combinations, 4.62% of the F1 progenies had small flower diameters, 36.15% had medium diameters, 37.31% had large diameters, and 21.92% had very large diameters. F1 progenies with small flower diameters occurred only in combinations where Black rose was used as the pollen parent. On the other hand, all progenies resulting from combinations involving Sweet Avalanche and Avalanche as seed parents had flower diameters above 5.0 cm (Figure 6).
In studies conducted to determine the heritability of flower diameter in roses, it has been reported that the trait is significantly influenced by both the seed and pollen parents as well as by gene interactions [39]. Furthermore, it has been found to be associated with petal length rather than petal number [14]. Dugo et al. [39] observed that the average flower diameter ranged between 2.05 and 5.72 cm in F1 progeny resulting from crosses between parents with an average flower diameter of 4.00 cm, and noted that the flower diameter trait exhibited transgressive variation in both positive and negative directions. Jones [14] determined that the flower diameter varied between 2.00 and 6.90 cm in progeny resulting from crosses between genotypes with flower diameters ranging from 2.00 to 5.00 cm. Nadeem et al. [30] reported that flower diameters ranged from 3.00 to 5.00 cm in progeny obtained from crosses between modern rose varieties with flower diameters ranging from 4.00 to 6.00 cm. Additionally, they found that flower diameters displayed positive heterosis in some cross combinations, while negative heterosis was observed in others.
In this study, the obtained flower diameter values showed wide variation. Some combinations exhibited negative transgressive segregation in terms of flower diameter. For example, in the Layla × Black rose combination, the average flower diameter of the parents was 9.08 cm (10.15 cm × 8.00 cm), while the average flower diameter of the F1 progeny was 7.08 cm. In the Magnum × Cabbage rose combination, the average flower diameter of the parents was 11.08 cm (10.50 cm × 11.65 cm), while that of the F1 progeny was 10.15 cm. Additionally, the Black rose, being the parent with the smallest flower diameter after the Damask rose, produced progeny with small flower diameters not observed with other pollen parents when used as the pollen parent in combinations (Samourai × Black rose, Avalanche × Black rose). It was determined that 94.19% of the progenies exhibited negative heterobeltiosis and 84.88% exhibited negative mid-parent heterosis (Supplementary Files). These findings support the quantitative inheritance of the flower diameter trait, as reported by other researchers, and indicate that some of the progenies fall below the mid-parents, making them unsuitable for certain market quality criteria for cut flowers.
Hierarchical Clustering Heat Map of Examined Traits
A hierarchical clustering heat map of the average trait values of the F1 seedlings obtained from 10 different combinations and of the parents was plotted in order to determine the phenotypic relationship among parents and progenies. Based on the six characteristics examined, two main groups were formed. Seed parents differed from both pollen parents and F1 progenies in terms of flower diameter, bud length, and flower stem length. While the Cabbage rose was separated from the seed parents in terms of scent, petal number, and flower stem length, it differed from both pollen parents and F1 progenies in all other traits except flower stem length. Except for scent, the seedlings in all combinations where Black rose was the pollen parent showed similarities to Black rose. Seedlings obtained from combinations where Cabbage rose was the pollen parent were distinguished from F1 seedlings obtained from other combinations in terms of petal number, except for Magnum × Cabbage rose. The flower diameter and bud length traits were included in the same subgroup and were more closely related to flower stem length than to the other traits. Similarly, the petal number and scent traits were included in the same subset and appeared to be more closely related to each other than to the other traits. Parents and combinations were clustered into two main groups. The first main group consisted only of seed parents. The second main group contained three different subgroups: the first consisted only of the Cabbage rose; the second consisted of combinations where Cabbage rose was the pollen parent; and the third consisted of combinations where Black rose and Damask rose were the pollen parents. The F1 seedlings were more similar to the pollen parents than to the seed parents in terms of the examined characteristics, except for scent. In other words, hybrid seedlings obtained from hybridizations with old garden roses were generally of lower quality than commercially available modern rose varieties (Figure 7).
Variance Components, Heritability Estimation, and Phenotypic Correlation Matrix of Quantitative Traits
The values of the variance components for the pollen parent (σ²α), the seed parent within the pollen parent (σ²β(α)), and the error (σ²e), together with the narrow-sense heritability (d) estimates obtained using Bayesian methods for flower stem length, petal number, flower diameter, and bud length, are shown in Table 1. The TAD (Trace-Autocorrelation-Density) plots are provided in the Supplementary Files. The σ²α was higher than the σ²β(α) for all the traits examined: 1.4 times greater for flower stem length, 1.3 times greater for petal number, and 2.0 times greater for flower diameter and flower bud length. The σ²e was low for all of the variables (<3%), with the exception of flower stem length. Moreover, all quantitative traits had moderately low narrow-sense heritability. Flower diameter had the highest heritability, at 46.9%, while flower stem length had the lowest, at 24.9%. The phenotypic correlation matrix among flower stem length, petal number, flower diameter, and flower bud length revealed a weak positive correlation between flower stem length and flower diameter (r = 0.231) as well as flower bud length (r = 0.278), and a strong positive correlation between flower diameter and flower bud length (r = 0.904). However, no significant correlation was found between petal number and the other traits (Table 2, Supplementary Files).

There are other studies in which the heritability of quantitative traits in roses has been estimated. Panwar et al. [40] reported high heritability (>80%) for flower diameter, petal number, and plant height in roses. Gitonga [41] examined the broad-sense heritability of flower stem length and petal number in 148 F1 progenies obtained from the hybridization of the tetraploid P540 and P867 rose genotypes. The heritability values for flower stem length ranged between 0.86 and 0.91, while those for petal number varied between 0.88 and 0.99; environmental conditions were found to have minimal effects on these traits. In a study by Liang [42], the narrow-sense heritability for flower diameter was determined to be 0.24 and that for petal number 0.12; the researcher concluded that flower size traits were moderately heritable in the narrow sense and highly heritable in the broad sense. Lau et al. [43] investigated narrow-sense and broad-sense heritability in diploid roses across different seasons. They found a narrow-sense heritability for flower diameter of 0.38 and a broad-sense heritability of 0.70. The narrow-sense heritability for petal number ranged between 0.26 and 0.33, and the broad-sense heritability between 0.85 and 0.91, in two different years. The researchers emphasized that flower diameter was influenced not only by genetic factors but also by environmental effects. Soujanya et al. [44] examined heritability and genetic variability in 25 different hybrid tea roses and observed high heritability for all traits, including plant height. Wu et al. [45] reported a narrow-sense heritability of 0.50 for plant height in their study on diploid roses, indicating it to be a highly heritable trait.
In this study, the narrow-sense heritability across 258 rose progenies was estimated to be 0.25 for flower stem length, 0.31 for petal number, 0.47 for flower diameter, and 0.37 for bud length. Broad-sense heritability was not evaluated. Since all measurements were made under controlled conditions, there was no seasonal component or contribution to the variance. When comparing these results with the aforementioned studies, the heritability of petal number and flower diameter generally aligns but does not exactly overlap. There are limited studies on flower stem length, but the heritability reported by Wu et al. [45] partially corresponds to the findings of this study. No data were found regarding bud length. When comparing heritability, it is important to consider studies conducted on the same population and traits. Different calculation methods and environmental conditions may result in variations in narrow-sense heritability, especially for polygenic traits, due to the influence of environmental factors on genetic factors [46]. Therefore, a direct comparison should not be expected. In fact, this study used the Bayesian method [47], which is considered superior to the ANOVA and REML methods commonly preferred in previous studies. Additionally, the population and climatic conditions differed from those of other studies.
In this study, the narrow-sense heritability of 0.47 calculated for flower diameter was higher than for the other traits, indicating that the additive gene effect in the inheritance of flower diameter is stronger. Conversely, the narrow-sense heritability of 0.25 calculated for flower stem length was lower than for the other traits, suggesting that the inheritance of flower stem length is more influenced by non-additive gene effects. Moreover, the narrow-sense heritabilities of petal number and bud length showed similar values. The moderate narrow-sense heritability of all traits suggests the presence of non-additive gene effects on these traits. These results show that the inheritance of the traits has a complex structure and that the effects of environmental factors should be evaluated.
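Conceptually, the narrow-sense heritability discussed above is the ratio of additive genetic variance to total phenotypic variance. The sketch below illustrates that ratio only; the coefficient `k` that converts a parental variance component into an additive-variance estimate depends on the mating design and is an assumption here — this is not the Bayesian estimator actually used in the study.

```python
def narrow_sense_h2(sigma2_alpha, sigma2_beta, sigma2_e, k=1.0):
    """Narrow-sense heritability as additive variance over phenotypic variance.

    sigma2_alpha -- pollen-parent variance component
    sigma2_beta  -- seed-parent-within-pollen-parent variance component
    sigma2_e     -- residual (error) variance
    k            -- design-dependent multiplier turning sigma2_alpha into an
                    additive-variance estimate (assumed, not from the paper)
    """
    additive = k * sigma2_alpha
    phenotypic = sigma2_alpha + sigma2_beta + sigma2_e
    return additive / phenotypic
```

For example, `narrow_sense_h2(2.0, 1.0, 1.0)` returns 0.5: halving the error variance or inflating the additive component raises the estimate, which is why design and environment choices matter when heritabilities from different studies are compared.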
Estimates obtained using the Punnett square and the heritability estimations generally align with each other and partially correspond to the results reported in previous studies. Previous studies have suggested that traits such as flower doubleness and petal number may be regulated by a single dominant gene or exhibit an additive gene effect. However, both the Punnett square analysis and the narrow-sense heritability estimates in this study indicate the presence of both additive and non-additive gene effects. The findings of Gitonga et al. [41] also support the possibility of polygenic control for these traits. The moderate narrow-sense heritability estimates align with the notion that the flower diameter, flower stem length, and bud length traits may be influenced by polygenic genes. The non-additive gene effect may involve one parent's genes dominating those of the other parent, and the observation that all progenies display dominant traits inherited from the pollen parent can serve as evidence of a non-additive gene effect on these traits.
Narrow-sense heritability can serve as an indicator of the rate at which a desired trait can progress through selection [48], and the magnitude of heritable variability is the most significant factor affecting the response to selection [49]. In this study, the moderate narrow-sense heritability values obtained for all traits suggest that selection of moderate intensity for flower diameter, petal number, and bud length can be performed under controlled conditions during the third flowering period. However, performing selection during later flowering periods would be more appropriate under field conditions. Furthermore, for all traits, including flower stem length, it would be more suitable to conduct a specific selection for each season, and multiple measurements should be taken for each progeny. A single measurement is predicted to be insufficient to fully characterize the genotype. In fact, Kawamura et al. [50] also noted that the non-genetic variance, along with differences between plants, is significantly influenced by variations in measurements taken on the same plant.
When examining the phenotypic correlation matrix, a significantly high positive relationship was observed between flower diameter and bud length, while no significant relationship was found between petal number and flower diameter (p < 0.01). These results align with the findings of Jones [14], who also reported no correlation between flower diameter and petal number and suggested that flower diameter is more related to petal length than to petal number. The exact relationship between petal and bud lengths is not yet known. However, in some roses, the outer row of petals is longer than the inner rows; therefore, if bud length is evaluated solely in relation to the outer row of petals, the relationship may be misinterpreted. Akhtar et al. [51] found a relationship between flower diameter and bud length, suggesting that these traits may be controlled by common or related genes, although environmental factors that influence both traits may also play a role. The weak but positive relationship between flower stem length and both flower diameter and bud length may indicate that stem length has some effect on flower size. However, there are likely other factors with a more significant impact on flower size than stem length. Indeed, Plaut et al. [52] reported that flower size varies not only with flower stem length but also with the number and size of leaves on the flower stem.
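The phenotypic correlations reported above (e.g., r = 0.904 between flower diameter and bud length) are Pearson coefficients. A self-contained sketch with made-up trait measurements (the sample values below are illustrative, not the study's data):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists of
    trait measurements."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Made-up measurements for two traits on four progenies:
diameter = [7.0, 8.5, 6.2, 9.1]    # flower diameter (cm)
bud_length = [3.1, 3.8, 2.9, 4.2]  # bud length (cm)
r = pearson(diameter, bud_length)  # strongly positive for these values
```

A coefficient near +1, as for flower diameter and bud length here, means the two traits rise and fall together across progenies; a value near 0, as reported for petal number against the other traits, means no linear association.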
Materials and Methods
In this study, conducted between 2018 and 2020, F1 progeny cultivation and the measurement of various quantitative and qualitative traits were carried out in a modern plastic greenhouse at the Department of Horticulture, Faculty of Agriculture, Ankara University, located in Ankara province, Turkey [39].
Plant Material
The parent plants used in the study titled "Success of Hybridization in Hybrid Tea Rose × Old Garden Rose Combinations" and the progeny resulting from the hybridization of these parents were used as plant materials. Some qualitative and quantitative traits of the parents are given in Table 3. All traits of the parents were determined using the method described in "Morphological Characteristics Examined in F1 Progenies".
The F1 progenies were derived from a total of 18 combinations, in which Layla, Magnum, Sweet Avalanche, Samourai, and Avalanche were used as the seed parents and Black rose, Cabbage rose, and Damask rose were used as the pollen parents.
The process from the procurement of plant material (including planting, applied cultural practices, determination of ploidy levels and pollen quality, pollination studies, seed collection, cold stratification, and seed sowing) to transplanting the F1 progenies is extensively described in the study titled "Success of Hybridization in Hybrid Tea Rose × Old Garden Rose Combinations".
Transplanting and Cultural Practice of F 1 Seeds
After the F1 progenies developed their first four true leaves, they were transplanted into pots filled with a 3:1 v/v mixture of coco peat and pumice. In April 2019, measurements were taken to assess the morphological characteristics of the F1 progenies. Irrigation, fertilization, and other cultural practices applied to the F1 progenies were carried out similarly to those used for the parent plants.
Qualitative and Quantitative Traits Examined in F 1 Progenies
During two flowering periods in one-year-old F1 progenies, several characteristics were examined under the same conditions. The details of the traits examined are provided below.

Recurrent blooming: The F1 progenies were categorized based on their flowering behavior. Those that bloomed more than once a year were classified as having recurrent blooming (+), while the others were indicated as having no recurrent blooming (−).
Scent: The scent of the F1 progenies was evaluated using the magnitude estimation procedure of the sensory evaluation method [54]. The classification was conducted by the author team and relied on their personal experience and perception, using a more intuitive approach to assess scent sensitivity. The rose species known for their strong scent, namely Black rose, Damask rose, and Cabbage rose, which were also used as parents in this study, served as reference points and were assigned a scent intensity of 4 points. Scentless commercial hybrid varieties were rated at 1 point, and intermediate classes were established relative to these reference points. The results obtained at each flowering period were checked for consistency. Scent intensity was evaluated in a well-ventilated, isolated room during the morning hours when the flowers were in full bloom and was divided into four classes: scentless or barely perceptible (1 point), slightly scented (2 points), moderately scented (3 points), and strongly scented (4 points).
Petal number and flower doubleness: When the stigma and anthers were visible in the flowers of the F1 progenies, the number of petals was counted for at least two flowers in each genotype [55]. Only the petals of terminal flowers were considered. The F1 progenies were classified according to the criteria set by the American Rose Society: single (4-8 petals), semi-double (9-16 petals), double (17-25 petals), full (26-40 petals), and very full (≥41 petals). These criteria were considered more comprehensive than those of UPOV and more suitable than commercial quality criteria [56].
Flower stem length: The length of the flower stem was measured in cm for flowers harvested from the bottom of the second 5-part leaf [57]. The distance from the cut point to the apex of the bud was recorded. The flower stem length of the genotypes was divided into five classes: very short (≤15 cm), short (16 to 29 cm), medium (30 to 49 cm), long (50 to 69 cm), and very long (≥70 cm) based on the obtained measurements.
Flower bud length: The length of flower buds in mature F1 progenies was determined by measuring the distance between the lower and upper ends of the bud using a digital caliper. Only terminal flowers were considered for measurement. The bud length of the genotypes was categorized into four classes: small (1.2 to 2.5 cm), medium (2.5 to 3.0 cm), large (3.1 to 4.9 cm), and very large (≥5.0 cm) based on the obtained measurements.
Flower diameter: Fully blooming flowers were harvested, and their diameters were measured using a digital caliper. Measurements were taken on terminal flowers. The flower diameters of the genotypes were divided into four classes: small (3.0 to 4.9 cm), medium (5.0 to 6.9 cm), large (7.0 to 8.9 cm), and very large (≥9.0 cm), based on the obtained measurements and following the UPOV guidelines [58].
Data Analysis
The Punnett square was used to theoretically predict the inheritance of qualitatively measured traits such as recurrent blooming, scent, and flower doubleness. Heterosis and heterobeltiosis were calculated for quantitative traits such as petal number, flower diameter, flower stem length, and bud length, following the formulas described by Nadeem et al. [30]:

Ht (%) = ((F1 − MP)/MP) × 100

Hbt (%) = ((F1 − BP)/BP) × 100

where Ht = heterosis, Hbt = heterobeltiosis, F1 = mean of the F1 progeny, MP = mid-parent value, and BP = better-parent value.
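The heterosis and heterobeltiosis calculations above can be sketched as follows; the trait means used here are hypothetical and for illustration only:

```python
def heterosis_indices(f1_mean, p1_mean, p2_mean):
    """Heterosis (vs. mid-parent) and heterobeltiosis (vs. better parent).

    Assumes the higher parental mean is the 'better' parent, which may
    not hold for traits where lower values are desirable.
    """
    mp = (p1_mean + p2_mean) / 2            # mid-parent value (MP)
    bp = max(p1_mean, p2_mean)              # better-parent value (BP)
    ht = (f1_mean - mp) / mp * 100          # heterosis, Ht (%)
    hbt = (f1_mean - bp) / bp * 100         # heterobeltiosis, Hbt (%)
    return ht, hbt

# Hypothetical flower diameters (cm) for one cross combination.
ht, hbt = heterosis_indices(f1_mean=7.2, p1_mean=6.0, p2_mean=8.0)
print(round(ht, 1), round(hbt, 1))  # 2.9 -10.0
```

A progeny can thus show positive heterosis over the mid-parent while still falling short of the better parent, which is the pattern the supplementary heterosis tables report for several combinations.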
In conjunction with heterosis and heterobeltiosis, the Bayesian variance component estimation method was employed to estimate the heritability and phenotypic correlations of these quantitative traits. For the analysis of the two-way random nested design with the Bayesian method, the model provided by Kaya Başar and Fırat [47] was used. A Bayesian analysis requires both the likelihood function based on the observed data and the prior distributions for each parameter of the model, as these form the components of the posterior distribution.

The Gibbs sampling algorithm was used to apply the Bayesian methods. Gibbs sampling generates random samples from the full conditional distributions of the parameters without having to calculate the joint density. It starts from an initial point and samples a value for each parameter of interest one at a time, given the current values of the other parameters and the data. Once all parameters of interest have been sampled, the nuisance parameters are sampled given the parameters of interest and the observed data, and the process is repeated. The power of the Gibbs sampling algorithm is that the sequence of draws converges to the joint distribution of the parameters given the observed data.

After estimating the variance components of the pollen parent (σ²α), the seed parent within the pollen parent (σ²β(α)), and the error (σ²e), as well as the phenotypic correlations, the heritability coefficient (d) was estimated from the variance components. The statistical analysis for the Bayesian method was performed using the BGLIMM procedure in SAS software version 9.4. Moreover, a heat map was generated to visualize the hierarchical clustering, with values standardized for each combination, to evaluate the relationships between the examined quantitative and qualitative traits.
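The Gibbs sampling mechanics described above can be illustrated on a simplified one-way random-effects model y_ij = μ + a_i + e_ij, a stand-in for the paper's two-way nested design. All data below are simulated and the priors are weak, so this is a sketch of the algorithm, not a reproduction of the authors' BGLIMM analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate y_ij = mu + a_i + e_ij with known variance components.
true_mu, true_va, true_ve = 5.0, 1.0, 0.5
n_groups, n_per = 25, 10
a_true = rng.normal(0.0, np.sqrt(true_va), n_groups)
y = true_mu + a_true[:, None] + rng.normal(0.0, np.sqrt(true_ve),
                                           (n_groups, n_per))

def gibbs(y, n_iter=3000, burn=1000):
    n_g, n_p = y.shape
    mu, va, ve = y.mean(), 1.0, 1.0  # starting point for the chain
    draws = []
    for it in range(n_iter):
        # 1. Group effects a_i | rest: normal full conditional.
        prec = n_p / ve + 1.0 / va
        mean = (y - mu).sum(axis=1) / ve / prec
        a = rng.normal(mean, np.sqrt(1.0 / prec))
        # 2. Overall mean mu | rest (flat prior): normal.
        resid = y - a[:, None]
        mu = rng.normal(resid.mean(), np.sqrt(ve / y.size))
        # 3. Variance components | rest: inverse-gamma conditionals
        #    under weak inverse-gamma priors.
        va = 1.0 / rng.gamma(n_g / 2 + 1.0, 1.0 / (a @ a / 2 + 1e-3))
        e = y - mu - a[:, None]
        ve = 1.0 / rng.gamma(y.size / 2 + 1.0,
                             1.0 / ((e * e).sum() / 2 + 1e-3))
        if it >= burn:
            draws.append((mu, va, ve))
    return np.array(draws)

draws = gibbs(y)
print("posterior means (mu, s2_a, s2_e):", draws.mean(axis=0).round(2))
```

Each iteration cycles through the full conditionals exactly as the text describes; after the burn-in, the retained draws approximate the joint posterior, and posterior means of the variance components can then be plugged into a heritability coefficient.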
Conclusions
In this study, the obtained F1 progenies deviated from the commercial quality criteria for cut flowers and exhibited traits more similar to those of old garden roses. The moderate narrow-sense heritability observed in the examined traits indicated the influence of non-additive gene effects. In terms of flower diameter, bud length, and flower stem length, it appears that old garden roses may exert a dominant effect on hybrid tea roses when used as pollen parents. However, this was not the case for the scent trait. The similarity of the F1 progenies to the seed parents in terms of scent suggests that this trait may be more dominant in hybrid tea roses than in old garden roses. The inheritance of the scent trait seems particularly challenging due to the complex genetic background of roses, necessitating the use of molecular methods to elucidate it. When evaluating these combinations, especially for purposes such as disease resistance and stress tolerance, it may be more appropriate to include combinations where old garden roses are used as pollen parents to achieve the desired outcomes in breeding programs. This can, however, pose a challenge for cut flower breeding programs.
During rose selection, it is advisable to avoid a rigorous selection based solely on the first two flowering periods. Instead, repeated measurements should be taken for each plant, and if necessary, separate measurements should be made for each season. Although flower doubleness is clearly understood in the first flowering period, the variation in petal number can sometimes be misleading. For selections based on flower diameter and bud length, evaluating either one of these traits alone may be sufficient.
The results of this study are predicted to guide interspecies hybridization in roses and contribute to breeding programs aiming to obtain diverse hybrid populations. By using available data on parental performance, the selection of cross combinations can be more targeted, increasing the likelihood of success compared with random selection. Moreover, the selection of parents and identification of cross combinations entail significant costs, effort, and time. Expanding the seed and pollen parent gene pool makes a great contribution and is highly recommended for breeders. While large-scale hybridization strategies employed by well-funded international breeding programs may not be feasible for many public-sector national breeding programs, adopting fewer but carefully chosen cross combinations and parent selection methods can enhance the efficiency of breeding programs, leading to the introduction of new varieties into the market. It is anticipated that these findings will contribute significantly to genotype selection during parental and progeny selection. Subsequent steps in this study should involve investigating the contribution of old garden roses as seed parents to the inheritance of traits. Further crossing studies on trait inheritance need to be conducted, with particular emphasis on incorporating molecular approaches, especially for complex traits such as scent that are challenging to inherit.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/plants13131797/s1. Table S1. Heterosis and heterobeltiosis (%) of F1 progenies for various traits. Table S2. Descriptive statistics. Table S3. Characteristics of F1 progenies. Figure S1. The posterior TAD (trace-autocorrelation density) panels for flower stem length of σ²e, σ²α, and σ²β(α). The Markov chain converges very well, with very low autocorrelation and an almost perfect normal posterior distribution in all TAD panels representing the different parameters; overall, this dataset is sufficient to allow more precise estimates of the parameters. Figure S2. The posterior TAD panels for petal number of σ²e, σ²α, and σ²β(α), with the same convergence behavior. Figure S3. The posterior TAD panels for flower diameter of σ²e, σ²α, and σ²β(α), with the same convergence behavior. Figure S4. The posterior TAD panels for flower bud length of σ²e, σ²α, and σ²β(α), with the same convergence behavior. Figure S5. Phenotypic correlation matrix between quantitative traits (p ≤ 0.01).
Author Contributions: T.K.: data curation, investigation, methodology, validation, visualization, writing-original draft, and writing-review and editing. S.K.: conceptualization, methodology, writing-review and editing, and supervision. E.D.M.: data curation and investigation. E.K.: resources and writing-review and editing. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Figure 7 .
Figure 7. Hierarchical clustering heat map of the examined traits and cross combinations. The color-coded scale indicates an increase from red through black to green. M: Magnum, A: Avalanche, L: Layla, FR: First Red, SA: Sweet Avalanche, S: Samourai, BR: Black rose, CR: Cabbage rose, FD: flower diameter, FBL: flower bud length, FSL: flower stem length, RB: recurrent blooming, PN: petal number, SC: scent.
Table 1 .
Bayesian estimation of variance components and heritability coefficient based on variance components from the pollen parent.
* The data were normalized on a logarithmic scale due to the non-normal distribution of the dataset (Supplemental Files).
Table 2 .
Phenotypic correlation matrix among quantitative traits.
Table 3 .
Some qualitative and quantitative traits of the rose genotypes used as parents.